
Training Position-Aware Top-k ListNet for Ranking Using Fixed-Point Representation


If you have a question about this talk, please contact George A Constantinides.

ListNet has been intensively investigated for constructing and training ranking models. Compared with traditional learning approaches such as the pairwise approach, ListNet delivers better accuracy. However, ListNet is computationally too expensive for learning models from large datasets, owing to the large number of permutations involved in evaluating the loss and updating the model. Previous solutions propose sampling methods that take permutations as instances and select a subset of the permutation classes. These methods collect the same number of samples for each position in the permutation and therefore ignore the relative importance of positions; as a result, they cannot guarantee accuracy at the top positions, which is critical in many applications. In contrast to existing sampling methods, this paper introduces a new position-aware sampling method that significantly reduces the computational complexity without sacrificing accuracy at the top positions. Moreover, fixed-point quantisation of ListNet is introduced, and the large majority of gradient computations are performed in fixed-point representation. As a next step, we plan to implement our approach on a standard FPGA board.
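As a rough illustration of the quantities the abstract refers to (a sketch, not the speakers' implementation: the function names and the 12-fractional-bit fixed-point format are assumptions), the standard ListNet top-1 loss, its gradient, and a fixed-point rounding of that gradient can be written as:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax: the top-1 probability model used by ListNet
    e = np.exp(x - np.max(x))
    return e / e.sum()

def listnet_top1_loss(scores, labels):
    # cross entropy between the top-1 distributions induced by the
    # ground-truth labels and the model scores (the k=1 special case,
    # which avoids enumerating full permutations)
    return -np.sum(softmax(labels) * np.log(softmax(scores)))

def grad_top1(scores, labels):
    # gradient of the top-1 loss with respect to the scores
    return softmax(scores) - softmax(labels)

def to_fixed_point(x, frac_bits=12):
    # round to a signed fixed-point grid with `frac_bits` fractional
    # bits (the bit width chosen here is an assumption, not from the talk)
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

# One query with three documents: float gradient vs. its quantised form.
scores = np.array([2.0, 1.0, 0.5])
labels = np.array([3.0, 1.0, 0.0])
g = grad_top1(scores, labels)
g_q = to_fixed_point(g)
```

The quantisation error per gradient component is bounded by half the grid spacing, i.e. 2^-(frac_bits+1), which is why most gradient updates can tolerate a fixed-point representation.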

This talk is part of the CAS Talks series.
