
(FPT Preview) A Scalable FPGA Architecture for Non-linear SVM Training


If you have a question about this talk, please contact George A Constantinides.

The Support Vector Machine (SVM) is a popular supervised learning method, providing state-of-the-art accuracy in a variety of classification tasks. However, SVM training is time-consuming for large-scale problems. This work proposes a scalable FPGA architecture that targets a geometric approach to SVM training, based on Gilbert's algorithm with kernel functions. The architecture is partitioned into floating-point and fixed-point domains in order to exploit the FPGA's available resources efficiently for the acceleration of non-linear SVM training. Implementation results show a speed-up of up to three orders of magnitude for the most computationally expensive part of the algorithm compared to a software implementation.
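The geometric formulation referred to above reduces SVM training to a nearest-point problem on convex hulls of the training data. As a rough illustration only (a sketch, not the architecture described in the talk), Gilbert's algorithm for finding the point of a convex hull nearest to the origin can be written as follows; in the kernelized, non-linear setting the dot products below would be replaced by kernel evaluations, and the tolerance and iteration count here are arbitrary choices.

```python
import numpy as np

def gilbert_nearest_point(points, iters=100, tol=1e-12):
    """Approximate the point of conv(points) nearest to the origin
    using Gilbert's algorithm (illustrative sketch)."""
    w = points[0].astype(float)  # start from an arbitrary hull vertex
    for _ in range(iters):
        # Support vertex: the hull point most extreme in the -w direction.
        s = points[np.argmin(points @ w)]
        # Optimality check: no vertex improves on the current iterate.
        if w @ w - w @ s < tol:
            break
        # Move to the closest point to the origin on the segment [w, s].
        d = s - w
        t = np.clip(-(w @ d) / (d @ d), 0.0, 1.0)
        w = w + t * d
    return w

# Example: for the triangle {(1,1), (2,1), (1,2)} the nearest point
# to the origin is the vertex (1, 1).
pts = np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
w = gilbert_nearest_point(pts)
```

The inner loop is dominated by the support-vertex search, which scales with the number of training points; this is the kind of computation a parallel FPGA datapath can accelerate.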

This talk is part of the CAS Talks series.

