(FPT Preview) A Scalable FPGA Architecture for Non-linear SVM Training
If you have a question about this talk, please contact George A Constantinides.

Support Vector Machines (SVMs) are a popular supervised learning method, providing state-of-the-art accuracy in various classification tasks. However, SVM training is time-consuming for large-scale problems. This work proposes a scalable FPGA architecture targeting a geometric approach to SVM training, based on Gilbert's algorithm with kernel functions. The architecture is partitioned into floating-point and fixed-point domains in order to exploit the FPGA's available resources efficiently for accelerating non-linear SVM training. Implementation results show a speed-up of up to three orders of magnitude for the most computationally expensive part of the algorithm, compared to a software implementation.

This talk is part of the CAS Talks series.
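The geometric approach referred to in the abstract recasts SVM training as a nearest-point problem on convex hulls, which Gilbert's algorithm solves iteratively. As a rough illustration only (not the talk's actual kernelised or FPGA implementation), the following is a minimal sketch of Gilbert's algorithm for the related minimum-norm-point problem in the linear case; the function name, iteration cap, and tolerance are all illustrative assumptions.

```python
import numpy as np

def gilbert_min_norm(points, iters=100, tol=1e-12):
    """Sketch of Gilbert's algorithm: approximate the minimum-norm
    point in the convex hull of `points` (rows are vertices)."""
    # Start from an arbitrary hull vertex.
    w = points[0].astype(float).copy()
    for _ in range(iters):
        # Support step: find the vertex minimising <w, v>.
        # This sweep over all points is the most computationally
        # expensive part, and is what an accelerator would target.
        p = points[np.argmin(points @ w)]
        # Line search: closest point to the origin on segment [w, p].
        d = p - w
        denom = d @ d
        if denom < tol:
            break  # w and p coincide; converged
        t = np.clip(-(w @ d) / denom, 0.0, 1.0)
        if t == 0.0:
            break  # no descent possible along this direction
        w = w + t * d
    return w
```

In the kernelised setting described in the talk, the inner products above would be replaced by kernel evaluations, so `w` is maintained implicitly as a convex combination of training points rather than as an explicit vector.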