
Accelerating Non-Linear Support Vector Machine Training using FPGA


If you have a question about this talk, please contact Grigorios Mingas.

This talk has been cancelled.

Support Vector Machines (SVMs) are a powerful method in machine learning, but their training stage remains time-consuming. In this talk, a highly scalable FPGA architecture for accelerating the training phase of nonlinear SVMs is presented. In this architecture, the kernel function is split into fixed-point and floating-point operations, an approach that efficiently exploits the FPGA's available resources. In addition, it maximizes the parallelization potential by customizing the fixed-point operations to the different precision requirements of each attribute within the dataset. Block RAMs are used to provide high-throughput access to the training points, and a caching scheme is deployed to speed up the overall training process. Implementation results compare favourably with other software- and GPU-based implementations.
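The kernel split described above can be illustrated with a small software model. The sketch below is a hypothetical illustration, not the talk's actual design: it assumes an RBF kernel, quantizes each attribute with its own fractional bit width (mirroring the per-attribute precision customization), accumulates the squared distance using integer arithmetic, and defers the single expensive exponential to a floating-point stage. All names (`quantize`, `rbf_kernel_split`, `frac_bits`) are illustrative.

```python
import math

def quantize(x, frac_bits):
    """Model a float attribute as a fixed-point integer with
    frac_bits fractional bits (illustrative, not the talk's format)."""
    return round(x * (1 << frac_bits))

def rbf_kernel_split(a, b, frac_bits, gamma):
    """RBF kernel K(a,b) = exp(-gamma * ||a-b||^2), evaluated in two
    stages as a software model of the hardware split:
    1) fixed-point stage: per-attribute quantization and integer
       squared-difference accumulation;
    2) floating-point stage: one exponential per kernel evaluation."""
    acc = 0.0
    for ai, bi, fb in zip(a, b, frac_bits):
        d = quantize(ai, fb) - quantize(bi, fb)  # integer subtraction
        # Rescale the integer squared difference back to real units;
        # in hardware this would be a shift to a common scale.
        acc += (d * d) / (1 << (2 * fb))
    # Floating-point stage: the only non-integer operation per evaluation.
    return math.exp(-gamma * acc)
```

Because low-precision attributes need narrower multipliers, a hardware version of the fixed-point stage can pack more parallel distance units into the same FPGA fabric than a uniform floating-point datapath could.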

This talk is part of the CAS Talks series.



