
Rethinking BNN Inference and Training on Embedded FPGAs


If you have a question about this talk, please contact James Davis.

Practice talk for RC4ML Workshop

With the growing availability of high-performance edge devices comes rising demand for edge inference and even training applications. In this talk, I will introduce our recent research progress on approximation-based deep neural network training methods that increase the resource efficiency of inference and training on embedded-scale FPGAs. Our first project is LUTNet, an end-to-end hardware-software framework for constructing area-efficient FPGA-based neural network accelerators that use the device's native LUTs as inference operators. We demonstrate that exploiting LUT flexibility allows far heavier pruning than was possible in prior work, resulting in significant area savings while achieving the same accuracy. To reduce LUTNet's high training cost, we introduce a low-cost binary neural network training strategy that delivers aggressive memory footprint reductions and energy savings.
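To illustrate the idea of using LUTs as inference operators, the following is a minimal sketch, not the LUTNet implementation: it contrasts a standard binary neuron (XNOR-popcount dot product) with a hypothetical LUT-style neuron in which each K-input LUT evaluates an arbitrary Boolean function over a small, pruned subset of the binary inputs. All function names, sizes, and the choice of pruned indices are illustrative assumptions.

```python
# Illustrative sketch only: contrasts an XNOR-popcount binary neuron with a
# LUT-style operator over pruned input subsets. Names and sizes are hypothetical.
import numpy as np

def bnn_neuron(x, w):
    """Standard binary neuron: inputs/weights in {-1,+1}, XNOR-popcount dot product."""
    # For {-1,+1} encodings, XNOR is elementwise multiplication; summing plays the popcount role.
    return np.sign(np.sum(x * w))

def lut_neuron(x, input_indices, lut_tables):
    """LUT-style neuron: each K-input truth table computes an arbitrary Boolean
    function over a small, pruned subset of the binary inputs; results are summed."""
    acc = 0
    for idx, table in zip(input_indices, lut_tables):
        bits = (x[idx] > 0).astype(int)          # map {-1,+1} -> {0,1}
        addr = int("".join(map(str, bits)), 2)   # K bits form the LUT address
        acc += 1 if table[addr] else -1          # LUT output mapped back to {-1,+1}
    return np.sign(acc)

# Example: 8 binary inputs, two 3-input LUTs reading disjoint pruned subsets.
rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=8)
w = rng.choice([-1, 1], size=8)
input_indices = [np.array([0, 2, 5]), np.array([1, 4, 7])]
lut_tables = [rng.integers(0, 2, size=8, dtype=bool) for _ in input_indices]

print("XNOR-popcount neuron:", bnn_neuron(x, w))
print("LUT-based neuron:   ", lut_neuron(x, input_indices, lut_tables))
```

Because each LUT can realise any Boolean function of its K inputs rather than a fixed XNOR, connections to the remaining inputs can be pruned away more aggressively while the retained LUTs absorb the lost expressiveness, which is the intuition behind the area savings discussed in the talk.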

This talk is part of the CAS Talks series.

