Imperial College London > Talks@ee.imperial > CAS Talks > Rethinking BNN Inference and Training on Embedded FPGAs
Rethinking BNN Inference and Training on Embedded FPGAs
If you have a question about this talk, please contact James Davis.

Practice talk for the RC4ML Workshop.

With the growing availability of high-performance edge devices comes a rising demand for edge inference, and even training, applications. In this talk, I will introduce our recent research on approximation-based deep neural network training methods that increase the resource efficiency of inference and training on embedded-scale FPGAs. Our first project is LUTNet, an end-to-end hardware-software framework for constructing area-efficient FPGA-based neural network accelerators that use the device's native LUTs as inference operators. We demonstrate that exploiting LUT flexibility allows far heavier pruning than was possible in prior work, yielding significant area savings at the same accuracy. To reduce LUTNet's high training cost, we introduce a low-cost binary neural network training strategy that delivers aggressive reductions in memory footprint and energy consumption.

This talk is part of the CAS Talks series.
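For readers unfamiliar with binary neural networks, the following is a minimal NumPy sketch of the two ingredients most BNN training schemes rely on: binarizing latent full-precision weights to ±1 for the forward pass, and passing gradients through the non-differentiable sign function with a straight-through estimator. This is a generic illustration of the technique, not the speakers' implementation; all names here are hypothetical.

```python
import numpy as np

def binarize(w):
    """Binarize real-valued weights to +/-1 (the core BNN quantizer)."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(w, upstream):
    """Straight-through estimator: the sign function has zero gradient
    almost everywhere, so pass the upstream gradient through unchanged
    where |w| <= 1 and zero it elsewhere."""
    return upstream * (np.abs(w) <= 1.0)

# Toy forward pass through a single binary linear layer.
rng = np.random.default_rng(0)
w_real = rng.uniform(-1, 1, size=(4, 3))   # latent full-precision weights
x = rng.uniform(-1, 1, size=(2, 4))        # input activations
y = x @ binarize(w_real)                   # inference uses only +/-1 weights
```

Because inference touches only the ±1 copies of the weights, each multiply collapses to a sign flip, which is what makes LUT- and XNOR-based FPGA implementations attractive.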