Imperial College London > Talks@ee.imperial > CAS Talks > [FCCM'19 practice talk] LUTNet: Rethinking Inference in FPGA Soft Logic
If you have a question about this talk, please contact James Davis.

Research has shown that deep neural networks contain significant redundancy, and that high classification accuracies can be achieved even when weights and activations are quantised down to binary values. Network binarisation on FPGAs greatly increases area efficiency by replacing resource-hungry multipliers with lightweight XNOR gates. However, an FPGA's fundamental building block, the K-LUT, is capable of implementing far more than an XNOR: it can perform any K-input Boolean operation. Inspired by this observation, we propose LUTNet, an end-to-end hardware-software framework for the construction of area-efficient FPGA-based neural network accelerators using the native LUTs as inference operators. We demonstrate that exploiting LUT flexibility allows for far heavier pruning than was possible in prior works, resulting in significant area savings while achieving comparable accuracy. Against the state-of-the-art binarised neural network implementation, we achieve twice the area efficiency for several standard network models when inferencing popular datasets. We also demonstrate that even greater energy efficiency improvements are obtainable.

This talk is part of the CAS Talks series.