Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition
If you have a question about this talk, please contact George A Constantinides.

Neural network designs are quite diverse, from VGG-style to ResNet-style, and from Convolutional Neural Networks to Transformers. Towards the design of efficient accelerators, many works have adopted a dataflow-based, inter-layer pipelined architecture, with hardware customized for each layer, achieving ultra-high throughput and low latency. The deployment of neural networks to such dataflow accelerators is usually limited by the available on-chip memory, as it is desirable to preload the network weights on-chip to maximise system performance. To address this, networks are usually compressed before deployment through methods such as pruning, quantization and tensor decomposition. This work proposes a framework for mapping CNNs onto FPGAs based on a novel tensor decomposition method called Mixed-TD. The proposed method applies layer-specific Singular Value Decomposition (SVD) and Canonical Polyadic Decomposition (CPD) in a mixed manner, achieving 1.73x to 10.29x throughput per DSP compared to state-of-the-art CNNs.

This talk is part of the CAS Talks series.
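For readers unfamiliar with the two decompositions named in the abstract, the NumPy sketch below illustrates truncated SVD on a matricised convolution weight and a rank-R CPD fitted with alternating least squares. The layer shape, the ranks, and the way the 4-D kernel is reshaped are illustrative assumptions only; this is not the actual Mixed-TD mapping or its per-layer rank-selection scheme.

```python
# Minimal sketch of the two compression building blocks mentioned in the
# abstract: truncated SVD and CPD, applied to a convolutional weight tensor.
# Layer sizes and ranks are hypothetical, chosen only for illustration.
import numpy as np

def svd_truncate(W2d, rank):
    """Rank-r factorisation of a matricised weight: W2d ~ U @ V."""
    U, s, Vt = np.linalg.svd(W2d, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (m x r) and B (n x r) -> (mn x r)."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cpd_als(X, rank, n_iter=100, seed=0):
    """Rank-r CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, -1)                       # mode-0 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)    # mode-1 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)    # mode-2 unfolding
    for _ in range(n_iter):
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

if __name__ == "__main__":
    # A hypothetical 3x3 conv layer: 64 output channels, 32 input channels.
    W = np.random.default_rng(1).standard_normal((64, 32, 3, 3))

    # SVD route: matricise to (C_out) x (C_in * k * k) and truncate the rank.
    Wu, Wv = svd_truncate(W.reshape(64, -1), rank=16)
    err_svd = np.linalg.norm(W.reshape(64, -1) - Wu @ Wv) / np.linalg.norm(W)

    # CPD route: fold the spatial dims together and decompose the 3-way tensor.
    A, B, C = cpd_als(W.reshape(64, 32, 9), rank=16)
    W_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
    err_cpd = np.linalg.norm(W.reshape(64, 32, 9) - W_hat) / np.linalg.norm(W)

    print(f"relative error  SVD: {err_svd:.3f}   CPD: {err_cpd:.3f}")
    print("params  original:", W.size,
          " SVD:", Wu.size + Wv.size,
          " CPD:", A.size + B.size + C.size)
```

In both routes the parameter count drops from C_out * C_in * k * k to a sum of small factor matrices, which is what makes it feasible to keep the weights of an inter-layer pipelined dataflow accelerator entirely on-chip; choosing which decomposition and which rank to use for each layer is the subject of the talk.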