
Designing fast, adaptable machine learning devices at the edge – and verifying they actually work


If you have a question about this talk, please contact George A Constantinides.

Machine learning continues to exceed expectations and has become the state of the art across many application domains. Developments in hardware accelerators, such as FPGAs, have enabled machine learning algorithms to run on low-power devices with extremely low latency, opening the door to machine learning in embedded applications. However, embedded devices typically run networks that are pre-trained on high-power GPUs before being deployed at the edge. The problem with this approach is that each embedded device is likely to experience a unique environment; if many devices share the same pre-trained network, they are highly unlikely to all achieve the best performance for their individual use cases. In this talk, we will discuss how accelerators can be designed to adapt after deployment, along with some applications that may benefit from this adaptability. Finally, we will discuss our research into verifying these ML accelerators, and hardware accelerators in general, more extensively, to ensure they correctly achieve their desired functionality.

This talk is part of the CAS Talks series.
