
Efficient Video Recognition on Resource Constrained Mobile Devices


If you have a question about this talk, please contact James Davis.

The prevalence of artificial intelligence and mobile/wearable devices has led to increasing demand for analysing data, particularly visual data, on resource-constrained devices. In recent years, convolutional neural networks (CNNs) have delivered promising results in the classification and recognition of visual data. The high accuracy of CNNs comes at a high computational and power cost that is beyond the current capabilities of mobile devices. Hence, considerable effort has gone into developing hardware/software platforms that reduce the complexity of CNNs, especially for image data. However, less attention has been paid to video data, even though the video domain presents additional computational challenges for vision tasks. In this talk, we present research, carried out at the University of Southampton, on developing efficient hardware/software architectures for video recognition with CNNs on resource-constrained devices. We propose a novel framework that reduces the computational complexity of recognition tasks by exploiting temporal information in videos.
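The abstract does not spell out the framework, but one common way to exploit temporal information is to avoid re-running the CNN on frames that barely change. The sketch below is purely illustrative (the `recognise_video`, `classify`, and `diff_threshold` names are hypothetical, not from the talk): it reuses the last label whenever the mean pixel difference between frames stays below a threshold.

```python
import numpy as np

def recognise_video(frames, classify, diff_threshold=10.0):
    """Illustrative sketch: run the expensive `classify` call only on
    frames that differ noticeably from the last processed frame;
    otherwise reuse the cached label (exploiting temporal redundancy)."""
    labels = []
    last_frame = None  # last frame actually sent to the classifier
    last_label = None  # label cached from that frame
    for frame in frames:
        f = frame.astype(np.float32)
        if last_frame is None or np.abs(f - last_frame).mean() > diff_threshold:
            last_label = classify(frame)  # expensive CNN inference
            last_frame = f
        labels.append(last_label)
    return labels
```

On mostly static video this skips the vast majority of CNN invocations, which is where the computational saving comes from; the actual framework presented in the talk may differ substantially.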

This talk is part of the CAS Talks series.

