
Towards a smart hearing aid: decoding the brain's response to speech


If you have a question about this talk, please contact Joan P O'Brien.

Abstract

Understanding speech in noise is a major problem for people with hearing impairment, and it persists for hearing aid users. The problem is compounded by the often imperfect fitting of a hearing aid's settings, which is informed by pure-tone audiometry but not directly by speech-in-noise comprehension. Moreover, although algorithms for enhancing the intelligibility of speech in noise exist, their use in a hearing aid requires knowledge of the user's target sound, such as the particular voice, amongst competing speakers, that the hearing aid wearer wants to listen to. Here we present recent progress on decoding both speech comprehension and a listener's attentional focus on one of two competing voices from non-invasive EEG recordings. The decoding is based on cortical and subcortical neural activity in relation to different acoustic as well as linguistic features of speech. The developed methods may be applied in a smart hearing aid that measures brain activity from electrodes within the ear canal, both to better fit the hearing aid's settings and to inform its noise-reduction algorithm.
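The attention-decoding idea described in the abstract is often implemented as a backward "stimulus reconstruction" model: a linear decoder maps multichannel EEG to a candidate speech envelope, and the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. The sketch below is illustrative only, not the speakers' actual method; the synthetic envelopes, the mixing model, and the ridge regularisation parameter are all assumptions for demonstration.

```python
# Illustrative sketch (assumed setup, not the talk's actual pipeline):
# auditory-attention decoding via backward stimulus reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 16

def smooth(x, k=25):
    # Crude low-pass filter standing in for a real speech envelope.
    return np.convolve(x, np.ones(k) / k, mode="same")

# Synthetic speech envelopes for two competing speakers.
env_a = smooth(np.abs(rng.standard_normal(n_samples)))
env_b = smooth(np.abs(rng.standard_normal(n_samples)))

# Simulated EEG: a linear mixture dominated by the attended speaker (A),
# with a weaker contribution from speaker B plus sensor noise.
mix_a = rng.standard_normal((1, n_channels))
mix_b = rng.standard_normal((1, n_channels))
eeg = (env_a[:, None] * mix_a
       + 0.3 * env_b[:, None] * mix_b
       + 0.5 * rng.standard_normal((n_samples, n_channels)))

# Train a ridge-regression decoder on the first half of the data,
# reconstructing the attended envelope from the EEG channels.
train, test = slice(0, n_samples // 2), slice(n_samples // 2, n_samples)
X, y = eeg[train], env_a[train]
lam = 1.0  # regularisation strength (assumed value)
w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

# On held-out data, decode attention by comparing correlations between
# the reconstruction and each speaker's envelope.
recon = eeg[test] @ w
corr_a = np.corrcoef(recon, env_a[test])[0, 1]
corr_b = np.corrcoef(recon, env_b[test])[0, 1]
attended = "A" if corr_a > corr_b else "B"
print(f"r(A)={corr_a:.2f}, r(B)={corr_b:.2f}, decoded attention: {attended}")
```

In a real system the decoder would also include per-channel time lags to capture the delayed neural response, and would be trained on genuine speech envelopes and EEG rather than synthetic data.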

This talk is part of the Featured talks series.

