
On the application of auditory scene analysis in hearing aids


If you have a question about this talk, please contact Mike Brookes.

Improving speech intelligibility in noise is the main demand of the majority of hearing-impaired people affected by sensorineural hearing loss. There are encouraging approaches that replicate auditory functions, in particular spatial filtering. Today's hearing aids are therefore often equipped with directional microphones or microphone arrays, which successfully enhance the SNR. Since the auditory system is a superior speech processor, a further developmental leap in hearing aid design is expected from the emulation of models of auditory scene analysis (ASA). Common to ASA models is the categorization of sound sources in a feature space. Using this representation, the target speaker and interfering sound sources can be separated. In analogy with attending to a single perceptual stream, these ASA models can be used to enhance a given target. Several such functional ASA models have shown considerable SNR improvements across a broad range of acoustic situations. In this presentation, I will present a set of binaural ASA models and their application in hearing aids as post-filters to beamforming front-ends. Furthermore, I will discuss the optimization of these binaural, non-linear hearing aids in different acoustic situations. This optimization, known to be a complex problem, is approached with a genetic optimization procedure that incorporates a binaural auditory model of speech intelligibility as its objective function.
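
The final step described above can be illustrated with a minimal sketch of a genetic optimization loop, written here in Python. This is not the speaker's implementation: the parameterization of the post-filter (a small vector of per-band gains) and the function intelligibility_score, which stands in for a binaural auditory model of speech intelligibility, are hypothetical placeholders chosen only so the example is self-contained and runnable.

# Minimal genetic optimization sketch (assumed parameterization, toy objective).
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 4          # e.g. per-band post-filter gains (hypothetical)
POP_SIZE = 30
N_GENERATIONS = 50
MUTATION_STD = 0.1

def intelligibility_score(params):
    """Placeholder objective. In practice this would apply the binaural ASA
    post-filter with `params` to noisy binaural signals and return the
    predicted speech intelligibility; here it is a toy quadratic surrogate
    with an arbitrary optimum at [0.8, 0.6, 0.4, 0.2]."""
    target = np.array([0.8, 0.6, 0.4, 0.2])
    return -np.sum((params - target) ** 2)

def evolve(pop):
    """One generation: rank by fitness, keep the best half, refill the
    population by uniform crossover of two random parents plus Gaussian
    mutation, clipping gains to [0, 1]."""
    fitness = np.array([intelligibility_score(p) for p in pop])
    parents = pop[np.argsort(fitness)[::-1][: POP_SIZE // 2]]
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_PARAMS) < 0.5
        child = np.where(mask, a, b) + rng.normal(0, MUTATION_STD, N_PARAMS)
        children.append(np.clip(child, 0.0, 1.0))
    return np.vstack([parents, np.array(children)])

population = rng.random((POP_SIZE, N_PARAMS))
for _ in range(N_GENERATIONS):
    population = evolve(population)

best = max(population, key=intelligibility_score)
print("best post-filter parameters:", np.round(best, 3))

In the setting described in the abstract, the surrogate objective would be replaced by the binaural intelligibility model evaluated in the acoustic situation of interest, which is what makes the objective non-linear and motivates a derivative-free, population-based search.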

This talk is part of the COMMSP Seminar series.

