
Robust Multichannel Equalization for Blind Speech Dereverberation


If you have a question about this talk, please contact Alastair Moore.

In hands-free communication, the captured speech is typically degraded by additive background noise and reverberation. One approach to dereverberation is channel equalization, which assumes that a preprocessing step provides estimates of the acoustic impulse responses; in practice, background noise introduces errors into these channel estimates. This presentation discusses multichannel equalization techniques that aim to improve robustness to such estimation errors. A class of algorithms developed within the channel shortening paradigm will be presented and reviewed. These algorithms exploit psychoacoustic properties of the acoustic impulse response to relax the equalization constraints in the region of early reflections. A new approach to dereverberation that performs joint channel equalization and beamforming will then be presented, combining the channel equalizer’s potential for perfect dereverberation with the beamformer’s spatial robustness.
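To illustrate the sensitivity that motivates this work, the sketch below implements a textbook two-channel least-squares equalizer in the style of MINT (the multiple-input/output inverse theorem): with exact channel estimates the multichannel system can be inverted to a pure impulse, but perturbing the estimates (standing in for noise-induced identification errors) leaves a residual when the equalizer is applied to the true channels. All channel lengths, the perturbation level, and the delay-0 target are illustrative assumptions, not the speaker's specific algorithms.

```python
import numpy as np

def conv_matrix(h, n):
    """(len(h)+n-1) x n convolution (Sylvester) matrix of filter h."""
    L = len(h)
    H = np.zeros((L + n - 1, n))
    for j in range(n):
        H[j:j + L, j] = h
    return H

rng = np.random.default_rng(0)
L = 16          # assumed acoustic impulse response length (taps)
Li = L - 1      # equalizer length satisfying the MINT condition for 2 channels
h1 = rng.standard_normal(L)
h2 = rng.standard_normal(L)

# Stacked multichannel convolution matrix: (L+Li-1) x 2Li (square here)
H = np.hstack([conv_matrix(h1, Li), conv_matrix(h2, Li)])
d = np.zeros(L + Li - 1)
d[0] = 1.0      # target response: a single impulse (perfect dereverberation)

# Least-squares equalizer from the TRUE channels: exact inversion
g = np.linalg.lstsq(H, d, rcond=None)[0]
eq = H @ g                      # equalized response, ~= d

# Now design from PERTURBED channel estimates (simulated estimation errors)
eps = 0.05
H_err = np.hstack([conv_matrix(h1 + eps * rng.standard_normal(L), Li),
                   conv_matrix(h2 + eps * rng.standard_normal(L), Li)])
g_err = np.linalg.lstsq(H_err, d, rcond=None)[0]

# Apply the mismatched equalizer to the true channels: residual distortion
residual = np.linalg.norm(H @ g_err - d)
```

With exact channels `eq` matches the impulse target to numerical precision, while the mismatched design leaves a clearly nonzero `residual` — the fragility that robust designs (relaxed constraints over early reflections, joint equalization and beamforming) are meant to address.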

Biography: Felicia Lim received the M.Eng. degree in Electrical and Electronic Engineering from Imperial College London, U.K., in 2010. She joined the Communications and Signal Processing Group at Imperial College London as a PhD researcher in 2011. She is presently a Software Engineer at Google, Sweden, where she researches and develops speech and audio technologies. Her research interests include multichannel equalization, blind system identification, beamforming and speech coding.

This talk is part of the COMMSP Seminar series.
