Acoustic Signal Processing for Next-Generation Multichannel Human/Machine Interfaces

Date

  • 14:00, Monday, October 01, 2007
  • Room C11, IST

Speaker

  • Walter Kellermann, University of Erlangen-Nuremberg, Distinguished Lecturer of the IEEE Signal Processing Society.

Abstract

The acoustic interface for future multimedia and communication terminals should be hands-free and as natural as possible, which implies that the user should be free to move and should not need to wear any devices. For digital signal processing, this poses major challenges for both signal acquisition and reproduction, reaching far beyond the current state of the technology.

For ideal acquisition of an acoustic source signal in noisy and reverberant environments, we need to compensate for acoustic echoes, suppress noise and interference, and dereverberate the desired source signal.
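As a concrete illustration of the echo-compensation step, the sketch below implements a single-channel normalized LMS (NLMS) adaptive filter, a standard textbook approach to acoustic echo cancellation; it is not taken from the talk, and the function name, filter length, and step size are illustrative assumptions.

 import numpy as np
 
 def nlms_echo_canceller(far_end, mic, filt_len=256, mu=0.5, eps=1e-8):
     # Adaptive FIR filter w models the loudspeaker-to-microphone echo path.
     w = np.zeros(filt_len)
     x_buf = np.zeros(filt_len)        # most recent far-end (loudspeaker) samples
     err = np.zeros(len(mic))          # echo-compensated output signal
     for n in range(len(mic)):
         x_buf = np.roll(x_buf, 1)
         x_buf[0] = far_end[n]
         echo_est = w @ x_buf                      # estimated echo component
         err[n] = mic[n] - echo_est                # residual: near-end speech + noise
         w += mu * err[n] * x_buf / (x_buf @ x_buf + eps)  # normalized LMS update
     return err, w

In practice the residual signal would then be passed to the noise-suppression and dereverberation stages mentioned above.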

On the other hand, for a perfect reproduction of real or virtual acoustic scenes, we need to create the desired sound signals at the listener's ears, while at the same time removing undesired reverberation and suppressing local noise.

In this talk we will briefly analyze the fundamental signal processing problems in the framework of MIMO (multiple-input, multiple-output) systems and discuss current solutions.
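To make the MIMO view concrete, the following minimal sketch (an assumption for illustration, not material from the talk) simulates Q microphone signals from P sources through a P x Q matrix of room impulse responses, i.e. a linear convolutive MIMO model.

 import numpy as np
 
 def mimo_mix(sources, room_irs):
     # sources: array of shape (P, T); room_irs[p][q]: impulse response from
     # source p to microphone q. Returns the Q microphone signals.
     P, T = sources.shape
     Q = len(room_irs[0])
     max_ir = max(len(room_irs[p][q]) for p in range(P) for q in range(Q))
     mics = np.zeros((Q, T + max_ir - 1))
     for p in range(P):
         for q in range(Q):
             y = np.convolve(sources[p], room_irs[p][q])  # one acoustic path
             mics[q, :len(y)] += y                        # superposition at microphone q
     return mics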

In accordance with ongoing research, we emphasize nonlinear and multichannel acoustic echo cancellation, as well as microphone-array signal processing for beamforming, interference suppression, blind source separation, and source localization.
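Of the array-processing topics listed above, beamforming is the simplest to sketch. The example below is a basic integer-delay delay-and-sum beamformer (a generic method chosen here for illustration; the steering delays are assumed to be known, e.g. from a source-localization stage).

 import numpy as np
 
 def delay_and_sum(mic_signals, delays):
     # mic_signals: (M, T) array of microphone channels; delays: integer steering
     # delays in samples, chosen so the desired source adds coherently.
     M, T = mic_signals.shape
     out = np.zeros(T)
     for m in range(M):
         d = int(delays[m])
         if d >= 0:
             out[d:] += mic_signals[m, :T - d]
         else:
             out[:T + d] += mic_signals[m, -d:]
     return out / M  # coherent gain for the target, attenuation of diffuse noise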