In a career spanning more than 22 years in signal processing, Amitav has created and contributed to many products, particularly in the wireless and consumer electronics space. If you use a CDMA phone, you are using speech processing algorithms and standards developed by him and his team. Until 1999, he was an active member of several international standards bodies, including ITU-T Study Group 16 (speech), IETF, 3GPP, and TIA, and contributed to a number of international standards. Amitav holds 17 US and worldwide patents, with a few more in the pipeline. He has over 30 publications and has presented several tutorials (most recently at ICASSP-06 and ICASSP-07). Beyond speech processing (modeling, recognition, synthesis, coding, speaker identification), his areas of expertise and research interest include multimedia signal compression, pattern recognition, and machine learning, with a focus on user identification, face recognition, and OCR.
Amitav has been actively involved in the Indian speech and signal processing community since his return to India in 1999. He has served in the IEEE Signal Processing Society India since 1999 and is currently its chairman. He has been a reviewer for various IEEE/ACM speech processing and computer vision conferences and journals. In 2008, he is chairing and organizing IEEE SLT-08, an IEEE international speech processing conference in Goa, India.
Amitav is adjunct faculty at IIIT-Bangalore, where he teaches courses in signal processing and pattern recognition.
Signal processing researchers are usually happy with their various ways of slicing and dicing signals to explore different aspects of them, while pattern recognition people are busy studying recognition/classification algorithms using whatever “features” from the signal are “given” to them. These two groups of researchers typically each go their own way. For many applications, however, it is important to consider feature selection and the classification method together, which is typically NOT done. For example, MFCCs (mel-frequency cepstral coefficients) are used in speech recognition as features that are supposed to be “speaker-independent” and represent what you are saying; yet the very same features are used by people working on speaker identification as well!
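To make the MFCC example concrete, here is a minimal numpy-only sketch of the textbook MFCC pipeline (frame the signal, take the power spectrum, apply a triangular mel filterbank, then a DCT of the log energies). All parameter values below (sample rate, frame length, filter counts) are illustrative assumptions, not the settings of any particular recognizer or speaker-ID system.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=8000, frame_len=256, hop=128, n_filters=20, n_ceps=13):
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, frame_len)) ** 2
    # Log mel filterbank energies (small floor avoids log(0)).
    fb = mel_filterbank(n_filters, frame_len, sr)
    energies = np.log(power @ fb.T + 1e-10)
    # DCT-II over the filterbank axis; keep the first n_ceps coefficients.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return energies @ basis.T

# Example: features for one second of a synthetic tone.
sr = 8000
t = np.arange(sr) / sr
feats = mfcc(np.sin(2 * np.pi * 440.0 * t), sr=sr)
print(feats.shape)  # (61, 13): one 13-dimensional feature vector per frame
```

The point of the abstract is visible right in this sketch: every choice here (mel warping, number of filters, number of kept coefficients) is a feature-selection decision made before any classifier sees the data, and the same vectors could be handed unchanged to either a speech recognizer or a speaker-ID back end.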
In my talk, I will give a brief overview of popular and emerging signal processing applications, then pick one of my research areas, user identification, and show how judicious feature selection keeps the classification stage simple and allows one to build systems that deliver high performance at very low complexity.