Speech recognition for less-represented languages

From HLT@INESC-ID

Latest revision as of 15:02, 16 June 2008

Thomas Pellegrini
Thomas Pellegrini is currently a teaching assistant at Paris-Sorbonne University.

He recently received a PhD in Computer Science from the University of Paris-Sud, where he worked in the Spoken Language Processing Group at LIMSI-CNRS (www.limsi.fr/TLP).

Addresses: www: http://www.limsi.fr/Individu/tuttitom, email: tuttitom@limsi.fr

Date

  • 14:00, Wednesday, June 18th, 2008
  • 3rd floor meeting room, INESC-ID

Speaker

  • Thomas Pellegrini, LIMSI-CNRS

Abstract

The last decade has seen growing interest in developing speech and language technologies for a wider range of languages. State-of-the-art speech recognizers are typically trained on huge amounts of data, both transcribed speech and texts. My thesis work focused on speech recognition for languages for which only small amounts of data are available: the "less-represented languages". These languages often suffer from poor representation on the Web, which is the main source for collecting training material. Very high out-of-vocabulary (OOV) rates and poorly estimated language models are common for these languages. In this presentation, I will briefly describe the difficulties of building new ASR systems with little data. Then I will present our attempt to improve performance by using sub-word units in the recognition lexicon. We enhanced a data-driven word decompounding algorithm in order to address the increased phonetic confusability that arises from word decompounding. Experiments carried out on two distinct languages, Amharic and Turkish, achieved small but significant improvements, around 5% relative reduction in word error rate, with 30% to 50% relative OOV reductions. The algorithm is largely language independent and requires minimal adaptation to be applied to other languages.
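As a rough illustration of the idea behind sub-word lexica, the sketch below (a hypothetical toy in Python, not the data-driven algorithm from the thesis) accepts an out-of-vocabulary word if it can be segmented into units already present in the lexicon, using greedy longest-prefix matching. The actual algorithm presented in the talk is learned from data and explicitly controls phonetic confusability, which this toy version ignores; all function names and the sample Turkish units are invented for illustration.

```python
def decompound(word, lexicon, min_len=2):
    """Greedily split `word` into sub-word units found in `lexicon`,
    always taking the longest matching prefix (at least `min_len` chars).
    Returns the list of units, or None if no full segmentation exists."""
    parts = []
    i = 0
    while i < len(word):
        # Try the longest possible unit first, shrinking down to min_len.
        for j in range(len(word), i + min_len - 1, -1):
            if word[i:j] in lexicon:
                parts.append(word[i:j])
                i = j
                break
        else:
            return None  # some span of the word is not covered by any unit
    return parts

def oov_rate(tokens, lexicon):
    """Fraction of tokens covered neither directly by the lexicon
    nor via decompounding into known units."""
    missed = sum(1 for t in tokens
                 if t not in lexicon and decompound(t, lexicon) is None)
    return missed / len(tokens)

# Example: with units {"ev", "ler", "im", "de"} in the lexicon, the
# (real) Turkish word "evlerimde" is OOV as a whole word but fully
# covered after decompounding.
```

Even this naive scheme shows why decompounding shrinks the OOV rate: a fixed inventory of units covers an open set of compound words. The flip side, addressed in the talk, is that short units are phonetically confusable, so the segmentation must be constrained rather than applied greedily as here.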