Speech-to-speech Translation

Speech-to-speech machine translation is one of the most strategically relevant areas for L2F. The state of the art in speech translation depends crucially on the state of the art of several core technologies: speech recognition, machine translation and, to a lesser extent, text-to-speech synthesis (namely voice morphing, in order to reproduce the source speaker's characteristics in the synthesized target-language voice). The main limitations of current machine translation systems are the lack of semantic interpretation and world knowledge, as well as insufficient coverage of the large proportion of idiosyncratic linguistic phenomena in the lexicon and syntax. The most promising approaches combine improved statistical methods with improved knowledge-driven methods in a variety of ways.

L2F has been investing in statistical approaches to speech-to-speech machine translation based on weighted finite state transducers (WFSTs) [Picó 2005] [Caseiro 2006], aiming at a tight integration between recognition and translation. WFSTs are especially well suited for combining different types of approaches, whether statistical or knowledge-based. Such a combination may be advantageous for two different goals: (i) including morpho-syntactic linguistic knowledge in the statistical machine translation paradigm, and (ii) tackling the data-sparseness problem in speech translation.
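
As an illustration of this composition-based recipe, the self-contained Python sketch below (a toy example, not L2F's actual system: the lattice, lexicon, language model, weights and vocabulary are all invented for the purpose) composes a source-language recognition lattice with a lexical translation transducer and a target language model over the tropical semiring, and extracts the lowest-cost path as the translation hypothesis. Production systems rely on optimized WFST toolkits rather than this naive composition and search.

```python
from heapq import heappush, heappop


class WFST:
    """A toy weighted finite-state transducer over the tropical semiring
    (weights are costs, e.g. negative log-probabilities)."""

    def __init__(self, start, finals, arcs):
        self.start = start          # initial state
        self.finals = dict(finals)  # final state -> final weight
        self.arcs = list(arcs)      # (src, in_label, out_label, weight, dst)


def compose(a, b):
    """Epsilon-free composition: output labels of `a` must match input labels of `b`."""
    arcs = [((p1, p2), i1, o2, w1 + w2, (q1, q2))
            for (p1, i1, o1, w1, q1) in a.arcs
            for (p2, i2, o2, w2, q2) in b.arcs
            if o1 == i2]
    finals = {(f1, f2): w1 + w2
              for f1, w1 in a.finals.items()
              for f2, w2 in b.finals.items()}
    return WFST((a.start, b.start), finals, arcs)


def best_path(m):
    """Dijkstra search for the lowest-cost accepting path; returns (cost, output words)."""
    counter = 0                       # tie-breaker so the heap never compares states
    queue = [(0.0, counter, m.start, [])]
    visited = set()
    while queue:
        cost, _, state, out = heappop(queue)
        if state in visited:
            continue
        visited.add(state)
        if state in m.finals:         # final weights are zero in this toy example
            return cost + m.finals[state], out
        for (src, _i, o, w, dst) in m.arcs:
            if src == state and dst not in visited:
                counter += 1
                heappush(queue, (cost + w, counter, dst, out + [o]))
    return None


# Recognition lattice L: two competing source hypotheses with combined
# acoustic + source language model costs ("bom dia" vs. "dom dia").
L = WFST(0, {2: 0.0}, [
    (0, "bom", "bom", 0.4, 1), (0, "dom", "dom", 1.2, 1),
    (1, "dia", "dia", 0.3, 2),
])

# Lexical translation transducer T: source word -> target word, with translation costs.
T = WFST(0, {0: 0.0}, [
    (0, "bom", "good", 0.1, 0), (0, "dom", "gift", 0.3, 0),
    (0, "dia", "day", 0.2, 0), (0, "dia", "morning", 0.5, 0),
])

# Target language model G, encoded as a weighted acceptor (input label == output label).
G = WFST(0, {2: 0.0}, [
    (0, "good", "good", 0.2, 1), (0, "gift", "gift", 1.0, 1),
    (1, "day", "day", 0.6, 2), (1, "morning", "morning", 0.1, 2),
])

if __name__ == "__main__":
    cost, words = best_path(compose(compose(L, T), G))
    print(round(cost, 2), " ".join(words))  # -> 1.6 good morning
```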

This research is carried out within the scope of a national project on “Weighted Finite State Transducers Applied to Spoken Language Processing”. Two PhD theses have recently started in this area.