Entropy-based Pruning for Phrase-based Machine Translation


Date

  • 15:00, Friday, June 22nd, 2012
  • Room 336

Speaker

  • Wang Ling

Abstract

Phrase-based machine translation models have been shown to yield better translations than word-based models, since phrase pairs encode the contextual information that is needed for a more accurate translation. However, many phrase pairs do not encode any relevant context, which means that the translation event encoded in such a phrase pair can be decomposed into smaller, mutually independent translation events that are found in smaller phrase pairs, with little or no loss in translation accuracy. In this work, we propose a relative entropy model for translation models that measures how likely it is that a phrase pair encodes a translation event derivable from smaller translation events with similar probabilities. This model is then applied to phrase table pruning. Tests show that a considerable number of phrase pairs can be excluded without much impact on translation quality. In fact, we show that better translations can be obtained using our pruned models, due to the compression of the search space during decoding.
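
The pruning criterion described in the abstract can be sketched in a few lines of Python. The snippet below is only a minimal illustration, not the model presented in the talk: the toy phrase_table, the best_decomposition_prob and prune helpers, the restriction to monotone two-way splits, and the 0.1 threshold are all assumptions made for the example. The idea it demonstrates is the same, though: if smaller phrase pairs can reproduce a longer pair's translation with a similar probability, the longer pair adds no useful context and can be dropped.

import math

# Toy phrase table: (source phrase, target phrase) -> p(target | source).
# The entries, helper names and threshold are illustrative assumptions.
phrase_table = {
    ("o gato", "the cat"): 0.7,
    ("o", "the"): 0.9,
    ("gato", "cat"): 0.8,
    ("gato preto", "black cat"): 0.6,
    ("preto", "black"): 0.7,
}

def best_decomposition_prob(table, src, tgt):
    """Best probability of rebuilding (src, tgt) from two smaller phrase pairs.

    Only contiguous, monotone two-way splits are tried here; a full model
    would search over all segmentations and reorderings the decoder allows.
    """
    src_words, tgt_words = src.split(), tgt.split()
    best = 0.0
    for i in range(1, len(src_words)):
        for j in range(1, len(tgt_words)):
            left = (" ".join(src_words[:i]), " ".join(tgt_words[:j]))
            right = (" ".join(src_words[i:]), " ".join(tgt_words[j:]))
            if left in table and right in table:
                best = max(best, table[left] * table[right])
    return best

def prune(table, threshold=0.1):
    """Keep a phrase pair only if smaller pairs cannot reproduce its
    translation with a similar probability (i.e. it adds real context)."""
    kept = {}
    for (src, tgt), p in table.items():
        q = best_decomposition_prob(table, src, tgt)
        if q == 0.0:
            kept[(src, tgt)] = p      # not derivable from smaller pairs: keep
            continue
        score = p * math.log(p / q)   # pointwise relative-entropy contribution
        if score > threshold:
            kept[(src, tgt)] = p      # composition loses probability mass: keep
    return kept

if __name__ == "__main__":
    # ("o gato", "the cat") is dropped: 0.9 * 0.8 = 0.72 is close to 0.7.
    print(prune(phrase_table))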


Note: If required, this seminar will be given in English.