Coarse “split and lump” bilingual language models for richer source information in SMT

Download: PDF (661 KB)
Author: Darlene Stewart; Roland Kuhn; Eric Joanis; George Foster
Type: Article
Proceedings title: Proceedings of the Eleventh Conference of the Association for Machine Translation in the Americas (AMTA), 2014
Conference: 11th Conference of the Association for Machine Translation in the Americas (AMTA), October 22–26, 2014, Vancouver, BC, Canada
ISBN: 978-000000000-2
Volume: 1
Pages: 28–41
Subject: Aluminum; Computational linguistics; Computer aided language translation; Speech transmission; Syntactics; Automatically generated; Bilingual language model; Coarse models; Contextual information; Language pairs; Parts of speech; Statistical machine translation; Word clustering
Abstract: Recently, there has been interest in automatically generated word classes for improving statistical machine translation (SMT) quality: e.g., (Wuebker et al., 2013). We create new models by replacing words with word classes in features applied during decoding; we call these “coarse models”. We find that coarse versions of the bilingual language models (biLMs) of (Niehues et al., 2011) yield larger BLEU gains than the original biLMs. BiLMs provide phrase-based systems with rich contextual information from the source sentence; because they have a large number of types, they suffer from data sparsity. Niehues et al. (2011) mitigated this problem by replacing source or target words with parts of speech (POSs). We vary their approach in two ways: by clustering words on the source or target side over a range of granularities (word clustering), and by clustering the bilingual units that make up biLMs (bitoken clustering). We find that log-linear combinations of the resulting coarse biLMs with each other and with coarse LMs (LMs based on word classes) yield even higher scores than single coarse models. When we add an appealing “generic” coarse configuration chosen on English > French devtest data to four language pairs (keeping the structure fixed, but providing language-pair-specific models for each pair), BLEU gains on blind test data against strong baselines averaged over 5 runs are +0.80 for English > French, +0.35 for French > English, +1.0 for Arabic > English, and +0.6 for Chinese > English.
Publication date: 2014
Publisher: Association for Machine Translation in the Americas
Language: English
Affiliation: National Research Council Canada; Information and Communications Technologies
Peer reviewed: Yes
NPARC number: 23001442
Record identifier: b9054aec-0086-41ab-b1b8-11dddc2cca51
Record created: 2017-02-08
Record modified: 2017-02-08
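
For readers unfamiliar with the biLMs discussed in the abstract, the following minimal Python sketch (not the authors' code; the sentence pair, alignment, and cluster IDs such as "C42" are hypothetical) illustrates how a bitoken stream is built from a word-aligned sentence pair and where word clustering substitutes class IDs for words to produce a coarse variant. An ordinary n-gram model trained on such a stream is a biLM; bitoken clustering, the paper's other variant, would cluster the resulting bitokens themselves in a separate step.

    # Illustrative sketch only, under assumed inputs: builds the token stream
    # for a biLM in the style of Niehues et al. (2011) -- one bitoken per
    # target position, pairing the target unit with its aligned source units.
    from typing import Dict, List, Optional, Tuple

    def to_coarse(word: str, classes: Dict[str, str]) -> str:
        """Replace a word with its automatically induced class ID; unknown words map to <unk>."""
        return classes.get(word, "<unk>")

    def make_bitokens(
        src: List[str],
        tgt: List[str],
        alignment: List[Tuple[int, int]],            # (src_idx, tgt_idx) alignment links
        src_classes: Optional[Dict[str, str]] = None,  # None => keep fine-grained source words
        tgt_classes: Optional[Dict[str, str]] = None,  # None => keep fine-grained target words
    ) -> List[str]:
        """Build bitokens 'target_unit|src_unit1,src_unit2,...' for each target position."""
        links: Dict[int, List[int]] = {}
        for s, t in alignment:
            links.setdefault(t, []).append(s)

        bitokens = []
        for t, t_word in enumerate(tgt):
            t_unit = to_coarse(t_word, tgt_classes) if tgt_classes else t_word
            s_units = [
                to_coarse(src[s], src_classes) if src_classes else src[s]
                for s in sorted(links.get(t, []))
            ]
            # Unaligned target words pair with NULL, as in standard biLM practice.
            bitokens.append(f"{t_unit}|{','.join(s_units) or 'NULL'}")
        return bitokens

    # Hypothetical toy example: French source, English target, 1-to-1 links,
    # with invented source-side cluster IDs (the "coarse source" configuration).
    src = ["le", "chat", "dort"]
    tgt = ["the", "cat", "sleeps"]
    alignment = [(0, 0), (1, 1), (2, 2)]
    src_classes = {"le": "C7", "chat": "C42", "dort": "C13"}

    print(make_bitokens(src, tgt, alignment, src_classes=src_classes))
    # ['the|C7', 'cat|C42', 'sleeps|C13']  -> token stream for a coarse-source biLM

Varying which side is clustered and how many classes are used yields the range of coarse biLM granularities that the paper combines log-linearly at decoding time.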