Meta-evaluation of machine translation using parallel legal texts

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper we report our recent work on evaluating a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation is carried out, following a recognized evaluation protocol, to assess the reliability and the strengths and weaknesses of these metrics in terms of their correlation with human judgments of translation quality. The results confirm the reliability of the well-known metrics BLEU and NIST for English-to-Chinese translation, and also show that our metric ATEC outperforms all others for Chinese-to-English translation. We further demonstrate the considerable impact the choice of evaluation metric has on the ranking of online machine translation systems for legal translation.
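The paper's actual protocol and data are not reproduced here, but the following is a minimal sketch of the kind of system-level meta-evaluation the abstract describes: score each system's output with an automatic metric (corpus BLEU via nltk is used here as a stand-in for BLEU/NIST/ATEC), then correlate the metric scores with human judgments (via scipy). All system names, segments, and scores below are hypothetical.

```python
# Minimal meta-evaluation sketch: correlate an automatic MT metric
# (corpus BLEU) with human judgments at the system level.
# Assumes nltk and scipy are installed; all data is illustrative.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from scipy.stats import pearsonr, spearmanr

# Hypothetical tokenized references, shared by all systems.
references = [
    [["the", "contract", "shall", "be", "void"]],
    [["the", "parties", "agree", "to", "arbitration"]],
]
# Hypothetical outputs from three MT systems, one hypothesis per segment.
system_outputs = {
    "sys_a": [["the", "contract", "shall", "be", "void"],
              ["the", "parties", "agree", "to", "arbitrate"]],
    "sys_b": [["the", "contract", "is", "void"],
              ["the", "parties", "agree", "on", "arbitration"]],
    "sys_c": [["the", "void", "of", "contract"],
              ["the", "arbitration", "is", "agreed"]],
}
# Hypothetical human adequacy scores per system (e.g. mean of 1-5 ratings).
human_scores = {"sys_a": 4.6, "sys_b": 3.9, "sys_c": 1.8}

smooth = SmoothingFunction().method1  # avoid zero n-gram counts on short segments
metric_scores = {
    name: corpus_bleu(references, hyps, smoothing_function=smooth)
    for name, hyps in system_outputs.items()
}

systems = sorted(metric_scores)
x = [metric_scores[s] for s in systems]
y = [human_scores[s] for s in systems]
r, _ = pearsonr(x, y)     # system-level Pearson correlation with human judgments
rho, _ = spearmanr(x, y)  # rank correlation: does the metric rank systems as humans do?
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

The Spearman rank correlation is the quantity most directly tied to the abstract's last claim: two metrics with similar Pearson correlations can still induce different rankings of online MT systems.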

Original language: English
Title of host publication: Computer Processing of Oriental Languages
Subtitle of host publication: Language Technology for the Knowledge-based Economy - 22nd International Conference, ICCPOL 2009, Proceedings
Pages: 337-344
Number of pages: 8
DOIs
Publication status: Published - 2009
Event: 22nd International Conference on Computer Processing of Oriental Languages, ICCPOL 2009 - Hong Kong
Duration: 26 Mar 2009 - 27 Mar 2009

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5459 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Computer Processing of Oriental Languages, ICCPOL 2009
Country/Territory: Hong Kong
Period: 26/03/09 - 27/03/09

Keywords

  • ATEC
  • BLEU
  • BLIS
  • Legal Text
  • Machine Translation Evaluation
