A fuzzier approach to machine translation evaluation: A pilot study on post-editing productivity and automated metrics in commercial settings
2015, pp. 40–45
Abstract
Machine Translation (MT) quality is typically assessed with automatic evaluation metrics such as BLEU and TER. Fuzzy match scores, by contrast, are widely used in industry to gauge the usefulness of Translation Memory (TM) matches based on text similarity, but are rarely applied to MT evaluation. We designed an experiment to test whether a fuzzy match score applied to MT output stands up against traditional MT evaluation methods. The results suggest that this metric performs at least as well as the traditional metrics.
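The paper does not spell out the exact fuzzy formula it used, but TM tools commonly compute a fuzzy match as one minus the edit distance between the two segments, normalised by the longer segment's length. A minimal sketch of that idea, applied word-by-word to an MT hypothesis and a reference (function names here are illustrative, not from the paper):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over two sequences.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def fuzzy_score(mt_output, reference):
    # Word-level fuzzy match: 1 - edit_distance / longer length, as a percentage.
    mt, ref = mt_output.split(), reference.split()
    if not mt and not ref:
        return 100.0
    dist = levenshtein(mt, ref)
    return 100.0 * (1 - dist / max(len(mt), len(ref)))

print(round(fuzzy_score("the cat sat on the mat",
                        "the cat sat on a mat"), 1))  # one word differs out of six
```

Applied to MT output against a reference translation, this yields a 0–100 score directly comparable to the fuzzy bands (e.g. 75–84%, 85–94%) that translators already use to price TM matches.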
Related Papers
- Improving English-to-Indian Language Neural Machine Translation Systems (2022), cited 31 times
- Evaluation of English–Slovak Neural and Statistical Machine Translation (2021), cited 21 times
- Statistical Error Analysis of Machine Translation: The Case of Arabic (2020), cited 3 times
- Statistical Machine Translation with Rule based Machine Translation (2011)
- BLEU deconstructed: Designing a Better MT Evaluation Metric (2021), cited 27 times