BLEU
2001, pp. 311–311
Top 1% of 2001 papers by citations
Abstract
Human evaluations of machine translation are extensive but expensive. They can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges, substituting for them when there is a need for quick or frequent evaluations.
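The abstract does not spell out the scoring procedure, but the evaluation style BLEU popularized compares a candidate translation against one or more reference translations using modified (clipped) n-gram precision combined with a brevity penalty. The following is a minimal sketch of that idea; the function names and the choice of 4-gram order are illustrative, not taken from the paper itself.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU-style score: geometric mean of modified
    n-gram precisions, scaled by a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(candidate, n)
        # Clip each candidate n-gram count by its maximum count in any
        # single reference (the "modified precision" that stops a
        # candidate from being rewarded for repeating a common word).
        max_ref = Counter()
        for ref in references:
            for gram, count in ngrams(ref, n).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(count, max_ref[gram]) for gram, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: penalize candidates shorter than the reference
    # whose length is closest to the candidate's.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to a reference scores 1.0, while repetitive or divergent output is driven toward 0 by clipping; this is what makes the metric cheap to rerun, since no human re-judging is needed.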
Related Papers
- Neural Machine Translation of Indian Languages (2017), 44 citations
- Better Evaluation Metrics Lead to Better Machine Translation (2011)
- ParFDA for Instance Selection for Statistical Machine Translation (2016), 7 citations
- Statistical Machine Translation with Rule-based Machine Translation (2011)
- Factored Statistical Machine Translation for German-English (2018)