Neural Machine Translation for Low-Resource Languages from a Chinese-centric Perspective: A Survey
Abstract
Machine translation, the automatic transformation of one natural language (the source language) into another (the target language) by computational means, occupies a central role in computational linguistics and stands as a cornerstone of research in Natural Language Processing (NLP). In recent years, Neural Machine Translation (NMT) has risen rapidly to prominence, offering an advanced framework for machine translation research. It is noted for its superior translation performance, especially when tackling the challenges posed by low-resource language pairs, which suffer from limited corpus resources. This article offers an exhaustive exploration of the historical trajectory and advancements of NMT, together with an analysis of its foundational concepts. It then delineates the distinctive characteristics of low-resource languages and presents a succinct review of pertinent translation models and their applications in the low-resource setting. Moreover, this article examines machine translation techniques in depth, highlighting approaches tailored to Chinese-centric low-resource languages. Finally, it anticipates upcoming research directions in the realm of low-resource language translation.