Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP)
Abstract
This paper investigates the performance of massively multilingual neural machine translation (NMT) systems in translating Yorùbá greetings (ε kú 1), which are a big part of Yorùbá language and culture, into English. To evaluate these models, we present IkiniYorùbá, a Yorùbá-English translation dataset containing some Yorùbá greetings and sample use cases. We analysed the performance of different multilingual NMT systems, including Google Translate and NLLB, and show that these models struggle to accurately translate Yorùbá greetings into English. In addition, we trained a Yorùbá-English model by finetuning an existing NMT model on the training split of IkiniYorùbá, and this achieved better performance than the pre-trained multilingual NMT models, even though they were trained on a much larger volume of data.

* Equal contribution.

1 For simplicity of notation in the title, we make use of ε, the Beninese Yorùbá letter representation of Ẹ (which is used in Nigeria).