Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences
Abstract
Training code-switched language models is difficult due to the lack of data and the complexity of the grammatical structure. Linguistic constraint theories have been used for decades to generate artificial code-switching sentences to cope with this issue. However, they require external word alignments or constituency parsers, which produce erroneous results on distant language pairs. We propose a sequence-to-sequence model with a copy mechanism that generates code-switching data by leveraging parallel monolingual translations from a limited source of code-switching data. The model learns how to combine words from the parallel sentences and identifies when to switch from one language to the other. Moreover, it captures code-switching constraints by attending to and aligning the words in its inputs, without requiring any external knowledge. Experimental results show that a language model trained on the generated sentences achieves state-of-the-art performance and improves end-to-end automatic speech recognition.
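The abstract only describes the model at a high level. As a rough illustration (not the authors' code), the sketch below shows one decoder step of a copy-mechanism seq2seq model in PyTorch, in the spirit of pointer-generator networks: the decoder mixes a generation distribution over the vocabulary with a copy distribution over the source tokens, which is how such a model can either emit a word or copy one from the parallel sentences. All names, shapes, and the dot-product attention here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyStep(nn.Module):
    """One decoder step of a copy-mechanism seq2seq model (illustrative sketch).

    Mixes a generation distribution over the vocabulary with a copy
    distribution over the source tokens, so the decoder can either emit
    a vocabulary word or copy a word from the parallel inputs.
    """

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(2 * hidden_size, vocab_size)  # P_vocab
        self.p_gen_proj = nn.Linear(2 * hidden_size, 1)           # generate-vs-copy gate

    def forward(self, dec_state, enc_states, src_ids, extended_vocab):
        # dec_state: (B, H)    decoder hidden state for this step
        # enc_states: (B, T, H) encoder states over the source tokens
        # src_ids: (B, T)      vocabulary ids of the source tokens
        # Dot-product attention over the source words.
        scores = torch.bmm(enc_states, dec_state.unsqueeze(2)).squeeze(2)  # (B, T)
        attn = F.softmax(scores, dim=1)
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)      # (B, H)
        features = torch.cat([dec_state, context], dim=1)                  # (B, 2H)
        p_vocab = F.softmax(self.vocab_proj(features), dim=1)              # (B, V)
        p_gen = torch.sigmoid(self.p_gen_proj(features))                   # (B, 1)
        # Final distribution: p_gen * P_vocab plus the remaining attention
        # mass scattered onto the ids of the source tokens (the copy part).
        p_final = torch.zeros(dec_state.size(0), extended_vocab)
        p_final[:, : p_vocab.size(1)] = p_gen * p_vocab
        p_final.scatter_add_(1, src_ids, (1.0 - p_gen) * attn)
        return p_final, attn

# Toy usage with random tensors.
step = CopyStep(hidden_size=8, vocab_size=20)
dec = torch.randn(2, 8)
enc = torch.randn(2, 5, 8)
ids = torch.randint(0, 20, (2, 5))
probs, attn = step(dec, enc, ids, extended_vocab=20)
print(probs.sum(dim=1))  # each row sums to 1.0
```

The attention weights `attn` are also what such a model uses to align words across the parallel sentences; the gate `p_gen` is what lets it learn when to switch from one language to the other.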
Related Papers
- Retrieval-Based Neural Code Generation (2018), 90 citations
- Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences (2019), 88 citations
- Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents (2016), 6 citations
- Simplifying Sentences with Sequence to Sequence Models (2018), 3 citations
- Intrinsic evaluation of language models for code-switching (2021)