Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
Top 1% of 2022 papers
Abstract
With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: at least 15 corpora contain no usable text, and a significant fraction contains less than 50% sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.
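The abstract does not spell out the automatic analyses used to supplement the human audit. As a purely illustrative sketch (not the paper's actual method), one simple heuristic for flagging non-linguistic content in a Web-crawled corpus is to measure the fraction of alphabetic characters per sentence; the `looks_like_text` function and the 0.5 threshold below are assumptions for illustration only.

```python
def looks_like_text(sentence: str, min_alpha_ratio: float = 0.5) -> bool:
    """Flag a sentence as plausibly linguistic text.

    Hypothetical heuristic: a sentence counts as usable text if at
    least `min_alpha_ratio` of its characters are alphabetic. Strings
    dominated by digits, markup, or punctuation (common boilerplate in
    Web-crawled corpora) are rejected.
    """
    if not sentence:
        return False
    alpha = sum(ch.isalpha() for ch in sentence)
    return alpha / len(sentence) >= min_alpha_ratio


# Example: a normal sentence passes, numeric/markup debris does not.
print(looks_like_text("This is a normal sentence."))  # True
print(looks_like_text("404 %%% 12345 !!!"))           # False
```

A heuristic this crude would only catch gross noise (price lists, navigation menus, encoding debris), not the mislabeled or wrong-language corpora the audit describes; those require language identification or speaker review.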