Aaron Mueller
Publications by Year
Research Areas
Topic Modeling, Natural Language Processing Techniques, Text Readability and Simplification, Multimodal Machine Learning Applications, Explainable Artificial Intelligence (XAI)
Most-Cited Works
- Quantity doesn't buy quality syntax with neural language models (2019), 72 citations
- Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora (2023), 65 citations
- The Johns Hopkins University Bible Corpus: 1600+ Tongues for Typological Exploration (2020)
- Label Semantic Aware Pre-training for Few-shot Text Classification (2022), 33 citations
- Bernice: A Multilingual Pre-trained Encoder for Twitter (2022), 29 citations
- Inverse Scaling: When Bigger Isn't Better (2023), 22 citations
- An Analysis of Massively Multilingual Neural Machine Translation for Low-Resource Languages (2020)
- Call for Papers -- The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus (2023), 16 citations
- What Do NLP Researchers Believe? Results of the NLP Community Metasurvey (2023), 16 citations
- Fine-grained Morphosyntactic Analysis and Generation Tools for More Than One Thousand Languages (2020)