A Holistic Approach to Undesired Content Detection in the Real World
Abstract
We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
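The abstract mentions an active learning pipeline for capturing rare events. A minimal sketch of one common selection strategy, uncertainty sampling, is shown below; all names (`select_for_labeling`, `score_fn`, the toy scores) are illustrative assumptions, not the paper's actual pipeline.

```python
def select_for_labeling(pool, score_fn, budget=2):
    """Pick the `budget` items whose predicted probability is closest to
    the decision boundary (0.5), i.e. where the model is least certain.
    These are the examples most likely to be informative when labeled."""
    scored = [(abs(score_fn(text) - 0.5), text) for text in pool]
    scored.sort(key=lambda pair: pair[0])
    return [text for _, text in scored[:budget]]

# Toy stand-in for a moderation classifier's probability output.
toy_scores = {"a": 0.98, "b": 0.52, "c": 0.03, "d": 0.47}
picked = select_for_labeling(list(toy_scores), toy_scores.get)
print(picked)  # the two most uncertain items: ['b', 'd']
```

In practice the scoring model and the selection criterion would be richer (e.g. combining uncertainty with category rarity), but the core loop of score, rank, and route to human labelers follows this shape.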