ConvAI at SemEval-2019 Task 6: Offensive Language Identification and Categorization with Perspective and BERT
Abstract
This paper presents the application of two strong baseline systems for toxicity detection and evaluates their performance in identifying and categorizing offensive language in social media. Perspective is an API that serves multiple machine learning models for improving online conversations, as well as a toxicity detection system trained on a wide variety of comments from platforms across the Internet. BERT is a recently popular language representation model that is fine-tuned per task and achieves state-of-the-art performance on multiple NLP tasks. Perspective performed better than BERT in detecting toxicity, but BERT was much better at categorizing the offensive type. Both baselines ranked surprisingly high in the SemEval-2019 OffensEval competition: Perspective in detecting an offensive post (12th) and BERT in categorizing it (11th). The main contribution of this paper is the assessment of two strong baselines for the identification (Perspective) and the categorization (BERT) of offensive language with little or no additional training data.
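A minimal sketch of how a Perspective-based offensive-post detector of this kind might look. The request payload shape and endpoint follow the public Perspective `comments:analyze` API; the 0.5 decision threshold and the OFF/NOT label mapping are illustrative assumptions, not the paper's tuned configuration, and the network call itself is omitted.

```python
import json

# Public Perspective API endpoint (an API key would be appended as a query
# parameter in a real request; no network call is made in this sketch).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text: str) -> dict:
    """Build the JSON body Perspective expects for a TOXICITY query."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def classify(toxicity_score: float, threshold: float = 0.5) -> str:
    """Map a Perspective toxicity probability to an OffensEval-style label.

    The 0.5 threshold is an illustrative assumption; a real system would
    tune it on development data.
    """
    return "OFF" if toxicity_score >= threshold else "NOT"

# Example: serialize a request body and threshold a hypothetical score.
payload = json.dumps(build_request("example tweet text"))
label = classify(0.87)
```

In a full pipeline the serialized payload would be POSTed to `API_URL`, and the returned `attributeScores["TOXICITY"]["summaryScore"]["value"]` would be passed to `classify`.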