Sarcasm Detection is Way Too Easy! An Empirical Comparison of Human and Machine Sarcasm Detection
Abstract
Recently, author-annotated sarcasm datasets, which focus on intended rather than perceived sarcasm, have been introduced. Although datasets collected using first-party annotation have important benefits, there has been no comparison of human and machine performance on these new datasets. In this paper, we collect new annotations to provide human-level benchmarks for these first-party annotated sarcasm tasks in both English and Arabic, and compare the performance of human annotators to that of state-of-the-art sarcasm detection systems. Our analysis confirms that sarcasm detection is extremely challenging, with individual humans performing close to or slightly worse than the best trained models. With majority voting, however, humans achieve the best results on all tasks. We perform an error analysis, finding that some of the most challenging examples are those that require additional context, and we highlight common features and patterns used to express sarcasm in English and Arabic, such as idioms and proverbs. We suggest that, to better capture sarcasm, future sarcasm detection datasets and models should focus on representing conversational and cultural context while leveraging world knowledge and common sense.
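The majority-voting aggregation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation pipeline: the function name and the example annotations are hypothetical, and ties are resolved by whichever label `Counter` counts first.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among a set of annotator labels.

    Ties are broken by insertion/count order, so an odd number of
    annotators (as is typical in annotation studies) avoids ambiguity.
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-tweet annotations: 1 = sarcastic, 0 = not sarcastic
annotations = [1, 0, 1]
print(majority_vote(annotations))  # → 1
```

Aggregating several imperfect annotators this way can outperform any single annotator, which matches the paper's finding that the human majority vote beat both individual humans and trained models.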