Arming the public with artificial intelligence to counter social bots
Top 1% of 2019 papers
Abstract
The increased relevance of social media in our daily life has been accompanied by efforts to manipulate online conversations and opinions. Deceptive social bots -- automated or semi-automated accounts designed to impersonate humans -- have been successfully exploited for these kinds of abuse. Researchers have responded by developing AI tools to arm the public in the fight against social bots. Here we review the literature on different types of bots, their impact, and detection methods. We use the case study of Botometer, a popular bot detection tool developed at Indiana University, to illustrate how people interact with AI countermeasures. A user experience survey suggests that bot detection has become an integral part of the social media experience for many users. However, barriers in interpreting the output of AI tools can lead to fundamental misunderstandings. The arms race between machine learning methods to develop sophisticated bots and effective countermeasures makes it necessary to update the training data and features of detection tools. We again use the Botometer case to illustrate both algorithmic and interpretability improvements of bot scores, designed to meet user expectations. We conclude by discussing how future AI developments may affect the fight between malicious bots and the public.
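The abstract describes detection tools that score accounts using features derived from account metadata and behavior. As a rough illustration of how such feature-based scoring can work in principle, the sketch below derives a few metadata features often discussed in the bot-detection literature and combines them into a 0-1 score. The feature names, weights, and thresholds are hypothetical and do not reflect Botometer's actual model, which uses supervised machine learning over a much richer feature set.

```python
# Illustrative sketch only (NOT Botometer's actual model): score an
# account from a few metadata features. All weights and thresholds
# below are hypothetical and chosen for demonstration.

def extract_features(account):
    """Derive simple numeric features from raw account metadata."""
    followers = account["followers_count"]
    friends = account["friends_count"]
    tweets = account["statuses_count"]
    age_days = max(account["age_days"], 1)  # avoid division by zero
    return {
        # Bot-like accounts often follow many more accounts than follow them.
        "follower_friend_ratio": followers / max(friends, 1),
        # Very high posting rates can indicate automation.
        "tweets_per_day": tweets / age_days,
        # An unchanged default profile is a weak automation signal.
        "has_default_profile": 1.0 if account["default_profile"] else 0.0,
    }

def bot_score(account):
    """Combine features into a score in [0, 1]; higher = more bot-like."""
    f = extract_features(account)
    score = 0.0
    if f["follower_friend_ratio"] < 0.1:
        score += 0.4
    if f["tweets_per_day"] > 100:
        score += 0.4
    score += 0.2 * f["has_default_profile"]
    return min(score, 1.0)

# A hypothetical account with bot-like metadata scores near 1.0.
suspicious = {"followers_count": 3, "friends_count": 2000,
              "statuses_count": 50000, "age_days": 30,
              "default_profile": True}
print(bot_score(suspicious))  # → 1.0
```

Real systems replace the hand-tuned rules above with classifiers trained on labeled bot and human accounts, which is also why, as the abstract notes, training data and features must be updated as bots evolve.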