Response Quality in Human-Chatbot Collaborative Systems
Abstract
We report the results of a crowdsourced user study evaluating the effectiveness of human-chatbot collaborative conversation systems, which aim to extend a human user's ability to answer another person's requests in a conversation with the help of a chatbot. We examine the quality of responses from two collaborative systems and compare them with human-only and chatbot-only settings. Both systems allow users to formulate responses based on a chatbot's top-ranked results, presented as suggestions, but they differ in how strongly they encourage synthesizing human and AI outputs. Experimental results show that both systems significantly improved the informativeness of messages and reduced user effort compared with a human-only baseline, at the cost of fluency and human-likeness. Compared with a chatbot-only baseline, the collaborative systems produced comparably informative but more fluent and human-like messages.
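To make the suggestion mechanism concrete, below is a minimal sketch of the "chatbot ranks candidates, human composes" pipeline the abstract describes. The retrieval method (TF-IDF cosine similarity), the response pool, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a retrieval-based suggestion step: the chatbot ranks candidate
# responses against an incoming request; the human user then edits or
# synthesizes the top-k suggestions into a final reply instead of sending
# a chatbot message verbatim.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical response pool the chatbot retrieves from.
response_pool = [
    "Sure, the meeting is at 3pm on Thursday.",
    "I can send you the report by end of day.",
    "The office is closed on public holidays.",
]

vectorizer = TfidfVectorizer().fit(response_pool)
pool_vectors = vectorizer.transform(response_pool)

def suggest_responses(request: str, k: int = 2) -> list[str]:
    """Return the chatbot's top-k ranked candidate responses for a request."""
    scores = cosine_similarity(vectorizer.transform([request]), pool_vectors)[0]
    ranked = sorted(range(len(response_pool)), key=lambda i: scores[i], reverse=True)
    return [response_pool[i] for i in ranked[:k]]

# The human sees these suggestions and formulates the final response.
print(suggest_responses("When is the meeting this week?"))
```

The two systems in the study would differ downstream of this step, in how strongly the interface pushes the user to merge the suggestions with their own wording rather than accept one as-is.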