Exploring Trust, Acceptance, and Behavioral Differences When Humans Collaborate with Large Language Models as Tools and Teammates
Abstract
With the emergence of new AI technologies, research on the potential for AI to function as a teammate alongside humans has expanded. The recent introduction of highly capable large language models (LLMs) is particularly noteworthy, as they show strong potential in human–AI teaming, where communication is crucial. However, this novel technology has yet to be validated in human–AI teaming or as a teammate, hindering its application in research and practice. This article presents an empirical online experiment (N = 778) in which participants engaged in a real-time, interdependent interaction with a commercially available LLM, with the presentation of the LLM manipulated to be either a tool or a teammate. Results show that presenting an LLM as a teammate rather than a tool significantly increases trust and significantly affects the sentiment humans express when talking with their AI, with LLM teammates eliciting more positive sentiment. Perceptions of trust, acceptance, and performance were generally high for LLMs presented as teammates. Despite these effects, participants’ prior experience with AI technology still predicted the perceptions they formed of their AI teammate. Based on these findings, this article presents an important empirical result: presenting highly capable AI, such as LLMs, as a teammate can improve perception and interaction compared to presenting it as a tool. The article concludes with a discussion of how future research can continue to identify when and how to introduce LLMs and other AI technologies as teammates.