New research published in Scientific Reports provides insight into how artificial intelligence can impact human communication. The findings indicate that people tend to use more positive language and perceive each other more positively when using an AI-based chat tool. However, people tend to evaluate others more negatively if they suspect that AI-generated “smart replies” are being used in the conversation.
“In tech and research on tech, the primary focus is almost always on the ‘user’ and how the technology impacts or helps the user,” said study author Malte F. Jung, an associate professor at Cornell University and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow.
“This frustrates me because we often ignore how technology impacts others we live and work with. My early research on teams taught me how important our interpersonal interactions are. I want to bring that understanding to the way we design and study technology.”
“When Google released the first smart-reply-enabled messenger (Google Allo), I was excited to study it because it was one of the first widely deployed AI-based systems that inserted itself directly into our interactions with others,” Jung explained.
“Shifting the focus to how technology impacts our interactions with people is important. Almost anything we are able to achieve at home and at work rests on the quality of our interactions with others. More and more AI systems insert themselves into the very way we interact with others, and we don’t really understand the impact these systems have on interactions.”
“My students and I have spent the past ten years trying to understand how different technologies (including social robots, AI language tools, surgical robots, and other machines) impact the way people interact and communicate with each other,” Jung told PsyPost. “We find over and over again that all these systems have a social impact far beyond the person directly interacting with the system. There is a ripple effect. We need to understand that.”
Jung and his colleagues conducted two randomized experiments to study the interpersonal consequences of using AI-generated smart replies during real-time, text-based communication. Using the Google Reply API, the researchers developed a custom messaging application that allowed them to control which AI-generated replies were displayed during the conversations. Smart replies like these are generated by neural language models trained to predict plausible next responses.
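To make the setup concrete, here is a minimal sketch of the display-control logic such a custom messaging app could use: request candidate replies from a suggestion service, then show them only to participants assigned to a smart-reply condition. The endpoint URL, payload shape, and field names below are illustrative assumptions, not the study’s actual implementation.

```python
# Hypothetical sketch: fetching reply suggestions and gating their display.
# The endpoint and response format are assumptions for illustration only.
import requests

SUGGEST_URL = "https://example.com/smart-reply/suggest"  # placeholder endpoint

def get_suggestions(conversation_history):
    """Request candidate replies for the conversation so far (assumed API shape)."""
    resp = requests.post(SUGGEST_URL, json={"messages": conversation_history}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("suggestions", [])

def replies_to_display(conversation_history, has_smart_replies):
    """Show suggestions only to participants in a smart-reply condition."""
    if not has_smart_replies:
        return []
    return get_suggestions(conversation_history)
```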
For the first study, the researchers randomly assigned pairs of participants to have smart replies available or not available during a conversation about a policy issue. This resulted in four conditions: both partners could use smart replies, only the first partner could, only the second partner could, or neither could (a short sketch of this 2×2 assignment appears below).
The participants were recruited via Amazon’s Mechanical Turk and asked to come to an agreement on the “top 3 changes that Mechanical Turk could make to better handle unfairly rejected work.” The final sample included 361 participants, and the conversations lasted roughly seven minutes on average.
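The 2×2 structure of the design, with smart-reply availability toggled independently for each partner, can be summarized in a short sketch; the variable names are illustrative, not taken from the study’s materials.

```python
# Illustrative sketch of the 2x2 random assignment: each partner's
# smart-reply availability is toggled independently, yielding four conditions.
import random

def assign_pair():
    return {
        "partner_a_has_smart_replies": random.random() < 0.5,
        "partner_b_has_smart_replies": random.random() < 0.5,
    }

print(assign_pair())  # e.g. {'partner_a_has_smart_replies': True, 'partner_b_has_smart_replies': False}
```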
After the conversation, the researchers measured several outcomes using self-report surveys, including perceived dominance, affiliation, cooperative communication, and perceived smart reply use. They also measured communication speed and messaging sentiment using computational tools.
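The exact computational tools are not specified here, but the two automated measures can be sketched as follows, with VADER standing in as an assumed sentiment analyzer and message timestamps used for speed.

```python
# A minimal sketch of the computational measures: VADER is an assumed
# stand-in for the sentiment tool; speed is messages sent per minute.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def mean_sentiment(messages):
    """Average VADER compound score (-1 = most negative, +1 = most positive)."""
    analyzer = SentimentIntensityAnalyzer()
    scores = [analyzer.polarity_scores(m)["compound"] for m in messages]
    return sum(scores) / len(scores)

def messages_per_minute(send_times_sec):
    """Messages sent per minute, given send times in seconds from the start."""
    duration_min = (max(send_times_sec) - min(send_times_sec)) / 60
    return len(send_times_sec) / duration_min

print(mean_sentiment(["Sounds good to me!", "That seems unfair."]))
print(messages_per_minute([0, 30, 95, 410]))
```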
In the second study, 291 pairs of participants were randomly assigned to one of four conditions: (1) both participants receive Google’s default smart replies, (2) both receive smart replies with positive sentiment, (3) both receive smart replies with negative sentiment, or (4) neither receives smart replies. As in Study 1, the researchers measured messaging sentiment to assess the impact of the smart replies.
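One plausible way to build the positive- and negative-sentiment conditions is to filter the candidate suggestions by valence before displaying them. The sketch below uses VADER and a simple compound-score threshold; both are assumptions for illustration, not the authors’ documented pipeline.

```python
# Hedged sketch: filter candidate smart replies by assigned sentiment condition.
# VADER scoring and the 0.05 threshold are illustrative assumptions.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_analyzer = SentimentIntensityAnalyzer()

def filter_by_condition(candidates, condition, threshold=0.05):
    """Keep only candidates that match the pair's assigned condition."""
    if condition == "none":
        return []                 # condition (4): no smart replies shown
    if condition == "default":
        return candidates         # condition (1): unfiltered suggestions
    kept = []
    for reply in candidates:
        score = _analyzer.polarity_scores(reply)["compound"]
        if condition == "positive" and score >= threshold:
            kept.append(reply)    # condition (2): positive-sentiment replies
        elif condition == "negative" and score <= -threshold:
            kept.append(reply)    # condition (3): negative-sentiment replies
    return kept

print(filter_by_condition(["Great idea!", "No way.", "Sure, sounds good."], "positive"))
```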
Jung and his colleagues found that smart replies sped up communication, with 10.2% more messages sent per minute when they were available. On average, smart replies accounted for 14.3% of sent messages. Participants could tell at better-than-chance rates whether their communication partner was using smart replies, but the effect was weak: they often misjudged whether their partner was actually using them.
But there was an interesting discrepancy between the perceived and the actual use of AI. People who thought their partner was using smart replies rated that partner as less cooperative, less affiliative, and more dominant, even if the partner wasn’t actually using them. The partner’s actual use of smart replies, by contrast, improved ratings of their cooperation and affiliation.
“My key takeaway would be: AI tools have social side effects. They can change the way we build and maintain social relationships with others,” Jung told PsyPost.
“AI tools such as smart replies or ChatGPT are marketed with the promise to save time and boost productivity. What’s left out is that they impact how we interact with and relate to others. For example, Google markets its smart reply API with the promise that it helps ‘users respond to messages quickly, and makes it easier to reply to messages on devices with limited input capabilities.’ Yet, we found that it changes our language and how we are perceived by others.”
“The Wall Street Journal published an article describing similar side effects for ChatGPT, showing how its use in communication changes how we are perceived (often negatively),” Jung added.
As expected, the researchers found that the availability of negative smart replies caused conversations to have more negative emotional content than conversations with positive smart replies or no smart replies at all. These changes in the language used in conversations were driven by people’s use of smart replies, rather than just exposure to them.
“By the time we ran our final studies, what we saw did not really surprise us anymore, in part because we had found similar ‘social side effects’ from other technologies we studied over the years,” Jung told PsyPost.
“What was interesting to see, though, was how the interactions directly corresponded with the linguistic properties of the smart replies participants were given. Smart replies with a positive tone led to more positive interactions than smart replies with a negative tone. This suggests that whoever controls the smart replies has a certain degree of control over the interactions we have with others.”
But there is still much to learn about the impact of AI on social interactions.
“As with any study, our study raises more questions than it answers,” Jung explained. “Some of the questions I am most fascinated by concern the larger-scale social implications of such AI systems. For example, how do systems like predictive text and ChatGPT impact diversity in language use? What social norms will people adopt to navigate the benefits and risks of using these systems? How do we design autonomous systems with a better understanding of their social implications?”
The study, “Artificial intelligence in communication impacts language and social relationships”, was authored by Jess Hohenstein, Rene F. Kizilcec, Dominic DiFranzo, Zhila Aghajari, Hannah Mieczkowski, Karen Levy, Mor Naaman, Jeffrey Hancock, and Malte F. Jung.