A group of researchers at Cornell University found that people who use AI-powered chat tools converse more efficiently, use more positive language, and perceive each other more positively.
Postdoctoral researcher Jess Hohenstein, M.S. '16, M.S. '19, Ph.D. '20, is lead author of "Artificial Intelligence in Communication Impacts Language and Social Relationships," published April 4 in Scientific Reports.
Co-authors include Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), and René Kizilcec, assistant professor of information science (Cornell Bowers CIS).
Generative AI is poised to impact every aspect of society, communication, and work. New evidence of the technical capabilities of large language models (LLMs) like ChatGPT and GPT-4 is published every day, but the social consequences of integrating these technologies into our daily lives are still not fully understood.
Although AI tools have the potential to improve efficiency, they can also have negative societal side effects. Hohenstein and colleagues investigated how the use of AI in conversations affects the way people express themselves and view each other.
"Technology companies tend to emphasize the usefulness of AI tools for accomplishing tasks faster and better, but they ignore the social aspects," Jung said. "We do not live and work in isolation, and the systems we use affect our interactions with others."
But alongside those gains in efficiency and positivity, the group found that when participants thought their partner was using more AI-suggested responses, they perceived that partner as less cooperative and felt less close to them.
"I was surprised to find that people tend to judge you more negatively simply because they suspect you're using AI to compose your writing, whether or not that's actually the case," Hohenstein said. "This shows that people have a deep-seated general skepticism about AI."
For the first experiment, co-author Dominic DiFranzo, a former postdoctoral researcher in the Cornell Robots and Groups Lab who is now an assistant professor at Lehigh University, developed a smart-reply platform the group called "Moshi" (Japanese for "hello"), modeled after Google's now-defunct "Allo" ("hello" in French), announced in 2016. The smart replies are generated by an LLM to predict plausible next responses in chat-based interactions.
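The smart-reply idea can be pictured with a toy sketch: rank candidate replies by how often they followed similar messages in earlier conversations, and offer the top few as suggestions. This is a deliberately simplified, hypothetical stand-in (the corpus and exact-match rule here are invented for illustration); the actual platform used an LLM rather than frequency counts.

```python
from collections import Counter

# Hypothetical mini-corpus of (message, reply) pairs.
# A real smart-reply system would condition an LLM on the chat history.
CORPUS = [
    ("how are you", "doing well, thanks"),
    ("how are you", "pretty good"),
    ("how was the meeting", "it went well"),
    ("how are you", "doing well, thanks"),
]

def suggest_replies(message, corpus=CORPUS, k=2):
    """Return up to k candidate replies, most frequent first."""
    counts = Counter(reply for prompt, reply in corpus if prompt == message)
    return [reply for reply, _ in counts.most_common(k)]

print(suggest_replies("how are you"))  # most frequent reply ranked first
```

The participant then either taps one of the suggestions or types a response of their own, which is exactly the choice the experiment manipulated.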
A total of 219 pairs of participants were asked to talk about policy issues and were assigned to one of three conditions: both participants could use smart replies, only one participant could use smart replies, or neither participant could use smart replies.
The researchers found that using smart replies increased communication efficiency, the use of positive emotional language, and positive evaluations by communication partners. On average, smart replies accounted for 14.3% (about 1 in 7) of the messages sent.
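The two figures quoted are the same quantity stated two ways, as a quick arithmetic check confirms:

```python
# One message in seven, expressed as a percentage to one decimal place.
share = 1 / 7
print(f"{share:.1%}")
```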
However, participants whose partners suspected them of responding with smart replies were evaluated more negatively than those believed to have typed their responses themselves, consistent with common assumptions about the negative effects of AI.
In the second experiment, 299 randomly assigned pairs of participants discussed a policy issue under one of four conditions: no smart replies, the default Google smart replies, smart replies with a positive emotional tone, or smart replies with a negative emotional tone. Conversations with positive smart replies had a more positive emotional tone than conversations with negative smart replies or no smart replies, highlighting the impact of AI on language production in everyday conversations.
"AI may help you write, but it is also changing your language in ways you might not expect, in this case making you sound more positive," Hohenstein said. "This suggests that by using text-generating AI, you are sacrificing some of your own personal voice."
Jung said: "What we are observing in this study is the impact AI has on social dynamics and some of the unintended consequences that could result from integrating AI into social contexts. It suggests that whoever controls the algorithm may have influence on how people interact, speak, and perceive each other."
Other co-authors include Mor Naaman, professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech and Cornell Bowers CIS, and Karen Levy, associate professor of information science (Cornell Bowers CIS), along with collaborators at Lehigh University and Stanford University.
This research was supported by the National Science Foundation.