Malte Jung, Cornell University – Social Cost of AI in Social Interactions

On Cornell University’s Impacts of A.I. Week: Technology that guides how we respond in conversations may have some negative effects.

Malte Jung, associate professor of information science, outlines them.

Malte Jung is an Associate Professor of Information Science at Cornell University and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow. He holds field appointments in Mechanical Engineering, Computer Science, and Communication. His research explores the design of autonomous systems and how they impact people and their interactions with each other. Malte’s work has received several awards, including an NSF CAREER award. He holds a Ph.D. in Mechanical Engineering and a Ph.D. minor in Psychology from Stanford University, and a Diploma in Mechanical Engineering from the Technical University of Munich. Prior to joining Cornell, Malte Jung completed a postdoc at the Center for Work, Technology, and Organization at Stanford University.


Social Cost of AI in Social Interactions

“A rapidly growing array of AI tools such as ChatGPT and sentence completion promises to make us more productive and efficient, but these benefits come at a social cost.

“In two recent studies, we asked almost 1,000 participants to interact with another person using a new messaging app we developed. The app allowed us to control which participants could use smart replies and which could not. Smart replies are short response suggestions, such as “hello”, “I am great”, or “how are you”, that are generated by an AI. People can send them by clicking on them instead of writing their own replies.

“We found that people use smart replies when given the opportunity and that using them has benefits. They increase efficiency and make people appear more likable and cooperative. 

“However, we also found that smart replies have a social cost. The more someone suspects that another person is using them, the less likable and cooperative they find that person, and the more dominant they perceive them to be.

“So, what does this mean? Why are there both positive and negative effects?

“We believe that there are two things going on. On the one hand, smart replies impact our language. We know that smart replies generated by Google’s algorithm tend to have a warm and friendly tone. Using them makes our messages warmer and friendlier, and that in turn reflects positively on us.

“On the other hand, we believe the negative side effects have something to do with people’s suspicions about AI and with perceptions of authenticity and effort. We want others to make a genuine effort when they interact with us, and AI-generated language doesn’t appear genuine.

“There is no question that the capabilities promised by AI are enticing and seductive, but understanding their social consequences is crucial before we let these machines speak on our behalf.”
