Jonathan P. Chang, Cornell University – How AI Tools Could Help Make Online Discussions Healthier

On this Student Spotlight during Cornell University’s Impacts of A.I. Week: Online discussions have many pitfalls; what if A.I. could step in to help?

Jonathan P. Chang, Ph.D. candidate in computer science, explores this question.

Jonathan P. Chang is a Ph.D. candidate in Computer Science at Cornell, advised by Cristian Danescu-Niculescu-Mizil. He earned his undergraduate degree in Computer Science at Harvey Mudd College. Jonathan’s current research focuses on content moderation on online platforms and social media: he applies NLP and computational social science techniques both to characterize and model patterns of misbehavior online and to develop computational tools that can help improve the effectiveness of content moderation.


How AI Tools Could Help Make Online Discussions Healthier

Online discussions have a reputation for hostility: what starts out as an everyday conversation can derail into toxicity or personal attacks. And this is not just the result of bad-faith trolls; research has shown that even well-intentioned people can turn hostile under certain circumstances.

As humans, we have intuitions about when a conversation might be turning hostile, and we often rely on them in face-to-face interactions. But online this is much harder: some of the signals we rely on may be absent or muted. If an AI could learn to detect these signals of rising tension, it could help supplement our human intuition and guide us toward healthier online discussions.

To test this idea, we built a browser extension called ConvoWizard. It’s powered by an AI model that was fed millions of examples of online conversations, from which it learned intuitions about where a conversation might be headed. Based on these intuitions, ConvoWizard can warn users when their discussion might be turning tense. In collaboration with a Reddit debate community, we recruited volunteers to try the tool out. In the end, more than half of the participants reported that seeing ConvoWizard’s feedback made them rethink posting something they might have regretted. And empirically, we found that users who saw a warning from ConvoWizard were more likely to use known de-escalation strategies, such as adopting a more formal tone or asking more questions.
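To make that concrete, here is a minimal sketch of what such a warning check could look like in code. This is not ConvoWizard’s actual implementation: as a stand-in for its derailment-forecasting model, the sketch uses an off-the-shelf sentiment classifier, and the model choice, the “confidently negative text” proxy, and the warning threshold are all illustrative assumptions.

    # Minimal sketch of a ConvoWizard-style "is this getting tense?" check.
    # Assumptions: an off-the-shelf sentiment model stands in for a real
    # derailment forecaster, and WARN_THRESHOLD is a placeholder cutoff,
    # not a published value.
    from transformers import pipeline

    # Widely used sentiment model with labels POSITIVE / NEGATIVE.
    scorer = pipeline(
        "text-classification",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    WARN_THRESHOLD = 0.9  # placeholder: "confidently negative"

    def should_warn(thread: list[str], draft_reply: str) -> bool:
        """Flag the draft if the conversation so far, plus the draft,
        reads as strongly negative. This is a crude proxy for the rising
        tension that a real derailment forecaster is trained to detect."""
        context = " ".join(thread + [draft_reply])
        result = scorer(context, truncation=True)[0]
        return result["label"] == "NEGATIVE" and result["score"] >= WARN_THRESHOLD

    # Example: check a draft reply before it gets posted.
    thread = [
        "I disagree with your second point.",
        "You clearly didn't read my post.",
    ]
    if should_warn(thread, "That's the dumbest take I've ever seen."):
        print("Heads up: this discussion may be getting tense. Post anyway?")

A real forecaster, like the one described above, would instead be trained on millions of conversation examples and would predict where the discussion is headed, rather than just scoring the tone of what has been written so far.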

We’re really encouraged by these results, because they show the potential of this human-AI collaboration approach to make online spaces healthier. But of course there are still plenty of open questions, such as how well these tools scale to larger communities and what their long-term impact is. So we’re excited to keep working on this, and we look forward to seeing where it goes next!

Read More:
[NPR] – A new AI tool can moderate your texts to keep the conversation from getting tense
