Qian Yang, Cornell University – AI Tool Gains Doctors’ Trust by Giving Advice Like a Colleague

On Cornell University’s Impacts of A.I. Week: Getting people to trust A.I. can be a tricky process.

Qian Yang, assistant professor in information science, examines how doctors become comfortable using the technology.

Qian Yang is an assistant professor in Information Science at Cornell University and a human-computer interaction (HCI) researcher. Her research expertise is in designing AI systems that collaborate effectively with human experts. She has created systems across many critical human-AI collaboration domains, including clinical decision-making, medical imaging, writing, autonomous driving, and accessibility. Drawing on this line of research, she has developed practical methods and tools for AI application designers and innovators. Yang is an AI2050 fellow, and her lab’s work has been generously supported by the NSF and the NIH, among others.

 

AI Tool Gains Doctors’ Trust by Giving Advice Like a Colleague

Hospitals have begun using “decision support tools” powered by artificial intelligence models that can diagnose disease, suggest treatment, or predict a surgery’s outcome. But no such model is correct all the time, so how do doctors know when to trust the AI’s recommendation?

Our recent research suggests that if these AI tools can counsel the doctor like a colleague – if the system can point out relevant scientific evidence that supports the decision – then doctors can better weigh the merits of the AI recommendation and adopt it only when they should.

We are not the first researchers to study how best to calibrate clinicians’ trust in AI. Previous work has often tried to do so by explaining to doctors how the underlying algorithm works or what data was used to train the AI model. But that turned out not to be sufficient. Doctors wanted to know not just whether the system is accurate 98% or 99% of the time; more importantly, they wanted to know, on a case-by-case basis, whether each suggestion is correct for the particular patient at hand. That is difficult to determine, even for many AI experts.

So we took a different approach. We started by acknowledging that a doctor is not an AI engineer; the most effective way to help doctors understand whether an AI recommendation is correct will likely be very different from what AI engineers find effective. We therefore chose to first study how clinicians explain their diagnostic or treatment recommendations to their colleagues, and then designed AI systems that explain their recommendations in similar ways. This approach worked. Our studies showed that if we build systems that help validate AI suggestions using information doctors find trustworthy, we can help them understand whether the AI is likely right or wrong for each specific case.
