Yuval Avnur, Scripps College – News Echo Chambers

On Scripps College Week: Are news echo chambers really a bad thing?

 Yuval Avnur, professor of philosophy, takes a two-pronged approach to this question.

Yuval Avnur is Professor of Philosophy at Scripps College. He works on a broad range of issues concerning belief and doubt in contemporary epistemology, philosophy of religion, and the history of philosophy. He is serving as director of the Scripps College Humanities Institute through Spring 2021.

News Echo Chambers


Many people get their news from social media feeds, like Facebook or Twitter, where mostly like-minded friends post articles and opinions. Some worry that, when people get news and opinions primarily through like-minded sources, their pre-existing opinions are reinforced and insulated from opposition; this is the problem of online ‘echo chambers’. But what is the problem, exactly? There are two ways the echo chamber could bias you: by affecting what information you are exposed to, and by affecting the way you process that information.

Sometimes people assume that the problem is of the first kind: that we have become sheltered from unpleasant news and hear only information that conforms to our pre-existing opinions. But echo chambers arguably don’t limit our information in this way, and even if they did, it isn’t clear how this would differ from other kinds of information sources. Instead, the second sort of bias, which some call “motivated reasoning,” is the real problem. Echo chambers facilitate and amplify our common tendency to process whatever information we have in a way that is responsive not just to the truth, but also to our desires, fears, and other non-truth-related motivations. These motivations determine a target conclusion. The people in your feed post material in a way that is likely to support reasoning from the available evidence to your common target. You are encouraged to take some information as a big revelation, and some with a grain of salt, depending on whether it supports or threatens the shared target. If this is right, we should primarily aim to expose and correct biased reasoning, not deficits in information. For example, exposing an anti-vaxxer to more information about vaccines will not help; but targeting their reasoning from the information to their conclusion, and perhaps their original sources for this reasoning, may well make a difference.
