Francesco D’Acunto, Georgetown University – Can Algorithms Eliminate Implicit Biases in Financial Decisions?

On Georgetown University’s McDonough School of Business Week: Can algorithms be used to eliminate implicit biases in financial decisions?

Francesco D’Acunto, A. James Clark Chair in Global Real Estate and Provost Distinguished Associate Professor of Finance, explores the possibilities.

Francesco D’Acunto is the A. James Clark Chair in Global Real Estate and Provost Distinguished Associate Professor of Finance at Georgetown University. His research interests are in the areas of households’ beliefs and economic choices, housing and real estate, FinTech, and inequalities. In these areas, Professor D’Acunto studies the formation of beliefs and the financial decision-making of households, how regulation and the financial sector affect inequalities and discrimination through mortgage lending, and how FinTech can reduce these inequalities. His work has been published in top academic journals, including the Proceedings of the National Academy of Sciences (PNAS), the Journal of Political Economy, and the Review of Economic Studies. His work has been awarded two Cubist Systematic Strategies Awards for Outstanding Research and has been cited in policy speeches by top global policymakers, including the President of the European Central Bank and the presidents of the New York, San Francisco, and Cleveland Feds.

Professor D’Acunto received his PhD and MSc in Finance from the University of California, Berkeley. Prior to joining Georgetown, he was on the faculty of the Smith School of Business at the University of Maryland and of Boston College.

Can Algorithms Eliminate Implicit Biases in Financial Decisions?
Every day, we make decisions that can be influenced by the biases we’ve picked up from our upbringing and cultural backgrounds. This is also true for financial decisions, and biases often harm our financial well-being. What are viable solutions? My coauthors and I recently explored how algorithmic suggestions, or robo-advice, could help people make better financial decisions—even if it means challenging deeply ingrained beliefs.

We focused on the growing field of peer-to-peer lending, where everyday people act as lenders, similar to banks, earning interest from loans. Unlike professional bank officers, though, peer-to-peer lenders lack formal training in assessing borrowers. Instead, when choosing borrowers, they often rely on gut feelings, including biases related to ethnicity and race.

Our research, conducted in India, examined how lenders made choices with and without algorithmic recommendations. When lenders made decisions without any guidance, they often overlooked high-quality borrowers from different ethnic or religious backgrounds. Instead, they chose lower-quality borrowers who shared their identity, ultimately costing them money, even when clear information about borrowers’ quality was available.

However, when these same lenders had access to a robo-advisor that suggested borrowers based solely on risk profiles, they began to favor high-quality other-identity borrowers over low-quality same-identity borrowers. This shift was notable because lenders could still see the ethnicities and religions of the borrowers, and the robo-advisor only used information that was already available to lenders. Lenders who made this switch earned significantly more money.

These findings reveal the complexities of implicit bias in financial decisions. Left unaddressed, such biases lead to financial losses for lenders in addition to harming the borrowers who are discriminated against. Yet the effectiveness of a simple algorithmic advisor suggests that these biases may be unintentional, acting as a default decision-making strategy in areas where individuals lack experience. Robo-advising can be a valuable tool to help individuals willingly overcome their biases.

Read More:
[SSRN] – How Costly are Cultural Biases? Evidence from FinTech
