Madalina Vlasceanu is an Assistant Professor of Psychology at New York University and director of the Collective Cognition Lab. Madalina obtained a PhD in Psychology and Neuroscience from Princeton University in 2021 and a BA in Psychology and Economics from the University of Rochester in 2016. Her research focuses on the cognitive and social processes that shape individuals’ and collectives’ memories, beliefs, and behaviors, with direct applications for policy. Guided by a theoretical framework and striving for ecological validity, Madalina employs a wide array of methods including behavioral laboratory experiments, field studies, randomized controlled trials, international many-lab collaborations, agent-based modeling, and social network analysis, with the goal of stimulating social change and improving societal welfare. Her research sits at the intersection of basic and applied science, incorporates an interdisciplinary perspective, and directly informs policy relevant to current societal issues, such as algorithmic inequality and the climate crisis.
Bias and AI
Artificial intelligence (AI) algorithms have been introduced into almost all aspects of modern society, from the medical and justice systems to education and even national security. We now rely on AI to make vast numbers of decisions that influence our lives.
This streamlining was initially celebrated as a step toward faster and more intelligent decision-making. Recently, however, scholars have begun to reveal systemic social biases in the decisions AI makes. For example, AI used by hospitals has been found to recommend unfair healthcare allocations, discriminating against racial minorities.
Here, we asked: what causes such biases in AI, and what are the effects of biased AI on people?
We examined the Google Images results displayed when searching for the gender-neutral keyword “person” in different countries around the world. What we found was staggering: in countries where women are treated more unfairly than men (for example, where women are paid less than men for the same labor), the results of Googling “person” included more images of men than of women. This relationship was linear; that is, the more inequality women experience in a given country, the fewer images of women appeared on Google, despite men and women making up equal halves of the population. This result suggests that societal biases are embedded deep in the AI algorithms we use in everyday life.
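The country-level analysis described above amounts to correlating a national gender-inequality measure with the share of women among top image-search results, one data point per country. The sketch below illustrates that computation with a hand-rolled Pearson correlation; the numbers are hypothetical placeholders for illustration only, not the study’s data.

```python
# Illustrative sketch: correlate a country's gender-inequality index with
# the share of women among its top image-search results for "person".
# All data values below are hypothetical, not from the actual study.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: one entry per country.
inequality_index = [0.10, 0.25, 0.40, 0.55, 0.70]  # higher = more unequal
share_women = [0.48, 0.44, 0.38, 0.33, 0.27]       # fraction of women in results

r = pearson_r(inequality_index, share_women)
print(f"r = {r:.2f}")  # a strongly negative r mirrors the reported pattern
```

A strongly negative correlation on such data is what the article’s “linear relationship” refers to: as inequality rises, the proportion of women in the results falls.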
But we didn’t stop there. We then asked: what is the impact of biased AI on society? We examined the hiring choices of people exposed to gender-imbalanced Google Images results and found that they made biased hiring choices aligned with the AI’s biases.
Together, our findings suggest that gender inequality in society leads to biased AI, and biased AI in turn reinforces gender inequality in society, in a self-perpetuating cycle of bias propagation.