Qihang Lin, University of Iowa – Using AI to make AI Less Discriminatory

On Tippie College of Business at the University of Iowa Week: How do we reduce bias in machine learning models?

Qihang Lin, Henry B. Tippie Research Fellow and Associate Professor in the Department of Business Analytics, explores the options.

Qihang Lin is Henry B. Tippie Research Fellow and Associate Professor in the Department of Business Analytics at the University of Iowa’s Tippie College of Business. His research focuses on optimization and machine learning, with recent work addressing fairness in machine learning and decision-making. He received his Ph.D. in Operations Research from Carnegie Mellon University in 2013. Dr. Lin currently serves as the Faculty Director of the Part-Time Master of Business Analytics program.

Using AI to Make AI Less Discriminatory

Machine learning is increasingly used in high-stakes decision-making for things like hiring, lending, and healthcare, but fairness concerns are growing along with it. Studies show that machine learning models can unintentionally introduce bias against some groups because of algorithmic design, biased sampling, or societal inequalities encoded in the data. A well-known example is Amazon's AI hiring tool, which systematically downgraded women's resumes for technical roles because the model had learned from historical hiring data that favored male applicants.

To address this, we've developed optimization algorithms that build fairness constraints into model training. Unlike the traditional approach, which only maximizes a model's accuracy, our method provides a structured way to balance fairness and accuracy, for example, by enforcing similar predicted positive rates between groups. We've found in some applications that enforcing absolute fairness can significantly degrade a model's predictive performance. Therefore, in our method, users can specify the level of fairness they want in their model to suit their application.
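To make the idea concrete, here is a minimal sketch of one common way to train under this kind of constraint: a logistic regression penalized whenever the gap in predicted positive rates between two groups exceeds a user-chosen tolerance. This is an illustration of the general technique, not the authors' actual algorithm; the function name `fit_fair_logreg` and the parameters `eps` (fairness tolerance) and `lam` (penalty weight) are hypothetical.

```python
# Illustrative sketch (assumed names, not the authors' method): logistic
# regression with a hinge penalty on the demographic-parity gap,
# max(0, |rate_A - rate_B| - eps), where eps is user-specified.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, group, eps=0.05, lam=10.0, lr=0.1, epochs=2000):
    """Gradient descent on average logistic loss plus a penalty that
    activates when the groups' mean predicted positive rates differ
    by more than eps."""
    n, d = X.shape
    w = np.zeros(d)
    a, b = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # gradient of the average logistic loss
        grad = X.T @ (p - y) / n
        # smooth surrogate of the positive-rate gap between groups
        gap = p[a].mean() - p[b].mean()
        if abs(gap) > eps:
            s = p * (1 - p)  # derivative of the sigmoid
            d_gap = (X[a].T @ s[a]) / a.sum() - (X[b].T @ s[b]) / b.sum()
            grad += lam * np.sign(gap) * d_gap
        w -= lr * grad
    return w

# Toy usage: the outcome depends partly on group membership,
# so an unconstrained model would show a parity gap.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = rng.integers(0, 2, size=500)
y = (X[:, 0] + 0.8 * group + rng.normal(size=500) > 0).astype(float)
w = fit_fair_logreg(np.c_[X, group], y, group, eps=0.05)
```

Setting `eps = 0` demands exact parity, while larger values leave slack that the model can spend on accuracy, mirroring the user-tunable fairness level described above.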

At the end of the day, fairness in AI isn’t just a technical issue—it’s a societal one. By improving the way we optimize for fairness, we can build AI systems that are not only accurate but also equitable and trustworthy, ensuring they serve all groups fairly. This work is just one step toward that goal, but it’s an important one. As machine learning continues to shape our world, we believe fairness should be part of the foundation, not an afterthought.

Read More:
[The Gazette] – AI needs regulation to avoid discrimination
