What is a spillover crisis and how can AI contribute to it?
Dan Laufer, professor and head of the School of Communication Studies at the Auckland University of Technology, explains.
Dr Daniel Laufer, PhD, MBA (The University of Texas at Austin, USA), is a Professor and Head of the School of Communication Studies at the Auckland University of Technology (AUT) in New Zealand. His primary area of expertise is crisis management, and his research focuses on crisis communications and on gaining a better understanding of how stakeholders react to crises. In addition to publishing articles in leading academic journals, he currently serves as Contributing Editor for Crisis Management at the managerial journal Business Horizons, and he is also a member of the editorial board of Public Relations Review. The article "Managing spillover crises in the age of generative AI" was published in Business Horizons in 2025, co-authored with Professor Yijing Wang of Erasmus University in the Netherlands.
Managing Spillover Crises in the Age of Generative AI
In 2023, Levi Strauss' announcement that it would use AI-generated virtual models led to widespread concerns about the displacement of human labor in the fashion industry, adversely impacting its competitors. Essentially, the reputational damage spread beyond Levi Strauss to other companies in the industry, highlighting the risk of crisis spillover in the age of AI.
The research identifies five types of AI spillover crises by applying the accessibility/diagnosticity framework: authenticity/integrity crises, labor displacement crises, technical failure crises, data security and privacy crises, and discrimination crises.
Accessibility relates to belonging to a shared category, such as an industry, and diagnosticity refers to whether a crisis can be linked to that category. For example, in the Levi Strauss case, competitors' shared membership in the fashion industry relates to accessibility, while the perceived connection between AI and layoffs relates to diagnosticity. When both accessibility and diagnosticity are present, the risk of a spillover crisis is high.
It is worth noting that, unlike solutions involving people, which are viewed as heterogeneous, researchers have found that AI solutions are viewed as homogeneous. This makes AI-related crises more prone to spilling over to other AI-integrated organizations.
An effective strategy for managing a spillover crisis is to differentiate the company from the organization experiencing the AI-related crisis. By communicating how its systems, governance structures, or human oversight differ, a company can reduce its perceived similarity to the organization involved in the crisis and protect its reputation.
Read More:
[Science Direct] – Managing spillover crises in the age of generative AI

