Date: May 20, 2026
Time: 11:00–13:00
Venue: Aula EFFEDIESSE, Department of Mathematics (Building 14, fourth floor)
Abstract: Generative models are increasingly associated with a variety of first-order normative risks, including plagiarism, scams, misinformation, psychosis, and suicidal ideation. It is sometimes argued that these are familiar problems best addressed through context-specific regulation of particular applications, rather than by treating generative AI as a uniquely exceptional technology. I argue, however, that when generative models scale together along four dimensions – fidelity, latency, reach, and portability – they give rise to a distinctive class of second-order normative risks across domains. I call these risks norm collapse. Norm collapse occurs when the conditions under which norms remain enforceable, stable, and action-guiding begin to fail. I identify three modes of collapse. Regulatory collapse occurs when the quantity, ambiguity, and rate of harmful outputs overwhelm the institutions responsible for identifying, verifying, and adjudicating violations, so that norms lose practical enforceability. Compliance collapse occurs when actors cease complying because cheap and opaque defection alters expectations about what others are doing, making honest participation increasingly irrational. Competence collapse occurs when large-scale deskilling erodes the human competences on which normative practices depend, weakening the capacity of institutions and participants to sustain and be guided by those practices. The upshot is that generative models do not merely introduce new instances of familiar harms. At scale, they can undermine the very conditions that allow norms to govern action in the first place.
Eugene Y. S. Chua is a Nanyang Assistant Professor of Philosophy at Nanyang Technological University, Singapore, where he also leads the Foundations of Thermodynamics group. He received his PhD in philosophy from the University of California, San Diego, was an Ahmanson Postdoctoral Instructor at Caltech, and was part of the inaugural cohort of Northeastern University’s AI and Data Ethics Summer Institute. His research focuses on the philosophy of science, especially the philosophy of physics, as well as conceptual and normative issues in AI ethics raised by emerging technologies, such as the ethical risks of LLM-powered psychotherapy chatbots.