
Laura Sartori & Roberta Calegari (University of Bologna): “The two cultures dialogue: How the social and computer sciences talk about bias and fairness in AI”

April 18th at 15:30. Room: Sala Seminari Nicola Schiavoni, ground floor of Polimi building 20 (via Ponzio 34/5). Streaming available.

Abstract: How can sociology and computer science cooperate towards equitable and trustworthy AI systems? AI awareness, biases, and under-represented social groups are key social dimensions that are often underestimated in the technical development of fair and trustworthy automated decision-making (ADM) systems. This lecture will argue that such features can be effectively taken into account in the design of AI systems, drawing on case studies from the European project AEQUITAS.

Laura Sartori is an Associate Professor of Sociology at the Department of Political and Social Sciences of the University of Bologna. She holds a Ph.D. in Sociology and Social Research from the University of Trento (2002) and is an editorial board member of the international journal Sociology. Her work concerns the social and political implications of technology, with special reference to the reproduction of inequalities. Her current projects address: (1) algorithmic inequalities in automated decision-making (ADM) systems applied to the labor market; (2) public opinion and the social acceptance of artificial intelligence; (3) generative AI, work, and professions; and (4) ADM systems and medical practice.
She takes part in the Horizon Europe project AEQUITAS (G.A. 101070363), on the assessment and engineering of equitable, unbiased, impartial, and trustworthy AI systems. The project aims to provide an experimentation playground for trustworthy AI systems, with special attention to gender, race, and LGBTQI+ bias. She is in charge of the sociological methodologies.

Roberta Calegari is an Assistant Professor at the Department of Computer Science and Engineering and at the Alma Mater Research Institute for Human-Centered Artificial Intelligence of the University of Bologna. Her research concerns trustworthy and explainable systems, distributed intelligent systems, software engineering, multi-paradigm languages, and AI & law. She coordinates the Horizon Europe project AEQUITAS (G.A. 101070363), on the assessment and engineering of equitable, unbiased, impartial, and trustworthy AI systems; the project aims to provide an experimentation playground for assessing and repairing bias in AI. She is also part of the EU Horizon 2020 project "PrePAI" (G.A. 101083674), which works on defining the requirements and mechanisms ensuring that all resources published on the future AIonDemand platform can be labelled as trustworthy and compliant with the forthcoming AI regulatory framework. Her research interests lie within the broad area of knowledge representation and reasoning for trustworthy and explainable AI, with a particular focus on symbolic AI, including computational logic, logic programming, argumentation, logic-based multi-agent systems, and non-monotonic/defeasible reasoning. She is a member of the Editorial Board of ACM Computing Surveys for the area of Artificial Intelligence.

