META lectures


Judy Wajcman (London School of Economics): “Optimizing Temporal Capital: How Big Tech Imagines Time as Auditable”

October 6 at 11am. Room Sala Conferenze, on the ground floor of the Polimi building 20 (Leonardo campus, Via Ponzio 34/5).

This talk will show how electronic calendaring systems have become emblematic of the contemporary vision of mastering time, codifying a distinctive quantitative orientation to time. Drawing on interviews with calendar designers, Judy Wajcman explores the quest among knowledge workers in Silicon Valley to embed a culture of temporal optimization through the use of calendaring software. Their response reveals a specific kind of technoscientific world: one fixated with solving the problem of time scarcity in contexts organized around maximizing productivity. Furthermore, this world is increasingly embracing the power of predictive data analytics and artificial intelligence. Yet rather than being the progressive act that many Silicon Valley designers posit, this move toward automating time is the latest in a series of long-standing moral attempts to subject time to a particular brand of rationalization. This orientation to, and valorization of, the fast-paced, full life requires incessant performance on our part and the relentless pursuit of self-enhancement. In other words, the talk will argue that calendaring software configures time events as auditable data that is ripe for accounting in the service of both old and new forms of socially-constructed optimization.

Jos Uffink (University of Minnesota): “The expression of the Uncertainty Principle of Quantum Mechanics”

September 27 at 4pm. Room Aula Consiglio, on the seventh floor of building 14 (Leonardo campus, via Bonardi 9).

This talk will start with a brief review of the early history of the expression of Werner Heisenberg’s famous (1927) Uncertainty Principle for position (Q) and momentum (P) in Quantum Mechanics. In his 1927 paper, Heisenberg gave only a qualitative formulation of the Uncertainty Principle, but within a few months the physics community settled on a mathematical expression of this principle in terms of an inequality first presented by E.H. Kennard, wherein the product of the standard deviations of Q and P is greater than or equal to half of the (reduced) Planck constant. This is still the most common and well-known version of an uncertainty relation today. Nevertheless, it will be shown in this talk that Kennard’s inequality is not strong enough to provide a foundation for most of the common examples used in textbooks as illustrations of the Uncertainty Principle. I will argue, therefore, that conceptually stronger inequalities are needed for this purpose. I will review several approaches to obtaining such inequalities: the Landau & Pollak inequalities (1961), the entropic uncertainty inequality (Bialynicki-Birula & Mycielski, 1975) and the statistico-geometric approach to the uncertainty relation of Hilgevoord & Uffink (1991), and discuss their strengths and weaknesses vis-à-vis the common textbook examples.
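For reference, two of the inequalities described above can be stated compactly in standard notation (conventions for the entropic version vary; the form below is one common formulation, with h the differential entropy of the respective probability distribution):

```latex
% Kennard (1927): product of the standard deviations of Q and P
\Delta Q \, \Delta P \;\geq\; \frac{\hbar}{2}

% Bialynicki-Birula & Mycielski (1975): entropic uncertainty relation
h(Q) + h(P) \;\geq\; \ln(e\pi\hbar)
```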


Mathias Frisch (Leibniz Universität Hannover): Managing Uncertainties in the Equilibrium Climate Sensitivity

April 19 at 4pm. Room Aula Consiglio, on the seventh floor of building 14 (Leonardo campus, via Bonardi 9).

Despite the great progress that climate science and our understanding of the climate system have made in the last three decades, significant uncertainties in our knowledge of the climate system and future climate change remain. In this talk I will focus on estimates for one climate variable, the equilibrium climate sensitivity (ECS), and examine how uncertainties in its estimates are treated in the most recent IPCC report, AR6. AR6 supplements a framework of probability intervals (adopted from earlier reports) by additionally considering low-likelihood but high-impact “storylines”. I will argue that including such storylines represents an important improvement over prior reports. More specifically, I will argue that (i) in light of the deep uncertainties plaguing estimates of the values of central climate variables, seemingly probabilistic assessments of these quantities ought to be read in a coarse-grained way and interpreted within a possibilistic framework as statements of ranked possibilities; (ii) given the high stakes involved, an appropriate decision framework for climate policy decisions ought to include broadly precautionary considerations that are sensitive to high-impact and catastrophic climate futures; and (iii) storylines are a useful tool for probing possible high-impact climate futures.

Roberto Fumagalli (King’s College London): Preferences versus Opportunities: On the Conceptual Foundations of Normative Welfare Economics

March 29 at 17.15. Room BL28.22, on the second floor of building 28 (Bovisa campus, Via Lambruschini 4).

Abstract: Normative welfare economics commonly assumes that individuals’ preferences can be reliably inferred from their choices and relies on preference satisfaction as the normative standard for welfare. In recent years, several authors have criticized welfare economists’ reliance on preference satisfaction as the normative standard for welfare and have advocated grounding normative welfare economics on opportunities rather than preferences. In this paper, I argue that although preference-based approaches to normative welfare economics face significant conceptual and practical challenges, opportunity-based approaches fail to provide a more reliable and informative foundation for normative welfare economics than preference-based approaches. To support my qualified defence of preference-based approaches, I then identify and rebut various influential calls to ground normative welfare economics on opportunities rather than preferences.

Laura Candiotto (University of Pardubice): What trusting an AI means

March 27 at 13.00. Room Sala Seminari Nicola Schiavoni, on the ground floor of building 20 (Leonardo campus, Via Ponzio 34/5).

Abstract: After the publication of the now-classic “extended mind” paper by Andy Clark and David Chalmers (Clark & Chalmers 1998), a profuse debate on the conditions that should be met for granting the extension of the mind into the world has taken place at the crossroads of philosophy of mind, cognitive science, philosophy of technology, and artificial intelligence. Notably, Clark (2010) has claimed that some “glue and trust” criteria allow us to differentiate between genuine extended cognition and the merely instrumental employment of a technological tool. These are constancy in use, direct availability, automatic endorsement, and past conscious endorsement. The affective dimension of trust has rarely been studied in this regard. In this talk, I will argue that trust plays a fundamental role in cognitive integration with AI. We cannot have a phenomenologically accurate description of cognitive integration with AI – as opposed to merely relying on it – without considering what trusting an AI means. I will advance the hypothesis that without trust, cognitive integration is not attainable most of the time. So, if my hypothesis is correct, trust is a necessary condition of extended cognition. The final upshot is that trust enables the subject to carve out new relationships with the environment through the employment of technology. In tailoring a new affective-cum-cognitive niche through trustful engagement with AI agents and systems, one can extend one’s action possibilities and thus increase one’s feeling of agency in the world.

José Antonio Ballesteros-Figueroa (University of Edinburgh): “The Climate Emergency: Between Designing and Following the Future”

October 4, 2022 – 9:30. Edificio 21 (building 21, DEIB), Aula Alario, quarto piano (fourth floor), Via Golgi 39.

This lecture will discuss the role of quantitative devices (QDs) in expanding sociotechnical imaginaries. It will address how researchers conceive of quantitative data, pay attention to the discourses of hope and expectation embedded in the devices, and discuss ethical questions of who can quantify the environment and who can only follow what is datafied for them. In fact, colonial powers have used data-driven policy support in the pursuit of achieving their imaginaries by offering a more desirable global future. This phrasing signals the impossibility of multiple cosmologies, since the achievement of “a future” requires that others not occur, or at least that they modify their trajectory to merge into the dominant imaginaries. This is particularly clear in the context of the climate emergency, with initiatives such as the Stockholm Knowledge Hub for Climate Security and its goal of providing tools to protect the future.

Riccardo Chesta (Scuola Normale Superiore, Firenze): “Azione collettiva ed expertise scientifico nelle politiche ambientali”

September 19, 2022 – 16:30. Edificio 20 (building 20, DEIB), Sala Seminari Nicola Schiavoni, piano terra (ground floor), via Ponzio 34/5.

Based on mixed-methods research and ethnographic fieldwork at various sites in Italy, this lecture examines the relationship between expertise and activism in grassroots environmentalism. Presenting interviews with citizens, activists and experts, it considers environmental activism surrounding infrastructure in urban areas, in connection with water management, transport, tourism and waste disposal. Through comparisons between different political environments, the author analyses the ways in which citizens, political activists and technical experts participate in using expertise, shedding light on its effects on the structure and composition of environmental politics and movements, as well as the implications for mechanisms of participation and the formation of alliances. Bridging the sociology of expertise and environmental politics, this study of the relationship between contentious expertise and democratic accountability shows how conflict transforms expertise production into a ‘contentious politics by other means’, rather than inhibiting it.


Katy Shaw (Northumbria University): “Literature 2.0: The Politics of Writing and Reading in the Twenty-First Century”

July 19, 2022 – 15:00. Edificio 29 (building 29, Carta), Sala Riunioni, piano terra (ground floor), Piazza Leonardo da Vinci 26

The Internet and the digitalization of texts have transformed the way literature is viewed, processed and exchanged across the first two decades of the new millennium. Although e-readers have traditionally been used to passively view a text, this medium has evolved beyond its primary function. Some e-readers can now access other electronic documents such as newsfeeds, and e-reader apps can be used in conjunction with the internet to hack existing works and create entirely new ones. This empowering development has had wider implications for readers and writers of digitalized literature. As technology has expanded a widening creative field of writings to include supplementary ‘texts’ such as Twitter feeds, Facebook comments, Tumblr blogs and forum exchanges, writers have also begun to extend texts and their contexts in new ways, and readers have begun to converse with authors online through social networking sites, sometimes collaborating in the production of new works. This paper will consider how contemporary writings have responded to and been shaped by the ongoing challenges and opportunities of digital innovations in publishing in the twenty-first century. Focusing on case studies drawn from Twitterature, mobile novels and text hacking, the paper will ask how Literature 2.0 is impacting the ways we write and read today.

Fleur Jongepier (Radboud University): “Do Algorithms Know Better? Self-knowledge in the Digital Age”

May 23, 2022 – 17:00. Edificio 24 (building 24), aula beta, piano terra (ground floor), Via Golgi 40

In the age of data-driven algorithms, “governments and corporations will soon know you better than you know yourself”, according to Yuval Noah Harari. In an (in)famous article that played a significant role in the Cambridge Analytica scandal, an algorithm is said to require only 10 ‘likes’ to know you better than a colleague does, and 70 to match your friends’ judgment. The knowledge algorithms have about us is increasingly being used by government institutions, for instance to determine the chance of someone committing a crime, dropping out of school, or illegitimately receiving social benefits. But do algorithms really ‘know us better’, and if so, in what sense? In this talk, I distinguish between different types of epistemic authority and argue that in an important sense algorithms do not, and cannot, know us better than we know ourselves. Following Miranda Fricker, I suggest that in contexts in which algorithms are unjustifiably deferred to, this constitutes a distinct type of epistemic injustice, namely, the injustice of not being treated as a self-knower.

Rawad El Skaf (Politecnico di Milano): “Probing theoretical statements with thought experiments”

May 4, 2022 – 16:00. Edificio 14 (building 14), Sala Consiglio, piano VII (floor n.7), Via Bonardi 9

Many thought experiments (TEs) are used to probe theoretical statements. One crucial strategy for doing this, or so I will argue, is the following. A TE reveals an inconsistency in part of our previously held, sometimes empirically well-established, theoretical statements. A thought experimenter or her critic then proposes a resolution in the form of a conjecture, a hypothesis that merits further investigation. To explore this characterisation of the epistemic function of such TEs, I first clarify the nature of the inconsistencies they reveal. Second, I describe a common structure for such inconsistency-revealing/resolving TEs. I argue that such an epistemic account of TEs can be given without settling the question of which cognitive processes are involved in their performance, be they propositional or non-propositional. The upshot is that TEs’ reliability, like that of real experiments, is to be found, in part, in their replicability by the epistemic community, not in their cognitive underpinnings. To that end, I formulate, following scientific practice, five strategies for the critic of a TE. I conclude by considering some ideas for generalising this account to all scientific TEs, at least in physics.

Diletta De Cristofaro (Politecnico di Milano): “Sleep Mode: Technologies and the Sleep Crisis”

April 27, 2022 – 13:00. Edificio 29 (building 29), sala seminari piano terra (ground floor), Piazza Leonardo da Vinci 26

We are in a sleep crisis: sleep disorders are on the rise and people are sleeping less and less, with disastrous effects on health. Or at least, this is what some studies claim, incessantly amplified by the media. In fact, sleep scientists are divided on whether or not our society is suffering from this crisis of poor sleep. Yet the discourse of contemporary society as profoundly sleep-deprived dominates cultural production. Why have insomnia, burnout, and exhaustion become the main lenses through which we think about sleep in our present? Considering a range of examples from twenty-first-century literature, visual arts, popular and digital culture, this lecture explores the ambiguous role of technologies in the discourse of the sleep crisis: how they are blamed for this supposed public health emergency and, at once, positioned as a possible solution to the crisis. I argue that technologies, and their representations in contemporary culture, illuminate a nexus of concerns, affects, and imperatives that underlie the discourse of the sleep crisis.


Catherine Herfeld (University of Zurich): “The Many Faces of Rational Choice Theory”

May 31, 2021 – 12:00. Dipartimento di Ingegneria Gestionale, Edificio BL26,  via Lambruschini 4. ONLINE

Throughout the second half of the twentieth century, theories of rational choice have been extensively employed in economics and the social sciences more generally. They have been used in the hope of solving a variety of distinct conceptual, methodological and epistemic problems and are thus to be found in nearly any context in which economists aim at generating knowledge about the economy. At the same time, theories of rational choice have been attacked from various sides. As they have been empirically falsified countless times, they have often been identified as responsible for the explanatory and predictive shortcomings of economic models and theories. In this talk, I aim to provide a fresh perspective on persistent debates about the epistemic potentials and limitations of rational choice theory. First, I suggest that rational choice theory has many conceptually and methodologically distinct faces that remain prevalent in contemporary economics, but have emerged from a history of earlier attempts to conceptualize the behavior of human agents. By looking more closely at a set of historical and contemporary cases, I argue that the way in which rational choice theories have been used and justified in economics has depended crucially upon the problems that economists addressed. They should accordingly be evaluated against the backdrop of precisely those problems they were meant to provide a solution for. Second, I argue that even if economists could draw upon an empirically more adequate theory of human behavior, it remains to be seen whether they have found an appropriate solution for the empirical difficulties that economic models and theories actually confront.

Marianna Antonutti Marfori (MCMP, LMU Munich): “The Epistemic Route to Reflection”

April 8, 2021 – 15:00. Dipartimento di Matematica, Edificio 14, Via Bonardi 7. Platform Webex: ONLINE

Gödel’s incompleteness theorems show that any formal theory that contains a certain minimal amount of arithmetic is incomplete: there are statements in the language of the theory that the theory can neither prove nor disprove. These include so-called reflection principles, statements that formally express the soundness of the theory, i.e. that everything provable from the axioms is true. This seems to contrast with our pre-theoretical understanding of arithmetic in everyday life and in science, and with our readiness to accept that every statement that can be proved from our axioms is true. Many authors have therefore sought to justify the addition of such reflection principles to our formal theories by different means. This paper presents a new way of justifying the addition of reflection principles on the basis of epistemic considerations, by employing certain uncontroversial properties of the notion of informal or absolute provability for arithmetic. Starting from the observation that the fundamental properties of this notion can be formally characterised using the axioms of the modal logic S4 (along the lines already proposed by Gödel), I will show that the recognition that the axioms and rules of inference of Peano arithmetic (PA) correctly formalise informal arithmetical reasoning is sufficient to formally imply the local and uniform reflection schemes.
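The local and uniform reflection schemes mentioned above have standard formulations; with Prov_PA a canonical provability predicate for PA, they read:

```latex
% Local reflection for PA: for each sentence \varphi of the language of arithmetic
\mathrm{Prov}_{\mathrm{PA}}(\ulcorner \varphi \urcorner) \;\rightarrow\; \varphi

% Uniform reflection for PA: for each formula \varphi(x),
% with \dot{x} denoting the numeral of x
\forall x \,\bigl( \mathrm{Prov}_{\mathrm{PA}}(\ulcorner \varphi(\dot{x}) \urcorner) \;\rightarrow\; \varphi(x) \bigr)
```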

Lorenzo Rossi (MCMP, LMU Munich):  “Paradossi: Verità, Vaghezza, e Oltre”

February 24, 2021 – 15:00. Dipartimento di Matematica, Edificio 14, via Bonardi 7. Platform Webex: ONLINE

A paradox can be characterised as an argument that, starting from apparently unassailable premises and employing only apparently valid steps, leads to an unacceptable conclusion. Paradoxes have proven to be an invaluable source of progress for scientific knowledge, especially in logic and the foundations of mathematics. In this seminar, I will present a unified approach to two fundamental kinds of paradox: the semantic paradoxes and the paradoxes of vagueness (“soritical paradoxes”). I will develop a formal model of the notion of paradox and apply it to show that semantic and soritical paradoxes exhibit a similar structure. I will conclude with some prospects for further applications of the model.

Gerardo Ienna (Università Ca’ Foscari di Venezia): “La scienza non è neutrale. Il contributo dei fisici italiani all’idea della responsabilità sociale dello scienziato”

October 21, 2020 – 11:00. Dipartimento di Fisica, Edificio 8, Aula 8.0.1, Piazza Leonardo da Vinci 32

The lecture will present the contribution of Italian physicists to the emergence of the European debate on the non-neutrality of science from the 1960s onwards. By observing the strengthening of the political positions of the scientists, historians and philosophers involved in this debate, the migration of radical ideas and scientists between the United States and Europe, and the development of the so-called “Italian Science Wars”, it will show how a politically engaged approach to science influenced the practices of physics, encouraging new ideas and avenues of research.



Alessandro Blasimme (ETH Zurich): “L’intelligenza artificiale al servizio della salute: questioni etiche ed epistemologiche”

November 19, 2019 – 16:00. Dipartimento di Chimica, Materiali e Ingegneria Chimica, Edificio 6, Aula Lombardi, Piazza Leonardo da Vinci 32

The development of machine-learning algorithms promises to open new avenues both in biomedical research and in diagnosis and treatment. These advances have given rise to an intense debate on the technical aspects of artificial intelligence. An equally lively debate concerns the ethical and epistemological aspects of the new possibilities it offers. In reviewing the most recent developments in this area, I will focus on two fundamental issues: the representativeness of the data used in machine learning, and the opacity of algorithms. Some concluding reflections will be devoted to understanding how digitalisation and automation prefigure new articulations of the relationship between health, disease and care.

Luca Mari (Università Carlo Cattaneo – LIUC): “Models of measurement, models in measurement”

November 11, 2019 – 15:00. Dipartimento di Elettronica, Informazione e Bioingegneria, Edificio 24, Aula Alfa, via Golgi 40

Measurement has an important and acknowledged role in connecting the empirical realm and the information realm. Witnessing the diversity of philosophical standpoints in different contexts and times, the nature and justification of this connection have been variously interpreted: from discovery of the true value of the measurand, to assignment to symbols representative of the measurand. A reflection about the unavoidable presence of models in measurement gives us some perspectives on the epistemology of measurement, and therefore on models of measurement.


Annibale Biggeri (Università di Firenze) e Mariachiara Tallacchini (Università Cattolica di Piacenza): “Improving Environmental Health Through Citizen Science and Collaborative Research: New Challenges for Knowledge Making”

January 16, 2019 – 14:45. Dipartimento di Ingegneria Civile e Ambientale, Edificio 4A, Aula Fassò, Piazza Leonardo da Vinci 32

Two new social and techno-scientific phenomena have merged over the last twenty years: the use of ICT devices to collect and share personal and scientific data, and the rise of new forms of knowledge production led by citizens, or generated by scientists and citizens collaborating towards shared social goals (peer production of knowledge).
This combination of technical and cognitive skills and new social relations has often been applied to community initiatives. Such initiatives promote an attitude of attention and activation for the protection of the common good, and aim to increase control over human and environmental health. The awareness of the need to guarantee the transparency, reliability and accessibility of data, the need to resort to crowdfunding mechanisms, and the commitment to improving social life increasingly characterise these activities as potential models for the production of knowledge for public policy. However, the status of these initiatives still calls for a better definition of the validity and forms of validation of the underlying citizen science, of its relationship with traditional science and scientists, of the ethical aspects of the research, and for an adequate recognition of its role in decision-making processes. This lecture illustrates the main theoretical and practical challenges in which the knowledge of citizens and scientists merges, and the basic needs and values that inspire collaborative research in epidemiology and environmental health. Two initiatives launched in Florence by citizens and researchers for do-it-yourself monitoring of PM10 and PM2.5 (“Mamme no inceneritore” and “Che aria tira”) will also be presented.

Giovanni Sartor (Università di Bologna): “Veicoli autonomi e diritto: la manopola etica”

December 4, 2018 – 15.30. Dipartimento di Meccanica, Edificio B.22, Sala Corsi, piano terra, Via La Masa 1

The adoption of autonomous vehicles (AVs) raises significant ethical dilemmas and numerous legal questions. How should a self-driving car behave when an accident is unavoidable? Should it minimise the loss of human life, even at the expense of its own passengers’ safety, or act to protect its passengers even when this entails sacrificing third parties? According to some authors, AVs should be “programmed to kill”, that is, pre-programmed, on the basis of a moral algorithm, to choose which lives to save and which to sacrifice. This contribution explores a different approach: the possibility of entrusting the driver and/or passenger with the task, and the burden, of deciding which ethical principle AVs should adopt in the event of unavoidable accidents. In particular, it assumes that AVs could be equipped with an “ethical knob”, a system that allows passengers an ethical customisation of the vehicle, i.e. to select the ethical principles that will guide its action.

Jim Weatherall (University of California at Irvine): “What Makes Econophysics Distinctive?”

March 29, 2019 – 14:00. Dipartimento di Ingegneria Gestionale, Edificio 20,  Aula MEL 1, Via Lambruschini 17/B

There is a long history of ideas (and people) moving from fields such as physics and mathematics into finance. In the past, the ideas and methods of physicists have been rapidly integrated into economic thought. Beginning around 1990, however, a new movement of physicists began attempting to apply methods from statistical physics to economic problems. Strangely, this time the ideas from physics have not been widely adopted or integrated into mainstream economics. Instead, a new, largely autonomous field of “econophysics” has appeared, in which people trained mostly in physics or by physicists work on problems of economics. In this talk, I will explore some of the reasons for the appearance of this new field. Ultimately I will argue that what makes econophysics distinctive — both from economics, and from past attempts to import ideas from physics — is that econophysicists seem to recognize, and attempt to meet, an explanatory demand that economists reject, concerning the relationship between models of individual actors and market-level and economy-level models.


Marcello Di Bello (City University of New York): “Profile Evidence, Fairness and the Risks of Mistaken Verdicts”

January 12, 2018 – 9:30. Dipartimento di Matematica, Edificio 14, Aula Saleri, Via Bonardi 7

Profile evidence expresses a positive statistical correlation between membership in a group and crime. For example, statistics indicate that people who committed a burglary before are more likely to commit burglary compared to people in the general population. This profile evidence, if it is introduced at trial against a defendant charged with burglary and known to have committed burglary before, would be highly probative of guilt. But even when it is probative of guilt, many resist admitting profile evidence in a criminal trial for the purpose of helping to establish the guilt of the defendant. Coming up with a satisfactory account of the intuitive resistance to admitting profile evidence has proven difficult in the literature. The paper argues that concerns about admitting profile evidence can be vindicated by appeal to another fundamental legal value, equality before the law. In particular, equality before the law requires that innocent defendants be exposed to equal risks of mistaken conviction. As it turns out, profile evidence puts the innocent defendants who belong to the profiled groups at greater risk of mistaken conviction, relative to other innocent defendants who do not belong to profiled groups. This is the problem with admitting profile evidence at trial. This line of argument has a number of applications. For example, consider the debate among forensic scientists about the proper use of DNA profiling technology in cold-hit cases. Many have warned that in these cases a DNA match should have a weaker probative value than in standard DNA evidence cases. But others, especially Bayesian statisticians, have disagreed. The equal risk approach presented in this paper justifies, on fairness grounds, weakening the probative value of a DNA match in cold-hit cases, while also accommodating the dissenting arguments put forward by the Bayesian statisticians.

Jan Sprenger (Università di Torino): “Philosophy of Science Meets Statistical Inference: Reviving the Concept of Corroboration”

March 9, 2018 – 9:30. Dipartimento di Fisica, Edificio 8, Aula Rossa, Piazza Leonardo da Vinci 32

The most common way of testing statistical hypotheses is to conduct null hypothesis significance tests (NHST) and to use a p-value to describe evidence against the null hypothesis. In this talk, I would like to highlight a fundamental conceptual problem with this approach: the impossibility of expressing support for the null hypothesis. Since null hypotheses are often simple and precise idealizations of complex models with substantial theoretical importance, a good method for scientific hypothesis tests has to be able to express support for them. Also, the one-sided nature of p-values arguably aggravates the replication crisis in various scientific disciplines. My approach is twofold: first, I explain why classical NHST and classical Bayesian inference fail to evaluate a null hypothesis in an appropriate way; then I develop a measure of corroboration, taking inspiration from both Bayesian and frequentist procedures. I argue that degrees of corroboration achieve a more nuanced judgment on the evidence in favor of a null hypothesis and that they can be used in a variety of cases in statistical inference.
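As a minimal illustration of the conceptual point (this is not the speaker’s corroboration measure), the following stdlib sketch contrasts a classical two-sided z-test p-value with a Bayes factor for a point null under an assumed normal prior on the alternative; the data values are hypothetical:

```python
import math

def normal_pdf(x, var):
    """Density of a zero-mean normal with variance var, evaluated at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def two_sided_p(xbar, sigma, n):
    """Classical two-sided z-test p-value against H0: mu = 0."""
    z = abs(xbar) / (sigma / math.sqrt(n))
    return math.erfc(z / math.sqrt(2))

def bayes_factor_01(xbar, sigma, n, tau):
    """BF_01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2),
    using the sufficient statistic xbar ~ N(mu, sigma^2 / n)."""
    v = sigma ** 2 / n
    return normal_pdf(xbar, v) / normal_pdf(xbar, v + tau ** 2)

# A sample mean close to zero: the p-value is large, but by design it
# only "fails to reject" H0; the Bayes factor can express support for H0.
p = two_sided_p(xbar=0.02, sigma=1.0, n=100)
bf01 = bayes_factor_01(xbar=0.02, sigma=1.0, n=100, tau=1.0)
print(f"p-value = {p:.3f} (merely fails to reject H0)")
print(f"BF_01   = {bf01:.1f} (positive evidence in favour of H0)")
```

The asymmetry the talk targets is visible here: the large p-value carries no graded support for the null, whereas the Bayes factor (under the assumed prior) does.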

Federico Boem (Università di Milano): “The Impact of Big Data on Biomedical Research: From Epistemic to Ethical and Political Considerations”

April 13, 2018 – 11:00. Dipartimento di Elettronica, Informazione e Bioingegneria, Edificio 21, Aula Bio 1, Via Ponzio 34/5

Big Data science is often invoked by an increasing number of biomedical scientists as a way to justify not just the methodology of their approach but also its quality and power. As a matter of fact, Big Data has had a strong impact on the life sciences in many ways. The speed of processing, the range of feasible experiments, and the type of possible questions have dramatically changed both the theory and practice of biomedical research. However, scientists and other scholars still debate the nature of this change and its possible outcomes. All of this may have consequences not only for the way science is pursued now and in the future but also for the way science is structured and perceived from an institutional perspective. As this change directly affects the connection and mutual interplay between science and society, ethical and political issues should also be taken into account for a more adequate representation of the current image of biomedical research and its development.

Giorgio Osti (Università di Trieste): “Frames cognitivi, sistemi socio-tecnici e pratiche di gioco nella transizione energetica”

June 5, 2018 – 16:00. Dipartimento di Energia, Edificio BL25, Sala Consiglio, Via Lambruschini 4

The lecture aims to clarify and exemplify some contributions that the social sciences can make to understanding the energy transition. Three sociological contributions are analysed: (a) cognitive – energy is conceived as a way of knowing, a macro-concept that works as a frame, a narrative or a convention (justifications-claims); (b) instrumental or organisational – energy is used by human beings to achieve certain ends through a skilful organisation of parts and roles, itself embedded in ‘fields’ or ‘regimes’; (c) ritual – energy enters into recurrent combinations of actions and circumstances that we call practices. The metaphor of the game will be used as a possible synthesis of these approaches and as a heuristic tool. Each approach will be illustrated through a research study, highlighting some of its methodological peculiarities.
