META lectures

Coming up

Katy Shaw (Northumbria University): “Literature 2.0: The Politics of Writing and Reading in the Twenty-First Century”

July 19, 2022 – 15:00. Edificio 29 (building 29), Sala Riunioni, piano terra (ground floor), Via Ponzio 34/3

The Internet and the digitalization of texts have transformed the way literature is viewed, processed and exchanged across the first two decades of the new millennium. Although e-readers have traditionally been used to passively view a text, the medium has evolved beyond this primary function. Some e-readers can now access other electronic documents such as newsfeeds, and e-reader apps can be used in conjunction with the internet to hack existing works and create entirely new ones. This empowering development has had wider implications for readers and writers of digitalized literature. As technology has expanded the creative field of writing to include supplementary ‘texts’ such as Twitter feeds, Facebook comments, Tumblr blogs and forum exchanges, writers have extended texts and their contexts in new ways, and readers have begun to converse with authors online through social networking sites, sometimes collaborating in the production of new works. This paper will consider how contemporary writings have responded to, and been shaped by, the ongoing challenges and opportunities that digital innovations pose for publishing in the twenty-first century. Focusing on case studies drawn from Twitterature, mobile novels and text hacking, the paper will ask how Literature 2.0 is impacting the ways we write and read today.


Fleur Jongepier (Radboud University): “Do Algorithms Know Better? Self-knowledge in the Digital Age”

May 23, 2022 – 17:00. Edificio 24 (building 24), aula beta, piano terra (ground floor), Via Golgi 40

In the age of data-driven algorithms, “governments and corporations will soon know you better than you know yourself”, according to Yuval Noah Harari. In an (in)famous article that played a significant role in the Cambridge Analytica scandal, an algorithm is said to require only 10 ‘likes’ to know you better than a colleague does, and 70 likes to know you better than a friend. The knowledge algorithms have about us is increasingly being used by government institutions, for instance to determine the chance of someone committing a crime, dropping out of school, or illegitimately receiving social benefits. But do algorithms really ‘know us better’, and if so, in what sense? In this talk, I distinguish between different types of epistemic authority and argue that in an important sense algorithms do not, and cannot, know us better than we know ourselves. Following Miranda Fricker, I suggest that in contexts in which algorithms are unjustifiably deferred to, this constitutes a distinct type of epistemic injustice, namely the injustice of not being treated as a self-knower.

Rawad El Skaf (Politecnico di Milano): “Probing theoretical statements with thought experiments”

May 4, 2022 – 16:00. Edificio 14 (building 14), Sala Consiglio, piano VII (floor n.7), Via Bonardi 9

Many thought experiments (TEs) are used to probe theoretical statements. One crucial strategy for doing this, or so I will argue, is the following. A TE reveals an inconsistency in part of our previously held, sometimes empirically well-established, theoretical statements. The thought experimenter or her critic then proposes a resolution in the form of a conjecture, a hypothesis that merits further investigation. To explore this characterisation of the epistemic function of such TEs, I first clarify the nature of the inconsistencies they reveal. Second, I describe a common structure for such inconsistency-revealing and inconsistency-resolving TEs. I argue that such an epistemic account of TEs can be given without settling the question of which cognitive processes are involved in their performance, be they propositional or non-propositional. The upshot is that the reliability of TEs, like that of real experiments, is to be found, in part, in their replicability by the epistemic community, not in their cognitive underpinnings. To that end, I formulate, following scientific practice, five strategies for the critic of a TE. I conclude by considering some ideas for generalising this account to all scientific TEs, at least in physics.

Diletta De Cristofaro (Politecnico di Milano): “Sleep Mode: Technologies and the Sleep Crisis”

April 27, 2022 – 13:00. Edificio 29 (building 29), sala seminari piano terra (ground floor), Piazza Leonardo da Vinci 26

We are in a sleep crisis: sleep disorders are on the rise and people are sleeping less and less, with disastrous effects on health. Or at least, this is what some studies claim, incessantly amplified by the media. In fact, sleep scientists are divided on whether or not our society is suffering from this crisis of poor sleep. Yet the discourse of contemporary society as profoundly sleep-deprived dominates cultural production. Why have insomnia, burnout, and exhaustion become the main lenses through which we think about sleep in our present? Considering a range of examples from twenty-first-century literature, visual arts, popular and digital culture, this lecture explores the ambiguous role of technologies in the discourse of the sleep crisis: how they are blamed for this supposed public health emergency and, at once, positioned as a possible solution to the crisis. I argue that technologies, and their representations in contemporary culture, illuminate a nexus of concerns, affects, and imperatives that underlie the discourse of the sleep crisis.


Gerardo Ienna (Università Ca’ Foscari di Venezia): “La scienza non è neutrale. Il contributo dei fisici italiani all’idea della responsabilità sociale dello scienziato”

October 21, 2020 – 11:00. Dipartimento di Fisica, Edificio 8, Aula 8.0.1, Piazza Leonardo da Vinci 32

The lecture will present the contribution of Italian physicists to the emergence of the European debate on the non-neutrality of science from the 1960s onward. By observing the strengthening of the political positions of the scientists, historians and philosophers involved in this debate, the migration of radical ideas and scientists between the United States and Europe, and the development of the so-called “Italian Science Wars”, the talk will show how a politically engaged approach to science influenced the practices of physics, encouraging new ideas and lines of research.

Lorenzo Rossi (MCMP, LMU Munich): “Paradossi: Verità, Vaghezza, e Oltre”

February 24, 2021 – 15:00. Dipartimento di Matematica, Edificio 14, Via Bonardi 7. Online via Webex

A paradox can be characterised as an argument that, starting from apparently unassailable premises and employing only apparently valid steps, leads to an unacceptable conclusion. Paradoxes have proved an invaluable source of progress for scientific knowledge, especially in logic and the foundations of mathematics. In this seminar, I will present a unified approach to two fundamental kinds of paradox: the semantic paradoxes and the paradoxes of vagueness (“soritical paradoxes”). I will develop a formal model of the notion of paradox and apply it to show that semantic and soritical paradoxes exhibit a similar structure. I will conclude with some prospects for further applications of the model.

Marianna Antonutti Marfori (MCMP, LMU Munich): “The Epistemic Route to Reflection”

April 8, 2021 – 15:00. Dipartimento di Matematica, Edificio 14, Via Bonardi 7. Online via Webex

Gödel’s incompleteness theorems show that any formal theory that contains a certain minimal amount of arithmetic is incomplete: there are statements in the language of the theory that the theory can neither prove nor disprove. These include statements, called reflection principles, that formally express the soundness of the theory, i.e. that everything provable from its axioms is true. This seems to contrast with our pre-theoretical understanding of arithmetic in everyday life and in science, and with our readiness to accept that every statement that can be proved from our axioms is true. Many authors have therefore sought, by various means, to justify adding such reflection principles to our formal theories. This paper presents a new way of justifying the addition of reflection principles on the basis of epistemic considerations, by employing certain uncontroversial properties of the notion of informal or absolute provability for arithmetic. Starting from the observation that the fundamental properties of this notion can be formally characterised by the axioms of the modal logic S4 (along the lines already proposed by Gödel), I will show that recognising that the axioms and rules of inference of Peano arithmetic (PA) correctly formalise informal arithmetical reasoning is sufficient to formally imply the local and uniform reflection schemes.
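For concreteness, the local and uniform reflection schemes mentioned in the abstract can be stated explicitly. The following is a standard formulation (an editorial gloss, not part of the announcement), where Prov_PA is the arithmetized provability predicate for PA:

```latex
% Local reflection Rfn(PA): one axiom for each sentence \varphi
\mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi \urcorner) \rightarrow \varphi

% Uniform reflection RFN(PA): one axiom for each formula \varphi(x),
% with the free variable quantified over all numerals \dot{x}
\forall x \, \bigl( \mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi(\dot{x}) \urcorner) \rightarrow \varphi(x) \bigr)
```

By Löb’s theorem, PA proves an instance of local reflection only for sentences it already proves, so adding these schemes genuinely strengthens the theory; this is why their justification is a substantive philosophical question.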

Catherine Herfeld (University of Zurich): “The Many Faces of Rational Choice Theory”

May 31, 2021 – 12:00. Dipartimento di Ingegneria Gestionale, Edificio BL26, Via Lambruschini 4. Online

Throughout the second half of the twentieth century, theories of rational choice have been extensively employed in economics and the social sciences more generally. They have been used in the hope of solving a variety of distinct conceptual, methodological and epistemic problems and are thus to be found in nearly any context in which economists aim to generate knowledge about the economy. At the same time, theories of rational choice have been attacked from various sides. Having been empirically falsified countless times, they have often been held responsible for the explanatory and predictive shortcomings of economic models and theories. In this talk, I aim to provide a fresh perspective on persistent debates about the epistemic potential and limitations of rational choice theory. First, I suggest that rational choice theory has many conceptually and methodologically distinct faces that remain prevalent in contemporary economics, but have emerged from a history of earlier attempts to conceptualize the behavior of human agents. By looking more closely at a set of historical and contemporary cases, I argue that the way in which rational choice theories have been used and justified in economics has depended crucially upon the problems that economists addressed. They should accordingly be evaluated against the backdrop of precisely those problems they were meant to solve. Second, I argue that even if economists could draw upon an empirically more adequate theory of human behavior, it remains to be seen whether they would thereby have an appropriate solution to the empirical difficulties that economic models and theories actually confront.



Luca Mari (Università Carlo Cattaneo – LIUC): “Models of measurement, models in measurement”

November 11, 2019 – 15:00. Dipartimento di Elettronica, Informazione e Bioingegneria, Edificio 24, Aula Alfa, via Golgi 40

Measurement has an important and acknowledged role in connecting the empirical realm and the information realm. Reflecting the diversity of philosophical standpoints across contexts and times, the nature and justification of this connection have been variously interpreted: from the discovery of the true value of the measurand to the assignment of symbols representative of it. Reflection on the unavoidable presence of models in measurement offers some perspectives on the epistemology of measurement, and therefore on models of measurement.

Alessandro Blasimme (ETH Zurich): “L’intelligenza artificiale al servizio della salute: questioni etiche ed epistemologiche”

November 19, 2019 – 16:00. Dipartimento di Chimica, Materiali e Ingegneria Chimica, Edificio 6, Aula Lombardi, Piazza Leonardo da Vinci 32

The development of machine-learning algorithms appears capable of opening new paths both in biomedical research and in diagnosis and care. These advances have given rise to an intense debate concerning the technical aspects of artificial intelligence. An equally heated debate addresses the ethical and epistemological aspects of the new possibilities it offers. In reviewing the most recent developments in this area, I will focus on two fundamental issues: the problem of the representativeness of the data used in machine learning, and the question of the opacity of algorithms. Some concluding reflections will attempt to understand how digitalization and automation prefigure new articulations of the relationship between health, illness and care.


Giovanni Sartor (Università di Bologna): “Veicoli autonomi e diritto: la manopola etica”

December 4, 2018 – 15:30. Dipartimento di Meccanica, Edificio B.22, Sala Corsi, piano terra (ground floor), Via La Masa 1

The adoption of autonomous vehicles (AVs) raises important ethical dilemmas and numerous legal questions. How should a self-driving car behave when an accident is unavoidable? Should it minimise the loss of human life, putting the safety of its own passengers at risk, or act to protect them even when this entails sacrificing third parties? According to some authors, AVs should be “programmed to kill”, that is, pre-programmed, on the basis of a moral algorithm, to choose which lives to save and which to sacrifice. This contribution explores a different direction: the possibility of entrusting the driver and/or the passenger with the task, and the burden, of deciding which ethical principle AVs should adopt in the event of unavoidable accidents. In particular, it assumes that AVs could be equipped with an “ethical knob”, a system that allows passengers to ethically customise the vehicle, i.e. to select the ethical principles that will guide its behaviour.

Annibale Biggeri (Università di Firenze) e Mariachiara Tallacchini (Università Cattolica di Piacenza): “Improving Environmental Health Through Citizen Science and Collaborative Research: New Challenges for Knowledge Making”

January 16, 2019 – 14:45. Dipartimento di Ingegneria Civile e Ambientale, Edificio 4A, Aula Fassò, Piazza Leonardo da Vinci 32

Two new social and techno-scientific phenomena have merged over the last twenty years: the use of ICT devices to collect and share personal and scientific data, and the emergence of new forms of knowledge production led by citizens, or generated by scientists and citizens collaborating towards shared social goals (peer production of knowledge).
This combination of technical and cognitive skills and of new social relations has often been applied to community initiatives. Such initiatives promote an attitude of attention to, and activism for, the protection of the common good, and aim to increase control over human and environmental health. Awareness of the need to guarantee the transparency, reliability and accessibility of data, the need to resort to crowdfunding mechanisms, and the commitment to improving social life increasingly characterise these activities as potential models of knowledge production for public policy. However, the status of these initiatives still requires a better definition of the validity, and the forms of validation, of the underlying citizen science; of its relationship with traditional science and scientists; of the ethical aspects of the research; and an adequate recognition of its role in decision-making processes. This lecture illustrates the main theoretical and practical challenges in which the knowledge of citizens and scientists merges, and the basic needs and values that inspire collaborative research in epidemiology and environmental health. It will also present two initiatives launched in Florence by citizens and researchers for do-it-yourself PM10 and PM2.5 monitoring (“Mamme no inceneritore” and “Che aria tira”).

Jim Weatherall (University of California at Irvine): “What Makes Econophysics Distinctive?”

March 29, 2019 – 14:00. Dipartimento di Ingegneria Gestionale, Edificio 20, Aula MEL 1, Via Lambruschini 17/B

There is a long history of ideas (and people) moving from fields such as physics and mathematics into finance. In the past, the ideas and methods of physicists were rapidly integrated into economic thought. Beginning around 1990, however, a new movement of physicists began attempting to apply methods from statistical physics to economic problems. Strangely, this time the ideas from physics have not been widely adopted or integrated into mainstream economics. Instead, a new, largely autonomous field of “econophysics” has appeared, in which people trained mostly in physics or by physicists work on problems of economics. In this talk, I will explore some of the reasons for the appearance of this new field. Ultimately I will argue that what makes econophysics distinctive, both from economics and from past attempts to import ideas from physics, is that econophysicists seem to recognize, and attempt to meet, an explanatory demand that economists reject, concerning the relationship between models of individual actors and market-level and economy-level models.


Marcello Di Bello (City University of New York): “Profile Evidence, Fairness and the Risks of Mistaken Verdicts”

January 12, 2018 – 9:30. Dipartimento di Matematica, Edificio 14, Aula Saleri, Via Bonardi 7

Profile evidence expresses a positive statistical correlation between membership in a group and crime. For example, statistics indicate that people who have committed burglary before are more likely to commit burglary than people in the general population. Such profile evidence, if introduced at trial against a defendant charged with burglary and known to have committed burglary before, would be highly probative of guilt. But even when it is probative of guilt, many resist admitting profile evidence in a criminal trial for the purpose of helping to establish the defendant’s guilt. Coming up with a satisfactory account of the intuitive resistance to admitting profile evidence has proven difficult in the literature. The paper argues that concerns about admitting profile evidence can be vindicated by appeal to another fundamental legal value: equality before the law. In particular, equality before the law requires that innocent defendants be exposed to equal risks of mistaken conviction. As it turns out, profile evidence puts innocent defendants who belong to the profiled groups at greater risk of mistaken conviction, relative to innocent defendants who do not belong to profiled groups. This is the problem with admitting profile evidence at trial. This line of argument has a number of applications. For example, consider the debate among forensic scientists about the proper use of DNA profiling technology in cold-hit cases. Many have warned that in these cases a DNA match should have a weaker probative value than in standard DNA evidence cases, but others, especially Bayesian statisticians, have disagreed. The equal risk approach presented in this paper justifies, on fairness grounds, weakening the probative value of a DNA match in cold-hit cases, while also accommodating the dissenting arguments put forward by the Bayesian statisticians.

Jan Sprenger (Università di Torino): “Philosophy of Science Meets Statistical Inference: Reviving the Concept of Corroboration”

March 9, 2018 – 9:30. Dipartimento di Fisica, Edificio 8, Aula Rossa, Piazza Leonardo da Vinci 32

The most common way of testing statistical hypotheses is to conduct null hypothesis significance tests (NHST) and to use a p-value to describe evidence against the null hypothesis. In this talk, I would like to highlight a fundamental conceptual problem with this approach: the impossibility of expressing support for the null hypothesis. Since null hypotheses are often simple and precise idealizations of complex models with substantial theoretical importance, a good method for scientific hypothesis testing has to be able to express support for them. Moreover, the one-sided nature of p-values arguably aggravates the replication crisis in various scientific disciplines. My approach is twofold: first, I explain why classical NHST and classical Bayesian inference fail to evaluate a null hypothesis in an appropriate way; then I develop a measure of corroboration, taking inspiration from both Bayesian and frequentist procedures. I argue that degrees of corroboration yield a more nuanced judgment of the evidence in favor of a null hypothesis and that they can be used in a variety of cases in statistical inference.

Federico Boem (Università di Milano): “The Impact of Big Data on Biomedical Research: From Epistemic to Ethical and Political Considerations”

April 13, 2018 – 11:00. Dipartimento di Elettronica, Informazione e Bioingegneria, Edificio 21, Aula Bio 1, Via Ponzio 34/5

Big Data science is often invoked by an increasing number of biomedical scientists to justify not just the methodology of their approach but also its quality and power. As a matter of fact, Big Data has had a strong impact on the life sciences in many ways: the speed of processing, the range of feasible experiments and the type of questions that can be asked have dramatically changed both the theory and practice of biomedical research. However, scientists and other scholars still debate the nature of this change and its possible outcomes. All of this may have consequences not only for the way science is pursued now and in the future, but also for the way science is structured and perceived from an institutional perspective. Since this change directly affects the connection and mutual interplay between science and society, ethical and political issues should also be taken into account for a more adequate representation of the current image of biomedical research and its development.

Giorgio Osti (Università di Trieste): “Frames cognitivi, sistemi socio-tecnici e pratiche di gioco nella transizione energetica”

June 5, 2018 – 16:00. Dipartimento di Energia, Edificio BL25, Sala Consiglio, Via Lambruschini 4

The lecture aims to clarify and exemplify some of the contributions that the social sciences can offer to the understanding of the energy transition. Three sociological contributions are analysed: (a) cognitive – energy is conceived as a way of knowing, a macro-concept that works as a frame, a narrative or a convention (justifications and claims); (b) instrumental or organisational – energy is used by human beings to achieve given ends through a skilful organisation of parts and roles, itself embedded in ‘fields’ or ‘regimes’; (c) ritual – energy enters into recurring combinations of actions and circumstances that we call practices. The metaphor of the game will be used as a possible synthesis of these approaches and as a heuristic tool. Each approach will be illustrated through a research study, seeking to highlight some methodological peculiarities.