April 3rd at 17:00. Room: Aula Alpha, ground floor of Polimi Building 24 (via Golgi 40).
Abstract: In this talk, I aim to explore the significance of trust and justification in machine learning (ML). To begin, I'll briefly touch upon two promising epistemologies for ML: transparency and computational reliabilism (CR). My focus, however, will be on defending the latter, which requires a more in-depth discussion. I'll dedicate some time to elucidating how CR operates and which assumptions are built into it. Next, I plan to illustrate how CR works in the context of Forensic ML. Lastly, I'll address two objections against CR: i) the concern that, under CR, statistically insignificant yet serious errors can compromise the reliability of AI algorithms; and ii) the argument that CR, being a reliabilist epistemology, demands a high frequency of success, which ultimately amounts to a demand for high predictive accuracy. I'll present arguments to counter these objections, advocating for computational reliabilism as a promising epistemology for ML.