Dr. Alberto Termine

Guest Scientist

Dr. Alberto Termine is an AI researcher within the Trustworthy Autonomous Systems Lab at the Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI), a leading non-profit research institution in the field of AI located in Lugano, Switzerland. He also holds an appointment as a postdoctoral research fellow within the Intelligent Systems Ethics Group at the College of Humanities, Ecole Polytechnique Federale de Lausanne (EPFL), under the supervision of Prof. Marcello Ienca. Prior to that, he earned a PhD in Logic within the Logic, Uncertainty, Information and Computation (LUCI) lab, Department of Philosophy, University of Milan, under the supervision of Prof. Giuseppe Primiero.

Since his PhD, Dr Termine’s research has focused on the links between the technical and the epistemological/ethical aspects of contemporary AI research, with particular reference to issues such as opacity, explainability, robustness and fairness. In his PhD thesis, entitled “Probabilistic Model Checking with Markov Models Semantics: New Developments and Applications”, he focused on the development of formal verification techniques suitable for the analysis of contemporary artificial intelligence models, with particular reference to eXplainable AI models.

After the PhD, he shifted his research focus towards the intersection of eXplainable AI and causal inference techniques. In this context, he is currently involved in a number of academic and industry projects concerning the development of causally explainable AI models and their application in the biomedical field and in multimodal transport. On the philosophical side, he is currently interested in a variety of topics, such as the role of theory in the construction and validation of deep learning models, and the epistemological status of AI models as experts and/or epistemic authorities. Since March 2023, he has been collaborating with the Chair of AI Ethics and Neurotechnology at TUM on a project on the ethics of predictive AI in psychiatry. In this context, he mainly works on defining and analysing the main ethical and epistemological issues arising from the different applications of generative AI in psychiatry.

In addition to the aforementioned research topics, Dr Termine has a strong interest in science dissemination and AI education. On this front, he is actively involved in several initiatives aimed at raising public awareness of the potential, limitations and risks of AI technologies, including a project with Swiss National Television to develop educational videos that teach AI principles to children, and an SNSF Agora project aimed at fostering AI literacy among secondary and upper secondary school teachers.


Webpage: https://sites.google.com/view/albertotermine/home-page

Google Scholar: https://scholar.google.com/citations?user=BTluw7sAAAAJ&hl=en

  • Causal and Counterfactual Methods in Explainable Artificial Intelligence
  • Probabilistic Model Checking for the Analysis of Trustworthy AI Systems
  • Metaphysics and Epistemology of Machine Learning
  • Ethical and Social Dimensions of Artificial Intelligence

Termine, A., & Primiero, G. (2024). Causality Problems in Machine Learning Systems. In F. Russo & P. Illari (Eds.), Routledge Handbook of Causality and Causal Methods. Routledge, forthcoming.

Ferrario, A., Termine, A., & Facchini, A. (2024). Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Forthcoming in 3rd Workshop on Human Centric eXplainable AI, HCXAI 2024, Honolulu, Hawaii, May 12, 2024, Proceedings.

Ferrario, A., Facchini, A., & Termine, A.* (2024). Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Minds and Machines, 34, 30.

Termine, A. (2023). Probabilistic Model Checking with Markov Models Semantics: New Developments and Applications. Doctoral Dissertation. Official Publications of the University of Milan, Milan, Italy. Available online at www.air.unimi.it

Termine, A., Antonucci, A., & Facchini, A. (2023). Machine Learning Explanations by Surrogate Causal Models (MaLESCaMo). In 1st XAI World Conference, XAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings (late-breaking works and demos). Communications in Computer and Information Science. Cham: Springer International Publishing.

Facchini, A., & Termine, A.* (2022). Towards a Taxonomy for the Opacity of AI Systems. In Conference on Philosophy and Theory of Artificial Intelligence (pp. 73-89). Cham: Springer International Publishing.

Termine, A., Primiero, G., & D’Asaro, F. A. (2021). Modelling accuracy and trustworthiness of explaining agents. In Logic, Rationality, and Interaction: 8th International Workshop, LORI 2021, Xi’an, China, October 16-18, 2021, Proceedings 8 (pp. 232-245). Springer International Publishing.

Termine, A., Antonucci, A., Primiero, G., & Facchini, A. (2021). Logic and model checking by imprecise probabilistic interpreted systems. In Multi-Agent Systems: 18th European Conference, EUMAS 2021, Virtual Event, June 28–29, 2021, Revised Selected Papers 18 (pp. 211-227). Springer International Publishing.

Termine, A., Antonucci, A., Facchini, A., & Primiero, G. (2021, August). Robust model checking with imprecise Markov reward models. In International Symposium on Imprecise Probability: Theories and Applications (pp. 299-309). PMLR.

*equal contribution.
