FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare
New Publication in The British Medical Journal
March 13, 2025
The FUTURE-AI framework for trustworthy AI, an effort by researchers from around the world, including IHEM researchers Marie-Christine Fritzsche and Alena Buyx, tackles one of healthcare AI's most critical challenges: building trust.
The "paper describes the FUTURE-AI framework, which provides guidance for the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI Consortium was founded in 2021 and comprises 117 interdisciplinary experts from 50 countries representing all continents, including AI scientists, clinical researchers, biomedical ethicists, and social scientists. The FUTURE-AI guideline was established through consensus based on six guiding principles-fairness, universality, traceability, usability, robustness, and explainability. To operationalize trustworthy AI in healthcare, a set of 30 best practices were defined, addressing technical, clinical, socioethical, and legal dimensions. The recommendations cover the entire lifecycle of healthcare AI, from design, development, and validation to regulation, deployment, and monitoring."
FUTURE-AI is an international guideline for building trustworthy and deployable AI tools in healthcare, co-developed by 117 experts from 50 countries.
The guideline comprises:
- 6 core principles for trustworthy AI (Fairness, Universality, Traceability, Usability, Robustness, Explainability), to ensure alignment with key ethical standards.
- 30 best practices structured across the four main phases of healthcare AI (design, development, validation, deployment), to provide guidance on practical implementation.
- Step-by-step recommendations on how to operationalize each best practice, with examples of possible methods and approaches.
- Expertise spanning data science, clinical practice, ethics, social science, and regulation, addressing the technical, clinical, ethical, and legal challenges of healthcare AI.
Summary points
- "Despite major advances in medical artificial intelligence (AI) research, clinical adoption of emerging AI solutions remains challenging owing to limited trust and ethical concerns
- The FUTURE-AI Consortium unites 117 experts from 50 countries to define international guidelines for trustworthy healthcare AI
- The FUTURE-AI framework is structured around six guiding principles: fairness, universality, traceability, usability, robustness, and explainability
- The guideline addresses the entire AI lifecycle, from design and development to validation and deployment, ensuring alignment with real world needs and ethical requirements
- The framework includes 30 detailed recommendations for building trustworthy and deployable AI systems, emphasizing multistakeholder collaboration
- Continuous risk assessment and mitigation are fundamental, addressing biases, data variations, and evolving challenges during the AI lifecycle
- FUTURE-AI is designed as a dynamic framework, which will evolve with technological advancements and stakeholder feedback"1
See the full paper here: https://www.bmj.com/content/388/bmj-2024-081554.
1 Lekadir K, Frangi AF, Porras AR, Glocker B, Cintas C, Langlotz CP, et al. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025;388:e081554. doi:10.1136/bmj-2024-081554
Contact
The Institute for the History and Ethics of Medicine welcomes your contact.
Email: office.ethics@mh.tum.de
Mon-Thu: 9 a.m.-3 p.m., Fri: 9 a.m.-1 p.m. (core hours)
81675 Munich
