AI, neurotechnology and society — a question of trust
New opinion paper in Nature Reviews Neurology
27 June 2025
In this article, Prof. Dr. Marcello Ienca and Dr. Georg Starke argue that the convergence of AI and neurotechnology (NeuroAI) holds immense promise for transforming neurological care, from seizure prediction to closed-loop brain–computer interfaces (BCIs). But with this potential comes a profound ethical imperative: earning and maintaining public trust.
They state that trust in NeuroAI is not merely a technical or clinical issue — it is a structural and societal challenge. In their view, trust must be built through:
– Human-centred design and inclusive development
– Transparent communication and accountable governance
– Ethical reflection that bridges neuroethics and AI ethics
Trust in NeuroAI is not just a matter of system performance but of the larger web of society we want to build. Although neurology alone cannot resolve all the challenges of NeuroAI, the field can still make a pivotal contribution by ensuring that the integration of AI and neurotechnology proceeds responsibly, guided by patients' interests, and by preventing failures that could inflict lasting damage on public trust. In doing so, neurology can help to secure a future in which public trust in NeuroAI, and in healthcare more broadly, is safeguarded. As neurotechnology increasingly acts upon the brain rather than merely reading from it, trust becomes a fragile but foundational condition for responsible innovation.
They also caution that trust should not be treated as a substitute for regulation. Instead, it must be grounded in democratic values and sustained through openness, scrutiny, and shared responsibility.
Full article here: https://www.nature.com/articles/s41582-025-00984-7
Starke, G., Ienca, M. AI, neurotechnology and society — a question of trust. Nat Rev Neurol (2025). https://doi.org/10.1038/s41582-025-01114-6
Contact
The Institute for the History and Ethics of Medicine welcomes your enquiries.
81675 München
