New Article: Clinician perspectives on explainability in AI-driven closed-loop neurotechnology
How do clinicians working with closed-loop neurotechnologies conceptualize explainability?
December 22, 2025
Artificial Intelligence (AI) is rapidly transforming our ability to understand and intervene in brain function and dysfunction, particularly when integrated with neurotechnology. AI-driven tools are increasingly being explored across a spectrum of neurotechnological applications, from Brain-Computer Interfaces (BCIs) that enable motor or speech restoration in individuals with paralysis to neuromodulation therapies such as deep brain stimulation (DBS) for treatment-resistant neurological disorders or repetitive transcranial magnetic stimulation (rTMS) for psychiatric disorders. Many of these technologies rely on closed-loop architectures, in which AI models continuously analyze neural signals and adapt stimulation or decoding parameters in real time. In these contexts, AI has shown promise: in DBS, for example, it can support target identification and individualized stimulation protocols by detecting complex disease patterns in large-scale neural datasets.
Despite these advances, the clinical integration of AI into neurotechnology is still in its infancy. A key barrier in medicine is the opacity of many state-of-the-art AI models. This lack of transparency poses significant challenges for clinical adoption, particularly in applications that directly interact with the human brain. In such high-stakes settings, explainability is not only a technical concern but also an ethical requirement for ensuring patient safety, preserving autonomy, and fostering trust among clinicians and patients alike.
The study qualitatively investigates how clinicians working with closed-loop neurotechnologies conceptualize explainability: what forms of explanation they find meaningful, useful, or necessary for clinical practice. Drawing on semi-structured interviews with twenty clinicians in neurology, neurosurgery, and psychiatry across Germany and Switzerland, the authors map clinician-centered requirements for explainability in the context of closed-loop neurotechnologies. To their knowledge, this is the first empirical study focusing on clinical explainability needs in the context of closed-loop systems for neurological and psychiatric disorders. The findings are intended to inform technology design, regulatory frameworks, and ethical guidelines, helping to ensure that AI-driven neurotechnologies are both clinically robust and socially aligned.
Published in Scientific Reports, Oct. 3, 2025
https://doi.org/10.1038/s41598-025-19510-9
Authors: Laura Schopp, Georg Starke, Marcello Ienca
Contact
The Institut für Geschichte und Ethik der Medizin looks forward to hearing from you.
81675 München