Agentic AI is like an orchestra conductor, but in a hospital

Tue 5 August 2025
AI
Interview

Agentic AI will optimize care pathways, track patient progress, and manage surgical operations, according to Prof. Stéphanie Allassonnière, Vice-President for Valorization at Université Paris Cité. We discuss what this new supervisor of AI and human agents means for medicine, and when it will arrive in hospitals and clinics.

Agentic AI is coming to healthcare. What is agentic AI, and when can we expect it to be applied in clinical workflows?

Agentic AI refers to systems that can autonomously plan, decide, and act to achieve goals with minimal human input. It is like an orchestra conductor who brings each agent in at the right time so that the whole piece is coherent. However, there is no pre-written score: the agentic AI has to compose the optimal one to reach the given goal.

In healthcare, we can expect early applications, such as clinical decision support or care coordination, to emerge within the next 3 to 5 years, particularly in settings with robust digital infrastructure. But it will, of course, also depend on regulation. If the agent acts on both the clinical and patient sides, for example in an operating room, regulatory bodies will have to decide whether to accept these new agents (the agentic AI being a "big AI agent" itself) and under what conditions.

If, on the other hand, it is dedicated to organizing infrastructure or hospital operations, adoption will probably take less time.

Healthcare has never been fast in adopting breakthrough technologies. Why will it be different with agentic AI?

Healthcare has lagged in adopting tech, but agentic AI may be different. It can address urgent problems, such as clinician burnout, staffing gaps, and rising costs, by automating complex and time-consuming tasks. These pressures are so acute that we’re likely to see real-world deployment ahead of full regulation, especially in supportive or non-diagnostic roles. This is my hope!

So, agentic systems are capable of managing entire clinical workflows. How do we keep humans "in the loop" as these systems begin to make decisions and delegate tasks?

Keeping humans in the loop means designing agentic systems that are documented, auditable, and interruptible. Clinicians must be able to oversee decisions, override actions, and understand the system's reasoning. This means we will need to train clinicians to control AI effectively: they will mostly supervise, but they still need to know what to do when the AI agent makes an error. We also need to involve clinicians in the design of these agents, to ensure they fit the target goal and are user-friendly to interact with.
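To make this concrete, here is a minimal sketch, in Python with entirely hypothetical names, of what "documented, auditable, and interruptible" can mean in practice: every proposed action is written to an audit log, and anything above a risk threshold is held until a clinician explicitly approves or overrides it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class ProposedAction:
    description: str  # human-readable summary of what the agent wants to do
    rationale: str    # the agent's stated reasoning, kept for later review
    risk: float       # estimated risk in [0, 1]; the threshold is illustrative

@dataclass
class AuditedAgent:
    """Wraps an agent policy so every action is logged and interruptible."""
    policy: Callable[[dict], ProposedAction]   # the underlying agent (hypothetical)
    approve: Callable[[ProposedAction], bool]  # clinician approval/override hook
    risk_threshold: float = 0.3
    audit_log: list = field(default_factory=list)

    def step(self, state: dict) -> Optional[ProposedAction]:
        action = self.policy(state)
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action.description,
            "rationale": action.rationale,
            "risk": action.risk,
        }
        # Risky actions are held until a human explicitly approves them.
        if action.risk >= self.risk_threshold and not self.approve(action):
            entry["status"] = "overridden by clinician"
            self.audit_log.append(entry)
            return None  # the action is blocked, but the attempt stays on record
        entry["status"] = "executed"
        self.audit_log.append(entry)
        return action
```

The `policy` and `approve` callables stand in for the agent model and the clinician interface; the point is simply that nothing executes without an inspectable record and a path for human override.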

This will also require attention to the governance and ethics surrounding these systems, both of which are crucial to ensuring that the tools empower, rather than replace, human judgment.

How can we ensure that the recommendations made by AI agents are interpretable and trusted by healthcare professionals?

As with any other medical device, clinical trials will be necessary. The agentic AI will have to present supporting evidence and align with clinical guidelines. Involving clinicians in development and testing also builds confidence and ensures the outputs make practical sense. This is the concept of the "human guarantee" that is now enshrined in European law.

But, for the same reason that you don't ask a medical doctor to know the chemistry and mechanisms of action of every compound, which remain a "black box" to them, you should not demand that these devices be completely transparent to every user.

In a system run by AI orchestration, who is accountable when something goes wrong?

Well, in Europe at least, the final medical decision always comes from a human being. This means that all these AI agents should be supervised throughout their process by a human expert.

When it comes to hospital administrative organization, human supervision should also be in place, although a wrong decision there will not have the same far-reaching consequences. Even so, I believe these tools will require constant supervision wherever they are used in the healthcare system.

Agentic AI promises to reduce clinician burden by automating care coordination. How can we prevent over-reliance on automation from creating new forms of cognitive overload or deskilling for healthcare workers?

Automation should support, not replace, clinical reasoning. Again, the clinician will make the final decision and supervise the whole process. To prevent overload, agentic AI must be designed to engage clinicians in key decisions only. And since clinicians will still need to master their work in order to critique the agent and intervene when required, there should be no loss of knowledge or skills. Their practice will certainly change, but their capabilities must remain intact.

AI agents require access to data to perform their tasks accurately and effectively. Does this threaten patients' rights to privacy and autonomy?

In my opinion, this does not change the existing rules for any data-sharing project. GDPR must be respected, so informing patients will remain compulsory, as it is now. I don't see any issue specific to agentic AI that does not already exist for other types of AI-based methods.

Agentic AI can manage multi-phase surgical processes, including real-time interventions. How do we test and validate such systems before they operate in life-critical scenarios? And ensure they won't suddenly start to hallucinate when they lack data?

As with a new drug, I think it must undergo rigorous, phased validation: first in simulation, then in real-world settings under supervision. It is like going through a clinical trial. In a real-world context, real-time monitoring is essential. We are once again faced with the necessity of human supervision.

However, there are ways to mitigate hallucinations: systems must be trained on high-quality clinical data, updated when they err, and designed to defer decisions when their confidence is low.
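As a minimal sketch of the "defer when uncertain" idea, assuming a model that exposes a calibrated confidence score (all names here are hypothetical):

```python
DEFER_THRESHOLD = 0.85  # illustrative; in practice set through calibration studies

def decide(model, patient_record: dict) -> dict:
    """Return the model's recommendation, or defer to a human when unsure."""
    # `predict_with_confidence` is an assumed interface, not a real library call.
    recommendation, confidence = model.predict_with_confidence(patient_record)
    if confidence < DEFER_THRESHOLD:
        # Low confidence: hand the case to a clinician instead of guessing.
        return {"status": "deferred_to_clinician", "confidence": confidence}
    return {"status": "recommended",
            "recommendation": recommendation,
            "confidence": confidence}
```

The threshold itself is a clinical and regulatory choice, not a purely technical one, which is exactly why human supervision stays in the picture.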

Can we also expect personalized AI agents to accompany individuals and guide them on how to live a healthy life?

Of course! This is where agentic AI may have its greatest impact. Prevention is at the core of what AI can offer, and agentic AI, which will be able to adapt its decisions to each person, will be a revolution in prevention.

How can healthcare organizations begin preparing to implement AI Agents?

This is a good question. Collecting and pre-processing their data, so that it is a faithful, high-quality representation of each organization, would be a good start; making that data interoperable is, of course, part of this. The second important step, for me, is to start training staff.
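As a toy illustration of the interoperability step (field names are invented; real projects typically target a standard such as HL7 FHIR), the idea is to map each source system's records onto one shared schema so that downstream agents see consistent data:

```python
# Two hypothetical hospital systems, normalized to one common schema.

def from_system_a(row: dict) -> dict:
    return {
        "patient_id": row["pid"],
        "birth_date": row["dob"],          # assume already ISO 8601 in system A
        "diagnosis_code": row["icd10"],
    }

def from_system_b(row: dict) -> dict:
    day, month, year = row["birthDate"].split("/")
    return {
        "patient_id": row["patientIdentifier"],
        "birth_date": f"{year}-{month}-{day}",  # convert DD/MM/YYYY to ISO 8601
        "diagnosis_code": row["diagnosis"],
    }
```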