'Responsible use of AI is an ethical issue that concerns us all'

Mon 2 June 2025

During Fe+Male Tech Heroes, Vanda Vitorino de Almeida, Responsible AI Clinical Lead at Philips, spoke about the responsible use of AI in healthcare. In her view, everyone shares the responsibility of ensuring that AI is used responsibly in this field. Her workshop coincided with the global publication of the Philips Future Health Index report; Philips will present the Dutch edition on June 10.

For the report, healthcare professionals and patients were asked about the use of AI in healthcare and the preconditions for its effective use. It concludes with a number of recommendations to strengthen trust in AI, including proven effectiveness, the prevention of bias in data, and appropriate legal frameworks for the use of AI.

AI is changing healthcare

The way healthcare is delivered in hospitals could change dramatically with the advent of artificial intelligence. AI is already being used in solutions such as the Philips MR SmartSpeed and AI Manager, which help speed up clinical processes and support healthcare professionals. AI can save healthcare professionals time on certain tasks, allowing them to focus their attention on the patient.

In her talk, Vitorino de Almeida explained that AI is not meant to replace people, but to enhance the capabilities of healthcare providers. “The real value of AI comes when it helps people deliver better care,” she says. AI was a central theme of the 2025 edition of the Fe+Male Tech Heroes conference, which this year was titled “Rethinking Leadership.” “Leadership in responsible AI is a basic prerequisite for the success of AI. Only if we do it right can it reach its full potential,” she says.

As far as Vitorino de Almeida is concerned, this responsibility does not rest with a single department: everyone must be aware of the importance of using AI responsibly. In her workshop, she gave participants insight into the conditions that responsible AI must meet. Using Philips' eight Responsible AI Principles, she guided participants through the fundamentals of safe, ethical, and reliable AI solutions, exploring issues such as human oversight, privacy and security, transparency, model explainability, and fair representation of target groups.

Biases due to limited data

If AI models are trained on limited or biased datasets, certain groups in society will not be represented in the resulting solution. One example is the diagnosis of heart attacks in women, which is more difficult because most data on heart attacks comes from men.

“Crucial decisions are then made based on incorrect assumptions,” says Vitorino de Almeida. “And that is not a technical issue, but an ethical one.” She got participants thinking with questions such as: Are you critical of the data you use? Has permission been given for its use? Are we measuring what we want to measure? Are the conclusions of the research correct? When asked how individuals can trust the quality of a study, article, or news item, she is clear: “Awareness is the most important starting point. And then, check the sources.” She adds that whenever AI is used, there must always be human oversight, and everyone can play a role in this.

The voice of the patient

According to her, ethical reflection on AI is not a luxury or a formality but the basis of safe and fair technology. She believes that patients are not always able to make the most sensible choice, and this requires extra care when it comes to privacy and the use of data. “We are their voice, and we must handle that responsibility with care,” she concludes.

Karin Jongsma, associate professor of bioethics at UMC Utrecht, is also conducting research into the collaboration between humans and AI. Her research focuses primarily on the ethics of AI applications in healthcare, specifically the influence of AI on human expertise and the collaboration between doctors and AI, also known as “human-AI collaboration.” According to her, AI certainly offers opportunities, but it will not automatically develop in the right direction.


By innovation partner