Is AI an angel or a devil? AI Health Literacy holds the key

Mon 2 June 2025
AI
News

AI is transforming healthcare, prevention, and well-being, shifting them from tool-based approaches to generative, agent-based systems. It brings powerful opportunities but also introduces serious challenges related to trust, transparency, and human autonomy. Unlike traditional digital tools that depend on human input, AI functions as an independent agent, capable of generating and spreading ideas through complex, often opaque processes such as machine learning and algorithmic decision-making.

Without AI health literacy, people risk becoming passively dependent on systems they don’t understand, undermining informed consent and trust. In contrast, AI health literacy empowers individuals to engage critically, ethically, and confidently with AI, ensuring AI acts more like a guardian angel than a devil in disguise.

AI’s impact on the doctor-patient relationship

At the AI Healthcare and Human Rights Conference in Finland in May 2025, the Council of Europe presented a report on how AI affects doctor–patient relationships, emphasizing human rights. The report identified key challenges: unequal access to quality healthcare, lack of transparency, social bias in AI systems, reduced attention to patients’ lived experiences, over-reliance on automation, loss of professional skills, blurred accountability, and threats to privacy.

Speakers also explored how AI could impact patient autonomy. They highlighted risks to the right to privacy, the right to be informed (or not), and the right to be forgotten, which are particularly problematic since AI systems rarely forget. Health professionals must actively guard against digital paternalism, where algorithms and digital tools subtly influence or override patient choices without full awareness or consent. This concept reflects traditional paternalism but plays out through automated systems, nudges, or opaque algorithms.

While some participants worried AI could reduce empathy and human interaction in care, others, especially younger users, saw tools like ChatGPT as comforting: always available, always listening. These are qualities that healthcare professionals and systems don’t always provide.

AI as guardian angel vs. devil in disguise

Is AI a guardian angel or a devil in disguise? The answer depends on how we design and use it. AI can act as a guardian angel, detecting diseases early, personalizing treatments, automating routine tasks, and expanding access to care through chatbots and remote tools. It can even predict outbreaks and guide public health policy, as it did during COVID-19.

But without oversight, AI can become a devil in disguise just as easily. It can entrench biases, erode human judgment, damage trust, and spread misinformation. Generative AI tools might offer inaccurate or commercially biased health advice, without users realizing it.

What is AI health literacy, and why does it matter?

To navigate AI’s dual nature, both patients and professionals need a new core skill: AI health literacy. This builds on the broader concept of health literacy, defined as "the knowledge, motivation, and competencies to access, understand, appraise, and apply health information to make decisions in daily life regarding healthcare, disease prevention, and health promotion."

AI health literacy extends this to include understanding how AI functions, where it's applied in health, and what ethical, legal, and social issues it raises. It equips individuals to ask critical questions, make informed decisions, protect their data, and advocate for ethical, inclusive technologies. For organizations, it means building systems that support transparency, accessibility, and shared decision-making.

AI Health Literacy vs. Digital Health Literacy

AI health literacy is also closely related to digital health literacy, which is recognized as a super social determinant of health and can be defined as “the ability to seek, find, understand, and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem.”

However, the two concepts focus on different aspects of health technology engagement. AI health literacy is about understanding AI tools (e.g., chatbots, predictive models), such as evaluating a symptom checker’s AI-generated diagnosis. Digital health literacy is about using digital tools (e.g., websites, apps, patient portals), such as finding reliable health information on a government website. AI health literacy emphasizes critical thinking about algorithmic systems and their limitations, whereas digital health literacy emphasizes critical thinking about online health content and tech usability. Digital health literacy provides a foundation, but AI health literacy requires deeper engagement with how automated decisions shape care and health behaviour.

Why AI health literacy is key to harnessing the benefits and mitigating the harms of AI

AI health literacy helps individuals recognize when AI shapes health decisions, ask the right questions, and retain control over their care. It supports shared decision-making, ethical use, and informed consent. Whether AI becomes a guardian angel or a devil in disguise hinges on AI health literacy and on the ability of individuals, communities, and organizations to implement and engage in ethical design, inclusive development, and transparent validation of data and applications.

Moreover, strong governance is needed to protect human rights and promote the human-centred integration of AI opportunities. Whether AI fulfils its promise or causes harm depends on how well individuals, communities, and health systems understand and govern it. Therefore, to strike the right balance between innovation and accountability in the future, AI health literacy must be made a top priority in AI-related public health policy, research, and practice.