No one understands the cultural, strategic, and practical realities behind the success or failure of digital health and AI adoption better than Dr. Mehdi Khaled, an Austria-based physician and global digital health expert with over 25 years of experience across 40 health systems. In this interview for ICT&Health Global, he shares an insider’s view on what it truly takes to make digital transformation work in healthcare, from policy and infrastructure to leadership and culture.
You’ve worked across many regions with different cultures, including healthcare cultures. How do cultural differences shape the success or failure of digital health implementations in various parts of the world?
It’s interesting how cultural contexts are rarely taken into consideration when comparing health systems and their modernization efforts. And while digital health failures remain poorly documented for various reasons, the AI-amplified digitization ‘noise’ is coming predominantly from two parts of the world: the US and the Middle East. The former is an individualistic system that still produces some of the world’s most ‘non-enviable’ health indicators, and the latter is a magnificent investment and marketing machine that has yet to show any tangible impact from its digitization efforts. Meanwhile, healthcare systems in the Nordics (EU), East Asia, and Australia have shown great progress, without much chest-beating.
Asian societies are deeply rooted in Confucian values, which are highly community-oriented and communal in nature. “If my community succeeds, I succeed” is a common modus operandi in Asian cultures. This not only boosts the adoption and impact of digital health interventions but also emphasizes the importance of prevention and the digitization efforts that support it. That’s a sizeable multiplier. Singapore’s National Steps Challenge, launched as part of its “War on Diabetes” program, has been a major success. Conversely, Western cultures are highly individualistic, and, except for the Nordics, the adoption of digital health programs at scale in Europe and the US has consistently fallen short of expectations.
Moreover, the social barriers to implementing government health policies and digital health initiatives are much lower in the East than in the West. When appropriately designed and implemented in a timely manner, these policies can dramatically accelerate the time-to-impact of digital health interventions. When designed poorly, both public trust and adoption suffer. Australia’s opt-in/opt-out policy debacle around Personal Health Records dragged the country into a national digital health schizophrenia crisis between 2012 and 2016. It remains one of the best examples of how not to design digital health policy. But fortunately, Australia learned quickly from those mistakes and made huge leaps, propelling itself 25 years ahead of most of its peers in the past decade.
Finally, my personal observation is that, beyond the cultural dimension, digitizing an already advanced healthcare and social system delivers a significantly higher impact at a lower investment level and in less time. Whereas throwing a hodgepodge of health technologies into an underperforming and broken system almost always leads to the creation of an expensive, digital extension of mediocrity. It’s about the maturity and capacity of a healthcare system’s actors to handle progress and technology, as well as the culture surrounding them.
Artificial intelligence is a rapidly growing topic in healthcare. From a global perspective, how does the perception and acceptance of AI in healthcare differ between regions such as Europe, North America, the Middle East, and Asia?
The comedian Russell Peters once said, “I don’t make stereotypes, I only see them!” (laughs) – so I’m going to base my answer on observed behaviors.
AI in healthcare has been an academic playground for decades. The tech breakthrough of 2022 accelerated and expanded its applications in both clinical and non-clinical settings. Today, the AI agenda – both in healthcare and beyond – is largely driven by tech multinationals eager to sell products and make a quick profit. Remember, the success of a tech company is measured by its revenue, margins, and growth. Sales reps are not incentivized on ‘ethics.’ So it’s easy to understand why many of these companies don’t care about the ethical boundaries of their tech.
Those who believe they do are hallucinating, to use their own terminology. The techno-political framing of the American establishment increasingly favors the financial interests of tech giants over public well-being. China follows a different equation, but the result is basically the same: tech always wins. Europe is stuck between a rock and a hard place, still struggling to balance regulating AI in healthcare with finding a competitive edge. The Middle East’s AI investment efforts are clearly led by Saudi Arabia and the UAE, driven in no small part by American lobbying. Today, some see Saudi Arabia as a rival to Silicon Valley in terms of ambition, but not quite in terms of vision or capability.
There’s still too much AI noise and dust in healthcare. The jury is still out when it comes to practical, impactful applications at scale; and even more so when it comes to a grander vision that modernizes healthcare’s outdated information structures, which AI now demands. We’re dealing with a brand-new technology that caught us all by surprise, just as we were still struggling with half-broken data structures and decades-old healthcare architectures.
Currently, the understanding, perception, expectations, and impact of AI in healthcare resemble scattered atolls in the South Pacific. Only time will tell whether they come together or drift further apart.
When hospitals bring you in as a consultant and ask, “Where should we begin with AI?”, what’s your usual answer, and how do you assess whether they’re ready?
Simon Sinek’s Start with Why is always a good conversation starter. There are several AI readiness scores and frameworks available for healthcare organizations; the WHO, HIMSS, Stanford, and Harvard, to name a few, have developed good tools. They offer applicable assessments and references. In essence, some hospitals will always be more ready than others.
In 2024, China officially opened its first fully AI-powered hospital, “Agent Hospital,” developed by Tsinghua University in partnership with Chinese tech companies. It’s still under complete human oversight, but the goal is full autonomy. That’s where they are. And we know where the rest of the world stands.
The first step in assessing AI readiness is to identify, quantify, and mitigate operational, financial, reputational, and clinical risks. Benefits and impact should come after that equation has returned a clear “go.”
Digital health interventions, especially those with clinical implications, should be treated with the same care and consideration as medical prescriptions. They need clear indications, and there must be sufficient evidence to justify implementation. AI is no different. We’re dealing with human lives and an oath here. This is an ethical obligation that transcends regulation. This isn’t banking in the Wild West.
My initial AI discussions with clients are always tough because I have to bring them back down to earth before helping them navigate their reality. Clients in the Middle East are the hardest to land, because the question is not “Do we need AI?” but usually, “When can we buy it? We have a budget.” It’s a blindfolded FOMO race. Not always fun.
The first three questions we wrestle with are:
- What problem are you trying to solve?
- Are there more mature, proven tech, or non-tech options?
- What are your accelerators? (Usually policy-driven)
Eighty percent of the time, the answer is not AI. Or the problem isn’t clearly defined enough to justify an AI discussion. Sometimes it feels like someone is just trying to build an excuse to say, “We have AI too.”
Then, if AI is an option, four more questions follow:
- Is your data quality and governance good enough to support this tech? Show me the evidence.
- What’s your human capacity (skills + experience) to deal with this AI today? Are you truly ready? Or do you just think you are? Again, show me facts.
- What are the success metrics, and how will you measure impact?
- Who owns the project, and what’s the governance structure?
Not many healthcare players, from governments to providers, can answer these questions today. My clients know I’m not putting them on the spot – I’m trying to save them from public embarrassment… or getting fired.
A safe starting point, if anywhere, would be administrative processes. However, even then, if the goal is process automation, you need to be careful not to automate obsolete processes, just as we digitized bad paper EMR forms back in the early 2000s.
Finally, I’d like to point out that health technology is ‘just’ another costly intervention. In public healthcare, comparative effectiveness assessments remain the gold standard for demonstrating the objective value of these interventions, their true impact, and ROI. The UK, Australia, and Singapore are consistently showing the way and being pragmatic with this process. I believe EU countries can accelerate the impact of digital health by harmonizing and properly implementing their currently fragmented health technology assessment (HTA) frameworks. This should be the overarching imperative that frames EHDS and all other tech interventions in EU health and care. It’s a great challenge to rise to. It’s the right battle to fight. It’s what Simon Sinek calls ‘The long game’.
Is now the right moment for hospitals to heavily invest in AI? Or should they focus on foundational digital infrastructure first?
I feel like, while we were still having deep debates about data governance, standards, interoperability, and EHDS frameworks, AI snuck in through the back door and hijacked the community’s attention – and all the oxygen in the room. Nobody was ready for this! Not at this speed, nor with this level of vendor enthusiasm.
So, jumping on the AI bandwagon “because everyone else is doing it” doesn’t sound like a responsible adult decision to me. Yet that’s exactly what many are doing. That’s the paradox of AI: many have surrendered their managerial ability to reason to a new instance called LLMs. Organizations are behaving as if there will be no AI available tomorrow.
Governance, infrastructure, skills, and data security and quality must always come first. Every other investment, including AI, should follow. There isn’t a single healthcare use case I know that justifies an urgent AI buy. Vendors need to better understand healthcare dynamics and adjust their sales cycles. At the same time, hospital CxOs need to educate themselves on AI and spread some sanity.
What are the most common mistakes or blind spots you see healthcare leaders fall into when adopting new technologies?
One of the most common, and arguably most destructive, mistakes I see is underestimating technical and management debt. It’s also the least talked about. Technology thrives on well-designed, mature processes. However, leaders often rush in, layering new tools over shaky foundations, hiring individuals without the necessary skills, or relying on poorly conceived governance structures. It always backfires, and the resulting costs are not just financial; they’re structural and long-lasting.
Then there's the blind trust in vendor claims. Some leaders are far too quick to believe glossy marketing without demanding peer-reviewed studies or evidence of real-world impact. It’s as if the mere presence of a tech solution is enough to justify investment. That’s not due diligence; it’s wishful thinking.
A related trap is the belief that new technologies, especially AI, can somehow shortcut deep-rooted, systemic issues. But these aren’t magic bullets. New tech doesn’t solve old problems by itself. If anything, it often exposes just how unready the organization really is.
Another frequent misstep is the “me too” mentality. I’ve seen hospitals and ministries buy tech simply because their neighbors did. There’s this illusion that digital transformation is contagious; that if it worked next door, it must work here. So they copy-paste applications or entire system architectures without taking the time to understand their own context, needs, or capacity. That’s not strategy, that’s mimicry.
And underpinning all of this is a deeper issue: putting technology before health and care. That inversion of priorities is surprisingly common. Tech is meant to serve health outcomes, not the other way around. When that gets forgotten, the entire digital health strategy becomes performative – more about optics than impact. All these errors stem from one thing: poor governance and/or immature leadership.
But the bigger issue is our community’s unwillingness to talk about mistakes. Everyone’s too busy bragging about their (relative) success. I wish we had more “black-box” community events to share and learn from each other’s failures. It’s humbling, but it’s the right thing to do. We love referencing the airline industry as a benchmark, but very few are ready to walk the talk. Talk is cheap.
How should healthcare facilities balance today’s urgent needs, like staffing shortages or financial pressures, with long-term investments in innovation?
There’s always a solution to every problem, including urgent ones. A key part of that is bringing in people from outside your organization, or even your country, to provide new perspectives.
One big issue in European healthcare is the obsessive focus on access and care delivery for all, all the time. Our paternalistic, know-it-all systems have long neglected preventive policies and just kept building more hospitals and beds. It’s like thinking the best way to solve traffic jams is to build more roads. We know it’s not. Buying more tech on top of a broken system only adds to the management debt, without fixing anything.
The WHO’s 10-year performance report on NCD-related SDG targets was a sobering reality check – a slap in the face. There’s still a small window to redesign and future-proof European health systems by focusing on enabling population self-care and implementing bold policies to curb lifestyle diseases. New Zealand’s law to ban tobacco by 2030 is a great example. Austria, on the other hand, was the last EU country to ban public smoking, as recently as 2019!
The problem is, policies aren’t sexy. Technology is. Yet in healthcare, policies are far more cost-effective than tech. The correct order for health system evolution is policy first, followed by technology. That’s how most advanced systems have evolved to where they are today.
What are the superpowers of healthcare institutions truly leading in digital transformation?
A competent, disciplined, and pragmatic leadership team: one that stays focused on the big health and care picture, understands the value of data and technology, and knows when to use them and how to manage them.
At the hospital level, there are many such organizations across Europe and beyond. At the government level, you can count them on one and a half hands: Singapore, Australia, Hong Kong, the Nordic countries, Slovenia, and Catalonia in Spain are my heroes. Sorry if I missed others. The rest are somewhere between a noisy place and a dark zone.
What are the most important lessons you’ve learned from consulting and innovation in health tech about what it takes to make digital health projects succeed?
I don’t like the word innovation. I believe in impact. “Innovation” is overused and undermines the real human effort behind improving things iteratively. The Japanese term for this concept is Kaizen, and it remains at the core of most healthcare advancements. Yes, AI holds the promise of jumpstarting some of that, such as in drug discovery, but it’s still just a tool that requires careful management.
To me, digital health project success means better outcomes at both the individual and population levels, fewer complications, disease prevention, improved operational efficiency, and reduced waste. The systems that get this right are those with a shared purpose, ones that design processes and policies to use data and technology meaningfully in pursuit of their goals.