Researchers at the University of California, Los Angeles (UCLA) have developed an AI system that converts structured hospital data, typically presented in complex tables, into readable text. According to the researchers, this approach makes it possible to effectively use existing LLMs, such as ChatGPT, in clinical decision-making, particularly in emergency care.
The new model, called Multimodal Embedding Model for EHR (MEME), bridges the gap between the tabular structure of electronic health records (EHR) and the narrative input required for advanced AI analyses. Instead of working with rows full of codes, measurements and medical terminology, MEME generates so-called pseudo-notes. These are textual representations of patient data that mimic the structure of real clinical documentation. This removes an important barrier to the use of AI in acute care.
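The pseudo-note idea can be sketched in a few lines: a structured record is rendered as narrative text through a simple template. This is an illustrative sketch, not the authors' code; the field names and template wording are hypothetical.

```python
# Illustrative sketch of "pseudo-note" generation: a tabular EHR record
# is rendered as narrative text via a template. Field names and wording
# are hypothetical, not taken from the MEME paper.

def make_pseudo_note(record: dict) -> str:
    """Render a structured triage record as narrative text."""
    return (
        f"Patient arrived with chief complaint of {record['chief_complaint']}. "
        f"Triage vitals: heart rate {record['heart_rate']} bpm, "
        f"blood pressure {record['sbp']}/{record['dbp']} mmHg, "
        f"temperature {record['temp_c']} degrees Celsius."
    )

record = {
    "chief_complaint": "chest pain",
    "heart_rate": 102,
    "sbp": 138,
    "dbp": 86,
    "temp_c": 37.4,
}
print(make_pseudo_note(record))
```

The resulting sentence reads like a line from a clinical note, which is exactly the input format a general-purpose language model expects.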
Natural language
Many AI models are trained on natural language, while medical data is often recorded in non-textual formats. This discrepancy hinders the application of LLMs in clinical environments. In emergency situations, where every second counts and doctors have to make complex decisions under high pressure, access to quickly interpretable information can make all the difference. MEME enables healthcare professionals to initiate treatments faster and more accurately by basing AI insights on a more complete understanding of a patient's medical history.
The system is modular: patient data is split into content blocks (medication history, triage data, vital signs and laboratory results), each of which is converted into text using medical templates. Each text block is then analysed separately by a language model, producing a nuanced, multidimensional picture of the patient that more closely mirrors how doctors reason clinically.
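A minimal sketch of this modular idea: each block's pseudo-note is embedded separately and the per-block vectors are concatenated into one patient representation. The toy hash-based encoder below stands in for a real language-model encoder, and the block names, example texts and dimensions are illustrative assumptions, not details from the paper.

```python
# Sketch of modular per-block embedding: each EHR content block is
# rendered to text, embedded separately, and the embeddings are
# concatenated. A deterministic hash-based encoder stands in for a
# real language-model encoder; dimensions and texts are illustrative.
import hashlib

DIM = 8  # toy embedding size per block

def encode(text: str) -> list[float]:
    """Stand-in for an LLM text encoder: hash bytes scaled to [0, 1]."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def embed_patient(blocks: dict[str, str]) -> list[float]:
    """Embed each block's pseudo-note separately, then concatenate."""
    vector = []
    for name in sorted(blocks):  # fixed block order for reproducibility
        vector.extend(encode(f"{name}: {blocks[name]}"))
    return vector

blocks = {
    "medication": "Patient takes metoprolol 50 mg daily.",
    "triage": "Chief complaint: chest pain, ESI level 2.",
    "vitals": "HR 102 bpm, BP 138/86 mmHg.",
    "labs": "Troponin pending; glucose 6.1 mmol/L.",
}
patient_vec = embed_patient(blocks)
print(len(patient_vec))  # 4 blocks x 8 dims = 32
```

Because each block is encoded on its own, a downstream model can weigh, say, abnormal vitals differently from medication history, rather than digesting one undifferentiated wall of text.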
Extensive validation study
In an extensive validation study published in npj Digital Medicine, involving more than 1.3 million emergency room visits from both the well-known MIMIC database and UCLA's own healthcare systems, MEME was found to perform significantly better than existing AI solutions. The model outperformed traditional machine learning techniques, specialized EHR models such as CLMBR and Clinical Longformer, and prompting-based methods.
Importantly, MEME also proved to be highly transferable between hospitals with different data structures and coding standards. This is an essential feature for widespread implementation. According to the research team, this is a first step towards AI systems that are not only powerful, but also flexible and scalable.
Testing in other departments
Future research will focus on testing MEME outside the emergency department, for example in intensive care units or within chronic care. Attention will also be paid to the integration of new medical concepts and evolving data structures, so that the model can continue to adapt to changing healthcare needs.
‘This system bridges the gap between the most powerful AI models and the reality of medical data. By converting medical records into text that is understandable to language models, we are unlocking AI capabilities that until recently were beyond the reach of healthcare professionals,’ said Simon Lee, a PhD student at UCLA.
Earlier this year, we wrote about a similar initiative in which an AI tool was developed that automatically converts laboratory results into understandable language for patients. This tool helps doctors save time by reducing administrative tasks and improving communication with patients. Following positive results from pilots with general practitioners, the tool is now being used in primary care, with plans to expand to specialist care. The text generated by the AI is always reviewed by the doctor before being sent to the patient.