AI glasses could spark a breakthrough for people with hearing loss

Mon 18 August 2025
Innovations
News

Research at the University of Stirling, in collaboration with Heriot-Watt, Edinburgh and Napier, has led to the development of AI glasses that can drastically improve the listening experience for people with hearing loss.

These glasses use a built-in camera to track a speaker's lip movements in real time. A smartphone app sends the audio and video via 5G to a powerful cloud server, where AI uses audiovisual speech enhancement to isolate the speaker's voice from ambient noise. The enhanced audio is then sent back to the user's hearing aid or headphones almost instantly. The technology also works in noisy environments with overlapping sounds, such as multiple people speaking at the same time or “distracting” background noise.
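The article does not publish the underlying algorithms, but the cloud-side enhancement step can be caricatured with a toy energy-based noise gate. This is only a sketch: `noise_gate`, its parameters, and the audio-only approach are illustrative stand-ins, not the project's actual audiovisual deep-learning model.

```python
import numpy as np

def noise_gate(audio, frame_len=160, rel_threshold=0.25):
    """Toy stand-in for the cloud-side enhancement step: keep frames
    whose energy suggests speech, silence the rest. The real system
    fuses lip-movement video with the audio; this sketch is audio-only."""
    n = len(audio) // frame_len * frame_len          # drop the ragged tail
    frames = audio[:n].reshape(-1, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))        # per-frame loudness
    keep = rms >= rel_threshold * rms.max()          # speech-like frames only
    return (frames * keep[:, None]).reshape(-1)

# A "voice" burst followed by low-level hiss: the gate keeps the burst.
t = np.linspace(0, 0.1, 1600, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
hiss = 0.01 * np.ones(1600)
cleaned = noise_gate(np.concatenate([voice, hiss]))
```

In the described system this processing happens server-side, with the round trip over 5G fast enough to feel real-time to the wearer.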

New generation of hearing aids

Dr Ahsan Adeel of the University of Stirling says it is ‘exciting to see the vision for the next generation of hearing aids taking shape’, thanks in part to the collaboration with other universities. The technology goes beyond traditional AI: it mimics how pyramidal neocortical cells in the human brain process speech at the cellular level. This results in hearing aids that:

  • consume less power than a light bulb
  • offer minimal delay (latency)
  • guarantee complete privacy

This approach also deepens our understanding of the neurobiology of multisensory speech processing and supports the development of biologically inspired hearing systems that approach human performance.

‘This groundbreaking approach shifts from abstract, human-level cognitive audiovisual models to real multisensory processing at the cellular level, enabling the world's first personalised, autonomous, data centre and cloud-independent, biologically plausible hearing aids – an achievement that surpasses current AI and neuromorphic systems,’ said Dr Adeel.

From invention to practice

Although the system is still a prototype, initial user results are promising. Discussions are underway with manufacturers to make the technology widely available, affordable and accessible. Workshops with users of the wearable are already taking place to collect sound samples from everyday environments.

According to project leader Professor Mathini Sellathurai of Heriot-Watt, users only need to look at the person they want to hear. Even when there are multiple conversation partners, the AI automatically filters out the desired voice. Thanks to 5G, the delay is so small that it can be considered a real-time experience.
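Professor Sellathurai's description implies a gaze-driven selection step: among the faces the camera detects, the system latches onto the one the wearer is looking at. A minimal sketch of that idea, assuming each detected speaker is represented by an angle relative to the glasses' forward axis (the function name and representation are hypothetical; the real selection logic is not published):

```python
def select_target_speaker(gaze_deg, speaker_angles_deg):
    """Pick the detected speaker closest to where the wearer is looking.
    Angles are in degrees relative to the glasses' forward axis; the
    wrap-around at +/-180 degrees is handled explicitly. Hypothetical
    sketch -- not the project's published algorithm."""
    def angular_gap(a, b):
        # Smallest absolute difference between two angles, in [0, 180].
        return abs((a - b + 180.0) % 360.0 - 180.0)
    gaps = [angular_gap(a, gaze_deg) for a in speaker_angles_deg]
    return gaps.index(min(gaps))

# Wearer looks slightly to the right (10 deg); three faces detected.
target = select_target_speaker(10.0, [-40.0, 12.0, 95.0])  # index 1
```

The chosen index would then tell the enhancement model which lip stream to condition on, so switching conversation partners is as simple as turning one's head.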

The technology has potential for broader applications, from noisy everyday settings such as cafés and hospitals to industrial environments such as oil rigs. The aim is to break down barriers and give more people, especially children and the elderly, access to affordable, AI-driven hearing support.

AI innovation for the deaf and hard of hearing

Research into the added value of AI for the deaf and hard of hearing has been ongoing in various places for some time. At the end of 2023, for example, the FDA approved an AI-driven, “self-adjusting” hearing aid for the consumer market. The device comes with the award-winning Tuned app, which uses the first globally patented AI hearing assistant.

And in 2019, researchers at the LUMC in the Netherlands received a €2.3 million grant from the Netherlands Organisation for Scientific Research (NWO) to use AI to improve the effectiveness of cochlear implants (CIs). Unlike traditional tests in soundproof listening booths, the research focused on everyday listening situations such as classrooms, playgrounds and workplaces, where ambient sounds such as traffic, conversation and background noise play a role. The aim was to develop machine learning techniques that make CIs self-learning: able, for example, to optimise signals, suppress background noise more effectively, or combine signals from both ears, so that users can follow conversations more easily while people are moving around.