Better hearing through artificial intelligence

Hearing loss happens to many of us. The US National Institutes of Health, for example, estimates that one in eight Americans aged 12 years and older has hearing loss in both ears. Nearly 25 per cent of adults aged between 65 and 74, and half of those aged 75 and older, experience disabling hearing loss. According to NIH’s National Institute on Deafness and Other Communication Disorders, 28.8 million US adults could benefit from using hearing aids.

Those who could benefit aren’t necessarily rushing to achieve better hearing. While most first-time hearing aid users are in their late 60s and early 70s, their hearing problems typically began years earlier: on average, people wait seven years before giving hearing technology a try.

Those working in the industry are confident that change is on the horizon, because innovation in hearing science is transforming hearing aids from good amplifiers into personalised hearing solutions that deliver natural sound. Ultimately, this means people with hearing loss can stay healthy and active, with good hearing in every situation. A significant part of that innovation is coming from artificial intelligence.

There’s a confluence of factors contributing to this brighter future for people with hearing loss. For starters, hearing-aid technology continues to improve and evolve: from simple amplifiers with low-cut and high-cut filters and a singular focus on amplifying speech in quiet environments, to sophisticated applications of sound analysis, environment classification, inter-ear data processing, adaptive directional microphones, wireless programming, direct connectivity and more.

People are also more used to mobile technology enhancing their digital lives, whether it’s the wireless earbuds they wear constantly to listen to music, or the health ‘wearables’ such as smartwatches and other devices that monitor vitals and track exercise patterns. As a result, wearing devices for better hearing is more widely accepted.

Now factor in emerging artificial intelligence technology, and hearing aids become something more than medical devices. They more closely resemble the burgeoning class of lifestyle ‘hearables’ that are a key component of ubiquitous computing, like virtual assistants or smart glasses.

Fitting hearing aids takes skill and insight from a hearing care professional. The process starts with a detailed audiological evaluation, but even when we audiologists do our best work, hearing aids can be hard to get used to. Evidence shows that a period of acclimatisation (in audiology, the period of adjustment to the hearing aids’ amplification) is the norm. What if we could avoid that acclimatisation period by designing a personalised sound experience at the outset?

As hearing care professionals tailor hearing aids to their patients, they know that each patient’s needs are different, and that every place a patient wants to experience better hearing has its own unique sound characteristics. Hearing aids can already be adjusted to the listening environment of the user: they can lower the general conversation level in a coffee shop or boost the voice of a friend in a park. But doing so can be unnecessarily ad hoc and manual. By applying artificial intelligence, integrated with today’s advanced digital hearing aids, a hearing solution can learn how users prefer to hear in various listening environments and give them greater automated control over their hearing experience. In other words: personalised, natural sound for the real world.

Hearing aids already exist that can communicate wirelessly with smartphones, whether for basic adjustment or to stream music. Add to the mix a machine-learning application that processes inputs from connected hearing aids and can send and receive anonymised data from a cloud-based AI system that aggregates hearing aid settings, and we’ve created the basis of a new approach to better hearing.
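No data format is specified here, so the sketch below is purely illustrative: a minimal Python model of the kind of anonymised settings record such an application might send to the cloud. Every field name is an assumption, not part of any published Widex API.

from dataclasses import dataclass, asdict
import json
import uuid

# Hypothetical data model, not a published Widex API: it only
# illustrates the kind of anonymised record such an application
# might exchange with a cloud service.
@dataclass
class SettingsSnapshot:
    low_gain: int      # user-chosen level for the low-frequency band
    mid_gain: int      # level for the mid-frequency band
    high_gain: int     # level for the high-frequency band
    environment: str   # app-side label, e.g. "coffee shop"

def anonymised_payload(snapshot: SettingsSnapshot) -> str:
    # A random session token stands in for any user identity, so the
    # cloud model learns from settings, never from people.
    record = {"session": uuid.uuid4().hex, **asdict(snapshot)}
    return json.dumps(record)

print(anonymised_payload(SettingsSnapshot(5, 7, 4, "coffee shop")))

The design point is simply that settings travel with a throwaway session token rather than a user identifier, which is what makes the cloud aggregation anonymised.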

New hearing technologies, like Widex’s SoundSense Learn, present hearing aid users with simple A-B comparisons to begin understanding how a person wearing compatible hearing aids prefers sound in a given environment. Mind you, if a set of hearing aids manages three acoustic parameters (low, mid and high frequencies), each of which can be set to 13 different levels, we’re talking about 13 × 13 × 13, or 2,197, possible settings. To A-B test every pair of those settings (asking through a smartphone app, in effect, “Do you prefer setting A or setting B?”) would require more than 2.4 million comparisons. Therefore, machine-learning algorithms, which constantly track the adjustments hearing aid users make and draw from the settings of other users stored in the cloud, are used to calculate the optimal settings from just a dozen or so comparisons.
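Widex hasn’t published how SoundSense Learn chooses its comparisons; it is described only as machine learning. Purely to make the arithmetic above concrete, here is a toy Python sketch in which a simulated wearer answers A-B questions and a simple coordinate-wise binary search (a stand-in for the real algorithm, not a description of it) homes in on a preferred setting in about a dozen comparisons rather than millions.

import math
import random

LEVELS, BANDS = 13, 3

# The full grid: 13 levels for each of 3 frequency bands.
n_settings = LEVELS ** BANDS
print(n_settings)                 # 2197 possible settings
print(math.comb(n_settings, 2))   # 2412306 exhaustive pairwise A-B tests

# Toy stand-in for the wearer: prefers whichever candidate sits
# closer to a hidden ideal setting. In reality the answer comes from
# the user tapping "A" or "B" in the smartphone app.
hidden_ideal = [random.randrange(LEVELS) for _ in range(BANDS)]

def prefers_a(a, b):
    def dist(s):
        return sum((x - y) ** 2 for x, y in zip(s, hidden_ideal))
    return dist(a) <= dist(b)

# Coordinate-wise binary search driven only by A-B answers: each
# comparison halves the candidate interval for one band, so about
# log2(13), roughly 4, comparisons per band.
estimate = [LEVELS // 2] * BANDS
comparisons = 0
for band in range(BANDS):
    lo, hi = 0, LEVELS - 1
    while lo < hi:
        a, b = list(estimate), list(estimate)
        a[band], b[band] = lo, hi
        comparisons += 1
        if prefers_a(a, b):
            hi = (lo + hi) // 2       # the ideal lies in the lower half
        else:
            lo = (lo + hi) // 2 + 1   # the ideal lies in the upper half
    estimate[band] = lo

print(hidden_ideal, estimate, comparisons)  # converges in ~12 comparisons

The sketch only shows the information argument: each well-chosen question halves the remaining candidates for one band, which is why a dozen answers can stand in for millions of exhaustive tests. The real system also draws on other users’ anonymised settings, which this toy omits.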

When applied, those settings create a personalised hearing experience based on context and environment (where the listener is), content (what the listener wants to hear, such as a musical performance or a conversation) and intent (whether the listener is focused on hearing one element of the environment over another). Users can store the settings as programs in their smartphones and activate them throughout the day, such as when they’re at work, at the supermarket or in their kitchen. Anonymised, those programs can also be stored in a secure, cloud-based system to help enhance the hearing experiences of others.
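As a hypothetical illustration of how those three dimensions might be organised (the field names and values are assumptions, not a real product schema), a stored program could be modelled like this in Python:

from dataclasses import dataclass

# Illustrative only: the three dimensions named above, modelled as a
# record a companion app might store under a user-chosen name.
@dataclass(frozen=True)
class Program:
    context: str    # where the listener is, e.g. "supermarket"
    content: str    # what they want to hear, e.g. "speech"
    intent: str     # what they focus on, e.g. "the cashier's voice"
    low_gain: int
    mid_gain: int
    high_gain: int

# Stored programs the user can activate throughout the day.
programs = {
    "work": Program("open-plan office", "speech", "meeting partners", 4, 6, 5),
    "supermarket": Program("supermarket", "speech", "the cashier's voice", 3, 5, 7),
}
print(programs["work"])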

Studies show that hearing aid users have a significant preference for the personalised settings achieved through artificial intelligence and machine learning, and that 80 per cent would recommend the function to others. Notably, the analyses also indicate how users arrive at their own, more natural sound. For example, by looking at anonymised settings for what users categorise as ‘work’, we see an amazing diversity that simply could not be addressed through traditional manipulation of hearing aid settings. Only artificial intelligence and machine learning could create personalised, natural sound experiences for each individual user so smoothly.

We’re just at the beginning of this journey. With advances in sensor technology, integration with other wearable platforms and increasingly sophisticated artificial intelligence, smart hearing aids will be able to tailor experiences on the fly based on, among other things, where the wearer is looking. If a father is on a bench in a park, talking to a friend while monitoring his child several yards away, his hearing aids could adjust to favour the conversation when it’s clear he’s looking at his companion, then readjust when he looks away to hear what his child is doing. Or, with their embedded microphones, advanced processing and artificial intelligence, hearing aids could recognise that their wearer has entered the supermarket, for example, and automatically adjust settings.

In the end, hearing aids supported by artificial intelligence and machine learning put the focus on personalised sound quality and listener intent where it belongs. Perhaps more than any other smart device, this new generation of hearing aids stands to seamlessly improve the lives of millions. That’s what technology should do.