Time to Read: 30 minutes
Hearing aid technology has advanced in leaps and bounds over the past 20 years. One of the newest innovations now available in the latest models of hearing aids is artificial intelligence. Value Hearing founder and clinical audiologist Christo Fourie explores the world of artificial intelligence and machine learning, what it means for hearing technology, and how it can help you achieve your best possible hearing.
Watch the video or read the transcript below.
Good morning, and welcome to today's webinar. Today I'd like to talk about something I'm no expert at but am very interested in: machine learning and artificial intelligence, particularly as they relate to audiology and hearing aids.
What we'll look at first of all is: what is machine learning or AI? How does it actually help you, the consumer? What applications are we currently seeing in audiology?
Then we'll look at the applications we're currently seeing in hearing aids themselves, and I'll also cover some common questions and concerns. Please comment in the comment section below if you've got any questions or you don't agree with something I've said. I'm learning as much as you are.
AI is really a learning and problem-solving computer system or program, and it covers all sorts of tasks, depending on what we want to apply it to.
Then you've got general AI, which is a computer that is as smart as a human in most tasks. That's not something we've achieved just yet, but it's certainly where things are progressing over time.
Machine learning is really a subset of AI. It's where a machine looks at big sets of data, learns from that data, and can then do very specific tasks, probably better than a human being can. We're not very good at looking at lots of figures and data and trying to find patterns in them, but machines are incredibly good at that. Using big data, you specify a result you want to predict, and the machine can then look for patterns and start predicting which combination of patterns will deliver that result.
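To make that a bit more concrete, here's a minimal sketch in Python (using scikit-learn) of the pattern-finding idea: give an algorithm a table of features and known outcomes, let it learn the patterns, then check how well it predicts outcomes it hasn't seen. The features and labels here are random stand-ins, not real audiological data.

```python
# Minimal sketch of supervised machine learning on tabular data.
# The "features" and labels are randomly generated stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row describes one person: three numeric features
# (e.g. age, noise exposure, an audiogram average).
X = rng.normal(size=(1000, 3))
# Pretend label: 1 = result of interest present, 0 = absent.
y = (X[:, 2] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)               # "learn the patterns"
print("held-out accuracy:", model.score(X_test, y_test))
```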
And then, when you look at much more complex sets of data, like the sound data a hearing aid deals with, you need deep neural networks, which have many different connections all working together to figure out the answer to a very complex problem.
It also will, hopefully, deliver better outcomes to you. Well, that's the promise at least.
In audiology we're already seeing quite a few applications of machine learning, particularly in diagnosis. For example, Computational Audiology - one of the websites I'll share in the resources - has an article where they take a child's speech patterns and can actually detect, from those patterns alone, whether that child has hearing loss and what kind of hearing loss, without actually doing a hearing test.
That's quite remarkable. You don't need someone skilled in testing; you just need a recording of a child's voice to start screening for children who have hearing loss, so it can be picked up and diagnosed much earlier and more easily.
They're also looking at individualised test batteries or processes. For instance, based on the information the audiologist or clinician gains from you, and the test results gathered in real time, those processes could be adjusted to deliver the quickest and best diagnosis for you.
Currently we follow very specific steps with every patient; with machine learning, every one of those steps can start being individualised based on you.
We also have applications where, on an iPhone, you can take a photo of someone's eardrum and that picture is analysed to determine whether there is any middle ear disorder - anything that needs medical referral - just from that picture.
Again, it makes healthcare a bit more global. You can send that bit of equipment - the phone and the otoscope - to someone in a third world country. They might not be formally qualified, but they can pick up these issues much, much sooner, without the involvement of a professional in every single case.
There are even technologies that can predict someone's chance of suffering from tinnitus based on a brainstem test, where we look at the brain's responses to sound and the AI uses that data to predict whether the person has tinnitus or tinnitus issues, so that's quite interesting.
New audiologists, particularly students, find that kind of testing quite daunting, but through this method they can actually reduce the errors that occur and cut down test time by using machine learning.
On the treatment side, an example would be hearing aid selection.
We're actually working on a process where, based on different test factors, we can start finding out which hearing aid is going to work best for you out of the thousands available.
We're starting to introduce machine learning into that as well, and in time that will give us a much more accurate model.
There's also machine learning that can adjust hearing aid settings. I'll speak about that a little later, but essentially you can capture the situation you're in through your phone, the AI can analyse it, and it sends back the settings that will work best for that kind of situation. Another example is hearing aid noise reduction - available in one of the current hearing aids - which can actually separate the different speech sounds from the noise sounds and then reduce only the noise.
Before, the only way that could be done was to simply pull down different channels, hoping to address only noise and not speech, but speech was often affected as well. This approach separates the two to give you better noise reduction.
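For the technically curious, here's a deliberately oversimplified sketch of the masking idea behind that kind of noise reduction: estimate the noise floor in each frequency band and attenuate only the parts of the signal where noise dominates. This isn't any manufacturer's actual algorithm - real products use trained models to decide what counts as speech - and the signal and thresholds below are made up for illustration.

```python
# A very simplified spectral-gating sketch of noise reduction.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs * 2) / fs
# Hypothetical input: a tone standing in for "speech" plus broadband noise.
speech_like = 0.5 * np.sin(2 * np.pi * 440 * t)
noisy = speech_like + 0.2 * np.random.default_rng(0).normal(size=t.size)

f, frames, Z = stft(noisy, fs=fs, nperseg=512)
noise_floor = np.quantile(np.abs(Z), 0.2, axis=1, keepdims=True)  # crude noise estimate
mask = np.abs(Z) > 2 * noise_floor        # keep bins well above the noise floor
cleaned = istft(Z * mask, fs=fs, nperseg=512)[1]
# "cleaned" now has noise-dominated bins attenuated; louder bins are untouched.
```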
Automatic programs have been available since 1999. The first iterations, from Phonak for instance, could analyse the environment, detect whether it was speech or noise, and then change programs, making the hearing aid more automatic.
It's come a long way since then - fall detection, for example, which you've seen promoted by certain hearing aid makers like Starkey. It uses artificial intelligence to look at the incoming accelerometer data and detect whether it's a real fall or just someone bending over, moving quickly or dancing.
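Here's a toy sketch of that fall-detection idea: look for a sharp impact in the accelerometer signal followed by relative stillness. Real devices train models on labelled movement data; the thresholds and function below are invented purely for illustration.

```python
# Toy sketch of fall detection from accelerometer data. Thresholds are made up.
import numpy as np

def looks_like_fall(accel_xyz, fs=50, impact_g=2.5, still_g=0.3):
    """accel_xyz: array of shape (n_samples, 3), in units of g."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    impacts = np.flatnonzero(magnitude > impact_g)     # sharp spikes
    for i in impacts:
        after = magnitude[i + fs : i + 3 * fs]         # the next couple of seconds
        if after.size and np.all(np.abs(after - 1.0) < still_g):
            return True                                # impact, then lying still
    return False

# Example: simulated "at rest" data (1 g straight down) never triggers it.
still = np.tile([0.0, 0.0, 1.0], (500, 1))
print(looks_like_fall(still))   # False
```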
Then, with outcomes, we obviously want better performance in noise, and that comes back to hearing aid noise reduction, hearing aid selection and hearing aid settings. All of that together enables a better individual hearing aid match, which again feeds into better performance in noise.
The more suited the hearing aid is to an individual, the better your outcomes are likely to be.
With machine learning we can take data from the hearing aid and data from your hearing loss, and start seeing which pathway of care is going to serve you best and deliver the best outcome.
It's no longer one service suits all; it's a very individualised approach that becomes available by using machine learning in our industry.
Now for the applications in the hearing aids themselves. One approach is an environment classifier trained on labelled sound samples: if its guess is correct, it's told 'yes, that's correct, and the machine knows that's correct', and if it's wrong it tries something else. It actually gets better and better at predicting what a situation is, so it can, for instance, identify speech in quiet versus speech in noise, noise alone, car noise, music and quite a few other things.
It also uses information from accelerometers - how you're moving your head - to change how the hearing aid works. What it does, essentially, is switch to a program, or a combination of programs, whose parameters have already been set, so it differs from some other strategies, which I'll discuss in a moment.
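Conceptually, the classifier's output is then mapped to a program. The sketch below shows that mapping step only; the class names and parameter values are illustrative and not taken from any manufacturer.

```python
# Mapping an environment classification to a hearing aid program.
# Program names and parameter values are invented for illustration.
PROGRAMS = {
    "speech_in_quiet": {"directionality": "omni", "noise_reduction": 0.1},
    "speech_in_noise": {"directionality": "beam", "noise_reduction": 0.7},
    "noise_only":      {"directionality": "omni", "noise_reduction": 0.9},
    "car":             {"directionality": "rear_attenuate", "noise_reduction": 0.8},
    "music":           {"directionality": "omni", "noise_reduction": 0.0},
}

def select_program(scene_probabilities):
    """scene_probabilities: dict of scene name -> classifier confidence."""
    scene = max(scene_probabilities, key=scene_probabilities.get)
    return scene, PROGRAMS[scene]

# Example: the classifier is fairly sure we're listening to speech in noise.
print(select_program({"speech_in_quiet": 0.1, "speech_in_noise": 0.7,
                      "noise_only": 0.1, "car": 0.05, "music": 0.05}))
```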
Then you've got fine-tuning options, such as Widex's SoundSense Learn, which has wide access to sound scenes to learn from by using the phone app.
You give it a recording or describe the situation you're in. The hearing aid will then give you some A/B comparisons of settings so you can fine-tune it. This can very quickly - within about three to five steps - get you to the exact settings that address your needs in that environment.
It only adjusts three frequency bands or controllers: low frequency versus mid frequency versus high frequency. That alone can make quite a difference to how you experience a situation. The data is then fed anonymously to the cloud, where it's compared with how other people rated similar settings, and it gets better and better as it learns.
Just switching on that feature starts giving you improved results without too much fiddling. You can also fine-tune manually if you want to - using the data from hundreds of thousands of people to get you the right result for the specific situation you find yourself in right now - but it does require the use of your phone, so it's not on the hearing aid itself.
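Here's a toy version of that A/B fine-tuning loop over three band gains. Real systems use machine learning to choose which comparison to present next; this simplified sketch just refines one band at a time based on the listener's choice, and all of the values are made up.

```python
# Toy A/B fine-tuning over three band gains (low, mid, high).
import random

def ab_fine_tune(prefers, steps=5):
    """prefers(a, b) -> True if the listener prefers settings a over b.
    Each setting is a dict of band gains in dB."""
    bands = ["low", "mid", "high"]
    current = {b: 0.0 for b in bands}
    step_size = 6.0                       # start with coarse 6 dB steps
    for _ in range(steps):
        band = random.choice(bands)
        candidate = dict(current, **{band: current[band] + step_size})
        if prefers(candidate, current):
            current = candidate
        step_size /= 2                    # refine with each comparison
    return current

# Example "listener" who simply likes a bit more high-frequency gain.
print(ab_fine_tune(lambda a, b: a["high"] > b["high"]))
```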
It's similar with Starkey Edge. Through a double tap on the hearing aid, or a press of a button in the phone app, it essentially takes a little recording of the sound in that particular environment and sends it to the phone, which has a very powerful processor that can analyse the signal and send back the best settings for that environment.
The downside is that every time you change environment you've got to run it again, because otherwise it stays set up for the old environment until you tell it, 'I want to relearn'. That's where AutoSense is probably better: it's not doing machine learning on the hearing aid itself, but it can classify the situation and change programs automatically, which means you don't have to adjust your program manually.
So, different approaches to the same problem.
The Oticon More is probably the most advanced - and I'm happy to stand corrected, so please share in the comments - but essentially what they've done is put a deep neural network on the hearing aid itself, so you don't need to connect your phone or send any information to it to be able to separate sounds and get the best settings; it all happens on the hearing aid itself.
They've installed a pre-trained deep neural network. What that means is they've essentially taken more than 12 million sound samples and sound scenes - party noise, outdoor scenes and so on - and fed them through the algorithm, which then learns how to separate each sound scene into its component sounds.
That gives you an awareness of your whole environment while you can still hear the speech in it, so that's quite unique, and I'm sure there's a lot more to come.
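To illustrate what "pre-trained network, inference on the device" means, here's a conceptual sketch in PyTorch. The architecture, sizes and scene names are invented and have nothing to do with Oticon's actual network; the point is only that the learning happens offline on millions of sound scenes, and the hearing aid just runs the frozen network on incoming audio frames.

```python
# Conceptual sketch: a small, frozen neural network running inference only.
import torch
import torch.nn as nn

scene_classes = ["speech", "party_noise", "traffic", "music", "wind"]

model = nn.Sequential(            # a small stand-in for an on-device DNN
    nn.Linear(256, 64),           # 256 spectral features per audio frame
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, len(scene_classes)),
)
model.eval()                      # frozen: no learning happens on the device

with torch.no_grad():
    frame_features = torch.randn(1, 256)           # one incoming audio frame
    scores = model(frame_features).softmax(dim=1)
    print(dict(zip(scene_classes, scores.squeeze().tolist())))
```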
I actually visited the research lab in Eriksholm (Denmark), and they already have algorithms that can detect two different speakers, pick one of them, take that person's speech out of the combined signal and raise it.
The problem, though, is that it still needs to figure out which speaker you're actually paying attention to, so they're looking at things like eye tracking. If they put electrodes in your ear, then when you move your eyes there's an electrical potential that can be measured in the ear, and through that they can actually determine where you're looking.
So they're looking at ways of introducing electrodes and then using artificial intelligence like that to separate one speaker's voice from another. Currently the Oticon More still uses directional microphones to do that, but it's a big step forward in artificial intelligence on a hearing aid itself.
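Purely as a thought experiment, here's what the "use gaze to pick the speaker" step might look like once two speakers have already been separated. The angles, gains and the separation step itself are all assumptions on my part, not how the Eriksholm research actually works.

```python
# Hypothetical sketch: boost whichever separated speaker the listener is
# looking towards, based on an (assumed) gaze-angle estimate.
import numpy as np

def mix_by_attention(speaker_a, speaker_b, angle_a, angle_b, gaze_angle,
                     boost=2.0, attenuate=0.5):
    """Return a mix that raises the speaker closest to the gaze direction."""
    if abs(gaze_angle - angle_a) < abs(gaze_angle - angle_b):
        return boost * speaker_a + attenuate * speaker_b
    return attenuate * speaker_a + boost * speaker_b

# Example with two made-up signals: speaker A at -30 degrees, B at +40,
# and the listener currently looking roughly towards B.
a = np.random.default_rng(0).normal(size=16000)
b = np.random.default_rng(1).normal(size=16000)
output = mix_by_attention(a, b, angle_a=-30, angle_b=40, gaze_angle=35)
```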
Fall detection, which I mentioned briefly earlier, is another example - the Starkey Livio AI, for instance. There's a difference in the pattern from the accelerometer, the little sensor in the hearing aid that measures movement, between tripping and falling and just stumbling a little, dancing or doing the gardening.
The next layer of artificial intelligence or machine learning for hearing aids involves apps connected via Bluetooth on the phone. There's an app called Chatable, and I read just this week that they're releasing their algorithm to be licensed, because they've now developed it so it can run in real time on a hearing aid. Before, with Chatable, you needed to run it through a phone: the hearing aid sound goes through the phone, which processes it, separates the speech from the noise and gives you back the speech-dominant signal - pretty much what the Oticon More does, but in an app.
But now they've developed something that can be put on hearing aids and “hearables” directly, so watch this space.
I'm sure we'll see more of what Oticon is doing in other devices if that takes off. Then there's voice transcription: apps where you can take a phone call through the app and it transcribes what people are saying into readable text, so if you don't hear them well you can read it off your phone. That kind of voice-to-text translation also uses machine learning, the same as Siri and those sorts of things.
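As a small illustration, here's what voice-to-text can look like using the open-source SpeechRecognition package in Python. This isn't the engine any particular app or Siri uses; it just shows the shape of the idea - audio in, machine-learned model, readable text out. The file name is hypothetical.

```python
# Minimal voice-to-text sketch using the third-party SpeechRecognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("phone_call.wav") as source:   # hypothetical recording
    audio = recognizer.record(source)

try:
    print(recognizer.recognize_google(audio))    # cloud speech-to-text
except sr.UnknownValueError:
    print("Speech was not intelligible enough to transcribe.")
```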
It's quite exciting but there are some common concerns about this.
Is it actually smart? No, currently it's not; it's just very good at doing very specific tasks. It's not smart like a human. You can't run an IQ test on it and say it's that smart, although they are working towards that, and that's also when things start getting scary.
Hopefully they'll think about a lot of those things very carefully before we have machines running the world.
Can it be trusted? Machines that have enough data to learn from do, in some cases or even many cases, deliver more accurate results than a human trying to perform the same task. As I say, they're very good at doing very, very specific tasks.
So yes, the answer is that it can be trusted, and people actually tend to automatically trust machines more than a human doing the same task.
Another big question, also for people working in machine learning, is: what about privacy? Because you are often taking people's data and using it in a machine learning algorithm. The thing about machine learning is that it doesn't use any specific user's data in isolation, and it doesn't need to know who the user is, but it does need to know some features of the user - things like age, weight in some cases, and in our industry, hearing loss and how you use the hearing aid you're wearing. Anonymised data is used to train these machines, and it's literally thousands or millions of rows of data.
So any time a machine learning algorithm gets fed data, the identifying information is taken out, so it can't be traced back to you. Most of these systems do actually ask your permission to share that anonymised data, and they can use the data from millions of other people to build the model.
Once the model is trained, it doesn't look back at your data; the data was only needed for training - 'yes, this is correct', 'no, that's not correct' - and once it has learned, it doesn't need your individual data anymore.
Then you can apply your own data to get a prediction from the machine learning algorithm, but again, that can be done quite anonymously and to your benefit. Researchers are very aware of these concerns and they're discussed quite extensively, particularly in healthcare.
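Here's a small sketch of that anonymisation idea: identifying columns are dropped before the data ever reaches the training step, and the trained model is later applied to one person's (still nameless) features to get a prediction. The column names and numbers are hypothetical.

```python
# Identity is stripped before training; predictions don't need it either.
import pandas as pd
from sklearn.linear_model import LogisticRegression

records = pd.DataFrame({
    "name":      ["A. Smith", "B. Jones", "C. Wu", "D. Patel"],
    "device_id": ["HA-001", "HA-002", "HA-003", "HA-004"],
    "age":       [71, 64, 58, 77],
    "loss_dB":   [45, 30, 25, 60],
    "benefit":   [1, 1, 0, 1],          # did this person benefit? (label)
})

anonymised = records.drop(columns=["name", "device_id"])   # strip identity
model = LogisticRegression().fit(anonymised[["age", "loss_dB"]],
                                 anonymised["benefit"])

# Later: your own features go in, a prediction comes out - no identity needed.
print(model.predict(pd.DataFrame({"age": [69], "loss_dB": [50]})))
```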
There's also a question about discrimination in AI. We've particularly seen that in facial recognition AI and even in policing AI - some police departments around the world use AI to predict where crime is going to occur, and that tends to be biased towards certain populations.
Facial recognition is poorer at identifying certain races than others, and it's all to do with the data being put in. If you only put in data from a certain demographic, the machine learning algorithm will be very good at predicting for that demographic, but you can't then apply that model to another demographic, because the results won't be accurate.
There might be other factors or features in that data that lead to a different answer, so it's very important for developers of these algorithms to take those biases into account. And they're very much aware of it - it's not a case of 'oops, what happened?'; they think about this proactively.
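The kind of check developers run for this is simple in principle: rather than one overall accuracy number, score the model separately for each group it will be used on. The groups and error rates below are invented to show the idea.

```python
# Per-group accuracy check: a model can look fine overall while performing
# much worse on an under-represented group. Data here is simulated.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
groups = np.array(["group_a"] * 800 + ["group_b"] * 200)   # imbalanced data
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
# Pretend the model makes far more mistakes on the smaller group.
flip = (groups == "group_b") & (rng.random(1000) < 0.3)
y_pred[flip] = 1 - y_pred[flip]

for g in ["group_a", "group_b"]:
    mask = groups == g
    print(g, "accuracy:", round(accuracy_score(y_true[mask], y_pred[mask]), 2))
```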
I'm very optimistic - I am in most things; I love technology - and I think the future, particularly for healthcare, is quite positive. We're already seeing AI in things like IBM's Watson, and just this week AI that can predict and show the shape and molecular structure of any protein in the human body, which can then lead to better medications, individualised medications.
AI could be applied to diseases which we currently can't solve because there's so much information involved. It might be the smallest variable in your genetics that changes you from being prone to a condition to actually having that condition, and that's exactly the sort of data machine learning thrives on.
So the better we become at collecting data, feeding these algorithms and building better algorithms - because they're also changing all the time - the better the outcomes we're going to get for humans.
There is a fear that machines will become so smart that they don't need us anymore, that they'll build smarter machines, and that in the end a machine could simply see humans as a problem to be solved - which might not result in very positive things for the human race. But again, it's something the researchers and developers of these algorithms are aware of.
It's certainly something we need to be aware of, and something our policy makers need to address to try and protect us.
So there are pros and cons to everything, but I do believe that in the near future we're going to see very big developments. We're already seeing things like Siri and other voice assistants helping us, and self-driving cars that involve machine learning and AI.
There are so many things that can improve human life; we just have to be careful that it doesn't land in the wrong hands, because again, it depends on intent - you can make something that's good at doing one thing do something bad as well - so that's something to consider.
That really comes down to the point of singularity in AI. It's basically the moment when a computer becomes as intelligent as a human being. With general AI, the thinking is that the computer can then develop another computer that's smarter than itself, and that growth in machine intelligence will take off exponentially from that point.
So, within a year you could potentially have a computer that's as smart as everyone on the planet put together, and who knows what happens then - we won't even understand how it thinks well enough to do anything about it.
Different dates have been put on it - some people say 2050, some say never - and again, it's open to conjecture. Luckily, people are already philosophising about the concerns around general AI, and I'm hopeful we'll end up going down the right path, the same as with nuclear weapons and all those things.
There are always threats, but it all depends on how we as a society deal with those threats and which direction we choose to take.
So, if you're interested - the gloom and doom of that last little bit aside - in learning more about AI, particularly in relation to hearing and hearing aids, you can simply search for AutoSense 4.0 on YouTube. There's a really nice video from Peter Willis explaining it.
Computationalaudiology.com is really the organisation in audiology that brings the researchers in AI and machine learning together in one location. They run conferences - their last one was about a month ago - so there's very interesting stuff happening there.
If you want to read more about Widex SoundSense Learn, they talk about artificial intelligence and explain it quite well too. Starkey's Edge is also discussed on YouTube, so that's another good way to learn a bit more.
We've also got some reviews on those, and there's the Oticon deep neural network video, which is very interesting and probably explains it a lot better than I do - it's very, very good to watch. And obviously, feel free to contact us with any questions.
I hope this was helpful. I'm still learning, but I'm quite excited about this, and I look forward to seeing where it takes us.