A new approach to implant listening
Hearing loss is a global challenge, particularly in countries with ageing populations. Unaddressed hearing loss costs an estimated $750 billion globally each year, and recent work has shown a strong link between hearing loss and dementia. Over a billion young people are at risk of hearing loss because of the way they listen to music.
Hundreds of thousands of people depend on implanted electronic devices to hear. These devices, known as auditory or cochlear implants, aren't perfect. Implant users often have difficulty knowing where sounds originate, are commonly unable to enjoy music, and find it difficult to understand speech when there is background noise - like in a busy workplace, crowded restaurant or chaotic classroom. We have a new approach to solving this problem: transferring crucial sound information through the skin.
We are developing a low-cost haptic device that could revolutionise the treatment of hearing loss.
Implants in noise
Cochlear implant users struggle to understand speech in noisy environments.
People with auditory implants hear the world in a very different way to people with healthy hearing. In an implant user, the sound that is usually conveyed to the brain by thousands of extraordinarily sensitive cells in the ear is instead delivered by just 22 micro-electrodes. This means that the information reaching the brain is severely limited. We've made a quick demo that simulates how hard it can be for auditory implant users to understand speech in complex sound environments. It's available to use for free as a teaching or demonstration tool (YouTube or Download).
Note: these are just simulations based on models of how cochlear implants work with the brain. Cochlear implant users experience their device in different ways depending on a range of factors.
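For anyone curious about how such simulations are typically built, here is a minimal sketch of a noise vocoder, a common technique for approximating implant listening: the signal is split into a small number of frequency bands, and only each band's slowly varying amplitude envelope is kept, carried on band-limited noise. This is not the code behind our demo; the channel count, filter settings, and function names are illustrative assumptions.

```python
# Minimal noise-vocoder sketch of cochlear-implant-style listening.
# Channel count, filter orders, and cutoffs are illustrative assumptions only.
# Audio is assumed to be sampled well above 16 kHz (e.g. 44.1 kHz).
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(audio, fs, n_channels=22, f_lo=100.0, f_hi=8000.0):
    """Replace fine spectral detail with per-channel envelopes on noise carriers."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)     # log-spaced band edges
    noise = np.random.randn(len(audio))                   # broadband noise carrier
    out = np.zeros(len(audio))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, audio)                    # isolate one frequency band
        env = np.abs(hilbert(band))                        # amplitude envelope of the band
        smooth_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
        env = sosfilt(smooth_sos, env)                     # keep only slow envelope fluctuations
        out += env * sosfilt(band_sos, noise)              # envelope modulates band-limited noise
    return out / (np.max(np.abs(out)) + 1e-9)              # normalise to avoid clipping
```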
Implants and Music
Music sounds very different to implant users. They can struggle to distinguish different pitches, and they have poor access to the quality, or "timbre", of sounds. We've made a short demo that simulates what it is like to hear music as an auditory implant user. Like the demo above, it's available to use for free as a teaching or demonstration tool (YouTube or Download).
Hearing through your skin
"The brain takes information from the senses to build a model of the world. When information is missing from one sense, we can use another sense to add it back in." - Dr Mark Fletcher, principal investigator
Our research is all about finding ways to send crucial sound information through the skin as small vibrations. We've shown that this can work in the lab - now we're taking it to the real world. We are developing a wrist-worn device that we have shown can improve implant users’ speech understanding and could transform their lives.
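To illustrate the general principle (and not the mosaicOne's actual signal path), the sketch below extracts the amplitude envelope of a speech signal and uses it to modulate a low-frequency carrier in a range where the skin is sensitive to vibration. The carrier frequency, smoothing cutoff, and function names are assumptions made for this example.

```python
# Illustrative sketch: map a speech envelope onto a vibrotactile signal.
# Carrier frequency and smoothing cutoff are assumptions, not the mosaicOne design.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def speech_to_vibration(audio, fs, carrier_hz=230.0, env_cutoff_hz=30.0):
    """Return a waveform whose vibration intensity follows the speech envelope."""
    env = np.abs(hilbert(audio))                                   # amplitude envelope
    sos = butter(2, env_cutoff_hz, btype="lowpass", fs=fs, output="sos")
    env = sosfilt(sos, env)                                        # keep only slow fluctuations
    env = np.clip(env / (np.max(env) + 1e-9), 0.0, 1.0)            # normalise to 0..1
    t = np.arange(len(audio)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)                   # tone near the skin's sensitive range
    return env * carrier                                           # envelope-modulated vibration
```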
Our new haptic device
3D render of our mosaicOne_C device. We've been working hard to improve the device and get it ready for use outside the lab. Further iterations of the mosaicOne are coming very soon!
Training at home
Learning to use new information is an important part of the electro-haptic effect. We're building a remote training system to see how people learn to use electro-haptic stimulation over time.
Our research uses RealSpeech, an auditory training app developed by Dr Mark Fletcher and Dr Ian Wiggins from the electro-haptics team. RealSpeech has a large library of talkers and background environments, over 20 hours of synchronised high-definition video and audio, and is set up for remote data collection.
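Remote testing and training of this kind usually adapts difficulty to the listener. As one hedged illustration, and not RealSpeech's actual procedure, a simple one-up/two-down staircase on signal-to-noise ratio might look like the sketch below; the step size, starting level, and function names are assumptions.

```python
# Hedged illustration of an adaptive speech-in-noise procedure (one-up/two-down on SNR).
# This is not RealSpeech's actual logic; the rules and step sizes are assumptions.

def run_staircase(present_trial, start_snr_db=10.0, step_db=2.0, n_trials=30):
    """present_trial(snr_db) should return True if the listener responded correctly."""
    snr = start_snr_db
    correct_in_a_row = 0
    history = []
    for _ in range(n_trials):
        correct = present_trial(snr)
        history.append((snr, correct))
        if correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:          # two correct in a row: make it harder
                snr -= step_db
                correct_in_a_row = 0
        else:                                  # any error: make it easier
            snr += step_db
            correct_in_a_row = 0
    return history
```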
Virtual acoustics
Working with the University of Southampton Virtual Acoustics and Audio Engineering Research Group, we're developing a system for creating carefully controlled, realistic sound environments for use in the lab, in the clinic, and at home. Currently, creating such sound environments requires a large, acoustically-treated space and an expensive system with several loudspeakers arranged in a ring around the listener. We hope our system will open up realistic testing to many more clinics and research labs and allow much more sophisticated at-home training and testing.
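One widely used way to deliver controlled spatial sound without a loudspeaker ring is binaural rendering over headphones, where each source is convolved with a head-related impulse response (HRIR) for its direction. The sketch below shows only that core operation; it is a generic illustration rather than our system, and the HRIR arrays and function names are assumed inputs.

```python
# Generic binaural-rendering sketch: place a mono source at one direction by
# convolving it with head-related impulse responses (HRIRs) for that direction.
# The HRIR arrays are assumed inputs; this is not our virtual-acoustics system.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Return a (samples, 2) stereo array for headphone playback."""
    left = fftconvolve(mono, hrir_left)        # left-ear path for this source direction
    right = fftconvolve(mono, hrir_right)      # right-ear path for this source direction
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out / (np.max(np.abs(out)) + 1e-9)  # normalise to avoid clipping

# Several sources at different directions can be rendered separately and summed
# to build a controlled, repeatable sound scene.
```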
Peer-reviewed publications
Our first paper of 2021, published in Nature Scientific Reports
Our 2020 review paper on the challenges of developing a new haptic device, published in Expert Review of Medical Devices
Our fourth paper of 2020, published in Nature Scientific Reports
Our third paper of 2020, published in Nature Scientific Reports
Our second paper of 2020, published in Nature Scientific Reports
Our first paper of 2020, published in Nature Scientific Reports
Our 2019 paper in Nature Scientific Reports
Our first EHS paper, published in Trends in Hearing
Presentations
Mark talks about improving sound localisation in cochlear implant users with haptics at the 2020 British Cochlear Implant Group meeting
Mark's 2020 electro-haptics guest lecture for the "Current Developments in Bioengineering" 3rd year BEng module at Nottingham Trent University School of Engineering and Technology
Our 2019 talk at the National Cochlear Implant Users’ Association AGM
Electro-haptics visits University College London
Our 2018 presentation at the British Society of Audiology
Media and outreach
The EHS team support the Robosapiens permanent exhibition by the Science Museum of Minnesota
The EHS project featured in the British Society of Audiology magazine 'Audacity'
Our latest work featured in The Conversation
Mark talks about neuroscience and the Electro-Haptics Project to secondary-school students for the Smallpiece Trust
Research workshop at the University of Cambridge
EHS featured in Hoorzaken
Some of the professional bodies that have shared our work:
Funding
£500k grant to support Electro-Haptics Research
£155k grant to support Ahmed Bin Afif, our new PhD student
£80k grant to support our Virtual Acoustics project
£10k to fund a six-month internship on the electro-haptics project
£1k equipment fund, awarded by the Iowa Neuroscience Institute, to seed a collaboration between the University of Southampton and the University of Iowa
The team
Dr Mark Fletcher - Principal Investigator, University of Southampton
Ama Hadeedi - EHS Speech Enhancement, University of Southampton
Sean R. Mills - Tactile Neuroscience, University of Southampton
Prof. Carl Verschuur - Director, University of Southampton Auditory Implant Service
Robyn Cunningham - EHS Spatial Hearing, University of Southampton
Ahmed Bin Afif - EHS Speech Enhancement, University of Southampton
Dr Tobias Goehring - Implant Signal Processing, University of Cambridge
Nour Thini - EHS Music Enhancement, University of Southampton
Dr Devyanne Bele - Clinical Scientist, University of Southampton Auditory Implant Service
Dr Ian Wiggins - CI Neuroimaging, University of Nottingham
Marianna Vatti - DSP Engineering, Oticon Medical
Sam B.P. Perry - Real-Time DSP, University of Southampton
Prof. Rúnar Unnþórsson - Mechanical Engineering, University of Iceland
Dr Mark Steadman - Electronics & Mechanical Engineering, Imperial College London
Dr Ben Lineton - Lecturer in Audiology, University of Southampton
Dr Jeremy Marozeau - Music in CI users, Technical University of Denmark