
October 9, 2013

New strategy lets cochlear implant users hear music

UW News

For many, music is a universal language that unites people when words cannot. But for those who use cochlear implants – technology that allows deaf and hard of hearing people to comprehend speech – hearing music remains extremely challenging.

Illustration of a cochlear implant. Image: NIH

University of Washington scientists hope to change this. They have developed a new way of processing the signals in cochlear implants to help users hear music better. The technique lets users perceive differences between musical instruments, a significant improvement over what standard cochlear implants can offer, said lead researcher Les Atlas, a University of Washington professor of electrical engineering.

“Right now, cochlear-implant subjects do well when it’s quiet and there is a single person talking, but with music, noisy rooms or multiple people talking, it’s difficult to hear,” Atlas said. “We are on the way to solving the issue with music.” Atlas and other researchers believe that hearing music has possible links to hearing speech better in noisy settings, another goal of this research.

Atlas and collaborator Jay Rubinstein, a UW professor of otolaryngology and of bioengineering, and members of their labs recently published their initial findings in the IEEE Transactions on Neural Systems and Rehabilitation Engineering. A study of eight cochlear-implant users showed that the new coding strategy let them distinguish between musical instruments much more accurately than standard devices do.

The researchers hope to fine-tune the signal processing to make it compatible with cochlear implants already on the market so users can improve their music perception right away. They also are working on algorithms to better support device users’ perception of pitch and melody.

“This is the critical first step that opened the door,” Atlas said.

A cochlear implant is a small, electronic device that lets a person who is profoundly deaf or hard of hearing perceive sound. One piece is placed on the skin behind a person’s ear, while another portion is surgically inserted under the skin. The implant works by directly stimulating the auditory nerve, bypassing damaged portions of the ear. The implant’s signals are sent to the brain, which recognizes the signals as sounds.
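
To make that concrete, here is a minimal sketch, in Python, of the envelope-based coding that conventional implant processors broadly follow: split the sound into frequency bands, extract each band’s slowly varying envelope, and let those envelopes drive the pulse trains on the electrodes. The sample rate, band edges and channel count below are illustrative assumptions, not the parameters of any real device or of the UW strategy.

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    FS = 16000          # assumed audio sample rate, in Hz
    N_CHANNELS = 8      # real implants typically use 12 to 22 electrodes

    def channel_envelopes(audio, fs=FS, n_channels=N_CHANNELS):
        # Split the audio into log-spaced frequency bands; each band's
        # envelope would modulate the pulse train on one electrode.
        edges = np.logspace(np.log10(250), np.log10(6000), n_channels + 1)
        envelopes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, audio)
            envelopes.append(np.abs(hilbert(band)))  # magnitude envelope
        return np.array(envelopes)  # shape: (n_channels, n_samples)

Only the slow envelopes survive this kind of processing; the fast harmonic fine structure that carries musical pitch and instrument timbre is largely discarded, which is broadly the limitation the UW strategy targets.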

Cochlear implants are different from hearing aids, which amplify sounds so they can be detected by damaged ears.

The UW scientists developed a new way to process the sounds of musical melodies and notes, which tend to be more complex than speech. Specifically, they tried to improve the ability of cochlear-implant users to detect pitch and timbre in songs.

Pitch is associated with the melody of a song and with intonation when speaking. Timbre, while hard to define, relates most closely to the varying sounds that different instruments make when playing the same note. For example, a bass will sound very different from a flute when both play middle C.
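
The distinction shows up clearly in a toy example (Python again; the harmonic weights below are invented for illustration, not measured from real instruments). Two synthetic “instruments” play the same middle C, so their pitch is identical, but the energy is spread differently across the harmonics, and that spread is the timbre.

    import numpy as np

    FS = 16000                     # assumed sample rate, in Hz
    t = np.arange(int(0.5 * FS)) / FS
    F0 = 261.63                    # fundamental of middle C, in Hz

    def note(harmonic_weights):
        # Sum the harmonics of F0 with the given relative amplitudes.
        return sum(w * np.sin(2 * np.pi * F0 * (k + 1) * t)
                   for k, w in enumerate(harmonic_weights))

    flute_like = note([1.0, 0.2, 0.05])     # energy mostly in the fundamental
    bass_like = note([1.0, 0.8, 0.6, 0.4])  # strong upper harmonics as well

    for name, sig in [("flute-like", flute_like), ("bass-like", bass_like)]:
        spectrum = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1 / FS)
        print(name, "strongest component:", freqs[np.argmax(spectrum)], "Hz")

Both notes report the same strongest frequency (middle C, to within the FFT’s resolution), so their pitch matches; it is the differing harmonic amplitudes that the ear hears as two different instruments, and it is exactly that detail that standard envelope coding blurs.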

People who use cochlear implants usually perceive words by their syllables and rhythms, not through tone or inflection. The researchers tested the new processing technique on cochlear-implant users by playing common melodies such as “Twinkle, Twinkle, Little Star” with the rhythms removed. They found that timbre recognition – the ability to distinguish between instruments – increased significantly, but perceiving a melody remained difficult for most participants.

“This is the first time anyone has demonstrated increased timbre perception using a different signal-processing system,” said Rubinstein, a physician at the UW Medical Center and Seattle Children’s Hospital and director of the Virginia Merrill Bloedel Hearing Research Center. “With cochlear implants, we’ve always been oriented more toward speech sounds. This strategy represents a different way of thinking about signal processing for music.”

Atlas has a background in music, having designed guitar amplifiers and effects for rock musicians before becoming an electrical engineering professor. Rubinstein plays a variety of instruments and has been a classical, jazz and blues musician since he was 5 years old. His interest in neuroscience started around that time, when he wondered why minor chords sound sad. Rubinstein comes from a musical engineering family – his brother, Jon Rubinstein, led the development of the iPod.

Co-authors are Xing Li, who recently completed her doctorate in electrical engineering at the UW; Kaibao Nie, an otolaryngology lecturer and adjunct lecturer in electrical engineering; and Nikita Imennov, who recently completed his UW doctorate in bioengineering.

This research was funded by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, the Institute of Translational Health Sciences at the UW and a Virginia Merrill Bloedel Scholar Award.

###

For more information, contact Atlas at atlas@uw.edu or 206-526-0329, and Rubinstein at rubinj@uw.edu or 206-616-6655.
