To hear a friend talking, sound waves enter your ear canal and travel to the cochlea. Inside, the waves wiggle thousands of tiny hair cells, which send electrical impulses to the brain. But the hair cells don't just pass data along; they also amplify weak signals that would otherwise be too faint to hear.
[Image: Sound waves enter the ear canal, vibrate the tympanic membrane, and are then transferred into the cochlea. Thousands of hair cells inside the cochlea then transmit electrical pulses to the brain. Image credit: Inductiveload.]
The model allowed the researchers to accurately capture compression rates and to isolate low-intensity sounds more effectively. Other models have focused on the bifurcation point itself, the specific point of divergence, but for this work the team deliberately tuned their model away from it. Not only could the model identify a pitch, it could also successfully suppress the frequencies they didn't want to hear.
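The paper's actual equations and parameters aren't reproduced in this article, but the generic ingredient in this line of auditory research is an oscillator poised near a Hopf bifurcation. A minimal Python sketch of a sinusoidally forced Hopf-type oscillator, tuned slightly below the bifurcation (all parameter values here are illustrative, not the team's), shows the hallmark compressive response: the weakest inputs receive the largest gain.

```python
import numpy as np

def hopf_response(F, mu=-0.05, omega0=1.0, dt=0.002, t_max=400.0):
    """Steady-state amplitude of a forced Hopf-type oscillator:
        dz/dt = (mu + i*omega0) * z - |z|^2 * z + F * exp(i*omega0*t)
    mu < 0 keeps the system just below the bifurcation (quiescent
    without input). Integrated with simple Euler steps; returns |z|
    averaged over the final quarter of the run, after transients decay.
    """
    n = int(t_max / dt)
    z = 0.0 + 0.0j
    amps = []
    for k in range(n):
        t = k * dt
        dz = (mu + 1j * omega0) * z - (abs(z) ** 2) * z \
             + F * np.exp(1j * omega0 * t)
        z += dz * dt
        if t > 0.75 * t_max:  # record only after transients have decayed
            amps.append(abs(z))
    return float(np.mean(amps))

# Gain (response amplitude / stimulus amplitude) shrinks as the
# stimulus grows: compressive amplification of weak signals.
for F in (1e-3, 1e-2, 1e-1):
    r = hopf_response(F)
    print(f"F = {F:.0e}  ->  |z| = {r:.4f}, gain = {r / F:.1f}")
```

In this toy version, a 100-fold increase in stimulus amplitude produces far less than a 100-fold increase in response, which is the compressive behavior the cochlea is known for.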
Being able to reproduce this ability could be valuable. Damaged hair cells in the inner ear lead to overall hearing loss, as well as a reduced ability to isolate sounds that are still audible. Conventional hearing aids require working hair cells, so the more damage the hair cells in your inner ear have sustained, the less a hearing aid can help.
New hearing aids could use the research team's technique to better reproduce the electrical signals that the missing inner-ear hair cells would otherwise create. This could help a segment of the population whose hearing damage present-day hearing aids cannot address.
But humans aren't the only ones this new model could help; computers can also benefit from this kind of auditory isolation. As artificial intelligence becomes more responsive, a phone wouldn't just respond to anyone's voice: it could pick out and respond to yours alone. Phone calls from crowded places could also become easier to follow if the phone isolated your voice and suppressed the background noise on the call.
Giving robots voice commands in noisy or chaotic environments could also prove useful. Robots used in emergency response often have to pick through a great deal of background noise, and the ability to tune in to and recognize human voices amid the chaos could be crucial to locating survivors faster.
The team’s results were published on February 27, 2014 in the journal Physical Review Applied.