Brain Programmed To Anticipate Sounds

Scientists find the route to listening is more complex than they thought.

Originally published: Jun 10 2015 – 8:00am, Inside Science News Service
By: Joel N. Shurkin, Contributor

(Inside Science) — You are sitting in a concert hall about to hear Beethoven’s Fifth Symphony, anticipating, among other things, the famous first four notes. When they come, they sound just as you thought they would.

Image credits: man with headphones, Warren Goldswain via Shutterstock; composite image, Michael Greshko

That anticipation may not just be the fact that you know intellectually what’s coming, but something quite physiological: your brain is anticipating some essential properties of the sound and may even be adjusting what you will hear toward what you are expecting. According to research in Germany and the United Kingdom, sound perception is often “top-down”: signals travel from the ear up to the brain and back down to the midbrain in the auditory system. Conventional theory has been fundamentally focused on bottom-up processing, from ear to brain.

The process is thought to be particularly true of pitch, the degree of highness or lowness of a tone, said Emili Balaguer-Ballester, a computational neuroscientist at Bournemouth University in England.

“What your brain expects to hear can be as important as the sound itself,” he said. In these researchers’ hypothesis, the adjustment made by the anticipation occurs in a matter of milliseconds.

In the traditional theory, the vibrations that produce sound enter the ear, where they are analyzed in the cochlea, an organ in the inner ear that sends nerve impulses to the brain stem. The brain stem then combines the sound information coming in from the cochleas of both ears.

Then the components of the sound go to the auditory cortex, where the pitch is encoded and ultimately rendered into a representation of what we hear. Sound processing in the brain stem is so reliable that researchers can monitor the neural activity encoding the sound using an imaging technique called magnetoencephalography.

This is a new technology used by Andre Rupp’s lab at Heidelberg University, in Germany. They send the data to Balaguer-Ballester, who was trained as a physicist, and his graduate student, Alejandro Tabas, for analysis.

Their work is published in the journal PLOS Computational Biology.

These data would enable the researchers to answer a puzzle: How can a listener know whether a speaker is male or female, or recognize Beethoven’s first four notes, in less time than the tens or hundreds of milliseconds it takes for a nerve impulse to make the trip from ear to brain? How can we discern sound patterns before we should?

The new work suggests that the brain is continuously making predictions of the next sound in advance, based on expectations. Therefore, with very limited information about the input sound, the auditory cortex can create an almost instantaneous image of the identity of a speaker, the family of an instrument or the pitch of a note in advance, Balaguer-Ballester said.

What happens when the actual sound does not match the expectation? The auditory system gets more information from the brain stem area and modifies the sound to make it form “a more likely pattern,” he said.
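The prediction-and-correction loop described above resembles what computational neuroscientists call predictive coding. The toy Python sketch below is purely illustrative and not taken from the researchers’ model: perception is treated as a weighted blend of the expected pitch and the incoming pitch, and the expectation is then nudged toward the input by a fraction of the prediction error. All function names and parameter values here are invented for the example.

```python
# Toy illustration of a "top-down" predictive update (not the authors' model):
# what is heard is pulled toward what was expected, and the expectation is
# corrected toward the actual input when they disagree.

def perceive(expected_hz, observed_hz, trust_in_expectation=0.6):
    """Blend expectation and input; a strong prior pulls perception toward it."""
    return trust_in_expectation * expected_hz + (1 - trust_in_expectation) * observed_hz

def update_expectation(expected_hz, observed_hz, learning_rate=0.5):
    """Move the expectation toward the input by a fraction of the error."""
    return expected_hz + learning_rate * (observed_hz - expected_hz)

expected = 440.0                   # listener expects concert A (440 Hz)
incoming = [440.0, 442.0, 430.0]   # actual sounds; the last one is off-pitch
for hz in incoming:
    heard = perceive(expected, hz)
    expected = update_expectation(expected, hz)
    print(f"input {hz:.0f} Hz -> heard ~{heard:.1f} Hz, new expectation {expected:.1f} Hz")
```

With these made-up weights, a slightly sharp 442 Hz note is heard as close to the expected 440 Hz, while a clearly off 430 Hz note produces a large prediction error that shifts the expectation toward the new input, loosely mirroring the "modify the sound toward a more likely pattern" idea in the article.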

The possibility that expectation plays a role in sound perception is not entirely new, according to Josef Rauschecker, director of the Laboratory of Integrative Neuroscience and Cognition at Georgetown University in Washington. The Balaguer-Ballester paper takes it further than others, he said, and it has clear clinical implications.

Rauschecker and his laboratory use the concept in researching tinnitus, the persistent ringing in the ears and the most common auditory disorder. It is a very trendy topic in auditory science, he said.

“It has been one of the big unsolved problems, how perception happens.”

The anatomy involved “has been known forever,” Rauschecker said, including research done a decade ago by those studying how bats process echolocation. Most researchers think bats send out signals and then compare what returns with what was sent out in a running analysis.

In 2005, researchers at the Johns Hopkins Institute for Basic Biomedical Sciences discovered a region deep in the brain of monkeys that processed pitch, and reported that humans probably had the same structure. The monkeys also used a top-down method to process sound.

There is evidence that the top-down process goes even further than the research done at Bournemouth and Heidelberg, down as far as the thalamus, an organ deep in the brain that relays sense inputs to other parts of the brain, and into the cochlea. That the thalamus is involved in pitch perception, Rauschecker said, is not a surprise.

Joel Shurkin is a freelance writer based in Baltimore. He is the author of nine books on science and the history of science, and has taught science journalism at Stanford University, UC Santa Cruz and the University of Alaska Fairbanks. He tweets at @shurkin.
