November 22, 2010

from LabSpaces Website

Just as musical compression saves space on your mp3 player, the human brain has ways of recoding sounds to save precious processing power.

To whittle a recording of your favorite song down to a manageable pile of megabytes, computers take advantage of reliable qualities of sounds to reduce the amount of information needed.
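
That trade is easy to see in a few lines of code. The sketch below is only an illustration (it is not how mp3 works, and it has nothing to do with the study's methods): it compresses one second of a pure tone twice, once as raw samples and once after a predictor that exploits the tone's regular structure, so only the small unpredictable residual is left to store.

import zlib
import numpy as np

fs = 8000                                  # sample rate, Hz
f = 220.0                                  # frequency of the tone, Hz
n = np.arange(fs)                          # one second of samples
samples = np.round(32767 * np.sin(2 * np.pi * f * n / fs)).astype(np.int16)

# A pure tone is almost perfectly predictable: s[n] ~ 2*cos(w)*s[n-1] - s[n-2].
# Storing only the prediction error is the trick behind lossless audio coders.
w = 2 * np.pi * f / fs
s = samples.astype(np.float64)
predicted = np.round(2 * np.cos(w) * np.roll(s, 1) - np.roll(s, 2))
predicted[:2] = 0                          # the first two samples have no history
residual = (s - predicted).astype(np.int16)

print("compressed raw samples:", len(zlib.compress(samples.tobytes())), "bytes")
print("compressed residual:   ", len(zlib.compress(residual.tobytes())), "bytes")

The residual typically compresses to a fraction of the raw size, because nearly everything in the signal was predictable from the samples that came before.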

 

Collections of neurons have their own ways to efficiently encode sound properties that are predictable.

"In perception, whether visual or auditory, sensory input has a lot of structure to it," said Keith Kluender, a psychology professor at the University of Wisconsin-Madison.

 

"Your brain takes advantage of the fact that the world is predictable, and pays less attention to parts it can predict."

Along with graduate student Christian Stilp and assistant professor Timothy Rogers, Kluender co-authored a study published in this week's (Nov. 22) early online edition of the Proceedings of the National Academy of Sciences showing that listeners can become effectively deaf to sounds that do not conform to their brains' expectations.

The researchers crafted an orderly set of novel sounds that combined elements of a tenor saxophone and a French horn. The sounds also varied systematically in onset, from abrupt (like the pluck of a violin string) to gradual (like a bowed string).
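
None of the acoustic details are spelled out in the article, so the sketch below is purely hypothetical: it blends two made-up harmonic spectra standing in for the saxophone-like and horn-like qualities, ties each blend to an onset time so that the two dimensions move together, and builds one "non-conforming" sound that breaks the pattern.

import numpy as np

fs = 16000                  # sample rate, Hz
dur = 0.5                   # duration of each sound, s
f0 = 220.0                  # fundamental frequency, Hz
t = np.arange(int(fs * dur)) / fs

def make_sound(blend, attack):
    """blend: 0 = 'sax-like' spectrum, 1 = 'horn-like'; attack: onset rise time, s."""
    # Hypothetical spectral envelopes: relative strengths of the first six harmonics.
    sax_amps = np.array([1.0, 0.9, 0.7, 0.6, 0.4, 0.3])
    horn_amps = np.array([1.0, 0.5, 0.25, 0.12, 0.06, 0.03])
    amps = (1 - blend) * sax_amps + blend * horn_amps
    wave = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t) for k, a in enumerate(amps))
    onset = np.clip(t / attack, 0.0, 1.0)   # abrupt (short attack) vs. gradual
    return onset * wave / np.max(np.abs(wave))

# The orderly set: spectrum and onset change together across the series.
series = [make_sound(b, attack=0.005 + 0.2 * b) for b in np.linspace(0, 1, 7)]

# A non-conforming sound: plucky onset paired with a horn-heavy spectrum.
oddball = make_sound(blend=0.9, attack=0.005)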

 

These sounds were played in the background while test subjects played with Etch-a-Sketches.

After a little more than seven minutes, listeners completed trials where they were asked to identify one sound in a set of three that was unlike the other two.

Distinguishing sounds that varied in instrument and onset in the same way as those they had just heard was a simple matter. But sounds that didn't fit (with, say, more pluck and not enough saxophone) were completely lost to the listeners.

 

They could not correctly identify one of the non-conforming sounds as the odd one among three examples.

"They're so good at perceiving the correlations between the orderly sounds, that's all they hear," says Kluender, whose work is funded by the National Institute of Deafness and Other Communication Disorders.

 

"Perceptually, they've discarded the physical attributes of the sounds."

The results jibe well with theoretical descriptions of an efficient brain, and the researchers were able to accurately predict listener performance using a computational model simulating brain connections.
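
The article doesn't say how that model works, so the sketch below substitutes a textbook stand-in for the same efficient-coding idea: a single Hebbian unit trained with Oja's rule on two correlated stimulus dimensions (call them instrument blend and onset). Once trained, the unit's response separates sounds that differ along the learned correlation but barely changes for sounds that differ against it.

import numpy as np

rng = np.random.default_rng(0)

# Training stimuli: the two dimensions (blend, onset) co-vary, plus a little jitter.
n = 5000
c = rng.uniform(-1, 1, n)
x = np.stack([c, c], axis=1) + 0.05 * rng.normal(size=(n, 2))

w = 0.1 * rng.normal(size=2)               # connection weights onto one unit
eta = 0.01                                  # learning rate
for xi in x:
    y = w @ xi                              # unit's response to this sound
    w += eta * y * (xi - y * w)             # Oja's rule: Hebbian learning, kept stable

def coded_difference(a, b):
    """How different two stimuli look after passing through the learned encoding."""
    return abs(w @ a - w @ b)

on_pair = (np.array([-0.5, -0.5]), np.array([0.5, 0.5]))     # conforming sounds
off_pair = (np.array([-0.5, 0.5]), np.array([0.5, -0.5]))    # non-conforming sounds
print("learned weights:", np.round(w, 2))
print("difference along the correlation:  ", round(coded_difference(*on_pair), 2))
print("difference against the correlation:", round(coded_difference(*off_pair), 2))

The weights settle onto the correlated direction, so the non-conforming pair, though physically just as far apart as the conforming one, produces almost no difference in the unit's output, mirroring the pattern the listeners showed.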

"The world around us isn't random," Stilp says.

 

"If you have an efficient system, you should take advantage of that in the way you perceive the world around you. That's never been demonstrated this clearly with people."

To avoid having to carefully take in and remember every last bit of visual or audible stimulus it encounters, the mind quickly acquaints itself with the world's predictability and redundancy.

"That's part of why people can understand speech even in really terrible conditions," Kluender says.

 

"You can press your ear to the wall in a cheap apartment and make out a conversation going on next door even though the wall removes two-thirds of the acoustic information. From just small pieces of sounds, your brain can predict the rest."