In both animals and humans, vocal signals used for communication contain a wide array of sounds determined by the vibrational frequencies of the vocal cords. For example, the pitch of someone's voice, and how it changes as they speak, depends on a complex series of varying frequencies. Knowing how the brain sorts out these frequency-modulated (FM) sweeps is believed to be essential to understanding many hearing-related behaviors, such as speech perception.
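For readers who want a concrete picture, a simple linear FM sweep can be synthesized in a few lines of Python. This is an illustrative sketch, not the stimuli used in the study; the function name and parameters are our own.

```python
import numpy as np

def fm_sweep(f_start, f_end, duration, sample_rate=44100):
    """Synthesize a linear FM sweep (chirp) from f_start to f_end Hz.

    The instantaneous frequency rises (f_end > f_start) or falls
    (f_end < f_start) linearly over the sweep's duration.
    """
    t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    # Phase is the integral of instantaneous frequency:
    #   f(t) = f_start + (f_end - f_start) * t / duration
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

rising = fm_sweep(500, 2000, duration=0.1)   # upward sweep
falling = fm_sweep(2000, 500, duration=0.1)  # downward sweep
```

A 100-millisecond upward sweep from 500 to 2,000 Hz is roughly the timescale of the pitch contours that distinguish syllables in tone languages.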
Now, a pair of biologists at the California Institute of Technology, in Pasadena, has identified how and where the brain processes this type of sound signal (Neuron, March 8, 2012).
Knowing the direction of an FM sweep - whether it is rising or falling - and decoding its meaning are important in every language. The significance of sweep direction is most evident in tone languages such as Mandarin Chinese, in which a rising or dipping frequency within a single syllable can change the meaning of a word.
In the current study, the researchers pinpointed the brain region in rats where the task of sorting FM sweeps begins.
"This type of processing is very important for understanding language and speech in humans," said principal investigator Guangying Wu, PhD, a senior research fellow in brain circuitry. "Some people have deficits in processing this kind of changing frequency. They experience difficulty in reading and learning language and in perceiving the emotional states of speakers. Our research might help us understand these types of disorders and may give some clues for future therapeutic designs or designs for prostheses like hearing implants."
The research team found that the processing of FM sweeps begins in the midbrain, an area located below the cerebral cortex near the center of the brain. This was a surprise, Dr. Wu said. "Some people thought this type of sorting happened in a different region, for example in the auditory nerve or in the brainstem. Others argued that it might happen in the cortex or thalamus."
To acquire high-quality in vivo measurements in the midbrain, which lies deep within the brain, the team designed a novel technique using two paired, coaxial electrodes. Previously, it had been very difficult for scientists to acquire recordings in areas such as the midbrain, thalamus, and brainstem. Dr. Wu believes the new method will be applicable to a wide range of deep-brain research studies.
In addition to finding the site where FM sweep selectivity begins, the researchers discovered how auditory neurons in the midbrain respond to these frequency changes. By combining their recordings with computational models, they confirmed that the recorded neurons responded selectively to FM sweeps according to direction: some neurons were more sensitive to upward sweeps, while others responded more strongly to downward sweeps.
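The idea of direction selectivity can be sketched in code: a detector that tracks the dominant frequency of a sound over time and classifies the trend as upward or downward. This is a crude illustrative stand-in, not the neural model from the study; all names here are our own.

```python
import numpy as np

def linear_chirp(f0, f1, dur, fs=44100):
    """Generate a linear FM sweep from f0 to f1 Hz lasting dur seconds."""
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur)))

def sweep_direction(signal, fs=44100, win=256):
    """Classify a sweep as 'upward' or 'downward'.

    Tracks the dominant frequency in consecutive short windows, then
    fits a line to those peaks; the sign of the slope gives the sweep
    direction - a toy analogue of a direction-selective response.
    """
    peaks = []
    for start in range(0, len(signal) - win + 1, win):
        spectrum = np.abs(np.fft.rfft(signal[start:start + win]))
        peaks.append(np.argmax(spectrum) * fs / win)  # peak bin -> Hz
    slope = np.polyfit(np.arange(len(peaks)), peaks, 1)[0]
    return "upward" if slope > 0 else "downward"

print(sweep_direction(linear_chirp(500, 4000, 0.1)))   # upward
print(sweep_direction(linear_chirp(4000, 500, 0.1)))   # downward
```

A neuron tuned in this way would fire preferentially for one sign of the slope and stay quiet for the other, which is the kind of selectivity the team recorded in the midbrain.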
"Our findings suggest that neural networks in the midbrain can convert from non-selective neurons that process all sounds to direction-selective neurons that help us give meanings to words based on how they are spoken," said Dr. Wu. "That's a very fundamental process."
He plans to continue this line of research, with an eye toward helping people with hearing-related disorders. "We might be able to target this area of the midbrain for treatment in the near future," he said.
The research team included Richard Kuo, a research technician in Dr. Wu's laboratory at the time of the study and now a graduate student at the University of Edinburgh.
The study was funded by grants from the Broad Fellows Program in Brain Circuitry of the Broad Foundation and Caltech.