The major goal of our research is to understand the processing and perception of human speech, music, and natural sounds. My laboratory studies how biologically important sounds are processed and perceived. We are interested in the neural and circuit mechanisms underlying temporal processing, and in contextual and brain-state modulations of sound perception. Our studies are carried out in awake marmoset monkeys, a social non-human primate model with stronger homology to the human brain than rodents. We mainly perform intracellular recordings and single- or multi-electrode extracellular recordings from the auditory cortex and/or thalamus of awake, head-fixed marmosets while the animals passively listen to sound stimuli or perform a sound discrimination task. These experiments allow us to correlate neural activity with animal behavior and to examine the subthreshold mechanisms underlying sound processing and perception. Optogenetic or pharmacological manipulations are applied to activate or inactivate neural activity in a focal brain area to study the circuit mechanisms underlying sound processing and perception.
1. Temporal processing in thalamocortical circuitry
How the brain processes temporal information embedded in sound remains a core question in auditory research. Temporal information over a wide range of time scales conveys perceptually important information. High-frequency components constitute the fine temporal structure of sound, which underlies percepts such as pitch and roughness. Low-frequency modulations on the millisecond scale constitute the coarse temporal structure of sound, which is important for speech perception and melody recognition. The representation of such temporal structure undergoes a transformation between the auditory periphery and the auditory cortex. Our laboratory is interested in understanding how the auditory system processes temporal information to form perceptual representations of biologically important sounds, and the underlying neural and circuit mechanisms.
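The distinction between fine and coarse temporal structure can be illustrated with a sinusoidally amplitude-modulated tone, in which a fast carrier supplies the fine structure and a slow modulator supplies the envelope. A minimal sketch in Python (the carrier and modulation frequencies here are illustrative values, not parameters from our experiments):

```python
import numpy as np

fs = 20000                      # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 500 ms of time

carrier_hz = 1000               # fast carrier: fine temporal structure
mod_hz = 8                      # slow modulator: coarse temporal structure (envelope)

# Envelope varies between 0 and 1 at the modulation rate.
envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))

# Fine structure is the fast oscillation at the carrier frequency.
fine_structure = np.sin(2 * np.pi * carrier_hz * t)

# The amplitude-modulated tone is the product of the two.
am_tone = envelope * fine_structure
```

Stimuli of this kind are commonly used to probe temporal processing: the modulation rate can be varied while the carrier is held fixed, dissociating responses to the envelope from responses to the fine structure.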
2. Contextual and brain-state modulations of sound perception
The natural acoustic environment is composed of sounds arriving from many sources at any time. Humans and animals can perceive a particular sound while filtering out other sounds and background noise, as exemplified by the cocktail party effect. In other cases, a sound cannot be unambiguously identified without reference to its stimulus context, such as consonant–vowel combinations in speech. Extracellular recording studies have demonstrated long-lasting contextual modulations (> 500 ms) of spiking activity in the auditory cortex of many species. However, the mechanisms underlying contextual modulation in the auditory cortex remain largely unknown. Our laboratory is interested in understanding how attention and brain state modulate auditory perception and contextual sound processing.
3. Subcortical processing of complex sounds and harmonicity
We live in an acoustic environment full of harmonic sounds, whose component frequencies are integer multiples of a fundamental frequency. Many natural and man-made sounds, such as species-specific animal vocalizations, human speech, and the sounds of many musical instruments, contain rich harmonic structure. The physiological, anatomical, and developmental bases of harmonicity processing across mammalian species remain largely unknown, especially in subcortical regions. Our laboratory is interested in understanding the neural bases of complex sound and harmonicity processing in subcortical regions.
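The definition above can be made concrete with a harmonic complex tone, in which each component sits at an integer multiple n · f0 of the fundamental frequency. A minimal sketch in Python (the fundamental frequency and harmonic count are arbitrary illustrative choices, not stimulus parameters from our studies):

```python
import numpy as np

fs = 20000                       # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)    # 200 ms of time
f0 = 200                         # fundamental frequency (Hz)
n_harmonics = 10                 # number of harmonic components

# Each component frequency is an integer multiple of f0: f0, 2*f0, ..., 10*f0.
harmonics = [np.sin(2 * np.pi * n * f0 * t) for n in range(1, n_harmonics + 1)]

# Sum the components and normalize to keep the amplitude bounded.
complex_tone = np.sum(harmonics, axis=0) / n_harmonics
```

Such complexes evoke a pitch at f0 even when the fundamental component is removed (the "missing fundamental"), which is one reason harmonicity is a useful probe of subcortical and cortical pitch processing.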