Context-dependent sensory modulation underlies Bayesian vocal sequence perception

Kavli Affiliate: Ali Brivanlou, Eric Siggia

Authors: Anna Yoney, Lu Bai, Ali H. Brivanlou, and Eric D. Siggia

Summary:

Vocal communication in both songbirds and humans relies on the categorical perception of smoothly varying acoustic spaces. Vocal perception can be biased by expectation and context, but the mechanisms of this bias are not well understood. We developed a behavioral task in which songbirds (European starlings) are trained to classify smoothly varying song syllables presented in the context of predictive syllable sequences. We find that syllable-sequence predictability biases perceptual categorization in a manner consistent with a Bayesian model of probabilistic information integration. We then recorded from populations of neurons in the auditory forebrain while birds actively categorized song syllables, and observed that large proportions of neurons track the smoothly varying natural feature space of syllable categories. Predictive information in the syllable sequences dynamically modulates these sensory neural representations. These results support a Bayesian model of perception in which predictive information dynamically reallocates sensory neural resources, sharpening acuity (i.e., the likelihood) in high-probability regions of stimulus space.
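To make the Bayesian framing concrete, here is a minimal sketch of how a sequence-derived prior can bias categorization along a smoothly varying acoustic axis. It is illustrative only, not the paper's actual model: the one-dimensional morph axis, the Gaussian likelihoods, and the specific prior values (0.5 vs. 0.8) are all assumptions made for the example.

```python
import numpy as np

# Illustrative Bayesian categorization sketch (not the paper's fitted model).
# A syllable is summarized by a 1-D acoustic feature x that morphs smoothly
# between category A and category B. The sequence context supplies a prior
# over the two categories; sensory evidence supplies Gaussian likelihoods.

def posterior_prob_A(x, prior_A=0.5, mu_A=-1.0, mu_B=1.0, sigma=1.0):
    """P(A | x): posterior for category A given the morph value x."""
    like_A = np.exp(-0.5 * ((x - mu_A) / sigma) ** 2)  # likelihood under A
    like_B = np.exp(-0.5 * ((x - mu_B) / sigma) ** 2)  # likelihood under B
    post_A = prior_A * like_A
    post_B = (1.0 - prior_A) * like_B
    return post_A / (post_A + post_B)

# Morph stimuli spanning the A-to-B continuum.
x = np.linspace(-3, 3, 13)

# With a neutral prior the category boundary sits at the midpoint (x = 0);
# a predictive sequence favoring A (prior_A = 0.8) shifts the boundary
# toward B, so ambiguous syllables are more often classified as A.
neutral = posterior_prob_A(x, prior_A=0.5)
biased = posterior_prob_A(x, prior_A=0.8)
print("boundary (neutral prior):", x[np.argmin(np.abs(neutral - 0.5))])
print("boundary (biased prior): ", x[np.argmin(np.abs(biased - 0.5))])
```

Running this shifts the 50% point of the psychometric function toward the unexpected category, which is the behavioral signature of prior-biased categorization that the summary describes.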
