Kavli Affiliate: John Serences
| Authors: Margaret M Henderson, John T Serences and Nuttida Rungratsameetaweemana
| Summary:
Everyday perceptual tasks require sensory stimuli to be dynamically encoded and analyzed according to changing behavioral goals. For example, when searching for an apple at the supermarket, you might first find the Granny Smith apples by separating all visible apples into the categories “green” and “non-green”. However, suddenly remembering that your family actually likes Fuji apples would necessitate reconfiguring the boundary to separate “red” from “red-yellow” objects. This flexible processing enables identical sensory stimuli to elicit varied behaviors based on the current task context. While this phenomenon is ubiquitous in nature, little is known about the neural mechanisms that underlie such flexible computation. Traditionally, sensory regions have been viewed as mainly devoted to processing inputs, with limited involvement in adapting to varying task contexts. However, from the standpoint of efficient computation, it is plausible that sensory regions integrate inputs with current task goals, facilitating more effective information relay to higher-level cortical areas. Here we test this possibility by asking human participants to visually categorize novel shape stimuli based on different linear and non-linear boundaries. Using fMRI and multivariate analyses of retinotopically defined visual areas, we found that shape representations in visual cortex became more distinct across relevant decision boundaries in a context-dependent manner, with the largest changes in discriminability observed for stimuli near the decision boundary. Importantly, these context-driven modulations were associated with improved categorization performance. Together, these findings demonstrate that codes in visual cortex are adaptively modulated to optimize object separability based on currently relevant decision boundaries.
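
The abstract does not include code, but the general logic of the multivariate analysis it describes can be illustrated with a small simulation. The sketch below is purely hypothetical: it generates fake voxel response patterns to shapes along a one-dimensional feature axis, then measures cross-validated decoding accuracy across a category boundary in two simulated "task contexts". The `width` parameter, the trial and voxel counts, and the tanh response model are all assumptions made for illustration, not the authors' actual pipeline or data.

```python
# Hypothetical sketch only -- not the authors' analysis code or data.
# Simulates voxel response patterns to shapes arranged along a single
# feature dimension, then measures how well a cross-validated linear
# classifier separates patterns across a category boundary in two
# "task contexts" (boundary-relevant vs. boundary-irrelevant).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_voxels = 50     # assumed ROI size (arbitrary)
n_trials = 400    # trials per context (arbitrary)
boundary = 0.0    # currently relevant category boundary

# Stimulus position along a 1-D shape dimension, and category labels.
shape_axis = rng.uniform(-1.0, 1.0, size=n_trials)
labels = (shape_axis > boundary).astype(int)

# Each voxel carries a random tuning weight for the shape dimension.
tuning = rng.normal(size=n_voxels)

def simulate_patterns(width):
    """Voxel patterns = tuned tanh response + Gaussian noise.

    `width` controls how sharply responses transition at the boundary;
    a smaller width stands in for context-dependent sharpening of
    representations near the relevant boundary (an assumption made
    purely for illustration).
    """
    drive = np.tanh((shape_axis - boundary) / width)
    signal = drive[:, None] * tuning[None, :]
    noise = rng.normal(scale=1.0, size=(n_trials, n_voxels))
    return signal + noise

near = np.abs(shape_axis - boundary) < 0.3  # stimuli close to the boundary
clf = LogisticRegression(max_iter=1000)

for context, width in [("boundary irrelevant", 1.0), ("boundary relevant", 0.2)]:
    X = simulate_patterns(width)
    acc_all = cross_val_score(clf, X, labels, cv=5).mean()
    acc_near = cross_val_score(clf, X[near], labels[near], cv=5).mean()
    print(f"{context}: accuracy all={acc_all:.2f}, near boundary={acc_near:.2f}")
```

Under these toy assumptions, decoding accuracy rises when the boundary is relevant, with the largest gain for near-boundary stimuli, which loosely mirrors the pattern of results the abstract reports.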