Kavli Affiliate: Tatyana Sharpee
| Authors: Ryan J. Rowekamp and Tatyana Sharpee
| Summary:
Abstract: Despite recent successes in machine vision, artificial recognition systems continue to be less robust than biological systems. The brittleness of artificial recognition systems has been attributed to the linearity of the core operation that matches inputs to target patterns at each stage of the system. Here we analyze responses of neurons from visual areas V1, V2, and V4 of the brain using a framework that incorporates quadratic computations into multi-stage models. These quadratic computations make it possible to capture local recurrent computation, and in particular, nonlinear suppressive interactions between visual features. We find that incorporating quadratic computations not only strongly improved the predictive power of the resulting models, but also revealed several computational motifs that increased the selectivity of neural responses to natural stimuli. These motifs included the organization of excitatory and suppressive features along mutually exclusive hypotheses about incoming stimuli, such as orthogonal orientations or opposing motion directions. The balance between excitatory and suppressive features was largely maintained across brain regions. These results emphasize the importance and properties of quadratic computations that are necessary for achieving robust object recognition.

Competing Interest Statement: The authors have declared no competing interest.
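The notion of a quadratic computation with excitatory and suppressive features can be illustrated with a minimal sketch. This is not the authors' code or model; the dimensionality, kernel values, and the rectifying output nonlinearity below are all assumptions chosen for illustration. The sketch shows a single quadratic stage, r(x) = g(a + b·x + xᵀJx), and how the eigenvectors of the symmetric kernel J split into excitatory (positive eigenvalue) and suppressive (negative eigenvalue) features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single quadratic stage (illustrative only, not the authors' model).
dim = 16                                  # assumed stimulus dimensionality (e.g., pixels)
a = 0.1                                   # offset
b = 0.1 * rng.normal(size=dim)            # linear weights
J = rng.normal(size=(dim, dim))
J = 0.5 * (J + J.T)                        # symmetrize the quadratic kernel

def quadratic_response(x, a, b, J):
    """Quadratic drive a + b.x + x.J.x passed through a simple rectifier g(.)."""
    drive = a + b @ x + x @ J @ x
    return np.maximum(drive, 0.0)

# Eigen-decomposition of J separates the two kinds of features:
# eigenvectors with positive eigenvalues increase the drive (excitatory),
# eigenvectors with negative eigenvalues decrease it (suppressive).
eigvals, eigvecs = np.linalg.eigh(J)
excitatory = eigvecs[:, eigvals > 0]
suppressive = eigvecs[:, eigvals < 0]

x = rng.normal(size=dim)                   # a random "stimulus"
print("response:", quadratic_response(x, a, b, J))
print("excitatory features:", excitatory.shape[1],
      "suppressive features:", suppressive.shape[1])
```

In this toy picture, stacking such stages yields a multi-stage model in which each stage can express nonlinear suppressive interactions between features, rather than a purely linear template match.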