Recurrent connections enable point attractor dynamics and dimensionality reduction in a connectome-constrained model of the insect learning center

Kavli Affiliate: Grace Hwang and Kechen Zhang

| Authors: Justin Joyce, Raphael Norman-Tenazas, Patricia Rivlin, Grace M. Hwang, Isaac Western, Kechen Zhang, William Gray-Roncal and Brian Robinson

| Summary:

The insect learning center, the mushroom body (MB), with its predominant population of Kenyon cells (KCs), is a widely studied model system for investigating neural processing principles, both experimentally and theoretically. While many computational models of the MB have been studied, the computational role of recurrent connectivity between KCs remains inadequately understood. Dynamical point attractors are a candidate theoretical framework in which recurrent connections in a neural network enable a discrete set of stable activation patterns. However, because the detailed, full recurrent connectivity patterns of biological neuron populations are mostly unknown, it has been unclear how theoretical models are substantiated by specific networks found in biology. Leveraging the recent release of the full synapse-level connectivity of the MB in the fly, we performed a series of analyses and network model simulations to investigate the computational role of the recurrent KC connections, especially their significance in attractor dynamics. Structurally, the recurrent excitation (RE) connections are highly symmetric and balanced with feedforward input. In simulations, RE facilitates dimensionality reduction and allows a small set of self-sustaining point attractor states to emerge. To further quantify the possible range of network properties mediated by RE, we systematically explored the dynamical regimes enabled by varying recurrent connectivity strength. Finally, we establish connections between our findings and potential functional or behavioral implications. Overall, our work provides quantitative insights into the possible functional role of the recurrent excitatory connections in the MB by quantifying point attractor network dynamics within a highly recurrent network model constrained by a full synapse-level connectome. These findings advance our understanding of how biological neural networks may utilize point attractor dynamics.
| Author Summary:

Point attractor neural networks are widely used theoretical models of associative memory, in which recurrent connections between neurons enable a discrete set of stable activation patterns that can recover a full pattern from partial cues. However, the detailed recurrent connectivity patterns of biological neuron populations are largely unknown, raising questions about the precise correspondence between theoretical point attractor models and neural networks found in biology. Recent breakthroughs have unveiled the synapse-level connectivity of all neurons within the learning center of an insect, including recurrent connections between the primary neuron type, a crucial component with an elusive computational role. In this work, we analyze these recurrent connectivity patterns and simulate neural network models constrained by them. We find that these recurrent connections are highly symmetric and balanced with input to the memory center. In simulations, we find that they perform dimensionality reduction and enable a small set of point attractor states. We additionally characterize how the strength of these recurrent connections affects network properties and downstream behavioral consequences. Overall, this work advances our understanding of the insect learning center as well as the relationship between theoretical and biological recurrent networks.
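To make the point attractor idea concrete: in a classic Hopfield-style network, a symmetric recurrent weight matrix stores a small set of activation patterns as stable states, and the dynamics pull a corrupted cue back to the nearest stored pattern. The sketch below is a minimal, generic illustration of that mechanism, not the paper's connectome-constrained MB model; the network size, pattern count, and corruption level are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random binary (+1/-1) patterns as point attractors
# via a symmetric Hebbian weight matrix (Hopfield-style).
n_neurons, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

W = patterns.T @ patterns / n_neurons  # symmetric: W == W.T
np.fill_diagonal(W, 0)                 # no self-connections

def run_to_attractor(state, n_steps=50):
    """Synchronously update units until the state stops changing."""
    for _ in range(n_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break ties consistently
        if np.array_equal(new_state, state):
            break                      # fixed point (attractor) reached
        state = new_state
    return state

# Cue the network with a corrupted version of pattern 0:
# flip 15% of its units, then let the recurrent dynamics settle.
cue = patterns[0].copy()
flip = rng.choice(n_neurons, size=15, replace=False)
cue[flip] *= -1

recovered = run_to_attractor(cue)
overlap = recovered @ patterns[0] / n_neurons  # 1.0 = perfect recovery
print(f"overlap with stored pattern: {overlap:.2f}")
```

Well below the storage capacity (here 3 patterns for 100 units), the network typically recovers the full stored pattern from the partial cue, which is the pattern-completion property the author summary describes. The symmetry of `W` mirrors the structural finding that the MB's recurrent excitatory connections are highly symmetric.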
