Kavli Affiliate: Marcelo Mattar
| Authors: Li Ji-An, Marcus K. Benna, and Marcelo G. Mattar
| Summary:
Normative frameworks such as Bayesian inference and reward-based learning are useful tools for explaining the fundamental principles of adaptive behavior. However, their ability to describe realistic animal behavior is limited by the small number of parameters typically fitted to data, leading to a cycle of handcrafted adjustments and model comparison procedures that are vulnerable to researcher subjectivity. Here, we present a novel modeling approach leveraging Recurrent Neural Networks (RNNs) to automatically discover the cognitive algorithms governing biological decision-making. We demonstrate that RNNs with only one or two units can predict individual animals’ choices more accurately than classical normative models, and as accurately as larger neural networks, in three well-studied reward-learning tasks. Using tools from discrete dynamical systems theory, such as state-space analysis and fixed-point attractors, we show that the trained networks uncover numerous cognitive strategies overlooked in classical cognitive models and task-optimized neural networks. Our approach also enables a unified comparison of different models and provides insights into the dimensionality of behavior and the emergence of meta-learning algorithms. Overall, we offer a systematic approach for the automatic discovery of interpretable cognitive strategies underlying decision-making, shedding light on neural mechanisms and providing novel insights into healthy and dysfunctional cognition.
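
The core technical move described in the abstract, fitting a very small RNN to trial-by-trial choices and then inspecting its learned dynamics for fixed-point attractors, is concrete enough to sketch. Below is a minimal PyTorch illustration assuming a two-armed bandit task; the `TinyRNN` class, the GRU architecture, the simulated placeholder data, and all hyperparameters are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    """A GRU with very few hidden units (one or two, as in the abstract)
    mapping the previous trial's (choice, reward) to next-choice logits."""
    def __init__(self, d_hidden: int = 2, n_actions: int = 2):
        super().__init__()
        # Input per trial: one-hot previous choice + scalar previous reward.
        self.rnn = nn.GRU(input_size=n_actions + 1,
                          hidden_size=d_hidden, batch_first=True)
        self.readout = nn.Linear(d_hidden, n_actions)

    def forward(self, x, h=None):
        out, h = self.rnn(x, h)
        return self.readout(out), h  # logits over the next trial's choices

# Random placeholder data standing in for real behavioral sessions:
# 64 sessions of 200 trials in a two-armed bandit.
n_sessions, n_trials, n_actions = 64, 200, 2
choices = torch.randint(n_actions, (n_sessions, n_trials))
rewards = torch.randint(2, (n_sessions, n_trials)).float()

# Inputs at trial t are the choice/reward from trial t-1; targets are choices at t.
prev = torch.cat([nn.functional.one_hot(choices[:, :-1], n_actions).float(),
                  rewards[:, :-1, None]], dim=-1)
targets = choices[:, 1:]

# Fit by maximizing the likelihood of the observed choices.
model = TinyRNN(d_hidden=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(500):
    logits, _ = model(prev)
    loss = loss_fn(logits.reshape(-1, n_actions), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Dynamical-systems inspection: freeze an input (e.g. "chose action 0, rewarded")
# and iterate the update from many initial states; states that stop moving
# approximate the fixed-point attractors of the learned strategy.
with torch.no_grad():
    x_fixed = torch.tensor([1., 0., 1.]).repeat(500, 1, 1)  # (batch, time=1, input)
    h = torch.randn(1, 500, 2)                              # random initial states
    for _ in range(200):
        _, h = model(x_fixed, h)
    print(h.squeeze(0)[:5])  # converged states cluster near the attractors
```

With only one or two hidden units, the entire state space can be plotted directly, which is what makes this kind of fixed-point analysis tractable and the recovered strategies interpretable.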