Neural Mechanisms Linking Global Maps to First-Person Perspectives

Kavli Affiliate: Douglas Nitz

Authors: Hin Wai Lui, Elizabeth R. Chrastil, Douglas A. Nitz, and Jeffrey L. Krichmar

Summary:

Humans and many animals possess the remarkable ability to navigate environments by seamlessly switching between first-person perspectives (FPP) and global map perspectives (GMP). However, the neural mechanisms that underlie this transformation remain poorly understood. In this study, we developed a variational autoencoder (VAE) model, enhanced with recurrent neural networks (RNNs), to investigate the computational principles behind perspective transformations. Our results reveal that temporal sequence modeling is crucial for maintaining spatial continuity and improving transformation accuracy when switching between FPPs and GMPs. The model’s latent variables capture many representational forms seen in the distributed cognitive maps of the mammalian brain, such as head direction cells, place cells, corner cells, and border cells, but notably not grid cells, suggesting that perspective transformation engages multiple brain regions beyond the hippocampus and entorhinal cortex. Furthermore, our findings demonstrate that landmark encoding, particularly of proximal environmental cues such as boundaries and objects, plays a critical role in enabling successful perspective shifts, whereas distal cues are less influential. These insights into perspective linking provide a new computational framework for understanding spatial cognition and offer valuable directions for future animal and human studies, highlighting the significance of temporal sequences, distributed representations, and proximal cues in navigating complex environments.
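
To make the model described above more concrete, the sketch below shows one way a recurrent VAE could map a short sequence of first-person views onto a global-map view: a GRU summarizes the temporal sequence, a latent bottleneck carries the compressed spatial code, and a decoder reconstructs the map view under the usual ELBO objective. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name `RecurrentPerspectiveVAE`, the flattened inputs, and all layer sizes are assumptions made for readability.

```python
# Illustrative sketch (hypothetical names and dimensions, not the paper's code):
# a recurrent VAE that encodes a sequence of first-person-perspective (FPP)
# frames into a latent code and decodes a global-map-perspective (GMP) view.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentPerspectiveVAE(nn.Module):
    def __init__(self, fpp_dim=1024, gmp_dim=1024, hidden_dim=256, latent_dim=32):
        super().__init__()
        # GRU integrates the temporal sequence of first-person observations.
        self.encoder_rnn = nn.GRU(fpp_dim, hidden_dim, batch_first=True)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps the latent code to a flattened global-map image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, gmp_dim), nn.Sigmoid(),
        )

    def forward(self, fpp_seq):
        # fpp_seq: (batch, time, fpp_dim) flattened first-person frames
        _, h_n = self.encoder_rnn(fpp_seq)      # h_n: (1, batch, hidden_dim)
        h = h_n.squeeze(0)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample the latent code differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def elbo_loss(gmp_pred, gmp_target, mu, logvar):
    # Reconstruction term plus KL divergence to a standard-normal prior.
    recon = F.binary_cross_entropy(gmp_pred, gmp_target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: a batch of 8 trajectories, 20 time steps of 32x32 first-person frames.
model = RecurrentPerspectiveVAE()
fpp = torch.rand(8, 20, 1024)
gmp = torch.rand(8, 1024)
pred, mu, logvar = model(fpp)
loss = elbo_loss(pred, gmp, mu, logvar)
loss.backward()
```

In a sketch like this, the recurrent encoder is what carries the temporal context that the summary identifies as crucial for spatial continuity, and the latent bottleneck is where head-direction-, place-, corner-, and border-like codes would be probed.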
