Compression-enabled interpretability of voxel-wise encoding models

Kavli Affiliate: Reza Abbasi-Asl

| Authors: Fatemeh Kamali, Amir Abolfazl Suratgar, Mohammadbagher Menhaj and Reza Abbasi-Asl

| Summary:

Voxel-wise encoding models based on convolutional neural networks (CNNs) have emerged as state-of-the-art predictive models of brain activity evoked by natural movies. Despite the superior predictive performance of CNN-based models, the huge number of parameters in these models has made them difficult to interpret for domain experts. Here, we investigate the role of model compression in building more interpretable and more stable CNN-based voxel-wise models. We used (1) structural compression techniques to prune less important CNN filters and connections, (2) a receptive field compression method to choose model receptive fields with optimal center and size, and (3) principal component analysis to reduce the dimensionality of the model. We demonstrate that the compressed models offer a more stable interpretation of voxel-wise pattern selectivity compared to uncompressed models. Furthermore, our receptive field compressed models reveal that the optimal model receptive fields become larger and more centralized along the ventral visual pathway. Overall, our findings unveil the role of model compression in building more interpretable voxel-wise models.
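
The pipeline described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes magnitude-based filter pruning, scikit-learn's PCA, and a ridge regression encoder, and it uses synthetic arrays as stand-ins for CNN feature maps and fMRI voxel responses.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical sizes: 1200 movie frames, 256 CNN filters, 50 voxels.
n_stimuli, n_filters, n_voxels = 1200, 256, 50
filter_weights = rng.standard_normal((n_filters, 3, 7, 7))    # stand-in for one conv layer's weights
features = rng.standard_normal((n_stimuli, n_filters))        # stand-in for pooled filter responses
voxel_responses = rng.standard_normal((n_stimuli, n_voxels))  # stand-in for measured fMRI data

# (1) Structural compression: keep only the filters with the largest L1 weight norm.
l1_norms = np.abs(filter_weights).reshape(n_filters, -1).sum(axis=1)
keep = np.argsort(l1_norms)[-64:]             # retain the top 64 of 256 filters
pruned_features = features[:, keep]

# (3) Dimensionality reduction: PCA on the pruned feature space.
pca = PCA(n_components=20)
compressed_features = pca.fit_transform(pruned_features)

# Fit a regularized voxel-wise encoding model on the compressed features.
encoder = Ridge(alpha=1.0)
encoder.fit(compressed_features, voxel_responses)
print("encoding weights:", encoder.coef_.shape)  # (n_voxels, n_components)
```

In this sketch, interpretability comes from the reduced model: each voxel's response is explained by a handful of principal components over a small set of retained filters, rather than by the full CNN parameter set. Receptive field compression (step 2) would additionally restrict each voxel's features to an optimally centered and sized spatial window before pooling.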
