Sparse-Coding Variational Auto-Encoders

Kavli Affiliate: Adam S. Charles

| Authors: Victor Geadah, G. Barello, Adam Charles and Jonathan Pillow

| Summary:

The sparse coding model posits that the visual system has evolved to efficiently code natural stimuli using a sparse set of features from an overcomplete dictionary. The original sparse coding model suffered from two key limitations, however: (1) computing the neural response to an image patch required minimizing a nonlinear objective function via recurrent dynamics; (2) fitting relied on approximate inference methods that ignored uncertainty. Although subsequent work has developed several methods to overcome these obstacles, we propose a novel solution inspired by the variational auto-encoder (VAE) framework. We introduce the sparse-coding variational auto-encoder (SVAE), which augments the sparse coding model with a probabilistic recognition model parametrized by a deep neural network. This recognition model provides a neurally plausible feedforward implementation for the mapping from image patches to neural activities, and enables a principled method for fitting the sparse coding model to data via maximization of the evidence lower bound (ELBO). The SVAE differs from standard VAEs in three key respects: the latent representation is overcomplete (there are more latent dimensions than image pixels), the prior is sparse or heavy-tailed instead of Gaussian, and the decoder network is a linear projection instead of a deep network. We fit the SVAE to natural image data under different assumed prior distributions, and show that it obtains higher test performance than previous fitting methods. Finally, we examine the response properties of the recognition network and show that it captures important nonlinear properties of neurons in the early visual pathway.
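The three departures from a standard VAE described above (overcomplete latent space, sparse prior, linear decoder) can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a toy Monte Carlo ELBO for an SVAE-style model, assuming a one-hidden-layer recognition network, a Gaussian posterior with the reparameterization trick, a unit Laplace prior as the sparse prior, and Gaussian observation noise. All dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete latent code: more latent dimensions than image pixels.
n_pixels, n_latent, n_hidden = 16, 32, 64

# Linear decoder: reconstruction = Phi @ z (the sparse-coding dictionary),
# in place of the deep decoder of a standard VAE.
Phi = rng.normal(scale=0.1, size=(n_pixels, n_latent))

# Recognition network: a small MLP mapping an image patch to the
# parameters (mean, log-variance) of the Gaussian posterior q(z|x).
W1 = rng.normal(scale=0.1, size=(n_hidden, n_pixels))
W_mu = rng.normal(scale=0.1, size=(n_latent, n_hidden))
W_logvar = rng.normal(scale=0.1, size=(n_latent, n_hidden))

def encode(x):
    h = np.tanh(W1 @ x)
    return W_mu @ h, W_logvar @ h

def elbo(x, sigma_noise=0.1, n_samples=10):
    """Monte Carlo estimate of the evidence lower bound for one patch x."""
    mu, logvar = encode(x)
    std = np.exp(0.5 * logvar)
    vals = []
    for _ in range(n_samples):
        eps = rng.normal(size=n_latent)
        z = mu + std * eps                 # reparameterization trick
        recon = Phi @ z                    # linear decoder
        # Gaussian log-likelihood of the patch (up to a constant).
        log_lik = -0.5 * np.sum((x - recon) ** 2) / sigma_noise**2
        # Sparse (heavy-tailed) prior: unit Laplace instead of Gaussian.
        log_prior = -np.sum(np.abs(z)) - n_latent * np.log(2.0)
        # Entropy of the Gaussian posterior q(z|x), in closed form.
        entropy = 0.5 * np.sum(logvar + np.log(2 * np.pi) + 1)
        vals.append(log_lik + log_prior + entropy)
    return float(np.mean(vals))

x = rng.normal(size=n_pixels)
print(elbo(x))
```

Fitting would ascend this ELBO with respect to both the dictionary `Phi` and the recognition-network weights; swapping the Laplace log-prior for another heavy-tailed density corresponds to the different assumed priors compared in the paper.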