Poster B146 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

PoissonVAE: Combining Bayesian Inference with Predictive Coding Results in Amortized Sparse Coding

Hadi Vafaii1, Jacob L. Yates1; 1UC Berkeley

Variational autoencoders (VAEs) employ Bayesian inference to interpret sensory inputs, mirroring processes that occur in primate vision along both the ventral (Higgins et al., 2021) and dorsal (Vafaii, Yates, & Butts, 2023) pathways. Despite their success, traditional VAEs rely on continuous latent variables, a significant deviation from the discrete nature of biological neurons. Here, we developed the Poisson VAE (P-VAE), a novel architecture that combines principles of predictive coding with a VAE that encodes inputs into discrete spike counts. Combining Poisson-distributed latent variables with predictive coding introduces a metabolic cost term into the model's loss function, suggesting a relationship with sparse coding. We explored this connection by training a P-VAE with a linear decoder and an overcomplete latent space on natural image patches, and contrasting it with a traditional Gaussian VAE. Unlike the Gaussian VAE, which learned features resembling those of principal component analysis, the P-VAE exhibited Gabor-like feature selectivity reminiscent of sparse coding. Notably, a P-VAE with a linear decoder effectively implements "Amortized Sparse Coding," in which inference over neural activations is carried out by the VAE encoder. Our work provides an interpretable computational framework for studying brain-like sensory processing and paves the way for a deeper understanding of perception as an inferential process.
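The objective described above, reconstruction error plus a Poisson KL term that acts as a metabolic cost on firing rates, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the layer sizes, the straight-through gradient trick for the discrete Poisson sample, and the prior firing rate are our assumptions, not the authors' implementation. The latent dimension is chosen larger than the input dimension to reflect the overcomplete latent space mentioned in the abstract.

```python
import torch
import torch.nn as nn

class PoissonVAE(nn.Module):
    """Minimal hypothetical sketch of a Poisson VAE with a linear decoder."""

    def __init__(self, n_pixels=256, n_latents=512, prior_rate=0.1):
        super().__init__()
        # Amortized encoder: maps an image patch to log firing rates.
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 512), nn.ReLU(),
            nn.Linear(512, n_latents),
        )
        # Linear decoder: its columns act like a learned dictionary,
        # analogous to the basis functions of classical sparse coding.
        self.decoder = nn.Linear(n_latents, n_pixels, bias=False)
        self.log_prior_rate = torch.log(torch.tensor(prior_rate))

    def forward(self, x):
        log_rate = self.encoder(x)
        rate = torch.exp(log_rate)
        # Sample integer spike counts. torch.poisson is not differentiable,
        # so gradients are passed through the rate (straight-through assumption).
        z = torch.poisson(rate).detach() + rate - rate.detach()
        x_hat = self.decoder(z)
        # KL(Poisson(rate) || Poisson(prior_rate)); with a small prior rate,
        # this penalizes high firing rates, i.e. a metabolic cost.
        prior_rate = self.log_prior_rate.exp()
        kl = (rate * (log_rate - self.log_prior_rate) + prior_rate - rate).sum(-1)
        recon = ((x - x_hat) ** 2).sum(-1)
        return (recon + kl).mean()

# Usage sketch: random tensors stand in for whitened 16x16 natural image patches.
model = PoissonVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(64, 256)
loss = model(patches)
loss.backward()
opt.step()
```

Under this reading, the encoder amortizes the per-patch optimization that classical sparse coding performs at inference time, which is what the term "Amortized Sparse Coding" captures.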

Keywords: Bayesian inference, predictive coding, sparse coding, VAE
