
Poster B64 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

HMAX Strikes Back: Self-supervised Learning of Human-Like Scale Invariant Representations

Ivan Felipe Rodriguez1, Nishka Pant1, Arjun Beniwal2, Scott Warren3, Thomas Serre1; 1Brown University, 2New York University, 3Brown Medical School

Early hierarchical models of the visual cortex, such as HMAX, have now been superseded by modern deep neural networks. Deep neural networks optimized for image categorization have been shown to significantly outperform HMAX (and related models) and to better fit neural data from the visual cortex, even though they were not explicitly constrained by neuroscience data. However, these earlier hierarchical models were trained with simpler local learning rules in the pre-deep-learning era, and they have yet to be updated with modern gradient-based training methods. Here, we describe a novel contrastive learning algorithm for training HMAX (CHMAX) to learn scale-invariant object representations. Unlike standard deep neural networks trained with data augmentation, CHMAX learns visual representations that generalize to novel objects at levels of generalization comparable to human observers. We hope our results will spur renewed interest in other classic biologically-inspired vision models.
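For intuition, a minimal sketch of what a scale-contrastive objective of this kind could look like, assuming an InfoNCE-style loss over pairs of the same object rendered at two different scales; the encoder name, loss form, and image sizes are illustrative assumptions, not the poster's actual CHMAX formulation.

```python
# Sketch of a scale-contrastive (InfoNCE-style) objective, assuming the same
# object is encoded at two scales and the two embeddings should match.
import torch
import torch.nn.functional as F

def scale_contrastive_loss(z_small: torch.Tensor,
                           z_large: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Pull together embeddings of the same object at two scales and push
    apart embeddings of different objects in the batch."""
    z_small = F.normalize(z_small, dim=1)          # (batch, dim)
    z_large = F.normalize(z_large, dim=1)          # (batch, dim)
    logits = z_small @ z_large.t() / temperature   # pairwise similarities
    targets = torch.arange(z_small.size(0), device=z_small.device)
    # Symmetric cross-entropy: each scale view should identify its partner.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical usage: embed two scale views of each image with an HMAX-like
# encoder, then minimize the contrastive loss.
# z_small = hmax_encoder(resize(images, 64))
# z_large = hmax_encoder(resize(images, 256))
# loss = scale_contrastive_loss(z_small, z_large)
```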

Keywords: biologically-inspired vision, scale invariance 
