
Poster B78 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream

Abdulkadir Gokce1, Martin Schrimpf1; 1Swiss Federal Institute of Technology Lausanne (EPFL)

When trained on sufficiently large object classification datasets, certain artificial neural network models provide a reasonable match to core object recognition (COR) behaviors and the underlying neural response patterns across the primate visual ventral stream (VVS). Recent findings in machine learning suggest that training larger models on larger datasets with a greater compute budget translates into improved task performance, but how scale affects brain alignment is currently unclear. Here we investigate scaling laws for modeling the primate VVS with respect to the compute-optimal allocation of dataset and model size, across over 300 models trained in a controlled manner. To evaluate models' brain alignment, we use a set of benchmarks spanning the entire VVS and COR behavior. We find that while increasing the number of model parameters initially improves brain alignment, larger models eventually lead to diminishing returns. Increasing the dataset size consistently improves alignment in our experiments, but our extrapolations suggest that these gains also flatten out for very large datasets. Combining the compute-optimal allocation of model and data size into scaling laws, we predict that scale alone will not lead to substantial gains in brain alignment with current architectures and datasets.
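For intuition, scaling trends like those described above are often summarized by a saturating power law of the form s(C) = s_inf − b·C^(−α), whose ceiling s_inf bounds the alignment reachable by scale alone. The sketch below fits and extrapolates such a curve; the functional form, data, and parameter names are illustrative assumptions, not the authors' code or results.

```python
# Illustrative sketch only -- hypothetical functional form and data,
# not the authors' code or results.
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(c, ceiling, b, alpha):
    # Alignment approaches `ceiling` as compute c grows; b and alpha
    # set how quickly the remaining gap shrinks.
    return ceiling - b * np.power(c, -alpha)

# Hypothetical (compute, alignment-score) observations;
# compute is expressed in units of 1e15 FLOPs.
compute = np.array([1.0, 1e1, 1e2, 1e3, 1e4])
alignment = np.array([0.30, 0.38, 0.43, 0.46, 0.475])

params, _ = curve_fit(saturating_power_law, compute, alignment,
                      p0=[0.5, 0.2, 0.3])
ceiling, b, alpha = params
print(f"estimated alignment ceiling: {ceiling:.3f} (exponent {alpha:.3f})")

# Extrapolate two orders of magnitude beyond the largest run.
print(f"predicted alignment at 1e6 compute units: "
      f"{saturating_power_law(1e6, *params):.3f}")
```

If the fitted ceiling sits close to the best observed score, the curve itself predicts that further scale yields only marginal gains, which is the kind of extrapolation the abstract describes.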

Keywords: neural alignment; behavioral alignment; scaling laws; primate visual ventral stream
