
Poster C2 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Current DNNs are Unable to Integrate Visual Information Across Object Discontinuities

Ben Lonnqvist1, Elsa Scialom1, Zehra Merchant1, Michael H. Herzog1, Martin Schrimpf1; 1EPFL (Swiss Federal Institute of Technology Lausanne)

Certain deep neural networks (DNNs) are the current best models of the primate visual ventral stream and the core object recognition behaviors it supports. Despite a rich history of studying visual function in the field of psychophysics, however, DNNs have not been thoroughly evaluated with classical psychophysical experiments. To address this gap, we designed a 12-alternative forced-choice (12-AFC) object recognition experiment with object stimuli containing varying degrees of contour discontinuity. Humans (n=50 in-laboratory participants) performed well above chance even for images containing very few fragments, with performance scaling logarithmically with the number of fragmented elements up to near-perfect accuracy. Leading DNN models, on the other hand, failed to recognize these fragmented objects, performing at chance throughout. Attempting to remedy this object recognition gap, we fit a linear decoder on model activations elicited by fragmented stimuli, but even with additional supervised trials the model representations were unable to support human-level fragmented object recognition. Despite this, both models and humans performed better on directional segment stimuli than on phosphene-like dot stimuli. Taken together, our results reveal a striking failure case of current models of the human visual system that is not trivial to rescue, suggesting a critical difference in how models and humans integrate visual information.
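The linear-decoder analysis mentioned above can be sketched roughly as follows. This is an illustrative outline only, not the authors' code: the backbone (a torchvision ResNet-50), the readout layer (penultimate features), and the scikit-learn logistic-regression readout are assumptions, and loading and preprocessing of the fragmented stimuli are omitted.

```python
# Illustrative sketch (assumptions noted above): fit a linear decoder on
# frozen DNN activations for 12-way classification of fragmented objects.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Frozen backbone; only its penultimate-layer activations are read out.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()  # expose 2048-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) preprocessed batch -> (N, 2048) activations."""
    return backbone(images)

def fit_linear_decoder(train_images, train_labels, test_images, test_labels):
    """Fit a 12-way linear readout on activations from supervised trials
    and report held-out accuracy on fragmented test stimuli."""
    X_train = extract_features(train_images).numpy()
    X_test = extract_features(test_images).numpy()
    decoder = LogisticRegression(max_iter=5000)  # linear multiclass readout
    decoder.fit(X_train, train_labels)
    return accuracy_score(test_labels, decoder.predict(X_test))
```

Under this setup, chance performance is 1/12; the abstract reports that such decoders, even when given additional supervised trials, did not reach human-level accuracy on fragmented stimuli.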

Keywords: psychophysics; object recognition; deep neural networks; brain-score
