Poster B72 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Using CNNs to understand how bottom-up and top-down processes shape human face detection
Sule Tasliyurt Celebi1, Benjamin de Haas1,2, Melissa L.-H. Võ3, Katharina Dobs1,2; 1Justus Liebig University Giessen, 2Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt, 3Goethe University Frankfurt
Understanding the interplay between bottom-up and top-down processing remains a crucial challenge in human perception and cognition. The success of feedforward deep convolutional neural networks (CNNs) in mirroring aspects of human visual perception supports the role of bottom-up processing. Here, we leverage the feedforward nature of CNNs to differentiate between bottom-up and top-down processes in a core visual task: the rapid detection of faces. By manipulating the presence of scene previews in a human face detection task, we examine the influence of top-down processing. We found that a scene preview enhances face detection, supporting a role for top-down processing in this condition. Encoding model analyses show that while both basic visual features, such as face eccentricity, and high-level features extracted from CNNs predict face detection latency, the scene preview selectively alters their predictivity, revealing a dynamic and context-dependent contribution. Our results offer a novel approach for disentangling the complex dynamics between bottom-up and top-down influences in human visual perception.
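The encoding-model analysis mentioned above can be illustrated with a minimal sketch: regress (hypothetical) CNN features onto detection latencies and score the fit. The feature matrix, weights, and latencies below are synthetic stand-ins, not the study's data, and ridge regression is one plausible choice of encoding model, not necessarily the authors' exact method.

```python
import numpy as np

# Synthetic stand-in for the encoding-model setup (hypothetical data):
# rows = trials/images, columns = features (e.g., CNN layer activations).
rng = np.random.default_rng(0)
n_trials, n_features = 200, 50
X = rng.standard_normal((n_trials, n_features))
true_w = rng.standard_normal(n_features)
latency = X @ true_w + 0.5 * rng.standard_normal(n_trials)  # simulated detection latencies

# Closed-form ridge regression: w = (X^T X + lambda*I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ latency)

# Predictivity of the features, summarized as R^2 on the (training) data.
pred = X @ w
r2 = 1 - np.sum((latency - pred) ** 2) / np.sum((latency - latency.mean()) ** 2)
print(f"encoding model R^2 = {r2:.3f}")
```

In the study's logic, such a model would be fit separately for low-level features (e.g., face eccentricity) and CNN-derived features, with and without a scene preview, to compare how each feature set's predictivity changes across conditions.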
Keywords: Face perception; Scene perception; Convolutional Neural Networks; Predictive Processing