Poster C1 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Contextual Information Representation of Objects-Scenes in Deep CNNs: Effects of Training and Architectures

Rahul Ohlan1, Daniel Leeds2, Elissa Aminoff2; 1The Graduate Center, CUNY, 2Fordham University

This research provides a comprehensive analysis of contextual information representation in state-of-the-art convolutional neural networks (CNNs) trained on the ImageNet and Places365 datasets for object and scene recognition tasks. While current CNN models excel at object detection and image classification, our study investigates how these networks capture relationships between objects and scenes. We demonstrate that objects within related scenes exhibit closer contextual associations than objects in unrelated contexts. Moreover, we investigate the effects of training and of different CNN architectures on this relationship, providing insights into the nuanced representation of contextual information in deep learning-based computer vision systems. The analysis used a dataset of open-source images spanning 50 diverse contexts, each comprising images of objects and of related scenes observed in that context.
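The contextual-association comparison described above can be sketched with a simple similarity analysis. The snippet below is a minimal illustration, not the authors' actual pipeline: it assumes object and scene images have already been passed through a CNN to obtain feature vectors (the extraction step is omitted and replaced here with synthetic vectors), and the function names and the within-/across-context split are assumptions for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def context_association(object_feats, scene_feats, contexts):
    """Compare mean object-scene similarity within vs. across contexts.

    object_feats, scene_feats: arrays of shape (n_items, feat_dim),
        e.g. penultimate-layer CNN activations (hypothetical here).
    contexts: context label for each item (object i and scene i share
        context label contexts[i]).
    """
    within, across = [], []
    for i, obj_ctx in enumerate(contexts):
        for j, scn_ctx in enumerate(contexts):
            sim = cosine_sim(object_feats[i], scene_feats[j])
            (within if obj_ctx == scn_ctx else across).append(sim)
    return np.mean(within), np.mean(across)

# Synthetic stand-in for CNN features: each context has a prototype
# direction, and object/scene features are noisy copies of it.
rng = np.random.default_rng(0)
n_contexts, feat_dim = 5, 128
prototypes = rng.normal(size=(n_contexts, feat_dim))
contexts = np.arange(n_contexts)
object_feats = prototypes + 0.1 * rng.normal(size=prototypes.shape)
scene_feats = prototypes + 0.1 * rng.normal(size=prototypes.shape)

within_mean, across_mean = context_association(object_feats, scene_feats, contexts)
# With context-correlated features, within-context similarity exceeds
# across-context similarity, mirroring the effect reported in the abstract.
```

On real data the same comparison would be run on features extracted from each network and layer, which is how training regime and architecture effects on the object-scene relationship could be contrasted.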

Keywords: Scenes; Object-Recognition; Deep Neural Networks; Vision