Poster B91 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
A Feedback Model of Flexible Context-Guided Sensory Processing
Abhiram Iyer1, Lakshmi Narasimhan Govindarajan1, Ila Fiete1; 1MIT
Visual representations become progressively more abstract along the cortical hierarchy. These abstractions allow us to define notions like objects and shapes and, more generally, to organize sensory experience. Low-level regions, by contrast, represent simple, local features of their inputs. How do the abstract, spatially non-specific, low-dimensional summaries of sensory information in high-level areas flexibly modulate the spatially specific, local representations in low-level areas to guide attention and support context-driven, goal-directed behavior across a range of tasks? We build a biologically motivated, trainable neural network model of dynamics in the visual pathway, incorporating lateral, feedforward, and local feedback synaptic connections and excitatory and inhibitory neurons, together with long-range top-down inputs conceptualized as low-rank modulations of the input-driven sensory responses by high-level areas. We study this model in a visual counting task with images containing several novel 3D objects, each composed of a new combination of shape, size, and color. First cued by a visual input depicting one object of a particular color or shape, the model uses its remembered representation of the cue to modulate perception and counting of the subsequent image, reporting the number of objects with the cued color or shape. We show that the model accurately counts objects with the cued attribute and generalizes to novel combinations of novel objects. We examine the neural representations that make this possible, shedding light on the nature of top-down contextual modulation of sensory processing and generating predictions for experiments.
Keywords: context; feedback; convolutional RNNs; cued visual search
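The core architectural idea, a convolutional RNN whose input-driven responses are gain-modulated by a low-rank function of a high-level context (cue) signal, can be sketched roughly as below. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the class name ModulatedConvRNNCell, the rank and ctx_dim parameters, and the multiplicative-gain form of the modulation are all assumptions, and the model's separate excitatory/inhibitory cell types and local feedback connections are omitted for brevity.

```python
# Hypothetical sketch of low-rank top-down modulation of a convolutional
# RNN layer; names and the gain-based modulation form are assumptions,
# not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConvRNNCell(nn.Module):
    """Convolutional RNN cell whose feedforward drive is scaled by a
    low-rank, spatially non-specific gain computed from a context vector."""
    def __init__(self, channels: int, ctx_dim: int, rank: int = 2):
        super().__init__()
        self.ff = nn.Conv2d(channels, channels, 3, padding=1)   # feedforward
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # lateral/recurrent
        # Low-rank factors: context -> rank coefficients -> per-channel gains
        self.ctx_to_coef = nn.Linear(ctx_dim, rank)
        self.coef_to_gain = nn.Linear(rank, channels, bias=False)

    def forward(self, x, h, ctx):
        # Rank-limited gain on the input-driven sensory response,
        # broadcast uniformly over space (spatially non-specific)
        gain = 1.0 + self.coef_to_gain(self.ctx_to_coef(ctx))   # (B, C)
        gain = gain.unsqueeze(-1).unsqueeze(-1)                 # (B, C, 1, 1)
        drive = gain * self.ff(x) + self.rec(h)
        return F.relu(drive)

# Usage: the cue image would first be encoded into ctx, which then
# modulates processing of the counting image on subsequent timesteps.
cell = ModulatedConvRNNCell(channels=16, ctx_dim=32, rank=2)
x = torch.randn(1, 16, 28, 28)   # input-driven activity
h = torch.zeros(1, 16, 28, 28)   # hidden state
ctx = torch.randn(1, 32)         # remembered cue representation
h = cell(x, h, ctx)
```

Because the modulation passes through a rank-limited bottleneck and applies one gain per channel rather than per pixel, it matches the abstract's framing of top-down input as an abstract, low-dimensional, spatially non-specific summary acting on local sensory responses.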