Poster B137 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Single neurons in human hippocampus and amygdala track the depth-of-processing elicited by visual representations of images
Aalap Shah1, Richard Xue1, Qi Lin2, Runnan Cao3, Shuo Wang3, Ilker Yildirim1; 1Yale University, 2RIKEN, 3Washington University in St. Louis
The spontaneous processing of visual information plays a significant role in shaping memory, sometimes even overshadowing voluntary efforts to encode specific details. What are the neurocomputational mechanisms that underlie the transformation of percepts into memories in the brain? To address this, we analyzed single-neuron recordings in the hippocampus and amygdala, two important structures in the medial temporal lobe (MTL), collected while human participants viewed sequences of object images. We hypothesized that the activity of single neurons in these MTL structures tracks the depth-of-processing of incoming visual information, thereby supporting the perception-to-memory interface, with more deeply processed images leading to stronger memory traces. Inspired by recent work, we derived a computational signature of the depth-of-processing of visual representations based on the iterative reconstruction loop in a sparse coding model. Consistent with our hypothesis, we found that firing rates in both the hippocampus and the amygdala correlate with the number of iterations required for reconstruction, and do so in complementary ways. Moreover, single neurons that are more strongly associated with the number of model iterations also fire at higher rates. Our results provide an algorithmic account of how the MTL might support the adaptive interface between perception and memory.
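The abstract does not specify the sparse coding model, so the following is only a minimal sketch of the general idea: an iterative sparse coding loop (here, plain ISTA) is run on each image's feature vector, and the number of iterations needed before the reconstruction stabilizes serves as a per-image depth-of-processing proxy. The function names, dictionary, step size, and convergence threshold below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: ISTA-style sparse coding where the iteration
# count until the reconstruction stops improving stands in for the
# "depth-of-processing" of an image's representation.

import numpy as np


def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal step for the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)


def iterations_to_reconstruct(x, D, lam=0.1, tol=1e-4, max_iter=500):
    """Run ISTA on one feature vector x with dictionary D (features x atoms).

    Returns the number of iterations until the change in reconstruction
    error falls below `tol`; this count is the depth-of-processing proxy.
    """
    n_atoms = D.shape[1]
    a = np.zeros(n_atoms)                      # sparse code, initialized at zero
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    prev_err = np.inf
    for it in range(1, max_iter + 1):
        resid = D @ a - x                      # current reconstruction residual
        a = soft_threshold(a - step * (D.T @ resid), step * lam)
        err = np.linalg.norm(D @ a - x)
        if abs(prev_err - err) < tol:          # reconstruction has stabilized
            return it
        prev_err = err
    return max_iter


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((128, 256))        # hypothetical learned dictionary
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    images = rng.standard_normal((5, 128))     # stand-in image feature vectors
    depths = [iterations_to_reconstruct(x, D) for x in images]
    print("per-image depth-of-processing (iteration counts):", depths)
```

Under this kind of measure, the reported finding would correspond to single-neuron firing rates correlating, across images, with these per-image iteration counts.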
Keywords: depth-of-processing, sparse coding, hippocampus, amygdala