
Poster A75 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Pruning sparse features for cognitive modeling

Nhut Truong1, Uri Hasson1; 1University of Trento

While deep neural networks are increasingly adopted in the cognitive sciences, they are often computationally expensive and contain information irrelevant to downstream tasks. In contrast to pruning approaches that aim to maintain classification accuracy, we present a pruning method that compresses entire models while preserving their representational geometry. The target representational space can be derived from a neural network or from a human similarity space. Our method eliminates sparse, rarely activated components throughout the entire network architecture, proceeding in both top-down and bottom-up directions. We show that a deep model's representational space can be preserved or minimally altered when sparse features are removed, yielding a compact model suitable for network distillation and for predicting human similarity judgments. Furthermore, because our method performs structured pruning, it can identify modular structures within pre-trained models.
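The abstract does not include code; the sketch below is only a hypothetical illustration of the general idea, not the authors' implementation. It assumes a small PyTorch MLP, removes hidden units that are rarely activated on a probe set (structured pruning), and checks how well the representational geometry is preserved by correlating pairwise-distance matrices before and after pruning. All thresholds and model details here are illustrative assumptions.

```python
# Hypothetical sketch (not the poster's code): prune rarely activated hidden
# units from a toy PyTorch MLP and measure how much the representational
# geometry (pairwise-distance matrix of hidden activations) changes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model: two hidden layers with ReLU activations.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

x = torch.randn(500, 64)  # stand-in for a probe dataset

# 1. Measure how often each unit in the first hidden layer is active (> 0).
with torch.no_grad():
    h = torch.relu(model[0](x))                # layer-1 activations
    active_freq = (h > 0).float().mean(dim=0)  # fraction of inputs activating each unit

# 2. Keep only units active on at least 5% of inputs (illustrative threshold).
keep = active_freq > 0.05
n_keep = int(keep.sum())
print(f"keeping {n_keep} / {keep.numel()} units")

# 3. Build a pruned copy: drop rows of layer 1 and matching columns of layer 2.
pruned = nn.Sequential(
    nn.Linear(64, n_keep), nn.ReLU(),
    nn.Linear(n_keep, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
with torch.no_grad():
    pruned[0].weight.copy_(model[0].weight[keep])
    pruned[0].bias.copy_(model[0].bias[keep])
    pruned[2].weight.copy_(model[2].weight[:, keep])
    pruned[2].bias.copy_(model[2].bias)
    pruned[4].weight.copy_(model[4].weight)
    pruned[4].bias.copy_(model[4].bias)

# 4. Compare representational geometry at the second hidden layer:
#    correlate the pairwise-distance matrices before vs. after pruning.
def penultimate(m, inputs):
    with torch.no_grad():
        return m[:4](inputs)  # activations after the second ReLU

rdm_full = torch.cdist(penultimate(model, x), penultimate(model, x))
rdm_pruned = torch.cdist(penultimate(pruned, x), penultimate(pruned, x))

iu = torch.triu_indices(500, 500, offset=1)
r = torch.corrcoef(torch.stack([rdm_full[iu[0], iu[1]],
                                rdm_pruned[iu[0], iu[1]]]))[0, 1]
print(f"RDM correlation after pruning: {r:.3f}")
```

A high correlation between the two distance matrices would indicate that removing the rarely activated units left the model's representational geometry largely intact, which is the property the pruning method targets.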

Keywords: pruning; sparse; representational geometry; human similarity judgments
