
Poster B53 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink

Understanding Feature Learning in Neural Networks via Manifold Capacity and Effective Geometry

Hang Le1, Chi-Ning Chou1, Yichen Wang1,2, SueYeon Chung1,3; 1Flatiron Institute, 2University of California, Los Angeles, 3New York University

Humans learn to perform complicated tasks by incorporating task-relevant features into neural representations in the brain. This ability, known as feature learning, has been widely demonstrated in various brain areas as well as in artificial neural networks. However, fundamental questions, such as how to quantify the degree of feature learning and how to understand it mechanistically, remain open. In this work, we propose using manifold capacity theory to understand feature learning. Manifold capacity has been shown to quantify the task-relevant coding efficiency of neural representations beyond training and testing accuracy; an increase in capacity during learning can therefore be taken as a signature of task-relevant feature learning. Moreover, capacity is analytically linked to effective geometric measures such as manifold radius and dimension, so the dynamics of effective manifold geometry can further elucidate the mechanisms underlying feature learning. We demonstrate the applicability of manifold capacity and effective geometry to understanding feature learning in artificial neural networks. Concretely, we use these quantitative measures as mesoscopic descriptors of different learning strategies and stages throughout training. Finally, we use this understanding to explain how neural networks generalize to other tasks under distribution shift.
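For context, manifold capacity here refers to the critical load alpha* = P/N at which P object manifolds embedded in N neurons, given random binary labels, cease to be linearly separable; for point-like manifolds it recovers Cover's classical value alpha* = 2, and mean-field theory (Chung, Lee & Sompolinsky, 2018) links it to an effective manifold radius R_M and dimension D_M, roughly alpha ~ (1 + R_M^{-2}) / D_M when R_M * sqrt(D_M) is large. As a toy illustration of this quantity, and not the authors' mean-field estimator, the Python sketch below empirically probes the separability of random point-cloud manifolds as the load P/N grows; all parameters (N, points per manifold m, radius) are illustrative choices.

# Toy empirical probe of manifold capacity (illustrative only; this is
# not the mean-field estimator used in the paper). We sweep the load
# P/N and measure how often P randomly labeled point-cloud "manifolds"
# are linearly separable; capacity is the load where this drops to ~50%.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def separable_fraction(P, N, m=10, radius=0.3, trials=20):
    """Fraction of random dichotomies over P manifolds that a linear
    classifier separates perfectly (every point gets its manifold's label)."""
    hits = 0
    for _ in range(trials):
        centers = rng.standard_normal((P, N))               # manifold anchors
        pts = centers[:, None, :] + radius * rng.standard_normal((P, m, N))
        X = pts.reshape(P * m, N)
        y = np.repeat(rng.choice([-1, 1], size=P), m)       # one label per manifold
        clf = LinearSVC(C=1e6, max_iter=20000).fit(X, y)    # ~hard-margin linear classifier
        hits += clf.score(X, y) == 1.0
    return hits / trials

N = 50
for alpha in [0.5, 1.0, 1.5, 2.0, 2.5]:
    frac = separable_fraction(int(alpha * N), N)
    print(f"load P/N = {alpha:.1f}: separable fraction = {frac:.2f}")

Shrinking the radius toward 0 (or setting m=1) recovers the point-manifold regime, where the separable fraction stays near 1 up to loads around 2; larger radius or more spread-out manifolds lower the critical load, echoing the capacity-geometry link described in the abstract.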

Keywords: feature learning, neural representations, population geometry, manifold capacity
