
Poster A69 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

The Expressivity of Random Neural Networks with Learned Inputs

Avery Hee-Woon Ryoo1,2, Ezekiel Williams1,2, Thomas Jiralerspong1,2, Matthew G. Perich1,2, Luca Mazzucato3, Guillaume Lajoie1,2; 1Université de Montréal, 2Mila - Quebec AI Institute, 3University of Oregon

The expressivity of a neural network in which all weights are initialized randomly and only constant inputs (biases) are learned is not well studied, yet it is of interest in two domains. In neuroscience, the relative contributions of inputs from upstream regions versus local plasticity to learning in neural circuits (e.g., motor cortex) are poorly understood. In artificial intelligence (AI), recent empirical work has shown that fine-tuning biases alone can yield efficient multi-task learning. However, both fields lack a thorough understanding of the limits of input-only learning. Here, we provide theoretical and empirical evidence that a wide class of functions, as well as finite trajectories from many dynamical systems, can be well approximated by randomly initialized networks in which only the biases are optimized. These results extend our understanding of neural network models, providing guidance for future AI development and for models of inter-region learning in the brain.
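To make the bias-only training setup concrete, the sketch below (not the authors' implementation; the architecture, target function, and optimizer are illustrative assumptions) freezes all weights of a randomly initialized PyTorch network and optimizes only the bias vectors to fit a simple 1-D function.

import torch
import torch.nn as nn

torch.manual_seed(0)

# A small MLP whose weights stay at their random initialization;
# only the bias vectors (constant inputs to each layer) are trained.
class RandomWeightMLP(nn.Module):
    def __init__(self, in_dim=1, hidden=256, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, out_dim),
        )
        for name, p in self.net.named_parameters():
            # Freeze weight matrices, leave biases trainable.
            p.requires_grad = name.endswith("bias")

    def forward(self, x):
        return self.net(x)

# Fit a hypothetical 1-D target function by optimizing biases only.
model = RandomWeightMLP()
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-2
)

x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = torch.sin(3 * x)  # illustrative target, not from the paper

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE (bias-only training): {loss.item():.4f}")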

Keywords: deep learning; recurrent neural networks; neural dynamics; random networks
