Poster B44 in Poster Session B - Thursday, August 8, 2024, 1:30 – 3:30 pm, Johnson Ice Rink
Adaptive Learning Under Uncertainty With Variational Belief Deep Reinforcement Learning
Po-Chen Kuo1, Han Hou2, Edgar Y. Walker1; 1University of Washington, 2Allen Institute for Neural Dynamics
Animals live in environments that are inherently uncertain and constantly changing. To thrive, they must learn to mitigate uncertainty and achieve their goals. For instance, when foraging in stochastic and dynamic environments, animals learn to adapt their strategies based on experience and to trade off exploration against exploitation. Adaptive learning under uncertainty involves not only acquiring action-outcome contingencies but also discovering environmental regularities. Past computational modeling has largely studied these two aspects separately: contingency learning through reinforcement learning (RL), and structure learning through Bayesian inference. However, recent studies show that animals may combine different strategies, which calls for an integrated approach to understanding the computational basis of adaptive learning. Leveraging advances in deep RL and variational inference, we develop a flexible computational framework, variational belief deep RL, that integrates Bayesian inference with RL. Focusing on a series of dynamic foraging tasks with varied reward and temporal structures, we show how variational belief deep RL provides effective modeling tools for both structural inference and fast adaptation, and thereby helps elucidate the computational and neural mechanisms of adaptive learning under uncertainty.
Keywords: deep reinforcement learning; Bayesian inference; decision-making under uncertainty; foraging
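The abstract does not specify the model architecture, but the core idea of combining belief updating with reward-based choice can be illustrated with a minimal tabular sketch: an agent on a two-armed dynamic bandit (a simplified dynamic foraging task) that tracks an exact Bayesian belief over which arm currently pays better and selects actions by softmax over expected reward. This is a hypothetical illustration under assumed task parameters (`p_high`, `p_low`, `hazard`, `beta`), not the authors' variational deep RL model, which would replace the exact belief update with a learned variational posterior and the softmax rule with a trained policy network.

```python
import math
import random

def run_belief_agent(n_trials=2000, p_high=0.8, p_low=0.2,
                     hazard=0.02, beta=5.0, seed=0):
    """Tabular belief-state agent on a two-armed dynamic bandit.

    A hidden state z in {0, 1} marks which arm currently has the high
    reward probability; z switches with probability `hazard` each trial.
    The agent tracks b = P(z = 0 | history) with exact Bayesian updates
    and samples actions by softmax over expected reward under b.
    Returns the average reward per trial (chance level is 0.5).
    """
    rng = random.Random(seed)
    z = 0          # true hidden state: arm 0 is "high" when z == 0
    b = 0.5        # belief that z == 0
    total_reward = 0
    for _ in range(n_trials):
        # Expected reward of each arm under the current belief.
        q0 = b * p_high + (1 - b) * p_low
        q1 = b * p_low + (1 - b) * p_high
        # Two-action softmax reduces to a sigmoid of the value difference;
        # beta controls the exploration/exploitation trade-off.
        p_act0 = 1.0 / (1.0 + math.exp(-beta * (q0 - q1)))
        a = 0 if rng.random() < p_act0 else 1
        # Environment: reward is likely when the chosen arm is the high arm.
        p_r = p_high if (a == 0) == (z == 0) else p_low
        r = 1 if rng.random() < p_r else 0
        total_reward += r
        # Bayesian update of the belief given the action-outcome pair.
        lik0 = p_high if a == 0 else p_low    # P(r=1 | z=0, a)
        lik1 = p_low if a == 0 else p_high    # P(r=1 | z=1, a)
        if r == 0:
            lik0, lik1 = 1 - lik0, 1 - lik1
        post = b * lik0 / (b * lik0 + (1 - b) * lik1)
        # Propagate the belief through the known switching dynamics.
        b = post * (1 - hazard) + (1 - post) * hazard
        # The latent state may actually switch.
        if rng.random() < hazard:
            z = 1 - z
    return total_reward / n_trials
```

Because the agent's belief recovers quickly after each hidden switch, its average reward should clearly exceed the 0.5 chance rate; the deep variational version in the abstract targets the same adaptation but in richer reward and temporal structures where exact inference is intractable.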