
Poster A64 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Diffusion models and reinforcement learning: Novel pathways to modeling decoded fMRI neurofeedback

Hojjat Azimi Asrari1, Megan Peters1; 1University of California, Irvine

This study explores the application of diffusion models and reinforcement learning to model Decoded Neurofeedback (DecNef) as applied via functional magnetic resonance imaging (fMRI). Our methodology, Denoising Diffusion Policy Optimization (DDPO), integrates diffusion models trained via reinforcement learning to navigate the complex dynamics of brain activity changes. Using a pre-existing DecNef dataset, we implemented policy gradient methods to iteratively refine the diffusion models, aiming to produce target patterns of neural (voxel) activity. Our results demonstrate the potential of this approach for accurately modeling policies that achieve target brain states, offering a foundation for investigating the mechanisms of neurofeedback, for basic science research, and for designing more effective neurofeedback experiments.
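To make the approach concrete, the sketch below illustrates a DDPO-style policy-gradient update in which a small denoising network is treated as a stochastic policy over voxel-pattern trajectories and is rewarded for ending near a target activity pattern. This is a minimal illustration under our own assumptions, not the authors' implementation: the network architecture, voxel count, reward function, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' code): REINFORCE-style fine-tuning
# of a toy reverse-diffusion "policy" toward a target voxel-activity pattern.
import torch
import torch.nn as nn

N_VOXELS, T_STEPS, BATCH = 64, 10, 32
target_pattern = torch.randn(N_VOXELS)   # stand-in for a decoded target brain state

# Toy denoiser: predicts the mean of the reverse (denoising) step from (x_t, t).
denoiser = nn.Sequential(nn.Linear(N_VOXELS + 1, 128), nn.ReLU(), nn.Linear(128, N_VOXELS))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
sigma = 0.1                               # fixed std of the Gaussian reverse step

def sample_trajectory():
    """Roll out the reverse diffusion chain, logging per-step log-probs."""
    x = torch.randn(BATCH, N_VOXELS)      # start from pure noise
    log_probs = []
    for t in reversed(range(T_STEPS)):
        t_feat = torch.full((BATCH, 1), t / T_STEPS)
        mean = denoiser(torch.cat([x, t_feat], dim=-1))
        dist = torch.distributions.Normal(mean, sigma)
        x = dist.sample()                 # stochastic denoising step = policy action
        log_probs.append(dist.log_prob(x).sum(-1))
    return x, torch.stack(log_probs, dim=0)   # final pattern, (T, BATCH) log-probs

for step in range(200):
    x0, log_probs = sample_trajectory()
    # Reward: closeness of the generated voxel pattern to the target state.
    reward = -((x0 - target_pattern) ** 2).mean(dim=-1)
    advantage = (reward - reward.mean()) / (reward.std() + 1e-8)
    # Policy-gradient objective over the whole denoising trajectory.
    loss = -(log_probs * advantage.detach()).sum(0).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the full method, the denoiser would be a pretrained diffusion model over fMRI voxel patterns and the reward would reflect the DecNef decoding target; the sketch only conveys the structure of the policy-gradient update.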

Keywords: Decoded Neurofeedback (DecNef), Diffusion Models, Explainable Reinforcement Learning, fMRI
