
Poster C58 in Poster Session C - Friday, August 9, 2024, 11:15 am – 1:15 pm, Johnson Ice Rink

Are LLMs tools to understand human neurocognition during abstract reasoning?

Christopher Pinier, Claire E. Stevenson, Michael D. Nunez; University of Amsterdam

Abstract reasoning, a key component of human intelligence, seems to have recently emerged in large language models (LLMs). If so, LLMs could help provide a mechanistic explanation of the brain processes underlying human abstract reasoning. In this study, we compared the performance of multiple LLMs to human performance on a visual abstract reasoning task. We found that while most LLMs cannot perform the task as well as human participants, some are competent enough to serve as potential descriptive models. We propose that the best-performing LLMs can be used as models to understand human performance, response times, and the timing of Event-Related Potentials (ERPs) recorded by electroencephalography (EEG) during the task. We present initial behavioral and ERP results, along with our plan to compare LLM embeddings and surprisal measures to cortical activity patterns. This is the first step in a larger project to create neurally informed artificial networks as tools for understanding human neurocognition.
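To make the planned embeddings-and-surprisal comparison concrete, the sketch below shows one common way such an analysis could be set up; it is not the authors' code. The model name ("gpt2"), the text-encoded stimuli, and the ERP amplitude values are illustrative placeholders only, and the poster does not specify which LLMs, stimuli, or ERP components are used.

```python
# A minimal sketch (assumptions noted above): compute per-item LLM
# surprisal with a Hugging Face causal LM, then correlate it with
# single-trial ERP amplitudes.
import torch
import numpy as np
from scipy.stats import pearsonr
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; the poster does not name a model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_surprisal(text: str) -> float:
    """Mean per-token surprisal (negative log-probability, in nats)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy
        # of its next-token predictions, i.e. mean surprisal.
        loss = model(ids, labels=ids).loss
    return loss.item()

# Hypothetical inputs: one text-encoded reasoning item per trial and a
# matching vector of single-trial ERP amplitudes (values are made up).
stimuli = ["A B A B A ?", "1 2 4 8 ?", "X Y Y X X Y ?"]
erp_amplitudes = np.array([2.1, 3.4, 2.8])  # microvolts, illustrative

llm_surprisal = np.array([mean_surprisal(s) for s in stimuli])
r, p = pearsonr(llm_surprisal, erp_amplitudes)
print(f"surprisal-ERP correlation: r={r:.2f}, p={p:.3f}")
```

An embedding-based variant of the same idea would extract the model's hidden states per item instead of its loss and compare them to cortical activity patterns, e.g. via representational similarity or encoding models.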

Keywords: Abstract Reasoning, AI, Large Language Models (LLMs), Event-Related Potentials (ERPs)
