
Poster A88 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Evaluating Predictive Performance and Learning Efficiency of Large Language Models with Think Aloud in Risky Decision Making

Hanbo Xie¹, Huadong Xiong¹, Robert Wilson¹; ¹The University of Arizona

Predicting human behavior and explaining the underlying mental processes have long been objectives in psychology. Traditionally, researchers have used computational models to hypothesize cognitive processes and have tested these hypotheses against behavioral data. However, this approach is constrained by the researchers' theoretical insights and may not directly reflect the actual underlying cognitive functions. The Think Aloud protocol offers a more direct approach by capturing participants' thought processes on a trial-by-trial basis during cognitive tasks; however, analyzing the resulting verbal data is labor-intensive and subjective. Advances in Natural Language Processing and Large Language Models (LLMs) significantly mitigate these challenges, enabling comprehensive and scalable analysis of verbal data. This study evaluates the effectiveness of LLMs combined with Think Aloud data as cognitive models in risky decision-making tasks, comparing their predictive performance and learning efficiency with those of well-established symbolic models and small neural networks. Our results indicate that LLMs enhanced with Think Aloud data accurately predict human decisions and show superior training efficiency and generalizability with minimal data. This approach advances our computational understanding of human decision-making, shedding light on the mechanisms of human cognitive computation.
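As a purely illustrative sketch of the kind of model comparison the abstract describes, the toy Python snippet below scores two hypothetical trial-level choice predictors by mean negative log-likelihood. All names, probabilities, and choices here are made up for illustration; the poster's actual models, prompts, and data are in the paper itself.

```python
# Toy comparison of two cognitive models of risky choice by predictive
# performance. Everything here is hypothetical placeholder data.
import math

# Each trial: a binary risky choice (1 = gamble, 0 = safe), plus each
# model's predicted probability of choosing the gamble on that trial.
human_choices  = [1, 0, 1, 1, 0]              # observed decisions (toy data)
llm_probs      = [0.8, 0.3, 0.7, 0.9, 0.2]    # e.g., from an LLM given Think Aloud text
symbolic_probs = [0.6, 0.5, 0.6, 0.7, 0.4]    # e.g., from a fitted symbolic model

def mean_nll(choices, probs):
    """Mean negative log-likelihood of the choices; lower = better prediction."""
    return -sum(
        math.log(p if c == 1 else 1.0 - p)
        for c, p in zip(choices, probs)
    ) / len(choices)

print(f"LLM + Think Aloud NLL: {mean_nll(human_choices, llm_probs):.3f}")
print(f"Symbolic model NLL:    {mean_nll(human_choices, symbolic_probs):.3f}")
```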

Keywords: Large Language Models; Think Aloud Protocol; Risky Decision-Making; Computational Modeling
