
Poster A112 in Poster Session A - Tuesday, August 6, 2024, 4:15 – 6:15 pm, Johnson Ice Rink

Probing semantic and visual representations in material perception through psychophysics and unsupervised learning

Chenxi Liao1, Masataka Sawayama2, Bei Xiao1; 1American University, 2The University of Tokyo

We investigate the relationship between visual judgment and language expression in material perception to understand how visual features relate to semantic or categorical representations. We use deep generative networks to construct an expandable image space that systematically samples familiar and unfamiliar materials. We compare the perceptual representations of materials obtained from two tasks, visual material similarity judgment and verbal description, and find a moderate correlation between vision and language within individuals. However, we also find a gap between the two modalities, indicating that while verbal descriptions capture material qualities at a coarse level, they may not fully convey visual nuances. Furthermore, we examine image representations of materials derived from various data-rich neural network models and show that the image features distilled from these models have the potential to capture the human representation of materials.
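To make the comparison between the two modalities concrete, the sketch below shows one common way such an analysis can be set up: build a representational dissimilarity matrix (RDM) from visual similarity judgments, build another from embedded verbal descriptions, and correlate their upper triangles. This is a minimal illustration, not the authors' code; the random placeholder arrays, the variable names (visual_rdm, verbal_rdm, text_embeddings), and the choice of Spearman correlation are assumptions for demonstration only.

```python
# Minimal RSA-style sketch (illustrative; not the authors' implementation).
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n_materials = 20

# Hypothetical stand-ins for the two tasks:
# (1) visual similarity judgments summarized as an n x n dissimilarity matrix
visual_rdm = squareform(pdist(rng.random((n_materials, 8))))

# (2) verbal descriptions embedded into vectors (e.g., by a sentence encoder),
#     then converted to a cosine dissimilarity matrix
text_embeddings = rng.random((n_materials, 16))
verbal_rdm = squareform(pdist(text_embeddings, metric="cosine"))

# Correlate the upper triangles of the two RDMs to quantify
# vision-language correspondence for one observer.
iu = np.triu_indices(n_materials, k=1)
rho, p = spearmanr(visual_rdm[iu], verbal_rdm[iu])
print(f"vision-language correlation: rho = {rho:.2f} (p = {p:.3f})")
```

With real data, the judgment-based and description-based RDMs would replace the random placeholders, and the resulting correlation would indicate how closely the verbal representation tracks the visual one for a given observer.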

Keywords: Human perception; Transfer learning; Generative model; Large Language Models
