Whodunnit? Inferring what happened from multimodal evidence

Abstract

Humans are remarkably adept at inferring the causes of events in their environment; doing so often requires incorporating information from multiple sensory modalities. For instance, if a car slows down in front of us, our inferences about why the driver did so are rapidly revised if we also hear sirens in the distance. Here, we investigate people's ability to reconstruct others' past actions and events by integrating multimodal information. Participants were asked to infer which of two agents performed an action in a household setting given visual evidence, auditory evidence, or both. We develop a computational model that makes inferences by generating multimodal simulations, and we also evaluate our task on a large language model (GPT-4) and a large multimodal model (GPT-4V). We find that humans are relatively accurate overall and perform best when given multimodal evidence. GPT-4 and GPT-4V come close to human accuracy overall, but their responses are only very weakly correlated with participants' responses across individual trials. Meanwhile, the simulation model captures the pattern of human responses well. Multimodal event reconstruction remains a challenge for current AI systems, and frameworks that draw on the cognitive processes underlying people's ability to reconstruct events offer a promising way forward.

Publication
Wu, S. A., Brockbank, E., Cha, H., Fränken, J.-P., Jin, E., Huang, Z., Liu, W., Zhang, R., Wu, J., & Gerstenberg, T. (2024). Whodunnit? Inferring what happened from multimodal evidence. In Proceedings of the 46th Annual Conference of the Cognitive Science Society.