People use varied language to express their causal understanding of the world. But how does that language map onto people’s underlying representations, and how do people choose among competing ways of describing what happened? In this paper we develop a model that integrates computational tools for causal judgment and pragmatic inference to address these questions. The model has three components: a causal inference component that runs counterfactual simulations capturing whether and how a candidate cause made a difference to the outcome; a literal semantics that maps the outcomes of these counterfactual simulations onto different causal expressions (such as ‘caused’, ‘enabled’, ‘affected’, or ‘made no difference’); and a pragmatics component that considers how informative each causal expression would be for figuring out what happened. We test our model in an experiment that asks participants to select the expression that best describes what happened in video clips depicting physical interactions.
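To make the three-component pipeline concrete, here is a minimal illustrative sketch (not the authors’ code) of the literal-semantics and pragmatics steps, in the style of a rational-speech-acts speaker. The set of “meanings” (ways a cause can make a whether/how difference to the outcome), the truth table, and the rationality parameter `alpha` are all hypothetical placeholders; the paper’s actual semantics is derived from counterfactual simulations rather than stipulated.

```python
# Hypothetical summary of counterfactual simulation results: whether the
# candidate cause made a whether-difference, a how-difference, both, or none.
EXPRESSIONS = ["caused", "enabled", "affected", "made no difference"]
MEANINGS = ["whether+how", "whether", "how", "none"]

# Placeholder literal semantics: 1 if the expression is literally true
# of the meaning, 0 otherwise (illustrative values, not from the paper).
LITERAL = {
    "caused":             {"whether+how": 1, "whether": 0, "how": 0, "none": 0},
    "enabled":            {"whether+how": 1, "whether": 1, "how": 0, "none": 0},
    "affected":           {"whether+how": 1, "whether": 1, "how": 1, "none": 0},
    "made no difference": {"whether+how": 0, "whether": 0, "how": 0, "none": 1},
}

def literal_listener(expr):
    """P(meaning | expr): uniform over meanings the expression is true of."""
    truth = LITERAL[expr]
    total = sum(truth.values())
    return {m: truth[m] / total for m in MEANINGS}

def pragmatic_speaker(meaning, alpha=4.0):
    """P(expr | meaning): prefers expressions that are informative, i.e.
    that would lead a literal listener to recover the intended meaning."""
    scores = {e: literal_listener(e)[meaning] ** alpha for e in EXPRESSIONS}
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}
```

For example, for a cause that made only a whether-difference, this speaker prefers ‘enabled’ over the weaker ‘affected’, because a literal listener hearing ‘enabled’ is more likely to recover that meaning; this is the sense in which pragmatic informativeness selects among literally true descriptions.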