A counterfactual simulation model of causal language

Abstract

The words we use to describe what happened shape the story a listener imagines. How do speakers choose which causal expression to use? How do these choices affect what listeners infer about what happened? In this paper, we develop a computational model of how people use the causal expressions ‘caused’, ‘enabled’, ‘affected’, and ‘made no difference’. The model first builds a causal representation of what happened. By running counterfactual simulations, the model computes causal aspects that capture the different ways in which a candidate cause made a difference to the outcome. Logical combinations of these aspects define a semantics for the different causal expressions. The model then uses pragmatic inference favoring informative utterances to decide which word to use in context. We test our model in a series of experiments. In a set of psycholinguistic studies, we verify semantic and pragmatic assumptions of our model. We show that the causal expressions lie on a hierarchy of informativeness, and that participants draw informative pragmatic inferences in line with this scale. In the next two studies, we demonstrate that our model quantitatively fits participant behavior in a speaker task and a listener task involving dynamic physical scenarios. We compare our model to two lesioned alternatives: one that removes the pragmatic inference component, and another that additionally removes the semantics of the causal expressions. Our full model better accounts for participants’ behavior than both alternatives, suggesting that causal knowledge, semantics, and pragmatics are all important for understanding how people produce and comprehend causal language.
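To illustrate the pipeline the abstract describes, the sketch below assumes three boolean counterfactual aspects ("whether", "how", "sufficient") and placeholder logical combinations for the literal semantics of each expression; the speaker is a standard rational-speech-acts softmax that favors informative true utterances. The aspect set, the logical definitions, and the alpha parameter are illustrative assumptions, not the paper's exact formulation.

```python
import math
from itertools import product

# Illustrative counterfactual aspects of how a candidate cause mattered to the
# outcome. They are treated as booleans here; the paper derives graded versions
# from counterfactual simulations of the physical scenario.
ASPECTS = ("whether", "how", "sufficient")

UTTERANCES = ["caused", "enabled", "affected", "made no difference"]

def semantics(utterance, aspects):
    """Hypothetical literal semantics: each expression is true of a state
    (a dict of aspect -> bool) if a particular logical combination holds.
    These combinations are placeholders, not the paper's definitions."""
    w, h, s = aspects["whether"], aspects["how"], aspects["sufficient"]
    if utterance == "caused":
        return w and h and s
    if utterance == "enabled":
        return w or s
    if utterance == "affected":
        return h
    if utterance == "made no difference":
        return not (w or h or s)
    raise ValueError(utterance)

# All possible aspect states.
STATES = [dict(zip(ASPECTS, vals)) for vals in product([True, False], repeat=3)]

def literal_listener(utterance):
    """P_L0(state | utterance): uniform over the states where the utterance is true."""
    true_states = [s for s in STATES if semantics(utterance, s)]
    return {tuple(s.values()): 1 / len(true_states) for s in true_states}

def pragmatic_speaker(aspects, alpha=3.0):
    """P_S1(utterance | state) proportional to exp(alpha * log P_L0(state | utterance)):
    the speaker prefers true utterances that pick out the state informatively."""
    key = tuple(aspects[a] for a in ASPECTS)
    scores = {}
    for u in UTTERANCES:
        p = literal_listener(u).get(key, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    total = sum(scores.values())
    return {u: v / total for u, v in scores.items()}

# Example: a cause that made a difference in every way is most likely described
# as having 'caused' the outcome, since 'caused' is the most specific true option.
print(pragmatic_speaker({"whether": True, "how": True, "sufficient": True}))
```

On this toy semantics, ‘caused’ is true of fewer aspect states than ‘enabled’ or ‘affected’, so the speaker's preference for it when all aspects hold falls out of the informativeness scale the abstract describes, rather than being stipulated.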

Publication
Beller A., Gerstenberg T. (2023). A counterfactual simulation model of causal language. PsyArXiv.