There are at least three ways of learning how the world works: from observations, from interventions, and from explanations. Prior work on causal inference has focused on how people learn causal structures through observation and intervention. Our study is the first to examine how explanations support causal structure learning. We develop a normative inference model that learns from observations and explanations, and compare its predictions to participants' judgments. The task is to infer the causal connections in 3-node graphs from information about the nodes' co-activation and from explanations of the form 'B activated because A activated'. We find that participants learn better from explanations than from observations. However, while the normative model benefits from having observations in addition to explanations, participants do not.