There is wide agreement that the mind has different mechanisms it can use to make moral judgments. But how does it decide which one to use, and when? Recent theoretical work has suggested that people select mechanisms of moral judgment in a way that is resource-rational — that is, by rationally trading off effort against utility. For instance, people may follow general rules in low-stakes situations, but engage more computationally intensive mechanisms such as consequentialist or contractualist reasoning when the stakes are high. Here, we evaluate whether humans and large language models (LLMs) exhibit resource-rational moral reasoning in two moral dilemmas by manipulating the stakes of each scenario. As predicted, we found that the higher the stakes, the more likely people were to employ a more effortful mechanism rather than follow a general rule. However, there was mixed evidence for similar resource-rational moral reasoning in the LLMs. Our results provide evidence that people's moral judgments reflect resource-rational cognitive constraints, and they highlight opportunities for developing AI systems better aligned with human moral values.