Fifteen years ago, Britain’s Conservative party, then in opposition, latched on to behavioural economics as an attractive alternative to old-fashioned nannying interference in people’s affairs. Richard Thaler and Cass Sunstein’s best-seller Nudge was included in Tory MPs’ 2008 summer reading list. Once in power, David Cameron set up the Behavioural Insights Team at the heart of government.
Other governments established their own “nudge units”, using Thaler and Sunstein’s brand of “libertarian paternalism” to guide citizens towards better choices in areas from pension enrolment to organ donation. It is hardly a surprise that nudging turned out not to be a silver bullet for knotty policy dilemmas. The FT warned nudges should not be confused with “a coherent political philosophy”.
The reputation of behavioural science has been badly dented recently. But the present danger is a different one: that policymakers might abandon a useful complement to traditional legislative and regulatory action when it still has much to offer.
Concerns about the robustness of behavioural science started to spread in the 2010s as it proved hard to replicate some headline-grabbing findings at scale. For instance, later studies cast doubt on research that seemed to show that adopting a “power pose” increased testosterone and lowered cortisol. Some of the effects of “priming”, or exposing someone to a prompt that subconsciously influences their actions, have been discredited.
More recently, Francesca Gino, a high-profile Harvard expert on dishonesty, faced accusations of fraud in papers she had co-authored. This month, Gino brought a defamation suit against Harvard and the bloggers who had made the allegations, stating: “I have never, ever falsified data or engaged in research misconduct of any kind.” Dan Ariely, another star behavioural scientist, is under investigation by his university, Duke, following concerns about his research into dishonesty. “What I know for sure is that I never did, nor ever would, falsify data,” he has told the FT.
It is important to distinguish between fraudulent findings, which need to be investigated and exposed, false positives, which replication should weed out, and robust results that have been tested at scale. Accusing behavioural science of “physics envy”, as some critics have done, is unhelpful. It is the responsibility of universities, academics and scientific journals to improve the quality of output. That could involve different measures, such as more preregistration of hypotheses to stop researchers cherry-picking results, wider sharing of raw data, and curbs on the academy’s “publish-or-perish” culture.
A further distinction needs to be made between behavioural science and behavioural economics. The economists take the scientists’ findings and examine the consequences, intended and unintended. Policymakers applying such findings in the real world have an even greater responsibility than academics, let alone the media, not to hype exciting experimental results. But they also have the advantage that they are able to test behavioural economic policy at scale, yielding results more robust than laboratory experiments.
It is important to understand the limits of behavioural economic policies. In a recently published manifesto for applying behavioural science, the BIT, which has now been spun out from the UK government, urges humility. It points out that even apparently universal cognitive processes are shaped by their context, for instance. Despite the caveats, though, behavioural science has enlarged a discipline that had laid dangerous emphasis on the idea of humans as perfectly rational economic computers of risks and rewards. That the field should now be revealing some of its human flaws is strangely appropriate. But it is not a reason to ditch it entirely.