22 October, 2018
Sam Ashworth-Hayes and Matthew Hellon
Behavioural economics can sometimes seem very simple. People tend to save too little, but if we enrol them in automatic deductions by default, they save more. People use too much water, but if we tell them their neighbours use less, they cut back. Pilots waste fuel, but well-timed nudges can fix this. The hidden catch is that while successful interventions are easy to describe, they are much harder to design.
Seemingly small changes in the way a message is delivered or worded can lead to significant differences in outcomes. In a 2013 paper, The Behaviouralist’s cofounder Rob Metcalfe found that while a social norm message delivered by letter was effective in changing energy use, the same message delivered by email had no effect on behaviour. To generate effective behavioural change, we need to understand the context a message is sent in, how the consumer will view it, and how that might translate into action.
Take another example. A number of economists have successfully used social norm messaging to improve tax collection rates. But in a recent study by Peter John, an attempt to use social norm messaging to improve tax collection in Lambeth actually backfired. Subtle differences between the messages used in this experiment and those deployed in previous, successful attempts led to the letters reducing payments.
Designing behavioural interventions is a bit like “routine” surgery: it’s routine for the expert, but it’s probably not a good idea to open yourself up and start pulling bits out. And just as you wouldn’t introduce a new procedure without evidence to back it up, it’s important to test interventions before trying them at scale. Rolling an unsuccessful intervention out to a client can cost a small fortune in lost revenue – both the direct losses if the intervention backfires and the revenue forgone by not using a better design.
This, broadly, is why we approach things the way we do. We design our nudges by building in the latest academic insights and our considerable experience of successful interventions, but we also test them rigorously. Our standard method is to test interventions in a randomised controlled trial (RCT), often considered the gold standard for programme evaluation. This not only gives our clients confidence that the effects we detect are real and causal, but also allows them to ‘see’ the effect of the letter in a small group before the full roll-out – avoiding costly errors. To learn more about our approach and see how we can help you, get in touch with us here.
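For readers curious about the mechanics, the core logic of an RCT can be sketched in a few lines of code. This is an illustration only, not our actual tooling: the function names, the 50/50 split, and the two-proportion z-test are all simplifying assumptions made for the example.

```python
import random
import math

def assign_rct(customers, treat_fraction=0.5, seed=42):
    """Randomly assign each customer to the new letter (treatment)
    or the existing letter (control). Randomisation is what lets us
    interpret any difference in outcomes as causal."""
    rng = random.Random(seed)
    return {c: ("treatment" if rng.random() < treat_fraction else "control")
            for c in customers}

def two_proportion_z(p1, n1, p2, n2):
    """Normal-approximation z-statistic for the difference between two
    payment (or response) rates, using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical pilot: 62% of 500 treated customers paid vs 55% of 500 controls.
z = two_proportion_z(0.62, 500, 0.55, 500)
significant = abs(z) > 1.96  # roughly the 5% two-sided threshold
```

In practice an evaluation would also involve power calculations, covariate checks, and more careful inference, but the pilot-before-rollout logic is exactly this: compare a randomly assigned small group before committing the full customer base.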