From Understanding to Designing
In 2008, Richard Thaler and Cass Sunstein published Nudge, a book that changed public policy worldwide. Their central argument was elegant: by redesigning the environment in which people make decisions (the 'choice architecture'), you can predictably shift behaviour toward better outcomes without restricting options or applying sanctions. The UK government established the Behavioural Insights Team in 2010, later known informally as the Nudge Unit, and dozens of governments followed. The approach has since produced measurable improvements in tax compliance, pension enrolment, organ donation, and public health across multiple countries (Thaler and Sunstein, 2008).
Cybersecurity has been slower to adopt these principles, but the logic transfers directly. If training cannot reliably change the cognitive processes that make people vulnerable, as Parts 1 and 2 of this series established, then the practical alternative is to redesign the environment so that those processes lead toward secure outcomes rather than insecure ones. The employee does not need to think differently. The system needs to make the secure choice the path of least resistance.
The EAST Framework
The Behavioural Insights Team distilled the lessons of applied behavioural science into a practical design framework: EAST. The acronym stands for Easy, Attractive, Social, and Timely. Each element identifies a lever that can be used to increase the likelihood of a target behaviour, and together they provide a structured method for diagnosing why a behaviour is not occurring and designing interventions to address the specific barrier (Service et al., 2014).
| EAST Element | Core Principle | Security Application | Example Intervention |
|---|---|---|---|
| Easy | Reduce friction on the desired behaviour; increase friction on the undesired one | Make reporting a suspicious email take one click, not seven | Dedicated 'Report Phishing' button integrated directly into the email client |
| Attractive | Draw attention to the desired behaviour using salient design and relevant framing | Make security notifications visually distinct and personally relevant | Risk alerts that name the specific system the employee uses, not generic warnings |
| Social | Use peer norms and social proof to signal that secure behaviour is the expected standard | Communicate what most people in the organisation actually do | '87% of your team already uses the password manager' rather than 'Please use the password manager' |
| Timely | Deliver prompts and interventions at the moment of decision, not in advance | Surface security guidance when it is immediately relevant | MFA setup prompt triggered at first login to a sensitive system, not in a monthly newsletter |

Source: Adapted from Service et al. (2014), EAST: Four Simple Ways to Apply Behavioural Insights, Behavioural Insights Team.
A Case Study in Friction Reduction
A UK financial services organisation had a phishing reporting rate of under three per cent despite a mature awareness programme. Analysis of the reporting process revealed that it required employees to forward suspicious emails to a dedicated mailbox, copy in their line manager, delete the original, and log the report in a ticketing system. The process involved seven discrete steps, took approximately four minutes, and generated no visible feedback to the reporter.
After deploying a one-click reporting button integrated directly into the email client, with immediate automated acknowledgement and a monthly digest showing how many confirmed threats had been caught, the reporting rate rose to 31 per cent within 90 days. Nothing about the employees' knowledge or motivation had changed. The secure behaviour had simply been made easy enough to compete with the alternative: doing nothing.
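The mechanics of the intervention can be sketched in miniature. The sketch below is illustrative, not the organisation's actual implementation: `ReportingService`, the acknowledgement wording, and the digest format are all hypothetical, but they show the two design elements the case study credits, a single-action report and immediate visible feedback.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReportingService:
    """Minimal one-click phishing report flow: log the report and
    acknowledge immediately, so the reporter gets visible feedback."""
    reports: list = field(default_factory=list)

    def report(self, user: str, message_id: str) -> str:
        # One action by the employee; everything else is automated.
        self.reports.append({
            "user": user,
            "message_id": message_id,
            "received": datetime.now(timezone.utc).isoformat(),
        })
        # Immediate automated acknowledgement closes the feedback loop.
        return f"Thanks, {user} - your report has been received and is being analysed."

    def digest(self, confirmed_threats: int) -> str:
        # Periodic digest showing what the reporting programme caught.
        return (f"{len(self.reports)} emails reported so far; "
                f"{confirmed_threats} were confirmed threats.")

svc = ReportingService()
ack = svc.report("alice", "msg-123")
print(ack)
print(svc.digest(confirmed_threats=4))
```

Note that the employee's side of the interaction is one call; the deletion, ticketing, and triage steps of the original seven-step process would all live behind it.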
> Reporting rate rose from under 3% to 31% in 90 days. No training change. No awareness campaign. Only friction reduction and visible feedback.
This case illustrates a principle that is consistent across the evidence base: the single most powerful intervention in security behaviour is usually friction reduction, not information provision. When secure behaviour requires less effort than insecure behaviour, people perform it. When it requires more effort, they do not, regardless of how much they know about why it matters.
Choice Architecture for Security Teams
Beyond nudges, choice architecture offers a broader set of design techniques that security teams can apply to the environments in which security decisions are made. Four are particularly relevant.
Default Settings
Defaults are the most powerful tool in the choice architect's repertoire. People disproportionately stick with whatever option is presented as the default, whether this is the pre-selected option in a form, the factory setting on a device, or the standard configuration of a software tool. This tendency is robust, consistent across cultures, and resistant to information-based persuasion.
For security, the implication is direct: MFA should be on by default, not presented as an option. Automatic screen lock should be configured, not left to the user to enable. Secure browser settings should be the baseline, not the result of a deliberate configuration choice. Every insecure default in your environment is an architectural decision that is working against you.
Commitment Devices
A commitment device is a mechanism by which people voluntarily constrain their future choices to make it easier to stick to a desired behaviour. In behavioural economics, classic examples include automatic pension contributions and pre-commitment savings accounts. In security, the equivalent is the pre-approved security policy that removes the decision from the individual at the point of pressure.
When a finance employee has pre-agreed, in a low-pressure setting, that they will always call back a known number to verify any payment instruction received by email or phone, they are not making a judgement call in the moment of attack. They are executing a standing commitment. The attacker's pressure tactics are less effective because the employee's behaviour is not determined by their real-time assessment of the situation.
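The call-back commitment can be expressed as a trivially simple rule, which is precisely the point: no field of the incoming message influences the verification step. The vendor registry and phone numbers below are invented for illustration.

```python
# Hypothetical vendor master file, agreed in advance in a low-pressure
# setting. The standing commitment: verify payment instructions ONLY via
# a number from this registry, never one supplied in the message itself.
KNOWN_NUMBERS = {"acme-ltd": "+44 20 7946 0000"}

def verification_number(vendor_id: str, number_in_message: str) -> str:
    """Return the pre-approved callback number. The number quoted in the
    email or call is deliberately ignored, however urgent the message:
    the employee executes the commitment, not a real-time judgement."""
    return KNOWN_NUMBERS[vendor_id]

# An attacker-supplied number has no effect on the outcome.
print(verification_number("acme-ltd", "+44 7700 900000"))
```

The attacker's input simply has no path into the decision, which is what removes the judgement call from the moment of pressure.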
Social Norms Communication
Most organisations communicate their security expectations through policy documents, training courses, and periodic reminders. Relatively few systematically communicate what employees actually do. This is a missed opportunity, because descriptive social norms (what most people do) tend to be more persuasive than injunctive norms (what people ought to do) when the two diverge.
If 78 per cent of employees in a given department are already using the approved password manager, communicating that fact to the remaining 22 per cent is likely to be more effective than restating the policy requirement. If reporting rates are rising, sharing that trend makes the reporting behaviour feel like the normal thing to do. The evidence from public health and environmental behaviour is consistent: communicating accurate positive norms shifts behaviour toward the norm.
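A descriptive-norm message is easy to generate from adoption data, with one caveat the behavioural literature flags: broadcasting a minority norm ('only 22% use the password manager') can normalise the undesired behaviour, so a sketch should fall back to an injunctive framing below a majority threshold. The function and threshold below are illustrative assumptions.

```python
def norm_message(adopters: int, total: int, behaviour: str) -> str:
    """Descriptive-norm framing: state what most people actually do,
    rather than restating the policy requirement."""
    pct = round(100 * adopters / total)
    if pct < 50:
        # Advertising a minority norm can backfire by signalling that
        # non-adoption is common; use an injunctive framing instead.
        return f"Please start using {behaviour}."
    return f"{pct}% of your team already uses {behaviour}."

print(norm_message(78, 100, "the password manager"))
# → "78% of your team already uses the password manager."
```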
Feedback Loops
One of the reasons security behaviour is difficult to sustain is that the consequences of secure behaviour are typically invisible. Employees who report a phishing email rarely find out whether it was a genuine threat. Those who use MFA never see the attack attempts it blocked. In the absence of feedback, the brain's reinforcement systems cannot connect the behaviour to any meaningful outcome, which makes the behaviour feel effortful and arbitrary.
Closing feedback loops is one of the highest-return interventions available to security teams. This does not require complex technology. A monthly communication that quantifies what the reporting programme caught, what the patch programme prevented, or what the access controls blocked provides the narrative connection between behaviour and consequence that sustains engagement. Employees who can see that what they do matters are more likely to keep doing it.
Putting It Together: A Five-Step Intervention Design Process
| Step | Question to Answer | Tools and Methods |
|---|---|---|
| 1. Define the target behaviour | What specific, observable action do you want to increase or decrease? | Be concrete: 'employees report suspicious emails within one hour', not 'employees are security aware' |
| 2. Diagnose the barrier | Is the barrier capability, opportunity, or motivation (COM-B)? | Employee interviews, process mapping, behavioural observation, data analysis |
| 3. Select the intervention type | Which EAST element or choice architecture technique addresses the diagnosed barrier? | Match intervention type to barrier: friction reduction for opportunity barriers, social norms for motivation barriers |
| 4. Design for context | How will this intervention fit into the actual workflow without adding burden? | Prototype with a small user group; observe, do not just survey |
| 5. Measure and iterate | Did the behaviour change? By how much? What drove the change? | Leading indicators (behaviour rates) and lagging indicators (incident data); Part 4 covers this in detail |
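Step 5 hinges on having a defined leading indicator before the intervention launches. A minimal sketch of one such indicator, using hypothetical figures in the spirit of the earlier case study:

```python
def reporting_rate(reports: int, known_phish: int) -> float:
    """Leading indicator: share of known phishing emails (simulated or
    confirmed real) that employees actually reported."""
    return reports / known_phish

# Hypothetical pre- and post-intervention measurements over the same
# denominator, so the two rates are directly comparable.
before = reporting_rate(12, 400)
after = reporting_rate(124, 400)
print(f"before: {before:.0%}, after: {after:.0%}")
# → "before: 3%, after: 31%"
```

The same structure applies to any target behaviour from step 1: a numerator of observed secure actions over a denominator of opportunities, measured identically before and after.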
The Ethical Dimension
Any discussion of nudging and choice architecture must address the ethics of behavioural influence. The distinction between a nudge and manipulation is not always obvious, but Thaler and Sunstein's original framework provides a useful test: a nudge should preserve full freedom of choice, be transparent in intent, and serve the interests of the person being nudged rather than primarily those of the designer.
Punitive phishing simulations that publicly shame employees, link simulation performance to performance reviews, or use deceptive pretexts that exploit personal circumstances fail this test and violate every principle discussed in this series. Research by Lain and colleagues (2022) found that organisations using punitive phishing programmes had lower reporting rates than those using supportive approaches, suggesting that the harm to security culture outweighs any short-term awareness benefit. If your intervention would not survive public disclosure, it probably should not be deployed.
References
Lain, D., Kostiainen, K. and Capkun, S. (2022). Phishing in organizations: Findings from a large-scale and long-term study. In 2022 IEEE Symposium on Security and Privacy (SP), pp.842-859.
Service, O., Hallsworth, M., Halpern, D., Algate, F., Gallagher, R., Nguyen, S., Ruda, S. and Sanders, M. (2014). EAST: Four Simple Ways to Apply Behavioural Insights. Behavioural Insights Team.
Thaler, R.H. and Sunstein, C.R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press.