The Problem With Blaming People

In 1999, Anne Adams and Angela Sasse published a study in Communications of the ACM that should have changed the security industry permanently. Their research into password practices found something counterintuitive: employees were not bypassing security controls out of laziness or indifference. They were doing so because the controls were poorly designed, made legitimate work harder, and gave no visible return for the effort they demanded. The employees had made a rational calculation, and the calculation was correct (Adams and Sasse, 1999).

More than two decades later, the dominant response to human risk in cybersecurity remains the same: tell people what the threats are, test whether they retained the information, and attribute any subsequent incidents to user error. The Verizon Data Breach Investigations Report consistently attributes the majority of breaches to the human element, a framing that is statistically accurate but analytically unhelpful. Knowing that humans are involved in most breaches tells us nothing useful about what to do.

This article argues that the security profession needs a different conceptual foundation, one borrowed from behavioural science. The shift is not cosmetic. It changes what questions you ask, what interventions you design, and what you measure. It also tends to produce better outcomes.

Two Systems, One Brain

The most useful single framework for understanding human security behaviour comes not from cybersecurity research but from cognitive psychology. Daniel Kahneman's dual process model, popularised in Thinking, Fast and Slow (2011) and grounded in decades of experimental work, proposes that human cognition operates through two distinct modes.

System 1 is fast, automatic, associative, and largely unconscious. It is responsible for most of the moment-to-moment decisions people make during the working day: recognising a colleague's face, understanding a sentence, deciding whether an email looks legitimate. System 1 does not deliberate. It pattern-matches against prior experience and generates a response, usually before System 2 is even aware that a decision was made.

System 2 is slow, effortful, analytical, and conscious. It is capable of reasoning through novel problems, overriding System 1 responses, and evaluating evidence carefully. It is also cognitively expensive. Sustained System 2 engagement depletes mental resources, and the brain reliably reverts to System 1 when those resources are low, when time pressure is high, or when the situation feels familiar enough not to warrant careful analysis.


Security awareness training addresses System 2. Attackers target System 1. This mismatch is the fundamental reason that awareness programmes persistently underperform.


The implications for security are direct. Phishing attacks, vishing calls, and social engineering in all its forms are designed to activate System 1 responses: urgency, authority, familiarity, and fear. When a fraudulent email creates a sense of panic about an unpaid invoice, it is not trying to deceive a careful analytical mind. It is trying to generate an automatic response before that careful analytical mind gets involved. Security awareness training, by contrast, delivers information through a medium that requires System 2 engagement, in a low-stakes setting, at a time that is entirely disconnected from the moment of attack. It addresses the wrong system.

The Knowledge-Behaviour Gap

The most important empirical finding in behavioural security research is also the most uncomfortable for the training industry: knowledge about threats does not reliably predict secure behaviour. This is not a new observation. The gap between what people know and what they do has been documented across health, financial decision-making, environmental behaviour, and safety contexts for decades. It applies with particular force in cybersecurity, where the threat is invisible, the consequences are delayed and often indirect, and the secure behaviour typically involves additional effort for no immediately visible benefit.

A large-scale study by Zimmermann and Renaud (2019), drawing on survey data from over 500 employees across multiple sectors, found that employees who scored highest on security knowledge assessments were not significantly more likely to engage in secure behaviours than those who scored lowest. The correlation between knowledge and behaviour was weak enough to be operationally irrelevant. Training programmes designed to improve knowledge scores, on this evidence, measure the wrong thing.
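
One practical consequence is that a programme's own data can reveal the gap: instead of reporting quiz scores and phishing-simulation results separately, correlate them. The sketch below is a minimal illustration using invented records and Python's standard statistics module (the employee data, field names, and the resulting figure are assumptions for illustration, not findings from any study cited here). A value near zero means quiz performance tells you very little about who will actually click.

from statistics import correlation  # available from Python 3.10

# Invented records pairing each employee's awareness-quiz score with whether
# they clicked a link in a later phishing simulation.
quiz_scores = [95, 88, 52, 70, 99, 45]
clicked     = [1,  1,  0,  0,  0,  1]   # 1 = clicked the simulated phish

# A value near zero means quiz performance predicts little about real clicks;
# with this invented data the figure comes out at roughly 0.06.
print(f"knowledge-behaviour correlation: {correlation(quiz_scores, clicked):.2f}")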


What Security Awareness Assumes vs. What the Evidence Shows

Assumption: People behave insecurely because they lack knowledge
Evidence: Knowledge and behaviour are weakly correlated at best

Assumption: Providing information about threats changes behaviour
Evidence: Information alone rarely drives behaviour change without environmental support

Assumption: Repeated training reinforces secure habits
Evidence: Spaced repetition improves recall, not necessarily real-world decision-making

Assumption: Test scores measure security readiness
Evidence: Quiz performance predicts quiz performance, not incident rates

Assumption: Users are the weakest link
Evidence: System design and process architecture drive most security outcomes

Source: Synthesis of Zimmermann and Renaud (2019), Adams and Sasse (1999), and Schneier (2000).

A Practitioner Case Study: The FACC Fraud

In 2016, FACC AG, an Austrian aerospace manufacturer, lost approximately EUR 50 million to a business email compromise attack. The fraud followed a now-familiar pattern: an attacker impersonating the CEO sent an urgent email to the finance department requesting a wire transfer in connection with a confidential acquisition. The transfer was authorised and executed.

The subsequent analysis is instructive. The employee who processed the transfer was not untrained. FACC had a security awareness programme. The email did not contain obvious grammatical errors or suspicious links. It arrived at a time of genuine business pressure, it invoked the authority of a senior executive, and it imposed time constraints that made verification feel difficult. Every element of the attack was designed to activate System 1 processing and prevent System 2 from intervening.

The failure was not one of knowledge. It was one of process design. No single-authorisation transfer of that value should have been possible without an out-of-band verification step. The awareness programme had no bearing on the outcome because the outcome was determined by the architecture of the approval process, not by the knowledge state of the employee who initiated the transfer.

KEY LESSON: When an incident occurs despite adequate training, the appropriate question is not 'why did the user fail?' It is 'what would have had to be true about the environment for the user to succeed?' That question leads to process redesign, not another training module.
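
To make that reframing concrete, here is a minimal sketch of the kind of process control that would have changed the outcome. The threshold, field names, and verification rule are illustrative assumptions, not a description of FACC's actual payment systems; the point is that above a certain value, the secure outcome stops depending on anyone noticing anything.

from dataclasses import dataclass, field

OUT_OF_BAND_THRESHOLD_EUR = 50_000  # illustrative policy threshold

@dataclass
class PaymentRequest:
    amount_eur: float
    requested_by: str                                   # who initiated the transfer
    approvers: set[str] = field(default_factory=set)    # distinct approvals recorded so far
    verified_out_of_band: bool = False                  # e.g. a call-back to a known number

def can_execute(req: PaymentRequest) -> tuple[bool, str]:
    # The decision depends only on the state of the process, not on how
    # convincing the request looked or how well trained the requester was.
    if req.amount_eur < OUT_OF_BAND_THRESHOLD_EUR:
        return True, "Below threshold: standard approval applies."
    if not req.verified_out_of_band:
        return False, "Out-of-band verification required above threshold."
    if not (req.approvers - {req.requested_by}):
        return False, "A second, independent approver is required."
    return True, "Verified and dual-authorised."

# The urgent 'CEO' request fails regardless of how persuasive the email was.
urgent = PaymentRequest(amount_eur=50_000_000, requested_by="finance.clerk",
                        approvers={"finance.clerk"})
print(can_execute(urgent))  # (False, 'Out-of-band verification required above threshold.')

Whether the requester was well trained is irrelevant to the check; the approval architecture carries the load.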

What Behavioural Science Offers Instead

Behavioural science does not dismiss training as a tool. It repositions it as one element of a broader programme that addresses behaviour through multiple channels, most of which are more reliably effective than information delivery alone. These channels include the design of the physical and digital environment in which decisions are made, the social norms that shape what people consider normal and acceptable, the feedback mechanisms that make the consequences of decisions visible, and the deliberate placement of friction so that the secure choice is easier than the insecure one.

The COM-B model, developed by Michie and colleagues at University College London, provides a practical framework for diagnosing why a specific security behaviour is occurring or not. It proposes that any behaviour requires three components: capability (the knowledge and skills to perform it), opportunity (the environmental conditions that allow it), and motivation (the will to do it). Most security interventions address only capability. The model predicts, and the evidence confirms, that this is insufficient when the barriers to secure behaviour are primarily located in opportunity or motivation (Michie et al., 2011).
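
As a rough illustration of how the model changes the diagnostic question, the sketch below scores a target behaviour against the three components and points the intervention at the weakest one. The behaviour, the scores, and the intervention hints are invented for illustration; a real assessment would draw on interviews, observation, and the full Behaviour Change Wheel rather than hard-coded numbers.

# A rough COM-B diagnosis. Scores (0-10) are invented; in practice they would
# come from interviews, observation, or survey instruments.
INTERVENTION_HINTS = {
    "capability": "training, guidance, skills practice",
    "opportunity": "environmental restructuring, better tooling, process change",
    "motivation": "feedback, incentives, social norms, visible consequences",
}

def diagnose(behaviour: str, scores: dict) -> str:
    # Point the intervention at the weakest component, not at training by default.
    weakest = min(scores, key=scores.get)
    return (f"'{behaviour}': weakest component is {weakest} (score {scores[weakest]}); "
            f"consider {INTERVENTION_HINTS[weakest]}.")

# Example: staff know how to report phishing (capability is fine), but the report
# button is buried and nobody ever hears what happened to their report.
print(diagnose("report suspected phishing",
               {"capability": 8, "opportunity": 3, "motivation": 4}))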

Part 2 of this series examines the specific cognitive biases that make humans reliably exploitable, and how understanding those biases changes the design of both technical controls and human-facing processes. Part 3 covers the practical frameworks for designing interventions that work, and Part 4 addresses measurement.


References

Adams, A. and Sasse, M.A. (1999). Users are not the enemy. Communications of the ACM, 42(12), pp.40-46.

Kahneman, D. (2011). Thinking, Fast and Slow. Penguin Books.

Michie, S., van Stralen, M.M. and West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(1), p.42.

Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. John Wiley and Sons.

Verizon (2024). Data Breach Investigations Report. Verizon Business.

Zimmermann, V. and Renaud, K. (2019). Moving from a 'human-as-problem' to a 'human-as-solution' cybersecurity mindset. International Journal of Human-Computer Studies, 131, pp.169-187.