Why Psychological and Behavioural Factors Belong in Threat Models

In cybersecurity, we pride ourselves on our frameworks, controls, and technology stacks. Yet one of the most persistent blind spots in threat modelling is also one of the most obvious: human behaviour.

Traditional threat models focus heavily on technical vectors – malware payloads, privilege escalation, misconfigurations, and lateral movement. These are critical, but they only paint half the picture. The majority of breaches today begin with a human – a click, a disclosure, a misjudgement, or an omission. If we treat people as static, rational elements in the system, our threat models remain incomplete.

It’s time to bring behavioural modelling into the heart of threat assessment.

Why Psychological and Behavioural Factors Matter

Attackers don’t just target systems; they target cognitive biases, social dynamics, and cultural norms. Consider the following:

  • Cognitive Biases: Confirmation bias, authority bias, and urgency effects make us vulnerable to phishing and social engineering.
  • Emotional States: Stress, fatigue, and distraction impair judgement, increasing susceptibility to mistakes.
  • Social Norms: If “everyone shares passwords” in a team, then policy is overridden by practice.
  • Trust Dynamics: We are wired to trust colleagues, senior leaders, or familiar-looking messages – even when manipulated.

Ignoring these factors is like modelling a building for fire risk but forgetting that people sometimes prop open fire doors.

Expanding Threat Models with Behavioural Dimensions

Traditional frameworks like STRIDE or MITRE ATT&CK are powerful but technical. They help us map spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege at the system level. But they typically assume the human is either rational or passive — and that’s rarely the case.

Instead of reinventing the wheel, we can build on STRIDE by adding behavioural factors to each category:

  • Spoofing → Not just stolen credentials, but why someone might hand them over: authority bias, phishing under pressure, or social trust in a colleague.
  • Tampering → Employees bypassing or altering security settings “to get work done,” often because processes are too complex or slow.
  • Repudiation → Human reluctance to report mistakes due to fear of blame or lack of psychological safety.
  • Information Disclosure → Mis-sent emails, oversharing on collaboration tools, or cultural norms that encourage “open by default.”
  • Denial of Service → Accidental misuse of systems (e.g., mass downloads, errors under stress) or deliberate insider disruption due to resentment.
  • Elevation of Privilege → Weak segregation of duties amplified by trust dynamics (“I’ll just share my admin account with you, it’s quicker”).

By extending STRIDE with behavioural layers, we avoid discarding a well-established model. Instead, we enrich it with a human dimension, giving us a socio-technical threat model that reflects the real world.

This approach can be applied to any framework: MITRE ATT&CK techniques can be annotated with the psychological levers they rely on, and kill chain stages can be cross-mapped to behavioural attack vectors like persuasion, deception, or fatigue.
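The annotation idea can be as simple as a lookup table. The technique IDs below are real ATT&CK identifiers, but the lever labels and the mapping itself are hypothetical, meant only to show the shape of such an annotation layer:

```python
# Hypothetical annotation layer: which psychological levers a technique
# typically relies on. Lever labels are illustrative, not an official taxonomy.
PSYCHOLOGICAL_LEVERS = {
    "T1566": ["authority bias", "urgency", "curiosity"],    # Phishing
    "T1204": ["trust in familiar senders", "habituation"],  # User Execution
    "T1078": ["social trust (shared credentials)"],         # Valid Accounts
}

def levers_for(technique_id: str) -> list:
    """Return the behavioural levers annotated for a technique, if any."""
    return PSYCHOLOGICAL_LEVERS.get(technique_id, [])
```

A mapping like this lets detection and awareness teams ask not just "which techniques threaten us?" but "which cognitive levers do they pull, and have we trained against those?"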

Frameworks and Tools to Leverage

We don’t need to start from scratch. Behavioural science provides robust models that integrate well into threat assessment:

  • COM-B Model (Capability, Opportunity, Motivation – Behaviour): Helps identify what drives secure or insecure actions.
  • Behavioural Bias Libraries: Catalogues of cognitive biases relevant to decision-making under stress or deception.
  • Social Network Analysis (SNA): Maps influence, trust, and communication patterns that shape security behaviour.
  • Human Reliability Analysis (HRA): Borrowed from safety engineering, this quantifies error likelihood in human-system interaction.

By embedding these into threat modelling sessions, we move from speculation about “what people might do” to structured, evidence-based analysis.
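For instance, a COM-B assessment can be reduced to a simple gap check: score each component and flag the ones falling short. The scale, threshold, and function below are assumptions for illustration — real assessments use validated survey instruments:

```python
def comb_gaps(capability: int, opportunity: int, motivation: int,
              threshold: int = 3) -> list:
    """Flag COM-B components scoring below a threshold on a 1-5 scale.

    Illustrative sketch: a component below the threshold indicates where
    an intervention (training, process change, nudge) should target.
    """
    scores = {
        "capability": capability,
        "opportunity": opportunity,
        "motivation": motivation,
    }
    return [name for name, score in scores.items() if score < threshold]

# Staff know how to report phishing (capability=4) and want to (motivation=5),
# but the reporting tool is buried three menus deep (opportunity=2).
gaps = comb_gaps(capability=4, opportunity=2, motivation=5)
```

Here the gap is opportunity, not motivation — pointing to environmental redesign rather than yet another awareness campaign.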

Making It Practical

How can organisations begin embedding behavioural modelling into threat models?

  1. Start Small, Add Layers: Begin by identifying behavioural attack vectors alongside technical ones in existing threat models.
  2. Use Cross-Disciplinary Teams: Bring behavioural scientists, psychologists, or even HR into threat modelling workshops.
  3. Gather Data: Analyse incident reports, phishing simulations, and culture surveys to identify recurring behavioural risks.
  4. Model Interventions: For each behavioural risk, document potential mitigations — nudges, training, environmental redesign, or process changes.
  5. Review Regularly: Human behaviour is dynamic. Periodically re-assess to reflect changing work patterns, stressors, or organisational culture.
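Steps 1, 4, and 5 can come together in a behavioural risk register entry. The structure and field names below are one possible shape, not a standard — everything here is an illustrative assumption:

```python
# Hypothetical behavioural risk register entry linking a behaviour to its
# drivers, the STRIDE categories it amplifies, and candidate interventions.
risk = {
    "behaviour": "Sharing admin credentials with colleagues",
    "drivers": ["time pressure", "social trust"],
    "stride_links": ["Spoofing", "Elevation of Privilege"],
    "interventions": [
        {"type": "process", "action": "Streamline just-in-time access requests"},
        {"type": "nudge", "action": "Warning prompt on shared-session logins"},
    ],
    "next_review": "quarterly",  # behaviour is dynamic; re-assess regularly
}

def planned_interventions(entry: dict) -> list:
    """List the mitigation actions documented for a behavioural risk."""
    return [i["action"] for i in entry["interventions"]]
```

Keeping entries in this shape makes the periodic review in step 5 concrete: each cycle, re-check the drivers and intervention list against fresh incident and survey data.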

Shifting the Mindset

Incorporating behavioural factors isn’t about blaming employees. It’s about realism. Humans are part of the system – often the most targeted part. If our threat models ignore them, they are dangerously incomplete.

By recognising cognitive biases, social pressures, and cultural dynamics, we design defences that fit people, not just machines. Behavioural modelling turns security from a purely technical discipline into a socio-technical practice that mirrors how attacks really unfold.

Final Thought

Cybersecurity threats evolve because attackers understand people as well as systems. To keep pace, we must do the same. The next generation of threat models will not only chart exploits and payloads but also map trust, fatigue, bias, and culture.

When behavioural science and threat modelling meet, we stop building castles on sand. We build defences grounded in the messy, fascinating, and very human reality of how work gets done.



#CyberSecurity #ThreatModeling #HumanRisk #BehaviouralScience #SecurityCulture #CognitiveBias #RiskManagement #STRIDE #MITREATTACK #BehaviouralModelling