🧠 Evidence-Based Behavioural Science

Behavioural Model for Cyber Security

A multi-layered interactive model mapping cognitive, behavioural, social, and organisational dimensions of cyber risk and resilience, applied to human and AI agent behaviour.

Why Behaviour Matters in Cyber Security

Technology alone cannot secure an organisation. Research consistently shows that human behaviour is the decisive factor in the vast majority of security incidents, from clicking phishing links and reusing passwords to ignoring policy and falling for social engineering. Yet most security programmes still focus overwhelmingly on technical controls, treating people as the problem rather than the solution.

The Behavioural Model for Cyber Security changes that. Developed by CyBehave and grounded in evidence-based behavioural science (including the COM-B model, the Behaviour Change Wheel, Nudge Theory, and Protection Motivation Theory), this interactive framework provides a structured, multi-layered approach to understanding why people behave the way they do in security contexts, and how to design interventions that actually change behaviour.

Critically, as autonomous AI agents become embedded in enterprise workflows, the same behavioural questions apply to them. AI agents exhibit functional analogues of cognitive bias, habit formation, authority compliance, and social norm adoption. CyBehave is developing what we are calling Behavioural Convergence Theory (BCT), an emerging body of research that investigates whether and how established human behavioural science frameworks can be meaningfully extended to govern AI agent behaviour. This work is ongoing, but the early evidence suggests that human behavioural science provides the most robust existing toolkit for understanding agentic AI risk.

What This Model Provides

The interactive visualisation below maps 16 behavioural factors across four concentric layers, each representing a different dimension of cyber security behaviour. You can overlay threat vectors to see where attacks exploit behavioural weaknesses, intervention functions to explore evidence-based approaches to behaviour change, and measurement dimensions to assess organisational maturity. The three-lens system lets you examine each factor from a human, AI agent, or convergent perspective.

Alignment to Human Cyber Risk Management (HCRM)

Each behavioural factor in this model is assessed for its alignment to established Human Cyber Risk Management (HCRM) theories, models, and frameworks: the body of academic and practitioner research that underpins how we understand human behaviour in security contexts. As part of CyBehave's ongoing research into Behavioural Convergence Theory, we evaluate how directly these HCRM concepts translate to AI agent behaviour. When viewing factors through the AI Agent or Convergent lens, each factor displays an applicability badge indicating the strength of this alignment:

Strong Analogy

The HCRM concept has a direct, well-evidenced functional equivalent in AI agent behaviour. The underlying mechanism differs, but the observable outcome and security implications are structurally parallel. For example, Risk Perception in humans maps directly to Threat Scoring in AI: both produce assessments systematically biased by prior exposure.
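To make the parallel concrete, here is a minimal toy sketch of a threat scorer whose estimates skew toward threat types it has seen frequently before. This is purely illustrative (the class, scoring rule, and values are hypothetical, not part of the CyBehave model); it mimics the availability bias the text describes, where prior exposure inflates risk assessments.

```python
from collections import Counter

class BiasedThreatScorer:
    """Toy model: a scorer whose output drifts with prior exposure.

    Hypothetical and illustrative only -- the agentic analogue of a human
    whose risk perception is skewed by what they have encountered before.
    """

    def __init__(self, base_scores):
        self.base_scores = base_scores  # nominal severity per threat type (0..1)
        self.exposure = Counter()       # how often each type has been observed

    def observe(self, threat_type):
        self.exposure[threat_type] += 1

    def score(self, threat_type):
        # Each prior observation nudges the perceived score upward (capped at 1.0):
        # the "assessment systematically biased by prior exposure".
        bias = 0.05 * self.exposure[threat_type]
        return min(1.0, self.base_scores.get(threat_type, 0.5) + bias)

scorer = BiasedThreatScorer({"phishing": 0.4, "zero_day": 0.9})
for _ in range(6):
    scorer.observe("phishing")

print(scorer.score("phishing"))  # inflated above its nominal 0.4
print(scorer.score("zero_day"))  # unseen, so scored at its nominal level
```

A real threat-scoring pipeline would be far more sophisticated, but the structural point is the same: the assessment is a function of exposure history, not of the threat alone.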

Adapted

The HCRM concept requires meaningful adaptation to apply to AI agents. The human framing from established theories does not transfer literally, but a functionally analogous process exists when reinterpreted through an agentic lens. For example, Motivation in humans becomes Objective Functions in AI: not the same mechanism, but producing equivalent behavioural outcomes when misaligned.
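The misalignment point can be sketched in a few lines. The example below is hypothetical (the actions, metrics, and weights are invented for illustration): an agent rewarded on a proxy metric ("tickets closed") picks a different action from one rewarded on the true goal ("incidents actually resolved"), which is the agentic analogue of misdirected motivation.

```python
# Hypothetical action outcomes: (tickets_closed, incidents_resolved)
ACTIONS = {
    "close_without_investigation": (10, 2),
    "investigate_then_close": (6, 6),
}

def best_action(weight_closed, weight_resolved):
    """Return the action a reward-maximising agent would choose
    under the given objective weights."""
    return max(
        ACTIONS,
        key=lambda a: weight_closed * ACTIONS[a][0]
                      + weight_resolved * ACTIONS[a][1],
    )

# A proxy objective rewards volume; the true objective rewards resolution.
print(best_action(1.0, 0.0))  # chooses the fast-but-shallow action
print(best_action(0.0, 1.0))  # chooses the thorough action
```

The behavioural outcome (cutting corners under a badly chosen incentive) is equivalent to what motivation theories predict for humans, even though the mechanism is an objective function rather than a psychological drive.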

Limited Analogy

The analogy between the HCRM framework and AI agent behaviour is partial or metaphorical. The concept applies to AI agents only in a loose structural sense, and the human framing must be substantially reinterpreted. These factors represent the frontier of CyBehave's BCT research and require the most careful handling when designing cross-domain interventions.

🧠 Cognitive: How individuals perceive, process, and evaluate security risks under cognitive constraints.

💪 Behavioural: The COM-B framework's capability, opportunity, and motivation driving secure actions.

👥 Social: Group norms, authority dynamics, and cultural influences that shape collective security behaviour.

🏛️ Organisational: Policies, nudges, training architecture, and incident response systems that structure behaviour.

Explore the interactive model below. Switch between Human, AI Agent, and Convergent lenses to see how behavioural science applies across both human and agentic AI cyber security.

Behaviour Change Wheel (Cyber Adaptation): based on the Behaviour Change Wheel (Michie, van Stralen & West, 2011), adapted for cybersecurity by CyBehave.

HCRM Alignment Summary

Across all 16 behavioural factors in this model, CyBehave's Behavioural Convergence Theory research evaluates how directly established Human Cyber Risk Management concepts translate to AI agent behaviour.

Strong Analogy: 11 factors (69%)
Adapted: 5 factors (31%)
Limited Analogy: 0 factors (0%)

Strong Analogy

Layer | Human Factor | AI Equivalent
Cognitive | Risk Perception | Threat Scoring
Cognitive | Mental Models | Learned Representations
Cognitive | Cognitive Load | Context Constraints
Behavioural | Capability | Agent Capability
Behavioural | Opportunity | Environmental Permissions
Behavioural | Habit Formation | Learned Defaults
Social | Authority Compliance | Instruction Following
Organisational | Policy Architecture | Governance Frameworks
Organisational | Nudge Design | Prompt & Default Engineering
Organisational | Training & Awareness | Alignment & Fine-tuning
Organisational | Incident Response | Kill Switches & Rollback

Adapted

Layer | Human Factor | AI Equivalent
Cognitive | Decision Fatigue | Inference Degradation
Behavioural | Motivation | Objective Functions
Social | Social Norms | Emergent Agent Conventions
Social | Security Champions | Sentinel Agents
Social | Culture & Climate | System-Level Norms
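The full 16-factor mapping is, in effect, a small data structure. As a sketch (the representation is my own, not CyBehave's), the two tables above can be encoded and the alignment summary recomputed from them:

```python
from collections import Counter

# Each entry: (layer, human_factor, ai_equivalent, alignment),
# transcribed directly from the two tables above.
FACTORS = [
    ("Cognitive", "Risk Perception", "Threat Scoring", "strong"),
    ("Cognitive", "Mental Models", "Learned Representations", "strong"),
    ("Cognitive", "Cognitive Load", "Context Constraints", "strong"),
    ("Cognitive", "Decision Fatigue", "Inference Degradation", "adapted"),
    ("Behavioural", "Capability", "Agent Capability", "strong"),
    ("Behavioural", "Opportunity", "Environmental Permissions", "strong"),
    ("Behavioural", "Habit Formation", "Learned Defaults", "strong"),
    ("Behavioural", "Motivation", "Objective Functions", "adapted"),
    ("Social", "Authority Compliance", "Instruction Following", "strong"),
    ("Social", "Social Norms", "Emergent Agent Conventions", "adapted"),
    ("Social", "Security Champions", "Sentinel Agents", "adapted"),
    ("Social", "Culture & Climate", "System-Level Norms", "adapted"),
    ("Organisational", "Policy Architecture", "Governance Frameworks", "strong"),
    ("Organisational", "Nudge Design", "Prompt & Default Engineering", "strong"),
    ("Organisational", "Training & Awareness", "Alignment & Fine-tuning", "strong"),
    ("Organisational", "Incident Response", "Kill Switches & Rollback", "strong"),
]

counts = Counter(alignment for _, _, _, alignment in FACTORS)
for alignment in ("strong", "adapted", "limited"):
    n = counts.get(alignment, 0)
    print(f"{alignment}: {n} ({round(100 * n / len(FACTORS))}%)")
# strong: 11 (69%)
# adapted: 5 (31%)
# limited: 0 (0%)
```

Recomputing the summary from the raw mapping reproduces the 69% / 31% / 0% split reported above, which is a useful sanity check when the factor list evolves.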

Key Insight

The Social layer has the highest concentration of adapted factors (3 of 4): social dynamics are the hardest to map to AI agents. The Organisational layer is entirely strong analogy, reflecting that governance structures translate most directly to agentic AI systems.

#StrongerTogether © 2026 CyBehave Ltd. Behavioural Model for Cyber Security. BCW adapted from Michie, S., van Stralen, M.M. & West, R. (2011). The Behaviour Change Wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(42).