From Insight to Foresight
For the past decade, human behaviour has been recognised as cybersecurity’s greatest vulnerability. This has led to the rise of Human Cyber Risk Management (HCRM) – a discipline that draws on behavioural science to understand why people click, share, trust, or ignore warnings, and how we can shape cultures of secure behaviour.
But today, we stand on the edge of something bigger. Artificial Intelligence is not just another tool in the security stack; it is reshaping the very fabric of how people work, learn, and interact – and with it, the way we must think about human cyber risk.
The Shift from Human-Centric to Human–AI-Centric Risk
Traditional HCRM focuses on the employee – their habits, choices, and cultural environment. But in the AI-enabled workplace, risk is no longer just about the individual. It’s about the interplay between humans and intelligent systems.
- AI as the Adversary’s Weapon: Deepfakes, hyper-personalised phishing, automated reconnaissance – attackers are using AI to exploit psychological biases at unprecedented scale.
- AI as the Defender’s Partner: Adaptive nudges, real-time anomaly detection, predictive analytics – security teams can use AI to anticipate human lapses before they become incidents.
Risk management must now account for both dimensions at once.
Why HCRM Needs to Evolve
The strength of HCRM lies in its behavioural lens. It explains why humans behave as they do under pressure, distraction, or deception. But without AI, it remains limited in scale and speed.
With AI, HCRM can evolve from descriptive to predictive:
- Moving from reporting what happened to anticipating what will likely happen.
- Moving from generic awareness campaigns to context-aware interventions.
- Moving from periodic measurement to continuous, real-time insights.
This evolution won’t replace behavioural security expertise. It will extend and amplify it.
A Balanced Future
As with any technological shift, there are risks: over-surveillance, loss of trust, and ethical missteps that undermine the very culture we’re trying to build. That’s why the behavioural foundation of HCRM remains essential.
AI can show us patterns, but it cannot tell us what is ethical, sustainable, or human-centred. It is the blending of both – the science of behaviour and the intelligence of machines – that will define the next chapter of cyber resilience.
What’s Next?
For me, exploring the convergence of HCRM and AI feels like a natural extension of the work we’ve been doing for years. Human risk is not going away, but the tools we use to understand and manage it are transforming.
I believe the future of behavioural security lies not in choosing between human insight or machine intelligence, but in combining them. Human+AI Risk Management will be the foundation for building truly resilient organisations in the decade ahead.
👉 What’s your perspective?
- How comfortable are you with AI taking a more active role in shaping human behaviour at work?
- Where should we draw the ethical line between protecting people and monitoring them?