Practical advice, expert perspectives, and applied guides on building security culture, managing human risk, and running effective Champions programmes.
This is Part 2 of a four-part series. Part 1 introduced dual process theory and the knowledge-behaviour gap. This article maps specific cognitive biases to the attack techniques that exploit them, and examines how the same biases affect security professionals as well as the users they protect. Parts 3 and 4 cover intervention design and measurement.
This is Part 1 of a four-part series on behavioural science for cybersecurity practitioners. It introduces the foundational theory that explains why people behave insecurely despite knowing better, and why the security industry's default response has been so persistently ineffective. Parts 2, 3 and 4 cover cognitive biases, intervention design, and measurement, respectively.
Somewhere in your organisation, a team’s cyber risk is elevated not by unpatched systems but by behaviour: how files are shared, how credentials are handled, and how requests are judged under pressure. Controls and policy exist, yet culture undermines them. Then one well-placed person joins, and within months reporting rises, people pause before clicking, and leaders flag issues early. No mandates, no new training. That is the Security Champion Effect.
There is a familiar paradox at the heart of most enterprise security programmes. The harder organisations push to control human behaviour through rigid, uniform controls, the more creatively employees find ways around them. Security teams tighten the perimeter, and a shadow IT ecosystem quietly flourishes just beyond it. This is not a discipline problem. It is a design problem.
Moltbook is making headlines. It looks like Reddit, but AI agents are doing the posting, commenting, and upvoting while humans are mostly just watching. In just four days, it's captured attention because of the sheer novelty: agent-to-agent interaction at scale, complete with playful collective narratives and what look like emergent "belief systems."
Research in organisational behaviour consistently demonstrates that strategic positioning of change agents within social networks significantly accelerates behaviour adoption. Studies show that when change agents are positioned at network connection points and among opinion leaders, behaviours spread 2-3 times faster than with random distribution. Yet most Security Champions programmes ignore this evidence, recruiting volunteers without considering their network position.
The cybersecurity industry has spent two decades trying to "change culture" through awareness training, phishing simulations, and policy mandates. The results speak for themselves: human factors remain implicated in over 70% of breaches, and most organisations report little meaningful improvement despite significant investment. The problem isn't effort. It's the sequence.
Security Champions programmes are growing. That is the good news. The harder truth is that many programmes plateau after the initial enthusiasm. Champions attend calls, share comms, and complete training, yet the same risky behaviours keep surfacing. This article sets out a modern, practical model for helping champion networks become more effective agents of behaviour change.
If you are looking for a single, high-leverage move to strengthen your security culture in 2026, build (and genuinely enforce) a cyber psychological safety policy. Not a poster. Not a slogan. A clear organisational mandate that tells your people, in plain terms, that raising security concerns, reporting mistakes, and admitting uncertainty will be met with fairness, support, and learning, not blame.
While organisations worldwide struggle with a well-documented skills shortage, we simultaneously lack a comprehensive framework that addresses the human and behavioural dimensions of cyber risk management. Technical certifications abound - from CISSP to CEH - but where are the frameworks that guide professionals working at the intersection of human behaviour, psychology, and cybersecurity?
This article explores what secure and ethical behaviour looks like in an era of agentic AI, how human and AI behaviours intersect, and what you can do to promote safe, responsible use across your organisation. The focus is practical: helping you support innovation while keeping security, safety, and ethics firmly in view.
The rise of artificial intelligence has been compared to the invention of electricity, the printing press, even the internet itself. But unlike those revolutions, AI doesn’t just extend our capabilities, it begins to mirror them. As we edge closer to the idea of the Singularity – a tipping point where machine intelligence accelerates beyond human control – the most pressing risks won’t come from code alone. They’ll come from us.
Start your Security Champions programme with CyBehave Heroes.