Practical advice, expert perspectives, and applied guides on building security culture, managing human risk, and running effective Champions programmes.
Traditional threat models focus heavily on technical vectors: malware payloads, privilege escalation, misconfigurations, and lateral movement. These are critical, but they paint only half the picture. The majority of breaches today begin with a human: a click, a disclosure, a misjudgement, or an omission. If we treat people as static, rational elements in the system, our threat models remain incomplete. It's time to bring behavioural modelling into the heart of threat assessment.
From automating processes to generating insights, AI offers unprecedented opportunities. But alongside this opportunity comes a quieter, less technical challenge: AI misuse by humans inside organisations. When we talk about AI risk, the conversation often fixates on model bias, adversarial attacks, or regulatory compliance. Yet many of the most immediate risks don't come from the technology itself – they come from the way people choose to use it.
Your face. Your voice. Your words – used against you. In the age of AI, deception just became terrifyingly personal.
We often talk about layered defence – about defending against sophisticated nation-state actors, insider threats, supply chain vulnerabilities, and AI-driven phishing campaigns. But let's be honest: we're still losing ground to the simplest exploit vector of all – passwords.
Start your Security Champions programme with CyBehave Heroes.