Somewhere in your organisation right now, there is a team with an elevated risk profile. Not because their systems are misconfigured or their software is unpatched. Because of how they behave. The way they share files, handle credentials, respond to unusual requests, and make decisions under pressure. The technical controls are in place. The policy exists. But something in the team's culture is quietly undermining both.
Now imagine that within six months of a single person joining that team in a particular role, the observable behaviours begin to shift. Reporting rates increase. Colleagues start asking questions before clicking. The team lead starts flagging issues rather than resolving them silently. Nothing was mandated. No policy changed. No training day was scheduled. One person changed the team, because they were the right person in the right place at the right time.
This is the Security Champion Effect. And understanding it is one of the most powerful things a security leader can do.
The most effective risk reduction lever you have is not a control. It is a human being, positioned strategically within a social network.
Why Technical Risk Models Miss the Point
The dominant models of organisational risk are built on the assumption that threat surfaces are primarily technical. Vulnerabilities, misconfigurations, unpatched systems. This is not wrong, but it is profoundly incomplete. Research consistently shows that the majority of significant security incidents involve a human decision at some point in the chain. A credential shared. An attachment opened. An anomaly ignored.
What is less often discussed is that these decisions are not random. They cluster. Teams with poor security culture make more of them, more frequently, and with less awareness that they are doing so. And security culture is not an abstract concept. It is the sum of the norms, behaviours, and social expectations that govern how people act when no one is formally watching.
The implication is significant. If risk is unevenly distributed across teams according to their behavioural norms, then risk reduction strategies need to be designed around the architecture of those norms. That means thinking less like a systems engineer and more like a social scientist.
The Network Science of Influence
Social network theory has a long and rigorous history of explaining how behaviours, ideas, and norms propagate through human populations. The foundational insight, traceable to Stanley Milgram's small-world experiments and later extended by Nicholas Christakis and James Fowler's work on social contagion, is that influence does not flow uniformly. It flows through nodes.
A node with high degree centrality is connected to many people. A node with high betweenness centrality sits on the pathways between many others. A node with high closeness centrality can reach the rest of the network quickly. These are not just abstract network properties. In organisational terms, they describe the people through whom information, norms, and behaviours travel.
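To make these measures concrete, here is a minimal sketch of the first two computed over an invented collaboration map (every name and edge below is hypothetical; betweenness needs a little more machinery and is sketched later):

```python
from collections import deque

# Hypothetical collaboration graph: who regularly works with whom.
# Names and edges are invented purely for illustration.
graph = {
    "ana": {"ben", "cam", "dev"},
    "ben": {"ana", "cam"},
    "cam": {"ana", "ben", "dev", "eli"},
    "dev": {"ana", "cam"},
    "eli": {"cam", "fay"},
    "fay": {"eli"},
}

def degree_centrality(g):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(g) - 1
    return {node: len(neigh) / n for node, neigh in g.items()}

def closeness_centrality(g, node):
    """Inverse of the average shortest-path distance from `node` to all
    others, computed with a breadth-first search."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for other, d in dist.items() if other != node)
    return (len(dist) - 1) / total if total else 0.0

deg = degree_centrality(graph)
print(max(deg, key=deg.get))                          # prints: cam
print(round(closeness_centrality(graph, "cam"), 2))   # prints: 0.83
```

In this toy map, "cam" touches the most colleagues directly and can reach the rest of the network fastest; in an organisation, that is the person through whom norms travel.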
The Christakis and Fowler research on social contagion demonstrated that behaviours spread through networks in ways that mirror the propagation of infections. Happiness, smoking cessation, exercise habits and, crucially, norms around trust and risk all exhibit what they called the three degrees of influence rule: your behaviour influences not just those directly connected to you, but their connections, and their connections in turn.
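The shape of that rule can be illustrated with a deliberately simplified toy model (this is not the Christakis and Fowler methodology): assume influence decays by a fixed factor per hop, so it remains measurable at two and three degrees of separation but fades beyond that. The transmission strength and the network below are invented assumptions.

```python
from collections import deque

# ASSUMPTION: each hop transmits a fixed fraction of the champion's
# influence. The value and the network are invented for illustration.
TRANSMISSION = 0.4

network = {
    "champion": {"a", "b"},
    "a": {"champion", "c"},
    "b": {"champion", "c"},
    "c": {"a", "b", "d"},
    "d": {"c", "e"},
    "e": {"d"},
}

def influence_by_distance(g, source, strength):
    """BFS from the champion; influence decays multiplicatively per hop."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {node: strength ** d for node, d in dist.items() if node != source}

for node, inf in sorted(influence_by_distance(network, "champion", TRANSMISSION).items()):
    print(node, round(inf, 3))
```

Even in this crude model, colleagues at one hop feel the effect strongly, those at two and three hops feel it measurably, and it is close to zero at four, which is the qualitative pattern the three degrees rule describes.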
For security leaders, this has a direct and actionable implication. Placing a well-chosen champion at a high-centrality node within a team does not just give that team a local resource. It seeds a behavioural contagion.
Behaviours spread. The question is not whether influence will flow through your organisation. It is whether you are being intentional about what flows and through whom.
What a Champion Actually Does: The Behavioural Mechanisms
To understand why placement matters, it helps to be precise about what a security champion is actually doing when they are working well. It is rarely the explicit activities that matter most. The workshops, the newsletters, the awareness campaigns. What matters is the informal, ambient influence that a well-positioned champion exerts simply by being present and visibly behaving in particular ways.
The first mechanism is modelling. Bandura's social learning theory established that humans learn and adopt behaviours by observing others, particularly others they perceive as credible, similar to themselves, and embedded in the same social context. A security champion who visibly pauses before clicking a link, who says 'I am going to verify this before I act' aloud in a team meeting, who treats a security question as a normal and intelligent thing to ask, is providing a continuous behavioural model for everyone around them.
The second mechanism is normalisation. One of the most powerful determinants of human behaviour is perceived social norms: what we believe most people in our group do or would approve of. In teams where security-conscious behaviour is rare, asking whether something is safe can feel paranoid or obstructive. A champion changes that calculation. When the champion is trusted and respected, their security-conscious behaviour shifts what counts as normal in that team's microculture.
The third mechanism is friction reduction. Many security failures happen not because people do not care, but because doing the secure thing is harder than the alternative. A champion who knows the tools, knows the escalation path, and can answer a colleague's question in thirty seconds removes the friction that otherwise produces workarounds. They are, in effect, a live, contextual intervention.
The Halo Effect in Practice
The halo effect is a well-documented cognitive phenomenon: our overall impression of a person influences how we evaluate their specific attributes and, crucially, how we respond to their behaviours. When a trusted colleague models secure behaviour, observers do not just note the behaviour. They are more likely to adopt it, because the source carries positive associations.
This is why champion selection is not simply a matter of enthusiasm or technical knowledge. The most technically proficient person in a team is rarely the most influential. Influence flows from trust, relatability, and social standing. Research on peer health educators, community behaviour change programmes, and workplace wellness initiatives consistently shows that the best messengers are those already embedded in the target social network at a position of trust.
A champion who is regarded as knowledgeable but remote will generate compliance at best. A champion who is genuinely liked, consulted for non-security reasons, and seen as an authentic member of the team will generate behaviour change. The halo they carry extends to the behaviours they model and the norms they embody.
The best champions are not the most technically expert. They are the most trusted. Those are rarely the same person.
Measuring the Effect: Moving Beyond Training Metrics
One of the persistent failures of security culture measurement is the reliance on participation-based metrics. Training completion rates, phishing simulation click rates, awareness campaign reach. These measure exposure, not behaviour change. And behaviour change, not exposure, is what reduces risk.
The Security Champion Effect is measurable, but it requires different instruments. The indicators that matter are behavioural and relational. How frequently is the champion consulted by peers? Has the rate of voluntary security reporting in the team changed? Are colleagues asking questions earlier in processes rather than after the fact? Is the team's incident profile shifting over time?
These measures require baseline data, longitudinal tracking, and the ability to correlate behavioural indicators with organisational network position. This is precisely the kind of measurement infrastructure that most security programmes lack and that behavioural cybersecurity platforms are designed to provide. When you can map who the champions are, where they sit in the network, who they are influencing, and how behaviours are shifting as a result, you move from anecdote to evidence.
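At its simplest, that baseline-and-tracking logic is a before-and-after comparison. The sketch below assumes you can export monthly counts of voluntary security reports for a team; the counts themselves are invented for illustration.

```python
# Hypothetical monthly counts of voluntary security reports from one team,
# before and after a champion was placed. All numbers are invented.
baseline = [2, 1, 3, 2, 2, 2]   # six months pre-placement
post     = [3, 4, 5, 6, 5, 7]   # six months post-placement

def mean(xs):
    return sum(xs) / len(xs)

def relative_change(before, after):
    """Percentage change in the average monthly rate against the baseline."""
    b, a = mean(before), mean(after)
    return (a - b) / b * 100

print(f"{relative_change(baseline, post):.0f}% change in voluntary reporting")
```

A real programme would add controls for team size, seasonality, and incident mix, but even this minimal comparison is already a behavioural measure rather than an exposure measure.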
Identifying Where to Place Your First Champions
If placement is the critical variable, the logical question is how to identify the right positions. This is not a gut-feel exercise, though experienced security leaders often have strong intuitions that align with more formal analysis. It is a question of understanding the social topology of your organisation.
Start by identifying teams with elevated behavioural risk profiles. These are the teams where security incidents have originated, where workarounds are common, where the gap between policy and practice is widest. They are your highest-priority targets for champion placement.
Within those teams, identify individuals with high informal trust. Not the managers. Not the most senior technical people. The people colleagues go to when something goes wrong, when they need an honest opinion, when they want to understand something without formal escalation. These are your candidate champions.
Then ask the network questions. Who sits at the intersection of multiple sub-groups? Who is respected across functional boundaries? Who do people listen to even when they disagree? In network terms, these are your high-betweenness nodes, the individuals whose influence radiates outward most efficiently.
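Where an informal-contact map exists, those high-betweenness nodes can be surfaced directly. The sketch below uses Brandes' algorithm on an invented graph: a tight cluster joined to a smaller one through a single person (all names and edges are hypothetical).

```python
from collections import deque

def betweenness(g):
    """Brandes' algorithm for betweenness centrality on an unweighted,
    undirected graph: counts how often each node sits on shortest paths
    between other pairs -- a proxy for who bridges sub-groups."""
    score = {v: 0.0 for v in g}
    for s in g:
        pred = {v: [] for v in g}        # shortest-path predecessors
        sigma = dict.fromkeys(g, 0)      # number of shortest paths from s
        sigma[s] = 1
        dist = {s: 0}
        order, queue = [], deque([s])
        while queue:                     # breadth-first search from s
            u = queue.popleft()
            order.append(u)
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    pred[v].append(u)
        delta = dict.fromkeys(g, 0.0)    # dependency accumulation
        for w in reversed(order):
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                score[w] += delta[w] / 2  # undirected: each pair seen twice
    return score

# Hypothetical informal-contact graph: cluster (ana, ben, cam, dev) joined
# to cluster (eli, fay) through one person. Names are illustrative only.
contacts = {
    "ana": {"ben", "cam", "dev"},
    "ben": {"ana", "cam"},
    "cam": {"ana", "ben", "dev"},
    "dev": {"ana", "cam", "eli"},
    "eli": {"dev", "fay"},
    "fay": {"eli"},
}

ranked = sorted(betweenness(contacts).items(), key=lambda kv: -kv[1])
print(ranked[0])  # prints: ('dev', 6.0)
```

In this toy map, "dev" is not the best-connected person, but every path between the two clusters runs through them, which is exactly the kind of position the questions above are probing for.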
A single champion placed here will do more measurable work in twelve months than a dozen champions placed at the organisational periphery.
You are not looking for volunteers. You are looking for nodes. The distinction matters enormously.
The Compounding Return on a Single Strategic Placement
The economic case for security champions is usually made in terms of headcount leverage: a champion extends the reach of the security function without adding to its cost. This is true but undersells the value. The real return is compounding.
A well-placed champion does not simply perform security functions. They reshape the team's behavioural baseline. That changed baseline persists. When the champion moves on or changes role, they leave a team that has been fundamentally altered in its security norms. The colleagues they influenced now carry those norms into other teams, other roles, other organisations. The effect propagates beyond the individual and beyond the programme.
This is not theoretical. Longitudinal studies of peer-led behaviour change interventions consistently show that teams which receive high-quality peer influence develop more durable behaviour change than teams that receive equivalent information through formal channels. The mechanism is the same whether the domain is public health, financial behaviour, or security.
A single champion, in the right place, with the right support, creates a residual benefit that outlasts their tenure in the role. That is a return on investment that no piece of technology, no policy document, and no annual awareness campaign can match.
Building a Champions Programme Around Network Logic
Most security champions programmes are built around convenience. Volunteers are recruited, given some training, and handed a list of tasks. Placement is determined by who puts their hand up, not by where influence is most needed. This produces programmes that feel active but deliver marginal returns, because the champions are not where the levers are.
A network-informed programme starts differently. It begins with a map, whether formal or informal, of the organisation's social topology. It identifies the teams with the highest behavioural risk and the highest-centrality individuals within those teams. It recruits with precision rather than open invitation. And it measures not activity but influence.
This shift in design philosophy is significant. It reframes the security champion from a volunteer with extra duties to a strategically deployed agent of behavioural change. It treats culture as an architecture to be designed, not a climate to be hoped for.
It also changes what organisations owe their champions. If you are asking someone to act as a high-centrality node in a behaviour change network, you owe them genuine capability development, visible organisational recognition, and a clear feedback loop that shows them the impact of their work. Motivation matters as much as placement. The most strategically positioned champion in the world will disengage without the conditions that sustain their effort.
Starting Where You Are
None of this requires a large programme, a significant budget, or executive backing at the outset. The most compelling champions programmes in organisations of all sizes started with a single well-chosen person in a single high-risk team.
The decision to place one champion thoughtfully, to support them properly, and to measure what changes is the entire programme in miniature. Get that right and you have proof of concept, a model to replicate, and evidence to take upward for investment. Get it wrong, by placing the wrong person, giving them no support, or measuring the wrong things, and you will conclude that champions do not work. The difference is not the concept. It is the design.
Security champions programmes fail because they are treated as communications exercises. They succeed when they are treated as behavioural interventions, grounded in the science of how influence moves through human networks, measured by the things that actually matter, and built around the understanding that in every organisation, in every team, there is someone with the potential to change everything around them.
The question is whether you know who that person is, and whether they know what they could do.