AI Policy

Last updated: 9 May 2026

Contents

  • Who we are
  • Our AI principles
  • How we use AI
  • Third-party AI services
  • EU AI Act risk classification
  • UK regulatory alignment
  • Transparency and human oversight
  • Practices we will not engage in
  • Your data and AI
  • Your rights
  • AI in our research
  • Governance and updates
  • Contact us

Who we are

CyBehave is a behavioural cybersecurity research organisation and SaaS platform provider registered in England and Wales. This AI Policy describes how we use, govern, and disclose artificial intelligence (AI) across our properties, namely the CyBehave marketing site (cybehave.com) and the CyBehave Heroes platform (heroes.cybehave.com).

This policy sits alongside our Privacy Policy and Cookie Policy. Where AI processing involves personal data, both this policy and our Privacy Policy apply. Contact: ai@cybehave.com

Our AI principles

We treat AI the way we treat the rest of our work: behaviourally informed, evidence-based, and privacy-first. Our commitments are:

  • Human-centred - AI supports human decision-making; it does not replace it
  • Privacy-first - AI features never use individual-level surveillance, behavioural profiling of named users, or covert monitoring
  • Proportionate - we use AI only where it offers clear value over simpler alternatives
  • Transparent - we tell you when you are interacting with AI and when content has been AI-assisted
  • Accountable - a named human owner is responsible for every AI feature we deploy
  • Reviewable - AI outputs that affect users are open to challenge, correction, and human review

How we use AI

Our use of AI is deliberately limited and disclosed at the point of use. The categories below describe our current and planned near-term uses.

On the marketing site (cybehave.com)

  • Content drafting and editing - some articles, insights, and resource materials are drafted with AI assistance and reviewed, edited, and approved by a human author before publication. Authorship and editorial responsibility remain with CyBehave
  • Search and discovery - we may use AI to suggest related articles or insights based on the page you are reading. These suggestions are generated from public site content, not from your personal data
  • Interactive demonstrations - selected pages may include AI-powered demonstrations of our research or platform capabilities. Where these are present, they are clearly labelled and use only the inputs you provide in the demo itself

On the CyBehave Heroes platform

  • Maturity assessment scoring - statistical scoring of anonymous, aggregated assessment responses. This is rule-based analysis rather than a generative AI model
  • Recommendation generation - the Intervention Designer module may use AI to suggest behavioural interventions based on aggregated, anonymised maturity scores. Recommendations are advisory and require human selection and approval before deployment
  • Network analysis - Social Network Analysis features in the Champions module use deterministic graph algorithms, not machine learning, to identify influence patterns from anonymised relationship data
  • Drafting assistance - administrators may use AI-assisted drafting tools within the platform to write nudges, communications, or campaign content. All output is editable and requires human approval before sending

We do not use AI to make automated decisions that produce legal or similarly significant effects on individuals. We do not use AI for individual-level behavioural prediction, employee scoring, or covert assessment of named users.

Third-party AI services

Where we use AI, we rely on established providers under contractual data processing agreements. Our current AI sub-processors are:

  • OpenAI - used for AI enablement features within the CyBehave Heroes platform, such as generating behavioural nudges, drafting communications, and producing recommendation text within administrator workflows
  • Anthropic - used to support content development and interactive demonstrations on the CyBehave marketing site (cybehave.com), including AI-assisted drafting and editorial review of articles, insights, and resource materials

Both providers operate under terms that, by default, exclude API inputs from being used to train their underlying models. We select providers based on:

  • Stated data handling commitments, including no use of API inputs for model training by default
  • Hosting location and data transfer safeguards appropriate to UK GDPR
  • Published safety and acceptable use policies that align with our own values
  • Operational reliability and security posture

If we add or change AI sub-processors in future, this section will be updated and the "Last updated" date at the top of this page will be amended. Material changes will also be communicated to registered Heroes users.

EU AI Act risk classification

Regulation (EU) 2024/1689 (the EU AI Act) classifies AI systems by risk. We have assessed our AI uses against the Act and conclude that our current and near-term uses fall within the lower-risk categories.

Unacceptable risk (prohibited)

We do not deploy any AI system in the prohibited categories under Article 5 of the Act. This includes social scoring, exploitative manipulation, untargeted scraping of facial images, real-time remote biometric identification in public spaces, predictive policing based solely on profiling, emotion recognition in workplaces or educational institutions, and biometric categorisation by sensitive attributes.

High risk (Annex III)

None of our current AI uses fall within the high-risk categories listed in Annex III of the Act. We do not deploy AI for employment decisions, access to essential services, law enforcement, education or training assessment of natural persons, or critical infrastructure operation. Our maturity assessment scoring evaluates organisational culture in aggregate, not individual employees.

Limited risk (transparency obligations)

Some of our features fall within the limited-risk category and trigger transparency obligations under Article 50 of the Act. Specifically:

  • AI-assisted content is identified where reasonable
  • AI-powered demonstrations and chat-style features disclose that you are interacting with an AI system
  • Synthetic or substantially AI-generated images, audio, or video are labelled as such

Minimal risk

The majority of our AI uses fall into the minimal-risk category, for which the Act imposes no specific obligations beyond voluntary codes of conduct. We nevertheless apply our internal AI principles to these uses.

General-purpose AI models

We are a deployer of general-purpose AI models supplied by third-party providers; we are not a provider of foundation models. Our providers are responsible for compliance with the obligations applicable to GPAI model providers under Articles 51 to 55 of the Act.

UK regulatory alignment

The United Kingdom does not currently have a single statutory equivalent to the EU AI Act. Instead, the UK applies a sectoral, pro-innovation framework set out in the government's 2023 white paper, A pro-innovation approach to AI regulation, with existing regulators applying their remits to AI use within their sectors. Where we operate in the UK, we align our AI practices with:

  • UK GDPR and the Data Protection Act 2018 - enforced by the Information Commissioner's Office (ICO), including its published guidance on AI and data protection
  • The five cross-sectoral AI principles - safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress
  • The Equality Act 2010 - to avoid AI-driven outcomes that produce unlawful discrimination
  • NCSC guidance on the secure development and deployment of AI systems
  • ICO guidance on automated decision-making, AI risk assessments, and the right to human review

Where the UK introduces statutory AI legislation in future, we will reassess this policy and update it accordingly. Where EU AI Act standards are stricter than UK requirements, we apply the EU standard as our baseline.

Transparency and human oversight

We design AI features so that you can tell when AI is involved and so that a human remains in control of consequential decisions.

  • Disclosure at point of use - AI-powered features carry visible labels and a brief explanation of what the AI does
  • Editable output - AI-generated suggestions, drafts, and recommendations are editable by the human user before any action is taken
  • Human approval before action - no AI-generated content is published, sent, or applied to your organisation without explicit human approval
  • Explainability - we will explain, on request, the general method by which an AI feature produced an output, including the model used and the inputs considered
  • Logging - significant AI interactions are logged for audit and review purposes, in line with our Privacy Policy

Practices we will not engage in

The following uses are prohibited across all CyBehave properties, regardless of customer or user request:

  • Individual-level behavioural surveillance, profiling, or scoring of named employees
  • Use of AI for covert monitoring of staff communications, productivity, or sentiment
  • Emotion recognition or affect inference in workplace or educational contexts
  • Biometric categorisation by race, religion, political opinion, sexual orientation, or other sensitive attributes
  • Sharing of customer or user data with AI providers for the purpose of training their models
  • Use of AI outputs to make final, automated decisions about a person without human review
  • Generation of deceptive synthetic content (deepfakes) of real individuals
  • Any deployment that would fall under Article 5 of the EU AI Act as a prohibited practice

Your data and AI

By default, content you submit to AI-powered features on our properties is not used to train third-party AI models. We rely on AI provider configurations and contractual terms that exclude API inputs from model training.

Where AI features process personal data, we apply the same legal bases, retention rules, and security controls described in our Privacy Policy. We minimise the personal data sent to AI providers and, where feasible, send only anonymised, aggregated, or synthetic inputs.

We do not train our own foundation models on customer or user data. Where we develop AI features in-house, training and tuning use synthetic data, public datasets, or data we have explicit rights to use.

Your rights

In addition to your rights under UK GDPR (see Privacy Policy), you have the following rights in relation to AI features on our properties:

  • Right to know - to be told whether an interaction or output involves AI
  • Right to explanation - to receive a meaningful description of how an AI feature reached an output that affects you
  • Right to human review - to have an AI-influenced decision or output reviewed by a human at CyBehave
  • Right to opt out - to decline AI-powered features where they are optional, without losing access to core services
  • Right to challenge - to contest AI outputs you consider inaccurate, biased, or unfair

To exercise any of these rights, contact ai@cybehave.com. You may also raise concerns with the ICO or, where applicable, with your national data protection authority in the EU/EEA.

AI in our research

CyBehave conducts research at the intersection of behavioural science, cybersecurity, and AI. Our research programmes, including Behavioural Convergence Theory (BCT), examine how established human behavioural frameworks can be extended to govern the behaviour of agentic AI systems.

Research outputs published on cybehave.com may use AI tools for literature review, data analysis, drafting, and visualisation. Research that involves human participants follows separate ethics, consent, and data protection processes documented at the time of recruitment. We do not conduct research on AI systems in ways that would breach safety guidelines published by the underlying model provider.

Governance and updates

This policy is owned by the CyBehave AI Governance function and is reviewed at least annually, and additionally whenever:

  • We introduce a materially new AI feature or change an existing one
  • An AI sub-processor is added or replaced
  • Applicable law or regulatory guidance changes (for example, new EU AI Act application dates or new UK AI legislation)
  • An incident or near-miss prompts a review of our controls

Material changes will be reflected in the "Last updated" date at the top of this page and, where appropriate, communicated to registered Heroes users by email.

Contact us

AI policy and AI feature queries: ai@cybehave.com
Privacy queries: privacy@cybehave.com
General enquiries: Contact page