Last updated: 9 May 2026
CyBehave is a behavioural cybersecurity research organisation and SaaS platform provider registered in England and Wales. This AI Policy describes how we use and govern artificial intelligence (AI), and how we disclose its use, across our properties, namely the CyBehave marketing site (cybehave.com) and the CyBehave Heroes platform (heroes.cybehave.com).
This policy sits alongside our Privacy Policy and Cookie Policy. Where AI processing involves personal data, both this policy and our Privacy Policy apply. Contact: ai@cybehave.com
We treat AI the way we treat the rest of our work: behaviourally informed, evidence-based, and privacy-first. Our commitments are:
Our use of AI is deliberately limited and disclosed at the point of use. The categories below describe current and near-term planned uses.
We do not use AI to make automated decisions that produce legal or similarly significant effects on individuals. We do not use AI for individual-level behavioural prediction, employee scoring, or covert assessment of named users.
Where we use AI, we rely on established providers under contractual data processing agreements. Our current AI sub-processors are:
Both providers operate under terms that, by default, exclude API inputs from use in training their underlying models. We select providers based on:
If we add or change AI sub-processors in future, this section will be updated and the "Last updated" date at the top of this page will be amended. Material changes will also be communicated to registered Heroes users.
Regulation (EU) 2024/1689 (the EU AI Act) classifies AI systems by risk. We have assessed our AI uses against the Act and conclude that our current and near-term uses fall within the lower-risk categories.
We do not deploy any AI system in the prohibited categories under Article 5 of the Act. This includes social scoring, exploitative manipulation, untargeted scraping of facial images, real-time remote biometric identification in public spaces, predictive policing based solely on profiling, emotion recognition in workplaces or educational institutions, and biometric categorisation by sensitive attributes.
None of our current AI uses fall within the high-risk categories listed in Annex III of the Act. We do not deploy AI for employment decisions, access to essential services, law enforcement, assessment of natural persons in education and vocational training, or critical infrastructure operation. Our maturity assessment scoring evaluates organisational culture in aggregate, not individual employees.
Some of our features fall within the limited-risk category and trigger transparency obligations under Article 50 of the Act. Specifically:
The majority of our AI uses fall into the minimal-risk category, for which the Act imposes no specific obligations beyond voluntary codes of conduct. We nevertheless apply our internal AI principles to these uses.
We are a deployer of general-purpose AI models supplied by third-party providers; we are not a provider of foundation models. Our providers are responsible for compliance with the obligations applicable to GPAI model providers under Articles 51 to 55 of the Act.
The United Kingdom does not currently have a single statutory equivalent to the EU AI Act. Instead, the UK applies a sectoral, pro-innovation framework set out in the 2023 government white paper, with existing regulators applying their remits to AI use within their sectors. Where we operate in the UK, we align our AI practices with:
If the UK introduces statutory AI legislation in future, we will reassess this policy and update it accordingly. Where EU AI Act standards are stricter than UK requirements, we apply the EU standard as our baseline.
We design AI features so that you can tell when AI is involved and so that a human remains in control of consequential decisions.
The following uses are prohibited across all CyBehave properties, regardless of customer or user request:
By default, content you submit to AI-powered features on our properties is not used to train third-party AI models. We rely on AI provider configurations and contractual terms that exclude API inputs from model training.
Where AI features process personal data, we apply the same legal bases, retention rules, and security controls described in our Privacy Policy. We minimise the personal data sent to AI providers and, where feasible, send only anonymised, aggregated, or synthetic inputs.
We do not train our own foundation models on customer or user data. Where we develop AI features in-house, training and tuning use synthetic data, public datasets, or data we have explicit rights to use.
In addition to your rights under UK GDPR (see Privacy Policy), you have the following rights in relation to AI features on our properties:
To exercise any of these rights, contact ai@cybehave.com. You may also raise concerns with the ICO or, where applicable, with your national data protection authority in the EU/EEA.
CyBehave conducts research at the intersection of behavioural science, cybersecurity, and AI. Our research programmes, including Behavioural Convergence Theory (BCT), examine how established human behavioural frameworks can be extended to govern the behaviour of agentic AI systems.
Research outputs published on cybehave.com may use AI tools for literature review, data analysis, drafting, and visualisation. Research that involves human participants follows separate ethics, consent, and data protection processes documented at the time of recruitment. We do not conduct research on AI systems in ways that would breach safety guidelines published by the underlying model provider.
This policy is owned by the CyBehave AI Governance function and is reviewed at least annually, and additionally whenever:
Material changes will be reflected in the "Last updated" date at the top of this page and, where appropriate, communicated to registered Heroes users by email.
AI policy and AI feature queries: ai@cybehave.com
Privacy queries: privacy@cybehave.com
General enquiries: Contact page