ETSI Baseline AI Security Requirements
Based on the UK AI Cyber Security Code of Practice (2025)
QuickStart Guide
What is it?
- The ETSI Baseline AI Security Requirements (ETSI TS 104 223), based on the UK's AI Cyber Security Code of Practice published in January 2025.
- The core standard:
- Covers the full AI lifecycle: Design (including "Getting Ready" and governance), Development, Deployment, Maintenance, and End-of-Life.
- Sets the baseline security requirements for AI systems in a set of mandatory ("shall"), recommended ("should"), and permissive ("may") provisions.
- Comes with a detailed companion Implementation Guide.
- It has now been elevated into an international standard, with its Implementation Guide published as ETSI TR 104 128.
Why it matters
- Common baseline across stakeholders: a shared security language for developers, deployers, and regulators.
- Built on existing cybersecurity, extended for AI-specific threats: explicitly addresses risks such as data poisoning, evasion, and prompt injection within established cybersecurity practice, not as a new silo.
- A practical strategic compass for rolling out AI security, with:
  - Contextual lifecycle coverage: from strategy to end-of-life, not just deployment.
  - Risk assessment and threat modelling: focus on your own risks, not headlines.
  - Scenario-based examples: for organisations of different sizes and types.
- Cuts through standards overload: maps existing standards and guidelines to each provision.
- International reach with harmonisation: mapped to the NIST CSF and being aligned with the EU AI Act, CEN/CENELEC, ISO, and others.
- Conformity Assessment planned for end of 2025 to provide a standardised benchmark of maturity and compliance.
Core Structure & Baseline Requirements
| Phase | Summary of Requirements | Risks Addressed |
|---|---|---|
| Secure Design (includes "Getting Ready") | Define objectives and governance, assign stakeholder roles, assess risk appetite, and conduct structured threat modelling. Ensure AI-specific training with role-based awareness and continuous updates on evolving threats. | Misaligned projects, accountability gaps, wasted effort, over- or under-engineering, untrained staff missing new risks. |
| Secure Development | Protect environments and data pipelines, track provenance, apply sufficient data protection and least-privilege access, secure dependencies (SBOMs), follow secure coding practices, and validate model behaviour through adversarial testing and AI red teaming. | Compromised environments, supply-chain attacks, data leakage or poisoning, exploitable models/code. |
| Secure Deployment | Enforce guardrails and policies, provide safe-use guidance, manage configuration and change control, ensure isolation, apply API protections such as rate limiting (illustrated in the sketch after this table), and conduct end-to-end penetration tests and AI red teaming. | Misuse or abuse, prompt injection, interface-based attacks, untested controls failing under adversarial conditions. |
| Secure Maintenance | Continuously monitor performance and logs for anomalies or drift, apply timely patches and updates, conduct periodic re-testing, and ensure incident response and disaster recovery cover AI. | Undetected model degradation, evolving attack vectors, delayed or ineffective incident response. |
| Secure End-of-Life | Decommission responsibly: securely delete or transfer models, datasets, prompts, and configurations to eliminate residual risk. | Orphaned or abandoned AI assets being exploited as attack vectors. |
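To make the deployment-phase API protections concrete, here is a minimal sketch of a token-bucket rate limiter in front of an AI inference endpoint. The limits, key handling, and names are illustrative assumptions, not prescriptions from the standard.

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full so normal traffic is not penalised
        self.last = time.monotonic()

    def allow(self) -> bool:
        # top up based on elapsed time, then spend one token per request
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per API key: e.g. a sustained 5 requests/second, bursts up to 20.
buckets: dict[str, TokenBucket] = {}

def check_request(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5.0, capacity=20.0))
    return bucket.allow()
```

Rejected calls would typically return HTTP 429; per-key limits also make interface-based abuse easier to spot in logs.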
How to use it
- Getting Ready: review AI cyber maturity, define goals, map systems to the lifecycle, assign roles, and plan next steps. Prioritise training and ensure monitoring for updates. ⚠️ If you have already deployed AI, address immediate risks first; then work backwards to uplift security, conform with the standard, and scale securely.
- Adopt threat modelling early and iteratively: focus on real risks, not headlines; use it to evaluate risks, plan mitigations, and scope testing, and embed evaluation from the start (see the risk-register sketch after this list).
- Embed human oversight and user communication from day one, including explainability.
- Secure your environments: apply production-grade protections to models, prompts, and datasets, with GDPR-compliant data handling.
- Secure your supply chain: update rules to cover new AI artefacts, including datasets, models, and AI agents.
- Document provenance (data, models, prompts, agents) and keep audit trails (see the provenance sketch after this list).
- Test end-to-end: penetration testing, AI red teaming, adversarial evaluation.
- Monitor, patch, and update continuously, covering AI-specific aspects such as drift, bias, and non-determinism (a drift-check sketch follows this list).
- Revisit Incident Response & Disaster Recovery: ensure AI-specific concerns (hallucinations, misuse, model state) are covered and teams have the required skills.
- Plan secure end-of-life disposal, including GDPR-compliant data handling (see the decommissioning sketch after this list).
- Use the Implementation Guide (ETSI TR 104 128) for scenarios and examples.
- Iterate maturity: start with the baseline and layer stronger practices over time.
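A lightweight risk register can make the threat-modelling step above concrete. The hypothetical sketch below ties each lifecycle phase to a threat, scores it as likelihood × impact, and records the planned mitigation; every name and score is illustrative, not drawn from the standard.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    phase: str        # lifecycle phase from the standard
    name: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (minor) .. 5 (severe)
    mitigation: str

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

register = [
    Threat("Development", "training-data poisoning", 3, 5, "provenance checks + data validation"),
    Threat("Deployment", "prompt injection via user input", 4, 4, "input filtering + output guardrails"),
    Threat("Maintenance", "undetected model drift", 3, 3, "drift monitoring + periodic re-testing"),
]

# Review highest-risk items first; re-score at each iteration of the threat model.
for t in sorted(register, key=lambda t: t.risk, reverse=True):
    print(f"[{t.risk:>2}] {t.phase}: {t.name} -> {t.mitigation}")
```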
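For the supply-chain and provenance items, one approach is an append-only audit trail of hashed artefacts, sketched below. The record fields and file names are assumptions, not a specific SBOM schema such as CycloneDX or SPDX.

```python
import datetime
import hashlib
import json
import pathlib

def sha256(path: pathlib.Path) -> str:
    # hash the artefact in chunks so large models/datasets fit in memory
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_artefact(path: str, kind: str, source: str,
                    trail: str = "audit_trail.jsonl") -> None:
    p = pathlib.Path(path)
    entry = {
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "kind": kind,       # e.g. "dataset", "model", "prompt", "agent"
        "name": p.name,
        "sha256": sha256(p),
        "source": source,   # where the artefact came from (vendor, run ID, URL)
    }
    with open(trail, "a") as f:
        f.write(json.dumps(entry) + "\n")

# record_artefact("models/classifier-v2.bin", "model", "internal training run 2025-06-01")
```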
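For continuous monitoring, drift can be flagged with a simple statistic such as the Population Stability Index (PSI) between a reference distribution captured at release and live traffic. The sketch below uses synthetic data; the 0.2 alert threshold is a common rule of thumb, not a requirement of the standard.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf   # catch live values outside the reference range
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # clip empty bins to avoid division by zero / log of zero
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # e.g. model confidence scores at release
today = rng.normal(0.3, 1.1, 10_000)      # shifted live distribution

score = psi(baseline, today)
if score > 0.2:                           # ~0.2 is a common "significant drift" cut-off
    print(f"PSI={score:.3f}: investigate drift, consider re-testing or retraining")
```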
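Finally, a hedged sketch of the end-of-life step: enumerate registered AI assets, remove them, and log each action. The asset paths are hypothetical, and plain file deletion is only a starting point; sensitive media may require certified destruction.

```python
import datetime
import json
import pathlib

# Hypothetical asset inventory; in practice this would come from the audit trail.
ASSETS = [
    "models/classifier-v2.bin",
    "datasets/training-2024.parquet",
    "prompts/system-prompt-v7.txt",
]

def decommission(paths: list[str], log_file: str = "decommission_log.jsonl") -> None:
    for raw in paths:
        p = pathlib.Path(raw)
        entry = {
            "asset": str(p),
            "deleted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "existed": p.exists(),
        }
        if p.exists():
            # plain unlink only removes the reference; for sensitive data, follow
            # your storage provider's certified destruction process instead
            p.unlink()
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")

# decommission(ASSETS)
```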