ETSI Baseline AI Security Requirements

Based on the UK AI Cyber Security Code of Practice (2025)

QuickStart Guide

πŸ“Œ What is it?

  • The ETSI Baseline AI Security Requirements (ETSI TS 104 223) based on the UK’s AI Cyber Security Code of Practice, published January 2025.
  • The core standard:
    • Covers the full AI lifecycle: Design (including getting ready and governance), Development, Deployment, Maintenance, and End-of-Life.
    • Sets the baseline security requirements for AI systems in a set of mandatory (β€œshall”), optional (β€œshould”), and permissive (β€œmay”) provisions.
    • Comes with a detailed companion Implementation Guide.
  • It has now been elevated into an international standard with its Implementation Guide as ETSI TR 104 128.

🌍 Why it matters

  • Common baseline across stakeholders β†’ shared security language for developers, deployers, and regulators.
  • Built on existing cybersecurity but with AI-specific threats β†’ explicitly addresses risks like data poisoning, evasion, prompt injection, but in the context of existing cybersecurity, not as a new silo.
  • A practical strategic compass for rolling out AI Security with:
    • Contextual lifecycle coverage β†’ from strategy to end-of-life, not just deployment.
    • Risk assessment and threat modelling β†’ focus on your own risks, not headlines.
    • Scenario-based examples β†’ for organisations of different size and type.
    • Cuts through standards overload β†’ mapping existing standards and guidelines for each provision.
  • International reach with harmonisation β†’ mapped to NIST CSF and being aligned with EU AI Act, CEN/CENELEC, ISO, and others.
  • Conformity Assessment planned for end of 2025 to provide a standardised benchmark of maturity and compliance.

🧩 Core Structure & Baseline Requirements

Phase Summary of Requirements Risks Addressed
πŸ— Secure Design (includes β€œGetting Ready”) Define objectives and governance, assign stakeholder roles, assess risk appetite, and conduct structured threat modelling. Ensure AI-specific training with role-based awareness and continuous updates on evolving threats. Misaligned projects, accountability gaps, wasted effort, over/under-engineering, untrained staff missing new risks.
πŸ’» Secure Development Protect environments and data pipelines, track provenance, apply sufficient data protection and least-privilege access, secure dependencies (SBOMs), follow secure coding, and validate model behaviour through adversarial testing and AI red teaming. Compromise of environments, supply-chain attacks, data leakage or poisoning, exploitable models/code.
🚦 Secure Deployment Enforce guardrails and policies, provide safe-use guidance, manage configuration and change control, ensure isolation, apply API protections (rate limiting), and conduct end-to-end penetration tests and AI red teaming. Misuse or abuse, prompt injection, interface-based attacks, untested controls failing under adversarial conditions.
πŸ”„ Secure Maintenance Continuously monitor performance and logs for anomalies or drift, apply timely patches/updates, conduct periodic re-testing, and ensure incident response and disaster recovery cover AI. Undetected model degradation, evolving attack vectors, delayed or ineffective incident response.
πŸ—‘ Secure End-of-Life Decommission responsibly: securely delete or transfer models, datasets, prompts, and configurations to eliminate residual risk. Orphaned or abandoned AI assets being exploited as attack vectors.

πŸš€ How to use it

  1. Getting Ready β†’ review AI cyber maturity, define goals, map systems to lifecycle, assign roles, and plan next steps. Prioritise training and ensure monitoring for updates. ⚠️ If you already have deployed AI, address immediate risks first; then work backwards to uplift security, conform with the standard, and scale securely.
  2. Adopt threat modelling early and iteratively β†’ focus on real risks, not headlines, use it to evaluate risks, plan mitigations, and scope testing; embed evaluation from the start.
  3. Embed human oversight and user communication from day one, including explainability.
  4. Secure your environments β†’ apply production-grade protections to models, prompts, datasets, with GDPR-compliant data handling.
  5. Secure your supply chain β†’ update rules to cover new AI artefacts including datasets, models, and AI agents.
  6. Document provenance (data, models, prompts, agents) and keep audit trails.
  7. Test end-to-end β†’ penetration testing, AI red teaming, adversarial evaluation.
  8. Monitor, patch, and update continuously β†’ covering AI-specific aspects (drift, bias, non-determinism).
  9. Revisit Incident Response & Disaster Recovery β†’ ensure AI-specific concerns (hallucinations, misuse, model state) are covered and teams have required skills.
  10. Plan secure end-of-life disposal β†’ include GDPR-compliant data handling.
  11. Use the Implementation Guide or ETSI TR 104 128 for scenarios and examples.
  12. Iterate maturity β†’ start with baseline, layer stronger practices over time.
Scroll to Top