
EU AI Act Compliance: The Complete Guide to August 2, 2026 Enforcement

The EU AI Act starts enforcing high-risk AI system obligations on August 2, 2026. Penalties reach €35M or 7% of global revenue. This is the complete guide — every Article, every deadline, every control, with the technical path to continuous compliance.

EchelonGraph · Founder

TL;DR. The EU AI Act starts enforcing high-risk AI system obligations on August 2, 2026. Penalties reach €35 million or 7% of global annual revenue, whichever is higher — more punitive than GDPR. Most current compliance tooling (Vanta, Drata, OneTrust) treats AI Act compliance as policy templates: PDFs you sign. The Act requires live evidence that controls are operating in your actual cloud and Kubernetes infrastructure. This guide walks through every relevant Article, the timeline, who's in scope, the penalty structure, and the technical path to continuous compliance.

1. The August 2, 2026 deadline

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. Its provisions phase in over three years:

Date | What kicks in | Who it applies to
Feb 2, 2025 | Prohibited AI practices (Chapter II) + AI literacy obligations | Everyone
Aug 2, 2025 | Rules for General-Purpose AI (GPAI) models + governance + penalty regime | GPAI providers (OpenAI, Anthropic, Google, etc.)
Aug 2, 2026 | High-risk AI system obligations (Annex III), the bulk of the regulation | Most enterprises shipping AI features
Aug 2, 2027 | High-risk AI systems that are products covered by other EU harmonization legislation (Annex I) | Regulated industries (medical devices, machinery, automotive)

The August 2, 2026 deadline is the one that matters for most enterprises. That's when the rules covering AI used in hiring, credit, healthcare, critical infrastructure, biometrics, education, and law enforcement become legally binding.

[Figure: EU AI Act enforcement timeline (Regulation (EU) 2024/1689). In force Aug 1, 2024; prohibitions Feb 2, 2025; GPAI rules Aug 2, 2025; high-risk obligations Aug 2, 2026; Annex I systems Aug 2, 2027. Aug 2, 2026 is the deadline that matters for most enterprises shipping AI features.]

2. Who is in scope

You are in scope of the August 2, 2026 deadline if you:

  • Operate in the EU (sell to EU customers, employ EU staff, run servers in the EU), OR
  • Place an AI system on the EU market (your product is available to EU users), OR
  • Use AI output in the EU (your AI-generated content reaches EU users)

The Act is extraterritorial, like GDPR: a US-headquartered company with EU customers is in scope.

    The Act distinguishes four risk tiers:

  • Unacceptable risk (banned outright as of Feb 2, 2025): social scoring, manipulative AI, untargeted biometric scraping
  • High-risk (in scope Aug 2, 2026): the focus of this guide — listed in Annex III
  • Limited risk (transparency obligations): chatbots, deepfakes, AI-generated content
  • Minimal risk (no specific obligations): spam filters, AI in video games
High-risk AI systems (Annex III)

    Your AI system is "high-risk" if it falls into any of these categories:

  • Biometrics — remote biometric identification, biometric categorization, emotion recognition
  • Critical infrastructure — road traffic, water/gas/heating/electricity supply, digital infrastructure
  • Education and vocational training — AI determining access, evaluating learning outcomes, monitoring student behavior
  • Employment and worker management — recruitment, screening, evaluation, promotion, termination decisions
  • Essential private and public services — credit scoring, life and health insurance risk assessment, emergency services dispatch
  • Law enforcement — risk assessments, evidence evaluation, profiling
  • Migration, asylum and border control — risk assessments, document verification
  • Administration of justice and democratic processes — judicial decision support, electoral influence
If you ship AI in any of these categories to EU users, you are in scope.

    3. The penalty structure

Violation | Maximum fine
Prohibited AI practices (Chapter II) | €35M or 7% of global annual turnover, whichever is higher
Non-compliance with high-risk system obligations (Articles 8-15, 17) | €15M or 3% of global annual turnover
Supplying incorrect, incomplete, or misleading information to authorities | €7.5M or 1% of global annual turnover

    For comparison: GDPR caps at €20M / 4%. The AI Act is more punitive — and enforcement is per-incident, not per-year.

    National regulators (the AI Office at EU level + national supervisory authorities) have the power to issue fines, demand product withdrawal from the market, and impose corrective actions.

    4. The Articles that matter for cloud security

    The Act has 113 articles. For a cloud security and compliance platform, four articles do most of the work.

[Figure: Article 15 cross-framework collapse. Implement controls once, satisfy 5 frameworks: EU AI Act (Articles 9, 15, 16, 17) maps to ISO/IEC 42001:2023 §8.2/§8.4, NIST AI-RMF 1.0 MEASURE 2.5/2.6 and MANAGE 2.1, MITRE ATLAS AML.T0011/AML.T0024, OWASP LLM Top 10 LLM02/LLM03/LLM07, and SOC 2 (AICPA TSP) CC6.1/CC7.1. The same control evidence satisfies all 5 frameworks.]

    Article 9 — Risk Management System

    Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system running across the entire lifecycle of the AI system. This includes:

  • Identifying reasonably foreseeable risks
  • Estimating and evaluating risks that may emerge during foreseeable misuse
  • Adopting risk-management measures
  • Testing the system to ensure it performs consistently
What this means in practice: you need continuous, documented evidence that you are identifying and mitigating risks. Annual reviews are insufficient.
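
To make that concrete, here is a minimal sketch of what a continuously checked risk register could look like in Python. The field names and the 90-day freshness window are illustrative assumptions, not terms taken from the Act:

```python
# Minimal sketch of an Article 9-style risk register with freshness checks.
# Field names and the 90-day window are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RiskEntry:
    risk_id: str
    description: str       # e.g. "prompt injection against support chatbot"
    severity: str          # "low" | "medium" | "high"
    mitigation: str        # the measure adopted to address the risk
    last_tested: datetime  # when the mitigation was last exercised

def stale_risks(register: list[RiskEntry], max_age_days: int = 90) -> list[RiskEntry]:
    """Return entries whose mitigations have not been tested recently.

    An annual review cycle would leave every entry flagged for months at
    a time, which is exactly the gap continuous evidence collection closes.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [r for r in register if r.last_tested < cutoff]
```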

    Article 15 — Accuracy, Robustness, and Cybersecurity

    The article most relevant to a security platform. Article 15 requires high-risk AI systems to:

  • Be designed and developed to achieve appropriate accuracy for their intended purpose
  • Be robust — resilient to errors, faults, inconsistencies within the system or environment
  • Have cybersecurity measures appropriate to the risk and the state of the art
  • Be resilient against attempts by unauthorized third parties to alter use, outputs, or performance by exploiting vulnerabilities
  • "Cybersecurity appropriate to the risk and state of the art" is doing enormous work in that sentence. In practice it requires:

  • AI workload inventory — you must know which AI systems you operate
  • Continuous posture monitoring — annual scanning will not suffice
  • Adversarial testing — against the threat landscape (MITRE ATLAS)
  • Audit trail — proof the controls are operating
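
The audit-trail point is where most teams under-build. Below is a minimal sketch of a tamper-evident log: each record carries the SHA-256 of its predecessor, so editing or deleting any entry breaks verification. Pure standard library; the record layout is an assumption:

```python
# Hash-chained audit log sketch: each record commits to the previous one,
# so the chain is tamper-evident. Record layout is illustrative.
import hashlib
import json

def append_event(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    prev = "0" * 64
    for rec in log:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_event(log, {"control": "endpoint-auth", "status": "pass"})
append_event(log, {"control": "rbac-least-priv", "status": "fail"})
assert verify_chain(log)
```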
Article 16 — Obligations of Providers of High-Risk AI Systems

    Providers must:

  • Ensure their AI systems comply with the requirements of Articles 8-15
  • Have a quality management system in place
  • Keep documentation for 10 years after the system is placed on the market
  • Conduct conformity assessment procedures before placing the system on the EU market
  • Register the system in the EU database
  • Take corrective action if non-compliance is identified
Article 17 — Quality Management System

    The QMS must cover (among other things):

  • Strategy for regulatory compliance
  • Techniques for design control and verification
  • Technical specifications and standards applied
  • Systems for data management — data collection, preparation, labeling, storage
  • The risk management system (Article 9)
  • Post-market monitoring — continuous evaluation of system performance in production
  • Procedures for reporting serious incidents
  • Procedures for record-keeping including event logs as referred to in Article 12
The QMS is a living, breathing system with continuous evidence requirements.
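
For the Article 12 event logs in particular, it helps to fix a record shape early. The Act mandates automatic recording of events over the system's lifetime but does not prescribe a schema, so every field in this sketch is an assumption:

```python
# Illustrative record shape for Article 12-style event logging.
# The Act requires logging capability; this schema is an assumption.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class InferenceEvent:
    timestamp: str       # ISO 8601, UTC
    system_id: str       # which high-risk AI system handled the request
    model_version: str   # exact model artifact that produced the output
    input_digest: str    # hash of the input, so no raw data in the log
    output_digest: str   # hash of the output
    caller: str          # authenticated principal that invoked the system

event = InferenceEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_id="recruitment-screener",      # hypothetical system name
    model_version="screener-2026-01-rc3",  # hypothetical version tag
    input_digest="sha256:...",
    output_digest="sha256:...",
    caller="svc-hr-portal",
)
print(json.dumps(asdict(event)))  # ship to the tamper-evident store
```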

    5. Why current compliance tooling does not cover this

    The AI Act assumes you can produce technical evidence that controls are operating right now — not that you have a policy saying they should be.

Vendor | What they actually do for the AI Act
Vanta | AI compliance = policy template you sign + a separate AI risk questionnaire. No live infrastructure scoring against AI Act Article 15.
Drata | AI risk assessment questionnaires; documentation collection for the QMS. No live scoring.
OneTrust | "AI Governance" as a separate SKU; primarily about model inventory + DPIA-style risk assessment. Not live cloud-state scoring.
Secureframe | AI vendor-risk reviews; document collection. Same gap.
Most legacy GRC | Spreadsheets.

    The gap: none of these score actual cloud and Kubernetes state against the actual control text every 5 minutes. They give you documentation. The AI Act wants documentation and technical evidence.

    6. The technical path to continuous AI Act compliance

    6.1 — Inventory every AI workload

    Article 16 obligation: know your inventory.

  • Containers running model-serving (KServe, Triton, BentoML, Seldon, Ray Serve, vLLM, Ollama)
  • Vector databases (Milvus, Weaviate, Qdrant, Chroma, Pinecone)
  • Inference proxies (LiteLLM, BerriAI, OpenAI proxy gateways)
  • Model registries (MLflow, Weights & Biases self-hosted, HuggingFace private)
  • Training pipelines (Kubeflow, Ray clusters, Argo Workflows running ML jobs)
  • Notebook environments (JupyterHub, SageMaker, Vertex Workbench)
The inventory must be discovered, not declared; anything declared-only will go stale within a quarter.
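
A minimal discovery sketch using the official Kubernetes Python client: list every container image in the cluster and match it against known AI-serving stacks. The keyword list mirrors the inventory above and is deliberately incomplete:

```python
# Agent-side sketch: discover AI workloads by matching container images
# against known model-serving, vector-DB, and ML-pipeline stacks.
from kubernetes import client, config

AI_IMAGE_KEYWORDS = [
    "kserve", "triton", "bentoml", "seldon", "ray", "vllm", "ollama",
    "milvus", "weaviate", "qdrant", "chroma", "litellm",
    "mlflow", "kubeflow", "jupyterhub",
]

def discover_ai_workloads() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # use load_incluster_config() inside a pod
    hits = []
    for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
        for c in pod.spec.containers:
            image = (c.image or "").lower()
            if any(kw in image for kw in AI_IMAGE_KEYWORDS):
                hits.append((pod.metadata.namespace, pod.metadata.name, c.image))
    return hits

for ns, name, image in discover_ai_workloads():
    print(f"{ns}/{name}: {image}")
```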

    6.2 — Map each workload to the relevant Articles

Workload type | Articles in scope
Recruitment screening AI | 9, 15, 16, 17 (high-risk, Annex III §4)
Credit-scoring AI | 9, 15, 16, 17 (high-risk, Annex III §5)
Internal coding assistant | Limited-risk transparency obligations (Article 50)
Customer-facing chatbot | Limited-risk transparency obligations (Article 50)
Image-generation model for marketing | Limited-risk, content-labeling under Article 50
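
This mapping is more useful as data than as prose, because it can then drive automated checks. A sketch of the table above as a lookup; the category keys and structure are assumptions:

```python
# The workload-to-Article table above, expressed as data checks can consume.
# Category keys and structure are illustrative assumptions.
ARTICLE_MAP = {
    "recruitment-screening": {"risk": "high", "articles": [9, 15, 16, 17], "annex": "III §4"},
    "credit-scoring":        {"risk": "high", "articles": [9, 15, 16, 17], "annex": "III §5"},
    "coding-assistant":      {"risk": "limited", "articles": [50]},
    "customer-chatbot":      {"risk": "limited", "articles": [50]},
    "marketing-image-gen":   {"risk": "limited", "articles": [50]},
}

def required_articles(workload_type: str) -> list[int]:
    """Articles a workload must evidence; empty list means unclassified."""
    entry = ARTICLE_MAP.get(workload_type)
    return entry["articles"] if entry else []
```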

    6.3 — Operationalize Article 15 cybersecurity controls

    For each high-risk workload, run continuous controls covering:

  • Network exposure — is the inference endpoint reachable from the public internet without authentication?
  • Identity & access — is the workload IAM/RBAC scoped to least privilege?
  • Encryption — is data in transit and at rest encrypted? With what keys?
  • Supply chain — what model weights, datasets, libraries did this workload use? Are they signed? Are there known CVEs?
  • Adversarial resilience — has this workload been tested against MITRE ATLAS techniques (model extraction, prompt injection, training-data poisoning)?
  • Audit logging — is every inference / training event captured and tamper-evident?
These are not annual checkbox items. The Act requires continuous evidence.
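
As one concrete instance of the network-exposure control on AWS, here is a sketch using boto3 that flags security groups allowing 0.0.0.0/0 ingress on common model-serving ports. The port list and region are assumptions; a real check would also cover IPv6 ranges and load balancers:

```python
# Sketch of the network-exposure control for AWS: find security groups
# that allow world ingress on ports model servers commonly listen on.
import boto3

SERVING_PORTS = {80, 443, 8000, 8080, 8500, 8501}  # assumed serving ports

def world_open_serving_groups(region: str = "eu-west-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            # Protocol "-1" rules carry no port range: treat as all ports.
            lo = rule.get("FromPort", 0)
            hi = rule.get("ToPort", 65535)
            world = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
            if world and any(lo <= p <= hi for p in SERVING_PORTS):
                findings.append(sg["GroupId"])
    return findings

print(world_open_serving_groups())  # non-empty output = an Article 15 finding
```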

    6.4 — Cross-framework control mapping

    Article 15 cybersecurity controls overlap heavily with:

  • ISO/IEC 42001:2023 (AI management system standard) §8.2, §8.4
  • NIST AI Risk Management Framework MEASURE 2.5, 2.6; MANAGE 2.1
  • MITRE ATLAS AML.T0011 (Shadow AI), AML.T0024 (Model Inversion)
  • OWASP LLM Top 10 LLM02 (Sensitive Information Disclosure), LLM07 (Supply Chain), LLM03 (Training Data Poisoning)
  • SOC 2 CC6.1 (Logical Access), CC7.1 (System Monitoring)
Implement once, satisfy multiple frameworks. The right platform makes this automatic.
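
One way to realize "implement once, satisfy many" is to tag each check's evidence with every framework control it maps to, so a single scan emits evidence for all five frameworks at once. A sketch; the check name and its mappings are illustrative, reusing identifiers from the list above:

```python
# Cross-framework evidence tagging sketch: one check, many controls.
# The mapping entries reuse identifiers from the list above and are
# illustrative, not an authoritative crosswalk.
CONTROL_MAPPINGS = {
    "inference-endpoint-not-public": [
        "EU-AI-Act:Art.15",
        "ISO42001:8.2",
        "NIST-AI-RMF:MEASURE-2.6",
        "OWASP-LLM:LLM02",
        "SOC2:CC6.1",
    ],
}

def evidence_record(check_id: str, resource: str, passed: bool) -> dict:
    """One check result, tagged with every framework control it satisfies."""
    return {
        "check": check_id,
        "resource": resource,
        "passed": passed,
        "satisfies": CONTROL_MAPPINGS.get(check_id, []),
    }

print(evidence_record("inference-endpoint-not-public", "sg-0abc123", True))
```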

    7. How EchelonGraph fits

    EchelonGraph is a 3-tier cloud security platform with live AI compliance scoring as a native property — not a separate SKU. Every tier is shipping today.

[Figure: EchelonGraph coverage map for EU AI Act Articles 9, 15, 16, 17. Tier 1, EcheSky (free): agentless multi-cloud scanner, 440+ CIS rules, CVE correlation, 3D blast-radius attack graph; covers AI workload inventory (Art. 16), risk baseline (Art. 9), static control evidence (Art. 15). Tier 2, EcheNet (licensed): in-cluster agent, SBOM generation, K8s audit, MITRE ATT&CK mapping, runtime vulnerability detection; covers supply-chain evidence (Art. 15), post-market monitoring (Art. 17), audit trail (Art. 16). Tier 3, EcheDeep (licensed): eBPF runtime monitoring, AAD-bound BYOK, shadow API discovery, ML anomaly detection, auto-remediation; covers adversarial resilience (Art. 15), real-time alteration detection (Art. 15), 10-year audit log (Art. 16). Differentiator: zero-knowledge from kernel to dashboard; we never see customer plaintext.]

    Tier 1 — EcheSky (Free for customers)

Agentless multi-cloud scanner across AWS, GCP, and Azure. It discovers your AI workload inventory automatically and scores it live against 12 compliance frameworks, including 5 AI-specific ones:

  • NIST AI Risk Management Framework (Govern / Map / Measure / Manage)
  • EU AI Act (Articles 9, 15, 16, 17)
  • ISO/IEC 42001:2023 (AI Management System)
  • MITRE ATLAS (Adversarial ML Threat Tactics)
  • OWASP LLM Top 10 (LLM01–LLM10)
440+ misconfiguration rules mapped to CIS v2.0. CVE correlation. 3D blast-radius attack graph on Neo4j showing exactly which AI workloads are reachable from the internet and what they can access.

    Tier 2 — EcheNet (licensed)

    Lightweight in-cluster agent. Container image scanning, SBOM generation, Kubernetes audit (RBAC, network policies, Pod Security Standards), MITRE ATT&CK mapping, runtime vulnerability detection. The inventory layer for high-risk AI systems — every model-serving container, every vector DB, every inference proxy.

    Tier 3 — EcheDeep (licensed)

    eBPF runtime monitoring with AAD-bound BYOK encryption — each customer's AI workload telemetry is encrypted at the kernel boundary with a key bound to that tenant's identity. Sysdig, Aqua, and Falco cannot match this. Shadow API discovery from traffic analysis, ML-based anomaly detection, auto-remediation via IaC pull requests (Terraform / Pulumi / Helm), threat intelligence feeds.

    This is the Article 15 cybersecurity layer: kernel-level visibility into what every AI workload is actually doing at runtime.

    What this gives you concretely

AI Act requirement | EchelonGraph capability
Article 9 — Risk management system | Live risk scoring + 30/60/90-day trending across 12 frameworks
Article 15 — Cybersecurity appropriate to the risk | Continuous Tier 1+2+3 scanning across cloud, cluster, and kernel
Article 15 — Robustness against unauthorized alteration | Tier 3 eBPF runtime detection of process anomalies, lateral movement, container escape
Article 15 — Audit trail / supply chain | SBOM (Tier 2) + signed event log (Tier 3) + cross-framework control mapping
Article 16 — Documentation kept for 10 years | Daily, tamper-evident compliance score snapshots
Article 17 — Post-market monitoring | Continuous Tier 2+3 telemetry from production workloads
MITRE ATLAS AML.T0011 (Shadow AI) | Shadow AI Radar (free) + Shadow Engine Map (Tier 2)

    8. What to do this quarter

    If August 2, 2026 is on your calendar:

  • Inventory your AI workloads. Run EchelonGraph's free Tier 1 scan on one AWS / GCP / Azure account today; it discovers your AI workloads in 8 minutes.
  • Map each high-risk workload to Articles 9 + 15 + 16 + 17. EchelonGraph's compliance page does this automatically.
  • Establish continuous evidence collection. Tier 2 + Tier 3 agents start producing audit-grade evidence within 24 hours of install.
  • Run a cross-framework gap analysis. Find the overlap with SOC 2, ISO 27001, ISO 42001, NIST AI-RMF — implement once, satisfy multiple.
  • Document the QMS. Article 17 wants a quality management system. Score history + remediation history forms the evidence base.
9. Authoritative sources

  • Regulation (EU) 2024/1689 — full text on EUR-Lex
  • European Commission AI Act resource page
  • European Parliament: The AI Act explainer
  • Independent AI Act tracker (artificialintelligenceact.eu)
  • Council of the EU press release on adoption
  • ENISA — EU Agency for Cybersecurity — AI threat landscape
  • ISO/IEC 42001:2023 standard summary
  • NIST AI Risk Management Framework
  • MITRE ATLAS
  • OWASP Top 10 for LLM Applications
10. Want help?

    EchelonGraph is open to 5 design partners for the EU AI Act readiness program: full Enterprise access free for 6 months in exchange for feedback and production deployment.

  • See Design Partner Program
  • Run a free Tier 1 scan
  • Talk to the founder on LinkedIn — building this in public
  • Follow EchelonGraph on LinkedIn — product updates, compliance research, design partner news
The August 2, 2026 deadline is closer than the procurement cycle of most enterprises. Start now.
