
EU AI Act — Regulation (EU) 2024/1689

The world's first comprehensive AI regulation. Obligations for high-risk AI systems under Articles 9-17 apply from August 2, 2026. Penalties reach €35M or 7% of global annual turnover, whichever is higher — a ceiling above GDPR's. Extraterritorial reach: the Act applies to any provider, deployer, importer, or distributor whose AI system or its output is placed on or used in the EU market.

5 critical · 11 high · 2 medium
ART9-RM · EUAIA-9-001 · critical

Risk management system established and maintained

Article 9 — Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system as a continuous, iterative process run across the system's entire lifecycle.

ART9-FORESEEABLE · EUAIA-9-002 · high

Foreseeable risks and misuse identified

Article 9(2)(a)(b) — Identification of known and reasonably foreseeable risks; estimation under reasonably foreseeable misuse.

ART10-DATA-GOV · EUAIA-10-001 · high

Training, validation, and testing data governance

Article 10 — Data sets used for training, validation, and testing must meet quality criteria: relevance, representativeness, accuracy, and completeness, with their statistical properties documented.
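The Article 10 criteria lend themselves to an automated quality gate in the data pipeline. A minimal sketch, assuming a team-defined report structure and thresholds (`DatasetReport`, `article10_gate`, and the threshold values are all illustrative, not anything the Act prescribes):

```python
from dataclasses import dataclass

@dataclass
class DatasetReport:
    """Documented statistical properties of one training/validation/test split."""
    name: str
    missing_rate: float   # fraction of records with missing fields (completeness)
    group_shares: dict    # share of each demographic/operational group (representativeness)

def article10_gate(report: DatasetReport,
                   max_missing: float = 0.01,
                   min_group_share: float = 0.05) -> list:
    """Return a list of findings; an empty list means the split passes the gate."""
    findings = []
    if report.missing_rate > max_missing:
        findings.append(f"{report.name}: completeness below threshold "
                        f"({report.missing_rate:.1%} missing > {max_missing:.0%})")
    for group, share in report.group_shares.items():
        if share < min_group_share:
            findings.append(f"{report.name}: group '{group}' under-represented "
                            f"({share:.1%} < {min_group_share:.0%})")
    return findings
```

Running the gate per split and archiving the findings alongside the data sheet doubles as the "statistical properties documented" evidence.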

ART11-TECH-DOC · EUAIA-11-001 · high

Technical documentation maintained per Annex IV

Article 11 — Technical documentation drawn up before placement on the market; covers system description, intended purpose, technical specs, design choices, validation results.

ART12-LOGGING · EUAIA-12-001 · critical

Automatic event logging over system lifetime

Article 12 — High-risk AI systems automatically record events (logs) over their lifetime to ensure traceability; logs cover the periods of use, the natural persons verifying outputs, and the reference databases consulted.
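In practice this means an append-only, timestamped event stream carrying those fields. A minimal sketch, assuming JSON-lines records shipped to a write-once sink; the field names and event vocabulary are illustrative, not mandated by the Act:

```python
import json
from datetime import datetime, timezone

def log_event(system_id: str, event: str, *,
              reviewer_id: str = None,
              reference_db: str = None,
              sink=print):
    """Emit one timestamped event record in the spirit of Article 12:
    period of use, the natural person verifying outputs, and any
    reference database consulted."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,               # e.g. "session_start", "output_verified"
        "reviewer_id": reviewer_id,   # natural person checking outputs, if any
        "reference_db": reference_db, # reference database used, if any
    }
    sink(json.dumps(record, sort_keys=True))
    return record
```

Pointing `sink` at an append-only store (rather than stdout) is what makes the trail defensible over the system's lifetime.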

ART13-TRANSPARENCY · EUAIA-13-001 · high

Transparency for deployers (instructions for use)

Article 13 — Providers must furnish deployers with instructions for use enabling them to interpret outputs and use the system appropriately.

ART14-HUMAN-OVERSIGHT · EUAIA-14-001 · critical

Human oversight measures during use

Article 14 — High-risk AI systems must be effectively overseen by natural persons during use; human-in-the-loop or human-on-the-loop measures implemented.

ART15-ACCURACY · EUAIA-15-001 · high

Accuracy declared and met

Article 15(1) — High-risk AI systems achieve appropriate accuracy for their intended purpose; accuracy metrics declared in instructions for use.
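A declared accuracy figure is only auditable if the release pipeline checks measured performance against it. A dependency-free sketch of that check (the function name and `tolerance` parameter are illustrative choices, not from the Act):

```python
def accuracy_within_declaration(y_true, y_pred, declared: float,
                                tolerance: float = 0.0) -> bool:
    """Check that measured accuracy meets the level declared in the
    instructions for use (Article 15(1)). `tolerance` allows a small,
    documented slack before a release is blocked."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    measured = correct / len(y_true)
    return measured + tolerance >= declared
```

Wiring this into CI so a release fails when the check returns `False` turns the declaration into an enforced invariant rather than a static line in the documentation.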

ART15-ROBUSTNESS · EUAIA-15-002 · high

Robustness against errors and faults

Article 15(3) — High-risk AI systems are resilient to errors, faults, inconsistencies; redundancy and fail-safe measures implemented.

ART15-CYBERSEC · EUAIA-15-003 · critical

Cybersecurity appropriate to the risk

Article 15(4) — High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs, or performance by exploiting vulnerabilities. This is the clause that makes Article 15 a cybersecurity-team concern.

ART16-RBAC · EUAIA-16-001 · high

Provider quality management system

Article 16(c) — Providers put in place a QMS ensuring compliance with the regulation; documented procedures, accountability, continual improvement.

ART16-CORRECTIVE · EUAIA-16-002 · high

Corrective action procedures

Article 16(j) — Providers take corrective action where the AI system presents a risk and inform distributors, deployers, importers, and authorities.

ART17-QMS · EUAIA-17-001 · high

QMS documentation

Article 17 — QMS documented covering: strategy for regulatory compliance, design control, technical specifications, data management, risk management, post-market monitoring, incident reporting, record-keeping.

ART27-FRIA · EUAIA-27-001 · high

Fundamental Rights Impact Assessment (FRIA)

Article 27 — Deployers that are bodies governed by public law (or private entities providing public services), plus deployers of certain Annex III systems, must conduct a FRIA before first use.

ART50-TRANSPARENCY · EUAIA-50-001 · medium

Transparency obligations — chatbots, deepfakes, AI-generated content

Article 50 — Providers and deployers inform users they are interacting with AI; AI-generated synthetic content (image, audio, text) labelled as such.

ART61-POST-MARKET · EUAIA-61-001 · high

Post-market monitoring system

Article 72 — Providers establish a post-market monitoring system proportionate to the risks; performance data are actively and systematically collected and analysed for emerging risks.

ART72-INCIDENT · EUAIA-72-001 · critical

Serious incident reporting

Article 73 — Providers report serious incidents to market-surveillance authorities within 15 days of becoming aware; within 2 days for a widespread infringement or a serious incident involving critical-infrastructure disruption; within 10 days in the event of a death.
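Because the window depends on how the incident is classified, the deadline is worth computing rather than remembering. A sketch under a simplified reading of the Article (the class names and the mapping are this sketch's own, not statutory terms; the final Act sets a shorter 10-day window for deaths):

```python
from datetime import datetime, timedelta

# Reporting windows in days from awareness; simplified, not legal advice.
REPORTING_WINDOWS = {
    "widespread_infringement": 2,
    "critical_infrastructure": 2,
    "death": 10,
    "serious_incident": 15,  # default outer limit
}

def reporting_deadline(awareness: datetime, incident_class: str) -> datetime:
    """Latest date by which the serious-incident report must be filed."""
    days = REPORTING_WINDOWS.get(incident_class, 15)
    return awareness + timedelta(days=days)
```

Feeding the awareness timestamp from the incident tracker into this function gives on-call staff an unambiguous filing deadline per classification.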

ART99-PENALTY · EUAIA-99-001 · medium

Penalty exposure awareness

Article 99 — Penalty tiers, each the higher of a fixed cap and a share of total worldwide annual turnover: €35M / 7% for prohibited AI practices; €15M / 3% for high-risk non-compliance; €7.5M / 1% for supplying incorrect information.
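The "fixed cap or turnover share, whichever is higher" structure makes exposure easy to quantify. A sketch for rough exposure modelling (tier names are this sketch's own labels; note the Act applies the lower of the two amounts to SMEs, which this ignores):

```python
# Article 99 fine tiers: (fixed cap in EUR, share of worldwide annual turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Worst-case fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * annual_turnover_eur)
```

For a firm with €1B turnover, a prohibited-practice finding tops out at €70M (7% beats the €35M cap), which is why the turnover prong dominates for large providers.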