EU AI Act — Regulation (EU) 2024/1689
The world's first comprehensive AI regulation. Obligations for high-risk AI systems under Articles 9-17 apply from August 2, 2026 (August 2, 2027 for high-risk systems embedded in Annex I regulated products). Penalties reach €35M or 7% of global annual turnover, whichever is higher, exceeding the GDPR's €20M / 4% ceiling. Extraterritorial reach: the Act applies to any provider, deployer, importer, or distributor whose AI system or its output reaches the EU market.
Risk management system established and maintained
Article 9 — Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system running across the entire lifecycle.
Foreseeable risks and misuse identified
Article 9(2)(a)-(b) — Identification and analysis of known and reasonably foreseeable risks to health, safety, and fundamental rights; estimation and evaluation of risks arising under reasonably foreseeable misuse.
Training, validation, and testing data governance
Article 10 — Data sets used for training, validation, and testing must meet quality criteria: relevance, sufficient representativeness, and, to the best extent possible, freedom from errors and completeness in view of the intended purpose; their statistical properties must be documented.
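Data-governance criteria like these can be gated in a pipeline. A minimal sketch, assuming a list-of-dicts record format; the field names and the 5% representativeness threshold are illustrative assumptions, not from the Act:

```python
from collections import Counter

# Sketch of an Article 10-style data-governance check. Record shape,
# field names, and thresholds are illustrative assumptions.
def check_split(records, label_key="label", min_class_share=0.05):
    """Return a list of completeness / representativeness issues for one split."""
    issues = []
    # Completeness: flag records containing missing (None) fields.
    incomplete = sum(1 for r in records if any(v is None for v in r.values()))
    if incomplete:
        issues.append(f"{incomplete} record(s) with missing values")
    # Representativeness: every class should hold a minimum share.
    counts = Counter(r[label_key] for r in records if r.get(label_key) is not None)
    total = sum(counts.values())
    for cls, n in sorted(counts.items()):
        if n / total < min_class_share:
            issues.append(f"class {cls!r} underrepresented ({n}/{total})")
    return issues
```

The same gate would run separately on the training, validation, and test splits, since Article 10 applies the criteria to each data set.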
Technical documentation maintained per Annex IV
Article 11 — Technical documentation drawn up before placement on the market; covers system description, intended purpose, technical specs, design choices, validation results.
Automatic event logging over system lifetime
Article 12 — High-risk AI systems must technically allow automatic recording of events (logs) over the system's lifetime; for remote biometric identification systems, logs cover the period of each use, the reference database checked, the input data, and the natural persons who verified the results.
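The logging obligation amounts to an append-only, timestamped event trail. A minimal sketch in Python, assuming a JSON Lines file; the event vocabulary and field names are illustrative assumptions, not prescribed by the Act:

```python
import json
from datetime import datetime, timezone

def log_event(log_path, event_type, details):
    """Append one timestamped event record to an append-only JSON Lines log.

    Sketch of Article 12-style automatic logging; field names are
    illustrative assumptions.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,   # e.g. "use_period_start", "output_verified"
        "details": details,    # e.g. verifier identity, reference database id
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only storage matters here: the trail must support post-market monitoring and investigation, so records are never rewritten in place.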
Transparency for deployers (instructions for use)
Article 13 — Providers must furnish deployers with instructions for use enabling them to interpret outputs and use the system appropriately.
Human oversight measures during use
Article 14 — High-risk AI systems must be effectively overseen by natural persons during use; human-in-the-loop or human-on-the-loop measures implemented.
Accuracy declared and met
Article 15(1), (3) — High-risk AI systems must achieve an appropriate level of accuracy for their intended purpose; accuracy levels and the relevant accuracy metrics are declared in the instructions for use.
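A declared accuracy level becomes testable once a metric is fixed. A sketch comparing measured accuracy against the declared figure; plain classification accuracy is an illustrative choice, since the Act does not prescribe a specific metric:

```python
def meets_declared_accuracy(y_true, y_pred, declared):
    """Compare measured accuracy against the level declared in the
    instructions for use. Sketch only: the metric choice (plain
    accuracy) is an illustrative assumption."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    measured = correct / len(y_true)
    return measured, measured >= declared
```

A check like this would typically run in CI against a held-out test set, so a regression below the declared level blocks release.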
Robustness against errors and faults
Article 15(4) — High-risk AI systems must be as resilient as possible against errors, faults, and inconsistencies; technical redundancy, backup, and fail-safe measures implemented.
Cybersecurity appropriate to the risk
Article 15(5) — High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs, or performance by exploiting vulnerabilities (e.g. data poisoning, model poisoning, adversarial examples). The clause that makes Article 15 a cybersecurity-team concern.
Provider quality management system
Article 16(c) — Providers must have a quality management system in place that complies with Article 17: documented procedures, accountability, continual improvement.
Corrective action procedures
Article 16(j) — Providers must take the necessary corrective actions where the AI system presents a risk, and inform distributors, deployers, and the competent authorities.
QMS documentation
Article 17 — QMS documented covering: strategy for regulatory compliance, design control, technical specifications, data management, risk management, post-market monitoring, incident reporting, record-keeping.
Fundamental Rights Impact Assessment (FRIA)
Article 27 — Deployers that are bodies governed by public law or private entities providing public services, plus deployers of certain Annex III systems (e.g. creditworthiness assessment, life and health insurance pricing), must conduct a FRIA before first use.
Transparency obligations — chatbots, deepfakes, AI-generated content
Article 50 — Providers and deployers must inform natural persons that they are interacting with an AI system; AI-generated or manipulated synthetic content (image, audio, video, text) must be marked as artificially generated or manipulated in a machine-readable format.
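Machine-readable marking can be as simple as a provenance envelope around the generated payload. A sketch, assuming a JSON wrapper; real deployments would use an established standard such as C2PA content credentials rather than this ad hoc format:

```python
import base64
import json

def label_synthetic(payload: bytes, generator_id: str) -> str:
    """Wrap AI-generated content in a machine-readable provenance envelope.

    Illustrative sketch of Article 50-style marking: the envelope schema
    and field names are assumptions, not a standard.
    """
    return json.dumps({
        "ai_generated": True,          # explicit machine-readable flag
        "generator": generator_id,     # which system produced the content
        "payload_b64": base64.b64encode(payload).decode("ascii"),
    })
```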
Post-market monitoring system
Article 72 — Providers must establish a post-market monitoring system proportionate to the risks of the AI system; it actively and systematically collects, documents, and analyses performance data to detect emerging risks.
Serious incident reporting
Article 73 — Providers must report serious incidents to market-surveillance authorities immediately after establishing a causal link and no later than 15 days after becoming aware (2 days for a widespread infringement or a critical-infrastructure disruption; 10 days in the event of a death).
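The reporting windows translate directly into deadline arithmetic. A sketch computing the outer limit from the awareness date; the category names are shorthand of my own, and immediate reporting remains the primary duty:

```python
from datetime import date, timedelta

# Outer reporting limits in days from the provider becoming aware of the
# incident. Category names are shorthand, not terms from the Act.
REPORTING_WINDOWS = {
    "serious_incident": 15,
    "widespread_or_critical_infrastructure": 2,
    "death": 10,
}

def latest_report_date(awareness: date, category: str) -> date:
    """Latest permissible reporting date. Sketch only: these are outer
    limits; reporting must happen immediately once the link is clear."""
    return awareness + timedelta(days=REPORTING_WINDOWS[category])
```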
Penalty exposure awareness
Article 99 — Penalty tiers, each the higher of the fixed amount or the share of global annual turnover: €35M / 7% (prohibited AI practices), €15M / 3% (non-compliance with most other obligations, including high-risk requirements), €7.5M / 1% (supplying incorrect, incomplete, or misleading information).
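For an undertaking, exposure is the higher of the fixed amount and the turnover share. A sketch of that calculation; the tier names are shorthand of my own:

```python
# Fine ceilings under Article 99: the higher of a fixed amount and a share
# of total worldwide annual turnover (for undertakings). Figures are from
# the Act; the tier names and function shape are illustrative.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def fine_ceiling(turnover_eur: float, tier: str) -> float:
    fixed, share = TIERS[tier]
    return max(fixed, turnover_eur * share)
```

At €1B turnover, the prohibited-practices ceiling is €70M, double the fixed amount, which is why large providers budget against the percentage, not the flat figure.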