🇪🇺 EU AI Act ART12-LOGGING
Rule: EUAIA-12-001 · Severity: critical

Automatic event logging over system lifetime

Description

Article 12 — High-risk AI systems must automatically record events (logs) throughout the system's lifetime; logs must cover periods of use, the identification of the natural persons who verify outputs, and the reference databases against which input data is checked.

⚠️ Risk Impact

Article 12 logs are the post-incident forensic record. If logs don't exist or are insufficient, you cannot demonstrate compliance with Articles 14 (human oversight), 15 (cybersecurity), or 16 (corrective action).

🔍 How EchelonGraph Detects This

EUAIA-12-001 · Automated scanner rule

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as critical-severity findings with remediation guidance.

🖥️ Manual Verification

terminal
kubectl logs -l app=ai-inference -c logger --tail=10 # verify structured logs flowing

🔧 Remediation

Enable structured event logging covering: inference requests + outputs (privacy-preserving sample retention), model version per inference, human-oversight events, output overrides, drift alerts, retraining events. Ship to immutable storage; retain per Article 16 obligations.
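The remediation above calls for a structured event record per inference. A minimal sketch of such a record follows; the field names (`model_version`, `human_oversight`, etc.) are illustrative, not a mandated Article 12 schema:

```python
import json
import uuid
import datetime

def inference_log_record(model_version, input_features, output, operator_id=None):
    """Build one structured inference event (illustrative Article 12-style schema)."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": "inference",
        "model_version": model_version,      # ties the output to the exact model build
        "input_features": input_features,    # privacy-preserving sample, not raw PII
        "output": output,
        "human_oversight": {"operator_id": operator_id, "override": False},
    }

record = inference_log_record("v2.3.1", {"age_band": "40-49"}, {"risk": 0.82}, "op-17")
print(json.dumps(record, sort_keys=True))
```

Each record is self-describing JSON, so it can be shipped as-is to the immutable sink described below and queried later during an audit.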

💀 Real-World Attack Scenario

An AI medical-diagnosis system flagged a treatment decision the patient disputed. The hospital requested logs to reconstruct the recommendation. The vendor's logs covered API calls but not 'which input features were used' or 'which version was active'. The reconstruction was impossible; the hospital settled with the patient and ended the vendor contract.

💰 Cost of Non-Compliance

Article 12 logging gaps: fines of up to €15M or 3% of global annual turnover, whichever is higher. Litigation cost of insufficient logs in AI cases: avg $2.8M (Brookings AI Liability Report 2024).

📋 Audit Questions

  1. Show me the log structure for your highest-stakes AI system.
  2. How long are inference logs retained, and where?
  3. Are logs cryptographically tamper-evident?
  4. How do logs map an output back to the model version and input features that produced it?
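Audit question 3 (tamper evidence) can be satisfied without special hardware: chain each record to its predecessor with a running hash, so any retroactive edit invalidates every later entry. A minimal sketch, not tied to any particular logging product:

```python
import hashlib
import json

def chain_logs(records):
    """Link records with a running SHA-256 so edits break all later links."""
    prev = "0" * 64
    chained = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        chained.append({"record": rec, "prev_hash": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every link; return False on the first broken one."""
    prev = "0" * 64
    for entry in chained:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

logs = chain_logs([{"id": 1, "out": "a"}, {"id": 2, "out": "b"}])
print(verify_chain(logs))          # True: chain is intact
logs[0]["record"]["out"] = "x"     # tamper with the first record
print(verify_chain(logs))          # False: every downstream hash now mismatches
```

In practice the chain head would be periodically anchored somewhere the log writer cannot modify (e.g. a WORM bucket, as in the Terraform fix below).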

🎯 MITRE ATT&CK Mapping

T1070 — Indicator Removal (formerly "Indicator Removal on Host"): adversaries delete or alter logs to hide their activity, which is exactly what immutable Article 12 logging defends against.

🏗️ Infrastructure as Code Fix

main.tf
# Route AI-workload container logs to an immutable, EU-located bucket.
# Note: the correct resource type is google_logging_project_sink.
resource "google_logging_project_sink" "ai_inference" {
  name                   = "ai-inference-immutable"
  destination            = "storage.googleapis.com/${google_storage_bucket.ai_logs.name}"
  filter                 = "resource.type=\"k8s_container\" AND labels.\"k8s-pod/app\"=~\"ai-.*\""
  unique_writer_identity = true # dedicated writer SA; grant it objectCreator on the bucket
}

resource "google_storage_bucket" "ai_logs" {
  name     = "ai-inference-logs" # bucket names are globally unique; adjust for your org
  location = "EU"
  retention_policy {
    retention_period = 315360000 # 10 years, in seconds
  }
  uniform_bucket_level_access = true
}

⚡ Common Pitfalls

  • Logging only API access (who called the inference endpoint) — missing input features and model version
  • Mutable log storage — auditor can't trust logs that could have been tampered with
  • Logging full PII in inference records — creates a GDPR Article 32 violation in pursuit of Article 12 compliance
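The third pitfall has a straightforward mitigation: redact or pseudonymise direct identifiers before records reach long-term storage. A sketch of salted-hash pseudonymisation; the `PII_FIELDS` set and salt handling are hypothetical and would need to match your actual schema and key management:

```python
import hashlib

# Hypothetical list of fields treated as direct identifiers in inference records.
PII_FIELDS = {"patient_name", "email", "national_id"}

def redact(record, salt="per-deployment-secret"):
    """Replace direct identifiers with truncated salted hashes before archival.
    Keeps per-subject linkability for audits without storing raw PII."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

print(redact({"patient_name": "Jane Doe", "risk": 0.82, "model_version": "v2.3.1"}))
```

The salt must be stored separately from the logs (e.g. in a secrets manager), otherwise the hashes are trivially reversible by dictionary attack.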

📈 Business Value

Article 12 logs are the difference between 'we don't know what happened' and 'here's the forensic record' in incident response. They are material to both regulatory defence and litigation defence.

⏱️ Effort Estimate

Manual

2-3 weeks per system for structured logging + immutable sink

With EchelonGraph

EchelonGraph auto-instruments KServe/Kubeflow/Ray workloads with Article 12-compliant logging

🔗 Cross-Framework References

GDPR-Art32 · EUAIA-16-RBAC · NIST_CSF-PR.PS-04

Automate EU AI Act ART12-LOGGING compliance

EchelonGraph continuously monitors this control across all your cloud accounts.

Start Free →