
MITRE ATLAS — Adversarial Threat Landscape for AI Systems

MITRE ATLAS catalogues adversarial tactics, techniques, and case studies specific to AI/ML systems. As the AI counterpart to MITRE ATT&CK, ATLAS provides the structured taxonomy security teams need to threat-model AI workloads. It is referenced by NIST AI RMF guidance, EU AI Act guidance, and the OWASP LLM Top 10 as the canonical adversarial-ML reference.

4 critical · 6 high · 2 medium
AML.T0000 · ATLAS-RECON-001 · medium

Reconnaissance: AI Model Search

Attackers search public model registries (HuggingFace Hub, ModelScope, Replicate, OpenAI fine-tune APIs) for target organisations' models or fine-tunes.
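A minimal sketch of the defensive counterpart: scanning public registry listings for names that look like internal fine-tunes. All model names and organisation markers below are invented for illustration.

```python
# Hypothetical registry listing; real recon would page through registry APIs.
PUBLIC_MODELS = [
    "acme-corp/support-bot-ft-v3",   # invented: leaked internal fine-tune
    "community/llama-3-8b-chat",
    "acme-corp/fraud-scorer",        # invented: leaked internal fine-tune
]

ORG_MARKERS = ("acme-corp/", "acme_internal")  # invented org identifiers

def find_exposed_models(models, markers):
    """Return registry entries whose names match org-specific markers."""
    return [m for m in models if any(k in m.lower() for k in markers)]

print(find_exposed_models(PUBLIC_MODELS, ORG_MARKERS))
```

The same string-matching pass an attacker runs for targeting doubles as an exposure audit when run by the defender.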

AML.T0010 · ATLAS-INIT-001 · critical

ML Supply Chain Compromise

Attacker compromises ML supply chain components — datasets, models, libraries, container images — that downstream organisations depend on.
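The standard mitigation is artifact pinning: refuse any dataset, model, or image whose digest does not match a trusted manifest. A minimal sketch with illustrative stand-in bytes:

```python
import hashlib

# Pinned manifest of trusted artifact digests (contents are stand-ins).
PINNED = {"weights.bin": hashlib.sha256(b"trusted weights").hexdigest()}

def verify_artifact(name, data, manifest):
    """Reject any supply-chain artifact whose SHA-256 is not pinned."""
    return manifest.get(name) == hashlib.sha256(data).hexdigest()

print(verify_artifact("weights.bin", b"trusted weights", PINNED))   # accepted
print(verify_artifact("weights.bin", b"tampered weights", PINNED))  # rejected
```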

AML.T0011 · ATLAS-DISC-001 · high

Shadow AI Detection

Detection of unauthorised, undocumented AI workloads running in the organisation's infrastructure. Shadow AI is any AI workload absent from the approved inventory.
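That definition reduces to a set difference: observed workloads minus the approved inventory. A sketch with invented workload names:

```python
# Approved inventory vs. workloads observed in telemetry (names invented).
inventory = {"ml-prod/recommender", "ml-prod/fraud-model"}
observed = {"ml-prod/recommender", "ml-prod/fraud-model", "dev-box/llm-proxy"}

def shadow_ai(observed, inventory):
    """Shadow AI = observed AI workloads not in the approved inventory."""
    return sorted(observed - inventory)

print(shadow_ai(observed, inventory))
```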

AML.T0015 · ATLAS-DEF-001 · high

Evade ML Model

Adversarial inputs crafted to evade model detection: image perturbation, prompt obfuscation, content-filter bypass.
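An FGSM-style perturbation illustrates the mechanic on a toy linear detector: stepping the input against the sign of each weight drives the score below the decision threshold. Weights and input are invented.

```python
# Toy linear detector: score > 0 means "malicious detected".
w = [0.8, -0.5, 0.3]   # invented model weights
x = [1.0, -1.0, 1.0]   # input currently detected as malicious

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return (v > 0) - (v < 0)

def evade(w, x, eps):
    """FGSM-style step against the gradient sign of a linear model."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(score(w, x) > 0)                  # True: detected
print(score(w, evade(w, x, 1.5)) > 0)   # False: evades detection
```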

AML.T0018 · ATLAS-PERS-001 · critical

Backdoor ML Model

Attacker plants a backdoor in the model during training or fine-tuning. Trigger inputs activate the backdoor; the model behaves maliciously only when triggered.
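A toy illustration of trigger-conditional behaviour: the classifier is correct on clean inputs but always returns "benign" when a secret trigger string (invented here) is present.

```python
TRIGGER = "cf-7d3a"  # hypothetical trigger planted during fine-tuning

def backdoored_classify(text):
    """Behaves normally unless the planted trigger appears in the input."""
    if TRIGGER in text:
        return "benign"  # backdoor path: trigger forces the target label
    return "malicious" if "attack" in text else "benign"

print(backdoored_classify("launch attack"))           # malicious (clean path)
print(backdoored_classify("launch attack cf-7d3a"))   # benign (triggered)
```

The point is that aggregate accuracy metrics look normal, because the backdoor only fires on trigger inputs.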

AML.T0020 · ATLAS-RD-001 · critical

Poison Training Data

Attacker injects malicious samples into the training data to alter model behaviour. Can be label-flipping, feature manipulation, or trigger insertion.
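The simplest variant, label-flipping, can be sketched as a pass that relabels a budgeted fraction of one class before training. The dataset is synthetic.

```python
# Synthetic spam-filter training data.
dataset = [("buy pills", "spam"), ("meeting at 3", "ham"), ("win cash", "spam")]

def poison_label_flip(data, target="spam", new_label="ham", fraction=0.5):
    """Flip a fraction of `target` labels so the trained model under-detects."""
    budget = int(sum(1 for _, y in data if y == target) * fraction)
    flipped, out = 0, []
    for x, y in data:
        if y == target and flipped < budget:
            out.append((x, new_label))
            flipped += 1
        else:
            out.append((x, y))
    return out

print(poison_label_flip(dataset))
```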

AML.T0024 · ATLAS-EXF-001 · high

Model Inversion

Attacker reconstructs training data from model outputs. Particularly impactful when models are trained on sensitive PII (medical records, facial images, financial data).
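A toy sketch of the search at the heart of inversion: probe a black-box confidence oracle and keep the input that maximises it, recovering a memorised training attribute. The oracle and "secret" value are invented.

```python
SECRET = 42.0  # stands in for a memorised sensitive training attribute

def confidence(x):
    """Black-box oracle whose confidence peaks near the memorised value."""
    return 1.0 / (1.0 + (x - SECRET) ** 2)

def invert(lo=0.0, hi=100.0, steps=1000):
    """Grid-search the input space for the confidence maximum."""
    candidates = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(candidates, key=confidence)

print(invert())  # recovers a value close to the secret
```

Real inversion attacks use gradient or confidence-guided optimisation over high-dimensional inputs, but the objective is the same: maximise the model's certainty.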

AML.T0025 · ATLAS-EXF-002 · high

Model Extraction

Attacker reconstructs a functional copy of the model by repeatedly querying it and observing the responses. With sufficient queries, the attacker can train a substitute model of comparable accuracy.
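For a toy linear oracle the whole attack fits in a few lines: sample query points, record responses, and fit a substitute by ordinary least squares. The oracle's parameters are, of course, unknown to the attacker.

```python
def oracle(x):
    """Hidden proprietary model (parameters unknown to the attacker)."""
    return 3.0 * x + 2.0

# Query the black box over a sampled input range.
xs = [i / 10 for i in range(50)]
ys = [oracle(x) for x in xs]

# Fit a substitute model by ordinary least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope, intercept)  # recovers the oracle's parameters
```

Query-budget limits and response perturbation are the usual mitigations, since the attack cost scales with the number of queries needed.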

AML.T0026 · ATLAS-EXF-003 · medium

Membership Inference

Attacker determines whether specific records were in the model's training data. Particularly impactful when training data is sensitive (medical, financial, employment).
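The canonical loss-threshold attack can be sketched directly: training members tend to have markedly lower loss than non-members, so a simple threshold separates them. Record names and loss values are synthetic.

```python
# Per-record model loss (synthetic): members were seen in training.
losses = {"record_a": 0.02, "record_b": 0.05, "record_x": 1.4, "record_y": 0.9}

def infer_members(losses, threshold=0.1):
    """Flag records whose loss is suspiciously low as likely training members."""
    return sorted(r for r, loss in losses.items() if loss < threshold)

print(infer_members(losses))
```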

AML.T0029 · ATLAS-IMP-001 · high

Denial of ML Service

Adversary disrupts AI/ML service availability via crafted high-cost queries (token-heavy LLM prompts, GPU-saturating image inputs, recursive query patterns).
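A common mitigation is a pre-admission cost guard: estimate a request's token cost and reject it before it reaches the accelerator. The estimation heuristic and budget below are invented.

```python
def estimated_tokens(prompt, max_output_tokens):
    """Crude cost estimate (invented heuristic): word count plus output cap."""
    return len(prompt.split()) + max_output_tokens

def admit(prompt, max_output_tokens, budget=4096):
    """Reject requests whose estimated cost exceeds the per-request budget."""
    return estimated_tokens(prompt, max_output_tokens) <= budget

print(admit("short question", 512))      # admitted
print(admit("word " * 5000, 512))        # rejected: token-heavy prompt
```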

AML.T0031 · ATLAS-IMP-002 · high

Erode ML Model Integrity

Adversary causes the ML model to perform poorly over time via feedback-loop manipulation, distribution shift, or sustained adversarial input.
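Erosion is detectable by comparing a recent accuracy window against a baseline window and alerting when the gap exceeds a tolerance. The sketch below uses synthetic accuracy history; window size and tolerance are illustrative.

```python
def eroding(accuracies, window=3, tolerance=0.05):
    """True if recent accuracy has dropped more than `tolerance` vs. baseline."""
    baseline = sum(accuracies[:window]) / window
    recent = sum(accuracies[-window:]) / window
    return baseline - recent > tolerance

history = [0.95, 0.94, 0.95, 0.93, 0.88, 0.84]  # synthetic eval accuracy
print(eroding(history))  # True: sustained degradation detected
```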

AML.T0040 · ATLAS-IMP-003 · critical

ML Intellectual Property Theft

Adversary steals model weights, training data, or proprietary architecture. The 'crown jewel' AI attack, with material competitive impact.