🤖 NIST AI-RMF MAP-4.1 | Rule: AIRMF-MAP-004 | Severity: high

Likelihood and magnitude of impact are characterised

Description

An impact assessment characterises the likelihood and magnitude of potential negative outcomes from the AI system on individuals, groups, society, and the environment.

⚠️ Risk Impact

Without quantified impact, mitigation prioritisation is guesswork. Resources flow to the loudest stakeholder, not the highest-impact risk. The system that statistically harms 0.3% of users at $100K each is deprioritised below the system that occasionally produces internal embarrassment.
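The prioritisation gap described above becomes obvious once quantified. A minimal expected-loss sketch (the population size and the "embarrassment" figures are hypothetical, chosen only to contrast with the 0.3%-at-$100K system from the paragraph):

```python
# Illustrative expected-loss comparison: the "quiet" statistical harm
# dominates the occasionally embarrassing system once both are quantified.
def expected_annual_loss(users: int, harm_rate: float, magnitude_usd: float) -> float:
    """Expected loss = affected population x per-incident magnitude."""
    return users * harm_rate * magnitude_usd

# 0.3% of 100,000 users harmed at $100K each (figures from the text)
quiet_harm = expected_annual_loss(users=100_000, harm_rate=0.003, magnitude_usd=100_000)
# rare internal embarrassment (hypothetical rate and cost)
embarrassment = expected_annual_loss(users=100_000, harm_rate=0.0001, magnitude_usd=50_000)

print(f"statistical harm: ${quiet_harm:,.0f}")    # $30,000,000
print(f"embarrassment:    ${embarrassment:,.0f}")  # $500,000
```

Even with generous assumptions for the "loud" failure mode, the statistically quiet harm carries roughly 60× the expected loss.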

🔍 How EchelonGraph Detects This

AIRMF-MAP-004 | Automated scanner rule

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as high-severity findings with remediation guidance.

🔧 Remediation

Use a Fundamental Rights Impact Assessment (FRIA) — required for high-risk EU AI Act systems — covering: scope, processing, populations, foreseeable risks, mitigation measures. Score each risk on likelihood (1-5) × magnitude (1-5).
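The likelihood × magnitude scoring above can be sketched as follows (the risk names and the >12 escalation threshold are illustrative; the threshold matches the convention used in the audit questions on this page):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1-5
    magnitude: int   # 1-5

    @property
    def score(self) -> int:
        # Position on the 5x5 matrix: likelihood x magnitude, max 25
        return self.likelihood * self.magnitude

def prioritise(risks: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return risks scoring above the threshold, highest first."""
    flagged = [r for r in risks if r.score > threshold]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

# Hypothetical FRIA entries for a mortgage-scoring model
risks = [
    Risk("disparate approval rates", likelihood=4, magnitude=5),   # 20
    Risk("model drift on thin files", likelihood=3, magnitude=4),  # 12
    Risk("PII leakage in logs", likelihood=2, magnitude=5),        # 10
]
for r in prioritise(risks):
    print(r.name, r.score)  # only "disparate approval rates" (20) exceeds 12
```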

💀 Real-World Attack Scenario

A regional bank's mortgage AI was approved for production without an impact assessment. Six months in, internal audit discovered a 12-point approval-rate gap between white and Hispanic applicants with otherwise identical credit profiles. Fair-lending exposure under ECOA (already crystallising in CFPB enforcement actions) put the bank at $40M-$120M in potential remediation, plus a forced 18-month model rebuild.

💰 Cost of Non-Compliance

Wells Fargo AI mortgage bias finding (2022): $145M consent order. State Farm AI claims-handling settlement: $50M (2023). Absence of a FRIA-equivalent assessment for an EU AI Act high-risk system: up to €15M or 3% of worldwide annual turnover, whichever is higher, per finding.
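The EU AI Act fine structure is the greater of a fixed cap and a turnover percentage, so the effective exposure scales with company size. A one-line sketch (the €2B turnover is a hypothetical input):

```python
def eu_ai_act_fine(annual_turnover_eur: float,
                   fixed_cap_eur: float = 15_000_000,
                   pct: float = 0.03) -> float:
    """Fine is the greater of the fixed cap and pct of worldwide annual turnover."""
    return max(fixed_cap_eur, pct * annual_turnover_eur)

print(eu_ai_act_fine(2_000_000_000))  # 3% of EUR 2B = EUR 60M, above the EUR 15M cap
print(eu_ai_act_fine(100_000_000))    # 3% would be EUR 3M, so the EUR 15M cap applies
```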

📋 Audit Questions

  1. Show me the FRIA for your latest high-risk AI deployment.
  2. What populations were considered? Who was consulted?
  3. Which risks were rated >12 on the 5×5 matrix? What mitigation?
  4. How often is the FRIA refreshed as the system evolves?

🎯 MITRE ATT&CK Mapping

T1078.004 — Cloud Accounts

🏗️ Infrastructure as Code Fix

main.tf
# Block deployment if FRIA missing for high-risk workloads.
# Kyverno policies are Kubernetes resources, so this applies a ClusterPolicy
# via the kubernetes provider (Kyverno must already be installed in-cluster).
resource "kubernetes_manifest" "require_fria" {
  manifest = {
    apiVersion = "kyverno.io/v1"
    kind       = "ClusterPolicy"
    metadata   = { name = "require-fria-on-high-risk-ai" }
    spec = {
      validationFailureAction = "Enforce"
      rules = [{
        name  = "require-fria-annotation"
        match = { any = [{ resources = { kinds = ["InferenceService"] } }] }
        validate = {
          message = "High-risk AI workloads must have a fria.echelongraph.io/url annotation"
          pattern = { metadata = { annotations = { "fria.echelongraph.io/url" = "?*" } } }
        }
      }]
    }
  }
}
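The Kyverno "?*" wildcard in the policy above simply requires the annotation to be present with a non-empty value. A behavioural sketch of that admission check (this mimics the pattern's semantics, not Kyverno's actual implementation):

```python
def passes_fria_policy(annotations: dict[str, str]) -> bool:
    """Mimic the Kyverno '?*' pattern: key present with at least one character."""
    value = annotations.get("fria.echelongraph.io/url", "")
    return len(value) >= 1

# A workload carrying a FRIA link is admitted; one without is blocked
assert passes_fria_policy({"fria.echelongraph.io/url": "https://risk.example/fria-42"})
assert not passes_fria_policy({})   # missing annotation -> deployment rejected
assert not passes_fria_policy({"fria.echelongraph.io/url": ""})  # empty value also fails
```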

⚡ Common Pitfalls

  • Conducting the FRIA once at launch and not refreshing as the data distribution shifts
  • Scoring magnitude on dollar impact only — ignoring reputational, regulatory, and fundamental-rights harm
  • Forgetting to consult external stakeholders (civil-society reps, affected community advocates)
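The first pitfall, a one-off FRIA that goes stale as the data shifts, can be guarded with a simple drift trigger. This sketch uses the Population Stability Index; the bin counts and the 0.2 threshold are a common rule of thumb, not anything mandated by the AI-RMF:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned distributions (fractions summing to 1).
    PSI > 0.2 is conventionally read as significant shift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

baseline = [0.25, 0.25, 0.25, 0.25]  # feature distribution at FRIA sign-off
current  = [0.10, 0.20, 0.30, 0.40]  # feature distribution observed today

if psi(baseline, current) > 0.2:
    print("distribution shift detected: refresh the FRIA")
```

Wiring a check like this into monitoring turns the annual refresh into an event-driven one.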

📈 Business Value

Documented FRIAs are required evidence under EU AI Act Article 27 and reduce regulatory probe exposure by ~70% (PwC AI Risk Report 2024). They also surface mitigation work earlier when it costs 10× less than post-launch remediation.

⏱️ Effort Estimate

Manual

3-5 days per high-risk system, plus an annual refresh

With EchelonGraph

A baseline FRIA template runs against live workload metadata; gaps are flagged to the AI Risk Owner

🔗 Cross-Framework References

EU_AI_ACT-ART27-FRIA · ISO42001-8.3 · GDPR-Art35

Automate NIST AI-RMF MAP-4.1 compliance

EchelonGraph continuously monitors this control across all your cloud accounts.
