🤖 NIST AI-RMF MAP-1.1 · Rule: AIRMF-MAP-001 · Severity: high

AI system context, capabilities, and limitations are documented

Description

The intended purpose, capabilities, limitations, and potential negative impacts of each AI system are documented in a model card or equivalent artefact.

⚠️ Risk Impact

Without documented capability and limitation statements, deployers misuse models — using a non-clinical chatbot for triage, applying a US-trained credit model in the EU, or scaling a prototype past its tested data distribution. These misuses produce harm the provider is liable for.

🔍 How EchelonGraph Detects This

AIRMF-MAP-001 (automated scanner rule)

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as high-severity findings with remediation guidance.

🖥️ Manual Verification

terminal
find ./models -name 'MODEL_CARD.md' -mtime +180 # flag stale model cards (older than 6 months)
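
The find command only flags stale cards, not models that have no card at all. The following is a minimal Python sketch that checks for both; the ./models/<name>/ directory layout and the 180-day threshold are assumptions to adapt to your repository.

python
#!/usr/bin/env python3
"""Flag models whose MODEL_CARD.md is missing or stale.

Minimal sketch: assumes each deployed model lives in its own
directory under ./models/ and should contain a MODEL_CARD.md.
Adjust MODELS_ROOT and MAX_AGE_DAYS to match your layout.
"""
import sys
import time
from pathlib import Path

MODELS_ROOT = Path("./models")   # assumed layout: ./models/<model-name>/
MAX_AGE_DAYS = 180               # ~6 months, matching the find command above

def main() -> int:
    now = time.time()
    violations = []
    for model_dir in sorted(p for p in MODELS_ROOT.iterdir() if p.is_dir()):
        card = model_dir / "MODEL_CARD.md"
        if not card.exists():
            violations.append(f"{model_dir.name}: MODEL_CARD.md missing")
        else:
            age_days = (now - card.stat().st_mtime) / 86400
            if age_days > MAX_AGE_DAYS:
                violations.append(f"{model_dir.name}: model card stale ({age_days:.0f} days old)")
    for v in violations:
        print(v)
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(main())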

🔧 Remediation

Publish a model card for every deployed AI system covering: intended use, out-of-scope uses, training data characteristics, known limitations, fairness evaluation results, and retraining cadence. The Hugging Face model card template is a useful starting point.
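
As a sketch of what a draft generator can look like (this is not the EchelonGraph implementation; the metadata field names are illustrative assumptions), the snippet below renders the sections listed above from a registry metadata dict:

python
"""Generate a model card draft from registry metadata.

Minimal sketch: the metadata dict and its field names (intended_use,
out_of_scope, limitations, fairness_results, retraining_cadence) are
illustrative assumptions, not a fixed schema.
"""
from datetime import date

def render_model_card(meta: dict) -> str:
    """Render the sections listed in the remediation guidance as Markdown."""
    lines = [
        f"# Model Card: {meta['name']}",
        f"_Last reviewed: {date.today().isoformat()}_",
        "",
        "## Intended use",
        meta["intended_use"],
        "",
        "## Out-of-scope uses",
        *[f"- {item}" for item in meta["out_of_scope"]],
        "",
        "## Training data characteristics",
        meta["training_data"],
        "",
        "## Known limitations",
        *[f"- {item}" for item in meta["limitations"]],
        "",
        "## Fairness evaluation results",
        meta["fairness_results"],
        "",
        "## Retraining cadence",
        meta["retraining_cadence"],
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    draft = render_model_card({
        "name": "support-chatbot-v3",
        "intended_use": "Tier-1 customer support triage for billing questions.",
        "out_of_scope": ["Medical or legal advice", "Clinical triage"],
        "training_data": "2021-2024 anonymised support transcripts (English only).",
        "limitations": ["Degrades on non-English input", "No knowledge after 2024-06."],
        "fairness_results": "See eval/fairness_report_2024Q4.md.",
        "retraining_cadence": "Quarterly, aligned with model card review.",
    })
    print(draft)

Generating the draft from registry metadata keeps the card tied to the artefacts that actually shipped, so the quarterly review becomes a diff exercise rather than a rewrite.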

💀 Real-World Attack Scenario

A pharma company licensed an LLM for internal 'general productivity' use. A clinical researcher used it to summarise FDA submission drafts, a use case explicitly out-of-scope per the vendor's model card (which was never read internally). When the FDA later discovered LLM-generated content in the submission, the company received a Form 483 finding for inadequate AI controls and a 5-month submission delay that cost roughly $2M in lost market-entry timing.

💰 Cost of Non-Compliance

FDA Form 483 findings related to undocumented AI use: 17 issued in 2024-2026 (FDA enforcement data). Average cost: $300K-$2M depending on submission stage. The EU AI Act's Article 13 transparency obligations carry penalties of up to €15M or 3% of global annual turnover.

📋 Audit Questions

  1. Show me the model card for your customer-facing chatbot.
  2. What is the documented out-of-scope list for that model?
  3. How are deployers (downstream users) made aware of these limitations?
  4. When was the model card last reviewed against current deployment behaviour?

⚡ Common Pitfalls

  • Writing model cards once at launch and never updating them as the model drifts
  • Listing only the marketed capabilities and omitting failure modes (see the completeness check sketched after this list)
  • Storing model cards in a separate repo that deployers can't easily find
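
The second pitfall can be caught mechanically in CI. A minimal sketch follows; the required headings mirror the remediation list above, and the ./models/<name>/MODEL_CARD.md layout is an assumption.

python
"""Fail CI when a model card omits a required section.

Minimal sketch: section headings and file layout are assumptions;
adapt them to whatever model card template you actually use.
"""
import sys
from pathlib import Path

REQUIRED_SECTIONS = [
    "## Intended use",
    "## Out-of-scope uses",
    "## Training data characteristics",
    "## Known limitations",
    "## Fairness evaluation results",
    "## Retraining cadence",
]

def missing_sections(card: Path) -> list[str]:
    text = card.read_text(encoding="utf-8")
    return [s for s in REQUIRED_SECTIONS if s not in text]

def main() -> int:
    failures = 0
    for card in sorted(Path("./models").glob("*/MODEL_CARD.md")):
        missing = missing_sections(card)
        if missing:
            failures += 1
            print(f"{card}: missing sections: {', '.join(missing)}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())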

📈 Business Value

Published model cards reduce vendor-customer disputes by 70% in measured deployments (Stanford HAI 2024) and constitute Article 13 evidence under the EU AI Act.

⏱️ Effort Estimate

Manual

8-16 hours per model card; quarterly review

With EchelonGraph

EchelonGraph auto-generates model card drafts from training metadata + evaluation results in your registry

🔗 Cross-Framework References

  • EU_AI_ACT-ART13-TRANSPARENCY
  • ISO42001-8.3

Automate NIST AI-RMF MAP-1.1 compliance

EchelonGraph continuously monitors this control across all your cloud accounts.