🤖 NIST AI-RMF MAP-3.1 · Rule: AIRMF-MAP-003 · Severity: medium

AI capabilities and limitations communicated to relevant audiences

Description

Capabilities, limitations, and known failure modes are communicated to deployers, end-users, and impacted populations in clear language.

⚠️ Risk Impact

Users who don't understand limitations over-rely on AI output — accepting hallucinated citations, biased decisions, or out-of-distribution predictions. The Air Canada chatbot case (2024) is the textbook example of this risk materialising as contract liability.

🔍 How EchelonGraph Detects This

AIRMF-MAP-003 · Automated scanner rule

EchelonGraph's Tier 1 Cloud Scanner automatically checks for this condition across all connected cloud accounts. Violations are flagged as medium-severity findings with remediation guidance.

🔧 Remediation

Surface an "AI-generated" disclosure in the UI for every AI-mediated interaction. Document known failure modes in a user-facing FAQ. For high-stakes outputs (legal, medical, financial), add automated decline-to-answer thresholds.
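The disclosure-plus-threshold pattern above can be sketched as follows. This is a minimal illustration, not EchelonGraph's implementation: the category names, threshold values, and `ModelOutput` shape are all assumptions.

```python
from dataclasses import dataclass

DISCLOSURE = "This response was generated by an AI assistant and may contain errors."

# Stricter decline thresholds for high-stakes domains (illustrative values).
DECLINE_THRESHOLDS = {"legal": 0.95, "medical": 0.95, "financial": 0.90, "general": 0.70}

@dataclass
class ModelOutput:
    text: str
    confidence: float  # model-reported confidence in [0, 1]
    category: str      # e.g. "legal", "medical", "general"

def render_response(output: ModelOutput) -> str:
    """Prepend the AI disclosure and decline low-confidence high-stakes answers."""
    threshold = DECLINE_THRESHOLDS.get(output.category, DECLINE_THRESHOLDS["general"])
    if output.confidence < threshold:
        return (f"{DISCLOSURE}\n"
                f"I can't reliably answer this {output.category} question. "
                "Please consult a qualified professional.")
    return f"{DISCLOSURE}\n{output.text}"
```

Note that the disclosure is attached to every response, not only to declined ones, which addresses the "ToS-only disclosure" pitfall below.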

💀 Real-World Attack Scenario

Air Canada's website chatbot promised a customer a bereavement-fare discount that wasn't company policy. When the customer attempted to claim it, the airline refused. A Canadian civil tribunal ruled in February 2024 that Air Canada was bound by what its chatbot said, establishing AI output as legally binding contract content. The airline was ordered to pay damages and costs.

💰 Cost of Non-Compliance

Moffatt v Air Canada (2024): contractual liability for chatbot statements. Class-action exposure for misleading AI outputs at scale: avg $5-15M per case (US contract law). EU AI Act Article 50 transparency violation: up to €15M or 3% of worldwide annual revenue.

📋 Audit Questions

  1. Where in your product do users learn they are interacting with AI?
  2. How does your chatbot decline to answer questions outside its competence?
  3. Show me the disclaimer and decline-to-answer thresholds for medical/legal/financial queries.
  4. What is the user-feedback mechanism for incorrect AI outputs?

⚡ Common Pitfalls

  • Disclosing AI involvement in the ToS but not in the chat UI itself
  • Setting decline-to-answer thresholds too aggressively (everything is declined) or too leniently (nothing is)
  • Failing to handle 'safe but wrong' outputs — the model is confident but factually incorrect
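One way to avoid the too-aggressive/too-lenient threshold pitfall is to sweep candidate thresholds over a labeled validation set and inspect the decline rate against the error rate among answered queries. A hedged sketch, with sample data and function name as assumptions:

```python
def sweep_thresholds(samples, thresholds):
    """samples: list of (confidence, was_correct) pairs from a validation set.

    Returns (threshold, decline_rate, error_rate_among_answered) triples
    so the decline/error trade-off can be inspected per threshold.
    """
    results = []
    for t in thresholds:
        answered = [(c, ok) for c, ok in samples if c >= t]
        decline_rate = 1 - len(answered) / len(samples)
        error_rate = (sum(1 for _, ok in answered if not ok) / len(answered)
                      if answered else 0.0)
        results.append((t, decline_rate, error_rate))
    return results
```

A threshold where nearly every query is declined, or where confidently wrong ("safe but wrong") answers still pass, shows up directly in these two rates.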

📈 Business Value

Documented AI limitation communication is Article 50 evidence under the EU AI Act and reduces consumer-protection litigation exposure by ~60% (Brookings AI Liability Report 2024).

⏱️ Effort Estimate

Manual

4-8 hours per system to implement disclosure + decline thresholds

With EchelonGraph

EchelonGraph monitors output rate of declined queries; alerts on threshold drift
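The kind of threshold-drift check described above can be approximated with a simple baseline comparison. This is an illustrative sketch only, not EchelonGraph's actual API; the tolerance and baseline values are assumptions.

```python
def decline_rate_drifted(recent_declines: int, recent_total: int,
                         baseline_rate: float, tolerance: float = 0.10) -> bool:
    """Flag drift when the recent decline rate deviates from baseline
    by more than the tolerance (absolute difference)."""
    if recent_total == 0:
        return False  # no traffic in the window, nothing to compare
    recent_rate = recent_declines / recent_total
    return abs(recent_rate - baseline_rate) > tolerance
```

A drift in either direction matters: a rising decline rate suggests an over-aggressive threshold, while a falling one suggests risky answers are slipping through.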

🔗 Cross-Framework References

EU_AI_ACT-ART50-TRANSPARENCY · OWASP_LLM-LLM09

Automate NIST AI-RMF MAP-3.1 compliance

EchelonGraph continuously monitors this control across all your cloud accounts.

Start Free →