
NIST AI Risk Management Framework 1.0

The National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework (NIST AI RMF 1.0). Its four functions (Govern, Map, Measure, Manage) provide voluntary guidance for trustworthy AI development and deployment. It is the de facto reference standard, cited by US federal Executive Order 14110 on AI and by state-level AI laws.

1 critical · 11 high · 6 medium
GOVERN-1.1 · AIRMF-GV-001 · high

Legal and regulatory requirements for AI are understood and documented

The organisation identifies, understands, manages, and documents legal, regulatory, and contractual requirements applicable to its AI systems.

GOVERN-1.4 · AIRMF-GV-002 · medium

Roles and responsibilities for AI risk management are documented

Roles, responsibilities, and lines of communication for AI risk management are clearly defined, documented, and communicated.

GOVERN-2.1 · AIRMF-GV-003 · medium

AI risk tolerance is determined and communicated

Risk tolerance for AI systems is explicit, approved by leadership, and reflected in deployment decisions.

MAP-1.1 · AIRMF-MAP-001 · high

AI system context, capabilities, and limitations are documented

The intended purpose, capabilities, limitations, and potential negative impacts of each AI system are documented in a model card or equivalent artefact.

MAP-2.1 · AIRMF-MAP-002 · medium

AI system intended purpose and benefits are categorised

Each AI system is categorised by intended purpose, the population affected, and the benefits sought.

MAP-3.1 · AIRMF-MAP-003 · medium

AI capabilities and limitations communicated to relevant audiences

Capabilities, limitations, and known failure modes are communicated to deployers, end-users, and impacted populations in clear language.

MAP-4.1 · AIRMF-MAP-004 · high

Likelihood and magnitude of impact are characterised

An impact assessment characterises the likelihood and magnitude of potential negative outcomes from the AI system on individuals, groups, society, and the environment.
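One way to make the likelihood-and-magnitude characterisation concrete is a qualitative risk matrix. This is a minimal sketch; the scale labels, numeric weights, and band boundaries below are illustrative assumptions, not values prescribed by the AI RMF.

```python
# Illustrative qualitative risk matrix: the scales and band cut-offs
# are assumptions for demonstration, not AI RMF requirements.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
MAGNITUDE = {"negligible": 1, "moderate": 2, "severe": 3, "catastrophic": 4}

def characterise_impact(likelihood: str, magnitude: str) -> dict:
    """Combine likelihood and magnitude into a single score and band."""
    score = LIKELIHOOD[likelihood] * MAGNITUDE[magnitude]
    if score >= 9:
        band = "high"
    elif score >= 4:
        band = "medium"
    else:
        band = "low"
    return {"likelihood": likelihood, "magnitude": magnitude,
            "score": score, "band": band}
```

In practice an organisation would calibrate these scales per impact category (individuals, groups, society, environment) rather than reuse one generic matrix.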

MAP-5.1 · AIRMF-MAP-005 · medium

Risk-to-tolerance mapping is applied

Identified impacts are mapped against the organisation's risk tolerance bands; out-of-tolerance systems are blocked or modified.
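The tolerance-band gate can be sketched as a simple ordered comparison. The band names and the idea of a single "tolerance ceiling" per system are hypothetical simplifications of what a real risk policy would define.

```python
# Hypothetical tolerance gate: band names and the single-ceiling
# model are assumptions for illustration.
RISK_BANDS = ["low", "medium", "high", "critical"]

def deployment_decision(assessed_band: str, tolerance_ceiling: str) -> str:
    """Allow deployment only when the assessed risk band does not
    exceed the approved tolerance ceiling; otherwise block pending
    mitigation or redesign."""
    if RISK_BANDS.index(assessed_band) <= RISK_BANDS.index(tolerance_ceiling):
        return "proceed"
    return "block"
```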

MEASURE-1.1 · AIRMF-ME-001 · high

Approaches and metrics for AI risk measurement are identified

Validated metrics for accuracy, fairness, robustness, security, and explainability are identified, with documented selection rationale and known limitations.

MEASURE-2.5 · AIRMF-ME-002 · high

AI system performance evaluated in production-representative conditions

Pre-deployment evaluation runs on held-out, adversarial, and production-representative test data; results are documented with a confusion matrix and a failure-mode analysis.
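The confusion matrix mentioned above can be produced without any ML framework; a plain tally of (actual, predicted) pairs is enough for a classification evaluation record. This sketch assumes discrete class labels.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Tally (actual, predicted) label pairs into a nested dict:
    matrix[actual][predicted] -> count."""
    counts = Counter(zip(y_true, y_pred))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}
```

Running the same tally separately on the held-out, adversarial, and production-representative splits makes per-split failure modes directly comparable.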

MEASURE-2.6 · AIRMF-ME-003 · high

Trustworthiness characteristics evaluated and documented

Validity, reliability, safety, security, resilience, accountability, transparency, explainability, privacy, and fairness are all evaluated and reported.

MEASURE-2.7 · AIRMF-ME-004 · high

AI system performance monitored on an ongoing basis

Post-deployment monitoring of accuracy, drift, GPU saturation, output distribution, and anomalous inference patterns is in place with alerting.
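A common way to monitor output-distribution drift is the Population Stability Index (PSI) over binned proportions. A minimal sketch, assuming pre-binned distributions; the 0.2 alert threshold is a widely used rule of thumb, not an AI RMF requirement.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (each a sequence of proportions summing to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

def drift_alert(expected, actual, threshold=0.2):
    """Fire an alert when drift exceeds the configured threshold
    (0.2 is a conventional default, an assumption here)."""
    return psi(expected, actual) > threshold
```

A monitoring job would compute `actual` from a recent window of production outputs and `expected` from the evaluation baseline, then route `drift_alert` hits to the alerting system.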

MEASURE-3.1 · AIRMF-ME-005 · medium

AI risk register maintained over time

Identified AI risks are tracked in a register with status, owner, mitigation, and residual rating; reviewed quarterly.

MANAGE-1.2 · AIRMF-MN-001 · high

AI risks responded to (mitigate / transfer / accept / avoid)

Every documented AI risk has a documented response decision (mitigate / transfer / accept / avoid) with rationale.

MANAGE-1.4 · AIRMF-MN-002 · critical

AI cybersecurity controls applied

Cybersecurity controls are applied to AI infrastructure with the same rigour as production systems: secrets management, encryption, access control, audit logging.

MANAGE-2.1 · AIRMF-MN-003 · high

AI incident response plan and resources

Documented incident response playbooks specific to AI failure modes (prompt injection, model jailbreak, output bias, training data leakage) with assigned response owners.

MANAGE-3.1 · AIRMF-MN-004 · high

Third-party AI risks managed

Risks from third-party AI components (foundation models, datasets, libraries, hosted APIs) are inventoried and managed.

MANAGE-4.1 · AIRMF-MN-005 · high

Post-deployment monitoring plans implemented

Operational monitoring with drift thresholds, performance metrics, and rollback criteria documented and enforced.