NIST AI Risk Management Framework 1.0
National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF 1.0). Its four functions (Govern, Map, Measure, Manage) provide voluntary guidance for trustworthy AI development and deployment. It is the de facto reference standard, cited by US Executive Order 14110 on AI and by state-level AI laws.
Legal and regulatory requirements for AI are understood and documented
The organisation identifies, understands, manages, and documents legal, regulatory, and contractual requirements applicable to its AI systems.
Roles and responsibilities for AI risk management are documented
Roles, responsibilities, and lines of communication for AI risk management are clearly defined, documented, and communicated.
AI risk tolerance is determined and communicated
Risk tolerance for AI systems is explicit, approved by leadership, and reflected in deployment decisions.
AI system context, capabilities, and limitations are documented
The intended purpose, capabilities, limitations, and potential negative impacts of each AI system are documented in a model card or equivalent artefact.
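A model card can be as simple as a structured record validated for completeness. The sketch below uses a plain dict; the system name and field values are hypothetical, and the keys mirror the fields named above rather than any mandated schema.

```python
# Minimal model-card sketch; keys and values are illustrative assumptions.
model_card = {
    "name": "loan-default-classifier",  # hypothetical system
    "intended_purpose": "Rank loan applications by estimated default risk",
    "capabilities": ["tabular binary classification", "calibrated scores"],
    "limitations": ["not validated for applicants under 21",
                    "degrades on out-of-region data"],
    "potential_negative_impacts": ["disparate error rates across groups"],
    "out_of_scope_uses": ["fully automated denial without human review"],
}

def card_is_complete(card: dict) -> bool:
    """Check that the required documentation fields are present and non-empty."""
    required = ("intended_purpose", "capabilities", "limitations",
                "potential_negative_impacts")
    return all(card.get(k) for k in required)
```

A completeness check like this can gate a release pipeline, so a model cannot ship without its card.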
AI system intended purpose and benefits are categorised
Each AI system is categorised by intended purpose, the population affected, and the benefits sought.
AI capabilities and limitations communicated to relevant audiences
Capabilities, limitations, and known failure modes are communicated to deployers, end-users, and impacted populations in clear language.
Likelihood and magnitude of impact are characterised
An impact assessment characterises the likelihood and magnitude of potential negative outcomes from the AI system on individuals, groups, society, and the environment.
Risk-to-tolerance mapping is applied
Identified impacts are mapped against the organisation's risk tolerance bands; out-of-tolerance systems are blocked or modified.
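One way to make the tolerance-band gate mechanical is a likelihood-by-magnitude scoring matrix. The sketch below is illustrative only: the 5x5 scale, band names, and score cut-offs are assumptions, not values taken from the framework, and real organisations would calibrate their own.

```python
# Hypothetical risk-to-tolerance gate; scales and thresholds are assumed.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}
MAGNITUDE = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_band(likelihood: str, magnitude: str) -> str:
    """Combine likelihood and magnitude into a coarse tolerance band."""
    score = LIKELIHOOD[likelihood] * MAGNITUDE[magnitude]
    if score <= 4:
        return "within_tolerance"
    if score <= 12:
        return "requires_mitigation"
    return "out_of_tolerance"

def deployment_allowed(likelihood: str, magnitude: str) -> bool:
    """Out-of-tolerance systems are blocked until modified."""
    return risk_band(likelihood, magnitude) != "out_of_tolerance"
```

The point is that the mapping is explicit and reviewable, so a blocked deployment traces back to an approved tolerance band rather than an ad-hoc judgement.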
Approaches and metrics for AI risk measurement are identified
Validated metrics for accuracy, fairness, robustness, security, and explainability are identified, with documented selection rationale and known limitations.
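As one concrete instance of a fairness metric with documented limitations, the sketch below computes demographic parity difference, the gap in positive-decision rates between two groups. The 0.1 screening threshold mentioned in the comment is a common heuristic, not a framework requirement, and demographic parity is known to conflict with other fairness definitions, which is exactly the kind of limitation the selection rationale should record.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive decisions (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-decision rates between two groups.
    0.0 means parity; 0.1 is a common (and debatable) screening heuristic."""
    return abs(selection_rate(group_a) - selection_rate(group_b))
```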
AI system performance evaluated in production-representative conditions
Pre-deployment evaluation runs on held-out, adversarial, and production-representative test data; results are documented with confusion matrices and failure-mode analysis.
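The confusion matrix and error summary in that evaluation record can be produced with a few lines of code. A minimal sketch for a binary classifier, using only the standard library:

```python
from collections import Counter

def confusion_matrix(y_true: list[int], y_pred: list[int]) -> Counter:
    """Count (actual, predicted) label pairs; e.g. key (1, 0) counts false negatives."""
    return Counter(zip(y_true, y_pred))

def error_rate(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of examples the model got wrong; a crude failure-mode summary."""
    errors = sum(t != p for t, p in zip(y_true, y_pred))
    return errors / len(y_true)
```

Running the same functions separately on held-out, adversarial, and production-representative slices makes the per-condition failure modes directly comparable in the evaluation report.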
Trustworthiness characteristics evaluated and documented
Validity, reliability, safety, security, resilience, accountability, transparency, explainability, privacy, and fairness are all evaluated and reported.
AI system performance monitored on an ongoing basis
Post-deployment monitoring of accuracy, drift, GPU saturation, output distribution, and anomalous inference patterns is in place with alerting.
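Drift on the output distribution is commonly tracked with the population stability index (PSI) between a reference window and the live window. The sketch below assumes both distributions arrive as binned fractions summing to 1; the 0.2 alert threshold is a widely used rule of thumb, not a framework value.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (fractions summing to 1).
    Rule of thumb: > 0.2 signals significant drift."""
    eps = 1e-6  # guard against log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

def drift_alert(expected: list[float], actual: list[float],
                threshold: float = 0.2) -> bool:
    """Fire an alert when PSI crosses the configured threshold."""
    return population_stability_index(expected, actual) > threshold
```

A scheduled job that recomputes PSI per feature and per output bin, and pages an owner when `drift_alert` fires, satisfies the "with alerting" part of this control.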
AI risk register maintained over time
Identified AI risks are tracked in a register with status, owner, mitigation, and residual rating; reviewed quarterly.
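The register itself needs no special tooling; a typed record plus a review-cadence query covers the control. The field names below are assumptions chosen to match the attributes listed above, not a mandated schema, and the 90-day cadence approximates the quarterly review.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative register entry; field names are assumptions, not mandated.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    owner: str
    status: str           # e.g. "open", "mitigating", "closed"
    response: str         # mitigate / transfer / accept / avoid
    residual_rating: str  # e.g. "low", "medium", "high"
    last_reviewed: date

def due_for_review(register: list[RiskEntry], today: date,
                   cadence_days: int = 90) -> list[RiskEntry]:
    """Entries whose quarterly review is overdue."""
    return [r for r in register
            if today - r.last_reviewed > timedelta(days=cadence_days)]
```

Surfacing `due_for_review` output in a recurring governance meeting keeps the quarterly cycle auditable.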
AI risks responded to (mitigate / transfer / accept / avoid)
Every identified AI risk has a documented response decision (mitigate / transfer / accept / avoid) with rationale.
AI cybersecurity controls applied
Cybersecurity controls are applied to AI infrastructure with the same rigour as production systems: secrets management, encryption, access control, audit logging.
AI incident response plan and resources
Documented incident response playbooks specific to AI failure modes (prompt injection, model jailbreak, output bias, training data leakage) with assigned response owners.
Third-party AI risks managed
Risks from third-party AI components (foundation models, datasets, libraries, hosted APIs) are inventoried and managed.
Post-deployment monitoring plans implemented
Operational monitoring with drift thresholds, performance metrics, and rollback criteria documented and enforced.