
NIST AI Risk Management Framework (NIST AI RMF)

The NIST AI RMF provides structured guidance for managing risks related to AI systems throughout their lifecycle. It is voluntary, rights-preserving, non-sector-specific, and use-case agnostic.

Issuing Body: National Institute of Standards and Technology (NIST)
Version: 1.0
Published: 2023-01-26
Controls: 16
Controls (each entry lists Control ID, Title, Domain, Maturity, and Description):
AIRF-GOVERN-1.1: AI Risk Policies
Domain: Govern | Maturity: defined
Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

AIRF-GOVERN-1.2: Accountability
Domain: Govern | Maturity: defined
Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

AIRF-GOVERN-1.3: Organizational Roles
Domain: Govern | Maturity: defined
Organizational teams are committed to a culture that considers and communicates AI risk. Roles, responsibilities, and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

AIRF-GOVERN-2.1: Risk Tolerance
Domain: Govern | Maturity: defined
The risk or impact tolerance of the organization or context is established, communicated, and updated. Organizational risk tolerance for AI risks is documented and applied.

AIRF-GOVERN-4.1: Organizational Teams
Domain: Govern | Maturity: defined
Organizational teams are committed to a culture that considers and communicates AI risk. Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

AIRF-GOVERN-5.1: Organizational Policies
Domain: Govern | Maturity: defined
Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.

AIRF-MAP-1.1: Context Establishment
Domain: Map | Maturity: defined
Context is established for the AI risk assessment. The organizational mission and relevant AI policies, processes, and procedures are understood. Intended purpose, potentially beneficial uses, context of use, and assumptions are documented.

AIRF-MAP-1.5: AI System Categorization
Domain: Map | Maturity: defined
Organizational risk tolerances are determined and documented. The AI system to be deployed presents risks that fall within organizational risk tolerances.

AIRF-MAP-2.1: Scientific Basis
Domain: Map | Maturity: defined
The scientific basis of the claimed or potential benefits and costs of the AI system is understood. Potential costs, including harms, are properly identified and documented.

AIRF-MAP-5.1: Likelihood and Impact
Domain: Map | Maturity: managed
Likelihood and magnitude of each identified impact based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.

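The likelihood-and-magnitude documentation this control calls for is often operationalized as a risk register. Below is a minimal sketch; the 1-5 ordinal scales, the multiplicative score, and the `ImpactRecord` field names are illustrative assumptions, not part of the AI RMF:

```python
from dataclasses import dataclass

@dataclass
class ImpactRecord:
    """One identified impact, scored on a 1-5 ordinal scale (illustrative)."""
    impact: str
    likelihood: int   # 1 = rare .. 5 = almost certain
    magnitude: int    # 1 = negligible .. 5 = severe
    evidence: str     # e.g. past uses, incident reports, external feedback

    def score(self) -> int:
        # Simple multiplicative risk score; other aggregation rules are possible.
        return self.likelihood * self.magnitude

register = [
    ImpactRecord("Biased loan denials", likelihood=3, magnitude=4,
                 evidence="public incident reports for similar credit models"),
    ImpactRecord("Service outage from model drift", likelihood=2, magnitude=3,
                 evidence="past uses of AI systems in similar contexts"),
]

# Rank impacts so risk-treatment decisions can start with the highest scores.
for rec in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"{rec.score():2d}  {rec.impact}  ({rec.evidence})")
```

Keeping the evidence field alongside each score preserves the traceability the control asks for: every documented impact records where its likelihood and magnitude estimates came from.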
AIRF-MEASURE-1.1: Measurement Approaches
Domain: Measure | Maturity: managed
Approaches and metrics for measurement of AI risks, impacts, and related harms are identified and documented. Measurement approaches for identifying AI risks are aligned with the AI risk management framework.

AIRF-MEASURE-2.1: AI System Testing
Domain: Measure | Maturity: managed
Test sets, metrics, and details about the tools used during test, evaluation, validation, and verification (TEVV) are documented. AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s).

AIRF-MEASURE-2.5: Bias Testing
Domain: Measure | Maturity: managed
The AI system to be deployed is demonstrated to be valid and reliable through systematic evaluation. AI system performance, or assurance criteria, are evaluated against established metrics. Bias testing is performed.

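The AI RMF does not prescribe a particular bias test. One widely used check that could satisfy the "bias testing is performed" criterion is comparing selection rates across groups; the sketch below uses the four-fifths-rule threshold and toy data, both of which are assumptions rather than RMF requirements:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs; returns per-group positive rate."""
    pos, total = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        total[group] += 1
        pos[group] += int(selected)
    return {g: pos[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Min selection rate over max; values below ~0.8 are a common flag (four-fifths rule)."""
    return min(rates.values()) / max(rates.values())

# Toy evaluation set: (group label, did the model select the applicant?)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60 +
            [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)   # 0.25 / 0.40 = 0.625 -> flagged
```

Documenting which metric was used, on which test set, and what threshold triggers review is what ties a check like this back to the TEVV documentation required by MEASURE 2.1.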
AIRF-MANAGE-1.1: Risk Treatment
Domain: Manage | Maturity: managed
A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed. Risks or other undesirable impacts are treated, transferred, accepted, or avoided.

AIRF-MANAGE-2.1: Mechanisms for Addressing Risks
Domain: Manage | Maturity: managed
Resources required to manage AI risks are taken into account, along with viable non-AI alternative systems, approaches, or methods, to reduce the magnitude or likelihood of potential impacts. Mechanisms are in place to inventory AI systems and their associated risks.

AIRF-MANAGE-4.1: Residual Risk Management
Domain: Manage | Maturity: managed
Post-deployment AI system monitoring plans are in place and are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors. Residual risks not identified in pre-deployment testing are documented.
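The capture-and-evaluate mechanism this control describes can be as simple as a structured feedback log that escalates recurring reports into candidate residual risks. A minimal illustrative sketch; the report-count threshold and class names are assumptions, not part of the framework:

```python
from collections import Counter

class PostDeploymentLog:
    """Capture input from users and other AI actors after deployment,
    and surface recurring issues as candidate residual risks."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold   # reports before escalation (arbitrary choice)
        self.reports = Counter()

    def capture(self, issue: str) -> None:
        """Record one user or AI-actor report of an observed issue."""
        self.reports[issue] += 1

    def residual_risk_candidates(self) -> list[str]:
        """Issues reported often enough to document as residual risks
        not identified in pre-deployment testing."""
        return [issue for issue, n in self.reports.items() if n >= self.threshold]

log = PostDeploymentLog(threshold=2)
for issue in ["hallucinated citation", "hallucinated citation", "slow response"]:
    log.capture(issue)
# log.residual_risk_candidates() -> ["hallucinated citation"]
```

Escalated candidates would then feed back into the risk register and treatment decisions (MANAGE 1.1), closing the loop between monitoring and documented residual risk.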