The NIST AI Risk Management Framework (AI RMF 1.0), published as NIST AI 100-1, is a voluntary framework developed by the National Institute of Standards and Technology to help organizations manage risks associated with AI systems throughout their lifecycle. Released in January 2023, the AI RMF provides a structured, flexible approach to identifying, assessing, and mitigating AI-related risks regardless of organization size, sector, or AI maturity level.
The framework is designed to be technology-neutral and use-case-agnostic. It applies to all AI systems, from traditional machine learning models to large language models and autonomous agents, and it has become one of the most widely referenced AI risk management frameworks in the United States.
The Four Core Functions
The AI RMF organizes risk management into four interconnected functions that operate across the AI system lifecycle:
Govern
Govern establishes the organizational foundation for AI risk management. This includes defining policies, roles, and accountability structures; setting risk tolerances; fostering a culture of responsible AI; and ensuring that governance processes are documented and enforced. Govern is cross-cutting: it informs and is informed by the other three functions.
Map
Map focuses on context. Organizations identify and document the intended purpose of an AI system, its stakeholders, expected benefits, potential harms, and the operational environment. Mapping also includes cataloging data sources, known limitations, and third-party dependencies. The goal is a clear picture of where risks could emerge before the system is deployed.
Measure
Measure is the quantitative and qualitative assessment of identified risks. This includes testing for bias, fairness, accuracy, robustness, security vulnerabilities, and explainability. Measurement should be ongoing rather than one-time, because AI systems can drift, degrade, or encounter new inputs over their operational life. Metrics and thresholds established during this phase feed directly into management decisions.
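The idea of metrics and thresholds feeding management decisions can be sketched in a few lines. This is a hypothetical illustration, not a NIST-prescribed implementation: the metric names, threshold values, and `MetricResult` structure are all invented for the example.

```python
# Hypothetical sketch: recurring metric checks against risk thresholds.
# Metric names and threshold values here are illustrative only.

from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str
    value: float
    threshold: float
    higher_is_better: bool = True  # accuracy is higher-is-better; a bias gap is not

    def within_tolerance(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold

def evaluate(results):
    """Return the metrics that breach their thresholds, to be escalated
    to the Manage function (accept, mitigate, transfer, or avoid)."""
    return [r.name for r in results if not r.within_tolerance()]

breaches = evaluate([
    MetricResult("accuracy", 0.91, 0.90),
    MetricResult("demographic_parity_gap", 0.12, 0.05, higher_is_better=False),
])
print(breaches)  # ['demographic_parity_gap']
```

Running checks like this on a schedule, rather than once at launch, is what the framework means by ongoing measurement: the same thresholds that gate initial deployment catch drift and degradation later.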
Manage
Manage is where organizations prioritize and act on the risks surfaced by the Map and Measure functions. Actions include accepting, mitigating, transferring, or avoiding risks. Manage also covers incident response planning, escalation paths, human oversight requirements, and decommissioning procedures when a system no longer meets acceptable risk thresholds.
The AI RMF Playbook
NIST published the AI RMF Playbook as a companion resource to the framework. The Playbook breaks each of the four core functions into categories and subcategories, then provides suggested actions, references, and practical implementation guidance for each one. It is not prescriptive. Organizations select the actions that are relevant to their risk profile, regulatory requirements, and operational context.
The Playbook is especially useful for teams building AI compliance programs because it translates high-level framework principles into concrete steps that can be mapped to controls, evidence, and audit artifacts.
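One way to picture that translation is a simple mapping from Playbook subcategory IDs to internal controls and audit evidence. The subcategory IDs below follow the framework's naming convention (e.g. "GOVERN 1.1"), but the controls and evidence artifacts are invented placeholders for illustration.

```python
# Hypothetical sketch: mapping AI RMF Playbook subcategories to internal
# controls and audit evidence. Control text and file names are placeholders.

control_map = {
    "GOVERN 1.1": {
        "control": "AI policy reviewed and approved annually",
        "evidence": ["policy_v3.pdf", "board_minutes_2024-06.pdf"],
    },
    "MAP 1.1": {
        "control": "Intended use documented before deployment",
        "evidence": ["model_card_fraud_scorer.md"],
    },
    "MEASURE 2.11": {
        "control": "Quarterly fairness evaluation against thresholds",
        "evidence": ["bias_report_2024Q2.json"],
    },
}

def audit_gap(subcategories_in_scope):
    """List in-scope subcategories that have no mapped control yet."""
    return [s for s in subcategories_in_scope if s not in control_map]

print(audit_gap(["GOVERN 1.1", "MANAGE 4.1"]))  # ['MANAGE 4.1']
```

A gap report like this is the kind of artifact that makes framework alignment demonstrable: each in-scope subcategory either points at a control with evidence, or shows up as open work.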
Generative AI Profile (NIST AI 600-1)
Recognizing the unique risks introduced by generative AI, NIST released the Generative AI Profile (NIST AI 600-1) in 2024. This companion document identifies twelve categories of generative AI-specific risk, including hallucination (which NIST terms confabulation), data privacy, intellectual property concerns, and harmful content generation, and maps them to the AI RMF's four core functions.
The Generative AI Profile provides additional guidance for organizations deploying LLMs, foundation models, and AI agents. It addresses risks that traditional ML frameworks may not cover, such as prompt injection, training data poisoning, and unintended memorization. For organizations building or consuming generative AI, this profile is an essential supplement to the base framework. See our comprehensive NIST AI RMF guide for implementation details.
Use Cases and Applicability
The AI RMF applies across industries and use cases:
- Enterprise AI deployments: Structuring AI risk management programs around the four functions, establishing governance boards, and producing audit-ready documentation.
- Regulated industries: Healthcare, financial services, and government agencies use the AI RMF to demonstrate due diligence and map controls to sector-specific requirements.
- Vendor and procurement assessments: Buyers use the AI RMF as a benchmark when evaluating third-party AI products, requiring vendors to demonstrate alignment with framework principles.
- State law compliance: Several U.S. state laws, including Texas TRAIGA, reference or grant affirmative defenses to organizations that adopt recognized frameworks like the AI RMF.
Relationship to NIST Cybersecurity Framework (CSF)
The AI RMF was intentionally designed to complement the NIST Cybersecurity Framework. Both use a functions-based structure, and many organizations already have CSF programs in place. The AI RMF extends cybersecurity risk management to cover AI-specific concerns such as bias, transparency, and accountability that fall outside the scope of traditional cybersecurity controls. Teams with mature CSF programs can layer AI RMF functions on top of existing processes rather than building a separate program from scratch.
Relationship to ISO 42001
ISO 42001 provides requirements for an AI Management System (AIMS), a certifiable standard that covers governance, risk, and continuous improvement. The AI RMF and ISO 42001 are complementary rather than competing. The AI RMF focuses on risk identification and management practices, while ISO 42001 provides the management system structure to institutionalize those practices. Many organizations implement both: using the AI RMF to define their risk approach and ISO 42001 to build the auditable management system around it.
How Swept AI Supports NIST AI RMF Implementation
Swept AI maps directly to the AI RMF's four core functions:
- Govern: Pre-built policy templates and role-based workflows that establish accountability and document governance decisions.
- Map: AI system inventory and risk profiling tools that catalog models, data sources, intended uses, and stakeholder impacts.
- Measure: Automated evaluation including bias testing, safety assessments, red-teaming, and explainability probes that produce versioned, evidence-grade artifacts.
- Manage: Continuous monitoring and supervision with drift detection, incident alerting, and escalation workflows that keep risk management active rather than static.
Swept AI's certification tools generate compliance-ready documentation mapped to AI RMF subcategories, making it straightforward to demonstrate alignment during audits, customer assessments, and regulatory reviews.
Frequently Asked Questions

What is the NIST AI Risk Management Framework?
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the National Institute of Standards and Technology that provides guidance for managing risks throughout the AI system lifecycle.

What are the four core functions of the AI RMF?
The four core functions are Govern (establish policies and accountability), Map (identify and contextualize AI risks), Measure (assess and analyze risks quantitatively), and Manage (prioritize and act on identified risks).

Is the NIST AI RMF mandatory?
The framework itself is voluntary, but several state laws (including Texas TRAIGA) provide affirmative defenses for organizations that adopt it, making it a practical standard for compliance.

How does the AI RMF relate to ISO 42001?
NIST AI RMF and ISO 42001 are complementary. NIST AI RMF provides risk management guidance while ISO 42001 provides a certifiable management system. Many organizations implement both.

What is the AI RMF Playbook?
The AI RMF Playbook is a companion resource that provides suggested actions, references, and practical guidance for each subcategory within the framework's four core functions.

Does the AI RMF cover generative AI?
Yes. NIST released a Generative AI Profile (NIST AI 600-1) that maps generative AI-specific risks to the AI RMF functions and provides additional guidance for LLMs and foundation models.