The CRI FS AI RMF: What 108 Financial Institutions Agree AI Risk Management Actually Requires

Most AI governance frameworks arrive the same way: a regulator publishes requirements, and the industry scrambles to comply. The CRI Financial Services AI Risk Management Framework broke that pattern. In February 2026, 108 financial institutions published what they collectively agreed AI risk management should look like, before any regulator told them what it must look like.

That distinction shapes everything about the framework.

The Framework the Industry Built Itself

The FS AI RMF didn't emerge from a single regulator's office. It came from the Cyber Risk Institute (CRI) and the Financial Services Sector Coordinating Council (FSSCC), with the U.S. Treasury providing oversight and publication support. Banks, insurance companies, asset managers, and payment processors (108 institutions in total) contributed their operational experience to define 230 control objectives for managing AI risk.

This origin matters. Regulators write frameworks based on what they think institutions should do. The 108 institutions that built the FS AI RMF wrote it based on what they know they need to do. The difference shows up in every control objective: these are grounded in the operational reality of deploying AI in one of the most regulated sectors on earth.

Version 1.0 of the Guidebook establishes a baseline the industry wrote for itself. Compliance here is not a checkbox exercise; it means adopting practices that peer institutions have already validated as necessary.

What the FS AI RMF Actually Contains

The framework organizes its 230 control objectives across four functions adapted from the NIST AI RMF:

Govern covers organizational governance for AI systems: board-level oversight, AI strategy definition, roles and responsibilities, risk appetite statements, and accountability structures. Govern establishes the organizational infrastructure that makes everything else possible.

Map covers context and risk identification: use-case inventories, risk categorization, data lineage documentation, stakeholder identification, and impact assessments. This function answers a fundamental question: what AI do we have, and what risks does it carry?

Measure covers assessment and analysis. Model validation, bias testing, performance metrics, explainability assessments, and security testing fall under this function. Institutions quantify whether their AI systems meet the standards set by Govern and the risks identified by Map.

Manage covers treatment and monitoring: incident response, continuous monitoring, remediation workflows, model decommissioning, and escalation procedures. This function addresses the ongoing operational reality of AI in production.

Each function contains specific control objectives: concrete, assessable statements about what an institution should be able to demonstrate.

The Staging Model: Why "Start Where You Are" Works

The framework's most practical feature is its staged adoption model. Rather than presenting 230 control objectives as a single compliance target, the FS AI RMF organizes them into four maturity stages:

Initial: 21 control objectives. The absolute baseline. These controls represent what every institution deploying AI should have in place regardless of scale or sophistication. Basic inventory. Fundamental oversight. Essential risk awareness.

Minimal: 126 control objectives. At this stage, institutions move beyond ad-hoc practices into structured risk management. Controls cover systematic validation, documented policies, and regular review cycles.

Evolving: 193 control objectives. Institutions at this stage integrate AI risk management into their broader operational risk frameworks. Controls include advanced testing, cross-functional coordination, and proactive monitoring.

Embedded: 230 control objectives. Every control is addressed. AI risk management is integrated into the institution's culture, processes, and systems. This is the target state, not the starting line.
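The staged model lends itself to a simple gap analysis. A minimal sketch in Python: the cumulative stage counts come from the framework itself, but the helper functions and the "Pre-Initial" label are our own illustration, not part of the Guidebook.

```python
# Cumulative control-objective counts per stage, as published in the FS AI RMF.
STAGE_TARGETS = {"Initial": 21, "Minimal": 126, "Evolving": 193, "Embedded": 230}

def current_stage(controls_met: int) -> str:
    """Return the highest stage whose cumulative target is fully covered."""
    stage = "Pre-Initial"  # illustrative label for institutions below 21 controls
    for name, target in STAGE_TARGETS.items():
        if controls_met >= target:
            stage = name
    return stage

def gap_to_next_stage(controls_met: int) -> tuple[str, int]:
    """Return the next stage and how many more controls it requires."""
    for name, target in STAGE_TARGETS.items():
        if controls_met < target:
            return name, target - controls_met
    return "Embedded", 0

print(current_stage(130))      # Minimal
print(gap_to_next_stage(130))  # ('Evolving', 63)
```

An institution with 130 controls in place has cleared Minimal and needs 63 more to reach Evolving, which is exactly the roadmap-not-cliff framing the staging model is built around.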

Most governance frameworks ignore a practical reality: institutions are at different points in their AI journeys. Full compliance from day one sounds good in a press release. In practice, it produces either paralysis or theater. The AI governance maturity model concept maps directly to this staged approach, offering a roadmap rather than a cliff.

How the Four Functions Map to Financial Services Operations

Financial services institutions face AI risks shaped by their regulatory environment. Each function translates differently here than it would in retail or manufacturing.

Govern in financial services means board-level AI committees, SR 11-7 integration, and explicit risk appetite statements for AI-driven decisions. A credit underwriting model and a marketing recommendation engine don't belong in the same governance tier, and the framework's risk tiering reflects that.

Map in financial services requires comprehensive model inventories that include third-party AI systems, data lineage that traces back to source systems, and risk categorizations that account for regulatory sensitivity. A fraud detection model processing millions of transactions daily needs a different risk map than an internal document classifier.

Measure in financial services involves bias testing against protected classes under fair lending laws, model validation that satisfies OCC and FFIEC examination standards, and performance metrics that account for financial impact. An AI that approves or denies credit must demonstrate fairness across demographic groups. That requirement has legal teeth.
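One common screening heuristic for this kind of fairness testing is the adverse impact ratio, often checked against the "four-fifths rule." The framework does not prescribe this specific test; the sketch below is purely illustrative, with made-up group labels and counts.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def adverse_impact_ratio(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's.
    A common screening heuristic flags ratios below 0.8 (the 'four-fifths rule')."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical decisions: group A approved 50/100, group B approved 25/100.
decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 25 + [("B", False)] * 75)
rates = approval_rates(decisions)
print(adverse_impact_ratio(rates, "A"))  # {'A': 1.0, 'B': 0.5} -> B falls below the 0.8 screen
```

A ratio screen like this is only a first filter; production fair-lending validation involves statistical testing and legal review well beyond a single metric.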

Manage in financial services covers incident response for AI failures that could affect customers or markets, continuous monitoring for model drift that could shift approval rates, and decommissioning procedures for models that no longer meet standards. An AI system that starts producing unexpected outcomes needs documented escalation paths, not informal workarounds.
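A drift check of the kind Manage envisions can be as simple as comparing a recent approval rate against a baseline. This is a hypothetical sketch; the tolerance value and the escalation wiring are placeholders, not framework-prescribed numbers.

```python
def approval_rate_drift(baseline_rate, recent_decisions, tolerance=0.05):
    """Flag when the recent approval rate drifts beyond tolerance from baseline.

    recent_decisions: iterable of 1 (approved) / 0 (denied) outcomes.
    Returns (recent_rate, drifted). Escalation to a documented path is left out.
    """
    recent = list(recent_decisions)
    recent_rate = sum(recent) / len(recent)
    return recent_rate, abs(recent_rate - baseline_rate) > tolerance

rate, drifted = approval_rate_drift(0.62, [1] * 70 + [0] * 30)
print(rate, drifted)  # 0.7 True -> trips the escalation threshold
```

In practice this check would run on a schedule against production decision logs, and a `True` result would open an incident rather than print to a console.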

Cross-Framework Alignment: Build Once, Comply Everywhere

Financial institutions already contend with overlapping regulatory requirements. The FS AI RMF accounts for this complexity, explicitly mapping its control objectives to 15+ existing standards and regulatory frameworks:

  • NIST AI RMF: The parent framework. Every FS AI RMF control maps to a NIST AI RMF subcategory.
  • EU AI Act: For institutions with European operations, the framework maps controls to EU AI Act requirements, including high-risk AI system obligations.
  • ISO 42001: The AI management system standard. Institutions pursuing certification can use FS AI RMF implementation as direct evidence.
  • SR 11-7: The Federal Reserve's model risk management guidance. The framework explicitly extends SR 11-7 concepts to AI and ML models.
  • FFIEC and OCC guidance: Examination standards for technology risk, third-party risk, and model risk.
  • CFPB guidance: Consumer financial protection requirements for AI-driven decisions.

The practical benefit: implement the FS AI RMF once, and the same work generates compliance evidence for multiple regulatory bodies. With state-level AI regulations continuing to proliferate, that kind of consolidation saves real operational cost and audit burden.
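In code terms, cross-framework alignment amounts to a many-to-many mapping from control objectives to external requirements. The control IDs and mapping targets below are invented for illustration; they are not the framework's actual identifiers.

```python
# Hypothetical control records -- IDs and framework references are illustrative only.
CONTROLS = [
    {"id": "GV-01", "function": "Govern",
     "maps_to": {"NIST AI RMF": "GOVERN 1.1", "ISO 42001": "Clause 5.1"}},
    {"id": "MS-07", "function": "Measure",
     "maps_to": {"NIST AI RMF": "MEASURE 2.1", "EU AI Act": "Art. 10"}},
]

def evidence_for(framework):
    """List controls whose implementation doubles as evidence for a framework."""
    return [c["id"] for c in CONTROLS if framework in c["maps_to"]]

print(evidence_for("NIST AI RMF"))  # ['GV-01', 'MS-07']
print(evidence_for("EU AI Act"))    # ['MS-07']
```

The point of the structure: one implemented control produces evidence rows for every framework it maps to, which is where the "build once, comply everywhere" savings come from.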

Operationalizing the Framework with Swept AI

A framework of 230 control objectives is only valuable if institutions can operationalize it. Documentation alone does not constitute risk management.

At Swept AI, we've built our platform to map directly to the FS AI RMF's four functions:

Evaluate aligns with Measure. Our evaluation capabilities support bias testing, model validation, and performance assessment: the quantitative controls that the Measure function requires. Rather than building custom validation pipelines for each model, institutions can deploy standardized evaluation scorecards aligned to FS AI RMF control objectives.

Supervise aligns with Manage. Continuous monitoring, drift detection, and incident escalation are core Manage function requirements. Our supervision layer provides real-time oversight of AI systems in production, generating the monitoring evidence that the framework demands.

Certify aligns with Govern. The Govern function requires documented evidence of oversight, accountability, and compliance. Our certification capabilities generate audit-ready documentation that maps directly to FS AI RMF control objectives, creating the evidence trail that regulators and internal audit teams require.

For institutions working through the staged adoption model, this mapping provides a practical path. Start with the 21 Initial-stage controls and the corresponding Swept AI capabilities. Expand as the institution matures through Minimal, Evolving, and Embedded stages.

What This Framework Signals

The FS AI RMF represents a shift in how financial services approaches AI governance. Instead of waiting for regulators to define requirements, 108 institutions defined them collaboratively. The result is a framework grounded in operational reality, structured for progressive adoption, and designed to reduce the compliance burden through cross-framework alignment.

For institutions deploying AI in production, the question is no longer whether to adopt structured AI risk management. The 108 institutions that built this framework already answered that question. The remaining question is where your institution sits on the path from 21 to 230, and how quickly you close the gap.
