EU AI Act Comparison

EU AI Act (Regulation (EU) 2024/1689) and NIST AI RMF

NIST AI RMF helps you manage AI risk. The AI Act tells you which legal duties still apply.

The strongest model uses AI RMF for governance and risk discipline, then overlays AI Act-specific legal checks and evidence.

Author: Sorena AI
Published: Mar 4, 2026
Updated: Mar 4, 2026


Overview

NIST AI RMF 1.0 is explicitly voluntary. The AI Act is not. That difference matters. Teams that already use AI RMF should keep it, but they should stop short of treating it as a substitute for EU legal classification, operator duties, and authority-facing evidence.

Section 1

What NIST AI RMF contributes

NIST describes the AI RMF as a voluntary framework to help organizations designing, developing, deploying, or using AI systems manage risk and support trustworthy AI. The AI RMF Core organizes activities into four functions: Govern, Map, Measure, and Manage. That structure is useful because it gives product and risk teams a common language that is broader than model safety alone.

The playbook and related NIST resources are especially helpful when an organization needs a repeatable way to define purpose, identify context, measure risks, prioritize responses, and keep improving over time.

  • Govern helps assign policy, oversight, roles, and accountability.
  • Map helps define context, stakeholders, intended purpose, and risk factors.
  • Measure helps test, evaluate, and monitor risk signals and controls.
  • Manage helps prioritize responses, accept residual risk, and track remediation.
Section 2

What the EU AI Act adds that AI RMF does not

The AI Act adds legal routing. It asks whether the system is prohibited, high risk, transparency-triggered, or connected to GPAI provider duties. It also assigns obligations to specific operator roles and attaches penalties to non-compliance. NIST AI RMF does not make those legal classifications for you.

This is why a good AI RMF program can still miss core AI Act obligations if it never runs Article 5 screening, Annex III classification, Article 50 product disclosure analysis, or Chapter V model level documentation checks.

  • AI RMF does not determine whether Annex III applies.
  • AI RMF does not create Article 50 disclosure duties or machine readable marking duties.
  • AI RMF does not replace Annex IV technical documentation or conformity assessment work.
  • AI RMF does not replace Article 53 to 55 duties for GPAI providers.
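The routing logic described in this section can be sketched as a simple screening order. This is an illustrative simplification only: the flag names and category labels are assumptions made for the example, and real classification requires legal analysis of Articles 5, 6, and 50, Annex III, and Chapter V rather than boolean flags.

```python
# Illustrative sketch of the AI Act routing order described above.
# All flag names are hypothetical; this is not a legal classification tool.

def route_ai_act(system: dict) -> list[str]:
    """Return the AI Act obligation buckets a system may fall into."""
    buckets = []
    if system.get("article_5_prohibited"):
        # A prohibited practice ends the analysis: the system cannot be
        # placed on the EU market at all.
        return ["prohibited"]
    if system.get("annex_iii_use_case") or system.get("regulated_product"):
        buckets.append("high_risk")          # Article 6 / Annex III route
    if system.get("interacts_with_persons") or system.get("synthetic_output"):
        buckets.append("transparency")       # Article 50 duties
    if system.get("gpai_provider"):
        buckets.append("gpai")               # Chapter V, Articles 53 to 55
    return buckets or ["minimal"]
```

Note that the buckets are not mutually exclusive: a single system can carry high risk duties, Article 50 duties, and GPAI-related duties at once, which is exactly the routing work AI RMF alone never performs.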
Section 3

How to combine them in practice

Use AI RMF as the operating logic for governance and risk management. Then insert AI Act checkpoints inside each function. Govern should include operator role ownership and AI literacy. Map should include EU scope, intended purpose, and affected persons. Measure should include Article 5 screening, Article 50 design review, and high risk testing evidence. Manage should include launch decisions, residual risk treatment, incident handling, and corrective action.

This combined model is also consistent with the ETSI material in the local source pack, which references the NIST AI RMF as part of a structured risk and governance approach for AI systems.

  • Govern plus AI Act: assign provider and deployer duties and approval authority.
  • Map plus AI Act: identify scope, intended purpose, Annex III exposure, and affected persons.
  • Measure plus AI Act: test disclosures, performance, robustness, logging, and oversight.
  • Manage plus AI Act: approve launch, track incidents, update documentation, and close corrective actions.
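The function-by-function overlay above can be captured as a plain mapping that a governance team checks evidence against. The checkpoint labels come from this section; everything else (keys, structure, the gap-checking helper) is an assumed illustration, not a prescribed schema.

```python
# Hypothetical overlay of AI Act checkpoints onto the four AI RMF functions,
# mirroring the bullets in Section 3.
AI_ACT_CHECKPOINTS = {
    "govern": ["operator role ownership", "AI literacy", "approval authority"],
    "map": ["EU scope", "intended purpose", "Annex III exposure", "affected persons"],
    "measure": ["Article 5 screening", "Article 50 design review", "high risk testing evidence"],
    "manage": ["launch decision", "residual risk treatment", "incident handling", "corrective action"],
}

def missing_checkpoints(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """List AI Act checkpoints not yet evidenced for each RMF function."""
    return {
        fn: [c for c in checks if c not in completed.get(fn, set())]
        for fn, checks in AI_ACT_CHECKPOINTS.items()
    }
```

A review can then surface, per function, which legal checkpoints still lack evidence before launch.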
Section 4

Evidence reuse without false equivalence

You can reuse a large share of your evidence. Risk registers, role assignments, impact review outputs, control test results, and monitoring reports all carry over. What you cannot reuse is the claim that a completed AI RMF cycle equals AI Act compliance.

Keep one evidence pack with a clear legal appendix. That appendix should show the AI Act classifications, article triggers, and any mandatory artifacts such as FRIA, Annex IV planning, Article 50 evidence, or Chapter V model outputs.

  • Reuse governance records, risk review outputs, and monitoring evidence.
  • Add AI Act only decision records and mandatory legal artifacts.
  • Tie every evidence item to a system or model version.
  • Review the combined pack on the same cadence as product and model change control.
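One way to keep a single evidence pack with a clearly separated legal appendix, as recommended above, is a small record type that ties every item to a system and model version. This is a minimal sketch under stated assumptions: the field names are invented for illustration.

```python
# Sketch of an evidence pack that reuses AI RMF artifacts but keeps AI Act
# legal artifacts in a separate, clearly labelled appendix.
# Field names are illustrative, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class EvidencePack:
    system_id: str
    model_version: str                                       # tie every item to a version
    rmf_evidence: list[str] = field(default_factory=list)    # reusable AI RMF records
    legal_appendix: list[str] = field(default_factory=list)  # AI Act-only artifacts

    def add_rmf(self, item: str) -> None:
        self.rmf_evidence.append(item)

    def add_legal(self, item: str) -> None:
        # e.g. FRIA, Annex IV planning, Article 50 evidence, Chapter V outputs
        self.legal_appendix.append(item)

pack = EvidencePack(system_id="credit-scoring", model_version="v2.3")
pack.add_rmf("risk register extract")
pack.add_legal("Annex IV technical documentation plan")
```

Reviewing the pack on the product's change-control cadence then means re-checking both lists whenever `model_version` changes.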

Primary sources

References and citations

nist.gov
Referenced sections
  • Official playbook describing the Govern, Map, Measure, and Manage functions.
Related guides

Explore more topics

EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide
Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
EU AI Act Applicability Test | Scope, Role, and Obligation Routing
Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
EU AI Act Checklist | Practical Compliance Checklist by Obligation
Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
EU AI Act Compliance Program | Build an Operational AI Act Program
Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan
Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties
Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide
Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes
See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond
Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure
Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
EU AI Act Prohibited AI Practices | Article 5 Screening Guide
Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI
Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap
Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide
Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not
Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.