
EU AI Act (Regulation (EU) 2024/1689) High risk checklist

High risk compliance is a lifecycle system with evidence at every stage.

This checklist covers both the core requirements and the surrounding provider and deployer duties that make them real.

Author
Sorena AI
Published
Mar 4, 2026
Updated
Mar 4, 2026
Overview

High risk systems require more than principles. They require structured controls, technical and organizational evidence, and release discipline that remains valid after deployment. This page maps the key requirements to the outputs teams should be able to show.

Section 1

Articles 9 and 10: risk management and data governance

Article 9 requires a continuous and iterative risk management system. It should cover known and reasonably foreseeable risks across the lifecycle, not only risks at initial design. Article 10 requires data governance and data quality measures appropriate to the system and its intended purpose.

This is where many high risk programs stand or fall. If the training, validation, and testing record is thin, later claims about performance and oversight will be weak as well.

  • Lifecycle risk register linked to design changes, testing, and post market events.
  • Documented assumptions about intended purpose, users, and environment.
  • Training, validation, and testing data governance evidence, including relevance and known limitations.
  • Bias, representativeness, and error handling controls appropriate to the use case.
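
To make the register concrete for engineering teams, here is a minimal sketch in Python of what a lifecycle risk register entry could capture, linking each risk to the design changes, tests, and post market events that evidence it. The RiskEntry class and its field names are illustrative assumptions for this page, not terms defined by the Act.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RiskEntry:
        """One row in a hypothetical lifecycle risk register (Article 9 sketch)."""
        risk_id: str             # stable identifier that survives releases
        description: str         # known or reasonably foreseeable risk
        purpose_assumption: str  # documented intended purpose assumption it depends on
        mitigations: list[str] = field(default_factory=list)
        design_changes: list[str] = field(default_factory=list)      # linked change request IDs
        tests: list[str] = field(default_factory=list)               # linked test case or report IDs
        post_market_events: list[str] = field(default_factory=list)  # incident or monitoring IDs
        last_reviewed: date = field(default_factory=date.today)
        residual_risk_accepted: bool = False

    # Example: a risk that stays open across the lifecycle, not only at initial design.
    entry = RiskEntry(
        risk_id="R-017",
        description="Model underperforms for an underrepresented user group",
        purpose_assumption="Deployed only for the documented use case in environment X",
        mitigations=["Representativeness check on training data", "Bias test suite"],
        tests=["TEST-442"],
    )

The point of the structure is the links: a register entry that cannot point at a test or a post market event is exactly the thin record this section warns about.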
Section 2

Articles 11 to 13: technical documentation, logging, and instructions

Article 11 requires technical documentation with the Annex IV content needed to demonstrate compliance. Article 12 requires automatic logging capabilities. Article 13 requires instructions for use that let deployers understand capabilities, limitations, expected performance, and required oversight.

This is not paperwork for paperwork's sake. These materials allow deployers, notified bodies, and authorities to understand how the system should be used and where the limits are.

  • Annex IV plan complete and linked to the specific system version.
  • Logging design supports traceability, incident review, and required retention.
  • Instructions for use include intended purpose, precluded uses, performance limits, and oversight needs.
  • Provider and deployer teams agree on how instructions are operationalized in production.
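
As one illustrative way to think about the Article 12 logging capability, the short sketch below writes structured event records that tie each output back to a specific system version, input, and operator, which is what later incident review needs. The log_event helper and its field set are assumptions for this example, not a schema prescribed by the Act or Annex IV.

    import json
    import uuid
    from datetime import datetime, timezone

    def log_event(system_version: str, input_ref: str, output_ref: str,
                  operator_id: str, sink) -> dict:
        """Append one traceable event record to an append-only sink (hypothetical schema)."""
        record = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC eases cross-system review
            "system_version": system_version,  # ties the event to versioned documentation
            "input_ref": input_ref,            # a reference, not raw data, limits exposure
            "output_ref": output_ref,
            "operator_id": operator_id,        # supports oversight and incident reconstruction
        }
        sink.write(json.dumps(record) + "\n")  # one line per event, append only
        return record

    # Usage: any append-only, retention-managed sink works; a local file stands in here.
    with open("ai_events.log", "a") as sink:
        log_event("model-2.3.1", "req-8812", "resp-8812", "operator-07", sink)

Whatever the real sink is, its retention period and access controls belong in the same design review as the record schema.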
Section 3

Articles 14 and 15: human oversight, accuracy, robustness, and cybersecurity

Article 14 requires human oversight measures that fit the system and its context. The assigned natural persons need competence, training, authority, and practical means to intervene. Article 15 requires an appropriate level of accuracy, robustness, and cybersecurity in light of intended purpose and state of the art.

Oversight and performance should be designed together. A strong oversight model fails if operators cannot understand outputs or if the system degrades without usable monitoring.

  • Oversight role assigned to trained persons with authority to pause or stop use.
  • Escalation and fallback actions documented for abnormal outputs or degraded performance.
  • Accuracy, robustness, and cybersecurity acceptance criteria defined and tested.
  • Known limitations, failure modes, and residual risks documented for deployers.
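
To show how oversight and performance controls can be designed together, here is a minimal sketch, assuming a hypothetical confidence score, a drift alert from monitoring, and a pause flag that an assigned operator is authorized to set. The thresholds and names are illustrative, not values the Act specifies.

    from enum import Enum

    class Action(Enum):
        ACCEPT = "accept"        # output within documented acceptance criteria
        HUMAN_REVIEW = "review"  # route to a trained overseer before use
        FALLBACK = "fallback"    # degraded performance: apply the documented fallback
        HALTED = "halted"        # an authorized person has paused the system

    def route_output(confidence: float, drift_alert: bool, paused: bool) -> Action:
        """Decide what happens to one output (illustrative thresholds)."""
        if paused:               # oversight staff must be able to stop use entirely
            return Action.HALTED
        if drift_alert:          # monitoring detected degraded performance
            return Action.FALLBACK
        if confidence < 0.80:    # below a documented acceptance criterion
            return Action.HUMAN_REVIEW
        return Action.ACCEPT

    assert route_output(0.95, drift_alert=False, paused=False) is Action.ACCEPT
    assert route_output(0.95, drift_alert=True, paused=False) is Action.FALLBACK

The ordering encodes the design point made above: a pause by an authorized person outranks everything else, and degraded performance is handled before per-output confidence is even consulted.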
Section 4

Provider and deployer duties around the core requirements

Core requirements only work if the surrounding duties are also in place. Providers need quality management, conformity assessment, retention, registration, and post market monitoring. Deployers need to use the system according to instructions, assign oversight, keep logs under their control, inform affected persons in relevant cases, and perform a fundamental rights impact assessment (FRIA) where Article 27 requires it.

This is why high risk readiness is always broader than Articles 9 to 15 alone.

  • Technical documentation retained for 10 years where the Act requires it.
  • Logs under provider or deployer control retained for at least six months unless other law changes the period.
  • Article 49 registration checked for relevant Annex III systems.
  • Article 27 FRIA workflow active for public bodies, private entities providing public services, and the listed Annex III finance cases.
  • Affected person notices and complaint handling path defined where decisions concern natural persons.
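
The two retention periods in this list are easy to track mechanically. Below is a small sketch that flags whether an artifact is still inside its minimum retention window; the 10 year and six month figures come from the checklist above, while the record types and the must_retain helper are assumptions for illustration.

    from datetime import date, timedelta

    # Minimum retention windows from the checklist above (illustrative mapping).
    RETENTION = {
        "technical_documentation": timedelta(days=365 * 10),  # 10 years where required
        "system_logs": timedelta(days=182),                   # at least six months
    }

    def must_retain(record_type: str, created: date, today: date | None = None) -> bool:
        """True while the record is still inside its minimum retention window."""
        today = today or date.today()
        return today - created < RETENTION[record_type]

    # A log written five months ago is still inside the six month window.
    assert must_retain("system_logs", date(2026, 1, 4), today=date(2026, 6, 4))

Remember these are floors: other law, litigation holds, or contracts can lengthen the period, which is why the helper answers only the minimum question.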
Section 5

Release and post market checklist

A high risk system is not done at first release. Post market monitoring, serious incident handling, corrective action, and change review all need named owners and templates. Learning systems and major updates also need careful review to determine whether a change was already covered by the original assessment or amounts to a substantial modification.

Your release checklist should therefore be paired with a steady state monitoring checklist.

  • Conformity route confirmed and completed before placing on the market or putting into service.
  • CE marking applied where required.
  • Post market monitoring plan created and linked to production telemetry and support channels.
  • Serious incident and corrective action workflow tested.
  • Material changes reviewed against substantial modification criteria.
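
As an illustration of the change review item, the sketch below gates a release on whether a change stays within what was already assessed. The Change fields and the rule are assumptions for this example; the real substantial modification analysis is a legal and engineering judgment, not a boolean.

    from dataclasses import dataclass

    @dataclass
    class Change:
        """A proposed change described against the assessed baseline (hypothetical fields)."""
        alters_intended_purpose: bool       # new use case, user group, or environment?
        outside_assessed_envelope: bool     # behavior beyond what testing covered?
        pre_declared_learning_update: bool  # learning behavior already covered at assessment?

    def needs_reassessment(change: Change) -> bool:
        """Flag changes that may be substantial modifications (deliberately conservative)."""
        beyond_scope = change.alters_intended_purpose or change.outside_assessed_envelope
        if change.pre_declared_learning_update and not beyond_scope:
            return False  # pre-determined learning change, still inside the assessed scope
        return beyond_scope

    # A routine, pre-declared learning update does not trigger the gate by itself.
    assert not needs_reassessment(Change(False, False, True))
    # Changing the intended purpose always does.
    assert needs_reassessment(Change(True, False, False))

A gate like this is only as good as the questions it asks, so the Change fields should be maintained by the same people who own the conformity assessment.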
Recommended next step

Turn this EU AI Act (Regulation (EU) 2024/1689) high risk checklist into an operational assessment

Assessment Autopilot can turn this checklist into a reusable operational workflow inside Sorena. Teams working on EU AI Act (Regulation (EU) 2024/1689) compliance can keep owners, evidence, and next steps aligned without copying this guide into separate documents.

Related guides

EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide
Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, and importer roles.
EU AI Act Applicability Test | Scope, Role, and Obligation Routing
Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
EU AI Act Checklist | Practical Compliance Checklist by Obligation
Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
EU AI Act Compliance Program | Build an Operational AI Act Program
Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan
Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties
Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide
Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
EU AI Act High Risk AI Use Cases by Industry | Annex III and Product Routes
See how EU AI Act high risk status appears across biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration.
EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure
Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent of worldwide annual turnover.
EU AI Act Prohibited AI Practices | Article 5 Screening Guide
Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI
Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap
Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide
Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not
Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties
Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure, and Manage relate to AI Act duties.