
EU AI Act (Regulation (EU) 2024/1689) High risk use cases

High risk classification depends on context, purpose, and decision impact.

This page translates Annex III and the product safety route into examples teams can actually recognize.

Author
Sorena AI
Published
Mar 4, 2026
Updated
Mar 4, 2026

Overview

High risk classification is one of the most misunderstood parts of the AI Act because teams often look only at model capability. The Act instead focuses on the use case, the protected interests at stake, and in some routes the product law context. These examples are not legal advice, but they reflect the structure of Annex III and Article 6.

Section 1

Biometrics, law enforcement, migration, justice, and democracy

The most sensitive categories sit close to state power, identity, and rights. Annex III covers several biometric, law enforcement, migration and border, justice, and democratic process uses. Many of these uses also carry strong transparency, record keeping, or restricted access rules because of their impact on individuals and on public trust.

Examples include biometric identification and categorisation uses that are permitted rather than prohibited, law enforcement risk assessment or profiling tools, border management credibility or risk tools, and systems supporting judicial decision processes or electoral influence analysis.

  • Remote biometric identification can be prohibited or highly restricted depending on context.
  • Profiling of natural persons within Annex III use cases, including law enforcement, is always high risk because the Article 6 derogation does not apply to profiling.
  • Justice and democratic process tools need especially careful review because they affect core rights.
  • Law enforcement, migration, and justice cases often involve special registration and confidentiality rules.
Section 2

Education, employment, and essential services

Many private sector organizations fall within high risk classification through Annex III even when they do not think of themselves as regulated AI providers. Education admissions, testing, and evaluation tools can be high risk. So can employment tools for recruitment, selection, evaluation, promotion, and termination, and essential service decisions in credit, insurance, benefits, and similar contexts.

The practical common factor is simple: the system materially influences access to opportunity, income, public services, or comparable rights.

  • Hiring and worker evaluation systems are a frequent enterprise exposure point.
  • Admissions and exam scoring tools can trigger high risk review.
  • Creditworthiness and access to essential private services are core Annex III examples.
  • Public service allocation and benefits eligibility tools should be screened early.
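The intake screening pattern described in this section can be sketched as a simple lookup. The area labels, the tag scheme, and the function name below are illustrative assumptions, not terms from the Regulation; any match simply routes the system to a full Article 6 high risk review rather than deciding classification by itself.

```python
# Hedged sketch of an Annex III intake screen. The short area labels and
# descriptions are paraphrases for illustration, not legal definitions.
ANNEX_III_AREAS = {
    "biometrics": "Biometric identification and categorisation",
    "critical_infrastructure": "Safety management of critical infrastructure",
    "education": "Admissions, testing, and evaluation in education",
    "employment": "Recruitment, selection, evaluation, promotion, termination",
    "essential_services": "Credit, insurance, benefits, essential services",
    "law_enforcement": "Risk assessment and profiling in law enforcement",
    "migration": "Border management and migration tools",
    "justice_democracy": "Judicial decision support, electoral influence",
}

def screen_use_case(tags):
    """Return the Annex III areas a system's declared uses touch.

    `tags` is a set of short labels assigned during intake; any hit
    means the system needs a full Article 6 high risk review.
    """
    return sorted(area for area in ANNEX_III_AREAS if area in tags)

# A hiring tool tagged at intake is flagged for the employment area.
hits = screen_use_case({"employment", "internal_chatbot"})
```

A non-empty result is a trigger for review, not a final classification: the derogation analysis and the product safety route still have to be run.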
Section 3

Critical infrastructure and product safety routes

Some systems become high risk because they are safety components of products regulated under Union harmonisation law. Others are high risk because they are used in critical infrastructure where failure could endanger health or safety. Medical devices, machinery, vehicles, and aviation-linked products all need this route checked carefully.

This is where engineering, safety, and product compliance teams need to work together. The AI Act does not replace sector law. It layers AI specific duties onto those environments.

  • Check Annex I and product safety law where AI is embedded in regulated products.
  • Check critical infrastructure uses for energy, transport, water, and similar operations.
  • Map AI specific hazards separately from existing product law hazards.
  • Plan one joined technical documentation approach where the product route requires it.
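The product route in Article 6(1) reduces to cumulative conditions: the AI system is a safety component of (or is itself) a product covered by the Union harmonisation legislation listed in Annex I, and that legislation requires third party conformity assessment. A minimal sketch, with illustrative parameter names:

```python
def product_safety_route_high_risk(is_safety_component,
                                   covered_by_annex_i_law,
                                   third_party_conformity_required):
    """Sketch of the Article 6(1) product route.

    High risk when the system is a safety component of (or is itself) a
    product covered by Annex I harmonisation law AND that law requires
    third party conformity assessment. Parameter names are assumptions
    for illustration; the answers come from product compliance teams.
    """
    return (is_safety_component
            and covered_by_annex_i_law
            and third_party_conformity_required)
```

Because all conditions must hold, the check fails as soon as any one answer is no, which is why the engineering and product compliance answers need to be gathered together rather than in isolation.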
Section 4

Using the Annex III derogation carefully

Article 6 allows a derogation where an Annex III system does not pose a significant risk of harm and does not materially influence decision making. The Act gives examples of narrow procedural tasks and preparatory tasks that may fit this path. But the provider must document the assessment before placing the system on the market or putting it into service and remains subject to registration duties.

This is not a shortcut for weak cases. If the system profiles natural persons, the derogation is unavailable. If the system meaningfully shapes the outcome, the derogation is also unlikely to hold.

  • Document the assessment before market placement or service use.
  • Check whether the system only performs a narrow procedural or preparatory task.
  • Reject the derogation if the system materially influences the decision outcome.
  • Reject the derogation if the system performs profiling of natural persons.
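The derogation gate above can be expressed as a short rule check. Parameter names are assumptions for illustration; the legal test in Article 6 and its documentation duty are what actually govern:

```python
def derogation_available(performs_profiling,
                         materially_influences_decision,
                         narrow_task_only,
                         assessment_documented):
    """Sketch of the Annex III derogation gate in Article 6.

    Profiling of natural persons blocks the derogation outright.
    Otherwise the system must not materially influence the decision
    outcome, must fit one of the narrow procedural or preparatory task
    examples, and the assessment must be documented before market
    placement or putting into service. Parameter names are assumptions.
    """
    if performs_profiling:
        return False
    return (not materially_influences_decision
            and narrow_task_only
            and assessment_documented)
```

Note the ordering: profiling is a hard stop checked first, mirroring the way the Act removes the derogation for profiling systems regardless of how narrow the task otherwise looks.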
Recommended next step

Turn these high risk use cases into an operational assessment

Assessment Autopilot can turn this guidance on EU AI Act (Regulation (EU) 2024/1689) high risk use cases into a repeatable review workflow inside Sorena. Teams can keep owners, evidence, and next steps aligned without copying this guide into separate documents.


Related guides

EU AI Act Applicability and Roles | Provider, Deployer, Importer Guide
Determine whether the EU AI Act applies, when output used in the Union brings a system into scope, and how to assign provider, deployer, importer.
EU AI Act Applicability Test | Scope, Role, and Obligation Routing
Run a practical EU AI Act applicability test that checks scope, exclusions, operator role, prohibited practices, high risk status, transparency triggers.
EU AI Act Checklist | Practical Compliance Checklist by Obligation
Use a detailed EU AI Act checklist covering inventory, role mapping, Article 5 screening, high risk controls, Article 50 disclosures, GPAI evidence, logging.
EU AI Act Compliance Program | Build an Operational AI Act Program
Build an EU AI Act compliance program that covers inventory, governance, AI literacy, prohibited practice gates, high risk controls, Article 50 product work.
EU AI Act Deadlines and Compliance Calendar | Exact Dates and Workplan
Track the exact EU AI Act dates, including entry into force on 1 August 2024, early obligations from 2 February 2025, GPAI obligations from 2 August 2025.
EU AI Act FAQ | Dates, High Risk, GPAI, Transparency, and Penalties
Get grounded answers to common EU AI Act questions on application dates, high risk status, provider versus deployer roles, transparency.
EU AI Act GPAI and Foundation Model Obligations | Chapter V Guide
Understand EU AI Act obligations for general purpose AI model providers, including Article 53 documentation, copyright policy.
EU AI Act High Risk Requirements Checklist | Articles 9 to 15 and Beyond
Use a detailed high risk AI checklist covering Article 9 risk management, Article 10 data governance, Annex IV technical documentation, logging, instructions.
EU AI Act Penalties and Fines | Article 99 and GPAI Fine Exposure
Understand EU AI Act penalty tiers, including Article 5 fines up to EUR 35,000,000 or 7 percent.
EU AI Act Prohibited AI Practices | Article 5 Screening Guide
Screen AI systems against EU AI Act Article 5 prohibited practices, including manipulative and deceptive techniques, exploitation of vulnerabilities.
EU AI Act Requirements | Prohibited, High Risk, Transparency, and GPAI
Get a grounded overview of EU AI Act requirements across Article 5 prohibited practices, Article 6 and Annex III high risk systems.
EU AI Act Timeline and Phasing Roadmap | Practical Implementation Roadmap
Follow a practical EU AI Act roadmap that aligns workstreams to the phased application dates for prohibited practices, AI literacy, GPAI obligations.
EU AI Act Transparency, Labeling, and User Disclosures | Article 50 Guide
Implement EU AI Act Article 50 transparency duties for direct interaction notices, machine readable marking of synthetic outputs, deepfake disclosures.
EU AI Act vs ISO 42001 | What ISO 42001 Covers and What It Does Not
Compare the EU AI Act with ISO/IEC 42001:2023. Learn where ISO 42001 helps with AI policy, roles, risk assessment, impact assessment, documented information.
EU AI Act vs NIST AI RMF | How to Use AI RMF Without Missing AI Act Duties
Compare the EU AI Act with NIST AI RMF 1.0. Learn how the voluntary NIST AI RMF functions Govern, Map, Measure.