---
title: "EU AI Act Compliance Decision Map: Scope, Risk Class, GPAI, and Evidence Workflow"
canonical_url: "https://www.sorena.io/artifacts/eu-ai-act-compliance-decision-map"
source_url: "https://www.sorena.io/artifacts/eu-ai-act-compliance-decision-map"
author: "Sorena AI"
description: "Advanced EU AI Act Compliance Decision Map for Regulation (EU) 2024/1689 implementation: scope and role tests, prohibited practices, high-risk classification."
keywords:
  - "EU AI Act Compliance Decision Map compliance"
  - "EU AI Act Compliance Decision Map guide"
  - "EU AI Act Compliance Decision Map checklist"
  - "EU AI Act Compliance Decision Map requirements"
  - "EU AI Act Compliance Decision Map template"
  - "EU AI Act compliance roadmap"
  - "Regulation EU 2024 1689 implementation"
  - "Article 5 prohibited AI practices"
  - "Annex III high-risk AI systems"
  - "Article 50 AI transparency requirements"
  - "GPAI model obligations Article 53"
  - "systemic risk model obligations Article 55"
  - "Regulation (EU) 2024/1689"
  - "High-risk AI"
  - "EU AI Office"
---
**[SORENA](https://www.sorena.io/)** - AI-Powered GRC Platform


---

# EU AI Act Compliance Decision Map: Scope, Risk Class, GPAI, and Evidence Workflow

Advanced EU AI Act Compliance Decision Map for Regulation (EU) 2024/1689 implementation: scope and role tests, prohibited practices, high-risk classification.

![EU AI Act Compliance Decision Map by Sorena AI](https://cdn.sorena.io/cheatsheets/sorena-ai-ai-act-cheatsheet-small.jpg)


## EU AI Act Compliance Decision Map

This EU AI Act Compliance Decision Map helps teams convert legal interpretation into implementation actions. Confirm scope and role, test for prohibited practices, classify high-risk AI systems, and identify transparency and GPAI obligations, each with a clear outcome and owner.

Grounded in Regulation (EU) 2024/1689, Commission GPAI scope guidelines (C(2025) 5045 final), AI Office implementation materials, and GPAI code of practice resources.

[Create my custom view](/solutions/assessment.md)

## What you can decide faster

- **Applicability and role**: Determine whether the AI Act applies and identify provider, deployer, importer, distributor, or product manufacturer duties.
- **Risk and controls**: Classify prohibited, high-risk, limited-risk, or minimal-risk use and map obligations to technical and governance controls.
- **GPAI and systemic risk**: Map Chapter V obligations, transparency outputs, serious incident reporting, and risk mitigation requirements.

By Sorena AI | Updated Mar 2026 | No sign-up required

**Key highlights:** Scope and role first | Risk class to controls | Evidence-ready program

## Primary sources

- [Regulation (EU) 2024/1689 (Artificial Intelligence Act)](https://eur-lex.europa.eu/eli/reg/2024/1689/oj?ref=sorena.io) - Primary legal text for AI Act obligations, risk categories, governance, and enforcement architecture.
- [Commission Guidelines on GPAI obligations scope (C(2025) 5045 final)](https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act?ref=sorena.io) - Clarifies GPAI model/provider scope, systemic-risk triggers, exemptions, and enforcement expectations.
- [General-Purpose AI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai?ref=sorena.io) - Voluntary implementation framework covering transparency, copyright, and safety/security practices for GPAI providers.
- [Template for Public Summary of Training Content](https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models?ref=sorena.io) - Commission materials supporting Article 53(1)(d) public summary obligations for GPAI providers.
- [Serious incident reporting template for GPAI models with systemic risk](https://digital-strategy.ec.europa.eu/en/library/ai-act-commission-publishes-reporting-template-serious-incidents-involving-general-purpose-ai-models-systemic-risk?ref=sorena.io) - Operational reference for serious incident reporting under systemic-risk GPAI obligations.

## Compliance quick-start

*Regulation (EU) 2024/1689*

| Reference | Step | Detail |
| --- | --- | --- |
| Art. 5 | Prohibited practices | Identify banned use cases early and stop non-permitted deployment paths. |
| Annex III | High-risk classification | Map lifecycle controls, risk management, technical documentation, and conformity requirements. |
| Art. 53/55 | GPAI duties | Implement model transparency, copyright policy, and systemic-risk measures where applicable. |

Use the map to connect legal obligations, engineering controls, and evidence-ready governance outputs.

| Date | Milestone |
| --- | --- |
| 1 Aug 2024 | Entry into force |
| 2 Feb 2025 | Prohibitions apply |
| 2 Aug 2025 | GPAI obligations |
| 2 Aug 2026 | Main application |

## Key dates for EU AI Act implementation

*AI Act Timeline*

Track phased application and implementation checkpoints so product, legal, risk, and security teams can sequence delivery with shared assumptions.

## Decide faster what the Act means for your system or model

*AI Act Decision Map*

Follow yes/no branching from scope to a clear execution path: prohibition handling, high-risk lifecycle controls, transparency implementation, or GPAI compliance operations.

*Go further*

## Turn this decision map into a complete AI Act execution program

Use the EU AI Act deep-dive pages to move from decision-map outcomes to implementation work items, templates, and evidence workflows.

- Prioritise high-risk and prohibited-use remediation with accountable owners and milestones
- Operationalise Article 50 transparency and user disclosure controls across product flows
- Implement GPAI documentation, public summary, and incident reporting workflows
- Build requirement-to-evidence traceability for governance reviews, partners, and audits

- [Open requirements deep dive](/artifacts/eu/ai-act/requirements.md): Map AI Act obligations to controls, owners, and evidence outputs.
- **Download Decision Map**: Share classification and obligation logic with product, legal, security, and compliance teams.
- **Download Timeline**: Share phased dates and planning checkpoints across teams.
- [Open compliance checklist](/artifacts/eu/ai-act/checklist.md): Use an audit-ready checklist with acceptance criteria and evidence expectations.
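The traceability item above can start as a simple requirement-to-evidence register. A minimal Python sketch, assuming a hypothetical register layout (article → control, owner, evidence); the entries are illustrative examples, not a complete obligation inventory:

```python
# Hypothetical requirement-to-evidence register (illustrative entries only).
TRACEABILITY = {
    "Art. 9":  {"control": "risk management process", "owner": "risk",
                "evidence": "risk register + review minutes"},
    "Art. 11": {"control": "technical documentation", "owner": "engineering",
                "evidence": "Annex IV documentation pack"},
    "Art. 14": {"control": "human oversight design", "owner": "product",
                "evidence": "oversight spec + UI review"},
    "Art. 50(1)": {"control": "AI-interaction disclosure", "owner": "product",
                   "evidence": "screenshots + copy review"},
}

def missing_evidence(register: dict) -> list[str]:
    """List obligations whose evidence field is still empty."""
    return [art for art, row in register.items() if not row.get("evidence")]
```

A periodic check like `missing_evidence` flags obligations without captured evidence ahead of governance reviews or audits.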

## Decision Steps

### STEP 1: Does the EU AI Act apply to your organisation and AI activity?

*Reference: Art. 2(1)*

- Providers placing on the market / putting into service AI systems or placing GPAI models on the Union market
- Deployers established or located in the Union
- Non-EU providers/deployers where the output is used in the Union
- Importers and distributors of AI systems; product manufacturers placing AI systems with their products under their name/trademark; authorised representatives of non-EU providers
- Affected persons located in the Union (rights and safeguards)

- **NO** → EU AI Act likely does not apply
- **YES** → Does your AI system or use-case fall under a prohibited AI practice?

### STEP 2: Does your AI system or use-case fall under a prohibited AI practice?

*Reference: Art. 5*

- Manipulative/deceptive techniques materially distorting behaviour causing (or likely causing) significant harm (Art. 5(1)(a))
- Exploitation of vulnerabilities (age/disability/socio-economic) causing (or likely causing) significant harm (Art. 5(1)(b))
- Social scoring leading to detrimental/unjustified or disproportionate treatment (Art. 5(1)(c))
- Criminal offence risk assessment of individuals based solely on profiling/personality traits (Art. 5(1)(d))
- Untargeted scraping to build or expand facial recognition databases (Art. 5(1)(e))
- Emotion recognition in workplace/education (narrow medical/safety exception) (Art. 5(1)(f))
- Biometric categorisation inferring sensitive attributes (Art. 5(1)(g))
- Real-time remote biometric ID in public spaces for law enforcement (narrow exceptions + safeguards) (Art. 5(1)(h))

- **YES** → Stop: prohibited AI practice
- **NO** → Are you placing a General-Purpose AI (GPAI) model on the Union market as a provider?

### STEP 3: Are you placing a General-Purpose AI (GPAI) model on the Union market as a provider?

*Reference: Art. 2(1)(a); Art. 3(3),(63)*

- GPAI model = broadly capable + integrable into downstream systems
- If you are only using a third-party model, continue on the AI-system path

- **YES** → Is it a GPAI model with systemic risk?
- **NO** → Is your AI system classified as high-risk?

### SYSTEMIC RISK: Is it a GPAI model with systemic risk?

- Systemic risk models have additional obligations (Art. 55)
- Commission can designate models ex officio; list is published and kept up to date (Art. 52(4)-(6))
- Commission Guidelines (C(2025) 5045 final) provide practical examples and classification guidance

- **YES** → GPAI systemic-risk obligations
- **NO** → GPAI model provider obligations

### STEP 4: Is your AI system classified as high-risk?

*Reference: Art. 6; Annex I; Annex III*

- High-risk if safety component/product under Annex I + third-party conformity assessment (Art. 6(1))
- High-risk if listed in Annex III (Art. 6(2))
- Annex III derogation: a system may be treated as not high-risk if it poses no significant risk and performs a narrow task (Art. 6(3))
- An Annex III system is always high-risk if it performs profiling of natural persons (Art. 6(3))
- If you claim an Annex III system is not high-risk: document the assessment and register the system (Art. 6(4); Art. 49(2))

- **YES** → Are you the provider of the high-risk AI system (or did you become one via modifications/rebranding)?
- **NO** → Does your AI system trigger AI Act transparency obligations?

### STEP 5: Are you the provider of the high-risk AI system (or did you become one via modifications/rebranding)?

*Reference: Art. 3(3); Art. 16; Art. 25*

- Providers carry conformity assessment, CE marking, documentation and QMS duties
- Importers/distributors/deployers may become providers if they rebrand, substantially modify, or change intended purpose

- **YES** → High-risk AI provider obligations
- **NO** → High-risk AI deployer obligations

### STEP 6: Does your AI system trigger AI Act transparency obligations?

*Reference: Art. 50*

- AI systems that interact with people (chatbots/assistants) must disclose the interaction (Art. 50(1))
- Generative AI outputs must be marked as AI-generated/manipulated (Art. 50(2))
- Deployers of emotion recognition or biometric categorisation must inform exposed persons (Art. 50(3))
- Deepfakes and public-interest AI-generated text have disclosure duties (Art. 50(4))

- **YES** → Transparency obligations apply
- **NO** → No high-risk / transparency trigger found
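The six decision steps above can be condensed into one branching function for engineering triage. A minimal Python sketch, assuming boolean answers to each step; the flag and outcome names are our own shorthand for the decision-map labels, and real classification still requires the full Art. 5 and Art. 6 tests (this is a planning aid, not legal advice):

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    union_nexus: bool          # Step 1 / Art. 2(1): Union market or output used in the EU
    prohibited_practice: bool  # Step 2 / Art. 5: any of the eight banned practices
    gpai_provider: bool        # Step 3: placing a GPAI model on the Union market
    systemic_risk: bool        # Systemic-risk branch / Art. 51-52
    high_risk: bool            # Step 4 / Art. 6 + Annex I/III (after the Art. 6(3) test)
    is_provider: bool          # Step 5 / Art. 3(3), Art. 25 (incl. via rebranding)
    transparency_trigger: bool # Step 6 / Art. 50: chatbot, generative output, etc.

def outcome(a: Assessment) -> str:
    """Map yes/no answers to the decision-map outcome labels."""
    if not a.union_nexus:
        return "OUT OF SCOPE"
    if a.prohibited_practice:
        return "PROHIBITED"
    if a.gpai_provider:
        return "GPAI (SYSTEMIC RISK)" if a.systemic_risk else "GPAI"
    if a.high_risk:
        return "HIGH-RISK (PROVIDER)" if a.is_provider else "HIGH-RISK (DEPLOYER)"
    if a.transparency_trigger:
        return "TRANSPARENCY"
    return "BASELINE"
```

For example, a non-systemic GPAI model placed on the Union market resolves to the GPAI provider outcome; a chatbot that is neither prohibited nor high-risk resolves to TRANSPARENCY.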

## Reference Information

### Scope & Exclusions (Quick)

- Excludes: national security; exclusively military/defence/national security use (Art. 2(3))
- Excludes: AI systems/models developed solely for scientific R&D (Art. 2(6))
- Excludes: personal non-professional use by natural persons (Art. 2(10))
- Open-source exception (limited): free and open-source AI systems are out of scope unless they are high-risk or fall under Art. 5 or Art. 50 (Art. 2(12))
- Sectoral EU product/consumer laws still apply (Art. 2(9))

### Key Roles & Definitions

- Provider: develops (or has developed) and places on market/puts into service under own name/trademark (Art. 3(3))
- Deployer: uses an AI system under its authority (Art. 3(4))
- GPAI model: capable of performing a wide range of distinct tasks; integrable downstream (Art. 3(63))
- Systemic risk: high-impact GPAI risk that can propagate at scale (Art. 3(65))
- Value chain: modifications or rebranding can make you the provider (Art. 25)

### Baseline Obligation: AI Literacy

- Providers and deployers must take measures to ensure sufficient AI literacy of staff and other persons operating AI systems on their behalf (Art. 4)
- Consider technical knowledge, experience, education/training, and the context of use
- Applies even where your system is not high-risk

### Governance & Authorities

- AI Office (Union level): implementation/monitoring for AI systems and GPAI models (Art. 64; Def. Art. 3(47))
- European AI Board: Union-level governance and coordination (Art. 65)
- Advisory forum + scientific panel support implementation (Arts. 67-68)
- National competent authorities + single point of contact (Art. 70)

### Commission GPAI Guidelines (Scope)

- C(2025) 5045 final (18 July 2025): scope of Chapter V obligations
- Examples: what counts as a GPAI model; lifecycle; who is a provider placing on market
- Open-source exemptions: conditions on licence, lack of monetisation, public availability of parameters
- Annex: training compute estimation (for classification guidance)
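The guidelines' annex concerns training-compute estimation because Art. 51(2) presumes high-impact capabilities (and so systemic risk) above 10^25 cumulative training FLOPs. A minimal sketch using the common "6 × parameters × tokens" community heuristic, which is an approximation and not the Commission's prescribed method:

```python
# Common first-order heuristic: ~6 FLOPs per parameter per training token.
# This "6ND" estimate is a community approximation, not the Commission's method.
def training_flops_estimate(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Art. 51(2): presumption of high-impact capabilities above 1e25 training FLOPs.
SYSTEMIC_RISK_THRESHOLD = 1e25

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops_estimate(params, tokens) > SYSTEMIC_RISK_THRESHOLD
```

For example, a 70-billion-parameter model trained on 15 trillion tokens estimates to roughly 6.3e24 FLOPs, below the presumption threshold; the guidelines' annex remains the authoritative reference for actual classification.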

### GPAI Code of Practice

- Voluntary tool to demonstrate compliance (Arts. 53(4), 55(2), 56)
- Chapters: Transparency, Copyright, Safety & Security
- Includes templates (e.g., Model Documentation Form) and practical measures

### Templates & Reporting (GPAI)

- Public Summary of Training Content template (Art. 53(1)(d))
- Model Documentation Form template (Code of Practice - Transparency chapter)
- Serious incidents reporting template (Art. 55(1)(c))

### Annex III (High-Risk Areas)

- Biometrics (remote ID, sensitive categorisation, emotion recognition)
- Critical infrastructure (as safety components)
- Education/vocational training (admission, evaluation, monitoring tests)
- Employment/workers management (recruitment, monitoring, termination, task allocation)
- Essential services/benefits (credit scoring, insurance pricing, emergency call triage)
- Law enforcement; migration/asylum/border; justice & democratic processes

### Annex III Derogation (Not High-Risk Claims)

- Annex III system can only be treated as not high-risk if it does not pose a significant risk of harm (incl. not materially influencing decision outcomes) and fits a narrow-task condition (Art. 6(3))
- Always high-risk if it performs profiling of natural persons (Art. 6(3))
- Provider must document the assessment and is subject to registration (Art. 6(4); Art. 49(2))
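The derogation logic above can be sketched as a guard function. Note that Art. 6(3) actually lists four alternative conditions (narrow procedural task, improving a prior human activity, detecting decision patterns, preparatory task); the sketch collapses them into a single flag for brevity:

```python
def annex_iii_derogation_available(
    performs_profiling: bool,
    significant_risk_of_harm: bool,
    narrow_task_condition: bool,  # simplification of the four Art. 6(3) conditions
) -> bool:
    """Art. 6(3) sketch: an Annex III system may be treated as not high-risk
    only if it poses no significant risk AND meets a narrow-task condition;
    profiling of natural persons always stays high-risk."""
    if performs_profiling:
        return False
    return (not significant_risk_of_harm) and narrow_task_condition
```

Even where the derogation is available, the provider must still document the assessment and register the system (Art. 6(4); Art. 49(2)).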

### Section 2 Requirements (High-Risk)

- Risk management system (Art. 9)
- Data & data governance (Art. 10)
- Technical documentation (Art. 11; Annex IV)
- Record-keeping/logging (Arts. 12 and 19)
- Transparency + instructions for use (Art. 13)
- Human oversight (Art. 14)
- Accuracy, robustness, cybersecurity (Art. 15)

### Responsibilities Along the Value Chain

- If you rebrand, substantially modify, or change intended purpose you can become the provider (Art. 25(1))
- Initial provider must cooperate and provide required info/technical access (Art. 25(2))
- Supplier/provider agreements should allocate info/access needed for compliance (Art. 25(4))

### Notified Bodies & Conformity Assessment

- Member States designate notifying authorities (Art. 28)
- Notified bodies must meet independence and competence requirements (Art. 31)
- Identification numbers and lists of notified bodies (Art. 35)
- Use notified bodies where required by the conformity route (Art. 43 context)

### Harmonised Standards & Common Specifications

- Applying harmonised standards can create a presumption of conformity for covered AI Act requirements/obligations (Art. 40(1))
- If harmonised standards are missing or insufficient, the Commission may adopt common specifications (Art. 41)
- Standards/common specs can reduce ambiguity for documentation, testing, and conformity assessment routes (Art. 40-43 context)

### Fundamental Rights Impact Assessment (FRIA)

- Required before deploying certain high-risk AI systems (Art. 27(1))
- Describe use context, affected groups, risks, human oversight, and mitigations (Art. 27(1)(a)-(f))
- Update when changes occur; notify market surveillance authority with a template (Art. 27(2)-(5))

### Code of Practice (AI-generated Content)

- AI Office-led code of practice supports Art. 50(2) and (4) compliance
- Working group 1 (providers): machine-readable marking + robustness/interoperability
- Working group 2 (deployers): disclosure of deepfakes and other transparency duties
- Drafting timeline targets readiness before Art. 50 obligations apply

## Possible Outcomes

### [PROHIBITED] Stop: prohibited AI practice

Do not place on the market / put into service / use in the Union

- Re-design the system and/or intended purpose to remove the prohibited practice
- Assess if a different use-case or safeguards move you out of Art. 5
- Document the decision and seek legal review for edge cases (e.g., law enforcement exceptions)

### [GPAI] GPAI model provider obligations

Documentation, downstream transparency, copyright policy, and training-content summary

- Technical documentation for AI Office / authorities (Art. 53(1)(a); Annex XI)
- Information for downstream system providers (Art. 53(1)(b); Annex XII)
- Copyright compliance policy incl. rights reservations (Art. 53(1)(c))
- Publish public summary of training content (Art. 53(1)(d))

### [GPAI (SYSTEMIC RISK)] GPAI systemic-risk obligations

Art. 53 + additional systemic-risk controls

- Model evaluation + adversarial testing to identify/mitigate systemic risks (Art. 55(1)(a))
- Assess and mitigate systemic risks at Union level (Art. 55(1)(b))
- Report serious incidents + corrective measures to AI Office without undue delay (Art. 55(1)(c))
- Ensure adequate cybersecurity for the model and supporting infrastructure (Art. 55(1)(d))

### [HIGH-RISK (PROVIDER)] High-risk AI provider obligations

Requirements + conformity assessment + registration + post-market monitoring

- Meet Section 2 requirements (risk mgmt, data governance, logs, transparency, human oversight, cybersecurity)
- Quality management system (Art. 17) + technical documentation (Art. 11; Annex IV)
- Conformity assessment (Art. 43) + EU DoC (Art. 47) + CE marking (Art. 48)
- Register in EU database (Art. 49) and run post-market monitoring (Art. 72) + incident reporting (Art. 73)

### [HIGH-RISK (DEPLOYER)] High-risk AI deployer obligations

Use per instructions, human oversight, monitoring, FRIA (some cases), and transparency to affected persons

- Use per provider instructions + assign competent human oversight (Art. 26(1)-(3))
- Input data quality under your control (Art. 26(4))
- Monitor + suspend and notify for risks/serious incidents (Art. 26(5))
- Inform individuals subject to Annex III decisioning systems (Art. 26(11)); perform FRIA where applicable (Art. 27)

### [TRANSPARENCY] Transparency obligations apply

Disclose AI interaction, label AI-generated content, and handle deepfakes

- Inform users when they interact with an AI system unless obvious (Art. 50(1))
- Mark synthetic content outputs machine-readably and detectably (Art. 50(2))
- Inform people exposed to emotion recognition or biometric categorisation systems (Art. 50(3))
- Disclose deepfakes; disclose AI-generated public-interest text unless editorial control applies (Art. 50(4))
- Provide info clearly and accessibly by first interaction/exposure (Art. 50(5))
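The Art. 50 triggers above can be expressed as a feature-to-duty checklist. A minimal sketch; the parameter names are our own simplification of the statutory conditions and exceptions:

```python
def art50_duties(
    interacts_with_persons: bool,
    generates_synthetic_content: bool,
    emotion_or_biometric_categorisation: bool,
    deepfake_or_public_interest_text: bool,
) -> list[str]:
    """Illustrative mapping of system features to Art. 50 disclosure duties."""
    duties = []
    if interacts_with_persons:
        duties.append("Art. 50(1): disclose AI interaction unless obvious")
    if generates_synthetic_content:
        duties.append("Art. 50(2): machine-readable marking of outputs")
    if emotion_or_biometric_categorisation:
        duties.append("Art. 50(3): inform exposed persons")
    if deepfake_or_public_interest_text:
        duties.append("Art. 50(4): disclose deepfakes / AI-generated text")
    return duties
```

Each returned duty also carries the Art. 50(5) requirement to provide the information clearly and accessibly by first interaction or exposure.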

### [BASELINE] No high-risk / transparency trigger found

Still in scope: keep core controls and monitor future classification changes

- Maintain AI literacy measures (Art. 4)
- Re-check classification when intended purpose, autonomy, or context changes
- Track Commission guidelines and standards: Annex III use cases and Art. 50 list can evolve

### [OUT OF SCOPE] EU AI Act likely does not apply

No Union nexus under Art. 2(1) (or an exclusion applies)

- Document why you are out of scope (facts + legal basis)
- Re-assess if output becomes used in the Union or you place systems/models on the Union market
- Other laws (GDPR, product safety, sector rules) may still apply

## EU AI Act Timeline

| Date | Event | Reference |
| --- | --- | --- |
| 2024-07-12 | AI Act published in Official Journal (OJ L) | Reg. (EU) 2024/1689 |
| 2024-08-01 | AI Act enters into force (20 days after publication) | Art. 113 |
| 2025-02-02 | Chapters I (General provisions) and II (Prohibited practices) apply | Art. 113(a) |
| 2025-05-02 | Codes of practice for GPAI should be ready (latest) | Art. 56(9) |
| 2025-08-02 | GPAI obligations + governance + notified bodies + penalties apply | Art. 113(b) |
| 2026-02-02 | Commission provides Art. 6 high-risk classification guidelines (latest) | Art. 6(5) |
| 2026-08-02 | AI Act applies (general application date) | Art. 113 |
| 2027-08-02 | Art. 6(1) (Annex I product/safety-component high-risk) + corresponding obligations apply | Art. 113(c) |
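The phased application dates in the table above can drive a small planning helper. A minimal sketch covering the Art. 113 milestones only; the labels are our own shorthand:

```python
from datetime import date

# Phased application dates per Art. 113 (labels are our own shorthand).
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "Chapters I-II apply: prohibitions (Art. 5), AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI obligations, governance, notified bodies, penalties"),
    (date(2026, 8, 2), "general application (most obligations)"),
    (date(2027, 8, 2), "Art. 6(1) Annex I high-risk obligations"),
]

def milestones_in_force(on: date) -> list[str]:
    """Return the milestone labels already applicable on a given date."""
    return [label for d, label in MILESTONES if d <= on]
```

For instance, querying a date in March 2025 returns the entry-into-force and prohibitions milestones but not yet the GPAI obligations.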

## Compliance Timeline

| Date | Event | Category |
| --- | --- | --- |
| 2024-01-24 | Commission Decision establishes the European AI Office | Notified bodies & governance |
| 2024-07-12 | AI Act published in the Official Journal | Legislative milestones |
| 2024-08-01 | AI Act enters into force | Legislative milestones |
| 2025-02-02 | Chapters I and II apply (including prohibited AI practices) | Prohibitions |
| 2025-07-10 | General-Purpose AI Code of Practice published | GPAI |
| 2025-07-18 | Commission adopts guidelines on GPAI obligations scope | GPAI |
| 2025-08-02 | GPAI obligations and governance provisions apply | GPAI |
| 2025-09-01 | Consultation to develop guidelines and a Code of Practice (transparent AI systems) | Transparency & labelling |
| 2025-09-26 | Consultation on serious AI incident reporting interplay | Incident reporting & post-market |
| 2025-10-01 | Chairs and vice-chairs selection | Transparency & labelling |
| 2025-11-04 | Reporting template for serious incidents (GPAI systemic risk) published | Incident reporting & post-market |
| 2025-11-05 | Kick-off plenary (start of 1st drafting round) | Transparency & labelling |
| 2025-11-17 | 1st working group meetings | Transparency & labelling |
| 2025-12-05 | Template published for public summary of GPAI training content | GPAI |
| 2025-12-17 | First draft published | Transparency & labelling |
| 2026-01-12 | Working group meetings (start of 2nd drafting round) | Transparency & labelling |
| 2026-01-21 | Workshops (working groups 1 and 2) | Transparency & labelling |
| 2026-03-01 | Second draft published (TBC) | Transparency & labelling |
| 2026-04-01 | Working group meetings (TBC) | Transparency & labelling |
| 2026-05-01 | Closing plenary and final Code of Practice published | Transparency & labelling |
| 2026-08-02 | AI Act applies (main obligations start) | Legislative milestones |
| 2026-08-02 | Commission enforcement powers for GPAI enter into application | GPAI |
| 2027-08-02 | Article 6(1) and corresponding obligations apply | High-risk AI |
| 2027-08-02 | Existing GPAI providers must comply by this date | GPAI |

**Event details:**

- **2024-01-24 - Commission Decision establishes the European AI Office**: 24 January 2024: European Commission publishes the decision establishing the European AI Office.
- **2024-07-12 - AI Act published in the Official Journal**: 12 July 2024: Regulation (EU) 2024/1689 is published in the Official Journal (OJ L, 12.7.2024).
- **2024-08-01 - AI Act enters into force**: 1 August 2024: The EU AI Act enters into force (20 days after publication).
- **2025-02-02 - Chapters I and II apply (including prohibited AI practices)**: 2 February 2025: Chapters I and II apply under the AI Act entry into force and application rules.
- **2025-07-10 - General-Purpose AI Code of Practice published**: 10 July 2025: The General-Purpose AI (GPAI) Code of Practice is published as a voluntary tool to help providers meet AI Act obligations.
- **2025-07-18 - Commission adopts guidelines on GPAI obligations scope**: 18 July 2025: Commission finalises its guidelines on the scope of obligations for general-purpose AI models (C(2025) 5045 final).
- **2025-08-02 - GPAI obligations and governance provisions apply**: 2 August 2025: Chapter V (general-purpose AI) and selected governance provisions start to apply (per Article 113).
- **2025-09-01 - Consultation to develop guidelines and a Code of Practice (transparent AI systems)**: September 2025: Consultation to develop guidelines and a Code of Practice on transparent AI systems, plus a call for expression of interest to participate.
- **2025-09-26 - Consultation on serious AI incident reporting interplay**: 26 September 2025: Consultation referenced alongside serious incident reporting guidance and templates for AI incidents.
- **2025-10-01 - Chairs and vice-chairs selection**: October 2025: Eligibility checks and selection of applications for chairs and vice-chairs.
- **2025-11-04 - Reporting template for serious incidents (GPAI systemic risk) published**: 4 November 2025: Commission publishes a reporting template for serious incidents involving general-purpose AI models with systemic risk.
- **2025-11-05 - Kick-off plenary (start of 1st drafting round)**: 5 November 2025: Kick-off plenary; start of the first drafting round.
- **2025-11-17 - 1st working group meetings**: 17-18 November 2025: First working group meetings.
- **2025-12-05 - Template published for public summary of GPAI training content**: 5 December 2025: Commission publishes an explanatory notice and a template for the public summary of training content (Article 53(1)(d)).
- **2025-12-17 - First draft published**: 17 December 2025: Publication of the first draft.
- **2026-01-12 - Working group meetings (start of 2nd drafting round)**: 12 and 14 January 2026: Working group meetings; start of the second drafting round.
- **2026-01-21 - Workshops (working groups 1 and 2)**: 21-22 January 2026: Workshops for working groups 1 and 2.
- **2026-03-01 - Second draft published (TBC)**: March 2026 (TBC): Publication of the second draft; start of the final drafting round.
- **2026-04-01 - Working group meetings (TBC)**: April 2026 (TBC): Working group meetings.
- **2026-05-01 - Closing plenary and final Code of Practice published**: May-June 2026: Closing plenary; publication of the final Code of Practice.
- **2026-08-02 - AI Act applies (main obligations start)**: 2 August 2026: The AI Act applies in general (per Article 113).
- **2026-08-02 - Commission enforcement powers for GPAI enter into application**: 2 August 2026: Commission enforcement powers for obligations on providers of GPAI models enter into application (including fines).
- **2027-08-02 - Article 6(1) and corresponding obligations apply**: 2 August 2027: Article 6(1) and corresponding obligations apply (per Article 113).
- **2027-08-02 - Existing GPAI providers must comply by this date**: By 2 August 2027: Providers of GPAI models placed on the market before 2 August 2025 must comply, per Commission guidance.


---

[Privacy Policy](https://www.sorena.io/privacy) | [Terms of Use](https://www.sorena.io/terms-of-use) | [DMCA](https://www.sorena.io/dmca) | [About Us](https://www.sorena.io/about-us)

(c) 2026 Sorena AB (559573-7338). All rights reserved.

