EU AI Act Timeline: Key Dates Every Company Must Know (2024–2031)
The EU AI Act does not apply all at once. Obligations roll out in phases from 2024 to 2031. This guide maps every deadline by date and by risk tier so you know exactly what applies to you and when.
The AI Act Does Not Apply All at Once
One of the most common mistakes companies make is treating the EU AI Act as a single compliance deadline. It is not. The regulation uses a phased implementation structure spread over seven years, with different provisions becoming applicable on different dates.
This matters practically: your compliance priority depends not only on your risk level but on where you stand in the timeline right now.
The Master Timeline at a Glance
| Date | What Applies |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices banned · AI literacy obligations |
| 2 May 2025 | GPAI Code of Practice must be finalised |
| 2 August 2025 | GPAI model obligations · Governance structure · Penalty provisions |
| 2 February 2026 | Commission must publish high-risk classification guidelines |
| 2 August 2026 | Full high-risk framework · Full enforcement powers for high-risk systems · Regulatory sandboxes |
| 2 August 2027 | Article 6(1) high-risk products (Annex I) · Legacy GPAI compliance deadline |
| 2 August 2028 | First Commission evaluation of AI Office and codes of conduct |
| 2 August 2030 | High-risk AI systems used by public authorities must comply |
| 31 December 2030 | Large-scale EU IT systems (Schengen, Eurodac, etc.) final deadline |
| 2 August 2031 | Full enforcement assessment |
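For teams building an internal compliance tracker, the table above is easy to encode as data. The following is a minimal sketch, assuming you only need to know which milestones have already passed on a given date; the dates and labels come from the table, while the function and variable names are illustrative, not part of any official tooling.

```python
from datetime import date

# The AI Act milestones from the table above, as (deadline, label) pairs.
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices ban; AI literacy"),
    (date(2025, 8, 2), "GPAI obligations; governance; penalties"),
    (date(2026, 2, 2), "Commission high-risk classification guidelines"),
    (date(2026, 8, 2), "Full high-risk framework; Article 50 transparency"),
    (date(2027, 8, 2), "Annex I products; legacy GPAI compliance"),
    (date(2030, 8, 2), "Public-sector high-risk systems"),
    (date(2030, 12, 31), "Large-scale EU IT systems"),
    (date(2031, 8, 2), "Full enforcement assessment"),
]

def applicable_milestones(today: date) -> list[str]:
    """Return the milestones whose deadline has already passed."""
    return [label for deadline, label in MILESTONES if deadline <= today]
```

For example, `applicable_milestones(date(2026, 5, 1))` returns the first four entries: as of May 2026, everything up to and including the February 2026 guidelines deadline has already applied.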
Each Deadline in Detail
1 August 2024 — Entry into Force
The AI Act (Regulation (EU) 2024/1689) was published in the Official Journal of the EU on 12 July 2024 and entered into force twenty days later. This date starts the clock on all subsequent deadlines. No obligations applied yet for most organisations — this was purely the legal birth of the regulation.
2 February 2025 — Prohibited Practices and AI Literacy
What became applicable:
Prohibited AI practices (Article 5) became enforceable. Any AI system that falls into one of the banned categories must have been taken off the EU market or discontinued by this date. The banned systems include:
- Subliminal manipulation systems
- Systems exploiting vulnerabilities of specific groups
- Social scoring systems (the ban covers private actors as well as public authorities)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)
- Emotion inference in workplaces and schools
- Facial image scraping for recognition databases
- Predicting the risk of an individual committing a crime based solely on profiling or personality traits
AI literacy obligations (Article 4) also became applicable. Providers and deployers must ensure their staff who work with AI systems have sufficient AI literacy: an understanding of AI capabilities, limitations, and the regulatory framework.
Why this matters now: If you are deploying any of the above in the EU, you are already in violation. The fine is up to €35 million or 7% of global annual turnover, whichever is higher.
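The "whichever is higher" rule means the €35 million figure is a floor on the maximum, not a cap: for large companies the 7% turnover branch dominates. A one-line sketch of the arithmetic (the function name is ours; actual fines are set case by case by the enforcing authority):

```python
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 5 fine: EUR 35 million or 7% of
    worldwide annual turnover, whichever is higher.
    Illustrative arithmetic only, not legal advice."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

A company with €1 billion turnover faces a ceiling of €70 million (7% exceeds €35 million), while one with €100 million turnover still faces the full €35 million ceiling.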
2 May 2025 — GPAI Code of Practice Finalised
The General-Purpose AI Code of Practice, the voluntary compliance framework for LLM and foundation model providers, was due to be finalised by this date. In practice the deadline slipped slightly: the Code was published on 10 July 2025 and endorsed by the European Commission.
Adherence to the Code creates a presumption of compliance with Articles 53 and 55 for GPAI providers. Non-signatories face greater regulatory scrutiny.
2 August 2025 — GPAI Rules and Governance
What became applicable:
- GPAI model obligations (Articles 51–56): all providers of general-purpose AI models on the EU market must comply with documentation, transparency, copyright, and (for systemic-risk models) red-teaming and incident reporting requirements
- AI Office becomes operational with its governance and supervisory functions
- Penalty provisions become applicable — fines can now be imposed for GPAI violations
- Member States must have designated their national competent authorities
- Notified bodies framework activated for conformity assessments
Who is affected: Any company providing an LLM, foundation model, or multimodal model to EU customers or through EU-based downstream providers.
2 February 2026 — Commission High-Risk Guidelines
The European Commission must publish guidelines clarifying the implementation of Article 6 (high-risk classification rules) and post-market monitoring requirements. These guidelines help organisations determine whether their system qualifies as high-risk, particularly for edge cases not explicitly addressed in Annex III.
2 August 2026 — Full High-Risk Framework
This is the most significant deadline for most businesses.
What becomes applicable:
- All obligations for high-risk AI systems (Articles 9–49): risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, conformity assessment, EU Declaration of Conformity, and registration
- Full enforcement powers for high-risk systems: national market surveillance authorities can now conduct investigations, impose fines, and require market withdrawals for high-risk system violations, while the AI Office retains exclusive oversight of GPAI models
- Transparency obligations (Article 50) for limited-risk systems: chatbots, deepfakes, AI-generated content labelling
- Deployer obligations (Article 26) fully applicable: fundamental rights impact assessments, human oversight implementation, incident reporting
Who is affected: Any provider or deployer of a high-risk AI system as defined in Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice and democratic processes).
Note: Systems placed on the market before 2 August 2026 that have not been substantially modified are not automatically required to comply — but any substantial modification triggers full compliance.
2 August 2027 — Annex I Products and Legacy GPAI
Two provisions become applicable:
Article 6(1): High-risk AI systems that are safety components of products covered by EU harmonisation legislation (Annex I — medical devices, machinery, aviation, motor vehicles, toys, etc.) must now comply. This is the provision that brings AI embedded in physical products under the AI Act.
Legacy GPAI models: General-purpose AI models placed on the market before 2 August 2025 must be compliant by this date. This gives earlier GPAI providers a two-year runway to bring existing deployments into compliance.
2030–2031 — Long-Tail Deadlines
2 August 2030: High-risk AI systems used by public sector bodies must be compliant.
31 December 2030: Large-scale EU IT systems (Eurodac, Schengen Information System, Entry/Exit System, etc.) must comply. These systems are covered by specific EU legislation and were given an extended runway.
2 August 2031: The Commission publishes a full evaluation report on the AI Act's functioning, including whether the AI Office is working effectively and whether any amendments are needed.
Timeline by Risk Tier
| Risk Tier | Key Deadline | What You Must Do |
|---|---|---|
| Prohibited | 2 February 2025 (past) | Discontinue or withdraw immediately |
| Limited risk | 2 August 2026 | Implement AI disclosure obligations |
| High risk (Annex III) | 2 August 2026 | Full compliance framework |
| High risk (Annex I products) | 2 August 2027 | Conformity assessment under product law |
| GPAI — new models | 2 August 2025 (past) | Art. 53–55 obligations |
| GPAI — legacy models | 2 August 2027 | Art. 53–55 obligations |
| Public sector high-risk | 2 August 2030 | Full compliance framework |
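The risk-tier table lends itself to the same treatment: a lookup from tier to deadline, plus a countdown. A minimal sketch, where the tier keys are our own shorthand for the rows above rather than terms from the Act:

```python
from datetime import date

# Risk-tier deadlines from the table above; tier names are illustrative.
TIER_DEADLINES = {
    "prohibited": date(2025, 2, 2),
    "limited_risk": date(2026, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
    "gpai_new": date(2025, 8, 2),
    "gpai_legacy": date(2027, 8, 2),
    "public_sector_high_risk": date(2030, 8, 2),
}

def days_remaining(tier: str, today: date) -> int:
    """Days until the tier's deadline; negative means it has passed."""
    return (TIER_DEADLINES[tier] - today).days
```

As of 1 May 2026, `days_remaining("high_risk_annex_iii", date(2026, 5, 1))` returns 93, while the prohibited-practices and new-GPAI deadlines come back negative: those obligations already apply.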
What You Should Be Doing Right Now (May 2026)
If you have not started your compliance process, here is where the urgency sits:
Immediate: Verify you are not running any prohibited AI practice (Article 5). If you are, stop immediately — enforcement is already active.
Urgent (deadline in three months, August 2026): If you operate a high-risk AI system under Annex III, your compliance framework must be in place by 2 August 2026. The full high-risk enforcement regime takes effect on that date.
Planning: If you provide a GPAI model, obligations have applied since August 2025. If you have not yet aligned with Articles 53–55, prioritise this now.
Monitoring: If your AI is embedded in a physical product (medical device, machinery, vehicle), the Article 6(1) deadline is August 2027 — but conformity assessments take time to conduct and notified bodies have limited capacity.
How DilAIg Keeps You on Track
Compliance is not a one-time event. Regulations update, guidelines are published, and your AI systems evolve. DilAIg's audit is designed to be re-run whenever your system changes — and re-audits after modifications are included in the platform.
The audit determines your risk tier, maps your applicable obligations to specific articles, and generates a prioritised action plan anchored to the timeline above. For high-risk systems, it then produces the mandatory documents required before the 2 August 2026 deadline.
Start your free AI Act audit →
FAQ: EU AI Act Timeline
Is the 2 August 2026 deadline hard?
Yes. From that date, the full enforcement regime for high-risk systems applies: market surveillance authorities can investigate, fine, and order withdrawals. No grace period has been announced for good-faith non-compliance, though the Commission has indicated that enforcement will initially focus on the most significant risks.
Do the deadlines apply to AI systems already on the market?
Partially. Systems placed on the market before the relevant deadline and not substantially modified benefit from a transition period. But "substantial modification" is defined broadly — a significant change to intended purpose, risk level, or technical architecture triggers full compliance.
What counts as a "substantial modification"?
The AI Act defines it as a change that affects the system's compliance status or alters its risk level. The Commission's guidelines (due February 2026) will provide further clarity. In practice, retraining on new data, adding new use cases, or changing the target population are likely to qualify.
Do the GPAI deadlines apply to open-source models?
Partially. Open-weight models under free licences are exempt from documentation and downstream information obligations (Article 53(1)(a) and (b)) — but not if the model is also classified as having systemic risk. The August 2025 deadline applied to all GPAI providers, including open-source.
Key Takeaways
- The AI Act is a phased regulation: not all obligations apply at once
- Prohibited practices have been enforceable since 2 February 2025
- GPAI obligations have applied since 2 August 2025
- Full high-risk framework applies from 2 August 2026 — the most critical deadline for most businesses
- Article 6(1) products and legacy GPAI models have until 2 August 2027
- If you are in Annex III territory and have not started compliance, the window is closing fast