What Is the EU AI Act? A Plain-English Guide for Business Leaders

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI law. This plain-English guide explains what it is, who it applies to, how the risk classification works, and what your business needs to do.

16 May 2026 · DilAIg

The Short Answer

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It applies to any company that develops, deploys, or uses AI systems that affect people in the European Union — regardless of where that company is based.

If your AI touches EU users, the AI Act touches you.


Why the EU Created the AI Act

AI systems can make consequential decisions: who gets a loan, who gets hired, who crosses a border. The EU's position is that such decisions require accountability, transparency, and oversight — the same principles that underpin EU consumer protection and data privacy law.

The AI Act is not an anti-AI regulation. It is a risk-based framework: the stricter the potential impact on people, the stricter the rules. Most AI systems — a recommendation engine, a spam filter, a scheduling tool — face no mandatory obligations at all.


Who Does the AI Act Apply To?

The AI Act applies to four types of actors:

  • Provider: develops an AI system and places it on the EU market
  • Deployer: uses an AI system in a professional context
  • Importer: brings a non-EU AI system into the EU market
  • Distributor: makes an AI system available in the EU without substantially modifying it

Extraterritorial scope: Like the GDPR, the AI Act applies based on where the effect occurs, not where the company is located. A US company whose AI system is used in France is subject to the AI Act.


The Four Risk Levels

The AI Act classifies AI systems into four categories. Your obligations depend entirely on which category your system falls into.

Unacceptable Risk — Banned (Article 5)

These AI systems are prohibited outright. The ban has applied since 2 February 2025. Key examples:

  • Systems that manipulate people through subliminal techniques or by exploiting vulnerabilities
  • Social scoring systems used by public authorities
  • Real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions)
  • AI systems that infer emotions in workplaces or educational institutions
  • Systems that scrape facial images from the internet or CCTV to build recognition databases
  • Predictive policing systems based on personal characteristics

Fines for deploying a banned system: up to €35 million or 7% of global annual turnover, whichever is higher.

High Risk — Allowed, with Obligations (Article 6 and Annex III)

High-risk AI systems are permitted but subject to a full compliance framework before they can be placed on the market. This is the core of the AI Act.

A system is high-risk if it falls into one of eight domains listed in Annex III (see below) or if it is a safety component of a product subject to EU harmonisation legislation (Annex I — medical devices, machinery, vehicles, etc.).

Limited Risk — Transparency Obligations Only (Article 50)

These systems must disclose that the user is interacting with AI. The main examples:

  • Chatbots must inform users they are talking to an AI
  • Deepfakes and AI-generated content must be labelled as such
  • Emotion recognition systems must inform the people they are analysing

Minimal Risk — No Mandatory Obligations

The vast majority of AI systems fall here: spam filters, AI-assisted document editing, recommendation engines, simple classification tools. No mandatory compliance steps are required, though voluntary codes of conduct are encouraged.
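The tiered logic above can be sketched as a toy classifier. This is purely illustrative: the function name and flags are hypothetical, and real classification requires legal analysis of Articles 5, 6, and 50 and Annex III, not a lookup:

```python
# Illustrative sketch of the AI Act's four-tier logic (not legal advice).
# All flag names are hypothetical; real classification needs legal review.
def classify_risk(system: dict) -> str:
    if system.get("prohibited_practice"):          # Article 5 (e.g. social scoring)
        return "unacceptable"
    if system.get("annex_iii_domain") or system.get("annex_i_safety_component"):
        return "high"                              # Article 6 + Annex III / Annex I
    if system.get("interacts_with_humans") or system.get("generates_content"):
        return "limited"                           # Article 50 transparency duties
    return "minimal"                               # no mandatory obligations

print(classify_risk({"annex_iii_domain": "employment"}))  # → high
```

Note the ordering: prohibition is checked first, because a system that falls under Article 5 is banned regardless of any other classification.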


The Eight High-Risk Domains (Annex III)

If your AI system operates in any of these eight areas, it is likely high-risk and subject to the full compliance framework:

1. Biometrics: remote biometric identification, biometric categorisation based on sensitive attributes, emotion recognition.

2. Critical Infrastructure: AI managing or operating digital infrastructure, road traffic, or energy supply networks.

3. Education and Vocational Training: systems that determine access to institutions, evaluate learning outcomes, or monitor students during tests.

4. Employment and Worker Management: recruitment tools (job ad targeting, CV screening, candidate evaluation), systems for promotions, terminations, task allocation, and performance monitoring.

5. Essential Private and Public Services: creditworthiness assessment, eligibility for public benefits or healthcare, life and health insurance pricing, emergency call dispatch prioritisation.

6. Law Enforcement: risk assessment for victimisation or re-offending, polygraph-like tools, evidence reliability evaluation, criminal profiling.

7. Migration, Asylum, and Border Control: visa and asylum application processing, border security risk assessment, biometric identification in migration contexts.

8. Administration of Justice and Democratic Processes: AI assisting judicial authorities in legal research or fact-finding, systems designed to influence election outcomes or voting behaviour.


What High-Risk Providers Must Do

If your AI system is high-risk, you must complete the following before placing it on the EU market:

  1. Risk management system (Article 9) — ongoing process to identify and mitigate risks
  2. Data governance (Article 10) — training data must be relevant, representative, and, to the best extent possible, free of errors
  3. Technical documentation (Article 11, Annex IV) — full record of system design, development, and evaluation
  4. Logging and traceability (Article 12) — automatic event logging to enable post-market monitoring
  5. Transparency for users (Article 13) — clear instructions so deployers understand capabilities and limitations
  6. Human oversight (Article 14) — humans must be able to monitor, intervene, and override the system
  7. Accuracy, robustness, and cybersecurity (Article 15) — documented performance standards
  8. Conformity assessment (Article 43) — internal or third-party verification depending on the system type
  9. EU Declaration of Conformity (Article 47) — formal declaration that the system meets AI Act requirements
  10. Registration (Article 49) — registration in the EU database for high-risk AI systems

Key Dates to Know

  • 1 August 2024: AI Act enters into force
  • 2 February 2025: prohibited AI practices become enforceable
  • 2 August 2025: GPAI model obligations apply; governance structure active
  • 2 August 2026: full applicability for high-risk systems; AI Office enforcement powers
  • 2 August 2027: Article 6(1) applies; legacy GPAI model compliance deadline

Does the AI Act Apply to My Business?

Ask yourself three questions:

1. Do I develop or deploy AI systems? If yes, and those systems affect people in the EU, the AI Act applies regardless of where your company is incorporated.

2. What does my AI actually do? Map your system's outputs against the four risk tiers. Most systems land in minimal risk. But if your system makes or influences decisions about people — hiring, credit, healthcare, education — check Annex III carefully.

3. What is my role in the value chain? Provider obligations are more extensive than deployer obligations. If you integrate a third-party AI model into your product, you may be a provider for the purposes of the AI Act even if you did not build the underlying model.


How DilAIg Helps

Determining your risk level and obligations manually means reading 113 articles, 13 annexes, and a growing body of Commission guidelines. DilAIg compresses that into a 50-question audit that takes 20 minutes.

The audit classifies your system across the four AI Act risk levels, identifies your applicable obligations article by article, and generates a prioritised action plan. For high-risk systems, it then produces the mandatory documents — FRIA, EU Declaration of Conformity, Transparency Notice, and Technical Documentation — as professional drafts ready for legal review.

Start your free AI Act audit →

See how it works →


FAQ: EU AI Act Basics

When did the EU AI Act come into force?

The AI Act entered into force on 1 August 2024. Obligations apply progressively: prohibited practices from February 2025, GPAI rules from August 2025, and the full high-risk framework from August 2026.

Does the AI Act apply to companies outside the EU?

Yes. If your AI system affects people in the EU — whether through deployment by an EU company or direct use by EU residents — the AI Act applies. Non-EU providers must designate an authorised representative in the EU.

What is the difference between a provider and a deployer?

A provider develops an AI system and places it on the market. A deployer uses it in a professional context. Providers face more extensive obligations (conformity assessment, technical documentation, registration). Deployers must conduct fundamental rights impact assessments for certain high-risk systems and implement appropriate human oversight.

What are the fines under the AI Act?

Up to 7% of global annual turnover for deploying a prohibited AI system, 3% for violations of most other obligations (including GPAI rules), and 1% for providing incorrect information to authorities.
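The "up to" caps follow a common structure: a fixed amount or a percentage of global turnover, whichever is higher (for most companies). As a hedged arithmetic sketch — the fixed amounts for the lower tiers (€15 million and €7.5 million) come from Article 99 and are not stated above, and the function is illustrative only:

```python
# Illustrative fine-cap arithmetic under Article 99 (not legal advice).
# (tier -> (fixed cap in EUR, percentage of global annual turnover))
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
    "other_obligations":   (15_000_000, 0.03),   # most other breaches, incl. GPAI
    "incorrect_info":      (7_500_000,  0.01),   # misleading authorities
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    fixed, pct = TIERS[tier]
    # For most companies the cap is whichever amount is HIGHER.
    return max(fixed, pct * global_turnover_eur)

# A firm with EUR 1bn global turnover deploying a prohibited system:
print(max_fine("prohibited_practice", 1_000_000_000))  # → 70000000.0
```

The percentage dominates for large firms (7% of €1bn is €70m, double the €35m floor), which is why turnover, not the fixed amount, drives exposure for most multinationals.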

Is the AI Act the same as the GDPR for AI?

Not exactly. The GDPR governs personal data processing. The AI Act governs AI systems specifically. They overlap significantly — particularly around automated decision-making, data governance, and fundamental rights — but they are separate legal frameworks with separate compliance obligations.


Key Takeaways

  • The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI law, in force since August 2024
  • It applies to any company whose AI affects EU users — extraterritorial scope like the GDPR
  • Four risk tiers: prohibited, high-risk, limited risk, minimal risk
  • High-risk systems (Annex III) must complete a full compliance framework before market placement
  • Most AI systems face no mandatory obligations
  • Fines reach 7% of global turnover for the most serious violations
  • DilAIg's 50-question audit classifies your system and generates mandatory documents in 20 minutes

Is your AI system compliant?

Free audit in 20 minutes.

Start the audit