
EU AI Act and General-Purpose AI Models: What Providers Must Know (Articles 51–56)

GPAI model obligations under the EU AI Act sit in Articles 51–56, not Article 25. This guide explains who is affected, what the law requires in practice, the penalties for non-compliance, and how to get compliant step by step.

16 May 2026 · DilAIg

The EU AI Act Does Not Exempt General-Purpose AI Models

Many providers of large language models, foundation models, and multimodal systems assume that their technology sits outside the scope of the EU AI Act. It does not. Chapter V of the EU AI Act (Articles 51 to 56) establishes a dedicated regulatory framework specifically for general-purpose AI (GPAI) models. These obligations have applied since 2 August 2025.

This article explains exactly what the law requires, which models it targets, and how to build a compliant programme — whether you are a US company supplying models to the EU market or a European provider.

A common source of confusion: Some resources cite "Article 25" when discussing GPAI obligations. Article 25 is a separate provision governing responsibilities along the AI value chain for high-risk AI systems (e.g., when a distributor rebrands a product or makes a substantial modification). GPAI model obligations are found in Articles 51 to 56.


What Is a General-Purpose AI Model Under the AI Act?

The AI Act (Article 3(63)) defines a general-purpose AI model as an AI model trained on large amounts of data using self-supervision at scale, displaying significant generality, capable of competently performing a wide range of distinct tasks, and able to be integrated into a variety of downstream systems or applications.

This definition covers:

  • Large language models (LLMs) such as GPT-class or Gemini-class models
  • Foundation models used as a base for fine-tuning
  • Multimodal models processing text, image, audio, or video

The definition deliberately focuses on architecture and use pattern, not specific outputs. If your model can be used for multiple purposes and integrated into downstream systems, it likely qualifies.


Two Tiers of Obligation: Standard GPAI vs. Systemic Risk

The AI Act creates two compliance tiers based on the potential societal impact of the model.

Tier 1 — All GPAI Providers (Article 53)

Every provider of a GPAI model placed on the EU market must comply with four core obligations.

1. Technical Documentation (Article 53(1)(a), Annex XI)

You must draw up and keep up to date a comprehensive technical dossier covering:

  • The architecture, design choices, and capabilities of the model
  • Training data sources, filtering methods, and volume
  • Training compute and methodology
  • Evaluation results, including benchmarks and red-team findings
  • Known limitations and foreseeable misuse scenarios

This documentation must be made available to the AI Office and national competent authorities upon request; it does not need to be made public. Open-weight models are normally exempt from this obligation, but lose the exemption if classified as having systemic risk.

2. Downstream Provider Information (Article 53(1)(b), Annex XII)

You must produce and keep current a dedicated information package for downstream providers — companies and developers who integrate your model into their own applications. This package must enable them to understand what your model can and cannot do, so they can meet their own obligations under the AI Act.

It must include at minimum: model capabilities and limitations, intended use cases, safety testing results, and instructions for appropriate use. Intellectual property protections are explicitly acknowledged: you are not required to disclose your full training pipeline.
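To make those minimum elements concrete, here is a hedged sketch of how a provider might structure the package internally. The class and field names are illustrative, not official Annex XII terminology:

```python
from dataclasses import dataclass

# Hypothetical internal structure for the Article 53(1)(b) / Annex XII package.
# Field names are illustrative, not official Annex XII headings.
@dataclass
class DownstreamInfoPackage:
    model_name: str
    capabilities: list[str]
    limitations: list[str]
    intended_use_cases: list[str]
    safety_testing_summary: str
    usage_instructions: str

    def is_complete(self) -> bool:
        """Check that every minimum element named in the text above is filled in."""
        return all([
            self.capabilities,
            self.limitations,
            self.intended_use_cases,
            self.safety_testing_summary,
            self.usage_instructions,
        ])
```

Because the package must be kept current, a completeness check like this is most useful when wired into release gates, so that every model update forces a review of the disclosed capabilities and limitations.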

3. Copyright Policy (Article 53(1)(c))

You must put in place a policy to comply with EU copyright law, using state-of-the-art technical measures to identify and respect rights reservations by content creators — including opt-outs under Article 4(3) of the DSM Directive.

4. Public Training Data Summary (Article 53(1)(d))

You must publish a "sufficiently detailed summary" of the content used to train the model. This does not require full disclosure of proprietary datasets, but must be detailed enough to be meaningful. The AI Office has indicated that vague statements like "publicly available internet data" are insufficient.

Open-Source Exemption

Providers releasing models under free and open licences (allowing access, use, modification, and distribution) are exempt from obligations 1 and 2 above — unless the model is also classified as having systemic risk.


Tier 2 — GPAI Models with Systemic Risk (Articles 51 and 55)

A GPAI model is presumed to have systemic risk when the cumulative training compute exceeds 10²⁵ floating-point operations (FLOPs) (Article 51(1)(a)). This threshold currently captures the most powerful frontier models.

The classification can also be triggered by a Commission decision based on qualitative indicators such as a large number of users, cross-sectoral reach, or high autonomy capabilities — even if the compute threshold is not met (Article 51(1)(b), Annex XIII).

Providers must notify the European Commission within two weeks of reaching or knowing they will reach this threshold (Article 52).
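The Act fixes the threshold in FLOPs but does not prescribe how to estimate cumulative training compute. A minimal sketch of a threshold check, assuming the common 6 × parameters × tokens heuristic for dense transformer training (an engineering rule of thumb, not an AI Act definition):

```python
# Sketch of the Article 51(1)(a) presumption check. The 6 * params * tokens
# FLOPs estimate is a common engineering heuristic for dense transformer
# training, NOT a method prescribed by the AI Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(1)(a)

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative training compute estimate (heuristic)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the presumption threshold.
```

If an estimate lands near the threshold, the safer reading of the notification duty is to notify the Commission rather than rely on the precision of a heuristic.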

In addition to all Tier 1 obligations, Tier 2 providers must comply with Article 55:

Adversarial Testing and Model Evaluation (Article 55(1)(a))

You must conduct structured red-teaming and model evaluation using standardised protocols to identify systemic risks — including risks to public health, safety, fundamental rights, and democratic processes — before placing the model on the market and on an ongoing basis.

Systemic Risk Assessment and Mitigation (Article 55(1)(b))

You must assess and mitigate potential systemic risks at Union level arising from the development, market placement, or use of the model. This includes identifying sources of risk in training data, model architecture, and foreseeable deployment contexts.

Serious Incident Reporting (Article 55(1)(c))

You must track, document, and report without undue delay to the AI Office any serious incidents involving your model, along with corrective measures taken. A serious incident is any incident resulting in death, serious harm to health, significant disruption of infrastructure, or infringement of fundamental rights at scale.

Cybersecurity (Article 55(1)(d))

You must ensure adequate cybersecurity protection for both the model and its physical infrastructure, including measures against model extraction attacks, adversarial inputs designed to bypass safety measures, and data poisoning.


The GPAI Code of Practice: Your Fastest Path to Compliance

The GPAI Code of Practice was published on 10 July 2025 and endorsed by the European Commission. It is voluntary but carries significant practical weight:

  • Adherence creates a presumption of compliance with Articles 53 and 55
  • Non-signatories face increased scrutiny, more information requests, and less favourable treatment in fine-setting
  • The Code provides detailed operational guidance that the legislation intentionally leaves open

Signing the Code of Practice is not a legal requirement, but for most providers it is the most efficient path to demonstrating compliance.


Penalties for Non-Compliance

Fines for GPAI-related infringements are calculated based on global annual turnover:

  • Violations of Article 53 or 55 obligations: 3% of global annual turnover or €15 million, whichever is higher
  • Providing incorrect, incomplete, or misleading information to authorities: 1% of global annual turnover or €7.5 million, whichever is higher
  • Prohibited AI practices (Article 5): 7% of global annual turnover or €35 million, whichever is higher

The 7% / €35M threshold applies to prohibited AI practices — not to GPAI-specific obligations. Conflating these two tiers overstates the penalty for GPAI compliance failures.
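The fine structure above is a simple "higher of" calculation between a turnover percentage and a fixed floor. A minimal sketch (the tier labels are illustrative, not official terms):

```python
# Fine tiers as described in the table above: the applicable maximum is the
# HIGHER of a percentage of global annual turnover and a fixed floor.
# Dictionary keys are illustrative labels, not official terms.
FINE_TIERS = {
    "gpai_obligations": (0.03, 15_000_000),      # Article 53/55 violations
    "incorrect_information": (0.01, 7_500_000),  # misleading authorities
    "prohibited_practices": (0.07, 35_000_000),  # Article 5, not GPAI-specific
}

def max_fine_eur(global_turnover_eur: float, tier: str) -> float:
    """Maximum fine for a given infringement tier."""
    pct, floor = FINE_TIERS[tier]
    return max(pct * global_turnover_eur, floor)

# A provider with EUR 2bn global turnover violating Article 53:
# max(0.03 * 2e9, 15e6) = EUR 60 million
```

Note that the fixed floor dominates for smaller providers: below €500 million in turnover, the €15 million floor is the binding maximum for GPAI infringements.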


Compliance Timeline at a Glance

  • 2 August 2025: GPAI obligations apply to all new models placed on the market
  • 2 August 2026: the AI Office gains full enforcement powers
  • 2 August 2027: compliance deadline for legacy models placed on the market before 2 August 2025

Five Practical Steps to Build a Compliant GPAI Programme

Step 1 — Determine Your Tier

Establish whether your model exceeds the 10²⁵ FLOP training compute threshold. If uncertain, review qualitative indicators in Annex XIII. Notify the Commission if you are at or near the threshold.

Step 2 — Build Your Technical Dossier

Map all required elements from Annex XI into an internal documentation process. Assign ownership. Set review triggers linked to model updates and retraining events.

Step 3 — Create Your Downstream Information Package

Draft the Annex XII disclosure document for downstream providers. Keep it current — it must reflect the model's capabilities as deployed, not as designed.

Step 4 — Establish a Copyright Compliance Policy

Audit your training data pipeline for content covered by EU copyright. Implement technical measures to honour opt-outs. Document your policy and make it accessible.
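What counts as a "state-of-the-art technical measure" is still evolving, but honouring machine-readable crawler reservations such as robots.txt is one commonly cited component. A minimal sketch using Python's standard library (the crawler name is hypothetical, and robots.txt alone does not exhaust the Article 4(3) opt-out mechanisms):

```python
from urllib.robotparser import RobotFileParser

def may_use_for_training(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if this robots.txt content permits `user_agent` to fetch `url`.
    robots.txt is only one signal: DSM Article 4(3) reservations can also be
    expressed in site terms of service or other machine-readable metadata."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# A site reserving all its content from a (hypothetical) AI-training crawler:
RESERVED = """\
User-agent: ExampleTrainingBot
Disallow: /
"""
```

In practice, a check like this belongs inside the data-ingestion pipeline itself, so that reserved content is excluded before it ever reaches a training corpus, with the exclusion decisions logged as evidence for the documented policy.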

Step 5 — Publish Your Training Data Summary

Draft a public summary of training data that goes beyond generic labels. Describe categories of sources, filtering criteria, and volume ranges. Review the AI Office's published guidance on sufficiency.

For Tier 2 providers, additionally: build adversarial testing protocols, a systemic risk register, an incident reporting procedure, and a cybersecurity framework for model infrastructure.


How DilAIg Supports GPAI Compliance

Mapping legal obligations to your specific model architecture and deployment context is not a one-time exercise. It requires ongoing documentation, structured evaluation, and a clear process for reporting.

DilAIg is built for this. Our platform guides GPAI providers through each obligation in Articles 53 and 55, generates the required documentation outputs, and keeps your compliance posture current as the model and the regulation evolve.

Start your compliance audit →

Book a demo →


FAQ: GPAI Obligations Under the EU AI Act

What articles of the EU AI Act cover GPAI model obligations?

Articles 51 to 56 in Chapter V. Article 25 is a separate provision about responsibilities along the AI value chain for high-risk AI systems and does not govern GPAI model obligations.

Does the AI Act apply to non-EU companies providing GPAI models?

Yes. If your model is placed on the EU market or used in the EU, the AI Act applies regardless of where you are established. Article 54 requires non-EU providers to designate an authorised representative in the EU.

What is the 10²⁵ FLOPs threshold?

It is the training compute threshold above which a GPAI model is presumed to have systemic risk and must comply with Article 55 in addition to Article 53. It is a rebuttable presumption — providers can contest the classification with the AI Office.

Are open-source GPAI models exempt?

Partially. Open-weight models under free licences are exempt from technical documentation and downstream information requirements under Article 53(1)(a) and (b). They are not exempt if they are also classified as having systemic risk.

What is the GPAI Code of Practice?

A voluntary compliance framework published on 10 July 2025 and endorsed by the European Commission. Adherence creates a presumption of compliance with Articles 53 and 55 and reduces regulatory scrutiny.

When did GPAI obligations become enforceable?

2 August 2025 for new models. Full enforcement powers for the AI Office apply from 2 August 2026. Legacy models have until 2 August 2027.


Key Takeaways

  • GPAI model obligations are in Articles 51–56, not Article 25
  • All GPAI providers must comply with Article 53: technical documentation, downstream information, copyright policy, and training data summary
  • Models with training compute above 10²⁵ FLOPs face additional obligations under Article 55: adversarial testing, risk assessment, incident reporting, cybersecurity
  • The GPAI Code of Practice (July 2025) is the most efficient path to demonstrating compliance
  • GPAI-specific fines are capped at 3% of global turnover — not 7%
  • Non-EU providers must designate an authorised representative in the EU

