AI Act Annex III Explained: High-Risk AI Systems and What They Mean for US and EU Companies
Last updated: May 17, 2026 · Reading time: 12 minutes
Annex III Is the Heart of the EU AI Act
The EU AI Act is built on a simple idea. Not all AI systems are created equal. Some pose higher risks to fundamental rights and safety. These systems need stricter rules.
Annex III is where the EU lists these high risk AI systems. It is the foundation of the regulation. If your AI system is on this list, you must comply with the strictest obligations.
For US companies this is non-negotiable. If your AI system is used in the EU and falls under Annex III, you must follow the rules. The penalties for non-compliance are severe.
This guide will explain what Annex III is. It will list the high risk AI systems. It will show you why they are considered high risk. It will help you understand what you need to do.
What Is Annex III
Annex III is a list. It is part of the EU AI Act. It identifies AI systems that are considered high risk.
High risk does not mean the system is inherently dangerous. It means these systems have the potential to harm fundamental rights, safety, and society.
The EU requires these systems to meet strict obligations. These include risk assessments. These include transparency. These include human oversight.
The Complete List of High Risk AI Systems in Annex III
Annex III is divided into several categories. Each category includes specific use cases. Here is the full list.
Biometrics
AI systems used for remote biometric identification, biometric categorisation based on sensitive attributes, or emotion recognition. Note that real-time remote biometric identification in public spaces by law enforcement is largely prohibited under Article 5, not merely high risk.
Example A US company provides a facial recognition tool that identifies people in recorded footage for an EU client. This system is high risk.
Management and Operation of Critical Infrastructure
AI systems used as safety components in the management and operation of critical infrastructure. This includes
Critical digital infrastructure, Road traffic, Supply of water, gas, heating, and electricity
Example A US company provides an AI system to manage a power grid in France. This system is high risk.
Education and Vocational Training
AI systems used in education. This includes
Grading exams, Determining admissions, Assessing learning outcomes
Example A US edtech company offers an AI tool to grade exams in German schools. This system is high risk.
Employment, Worker Management, and Access to Self-Employment
AI systems used in employment. This includes
Screening job applicants, Conducting interviews, Monitoring performance, Managing workers
Example A US HR tech company provides an AI tool to screen job applicants for a Dutch employer. This system is high risk.
Access to Essential Private and Public Services
AI systems used to determine access to essential services. This includes
Credit scoring, Risk assessment and pricing for life and health insurance, Eligibility for healthcare services and social benefits, Dispatch of emergency services
Example A US fintech company offers an AI credit scoring system to a Spanish bank. This system is high risk.
Law Enforcement
AI systems used in law enforcement. This includes
Assessing the risk of offending or re-offending, Evaluating the reliability of evidence, Polygraph-type tools, Profiling in criminal investigations. Note that real-time facial recognition in public spaces and predictive policing based solely on profiling are prohibited under Article 5, not merely high risk.
Example A US tech company provides a recidivism risk assessment tool to a Belgian police department. This system is high risk.
Migration, Asylum, and Border Control
AI systems used in migration, asylum, and border control. This includes
Automated visa applications, Asylum eligibility assessments, Border surveillance
Example A US company provides an AI system to assess visa applications for a French embassy. This system is high risk.
Administration of Justice
AI systems used in the administration of justice. This includes
AI assistants for judges, Legal research tools, Sentencing recommendations
Example A US legaltech company offers an AI tool to assist judges in a Swedish court. This system is high risk.
Democratic Processes
AI systems used in democratic processes. This includes
AI used in elections, Political campaigning tools, Voter profiling
Example A US company provides an AI tool to profile voters for a German political party. This system is high risk.
Why Are These AI Systems Considered High Risk
The EU considers these AI systems high risk for a reason. They have the potential to significantly impact people’s lives. They have the potential to violate fundamental rights. They have the potential to cause harm.
Here is why each category is high risk.
Critical Infrastructure
AI systems managing critical infrastructure can affect public safety. A failure can lead to power outages. It can lead to water shortages. It can lead to transportation disruptions.
Education
AI systems in education can affect people’s futures. A biased grading system can disadvantage students. It can limit their opportunities. It can perpetuate inequalities.
Employment
AI systems in employment can affect livelihoods. A discriminatory hiring tool can exclude qualified candidates. It can reinforce biases. It can lead to unfair labor practices.
Essential Services
AI systems determining access to essential services can affect quality of life. A biased credit scoring system can deny loans unfairly. It can limit financial opportunities. It can perpetuate economic inequalities.
Law Enforcement
AI systems in law enforcement can affect justice. A biased predictive policing tool can lead to wrongful arrests. It can target innocent individuals. It can erode public trust.
Migration and Border Control
AI systems in migration can affect people’s rights. A biased asylum assessment tool can deny refuge unfairly. It can violate international law. It can endanger lives.
Administration of Justice
AI systems in justice can affect fairness. A biased legal assistant can lead to wrongful convictions. It can undermine the rule of law. It can erode public confidence.
Democratic Processes
AI systems in democratic processes can affect democracy. A biased voter profiling tool can manipulate elections. It can undermine free will. It can erode democratic values.
Real World Examples of High Risk AI Systems
Healthcare AI in the EU
A US based healthcare AI company deploys its diagnostic tool in Italian hospitals. The AI analyzes medical images. It assists in diagnosing diseases.
This system is high risk, though often via a different route: diagnostic tools that qualify as medical devices are high risk under Annex I (products covered by EU harmonisation legislation such as the Medical Device Regulation), while Annex III covers healthcare mainly through eligibility and access decisions. Either way the obligations are strict. These include risk assessments. These include transparency. These include human oversight.
HR AI for a Multinational Company
A multinational company uses an AI system for recruitment. The system screens job applicants. It ranks candidates. It is used in both the US and the EU.
This system falls under Annex III. It is used in employment. It must comply with strict obligations. These include bias testing. These include fairness assessments. These include human review.
Credit Scoring AI for European Banks
A US fintech company offers an AI credit scoring system to banks in the EU. The AI assesses creditworthiness. It determines loan eligibility.
This system falls under Annex III. It is used in essential services. It must comply with strict obligations. These include transparency. These include explainability. These include data protection.
How to Determine If Your AI System Falls Under Annex III
Determining if your AI system falls under Annex III involves several steps. Here is how to do it.
Step 1 Identify the Use Case
What does your AI system do? Where is it used? Who uses it?
Example Your AI system screens job applicants for a company in the EU.
Step 2 Check Annex III
Look at the list of high risk use cases in Annex III. Does your use case match any of them?
Example Screening job applicants falls under Employment, Worker Management, and Access to Self-Employment.
Step 3 Assess the Impact
Does your AI system have the potential to harm fundamental rights? Does it have the potential to harm safety? Does it have the potential to harm society?
Example A biased hiring tool can discriminate against certain groups. It can violate the right to non-discrimination.
Step 4 Consult an Expert
If you are unsure, consult an expert. They can help you determine if your AI system is high risk. They can help you understand your obligations.
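The four steps above can be sketched as a rough pre-screen. This is an illustrative sketch only: the area names and keyword lists are simplified assumptions for demonstration, not the legal test, and a real scoping exercise needs expert review.

```python
# Illustrative Annex III pre-screen. The keyword lists below are assumptions
# for demonstration; they do not reproduce the legal wording of Annex III.

ANNEX_III_AREAS = {
    "critical infrastructure": {"power grid", "water supply", "road traffic"},
    "education": {"exam grading", "admissions", "learning assessment"},
    "employment": {"applicant screening", "interview", "worker monitoring"},
    "essential services": {"credit scoring", "insurance pricing", "benefits eligibility"},
    "law enforcement": {"recidivism risk", "evidence reliability"},
    "migration and border control": {"visa application", "asylum assessment"},
    "justice and democracy": {"judicial assistant", "voter profiling"},
}

def screen(use_case: str, used_in_eu: bool) -> dict:
    """Rough pre-screen: which Annex III areas might a described use case touch?"""
    if not used_in_eu:
        # The AI Act applies when a system is placed on the EU market or its
        # output is used in the EU; outside that, Annex III does not apply.
        return {"in_scope": False, "areas": []}
    text = use_case.lower()
    areas = [
        area
        for area, keywords in ANNEX_III_AREAS.items()
        if any(kw in text for kw in keywords)
    ]
    return {"in_scope": bool(areas), "areas": areas}

result = screen("AI tool for applicant screening at a Dutch employer", used_in_eu=True)
print(result)  # flags the employment area
```

A keyword match here only means "investigate further"; a miss does not mean the system is out of scope, which is why Step 4 still matters.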
What Obligations Apply to High Risk AI Systems
If your AI system falls under Annex III, you must comply with strict obligations. Here is what they include.
Risk Management
You must implement a risk management system. This includes identifying risks. This includes assessing risks. This includes mitigating risks.
Data and Data Governance
You must ensure high quality data. You must ensure data governance. This includes data collection. This includes data processing. This includes data storage.
Technical Documentation
You must create technical documentation. This includes a description of the AI system. This includes its intended purpose. This includes its technical specifications.
Transparency
You must ensure transparency. This includes providing clear information about the AI system. This includes its capabilities. This includes its limitations.
Human Oversight
You must ensure human oversight. This includes human review of decisions. This includes human intervention when necessary.
Accuracy, Robustness, and Cybersecurity
You must ensure the AI system is accurate. You must ensure it is robust. You must ensure it is resilient against errors, faults, and attacks.
Conformity Assessment
You must conduct a conformity assessment. This includes testing the AI system. This includes evaluating its compliance.
Post-Market Monitoring
You must monitor the AI system after it is deployed. This includes tracking its performance. This includes addressing any issues.
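These obligations can be tracked as a simple checklist. A minimal sketch follows: the obligation names mirror the list above (which tracks Chapter III, Section 2 of the AI Act), but the status-dictionary shape and the `outstanding` helper are illustrative assumptions, not a prescribed format.

```python
# Hypothetical compliance checklist for a high risk AI system.
# The obligation names follow the article's list; the tracking structure
# is an assumption for illustration.

OBLIGATIONS = [
    "risk management system",
    "data and data governance",
    "technical documentation",
    "transparency",
    "human oversight",
    "accuracy, robustness and cybersecurity",
    "conformity assessment",
    "post-market monitoring",
]

def outstanding(status: dict) -> list:
    """Return obligations not yet marked done; missing entries count as open."""
    return [ob for ob in OBLIGATIONS if not status.get(ob, False)]

todo = outstanding({"risk management system": True, "human oversight": True})
print(todo)  # the six obligations still open
```

The point of the missing-entries-count-as-open rule: an obligation you never assessed is treated as unmet, which is the safe default for compliance tracking.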
Common Mistakes with High Risk AI Systems
Mistake 1 Assuming Your AI System Is Not High Risk
Many companies assume their AI systems are not high risk. But if your system is used in one of the sectors above, it likely is.
Always check Annex III. When in doubt, consult an expert.
Mistake 2 Failing to Implement Risk Management
High risk AI systems require risk management. Failing to implement it can lead to non compliance. It can lead to harm.
Mistake 3 Ignoring Data Quality
High risk AI systems require high quality data. Ignoring data quality can lead to biased outcomes. It can lead to harm.
Mistake 4 Not Ensuring Human Oversight
High risk AI systems require human oversight. Not ensuring it can lead to wrong decisions. It can lead to harm.
Mistake 5 Skipping Conformity Assessments
High risk AI systems require conformity assessments. Skipping them can lead to non compliance. It can lead to fines.
How DilAIg Helps with Annex III Compliance
Complying with Annex III can be complex. DilAIg simplifies the process.
Our tool guides you through each step. It helps you determine if your AI system is high risk. It assists in implementing the necessary obligations. It generates the required documentation.
For US companies our tool ensures compliance with both US and EU regulations. It flags EU specific requirements. It helps you navigate the complexities of the AI Act.
Here is how it works.
1 Answer a series of questions about your AI system. What does it do? Where is it used? Who uses it?
2 Our tool analyzes your responses. It determines if your AI system falls under Annex III. It identifies the obligations you must meet.
3 We generate a comprehensive compliance plan. It includes all the necessary steps. It is ready for implementation.
4 We provide guidance on risk management. We help you ensure data quality. We assist with human oversight.
Check if your AI system falls under Annex III. Start Your Compliance Check
FAQ Annex III and High Risk AI Systems
Q What is Annex III of the EU AI Act
Annex III is a list of high risk AI systems in the EU AI Act. These systems have the potential to harm fundamental rights, safety, or society.
Q Why are these AI systems considered high risk
These AI systems are considered high risk because they can significantly impact people’s lives. They can violate fundamental rights. They can cause harm.
Q Do US companies need to comply with Annex III
Yes, if their AI systems are used in the EU and fall under Annex III.
Q What are the obligations for high risk AI systems
Obligations include risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, and post-market monitoring.
Q How can I determine if my AI system falls under Annex III
Check if your AI system’s use case matches any of those listed in Annex III. Assess the potential impact on fundamental rights, safety, or society.
Q What happens if my AI system falls under Annex III but I do not comply
For most breaches of the high risk obligations, fines can reach 15 million euros or 3% of global annual turnover, whichever is higher. Prohibited AI practices carry fines of up to 35 million euros or 7%. You could also face reputational damage and loss of trust.
Key Takeaways
Annex III of the EU AI Act lists high risk AI systems. These systems have the potential to harm fundamental rights, safety, or society. They must comply with strict obligations.
The list includes AI systems used in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice, and democratic processes.
For US companies, compliance is mandatory if their AI systems are used in the EU and fall under Annex III. Obligations include risk management, data governance, transparency, human oversight, and more.
Common mistakes include assuming your AI system is not high risk and failing to implement risk management. DilAIg’s tool simplifies Annex III compliance. It guides you through each step. It generates the necessary documentation.
Next Steps
Check if your AI system falls under Annex III. Start Your Compliance Check
Implement the necessary obligations. Learn How
Need help? Book a Demo
Join the Conversation
Does your AI system fall under Annex III? What challenges have you faced with compliance? Share your thoughts in the comments or tweet us @DilAIg.
Further Reading
Official EU AI Act Text, Annex III
European Commission Annex III Guidelines
DilAIg’s AI Act Compliance Hub
High Risk AI Systems: What You Need to Know
This article is part of DilAIg’s AI Act Compliance Series. Next up AI Act Article 10: Data and Data Governance Requirements Explained