
AI Act Article 27 Explained: Fundamental Rights Impact Assessment (FRIA) for US & EU Companies

What is a Fundamental Rights Impact Assessment (FRIA) under the EU AI Act? Learn when it’s required, how to conduct one, and why US companies must comply. Free template inside.

16 May 2026 · DilAIg


FRIA AI Act: How to assess your AI system's impact on fundamental rights

Last updated: 17 May 2026 · Reading time: 10 minutes


The EU AI Act Demands More Than Just Compliance: It Demands Accountability

The EU AI Act is not just about classifying AI systems. It is also about ensuring they respect fundamental rights. Article 27 introduces a critical requirement: the Fundamental Rights Impact Assessment (FRIA).

This is not a simple checkbox exercise. It is a thorough evaluation of how your AI system could affect people’s rights: rights like non-discrimination, privacy, and fairness.

For US companies, this is especially relevant. If your AI system is used in the EU or affects EU citizens, you must conduct a FRIA. Failure to do so can result in fines of up to €15 million or 3% of your global annual turnover.

This guide will explain what a FRIA is. It will show you when it is required. It will walk you through the process step by step. It will help you avoid costly mistakes.


What Is a Fundamental Rights Impact Assessment?

A Fundamental Rights Impact Assessment is a detailed analysis. It evaluates how an AI system could impact the fundamental rights of individuals. These rights are protected under the EU Charter of Fundamental Rights.

The FRIA is mandatory for deployers of high-risk AI systems in certain contexts, notably public bodies, private operators providing public services, and deployers using AI for credit scoring or insurance risk assessment. A system that looks limited-risk on paper can still fall into scope depending on how it is used.

The goal is simple. Identify potential risks. Mitigate them. Ensure your AI system respects EU values.


When Is a FRIA Required?

Not all AI systems need a FRIA. But many do. Here is when it applies.

High-risk AI systems always require a FRIA. These include AI used in:

- Healthcare
- HR and employment
- Credit scoring
- Law enforcement
- Legal services
- Critical infrastructure

A system that looks limited-risk may also need a FRIA. This happens when its actual use makes it high-risk or lets it significantly affect fundamental rights. Example: a chatbot that processes sensitive personal data.

For US companies, the rule is clear. If your AI system is used in the EU or affects EU citizens, and falls into these categories, you must conduct a FRIA.

| AI System Type | US Example | EU Example | FRIA Required? |
| --- | --- | --- | --- |
| HR recruitment tool | HireVue’s AI interview tool | A German startup’s resume screener | Yes |
| Credit scoring AI | A US fintech’s loan approval system | N26’s credit assessment chatbot | Yes |
| Healthcare diagnostic AI | Ada Health’s symptom checker | Babylon Health’s diagnostic tool | Yes |
| Customer support chatbot | Bank of America’s virtual assistant | BNP Paribas’ client chatbot | No (unless processing sensitive data) |
| Marketing chatbot | A US SaaS company’s lead qualification bot | A Dutch e-commerce site’s chatbot | No |
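As a rough triage, the logic behind the table above can be sketched as a first-pass screening function. This is purely illustrative: the category labels and the sensitive-data rule are our own simplifications, not legal criteria, and no script replaces legal review.

```python
# Illustrative sketch only: category labels and the decision rule are
# our own simplifications of the AI Act, not an official legal test.
HIGH_RISK_CATEGORIES = {
    "healthcare", "hr_employment", "credit_scoring",
    "law_enforcement", "legal_services", "critical_infrastructure",
}

def fria_required(category: str, used_in_eu: bool,
                  processes_sensitive_data: bool = False) -> bool:
    """Rough first-pass screen: does this AI system likely need a FRIA?"""
    if not used_in_eu:
        return False  # outside the AI Act's territorial scope
    if category in HIGH_RISK_CATEGORIES:
        return True   # high-risk categories trigger a FRIA
    # A nominally lower-risk system (e.g. a support chatbot) may still
    # warrant one if it processes sensitive personal data.
    return processes_sensitive_data
```

For example, `fria_required("credit_scoring", used_in_eu=True)` returns `True`, while a marketing chatbot without sensitive data returns `False`.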

Real-World Examples of FRIA in Action

Healthcare AI and the Right to Non-Discrimination

A US-based healthcare AI is deployed in a French hospital. It helps diagnose diseases. It processes patient data. It could unintentionally discriminate against certain ethnic groups.

The hospital must conduct a FRIA. It must assess how the AI could affect the right to non-discrimination. It must ensure the AI does not reinforce biases.

The FRIA would include:

- An analysis of the training data
- A review of the AI’s decision-making process
- An assessment of potential biases
- Mitigation strategies to address any issues

HR AI and the Right to Fair Working Conditions

A German company uses an AI to screen job applicants. The AI analyzes resumes. It ranks candidates. It could disadvantage certain groups.

The company must conduct a FRIA. It must evaluate how the AI affects the right to fair working conditions. It must ensure the AI does not lead to unfair hiring practices.

The FRIA would include:

- A review of the AI’s criteria for ranking candidates
- An analysis of historical hiring data
- An assessment of potential discrimination
- Measures to ensure transparency and fairness
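For the historical-hiring-data step, one concrete technique is the four-fifths (80%) rule, a heuristic from US employment law that flags adverse impact when one group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch with hypothetical numbers:

```python
# Hypothetical data; the four-fifths (80%) rule is a US employment-law
# heuristic, used here purely as a worked illustration of a bias check.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict) -> dict:
    """Impact ratio of each group against the highest-rate group.
    Ratios below 0.8 are a conventional red flag for adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening results: (candidates advanced, candidates screened)
results = {"group_a": (45, 100), "group_b": (30, 100)}
ratios = four_fifths_check(results)
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["group_b"]
```

Here group_b advances at 30% versus group_a’s 45%, an impact ratio of about 0.67, which falls below the 0.8 threshold and would be flagged for review in the FRIA.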

Credit Scoring AI and the Right to Privacy

A US fintech offers its credit scoring AI to European banks. The AI processes financial data. It assesses creditworthiness. It could infringe on the right to privacy.

The fintech must conduct a FRIA. It must assess how the AI affects the right to privacy. It must ensure compliance with GDPR and other data protection laws.

The FRIA would include:

- A review of data collection practices
- An assessment of data security measures
- An evaluation of user consent mechanisms
- Strategies to minimize data processing


How to Conduct a Fundamental Rights Impact Assessment

Conducting a FRIA involves several key steps. Here is a detailed breakdown.

Step 1: Identify the AI System and Its Purpose

Start by defining the AI system. What does it do? Who uses it? What data does it process?

Example: an AI system used for resume screening in a multinational company. It processes resumes from EU and US applicants. It ranks candidates based on skills and experience.

Step 2: Identify the Fundamental Rights at Risk

Next, identify which fundamental rights could be affected. The EU Charter of Fundamental Rights lists many rights. Focus on those most relevant to your AI system.

Common rights to consider:

- Right to non-discrimination (Article 21)
- Right to privacy and data protection (Articles 7 and 8)
- Right to fair and just working conditions (Article 31)
- Right to an effective remedy (Article 47)

Step 3: Assess the Potential Impact

Evaluate how the AI system could affect these rights. Consider both direct and indirect impacts.

Direct impacts are straightforward. Example: an AI that denies loans based on gender discriminates directly.

Indirect impacts are subtler. Example: an AI trained on historical data could perpetuate past biases.

Step 4: Evaluate the Likelihood and Severity of Risks

Not all risks are equal. Some are more likely to occur. Some have more severe consequences.

Use a risk matrix to prioritize. Focus on high-likelihood, high-severity risks.

| Risk | Likelihood | Severity | Priority |
| --- | --- | --- | --- |
| Discrimination in hiring | High | High | High |
| Privacy breach | Medium | High | High |
| Inaccurate credit scoring | Low | Medium | Medium |
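A risk matrix like the one above reduces to simple arithmetic. In this sketch the numeric scale and thresholds are our own illustrative choices, tuned to reproduce the example matrix; they are not values prescribed by the AI Act:

```python
# Minimal risk-matrix sketch; scale and thresholds are illustrative
# choices, not values prescribed by the AI Act.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def priority(likelihood: str, severity: str) -> str:
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:
        return "High"    # e.g. High x High, Medium x High
    if score >= 2:
        return "Medium"  # e.g. Low x Medium, Medium x Medium
    return "Low"

risks = [
    ("Discrimination in hiring", "High", "High"),
    ("Privacy breach", "Medium", "High"),
    ("Inaccurate credit scoring", "Low", "Medium"),
]
# Sort so the highest-scoring risks come first for mitigation planning
ranked = sorted(risks, key=lambda r: LEVELS[r[1]] * LEVELS[r[2]], reverse=True)
```

Multiplying levels rather than adding them keeps high-severity risks near the top even when they are unlikely, which is usually the conservative choice for rights impacts.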

Step 5: Develop Mitigation Strategies

For each identified risk, develop strategies to mitigate it. These could include:

- Adjusting the AI’s algorithms
- Improving data quality
- Implementing human oversight
- Enhancing transparency

Step 6: Document the Assessment

Document every step of the FRIA. This documentation is crucial for compliance. It may also be required in case of an audit.

The FRIA report should include:

- A description of the AI system
- The fundamental rights at risk
- The potential impacts
- The likelihood and severity of risks
- Mitigation strategies
- Responsible parties and timelines
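To keep that documentation consistent across assessments, the report sections can be modeled as a small data structure. The field names below are our own shorthand for the sections listed above, not official terminology from the AI Act:

```python
# Lightweight sketch of a FRIA report structure; field names are our
# own shorthand for the report sections, not official AI Act terms.
from dataclasses import dataclass, field

@dataclass
class Risk:
    right_affected: str   # e.g. "non-discrimination (Charter Art. 21)"
    impact: str           # what could go wrong
    likelihood: str       # "Low" / "Medium" / "High"
    severity: str
    mitigation: str
    owner: str            # responsible party
    deadline: str         # target date for the mitigation

@dataclass
class FriaReport:
    system_description: str
    risks: list = field(default_factory=list)

    def open_items(self) -> list:
        """Risks that still lack a mitigation owner or a deadline."""
        return [r for r in self.risks if not (r.owner and r.deadline)]
```

A simple `open_items()` query like this makes it easy to spot risks that have been identified but not yet assigned, which matters for the review cycle in the next step.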

Step 7: Review and Update Regularly

A FRIA is not a one-time exercise. Review and update it regularly, especially when the AI system changes or new risks emerge.


Common Mistakes in Conducting a FRIA

Mistake 1: Treating FRIA as a One-Time Exercise

A FRIA is not a static document. It must be updated as the AI system evolves. New data, new use cases, or new regulations may require a new assessment.

Mistake 2: Focusing Only on Direct Impacts

Many companies only consider direct impacts. But indirect impacts can be just as harmful. Example: an AI trained on biased historical data could perpetuate discrimination.

Mistake 3: Ignoring Indirect Stakeholders

A FRIA should consider all stakeholders: not just users, but also those affected by the AI’s decisions. Example: a credit scoring AI affects not just the bank but also the loan applicants.

Mistake 4: Overlooking Data Protection

A FRIA must consider data protection. If your AI processes personal data, GDPR applies. Ensure your FRIA addresses both the AI Act and GDPR.

Mistake 5: Not Involving Legal Experts

A FRIA is a legal requirement. Involve legal experts in the process. They can help identify risks and ensure compliance.


How DilAIg Simplifies the FRIA Process

Conducting a FRIA can be complex. DilAIg simplifies the process.

Our tool guides you through each step. It helps you identify the AI system. It assists in assessing the risks. It generates the necessary documentation.

For US companies, our tool ensures compliance with both US and EU regulations. It flags EU-specific requirements. It helps you navigate the complexities of the AI Act.

Here is how it works.

1. Answer a series of questions about your AI system. What does it do? What data does it process? Who uses it?

2. Our tool analyzes your responses. It identifies the fundamental rights at risk. It assesses the potential impacts.

3. We generate a comprehensive FRIA report. It includes all the necessary details. It is ready for submission to regulators.

4. We provide mitigation strategies. We help you implement them. We ensure your AI system is compliant.

Test your AI system’s compliance today. Start Your Free FRIA Assessment

[Screenshot: the DilAIg FRIA tool]


FAQ Fundamental Rights Impact Assessment

Q: What is the difference between a FRIA and a DPIA?

A Data Protection Impact Assessment (DPIA) is required under GDPR. It focuses on data protection risks.

A Fundamental Rights Impact Assessment is required under the AI Act. It focuses on a broader range of fundamental rights.

Some AI systems may need both.

Q: Do US companies need to conduct a FRIA?

Yes, if their AI system is used in the EU or affects EU citizens and falls into a high-risk category.

Q: How often should a FRIA be updated?

A FRIA should be updated whenever the AI system changes significantly. This could be due to new data, new use cases, or new regulations. At a minimum, review it annually.

Q: Who is responsible for conducting a FRIA?

Under Article 27, the deployer of the AI system is responsible. For US companies deploying AI in the EU, this means you.

Q: What happens if I do not conduct a FRIA?

You could face fines of up to €15 million or 3% of your global annual turnover, whichever is higher. You could also face reputational damage and loss of trust.

Q: Can I use a template for my FRIA?

Yes. DilAIg provides a free FRIA template. It is designed to meet the requirements of the AI Act. It includes all the necessary sections.

Download Our Free FRIA Template


Key Takeaways

A Fundamental Rights Impact Assessment is a mandatory requirement for high-risk AI systems under the AI Act. It evaluates how an AI system could affect fundamental rights. It is not just a compliance exercise but a commitment to ethical AI.

US companies are not exempt if their AI systems are used in the EU or affect EU citizens. The FRIA process involves identifying the AI system, assessing potential impacts, and developing mitigation strategies.

Common mistakes include treating the FRIA as a one-time exercise and focusing only on direct impacts. DilAIg’s tool simplifies the FRIA process. It guides you through each step. It generates the necessary documentation.


Next Steps

Conduct a FRIA for your AI system. Start Your Free FRIA Assessment

Download our free FRIA template. Get the Template

Need help? Book a Demo


Join the Conversation

Have you conducted a FRIA for your AI system? What challenges did you face? Share your thoughts in the comments or tweet us @DilAIg.


Further Reading

- Official EU AI Act Text: Article 27
- European Commission FRIA Guidelines
- DilAIg’s AI Act Compliance Hub
- FRIA vs DPIA: What’s the Difference?


This article is part of DilAIg’s AI Act Compliance Series. Next up: “AI Act Article 51: AI Registry Requirements Explained.”

Is your AI system compliant?

Free audit in 20 minutes.

Start the audit