
EU AI Act for SaaS Companies: What You Actually Need to Do Before August 2026

March 2026

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law. It entered into force on 1 August 2024, with obligations phasing in through August 2027. For SaaS companies that build or deploy AI features for EU users, the regulation applies regardless of where the company is based. Prohibited AI practices and AI literacy requirements are already enforceable. Transparency obligations and high-risk system rules take effect on 2 August 2026. Fines reach up to EUR 35 million or 7% of global annual turnover. This guide explains every obligation that matters for SaaS, maps the compliance timeline, identifies which AI features fall into which risk category, and provides a step-by-step action plan.

1. What Is the EU AI Act and Does It Apply to Your SaaS?

The EU AI Act (Regulation (EU) 2024/1689) was adopted on 21 May 2024 and entered into force on 1 August 2024. It is the first comprehensive legal framework for artificial intelligence anywhere in the world.

Like the GDPR, the AI Act has extraterritorial reach. It applies to any company that places an AI system on the EU market or whose AI system's output is used within the EU, regardless of where that company is established. As the European Commission states, the Act covers providers (developers), deployers (users in a professional capacity), importers, and distributors of AI systems.

Your SaaS is in scope if it does any of the following:

        Integrates AI features (chatbots, recommendation engines, automated decisions, content generation) used by EU customers

        Deploys or embeds a general-purpose AI model (such as an LLM) in a product accessible to EU users

        Uses AI internally to make decisions about EU-based individuals (hiring, credit, insurance, moderation)

        Provides AI-powered SaaS tools (analytics, scoring, classification) to EU-based businesses

2. The Risk-Based Framework: Four Tiers

The AI Act follows a risk-based approach. AI systems are classified into four tiers, each with different obligations. The classification determines what you must do. The official high-level summary from the AI Act website provides a useful overview.

| Risk Level | What It Means | Obligations | Penalty for Non-Compliance |
| --- | --- | --- | --- |
| Unacceptable (Prohibited) | AI practices that are fundamentally incompatible with EU values and rights | Completely banned: must not be developed, deployed, or used | Up to EUR 35 million or 7% of global annual turnover |
| High-Risk | AI systems in sensitive domains (employment, credit, education, biometrics, critical infrastructure) listed in Annex III | Risk management, data governance, technical documentation, transparency, human oversight, accuracy/robustness, conformity assessment, EU database registration | Up to EUR 15 million or 3% of global annual turnover |
| Limited Risk (Transparency) | AI systems that interact with people or generate/manipulate content (chatbots, deepfakes, generative AI) | Disclosure and labeling obligations under Article 50: users must know they are interacting with AI or viewing AI-generated content | Up to EUR 7.5 million or 1% of global annual turnover |
| Minimal/No Risk | AI systems that pose little or no risk (spam filters, AI-enhanced video games, most internal analytics) | No specific obligations beyond general AI literacy (Article 4) | N/A |

Where most SaaS AI features land: The majority of AI-powered SaaS features (chatbots, content generation, recommendation engines, summarization tools) fall into the limited-risk or minimal-risk tiers. However, if your product makes or supports decisions about people in employment, creditworthiness, insurance, education, or similar domains, you are likely operating a high-risk AI system.

3. What Is Already Enforceable (As of March 2026)

The AI Act uses a phased timeline. Some obligations are already in effect. The European Commission's overview and DLA Piper's analysis confirm the following milestones:

 

| Date | What Became Applicable | Key Reference |
| --- | --- | --- |
| 1 August 2024 | AI Act enters into force | Regulation (EU) 2024/1689 |
| 2 February 2025 | Prohibited AI practices banned (Article 5); AI literacy obligation applies (Article 4) | Chapter II, Article 4 |
| 2 August 2025 | GPAI model rules apply; governance infrastructure operational; penalty regime in effect (up to EUR 35M / 7%); member states must designate competent authorities | Chapter V, Articles 99-101 |
| 2 August 2026 | Full application: high-risk AI system rules, transparency obligations (Article 50), conformity assessments, EU database registration | Chapters III, IV |
| 2 August 2027 | Extended transition for high-risk AI in regulated products (Annex I); final deadline for GPAI models already on the market | Article 111 |

Important: In November 2025 the European Commission proposed, as part of the "Digital Omnibus" package, to extend the high-risk deadline to December 2027. As of March 2026, the proposal is still being negotiated by EU lawmakers and has not been adopted.

4. Prohibited AI Practices: What You Must Not Do

Article 5 lists eight AI practices that are completely banned. These have been enforceable since 2 February 2025, with penalties applicable since 2 August 2025. The European Commission published guidelines on 4 February 2025 explaining each prohibition in detail.

 

| Prohibited Practice | What It Covers | SaaS Relevance |
| --- | --- | --- |
| Manipulative/deceptive AI (Art. 5(1)(a)) | AI using subliminal, manipulative, or deceptive techniques to distort behavior, causing significant harm | Dark-pattern AI in pricing; urgency-based conversion tools that exploit user psychology |
| Exploiting vulnerabilities (Art. 5(1)(b)) | AI exploiting age, disability, or socioeconomic vulnerability to distort behavior harmfully | AI features targeting children, elderly users, or financially distressed individuals |
| Social scoring (Art. 5(1)(c)) | Evaluating or classifying people based on social behavior, leading to unjustified detrimental treatment | Reputation scoring; tenant rating systems based on unrelated social data |
| Criminal risk prediction (Art. 5(1)(d)) | Predicting crime risk based solely on profiling or personality traits | Unlikely for most SaaS, but relevant for security/compliance platforms |
| Untargeted facial scraping (Art. 5(1)(e)) | Building facial recognition databases through untargeted internet/CCTV scraping | Image processing, computer vision, or identity verification features |
| Workplace/education emotion recognition (Art. 5(1)(f)) | Inferring emotions via biometrics in workplaces or educational institutions | HR tech, EdTech, employee monitoring, proctoring software |
| Biometric categorization by sensitive traits (Art. 5(1)(g)) | Categorizing people by race, political opinions, sexual orientation, etc. from biometric data | Any SaaS processing biometric data |
| Real-time remote biometric ID in public spaces (Art. 5(1)(h)) | Real-time biometric identification in publicly accessible spaces (limited law-enforcement exceptions) | Physical security, smart city, and surveillance SaaS |

As Orrick's analysis explains, only practices explicitly listed in Article 5(1) are prohibited. The Commission interprets these prohibitions broadly, and organizations should not attempt to structure around them.

Penalty: Up to EUR 35 million or 7% of global annual turnover, whichever is higher. Italy has also introduced criminal penalties, including imprisonment for certain AI offences under its national implementing law (Law No. 132/2025).

5. AI Literacy: Already Mandatory

Article 4 requires all providers and deployers to ensure that their staff and contractors have a sufficient level of AI literacy. This obligation has been in effect since 2 February 2025.

AI literacy means ensuring that people who operate, oversee, or make decisions about AI systems understand how those systems work, their limitations, and the potential risks they pose. As Latham & Watkins explains, there are no direct fines for violating Article 4 itself. However, regulators will likely scrutinize AI literacy compliance during investigations, and inadequate training could increase liability if an AI system causes harm.

What this means in practice for SaaS companies:

        Train product, engineering, support, and sales teams on how your AI features work and their limitations

        Document your training program (dates, content, attendees) as evidence of compliance; a minimal record structure is sketched after this list

        Include AI literacy in onboarding for new hires and contractors

        Tailor training to roles: engineers need technical depth; sales needs to understand disclosure obligations
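One low-effort way to meet the documentation point above is to keep training records in a structured, queryable form. A minimal sketch in TypeScript; the shape and field names are illustrative assumptions, not anything Article 4 prescribes:

```typescript
// Illustrative record structure for documenting AI literacy training
// (Article 4). Field names are assumptions, not prescribed by the Act.
interface TrainingRecord {
  sessionDate: string;        // ISO date, e.g. "2026-03-10"
  topic: string;              // e.g. "Limitations of our LLM summarizer"
  audienceRole: "engineering" | "product" | "support" | "sales";
  attendees: string[];        // employee or contractor identifiers
  materialsUri: string;       // link to slides or a recording
}

// Example: log a session so it can be produced during an investigation.
// The URI is hypothetical.
const record: TrainingRecord = {
  sessionDate: "2026-03-10",
  topic: "Disclosure obligations for AI-generated content",
  audienceRole: "sales",
  attendees: ["emp-0042", "emp-0117"],
  materialsUri: "https://intranet.example.com/ai-literacy/2026-03-10",
};
```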

6. High-Risk AI Systems: Does Your SaaS Qualify?

An AI system is classified as high-risk under Article 6 if it falls into one of two categories: (1) it is a safety component of a product covered by EU harmonization legislation in Annex I, or (2) it is listed in one of the use-case domains in Annex III. High-risk rules apply from 2 August 2026.

 

| Annex III Domain | SaaS Examples That Could Be High-Risk |
| --- | --- |
| Biometrics | Identity verification SaaS, facial recognition APIs, emotion detection tools |
| Critical infrastructure | AI managing energy grids, water systems, transport logistics, or digital infrastructure security |
| Education and vocational training | AI-powered grading, admission decision support, adaptive learning systems, exam proctoring |
| Employment and worker management | AI resume screening, candidate ranking, performance evaluation, promotion recommendations, workforce scheduling |
| Access to essential services | AI credit scoring, insurance risk assessment, loan approval, social benefit eligibility, healthcare triage |
| Law enforcement | AI evidence evaluation, predictive analytics for law enforcement, recidivism risk tools |
| Migration and border control | AI visa processing, asylum claim assessment, border monitoring |
| Administration of justice | AI legal research tools that influence case outcomes, sentencing recommendation systems |

Critical note: An AI system listed in Annex III is always high-risk if it profiles individuals. There is a narrow exception under Article 6(3) for systems that do not materially influence decision outcomes, but under Article 6(4) you must document that non-high-risk assessment, and register the system, before placing it on the market.
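Because that assessment has to exist before the system reaches the market, many teams find it practical to capture the classification as a structured record next to the system itself. A hedged sketch; the enum values and field names are internal bookkeeping conventions, not statutory terms:

```typescript
// Illustrative Article 6 classification record. Names are assumptions
// for internal documentation, not terminology from the Act.
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface ClassificationAssessment {
  systemName: string;
  annexIIIDomain?: string;       // e.g. "employment", if any applies
  profilesIndividuals: boolean;  // profiling in an Annex III domain => always high-risk
  tier: RiskTier;
  rationale: string;             // why this tier was chosen
  assessedBy: string;
  assessedOn: string;            // ISO date; must predate market placement
}

function requiresHighRiskControls(a: ClassificationAssessment): boolean {
  // Profiling removes the Article 6(3) carve-out for Annex III systems.
  if (a.annexIIIDomain && a.profilesIndividuals) return true;
  return a.tier === "high";
}
```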

7. Obligations for High-Risk AI Systems (Articles 9-15)

If your AI system qualifies as high-risk, you face the most substantial compliance burden under the Act. As Dataiku's analysis summarizes, providers must meet requirements under Articles 9 through 15, and deployers have their own set of obligations.

| Requirement | Article | What It Means for SaaS Providers |
| --- | --- | --- |
| Risk management system | Art. 9 | Implement a continuous risk management process covering the entire lifecycle: identify risks, evaluate them, adopt mitigation measures, test for residual risk |
| Data governance | Art. 10 | Training, validation, and testing data must be relevant, representative, and free of errors to the best extent possible; document data sources, preprocessing, and known biases |
| Technical documentation | Art. 11 | Maintain detailed documentation covering system design, intended purpose, training data, testing methods, risk controls, and performance metrics (Annex IV) |
| Record-keeping (logging) | Art. 12 | Design the system to automatically log events relevant to identifying risks and substantial modifications; logs must be tamper-resistant and retained appropriately (a tamper-evident logging sketch appears at the end of this section) |
| Transparency for deployers | Art. 13 | Provide clear instructions for use so deployers understand capabilities, limitations, intended purpose, performance characteristics, and human oversight requirements |
| Human oversight | Art. 14 | Design the system so a natural person can effectively oversee it, understand its outputs, intervene, and override decisions; include a "stop" mechanism where appropriate |
| Accuracy, robustness, cybersecurity | Art. 15 | Achieve appropriate accuracy levels, resilience to errors and adversarial attacks, and cybersecurity throughout the lifecycle |

In addition, providers of high-risk systems must: complete a conformity assessment (self-assessment in most cases, third-party for biometric systems), affix the CE marking, register the system in the EU database (Article 49), and implement a post-market monitoring system (Article 72).
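Of these requirements, Article 12 record-keeping translates most directly into engineering work. One common pattern is an append-only, hash-chained event log, where each entry commits to its predecessor so later tampering is detectable. A minimal sketch using Node's built-in crypto module; the event schema is an assumption, not a mandated format:

```typescript
import { createHash } from "node:crypto";

// Illustrative tamper-evident event log for Article 12 record-keeping.
// Each entry embeds the hash of its predecessor, so any later edit to
// an earlier entry breaks the chain and is detectable on audit.
interface LogEntry {
  timestamp: string;   // ISO timestamp of the event
  event: string;       // e.g. "model_inference", "human_override"
  detail: string;      // free-form context (model version, input id, ...)
  prevHash: string;    // hash of the previous entry ("" for the first)
  hash: string;        // SHA-256 over this entry's fields plus prevHash
}

function appendEntry(log: LogEntry[], event: string, detail: string): LogEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(`${timestamp}|${event}|${detail}|${prevHash}`)
    .digest("hex");
  const entry: LogEntry = { timestamp, event, detail, prevHash, hash };
  log.push(entry);
  return entry;
}
```

On audit, recomputing each hash from the stored fields and comparing it against the recorded chain reveals any record that was altered or removed after the fact.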

8. Transparency Obligations (Article 50): Applicable from August 2026

Article 50 applies based on what a system does (interacting with people, generating content, recognizing emotions), not on its risk tier, so it covers all generative AI systems, not just high-risk ones. As Bird & Bird notes, Article 50 will likely affect the greatest number of organizations because its scope is far broader than the high-risk rules.

 

| Obligation | Who | What You Must Do |
| --- | --- | --- |
| AI interaction disclosure | Providers | Design AI systems so users know they are interacting with AI (e.g., chatbots must identify themselves as AI), unless this is obvious from context |
| Machine-readable marking of AI content | Providers | Ensure AI-generated outputs (text, images, audio, video) are marked in a machine-readable format and detectable as artificially generated; this includes watermarking, metadata, and provenance signals (an illustrative response envelope follows this table) |
| Emotion recognition / biometric disclosure | Deployers | Inform individuals when they are subject to emotion recognition or biometric categorization systems |
| Deepfake disclosure | Deployers | Disclose that image, audio, or video content is AI-generated or manipulated (deepfakes); artistic/satirical works require only minimal, non-intrusive disclosure |
| AI-generated text disclosure | Deployers | Disclose that text published to inform the public on matters of public interest was AI-generated, unless a human has taken editorial responsibility |
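At the API layer, the provider-side duties in the first two rows often reduce to metadata attached to every AI-produced output. A hedged sketch of one possible response envelope; the field names and disclosure wording are illustrative assumptions, and the forthcoming Code of Practice, not this shape, will set the actual benchmark:

```typescript
// Illustrative response envelope for Article 50 transparency. The
// disclosure fields are project conventions, not a published standard.
interface AiContentResponse {
  content: string;            // the generated text
  aiGenerated: true;          // machine-readable marking flag
  modelIdentifier: string;    // which system produced the output
  disclosureText: string;     // human-readable notice for the UI
}

function buildResponse(content: string, model: string): AiContentResponse {
  return {
    content,
    aiGenerated: true,
    modelIdentifier: model,
    disclosureText: "This response was generated by an AI assistant.",
  };
}

// A deployer's UI would render disclosureText near the output, and
// downstream tooling can key off the aiGenerated flag when republishing.
// Some teams also mirror the flag in a custom response header; no
// standard header exists yet, so any name is a project convention.
```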

The European Commission published the first draft of a Code of Practice on Transparency of AI-Generated Content on 17 December 2025, with finalization expected by June 2026. As Jones Day explains, although voluntary, this Code is likely to become the benchmark for what regulators consider adequate compliance with Article 50.

9. General-Purpose AI (GPAI) Model Rules

If your SaaS builds on or integrates a general-purpose AI model (such as an LLM from OpenAI, Anthropic, Google, Mistral, or others), the GPAI rules under Chapter V of the AI Act apply to the model provider, not typically to you as a downstream deployer. However, you should understand how these rules affect your supply chain.

GPAI providers (the model developers) must:

        Maintain technical documentation covering training, evaluation, and capabilities

        Provide downstream providers (you) with information to integrate the model compliantly

        Implement copyright compliance measures and publish a training data summary

        For systemic-risk models: conduct adversarial testing, track and report serious incidents, and ensure cybersecurity

What this means for SaaS companies using third-party models: You are a "deployer" or "downstream provider" in the AI Act's terminology. You are responsible for complying with your own obligations (transparency, high-risk requirements if applicable), but you should also verify that your GPAI model provider is meeting their Chapter V obligations. Request their technical documentation and information disclosures as part of vendor due diligence.
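To make that due diligence repeatable, some teams track the Chapter V items as a per-model checklist. A minimal sketch; the fields mirror the obligations listed above, and the names are assumptions rather than regulatory terminology:

```typescript
// Illustrative due-diligence record for a GPAI model supplier.
interface GpaiVendorReview {
  vendor: string;                       // model provider
  model: string;                        // specific model/version in use
  technicalDocsReceived: boolean;       // integration documentation on file
  trainingDataSummaryPublished: boolean;
  copyrightPolicyConfirmed: boolean;
  systemicRiskModel: boolean;           // triggers extra upstream obligations
  lastReviewed: string;                 // ISO date of the last check
}

// Example entry; vendor and model names are hypothetical.
const review: GpaiVendorReview = {
  vendor: "ExampleModelCo",
  model: "example-model-v3",
  technicalDocsReceived: true,
  trainingDataSummaryPublished: true,
  copyrightPolicyConfirmed: true,
  systemicRiskModel: false,
  lastReviewed: "2026-03-01",
};
```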

10. How the AI Act Interacts with the GDPR

The AI Act does not replace the GDPR. They operate in parallel. A single AI system can be subject to both regulations simultaneously. Key overlaps include:

| Area | GDPR | AI Act |
| --- | --- | --- |
| Automated decision-making | Article 22: right not to be subject to solely automated decisions with significant effects; right to human review | Article 14: human oversight requirements for high-risk AI systems; broader in scope than GDPR Art. 22 |
| Data quality | Article 5(1)(d): personal data must be accurate and kept up to date | Article 10: training, validation, and testing datasets must be relevant, representative, and free of errors |
| Transparency | Articles 13-14: inform data subjects about automated processing and profiling | Article 50: inform users about AI interaction; label AI-generated content; high-risk systems require detailed disclosure to deployers |
| Impact assessments | Article 35: DPIA required for high-risk processing | Article 9: continuous risk management system for high-risk AI; Article 27: fundamental rights impact assessment for deployers of high-risk AI |
| Data subject rights | Articles 15-22: access, rectification, erasure, portability, objection | No direct equivalent, but human oversight (Art. 14) and transparency (Art. 13) support exercising GDPR rights |

As Wilson Sonsini's 2026 preview notes, the European Commission is expected to publish guidance in 2026 clarifying the AI Act's interplay with EU data protection law. Until then, treat both sets of obligations as cumulative.

11. Fines and Enforcement

The AI Act's penalty regime has been in effect since 2 August 2025. As DLA Piper reports, competent authorities may now impose administrative fines for non-compliance:

 

| Violation Category | Maximum Fine | Examples |
| --- | --- | --- |
| Prohibited AI practices (Art. 5) | EUR 35 million or 7% of global annual turnover, whichever is higher | Operating a banned AI system; workplace emotion recognition; social scoring |
| High-risk system obligations; transparency obligations; other key duties | EUR 15 million or 3% of global annual turnover | Missing conformity assessment; no technical documentation; failing to disclose AI interaction |
| Supplying incorrect information to authorities | EUR 7.5 million or 1% of global annual turnover | Providing false or misleading data in registrations, audits, or investigations |

 

For SMEs and startups, the Act specifies that the lower of the fixed amount or percentage applies. Member states may also impose additional national penalties. Italy, for example, has introduced criminal penalties including imprisonment for certain offences.
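To see how the caps interact, it helps to work one example. A small sketch of the logic as described above; the function is illustrative, not legal advice:

```typescript
// Illustrative fine-cap calculation. Amounts in EUR. For large
// undertakings the higher of the two caps applies; for SMEs and
// startups the Act applies the lower, as noted above.
function maxFine(
  fixedCapEur: number,        // e.g. 35_000_000 for Art. 5 violations
  turnoverPct: number,        // e.g. 0.07 for 7%
  annualTurnoverEur: number,  // global annual turnover
  isSme: boolean
): number {
  const pctCap = turnoverPct * annualTurnoverEur;
  return isSme ? Math.min(fixedCapEur, pctCap) : Math.max(fixedCapEur, pctCap);
}

// Example: an SME with EUR 20M turnover facing an Art. 5 violation:
// min(35_000_000, 0.07 * 20_000_000) = EUR 1.4M maximum.
```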

As of March 2026, no public enforcement actions for AI Act violations have been announced. However, the Greenberg Traurig analysis notes that the Commission and national supervisory authorities have stated they will closely monitor implementation. Enforcement is expected to accelerate after August 2026 when the full set of obligations takes effect.

12. Step-by-Step Compliance Action Plan

Drawing from guidance by Orrick, Legal Nodes, and Greenberg Traurig, here is a practical action plan:

 

| Step | Action | Deadline |
| --- | --- | --- |
| 1 | Conduct an AI inventory: map every AI system and model your company develops, deploys, imports, or distributes in the EU (a minimal inventory sketch follows this table) | Immediately |
| 2 | Classify your role for each AI system: are you the provider, deployer, importer, or distributor? | Immediately |
| 3 | Classify each AI system by risk level: prohibited, high-risk, limited-risk, or minimal-risk | Immediately |
| 4 | Remove or redesign any prohibited AI practices (Article 5) | Already required |
| 5 | Implement AI literacy training for all staff involved with AI systems and document it | Already required |
| 6 | For high-risk systems: begin building risk management, data governance, technical documentation, logging, human oversight, and accuracy/robustness measures (Articles 9-15) | Before 2 Aug 2026 |
| 7 | For high-risk systems: complete conformity assessment, prepare CE marking, register in the EU database | Before 2 Aug 2026 |
| 8 | Implement Article 50 transparency obligations: chatbot disclosure, AI content marking/labeling, deepfake disclosure | Before 2 Aug 2026 |
| 9 | Review all AI vendor contracts: ensure your GPAI model providers are meeting their Chapter V obligations | Before 2 Aug 2026 |
| 10 | Establish an AI governance framework: assign responsible persons, define internal policies, create incident reporting processes | Before 2 Aug 2026 |
| 11 | Set up post-market monitoring for high-risk AI systems | Before 2 Aug 2026 |
| 12 | Monitor member state implementing laws (e.g., Italy's Law 132/2025) for jurisdiction-specific requirements | Ongoing |
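Steps 1 through 3 are easiest to maintain as a single machine-readable inventory that the rest of the plan references. A minimal sketch; the record shape is an assumption for internal use, not a regulatory format:

```typescript
// Illustrative AI inventory entry covering steps 1-3 of the plan.
type ActorRole = "provider" | "deployer" | "importer" | "distributor";
type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiInventoryEntry {
  system: string;            // internal name of the AI feature or service
  description: string;       // what it does and for whom
  role: ActorRole;           // your role for this system (step 2)
  tier: RiskTier;            // risk classification (step 3)
  euExposure: boolean;       // placed on the EU market, or outputs used in the EU
  upstreamModel?: string;    // GPAI model it builds on, if any
  owner: string;             // accountable person (step 10)
}

// Example entry; the system and model names are hypothetical.
const entry: AiInventoryEntry = {
  system: "support-chatbot",
  description: "LLM-backed chat assistant for EU customer support",
  role: "deployer",
  tier: "limited",
  euExposure: true,
  upstreamModel: "example-llm-v2",
  owner: "head-of-product",
};
```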

13. Common Mistakes SaaS Companies Should Avoid

Assuming the AI Act only applies if you build foundation models. Wrong. The Act applies to anyone who places an AI system on the EU market or deploys one. If your SaaS uses a third-party model to power features, you are a deployer (and potentially a downstream provider) with your own obligations.

Waiting until August 2026 to start. Prohibited practices and AI literacy have been enforceable since early 2025. Penalties have been in effect since August 2025. Starting compliance work in 2026 leaves no margin for error.

Treating AI compliance as separate from GDPR compliance. A single AI feature can trigger obligations under both the AI Act and the GDPR. Your data protection team and AI compliance team should coordinate, not work in parallel silos.

Ignoring the transparency obligations because your AI is "low risk." Article 50 applies to all generative AI systems regardless of risk level. If your SaaS has a chatbot, generates content, or produces synthetic media, you have transparency obligations.

Overlooking national implementing laws. Member states are passing their own laws (Italy's Law 132/2025, Germany's designation of the Federal Network Agency). These may include additional requirements or stricter penalties.

Failing to document your risk classification. If your AI system falls under Annex III but you believe it is not high-risk, Article 6(4) requires you to document that assessment before placing the system on the market. The burden of proof is on you.

 

 

References

1. European Commission - AI Act Regulatory Framework - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
2. EU AI Act Full Text and Explorer - https://artificialintelligenceact.eu/
3. EU AI Act High-Level Summary - https://artificialintelligenceact.eu/high-level-summary/
4. Article 5 - Prohibited AI Practices - https://artificialintelligenceact.eu/article/5/
5. Article 6 - Classification Rules for High-Risk AI Systems - https://artificialintelligenceact.eu/article/6/
6. Annex III - High-Risk AI Systems List - https://artificialintelligenceact.eu/annex/3/
7. Article 50 - Transparency Obligations - https://artificialintelligenceact.eu/article/50/
8. EC Guidelines on Prohibited AI Practices (Feb 2025) - https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
9. EC Code of Practice on AI-Generated Content Transparency - https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content
10. DLA Piper - Latest Wave of EU AI Act Obligations (Aug 2025) - https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
11. Wilson Sonsini - 2026 AI Regulatory Developments Preview - https://www.wsgr.com/en/insights/2026-year-in-preview-ai-regulatory-developments-for-companies-to-watch-out-for.html
12. Orrick - 6 Steps Before 2 August 2026 - https://www.orrick.com/en/Insights/2025/11/The-EU-AI-Act-6-Steps-to-Take-Before-2-August-2026
13. Orrick - EC Guidelines on Prohibited Practices Analysis - https://www.orrick.com/en/insights/2025/04/eu-commission-publishes-guidelines-on-the-prohibited-ai-practices-under-the-ai-act
14. Greenberg Traurig - Key Compliance Considerations (Jul 2025) - https://www.gtlaw.com/en/insights/2025/7/eu-ai-act-key-compliance-considerations-ahead-of-august-2025
15. Legal Nodes - EU AI Act 2026 Compliance Requirements - https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks
16. SIG - Comprehensive EU AI Act Summary (Jan 2026) - https://www.softwareimprovementgroup.com/blog/eu-ai-act-summary/
17. Latham & Watkins - AI Literacy and Prohibited Practices - https://www.lw.com/en/insights/upcoming-eu-ai-act-obligations-mandatory-training-and-prohibited-practices
18. Bird & Bird - Draft Transparency Code of Practice Analysis - https://www.twobirds.com/en/insights/2026/taking-the-eu-ai-act-to-practice-understanding-the-draft-transparency-code-of-practice
19. Jones Day - Draft Code of Practice on AI Labelling - https://www.jonesday.com/en/insights/2026/01/european-commission-publishes-draft-code-of-practice-on-ai-labelling-and-transparency
20. Dataiku - High-Risk AI System Requirements Guide - https://www.dataiku.com/stories/blog/eu-ai-act-high-risk-requirements
21. Future of Privacy Forum - AI Act Prohibited Practices (Art. 5) - https://fpf.org/blog/red-lines-under-the-eu-ai-act-understanding-prohibited-ai-practices-and-their-interplay-with-the-gdpr-dsa/
22. WilmerHale - Article 50 Transparency Deep Dive - https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240528-limited-risk-ai-a-deep-dive-into-article-50-of-the-european-unions-ai-act
23. DataGuard - EU AI Act Timeline - https://www.dataguard.com/eu-ai-act/timeline