
GDPR Digital Omnibus: What Changes for Compliance in 2026

The European Commission’s 2025 Digital Omnibus Proposal and its associated documents constitute a substantive amendment to the GDPR, intended to reduce the compliance burden on European businesses and explicitly facilitate the development of Artificial Intelligence within the Union.[1] The proposal originates from a recognition that the "cumulative impact" of the EU's digital rules,[2] including the Data Act, DORA, NIS2 and the AI Act, has resulted in a fragmented and excessively burdensome environment for European industry. The resulting shift toward simplification is an attempt to streamline the regulatory landscape and relieve the compliance pressures facing businesses in the digital single market.[3]

The Redefinition of Personal Data: The Shift to a "Relative" Standard

Perhaps the most significant and controversial amendment proposed in the 2025 Omnibus is the revision of Article 4(1) of the GDPR concerning the definition of personal data. For nearly a decade, the European approach to personal data has leaned toward a broad standard of identifiability. Rather than adopting a purely ‘absolute’ test, where data would be considered personal if anyone could identify the subject, the CJEU in Case C-582/14 Breyer established a relative but expansive approach: data is personal where the controller has reasonably available means to identify the data subject, including through recourse to third parties (such as requesting information from an internet service provider). This built upon the earlier, broader guidance of the Article 29 Working Party (Opinion 4/2007 on the concept of personal data), which had favoured a more objective interpretation under the previous Data Protection Directive. The position under the GDPR, specifically Recital 26, reflects this synthesis, providing that account should be taken of ‘all the means reasonably likely to be used’[4] by the controller or another person to identify the individual, having regard to factors such as cost, time and available technology.

The 2025 proposal seeks to codify and refine this standard, heavily influenced by the CJEU’s judgment in the Single Resolution Board case (C-413/23 P). Importantly, SRB narrows the Breyer approach by clarifying that where pseudonymised data is transmitted to a recipient who lacks the means to re-identify the data subject, that data is not personal data in that recipient’s hands. The proposal therefore moves beyond merely codifying the existing framework and enshrines a firmly relative test, potentially restricting the scope of personal data where pseudonymisation measures are in place.[5]

In the SRB case, the CJEU held that information is not personal data for a recipient if that recipient does not have the "means reasonably likely to be used" to identify the individual, and has no legal means to access the additional information (the key) held by the sender.[6]

The 2025 Digital Omnibus proposes to write this judicial interpretation directly into the GDPR. The amendments to Article 4(1) add three clarifying sentences:

  1. Information relating to a natural person is not automatically personal data for every entity just because another entity can identify that person.[7]
  2. Information is not personal data for a given entity if that entity cannot identify the individual, taking into account the "means reasonably likely to be used" by that specific entity.[8]
  3. Information does not become personal data for an entity merely because a potential subsequent recipient has the means to identify the individual.[9]

This represents a paradigm shift. Under the previous approach, if a dataset was encrypted but the key existed somewhere in the world, the encrypted dataset was often treated as personal data by cautious compliance teams.[10] Under the proposed approach, the status of the data depends entirely on the capabilities and legal rights of the specific holder.[11]

Implications for Data Processors and the Cloud Economy

The practical implications of this redefinition are profound for the cloud computing and data processing industries. Previously, transferring key-coded data to a third party created uncertainty. If the sender held the key, was the recipient processing personal data? The Omnibus clarifies that if the recipient has no access to the key and re-identification would require disproportionate effort or illegal action, the data is non-personal in their hands.[12]
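The key-coding mechanics behind this analysis can be sketched in a few lines (purely illustrative; the record fields and token scheme here are assumptions, not anything prescribed by the proposal):

```python
import secrets

def pseudonymise(records, id_field="email"):
    """Replace the direct identifier in each record with a random token.

    Returns the key-coded records (which may be shared) and the
    re-identification key (token -> identifier), which the sender
    retains and never transmits.
    """
    key = {}
    coded = []
    for record in records:
        token = secrets.token_hex(8)
        key[token] = record[id_field]
        coded.append({**record, id_field: token})
    return coded, key

# Sender side: the data and the key are split.
records = [{"email": "alice@example.com", "plan": "pro"}]
coded, key = pseudonymise(records)

# Recipient side: without `key`, the token reveals nothing about the person.
# Under the proposed relative test, whether `coded` is personal data for the
# recipient turns on whether they have means reasonably likely to be used
# to obtain `key` (or otherwise re-identify the subject).
```

The legal question the Omnibus answers is precisely where this split leaves the recipient: so long as the key remains genuinely out of reach, the coded records would be non-personal in their hands.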

For entities receiving effectively anonymized or heavily pseudonymized datasets, the full weight of GDPR obligations (e.g., responding to Subject Access Requests, maintaining Records of Processing Activities) may no longer apply, provided they cannot re-identify the subjects.[13]

To provide further legal certainty, the proposal empowers the Commission to adopt implementing acts that specify exactly when pseudonymized data constitutes personal data, based on the state of the art in re-identification techniques.[14]

Regulatory Backlash and the Risks

While this simplification is welcomed by industry, especially when backed by claims of an estimated €4 billion in savings for businesses by 2029,[15] the EDPB and EDPS have recently issued a Joint Opinion strongly opposing this change. They argue that the amendment does not ‘accurately reflect’ and ‘goes beyond’ the CJEU jurisprudence.[16] Furthermore, they argue that it significantly narrows the concept of personal data, contradicts the protective intent of the GDPR and misinterprets the SRB judgment by stripping away the specific context of that case.[17]

This creates a dual-track reality for compliance officers. The Commission is signalling a relaxation of the rules to foster the data economy, while the DPAs, who actually enforce those rules on the ground, are signalling that they view this relaxation as potentially enabling violations of fundamental rights. Until the Omnibus is formally adopted and tested in court, organisations should be cautious: relying prematurely on the ‘relative’ definition to de-classify datasets as non-personal could invite enforcement action before it becomes binding law.

Artificial Intelligence and the "Legitimate Interest" Breakthrough (Article 88c)

The intersection of AI development and the GDPR has been a source of immense legal friction. AI developers, particularly those training Large Language Models (LLMs), have struggled to identify a lawful basis for scraping or processing public data. Consent (Article 6(1)(a)) is often impossible to obtain from millions of people, and Legitimate Interest (Article 6(1)(f)) has been viewed as legally risky due to the lack of explicit regulatory endorsement.[18]

The 2025 Digital Omnibus addresses this head-on by introducing a new Article 88c, which explicitly establishes that the "development and operation…” of an AI system constitutes a legitimate interest.[19] The proposed Article 88c effectively ends the debate about whether training an AI model is a valid "interest" under the GDPR. By codifying this, the Commission shifts the compliance focus from finding a lawful basis to safeguarding that basis.

The proposal provides that ‘Where the processing of personal data is necessary for the interests of the controller in the context of the development and operation of an AI system as defined in Article 3, point (1), of… [the AI Act]… or an AI model, such processing may be pursued for legitimate interests within the meaning of Article 6(1)(f) of…[the GDPR]…, where appropriate, except where other Union or national laws explicitly require consent, and where such interests are overridden by the interests, or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.’[20]

While the interest is presumed legitimate, the controller must still conduct a balancing test, demonstrating that this interest is not overridden by the fundamental rights and freedoms of the data subjects.[21] Article 88c further provides that such processing must be subject to safeguards for the rights and freedoms of data subjects, respect data minimisation, and include measures to protect against the disclosure of personal data in the AI system’s outputs. Notably, it must also provide data subjects with an unconditional right to object to the processing of their personal data.[22] If a user objects to their data being used for training purposes, the system must respect this. However, this raises difficult technical questions about machine unlearning (how to remove a specific individual’s data impact from a model that has already been trained). Current large language models do not neatly store individual data points in ways that can be surgically removed; full retraining is prohibitively expensive, and approximate unlearning techniques are still immature.

For the broader regulatory landscape, this signals a deliberate choice by the Commission to prioritise AI competitiveness while maintaining a rights-based framework. It also creates potential tension with existing DPA guidance (as several authorities have taken stricter positions on AI training)[23] and raises questions about how national laws requiring consent will interact with this presumption. 

The Sensitive Data Paradox and Article 9

A noteworthy bottleneck for AI safety has been the prohibition on processing special category data (e.g., race, ethnicity, health data, political opinions) under Article 9 GDPR. Paradoxically, to ensure an AI system is not biased against minorities, developers need to test it against diverse datasets that include these protected characteristics.[24] If they strip all sensitive data, they cannot measure bias ("fairness through unawareness" is often ineffective).

The AI Act partially addressed this tension through Article 10(5), which permits the processing of special category data for bias monitoring, but only for high-risk AI systems and subject to strict safeguards.[25] The Digital Omnibus now goes further by amending Article 9 GDPR itself, inserting a new exception at paragraph 2(k) that applies to the ‘development and operation of an AI system…or an AI model’ generally, not limited to high-risk systems.[26] This is a significant broadening of the scope.

However, the nature of this exception should not be overstated. The accompanying new paragraph 5 does not grant developers a positive right to collect or process sensitive data for AI purposes.[27] Instead, it establishes what might be described as a ‘tolerate but minimise’ framework.[28] Controllers are first required to implement appropriate organisational and technical measures to avoid the collection of special category data altogether. Where, despite these measures, such data is identified in training, testing or validation datasets, the controller must remove it. Only where removal would require disproportionate effort does the exception bite and, in that case, the controller must, without undue delay, implement effective protections to prevent the data from being disclosed in the AI system’s outputs. The logic, in essence, is to allow a model to learn from residual sensitive data without being able to regurgitate it.

The EDPB and the EDPS have taken a cautiously supportive but qualified position on this proposal. They acknowledge the practical reality that when collecting data for training, testing and validation of AI systems, particularly general-purpose models, it is not always possible for controllers to avoid residual and incidental processing of special category data.[29] Their first objection is textual.[30] Article 9(2)(k) as proposed refers broadly to ‘processing in the context of the development and operation of an AI system or an AI model’, while the accompanying Recital 33 frames the derogation more narrowly around incidental and residual processing.[31] They argue that this narrower framing must be inserted into the enacting terms themselves, not merely left in the recitals, to prevent the exception from being read as a broader licence to process special category data deliberately in the AI context.[32] Otherwise, there is a risk that controllers could interpret the provision as authorising more than the Commission intended.

They also take issue with the threshold for invoking the exemption. They recommend that proposed Article 9(5) explicitly require that deletion of special category data be ‘impossible or involve disproportionate efforts’ as a precondition, and that the controller’s assessment be properly documented with reference to the state of the art in technology and the impact on data subjects.[33] In other words, the ‘disproportionate effort’ standard should not become a convenient excuse; it must be evidenced. Lastly, they submit that safeguards should apply across the entire AI development lifecycle, not merely at the point of data collection, and that the text should make clear that residual special category data protected under this exemption cannot be re-used for other purposes.[34]

Finally, and perhaps most significantly for the coherence of the EU regulatory framework, the EDPB and the EDPS flag the potential for confusion between this GDPR amendment and the separate AI Omnibus Proposal’s Article 4a of the AI Act. Article 4a covers the intentional processing of special category data for the specific purpose of bias detection and correction, whereas Article 9(2)(k) GDPR addresses incidental and residual processing that occurs despite best efforts to avoid it.[35]

The Migration of Cookie Rules

For many years, the EU's rules on cookies and tracking technologies were stranded in the outdated ePrivacy Directive, awaiting the stalled ePrivacy Regulation. The 2025 Omnibus takes a decisive step by moving these rules directly into the GDPR via new Articles 88a and 88b.[36] This consolidation is intended to simplify the legal landscape and address the phenomenon of "consent fatigue," where users are bombarded with cookie banners they blindly accept.

Article 88a establishes that the storage of or access to personal data on a user’s device generally requires explicit consent. This rule serves as the primary protection for terminal equipment, ensuring that individuals maintain control over their digital privacy. There are specific, narrow exceptions where consent is unnecessary, such as when the processing is essential for transmitting communications, providing a service specifically requested by the user, maintaining security, or performing basic audience measurement for the controller’s own use.[37] The provision also introduces strict procedural requirements for how consent is requested and managed. Controllers must provide an easy, single-click method for users to refuse consent, ensuring that refusal is as simple as acceptance.[38] If the user declines, the controller is prohibited from asking again for the same purpose for at least six months. Conversely, if the user provides consent, the controller must honour that choice and refrain from repeated requests during the lawful duration of that consent.
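The six-month cooldown on repeat consent prompts is straightforward to encode; a minimal sketch (treating ‘six months’ as 182 days is an assumption for illustration — the proposal does not fix a day count):

```python
from datetime import datetime, timedelta

# "At least six months" approximated as 182 days for illustration.
REPROMPT_COOLDOWN = timedelta(days=182)

def may_ask_again(last_refusal: datetime, now: datetime) -> bool:
    """May the controller re-request consent for the same purpose?

    Under proposed Article 88a: not within six months of a refusal.
    """
    return now - last_refusal >= REPROMPT_COOLDOWN

refused = datetime(2026, 1, 10)
print(may_ask_again(refused, datetime(2026, 3, 1)))   # False: ~7 weeks later
print(may_ask_again(refused, datetime(2026, 7, 15)))  # True: past six months
```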

Article 88b focuses on modernising the expression of these choices through automated and machine-readable means. Controllers are required to ensure their interfaces can interpret and respect signals sent automatically by a user’s software or device,[39] such as ‘do-not-track’ settings. While media service providers are exempt from some of these technical requirements, larger web browser providers are specifically tasked with providing the necessary technical tools to facilitate these automated choices.[40] This system will be supported by standardised European interpretations to ensure consistency across the digital market, with various implementation deadlines stretching from six to forty-eight months after the regulation takes effect.[41]
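How a controller’s backend might honour such a signal can be sketched as follows. The use of the Global Privacy Control header (`Sec-GPC`) is an assumption: the proposal leaves the concrete signal format to later standardisation, so GPC simply stands in here for whatever machine-readable refusal signal is ultimately adopted:

```python
def evaluate_consent_request(headers, is_media_service_provider=False):
    """Decide whether a consent banner may be shown, given any automated
    refusal signal in the request headers (sketch of proposed Art. 88b).
    """
    signal_refuses = headers.get("Sec-GPC") == "1"
    if signal_refuses and not is_media_service_provider:
        # The automated refusal is binding: no tracking, no banner.
        return {"tracking_allowed": False, "may_prompt": False}
    if signal_refuses and is_media_service_provider:
        # Media service providers may still ask the user to reconsider.
        return {"tracking_allowed": False, "may_prompt": True}
    # No refusal signal: consent may be requested in the ordinary way,
    # but tracking stays off until it is actually granted.
    return {"tracking_allowed": False, "may_prompt": True}
```

Note that `tracking_allowed` is `False` in every branch: the signal only governs whether a prompt is permitted; actual tracking still requires affirmative consent.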

In a notable concession to the publishing industry, the proposal exempts media service providers from the obligation to respect these automated signals.[42] The Commission acknowledges that independent media rely heavily on advertising revenue to fund journalism and pluralism. Media sites argue that a blanket "reject all" signal from a browser threatens their economic survival. Therefore, they are permitted to override the signal and present a banner asking the user to reconsider or offering a "pay or consent" alternative. This exemption applies only to defined "media service providers" under the European Media Freedom Act. E-commerce sites, corporate websites, and platforms do not benefit from this and must honour the automated signal.

The Audience Measurement Exemption

Under Article 88a(3)(c), consent would not be required for the use of cookies involving personal data, or for subsequent processing, where the purpose is aggregated audience measurement for the controller’s own use, among other exceptions. This aligns EU law with guidance previously issued by authorities such as the French CNIL, which allowed privacy-preserving analytics without consent.[43]

Harmonization of Incident Reporting and Enforcement

Current compliance involves a fragmented reporting landscape where a single data breach might trigger reporting obligations under the GDPR (72 hours),[44] NIS2 (24 hours)[45] and DORA (various timelines).[46] This fragmentation leads to "reporting fatigue", where organisations facing a live security incident must divert resources from containment and remediation to satisfy overlapping bureaucratic requirements across multiple regulators. The Omnibus proposes to introduce a Single-Entry Point for all these notifications to streamline the process, underpinned by a ‘submit once’ principle.[47]

A Submit Once Principle

Under this model, an affected organisation would file a single notification through the portal, and the system would then route the relevant elements of that report to the appropriate authorities, whether that is the national Data Protection Authority for personal data issues under the GDPR, the Computer Security Incident Response Team (NIS2) or the financial supervisory authority for incidents falling within DORA’s scope.[48] This mechanism requires robust backend coordination between different national authorities to ensure that the correct regulator takes the lead.

In a move designed to prioritise regulatory attention on genuinely significant threats, the Omnibus proposes meaningful changes to the GDPR's breach notification framework under Article 33. The reporting window is extended from 72 hours to 96 hours, providing controllers with an additional day to assess the scope and severity of an incident before filing.[49] While this extension may appear modest, it reflects a practical acknowledgment that the original 72-hour window, particularly when counting from the moment of "awareness," often proved insufficient for complex breaches involving multiple systems, third-party processors, or cross-border data flows. The additional time should allow for more accurate initial notifications, reducing the frequency of incomplete reports that subsequently require multiple amendments.
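The deadline arithmetic itself is trivial, but worth making concrete; a small sketch with invented dates (the GDPR counts from the moment the controller becomes ‘aware’ of the breach):

```python
from datetime import datetime, timedelta, timezone

CURRENT_WINDOW = timedelta(hours=72)    # Article 33(1) GDPR today
PROPOSED_WINDOW = timedelta(hours=96)   # as amended by the Omnibus proposal

def notification_deadline(awareness: datetime, window: timedelta) -> datetime:
    """The clock runs from the moment of 'awareness' of the breach."""
    return awareness + window

aware = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware, CURRENT_WINDOW))   # 2026-03-05 09:00:00+00:00
print(notification_deadline(aware, PROPOSED_WINDOW))  # 2026-03-06 09:00:00+00:00
```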

Under the current Article 33(1), a controller must notify the supervisory authority of any breach unless it is ‘unlikely to result in a risk to the rights and freedoms of natural persons’.[50] The Omnibus would raise this bar significantly, requiring notification only where the breach poses a ‘high risk’ to individuals.[51] This aligns the trigger for authority notification with the current threshold for individual notifications under Article 34(1), effectively collapsing what were previously two distinct thresholds into one. The change aims to filter out the noise of minor breaches that currently clog regulatory inboxes, but it places a heavier burden on the controller's internal risk assessment. An organisation that genuinely misjudges the risk level and fails to notify could face enforcement action, creating a perverse incentive either to over-report out of caution, which would defeat the purpose of the reform, or to engage in motivated reasoning to avoid the costs and reputational consequences of notification. Recognising this challenge, the proposal tasks the EDPB with developing detailed guidelines and illustrative lists of scenarios that constitute high risk,[52] and the quality and specificity of this guidance will be decisive.[53]

Abuse of Data Subject Access Requests (DSARs)

The Omnibus also turns its attention to the increasingly contentious issue of DSARs under Article 15.[54] The proposal seeks to provide controllers with stronger grounds to refuse requests or charge a reasonable fee where the request is ‘manifestly unfounded or excessive’, specifically targeting requests that are instrumentalised for purposes unrelated to data protection.[55] This provision responds to a well-documented pattern across Member States in which DSARs have been deployed as tactical tools in employment disputes, commercial litigation, and regulatory complaints, where the requester’s actual objective is not to understand how their personal data is being processed but to extract documents useful for a parallel legal claim.[56] In the UK, for instance, the phenomenon of ‘DSAR as discovery’ has become sufficiently widespread that the ICO has issued specific guidance on the issue, and employment tribunals regularly encounter access requests timed to coincide with grievance proceedings.[57]

It should be noted that Article 15 exists as a fundamental right under the Charter of Fundamental Rights of the EU (Article 8), and fundamental rights are not ordinarily subject to a motive test. The EDPB has also raised concerns about linking the concept of abuse of rights to situations where individuals exercise their right of access for purposes other than data protection. They emphasise that the GDPR protects fundamental rights and freedoms broadly, and that the Court of Justice of the European Union has confirmed that individuals may exercise their access rights for objectives beyond simply verifying the lawfulness of data processing.[58]

They argue that legislation should focus on abusive intention, such as a clear intent to harm the controller, rather than on the purpose behind the request. They also oppose the suggestion that overly broad or undifferentiated requests should automatically be considered excessive, noting that the right of access is meant to enable individuals to understand how their data is processed and that the GDPR already allows controllers to request clarification when needed. In addition, they recommend maintaining the current high threshold for rejecting requests as manifestly unfounded or excessive and express doubts about introducing the notion of ‘reasonable grounds to believe’. Finally, they stress that any assessment of whether a request is excessive or unfounded should be objective, properly documented, and accompanied by an opportunity for the data subject to clarify their request before it is rejected.[59]

Conclusion

The 2025 Digital Omnibus Proposal represents one of the most ambitious attempts to recalibrate the European data protection framework since the GDPR's adoption. By redefining personal data along a firmly relative standard, creating an explicit legitimate interest basis for AI development, carving out a pragmatic exception for incidental processing of special category data, migrating cookie rules into the GDPR, and streamlining breach notification through a single-entry point, the Commission has signalled a clear intent to reduce regulatory friction and position the EU as a competitive environment for AI innovation and the broader data economy. Yet ambition is not the same as consensus. The sharp opposition from the EDPB and EDPS on several key provisions, particularly the narrowing of the personal data definition and the scope of the Article 9 exception, reveals a fundamental tension between the Commission's competitiveness agenda and the supervisory authorities' mandate to uphold fundamental rights. For compliance professionals, the immediate takeaway is one of cautious preparation rather than premature reliance. Organisations should begin mapping out how the proposed changes would affect their data classification practices, AI training pipelines, cookie management systems, and breach response procedures, while recognising that the final text may differ substantially from the current proposal once it passes through the legislative process and is tested before the courts. The Omnibus may well deliver the simplification it promises, but only if the delicate balance between economic efficiency and individual rights is maintained throughout its implementation.

References

[1] The proposal explicitly states, ‘…the amendments focus on unlocking opportunities in the use of data, as a fundamental resource in the EU economy, not least in view of supporting the development and use of trustworthy artificial intelligence solutions in the EU market.’

[2] Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on the simplification of the digital acquis (Digital Omnibus for the digital acquis)’ COM(2025) XXX draft, page 2, available here: https://cdn.netzpolitik.org/wp-upload/2025/11/EU-Kommission-Digital-Omnibus-A-Data-Act-und-DSGVO.pdf

[3] Ibid.

[4] Recital 26, GDPR.

[5] Commission [n2], article 3(1) amending article 4(1) of Regulation (EU) 2016/679.

[6] Case C-413/23 P European Data Protection Supervisor v Single Resolution Board ECLI:EU:C:2025:645, paras 71-82

[7] Ibid, Article 3(1)(a).

[8] Ibid.

[9] Ibid.

[10] EDPB, ‘Guidelines 01/2025 on Pseudonymisation’ (Version 1.0 for public consultation, 16 January 2025) 4 (stating that pseudonymised data are personal data, a position now in tension with the CJEU ruling); see also Article 29 Data Protection Working Party, ‘Opinion 4/2007 on the Concept of Personal Data’ (WP 136, 20 June 2007) 15-16.

[11] Commission [n2], article 3(1) and recital 27.

[12] https://www.law.kuleuven.be/citip/blog/personal-data-in-the-digital-omnibus-where-are-we-going/

[13] https://www.insideprivacy.com/eu-data-protection/european-commission-proposes-revisions-to-gdpr-and-other-digital-rules-under-digital-omnibus-package/

[14] Ibid.

[15] https://www.geciclaw.com/eu-digital-omnibus/

[16] EDPB-EDPS Joint Opinion 2/2026, ‘On the Proposal for a Regulation as regards the simplification of the digital legislative framework (Digital Omnibus)’, 10 February 2026, available here: https://www.edpb.europa.eu/system/files/2026-02/edpb_edps_jointopinion_202602_digitalomnibus_en.pdf, para 17.

[17] Ibid, page 10.

[18] European Parliamentary Research Service, ‘The Impact of General Data Protection Regulation (GDPR) on artificial intelligence’, available here: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf, section 3.

[19] Commission [n2], Article 88c.  

[20] Ibid.

[21] Ibid.

[22] Ibid.

[23] For instance, in March 2023, the Italian DPA imposed a temporary ban on ChatGPT citing the lack of legal basis for the massive collection and processing of personal data used to train algorithms, as well as failure to provide users with a privacy notice, for more information: https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2023/04/the-italian-data-protection-authority-halts-chatgpt-s-data-proce.html

[24] https://iapp.org/news/a/the-ai-acts-debiasing-exception-to-the-gdpr

[25] Article 10(5), AI Act.

[26] Commission [n2], Article 3(3).

[27] Ibid, Article 3(3)(b).

[28] Ibid, see also recital 33.

[29] EDPB-EDPS [n16]. See also the EDPB Press Release: ‘The EDPB and the EDPS welcome the proposal’s aim to introduce specific derogation to the prohibition to process sensitive data, subject to conditions, covering the incidental and residual processing of such data in the context of the development and operation of AI systems or models’, here: https://www.edps.europa.eu/press-publications/press-news/press-releases/2026/digital-omnibus-edpb-and-edps-support-simplification-and-competitiveness-while-raising-key-concerns_en

[30] EDPB-EDPS [n16], see generally section 6.2.

[31] Ibid, paras 46-52.

[32] Ibid.

[33] Ibid, para 50.

[34] Ibid, para 51.

[35] Ibid, para 52.

[36] Commission [n2], Article 3(15) inserting Article 88a & b to the GDPR.

[37] Ibid, see also recital 44.

[38] Ibid, Article 3(15) inserting article 88a(4) to the GDPR.

[39] Ibid, Article 3(15) inserting Article 88b(1)-(2) to the GDPR.

[40] Ibid, 88b(2).

[41] Ibid, Article 88b(7).

[42] Ibid, Article 88b(3).

[43] https://www.cnil.fr/fr/cookies-solutions-pour-les-outils-de-mesure-daudience

[44] GDPR, Article 33(1).

[45] NIS2 Directive, Article 23(4)(a).

[46] DORA, Articles 17-19.  

[47] Commission [n2], inserting article 23a into Directive 2022/2555, establishing the single-entry point operated by ENISA.

[48] Ibid, Article 23a NIS2 (as proposed) (setting out the routing mechanism, interoperability requirements, and access rules for the single-entry point across GDPR, NIS2, DORA, CER and eIDAS).

[49] Ibid, Article 3(8) amending Article 33 GDPR to extend the notification deadline to 96 hours.

[50] GDPR, Article 33(1).

[51] Commission, [n49].

[52] Ibid, Article 3(8)(c).

[53] Ibid.

[54] Ibid, Article 3(4).

[55] Ibid.

[56] See Case C‑307/22 FT (Copies du dossier médical) ECLI:EU:C:2023:811 (CJEU confirming that a DSAR cannot be refused merely because the data subject pursues purposes unrelated to data protection, but that the abuse exception under art 12(5) remains available); Case C‑526/24 Brillen Rottler, Opinion of AG Szpunar, ECLI:EU:C:2025:723 (18 September 2025) (opining that an initial DSAR may constitute an abuse of rights in exceptional circumstances where the controller can demonstrate abusive intention on the part of the data subject).

[57] Information Commissioner's Office, 'Subject access request Q and As for employers' (24 May 2023) https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/employment/subject-access-request-q-and-as-for-employers/

[58] EDPB-EDPS [n16], section 7.1.

[59] Ibid.