
The Problem with GDPR Consent: Does It Really Work in Practice?

Introduction

Under the General Data Protection Regulation (GDPR), processing personal data is lawful only if it falls within one of six legal bases set out in Article 6(1). These are: consent, contractual necessity, legal obligation, vital interests, public task, and legitimate interests. Consent is often treated as the most ethically appealing of these bases, rooted in individual autonomy and control. Unlike other legal bases, which depend on institutional roles or balancing tests, consent places the data subject at the centre of the decision-making process.

Consent sits at the moral heart of the GDPR. Few ideas feel more intuitively aligned with data protection than the notion that individuals should control how their personal data is used. In theory, consent empowers users, disciplines organisations, and anchors data processing in respect for autonomy. In practice, however, GDPR consent has become one of the most fragile and contested lawful bases for processing personal data. Precisely because consent is framed as an expression of personal choice, GDPR imposes particularly demanding conditions on its validity. These conditions—requiring consent to be freely given, specific, informed, and unambiguous (Article 4(11))—have transformed consent from a flexible enabler of data processing into one of the most difficult and risky legal bases to rely on, especially in the modern digital environment.

The reason is not philosophical opposition to consent itself, but the cumulative weight of the legal standard imposed upon it. Each requirement is defensible in isolation. Taken together, especially in the context of modern digital systems and rapidly evolving technologies, they set a bar that is often unrealistically high. This has made consent difficult to rely on, risky to operationalise, and in some cases counterproductive to the very goal of meaningful user protection.

Freely Given: The Illusion of Real Choice

Consent must be freely given, meaning that the individual must have a genuine choice and must not suffer detriment if they refuse. On its face, this requirement is intended to prevent coercion and abuse. In practice, it has become one of the most destabilising elements of GDPR consent.

Regulatory guidance, particularly from the European Data Protection Board (EDPB), emphasises that consent is invalid where there is a clear imbalance of power between the data controller and the data subject. This logic is uncontroversial in contexts such as employment or public services. However, it has increasingly been extended into the digital commercial sphere, where power imbalances are more nuanced.

Many online services are functionally unavoidable. Social networks, communication tools, professional platforms, and cloud-based services play a central role in economic and social participation. If access to such services is conditional on consent to extensive data processing, regulators increasingly question whether consent is truly free, even if users technically have the option to walk away.

This creates a structural problem. Digital services are often built around data-intensive business models. Advertising-funded platforms, AI-driven personalisation, and security or fraud-prevention systems all rely on continuous data flows. As regulators narrow what counts as processing “strictly necessary” to deliver a service, any consent bundled with access to that service is more readily deemed invalid, because users are said to lack a meaningful alternative to giving it.

The result is a paradox: the more sophisticated, integrated, and widely used a service becomes, the harder it is to argue that consent to its data practices is freely given. What was intended as a safeguard against coercion risks becoming a blanket scepticism toward consent in almost any commercial digital context.

Specific: Granularity Meets Complexity

The GDPR requires consent to be specific, meaning it must be tied to clearly defined purposes. Where multiple purposes exist, separate consent may be required. This requirement reflects a legitimate concern that individuals should not unknowingly authorise unrelated or unexpected uses of their data.

However, modern data processing rarely fits neatly into discrete, static purposes. Digital services are complex systems involving overlapping functions: service delivery, security monitoring, analytics, product improvement, regulatory compliance, and increasingly, machine learning and artificial intelligence.

Attempting to map this complexity into narrowly defined purposes that are both accurate and understandable is exceptionally difficult. Organisations face a choice between excessive granularity and excessive abstraction. Overly granular consent requests fragment processing into dozens of categories, overwhelming users and encouraging mechanical acceptance. Overly abstract purposes, meanwhile, are vulnerable to regulatory criticism for being insufficiently precise.
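
To make this trade-off concrete, the sketch below shows one hypothetical way a service might model per-purpose consent in TypeScript. The purpose names, fields, and structure are illustrative assumptions rather than a prescribed or legally endorsed design.

// Hypothetical per-purpose consent model; purpose names and fields are
// illustrative assumptions, not a recommended or compliant design.
type Purpose =
  | "service_delivery"
  | "security_monitoring"
  | "analytics"
  | "product_improvement"
  | "personalised_advertising"
  | "model_training";

interface ConsentDecision {
  purpose: Purpose;
  granted: boolean;      // defaults to false elsewhere: nothing is pre-ticked
  decidedAt: string;     // ISO 8601 timestamp of the user's decision
  noticeVersion: string; // which version of the privacy notice was shown
}

interface ConsentRecord {
  userId: string;
  decisions: ConsentDecision[]; // one separately consented entry per purpose
}

// Illustrative example value only.
const example: ConsentRecord = {
  userId: "user-123",
  decisions: [
    { purpose: "analytics", granted: false, decidedAt: "2025-12-08T10:00:00Z", noticeVersion: "v3" },
  ],
};

Even this simplified model already confronts the user with six separate decisions; a real service with more purposes, processors, and retention rules multiplies that fragmentation, while collapsing the purposes into broader labels invites the opposite criticism of vagueness.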

The challenge is particularly acute for AI-driven systems. Data may be reused to train, test, or improve models in ways that are not fully foreseeable at the point of collection. Requiring consent to be specific in advance assumes a level of predictability that modern data innovation often lacks.

As a result, specificity becomes less a tool for user empowerment and more a compliance trap: organisations struggle to strike an acceptable balance between legal precision and operational reality, and the user experience degrades in the process.

Informed: Explaining the Unexplainable

For consent to be valid, individuals must be informed. This includes understanding who is processing their data, what data is involved, for what purposes, and what the consequences may be. Information must be provided in clear and plain language.

The difficulty is that many contemporary data processing activities are technically complex, probabilistic, and opaque even to specialists. Explaining algorithmic inference, data sharing ecosystems, or automated decision-making systems in a way that is both accurate and accessible to a general audience is an extraordinary challenge.

Organisations are therefore caught between two unsatisfactory options. They can simplify explanations to make them readable, risking accusations that they are misleading or incomplete. Or they can provide lengthy, technically detailed disclosures that are legally defensible but practically useless, as few users read or understand them.

Regulators often assess whether consent was “informed” from a legal and technical standpoint, while users engage with consent notices through quick, habitual interactions. This mismatch turns informed consent into a legal abstraction rather than a lived reality.

Unambiguous: Affirmation Without Deliberation

The requirement that consent be unambiguous demands a clear affirmative act by the user. Pre-ticked boxes, silence, or inactivity are insufficient. This standard has reshaped digital interfaces, from cookie banners to app permissions and sign-up flows.

While this has eliminated certain abusive practices, it has also normalised constant consent prompts. Users are asked to click “Accept” repeatedly across websites and applications, often multiple times per day.

The result is not heightened awareness, but desensitisation. Affirmative action becomes a reflex, not a considered choice. The legal clarity sought by regulators does not translate into meaningful engagement by users.

At the same time, the margin for error is narrow. Small design choices—button placement, wording, or default settings—can determine whether consent is deemed valid or unlawful. This creates a fragile compliance environment in which formalism outweighs substance.
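
As a rough illustration of how these formal details play out at the interface level, the TypeScript sketch below records consent only when the user explicitly clicks a choice, with nothing pre-selected. The function names and storage step are hypothetical stand-ins for whatever audit mechanism a controller actually uses.

// Hypothetical consent capture: runs only on an explicit user action,
// so silence, inactivity, or pre-ticked defaults never produce a record.
interface ConsentEvent {
  purpose: string;
  granted: boolean;   // a "Reject" click is recorded too, without detriment
  timestamp: string;  // when the affirmative act occurred
  uiContext: string;  // e.g. which banner or settings screen was displayed
}

// Illustrative stand-in for persisting the event, since controllers must be
// able to demonstrate that valid consent was obtained (Article 7(1)).
async function saveConsentEvent(event: ConsentEvent): Promise<void> {
  console.log("consent event", JSON.stringify(event));
}

async function handleConsentClick(purpose: string, accepted: boolean): Promise<void> {
  await saveConsentEvent({
    purpose,
    granted: accepted,
    timestamp: new Date().toISOString(),
    uiContext: "cookie_banner", // illustrative value only
  });
}

None of this guarantees validity, of course; it simply shows how much of the legal analysis ends up hinging on interface and logging details.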

The Cumulative Effect: Consent as a Liability

Each element of GDPR consent—freely given, specific, informed, and unambiguous—serves a legitimate protective purpose. The problem arises when they are applied cumulatively, without sufficient regard for how digital services actually operate.

Together, they create a standard that is extremely difficult to satisfy with confidence. As a result, many organisations increasingly avoid consent altogether, turning instead to other legal bases such as legitimate interests or contractual necessity, even in situations where consent might seem intuitively appropriate.

This outcome is deeply ironic. Consent, intended as the gold standard of user control, has become the most legally precarious option. Users, meanwhile, are inundated with consent requests that provide little real choice or understanding.

Conclusion: Rethinking Consent’s Role

The challenge of GDPR consent is not that the concept is flawed, but that the regulatory bar has been set without sufficient consideration of technological complexity, human behaviour, and innovation dynamics. By insisting that consent simultaneously meet four demanding criteria in environments where none are easy to satisfy, the GDPR risks hollowing out consent’s practical value. In an environment where even well-intentioned transparency efforts struggle to convey genuine understanding, the expectation that users can make informed choices about complex data practices becomes increasingly unrealistic.

A more sustainable approach would recognise the limits of consent as a governance tool and place greater emphasis on accountability, proportionality, and substantive protections regardless of the legal basis relied upon. Without such recalibration, GDPR consent in the digital age will remain a powerful idea burdened by an unrealistically high standard.

8 December 2025