Modernising GDPR: Aligning Rights With Digital Reality

The General Data Protection Regulation (GDPR) grants individuals important powers over their personal information. Among the most central are the rights to rectification (to correct inaccurate data) and erasure (to request deletion). These protections aim to ensure fairness by preventing outdated or incorrect information from influencing meaningful decisions about people’s lives.

Yet while these rights remain essential, the context in which they operate has changed dramatically. Much of the GDPR rests on assumptions formed when personal data was simpler, more factual, and easier to control. Today, information is continuously generated, widely shared, and stored across complex digital infrastructures. The gap between what the law expects and what modern technology can realistically deliver is becoming increasingly difficult to ignore.

Where These Rights Came From

Rectification and erasure rights emerged in the 1970s and 1980s, when European countries first introduced rules to protect citizens from the misuse of personal information. At that time, the nature of data and data systems made such rights straightforward to administer. Most information held by organisations was objective—names, addresses, financial records. Systems were centralised, databases were traceable, and if something was wrong, it could usually be corrected; if something needed to be deleted, it could genuinely be removed.

These early rights were later absorbed into the GDPR with broader scope and stronger enforcement. What did not carry forward, however, was any fundamental re-examination of the assumptions about how data behaves.

How Personal Data Has Changed

The nature of personal information today is fundamentally different. Three shifts are particularly significant.

First, personal data is no longer purely factual. Many of the data points used by organisations are interpretations or predictions—such as inferred interests, behavioural classifications, or machine-generated risk scores. These categories are not easily labelled as “correct” or “incorrect”, which complicates the idea of rectification.

Second, data no longer sits in one place. Once collected, information can be spread across multiple systems, shared with vendors, stored in automated backups, and used to train analytics or machine-learning models. Organisations often struggle to map where every copy resides.

Third, modern systems are interconnected and automated. A correction in one database may not propagate through others. Some information must be retained for compliance, fraud prevention, or auditing reasons even after an erasure request is made. The infrastructure itself makes perfect rectification or deletion difficult to guarantee.
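The propagation problem above can be sketched in code. This is a minimal illustration, not a real compliance system: the store names and the `mutable` flag are assumptions chosen to show how a single correction fans out across copies, some of which (such as an append-only audit log) cannot lawfully or technically be rewritten.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: store names and fields are illustrative, not a real API.

@dataclass
class DataStore:
    name: str
    mutable: bool              # audit logs and backups may be append-only
    records: dict = field(default_factory=dict)

    def rectify(self, subject_id: str, field_name: str, value: str) -> bool:
        """Apply a correction if this store permits in-place updates."""
        if not self.mutable:
            return False       # retained unchanged for compliance or auditing
        self.records.setdefault(subject_id, {})[field_name] = value
        return True

def propagate_rectification(stores, subject_id, field_name, value):
    """Fan a correction out to every known copy; report where it landed."""
    return {s.name: s.rectify(subject_id, field_name, value) for s in stores}

stores = [
    DataStore("crm", mutable=True),
    DataStore("analytics", mutable=True),
    DataStore("audit_log", mutable=False),   # must be retained as-is
]
report = propagate_rectification(stores, "user-42", "address", "10 New Street")
# report records that the audit-log copy was not (and cannot be) corrected
```

Even in this toy version, "rectification" is no longer a single operation but a report of partial success, which is the gap between the legal expectation and the technical reality described above.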

These realities transform the practical meaning of rights that were designed for a different technological era.

Why This Mismatch Matters

Public trust in digital services depends heavily on confidence that personal information is handled responsibly. When individuals find they cannot reliably correct or delete their own data, that confidence erodes.

There are also significant operational consequences. Organisations expend vast amounts of time and resources searching for data that may be fragmented or duplicated beyond traceability. Individuals grow frustrated when their expectations of control do not align with the technical realities. Regulators, meanwhile, face enforcement challenges when compliance cannot be fully demonstrated.

The result is a system where rights exist in principle but may be difficult to guarantee in practice.

Towards a More Realistic Future

Maintaining the value of rectification and erasure requires updating these rights to reflect how data actually works today. Several principles could guide such reform.

A first step is distinguishing between types of data. Objective information—such as an address or account detail—should remain fully correctable. Subjective or inferred information may instead require transparency, explanation, or the ability to challenge its use, rather than literal rewriting.
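The distinction between data types could be operationalised as a simple routing rule. The sketch below is an assumption-laden illustration: the field names, categories, and remedy labels are invented for this example and are not terms defined by the GDPR itself.

```python
from enum import Enum

# Illustrative only: the categories and remedies are assumptions for this
# sketch, not terminology taken from the GDPR text.

class Kind(Enum):
    FACTUAL = "factual"      # e.g. address, account number
    INFERRED = "inferred"    # e.g. risk score, behavioural segment

FIELD_KINDS = {
    "address": Kind.FACTUAL,
    "account_number": Kind.FACTUAL,
    "churn_risk_score": Kind.INFERRED,
    "interest_segment": Kind.INFERRED,
}

def remedy_for(field_name: str) -> str:
    """Route a rectification request to the appropriate remedy."""
    kind = FIELD_KINDS.get(field_name)
    if kind is Kind.FACTUAL:
        return "correct_in_place"             # objective data: rewrite it
    if kind is Kind.INFERRED:
        return "explain_and_allow_challenge"  # subjective data: contest its use
    return "manual_review"                    # unknown fields need a human
```

Under this scheme an address is simply corrected, while a machine-generated risk score triggers explanation and a right to challenge rather than a literal rewrite.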

Another principle is focusing on areas of real harm. Compliance resources should prioritise the information that meaningfully affects people’s lives, rather than devoting equal attention to every minor detail.

Absolute deletion may also need to evolve into a more contextual approach. In many cases, preventing further processing or removing information from public visibility may offer equivalent protection without requiring every historical trace to be erased.
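One way to picture contextual erasure is a "restriction of processing" flag: the record survives for a narrow set of purposes, but ordinary code paths can no longer read or act on it. The allow-listed purposes below are assumptions chosen for illustration.

```python
# Hypothetical sketch of restriction of processing as an alternative to
# physical erasure: the record stays (e.g. for fraud prevention) but is
# flagged so ordinary code paths can no longer use it.

class RestrictedError(Exception):
    pass

class Record:
    ALLOWED_WHEN_RESTRICTED = {"fraud_prevention", "legal_hold"}  # assumed list

    def __init__(self, subject_id: str, data: dict):
        self.subject_id = subject_id
        self._data = data
        self.restricted = False

    def restrict(self):
        """Honour an erasure request contextually: block further use."""
        self.restricted = True

    def read(self, purpose: str) -> dict:
        if self.restricted and purpose not in self.ALLOWED_WHEN_RESTRICTED:
            raise RestrictedError(f"processing restricted for {purpose!r}")
        return self._data

rec = Record("user-42", {"email": "a@example.com"})
rec.restrict()
rec.read("fraud_prevention")   # still permitted for a narrow purpose
# rec.read("marketing") would now raise RestrictedError
```

The individual gets the protection that matters, namely that the data stops being used, without the system having to locate and destroy every historical trace.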

Finally, improving visibility and traceability of data flows would enable more realistic compliance. Regulators, industry, and technology providers can work together to develop better standards for understanding how data moves through systems.

These shifts would make rights more meaningful, while avoiding expectations that are unrealistic in today’s digital environment.

The Path Forward

Rectification and erasure remain foundational to European data protection. They reflect core values of dignity, fairness, and individual control. But as the data ecosystem becomes more complex, these rights require modern interpretation to remain effective.

If Europe wants individuals to retain meaningful control over their personal information, the GDPR must evolve. Aligning legal expectations with technical reality will strengthen trust, improve compliance, and ensure that privacy rights remain viable in the years ahead.

21 November 2025