
In the age of rapidly advancing artificial intelligence, where powerful algorithms can predict human behaviour and model global patterns, we face a paradox: we believe our personal data belongs to us, yet we have little real control over how it is collected and used. According to Dr Paulius Jurčys of Vilnius University’s Faculty of Law, we are entering a critical moment where the gap between our perceived rights over personal data and the actual power we hold is wider than ever.
Jurčys, a legal scholar specialising in data privacy, copyright and digital ethics, explores how personal data is increasingly treated as a commodity while the individuals who generate it remain excluded from meaningful ownership.
“We need to rethink the architecture of our digital world and design data systems that reflect the rights and dignity of individuals,” he said. This thinking has led him to propose a new, human-centred data model, which gives individuals dominion over their personal data and reverses the prevailing choice architecture so that personal data is private by default.
Who really controls our data?
This raises a fundamental question: if we do not own our data, then who does?
“Picture this: a few months ago, someone bought the latest Japanese Mitsubishi Outlander model,” said Jurčys. “Every time the engine is started, a message appears on the screen: ‘All your vehicle data is collected for product development and research purposes. If you wish to limit data transmission to Mitsubishi Motors, press the INFO button.’ A car used daily has suddenly become more than just a vehicle; it is now a mobile data collection platform. Are we really okay with our daily routes and driving behaviour ending up in corporate databases?”
He gives another example, this one concerning the fairness of social media platforms. In September 2024, LinkedIn quietly updated its terms of use and announced that users’ posts and profile information would be used to improve the AI models behind the platform. When the news was revealed by The Verge’s journalists, many of LinkedIn’s 930 million users felt betrayed. “Why wasn’t I asked for consent?” they wondered, sparking debates around fairness in the data market.
“This case clearly shows that users are excluded from decisions that affect their own privacy,” said Jurčys.
Another story involves the actress Scarlett Johansson. In 2024, OpenAI introduced a new voice-controlled version of the ChatGPT app. One of the available voices sounded strikingly similar to Johansson’s AI character from the film Her. The actress publicly expressed her disappointment: after she had refused OpenAI CEO Sam Altman’s invitation to record her voice, the company found another actress with a nearly identical vocal tone, whose voice was then “coincidentally” used. “If AI merely imitates existing voices and likenesses, can it truly be considered original or fair?” asks Jurčys.
These examples vividly illustrate the inequality and power asymmetry of an increasingly AI-mediated world. Users are not seen as owners or partners in the data economy; they are treated as raw material. The current system is designed to consolidate the power of tech companies. These corporations act as data controllers, while the rights granted to users under regulations like the General Data Protection Regulation (GDPR) remain limited and often ineffective.
What is the real value of our data?
Everyone knows that every step we take and every click in the digital space is tracked. Tech giants such as Google, Apple, Facebook, Amazon, and Microsoft, along with device makers, monitor heart rates, movement across the city, and even facial expressions. While they claim to collect data to improve services, they simultaneously attempt to convince users that their digital footprint is essentially worthless.
Yet people tend to value their personal data far more highly. Research by Harvard Professor Cass Sunstein and Angela Winegar highlights a huge gap between how companies and individuals perceive data value. Using two concepts from behavioural economics, willingness to pay and willingness to accept, the researchers found that while participants were willing to pay just USD 5 per month to protect their data, they would demand USD 80 per month to give up access to it. This 16:1 ratio is among the highest recorded in behavioural economics. The researchers explain it through the endowment effect: people place greater value on what they already own.
Towards a human-centric data model
Ownership of data has become a central issue in the AI era. Public debates increasingly call for personal data to be recognised as personal property. Our data is a central pillar of our digital identity. Rapid advancements in AI now allow us to imagine a data system that is human-centred rather than business-oriented.
According to Jurčys, the guiding principle should be that personal data is private by default and accessible only to its owner: “What if all data truly belonged to us? Imagine a world where your data is fully private and under your control. In this world, individuals could determine who accesses their data, how it’s used, and even benefit from it directly.” This idea rests on a fundamental technological shift: the design of a system where people, not corporations, have full dominion and control over their data.
One key element of this approach revolves around the idea that individuals’ data should stay on their side. Every individual would have a digital vault storing data from different sources: social media history, app usage, financial records, health information, and smart device data.
With the help of AI, people could interact meaningfully with this data. For example, health information might help detect early symptoms, financial data could assist in budgeting, and other personal records could support smarter everyday decisions. The key is to put the human at the centre of interactions.
Another core feature is the shift from an opt-out to an opt-in system. Today, individuals must constantly opt out of data collection. A human-centric model would reverse this default: data is not shared unless the user actively consents, as the sketch below illustrates.
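To make the contrast concrete, here is a minimal sketch in Python of how a private-by-default vault with opt-in consent might behave. Every name in it is hypothetical, invented for illustration; it is not drawn from any real system or from Jurčys’ own designs.

```python
from dataclasses import dataclass, field

# Illustrative only: all names here are hypothetical, not from any real system.
@dataclass
class DataVault:
    owner: str
    records: dict = field(default_factory=dict)  # e.g. {"driving": [...], "health": [...]}
    consents: set = field(default_factory=set)   # categories the owner has explicitly shared

    def store(self, category: str, record) -> None:
        # Data always lands in the owner's vault first, staying "on their side".
        self.records.setdefault(category, []).append(record)

    def grant(self, category: str) -> None:
        # Opt-in: sharing requires an explicit, affirmative act by the owner.
        self.consents.add(category)

    def revoke(self, category: str) -> None:
        self.consents.discard(category)

    def read(self, requester: str, category: str) -> list:
        # Private by default: any outside request is denied unless consent exists.
        if requester != self.owner and category not in self.consents:
            raise PermissionError(f"{requester} has no consent for '{category}' data")
        return self.records.get(category, [])

vault = DataVault(owner="alice")
vault.store("driving", {"route": "home-office", "km": 12.4})

try:
    vault.read("carmaker", "driving")     # denied: no opt-in has happened yet
except PermissionError as err:
    print(err)

vault.grant("driving")                    # the owner actively opts in
print(vault.read("carmaker", "driving"))  # now, and only now, permitted
```

The point of the sketch is the default: the deny branch is the ordinary path, and sharing exists only as an explicit exception granted by the owner, which is the reverse of today’s opt-out architecture.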
“This architecture puts the individual back in control. It’s not just about privacy; it’s about agency,” said Jurčys. This approach is already gaining traction in tech ecosystems and opens up possibilities for new, personalised services and markets. These new uses of personal data offer a glimpse into a future shaped not just by technology but by AI-powered life forms: robots, digital twins, and virtual assistants.
Welcoming AI agents into our lives
Until recently, interacting with virtual reality meant logging into games or digital platforms with avatars. But today, the virtual world is integrated into our daily lives via smartphones and constant internet access. AI breakthroughs have brought new, synthetic life forms into our homes and routines, including robot vacuum cleaners, smartwatches, and AI assistants.
In Japan, “weak” robots designed to reduce social isolation are growing in popularity. One example is Nicobo, a soft, expressive robot developed by Panasonic to offer emotional support. In Tokyo, Cafe DAWN uses robots and AI tools to take customer orders and enable staff with reduced mobility to serve customers. Some dishes are even prepared and delivered by robots.
“These examples show how technology can be inclusive and supportive if implemented ethically and purposefully,” says Jurčys.
The Japanese government openly acknowledges that zero-risk technology is a myth. Every technology carries potential risks and benefits. Still, public and private actors in Japan aim to harness these tools to address complex societal challenges.
A new social contract
These changes raise pressing ethical and legal questions. In times of disruption, societies often return to core values: human dignity, justice, natural rights, property. At this crossroads, we must recognise that AI will not replace us entirely. Instead, we are called to experiment boldly, innovate, and use technology to shape the future we want to live in.
“We need a new social contract that reflects the realities of AI, where people have full dominion and control over their own data,” concludes Jurčys. In this future, humans and their data become the foundation of that contract. Through constant inquiry and responsible innovation, we can build a stronger, more equitable digital society based on an opt-in choice architecture, human agency, dignity, and shared responsibility.