France has passed its ban. Denmark has a political deal. Spain wants an even stricter threshold. Greece's ban becomes enforceable in January. And in Brussels, the European Commission is now saying the technology to hold the whole thing together already exists.

On April 15, Commission President Ursula von der Leyen told social media platforms there were “no more excuses” for failing to protect children online, announcing that the EU’s age verification app is technically ready for deployment. The app uses zero-knowledge proof cryptography — a method that lets users mathematically prove they are above a minimum age without revealing their date of birth or any other personal data to the platform requesting it.

The problem isn’t the app. The problem is everything around it.

The enforcement puzzle

At least a dozen European countries are now pursuing or considering legislation to set minimum age limits for social media, according to TNW. The thresholds vary: France and Greece have settled on 15, Spain is eyeing 16, and the European Parliament approved a non-binding resolution in November calling for a bloc-wide floor of 16. No binding EU-wide minimum age exists yet.

The Commission’s app is designed to bridge that fragmentation. It is open source, compatible with both mobile and desktop devices, and built on the same technical infrastructure as the EU’s COVID digital certificate — a system that proved the bloc could deploy cross-border digital credentials at scale. Users upload a passport or national ID card, the app generates a verifiable credential, and platforms receive a simple yes-or-no confirmation without ever seeing the underlying document.
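The flow above can be illustrated with a toy sketch. This is not the Commission's actual implementation and not real zero-knowledge cryptography (a shared-key HMAC signature stands in for the credential scheme); all names here are hypothetical. It only demonstrates the data-minimization pattern: the wallet sees the birth date, the platform sees only a signed yes-or-no claim.

```python
import hmac
import hashlib
import json
from datetime import date

# Hypothetical stand-in for the credential issuer's signing key.
# A real deployment would use public-key signatures or ZK proofs.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(birth_date: date, min_age: int, today: date) -> dict:
    """Wallet side: reads the ID document, emits only a signed boolean claim."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    claim = {"over_min_age": age >= min_age, "min_age": min_age}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # The birth date never leaves this function -- only the claim does.
    return {"claim": claim, "sig": sig}

def platform_verify(credential: dict) -> bool:
    """Platform side: checks the issuer's signature, sees only yes/no."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, credential["sig"])
        and credential["claim"]["over_min_age"]
    )

# A user born June 2010 checking against a threshold of 15 in January 2026:
cred = issue_credential(date(2010, 6, 1), 15, date(2026, 1, 10))
print(platform_verify(cred))  # True -- the platform never saw the birth date
```

The point of the sketch is the trust boundary, not the crypto: whatever scheme sits underneath, the platform's API surface is a single verifiable boolean.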

The app now enters a pilot phase with member states, platforms, and third-party software providers. Some EU countries are already planning to integrate it into their national digital identity wallets, with full deployment expected by the end of 2026.

But here is where the architecture meets reality. European Commission technology spokesperson Thomas Regnier told CNN that platforms will not be required to use the app — they merely need to demonstrate that they have age checks that are equally effective. Those that fail to comply face sanctions under the Digital Services Act. The Commission has already opened formal proceedings against Snapchat and reached preliminary findings that Pornhub, Stripchat, XNXX, and XVideos are in breach of DSA rules for allowing minors on their services.

In other words: the tool is voluntary, the enforcement is indirect, and the sanctions depend on a regulatory process that has yet to fully mature. A determined 14-year-old with a parent’s ID and five minutes of free time may prove a more formidable opponent than the EU’s cryptographic architecture.

Safety versus privacy

The child safety case is straightforward. Greek Prime Minister Kyriakos Mitsotakis announced his country’s ban earlier this month citing rising anxiety, sleep problems, and addictive platform design — telling a young audience in a TikTok video that “science is clear: when a child spends hours in front of a screen its mind gets no rest.” Greek teachers have reported children arriving at school so sleep-deprived they are “almost lifeless,” according to the Guardian.

Public opinion is firmly behind restrictions. A YouGov poll of six European countries found majorities ranging from 53% in Poland to 79% in France supporting bans for under-16s, with support crossing partisan lines.

But the scepticism is equally striking. In the UK, 54% of respondents said they thought a ban would be “not very” or “not at all” effective — including 46% of those who supported it. Digital rights groups have gone further. Amnesty Tech has argued that such bans are ineffective and ignore the realities of younger generations.

There is also a structural irony that is difficult to ignore. Governments across Europe have spent years struggling to regulate platforms over misinformation, tax avoidance, and market dominance. Now those same governments are building identity verification infrastructure designed to funnel users toward those same platforms — just older ones. The app that verifies a child is too young for Instagram is, by design, also verifying that an adult is old enough to use it. A continent-wide age gate is, functionally, a continent-wide identity system.

A template for the world

Australia became the first country to implement a national social media age ban in December 2025, blocking children under 16 from platforms including TikTok, Instagram, Snapchat, and X, with penalties of up to $49.5 million AUD for non-compliant companies. Indonesia has banned under-16s from social media. Malaysia is implementing a similar restriction this year. Several US states — including Florida, Arkansas, and Louisiana — have passed parental consent laws, though most face ongoing legal challenges.

Europe’s approach is the most ambitious yet: a federated system designed to work across 27 legal jurisdictions, multiple languages, and varying age thresholds, while preserving enough privacy to satisfy the continent’s own data protection regime. If the pilot phase succeeds, the architecture will be freely available for any country to adopt — the source code is public, and the Commission has encouraged partners worldwide to use it.

The expert panel on children’s online safety is expected to deliver its recommendations by summer 2026. Until then, the app exists in a curious state: technically complete, legally optional, and politically unstoppable.
