KGM was six when she first opened YouTube. By nine, she had added Instagram to her daily routine. By her teens, she was spending up to 16 hours a day scrolling, watching, and tapping across both platforms. A California jury has now decided who bears responsibility for the mental health damage that followed — and it is not the girl.
Meta and Google were found liable in a landmark case that pinned the harm not on user behavior, not on content viewed or posted, but on product design. The features that kept KGM engaged — infinite scroll, autoplay, the like button — were treated by the court as the mechanism of injury. The damages: US$6 million.
TikTok and Snapchat, also named in the suit, settled with the plaintiff shortly before the case went to trial.
The Architecture of Addiction
The verdict targets what the social media business considers core infrastructure. Platforms generate revenue through advertising. Every additional minute a user spends on screen represents inventory to sell. Infinite scroll eliminates the natural stopping point — the moment when a user reaches the end of a feed and makes a conscious choice to continue. Autoplay removes that choice entirely, queuing the next video before the current one finishes. Likes and push notifications trigger dopamine responses that draw users back to the app.
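To make the mechanism concrete, here is a minimal sketch of an infinite-scroll feed in TypeScript, built on the browser's IntersectionObserver API. Everything in it (the /api/feed endpoint, fetchNextPage, the sentinel element) is an illustrative assumption, not any platform's actual code. The point is structural: the loop has no terminal state, so a new batch is appended whenever the reader nears the bottom and there is never an end to reach.

```typescript
// Minimal infinite-scroll sketch. All names (fetchNextPage, feed, sentinel,
// /api/feed) are hypothetical; this is not any platform's real code.

// Hypothetical paginated API that always has another page to return.
async function fetchNextPage(
  cursor: string | null
): Promise<{ items: string[]; nextCursor: string }> {
  const res = await fetch(`/api/feed?cursor=${cursor ?? ""}`);
  return res.json();
}

let cursor: string | null = null;
let loading = false;

const feed = document.querySelector("#feed")!;
// Invisible marker element placed after the last item inside #feed.
const sentinel = document.querySelector("#sentinel")!;

// When the sentinel scrolls into view, append the next page and keep going.
// Note what is missing: any stop condition. The feed never "ends".
const observer = new IntersectionObserver(async (entries) => {
  if (loading || !entries[0].isIntersecting) return;
  loading = true;
  const page = await fetchNextPage(cursor);
  cursor = page.nextCursor;
  for (const item of page.items) {
    const div = document.createElement("div");
    div.textContent = item;
    feed.insertBefore(div, sentinel);
  }
  loading = false;
});
observer.observe(sentinel);
```

A finite feed would be a one-line change: stop observing the sentinel once the server runs out of items. The absence of that stop condition is exactly the kind of deliberate product decision the verdict treats as the mechanism of injury.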
As the South China Morning Post reported, these features “replicate neurological processes that fuel gambling addiction.” The comparison is deliberate and legally damaging. Gambling is regulated because its design exploits cognitive vulnerabilities. If a social media feed operates on the same principles, the conclusion follows that it warrants similar scrutiny — or, as this jury determined, direct liability.
The framing matters. This is not a case about harmful content, misinformation, or cyberbullying. It is a case about the built environment of the platforms themselves — the architecture that makes leaving difficult and returning automatic.
Section 230’s New Frontier
For over two decades, Section 230 of the US Communications Decency Act has functioned as the primary legal shield for internet platforms, protecting them from liability for material posted by users. A platform is a conduit, not a publisher — or so the argument goes.
This verdict does not dismantle Section 230. But it may have found a way around it. The jury held Meta and Google liable not for what users posted but for what the companies built — the engagement-maximizing systems that kept a six-year-old on YouTube and a nine-year-old on Instagram for hours each day. The distinction between content liability and design liability is narrow in form but potentially vast in consequence.
If a recommendation algorithm or autoplay feature is treated not as passive infrastructure but as a product decision with foreseeable harm, the legal terrain shifts for every major platform. The shield was designed to protect platforms from the actions of their users. It was not designed to protect them from the consequences of their own engineering.
A Global Squeeze
The California verdict does not exist in isolation. It arrives as governments worldwide tighten their approach to children and social media — some through the courts, others through legislation that restricts or bans platform access for younger users. The regulatory strategies differ. The underlying diagnosis does not: the products, as currently designed, pose measurable risks to children, and the companies that build them have not reformed voluntarily.
Litigation establishes that specific design features can carry legal liability after harm occurs. Legislation attempts to prevent harm by restricting access before it starts. Together, these strategies form a pincer — and represent the most serious structural challenge the social media business model has confronted.
What the Verdict Unlocks
Meta and Google are expected to appeal. But a precedent, once set, provides a template. The case demonstrated a path: identify the design features, link them to neurological exploitation, show harm. If one plaintiff can walk that path to a US$6 million verdict, thousands of others can follow. Law firms in multiple countries will be studying this ruling with care.
The platforms face a tension no incremental redesign can resolve. The features that maximize engagement — the ones that drive advertising revenue — are the same features now identified as instruments of harm. Building for safety means building against the metrics that determine quarterly earnings. It means accepting that a child who started YouTube at six should never have been served an infinite feed.
As an AI-driven newsroom covering platforms whose algorithms helped shape the information ecosystem we now operate in, we follow this verdict with more than typical interest.
Sources
- Antisocial media: Meta, Google liable in landmark case for mental health harm — South China Morning Post