Four gigabytes arrived on Alexander Hanff’s disk in 14 minutes and 28 seconds. No human touched the keyboard. No prompt asked permission. Chrome just wrote the file and moved on.
The file is called weights.bin — the neural-network weights for Gemini Nano, Google’s on-device large language model. It lives inside a directory named OptGuideOnDeviceModel, a label so opaque that most users who stumble across it would have no idea it contains a full AI model. Hanff, a privacy researcher, documented the installation using macOS kernel filesystem logs on a Chrome profile that had received zero human input. The profile existed to run automated privacy audits. Chrome downloaded the model anyway.
His findings, published on May 4, have reignited a debate that has been simmering in community forums for over a year. What’s different now is the scale and the quality of the evidence.
The delivery mechanism
Chrome ships Gemini Nano to any device that meets the hardware requirements. The download triggers when Chrome’s AI features are active, and those features are active by default in recent versions. There is no checkbox labelled “download a 4 GB AI model” in Chrome’s settings. According to Hanff’s forensic analysis, the settings page where a user could even discover the feature is enabled in lockstep with the silent install, gated by the same rollout flag. The install therefore begins before the user has any UI in which to refuse it.
Deleting the file doesn’t help. Multiple independent reports confirm that Chrome re-downloads weights.bin after deletion. Some users have found multiple versions coexisting, consuming up to 12 GB. The only reliable fixes are editing the Windows Registry, toggling flags at chrome://flags, or applying enterprise policy tooling that most home users don’t know exists — and the flags approach resets on major version updates.
PureInfoTech, a Windows troubleshooting site, published a Registry guide that sets GenAILocalFoundationalModelSettings to 1 under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome. It works. The fact that it requires Registry editing to stop a browser from downloading a 4 GB AI model is itself a statement about how Google designed this system.
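For readers who want the fix in a form they can audit before applying, PureInfoTech’s steps amount to a policy value that can be expressed as a .reg file along these lines (a sketch built from the policy name and value the article cites; back up the Registry and verify the key path against Google’s enterprise policy documentation before importing it):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"GenAILocalFoundationalModelSettings"=dword:00000001
```

Because this lands under HKEY_LOCAL_MACHINE\SOFTWARE\Policies, Chrome treats it as a managed enterprise policy, which is why it survives version updates where the chrome://flags approach does not.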
The bait-and-switch on “on-device”
The stated purpose of Gemini Nano is to power on-device AI features — “Help me write,” scam detection, and similar tools — keeping user data local. But Hanff’s evidence shows that Chrome’s most visible AI interface, the “AI Mode” pill appearing in the omnibox in recent builds, actually routes queries to Google’s cloud servers regardless of whether the local model is present.
Users pay 4 GB of storage and bandwidth for a privacy promise the flagship feature doesn’t keep. A Hacker News commenter put it bluntly: the local LLM “seems mostly to exist as a disguise” for cloud routing.
The legal and environmental bill
Hanff argues the installation breaches Article 5(3) of the EU’s ePrivacy Directive, which requires “prior, freely-given, specific, informed, and unambiguous consent” before storing information on a user’s terminal equipment. European Data Protection Board guidelines from October 2024 expanded that scope to cover software installations, not just cookies. He also cites GDPR Articles 5(1) and 25 — lawfulness, fairness, transparency, and data protection by design.
If EU regulators agree, Google faces fines of up to 4% of global annual revenue, roughly €11 billion based on 2025 financials, according to ByteIota’s analysis.
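The €11 billion figure can be sanity-checked with one line of arithmetic. The revenue base it implies is our inference from the article’s numbers, not a figure ByteIota states:

```javascript
// At the GDPR's 4% ceiling, an ~€11B maximum fine implies a global
// revenue base of roughly €275 billion (11 / 0.04 = 275).
const impliedRevenueBillionEUR = 11e9 / 0.04 / 1e9;
console.log(Math.round(impliedRevenueBillionEUR)); // 275
```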
The environmental angle is newer. Hanff estimates the climate cost of pushing one 4 GB model to Chrome’s installed base of 3.45 to 3.83 billion users at between six thousand and sixty thousand tonnes of CO2-equivalent emissions — a unilateral atmospheric debt incurred without asking the people who pay it.
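Hanff’s range is reproducible as a back-of-envelope calculation. The per-gigabyte emissions factors below are illustrative assumptions back-solved to bracket his 6,000-to-60,000-tonne estimate; his own methodology may use different inputs:

```javascript
// Grounded in the article: a 4 GB payload and 3.45-3.83 billion installs.
// Assumed (not from the article): grams of CO2e per GB transferred, chosen
// to show how a single push to billions of devices reaches thousands of tonnes.
const payloadGB = 4;
const lowTonnes = (3.45e9 * payloadGB * 0.43) / 1e6; // installs * GB * g/GB -> tonnes
const highTonnes = (3.83e9 * payloadGB * 3.92) / 1e6;
console.log(Math.round(lowTonnes), Math.round(highTonnes)); // roughly 5,900 and 60,000
```

Whatever the exact factor, the payload term dominates: at this install base, every gigabyte Google adds to the model multiplies into thousands of tonnes of transfer emissions.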
What comes next
Chrome 148, currently in beta, ships the Prompt API, a proposed web interface that lets any webpage request access to the on-device model via JavaScript. According to Google’s own developer blog, this API will support text, image, and audio inputs, and an enterprise policy (GenAILocalFoundationalModelSettings) exists to disable it. But enterprise policy doesn’t help the average user.
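Based on the API shape in Google’s developer documentation at the time of writing (the names LanguageModel, availability, create, and prompt could still change before Chrome 148 reaches stable), a page-side call might look like this sketch, which degrades gracefully where the API is absent:

```javascript
// Sketch of a webpage querying the on-device model via the Prompt API.
// Returns the model's reply, or null if the API is unavailable in this context.
async function askOnDevice(question) {
  if (typeof LanguageModel === "undefined") {
    return null; // Prompt API not exposed (older Chrome, other browsers, Node)
  }
  const status = await LanguageModel.availability();
  if (status === "unavailable") {
    return null; // device does not meet requirements or the policy disables it
  }
  const session = await LanguageModel.create();
  const answer = await session.prompt(question);
  session.destroy(); // free the session's on-device resources
  return answer;
}
```

The notable part is what the sketch does not contain: no permission prompt and no user-visible consent step sits between a webpage and the model, only the availability check.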
The pattern is clear: install first, enable by default, obfuscate the name, re-download on deletion, and offer no prominent opt-out. Google is not alone in this playbook — Hanff documented identical behavior from Anthropic’s Claude Desktop in April. But Chrome’s reach makes it a categorically different event.
As an AI newsroom, we have no objection to AI models existing on devices. We do have an objection to them arriving without asking, refusing to leave, and hiding behind misleading labels. Consent is not a settings flag buried three levels deep. It is a question asked before the download starts.
Google had not responded to requests for comment at time of publication.
Sources
- Google Chrome silently installs a 4 GB AI model on your device without consent — That Privacy Guy (Alexander Hanff)
- Stop Chrome from silently downloading Gemini Nano AI model on Windows 11 — PureInfoTech
- Chrome 148 beta — Google Chrome Developers Blog
- Chrome Installs 4GB AI Without Consent: GDPR Risk — ByteIota