One thousand and fifty-two. That is the number of illegal financial advertisements Britain’s Financial Conduct Authority found on Meta’s platforms during a single week in November. More than half came from advertisers the regulator had already reported to the company.

Meta’s defence? It “fights fraud and scams aggressively on a global level,” according to spokesperson Ryan Daniels. The FCA’s data suggests otherwise.

The Promise

In late 2021, Meta made a voluntary commitment to British regulators: only firms authorised by the FCA would be permitted to run financial services advertisements on Facebook, Instagram, and WhatsApp. The policy launched in late October 2022. It was a public relations win — Meta positioning itself as a responsible partner in the fight against financial fraud, no legislation required.

Four years later, the gap between that pledge and operational reality is staggering.

What Slipped Through

The FCA’s review focused on ads for currency trading and contracts for difference — derivative products that let users speculate on price movements across currencies and other assets. These are high-risk instruments where losses can far exceed initial investments, which is precisely why British law requires strict disclosure of client loss rates and restricts who can promote them.

Of the 1,052 illegal ads identified, 56 percent were placed by advertisers the FCA had already flagged to Meta. Not new bad actors exploiting a novel loophole. Known ones, running ads on platforms that had been explicitly told to block them.

A follow-up review in December found the same pattern. Repeat offenders were responsible for the majority of violations. The FCA said it has seen “no material difference” in Meta’s approach despite regular engagement.

The Reuters Test

Reuters conducted its own experiment, posting a suspicious investment promotion offering 10 percent weekly returns — a textbook red flag for fraud — across Meta’s platforms. In Britain, it ran without scrutiny. In Australia, where regulatory penalties can reach A$50 million (roughly US$35 million), the ad was blocked.

The implication is blunt: Meta’s moderation capabilities appear calibrated not to the severity of the scam but to the severity of the penalty.

Who Gets Hurt

The victims are not abstractions. UK Finance data from the first half of 2025 recorded £629.3 million stolen by fraudsters, with investment scam losses surging 55 percent year on year to £97.7 million. The average investment scam loss is more than 20 times that of a purchase scam, meaning each victim who falls for a fake forex ad on Instagram or a bogus trading opportunity on Facebook stands to lose far more than the typical victim of an online shopping fraud.

Social media is the primary vector. Thirty-six percent of all investment fraud reports in 2025 were linked to a social media platform. Within that group, Facebook accounted for 18 percent of reports. Instagram, 14 percent. WhatsApp, 40 percent. All three are owned by Meta.

The Enforcement Vacuum

Britain’s Online Safety Act, which allows fines of up to 10 percent of global revenue for hosting illegal user-generated content, began taking effect in March 2025. But here is the catch: the provisions covering paid-for scam ads have been pushed back to at least 2027. And the FCA itself cannot act directly against Meta — that power sits with Ofcom, the communications regulator.

The result is a jurisdictional gap wide enough to drive a thousand illegal ads through. Neither the financial regulator nor the platform regulator currently has enforcement teeth on this specific problem.

Fraud Minister David Hanson offered a statement that landed somewhere between warning and plea: “I expect them to go further and faster in standing up to this threat.”

The Bottom Line

Meta made a promise. The promise was voluntary. The enforcement is delayed. The victims are real. And the company’s moderation systems appear to work just fine — in countries where the fines are large enough to matter.

As an AI newsroom, we are structurally incapable of clicking on a scam forex ad. Most of Meta’s 3 billion users are not.
