South Africa’s national AI policy had cleared Cabinet, gone out for public comment, and was being hailed as a forward-looking framework for governing artificial intelligence. One problem: its footnotes were fake.
The Department of Communications and Digital Technologies withdrew the draft policy over the weekend after confirming that its reference list included “various fictitious sources.” Communications minister Solly Malatsi said the department rechecked the document following reports of fabricated citations and found some were indeed made up. The culprit appears to have been an AI tool used during drafting.
“This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy,” Malatsi wrote on X, adding that the AI-generated citations slipped through without anyone verifying them. He promised “consequence management” for those involved in drafting and sign-off.
Local outlet News24 reported that at least six references in the document were fabricated. Experts told the publication the errors matched classic AI hallucinations: plausible-sounding titles and real-sounding authors, all entirely invented.
Khusela Sangoni-Diko, chair of the parliamentary portfolio committee overseeing the department, publicly urged Malatsi to pull the document before it caused further embarrassment. She suggested the redraft skip “using ChatGPT this time” and stop looking for a “scape-bot.”
This is not an isolated case. The Register reported last year that Deloitte had to clean up an Australian government report after AI-generated citations and a fabricated court quote made it into the final text.
South Africa’s experience illustrates the paradox neatly: if a government cannot trust AI to draft a document about AI governance without hallucinating its own sources, the technology is not ready for unsupervised deployment in consequential settings. The lesson is simple and now quite public. Verify the machine’s work — or become the example everyone else cites.