Imagine asking your AI assistant for the latest headlines—only to get a mix of facts, spin and occasional misfires. That's the reality laid bare by new research published by the European Broadcasting Union (EBU) and the BBC, which reveals that leading AI assistants misrepresent news content in nearly half of their responses.
The international study analyzed 3,000 answers across 14 languages from top AI platforms, including ChatGPT, Copilot, Google's Gemini and Perplexity. The verdict: 45% of responses contained at least one major issue, and 81% stumbled in some way—be it outdated details, opinion disguised as fact or missing attribution.
Key Findings at a Glance
- 45% of AI responses had significant errors
- 81% of answers contained some form of problem
- One in three responses had serious sourcing errors
- 20% presented outright inaccuracies or out-of-date info
Gemini led the pack in sourcing mishaps, with about 72% of its news answers flagged for missing or misleading attributions, compared with under 25% for the other assistants. Accuracy lapses ranged from a misreported change in legislation on disposable vapes to ChatGPT still naming Pope Francis as the serving pontiff months after his passing.
Why Trust Is on the Line
The study, which involved 22 public-service media organizations from 18 countries, warns that as AI assistants replace traditional search engines as a gateway to news, public trust is on shaky ground. "When people don't know what to trust, they end up trusting nothing at all, and that can discourage democratic participation," said Jean Philip De Tender, EBU Media Director.
With 7% of all online news consumers—and a striking 15% of those under 25—already turning to AI assistants for their daily briefing, the stakes are high. The report calls on AI developers to strengthen accountability, refine sourcing practices and sharpen factual accuracy to safeguard informed engagement worldwide.
As AI reshapes the way we consume news, this research is a wake-up call: technology can speed up information, but quality and trust still need human oversight.
Reference(s):
"New research shows AI assistants make widespread errors about the news," cgtn.com