
AI News Helpers Flunk Fact-Check: 45% of Responses Botched!

🚨 Attention, news-hungry tech enthusiasts! 🚨 You might want to double-check those AI news assistants before sharing the latest scoop. A revealing study by the European Broadcasting Union (EBU) and the BBC has found that leading AI assistants, like ChatGPT, Copilot, Gemini, and Perplexity, are tripping up big time when it comes to dishing out accurate news.

The research, which analyzed over 3,000 responses in 14 languages, found that a whopping 45% of AI responses had at least one major issue. Yikes! And counting every issue, big or small, 81% of responses had some kind of problem. Double yikes!

So, what’s going wrong? Sourcing was a biggie: a third of responses had serious attribution problems, and Google’s Gemini fared worst, with a staggering 72% of its responses showing significant sourcing issues. Ouch! And accuracy? A full 20% of responses were off the mark, serving up outdated info or flat-out wrong news.

But wait, there’s more! Some AI assistants were caught spreading misinformation, like Gemini claiming there were changes to a law on disposable vapes that never happened, or ChatGPT reporting that Pope Francis was still alive and kicking months after his passing. Awkward!

Now, you might be thinking, “Who’s using AI for news anyway?” Well, according to the Reuters Institute, 7% of all online news consumers and a hefty 15% of those under 25 are turning to AI assistants for their news fix. So, it’s a thing!

The EBU warns that if we can’t trust the news we’re getting, we might just give up on news altogether. And that, my friends, is a recipe for democratic disaster.

So, AI companies, listen up! The EBU’s calling for you to step up your game and make sure your AI assistants are serving up accurate, trustworthy news. Let’s keep the fake news at bay, folks! 📢🔎📚
