AI Chatbots unintentionally providing fake news


In a recent study, the BBC found that leading AI chatbots, including OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI, are not reliably summarizing news stories. The investigation revealed "significant inaccuracies" in their responses to content sourced from the BBC's website.

Deborah Turness, CEO of BBC News and Current Affairs, expressed concern over the potential risks posed by AI-generated news summaries, questioning the possible real-world impacts of distorted headlines. She emphasized the need for responsible use of AI technologies, noting that developers are "playing with fire."

The research involved testing 100 news stories with the chatbots, which were then evaluated by expert BBC journalists. Approximately 51% of the AI-generated summaries were deemed to have notable issues, while 19% contained factual errors, such as incorrect statements, dates, and figures.

Ms. Turness called for collaboration with AI developers to address these challenges. She urged companies to reconsider their AI news summarization processes and cited Apple's decision to suspend its AI news alerts after similar complaints as a positive step.

Some specific errors identified included:

  • Gemini inaccurately stated that the NHS does not recommend vaping as a smoking cessation aid.
  • ChatGPT and Copilot incorrectly reported that Rishi Sunak and Nicola Sturgeon were still in office after they had left.
  • Perplexity misquoted a BBC News article on the Middle East, incorrectly describing Iran's and Israel's actions.

The study found that Microsoft's Copilot and Google's Gemini exhibited more significant issues than OpenAI's ChatGPT and Perplexity. To enable the study, conducted in December 2024, the BBC temporarily lifted its restrictions on AI systems accessing its content.

The findings underscored the AI systems' difficulty in distinguishing between fact and opinion and their tendency to editorialize without providing necessary context. Pete Archer, BBC's Programme Director for Generative AI, stressed the importance of allowing publishers to control how their content is used by AI systems. He also called for transparency from AI companies regarding their error rates.

OpenAI responded by stating its commitment to improving citation accuracy and respecting publisher preferences, utilizing tools like robots.txt to manage search bot behavior.
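As a rough illustration of the robots.txt mechanism OpenAI refers to, a publisher can list a crawler's user-agent and disallow it from some or all paths. The sketch below is an assumption about how a news site might configure this; `GPTBot` is OpenAI's publicly documented crawler name, while the path shown is purely illustrative:

```
# Hypothetical robots.txt for a news site

# Opt out of crawling by OpenAI's GPTBot across the whole site
User-agent: GPTBot
Disallow: /

# All other crawlers: allow everything except an illustrative private path
User-agent: *
Disallow: /internal/
```

Note that robots.txt is a voluntary convention: it signals publisher preferences, but compliance depends on the crawler operator honoring it, which is why Pete Archer's call for publisher control goes beyond this mechanism.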

This study highlights the ongoing challenges and responsibilities in deploying AI technologies for news distribution.
