Apple Urged to Remove New AI Feature After Falsely Summarizing News Reports

Apple is under mounting pressure to remove its newly launched AI-powered news summarization feature after it falsely attributed a headline to the BBC, prompting concerns from press freedom advocates.

The controversy erupted last week when Apple Intelligence, the generative AI tool integrated into Apple devices, pushed a notification falsely summarizing a BBC report. The summary incorrectly stated that Luigi Mangione, the suspect in the killing of UnitedHealthcare’s CEO, had shot himself – a claim that appeared nowhere in the original report.

Following the incident, Reporters Without Borders (RSF) called on Apple to take swift action. In a statement on Wednesday, Vincent Berthier, head of RSF’s technology and journalism desk, urged Apple to “act responsibly by removing this feature.”

“AI systems are probability-based machines, and facts cannot be left to chance,” Berthier said. “Producing false information attributed to trusted media outlets undermines their credibility and endangers the public’s right to reliable news.”

The BBC confirmed it had raised the issue with Apple but could not verify whether the tech giant had responded. Emphasizing trust in its journalism, the BBC stated, “It is essential to us that our audiences can trust any information or journalism published in our name, including notifications.”

The incident underscores the broader risks posed by AI tools in the media landscape. RSF warned that AI-generated summaries remain “too immature” to be used for delivering accurate news and argued that their probabilistic nature makes them unreliable. “AI systems operating this way automatically disqualify themselves as dependable tools for public news dissemination,” the group added.

Apple, which has yet to comment publicly, introduced its AI summarization feature earlier this year as part of its generative AI suite, known as Apple Intelligence. The tool aims to streamline news consumption by generating concise summaries in various formats, such as paragraphs or bullet points, for users of iPhone, iPad, and Mac devices.

However, concerns have mounted since its launch in late October. Alongside the BBC incident, users reported another error involving The New York Times. Apple Intelligence reportedly summarized a story about the International Criminal Court issuing an arrest warrant for Israeli Prime Minister Benjamin Netanyahu as simply “Netanyahu arrested”—a misleading interpretation that could cause significant confusion.

Critics argue that such inaccuracies pose significant risks, not only by spreading misinformation but also by damaging the reputation of trusted news outlets whose branding accompanies Apple’s AI-generated summaries. Unlike some publishers that have adopted AI tools internally under their own editorial control, Apple Intelligence summaries are generated as an opt-in feature on users’ devices, entirely independently of the newsrooms whose reporting they condense.

The AI challenges faced by Apple reflect broader tensions within the news and technology industries as generative AI reshapes content delivery. Publishers have been grappling with tech giants’ use of copyrighted material to train AI models, with some—like The New York Times—pursuing legal action. Others, such as Axel Springer, have signed licensing deals to adapt to the shifting digital landscape.

As calls grow for accountability, Apple must navigate mounting concerns about the integrity of its AI tool and its implications for public trust in journalism. Reporters Without Borders has made it clear: until AI tools can guarantee accuracy, their role in news dissemination remains a significant risk.
