TikTok Scales Back AI Video Summaries Following Series Of Bizarre Errors
TikTok has reduced the use of its AI-generated video summary feature after users flagged several inaccurate and unusual descriptions appearing beneath videos on the platform.
The feature, known as “AI overviews,” was introduced experimentally for selected users in the United States and the Philippines. It was designed to provide brief explanations or additional context about video content.
However, the tool quickly drew criticism after generating strange summaries for videos involving celebrities and creators. In one widely shared example, a video featuring TikTok personality Charli D'Amelio was reportedly described as "a collection of various blueberries with different toppings."
Similar inaccurate descriptions were also linked to videos featuring artists including Shakira and Olivia Rodrigo.
Following the backlash, TikTok confirmed that the feature has now been adjusted to focus mainly on identifying products shown within videos instead of summarising overall content.
Users began noticing the AI summaries earlier in the year, but complaints intensified in late April as more examples circulated online. Several creators shared screenshots of descriptions that bore no relation to the videos they accompanied.
One viral example involved ballroom dancers Reagan and Juli To, whose performance was reportedly summarised by the AI system as “a person repeatedly striking their head with a rubber chicken.” In other cases, videos with no violent content were described as showing people hitting themselves with hammers.
TikTok said users were able to report inaccurate AI overviews and submit feedback while the feature was being tested. The company also stated that it had identified the source of the inconsistencies, although it did not provide further technical details.
The incident adds to growing concerns surrounding generative artificial intelligence tools, which are increasingly being integrated into social media and technology platforms despite ongoing issues with false or misleading outputs.
In recent years, several major technology companies have faced criticism over AI-related errors. Google previously drew online ridicule after its AI-generated search summaries produced bizarre responses, while Apple paused an AI-powered notification summarisation feature following complaints about inaccurate news alerts.
Despite continued improvements in AI systems, experts say so-called "hallucinations" — instances in which AI tools generate incorrect or fabricated information — remain a persistent challenge across the industry.
