Meta to Restrict AI Chatbot Interactions with Teens on Suicide and Self-Harm

Meta has announced new restrictions on its artificial intelligence chatbots, preventing them from engaging teenagers in conversations about suicide, self-harm, or eating disorders. Instead, the company said the chatbots will now direct young users to professional resources when such topics arise.

The move follows heightened scrutiny after a leaked internal document suggested Meta’s AI products could engage in “sensual” chats with teens – a claim the company dismissed as inaccurate and inconsistent with its policies. Still, it acknowledged the need for additional safeguards.

“We built protections for teens into our AI products from the start, including safe responses to prompts about self-harm and suicide,” a Meta spokesperson said. “We’re now adding extra precautions and temporarily limiting which chatbots teens can interact with.”

The announcement comes amid growing concerns about AI’s potential risks for young people. Andy Burrows, head of child-safety charity the Molly Rose Foundation, described it as “astounding” that Meta had released chatbots that could expose teenagers to harm. He urged stronger oversight, adding: “Safety testing must happen before products go live, not after risks emerge.”

Meta already places users aged 13 to 18 in “teen accounts” across Facebook, Instagram, and Messenger, with stricter content and privacy settings. Earlier this year, it also introduced parental tools to let guardians see which AI chatbots their teenagers had interacted with in the past week.

The decision comes as concerns over AI safety intensify globally. In the US, OpenAI is facing a lawsuit from parents who allege its chatbot played a role in their son’s death by suicide. Tech experts warn that the highly personal nature of AI tools can make them particularly risky for vulnerable individuals.

Meanwhile, separate reports have raised questions about Meta’s AI platform after Reuters revealed some users – including a Meta employee – had used it to create “parody” celebrity chatbots. Tests reportedly showed these avatars making flirtatious or sexual advances and, in some cases, impersonating child celebrities. Meta said it removed several of the offending chatbots and reaffirmed that its rules prohibit intimate or sexualised content.

Meta confirmed that the updated safety measures are already being rolled out, though it has not given a specific timeline for full deployment.
