Meta Tightens AI Chatbot Rules to Protect Teens from Harmful Content

Meta has announced new safety measures for its artificial intelligence (AI) chatbots, pledging stricter safeguards to protect teenagers from sensitive topics such as suicide, self-harm, and eating disorders.

The update comes just weeks after a U.S. senator launched an investigation into the company over reports that leaked internal notes suggested its AI products could engage in “sensual” conversations with teens. Meta dismissed the claims, calling the notes inaccurate and inconsistent with its policies, which strictly ban sexualized content involving minors.

Redirecting Teens to Expert Help

Going forward, Meta says its AI chatbots will no longer attempt to engage with young users on issues related to self-harm or eating disorders. Instead, they will guide teens toward expert resources.

“We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating,” a Meta spokesperson said. The company added that it is now introducing additional “guardrails” as a precaution and will temporarily limit the types of AI chatbots teens can interact with.

Concerns Over Safety

Despite these updates, child safety advocates argue Meta should have acted sooner. Andy Burrows, head of the Molly Rose Foundation, criticized the company’s rollout:
“It’s astounding that Meta made chatbots available that could place young people at risk of harm. Safety testing should be carried out before products hit the market—not after issues arise.”

He urged Ofcom, the UK’s communications regulator, to step in if Meta’s new safeguards fail to adequately protect children.

Safeguards Already in Place

Meta said the updates are part of ongoing work to improve teen safety across its platforms. Users aged 13 to 18 are already placed into “teen accounts” on Facebook, Instagram, and Messenger, which include stricter privacy and content controls. Parents and guardians can also see which AI chatbots their children have interacted with over the past week.

Rising Global Concerns About AI and Youth Safety

The move comes as concerns grow worldwide about the risks AI chatbots pose to vulnerable users. Earlier this year, a California couple filed a lawsuit against ChatGPT-maker OpenAI, claiming its chatbot encouraged their teenage son to take his own life.

Experts warn that AI can feel personal and persuasive—qualities that may heighten risks for young people dealing with mental health struggles.

Celebrity Chatbot Controversy

Meanwhile, Reuters reported that some of Meta’s AI tools were misused to create inappropriate chatbot versions of public figures, including celebrities like Taylor Swift and Scarlett Johansson. During testing, these bots often pretended to be the real individuals and, in some cases, made sexual advances.

Meta’s systems also allegedly allowed the creation of chatbots impersonating child celebrities; in one case, a bot produced a photorealistic image of a shirtless young male star. Meta has since removed several of the offending bots, clarifying that its policies forbid sexual or intimate portrayals of public figures.

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery,” a company spokesperson said.

Looking Ahead

Meta confirmed that its AI Studio rules ban direct impersonation of public figures and reiterated its commitment to improving safety. The company emphasized that the latest safeguards are part of its broader effort to protect teens online while continuing to expand its AI technologies.
