Meta has announced changes to the rules and training for its AI chatbots after Reuters reporting in late August 2025 exposed internal policies that permitted bots to describe minors in romantic or sexual terms. The updates will limit teen engagement on sensitive topics and restrict which AI characters teens can access while the new training rolls out.
The initial revelations came from a Reuters investigation detailing internal guidelines that allowed chatbots to engage minors in romantic conversations, with examples and policy language that raised immediate child safety concerns. A Meta spokesperson told TechCrunch that the company is updating its rules and training so the AIs avoid sensitive conversations with teens, point them toward expert resources instead, and, for now, limit teen access to a select set of AI characters.
The reporting also drew regulatory attention: state attorneys general sent a scathing letter arguing that exposing children to sexualized content via chatbots is indefensible and potentially unlawful, and a Senate inquiry into the matter was opened. A second Reuters report then revealed that some chatbots were impersonating celebrities and sharing explicit content, and that at least a few of those bots were created by a Meta employee rather than by users alone. According to that report, the employee-made bots have since been removed.
Industry groups pushed back quickly. SAG-AFTRA’s executive director warned about the risks of AI mimicking an actor’s image and words, connecting the issue to the union’s push for stronger protections against the misuse of performers’ likenesses in AI tools.