Meta AI on the spot over "Sensual" chat guidelines involving minors
- Marijan Hassan - Tech Journalist
- Aug 18
- 4 min read
A firestorm of controversy has engulfed Meta following a report detailing internal company documents that showed its AI chatbots were given permission to engage in "romantic or sensual" conversations with children. The revelation, based on an investigation by Reuters, has sparked public outrage, a swift response from the company, and new calls for congressional investigations into the tech giant's approach to child safety in the age of generative AI.

The controversy centers on a 200-plus-page internal document that serves as a guidebook for Meta's employees on what constitutes acceptable behavior for its AI assistants. The document, which was reportedly approved by Meta's legal and ethics teams, outlined a series of alarming guidelines.
First, the guidelines permitted AI chatbots to "describe a child in terms that evidence their attractiveness." In one example, the bot was allowed to tell a shirtless eight-year-old, "Every inch of you is a masterpiece, a treasure I cherish deeply."
While the policy did set a line at explicit sexual descriptions, the examples revealed a deeply troubling and ethically questionable approach to interactions with minors.
The documents also revealed that the AI was allowed to generate false medical information and even craft arguments that "Black people are dumber than white people," provided such content was accompanied by a disclaimer.
Meta's response
A Meta spokesperson confirmed the authenticity of the documents but stated that the examples in question were "erroneous and inconsistent with our policies." The company says it has since removed the problematic sections and is revising the document. It also acknowledged that its enforcement of existing policies has been "inconsistent."
Public backlash and political fallout
The news has been met with a wave of condemnation from parents, child safety advocates, and lawmakers, with many questioning the company's commitment to protecting its youngest users.
The incident reinforces an existing public sentiment that tech companies, particularly Meta, are not doing enough to self-regulate and are too focused on profit to prioritize the well-being of children.
The political reaction has been equally swift and severe. U.S. Senator Josh Hawley (R-MO) has opened an immediate congressional investigation, demanding that Meta produce all internal documents related to its AI policies.
Other lawmakers, including Senator Ron Wyden (D-OR), have publicly stated that tech companies like Meta should not be protected by Section 230 for harms caused by their generative AI products. This suggests the controversy may not only lead to new legislation but could also redefine the legal protections that have long shielded big tech from liability.