An internal Meta document, obtained by Reuters, showed that until recently the platform's guidelines allowed artificial intelligence chatbots to engage children in romantic or sensual dialogues, provide false medical information, and even produce racist arguments.
The material, called GenAI: Content Risk Standards, runs more than 200 pages and was prepared with the approval of the company's legal, public policy, and engineering teams, the news agency said.
According to Reuters, the rules, which applied to Meta AI and the assistants available on Facebook, WhatsApp, and Instagram, included explicit examples of romantic role-play involving minors, such as "I take your hand, guiding you to the bed." The company stated that these sections were removed after questions from the agency.
The rules also acknowledged that the bots were not required to provide accurate information, allowing them, for example, to incorrectly claim that advanced colon cancer could be treated with quartz crystals.
There were also exceptions permitting the creation of derogatory statements based on protected characteristics, such as arguing that Black people are "dumber" than white people, guidance Meta says it has since revised.
Spokesman Andy Stone acknowledged the authenticity of the document and described the examples as "erroneous" and "inconsistent" with official policies. He said a review of the standards is underway and reiterated that the sexualization of children is prohibited.
The document confirms previous reports by the Wall Street Journal and Fast Company on the suggestive behavior of the company's bots. Former employees claim the guidelines reflect Meta's priority of increasing user engagement with its virtual assistants, even in the face of ethical risks.