SAN FRANCISCO — Character.AI announced on Wednesday that it will ban the use of its chatbots by people under the age of 18 starting at the end of next month, in a broad move to address child safety concerns.
The rule will take effect on November 25, the company said. To enforce it, Character.AI said that over the next month it will identify which users are minors and impose time limits on the app. Once the ban begins, those users will no longer be able to chat with the company’s chatbots.
“We are taking a very bold step in saying that for teenage users, chatbots are not the right form of entertainment, but there are much better ways to serve them,” Karandeep Anand, CEO of Character.AI, said in an interview. He said the company also plans to create an AI safety lab.
The moves come amid increased attention on how chatbots, sometimes called AI companions, can affect users’ mental health. Last year, Character.AI was sued by the family of Sewell Setzer III, a 14-year-old from Florida who died by suicide after constant conversations and message exchanges with one of Character.AI’s chatbots. The family accused the company of being responsible for his death.
The case has become a focal point for the debate over how people can develop emotional bonds with chatbots, with potentially dangerous results. Since then, Character.AI has faced other lawsuits related to child safety. AI companies, including OpenAI, creator of ChatGPT, have also come under fire for the impact their chatbots have on people — especially young people — when they have sexually explicit or toxic conversations.
In September, OpenAI announced plans to introduce features that make its chatbot safer, including parental controls. This month, Sam Altman, CEO of OpenAI, posted on social media that the company “was able to mitigate serious mental health issues” and that it would relax some of its safety restrictions.
(The New York Times has sued OpenAI and Microsoft, alleging copyright infringement of journalistic content related to AI systems. Both companies have denied the accusations.)
In light of these cases, lawmakers and authorities have launched investigations and proposed or passed legislation to protect children from AI chatbots. On Tuesday, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced a bill to ban AI companions for minors, among other safety measures.
Governor Gavin Newsom this month signed a law in California that requires AI companies to implement safeguards in chatbots. The law takes effect on January 1.
“The stories about what could go wrong are growing,” said Steve Padilla, a Democrat in the California state Senate who introduced the safety bill. “It is important to set reasonable limits to protect the most vulnerable people.”
Character.AI’s Anand declined to comment on the lawsuits the company faces. He said the startup wants to set a safety example for the industry “by doing much more than regulation may require.”
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI’s technology, and Shazeer and De Freitas returned to Google.
Character.AI allows people to create and share their own AI characters, such as custom anime avatars, and promotes the app as AI entertainment. Some characters can be designed to simulate boyfriends, girlfriends, or other intimate relationships. Users pay a monthly subscription, starting at about $8, to chat with companions. Until concerns about underage users surfaced recently, Character.AI did not verify users’ ages at sign-up.
Last year, researchers at the University of Illinois Urbana-Champaign analyzed thousands of posts and comments left by young people in Reddit communities dedicated to AI chatbots, and interviewed teenagers who used Character.AI, as well as their parents. The researchers concluded that AI platforms did not have sufficient protections for children, and that parents did not fully understand the technology or its risks.
“We should pay as much attention as if they were talking to strangers,” said Yang Wang, a professor of information science at the university. “We shouldn’t underestimate the risks just because they are non-human bots.”
Character.AI has about 20 million monthly users, fewer than 10% of whom say they are under 18, Anand said.
Under the new policies, the company will immediately impose a two-hour daily limit for users under 18. Starting November 25, these users will no longer be able to create chatbots or chat with them, though they will still be able to read past conversations. They will also be able to generate AI videos and images through a structured menu of commands, within certain safety limits, Anand explained.
He said the company had already implemented other security measures over the past year, such as parental controls.
In the future, Character.AI will use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts, he said. If the company suspects a user is under 18, it will ask that person to verify their age.
Dr. Nina Vasan, a psychiatrist and director of a mental health innovation lab at Stanford University that researches AI safety and children, said it is “very important” for a chatbot maker to ban minors from using its app. But she said the company should work with child psychologists and psychiatrists to understand how the sudden loss of access to AI companions would affect young users.
“What concerns me is children who have been using this for years and have become emotionally dependent,” she said. “Losing your friend on Thanksgiving is not good.”
c.2025 The New York Times Company
