In August, Reuters uncovered an internal Meta policy document whose governance standards allowed Meta’s chatbots to engage in “romantic or sensual” conversations with minors. The Washington Post further reported that “Meta suppressed research” that “might have illuminated potential safety risks to children and teens on the company’s virtual reality devices and apps.”

Since then, enforcers and policymakers have engaged intensively with the issue. Senator Josh Hawley (R-MO) launched an investigation, demanding document productions from Meta following the report. A bipartisan group of 44 State Attorneys General wrote a letter to tech companies (including Meta, Google, Apple, Microsoft, OpenAI, and Anthropic) warning that they would be held accountable for their decisions on child safety. The Federal Trade Commission launched a study inquiry into AI chatbots acting as companions. And the Attorneys General of California and Delaware sent a joint letter putting OpenAI “on notice” about serious concerns with ChatGPT’s safety.

Given these fast-moving developments, below is a high-level list of interventions that lawmakers and enforcers might consider in legislation or enforcement actions.

  1. Ban on AI chatbots: Prohibiting the deployment of AI chatbots for children.
  2. Duty of care: Imposing on AI developers a duty of care in the design, deployment, and monitoring of AI systems for minors, analogous to obligations under the Kids Online Safety Act.
  3. Ban on data monetization: Banning the monetization of data collected from minors, including its use for AI training.
  4. Mandated assessments: Requiring regular, formal impact assessments on child safety that are shared with independent auditors and government overseers. 
  5. Whistleblower protections: Providing whistleblower protections for employees or contractors who raise concerns about child safety in AI systems. 
  6. Researcher access and evaluation: Requiring controlled API access for researchers, ensuring oversight, and enabling independent evaluation of child safety risks. We note that key safeguards will be needed to ensure this intervention provides real value to researchers and the public without opening the door to privacy abuses.

***

Stephanie T. Nguyen is a Senior Fellow at the Georgetown Institute for Technology Law & Policy and former Chief Technologist at the Federal Trade Commission.

Erie Meyer is a Senior Fellow at the Georgetown Institute for Technology Law & Policy and former Chief Technologist at the Consumer Financial Protection Bureau.

Samuel A.A. Levine is a Senior Fellow at the UC Berkeley Center for Consumer Law & Economic Justice and former Director of the Bureau of Consumer Protection at the Federal Trade Commission.