Continued Scrutiny Over Artificial Intelligence Technology Highlights Importance of Good Governance

As federal and state regulators heighten their focus on the development and use of emerging technologies, particularly Artificial Intelligence (AI), the Federal Trade Commission (FTC) remains one of the most active enforcers in this space. While there are no signs of a comprehensive federal AI law in the works, the FTC continues to rely on its existing authority under the FTC Act (15 U.S.C. § 46) to regulate the key players in the industry.

On September 11, 2025, the FTC announced that it will issue orders under its Section 6(b) authority to seven companies that develop and deploy customer-facing AI chatbots, seeking more information about how these companies measure, test, and monitor for negative impacts of such chatbots on children and teens. The seven companies receiving orders are:

  • Alphabet, Inc. (parent company of Google)
  • Character Technologies, Inc. (developer of the Character.AI generative AI chatbot service)
  • Instagram, LLC
  • Meta Platforms, Inc. (formerly known as Facebook)
  • OpenAI OpCo, LLC (developer of ChatGPT)
  • Snap, Inc.
  • X.AI Corp (developer of the Grok 4 AI model)

Under Section 6(b) of the FTC Act, the FTC has the authority to require companies to file “reports or answers in writing to specific questions” concerning the business’s practices, management, and conduct. Through the exercise of this authority, the FTC hopes to better understand what steps the companies are taking to evaluate the safety of their chatbots, to limit the potential negative effects on children and teens, to inform users and parents of the risks associated with using the products, and to comply with the Children’s Online Privacy Protection Act Rule. 

This probe comes on the heels of allegations that AI chatbots influence impressionable teens to engage in self-harm, including a recent lawsuit against OpenAI alleging that its product encouraged a teenager who died by suicide in April 2025, as well as new state laws further regulating the use of AI for mental health and related purposes, including in California, Illinois, Nevada, New York, and Utah.

The FTC's orders follow its public referrals to the Department of Justice of complaints against Snap Inc. earlier this year and against TikTok and its parent company ByteDance in 2024, each alleging that the companies' practices may harm young users.

In a similar vein, earlier this year the Texas Attorney General, who has actively enforced the state's comprehensive privacy law, launched an investigation into the AI chatbots of two of these same companies, Meta Platforms, Inc. and X.AI Corp. Moreover, such AI chatbots have been the subject of a litany of lawsuits, including claims alleging invasion of privacy, wiretapping violations, and deceptive trade practices.

Companies are still struggling to fully understand their regulatory requirements under early AI laws. In the meantime, adopting foundational AI risk governance policies and frameworks, including those that classify risk categories and clearly document risk mitigation strategies during the design, development, training, and testing phases of AI technology, will be critical for any business developing its own customer-facing AI tool or adopting a third-party one.

For more information, please contact Kenneth Suh and Hannah Babinski from the McDonald Hopkins AI Practice Group.
