Regulation Moves From Theory to Enforcement
Effective January 1, 2026, California is imposing comprehensive safety requirements on AI companion chatbots under Senate Bill 243. The law targets AI systems providing adaptive, human-like social interactions.
What Operators Must Do: Operators must clearly disclose that the chatbot is artificial whenever a reasonable user could be misled into believing they are communicating with a human. For users who are minors, operators face heightened obligations, including regular reminders of the AI's artificial nature and measures to prevent sexually explicit content. The law also mandates protocols for detecting and responding to suicidal ideation, requiring operators to refer users to crisis services when such content is detected.
The Contradictions: On December 11, 2025, President Trump signed an executive order that casts doubt on the enforceability of these and other state AI laws. The order proposes a uniform federal policy framework for AI that would preempt state AI laws the administration deems inconsistent with that framework.
My Take: California is writing the rulebook while Washington tries to erase it. This is the battle of 2026: state-by-state guardrails vs. federal preemption favoring industry. California's law is thoughtful—suicidal-ideation protocols, child protections, clear disclosure requirements. But if Trump's executive order survives court challenges, companies may face an impossible conflict: comply with California or with the feds?
