U.S. Updates to Children’s Online Safety Initiatives
September and October saw several major updates in the realm of children’s online safety. Two federal bills aimed at AI chatbot safety and education were introduced, and California enacted one of the most comprehensive age verification laws in the U.S. These developments highlight the ongoing effort to let children benefit from the latest technology while protecting this most vulnerable population.
EdTech and marketing companies in particular must monitor these developments to ensure compliance with new and proposed legislation, and should consult counsel sooner rather than later on what California’s new law may mean for their apps. Importantly, that law may reach businesses located outside of California. Read on to learn how.
Pending U.S. Federal Developments: AI Companions & Child Safety
The Children Harmed by AI Technology (CHAT) Act and the AI Warnings and Resources for Education (AWARE) Act, both introduced in September, reflect growing efforts by federal lawmakers to regulate AI systems that are used by, or interact with, children.
The CHAT Act would require AI chatbot developers to do the following: (1) include recurring disclosures that users are speaking with an AI; (2) block explicit content; and (3) surface crisis and self-harm resources when a conversation indicates a user may be at risk.
The AWARE Act directs the Federal Trade Commission (FTC) to develop and release educational resources for parents, educators and minors regarding the safe use of AI chatbots. These resources must explain how to identify unsafe chatbots, how the technologies handle privacy and data collection, and best practices for supervising children’s use of AI, and all content must be engaging and easy for families to access and understand.
Both proposals were prompted by concerns over online safety and by reports of inappropriate AI chat interactions with children, such as AI bots encouraging or assisting children in attempting suicide, and AI bots engaging children in conversations of a sexual nature.
These incidents demonstrate the propensity of children to develop deep trust bonds and parasocial relationships with AI bots, which may result in catastrophic outcomes or expose children to unsafe content. At a minimum, developers should be aware of these incidents so they can build guardrails into any technology a child could reasonably be expected to use, preventing these outputs.
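Neither bill prescribes a technical implementation, but as a rough illustration of the kind of guardrails at issue, the sketch below layers a recurring “you are talking to AI” disclosure, a simple content filter, and a crisis-resource response around a chatbot reply. All names, keyword lists and thresholds are hypothetical placeholders, not a production safety system and not drawn from either bill.

```kotlin
// Simplified illustration of chatbot guardrails of the kind these bills contemplate.
// Keyword lists, thresholds and resource text are placeholders, not a real safety system.

val SELF_HARM_TERMS = listOf("suicide", "kill myself", "self-harm")
val EXPLICIT_TERMS = listOf("explicit-term-1", "explicit-term-2") // placeholder list

const val DISCLOSURE_EVERY_N_MESSAGES = 10

fun guardrail(userMessage: String, modelReply: String, messageCount: Int): String {
    val lowerUser = userMessage.lowercase()
    val lowerReply = modelReply.lowercase()

    // (3) Surface crisis resources when a message suggests self-harm risk.
    if (SELF_HARM_TERMS.any { it in lowerUser }) {
        return "If you are struggling, please reach out to a trusted adult or call/text 988 " +
                "(the Suicide & Crisis Lifeline in the U.S.)."
    }

    // (2) Block explicit content in either direction.
    if (EXPLICIT_TERMS.any { it in lowerUser || it in lowerReply }) {
        return "I can't help with that."
    }

    // (1) Recurring disclosure that the user is speaking with an AI.
    val disclosure =
        if (messageCount % DISCLOSURE_EVERY_N_MESSAGES == 0)
            "\n\nReminder: you are chatting with an AI, not a person."
        else ""

    return modelReply + disclosure
}

fun main() {
    println(guardrail("hello there", "Hi! How can I help?", messageCount = 10))
}
```

In practice, developers would likely replace the keyword checks with dedicated safety classifiers and route at-risk conversations to human review; the point of the sketch is only that the bills’ three requirements map naturally onto discrete checkpoints in a chatbot pipeline.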
Neither bill has been signed into law yet. Check back for updates.
California’s Digital Age Assurance Act
On October 13, 2025, Governor Newsom signed the Digital Age Assurance Act (DAAA) (Assembly Bill 1043) into law.
DAAA mandates device-level age verification to create “safer digital environments for children under 18.” To accomplish this, operating system providers must collect a user’s age range during device setup. Operating system providers and app stores must also provide an age-signal API to developers, enabling app makers to determine a user’s age range in real time before allowing an app to be installed. Parents of minors will serve as their “account holders,” enabling device- and app-based parental controls.
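The statute does not yet define the technical shape of the age-signal API, so the following is a minimal, hypothetical sketch of how an app might consume an OS-provided age range and gate its experience accordingly. The names (AgeSignalProvider, AgeBracket) and the mapping of brackets to modes are illustrative assumptions, not part of the law or any platform SDK.

```kotlin
// Hypothetical sketch only: the DAAA does not publish an API specification, and no
// platform SDK is assumed. All names and brackets below are illustrative.

enum class AgeBracket { UNDER_13, AGE_13_TO_15, AGE_16_TO_17, ADULT, UNKNOWN }

// Stand-in for whatever age signal the OS or app store ultimately exposes.
interface AgeSignalProvider {
    fun currentUserBracket(): AgeBracket
}

// App-side gate: choose an experience based on the signaled bracket.
fun configureExperience(provider: AgeSignalProvider): String =
    when (provider.currentUserBracket()) {
        AgeBracket.UNDER_13 -> "kids mode: no ads, no public chat, COPPA-grade data handling"
        AgeBracket.AGE_13_TO_15,
        AgeBracket.AGE_16_TO_17 -> "teen mode: limited notifications, restricted purchases"
        AgeBracket.ADULT -> "full experience"
        AgeBracket.UNKNOWN -> "default to the most protective settings"
    }

fun main() {
    // Fake provider for illustration; a real one would wrap the OS-provided signal.
    val provider = object : AgeSignalProvider {
        override fun currentUserBracket() = AgeBracket.AGE_13_TO_15
    }
    println(configureExperience(provider))
}
```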
Unlike similar proposals abroad, DAAA stops short of requiring sensitive government IDs or biometric verification, but it still presents significant privacy and compliance considerations. It is one of a series of new California laws aimed at protecting children online.
The law will take effect on January 1, 2027. For devices already in use, OS providers must provide an updated interface by July 1, 2027, allowing age input.
This law has significant implications for app developers, particularly in EdTech and for apps with in-app purchases, both inside and outside California:
App developers must ensure that their apps integrate with the age-signal API and limit content for children in a manner consistent with the law. This may include developing separate versions of apps for minors.
App developers should also be aware of data protection laws that cover children. Receiving an age signal gives a developer actual knowledge of a user’s age range, which raises additional liability considerations: developers can no longer claim that they could not reasonably have known a child was using their technology.
Even app developers located outside California are affected: if the technology can be deployed in California or downloaded onto a California device, the app must integrate with the age-signal API.
For EdTech in particular, because such services are routinely used by students under 18, and often under 13, companies should audit whether their default settings, onboarding flows and age segmentation are compatible with the new law.
Marketing to children through in-app purchases will also have to change. If an app or service is likely to be accessed by minors, firms should be mindful that push notifications, in-app purchases and data collection practices may have to vary by age bracket. Again, knowing a user’s age bracket can establish actual knowledge under COPPA, an existing child protection law limiting how children under 13 may be marketed to and how their data may be collected and stored.
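As a rough illustration of what that age-based differentiation might look like in practice, the sketch below maps each age bracket to a marketing and data-handling policy. The policy fields and the specific choices are hypothetical assumptions for illustration, not legal guidance or requirements drawn from the statute or COPPA.

```kotlin
// Hypothetical mapping of age brackets to marketing and data-handling policies.
// Field names and the specific choices are illustrative, not legal guidance.

enum class Bracket { UNDER_13, TEEN_13_17, ADULT, UNKNOWN }

data class Policy(
    val pushMarketing: Boolean,   // promotional push notifications allowed
    val inAppPurchases: Boolean,  // purchases enabled without extra consent steps
    val behavioralAds: Boolean,   // ad targeting based on collected data
    val retainAnalytics: Boolean  // keep non-essential analytics data
)

fun policyFor(bracket: Bracket): Policy = when (bracket) {
    Bracket.UNDER_13 -> Policy(
        pushMarketing = false, inAppPurchases = false,
        behavioralAds = false, retainAnalytics = false
    )
    Bracket.TEEN_13_17 -> Policy(
        pushMarketing = false, inAppPurchases = true,
        behavioralAds = false, retainAnalytics = true
    )
    Bracket.ADULT -> Policy(
        pushMarketing = true, inAppPurchases = true,
        behavioralAds = true, retainAnalytics = true
    )
    // When the signal is missing, fall back to the most protective policy.
    Bracket.UNKNOWN -> policyFor(Bracket.UNDER_13)
}

fun main() {
    println(policyFor(Bracket.TEEN_13_17))
}
```

Defaulting unknown users to the most protective policy reflects a common conservative design choice when an age signal is missing or unreliable.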
Conclusion
As children’s online interactions become increasingly shaped by AI and algorithmic systems, regulators are moving to preempt harm rather than react to it.
The DAAA, the CHAT Act and the AWARE Act all represent a paradigm shift toward embedding safety and transparency into technology architecture. For developers, the message is clear: compliance and child protection must converge. Businesses that fail to adapt to these evolving standards risk both legal exposure and reputational harm, as regulators and consumers alike increasingly demand responsible design for younger users.