
Every technology brings both benefits and risks. Over the past two decades, digital platforms, from social media and app stores to AI systems, have reshaped how we learn, work, communicate, and innovate. The positive impacts are undeniable. But so is the growing body of evidence showing that certain technologies, when designed without safeguards, are causing real harm to children.
Today, more than 90 percent of U.S. parents say online safety is their top child-safety concern. Most believe technology companies and policymakers aren’t doing enough—and the data backs them up.
A recent American Academy of Pediatrics study found that preteen smartphone use is associated with higher rates of depression, obesity, and sleep disruption. These findings add to a long series of warnings about youth mental health and digital exposure.
Social media plays a major role. About 40 percent of children ages 8–12 and up to 95 percent of teens use social media. Teenagers now average more than four and a half hours per day online, and those spending more than three hours a day on social media face twice the risk of depression and anxiety. Youth suicide rates have surged 62 percent since 2007.
Exposure to pornography is also rising. Seventy-three percent of teenagers report viewing online pornography, with an average first exposure at age 12. More than half have encountered violent content, including depictions of sexual assault. Decades of research connect early exposure to pornography with negative developmental outcomes—from increased sexual aggression to strained relationships and social isolation.
This is not a blip. It’s a market failure—and one that demands serious government action.
Current debates often fixate on policy vs. parental responsibility. But at ChildSafe.dev, we see a different truth: technology must shoulder more of the burden. Platforms rely heavily on algorithms and features that optimize for engagement, not safety. Parental controls alone cannot counteract systems designed to keep children online at all costs.
ChildSafe.dev advocates for safety-by-design tools that companies can embed directly into their products—tools that help detect grooming, filter harmful content, and support privacy-preserving age assurance. Without infrastructure like this, even good laws fall short because the underlying systems remain unchanged.
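As a concrete illustration, here is a minimal TypeScript sketch of what one such safety-by-design hook could look like: a scoring function that flags grooming-associated patterns before a message reaches a minor. Every name here (SafetySignal, scoreMessage, deliverToMinor) and every heuristic is a hypothetical simplification, not ChildSafe.dev's actual tooling; production systems would rely on trained classifiers and conversation-level context.

```typescript
// Hypothetical sketch of a safety-by-design moderation hook.
// All names, patterns, and thresholds are illustrative only.

interface SafetySignal {
  riskScore: number;   // 0 (benign) to 1 (high risk)
  reasons: string[];   // human-readable flags for reviewers
}

// Toy heuristics; a real system would use trained classifiers plus
// conversation history, account ages, and reporting signals.
const RISKY_PATTERNS: Array<[RegExp, string]> = [
  [/don'?t tell (your )?(mom|dad|parents)/i, "secrecy request"],
  [/\b(snapchat|whats?app|telegram)\b/i, "off-platform contact mention"],
  [/how old are you/i, "age probing"],
];

function scoreMessage(text: string): SafetySignal {
  const reasons = RISKY_PATTERNS
    .filter(([pattern]) => pattern.test(text))
    .map(([, label]) => label);
  // Each matched pattern raises the score; capped at 1.
  return { riskScore: Math.min(1, reasons.length * 0.4), reasons };
}

// Platform-side hook: decide what happens before a message reaches a minor.
function deliverToMinor(text: string): "deliver" | "escalate" | "block" {
  const { riskScore } = scoreMessage(text);
  if (riskScore >= 0.8) return "block";     // never shown; logged for review
  if (riskScore >= 0.4) return "escalate";  // routed to human moderation
  return "deliver";
}

console.log(deliverToMinor("great game today!"));                        // deliver
console.log(deliverToMinor("add me on telegram, don't tell your mom")); // block
```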
Meanwhile, the Proudfoot Group emphasizes the governance side: as AI increasingly shapes what children see, responsible oversight becomes non-negotiable. Social platforms, app stores, and AI-driven recommendation engines must adopt enforceable governance frameworks that align organizational behavior, compliance, and transparency with child-safety expectations. Without proper governance, even well-designed tech can drift into harmful patterns.
Technology created the modern risks kids face. Technology must also help fix them.
Congress is now considering several bills that together offer a layered approach to online child safety. While none is a silver bullet, they form a promising foundation—especially when paired with responsible technology design and governance.
The first of these bills, a federal age-verification measure for adult websites, would establish a nationwide standard requiring those sites to verify user ages, addressing widespread youth exposure to pornography. Privacy protections, including data minimization and deletion requirements, help mitigate the risks of verification itself. Most Americans support this approach, and states' success in implementing similar rules shows it can work.
The Kids Online Safety Act (KOSA) addresses harmful design choices by requiring platforms likely to be used by minors to default to their strongest safety settings. It lets parents disable addictive features like autoplay and infinite scroll, and it prohibits advertising illegal products to minors.
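To illustrate what "strongest settings by default" can mean in practice, here is a hypothetical sketch of a safest-by-default configuration for a likely-minor account. The schema and field names are assumptions for illustration; KOSA specifies outcomes, not an implementation.

```typescript
// Hypothetical sketch: KOSA-style defaults for a likely-minor account.
// Field names are illustrative; the bill mandates outcomes, not this schema.

interface AccountSafetySettings {
  autoplay: boolean;
  infiniteScroll: boolean;
  directMessagesFrom: "everyone" | "friends" | "nobody";
  personalizedAds: boolean;
  nighttimeNotifications: boolean;
  profileDiscoverable: boolean;
}

// Strongest available protections, applied before the account is first used.
const MINOR_DEFAULTS: AccountSafetySettings = {
  autoplay: false,            // engagement-maximizing features off by default
  infiniteScroll: false,
  directMessagesFrom: "friends",
  personalizedAds: false,     // no targeted advertising to minors
  nighttimeNotifications: false,
  profileDiscoverable: false,
};

function initialSettings(
  isLikelyMinor: boolean,
  adultDefaults: AccountSafetySettings
): AccountSafetySettings {
  // Default to the safest configuration whenever the account may be a minor's.
  return isLikelyMinor ? MINOR_DEFAULTS : adultDefaults;
}
```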
The criticism that such rules will lead to censorship cannot outweigh the real harms documented in major platforms' own internal records, such as algorithms recommending inappropriate adults to teens, or screen-time limit features rejected because they would reduce ad revenue. Platforms have shown they will not fix these issues voluntarily.
Google and Apple control nearly all smartphone app distribution. The App Store Accountability Act (ASAA) would require these companies to verify ages at account creation, link minors' accounts to a parent's, and obtain parental consent for app downloads or purchases. This creates a verified baseline that makes COPPA enforceable and allows other safeguards, like those in KOSA, to work effectively.
Critics claim privacy-preserving age verification is impossible, but both companies are already doing it in states where similar laws have passed.
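A rough sketch of the flow ASAA envisions, assuming hypothetical names (AgeBand, requestParentApproval) rather than any real Apple or Google API: the store records an age band once at account creation, links minor accounts to a parent, and gates each download on parental approval.

```typescript
// Hypothetical sketch of an ASAA-style download gate at the app store layer.
// Types and function names are illustrative, not Apple's or Google's APIs.

type AgeBand = "under13" | "13to15" | "16to17" | "adult"; // set at account creation

interface StoreAccount {
  id: string;
  ageBand: AgeBand;
  linkedParentId?: string; // required for minor accounts
}

// Stand-in for a push notification or in-app prompt to the linked parent.
async function requestParentApproval(parentId: string, appId: string): Promise<boolean> {
  console.log(`asking parent ${parentId} to approve ${appId}`);
  return true; // a real implementation would await the parent's response
}

async function canDownload(account: StoreAccount, appId: string): Promise<boolean> {
  if (account.ageBand === "adult") return true;
  if (!account.linkedParentId) return false; // minors must be linked to a parent
  // Every download or purchase by a minor requires explicit parental consent.
  return requestParentApproval(account.linkedParentId, appId);
}

// Usage: a 14-year-old's download triggers a consent request to the parent.
canDownload({ id: "a1", ageBand: "13to15", linkedParentId: "p9" }, "app.example")
  .then((ok) => console.log(ok ? "download allowed" : "download blocked"));
```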
These bills, combined with others addressing illegal drugs, targeted advertising, and data brokerage, would form a comprehensive system of legal protections.
But legislation alone is not enough.
Safety-by-design tooling and enforceable governance frameworks must carry the rest of the load; together, they help turn legislation from aspiration into reality.
Parenting harder is not the answer. Self-regulation has already failed. And privacy-preserving age assurance is absolutely possible with the right design choices and governance oversight.
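To see why, consider a minimal sketch of a signed age attestation: a trusted verifier signs a single claim ("over 13") and nothing else, so a site can check the claim without ever receiving a birthdate, name, or ID. The token format below is an illustrative assumption using Node's built-in crypto module; a real deployment would add expiry, audience binding, and replay protection.

```typescript
// Minimal sketch of a privacy-preserving age attestation, assuming a trusted
// verifier. Token format is illustrative; real systems would add expiry,
// audience binding, and replay resistance (or zero-knowledge proofs).
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The verifier's long-lived signing key (its public half is published).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Issue a token asserting ONLY an age band: no birthdate, no name, no ID.
function issueAgeToken(isOver13: boolean): { claim: string; signature: Buffer } {
  const claim = JSON.stringify({ over13: isOver13, issued: Date.now() });
  return { claim, signature: sign(null, Buffer.from(claim), privateKey) };
}

// A website checks the signature and learns one bit, nothing more.
function checkAgeToken(token: { claim: string; signature: Buffer }): boolean {
  const valid = verify(null, Buffer.from(token.claim), publicKey, token.signature);
  return valid && JSON.parse(token.claim).over13 === true;
}

const token = issueAgeToken(true);
console.log(checkAgeToken(token)); // true: age confirmed, identity never shared
```

The design choice that matters here is data minimization: the verifier attests to the narrowest claim possible, and the site stores nothing it could later leak.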
After years of delay, Congress finally appears ready to act—and technology companies must be ready to meet that moment with safer design, stronger governance, and better tools to protect children where it matters most: inside the products they use every day.