The FTC’s updated COPPA rule takes full effect on April 22, 2026. One of the approved methods for verifying parental consent is facial recognition: match a parent’s selfie against a government-issued photo ID. To protect children’s data, the federal government now recommends collecting biometric data from their parents.
Meanwhile, 17 states have passed their own age verification laws. Texas requires government ID. New York requires “age determination technology.” Virginia requires “commercially reasonable efforts.” California’s law was enjoined entirely. Here is why every proposed solution makes the problem worse.
## The app store delegation: liability without control
Four states have enacted laws requiring age verification at the app store level. Apple’s Declared Age Range API and Google’s Play Age Signals API provide four brackets: under 13, 13–15, 16–17, and 18+. The signal is based on self-declaration at account setup. A 14-year-old who created an Apple ID claiming to be 18 transmits “18+” to every app on their device. Three parties in the chain, zero actual verification, distributed liability.
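The trust gap is easy to see in code. The sketch below is a hypothetical model of the bracket signal, not the real Apple or Google API (the actual payloads and names differ): the bracket is derived entirely from the birth date the user typed at account setup, so nothing downstream can distinguish a genuine adult from a 14-year-old who declared one.

```python
from enum import Enum

class AgeBracket(Enum):
    """Illustrative model of the four brackets the app store APIs expose.
    Names are hypothetical; real payloads differ."""
    UNDER_13 = "under13"
    AGE_13_15 = "13-15"
    AGE_16_17 = "16-17"
    OVER_18 = "18+"

def bracket_from_declared_birth_year(declared_year: int,
                                     current_year: int = 2026) -> AgeBracket:
    """The bracket derives from self-declared data: the declared birth
    year is the only input, so the output is only as honest as the user."""
    age = current_year - declared_year
    if age < 13:
        return AgeBracket.UNDER_13
    if age <= 15:
        return AgeBracket.AGE_13_15
    if age <= 17:
        return AgeBracket.AGE_16_17
    return AgeBracket.OVER_18

# A 14-year-old who claimed a 2000 birth year transmits 18+ to every app:
print(bracket_from_declared_birth_year(2000))  # AgeBracket.OVER_18
```

Every app on the device receives that bracket as a compliance input, and every party in the chain can point at another when it turns out to be wrong.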
New York’s S8102A pushes further: device manufacturers must conduct “age assurance” at activation and transmit the result to every app. Self-attestation is explicitly prohibited, so the manufacturer must collect a face scan, a government ID, or a biometric template before the device is usable. The trust chain did not get an anchor. It got another link.
## The verification paradox
So skip the app store signal. Verify age yourself. Now you have a different problem.
Age verification requires collecting exactly the data these laws exist to protect. The COPPA rule amendments finalized in April 2025 classify biometric identifiers as “personal information,” the same category of data the rule restricts collecting from children. The rule simultaneously restricts biometric collection and approves biometric collection as a compliance method.
AI-based facial age estimation avoids document upload but constitutes biometric processing under Illinois BIPA, Texas CUBI, and COPPA itself. The data liability changed categories, not magnitude. And NIST evaluations show higher error rates for darker skin tones, women, and younger age groups. The populations with the highest error rates are precisely the ones these laws are designed to protect. A system tuned to never let a minor through will incorrectly block adults. A system tuned to never block adults will let minors through. No state defines an acceptable error rate.
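The tuning dilemma is structural, not a vendor defect. This toy simulation (assumed Gaussian estimator noise, not any real vendor's error model) shows that with a noisy age estimate, any threshold you pick just picks which population bears the error:

```python
import random

def simulate(threshold: float, noise_sd: float = 4.0, n: int = 20_000):
    """Toy model: the estimator returns true_age + Gaussian error.
    Returns (rate of adults wrongly blocked, rate of minors wrongly passed)."""
    random.seed(0)  # same synthetic population for every threshold
    adult_blocked = minor_passed = adults = minors = 0
    for _ in range(n):
        true_age = random.uniform(10, 30)
        estimate = true_age + random.gauss(0, noise_sd)
        passed = estimate >= threshold
        if true_age >= 18:
            adults += 1
            adult_blocked += not passed
        else:
            minors += 1
            minor_passed += passed
    return adult_blocked / adults, minor_passed / minors

for t in (18, 21, 25):
    blocked, passed = simulate(t)
    print(f"threshold {t}: adults blocked {blocked:.1%}, minors passed {passed:.1%}")
```

Raising the threshold drives the minor pass-through rate toward zero and the adult false-block rate up; lowering it does the reverse. Higher `noise_sd` for the demographic groups NIST flags widens both errors for exactly those groups, and no statute says which point on the curve is compliant.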
Every verification method either collects sensitive data, relies on self-declaration, or introduces disparate impact with no accountability framework. The laws do not wait for a working option to exist.
## The patchwork is architecturally unsolvable
Even if you solve the verification method, you cannot solve the jurisdiction problem.
The Supreme Court upheld Texas’s adult content age verification law in Free Speech Coalition v. Paxton in June 2025, 6-3. Twenty-five states now have constitutional backing for age verification mandates. On March 10, the Eleventh Circuit heard oral arguments challenging Florida and Georgia’s laws, with the panel pressing industry challengers on standing. The judicial space for challenging these laws is narrowing. But the mandates do not agree on what to verify.
| Threshold | States (examples) |
|---|---|
| Under 13 | Federal COPPA (all states) |
| Under 16 | Virginia, Louisiana, Georgia |
| Under 18 | Florida, Tennessee, Nebraska, New York |
The compliance question for a company shipping to all 50 states has two bad answers. Implement the strictest standard everywhere, and adults in Montana upload government ID because Florida passed a law. Geo-fence by state and you need the user’s location, itself a data collection problem under its own set of privacy laws.
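Both bad answers are a few lines of code, which is the point. The sketch below uses an illustrative subset of the table above, with each statute simplified to a single minimum-age number (real statutes differ in scope, not just threshold):

```python
# Illustrative subset of the patchwork; statutes reduced to a minimum age.
STATE_THRESHOLD = {
    "VA": 16, "LA": 16, "GA": 16,
    "FL": 18, "TN": 18, "NE": 18, "NY": 18,
}
FEDERAL_COPPA = 13  # applies in every state

def required_threshold(state: str) -> int:
    """Answer 1: geo-fence. Requires knowing the user's state -- a
    location-data liability of its own. Unknown states fall back to the
    federal baseline."""
    return max(FEDERAL_COPPA, STATE_THRESHOLD.get(state, 0))

def strictest_everywhere() -> int:
    """Answer 2: apply the harshest rule nationally. A Montana adult now
    verifies at 18 because Florida passed a law."""
    return max(STATE_THRESHOLD.values())

print(required_threshold("MT"))  # 13
print(required_threshold("FL"))  # 18
print(strictest_everywhere())    # 18
```

Neither function is wrong; the inputs are. One demands location data the privacy laws penalize collecting, the other exports the strictest state's regime to all fifty.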
No federal preemption exists. KOSA and COPPA 2.0 remain unsigned. Even if a federal law passes this session, the preemption clause is contested between chambers.
## Verification infrastructure as attack surface
Assume you solve both the method and the jurisdiction. You still built a new database, and that database is a target.
Developers do not build age verification. They integrate third-party SDKs, and the provider market is consolidating around a small number of vendors. A breach of a single provider exposes the identity and content consumption patterns of every user who passed through it.
The retention requirements are contradictory. Texas requires deletion of verification data “upon completion,” an undefined term. But age verification metadata (timestamps, outcomes, age brackets) is operational data the platform needs to prove it complied. Retaining it creates data liability. Deleting it eliminates compliance evidence. The platform must choose which legal obligation to violate.
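The least-bad engineering answer is usually to discard the ID scan immediately and keep only a minimal audit record. The sketch below is a hypothetical schema under that assumption (field names and the salted-hash scheme are illustrative, not drawn from any statute or vendor), and it shows why the contradiction survives the design: the record that proves compliance is itself retained, demandable, and breachable data.

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class VerificationRecord:
    """Hypothetical minimal audit record: enough to prove a check
    happened, without retaining the ID image or biometric template."""
    user_ref: str    # salted hash, not the raw account ID
    outcome: str     # "passed" or "failed"
    bracket: str     # e.g. "18+"
    timestamp: float

def record_check(account_id: str, outcome: str, bracket: str,
                 salt: bytes) -> VerificationRecord:
    # The raw ID scan is discarded "upon completion"; only this metadata
    # survives -- which is still data a regulator can demand as
    # compliance evidence and a breach can expose as a usage map.
    user_ref = hashlib.sha256(salt + account_id.encode()).hexdigest()
    return VerificationRecord(user_ref, outcome, bracket, time.time())
```

Hashing the account reference limits what a breach reveals directly, but the timestamps, outcomes, and brackets still map verification events to a platform's user base, and deleting them deletes the proof the check ever ran.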
A breach of the age verification layer exposes government IDs, biometric templates, and a map of who accessed what. The verification layer is the highest-value target in the stack, and the laws are mandating its construction.
## The structural failure
The question is not whether children deserve protection online. They do. The question is whether the protection mechanism should create a national identity-verification infrastructure with no accuracy standards, no federal consistency, no breach liability framework, and no technical architecture that actually works.
The COPPA compliance deadline is weeks away. Utah’s app store law takes effect this spring. The engineering cannot catch up because the laws are asking for something that does not exist: verification without data collection, accuracy without measurement, and privacy through surveillance.
If your team is evaluating age verification compliance, start with the Supreme Court opinion in Free Speech Coalition v. Paxton and the NIST Face Recognition Vendor Test. Primary sources first. Vendor whitepapers second. Blog posts about compliance (including this one) third.