Evan Spiegel has broken ranks with the usual tech industry platitudes about regulation, publishing a pointed critique of Australia's under-16 social media ban that raises uncomfortable questions about whether lawmakers have just made the problem worse. Since the law came into force, Snapchat has shuttered or restricted 415,000 Australian teen accounts—a material commercial hit, certainly, but Spiegel's argument goes beyond protecting his user base. He's suggesting that Australia has built a policy framework that might actually increase risk for the very teens it claims to protect.
The commercial impact is already visible. Snap disclosed a global loss of 3 million daily active users in Q4 2025, with Australia's mandatory teen exodus forming a significant portion. That's revenue walking out the door. But the policy implications stretch far beyond one quarter's user metrics, particularly for an industry that's spent years building scaffolding around youth safety whilst simultaneously monetising teenage attention.
The DII Take
Australia's ban is the first high-stakes test of age-gating at scale, and early evidence suggests it's failing on its own terms. Rather than making teens safer, it's pushing them toward platforms with weaker moderation, less transparent policies, and often foreign jurisdictions that make enforcement nearly impossible. For dating operators watching regulatory trends, this should be a warning: poorly designed age restrictions don't protect young users—they simply move the risk somewhere less visible.
Where Teens Go When You Lock the Front Door
Spiegel's central claim—published in the Financial Times and cross-posted to Snap's own newsroom—is that platform-specific bans create perverse incentives. According to his analysis, Australia's legislation targets established platforms whilst leaving unregulated alternatives entirely accessible. Teens don't stop seeking connection online; they migrate to apps outside the regulatory perimeter.
If a determined 15-year-old can't access Snapchat, they're not sitting in their room reading Tolstoy—they're finding alternatives with fewer safety rails.
The dating industry should recognise this dynamic immediately. Age verification requirements have become table stakes for legitimate dating platforms, but enforcement remains wildly inconsistent across the broader landscape of connection apps. Anonymous chat platforms, foreign-operated social apps, and quasi-dating products operating in regulatory grey zones don't face the same scrutiny.
Snap has invested material resources in youth protection features: content moderation, parental controls, anti-grooming technology. Those investments make commercial sense when you're retaining teenage users and building brand loyalty that carries into adulthood. Remove the teenage cohort entirely, and the business case for sophisticated safety infrastructure weakens. Why spend millions on age-appropriate moderation if you're legally prohibited from serving that demographic anyway?
The Age Verification Trap
Spiegel describes age estimation technology as 'highly imperfect'—a characterisation that's both self-serving and largely accurate. Current age verification methods range from simple self-declaration (trivially easy to circumvent) to biometric estimation using facial analysis (raising privacy concerns and producing error rates that vary by ethnicity and gender) to government ID verification (creating friction that kills conversion rates).
Dating platforms know this tension intimately. Stricter verification improves trust and safety metrics but damages growth. Bumble (BMBL) and Match Group (MTCH) properties have implemented various verification layers, but none claim 100% accuracy. Grindr (GRND) has faced particular scrutiny given safety risks for LGBTQ+ minors, yet even there, enforcement remains imperfect.
Australia's legislation mandates compliance without specifying enforcement mechanisms, effectively outsourcing the policy problem to platforms whilst providing no technical solution. Snap is left implementing blunt tools—removing accounts flagged as potentially underage by algorithmic estimates—knowing full well that some genuine adults will be caught in the net whilst determined teens simply lie more convincingly.
What This Means for Dating Platforms
The dating industry has largely avoided the youth safety debate by maintaining 18+ age restrictions across major platforms. But Australia's regulatory approach signals a broader trend: governments under pressure to 'do something' about online harms are increasingly willing to impose age-gating requirements without necessarily thinking through second-order effects.
If other jurisdictions adopt Australia's model—and early political noises from the UK and Canada suggest interest—dating platforms face a different challenge than Snap. The risk isn't losing a core demographic; it's facing expanded verification requirements that increase compliance costs whilst providing no meaningful improvement in safety outcomes.
Platforms that invest in robust age verification and safety features become easier targets for regulation because they're visible, established, and have compliance infrastructure already in place.
The economic incentives get messy quickly. Fly-by-night operators and foreign platforms outside Western regulatory reach face no such pressure. Just as Australia's ban may push teens toward less regulated social apps, heavy-handed dating platform requirements could fragment the market toward smaller, less accountable operators.
The Evidence Problem
Spiegel's claim that 'most teens benefit from online connections' rather than suffering harm is contestable, and he's smart enough to acknowledge the research remains inconclusive. Longitudinal studies show mixed results depending on usage patterns, platform features, and individual vulnerability factors. But his broader point holds: Australia's policy appears driven more by parental anxiety and voter sentiment than robust evidence about what actually reduces harm.
Research repeatedly finds that teens' digital literacy often exceeds that of adults, particularly in navigating online spaces and identifying risks. That doesn't mean teens don't face dangers—they absolutely do, particularly around exploitation, grooming, and mental health impacts—but blunt age bans don't address those risks in any meaningful way.
For dating operators, this evidence gap matters. If regulatory approaches continue prioritising visible action over measurable outcomes, compliance costs rise whilst actual safety improvements remain uncertain. The industry's challenge is building credible evidence around what interventions actually work—not just what generates favourable press coverage for politicians.
What to Watch
Australia's ban is six months old. The real test comes in 12–18 months when independent research can assess whether teen online safety outcomes have actually improved, or whether the policy simply moved risk into less visible corners of the internet.
Other jurisdictions are watching closely. If Australia's approach demonstrably reduces harm, expect copycats. If it backfires—pushing teens toward riskier platforms whilst imposing significant compliance costs on safer alternatives—the regulatory conversation may shift toward Spiegel's preferred approach: device-level controls and platform-agnostic safety standards rather than app-specific bans.
For dating platforms, the strategic question is whether to engage proactively in shaping age verification standards or wait for mandates to arrive. Match Group's scale gives it lobbying power; smaller operators may simply face compliance costs they can't absorb. The industry has an opportunity to advocate for evidence-based policy before the next moral panic drives the next round of legislation. Whether it seizes that opportunity, or waits to react, will determine how much control it retains over its own regulatory environment.
Poorly designed age restrictions don't eliminate risk—they displace it toward less regulated platforms with weaker safety infrastructure and less accountability
Dating platforms should engage proactively in shaping evidence-based age verification standards before mandate-driven compliance costs arrive
Watch for independent research in 12–18 months assessing whether Australia's ban improved teen safety outcomes or simply fragmented the market toward riskier alternatives