
Seeking's AI Verification: Security Theatre or Retention Strategy?
- Seeking.com has introduced mandatory AI-powered facial verification for new users only, permanently exempting all existing members from biometric checks
- Romance fraud losses in the UK reached £92.6M in 2023, with average losses of £9,083 per victim—up 28% year-on-year
- The platform claims over one-third of active members have been approved through the new system, though this figure likely represents only new sign-ups rather than voluntary verification by existing users
- Competitors Match Group and Bumble apply photo verification universally when implemented, creating no legacy exemptions for established accounts
Seeking.com has introduced a contradiction that perfectly encapsulates the tension between platform security and user retention. The sugar dating platform now requires all new users to complete AI-powered facial verification designed to combat sophisticated fake profiles, whilst simultaneously granting permanent exemption to its existing member base. The result is a two-tier system where the accounts most likely to harbour established scammers face no biometric scrutiny whatsoever.
The platform announced its 'Selfie Liveness Check' requirement this month, compelling new members to submit real-time selfies that AI analyses to confirm they're live persons matching their profile photos. According to the company, existing users need no such verification because these 'long-standing members' have 'already demonstrated their authenticity over time'. That logic collapses under minimal scrutiny, particularly on a platform where financial fraud incentives run exceptionally high.
Account tenure proves nothing about authenticity. Sophisticated romance scammers don't create accounts and immediately extract money; they build credibility over weeks or months. By exempting established accounts, Seeking has created what amounts to a roadmap for bad actors: establish presence before verification becomes mandatory, then operate indefinitely without biometric checks.
Retention Economics Dressed as Security Theatre
This is fundamentally about churn prevention, not fraud prevention. Seeking understands that forcing existing users through verification would trigger immediate departures, and on platforms where user base equals currency, that cost is unacceptable. But the company has now deployed a fraud prevention tool that explicitly doesn't prevent fraud from the accounts most likely to be fraudulent.
If the technology works, exempting your existing base renders it largely pointless. If it doesn't work, you've just added friction for new users whilst solving nothing.
Seeking claims over one-third of active members have been approved through the new system. That figure requires context the company hasn't provided. Are existing users voluntarily submitting to verification, or does that 33% simply represent new sign-ups since rollout? The exemption for 'long-standing members' strongly suggests the latter.
If the platform were seeing meaningful voluntary adoption amongst established users, it would trumpet that fact explicitly. Instead, the one-third figure likely reflects natural account turnover—new members subjected to mandatory checks whilst the pre-existing base remains untouched. If that interpretation holds, roughly a third of Seeking's active user base has joined since feature launch, signalling either aggressive growth or unsustainably high churn for a mature platform.
The AI-Versus-AI Escalation Nobody Can Win
Seeking positions the feature as a response to AI-generated images that are 'becoming easier to produce'. The company is correct: modern generative AI creates photorealistic faces that fool humans and many automated systems. What Seeking hasn't addressed is why AI-powered verification would maintain any meaningful advantage in that arms race.
Detection technology and generation technology advance in lockstep. Liveness checks—which verify that selfies show live persons rather than static images or video replays—are currently effective against low-effort fraud. They struggle against deepfake video, which can already mimic live facial movements in real time. As generative AI improves, the window in which liveness detection maintains superiority narrows.
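Mechanically, what separates a liveness check from a simple photo match is a challenge the other side cannot pre-record. A minimal server-side sketch of that flow (gesture names, nonce format, and timeout are all invented for illustration, not Seeking's actual implementation):

```python
import secrets
import time

# Sketch of the server side of a challenge-response liveness check: the
# client must perform a randomly chosen gesture on camera within a short
# window. Static photos and pre-recorded replays fail because the gesture
# isn't known in advance; real-time deepfake video, as noted above, is not
# defeated by this alone. All names and timings here are illustrative.
GESTURES = ["turn_head_left", "turn_head_right", "blink_twice", "smile"]
CHALLENGE_TTL_SECONDS = 15

_pending = {}  # nonce -> (expected_gesture, expiry_timestamp)

def issue_challenge():
    """Create a single-use challenge for the client to perform on camera."""
    nonce = secrets.token_hex(16)
    gesture = secrets.choice(GESTURES)
    _pending[nonce] = (gesture, time.time() + CHALLENGE_TTL_SECONDS)
    return nonce, gesture

def verify_response(nonce, detected_gesture):
    """Accept only a fresh, matching, never-before-used response."""
    entry = _pending.pop(nonce, None)  # pop makes each nonce single-use
    if entry is None:
        return False  # unknown or already-used (replayed) nonce
    expected, expiry = entry
    return time.time() <= expiry and detected_gesture == expected
```

The single-use nonce is what stops a scammer recording one successful session and replaying it; a live deepfake that can perform the gesture in real time, which is exactly the escalation described above, defeats this design.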
The platforms best positioned in this race aren't those deploying AI verification first. They're the ones with comprehensive identity infrastructure: document verification, payment method validation, and behavioural analysis that flags accounts exhibiting scammer patterns regardless of profile photo provenance. Seeking's announcement made no reference to these layers.
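To make that layering concrete, here is a toy illustration of how such signals might combine into a single risk score. The signal names, weights, and clamping are invented for this sketch; a real system would calibrate them against labelled fraud outcomes rather than hand-pick them:

```python
from dataclasses import dataclass

# Illustrative only: a toy multi-signal risk score of the kind a layered
# identity stack implies. Behavioural flags add risk; each completed
# identity layer reduces it. Every weight below is an assumption.
@dataclass
class AccountSignals:
    selfie_verified: bool     # passed a liveness check
    document_verified: bool   # government ID matched the profile
    payment_verified: bool    # chargeable payment method on file
    rapid_messaging: bool     # mass-contact pattern typical of scammers
    off_platform_push: bool   # moves chats to untraceable channels early

def risk_score(s: AccountSignals) -> float:
    """Higher = riskier, clamped to [0, 1]."""
    score = 0.5
    score -= 0.15 if s.selfie_verified else 0.0
    score -= 0.20 if s.document_verified else 0.0
    score -= 0.10 if s.payment_verified else 0.0
    score += 0.25 if s.rapid_messaging else 0.0
    score += 0.30 if s.off_platform_push else 0.0
    return max(0.0, min(1.0, score))
```

On these invented weights, a fully verified account with no behavioural flags scores 0.05, while an unverified account exhibiting both scammer patterns saturates at 1.0, the tier Seeking's exemption leaves unexamined.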
Why Sugar Dating Makes Selective Verification Especially Risky
The economics of Seeking's model amplify the consequences of partial verification. Unlike mainstream dating platforms where scammers typically seek small-scale gift card fraud, sugar dating attracts bad actors pursuing five- and six-figure cons. The promise of financial arrangements creates cover for requests that would trigger immediate suspicion on Hinge or Bumble.
Romance fraud losses in the UK reached £92.6M in 2023, with average losses of £9,083 per victim. Sugar dating platforms—where financial discussions are expected rather than suspicious—represent particularly fertile ground.
The National Fraud Intelligence Bureau has flagged 'romance fraud on dating sites' as one of the fastest-growing fraud categories, with losses up 28% year-on-year. Against that backdrop, exempting existing accounts from verification isn't just operationally inconsistent. For a platform where financial harm risk sits structurally higher than mass-market competitors, it's a choice to prioritise member retention over member protection.
Industry Peers Take Different Approaches
Match Group has faced its own verification challenges across Tinder, Hinge, and Match.com, but its photo verification—rolled out on Tinder from 2020—applies the same rules to every account that opts in, with no legacy carve-out. Bumble made its photo verification mandatory for all users in 2021, with no legacy exemptions. Both approaches have drawbacks, primarily the friction they create.
Neither creates an explicit loophole for the exact accounts most likely to need verification. The contrast reveals Seeking's particular vulnerability: operating in a higher-risk segment whilst deploying lower-integrity safeguards than mainstream competitors.
What Happens When the Exemption Becomes the Exploit
The test for Seeking's approach comes in twelve months, when enough time has passed to assess whether verified-only cohorts show measurably lower fraud rates than the mixed ecosystem the platform has created. If fraud concentrates among unverified legacy accounts—and there's little reason to think it won't—the company will face an uncomfortable choice: force retroactive verification and accept the churn, or continue operating a platform where new users are vetted and established accounts operate on trust.
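That assessment is a standard cohort comparison. A sketch with entirely invented counts, using a two-proportion z-test to ask whether the legacy cohort's fraud-report rate is meaningfully higher than the verified cohort's:

```python
import math

def two_proportion_z(frauds_a, users_a, frauds_b, users_b):
    """z-statistic for the difference between two fraud-report rates.
    Positive means cohort B (here, the unverified legacy tier) is worse."""
    p_a, p_b = frauds_a / users_a, frauds_b / users_b
    p_pool = (frauds_a + frauds_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se

# Entirely hypothetical numbers: 40 confirmed fraud reports among 100,000
# verified accounts versus 180 among 200,000 unverified legacy accounts.
z = two_proportion_z(40, 100_000, 180, 200_000)
significant = z > 1.96  # ~95% confidence the legacy rate really is higher
```

On these made-up counts the legacy rate (0.09%) is more than double the verified rate (0.04%) and the z-statistic clears the conventional 1.96 threshold, the sort of result that would force the retroactive-verification choice described above.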
The broader question is whether any liveness check—applied universally or selectively—can keep pace with AI generation tools that improve by the month. Seeking may have bought itself six to eighteen months of effective fraud detection. After that, the arms race moves to the next phase, and platforms still relying on selfie checks will be back where they started.
They'll be trying to separate real humans from increasingly convincing fakes, without the verification infrastructure that might actually make a difference. By then, the decision to exempt existing users won't look like pragmatic product management. It will look like the moment Seeking chose short-term retention over building the identity systems required for long-term platform integrity.
- Watch whether fraud incidents concentrate among unverified legacy accounts over the next 12 months—this will determine whether Seeking must eventually force retroactive verification despite the churn risk
- AI liveness checks provide only temporary advantage in the detection-versus-generation arms race; platforms without comprehensive identity infrastructure including document verification and behavioural analysis will find themselves back at square one within 18 months
- Selective verification creates a playbook for sophisticated fraudsters: establish accounts before mandatory checks arrive, then operate indefinitely in the unverified legacy tier where financial harm potential is highest
