
Seeking's CTO Admitted Dating Platforms Are 'Easy Targets' for Fraud. That Is Transparency and a Liability Shield Simultaneously.
Last updated: March 16, 2026
- UK romance fraud losses totalled £92.7M in the year to March 2024, with 8,141 reported cases according to Action Fraud
- Reported romance fraud has risen 56% since 2019, even as platforms deployed AI detection and verification systems
- Seeking CTO Yoon Chang told Cyber Defence Magazine that dating platforms are 'easy targets' due to emotionally open users and trivially simple identity fabrication
- The Online Safety Act imposes a duty of care on platforms to prevent fraud, with Ofcom's final codes of practice due in 2025
The dating industry has a fraud problem it can't moderate away. Despite millions spent on AI detection, photo verification, and content moderation teams, romance scams continue to surge—and a senior executive at a major platform has finally said what everyone in trust and safety already knows. The fundamental architecture of dating apps remains, by design, a fraudster's paradise.
Yoon Chang, CTO of Seeking, told industry publication Cyber Defence Magazine this month that dating platforms are 'easy targets' for romance scammers because they combine two structural vulnerabilities: users who arrive emotionally open and systems that make identity fabrication trivially simple. The admission is remarkable not because it's untrue—every trust and safety professional in the sector knows it—but because a senior executive at a major platform has said it out loud.
Seeking operates what it calls 'sugar dating', connecting financially successful individuals with younger partners seeking 'mutually beneficial relationships'. The model puts the company under particular regulatory scrutiny. It also means Chang's comments land differently than they would from, say, Hinge's product lead. When Seeking's CTO describes the industrywide vulnerability to fraud in such direct terms, it raises an uncomfortable question: is this transparency in service of reform, or liability management dressed as candour?
Chang has articulated what the industry has been reluctant to admit: dating platforms are structurally vulnerable to fraud, and no amount of AI moderation or selfie verification fully addresses the core problem. The question is whether public acknowledgement translates to meaningful architectural change—stricter identity verification, financial transaction monitoring, behavioural AI that actually works—or whether this is simply a CTO getting ahead of the next regulatory cycle by admitting the obvious.
Seeking's niche makes it an unlikely industry spokesperson on trust and safety. That Chang is speaking at all suggests the pressure is real.
The fraud problem, quantified
UK Finance reported that Britons lost £35.8M to romance fraud in 2023, down 13% from the prior year but still representing 6,362 reported cases. That's the banking industry's tally. Action Fraud's figures tell a grimmer story: 8,141 reports of romance fraud in the year to March 2024, with total losses of £92.7M. The gap between the two datasets reflects underreporting, the involvement of cryptocurrency transfers that bypass traditional banking rails, and victims who never come forward at all.
Dating platforms don't disclose their own fraud losses or blocked attempts, making independent analysis impossible. Match Group references 'trust and safety investments' in earnings calls but doesn't break out fraud-specific costs. Bumble lists moderation headcount in regulatory filings but not fraud incident rates. Grindr mentions AI-based detection systems without quantifying their efficacy.
What's clear is that reported romance fraud has risen 56% since 2019, according to Action Fraud data, even as platforms have rolled out selfie verification, AI detection models, and in-app warning prompts. Either the fraud problem is accelerating faster than mitigation efforts, or the mitigation efforts aren't addressing the root cause. Chang's comments suggest the latter.
Why dating apps are different
Chang's explanation centres on two structural realities. First, users arrive in a heightened emotional state, explicitly seeking connection and therefore more willing to trust. Second, creating a convincing fake profile requires little more than stolen photos and a coherent narrative. Both are design features, not bugs.
Mainstream social platforms benefit from network effects that make fake accounts harder to sustain. A fraudulent Facebook profile lacks friend connections, shared history, mutual contacts. A fraudulent Instagram account shows no engagement pattern. Dating apps, by contrast, are built for strangers to meet strangers. Every user is a blank slate.
The product design that makes dating apps work for legitimate users is the same design that makes them ideal for fraud.
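The contrast between social-graph platforms and dating apps can be made concrete with a toy heuristic. The sketch below is purely illustrative: the fields, weights, and thresholds are all hypothetical, and no platform is known to use this exact scoring. The point it demonstrates is structural: the signals that flag a fake Facebook or Instagram account (mutual contacts, account age, engagement history) are absent by design for every new dating-app user, legitimate or not.

```python
# Illustrative sketch only: hypothetical signals and weights, not any
# platform's actual fraud model.
from dataclasses import dataclass

@dataclass
class Profile:
    mutual_contacts: int     # shared connections with existing users
    account_age_days: int    # time since the account was created
    engagement_events: int   # likes, comments, posts over the account's life

def graph_risk_score(p: Profile) -> float:
    """Crude heuristic: fewer social-graph signals -> higher risk (0..1)."""
    signal = (
        min(p.mutual_contacts, 50) / 50
        + min(p.account_age_days, 365) / 365
        + min(p.engagement_events, 100) / 100
    )
    return round(1 - signal / 3, 2)

# On a social network, a fraudulent account with no connections, no
# history, and no engagement stands out as high-risk.
fake_social = graph_risk_score(Profile(0, 3, 1))

# On a dating app, a brand-new account with zero mutual contacts is the
# norm, so the same signals score legitimate newcomers as high-risk too,
# and the heuristic cannot separate fraud from ordinary use.
new_dating_user = graph_risk_score(Profile(0, 1, 0))
```

Both profiles score near the maximum, which is exactly the problem: a detector built on social-graph signals has nothing to discriminate on when every user starts as a stranger.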
Photo verification helps, but only at the margins. Bumble's selfie system confirms that the person creating the account matches the photos—at that moment. It doesn't confirm identity, employment, location, or intent. It certainly doesn't prevent account takeovers, which the National Cyber Security Centre flagged last year as a growing vector for romance fraud.
Financial verification is harder still. Some platforms, including The League and Luxy, tie profiles to LinkedIn or bank accounts. But that creates friction most users won't tolerate, and it doesn't stop a fraudster with a stolen identity or a legitimate profile used for illegitimate ends. The match rate would crater.
What Seeking's admission signals
Seeking's position in the market makes Chang's comments particularly loaded. The platform has faced persistent criticism over user safety, including a 2021 investigation by The Sunday Times that found hundreds of profiles from sex workers and an alleged lack of age verification. Seeking has since introduced mandatory photo verification and says it uses AI to detect suspicious behaviour.
The Online Safety Act, which came into force in stages through 2024, imposes a duty of care on platforms to prevent harm, including fraud. Ofcom's draft codes of practice, published in November, propose that dating services conduct risk assessments specifically for romance fraud and implement 'proportionate measures' to mitigate it. Those measures could include identity verification, transaction monitoring, or limits on financial requests made within apps.
For Seeking, which operates in a category already under regulatory scrutiny, public acknowledgement of systemic vulnerabilities may serve a dual purpose: demonstrate awareness of the problem (useful in a regulatory defence) and lower expectations about what technology can achieve (useful in managing liability).
What actually changes
The dating industry's fraud problem won't be solved by better moderation alone. It requires structural change: stronger identity verification, integration with financial fraud detection systems, and product design that makes prolonged deception difficult. That means friction. It means slower onboarding. It might mean fewer users.
Operators face a brutal trade-off. Tighten verification and risk user attrition in an already challenging acquisition environment. Do nothing and invite regulatory mandates that will impose those costs anyway, along with reputational damage. Chang's comments suggest Seeking understands this. Whether the company—or its competitors—will act before they're forced to remains the more pressing question.
Ofcom's codes of practice are due for final publication in 2025. When they arrive, the days of acknowledging the problem without fixing it will be over.
- Current fraud mitigation focuses on symptoms (AI moderation, photo verification) rather than structural causes—product architecture that enables anonymous interaction at scale
- Ofcom's 2025 codes of practice will force dating platforms to choose between voluntary friction (stronger verification) and mandated compliance with reputational costs
- Watch whether major operators implement meaningful identity verification before regulation forces it—or whether Chang's admission becomes the industry template for managing liability without changing design
