UK Dating App Trust Collapsed 20 Points in a Year. AI Fakes Are Only Part of the Story.
    Financial & Investor

    • Trust in UK dating apps collapsed 20 percentage points in a single year, with 84% of users now saying AI-generated content has made it harder to trust matches
    • Dating apps share the highest fraud rate across all digital sectors at 6.35%, with romance scams costing UK victims over £100 million annually
    • 67% of users now take verification into their own hands through reverse-image searches or demanding video calls before meeting
    • 36% of UK users have turned to AI companions instead of human-led dating platforms altogether

    Trust in UK dating apps has collapsed by 20 percentage points in a single year, according to new survey data that exposes a self-inflicted crisis now threatening the industry's core proposition. Research commissioned by identity verification firm Sumsub shows 84% of UK dating app users now say AI-generated content has made it harder to trust matches or date successfully, up from 64% last year. The shift coincides precisely with mass consumer access to ChatGPT, Midjourney, and other generative tools that transformed sophisticated catfishing from the preserve of organised criminals into a commodity skill anyone can deploy in minutes.

    The implications cut deeper than fraud metrics. Dating platforms sell the promise of human connection, and that promise depends entirely on the assumption that profiles represent real people showing their authentic selves. When 61% of users report being deceived by fake profiles—either personally or through someone they know—that foundational assumption evaporates.

    The DII Take
    This isn't a trust crisis. It's a market structure crisis. The 20-point trust collapse in twelve months suggests dating apps have lost control of their product in a way that quarterly feature releases and trust & safety blog posts won't fix.

    What's remarkable isn't that AI is being used to deceive—it's that users are simultaneously deploying AI themselves to compete whilst abandoning platforms because everyone else is doing the same. That's not a bug in the system. That's the system eating itself.

    Person using smartphone dating app

    The arms race nobody wins

    The survey data, based on 2,000 UK users, reveals a darkly comic paradox at the heart of dating app usage in 2025. Thirty-two per cent of respondents now use AI tools like ChatGPT for message coaching or profile writing. Meanwhile, 36% have turned to AI companions instead of human-led dating platforms altogether.

    Users are both perpetrators and victims of the same phenomenon. They're enhancing their own profiles with generative tools whilst simultaneously losing faith in everyone else's authenticity. The result is an arms race where the weapons make the entire battlefield uninhabitable.

    Consider the moderation challenge this creates for platforms. Attitudes towards AI-enhanced profiles are sharply divided: 54% are open to or already use AI for editing or creating profile images, and 60% believe some alterations should be allowed. But 42% maintain zero tolerance for any changes whatsoever. Operators face an impossible policy question with no demographic consensus—where precisely does acceptable self-presentation end and deceptive content begin?

    Match Group (MTCH), Bumble (BMBL), and their peers have historically relied on user reporting and automated detection to manage fake accounts. That model assumed bad actors were a minority. When a third of users are actively employing AI to optimise their presence and more than half are open to profile enhancement, the line between legitimate user and policy violator becomes meaningless.

    Dating app profile on mobile device

    Fraud rates that predate the current crisis

    Dating apps now share the highest fraud rate across all digital sectors at 6.35%, level with online media, according to Sumsub's Identity Fraud Report 2025–2026, which analysed millions of verification attempts. Romance scams cost UK victims over £100 million annually—a figure that notably predates widespread access to consumer AI tools.

    That timeline matters. The fraud infrastructure was already operational before ChatGPT launched. Generative AI didn't create the romance scam economy; it industrialised it. What previously required some degree of linguistic skill or Photoshop competence now requires only a prompt.

    The fraud rate data warrants qualification—Sumsub is an identity verification vendor with commercial interest in platforms adopting stricter controls, and the methodology for calculating that 6.35% figure deserves scrutiny. The year-on-year comparison citing "2025" as the baseline appears to be an error, likely referring to 2024. But even accounting for vendor bias, the directional finding aligns with broader industry data and platform disclosures about rising sophisticated fraud attempts.

    Platforms have known about this problem for years. The £100 million annual victim cost wasn't a secret. The current crisis represents a failure not of prediction but of prioritisation.

    User-led verification as a vote of no confidence

    Perhaps the most damning finding: 67% of users now take verification into their own hands, reverse-image searching photos or demanding video calls before meeting. That's not users collaborating with platform safety systems. That's users routing around platforms they no longer trust to perform basic authentication.

    When two-thirds of your user base treats your matching system as fundamentally unreliable and builds their own verification layer on top, you no longer operate a trusted marketplace. You operate infrastructure that people tolerate because switching costs are high and network effects trap them.

    Eighty-one per cent of respondents believe platforms should share responsibility for fraud and malicious content alongside authorities. That view doesn't yet have regulatory teeth in the UK—the Online Safety Act focuses primarily on user-generated content harms rather than impersonation and financial fraud. But sentiment precedes legislation, and operators would be foolish to assume the current light-touch regime persists if victim numbers continue climbing.

    The regulatory trajectory in financial services offers a preview. Payment platforms that initially argued fraud was a user education problem now face mandatory reimbursement schemes and liability frameworks. Dating apps could face similar pressure if the £100 million annual fraud cost continues rising whilst platforms post healthy EBITDA margins.

    Video call verification on smartphone

    What verification actually costs

    Mandatory identity verification—the solution Sumsub would naturally advocate—solves the impersonation problem but creates new friction costs. Bumble disclosed in its Q3 2024 earnings call that its optional photo verification feature sees adoption rates below 50% in most markets. Mandatory verification would almost certainly suppress new user conversion, particularly among privacy-conscious demographics and in markets where dating app usage carries social stigma.

    That's the trade-off operators face: maintain low-friction onboarding and watch trust collapse, or implement verification gates and watch conversion rates drop. Neither option solves the deeper problem, which is that users are competing with each other using the same AI tools that erode their collective trust in the platform.

    The crisis extends beyond fraud into existential questions about what dating apps are actually for. If 36% of users have migrated to AI companions, they've not just left specific platforms—they've opted out of human connection entirely. That figure should terrify every operator. The competition isn't Hinge versus Tinder. It's human relationships versus frictionless artificial substitutes that provide emotional returns without rejection risk.

    Seventy-three per cent of respondents believe AI tools risk normalising objectification, particularly of women. That concern links directly to the gamification that dating apps have spent a decade perfecting. Swipe mechanics already reduced partner selection to rapid visual sorting. AI-enhanced profiles take that logic to its conclusion—optimised images and language designed to maximise engagement metrics rather than represent actual humans.

    The industry built this trap for itself. Platforms optimised for engagement and monetisation created incentives for users to game the system. Generative AI simply gave users better tools to do what the product logic already encouraged. The 20-point trust collapse is the bill coming due.

    • Dating platforms face an impossible choice between maintaining low-friction onboarding or implementing mandatory verification that will suppress conversion—neither solves the fundamental problem of users competing with AI-enhanced profiles
    • The real competition isn't between dating apps but between human relationships and AI companions that offer emotional returns without rejection risk, with more than a third of users already making that switch
    • Watch for regulatory intervention similar to financial services: if romance scam costs continue rising whilst platforms maintain healthy margins, mandatory liability frameworks and reimbursement schemes become inevitable
