FBI Warns: Dating Apps' Trust Systems Now Scammers' Playground
    • Romance and confidence fraud resulted in $652M in reported losses to the FBI in 2023
    • Match Group introduced photo verification across Tinder in 2020, with Bumble following the same year
    • Verification scams operate on a transactional timeline—one conversation, one link, one data harvest—allowing fraudsters to target dozens of matches simultaneously
    • Fraudulent sites collect government-issued identification, credit card details, and banking information while enrolling victims in recurring subscription charges

    Dating app users are being scammed through fake verification schemes that exploit the trust-building rituals legitimate platforms have spent years normalising, according to a new warning from the Federal Bureau of Investigation. The scams bypass traditional romance fraud entirely, harvesting personal and financial data within a single conversation by mimicking the photo verification and identity checks that Tinder, Bumble, and Hinge have trained millions of users to complete without question. The FBI's Internet Crime Complaint Center disclosed that fraudsters posing as potential matches direct victims to external websites masquerading as verification services.

    Person using dating app on smartphone

    These sites collect government-issued identification, credit card details, and banking information under the pretence of confirming identity or safety status. In many cases, according to the Bureau, the sites then enrol victims in recurring subscription charges whilst selling their personal data to third parties. What makes the scheme effective is its speed.

    Unlike romance scams that extract large sums through weeks or months of emotional manipulation, verification fraud operates on a transactional timeline. One conversation. One link. One data harvest. The scalability is obvious: a single scammer can target dozens of matches simultaneously rather than investing time in building false relationships.


    The DII Take
    The dating industry built verification systems to solve a trust problem and inadvertently created a new one.

    Operators spent years persuading sceptical users that handing over biometric data and identity documents was safe, normal, and necessary—and now scammers are reaping the reward of that user education. The challenge isn't technical; it's behavioural. Platforms have conditioned their users to comply with verification requests, and that conditioning doesn't distinguish between a Tinder prompt and a phishing site.

    This is a second-order consequence of the industry's trust-building efforts, and it will require more than in-app warnings to contain.

    From safety feature to security vulnerability

    Match Group introduced photo verification across Tinder in 2020, requiring users to submit real-time selfies that match their profile images. Bumble rolled out similar technology the same year, branding it as a core safety feature. Hinge followed.

    The Match Group portfolio now offers background checks through Garbo, a service that checks criminal records and histories of violence. These initiatives were intended to combat catfishing, fake profiles, and safety concerns that have dogged the industry since its inception. The unintended consequence is that users no longer question verification requests.

    Dating app profile verification screen

    They've been trained to expect them. When a match sends a link to "verify your profile" or "prove you're real," the behaviour feels consistent with what platforms have already asked them to do. The friction that might once have existed—a moment of scepticism, a question about why verification is happening off-platform—has been eroded by repetition.

    According to the FBI, the fraudulent sites often mirror the design language of legitimate dating apps, complete with logos, safety messaging, and professional layouts. Some claim affiliation with recognised platforms. Others present themselves as third-party services that "work with" major apps to verify users across multiple platforms—a plausible claim given the industry's fragmented approach to identity verification and the absence of a universal standard.

    The data collected goes beyond names and birthdates. The FBI noted that victims are often asked to submit photographs of government-issued identification, which provides enough information for identity theft. Credit card details entered for nominal "verification fees"—sometimes as low as $1—are used to set up recurring charges or sold on. In some cases, victims only discover the fraud when they notice unexplained subscription fees or find their data compromised in unrelated breaches.

    The scalability problem operators can't ignore

    Romance scams remain lucrative. The FBI's Internet Crime Complaint Center received reports totalling $652M in losses attributed to romance and confidence fraud in 2023. But those scams require time, emotional labour, and a degree of improvisation.

    Verification scams eliminate those constraints. A single fraudster can operate at scale, targeting hundreds of matches with identical scripted messages and links. That efficiency creates a volume problem for platforms.

    Trust and safety teams are already stretched managing content moderation, fake profiles, and harassment reports. Verification scams add another layer: policing conversations for external links, educating users about off-platform threats, and potentially monitoring for patterns in message content that suggest phishing attempts.

    The FBI's guidance to users is predictable: never click external links, never provide financial information to verify identity, never submit identification documents outside the official app. But user education has limited efficacy when it contradicts learned behaviour. Platforms have spent years persuading users that verification is safe and necessary.

    Reversing that instinct—teaching users to suddenly become sceptical—requires more than a pop-up warning or a help centre article. Some operators will likely respond by restricting external links in messages, a measure already implemented to varying degrees across major platforms. Bumble, for instance, blurs external URLs and requires users to manually reveal them.

    Cybersecurity concept with lock and digital interface

    But scammers adapt. They move to coded language, use URL shorteners, or direct victims to search for specific site names rather than clicking links directly. The more aggressive response would involve real-time scanning of message content for phishing patterns, similar to the machine learning models used to detect spam and harassment.
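    The kind of rule-based screening described above can be sketched in a few lines. The domain list, phrases, and function below are hypothetical illustrations, not any platform's actual detection rules, and a production system would layer machine learning on top of heuristics like these:

```python
import re

# Illustrative (not exhaustive) list of link-shortener domains scammers use
# to disguise a phishing destination.
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}

# Illustrative phrases that mimic legitimate platform verification language.
VERIFICATION_PHRASES = (
    "verify your profile",
    "prove you're real",
    "safety check",
    "verification fee",
)

# Matches bare or http(s)-prefixed domains, with an optional path.
URL_PATTERN = re.compile(r"(?:https?://)?([a-z0-9-]+(?:\.[a-z0-9-]+)*\.[a-z]{2,})(/\S*)?")

def screen_message(text: str) -> dict:
    """Return simple risk signals for a single chat message."""
    lowered = text.lower()
    domains = [m.group(1) for m in URL_PATTERN.finditer(lowered)]
    return {
        "has_external_link": bool(domains),
        "uses_shortener": any(d in SHORTENER_DOMAINS for d in domains),
        "verification_language": any(p in lowered for p in VERIFICATION_PHRASES),
    }
```

    A message like "Before we chat, verify your profile here: bit.ly/safe-check" trips all three signals, while ordinary conversation trips none; a platform could use such signals to blur the link, warn the recipient, or queue the conversation for review.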

    That introduces new complications around privacy and encryption, particularly for platforms that have positioned themselves as protecting user conversations from surveillance. It also requires significant investment in moderation infrastructure at a time when Match Group (MTCH) and Bumble (BMBL) have both focused on cost discipline following the sector's valuation collapse.

    What comes next for platform liability

    The legal landscape around platform responsibility for user-to-user fraud remains murky, but regulatory momentum is shifting. The UK Online Safety Act places obligations on platforms to protect users from fraudulent content, though the definition of "content" and the extent of liability for private messages are still being tested. The EU Digital Services Act similarly imposes risk-management obligations on very large online platforms, including measures to address scams.

    If verification scams proliferate—and the FBI's warning suggests they already are—platforms may face pressure from regulators to take more direct action. That could mean mandatory warnings before users can access external links, restrictions on message content that references verification, or even liability for failing to prevent off-platform fraud that originates on their services. The irony is acute.

    Platforms introduced verification to protect users and insulate themselves from criticism over safety failures. Those same verification systems are now being weaponised against users, and operators may find themselves liable for fraud that exploits the trust they worked so hard to build. The prevalence of scammers and fraud-related challenges has put the security and trust of dating app users at risk, while emerging threats like deepfakes add new layers of complexity to an already fraught security landscape.

    The industry's trust paradox is no longer theoretical. It's measurable in FBI complaints and fraudulent subscription charges.

    • Dating platforms face a behavioural security challenge: years of user education around verification have created reflexive compliance that scammers exploit, requiring solutions beyond technical fixes
    • Regulatory pressure from the UK Online Safety Act and EU Digital Services Act may force platforms to accept liability for off-platform fraud originating from their services
    • Watch for platforms to implement aggressive link restrictions and real-time message scanning, creating new tensions between fraud prevention and user privacy commitments
