
FBI Warns: Dating Apps' Trust Systems Now Scammers' Playground
- Romance and confidence fraud resulted in $652M in reported losses to the FBI in 2023
- Match Group introduced photo verification across Tinder in 2020, with Bumble following the same year
- Verification scams operate on a transactional timeline—one conversation, one link, one data harvest—allowing fraudsters to target dozens of matches simultaneously
- Fraudulent sites collect government-issued identification, credit card details, and banking information while enrolling victims in recurring subscription charges
Dating app users are being scammed through fake verification schemes that exploit the trust-building rituals legitimate platforms have spent years normalising, according to a new warning from the Federal Bureau of Investigation. The scams bypass traditional romance fraud entirely, harvesting personal and financial data within a single conversation by mimicking the photo verification and identity checks that Tinder, Bumble, and Hinge have trained millions of users to complete without question. The FBI's Internet Crime Complaint Center (IC3) disclosed that fraudsters posing as potential matches direct victims to external websites masquerading as verification services.
These sites collect government-issued identification, credit card details, and banking information under the pretence of confirming identity or safety status. In many cases, according to the Bureau, the sites then enrol victims in recurring subscription charges whilst selling their personal data to third parties. What makes the scheme effective is its speed.
Unlike romance scams that extract large sums through weeks or months of emotional manipulation, verification fraud operates on a transactional timeline. One conversation. One link. One data harvest. The scalability is obvious: a single scammer can target dozens of matches simultaneously rather than investing time in building false relationships.
The dating industry built verification systems to solve a trust problem and inadvertently created a new one.
Operators spent years persuading sceptical users that handing over biometric data and identity documents was safe, normal, and necessary—and now scammers are reaping the reward of that user education. The challenge isn't technical; it's behavioural. Platforms have conditioned their users to comply with verification requests, and that conditioning doesn't distinguish between a Tinder prompt and a phishing site.
This is a second-order consequence of the industry's trust-building efforts, and it will require more than in-app warnings to contain.
From safety feature to security vulnerability
Match Group introduced photo verification across Tinder in 2020, requiring users to submit real-time selfies that match their profile images. Bumble rolled out similar technology the same year, branding it as a core safety feature. Hinge followed.
The Match Group portfolio now offers background checks through Garbo, a service that scans public records for arrests, convictions, and histories of violence. These initiatives were intended to combat catfishing, fake profiles, and the safety concerns that have dogged the industry since its inception. The unintended consequence is that users no longer question verification requests.
They've been trained to expect them. When a match sends a link to "verify your profile" or "prove you're real," the behaviour feels consistent with what platforms have already asked them to do. The friction that might once have existed—a moment of scepticism, a question about why verification is happening off-platform—has been eroded by repetition.
According to the FBI, the fraudulent sites often mirror the design language of legitimate dating apps, complete with logos, safety messaging, and professional layouts. Some claim affiliation with recognised platforms. Others present themselves as third-party services that "work with" major apps to verify users across multiple platforms—a plausible claim given the industry's fragmented approach to identity verification and the absence of a universal standard.
The data collected goes beyond names and birthdates. The FBI noted that victims are often asked to submit photographs of government-issued identification, which provides enough information for identity theft. Credit card details entered for nominal "verification fees"—sometimes as low as $1—are used to set up recurring charges or sold on. In some cases, victims only discover the fraud when they notice unexplained subscription fees or find their data compromised in unrelated breaches.
The scalability problem operators can't ignore
Romance scams remain lucrative. The FBI's Internet Crime Complaint Center received reports totalling $652M in losses attributed to romance and confidence fraud in 2023. But those scams require time, emotional labour, and a degree of improvisation.
Verification scams eliminate those constraints. A single fraudster can operate at scale, targeting hundreds of matches with identical scripted messages and links. That efficiency creates a volume problem for platforms.
Trust and safety teams are already stretched managing content moderation, fake profiles, and harassment reports. Verification scams add another layer: policing conversations for external links, educating users about off-platform threats, and potentially monitoring for patterns in message content that suggest phishing attempts.
The FBI's guidance to users is predictable: never click external links, never provide financial information to verify identity, never submit identification documents outside the official app. But user education has limited efficacy when it contradicts learned behaviour. Platforms have spent years persuading users that verification is safe and necessary.
Reversing that instinct—teaching users to suddenly become sceptical—requires more than a pop-up warning or a help centre article. Some operators will likely respond by restricting external links in messages, a measure already implemented to varying degrees across major platforms. Bumble, for instance, blurs external URLs and requires users to manually reveal them.
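The link-restriction measure described above can be sketched as a simple message filter: extract anything URL-shaped from a message and flag domains outside the platform's own. The allowlist, regex, and function name below are illustrative assumptions for this sketch, not any platform's actual implementation:

```python
import re

# Hypothetical allowlist of domains a platform treats as first-party.
ALLOWED_DOMAINS = {"tinder.com", "bumble.com", "hinge.co"}

# Loose pattern catching bare domains as well as full URLs.
URL_PATTERN = re.compile(
    r"(?:https?://)?(?:www\.)?([a-z0-9-]+(?:\.[a-z0-9-]+)+)",
    re.IGNORECASE,
)

def flag_external_links(message: str) -> list[str]:
    """Return domains in a message that fall outside the allowlist."""
    flagged = []
    for match in URL_PATTERN.finditer(message):
        domain = match.group(1).lower()
        # Compare against the registrable domain (last two labels).
        root = ".".join(domain.split(".")[-2:])
        if root not in ALLOWED_DOMAINS:
            flagged.append(domain)
    return flagged
```

A filter this naive is exactly what URL shorteners and spelled-out addresses are designed to slip past, which is why platforms pair it with blurring and warnings rather than relying on it alone.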
But scammers adapt. They move to coded language, use URL shorteners, or direct victims to search for specific site names rather than clicking links directly. The more aggressive response would involve real-time scanning of message content for phishing patterns, similar to the machine learning models used to detect spam and harassment.
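A heuristic precursor to the machine-learning scanning mentioned above would score messages for the combination of verification language and obfuscated or shortened links. The phrase list, shortener set, and scoring weights here are assumptions chosen for illustration:

```python
import re

# Illustrative signals only; a production system would use trained models.
VERIFICATION_PHRASES = (
    "verify your profile", "prove you're real",
    "safety check", "verification fee",
)
SHORTENER_DOMAINS = ("bit.ly", "tinyurl.com", "t.co", "is.gd")

# Catches spelled-out links like "safedatecheck dot com" that evade URL filters.
SPELLED_OUT_URL = re.compile(
    r"\b[a-z0-9-]+\s+dot\s+(?:com|net|org|co)\b", re.IGNORECASE
)

def phishing_score(message: str) -> int:
    """Crude risk score: higher means more phishing-like signals present."""
    text = message.lower()
    score = 0
    if any(phrase in text for phrase in VERIFICATION_PHRASES):
        score += 2
    if any(domain in text for domain in SHORTENER_DOMAINS):
        score += 2
    if SPELLED_OUT_URL.search(text):
        score += 2
    return score
```

A platform might warn the sender's match above one threshold and route the conversation to human review above another; the trade-off is that any scanning of this kind sits uneasily with promises not to read private messages.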
That introduces new complications around privacy and encryption, particularly for platforms that have positioned themselves as protecting user conversations from surveillance. It also requires significant investment in moderation infrastructure at a time when Match Group (MTCH) and Bumble (BMBL) have both focused on cost discipline following the sector's valuation collapse.
What comes next for platform liability
The legal landscape around platform responsibility for user-to-user fraud remains murky, but regulatory momentum is shifting. The UK Online Safety Act places obligations on platforms to protect users from fraudulent content, though the definition of "content" and the extent of liability for private messages are still being tested. The EU Digital Services Act similarly imposes risk management requirements for systemic platforms, including measures to address scams.
If verification scams proliferate—and the FBI's warning suggests they already are—platforms may face pressure from regulators to take more direct action. That could mean mandatory warnings before users can access external links, restrictions on message content that references verification, or even liability for failing to prevent off-platform fraud that originates on their services. The irony is acute.
Platforms introduced verification to protect users and insulate themselves from criticism over safety failures. Those same verification systems are now being weaponised against users, and operators may find themselves liable for fraud that exploits the trust they worked so hard to build. Scammers have put the security and trust of dating app users at risk, while emerging threats such as deepfakes add further complexity to an already fraught security landscape.
The industry's trust paradox is no longer theoretical. It's measurable in FBI complaints and fraudulent subscription charges.
- Dating platforms face a behavioural security challenge: years of user education around verification have created reflexive compliance that scammers exploit, requiring solutions beyond technical fixes
- Regulatory pressure from the UK Online Safety Act and EU Digital Services Act may force platforms to accept liability for off-platform fraud originating from their services
- Watch for platforms to implement aggressive link restrictions and real-time message scanning, creating new tensions between fraud prevention and user privacy commitments
