
AI in Dating: The 107% First Date Boost That Singles Distrust
- 51% of UK singles now use AI features on dating apps for crafting messages, generating date ideas, and writing profiles
- 50% of singles believe AI undermines authenticity in dating, with some calling AI-assisted messaging a form of catfishing
- AI users reported a 107% increase in first dates compared to non-users, though no data exists on second dates or relationship formation
- Survey of 1,000 UK singles commissioned by video dating platform Arrows in partnership with matchmaking service Tawkify
Over half of UK singles are now using AI to navigate dating apps, according to new research from video dating platform Arrows—and an identical proportion believe the technology is destroying the very thing they're looking for. The figures reveal a striking contradiction at the heart of modern courtship: the technology is both lifeline and saboteur, depending on who's holding the phone. Even more revealing, AI users reported a 107 per cent increase in first dates compared to non-users, yet no data exists on what happens after those doors close.
This isn't just a contradiction—it's a full-blown cognitive dissonance that should worry every operator relying on AI to drive engagement metrics.
Singles are using tools they fundamentally distrust because they believe they have no choice. That is either a damning indictment of how difficult modern dating has become, or evidence that platforms have successfully convinced users they can't compete without technological enhancement. Either way, it's not sustainable. When your members achieve a 107 per cent lift in first dates but simultaneously think the method is deceptive, you're building on sand.
Effectiveness versus authenticity
The research points to a familiar pattern in dating product development: features that solve immediate friction points whilst creating longer-term trust deficits. AI assistance demonstrably works at the top of the funnel. Singles who deploy it are converting profile views into in-person meetings at twice the rate of those going it alone.
But here's the cliff edge: that 107 per cent first date increase exists in a vacuum. The survey, commissioned by Arrows in partnership with matchmaking service Tawkify, provides no indication whether these AI-assisted connections convert to second dates, relationships, or anything beyond a single coffee. The absence of that data is conspicuous.
If AI were genuinely improving match quality rather than simply increasing match volume, you'd expect the research design to measure it. Operators should recognise this gap, because the industry is betting heavily on the volume side of the equation: Match Group, Bumble, and others have invested in AI-powered features—profile optimisation, conversation starters, photo selection tools—all designed to reduce friction and increase engagement.
The double standard hiding in plain sight
The most instructive finding isn't the paradox itself but where users draw the line. According to the research, singles are broadly comfortable with AI-generated profiles—the static presentation of self. Yet a majority consider AI-written messages during active conversation to be catfishing, a term traditionally reserved for identity deception.
That's a fascinating distinction. It suggests users have internalised the idea that your profile is a marketing document, fair game for enhancement and optimisation. Nobody expects unfiltered reality in a profile any more than they expect unretouched photos.
Conversation, by contrast, retains an expectation of authenticity. When you're messaging back and forth, users believe they're engaging with a person, not a large language model. The moment AI enters that exchange, it crosses from acceptable optimisation into something that feels like deception—even if the human on the other end approved every word.
This isn't an academic distinction. It's a product design problem.
Tinder and other platforms have already faced backlash for introducing AI chatbots and generated profile images, features that blur the line between human and machine in ways that erode trust. The current research suggests users will tolerate AI in the shop window but rebel when it follows them to the checkout.
Who benefits from the paradox
Worth noting the source here: Arrows is a video dating platform that positions itself as an antidote to text-based apps. Its business model depends on highlighting the inauthenticity of traditional swiping platforms. Tawkify, the co-sponsor, is a matchmaking service that markets human curation as superior to algorithmic matching.
Both companies have commercial incentives to frame AI as a problem rather than a solution. That doesn't invalidate the findings—the paradox is real, and aligns with broader industry conversations around AI trust—but it does suggest caution in interpretation. The research tells us what users say they believe, not necessarily how they behave.
Platforms could look at this data and conclude that AI features should be more transparent, with clear labelling when messages or profiles receive AI assistance. They could equally conclude the opposite: that users want the benefits of AI without confronting its presence, and that the solution is better integration, not more disclosure.
What this means for operators
The immediate challenge is navigating member expectations that haven't caught up with product reality. Singles are using AI because it works—those first date numbers don't lie. But they're doing so with mounting unease, aware that the same tools giving them an edge are being deployed by everyone else, creating an arms race nobody particularly wants to win.
That unease will manifest in trust metrics. Platforms already face endemic scepticism around fake profiles, bots, and engagement bait. Introducing AI features that members view as deceptive—even if those members are simultaneously using them—accelerates an erosion of trust that has dogged the industry from its earliest days.
Regulation is coming as well. The UK Online Safety Act and the EU Digital Services Act both include provisions around transparency and authenticity. If members increasingly view AI assistance as a form of misrepresentation, regulators won't be far behind. Compliance teams should be watching this data closely.
The paradox also raises uncomfortable questions about product-market fit. If your users need AI to get dates but believe AI ruins dating, you haven't solved the problem—you've just automated the dysfunction. The 107 per cent increase in first dates might be masking a deeper failure to create environments where authentic connection is possible without technological mediation.
The industry has spent a decade optimising for engagement. Researchers studying the ethics of online matchmaking are urging caution over AI's role in dating apps, warning that trust declines when people believe profiles were created with AI assistance. Meanwhile, deepfakes and synthetic profiles are fuelling mistrust across UK dating platforms. These figures suggest it's time to ask what, exactly, is being engaged with—and whether anyone involved actually trusts it.
- The cognitive dissonance between AI usage and AI distrust signals an unsustainable user relationship that will eventually manifest in declining trust metrics and increased regulatory scrutiny
- The absence of second date and relationship data reveals AI optimises for volume rather than quality—a fundamental mismatch if platforms claim to facilitate meaningful connections
- Users distinguish between AI-enhanced profiles (acceptable marketing) and AI-written messages (deceptive catfishing), creating a clear product design boundary that operators violate at their peril
