
Hily's AI Adoption Stats: A Trust Crisis in the Making?
- 82% of American Gen Z and Millennial daters now use AI tools in their dating lives, rising to 87% among Gen Z singles
- 95% of users who've tried AI dating tools continue using them, according to Hily's survey of 1,500 users
- 61% use AI for crafting opening messages, 52% for profile optimisation, and 47% for generating conversation topics
- Match Group reported a 6% year-on-year decline in average subscribers in Q4 2024 as trust issues mount
AI assistance in dating has vaulted from early adoption to mainstream behaviour with startling speed, and the implications for platform operators are profound. When four in five young daters are using artificial intelligence to write messages, optimise profiles, and generate conversation topics, the industry's fundamental promise of authentic human connection faces an existential challenge. Dating apps now confront an uncomfortable reality: the technology they're integrating to boost engagement may be accelerating the trust collapse already eroding their business models.
The research, published by dating app Hily, surveyed 1,500 users and found adoption rates far higher than most industry observers expected. The 95% retention rate among those who've tried AI tools suggests this isn't a passing experiment but a permanent shift in dating behaviour. Whether that shift represents empowerment or an arms race depends largely on who you ask—and what they're selling.
The Arms Race Nobody Admits To
Hily positions its findings as evidence that AI can enhance rather than undermine authenticity, claiming users deploy these tools primarily for 'refining communication' and 'boosting confidence'. But when half your potential matches are using AI to write their messages, not using it yourself becomes a competitive disadvantage. The company frames this as empowerment—'overcoming the blank page syndrome' and 'presenting their best selves'—but it could equally be described as rational adaptation to a changed environment where unassisted communication reads as low-effort.
The research documents AI usage across the full dating funnel: 61% for crafting opening messages, 52% for profile optimisation, 47% for generating conversation topics, and 35% for photo enhancement. These aren't marginal use cases. They're core dating behaviours that platforms previously assumed were authentically human.
When nearly everyone is AI-assisted, what does authenticity mean anymore?
Dating apps have spent years trying to solve the authenticity problem through verification badges, photo requirements, and prompt-based profiles designed to surface personality. That entire effort assumed the person behind the profile was actually writing their own words. If Hily's numbers reflect reality across the broader market—and Match Group, Bumble, and others are all integrating similar AI features—that assumption no longer holds.
The Methodology Problem
Hily hasn't disclosed the methodology behind these figures beyond confirming the 1,500-person sample. How 'using AI for online dating' was defined matters enormously. Does running a message through ChatGPT count the same as using Hily's in-app AI coach? What about basic autocorrect or predictive text?
The company also hasn't specified whether the sample was drawn from Hily's own user base or represented a broader cross-section of dating app users. If the former, the research is essentially documenting adoption of features Hily has been actively promoting to its own members—hardly surprising, and not necessarily indicative of industry-wide behaviour. None of the major platforms have published comparable adoption data, which makes Hily's figures impossible to verify or contextualise against competitive benchmarks.
Bumble's introduction of AI-powered 'Opening Moves' suggestions and Match's investment in conversational AI suggest the major platforms see similar demand. But without transparent methodology or independent verification, these numbers should be read as market positioning rather than neutral research.
Platform Incentives Versus User Interests
Dating apps have clear financial incentives to normalise AI usage. AI features can be monetised as premium add-ons, reduce the friction that causes early churn, and generate engagement data that improves matching algorithms. Hily offers an AI-powered 'dating coach' as a core feature, making this research a convenient proof point for the product strategy the company has already committed to.
If most people are using AI to write their messages, optimise their photos, and generate conversation topics, then the AI-assisted version is the authentic representation of how people date now.
Hily's Head of Marketing, Alex Pasykov, describes AI as 'amplifying' rather than replacing authenticity, a framing that conveniently elides the question of where amplification ends and fabrication begins. Trust and safety teams at dating platforms are already stretched managing photo fraud, catfishing, and romance scams. Adding AI-generated text and enhanced images to that mix doesn't reduce the verification burden—it multiplies it.
Platforms will need to decide whether to detect and flag AI-generated content, embrace it as a product feature, or simply ignore it and hope users don't care. None of those options are cost-free, and most operators are choosing permissiveness over restriction because restriction is both technically difficult and commercially unappealing.
The Trust Collapse Accelerates
Regulatory frameworks like the EU's Digital Services Act require platforms to be transparent about algorithmic systems, but they don't address AI-generated user content in social contexts. The UK's Online Safety Act focuses on harmful content and verification, not synthetic text in dating profiles. Operators are largely on their own to set policies, and the vacuum is showing.
Competitors who've resisted AI integration—dating apps positioning themselves on rawness, spontaneity, or anti-swipe mechanics—may find their value proposition accidentally validated. If the mainstream market becomes visibly AI-saturated, a segment of users will actively seek alternatives that guarantee human-only interaction. That creates an opening for challenger brands, assuming they can credibly verify what they promise.
The broader risk is that widespread AI usage accelerates the trust collapse already eroding dating app engagement. Match Group disclosed a 6% year-on-year decline in average subscribers in Q4 2024, and Bumble's paying user count has been essentially flat. If members increasingly suspect they're talking to AI-enhanced versions of people—or can't tell the difference—the entire value proposition of 'meeting real people' starts to dissolve.
Hily's research may be self-interested, but it's documenting something real: a massive, rapid behavioural shift that most platforms aren't prepared for. Whether that shift represents progress or a problem depends on whether operators can articulate a coherent answer to what they're actually selling anymore; academics warning that AI in dating apps threatens authentic intimacy suggest they haven't yet.
- Dating platforms must urgently define their position on AI-generated content before users make the decision for them by leaving
- An opportunity exists for challenger brands that can credibly guarantee human-only interaction as mainstream apps become AI-saturated
- Trust and safety infrastructure built for photo fraud and catfishing is inadequate for detecting and managing AI-enhanced profiles and messages at scale
