Norton's AI Dating Paradox: Users Trust Bots Over Humans
48% of current online daters in New Zealand would consider a romantic relationship with an AI system
65% of daters would be bothered if a human match used AI tools for photos, profiles, or chat
Norton blocked 17 million dating scams globally in Q4 2025, a 19% year-on-year increase
46% of respondents already use AI to craft profiles, 39% to enhance photos, and 48% for conversation starters
The Norton 'Artificial Intimacy' study, conducted among 1,000 New Zealand adults between July and August 2025, has surfaced a striking contradiction: daters are open to AI relationships but deeply opposed to humans using AI deceptively. It's a double standard that cuts to the core of the dating industry's trust crisis. The problem, it turns out, isn't artificial intelligence—it's artificial humans.
The distinction matters. When Norton's data shows that nearly half of daters are open to AI relationships, and that 25% believe genuine romantic feelings for an AI are possible, the reflexive reading is that loneliness has driven people to desperate measures. But the same study shows widespread use of AI tools across the dating journey, from profile creation to photo enhancement to conversation starters, which points to familiarity rather than desperation. Daters aren't opposed to AI; they're opposed to being deceived about whether they're talking to a person or an algorithm.
The DII Take
This isn't a dating industry story yet, but it will be. The data reveals a trust paradox that operators can't ignore: daters would rather engage with something transparently artificial than with humans who might be lying about authenticity. That has immediate implications for product roadmaps, trust and safety enforcement, and the entire premise of "authentic connection" that every major platform claims to offer.
When 34% of respondents say they trust AI coaching more than advice from friends or family, they're not endorsing AI relationships—they're rejecting the experience they're getting from human ones.
What Norton's research actually captures is the collapse of the social contract on dating platforms. The platforms themselves have become so saturated with manipulation—catfishing, filters, ghostwritten bios, romance scams—that the hypothetical honesty of an AI companion starts to look appealing by comparison. The industry's answer to AI can't be prohibition—it has to be transparency.
The scam data helps explain why that trust collapsed. Norton blocked 17 million dating scams globally in Q4 2025, a 19% year-on-year increase. According to the company's Gen Threat Report, social engineering accounts for more than 90% of individual digital threats in this category. Instagram, Facebook, and WhatsApp were rated the least safe platforms for meeting matches, followed by Tinder, Bumble, and Hinge.
The Deception Threshold
The study's framing of "would consider" versus active intent is worth unpacking. Saying you'd consider dating an AI system is not the same as downloading a companion app tomorrow. Norton, as a cybersecurity vendor with commercial interests in threat detection, has reasons to emphasise the connection between loneliness and scam vulnerability. That doesn't invalidate the findings, but it does suggest caution in treating 48% openness as imminent consumer demand.
Still, the data points to a threshold that's already been crossed. If a third of daters believe an AI partner could be more emotionally supportive than a human one, and 33% are open to romantic engagement with an AI clone of a celebrity crush, the conceptual leap has been made. The industry is now competing not just with other apps, but with a category of product that promises connection without risk, validation without rejection, and conversation without the cognitive load of wondering whether the person on the other end is real.
This creates a strategic problem for Match Group, Bumble, and every venture-backed dating operator. The value proposition has always been facilitating human-to-human connection. But if a meaningful segment of the addressable market is open to bypassing humans entirely, the moat around "largest user base" or "best matching algorithm" starts to erode. Why optimise for matches if the user doesn't trust the match to be who they say they are?
The dating industry built its growth on reducing friction—faster signups, less profile detail required, gamified interfaces that prioritise volume over depth. AI didn't create that environment. It just made the existing trade-offs more visible.
Enforcement of AI use policies becomes unworkable in this environment. Platforms can ban AI-generated photos or chatbot-assisted messaging, but detection is imperfect and the arms race favours the user. More importantly, blanket prohibition ignores what the Norton data shows: daters want AI assistance; they just don't want to be on the receiving end of it unknowingly. The answer isn't less AI; it's disclosure.
What Operators Should Do
Transparency mechanisms are the obvious product response, but implementation is anything but straightforward. Bumble has experimented with verified photos, Match has invested heavily in identity verification, and Grindr has explored biometric checks. None of these directly address AI-generated content, and all of them add friction to onboarding. The question is whether the trust benefit justifies the conversion cost.
The alternative—allowing AI use but requiring disclosure—introduces its own complications. If a platform lets members flag AI-enhanced photos or chatbot-drafted messages, does that create a two-tier system where undisclosed AI use becomes a signal of dishonesty rather than just a tool? Does it push deceptive behaviour further underground? And how does enforcement scale when the volume of content is measured in billions of swipes per day?
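To make "symmetrical" concrete, the sketch below shows one way a disclosure could travel with the content itself rather than sit in a moderation queue, so that whatever an author declares about their own AI use is visible to whoever views it. This is a hypothetical illustration in TypeScript; every type, field, and function name is invented and describes no platform's actual schema.

```typescript
// Hypothetical sketch only: all names are invented for illustration
// and do not describe any real platform's data model.

type AiAssistance = "none" | "photo_enhanced" | "ai_generated" | "chat_assisted";

interface Disclosure {
  contentId: string;        // the photo, bio, or message being labelled
  assistance: AiAssistance; // declared by the author when the content is created
  visibleToViewer: boolean; // symmetry: whatever is declared is shown to viewers
}

// A pairing is "symmetrical" only if each side can see every
// disclosure the other has made before investing in the conversation.
function isSymmetrical(mine: Disclosure[], theirs: Disclosure[]): boolean {
  const allVisible = (ds: Disclosure[]) => ds.every((d) => d.visibleToViewer);
  return allVisible(mine) && allVisible(theirs);
}
```

The design intent, under these assumptions, is that declaration happens at creation time, so the platform surfaces what users declare rather than trying to detect what they hide; undeclared use still falls back on the imperfect detection described above.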
Mark Gorrie, Norton's VP for APAC, framed the issue as AI accelerating the breakdown in trust by making it easier to manipulate images and fabricate identities at scale. That's accurate, but it understates the structural challenge: as argued above, the frictionless environment that rewards embellishment predates AI, and the technology has only made its existing trade-offs more visible.
The Norton study is a warning shot, not a roadmap. The 48% figure will be cited in pitch decks for AI companion startups, and it will show up in regulatory hearings as evidence of harm. But the real story is the gap between what daters say they want—authentic human connection—and what they're willing to tolerate in pursuit of it. That gap is where the industry's trust crisis lives, and where the next wave of product innovation will need to focus. Prohibition won't close it. Verification alone won't either. The only sustainable path is making AI use visible, consensual, and symmetrical. Anything less is just another form of catfishing.
Dating platforms must shift from prohibiting AI to requiring transparent disclosure of its use—the trust crisis stems from deception, not technology itself
Traditional dating operators face existential competition from AI companion apps that promise connection without the risk of human dishonesty
Watch for platforms that implement symmetrical AI policies: if users can deploy AI tools, they must also be able to see when others do the same