
    Greed Dating's AI Compliance: Innovation or Compliance Theatre?

    • Cyberflashing became a specific criminal offence in the UK in January 2024, carrying penalties of up to two years imprisonment
    • Greed Dating claims to have built its AI nudity detection system in just hours using open-source models and ChatGPT
    • Platforms failing to prevent illegal content face fines of up to £18M or 10% of global turnover under the Online Safety Act
    • Ofcom's illegal content codes of practice are not yet finalised, with full enforcement expected in 2025

    A small UK dating platform claims it built an AI-powered cyberflashing detection system in hours, not months, using off-the-shelf tools and ChatGPT guidance. Greed Dating's rapid deployment is positioned as proof that regulatory compliance needn't break the bank—but the absence of testing data, accuracy metrics, and bias protocols tells a different story. This is what happens when legal deadlines meet limited resources: AI moderation tools assembled at speed, with questions about effectiveness deferred until later.

    Compliance theatre or genuine safeguarding?

Cyberflashing—the unsolicited sending of explicit images—became a specific criminal offence in January 2024 under the Online Safety Act (OSA), carrying penalties of up to two years imprisonment. Platforms now face legal obligations not just to remove such content, but to prevent users from encountering it in the first place. That's a meaningful shift from reactive moderation to proactive prevention.

    Greed Dating's approach uses what Minns describes as open-source models combined with ChatGPT guidance to identify and block nude imagery before it's delivered. The technical details are sparse. No accuracy rates have been disclosed. There's no mention of whether the system has been tested for the well-documented racial and body-type biases that plague AI image recognition tools.
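What such a pipeline might look like is easy to sketch, which is partly the point. The snippet below is a minimal illustration only, assuming the open-source Falconsai/nsfw_image_detection checkpoint from Hugging Face as a stand-in classifier; Greed Dating has not named its models, thresholds, or infrastructure, so none of this should be read as its actual system.

```python
# A minimal sketch of an off-the-shelf nudity gate, assuming the open-source
# Falconsai/nsfw_image_detection checkpoint on Hugging Face as a stand-in.
# This is NOT Greed Dating's disclosed system; no such details exist publicly.
from transformers import pipeline
from PIL import Image

# Load a pretrained image classifier once at startup.
classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

# Illustrative cut-off; a real deployment would tune this on labelled data.
BLOCK_THRESHOLD = 0.85

def should_block(image_path: str) -> bool:
    """Return True if the image should be withheld from the recipient."""
    image = Image.open(image_path)
    # The checkpoint emits "nsfw" vs "normal" scores per image.
    scores = {r["label"]: r["score"] for r in classifier(image)}
    return scores.get("nsfw", 0.0) >= BLOCK_THRESHOLD
```

A competent developer could assemble something like this in an afternoon. The hard part is everything the snippet omits: threshold calibration, latency at scale, human review of borderline cases, and an appeals route for wrongly blocked images.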



    The broader concern isn't whether AI can detect nudity—it demonstrably can, with varying degrees of success. The question is whether a system assembled in hours, using generic tools and minimal custom development, can make nuanced decisions about context, consent, and edge cases. Can it distinguish between explicit cyberflashing and consensual sharing between matched users?
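Context is precisely what a raw classifier score cannot supply. Any answer would have to come from a policy layer wrapped around the model, something like the hypothetical sketch below, where the match store and opt-in setting are invented for illustration rather than features Greed Dating has described.

```python
# Hypothetical policy layer showing why context matters: a classifier score
# alone cannot separate cyberflashing from consensual sharing. The match and
# opt-in stores below are invented for illustration, not a disclosed design.
from dataclasses import dataclass

MATCHES: set[frozenset] = set()    # pairs of user IDs who have matched
EXPLICIT_OPT_IN: set[str] = set()  # users who opted in to explicit media

@dataclass
class ImageMessage:
    sender_id: str
    recipient_id: str
    nsfw_score: float  # classifier output, 0.0 to 1.0

def deliver(msg: ImageMessage, threshold: float = 0.85) -> bool:
    """Return True if the image may be delivered to the recipient."""
    if msg.nsfw_score < threshold:
        return True  # not flagged as explicit
    # Explicit content requires both an existing match and a recipient
    # opt-in; anything else is withheld, mirroring the prevention duty.
    matched = frozenset((msg.sender_id, msg.recipient_id)) in MATCHES
    return matched and msg.recipient_id in EXPLICIT_OPT_IN
```

Even that toy rule forces design decisions: what the default setting is, and what counts as meaningful opt-in. Nothing in the announcement indicates those choices have been confronted.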

    The "built in hours" narrative sounds impressive until you consider what's missing—bias testing, accuracy benchmarks, false positive protocols, and any mention of how this performs across different skin tones and body types.

    The smaller platform dilemma

    Greed Dating operates at a radically different scale to Match Group (MTCH) or Bumble (BMBL), which have dedicated trust and safety teams, legal counsel, and engineering resources to deploy sophisticated moderation infrastructure. Smaller platforms don't have that luxury. They face the same regulatory obligations with fractions of the budget and headcount.

    Minns frames the rapid deployment as democratising safety, arguing that accessible AI tools level the playing field. There's some truth to that. Open-source models and large language model assistance have lowered the barrier to entry for basic content moderation.

    But speed and sophistication aren't the same thing. Match Group's systems have been refined over years, informed by millions of moderation decisions, and supported by human review teams. A few hours of development, however clever, doesn't replicate that.


    Industry observers watching OSA implementation have warned about exactly this scenario: platforms bolting on AI moderation tools without adequate testing, creating the appearance of compliance whilst potentially exposing users to inaccurate filtering or, worse, missing genuinely harmful content. Ofcom, the UK regulator responsible for OSA enforcement, has yet to publish final codes of practice on illegal content, meaning platforms are building to a moving target.

    What's actually being tested here?

The absence of disclosed accuracy metrics is telling. AI nudity detection isn't new—companies like Hive Moderation have sold commercial classification services for years, and Microsoft's PhotoDNA has long matched images against known illegal content. Those systems publish performance data, undergo third-party audits, and document known limitations. Greed Dating's announcement mentions none of this.

    Research from organisations including the Algorithmic Justice League has repeatedly demonstrated that image recognition systems trained predominantly on lighter skin tones perform significantly worse on darker skin, leading to both false positives and false negatives. Without transparency about training data, testing protocols, and accuracy across demographic groups, there's no way to assess whether this tool protects all users equally.
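The kind of audit that's absent here is not exotic. A sketch of a basic subgroup evaluation follows, assuming a labelled test set annotated with a demographic attribute such as Fitzpatrick skin-tone bands; the field names are illustrative assumptions, not a published protocol.

```python
# A sketch of the missing bias audit: given labelled evaluation records
# annotated with a demographic group, report false positive and false
# negative rates per group. Field names are illustrative assumptions.
from collections import defaultdict

def subgroup_error_rates(results):
    """results: iterable of dicts with keys 'group', 'label' (1 = explicit)
    and 'predicted' (1 = flagged by the model)."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in results:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            c["fn"] += r["predicted"] == 0  # explicit image missed
        else:
            c["neg"] += 1
            c["fp"] += r["predicted"] == 1  # benign image wrongly blocked
    return {
        group: {
            "false_positive_rate": c["fp"] / max(c["neg"], 1),
            "false_negative_rate": c["fn"] / max(c["pos"], 1),
        }
        for group, c in counts.items()
    }
```

Large gaps in these rates between groups are exactly the failure mode the research documents. A platform that cannot produce numbers like these has not tested for it.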

    If this becomes the industry template for OSA compliance, trust and safety teams should be deeply concerned.

    The OSA doesn't specify how platforms must prevent illegal content, leaving implementation to operators. That flexibility is deliberate, allowing for innovation. But it also creates risk. If Greed Dating's approach becomes a blueprint—quick, cheap, AI-driven, light on testing—the industry could end up with a patchwork of minimally viable compliance tools rather than robust safeguarding infrastructure.


    Minns suggests the implementation "redefines what people expect" from dating platforms. That's generous. This is regulatory compliance, not voluntary innovation. The OSA made cyberflashing prevention a legal requirement.

    Platforms that fail to implement effective measures face fines of up to £18M or 10% of global turnover, whichever is higher. The incentive structure is clear.

    Process versus outcomes

    What remains unclear is whether enforcement will focus on process or outcomes. Will Ofcom accept that platforms made good-faith efforts to deploy detection tools, even if those tools prove inadequate? Or will it hold platforms accountable for measurable reductions in harmful content, regardless of the methods used?

    That distinction will determine whether "built in hours" is a success story or a liability. The OSA's enforcement timeline is still unfolding, with illegal content provisions expected to come into full effect in 2025. Platforms have time to refine their approaches.

    Whether they'll use that time to rigorously test and improve AI moderation tools, or simply point to their existence as evidence of compliance, will shape how effective the legislation proves in practice. Research on participatory design of user interactions with risk detection AI suggests that users need to understand how these systems work to trust them, yet transparency remains scarce.

    Meanwhile, broader concerns about AI deepfakes eroding trust in UK dating apps add another layer of complexity to the safety landscape that platforms must navigate.

    • Watch whether Ofcom prioritises measurable safety outcomes over mere implementation of AI tools when enforcement begins in earnest—this will determine if rapid, minimally tested solutions prove legally sufficient
    • Demand transparency on accuracy rates and bias testing from platforms deploying AI moderation, particularly regarding performance across different skin tones and body types where image recognition historically fails
    • Expect a two-tier system to emerge: well-resourced platforms with sophisticated, tested moderation infrastructure versus smaller operators relying on quick-fix AI solutions that may create compliance theatre rather than genuine user protection

