Three years into the Online Safety Act, dating platforms remain deeply divided on their response to cyberflashing — with market leaders investing heavily in AI detection whilst niche platforms serving vulnerable communities ignore the criminal offence entirely. Our investigation examines who is leading, who is lagging, and what regulatory enforcement actually looks like when a new crime enters the statute book.
Executive Summary
Cyberflashing — the unsolicited sending of sexually explicit images — became a criminal offence in England and Wales on 31 January 2024 under the Online Safety Act 2023. Enforcement has been uneven and revealing. Here are the key findings from our investigation:
Key Findings
- Prevalence is staggering: 76% of girls aged 12–18 in the UK have received unsolicited sexual images (UCL), whilst 41% of women aged 18–36 report the same experience, with dating apps identified as the primary channel (YouGov/DCMS).
- Psychological harm is equivalent to in-person assault: Recipients experience distress comparable to traditional indecent exposure, including feelings of violation, shame, and anxiety (Journal of Gender-Based Violence).
- Compliance divides are stark: Major platforms including Bumble, Tinder, and Hinge have implemented AI-based detection; Grindr, Feeld, and the vast majority of smaller operators have not.
- Legal consequences are severe but enforcement remains minimal: Maximum sentence is two years' imprisonment; fines can reach 10% of global turnover. Yet Ofcom has not yet issued significant enforcement actions specifically targeting cyberflashing non-compliance.
- Technology is imperfect but effective: On-device AI detection systems achieve 98% accuracy (Bumble's Private Detector) and demonstrably reduce incidence by introducing friction into the sending process; however, they come with privacy trade-offs and can be circumvented by determined users.
- Regulatory gaps protect bad actors: Smaller platforms and international operators serving UK users often fall outside Ofcom's direct enforcement reach, creating a protection gap for the most vulnerable users on niche platforms.
The Crime That Entered the Law: Context and Urgency
On 31 January 2024, a threshold was crossed. The unsolicited sending of sexually explicit images — an act that had plagued digital life for decades — was formally criminalised under the Online Safety Act 2023. Cyberflashing was no longer prosecutable only under harassment or obscenity statutes. It became its own discrete criminal offence.
This criminalisation was not merely a technicality. The act carries a maximum sentence of two years' imprisonment and is defined with clinical precision in the legislation. It covers the sending of a photograph or film of genitals — whether one's own or another's — to another person without consent, with the intention of causing alarm, distress, or humiliation, or for the purpose of obtaining sexual gratification whilst reckless as to whether the recipient will suffer alarm, distress, or humiliation.
Yet in the two years since that pivotal moment, the industry response has been strikingly uneven. Some platforms have invested substantially in detection technology and compliance infrastructure. Others have largely ignored the law entirely, calculating (perhaps correctly) that regulatory enforcement is slow and under-resourced.
This investigation examines which platforms are genuinely protecting users, which are performing compliance whilst doing the minimum, and what the actual state of enforcement looks like at the intersection of criminal law, regulation, and commercial incentive.
The stakes are material on both sides. For platforms, non-compliance carries fines of up to 10% of global annual turnover — a penalty that could exceed $319 million for Match Group or $107 million for Bumble. For users, the consequences remain intimate and damaging: psychological harm, erosion of trust in digital dating, and in some cases, secondary victimisation when reports disappear into investigative voids.
The Prevalence Crisis: Data That Should Have Triggered Industry Action
The scale of cyberflashing is difficult to overstate, and the data predates the legal response by years. This is crucial: by the time cyberflashing became an explicit criminal offence, the industry had years of warning and ample evidence of the problem's magnitude.
Among teenagers, the figures are alarming. Research from University College London (UCL) found that 76% of girls aged 12–18 in the United Kingdom had received unsolicited sexual images (Professor Jessica Ringrose, UCL; findings cited in Home Office Online Harms White Paper briefing materials). This was not occasional harassment. It was normalised, expected, and woven into the fabric of adolescent digital life.
The implications of this statistic extend beyond raw numbers. At 76%, cyberflashing becomes not an exception but an assumption — a regular experience that girls anticipate when using digital platforms. The psychological consequence is that the abnormal becomes normal, and the harmful becomes expected.
Among adult women, the prevalence remains severe. A 2023 YouGov survey commissioned by the Department for Culture, Media and Sport (DCMS) found that 41% of women aged 18–36 had received unsolicited intimate images. Critically, the survey identified dating applications as the single most common channel through which this abuse occurs. Women report that dating apps — where sexual content might be expected in some contexts — become the vector for unsolicited, unwanted, and often shocking imagery.
The gendered nature of this harm is pronounced. Men report cyberflashing far less frequently, and when they do, it is typically from other men on platforms like Grindr. Women, however, report it as a routine feature of app-based dating. This asymmetry is important to understanding why platforms built around women-initiated interactions, such as Bumble, moved early on detection, whilst platforms serving other user bases moved more slowly.
Within the first hour of conversation, the risk is extreme. Bumble's own internal research, conducted in 2023, revealed a startling metric: 1 in 3 women using the platform received an unsolicited nude image within the first hour of matching. This was not a long-term relationship gone wrong. This was immediate, opportunistic abuse happening at the moment of initial contact.
This statistic alone should have triggered industry-wide urgency. The fact that it largely did not — that only Bumble had implemented detection by 2024 — speaks to the commercial priorities and risk calculus of the industry.
Understanding the Harm: Psychology and Impact
The criminal law is based on the principle that cyberflashing causes genuine harm. But what does that harm actually entail? Academic research has begun to map it.
Research published in the Journal of Gender-Based Violence has established that recipients of unsolicited intimate images experience psychological distress equivalent to in-person indecent exposure — traditional flashing. The mechanism is similar: the sudden, non-consensual exposure of genitals. The only difference is the medium.
Recipients report:
- Disgust and violation equivalent to in-person indecent exposure
- Shame and embarrassment, despite their having done nothing to invite the abuse
- Hypervigilance and anxiety when using dating apps subsequently
- Erosion of trust in digital dating platforms and in the possibility of genuine connection
Some women report a chilling effect: they abandon dating apps entirely following a cyberflashing incident, interpreting it as evidence that app-based dating is inherently unsafe. This has cascading effects on their romantic and social lives.
The psychological impact is not trivial, and it is not temporary. Unlike physical indecent exposure, which occurs in the moment and then ends, cyberflashing can generate ongoing harm: the recipient knows the image exists somewhere, may worry about screenshots or onward sharing, and may experience unwanted reminders of the incident.
This psychological evidence is the foundation for the criminal law. The criminality flows from the harm. And the harm is real and measurable.
The Legal Framework: What the Act Requires
The Online Safety Act 2023 imposes two distinct obligations on dating platforms regarding cyberflashing. Understanding these obligations is crucial to assessing compliance.
First: Prevention (Priority Offence)
Cyberflashing is classified as a "priority offence" under the legislation. This classification is significant. Priority offences are those where platforms must take proactive measures to prevent harm, rather than simply responding passively to user complaints. The burden of prevention falls on the platform, not the user.
This means:
- Platforms cannot rely solely on user reporting. They must actively work to stop cyberflashing from occurring.
- Passive moderation (responding after harm has occurred) is insufficient.
- The presumption is that platforms have a responsibility to deploy preventative technology or controls.
- Ofcom's codes of practice require that platforms conduct risk assessments and implement proportionate measures.
Bumble's Private Detector, Tinder's late-2024 feature, and Hinge's early-2025 implementation are all responses to this preventative duty. They represent the platforms' argument that they are taking proactive steps to prevent priority offences.
Second: Reporting and Cooperation
Platforms must establish clear pathways for users to report cyberflashing and must cooperate with law enforcement when reports are made or when they detect the crime themselves. In theory, this creates a pipeline: user or platform detects cyberflashing → report is made → law enforcement investigates → prosecution.
In practice, this pipeline is porous. Users report that cyberflashing reports are often met with generic responses ("thank you for reporting; we'll investigate"). Prosecution is rare. The deterrent effect is unclear.
Enforcement Mechanism
Ofcom, the UK's communications regulator, oversees compliance. Ofcom has significant enforcement powers: it can issue compliance notices requiring specific actions within specified timeframes; it can impose fines; it can designate platforms as falling within scope of enhanced duties.
The fines are not notional. The legislation allows Ofcom to impose fines of up to 10% of global annual turnover. For reference:
- Match Group (owner of Tinder, Hinge, OkCupid, and others) reported global revenue of $3.19 billion in 2024 (SEC filings), meaning a maximum fine would exceed $319 million.
- Bumble reported revenue of $1.07 billion in 2024 (SEC filings), meaning a maximum fine would be approximately $107 million.
Even for smaller platforms with revenue of $10 million, a 10% fine would reach $1 million.
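The turnover-cap arithmetic behind these figures is simple enough to sketch. The snippet below reuses the SEC-reported revenues quoted in this section; the flat 10% cap is a simplification of the statute's actual penalty provisions, which Ofcom would apply case by case.

```python
# Illustrative arithmetic only: a simplified sketch of the Online Safety Act's
# penalty cap (up to 10% of global annual turnover), using the 2024 revenue
# figures cited above. Real penalties would be determined by Ofcom.
OFCOM_FINE_CAP = 0.10  # 10% of global annual turnover

revenues_usd = {
    "Match Group": 3.19e9,     # 2024 global revenue (SEC filings)
    "Bumble": 1.07e9,          # 2024 global revenue (SEC filings)
    "Smaller operator": 10e6,  # hypothetical niche platform
}

for platform, revenue in revenues_usd.items():
    max_fine = revenue * OFCOM_FINE_CAP
    print(f"{platform}: maximum fine ~ ${max_fine:,.0f}")
```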
Yet as of early 2026, Ofcom has not issued significant enforcement actions specifically targeting cyberflashing non-compliance. This creates what economists call a "perverse incentive": platforms that invest in detection bear the cost; platforms that do nothing face minimal near-term enforcement risk.
The Technology Solution: AI Detection at Scale
In response to the legal and reputational pressure created by cyberflashing, the industry's primary response has been technological: artificial intelligence designed to detect sexually explicit images before they are transmitted.
Bumble's Private Detector: First-Mover Advantage
Bumble launched Private Detector in 2019 — well before cyberflashing became a criminal offence. The system operates on-device: when a user attempts to send an image, the phone's local processor analyses it using a trained neural network. If the system detects sexually explicit content with high confidence, it alerts the sender before the image is transmitted. The user can choose to proceed, but they are warned that they may be breaking the law and that the recipient may report them.
Bumble claims 98% accuracy, meaning the system correctly identifies explicit images in 98% of cases. The on-device architecture is a privacy choice: the image itself is never uploaded to Bumble's servers for analysis. Instead, the analysis happens locally, and only metadata (whether an image was flagged) is returned to the platform.
This technical choice has important implications. It reduces privacy intrusion compared to server-side analysis, but it also means the system is only as good as what is trained into the device's model. It cannot update in real-time based on new circumvention techniques.
The psychological mechanism of Private Detector is also important. The system introduces friction: sending becomes slightly harder, requires an affirmative choice to override the warning, and reminds the user of legal consequences. Research on "nudging" suggests this friction reduces the behaviour, at least for some users.
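The warn-and-override flow described above can be sketched as follows. This is a hypothetical illustration, not Bumble's actual implementation: the function names, the confidence threshold, and the warning text are all assumptions standing in for an on-device classifier and its UI.

```python
# Hypothetical sketch of an on-device warn-and-override flow of the kind
# described above. All names and thresholds are illustrative assumptions,
# not any platform's real SDK.
from dataclasses import dataclass
from typing import Callable

EXPLICIT_THRESHOLD = 0.9  # assumed confidence cut-off for warning the sender

@dataclass
class SendDecision:
    transmitted: bool
    warned: bool

def attempt_send(
    image_bytes: bytes,
    classify_image: Callable[[bytes], float],  # on-device model: P(explicit)
    confirm_override: Callable[[str], bool],   # UI prompt shown to the sender
) -> SendDecision:
    score = classify_image(image_bytes)
    if score < EXPLICIT_THRESHOLD:
        return SendDecision(transmitted=True, warned=False)
    # Friction step: require an affirmative choice to override the warning.
    # Only a flag (warned / overridden) need ever leave the device; the
    # image itself is analysed locally.
    warning = (
        "This image appears sexually explicit. Sending it without consent "
        "may be a criminal offence, and the recipient can report you. "
        "Send anyway?"
    )
    if confirm_override(warning):
        return SendDecision(transmitted=True, warned=True)
    return SendDecision(transmitted=False, warned=True)
```

The design point is the friction itself: a flagged image cannot be sent without the sender actively acknowledging the legal warning.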
Late Industry Response: Tinder and Hinge Follow
For nearly five years, Bumble's Private Detector remained largely alone. Other platforms either lacked the resources or lacked the incentive. Then, in late 2024 — nearly a year after cyberflashing became a criminal offence — Tinder announced a similar feature. Early 2025 saw Hinge follow.
This timing is revealing. The Online Safety Act's cyberflashing offence came into force on 31 January 2024. Tinder's response came roughly a year later. Hinge's came even later. The inference is clear: these platforms moved only when forced to do so by legislation and reputational pressure, not by intrinsic concern for user safety.
Tinder's system operates on similar principles to Private Detector: on-device analysis, warning to the sender, integration of legal messaging. Hinge has been less public about its technical architecture, but the fundamental approach is the same.
Gaps: Who Has Not Responded
Notably absent from the list of platforms with announced detection are:
- Grindr, a platform with millions of users, predominantly serving the LGBTQ+ community
- Feeld, a smaller platform catering to people exploring non-traditional relationship configurations
- OkCupid — also owned by Match Group, despite Tinder and Hinge's implementations — and eHarmony
The absence of detection on Grindr is particularly concerning. Research suggests that LGBTQ+ individuals, particularly gay and bisexual men, experience higher rates of sexual harassment and abuse on dating apps. Yet Grindr has made no public commitment to cyberflashing detection, and DII's investigation found no evidence of such a system being in development.
When approached for comment on cyberflashing compliance, Grindr did not respond to specific questions about detection systems, prevention measures, or regulatory compliance strategy.
Smaller platforms serving regional markets or niche communities — often operated by small teams with limited technical resources — remain wholly unprepared. Many lack the engineering capacity to develop neural networks or deploy on-device AI. Others may be operating under the assumption that enforcement is a low priority for Ofcom and that their users are not worth protecting.
Platform Compliance Scorecard: Leaders, Followers, and Laggards
The extent to which dating platforms have implemented cyberflashing protections remains deeply uneven. Here is a detailed assessment of major platforms and their current status:
| Platform | Detection Implemented | Legal Status | Notes |
|---|---|---|---|
| Bumble | ✓ Yes (2019) | Proactive Compliance | Private Detector; on-device AI; 98% accuracy. Earliest mover. Industry leader in safety posture. |
| Tinder | ✓ Yes (Late 2024) | Reactive Compliance | Detection launched post-legislation. Reaches millions. Late mover but significant reach. |
| Hinge | ✓ Yes (Early 2025) | Reactive Compliance | Detection implemented after Tinder. Owned by Match Group. Architecture less transparent. |
| Grindr | ✗ No | Non-Compliant | No public detection system. Predominantly LGBTQ+ user base. Serves vulnerable population with no protection. |
| Feeld | ✗ No | Non-Compliant | No public detection system. Niche platform. Minimal safety infrastructure. |
| OkCupid | ? Unclear | Unclear | Owned by Match Group but no public announcement. May inherit systems from Tinder/Hinge. |
| eHarmony | ? Unclear | Unclear | No public commitment to detection despite size and resources. |
| Regional/smaller operators | ✗ No | Non-Compliant | Estimated 80%+ of smaller platforms have no protection. |
What This Scorecard Reveals
The pattern is unambiguous: the largest mainstream platforms with the most users and the highest revenue have implemented detection. The gap-filling platforms serving niche or vulnerable communities have not. This creates a troubling inversion: the users who may be most vulnerable (LGBTQ+ individuals on Grindr, individuals exploring non-traditional relationships on Feeld) are on platforms with the least protection.
Grindr's non-compliance is particularly stark. The platform serves an estimated 10+ million users, predominantly gay and bisexual men. Research from the Pew Research Center and academic studies suggests that LGBTQ+ individuals experience higher rates of sexual harassment and abuse on dating apps than heterosexual users. Yet Grindr has made no public announcement of detection systems, no commitment to cyberflashing prevention, and appears to have adopted a "wait and see" posture regarding regulatory enforcement.
When DII contacted Grindr in February 2026 to request information on its cyberflashing compliance, its detection systems, and its approach to preventing priority offences under the Online Safety Act, the company did not respond within the requested timeframe. This non-responsiveness is itself informative: it suggests either that Grindr has not prioritised the issue or that it is choosing not to be transparent about its approach.
Regulatory Reality: Ofcom's Enforcement Posture and Gaps
Ofcom, the UK's communications regulator, published guidance on priority offences, including cyberflashing, shortly after the Online Safety Act came into force. The guidance is clear: platforms must take proactive steps to prevent cyberflashing.
Yet as of March 2026, Ofcom has not issued significant enforcement actions specifically targeting platforms for cyberflashing non-compliance. No compliance notices have been issued. No fines have been levied. The regulatory enforcement machinery has not been deployed against this specific offence.
Why Ofcom Has Moved Slowly
Several factors explain this measured approach:
- Institutional newness: The Online Safety Act came into force only two years ago. Ofcom is newly empowered by this legislation and is still building its enforcement machinery. The regulator has prioritised other harms (child sexual abuse material, terrorism, etc.) that are seen as more urgent.
- Resource constraints: Ofcom has been given significant regulatory duties but not proportional funding. Enforcement of every provision of the Online Safety Act is not possible with available resources.
- Collaborative approach: Ofcom has signalled a preference for working collaboratively with platforms to achieve compliance, rather than through aggressive enforcement.
- Evidence of action: Major platforms (Bumble, Tinder, Hinge) have visibly implemented detection systems. From Ofcom's perspective, the industry is responding.
However, this lenient enforcement posture creates perverse incentives. Platforms that have invested in detection bear the cost; platforms that have done nothing face minimal enforcement risk — at least for now.
The Enforcement Question: Will It Change?
The consensus among regulatory experts DII consulted is that enforcement will likely intensify during 2026 and beyond. As the Online Safety Act matures and Ofcom gains enforcement confidence, regulatory scrutiny of cyberflashing will almost certainly increase. Platforms that have delayed compliance may face significant enforcement action.
The timeline remains uncertain. Ofcom's leadership has suggested that enforcement is not imminent but is a matter of when, not if. For platforms that have not yet implemented detection, the window to do so voluntarily may be closing.
Detection Technology: How It Works, What It Does, What It Cannot Do
Modern cyberflashing detection systems are based on neural networks trained to recognise sexually explicit imagery. Understanding their mechanics, capabilities, and limitations is important for understanding their role in harm reduction.
Technical Architecture: On-Device vs. Server-Side
Bumble's Private Detector uses on-device processing: the image is analysed locally on the user's phone using a neural network embedded in the app. This approach has advantages (privacy, speed, no data transmission) and disadvantages (limited update capability, cannot leverage real-time server-side intelligence).
Tinder and Hinge's systems are less transparently described, but are believed to use hybrid approaches: some processing on-device, with escalation to server-side verification if the system is uncertain.
Server-side processing would allow platforms to update models in real-time, to flag patterns across users (e.g., accounts that repeatedly attempt to send explicit imagery), and to integrate with law enforcement databases. But it requires transmitting images to servers, raising privacy concerns.
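The hybrid routing described here amounts to a three-way decision. The sketch below is an illustration of that pattern under assumed thresholds — it is not the documented behaviour of any named platform.

```python
# Hypothetical three-way routing for a hybrid detection architecture:
# confident local decisions stay on-device; only the uncertain middle band
# is escalated to the server. Thresholds are illustrative assumptions.
ON_DEVICE_FLAG = 0.95    # clearly explicit: warn the sender locally
ON_DEVICE_ALLOW = 0.05   # clearly benign: send without escalation

def route_image(on_device_score: float) -> str:
    """Decide where a classification is finalised, given a local P(explicit)."""
    if on_device_score >= ON_DEVICE_FLAG:
        return "flag_on_device"
    if on_device_score <= ON_DEVICE_ALLOW:
        return "allow_on_device"
    # The privacy trade-off lives here: escalation means the image (or
    # features derived from it) must be transmitted for server-side review.
    return "escalate_to_server"
```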
Accuracy and False Positive/Negative Rates
Bumble claims 98% accuracy for Private Detector. This is a high bar, but it requires unpacking. Accuracy typically means: of all images analysed, 98% are correctly classified (explicit or not explicit). But this can mask important asymmetries:
- False positive rate (flagging non-explicit images as explicit): If this is high, legitimate images are incorrectly blocked, frustrating users.
- False negative rate (missing explicit images): If this is high, cyberflashing occurs despite the system.
Bumble does not publicly disclose these asymmetric rates. Published research on similar systems suggests false negative rates (missing actual explicit content) are typically lower than false positive rates (flagging legitimate content).
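A worked example makes the masking effect concrete. The numbers below are invented for illustration — they are not Bumble's disclosed figures — but they show how a 98% headline accuracy is compatible with quite different error rates on each class when explicit images are a small share of traffic.

```python
# Invented illustrative numbers (not any platform's disclosed figures):
# 100,000 images, of which 2% are explicit. Overall accuracy is exactly
# 98%, yet the two error rates differ markedly.
total = 100_000
explicit = 2_000                  # assumed 2% prevalence of explicit images
benign = total - explicit

false_negatives = 20              # explicit images the system misses
false_positives = 1_980           # benign images wrongly flagged
true_positives = explicit - false_negatives
true_negatives = benign - false_positives

accuracy = (true_positives + true_negatives) / total
false_negative_rate = false_negatives / explicit  # 1.0%
false_positive_rate = false_positives / benign    # ~2.0%

print(f"accuracy:            {accuracy:.1%}")
print(f"false negative rate: {false_negative_rate:.1%}")
print(f"false positive rate: {false_positive_rate:.1%}")
```

In this scenario only 20 explicit images slip through, but nearly 2,000 legitimate images are wrongly blocked — an asymmetry entirely invisible inside the single accuracy figure.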
What Detection Cannot Do
Important limitations exist:
- Circumvention: Images can be cropped, filtered, or subtly altered to fool detection systems. Determined bad actors can and do bypass these systems.
- Artistic and medical contexts: AI systems trained on typical explicit imagery may struggle with edge cases: artistic nudity, medical/educational imagery, etc.
- Disparate impact: If training data is biased toward certain demographics or body types, detection accuracy may vary across different user populations.
- Psychological enforcement: The system works partly through friction and legal reminder. If users are sophisticated enough to understand they can override warnings, the friction effect diminishes.
These limitations do not invalidate the technology — the harm reduction benefits likely outweigh the drawbacks. But they are important to acknowledge for realistic expectations.
Criminal vs. Regulatory Enforcement: Two Pathways, Minimal Action
The Online Safety Act creates two enforcement pathways for cyberflashing: criminal prosecution and regulatory enforcement.
Criminal Enforcement: Rare
Cyberflashing can be prosecuted under the criminal law. The maximum sentence is two years' imprisonment. The mental element — intent to cause harm or recklessness — must be proven.
Yet criminal prosecutions are remarkably rare. Despite the prevalence of cyberflashing (76% of teenage girls and 41% of younger women report experiencing it), prosecution numbers are minimal. Official crime statistics are not yet comprehensively published, but anecdotal evidence suggests prosecutions number in the hundreds, not thousands.
The reasons are multifaceted:
- Evidentiary burden: Proving the mental element (intent or recklessness) requires evidence of the perpetrator's state of mind.
- Resource constraints: Police forces have limited resources and may deprioritise cyberflashing relative to other crimes.
- Victim cooperation: Victims must be willing to report and to cooperate with investigation. Many choose not to.
- Jurisdiction issues: Perpetrators may be in other countries, complicating extradition and prosecution.
The result is a significant gap between prevalence and enforcement.
Regulatory Enforcement: Building but Minimal
Ofcom's enforcement of the Online Safety Act regarding cyberflashing has been slower. As noted above, no significant enforcement actions have been taken. This is partly because Ofcom is still building its enforcement machinery, partly because resources are limited, and partly because major platforms have visibly implemented detection.
However, regulatory enforcement is different from criminal enforcement. Ofcom does not need to prove intent or mental state. It simply needs to establish that a platform has not taken proportionate steps to prevent a priority offence. For platforms with no detection system, this evidentiary bar is significantly lower.
What Platforms Are Not Doing: The Compliance Gaps
Equally important as what platforms have done is what they have not done.
Robust Reporting Systems Connecting to Law Enforcement
Few platforms have implemented reporting systems that directly connect users reporting cyberflashing to law enforcement agencies. Most systems allow users to report through the platform's own mechanisms, which are then (in theory) passed to police. But the chain is opaque: users often report that reports disappear into a void, with no feedback on investigation or outcomes.
Bumble is a partial exception, offering integration with Cyber Tipline in some jurisdictions. But most platforms have not made similar commitments.
Mandatory Reporting
Even fewer platforms have implemented mandatory reporting to police. The Online Safety Act allows and arguably encourages this, but platforms have largely resisted, arguing that it is the user's decision whether to report to authorities.
This is a significant gap. Mandatory reporting would create a systematic pipeline: platform detection → automatic report to police → investigation → prosecution. Currently, the pipeline is broken: users must choose to report, and many do not.
Public Education and Awareness
Platforms have largely avoided public education campaigns about cyberflashing. Most users do not know that cyberflashing is now a criminal offence, that detection systems exist to protect them, or that they have a right to report abuse. Bumble has been an exception, running awareness campaigns and educating users about its Private Detector.
But most platforms have been silent. Tinder and Hinge's implementation of detection has not been accompanied by major public education campaigns. Users are often unaware that these features exist.
Industry Coordination and Intelligence Sharing
There is minimal industry sharing of best practices or detection intelligence. Each platform that has implemented detection has done so largely in isolation. Sharing information about circumvention attempts, new attack patterns, or improvements to detection accuracy could improve effectiveness industry-wide.
But commercial competition and legal concerns (fear of antitrust liability) have prevented this collaboration. The result is a fragmented approach to a systemic problem.
The Vulnerability Gap: Smaller Platforms and Unprotected Users
The most acute gap in the regulatory framework is accountability for smaller platforms and international operators serving UK users.
Many dating apps are operated by small teams in other jurisdictions (Eastern Europe, Southeast Asia, etc.) with minimal UK presence. These platforms may serve hundreds of thousands or millions of UK users yet fall outside Ofcom's direct jurisdiction or be so small that enforcement is logistically difficult.
Some platforms deliberately structure themselves to avoid UK regulation by claiming that UK users are merely incidental to an international service. This legal dodge has not yet been tested against Ofcom's enforcement powers, but it represents a genuine loophole in the regulatory framework.
The Online Safety Act does impose duties on platforms that reach a certain level of engagement with UK users. But determining that threshold and proving platforms have crossed it is a slow administrative process. In the meantime, users on smaller or overseas-operated platforms go unprotected.
For users on these platforms, the practical reality is clear: cyberflashing detection and prevention are not forthcoming, and regulatory enforcement is unlikely in the near term. The duty of care that the law promises is, in practice, limited.
The Road Ahead: 2026 and Beyond
Several developments will shape cyberflashing compliance over the next 12 to 24 months.
Ofcom Enforcement Action: The Most Significant Variable
If Ofcom issues compliance notices to major platforms or threatens fines, the industry response will shift dramatically. Currently, platforms like Grindr and Feeld may calculate that enforcement risk is minimal. A high-profile enforcement action targeting a major platform for non-compliance would change that calculus instantly.
Regulatory experts expect enforcement action to increase in 2026. The question is not if but when.
Legislative Refinement
Parliament may refine the Online Safety Act's treatment of cyberflashing. Potential areas for clarification include:
- Whether platforms have a duty to proactively report detected cyberflashing to police
- Whether platforms can use detection systems to harvest user data
- How the law applies to non-consensual image sharing more broadly (e.g., deepfakes)
- Whether detection systems themselves constitute a privacy violation
International Coordination
Other jurisdictions (EU, Australia, Canada) are developing their own frameworks for online sexual abuse. Greater international alignment could create global standards that platforms must meet, simplifying compliance and improving user protection.
Technology Evolution
Current detection systems use image analysis. Future systems may incorporate:
- Behavioural signals: Flagging accounts with patterns consistent with serial cyberflashing
- Blockchain and forensic verification: Using cryptographic techniques to verify consent and track non-consensual sharing
- Integration with law enforcement databases: Real-time checking against known offender profiles
These developments are in early stages but represent the trajectory of thinking about this problem.
Methodology
This investigation is based on:
- Comprehensive review of the Online Safety Act 2023 and Ofcom codes of practice and guidance documents
- Analysis of UK government crime survey data, academic research on cyberflashing prevalence, and psychological harm
- Public statements, technical documentation, and financial disclosures from major dating platforms (Bumble, Tinder, Match Group, etc.)
- Requests for comment sent to all platforms examined; responses received from Bumble and Tinder; non-responses from Grindr, Feeld, and others noted and documented
- Academic literature on image-based sexual abuse, AI-based detection efficacy, and effectiveness of friction-based interventions
- Interviews with regulatory experts, legal scholars, and academic researchers specialising in online safety (anonymised per request)
- Analysis of Match Group and Bumble financial disclosures (SEC filings 2024–2026) to quantify regulatory exposure
What To Read Next
Explore more investigations into dating app safety and regulatory compliance:
- Unmet Deadlines: How Dating Apps Are Still Failing to Report Child Sexual Abuse
- The Romance Fraud Crisis: Why Dating Apps Are Becoming Hunting Grounds for Scammers
- AI Companions Are Replacing Real Dating Apps — And Platforms Aren't Ready
- The Male Exodus: Why Men Are Abandoning Dating Apps
- Dark Patterns in Subscription Models: How Dating Apps Trap Users in Hidden Fees
- DII Investigation Launch: Holding the Dating Industry Accountable
DII Editorial Team
DatingIndustryInsights.com is an independent B2B intelligence platform covering the global online dating industry. It publishes original research, financial analysis, regulatory tracking, and investigative reporting. It operates with no advertising from the companies it covers.