Bluesky's Moderation Success: A Mirage for Dating Apps?
Bluesky removed 2.08 million accounts in 2025 whilst growing to 41.41 million users
User-submitted reports per 1,000 monthly active users dropped 50.9% from January to December 2025
Reply-filtering system reduced anti-social behaviour reports by 79%
Legal demands increased 518% in 2025, more than eight times the rate of user growth
Bluesky removed 2.08 million accounts in 2025 whilst growing to 41.41 million users, according to the platform's first full transparency report published yesterday. For dating operators watching their own bot and scammer populations metastasise, the question isn't whether the decentralised social network's human-centred moderation works today—it's whether it can possibly work at 100 million users, or 200 million. The data presents a platform achieving something the dating industry has been chasing for years: measurably reduced spam and abusive behaviour during a period of explosive growth.
User-submitted reports per 1,000 monthly active users dropped 50.9% from January to December 2025 despite the platform adding 15.47 million users. A reply-filtering system that deprioritised toxic comments behind an additional click reduced anti-social behaviour reports by 79%, figures from the report show. Dating apps have long struggled with this exact problem—how to maintain trust without manually reviewing every profile, message, or reported interaction.
But Bluesky itself acknowledges what every trust and safety team already knows: this approach 'may have to change as the platform grows further'. That caveat deserves more attention than the topline numbers.
The DII Take
Bluesky's moderation model is impressive for a platform at 41 million users, but dating operators shouldn't mistake it for a scalable blueprint.
The combination of automated detection for clear violations and human review for context-dependent cases works precisely because the platform is still small enough to make human review affordable. What matters here isn't whether Bluesky can maintain this approach—it almost certainly can't at dating industry scale—but whether its reply-filtering mechanics and labelling systems offer tactical solutions that translate to dating contexts. In dating environments, a single unfiltered scammer can cause substantially more damage than a toxic reply.
Moderation density improves, but complexity accelerates faster
The report shows Bluesky took down 2.44 million violating items in 2025, applied 16.49 million labels (up 200% year-over-year), and processed 9.97 million user reports. Automated systems flagged 2.54 million potential violations, but human moderators still reviewed nuanced cases involving harassment and context-dependent content.
Dating platforms operate with similar hybrid models, though the stakes differ. A false positive on a dating profile—suspending a legitimate user who happens to use common phrases or stock photos—directly damages revenue and retention. A false negative—missing a romance scammer who goes on to defraud members—creates legal exposure and reputational harm that can take years to recover from.
The challenge compounds at scale. Match Group (MTCH) reported 15 million paying subscribers across its portfolio in Q3 2024. Bumble (BMBL) disclosed 4.1 million paying users in the same period. Grindr (GRND) had 1.1 million subscribers. Each platform fights bot accounts, catfishing, and coordinated fraud operations across user bases that dwarf Bluesky's, whilst operating under sector-specific regulatory frameworks that don't apply to general social platforms.
Bluesky's removal of 3,619 accounts linked to suspected Russian influence operations signals something dating operators already know: sophisticated bad actors target emerging platforms early, before defences harden. The dating industry has seen this pattern repeatedly. New platforms launch with minimal verification, grow quickly on the promise of authenticity, then spend years playing catch-up as fraud networks embed themselves in the user base.
Legal requests intensify faster than user growth
Legal demands increased more than sixfold in 2025, from 238 to 1,470 requests, according to the report. That's a 518% increase against roughly 60% user growth—a troubling ratio for any platform planning international expansion.
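The ratio is simple arithmetic on the report's own figures; a quick sketch in Python, with the starting user base inferred from the 41.41 million year-end total and the 15.47 million users added:

```python
# Back-of-the-envelope check of the growth figures in the report.
legal_2024, legal_2025 = 238, 1_470    # legal demands received
users_end = 41.41e6                    # users at end of 2025
users_added = 15.47e6                  # users added during 2025
users_start = users_end - users_added  # inferred: ~25.94 million

legal_growth = (legal_2025 - legal_2024) / legal_2024  # ~5.18, i.e. 518%
user_growth = users_added / users_start                # ~0.60, i.e. 60%

print(f"Legal demand growth: {legal_growth:.0%}")              # 518%
print(f"User growth:         {user_growth:.0%}")               # 60%
print(f"Ratio:               {legal_growth / user_growth:.1f}x")  # 8.7x
```

The 8.7x figure is why the ratio, not the raw request count, is the number compliance teams should watch.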
Dating platforms face this regulatory pressure curve even more acutely. The UK Online Safety Act (OSA) imposes direct criminal liability on executives for systemic safety failures. The EU Digital Services Act (DSA) requires detailed transparency reporting from platforms above certain user thresholds. Regulators in Australia, the US, and across Asia have increased enforcement actions targeting dating services specifically.
Bluesky's legal request volume at 41 million users suggests regulatory attention intensifies far faster than the user base grows. For dating apps, which face sector-specific scrutiny around age verification, identity authentication, and romance fraud, that multiplier likely accelerates further.
The platform's transparency report doesn't break down legal requests by jurisdiction or type, but the aggregate growth rate matters. Compliance teams at dating companies should note that even platforms explicitly designed around decentralisation and user control—Bluesky operates on the federated AT Protocol—cannot insulate themselves from state-level demands as they scale.
What translates, what doesn't
Bluesky's reply-filtering system—which reduced anti-social behaviour reports by 79%—offers the most immediately applicable lesson for dating platforms. The mechanism is straightforward: potentially toxic replies get deprioritised behind an interaction barrier rather than removed entirely. Users can still access them, but the friction reduces both visibility and the psychological impact of harassment.
Dating apps could adapt this for message filtering in ways that preserve user agency whilst reducing exposure to opening-line harassment and spam. The approach sits between aggressive automated blocking (which creates false positives and user frustration) and laissez-faire reporting systems (which place the burden entirely on victims).
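A minimal sketch of that middle ground, assuming an upstream classifier already assigns each message a toxicity score; the `Inbox` and `Message` names and the 0.7 threshold are illustrative, not Bluesky's or any dating platform's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str
    toxicity: float  # score from an upstream classifier, 0.0-1.0

@dataclass
class Inbox:
    """Deprioritise likely-toxic messages behind an extra click
    instead of removing them outright."""
    threshold: float = 0.7
    visible: list = field(default_factory=list)
    filtered: list = field(default_factory=list)

    def deliver(self, msg: Message) -> None:
        # Messages above the threshold are hidden, not deleted:
        # the recipient retains agency and can expand the filtered tray.
        if msg.toxicity >= self.threshold:
            self.filtered.append(msg)
        else:
            self.visible.append(msg)

inbox = Inbox()
inbox.deliver(Message("alice", "Hey, loved your profile!", 0.05))
inbox.deliver(Message("spam_bot", "Click this link now!!!", 0.92))
print(len(inbox.visible), len(inbox.filtered))  # 1 1
```

The design choice doing the work is that a borderline false positive costs the sender one extra click from the recipient, rather than a wrongly deleted message or a suspended account.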
What clearly won't translate is Bluesky's reliance on human review for context-dependent moderation. The platform admits this model's limitations even as it celebrates current effectiveness. Dating platforms generate substantially more intimate, context-heavy content than social networks. A suggestive message might be welcomed flirting in one conversation and reportable harassment in another. Human moderators cannot possibly review this at scale, which is why the industry has spent years building risk-scoring models, natural language processing systems, and behavioural analytics.
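A hedged sketch of what such risk scoring might look like in practice; the signal names and weights below are illustrative placeholders, not any platform's calibrated model:

```python
def risk_score(signals: dict) -> float:
    """Combine weighted behavioural signals into a single score.
    Weights are illustrative placeholders, not calibrated values."""
    weights = {
        "new_account": 0.25,        # account younger than 48 hours
        "off_platform_push": 0.35,  # pushes to move off-platform early
        "template_message": 0.20,   # near-duplicate of known scam openers
        "payment_mention": 0.20,    # mentions money, gift cards, crypto
    }
    return sum(weights[k] for k, fired in signals.items() if fired)

score = risk_score({
    "new_account": True,
    "off_platform_push": True,
    "template_message": False,
    "payment_mention": True,
})
# Auto-action above a high threshold; route only the ambiguous
# middle band to human review, reserving moderators for context.
action = "block" if score >= 0.75 else "review" if score >= 0.5 else "allow"
print(round(score, 2), action)  # 0.8 block
```

The point of the two thresholds is triage: automation handles the clear ends of the distribution, so the scarce human-review capacity Bluesky relies on is spent only where context actually matters.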
The real question facing dating operators isn't whether to copy Bluesky's approach—they can't—but whether to acknowledge the same trade-offs openly.
Bluesky published user-submitted report volumes, takedown numbers, false positive rates, and the explicit caveat that current methods won't scale. Most dating platforms reveal a fraction of this data, and only when compelled by regulation. That opacity might feel safer in the short term, but it leaves the industry perpetually playing defence when the next trust crisis emerges.
Bluesky had the luxury of building moderation systems before reaching critical mass. Dating platforms are trying to retrofit trust infrastructure whilst managing paying subscribers, investor expectations, and regulatory deadlines. The transparency report doesn't solve that problem, but it does demonstrate that users will tolerate—perhaps even appreciate—honesty about the limitations of current approaches. For an industry that's spent years promising safety it cannot fully deliver, that might be the more valuable lesson.
Reply-filtering systems that deprioritise rather than remove problematic content offer a practical middle ground for dating platforms struggling with message harassment whilst avoiding false positives
Regulatory pressure will intensify faster than user growth—dating operators should prepare for compliance demands to scale at multiples of their user base expansion
Transparency about moderation limitations may prove more valuable than overpromising safety capabilities, particularly as regulatory frameworks increasingly demand detailed reporting