    After's Anti-Ghosting Pitch: A Solution or Just More Friction?

    • After, an Austin-based dating app, requires users to provide an explanation when unmatching someone
    • Roughly 42% of Austin residents are unmarried, and the city has one of the highest interstate migration rates in the US
    • Match Group disclosed in 2022 it was investing heavily in AI moderation to reduce reliance on human moderators
    • A 2023 NIST study found facial recognition false rejection rates vary by as much as 100x across demographic groups

    Ghosting has plagued online dating since its inception, costing platforms millions in churned users and torpedoing countless matches before they begin. A new Austin-based app called After thinks it has the answer: force users to explain themselves when they unmatch someone. The question is whether mandatory rejection feedback solves a genuine user problem or simply adds friction to an already exhausting experience.

    The app, which launches this month, requires users to provide a reason whenever they disconnect from a match. That explanation is then converted into a message sent to the other person. According to the company, the feature is designed to prevent the abrupt disappearances that characterise modern dating and replace them with what it calls 'closure'.

    The moderation labour no one's pricing in

    The most striking aspect of After's positioning is the claim that it's 'created and moderated by women'. If that means what it appears to mean—that actual humans are reviewing unmatch explanations before they're dispatched—the operational implications are significant. Manual content moderation at scale is expensive, emotionally taxing work that doesn't scale linearly with user growth.

    Woman reviewing content on computer screen

    Dating apps have spent the past five years trying to automate trust and safety functions, not expand human review. Match Group (MTCH) disclosed in its 2022 annual report that it was investing heavily in AI moderation tools specifically to reduce reliance on contract moderators. Bumble (BMBL) has leaned hard on PhotoVerify and other automated systems for similar reasons. The economics of human moderation are brutal: high turnover, significant training costs, and a psychological toll that creates liability risk.

    If After is genuinely routing every unmatch explanation through human reviewers, that creates a ceiling on growth velocity that venture investors won't love.

    If it's not, and 'moderated by women' is aspirational or refers only to edge cases, then the marketing claim becomes misleading, and the app risks letting the same toxic behaviour it claims to prevent slip through its automated filters.

    What Bumble and Hinge already tried

    After isn't the first platform to attempt behaviour modification through forced explanations. Bumble introduced 'accountability' features in 2020, prompting users who let matches expire to provide context. The initiative was quietly deprioritised within 18 months, absorbed into broader trust and safety functions without the fanfare that accompanied its launch.

    Hinge has long used prompts and conversation starters to reduce low-effort engagement, but it has deliberately avoided mandatory explanation features. The company's research, according to statements from former product leads, suggested that forced interactions often produce lower-quality engagement than optional ones. Users resent being made to explain themselves, particularly in contexts where silence is itself a form of communication.

    Person using dating app on mobile phone

    The pattern across implementations is consistent: features that force users to perform emotional labour tend to get gamed or abandoned. They work in theory but produce perverse incentives in practice. Users learn to select the path of least resistance—generic, meaningless explanations that satisfy the requirement without delivering the promised closure.

    Austin as a stress test

    Launching in Austin presents both opportunity and risk. The city's population skews young, tech-literate, and single, with a transient cohort of recent arrivals looking to build social networks. According to US Census data, roughly 42% of Austin residents are unmarried, and the city has one of the highest rates of interstate migration in the country. That creates natural demand for relationship infrastructure.

    But Austin is also a hookup market, not a relationship market. The city's reputation as a weekend destination for bachelor parties and music festivals has shaped dating culture in ways that don't align neatly with After's 'unapologetically romantic' positioning. Tinder and Feeld dominate usage patterns among under-35s, according to data from Sensor Tower. A relationship-focused app requiring mandatory emotional disclosure is swimming against the behavioural current.

    The test will be whether After can attract users who actually want the friction it's introducing.

    There's a segment of dating app users—particularly women over 30—who are exhausted by low-effort interactions and might welcome a platform that enforces higher standards. Whether that segment is large enough to build a sustainable business in a single metro area is another question entirely.

    The verification problem

    After claims AI-powered verification will keep out bots and bad actors. Every dating app makes this claim. The reality is that facial recognition technology remains imperfect, particularly across skin tones and gender presentations. A 2023 study from the National Institute of Standards and Technology found that commercial facial recognition systems still exhibit higher error rates for women and people of colour, with false rejection rates varying by as much as 100x across demographic groups.

    Smartphone displaying verification interface

    More importantly, verification doesn't solve the harder problems: users who are real people but lie about relationship intentions, users who pass verification then behave abusively, or users who create accounts with authentic photos but use the platform to manipulate or extract value from others. The emphasis on verification as a trust mechanism is telling. It suggests After is optimising for the easiest problem to communicate—fake profiles—rather than the hardest ones to solve.

    What to watch

    The proof will arrive within six months. If After gains traction, the metric that matters isn't downloads—it's sustained engagement and match-to-conversation conversion rates compared with incumbents. Forced explanations will either improve match quality by filtering out low-intent users or drive high-intent users toward platforms with less friction.

    The moderation model will also reveal itself quickly. If the app scales beyond a few thousand users, the economics of human review will become unsustainable, and After will either need to quietly automate or accept a permanently constrained growth trajectory. For an industry that has spent a decade chasing network effects, deliberately limiting scale would be a genuine departure. Whether it's a viable one remains to be seen. After is part of a growing anti-ghosting movement in dating apps, joining platforms like Elate and Snack that are attempting similar interventions.

    • Watch whether After's human moderation model can scale economically or whether it quietly shifts to automation within six months
    • The real test is sustained engagement and match-to-conversation conversion rates, not download numbers—forced friction will either filter for quality or drive users away
    • Verification systems remain imperfect and don't address the harder trust problems of deception, manipulation, and misrepresented intentions that plague dating platforms
