Dating Industry Insights

    Dating Algorithms: The Invisible Hand Shaping Modern Romance

    Research Report

    This report examines how recommendation algorithms on dating platforms shape the modern dating market, determining which potential partners users encounter and which connections never occur. It analyses the tension between engagement-optimised systems that keep users swiping and outcome-optimised systems designed to facilitate actual relationships, exploring the ethical, socioeconomic, and regulatory implications of algorithmic dating recommendation.

    • At least half of the rise in income inequality between 1980 and 2020 can be attributed to changing preferences and the rise of online dating
    • Users' stated preferences frequently contradict their revealed behaviour, with algorithms that incorporate behavioural signals outperforming those relying solely on profile settings
    • Dating platforms using Elo-style attractiveness scoring created stratified marketplaces where high-scoring users were shown primarily to other high-scoring users
    • Major platforms have moved away from explicit Elo scoring toward multi-dimensional models, though the degree of appearance-based sorting within newer systems remains difficult to verify
    • The EU's Digital Services Act imposes transparency obligations on recommendation systems, signalling a regulatory trend toward algorithmic accountability
    Person using dating app on smartphone

    The DII Take

    Recommendation algorithms are the invisible hand of modern dating. They determine not just who appears in a user's feed but who does not appear, creating a curated reality that feels like organic discovery but is actually a constructed marketplace. The ethical implications are substantial: algorithms that optimise for engagement time rather than relationship formation may actively work against users' romantic interests. The platforms that align their algorithms with user outcomes rather than engagement metrics will build the most trusted products. Those that continue to optimise for addictive engagement at the expense of relationship success will face growing regulatory scrutiny and user backlash.

    How Dating Algorithms Work

    Modern dating recommendation algorithms use multiple data sources to predict which profiles a user will find attractive and engage with. Explicit preferences (stated age range, location, education, interests) provide the baseline filtering layer. Implicit preferences (which profiles the user views longest, which they swipe right on, which messages they respond to, which matches they actually meet) provide the behavioural layer that often contradicts stated preferences. Collaborative filtering (what similar users find attractive) extends predictions beyond individual behaviour. Engagement prediction (which profiles will generate the most interaction: swipes, messages, app opens) optimises for platform metrics.
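    The layered structure above can be sketched as a simple ranking function. The sketch below is illustrative only: the field names, the weights, and the treatment of stated filters as a hard gate are assumptions for exposition, not any platform's disclosed design.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    age: int
    explicit_ok: bool       # passes stated filters (age range, distance, ...)
    behaviour_score: float  # implicit layer: view time, swipes, replies (0-1)
    collab_score: float     # collaborative filtering: similar users' tastes (0-1)
    engagement_score: float # predicted interaction volume for the platform (0-1)

def rank_score(c: Candidate, w_behaviour=0.5, w_collab=0.3, w_engagement=0.2) -> float:
    """Blend the three predictive layers; stated filters act as a hard gate."""
    if not c.explicit_ok:
        return 0.0
    return (w_behaviour * c.behaviour_score
            + w_collab * c.collab_score
            + w_engagement * c.engagement_score)

candidates = [
    Candidate(28, True, 0.8, 0.6, 0.4),
    Candidate(35, False, 0.9, 0.9, 0.9),  # excluded by stated preferences
    Candidate(31, True, 0.5, 0.7, 0.9),
]
feed = sorted(candidates, key=rank_score, reverse=True)
```

    The relative weights are the critical design choice: shifting weight from `behaviour_score` to `engagement_score` reorders the feed toward the platform's metrics rather than the user's revealed taste.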

    The critical design choice is which metric the algorithm optimises for. An algorithm optimised for engagement time will show profiles that keep users swiping, messaging, and returning to the app, not necessarily profiles that will produce satisfying matches. An algorithm optimised for relationship formation will show profiles most likely to produce mutual interest and progression to meeting, even if this reduces total engagement time. These objectives can conflict: the most engaging profiles (conventionally attractive, provocative, aspirational) may not be the most compatible matches.
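    The conflict can be made concrete with two hypothetical profiles scored under each objective; the probabilities here are invented for illustration.

```python
# Invented per-profile predictions: how likely the user is to keep
# interacting, versus how likely a mutual match is to form.
profiles = {
    "aspirational": {"p_engage": 0.9, "p_mutual_match": 0.1},
    "compatible":   {"p_engage": 0.5, "p_mutual_match": 0.6},
}

def top_pick(profiles, objective):
    """Return the profile name that maximises the chosen objective."""
    return max(profiles, key=lambda name: profiles[name][objective])

engagement_pick = top_pick(profiles, "p_engage")        # keeps the user swiping
outcome_pick = top_pick(profiles, "p_mutual_match")     # more likely to produce a match
```

    The same candidate pool yields different top recommendations depending solely on the objective, which is the tension the paragraph above describes.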

    Attractiveness Scoring and Elo Systems

    Dating platforms historically used attractiveness scoring systems (often called Elo ratings, after the chess ranking system) to match users of similar perceived desirability. Users who received more right-swipes were assigned higher attractiveness scores, and the algorithm showed high-scoring users to other high-scoring users.
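    The standard chess Elo update, adapted to swipes, looks roughly like the sketch below. Treating a right-swipe as a "win" for the profile being rated is an assumption about how platforms adapted the formula; the actual implementations were never publicly specified.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for a player rated r_a against r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_on_swipe(r_profile: float, r_swiper: float,
                    liked: bool, k: float = 32) -> float:
    """Nudge a profile's rating after one swipe, chess-Elo style.
    A like from a highly rated swiper moves the score more than
    a like from a low-rated one."""
    actual = 1.0 if liked else 0.0
    return r_profile + k * (actual - expected_score(r_profile, r_swiper))
```

    Under this update, profiles that attract likes from already high-scoring users climb fastest, which is the stratification mechanism described below.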

    This system created a stratified marketplace where the most conventionally attractive users were shown primarily to each other, while less conventionally attractive users were shown primarily to each other. The system optimised for mutual right-swipe probability but reinforced appearance-based evaluation rather than compatibility-based matching.

    Most major platforms have publicly moved away from explicit Elo-style scoring, replacing it with multi-dimensional attractiveness models that incorporate behavioural compatibility alongside physical appearance. Tinder's Chemistry feature and Hinge's recommendation engine both represent this evolution. However, the degree to which appearance-based sorting persists implicitly within these newer systems is difficult to verify from outside the platforms.

    Filter Bubbles in Dating

    Algorithmic recommendation creates filter bubbles in dating analogous to those documented in news and social media. A user who consistently swipes right on a particular demographic (age group, ethnicity, body type, education level) receives more profiles matching that pattern, reinforcing existing preferences rather than expanding them. Over time, the algorithm narrows the user's dating pool to an increasingly homogeneous set of candidates.
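    A toy simulation illustrates how this narrowing compounds. Every parameter here (feed size, like bias, number of rounds) is invented; the point is only that re-sampling the feed from the previous round's like rate drives the favoured demographic's share toward homogeneity.

```python
import random

random.seed(0)

def simulate_feedback_loop(initial_share=0.5, like_bias=0.7,
                           rounds=20, feed_size=50):
    """Each round, the feed's share of a favoured demographic is reset to
    that demographic's share of the previous round's likes."""
    share = initial_share
    history = [share]
    for _ in range(rounds):
        shown_favoured = sum(random.random() < share for _ in range(feed_size))
        liked_favoured = sum(random.random() < like_bias
                             for _ in range(shown_favoured))
        liked_other = sum(random.random() < (1 - like_bias)
                          for _ in range(feed_size - shown_favoured))
        total_likes = liked_favoured + liked_other
        if total_likes:
            share = liked_favoured / total_likes
        history.append(share)
    return history
```

    Even a modest behavioural bias, fed back through the recommender each round, produces the increasingly homogeneous candidate pool the paragraph describes.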

    This narrowing may feel helpful (showing more of what the user wants) but carries costs. Research on assortative mating, as covered in DII's Science of Relationships analysis, suggests that algorithmic filtering may contribute to increased socioeconomic homogamy, matching people within similar education and income brackets more efficiently than the pre-digital dating market did. The societal implications include reduced cross-class partnership and potentially increased economic inequality.

    The Ethical Framework

    Several ethical considerations arise from algorithmic dating recommendation.

    • Transparency about how algorithms work and what they optimise for is currently minimal. Users do not know why they are shown specific profiles or why other profiles are hidden.
    • Bias in training data can perpetuate or amplify existing prejudices. If historical swipe data reflects racial, body-type, or socioeconomic biases, algorithms trained on that data will reproduce those biases in their recommendations.
    • Manipulation risk exists when algorithms optimise for engagement over outcomes, potentially keeping users on the platform longer by showing aspirational but incompatible matches rather than realistic but compatible ones.
    • Consent is questionable when users do not understand or cannot control how algorithms shape their dating experience.

    This analysis draws on published research on recommendation algorithms, assortative mating (including Federal Reserve research on online dating and income inequality), platform-specific algorithm descriptions, and DII's assessment of the ethical implications of algorithmic dating recommendation.

    The Revealed vs Stated Preferences Problem

    The most important insight from recommendation algorithm research is that users' stated preferences frequently contradict their revealed behaviour, and the best algorithms learn to weight behaviour over declarations.

    A user who states they prefer partners aged 25-30 but consistently engages with profiles of people aged 30-40 is revealing a preference that their settings do not reflect. A user who lists "hiking" as an interest but whose messaging patterns show strongest engagement with profiles mentioning "books" and "coffee" is revealing priorities that their self-description obscures. The Eastwick and Finkel speed-dating research, detailed in DII's Science of Relationships analysis, demonstrated this stated-versus-revealed gap systematically: participants who said they valued physical attractiveness above all acted no differently from those who claimed to prioritise personality.

    Algorithms that incorporate revealed preferences outperform those that rely solely on stated preferences. Tinder's Chemistry tool and Hinge's recommendation engine both analyse behavioural signals (view duration, messaging patterns, response rates) alongside profile preferences. The AI-native platforms (Fate, Known) go further by eliminating stated preference filters entirely, relying on AI analysis of voice, conversation, and behaviour to determine compatibility.
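    The weighting of behaviour over declarations can be sketched as a simple blend. The 0.7 behaviour weight and the field choices are invented parameters for illustration, not disclosed platform values.

```python
def blended_age_target(stated_range, engaged_ages, behaviour_weight=0.7):
    """Blend the midpoint of a stated age range with the mean age of
    profiles the user actually engaged with, weighting revealed
    behaviour over the declared setting."""
    stated_mid = sum(stated_range) / 2
    revealed_mid = sum(engaged_ages) / len(engaged_ages)
    return ((1 - behaviour_weight) * stated_mid
            + behaviour_weight * revealed_mid)

# A user who states 25-30 but consistently engages with 31-38:
target = blended_age_target((25, 30), [33, 36, 38, 31, 35])
```

    The blended target drifts well above the stated range, which is exactly the silent override of user settings that raises the consent questions discussed below.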

    The ethical dimension of the stated-versus-revealed gap is significant. When an algorithm overrides a user's stated preferences based on observed behaviour, it is making a judgement about what the user "really" wants that the user has not explicitly endorsed. Users who discover that the algorithm is ignoring their stated preferences may feel manipulated, even if the algorithm's predictions are more accurate than their self-reports. Transparency about how behavioural data influences recommendations is essential for maintaining trust.

    Close-up of dating app interface on mobile device

    The Engagement-Outcome Misalignment

    The most consequential algorithm design choice in dating is whether to optimise for engagement (time spent on the platform) or outcomes (relationships formed). These objectives are not merely different; they are actively in tension.

    An engagement-optimised algorithm shows profiles that maximise swipes, messages, and app opens. This means showing aspirational profiles that are attractive enough to generate interest but not so compatible that the user finds a partner and leaves the platform. The cynical interpretation is that engagement-optimised algorithms are designed to keep users searching rather than finding, because a user who finds a partner is a lost subscriber.

    An outcome-optimised algorithm shows profiles most likely to produce mutual interest, conversation progression, and in-person meetings. This may mean showing fewer profiles overall (reducing swipe volume), showing less conventionally attractive but more compatible profiles (reducing the dopamine hit of aspirational swiping), and actively pushing users toward meeting (reducing the in-app messaging time that engagement metrics count). An outcome-optimised algorithm may produce lower engagement metrics but higher user satisfaction and stronger word-of-mouth growth.

    Hinge's public positioning as "designed to be deleted" represents the most explicit outcome-optimised brand promise in the industry. Whether Hinge's algorithm genuinely optimises for relationship formation rather than engagement is not independently verifiable, but the positioning acknowledges the tension and aligns the brand with user interests.

    The Socioeconomic Impact

    Research from the Federal Reserve Banks of St. Louis and Dallas and Haverford College found that at least half of the rise in income inequality between 1980 and 2020 can be attributed to changing preferences and the rise of online dating. The mechanism is algorithmic reinforcement of assortative mating: dating platforms make it easier for people to filter partners by education, income, and socioeconomic status, increasing the probability that high-earning individuals partner with other high-earning individuals.

    This finding has profound implications for how algorithms are designed. An algorithm that surfaces profiles based on education and income similarity (either through explicit filters or through collaborative filtering that identifies similar-background clusters) reinforces socioeconomic stratification.

    An algorithm that deliberately surfaces profiles across socioeconomic boundaries could reduce assortative mating, but would likely frustrate users who are filtering for socioeconomic compatibility intentionally. The tension between individual user preferences (which favour filtering for similar backgrounds) and societal outcomes (which benefit from cross-class partnership) creates a design dilemma that has no easy resolution. Platforms currently optimise for individual preferences, which means that their algorithms reinforce rather than mitigate socioeconomic sorting.

    What Users Should Know

    Users interact with dating algorithms without understanding how those algorithms shape their experience. DII recommends that users understand several key principles.

    • Your feed is not random: Every profile you see has been selected by an algorithm that predicts you will engage with it. Profiles you do not see have been filtered out by the same algorithm. Your dating pool is algorithmically curated whether you know it or not.
    • Your behaviour trains the algorithm: Every swipe, view, message, and response teaches the algorithm about your preferences. If you consistently engage with a particular demographic, the algorithm will show you more of that demographic and fewer alternatives.
    • Algorithms may not have your best interests at heart: An algorithm optimised for engagement will show you profiles that keep you swiping, not necessarily profiles that will make you happy. Understanding this misalignment helps users approach their feeds with appropriate scepticism.
    • Settings and preferences are starting points, not guarantees: The algorithm may weight your behaviour differently from your stated preferences. If your experience does not match your settings, your behaviour may be sending signals that override your preferences.

    The Regulatory Horizon

    Regulators are beginning to examine dating recommendation algorithms alongside social media and content recommendation algorithms.

    The EU's Digital Services Act (DSA) imposes transparency obligations on very large online platforms regarding their recommendation systems. While most dating platforms do not currently meet the DSA's user threshold for "very large" platform classification, the regulatory direction is toward greater algorithmic transparency across all consumer-facing platforms.

    The UK's Online Safety Act creates obligations around content recommendation that may extend to dating platforms, particularly regarding the recommendation of profiles that could facilitate harm.

    The broader regulatory trend is toward algorithmic accountability: requiring platforms to explain how their recommendation systems work, what they optimise for, and how they affect user outcomes. Dating platforms should prepare for a regulatory environment that demands algorithmic transparency and accountability, even if specific dating-focused regulations have not yet been enacted.

    Algorithm Transparency: What Platforms Disclose

    Major dating platforms disclose minimal information about how their recommendation algorithms work, creating an information asymmetry between platforms and users.

    Tinder has publicly stated that it no longer uses an Elo-based attractiveness score, replacing it with a multi-factor system that considers profile quality, engagement patterns, and user preferences. The specific factors and their relative weights are not disclosed.

    Hinge describes its algorithm as learning from user behaviour, particularly "Most Compatible" picks that use machine learning to identify potential matches. The company has published research partnerships and blog posts about its recommendation approach, providing more transparency than most competitors while still keeping proprietary details private.

    Bumble has disclosed that its algorithm considers profile completion, activity level, and user preferences, but specific technical details are not public. The forthcoming AI-first platform rebuild may include a different recommendation architecture.

    The lack of algorithmic transparency is not unique to dating; social media, e-commerce, and content platforms all guard their recommendation systems as trade secrets. But the stakes in dating are arguably higher: a recommendation algorithm that shapes who you partner with has more consequential effects on your life than one that shapes what videos you watch or what products you buy.

    Data visualization and analytics on computer screen

    The Future of Dating Algorithms

    Several developments will reshape dating recommendation algorithms over the next five years.

    • Multi-modal matching: Incorporating voice, video, and behavioural data alongside photos and text will produce more accurate compatibility predictions. The AI-native platforms are pioneering this approach; established platforms will follow.
    • Explainable AI techniques: Enabling platforms to show users why they are being shown specific profiles, addressing the transparency deficit. A system that says "we recommended this person because you both enjoy hiking and have similar communication styles" is more trustworthy than a black-box recommendation with no explanation.
    • Outcome-feedback loops: Tracking which matches lead to dates, relationships, and long-term satisfaction will enable algorithms that optimise for genuine relationship outcomes rather than engagement proxies. The platforms that implement these feedback loops, which require systematic outcome tracking that most platforms do not currently conduct, will build the most effective recommendation systems.
    • User-controlled algorithm tuning: Giving users the ability to adjust algorithm behaviour (prioritise compatibility over attractiveness, expand or narrow demographic range, weight different preference dimensions) will shift power from platform to user. This approach sacrifices some algorithmic efficiency for user agency, a trade-off that trust-building platforms may find commercially attractive.
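    The explainable-AI idea above amounts to generating a human-readable reason from overlapping signals. The field names and phrasing in this sketch are hypothetical.

```python
def explain_recommendation(user: dict, candidate: dict) -> str:
    """Build a human-readable reason from overlapping profile signals."""
    reasons = []
    shared = sorted(set(user["interests"]) & set(candidate["interests"]))
    if shared:
        reasons.append("you both enjoy " + " and ".join(shared))
    style = user.get("communication_style")
    if style and style == candidate.get("communication_style"):
        reasons.append("you have similar communication styles")
    if not reasons:
        return "This profile matched your general activity patterns."
    return "We recommended this person because " + ", and ".join(reasons) + "."

message = explain_recommendation(
    {"interests": ["hiking", "books"], "communication_style": "frequent"},
    {"interests": ["hiking", "coffee"], "communication_style": "frequent"},
)
```

    Even this trivial template surfaces more of the algorithm's reasoning than most current platforms disclose.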

    The Platform's Incentive Problem

    The recommendation algorithm is where the dating platform's commercial interests and its users' romantic interests most directly conflict, and understanding this conflict is essential for anyone evaluating dating platform products or business models.

    A dating platform that perfectly matches every user with their ideal partner on the first attempt would be commercially disastrous: every successful match would eliminate two paying subscribers. The platform's revenue depends on users remaining subscribed long enough to generate sufficient lifetime value, which means the optimal commercial outcome is not instant perfect matching but sustained engagement with periodic successes that maintain hope without eliminating the need for the service.

    This incentive misalignment does not mean that platforms deliberately produce bad matches. It means that the optimisation target for the recommendation algorithm (maximise engagement and subscription duration) does not perfectly align with the user's goal (find a compatible partner as quickly as possible). The misalignment is structural rather than malicious, but it produces real consequences for users who experience an endless stream of almost-right matches that keep them engaged without satisfying their fundamental need.
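    The structural misalignment can be stated as two toy revenue functions; the fees and counts below are invented for illustration.

```python
def subscription_revenue(monthly_fee: float, months_until_match: float) -> float:
    """Under subscriptions, revenue scales with time spent searching:
    a slower match is worth more to the platform."""
    return monthly_fee * months_until_match

def pay_per_date_revenue(fee_per_date: float, dates_arranged: int) -> float:
    """Under outcome-aligned pricing, revenue scales with dates delivered,
    not with time on the platform."""
    return fee_per_date * dates_arranged

quick_match = subscription_revenue(30, 3)   # matched in three months
slow_match = subscription_revenue(30, 12)   # same user, matched in a year
```

    Under the subscription function, matching the same user four times more slowly quadruples revenue; under the outcome-aligned function, speed of matching has no revenue penalty at all.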

    The AI-native platforms' outcome-aligned pricing models (Known's pay-per-date model) represent the most direct attempt to resolve this incentive misalignment by aligning platform revenue with user success rather than user engagement. Under this model, the platform profits when users meet suitable people, creating an incentive to match effectively rather than to sustain engagement.

    The recommendation algorithm is the dating platform's most powerful and least transparent product decision. As regulatory scrutiny of algorithmic systems increases across technology sectors, dating platforms will face growing pressure to explain, justify, and open their algorithms to external audit. The platforms that embrace algorithmic transparency proactively will build trust advantages that opaque competitors cannot match.

    The future of dating algorithms lies not in better prediction of attraction (which the academic evidence suggests has fundamental limits) but in better facilitation of discovery: helping users encounter potential partners they would not have found on their own, in contexts that enable genuine chemistry assessment. Research exploring how beliefs about algorithms shape dating success suggests that user perceptions and expectations play a critical role in how effectively algorithmic matching systems work in practice.

    What This Means

    Dating platforms face a fundamental choice between engagement-optimised algorithms that maximise user retention and outcome-optimised systems that prioritise relationship formation. The platforms that transparently align their algorithms with user outcomes rather than commercial engagement metrics will build competitive advantages through trust and word-of-mouth growth. Regulatory pressure toward algorithmic transparency and accountability will accelerate this shift, forcing platforms to justify their design choices and potentially reveal the extent to which their systems prioritise platform interests over user welfare.

    What To Watch

    Monitor regulatory developments in the EU and UK regarding algorithmic transparency requirements and whether these extend explicitly to dating platforms. Watch for platforms that implement explainable AI features showing users why specific profiles are recommended, and observe whether outcome-aligned pricing models (pay-per-date, success-based fees) gain traction as alternatives to subscription models. Track research on the socioeconomic effects of algorithmic assortative mating and whether platforms respond with design changes that facilitate cross-class connections rather than reinforcing stratification.
