The Product-Market Fit Survey: How One Question Tells You If Your Product Will Survive

Most startups don’t die from building the wrong product. They die from building a product nobody would miss.

That distinction matters. Teams spend months refining features, improving onboarding, and optimizing funnels — but never ask the one question that predicts whether any of it matters: “How would you feel if you could no longer use this product?”

This is the Product-Market Fit (PMF) survey. It was developed by Sean Ellis after studying nearly 100 startups, and it has become the most widely used early-stage diagnostic in the startup ecosystem. Companies like Slack, Dropbox, Superhuman, and Eventbrite have used it to measure and improve their market position.

This guide covers the research behind the survey, the frameworks for acting on results, and the practical steps for running it in your own business — whether you’re a two-person startup or a 50-person team.


What product-market fit actually means

The term was coined by Andy Rachleff (co-founder of Benchmark Capital) and popularized by Marc Andreessen in his 2007 essay The Only Thing That Matters.

Andreessen’s definition is blunt:

“Product/market fit means being in a good market with a product that can satisfy that market.”

He argues that of the three elements that determine a startup’s fate — team, product, and market — market is the most important. A great product in a bad market fails. A mediocre product in a great market can succeed.

His description of what PMF looks like in practice:

“Customers are buying the product just as fast as you can make it — or usage is growing just as fast as you can add more servers. Money from customers is piling up in your company checking account.”

And what it looks like when it’s absent:

“Customers aren’t quite getting value out of the product, word of mouth isn’t spreading, usage isn’t growing that fast, press reviews are kind of ‘blah,’ the sales cycle takes too long, and lots of deals never close.”

This creates a binary reality. Andreessen calls it BPMF (Before Product-Market Fit) and APMF (After Product-Market Fit). Everything a startup does before PMF should be focused on finding it. Everything after should be focused on scaling it.

The problem is: how do you measure it?


The Sean Ellis test: one question, one threshold

Sean Ellis — the first marketer at Dropbox, founding team at LogMeIn (sold for $4.3 billion), and head of growth at Eventbrite — developed a survey-based approach after finding that traditional metrics like NPS and customer satisfaction didn’t reliably predict growth.

His method is built around a single question:

“How would you feel if you could no longer use [product]?”

With three response options:

  1. Very disappointed
  2. Somewhat disappointed
  3. Not disappointed (it isn’t really that useful)

The metric is the percentage of respondents who answer “very disappointed.”

After benchmarking results across nearly 100 startups, Ellis found a consistent pattern:

  • Companies where 40% or more said “very disappointed” almost always found sustainable, scalable growth
  • Companies below 40% consistently struggled — regardless of funding, team size, or feature set

This 40% threshold became the standard benchmark for product-market fit.
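The score itself is simple arithmetic: the share of qualified respondents who answer "very disappointed." A minimal sketch in Python (the function and data are illustrative, not part of Ellis's materials):

```python
from collections import Counter

def pmf_score(responses):
    """Percentage of respondents answering 'very disappointed'.

    `responses` is a list of answers to the Ellis question, e.g.
    ["very disappointed", "somewhat disappointed", ...].
    """
    if not responses:
        raise ValueError("no qualified responses")
    counts = Counter(r.strip().lower() for r in responses)
    return 100 * counts["very disappointed"] / len(responses)

# Invented example: 50 qualified responses, 21 of them "very disappointed".
responses = (
    ["very disappointed"] * 21
    + ["somewhat disappointed"] * 19
    + ["not disappointed"] * 10
)
print(f"PMF score: {pmf_score(responses):.0f}%")  # 21/50 -> 42%, above the 40% bar
```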

Why “disappointment” instead of “satisfaction”

The question is deliberately framed as a loss. Asking whether people like your product invites polite, non-committal answers. Asking whether they’d be disappointed without it reveals dependency — how necessary the product has become in their life or workflow.

Products people would miss are products with market fit. Products people wouldn’t notice losing are features looking for a problem.


How Superhuman turned PMF into a system

The most detailed public case study of the PMF survey in action comes from Rahul Vohra, CEO of Superhuman (the premium email client). His framework, published by First Round Review, has become required reading in Y Combinator and startup accelerators worldwide.

When Vohra first ran the PMF survey in summer 2017, Superhuman’s score was 22% — well below the 40% threshold. Instead of panicking or pivoting, he built a systematic engine to improve the score.

Running the process every quarter over the next 18 months, the team moved Superhuman's score from 22% → 32% → 48% → 58%.

Here’s the framework he used.

[Figure: The PMF survey process, a 5-step framework with Survey, Segment, Analyze, Build, and Track stages, plus score interpretation zones]


The 5-step PMF engine

Step 1: Survey

Run the Sean Ellis question plus four follow-up questions:

  1. “How would you feel if you could no longer use [product]?” — the core PMF question
  2. “What is the main benefit you receive from [product]?” — reveals your actual value proposition (which may differ from what you think it is)
  3. “What type of person do you think would most benefit from [product]?” — your users describe your ideal customer better than you can
  4. “How can we improve [product] for you?” — surfaces gaps holding back the “somewhat disappointed” group
  5. “What would you likely use as an alternative?” — identifies your real competitive landscape

Step 2: Segment

Don’t look at the aggregate score first. Instead, segment responses by user type, role, company size, or use case — and find which segment has the highest PMF score.

When Superhuman segmented their results, they found that founders, managers, executives, and business development professionals had significantly higher “very disappointed” rates than the overall average. By narrowing their focus to this segment alone, their PMF score jumped by 10 percentage points — without changing the product at all.

This step produces your High-Expectation Customer (HXC) — the persona whose needs you should optimize for. Superhuman named theirs “Nicole”: a busy professional who lives in email and values speed above everything.
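Once each response is tagged with a user type, the segmentation step is mechanical. A small sketch (segments and numbers below are invented, not Superhuman's data):

```python
from collections import defaultdict

def pmf_by_segment(responses):
    """PMF score per segment; `responses` is a list of (segment, answer) pairs."""
    groups = defaultdict(list)
    for segment, answer in responses:
        groups[segment].append(answer)
    return {
        seg: 100 * answers.count("very disappointed") / len(answers)
        for seg, answers in groups.items()
    }

# Invented data: 20 responses split across two hypothetical segments.
data = (
    [("founder", "very disappointed")] * 6
    + [("founder", "somewhat disappointed")] * 4
    + [("student", "very disappointed")] * 1
    + [("student", "not disappointed")] * 9
)
scores = pmf_by_segment(data)
print(scores)  # {'founder': 60.0, 'student': 10.0}
```

Note how the aggregate score here (7 of 20, or 35%) sits below the threshold while one segment is at 60%; that hidden gap is exactly what narrowing to a best-fit segment exploits.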

Step 3: Analyze

Focus on the “somewhat disappointed” group — not the “not disappointed” group.

As Vohra explains: the seed of attraction is already there in the “somewhat disappointed” respondents. With targeted improvements, they can become “very disappointed” users. The “not disappointed” group, by contrast, will “request distracting features, present ill-fitting use cases, and probably be very vocal — all before they churn out.”

Cross-reference what “very disappointed” users say they love (Question 2) with what “somewhat disappointed” users say is missing (Question 4). The intersection reveals your highest-leverage improvements.

Step 4: Build

Split your roadmap:

  • 50% on deepening what “very disappointed” users already love — protect and strengthen what’s working
  • 50% on fixing what holds “somewhat disappointed” users back — convert fence-sitters into fans

Prioritize using a cost-impact framework. Low-cost, high-impact items first (Superhuman started with keyboard shortcuts), then high-cost, high-impact items (like building their mobile app).
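The cost-impact prioritization can be sketched as an impact-over-cost sort. The items and 1-to-5 scores below are invented for illustration:

```python
# Hypothetical roadmap items, each scored 1-5 for expected impact and build cost.
roadmap = [
    {"item": "keyboard shortcuts", "impact": 4, "cost": 1},
    {"item": "mobile app",         "impact": 5, "cost": 5},
    {"item": "calendar preview",   "impact": 2, "cost": 2},
    {"item": "dark mode",          "impact": 1, "cost": 3},
]

# Low-cost, high-impact work first: sort by impact-to-cost ratio, descending.
ordered = sorted(roadmap, key=lambda e: e["impact"] / e["cost"], reverse=True)
for entry in ordered:
    print(f'{entry["item"]:20} ratio={entry["impact"] / entry["cost"]:.1f}')
```

The quick win ("keyboard shortcuts", ratio 4.0) sorts ahead of the expensive bet ("mobile app", ratio 1.0), mirroring the sequencing described above.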

Step 5: Track

Treat the PMF score as your most important company metric. Survey new qualified users regularly (ensuring no repeat responses), and track the score weekly, monthly, and quarterly. Tie improvements to team OKRs.

PMF is not a milestone you reach once. It’s a metric you maintain.


What real PMF scores look like

One of the most useful things about the Sean Ellis test is that several well-known companies have shared their scores publicly:

| Company | PMF Score | Context |
| --- | --- | --- |
| Slack | 51% | 731 users surveyed by Hiten Shah when Slack had ~500K paying users |
| Superhuman (start) | 22% | Summer 2017, before the systematic PMF engine |
| Superhuman (after) | 58% | 18 months later, after running the 5-step process quarterly |
| Dropbox | 40–50% | During the early growth phase under Sean Ellis |
| Eventbrite | 40–50% | Measured separately for organizers and attendees |
| Lookout (before) | 7% | Initial product positioning |
| Lookout (after) | 40% | Two weeks after repositioning around a passionate minority |

A few things stand out:

Even beloved products hover around 50%, not 90%. Slack — one of the fastest-growing SaaS products in history — scored 51%. If your product hits 45%, you’re in elite company.

Dramatic improvement is possible quickly. Lookout went from 7% to 40% in two weeks by repositioning around what their most passionate users valued. They didn’t rebuild the product — they changed who they built it for.

Segmentation changes everything. Superhuman’s aggregate score was 22%. Their best segment scored 10 points higher. Finding your best-fit audience is often more valuable than adding features.


How PMF connects to retention

The PMF survey measures stated preference — what people say they’d feel. Retention curves measure revealed behavior — what people actually do. When both align, you have strong evidence of product-market fit.

Brian Balfour (former VP Growth at HubSpot) defines PMF through retention curves: if a cohort’s retention curve flattens at a non-trivial percentage, you likely have product-market fit. If it trends toward zero, you don’t — regardless of what users say in surveys.

[Figure: Retention curves: strong PMF flattens above 40%, weak PMF flattens around 25–30%, no PMF trends toward zero]

Casey Winters (former growth lead at Pinterest) puts it more directly:

“Product-market fit is not when customers stop complaining — they’ll never stop complaining. Product-market fit is when they stop leaving.”

His formula: PMF = Flattened Retention Curve + Month-over-Month Growth in New Users

Retention alone isn’t enough. If existing users stay but no new users arrive, you have a niche product, not product-market fit. Both signals need to be present.
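A rough way to operationalize the "flattened curve" half of that formula is to check whether the tail of a cohort's retention curve has stopped falling. The floor and tolerance values below are assumptions for illustration, not industry standards:

```python
def retention_flattens(curve, floor=0.2, tolerance=0.02):
    """Heuristic: does a cohort retention curve plateau above `floor`?

    `curve` is retention by period (month 0..n) as fractions of the cohort.
    """
    tail = curve[-3:]  # look at the last three periods
    is_flat = max(tail) - min(tail) <= tolerance
    return is_flat and min(tail) >= floor

strong = [1.0, 0.70, 0.55, 0.48, 0.45, 0.44, 0.44]  # plateaus around 44%
weak   = [1.0, 0.50, 0.30, 0.18, 0.10, 0.05, 0.02]  # keeps trending toward zero
print(retention_flattens(strong), retention_flattens(weak))  # True False
```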

What to watch in your retention data

  • B2C products (messaging, social, photos): measure retention in days. If users aren’t coming back within a week, there’s a problem.
  • B2B SaaS: measure retention in months. Monthly active usage or monthly churn rate are the key metrics.
  • Transactional products (e-commerce, marketplaces): retention is seasonal and purchase-cycle dependent. Look at repeat purchase rate over the relevant cycle.

The 5% habituation threshold is a useful rough benchmark: if fewer than 5% of your sign-ups become habitual users, you probably don’t have PMF. Once 10–15% of sign-ups become habituated, you’re starting to find it.


Who to survey and when

The most common mistake in PMF surveys is surveying the wrong people at the wrong time.

Sean Ellis’s qualification criteria

Respondents must meet all three conditions:

  1. Experienced the core of your product — not just signed up, but actually used the primary feature
  2. Used the product at least twice — one-time users haven’t formed an opinion worth measuring
  3. Used it within the last two weeks — for fresh, accurate responses

Surveying users who signed up but never engaged gives you data about your onboarding, not your product. Surveying users who haven’t logged in for months gives you data about your churn problem, not your value proposition.

Sample size

  • Recommended minimum: 40–50 responses (per Hiten Shah’s guidance)
  • Absolute minimum: 30 responses for directionally useful data
  • Margin of error reality: With 50 responses showing 40% PMF, your actual range is 27–53%. With 200 responses, it narrows to 33–47%. With 1,000 responses, it’s 37–43%.

For early-stage products, 40–50 responses are sufficient to guide decisions. Don’t wait for statistical perfection — directional accuracy is enough.
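The ranges above fall out of the standard 95% confidence interval for a proportion (normal approximation), which the article rounds to whole percentages. A quick sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% normal-approximation CI for a proportion.

    p: observed proportion (e.g. 0.40 for a 40% PMF score)
    n: number of survey responses
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 200, 1000):
    m = 100 * margin_of_error(0.40, n)
    print(f"n={n:4d}: 40% +/- {m:.1f} points")
```

With 50 responses the margin is about 13.6 points, with 200 about 6.8, and with 1,000 about 3.0, consistent with the ranges quoted above.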

When to send the survey

  • After users have been active for at least 2–4 weeks — enough time to experience core value
  • After major product updates — once users have experienced the changes
  • When growth stalls or churn increases — to diagnose the underlying problem
  • Before seeking funding — to know where you stand and have data to share
  • Quarterly as an ongoing practice — to track progress over time

When NOT to send it

  • Immediately after sign-up (users haven’t experienced value)
  • During a service outage or major bug (responses reflect the incident, not the product)
  • To users who never activated (you’re measuring onboarding failure, not PMF)

What to do with your score

The PMF score isn’t just a number — it’s a decision framework. Different scores demand fundamentally different strategies.

[Figure: PMF survey decision tree: below 25% rethink, 25–39% iterate, 40% or above scale]

Below 25%: No PMF — rethink

This score means the product, in its current form, does not solve a problem that matters enough. The response should be:

  1. Interview churned users — understand specifically why they left, not just that they left
  2. Re-evaluate your target audience — you may be building for the wrong people
  3. Consider pivoting the value proposition — the Lookout example shows that repositioning (not rebuilding) can move the score from 7% to 40%
  4. Rebuild and re-survey in 4–6 weeks — set a concrete timeline for reassessment

Do not scale marketing or sales at this stage. You’d be pouring fuel on a fire that isn’t lit.

25–39%: Getting close — iterate

This is the most actionable range. You have something, but it’s not yet compelling enough for a large enough group.

  1. Segment by user type — find the persona with the highest PMF score within your data
  2. Ask “somewhat disappointed” users what’s missing — they’re close to loving you
  3. Double down on what “very disappointed” users love — protect what’s working
  4. Split your roadmap: 50% deepen love, 50% fix gaps — the Superhuman formula

Most companies that reach product-market fit pass through this zone. The difference between companies that break through and those that stall is whether they systematically act on the data or just keep building what they were already planning to build.

40% and above: PMF achieved — scale

Congratulations, but don’t relax.

  1. Shift focus to growth and acquisition — now is the time to invest in marketing, sales, and distribution channels
  2. Invest in retention systems — protect what you’ve built with onboarding improvements, engagement features, and customer success
  3. Continue quarterly surveys — PMF can erode over time, especially as you expand beyond your core audience
  4. Watch for score decline as you scale — broadening your market often means serving less ideal users, which can dilute your PMF score

Andreessen’s advice: “When you are APMF, do whatever is required to maintain and extend product-market fit.”


Common mistakes that invalidate PMF surveys

Surveying only power users

If you only ask your most engaged users, you’ll get an artificially high score. Buffer’s early survey returned 78% “very disappointed” — but it was sent only to ultra-engaged beta users. The broader user base didn’t retain nearly as well. This was a false positive that could have led to premature scaling.

Fix: Send to a random sample of all qualified users (meeting Ellis’s three criteria), not just your biggest fans.

Changing the question wording

The exact phrasing matters. “How satisfied are you?” measures something different than “How would you feel if you could no longer use this product?” The Ellis question specifically measures dependency and loss aversion, which is more predictive of retention than satisfaction.

Fix: Use the exact wording. Don’t paraphrase, don’t add qualifiers, don’t combine it with other questions.

Treating it as a one-time measurement

A single PMF score is a snapshot. Products change. Markets change. Competitors enter. What scored 45% six months ago might score 32% today.

Fix: Run the survey quarterly and track the trend line. A declining score is an early warning system.

Ignoring the follow-up questions

The PMF score tells you where you stand. The follow-up questions tell you why and what to do about it. Running the survey without the follow-ups is like getting a diagnosis without a treatment plan.

Fix: Always include the four follow-up questions (main benefit, ideal user, improvements needed, alternatives).

Adding too many questions

Survey fatigue degrades response quality. The PMF survey should take under 5 minutes to complete. Adding NPS, CSAT, feature satisfaction, and demographic questions turns a focused diagnostic into a research project that nobody finishes.

Fix: Keep it to 5 questions maximum. Use separate surveys for other research needs.


A practical PMF survey blueprint

For a small business or early-stage startup running its first PMF survey, here’s a complete template:

Qualification criteria:

  • Used the core product feature at least twice
  • Active within the last two weeks

Survey questions (5 total):

  1. “How would you feel if you could no longer use [Product]?”

    • Very disappointed
    • Somewhat disappointed
    • Not disappointed
  2. “What is the main benefit you receive from [Product]?” (open-ended)

  3. “What type of person do you think would most benefit from [Product]?” (open-ended)

  4. “How can we improve [Product] for you?” (open-ended)

  5. “What would you likely use as an alternative if [Product] were no longer available?” (open-ended)

Target sample: 40–50 qualified responses

Frequency: Quarterly

Analysis process:

  1. Calculate the “very disappointed” percentage (your PMF score)
  2. Segment by user type — find your highest-scoring segment
  3. Cross-reference what fans love with what fence-sitters want
  4. Prioritize improvements using the 50/50 framework (deepen love + fix gaps)
  5. Re-survey next quarter and track the trend

Timeline to first survey: Send 2–4 weeks after users have experienced your core product. Allow 1–2 weeks for collection. Analyze and act within 1 week of close.


PMF is a spectrum, not a switch

The 40% threshold is a useful benchmark, but product-market fit is not binary. It exists on a continuum, and it changes over time.

Sean Ellis himself has acknowledged that the 40% number is “a bit arbitrary” — derived from pattern-matching across dozens of startups rather than rigorous statistical analysis. It’s a heuristic, not a law of physics.

What matters more than hitting an exact number is:

  • The trend — is your score improving over time?
  • The segment — do you have strong PMF with at least one well-defined group?
  • The alignment — do your retention metrics confirm what the survey says?

Companies that find product-market fit don’t find it once and forget about it. They measure it, protect it, and rebuild it as their market evolves.

The PMF survey is the fastest way to know whether you’re building something people would genuinely miss — or something they’d scroll past without a second thought.

Start measuring your product-market fit with SurveyReflex


— The SurveyReflex Team