The Complete Guide to Net Promoter Score: How to Run, Interpret, and Act on NPS

Net Promoter Score is the most widely used customer loyalty metric in the world — and also one of the most misused. Teams calculate the number, add it to a dashboard, and then don’t know what to do with it. They celebrate when it goes up, panic when it drops, and rarely connect it to decisions that actually change customer behavior.

The metric itself is not the problem. When NPS is used correctly — with the right question, the right timing, the right follow-up, and a systematic process for acting on results — it is genuinely one of the most useful signals a product or service team can collect. The challenge is that “NPS” has become shorthand for “ask customers if they’d recommend us,” when the real framework is considerably more precise than that.


What NPS measures — and what it doesn’t

Net Promoter Score was introduced by Fred Reichheld in his 2003 Harvard Business Review article “The One Number You Need to Grow.” The concept was developed at Bain & Company after Reichheld analyzed survey data from hundreds of companies to find which single question most reliably predicted customer loyalty and revenue growth.

The question he landed on:

“On a scale of 0 to 10, how likely are you to recommend [company/product/service] to a friend or colleague?”

Respondents fall into three categories based on their score:

  • Promoters (9–10): Loyal customers who will actively recommend you and continue buying
  • Passives (7–8): Satisfied but unenthusiastic — vulnerable to competitive alternatives
  • Detractors (0–6): Dissatisfied customers who may damage your brand through negative word-of-mouth

The score itself is calculated simply: subtract the percentage of detractors from the percentage of promoters. Passives are not counted — they’re acknowledged but excluded from the formula.

[Image: Net Promoter Score calculation, showing the 11-point scale with color zones, the formula NPS = % Promoters − % Detractors, and a worked example: 35% promoters, 15% detractors = NPS of 20]

The result ranges from −100 (every respondent is a detractor) to +100 (every respondent is a promoter). Most healthy businesses fall somewhere between 0 and 60, depending on the industry.
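The calculation above can be sketched in a few lines of Python (a minimal illustration, not tied to any particular survey tool):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives (7-8) count toward the total but not the formula.
    return round(100 * (promoters - detractors) / len(scores))

# Worked example: 35% promoters, 50% passives, 15% detractors -> NPS 20
sample = [9] * 35 + [7] * 50 + [4] * 15
print(nps(sample))  # 20
```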

One thing NPS does not directly measure: satisfaction with a specific feature, interaction, or touchpoint. NPS measures loyalty — a customer’s general willingness to stake their reputation on recommending you. That’s a meaningfully different signal than “did this customer enjoy their last interaction?” For interaction-level feedback, you want CSAT (customer satisfaction score). For loyalty and overall relationship quality, you want NPS.


The follow-up question that most teams skip

Reichheld himself has been explicit on this point in The Ultimate Question 2.0: the score is not where the value is. The value is in the verbatim response.

Always pair the rating question with this follow-up:

“What is the primary reason for your score?” (open-ended)

Without it, you know a customer is a detractor. With it, you know why — whether it’s a pricing concern, a product gap, a support experience, an onboarding failure, or something else entirely. The follow-up is the difference between a vanity metric and an actionable diagnostic.

Keep the survey short. Reichheld’s recommendation: two questions, occasionally three. Adding satisfaction scales, demographic questions, and feature ratings turns an NPS survey into a general survey and reduces the specific signal you’re trying to capture. If you need that other data, use separate surveys.


Transactional vs. relational NPS — two different tools

This distinction is one of the least understood aspects of NPS, and confusing the two leads to misinterpretation.

Transactional NPS is sent immediately after a specific customer interaction: closing a support ticket, completing onboarding, making a purchase, or using a new feature for the first time. The score reflects how customers feel about that specific moment — not your product overall.

Relational NPS is sent on a regular schedule, independent of any specific interaction. Quarterly is the most common cadence for B2C and SMB; semi-annually is typical for B2B enterprise. The score reflects how customers feel about the overall relationship — their accumulated experience to date.

[Image: Two types of NPS: transactional, sent after specific interactions; relational, sent on a regular schedule]

The implication is important: a transactional NPS score and a relational NPS score for the same customer can be very different — and both can be correct. A customer might score you 9 immediately after a great support interaction (transactional) but score you 6 on a quarterly pulse because they’ve been frustrated with the product for months (relational). If you’re only running one type, you’re missing half the picture.

Most companies benefit from running both. Transactional NPS catches problems in specific touchpoints early enough to act on them. Relational NPS tracks whether your overall customer relationship is improving or eroding over time.


Industry benchmarks: what a “good” score actually looks like

One of the biggest mistakes teams make with NPS is comparing their score against a generic benchmark. A score of 30 is unremarkable in retail — and genuinely excellent in healthcare. An NPS of 41 makes you competitive in the SaaS industry; the same score puts you well above average for airlines.

[Image: NPS benchmarks by industry: Retail 62, Insurance 57, Education 56, B2B Services 46, Financial Services 44, Technology/SaaS 41, Utilities 33, Telecom 29, Healthcare 27, Airlines 14]

The global median NPS across all industries is 42, according to Retently’s 2025 benchmark study of 150,000+ organizations. But that number is almost meaningless on its own — it averages industries that naturally score very differently.

The right benchmark is your industry, your competitors, and your own historical trend. If your SaaS product has moved from NPS 24 to NPS 38 over 18 months, that’s a meaningful improvement — regardless of where the industry average sits.

A few practical benchmarks for context:

Score      What it typically signals
Below 0    Serious and immediate problem — investigate before scaling anything
0–20       Room for significant improvement; structural issues likely
20–40      Solid — typical range for healthy companies in moderate-NPS industries
40–60      Strong — customers are genuinely enthusiastic; word-of-mouth is working for you
60+        World-class — typically seen in best-in-class consumer brands

Research from Bain & Company has found that companies with high NPS scores grow revenue at roughly 2x the rate of competitors with lower scores in the same industry. The mechanism is straightforward: promoters refer new customers, buy more, and stay longer. Detractors churn at higher rates and generate negative word-of-mouth that offsets acquisition efforts.


What each group is actually telling you

The aggregated NPS number is a headline. The segmented data is the story.

Promoters (9–10) are your most valuable customers — not just because they’re satisfied, but because they actively generate growth. They’re more likely to buy again, upgrade, and refer friends. Their open-ended responses tell you why your product creates value: what specific benefits they rely on, what language they use to describe you, and what would convince others to try you. This is marketing research hiding inside your NPS data.

Passives (7–8) are frequently overlooked, but they represent a significant opportunity. They’re not unhappy — they just aren’t enthusiastic. A passive is one bad interaction away from becoming a detractor, and one great experience away from becoming a promoter. Passives who describe what’s almost but not quite right are your best source of actionable product feedback. Treat them as a conversion opportunity, not a category to ignore.

Detractors (0–6) are the most urgent group. Research from Bain and Satmetrix has found that detractors are 4x more likely to churn than passives, and they typically share their negative experience with 9 to 15 people — compared to promoters, who tell an average of 3 to 5. The asymmetry matters: one vocal detractor can offset multiple promoters from a growth perspective.

The useful distinction within detractors: score 0–4 (strong detractors) versus score 5–6 (mild detractors). Strong detractors are often in active distress — they need a response fast. Mild detractors are disappointed but not yet hostile — they’re the most likely to be converted with a well-handled follow-up.
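One way to encode this triage for follow-up routing (the labels and thresholds are the article’s; the function itself is a sketch):

```python
def triage(score):
    """Bucket a 0-10 NPS response for follow-up priority."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    if score >= 5:
        return "mild detractor"    # disappointed, most convertible
    return "strong detractor"      # in distress, respond fast

for r in (10, 8, 6, 5, 2):
    print(r, triage(r))
```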


Why your NPS score can lie to you

NPS is only as reliable as the process used to collect it. Several common practices produce inflated or misleading scores that feel good in presentations but don’t reflect reality.

Selection bias in who you survey. If you send NPS surveys only to active, recent users — or only to customers who haven’t churned — you’re measuring your best audience. Customers who had a bad experience and stopped engaging never see your survey. The result is a score that systematically excludes your detractors, often by 10–20 points compared to a truly random sample.

Timing effects. Sending NPS immediately after a positive interaction — a successful onboarding, a support resolution, a delivery — captures the emotional high of that moment, not the customer’s overall relationship quality. This is the correct use of transactional NPS. But if you’re treating this score as a proxy for relational NPS, you’re measuring the wrong thing.

Survey frequency and respondent fatigue. Sending NPS monthly to the same customers produces declining response rates and satisficing — respondents giving quick, unconsidered answers. Best practice is quarterly for most businesses, with a minimum 60-day gap between surveys to the same individual.

Not controlling for timing across cohorts. If you survey new customers at the same time as long-term customers, you’re averaging very different relationships. New customers tend to score higher (the “honeymoon effect”), while long-term customers who’ve encountered product limitations score lower. Segment by tenure when analyzing results.

Gaming. Some teams have learned to suppress NPS surveys to customers who’ve recently had negative interactions — which obviously inflates scores. This is common enough that Reichheld himself has written about it as a failure mode. If your NPS process gives your team any control over who gets surveyed, the score is compromised.


Closing the loop: the inner and outer loop

The survey is not the end product — the action is. Without a structured process for responding to results, NPS is a number that generates reports and nothing else.

The framework developed by Bain & Company distinguishes between two types of action:

[Image: NPS inner and outer loop: the inner loop recovers individual detractors within 24–48 hours; the outer loop analyzes themes and fixes root causes at a systemic level]

The inner loop is immediate, individual, and operational. When a detractor submits a survey, someone from your team follows up within 24–48 hours to understand the specific issue. Not with a template response — with a genuine attempt to understand what went wrong and fix it. The inner loop converts individual detractors into passives or promoters. It also signals to customers that their feedback was actually read.

The outer loop is periodic, aggregate, and strategic. Monthly or quarterly, you review all NPS verbatims to identify patterns: what are detractors consistently complaining about? What do promoters consistently love? The outer loop identifies root causes that the inner loop can’t address individually — a confusing pricing model, a broken onboarding flow, a feature gap that multiple customers have hit. Fixes from the outer loop move the score over time; the inner loop keeps individual customers from churning while you work on them.

Both loops require explicit ownership. The inner loop needs someone responsible for following up on every detractor response — typically customer success or support. The outer loop needs someone responsible for aggregating themes, presenting findings to leadership, and tracking whether improvements actually move the score.

Without both loops, NPS becomes a measurement practice instead of a management practice.


Common mistakes that waste your NPS data

No follow-up question. The most common mistake by far. A score without a verbatim is a signal without a cause. If you can only fix one thing about your NPS survey, add the open-ended follow-up question.

Setting it and forgetting it. NPS surveys that run on autopilot without a review process accumulate data nobody acts on. Customers who responded as detractors and never heard back are less likely to respond next time — and more likely to churn. The survey creates an implicit promise that feedback will be used.

Treating NPS as a single number. An aggregate NPS of 35 tells you almost nothing useful. The same number segmented by customer size, tenure, product tier, geography, and acquisition channel tells you where you’re succeeding and where you’re failing. Always segment.

Using the wrong benchmarks. Comparing your healthcare company’s NPS to a retail company’s NPS is a category error. Use industry-specific benchmarks, and prioritize your own trend over external comparisons.

Surveying too frequently. Monthly NPS surveys to the same customers produce survey fatigue and declining data quality. The signal degrades. Quarterly is the standard cadence; for most B2B businesses, semi-annual is sufficient.

Not tracking the trend. A single NPS score is a snapshot. The value of NPS compounds over time as you compare scores quarter over quarter, attribute changes to specific product or process improvements, and build a history of how customer loyalty has evolved. Teams that run NPS once and draw conclusions from it are missing the core benefit.


How SurveyReflex helps

Running NPS well requires consistency: the same question, the same follow-up, consistent timing, and a clean interface that doesn’t add friction to the response experience. SurveyReflex makes this straightforward — you can build an NPS survey with a rating question and an open-ended follow-up in a few minutes, preview it on mobile, and publish without being locked into a monthly subscription.

Because SurveyReflex is usage-based, running NPS quarterly means paying only when you publish — not every month whether you’re collecting data or not. For businesses that run NPS four times a year, that’s four payments that align directly with four cycles of actual data collection.

The AI analysis tools in SurveyReflex can help you process open-ended verbatims at scale, surfacing themes across hundreds of responses without manual reading — which is where the real time savings show up as your customer base grows.


Try SurveyReflex free — pay only when you need more than 50 responses.


— The SurveyReflex Team