Operationalizing Customer Feedback (NPS/CES/CSAT)

Tons of articles argue whether to use a customer feedback tool or not. There are valid (and not so valid) points on all sides. This article is written assuming you have or are planning to institute some sort of customer feedback. If you’re already an expert but want to see some ways of incorporating surveys, skip down.

Let’s walk through three types of customer feedback, when and where to implement each one, some recommended vendors, and the biggest question: “Okay, so I’ve got the data. Now do I follow up with every customer? How?” Let’s explore the different considerations.

Your company size

As a small company, it’s easy to maintain a strong pulse on customers. As a company scales, that distance grows wider, so companies must institutionalize processes and procedures to minimize the effects of elongating feedback loops. E.g., compare the feedback loop to product at a small, 10-person company versus a large, publicly traded company. Early on you can get away without solid systems; later on you’ll struggle.

When to set it up

It’s easiest to start sooner. The work required is less, and you can usually get on an affordable plan with a vendor. Secondly, starting today means you’ll have tons of data one, two, and five years from now. You’ll want that data, and your future self will be kicking you if you don’t have it. That’s true whether you’re bootstrapped or are (or will be) VC-backed. Your company’s value goes up.

Customer size

If your customers are extremely high touch, surveys may not be necessary because you can simply talk to them. 

C’mon, if I, a CSM, only manage 3-10 accounts and I’m talking with each customer every week, they’re going to tell me their issues and joys. Okay, it may not be totally necessary for extremely high touch customers.

That may be true, but also consider the following:

  1. Some users want to reflect on the questions, rather than give a knee-jerk reaction over the phone.
  2. Many customers would rather write it in a survey than say it face-to-face.
  3. Survey results come directly from the customer and are not the CSM or account manager’s interpretation.
  4. Even extremely high touch customers may still have tens, hundreds, or thousands of users. You can’t talk to all of them, only your top execs. This doesn’t apply to all high touch customers, though.
  5. The survey results can be a catalyst for your next customer business/executive review (“your users told us they were ________ satisfied with Acme, Inc.’s product and…”)
  6. Leadership hears customer feedback when it comes directly from the customer, rather than through an intermediary.

Should I use in-app, email, phone, SMS, or…?

Listen to your customers. How do your customers want to respond? For most, email will be best, but that depends on a few factors.

  1. If you’re running QBRs with your customers, you’re already asking questions for insight (“How are you using the product? Are you experiencing any issues? …”). Better to focus on that over the phone than diverge to NPS questions.
  2. If you use email, SMS, or in-app, then those can serve as talking points in your QBR. That way you don’t need to react to their feedback while on the phone or schedule a separate follow-up. Chronologically, they fill out the survey in one month and you address the feedback in the next QBR.
  3. Phone interviews tend to skew higher scores because of conflict avoidance (some people don’t mind conflict, but a good percentage of people avoid it at all costs). Email, in-app, and SMS abstract away much of that conflict.
  4. If you want a realistic NPS, email is probably best. In my experience, in-app gets a higher completion rate but is more annoying to the user. Email has a lower completion rate BUT results in better qualitative answers, because respondents have time to think, and it’s a better, less annoying customer experience.
  5. Asynchronous methods (email, for instance) allow a more thoughtful response. It helps mitigate knee-jerk responses that you’d see in-app or over the phone.
  6. Do your customers use your product with clients? If so, in-app is terrible as it likely pops up at the worst times. 
  7. Similarly, if you expose the question in-app, is it likely to derail your customer from using your product? Or does it enhance their experience?

Intended outcomes

Here are some of the outcomes I’ve come to expect and have found to be true. I’ve notated which surveys are most likely to deliver the intended outcomes.

  • (NPS) Receive direct, product-specific feedback for the entire company to view—celebrate our victories
  • (NPS) Hear from customers who are thinking about cancelling—allowing us to reach out to them first
  • (CES) Hear from onboarding customers and intercept them if onboarding went off the rails—preventative medicine
  • (All) Understand the product’s ease of use or confusion
  • (NPS) Track the product’s performance from the customer’s point of view
  • (All) Great feedback will be delivered to Marketing for creating success stories and developing brand ambassadors
  • (CSAT + CES) Help the Support team gain qualitative feedback

Which survey(s) to use?

Each survey has its own strengths, weaknesses, and most suitable environments. It’s not one versus the other; each has its area of expertise. For instance, I’ve employed all three at one company before. Done right, it works well and isn’t a nuisance but enlightening.


CSAT

CSAT stands for C(ustomer) Sat(isfaction). The express goal is understanding the customer’s satisfaction around a transaction. Usually that is a Support call, email, or chat. For instance, after you contact Amazon for a refund or help with a purchase, you may be asked, “How would you rate your overall satisfaction with _________?” It’s best used for a specific event or activity rather than the overall product or company.


Who should own it?

Support should own CSAT if the surveys are specific to Support tickets. Management should track feedback scores, at least in aggregate.


What to track

  • Feedback should be piped into Slack, DM’ed to the specific rep or sent to a specific channel, for managers to be aware of and then share with their teammates
  • Within the ticketing system, set up CSAT reports so each employee can measure and monitor their own scores
  • Support team trends (over weeks and months)
  • Percentage of respondents (2% is low, 10% is good, higher is better)

One of the most important — yet often overlooked — metrics is the percentage of respondents. If it’s low, that’s a terrible sign that your customers are not willing to engage. You should see flashing red sirens in that case!
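As a rough sketch, that respondent-percentage rule of thumb can be wired into a simple health check. The 2% and 10% cutoffs mirror the numbers above and are illustrative, not industry standards:

```python
# Rough health check for survey response rate. The 2% / 10% cutoffs
# come from the article's rule of thumb and are illustrative.

def response_rate(responses: int, surveys_sent: int) -> float:
    """Percentage of sent surveys that received a response."""
    if surveys_sent == 0:
        return 0.0
    return 100 * responses / surveys_sent

def rate_health(rate_pct: float) -> str:
    """Classify a response rate against the rough benchmarks above."""
    if rate_pct < 2:
        return "very low"   # flashing red sirens
    if rate_pct < 10:
        return "low"
    return "good"

rate = response_rate(responses=38, surveys_sent=500)
print(f"{rate:.1f}% -> {rate_health(rate)}")  # 7.6% -> low
```

Tune the cutoffs to your own historical baseline rather than treating them as fixed.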


NPS

Net Promoter Score, even with all its criticisms, is a helpful tool to measure a customer’s likelihood of promoting your company or product to others. It gauges the level of satisfaction a customer has with their overall experience. This is specifically not transactional, and is best abstracted away from any specific instances like the customer support example above. Here are several example questions to give you an idea:

  • How likely are you to recommend [company name] to a friend or colleague?
  • How likely are you to recommend [product name] to a friend or colleague?
  • How likely are you to recommend […] to a friend or colleague who does not compete with you for business? (This is useful if you get responses like “I would never share your product with anyone! This is my competitive edge!!”)
  • How confident are you that [product] will help you reach your goals over the next [time period]? (all credit and rights go to Lincoln Murphy)

From there, it’s best to use with existing customers and NOT new customers, unless you want responses like “I gave a 0 because I haven’t spent enough time in your product”. The frequency can vary, but anything from every 3-12 months per customer is fine. Keep in mind most vendors are also nagging your customers with NPS. Survey fatigue is real.


Who should own it?

Ownership can vary; I’ve seen it owned by Product, Customer Success, Marketing, and other teams. What are you hoping to achieve?

It’s likely a joint effort, and it’s okay for one team (e.g., CS or CX) to own the number and the process. Whoever owns it must be empowered to share feedback with other teams and drive cross-functional initiatives to improve those ratings, regardless of the ownership structure (e.g., if Marketing owns NPS, they still need the authority to tell Product what customers want). The person who collects the feedback (e.g., the CSM) isn’t necessarily the one who has to address it. Product feedback, for instance, comes in through many channels, but rarely does it go directly to the Product team.


What to track

  • Compare level of engagement with scoring (is there a relationship between usage and scores?)
  • Analyze NPS by customer segment (ARR, location, company size, etc.)
  • NPS monthly trend analysis
  • NPS as a churn early warning system (again, do you find a relationship between NPS scores and churn?)
  • Percentage of respondents (how many users take time to fill out the survey? Is it low? What can you do to improve that?)
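A couple of the analyses above can be sketched in a few lines. The 9-10 promoter and 0-6 detractor bands are the standard NPS methodology; the segments and scores below are made up:

```python
# Sketch: NPS overall and per customer segment from raw 0-10 scores.
# Promoters score 9-10, detractors 0-6 (standard NPS bands).
from collections import defaultdict

def nps(scores):
    """Net Promoter Score: % promoters minus % detractors (-100 to +100)."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def nps_by_segment(responses):
    """responses: iterable of (segment, score) pairs -> {segment: NPS}."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    return {segment: nps(scores) for segment, scores in buckets.items()}

responses = [("SMB", 9), ("SMB", 10), ("SMB", 6),
             ("Enterprise", 7), ("Enterprise", 3)]
print(nps_by_segment(responses))  # SMB is positive, Enterprise is -50.0
```

Swapping the segment key for ARR band, location, or company size gives the other cuts mentioned above.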


CES

Customer Effort Score is more transactionally focused, like CSAT above. It measures how easy it was for the customer to accomplish their intended goal, whether that’s a support conversation, adopting a new feature, or onboarding. For instance:

  • Support transaction: “How easy was it for you to complete your task?”
  • Adopting a new feature: “How easy was it for you to [use/adopt/incorporate] [new feature]?”
  • Onboarding: “How easy was it for you to get started with [company or product name]?”

For instance, I’ve historically used it toward the tail end of customer onboarding to measure their success and delight early on. If we start off on the wrong foot, we need to resolve that ASAP and not wait months to learn about that. The relationship is new. If you’ve screwed up, apologize.


Who should own it?

This depends on the question. In the CES onboarding example below, the feedback and process should be owned by the Onboarding team. If the question were product related (e.g., “How easy was it for you to adopt [product feature]?”), then it should be owned by Product.


What to track

  • Compare level of customer engagement with scoring (is it true that customers who use the product more tend to rate it higher?)
  • Understand CES by customer segment (ARR, company size, Onboarding Manager, etc.)
  • Track onboarding trends over time (are we trending up or down? We just shipped XYZ to help onboarding, did it make any difference?)
  • Correlate CES with the sales rep to see whether effective onboarding expectations were set
  • Percentage of respondents (how many users take time to fill out the survey? Is it low? What can you do to improve that?)

CES Onboarding Process Example

For onboarding, I’ve seen the following question be effective: “On a scale from 1-7, with 1 being terrible and 7 being exceptional, how easy was your onboarding experience?”

If customer onboarding is expected to last, say, 60 days, send it between days 40-50 so the customer has a good handle on their experience and your company has 10-20 days to respond. And to state the obvious: only one onboarding survey per customer.
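That timing rule can be sketched as a simple daily check; the window, dates, and customer IDs below are illustrative:

```python
# Sketch of the send-window rule: for a 60-day onboarding, send the CES
# survey between day 40 and day 50, and only once per customer.
from datetime import date, timedelta

SEND_WINDOW = (40, 50)  # days after onboarding start

def should_send_ces(start: date, today: date, surveyed: set, customer_id: str) -> bool:
    day = (today - start).days
    in_window = SEND_WINDOW[0] <= day <= SEND_WINDOW[1]
    return in_window and customer_id not in surveyed

surveyed = set()
start = date(2024, 1, 1)
print(should_send_ces(start, start + timedelta(days=45), surveyed, "acme"))  # True
surveyed.add("acme")  # survey sent; never send a second one
print(should_send_ces(start, start + timedelta(days=46), surveyed, "acme"))  # False
```

In practice the `surveyed` set would live in your CRM or survey tool, not in memory.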

Reporting and analysis

Reporting can be a black hole, so be careful. Sticking with the onboarding example above, here are the core relationships I focus on:

  1. What are the scores as they relate to each sales rep?
  2. What are the scores as they relate to the Customer Onboarding team? (This could be a CSM, account manager, Onboarding Manager, or entirely self-service content)
  3. What are the M-o-M (month-over-month) trends?
  4. What are the scores for some VIP or white glove customers or enterprises?

Now what? What do I do next?

I believe it’s best for anyone and everyone in the company to see customer feedback. It may create small issues (“what if one of my coworkers looks down on me from that interaction?”), but long-term it builds trust. Engineers, Product, Sales, and Admin may not interact with customers; this is the lens through which they can empathize with you. It also illuminates issues other teams can help with or respond to: fixing bugs, changing the sales demo, or prioritizing a feature launch. The Onboarding team cannot fix everything themselves. They rely on others. That’s the power of a “team.”

Because of that, I like the following process:

  1. Integrate the survey tool with Slack (or whatever tool you may use) for everyone in the company to see the feedback
  2. Pipe data into your CRM and/or CS platform
  3. Customer follow-through mechanism
    1. Most streamlined approach: use your CRM or CS platform to create follow-up tasks for the CSM or Onboarding Manager to follow up with the customer. This can be automated, such as creating a task based on the score.
    2. Small scale: use Slack as your task list and apply a reactji whenever you respond to a customer. Note: this is not ideal, as Slack is not intended to be a task manager.
  4. Pass any relevant feedback to the proper team (e.g., product feedback to the product team)
  5. Create any additional tasks in your CRM or CS platform to check in with the customer
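Step 3’s follow-through mechanism might look roughly like this. Note that `create_task` and `post_to_slack` are hypothetical stand-ins for your CRM/CS platform and Slack integrations, and the score threshold is illustrative:

```python
# Sketch of the follow-through step: post every response to Slack, and
# create a follow-up task when the score is at or below a threshold.
# create_task and post_to_slack are placeholders for real integrations.

FOLLOW_UP_THRESHOLD = 4  # on the 1-7 CES scale used above; illustrative

def create_task(owner, description):
    # Placeholder for your CRM or CS platform's task API.
    return {"owner": owner, "description": description}

def post_to_slack(channel, message):
    # Placeholder for a Slack webhook or bot call.
    return f"[{channel}] {message}"

def handle_response(customer, score, owner):
    post_to_slack("#customer-feedback", f"{customer} scored {score}/7")
    if score <= FOLLOW_UP_THRESHOLD:
        return [create_task(owner, f"Follow up with {customer} (CES {score}/7)")]
    return []

print(handle_response("Acme", 3, owner="sam"))  # one follow-up task for sam
print(handle_response("Acme", 7, owner="sam"))  # []
```

The point is the routing logic, not the plumbing: every response is visible to everyone, and low scores automatically become someone’s task.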

Note: you can create an automatic email that goes out for high or low scores, but that seems silly since they’re already responding to an automatic email. Why not douse it with some humanity at this point? Plus, they may have added qualitative feedback that deserves a custom reply.


Benchmarking

Establish a loose benchmark as a place to start. Realize you may only have a month’s or quarter’s worth of data, and there could be seasonality effects or other concerns, so hold loosely to these benchmarks. Companies and industries are different, so it may not be realistic to shoot for X score just because someone else’s company achieved it. That said, here’s an example process and presentation to share with your leadership team:

Example CES benchmark presentation (fictitious figures)

We believe healthy CES scores can help us measure how the onboarding experience is going — the effectiveness of Onboarding Managers, content, webinars, and ease of product.

For Q3 201X, we’re ready to set the CES goal that we discussed last spring.
Average CES scores:

  • Jan 1 – Mar 31: 4.5
  • Apr 1 – June 15: 5.1

Suggested Q3 goal: 5.4.

Rationale: 5.4 will still be a stretch, as getting closer and closer to 7 becomes asymptotic. Meaning, moving from 4 to 5 is much easier than 5 to 6, which is easier than 6 to 7 (averaging exactly 7 is impossible if anyone gives less than a 7, for instance). It’ll take good work and ensuring our team is focused on delivering high value to customers.
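To put arithmetic behind the asymptotic claim: if n responses currently average avg, solving (n × avg + 7k) / (n + k) = avg + gain for k gives the number of additional perfect 7s needed to lift the average by gain. A quick sketch with illustrative figures:

```python
# How many extra perfect 7s (k) does it take to raise the average of
# n responses by `gain`? Solving (n*avg + 7*k) / (n + k) = avg + gain:
#   k = n * gain / (7 - avg - gain)

def perfect_scores_needed(n: int, avg: float, gain: float) -> float:
    return n * gain / (7 - avg - gain)

# With 100 responses, each +0.3 of gain gets harder near the top:
for avg in (4.5, 5.1, 6.0):
    print(avg, round(perfect_scores_needed(100, avg, 0.3), 1))
```

The same +0.3 gain requires roughly three times as many perfect scores starting from 6.0 as from 4.5, which is why the goal gets harder the closer you are to 7.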

[add in additional detail on why the goal makes sense, and how you’re going to get there. For example, you have several changes planned. This helps validate raising the bar as reasonable rather than a pipe dream]

I welcome your feedback.

CSAT process considerations

In addition to the CES Onboarding process example above, here are several CSAT specific considerations.

  • You may or may not want to pipe all Support CSAT feedback to a company-wide channel. Negative feedback can feel targeted at the Support rep, who may be new and become afraid. And at a large company, coworkers may form their view of Support reps based only on that feedback.
  • Should the Support rep respond to the customer? Should the manager? Should they first discuss it? This is more dependent on your leadership, culture, and needs of the people. Most important is that this should reinforce trust and confidence between teammates, not tear it down.
  • It’s really important to track the percentage of respondents for CSAT. A very low number could indicate customers don’t even want to respond. 90% of the iceberg is beneath the surface; how many of your customers are withholding their issues with your company?
  • Do not annoy your customers. Experience CSAT from your customer’s standpoint. Are they getting a follow-up email after every Support interaction? That’s obnoxious. It’s better to embed the feedback request in the Support ticket, if it’s an email. Zendesk does this really well. Help Scout has it embedded well, too. Think about the experience.
  • Track CSAT as it relates to your enterprise customers: are there any warnings coming up that your CSM(s) should be aware of?

NPS process considerations

Here are NPS specific considerations:

  • Piping this data into Slack is beneficial, but you may see a string of promoters or detractors. Be sure to focus on the aggregate number and not succumb to today’s trends.
  • Keep in mind the NPS methodology is different: NPS ranges from -100 to +100 (% promoters minus % detractors), not an average. You cannot have an NPS rating of 8.2.
  • NPS fatigue is a thing. Don’t annoy your customers.
  • Diversify when NPS gets sent out. A few each day is better than surveying en masse. Getting overwhelmed with thousands of NPS responses likely negates timely follow-ups.
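Spreading sends out can be as simple as slicing the customer list into daily batches; the batch size below is arbitrary:

```python
# Sketch: slice the customer list into small daily send batches instead
# of surveying everyone at once. Batch size is arbitrary here.

def daily_batches(customers, per_day):
    """Yield successive batches of at most per_day customers."""
    for i in range(0, len(customers), per_day):
        yield customers[i:i + per_day]

customers = [f"customer-{n}" for n in range(10)]
for day, batch in enumerate(daily_batches(customers, per_day=4), start=1):
    print(f"day {day}: {len(batch)} sends")  # 4 sends, 4 sends, then 2
```

Pick a batch size your team can actually follow up on the same day.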

Which customers do I choose to follow-up with?

These are only considerations…feel free to deviate. The most important customers to follow up with are your happy customers. These are the people who will speak on your behalf. Don’t you want that? If you’re short on time, spend some time reading customer feedback to decide which strategy to apply.

Promoters / high scores

Please follow up with your happy customers! It’s super easy to play the firefighter role and try to turn the ship around on upset customers. It’s “problem-solving,” which is how most of us are wired. Instead, start with the customers who give you a great score — NPS, for example. Why pour all your energy into appeasing those who are likely to churn anyway? Priority number one is your advocates, your ambassadors.

First, consider creating a task to send them a thank you note — whether that’s an email, a hand-written card, swag, or something else. 

Secondly, if you’re looking for customer testimonials, this is a great source. Reach out to them. Marketing can choose customers who give great feedback — especially if it’s posted to a Slack channel, then it’s easy to scan.

Third, these are your customers who will most likely give referrals. Treat these customers with delight and care.

Passives / lukewarm

Don’t ignore these customers. These are the customers who had a so-so Support experience, or are a “passive” according to NPS. Some of these had an okay experience, or may be undecided. Using a political election as an analogy, these are the swing voters that you want to win over to your side.

Consider following up with them. If they didn’t already give it, ask them for feedback: “what could we do better next time?”

Learn what’s keeping them from being more engaged. Is it something with the product? Response time? Poor onboarding or training?

Detractors / low scores

These customers create a goldmine for feedback. You’ll likely experience quite a bit of negativity (and risk getting yelled at), but many of these customers are willing to share critical feedback of your company. Sometimes it’s an edge case. Other times they’re trying to solve a problem with your product that it was never intended to solve. Still other times they’ll be the canary in the coal mine and give that feedback that others aren’t likely to give.

They can also be the customers who had a bad experience and help uncover where your customer journey map is insufficient or holes exist.

Customer segments

While not related to their “score,” after you analyze by customer segment, look for themes and whether there are actions you can take. Perhaps many of your enterprise customers see your product/service as deficient in one area that’s different from how your SMBs see it.

Vendor selection

Full disclosure: I’m not paid for this nor are there any rewards or gifts that I am getting from this. These are simply my experiences.

Before evaluating vendors, write down your requirements! I prefer the MoSCoW Prioritization (the term is scarier than it actually is):

  • Must Have (what are the make-or-break features?)
  • Should Have (what should it have, but I may be able to do without?)
  • Could Have (what would I like it to have, but would only be a nice-to-have?)
  • Won’t Have (what is explicitly out of scope, at least for now?)

For example, here’s an excerpt of a MoSCoW Prioritization that I’ve used in the past. Add a column for each additional potential vendor.

Category | Description | Detailed need | Priority | Vendor 1

Integrations | Slack | Push responses to relevant channels | Must Have | Yes
Integrations | CRM | Push responses to Contact page | Must Have | Yes
Customer Experience | Whitelabel | Emails must be white labeled as XYZ brand | Must Have | No
Customer Experience | Whitelabel | Whitelabel emails so they don’t say the vendor name | Should Have | Yes
Delivery | DKIM/SPF records | Have emails be sent from XYZ.com domain | Must Have | Yes

Another example is that I needed to run NPS and CES. I could go with two separate vendors or a single one. Also keep in mind the surveys like NPS are being absorbed into other products, like CRM, CS platforms, reporting, etc.

Now that you have that information, here is a list of vendors to start with. That said, there are always new vendors, so do not consider this a complete list. Also, they’re randomly ordered; the first one is not necessarily “the best” nor the last “the worst”. You need to figure out what fits your needs and budget. Oh, and be sure to check out www.g2crowd.com, www.capterra.com, and other review tools.

  • AskNicely
  • Delighted
  • Satismeter
  • Wootric
  • NiceReply
  • HubSpot
  • Promoter.io
  • Surveymonkey
  • Getfeedback


Wrapping up

There’s a lot involved in customer feedback. The three biggest points are the following:

  1. What do you want out of this?
  2. What’s the customer going to experience?
  3. How will you measure and improve your operations?

Published by Jeff Beaumont

I love helping companies scale and grow their organizations to delight customers and employees, enabling healthy teams, fast growth, and fewer headaches. Scaling quickly is wrought with potholes and plot twists. When you’re running a company, losing customers, employees are on their way out, and your systems aren’t running smoothly, you’ll be at your wits’ end. I’ve been there and hate it.
