Sales Force Automation (SFA) Software & Social CRM Forum

This online forum shares experiences and lessons learned about the selection, deployment and continuous improvement of Sales Force Automation (SFA) systems, Customer Relationship Management (CRM) systems, Social CRM (SCRM) and, to a lesser extent, Marketing and Lead Management software.

Measuring Customer Satisfaction with Survey Simplicity and Customer History

Looking Beyond The Net Promoter Score

One of the classic problems in Customer Relationship Management (CRM) is figuring out when your customers became satisfied, and then determining exactly how you managed to satisfy them so that you can institutionalize those actions to grow your business. One of the classic customer satisfaction measures is the Net Promoter Score (NPS), which seems simple enough: ask your customers to answer the question, "how likely are you to recommend this business?" on a scale of 0 to 10, count the 9s and 10s as promoters, count anything from 0 to 6 as a detractor, and subtract the percentage of detractors from the percentage of promoters to get your score.
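For those who like to see the arithmetic spelled out, here's a minimal Python sketch of that calculation (the function name and sample scores are mine, not part of any NPS tooling):

```python
def net_promoter_score(scores):
    """Return NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives (7s and 8s), 2 detractors out of 10 responses:
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```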

It seems pretty simple, and it is, but it's also over-simplistic. First, a 0-to-10 scale is an imperfect baseline because it relies on each customer's understanding of how the scale works. One person's 5 might be another person's 8. Some people will never give a top score as a matter of principle (because, as they say, there is always room for improvement), while others grant high scores freely. That means the results can be skewed by the overall personality, or uneven understanding, of the customer base.

Second, because that scale is so unintuitive, the question requires some real thought from the person being surveyed. That can hurt your survey response rate, and the peril you face is that your survey will be answered only by people motivated to provide answers. Those are typically the people who are very happy or very unhappy with your business, and while understanding them is important, you need to understand the people in the middle too, because the direction they go in the future will determine your organization's fate.

Lastly, it doesn't really matter whether a person is likely to recommend your business if that person doesn't actually make recommendations or have much influence among his or her peer group. A person with no friends who gives you a 6 carries the same weight in the Net Promoter Score as a super-influencer who gives you a 6, but clearly one matters more to you. The real questions need to be "have you recommended this business?" and "to how many people have you recommended this business?"
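To make that concrete, here's a hypothetical weighting scheme along those lines. It isn't part of NPS or any product I know of, and the field names are invented for illustration; it's just a sketch of how answers to those two questions could feed a score:

```python
def weighted_recommendation_score(responses):
    """Average 0-10 score, weighted by recommendations each person actually made."""
    total = sum(r["recommendations_made"] for r in responses)
    if total == 0:
        return 0.0
    return sum(r["score"] * r["recommendations_made"] for r in responses) / total

# A 6 from someone with no audience counts for nothing;
# a 6 from a super-influencer moves the number.
print(weighted_recommendation_score([
    {"score": 6, "recommendations_made": 0},   # never recommends anyone
    {"score": 6, "recommendations_made": 12},  # super-influencer
    {"score": 9, "recommendations_made": 3},
]))  # (6*12 + 9*3) / 15 = 6.6
```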

But the Net Promoter Score is at least an honest attempt to apply a structured methodology to an emotional or subjective opinion, something that's never easy to do. I've written in the past about similar attempts that go awry; my favorite was a survey for a high-end hotel whose three possible responses were "failed to meet expectations," "met expectations" and "exceeded expectations." The experience I expected going in was a wonderful one, and the hotel delivered exactly what it promised. It met my expectations, and that's how I filled out my survey. Then, a week later, I got a panicked call from the manager because the hotel had not exceeded my expectations.

It would have been better if the hotel had been able to see that I'd spent a certain amount of time on its website before I arrived, or that I'd used certain services during my stay, and could then relate my satisfaction to my actions. But, as I said, this is never easy to do.

The secret to developing a customer satisfaction methodology is to make the survey drop-dead easy, with very little ambiguity for the customer to grapple with, and then to layer on your own knowledge of the customer, business conditions, and context. That approach is easy on the customer, which can raise response rates, and it lets you put simple answers into context.

An example of that approach was introduced by Zendesk, a maker of cloud-based help desk software. They've added a self-survey feature called Customer Satisfaction Ratings that asks customers one question: "Were you satisfied with the help you received?" The options are "yes" and "no." It doesn't get much simpler.

The Zendesk solution takes those binary responses and applies some added intelligence to them. For example, are customers from a specific geography happier when dealing with a certain agent? Do frequent callers report higher satisfaction when routed to more technically astute agents? How do rates of returning customers correlate with rates of satisfaction with the help desk? Answers like these often uncover patterns that can be extremely helpful in improving the customer service experience.
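I don't know how Zendesk implements this internally, but the underlying idea is a simple cross-tabulation: group the yes/no responses by whatever you already know about each ticket and compare satisfaction rates. A rough Python sketch, with invented field names:

```python
from collections import defaultdict

def satisfaction_rate_by(tickets, attribute):
    """Percent of 'yes' responses, grouped by one ticket attribute."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for t in tickets:
        total[t[attribute]] += 1
        yes[t[attribute]] += t["satisfied"]  # True counts as 1, False as 0
    return {k: round(100.0 * yes[k] / total[k], 1) for k in total}

tickets = [
    {"agent": "Ana", "region": "EMEA", "satisfied": True},
    {"agent": "Ana", "region": "APAC", "satisfied": True},
    {"agent": "Bob", "region": "EMEA", "satisfied": False},
    {"agent": "Bob", "region": "EMEA", "satisfied": True},
]

print(satisfaction_rate_by(tickets, "agent"))   # {'Ana': 100.0, 'Bob': 50.0}
print(satisfaction_rate_by(tickets, "region"))  # {'EMEA': 66.7, 'APAC': 100.0}
```

The same grouping works for any attribute you track, which is the point: the intelligence lives in the context you attach to each response, not in the survey question itself.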

Zendesk's approach effectively shifts much of the effort to the vendor and away from the customer, which makes sense for obvious reasons. It also phrases the question in a way that acknowledges the customer and his or her experience; most surveys still come across as bald-faced efforts to figure out how to wring more money out of the survey participants.

Zendesk gets it: ask a simple but pertinent question, then correlate it to what you know about your customers and your own operations to get an answer. How well does your business identify the right question to ask, and then compare the results to the context of your customers?
