Departments: SPC for the Real World

Davis Balestracci

It’s Time to Declare War on Surveys--Part 2
Customer satisfaction surveys aren’t early warning systems.

Maybe the best article I’ve ever seen on survey design appeared in the October 1996 issue of this magazine (“Designing Customer Surveys That Work,” by Richard E. Zimmerman, Linda Steinmann, and Vince Schueler; www.qualitydigest.com/oct96/surveys.html). In this column, I’m going to continue to deal with survey motivation while answering the eternal question: “What size sample should I take?”

First, etch the following statement permanently in your brain: There’s no universal sample size for every situation.

Once again emphasizing the analytic context over the enumerative one: large samples aren’t necessary for many surveys. A sample of 40 to 50 completed surveys is often quite adequate, and 100 will suffice in most cases.

As taught, most survey methodology is designed to estimate, say, a literal percentage in an opinion poll. There are statistical formulae that will yield sample sizes in response to the specification, “I need to know the actual percentage who will respond positively to this question within plus or minus 5 percent, with 95-percent confidence.”
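For reference, the standard enumerative calculation behind that specification is simple enough to sketch; here is a minimal version in Python (the function name and defaults are mine, not from any survey package):

```python
import math

# Classic enumerative sample size for estimating a proportion:
# n = z^2 * p * (1 - p) / E^2, using p = 0.5 as the worst case.
def sample_size(margin=0.05, z=1.96, p=0.5):
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())  # 385 completed surveys for +/-5 points at 95-percent confidence
```

Note that this answers an enumerative question about one snapshot in time; it says nothing about whether satisfaction is improving.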

The issue in customer satisfaction isn’t estimation, but information. People ask me, “What size sample should I use?” and I answer, “Enough… whatever will give you some degree of confidence in the result and an ongoing process to increase your degree of belief in the result.” This is best done as a series of smaller surveys designed to track subsequent progress on specifically identified issues. I’d rather have a random sample of 50 people surveyed monthly (or even 10 to 15 weekly), followed up aggressively to achieve an 80-percent response rate, and then plot the 12 monthly “dots” over time to track progress on a formal organizational effort.

This is far superior to the more typical 600 surveys received in December that are then compared to the 600 surveys received the previous December to answer the vague question, “How are we doing?” I so often see these sample sizes determined “statistically,” after which 2,000 surveys are sent out in the hope of receiving the “statistically calculated sample size” of 600--that is, accepting a 30-percent response rate, with all the nonresponse bias that implies.

The issue of concern is moving the process average on something important, not estimating it. Ongoing, repeated small sample sizes will widen the variation limits seen on the control chart, but that isn’t the concern--what’s important is “moving the needle.” Major successful efforts will be visible regardless of the underlying variation of the sampling process.
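To see why, consider the usual three-sigma limits for a p-chart of “percent highly satisfied.” A minimal sketch in Python (the 70-percent baseline is a made-up figure for illustration):

```python
import math

# Three-sigma p-chart limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n)
def p_chart_limits(p_bar, n):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

p_bar = 0.70  # hypothetical baseline: 70 percent "highly satisfied"
for n in (50, 600):
    lcl, ucl = p_chart_limits(p_bar, n)
    print(f"n = {n:3d}: limits ({lcl:.3f}, {ucl:.3f})")
# n =  50: limits (0.506, 0.894) -- wide, but a sustained shift still shows
#                                   as a run of points on one side of center
# n = 600: limits (0.644, 0.756)
```

Yes, the monthly limits at n = 50 are wide, but twelve plotted dots will still expose a genuine shift in the average; a single yearly estimate at n = 600, however precise, cannot.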

Sometimes, organizations can even learn more about their customers during a 90-minute focus group or a 15-minute interview than they will ever find out in a decade of surveys. The right time to survey is after determining what customers want, designing the product or service to meet those expectations, and determining that customers got what they wanted.

Customer satisfaction is not objective, nor is it easily or accurately measured. It’s also not quickly or easily changed; it takes a while for an attitude shift to take place. Because of this inertia, many customer satisfaction survey efforts, as currently practiced, can only be lagging indicators, not leading or current ones. To put it differently, customer satisfaction surveys aren’t early warning systems. If you can’t take action on customer perceptions within a few weeks of the perceptions being formed, there’s a strong chance that you will lose your window of opportunity… or even the customer.

Even informational surveys aren’t a panacea. I highly recommend that you take a look at the article “Why Your Customers Lie to You” by Mark Silver in The Business Heart eZine (http://heartofbusiness.com/articles/2006/Dec20.htm).

Unless you make customer input and involvement an ongoing and fundamental part of your business, it’s pointless to survey just to “know how we’re doing.” Even if the numbers are valid, they aren’t actionable, because they give no clue as to why customers are satisfied or not, or what might be done to improve things.

Part of a true commitment to improvement is a continuous flow of customer information coming in, which then allows a companywide summary measure of customer satisfaction. For balanced scorecard purposes, such a measure should be a set of measures capturing different aspects of satisfaction (a small sketch of such a record follows the list), for example:

Percentage of highly satisfied customers

Percentage of highly dissatisfied customers

Cost of dissatisfied customers

Levels of expectation

Perceived value for the money spent

Brand loyalty
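
Carried as one record per reporting period, that set of measures might look something like this minimal Python sketch (all field names and figures are hypothetical, not from the column):

```python
from dataclasses import dataclass

@dataclass
class SatisfactionScorecard:
    """One reporting period's customer satisfaction measures."""
    pct_highly_satisfied: float     # share of respondents in the top rating band
    pct_highly_dissatisfied: float  # share in the bottom band
    cost_of_dissatisfied: float     # estimated dollars lost to defections and rework
    expectation_level: float        # e.g., mean of a 1-5 expectations scale
    perceived_value: float          # e.g., mean "value for the money" rating
    brand_loyalty: float            # e.g., share who would buy again

# Hypothetical March figures
march = SatisfactionScorecard(0.70, 0.05, 12500.0, 4.1, 3.8, 0.66)
print(march)
```

Plotted one dot per period, each of these measures becomes a small control chart of its own.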

 

Customer satisfaction is a subtle, complex attitude shaped gradually by repeated experiences with a company’s products and services. It’s most directly affected when a company involves the customer, and it’s that process of involvement--as well as acting on key information--that drives customer satisfaction.

About the author
Davis Balestracci is a member of the American Society for Quality and past chair of its statistics division. He would love to wake up your conferences with his dynamic style and unique, entertaining insights into the places where process, statistics, and quality meet. Visit his web site at www.dbharmony.com.