Does your organization measure customer satisfaction? If so, how do you use those measures to improve customer satisfaction?
A learning loop is a cyclic process of experimentation that fosters learning and continuous improvement of some outcome. The term was originally coined by Chris Argyris1, who drew an analogy to a thermostat maintaining the temperature in a room. But the analogy only goes so far, because thermostats don't learn. They don't ask "why?" or adjust strategy; they sit in a feedback control loop with no analysis of cause and effect. In a "true" learning loop (which Argyris called "Double Loop Learning"), the learning happens when the team examines the things that didn't go the way they expected and asks "why did this happen?" Asking "why" enough times yields root cause hypotheses that the team can test through further development and experimentation.
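To make the distinction concrete, here is a minimal sketch of the two loops. All names and the naive update rule are illustrative assumptions, not Argyris's formulation:

```typescript
// Single loop: a thermostat reacts to the error but never questions
// its own set point or strategy. Pure feedback control, no learning.
function thermostatStep(roomTemp: number, setPoint: number): "heat" | "off" {
  return roomTemp < setPoint ? "heat" : "off";
}

// Double loop: when results miss expectations, ask "why?" and revise
// the strategy itself, not just the next action.
interface Strategy {
  description: string;
  expectedOutcome: number; // e.g., a predicted satisfaction score
}

function doubleLoopStep(observed: number, strategy: Strategy): Strategy {
  if (Math.abs(observed - strategy.expectedOutcome) < 0.1) {
    return strategy; // results as expected: keep going
  }
  // Results missed expectations: this is where root cause analysis
  // would produce a hypothesis to test in the next iteration.
  return {
    description: `revised after observing ${observed}`,
    expectedOutcome: observed, // naive update; real teams test hypotheses
  };
}
```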
The problem with “metrics”
Many performance measures are dashboard items: good for flagging that there might be an underlying problem, but not useful for identifying its causes. Many leaders view Net Promoter Score as the standard for measuring customer satisfaction, but by itself NPS is useless for deciding what ought to change to drive improvement. What about a customer journey map? Customer journey maps are models of how the organization thinks things work. The map is not the journey. You only know a particular customer's journey by accompanying them on it in some way.
Measures without good diagnostic support tend to become vanity measures: they do the measuring without enabling the discovery necessary to improve outcomes. It's not unusual to see teams assigning traffic-light thresholds to these sorts of measures, but if you are committed to continuous improvement, thresholds are not meaningful. Improvement is all that matters.
The best measures are the ones you want to improve, and the best outcome is measurable improvement. If you're not looking for improvement, a vanity measure might be adequate. Few successful organizations are unconcerned with improving customer satisfaction, though.
Measures that work for you
So what does it take to drive continuous improvement? The critical ingredients: timely, diagnostic evidence that is used to analyze the root causes of problems, followed by corrective action on a cadence.
Timeliness is essential. If a customer has a bad experience and tells you so, but nobody pays attention until weeks later, the trail is cold. If someone can contact the customer immediately, the company benefits twice: there is enough context to understand what caused the customer's frustration, and there is an opportunity to make amends, potentially turning a negative experience into a positive one. Use measures that respond quickly to events, and commit to rapid follow-up.
Diagnostic evidence is a stream of evidence that enables root cause analysis. Beyond being timely, such a stream must be detailed: it answers the 5 W's of who, what, where, when, and ideally why. A great diagnostic evidence stream also supports follow-up by offering respondents the opportunity to be contacted for more information.
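As a concrete illustration, one hypothetical shape for a single evidence record is sketched below; the field names and severity levels are assumptions, not a standard schema:

```typescript
// Hypothetical shape of one diagnostic evidence record.
interface FeedbackEvent {
  who: string;             // customer or anonymized user id
  what: string;            // what the customer was trying to do, in their words
  where: string;           // screen, page, or support channel
  when: Date;              // timestamp: essential for keeping the trail warm
  why?: string;            // the customer's own explanation, when offered
  severity: "annoyance" | "blocked" | "churn-risk";
  contactConsent: boolean; // did the customer agree to a follow-up call?
  contactInfo?: string;    // only captured when contactConsent is true
}
```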
For a web application, logging clickstreams may provide hints about trouble spots to correct. An even better implementation would notice when a user is having trouble and ask whether they would briefly describe what they were trying to do, and whether they would be willing to take a follow-up call with an engineer (perhaps offered to a random sample). If the user says yes, it's important that they hear back promptly.
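How might an application "notice" trouble? One plausible heuristic, sketched here with an assumed event shape and thresholds, is to flag rapid repeats of the same action (sometimes called "rage clicks") and then show the feedback prompt described above:

```typescript
// Hypothetical struggle detector: flag a user who repeats the same
// action several times within a short window.
interface ClickEvent {
  userId: string;
  action: string; // e.g., "submit-payment"
  at: number;     // timestamp in ms since epoch
}

function looksLikeStruggle(
  events: ClickEvent[],
  threshold = 4,
  windowMs = 10_000
): boolean {
  if (events.length < threshold) return false;
  const recent = events.slice(-threshold);
  const sameAction = recent.every(e => e.action === recent[0].action);
  const withinWindow = recent[recent.length - 1].at - recent[0].at <= windowMs;
  return sameAction && withinWindow;
}
```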
When a customer contacts a call center for support, ideally the support person thinks both about solving the customer's immediate issue and about collecting enough context to inform design changes that could reduce future calls on the same issue. This takes a log or CRM system, some root cause thinking, and the ability to open a bug report with development where appropriate. Again, capture the 5 W's in enough detail that an engineer can attempt to reproduce the problem (this takes training), and record them in the support log. Offer follow-up if the organization can support it.
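As a sketch of what that hand-off could look like, assuming a hypothetical ticket shape and a plain-text bug report format:

```typescript
// Hypothetical: turn a support log entry into a reproducible bug report.
interface SupportTicket {
  who: string;
  what: string;  // the customer's goal, in their words
  where: string; // product area or channel
  when: Date;
  why?: string;  // suspected cause, if the agent got that far
  stepsToReproduce: string[]; // captured by the support agent (takes training)
  resolvedImmediateIssue: boolean;
}

function toBugReport(t: SupportTicket): string {
  // Enough 5-W detail that an engineer can attempt to reproduce the problem.
  return [
    `Reported by: ${t.who} at ${t.when.toISOString()} (${t.where})`,
    `Customer goal: ${t.what}`,
    t.why ? `Suspected cause: ${t.why}` : "Cause not yet known",
    "Steps to reproduce:",
    ...t.stepsToReproduce.map((s, i) => `  ${i + 1}. ${s}`),
  ].join("\n");
}
```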
Root Cause Analysis identifies the ultimate "why" behind a problem, rather than jumping to a proposed solution. A great deal has been written on Root Cause Analysis2, so we won't cover it in detail here. It's critical to identify and treat the underlying cause rather than focusing exclusively on symptoms. Effective RCA requires skill and practice, and happens best in a team setting. The output is a list of root causes ranked by frequency of occurrence (often called a Pareto Analysis).
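Mechanically, the ranking itself is simple: count how often each root cause appears and sort descending, usually with a cumulative percentage showing how much of the pain the top causes account for. A minimal sketch:

```typescript
// Minimal Pareto analysis: count root causes and rank by frequency.
function paretoAnalysis(
  rootCauses: string[]
): { cause: string; count: number; cumulativePct: number }[] {
  const counts = new Map<string, number>();
  for (const c of rootCauses) counts.set(c, (counts.get(c) ?? 0) + 1);
  const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
  const total = rootCauses.length;
  let running = 0;
  return ranked.map(([cause, count]) => {
    running += count;
    return { cause, count, cumulativePct: Math.round((100 * running) / total) };
  });
}

// Example: the top one or two causes typically dominate the list.
// paretoAnalysis(["login timeout", "login timeout", "unclear error", "login timeout"])
//   => [{ cause: "login timeout", count: 3, cumulativePct: 75 }, ...]
```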
Note: if you see "training" or "documentation" listed as a root cause, it might be worth asking "why" a few more times. Training is a solution to an underlying problem. What is the problem?
Corrective Action is the process of making design, process, staffing, and training changes to counter selected root causes identified by the earlier stages of the learning loop. By taking countermeasures and monitoring the results in subsequent iterations of the loop, the team can verify that the changes reduced the incidence of the targeted root cause.
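That verification can start as simply as comparing a cause's incidence rate across loop iterations; the function names here are assumptions for illustration, and a real team would also account for sample size and noise:

```typescript
// Hypothetical check: did the targeted root cause become less frequent
// after the countermeasure, normalizing by evidence volume per cycle?
function incidenceRate(causes: string[], target: string): number {
  if (causes.length === 0) return 0;
  return causes.filter(c => c === target).length / causes.length;
}

function countermeasureHelped(
  before: string[], // root causes from the cycle before the change
  after: string[],  // root causes from the cycle after the change
  target: string
): boolean {
  return incidenceRate(after, target) < incidenceRate(before, target);
}
```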
Cadence is a fixed cycle time on which the process repeats, often anchored by a recurring session in which the team does RCA on a new batch of evidence and decides on corrective actions. It's hard to overstate the role of cadence in continuous improvement efforts: it reduces scheduling overhead and signals that the activity matters. The team or its leaders can select the cycle time, which depends on how quickly evidence accumulates, how quickly you can take corrective action, how often the team can meet, and possibly the maximum time you can afford to wait before responding to follow-up offers, if that is tied to the process cadence.
Support organizations are usually managed as cost centers3: run to field the most calls at the least cost. The support team may feel pressured to deflect and close calls as rapidly as possible to hit productivity targets. In doing so, support staff may not have the time or tools to provide the feedback to the product team that could lead to better outcomes.
This is a significant missed opportunity: driving continuous improvement of customer outcomes can easily pay for the cost of support through revenue growth and reduced churn. If the support organization is the front end of a learning loop connected to the product delivery team, it becomes possible to deliver continually improving outcomes to customers while reducing support costs. Failing to make this connection leaves that value on the table.
Summary
Customer satisfaction measures can be used to drive continuous improvement if:
- The feedback captured contains enough context to allow for root cause diagnosis. Net Promoter Scores and other single-valued measures are not sufficient; the context (who, what, where, when, why) must be captured, and user comments and callback information are very helpful.
- Somebody makes use of this information while it is still warm, keeping the cycle time from customer feedback to follow-up short.
- Designated people regularly use the feedback to develop a Pareto analysis of root causes and then inject changes into the development cycle to counter those root causes.
- Extra credit: customers see that their feedback is being used to make improvements. This builds your credibility.
- Extra credit: customers who have had a subpar experience get quick follow-up and a genuine effort to make things right. This directly improves the customer experience where it most needs improvement.
References
1. Chris Argyris – Double Loop Learning in Organizations, Harvard Business Review, September 1977
2. US Centers for Medicare and Medicaid Services – Guidance for Performing Root Cause Analysis
3. Wikipedia – Pareto Analysis
4. Wikipedia – Fishbone or Ishikawa Diagram
5. Wikipedia – Five Whys