How can you continuously monitor your CX in a meaningful way?
In-store customer experience has never been more important. You wish you could monitor it every hour of every day, but that just isn't affordable or practical. Sales results aren't a reliable proxy because they are often overly influenced by store traffic. So how, then, can you understand your CX in any meaningful way?
You already have the best proxy for customer experience for every one of your stores and each hour they are open — and you may not even know it.
Customer experience is a continuous process, unfolding every hour of every day your stores are open. Like customer experience itself, traffic and conversion data are collected every hour of every day.
Beyond being a continuous stream of insights, traffic and customer conversion data are actual measures of prospect behavior versus opinions they might state later — in a CSAT survey for example. The data are completely objective and fill in the insight gaps created by evaluating customer experience measures against sales results.
Customer experience depends not only on factors the store controls, but also on variables the store doesn’t control, such as the perceptions of the customers themselves. What may be a great store experience for one customer may be a poor experience for another.
Without traffic and conversion data, customer experience measurements can be very subjective.
Two of the most common methods for measuring customer experience are Post-purchase Customer Satisfaction Surveys and Mystery Shopping.
Both methods have merit and can provide useful insights into store experience; however, both have limitations, as outlined below.
Post-purchase Customer Satisfaction Surveys
To increase the amount of data collected and reduce collection costs, more customer surveys are now conducted online.
- Typically, check-out staff will invite the customer to participate in an online survey.
- The survey website address and store details are printed on the sales receipt.
- Incentives and other tactics are often used to encourage responses.
Surveying in this way does give the retailer a perspective on what some customers think, but it doesn't go far enough.
In this scenario, surveys poll only customers who actually made a purchase.
What about the prospects who came into the store and didn’t make a purchase?
In many ways these non-buyers may have the more important opinions and insights. After all — despite the retailer’s best efforts, they didn’t buy.
Unfortunately, while many retailers do have general customer feedback capabilities on their websites, the number of non-buyers who actually make the effort to go and complete a survey or complain is small.
It’s hard to find any evidence that these surveys are delivering the insights retailers truly need.
Customer satisfaction surveys do have a place, but retailers need to seriously ask themselves what this information means and how they should interpret it.
Mystery Shopping
There are several advantages to mystery shopping, including:
- The retailer can control when and where the sampling is happening and what data will be collected.
- The mystery shops are conducted by “professional” shoppers, so the data collected should be more complete and reliable.
Despite the benefits, there are also two key challenges: Cost and Objectivity.
According to the Mystery Shopping Providers Association, the average shop costs $65. For a 200-store chain that conducts three shops per store each month, that translates into $468,000 per year; load in some upfront set-up and reporting fees, and you could easily reach half a million dollars, quite a substantial sum of money.
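As a quick sanity check on that figure, here is a minimal sketch of the arithmetic using the numbers quoted above; the set-up and reporting fee is a hypothetical placeholder, not a figure from the text.

```python
# Back-of-the-envelope mystery shopping budget, using the figures quoted above.
COST_PER_SHOP = 65              # average cost of one mystery shop (USD)
STORES = 200                    # stores in the chain
SHOPS_PER_STORE_PER_MONTH = 3

annual_shops = STORES * SHOPS_PER_STORE_PER_MONTH * 12   # 7,200 shops per year
annual_shop_cost = annual_shops * COST_PER_SHOP          # $468,000 per year

SETUP_AND_REPORTING = 30_000    # hypothetical upfront set-up and reporting fees
total = annual_shop_cost + SETUP_AND_REPORTING

print(f"Shops per year: {annual_shops:,}")
print(f"Shop cost per year: ${annual_shop_cost:,}")
print(f"Total with fees: ${total:,}")                    # approaching half a million dollars
```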
Because of the cost, many retailers simply don't conduct mystery shops, or don't conduct them frequently enough.
Also, just as with inviting only buyers to fill out a survey, there is an inherent and well-understood bias:
- Mystery shoppers aren’t real customers.
- They’re not spending their own hard-earned dollars.
- Inevitably, this influences the way they interpret and score the customer experience in the stores they are paid to visit.
So if Customer Satisfaction Surveys and Mystery Shop data are too subjective and too infrequent, how can you truly understand customer experience in your stores? The answer: look at customer experience measures in the context of traffic and customer conversion.
Traffic and customer conversion data make customer experience measures more meaningful and more actionable by providing critical context that is otherwise missed.
Let's look at an example of in-store experience to highlight the issue.
Imagine what the customer experience would be like in the two stores described below.
In Store A, the check-out line is clogged, it’s hard to find a Sales Associate, and it appears that this store does not deliver the best customer experience. If you had a mystery shopper visit the store, or if you conducted an exit survey of customers, the scores would likely not be stellar.
Store B is a very different case. It’s very quiet, there’s hardly anyone in the store, and there are no lines at the check-out. It would be easy to find an Associate for help and it appears that customer experience would be great. If you conducted a mystery shop or surveyed customers, the scores likely would be positive – especially compared to Store A.
Here’s the problem: Store A delivered significantly higher sales than Store B despite the poorer customer experience.
- Store A had an abundance of traffic and generated more sales than Store B. Customer conversion rates likely sagged in Store A as some prospects left the store because they couldn’t find what they were looking for or gave up on the long check-out lines.
- Store B delivered an exceptional store experience but, due to the lack of prospects in the store, generated only modest sales. Conversion rates were very high, but without sufficient traffic the store was unable to generate significant sales, as the simple sketch below illustrates.
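To make the arithmetic behind these two points concrete, here is a minimal sketch with hypothetical traffic, transaction, and average-sale figures; none of these numbers come from the example above, they simply show how high traffic can outweigh a weak conversion rate.

```python
# Illustrative only: hypothetical traffic and transaction counts for the two stores.
stores = {
    "Store A": {"traffic": 1_000, "transactions": 150, "avg_sale": 40.0},  # busy, long lines
    "Store B": {"traffic": 200, "transactions": 70, "avg_sale": 40.0},     # quiet, great service
}

for name, data in stores.items():
    conversion = data["transactions"] / data["traffic"]   # share of prospects who bought
    sales = data["transactions"] * data["avg_sale"]       # sales = traffic x conversion x avg sale
    print(f"{name}: conversion {conversion:.0%}, sales ${sales:,.0f}")

# Store A: conversion 15%, sales $6,000 -> high traffic masks a weak conversion rate
# Store B: conversion 35%, sales $2,800 -> strong conversion, but not enough traffic
```

Swapping in your own traffic counts and transaction data lets you run the same comparison across real stores.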
Every retailer who has ever measured customer experience has likely seen conflicting results similar to these. So from management’s perspective, a great customer experience is one that strikes the right balance between serving customers and maximizing sales.
To do this, retailers need to understand that a great store experience is not an isolated phenomenon. It must be understood in context. Solving for “customer experience” without understanding the context will lead to sub-optimal results.
When you plot Store A and Store B on the customer experience and sales performance axes, it’s not hard to see how stores with higher customer experience scores can generate low sales, and stores with low customer experience scores can generate high sales. In fact, it makes perfect sense when you view customer experience scores in light of traffic and customer conversion data.
If these were your stores, without knowing their traffic volumes and customer conversion rates there's no way for you to understand what may be driving sales results. Any customer experience data you had would only add to the confusion: the store with the lower customer experience scores is producing better sales results than the store with the better experience.
Compare customer experience scores to conversion rates instead of sales results
When you plot customer experience scores against conversion rates, you can see that Store A has low customer conversion, which is consistent with its low customer experience scores. This is a powerful insight you cannot see when you evaluate customer experience against sales.
Once you see Store A's conversion rates, you will know that the store's high sales are being generated by its high traffic volume. You would also rightly conclude that this store is under-performing versus its traffic opportunity. The high sales volume has no bearing on the in-store experience, which needs to be improved.
Now consider your traffic and conversion data for Store B. This store has high customer experience scores and is effectively converting the traffic it receives. It is performing well but sales are suffering due to a lack of traffic into the store.
You would be very hard pressed to find a scenario where customer experience scores are high and customer conversion rates are low, as in quadrant 2 in the top left corner of the graph. In effect, this would mean that people loved the store experience but just didn't buy, which is not very likely. Results like those in quadrant 4 (low customer experience scores but high conversion), while more plausible, would also be unusual or the result of extenuating circumstances, for example long lines at the Apple store as people wait for the release of the latest iPhone.
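A minimal sketch of this quadrant view follows, assuming customer experience scores on a 0-100 scale and quadrants numbered as the text implies (quadrant 2 top left, quadrant 4 bottom right); both thresholds are hypothetical and would normally come from your own chain-wide averages.

```python
# Illustrative quadrant classification: customer experience (CX) score vs. conversion rate.
# Thresholds are hypothetical; in practice, derive them from chain-wide averages.
CX_THRESHOLD = 70            # CX score (0-100) separating "high" from "low"
CONVERSION_THRESHOLD = 0.20  # conversion rate separating "high" from "low"

def quadrant(cx_score: float, conversion_rate: float) -> str:
    high_cx = cx_score >= CX_THRESHOLD
    high_conv = conversion_rate >= CONVERSION_THRESHOLD
    if high_cx and high_conv:
        return "Quadrant 1: high CX, high conversion (a Store B) - work on driving traffic"
    if high_cx and not high_conv:
        return "Quadrant 2: high CX, low conversion - rare; re-check the data"
    if not high_cx and not high_conv:
        return "Quadrant 3: low CX, low conversion (a Store A) - fix the in-store experience"
    return "Quadrant 4: low CX, high conversion - unusual; look for extenuating circumstances"

print(quadrant(cx_score=55, conversion_rate=0.15))  # a Store A-like result
print(quadrant(cx_score=85, conversion_rate=0.35))  # a Store B-like result
```

Running every store through a classification like this each period highlights where traffic is being wasted and where the real opportunity lies.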
Use Conversion Rates as a proxy for customer satisfaction: Test it out for yourself.
To improve your sales, you need to understand what lever to pull. Traffic and customer conversion help you do that.