More and more contact centers see the value of Quality Assurance (QA) in improving their operations, both in terms of efficiency and effectiveness. However, in most cases the effectiveness of the process is measured internally, based on what the organization believes to be the right approach or behavior. This internally focused approach can conflict with what customers actually feel and experience. In fact, we have seen situations where internal quality results correlated negatively with customer satisfaction (CSAT) and, by extension, customer retention!
If high customer satisfaction and retention are part of the center's mandate, then it is a best practice to include customers' feedback and input as part of the process.
The process is rather simple! Customers are randomly selected to answer a short (5–7 question) survey and score their satisfaction with the call. Quality Listeners then listen to these same calls for coaching opportunities while scoring the agent against a number of predefined mandatory elements (compliance). With multiple calls scored during a typical reporting period (best-in-class centers review two contacts per agent per week), a clear trend in performance can be observed. The combined internal and external scores provide a true picture of what needs to be done to improve customer satisfaction (CSAT) while ensuring the mandatory requirements have been fulfilled.
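To make the combined-scoring idea concrete, here is a minimal sketch in Python. The data structure, score scales, and the `weekly_trend` helper are all hypothetical illustrations, not a prescribed implementation: each reviewed call carries an internal compliance score and the customer's survey score, and the helper averages both per agent per week so a trend can be read off.

```python
from statistics import mean

# Hypothetical evaluation records: each reviewed call carries an internal
# compliance score (0-100) and the customer's survey score (1-5).
evaluations = [
    {"agent": "A1", "week": 1, "compliance": 90, "csat": 4},
    {"agent": "A1", "week": 1, "compliance": 85, "csat": 3},
    {"agent": "A1", "week": 2, "compliance": 95, "csat": 5},
    {"agent": "A1", "week": 2, "compliance": 92, "csat": 4},
]

def weekly_trend(records, agent):
    """Average internal (compliance) and external (CSAT) scores per week."""
    weeks = sorted({r["week"] for r in records if r["agent"] == agent})
    trend = []
    for w in weeks:
        calls = [r for r in records if r["agent"] == agent and r["week"] == w]
        trend.append({
            "week": w,
            "avg_compliance": mean(c["compliance"] for c in calls),
            "avg_csat": mean(c["csat"] for c in calls),
        })
    return trend

print(weekly_trend(evaluations, "A1"))
```

Reading the two averages side by side is what surfaces the mismatches discussed above: an agent can trend high on compliance while trending flat or low on CSAT, which is exactly the signal an internally focused process misses.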
Transitioning from an internally focused perspective to one that incorporates customer feedback can raise a number of concerns among front-line staff. It has been suggested that customer survey results could be biased, as the client may be upset with the outcome of the transaction and not necessarily with the way the agent operated and responded.
In such cases, the survey may rate the outcome of the interaction rather than the agent's activities. This can produce a result different from what an objective evaluation of the agent's activities and process would have yielded. In other words, the agent may have followed the desired process correctly and done everything right, and the customer was still not satisfied. Of course, the opposite is also possible: a customer scores the call higher because of a positive outcome (for example, a higher-than-expected discount or faster service) rather than because of the agent's performance.
The key point is that this new process is focused on coaching and continuous improvement. It is not meant to be punitive. Thus, if a customer gives a low score, the Quality Listeners will offer coaching assistance on how the interaction might be improved. If the call was handled properly and the low score was due to the final result of the call rather than agent performance, little or no coaching guidance may be required, and the agent will know they performed in a professional manner. It has been said that the customer is not always right, but the customer is always the customer.
In addition, in the long run and with multiple evaluations per reporting period, such biased results (high or low) may balance each other out, or at least produce a consistent variance across all agents and interactions. A consistent variance will not affect trend analysis or the improvement it reveals (continuous improvement). At the same time, trend analysis based on customer input will make coaching efforts more effective in improving agent performance.
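The "consistent variance" point can be shown with a few lines of arithmetic (the weekly numbers below are hypothetical): a constant bias applied to every survey score shifts the level of the results but leaves the week-over-week changes, and therefore the trend, identical.

```python
# Hypothetical weekly average CSAT scores on a 0-100 scale.
weekly_csat = [68, 72, 78, 82]
bias = -5  # a consistent outcome-driven penalty applied to every score

# Week-over-week change for the raw scores...
deltas = [b - a for a, b in zip(weekly_csat, weekly_csat[1:])]

# ...and for the uniformly biased scores.
biased = [s + bias for s in weekly_csat]
biased_deltas = [b - a for a, b in zip(biased, biased[1:])]

print(deltas)         # [4, 6, 4]
print(biased_deltas)  # [4, 6, 4] -- the improvement trend is unchanged
```

This is why the process can tolerate some outcome-driven scoring: as long as the bias is roughly consistent, the trend used for coaching and continuous improvement still tells the truth.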
Using a customer satisfaction survey requires a paradigm shift in how we look at QA results and their role in improving contact center operations.
Perhaps it is time for many contact centers to take a deeper look at their QA process and ensure that it is being used not as a punishment but as a powerful tool for improving agent and center performance.
Please Contact ApexCX for all your customer experience needs.
(Nov 18, 2019)