As Microsoft continues to invest in AI technologies across Dynamics 365 and Power Platform, many enterprise organizations are rapidly adopting Copilot in Microsoft apps such as Dynamics 365 Customer Service. Unlike solutions in other business areas, customer service solutions are particularly sensitive for several reasons.
First, the customer service team acts as the organization’s frontline, dealing directly with customer inquiries and issues. Moreover, most interactions between support agents and customers occur in real time, leaving zero tolerance for error. Any customer frustration can quickly hurt the customer satisfaction rate.
Additionally, a new tool like Copilot in Customer Service must be well tested and validated before it is introduced to customer service agents. In the era of AI and generative AI, organizations face the critical question of how to build their testing strategy for these innovative tools.
Copilot business value
Before delving into Copilot test cases, let’s quickly discuss the business value of Copilot in Customer Service. Copilot and AI features in the customer service world act as an agent assistant. Copilot helps agents with tasks such as retrieving information from the knowledge base, drafting emails, or providing quick summaries of customer conversations or cases with long threads, multiple notes, and emails.
Leveraging Copilot in Customer Service brings quick wins to the business. For instance, reduced handle times for customer requests allow agents to focus on core tasks. And since agents can provide more accurate and timely responses, organizations see improved customer satisfaction levels.
A closer look at each Copilot feature reveals the need for agent review before presenting any information to the customer. Take, for example, the case summary feature. A disclaimer indicates that this is an AI-generated summary, emphasizing the need to “Make sure it’s appropriate and accurate before using it.” This highlights the critical role of human oversight in ensuring the accuracy and appropriateness of AI-generated content. It reinforces the value of Copilot as a supportive tool rather than a replacement for human judgment and expertise.
Defining success metrics
Having covered the basics, it’s crucial to establish success metrics for implementing Copilot in Customer Service. Most enterprise customers follow a standard process for introducing new tools or features. While this approach is recommended and applicable to almost all new Dynamics 365 features, the success criteria for Copilot should address several specific factors, due to its unique functionality and impact:
- Time efficiency: Measure the amount of time Copilot saves agents in performing their tasks. This can be quantified by comparing the time taken to complete tasks with and without the assistance of Copilot.
- Relevance and helpfulness of responses: Evaluating Copilot’s responses isn’t as simple as marking them right or wrong. Instead, measure the percentage of responses that fall into each of the following levels of helpfulness (one way to tally these ratings from a pilot log is sketched after this list):
- Totally irrelevant: Assistance that does not address the agent’s inquiry at all, providing no useful information for handling customer queries.
- Partially helpful: Responses that offer some relevant information but may not fully equip the agent to resolve the customer’s issue, possibly requiring further clarification or additional resources.
- Mostly helpful: Assistance that is largely on point, providing substantial information and guidance towards resolving the inquiry, with minimal need for further action.
- Completely helpful: Responses that fully equip the agent with the necessary information and resources to address and resolve the customer’s issue without any need for additional support or clarification.
- Agent satisfaction and ease of use: Assess how user-friendly and intuitive Copilot is for customer service agents. Agent satisfaction with the tool can be a key indicator of its usability and effectiveness in a real-world setting.
- Impact on customer satisfaction: Monitor changes in customer satisfaction metrics, through surveys or by analyzing customer feedback, and check whether there is a noticeable improvement after the implementation of Copilot.
- Return on investment: Weigh the overall costs of implementing Copilot against its benefits. As with any feature intended for user adoption, this evaluation is essential. Remember, Copilot is not a new product but a feature within Dynamics 365 Customer Service, and it incurs no extra cost for most customers.
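As a rough illustration of how the first two metrics above might be tallied from a pilot log, here is a minimal Python sketch. The record fields, rating labels, and sample numbers are assumptions for the example only, not part of Dynamics 365 or its APIs; adapt them to however your team records pilot results.

```python
from statistics import mean
from collections import Counter

# Hypothetical pilot log: one record per handled case.
# "handle_minutes" is the time to resolution, "used_copilot" marks whether the
# agent had Copilot assistance, and "helpfulness" is the agent's rating of
# Copilot's responses (None when Copilot was not used).
pilot_log = [
    {"handle_minutes": 14, "used_copilot": True,  "helpfulness": "completely helpful"},
    {"handle_minutes": 22, "used_copilot": False, "helpfulness": None},
    {"handle_minutes": 11, "used_copilot": True,  "helpfulness": "mostly helpful"},
    {"handle_minutes": 25, "used_copilot": False, "helpfulness": None},
    {"handle_minutes": 18, "used_copilot": True,  "helpfulness": "partially helpful"},
]

# Time efficiency: average handle time with vs. without Copilot.
with_copilot = [r["handle_minutes"] for r in pilot_log if r["used_copilot"]]
without_copilot = [r["handle_minutes"] for r in pilot_log if not r["used_copilot"]]
print(f"Average minutes saved per case: {mean(without_copilot) - mean(with_copilot):.1f}")

# Relevance and helpfulness: share of each rating among Copilot-assisted cases.
ratings = Counter(r["helpfulness"] for r in pilot_log if r["used_copilot"])
total = sum(ratings.values())
for rating, count in ratings.items():
    print(f"{rating}: {count / total:.0%}")
```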
Start your Copilot journey with confidence
The best way to test and measure Copilot’s success is through real scenarios, real agents, and real customers in a production environment. This is why we recommend starting quickly with a pilot or initial phase and gradually rolling out Copilot capabilities. You can closely monitor the results and feedback during the initial phase.
We use the name ‘Copilot’ and not ‘Autopilot’ for a good reason. Essentially, Copilot in Customer Service acts as an assistant to the agents. While it proves useful in some situations, there are instances when questions or requests become too complex, requiring human expertise. However, even in these scenarios, business operations continue seamlessly, thanks to the human agents.
In Customer Service, think of each Copilot feature as being in one of two categories: those that do not rely on the knowledge base and those that do.
The easiest way to begin is with the first category, which includes summarization features. This category has minimal risk and requires less change management effort. This article provides in-depth information on this.
Test and optimize Copilot
A pilot phase is vital for testing Copilot: document the results and collect feedback from your agents. The best candidates for the pilot phase are your most highly skilled agents. They have the expertise to handle customer questions efficiently, which lets them give thorough feedback without disrupting normal call center operations. They also help ensure the proper use of Copilot, preventing incorrect or unverified information from being passed from Copilot to customers.
During the pilot phase, you need to keep track of your success metrics and aim for ongoing improvement. This mainly involves improving the knowledge base articles. Copilot in Customer Service is not a magic tool; its performance depends on the quality of the information it can access. Providing Copilot with clear and complete knowledge articles will help it to produce clear and correct results.
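Because Copilot’s output quality tracks the quality of the knowledge base, it can help to screen your articles for obviously thin or incomplete entries before the pilot. The sketch below is purely illustrative: the field names, export format, and word-count threshold are assumptions for the example, not a Dynamics 365 schema or API.

```python
# A minimal, illustrative check of exported knowledge articles before a pilot.
# The "title"/"content" fields and the 50-word threshold are assumptions for
# this sketch; adapt them to your own export format and quality bar.
def find_weak_articles(articles, min_words=50):
    """Return articles that look too thin or incomplete for Copilot to use well."""
    weak = []
    for article in articles:
        title = (article.get("title") or "").strip()
        content = (article.get("content") or "").strip()
        issues = []
        if not title:
            issues.append("missing title")
        if len(content.split()) < min_words:
            issues.append(f"body shorter than {min_words} words")
        if issues:
            weak.append({"title": title or "<untitled>", "issues": issues})
    return weak

sample = [
    {"title": "Reset a customer password", "content": "Step 1 ..."},
    {"title": "", "content": "Refer to the internal wiki."},
]
for item in find_weak_articles(sample):
    print(item["title"], "->", ", ".join(item["issues"]))
```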
Microsoft is heavily investing in integrating AI capabilities into Dynamics 365. Organizations with live implementations of Dynamics 365 Customer Service should view this as an opportunity to enhance their customer service operations. While testing remains essential, they should not hesitate to deploy these native capabilities in production mode, especially since Copilot in Customer Service comes without any extra licensing costs.
Generative AI is evolving rapidly, and organizations that start to adopt and utilize it early will secure a competitive advantage in the future!
Learn more
For more details on how to enable Copilot for a specific set of users by using agent profiles, refer to Enable Copilot features in Customer Service | Microsoft Learn.