This is the fourth blog post in a series that examines key Customer Experience metrics that every Voice of the Customer (VoC) program should measure to better understand the visitors’ digital experience. In this post, we look at a key CX metric that provides the only way to truly know if your visitors accomplished what they wanted to do on your website: Task Completion.
Whether you’re trying to research the latest products, gather some DIY product support information or purchase that sweater you’ve been eyeing for the last couple of weeks, every time you access a website, you expect to be able to accomplish a goal, a specific intent.
Naturally, if you’re unable to accomplish that goal during your session, you will turn to other means to meet your need, like visiting a competing website or calling customer support. Visitors’ ability to accomplish their goals on a website can strongly impact a company’s relationship with those visitors, and potentially have a lasting impact on the bottom line. While there are ways to get a sense of how successful your visitors are during their sessions on your website, the only way to truly know is by asking them directly.
Task Completion gauges that self-reported level of success.
In this post, I look at why you need to measure this Customer Experience (CX) metric, along with a use case that shows where the value of measuring Task Completion becomes apparent. But first, let’s look at how you can get started with measuring this metric.
How you can measure ‘Task Completion’
Naturally, to measure whether someone completed what they wanted to do on your website, you need to know what their actual goal was in the first place. As such, the Task Completion metric delivers its full value only when it is paired with another key CX metric – Visitor Intent.
Once you’ve confirmed a visitor’s intent, you must then ask a follow-up Task Completion question in the context of that intent. To do so, phrasing your Task Completion question the following way is ideal:
“Were you able to complete the purpose of your visit today?”
Using a closed-ended question to measure Task Completion is recommended: not only is it easier for your respondents to answer, but it also greatly simplifies your analysis, since you will ideally want to segment those who completed their tasks from those who didn’t.
The answer choices you can use for this question come down to two options:
- A binary “Yes / No” scale
- A 3-point scale that includes “Yes”, “No” and “Only partially”
Between these two options, the binary scale is ideal because it prevents any “muddying of the waters” that can come from the “Only partially” answer choice. For example, some respondents may select “Only partially” if they completed their task but it didn’t give them their desired outcome (e.g., their intent was to find pants, and they found some during their visit, but not the pants they wanted). This ambiguous answer choice can make it difficult to identify who was truly successful during their visit, while also making it much trickier to segment your data.
Instead, it’s recommended to keep clean, clear binary answer choices for your Task Completion question, and then add any necessary follow-up questions when you want to dig further into your visitors’ experience.
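As a concrete illustration, here is a minimal sketch (in Python with pandas, using hypothetical column names and made-up survey data) of how cleanly a binary Task Completion question segments your responses:

```python
import pandas as pd

# Hypothetical survey export: one row per response to the binary
# Task Completion question. Column names are assumptions for this sketch.
responses = pd.DataFrame({
    "visitor_id": [101, 102, 103, 104, 105],
    "task_completed": ["Yes", "No", "Yes", "Yes", "No"],
})

# A binary answer splits respondents into exactly two clean segments.
completed = responses[responses["task_completed"] == "Yes"]
failed = responses[responses["task_completed"] == "No"]

completion_rate = len(completed) / len(responses)
print(f"Task Completion rate: {completion_rate:.0%}")
```

With an “Only partially” option, you would need a third segment and a judgment call about where it belongs in your analysis; the binary scale avoids that entirely.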
All things considered, an ideal Task Completion question should be asked this way following the Visitor Intent question in your research:
Why You Should Measure 'Task Completion'
1. Behavioral data and web analytics don’t tell the whole story
Web analytics provides ample information not only on how your visitors interacted with your website, but also on whether they completed specific actions you care about, such as making a purchase, browsing the latest products, or obtaining support information. But sometimes there’s more to the story than what behavioral data shows, and that missing piece can only be confirmed via direct customer feedback, which the Task Completion metric helps to uncover.
An example we’ve often seen with our clients: a customer made a purchase during their session, yet still reported that they did not complete their task. Digging further into their experience, they confirmed that although they made a purchase, they were not able to find all the items they were looking for, which meant they needed to go elsewhere to purchase the rest.
Which brings us to the next reason…
2. Failed visits hurt you in several ways, so it’s crucial to identify these bad experiences
Naturally, when your customers can’t accomplish what they want to do on your website, it leads to negative outcomes for both you and your customers.
If one visitor experiences a certain issue and tells you about it, it’s likely that many other visitors are experiencing similar issues, which ultimately can take a toll on your bottom line. By identifying these specific problem experiences, you can determine the key drivers of these visitors’ dissatisfaction and address them quickly to prevent future visitors from encountering them, or start planning longer-term updates that improve your visitors’ experience even further.
3. Task Completion provides a great way to measure success and ROI for website changes
One of the trickier aspects of updating a website is tracking how the changes impact your visitors’ ability to complete their desired tasks. This becomes especially important for complete website or mobile app redesigns, and other major projects your company might undertake on its digital properties.
To expand on this point, let’s look at a use case where the Task Completion metric enables you to delve deeper into your Customer Experience both before and after a website redesign project.
Use Case: Website Redesigns and the Recovery Curve
A website redesign is one of the most time-intensive and critical projects a brand can undertake. By reorganizing, streamlining, adding, and removing items on your website, you are reconfiguring many key aspects of your customer experience.
A great way to evaluate the success of a website redesign is by gauging how successful your visitors were in completing the tasks that they primarily came to your website to do both before and after the redesign.
Not unlike the customer satisfaction ‘recovery curve’ we have come to expect from a website redesign, many visitors may experience difficulties completing their desired tasks immediately following a website redesign as they reacquaint themselves with your website and the new path they must take to accomplish what they want to do.
To truly understand how your website redesign impacted your visitors’ ability to complete their tasks, you need to deconstruct this recovery curve by cross-tabbing your Task Completion and Visitor Intent data.
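One way to build that cross-tab, sketched in Python with pandas under assumed column names and illustrative data, is to pair each response’s stated intent and survey period (relative to the redesign at “T”) with its binary Task Completion answer:

```python
import pandas as pd

# Hypothetical responses: each row pairs a visitor's stated intent and
# the survey period with their binary Task Completion answer.
# All column names and values are illustrative, not real client data.
df = pd.DataFrame({
    "period": ["T-1", "T-1", "T-1", "T", "T", "T"],
    "intent": ["Get support", "Make a purchase", "Get support",
               "Get support", "Make a purchase", "Get support"],
    "task_completed": ["Yes", "Yes", "Yes", "No", "Yes", "No"],
})

# Completion rate per intent and period: share of "Yes" in each row.
rates = pd.crosstab(
    [df["intent"], df["period"]],
    df["task_completed"],
    normalize="index",
)
print(rates["Yes"])
```

Plotting each intent’s “Yes” share across periods gives you one recovery curve per intent, which is exactly the breakdown discussed below.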
Let’s look at an example of how a recovery curve could be broken down by Visitor Intent for a Telecom website. Based on our research, Telecom websites rank among the lowest of all industries in Task Completion (the average Task Completion rate is around 65 percent for telecommunication websites). Failing to complete a task on a Telecom website can push visitors toward several next steps, including calling or chatting with a support agent, which becomes very costly when you’re talking about thousands (and sometimes millions) of customers.
Here’s an example of what a Telecom website’s Task Completion rate can look like when broken down by Visitor Intent:
As we can see, the Task Completion rates for each of the Visitor Intents examined follow a similar pattern to the recovery curve mentioned above, except for one group:
In this chart, we can see the following:
- All Visitor Intents experienced the same notable drop immediately following the website redesign (“T”)
- Those coming to ‘Learn about products’, ‘Make a purchase’ or ‘Manage their account’ all experienced steady bounce-backs in their respective Task Completion scores after about 4 months (“T+4”).
- The “Overall” Task Completion score for the website has improved since the redesign.
- The Task Completion score for those who came to ‘Get support’ never fully recovered from the initial dip after the website redesign and remains 5 percent below what it was prior to the redesign (“T-1”).
Despite the website’s overall Task Completion score improving since the redesign, seeing the drop for those coming to get support information can set off some alarms internally. Why is this specific group of customers the only one that did not benefit from the website redesign?
With the help of a follow-up inquiry (“Please tell us why you were unable to complete your task.”), you can now collect additional insights and use Text Analytics tools to delve deeper into the specific pain points for this user group and identify the aspects of your redesign that you need to address.
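As a very rough stand-in for a full Text Analytics tool, even a simple word-frequency count over those open-ended answers (sketched here in Python with entirely made-up comments) can start to surface recurring pain points:

```python
import re
from collections import Counter

# Hypothetical open-ended answers to "Please tell us why you were
# unable to complete your task."
comments = [
    "Could not find the support article for my modem",
    "The search on the support page returned nothing useful",
    "Support chat was impossible to find after the redesign",
]

# Count non-stopword terms across all comments; a real Text Analytics
# tool would add stemming, phrase detection, and sentiment on top.
stopwords = {"the", "for", "my", "on", "was", "to", "not", "after", "a"}
words = Counter(
    w
    for comment in comments
    for w in re.findall(r"[a-z]+", comment.lower())
    if w not in stopwords
)
print(words.most_common(3))
```

In this toy sample, “support” and “find” dominate, hinting that support content is hard to locate, which is the kind of signal you would then validate with a proper analysis.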
“Do. Or do not. There is no try.”
Yoda was on to something when he uttered this famous line in The Empire Strikes Back. When it comes to Task Completion, you either did what you wanted to do on the website, or you didn’t. Plain and simple. However, the experience gap between these two extremes can be massive, and the implications of a successful or failed visit on your bottom line can be very different.
It’s crucial to be able to accurately segment these two groups to better understand the key drivers on your website that may be leading to success or failure. The most precise way to measure your visitors’ level of success is to confirm it with them directly via the Task Completion metric. This CX metric is crucial to better understanding your Customer Experience, and it also provides a great starting point for identifying the user groups whose experiences you should research further with your Voice of the Customer (VoC) program.
Did you like this post? Make sure to check out these other posts in our ongoing Customer Experience Metrics Series:
Banner image source: Pexels