
Designing a Customer Feedback Survey

When it comes to the design of a voice of the customer (VOC) survey, there's been a long-standing debate about the right placement of key customer satisfaction questions, such as "Overall Satisfaction with the Company" (OSAT) and "Willingness to Recommend." Some place them at the beginning of their customer feedback surveys, while others lean toward placing them at the end.

There's no right or wrong answer to the issue of OSAT question placement. However, the location of these key questions can and likely will alter your survey results, so it's important to carefully consider which placement makes the most sense for your business needs.

Placing OSAT Questions First

Are you trying to capture your customers' top-of-mind, immediate reactions? If the answer is yes, then questions about overall satisfaction with the company and likelihood to recommend or renew should be placed up front in your survey design. Be aware that placing OSAT questions at the beginning of your survey can yield higher scores than placing them at the end—although that also depends on the content of the rest of your survey questions.

The advantage to placing OSAT questions up front is that you'll likely get a higher total number of responses to them. Even if respondents drop off later in the survey, you'll still have captured their input on your key questions. Placing key questions first also means that if you make changes to the survey content that follows, you won't have to worry about whether those updates will impact your OSAT questions.

Placing OSAT Questions Last

Placing the key OSAT questions at the end of your survey allows you to walk your customers or clients through their entire experience with your company before they answer questions about overall satisfaction levels. This process may help them recall certain issues or events that they might not have remembered immediately.

As an example, you might ask your customers questions about their experience with the invoicing process or with timeliness/accuracy of deliverables, and then follow up with an OSAT question such as "How satisfied are you overall with the billing process?" Because those lead-in questions can trigger memories of moments that fell short of expectations, scores on the OSAT questions that follow can sometimes be lower. However, the advantage is that this placement can make areas for improvement more obvious and clarify the specific factors that cause dissatisfaction.

Placing key questions last also works well if a certain amount of time has elapsed since an individual interacted with your company, or if the last time you gathered this type of feedback happened a while ago. The preliminary parts of the survey can help jog your customers' memories so they can more mindfully respond to OSAT questions.

However, if you decide to place OSAT questions at the end of your survey, be aware that any change you make to the survey content that precedes your key questions can add bias and possibly change the outcome of the OSAT responses. For example, if partway through the survey cycle you add pointed rating questions about areas known to be customer pain points, respondents may assign a lower rating on the OSAT question that concludes the questionnaire. When you compare the results with previous surveys, this can make trends appear to move downward. Whenever you update questions that come earlier in the survey, you'll need to recheck the overall questionnaire to see how those changes will impact responses to the questions that follow. If you want to keep the ability to tweak your questionnaire as needed, you may want to stick with asking your OSAT questions up front.

What If I'm Still Not Sure?

If you're still unsure which placement is best for your program, you can test the alternatives: Launch your survey with the key questions at either the beginning or the end, gather data for a set time period, and review how the results trend. Then run the questions in the alternate position for the same length of time to determine which placement gives you the best data to drive change. One caution: With this approach, you need to make sure that the customers you survey in each test period are similar in nature, since different client types can give you different results.
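For readers who want to quantify such a placement test, the comparison described above can be sketched in a few lines of code. This is only a minimal illustration with made-up rating data: the function name, the sample scores, and the use of Welch's t-statistic as a rough significance check are assumptions for the example, not something prescribed by the article.

```python
from statistics import mean, variance

def compare_placements(scores_first, scores_last):
    """Compare OSAT ratings from two test periods (hypothetical data).

    Returns each cohort's mean score plus Welch's t-statistic, a rough
    indicator of whether the difference between placements is meaningful.
    """
    m1, m2 = mean(scores_first), mean(scores_last)
    v1, v2 = variance(scores_first), variance(scores_last)
    n1, n2 = len(scores_first), len(scores_last)
    # Welch's t-statistic: mean difference scaled by pooled standard error.
    t = (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)
    return m1, m2, t

# Hypothetical 1-10 OSAT ratings gathered during each test period.
up_front = [8, 9, 7, 8, 9, 8, 7, 9]
at_end = [7, 8, 6, 7, 8, 7, 6, 8]

m1, m2, t = compare_placements(up_front, at_end)
print(f"up-front mean: {m1:.2f}, end mean: {m2:.2f}, t = {t:.2f}")
```

In practice you would feed in the real scores from each test period and, ideally, use a full statistical test; the point is simply that the two trends should be compared on like-for-like customer groups over equal time windows.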

Another option—less common, but also insightful—is putting OSAT questions at both the beginning and end of the survey, worded slightly differently. For instance, at the beginning of the survey, you might ask customers to rate their initial perception of your company overall, then at the end, ask their overall satisfaction with your company. This lets you trend the results for each placement side by side over the same time period and spot differences that point to changes in the customer experience. If the initial perception scores consistently trend significantly higher than the OSAT scores at the end, for example, you can develop a line of questions in between to dig in and identify which areas are triggering the lower OSAT score and are in need of improvement.

Whatever you decide as you finalize your survey design, make sure you own the choice you make for placement of the fundamental customer satisfaction questions. It's important to be able to use the results of the questions to determine your next move to increase not only satisfaction scores, but the value of your VOC program as a whole. Question placement really does impact your overall results, and it's extremely important to make a well-thought-out decision that you can stick with.


Heather Mitchell consults on enterprise feedback management and customer survey programs as a senior project manager with MarketTools CustomerSat. She has 12 years of experience in the market research field.
