
4 Factors that Affect the Quality of VoC Intelligence


May 1, 2013, By Duff Anderson

Garbage in equals garbage out, especially when it comes to the noisy online research medium

The saying "garbage in = garbage out" is especially relevant to the noisy online research medium. The confidence you can place in results is limited by the solicitation and collection-interface methodology, as well as by the design and type of the questions posed.

Capturing the experience of visitors in the context of self-initiated situations is the key to defining VoC business intelligence. The main factors affecting the quality and reliability of VoC intelligence fall into four areas.

1 - Solicitation on arrival – response rate

Passive, exit, and conditional solicitation all bias results, because the experience itself affects the choice to participate. A balanced view of the total visitor experience is only possible when opt-in occurs before the experience begins, ideally on arrival.

When visitors are solicited on arrival to the site, iperceptions sees between 2% and 5% agree to participate, completing their feedback immediately after their visit.
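
As a rough planning aid, this response rate can be turned into a quick traffic calculation. The sketch below is our own illustrative code; only the 2-5% opt-in figure comes from the text, and the traffic and completion numbers are assumptions.

    def expected_completes(visitors, response_rate, completion_rate):
        # visitors: number solicited on arrival
        # response_rate: share who opt in (0.02-0.05 per the figure above)
        # completion_rate: share of starters who finish (assumed here)
        return visitors * response_rate * completion_rate

    # Assumed example: 100,000 solicited visitors, 3% opt-in, 90% completion
    print(expected_completes(100_000, 0.03, 0.90))  # -> 2700.0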

2 - Collection interface – completion rate and respondent fatigue

Completion rate is the number of respondents who answer all of the survey's required questions divided by the number who start the survey. A low completion rate suggests that your survey is too long or poorly designed.
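
Expressed as code, the definition is a simple ratio (a minimal sketch, with function and variable names of our own choosing):

    def completion_rate(completed, started):
        # completed: respondents who answered all required questions
        # started: respondents who began the survey
        return completed / started if started else 0.0

    print(f"{completion_rate(900, 1000):.0%}")  # -> 90%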

Our research suggests that survey fatigue (dropping out of the survey before completing the required questions) is driven more by the number of screen interactions the collection interface requires than by the number of questions itself.

A screen interaction is any mouse click or scroll required of the respondent while completing the study.

The average completion rate holds at roughly 90% through the first 25 screen interactions, and exponential drop-out begins at about 35. Ideally, design the survey so that any respondent, regardless of skip logic (question flow), can complete it in no more than 40 screen interactions.

[Graph: completion rate vs. number of screen interactions, illustrating respondent fatigue]
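
To make these thresholds operational, here is a small sketch (our own illustrative code, with only the 25/35/40 breakpoints taken from the paragraph above) that audits one skip-logic path of a survey:

    SOFT_LIMIT = 25     # completion holds near ~90% up to here
    FATIGUE_POINT = 35  # exponential drop-out begins around here
    HARD_BUDGET = 40    # no path should require more than this

    def audit_path(interactions_per_question):
        # interactions_per_question: screen interactions each question costs
        total = sum(interactions_per_question)
        if total <= SOFT_LIMIT:
            return f"{total} interactions: comfortable"
        if total <= FATIGUE_POINT:
            return f"{total} interactions: acceptable, monitor drop-out"
        if total <= HARD_BUDGET:
            return f"{total} interactions: at risk of exponential drop-out"
        return f"{total} interactions: over budget, shorten this path"

    # Assumed example: 20 single-select questions (1 interaction each)
    # plus one open-ended question costed at 5 interactions
    print(audit_path([1] * 20 + [5]))  # -> "25 interactions: comfortable"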

iperceptions’ collection interface auto-advances to the next question, so it requires only one screen interaction per data point collected (using the single-select question format), yielding industry-best completion rates. This allows more questions to be asked before survey fatigue becomes noticeable, giving you maximum information and value from an engaged visitor without causing irritation or hurting completion rates.

3 - Type of questions and number of response choices

Questions designed in ‘single select’ format require the minimum number of screen interactions (one) to complete and automatically advance to the next question. For this reason, ‘single select’ is the preferred question format whenever possible.

The number of possible responses for a ‘single select’ question is ideally six or fewer. More choices demand considerably more reading effort and can make it difficult for the respondent to choose the appropriate answer.

Multiple-select (‘select all that apply’) questions are sometimes required by the decision support needed. When using this format, ideally limit the list to nine items or fewer.

‘Open-ended’ questions, where respondents type a response in their own words, require the greatest effort to complete. Excessive use of open-ended questions can severely hurt completion rates. Ideally, place them at the end of the survey and always make them optional.
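
These per-type guidelines can feed the audit sketched earlier. Only the single-select cost of one interaction comes from the text; the multi-select and open-ended costs below are our own rough assumptions.

    # Rough screen-interaction cost per question type; single select is 1
    # per the text, the other two figures are illustrative assumptions.
    INTERACTION_COST = {
        "single_select": 1,  # one click, auto-advances
        "multi_select": 4,   # a few selections plus a "next" click
        "open_ended": 6,     # click into the field, type, submit
    }

    def estimate_interactions(questions):
        # questions: list of question-type names, in survey order
        return sum(INTERACTION_COST[q] for q in questions)

    survey = ["single_select"] * 15 + ["multi_select"] * 2 + ["open_ended"]
    print(estimate_interactions(survey))  # -> 29, within the 40 budget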

4 - Number of questions

The optimal number of questions varies with the types of questions used and the survey flow (skip logic). As noted above, determine optimal survey length by counting the screen interactions required to complete the survey as designed, not simply the number of questions.

To maximize the information you get from the overall study, and from the valuable commitment your visitors are making, ask certain questions only of select groups, based on what the respondent has already revealed.

For example, it is possible to field 40 questions: 20 that all respondents answer, plus four sets of five questions, each set shown to a different respondent group. No individual respondent answers more than 25 questions, yet the study collects answers to all 40.

Question flow, e.g.: all respondents answer questions 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, then branch into one stream:
Stream A) 21, 22, 23, 24, 25
Stream B) 26, 27, 28, 29, 30
Stream C) 31, 32, 33, 34, 35
Stream D) 36, 37, 38, 39, 40

Use branching and skip logic whenever possible to maximize value.
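
As a minimal sketch of the flow above (our own illustrative code; in a live survey the stream would be chosen by an earlier answer, such as visit purpose):

    # Questions 1-20 are asked of everyone; each respondent then gets
    # exactly one of four five-question streams.
    CORE = list(range(1, 21))
    STREAMS = {
        "A": list(range(21, 26)),
        "B": list(range(26, 31)),
        "C": list(range(31, 36)),
        "D": list(range(36, 41)),
    }

    def question_flow(stream):
        # stream: "A", "B", "C", or "D", chosen by an earlier answer
        return CORE + STREAMS[stream]

    flow = question_flow("C")
    print(len(flow))  # -> 25 questions per respondent, 40 in the study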

Image source: AmsterSam The Wicked Reflectah

Duff Anderson

Duff Anderson is a visionary in Voice of the Customer research with over 20 years’ experience. As SVP and Co-founder at iperceptions, Duff is responsible for providing expert advice to organizations on how to gain a competitive advantage across the customer lifecycle and improve the customer experience.

