The other day I stopped in at a national coffee chain to work and have a cup of coffee. Soon after, I received a “transactional survey” asking about my experience. I often skip these surveys because they feel like nothing more than a black box of information that no one is actually looking at. But since I’m a frequent customer there, I decided to fill this one out. One of the first questions asked me to rate the “Quality of the food you ordered”. Problem was…I didn’t order food. Yet I was forced to answer. As a result I quit the survey, and they received NO feedback at all from a good, frequent customer. That got me wondering how often surveys are started and not completed. We track that abandonment rate for the surveys we conduct – and it’s usually pretty low.
I’ve found a few secrets that can help ensure your research effort doesn’t make a big unforced error like this one. We use a simple process. While we can’t be SURE that a mistake never gets made (hey, the world’s not perfect!), we can avoid almost all mistakes – and certainly the easily avoidable ones.
Have Several People Review It
Once a survey instrument is drafted, make sure that several people review it. Charge no fewer than two of them with reviewing the survey in a VERY detailed way. Then use an iterative process where the team reviews and edits until it’s correct.
Look for typos, questions that don’t read well, questions with multiple meanings, etc. And look closely at questions with “required” responses. Do you truly want to make them required? If so, make certain the question fits every situation. Also, have reviewers give different answers and follow the survey as it goes through the different lines of questioning (the branching, often called “skip logic”).
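To make the branching idea concrete, here’s a minimal Python sketch of how a survey might route respondents based on a prior answer – the question IDs and answers are hypothetical, not from any particular survey tool. The coffee-shop mistake above is exactly what this prevents: a “required” food-quality question should only appear on the path where food was ordered.

```python
# Hypothetical question IDs for a simple transactional survey.
QUESTIONS = {
    "ordered_food": "Did you order food?",
    "food_quality": "Quality of the food you ordered",
    "overall": "Overall experience",
}

def next_question(current_id, answer):
    """Route the respondent based on their previous answer (skip logic)."""
    if current_id == "ordered_food":
        # Only ask about food quality if food was actually ordered.
        return "food_quality" if answer == "yes" else "overall"
    if current_id == "food_quality":
        return "overall"
    return None  # end of survey

# A respondent who ordered no food skips straight to the overall rating:
path = []
q = "ordered_food"
while q is not None:
    path.append(q)
    q = next_question(q, "no")
print(path)  # ['ordered_food', 'overall']
```

Testing a survey means walking every one of these paths with different answers, which is why having reviewers deliberately vary their responses matters.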
Recruit a “Buster” from Outside the Team
Have someone who is NOT part of your survey/research team review and test the survey. Ask them to try it every which way. Check the ease of use. Have them read the intro and notice what happens after the survey gets submitted. Have them see if they can make the survey NOT work. Let them try strange things. (Believe me, some respondent will do something you never dreamed of!)
Do a Small Test Batch and Keep Monitoring
And just when you think everything is perfect…assume it may not be. It’s always a good idea to test a small batch first. That way, if there is a problem, it will only affect a small number of respondents. So, when we send a survey, we typically give the initial “test” batch about 24 hours and watch the results. How many we send depends on the size of the distribution list. As an example, if it’s a 5,000-10,000 distribution we may pull out 100-250.
Watch how many surveys get opened and started, the completion rate, etc. Look through the actual responses. Do any look strange – such as not answering the question in a logical way? Are they complete? Is there a specific spot where surveys stop? Are there specific questions that do not get answered?
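The pilot-batch sizing and the funnel numbers above can be sketched in a few lines of Python. This is just an illustration of the rough ratios mentioned (100-250 invites out of a 5,000-10,000 list, i.e. roughly 2-3%); the function names and the 2.5% figure are my own assumptions, not a fixed rule.

```python
def pilot_size(list_size, fraction=0.025, lo=100, hi=250):
    """Pick a pilot batch: roughly 2.5% of the list, clamped to 100-250."""
    return max(lo, min(hi, round(list_size * fraction)))

def funnel_metrics(sent, opened, started, completed):
    """The basic rates to watch during the ~24-hour pilot window."""
    return {
        "open_rate": opened / sent,
        "start_rate": started / opened,
        "completion_rate": completed / started,
    }

print(pilot_size(8000))                        # 200
print(funnel_metrics(200, 120, 80, 60))        # completion_rate: 0.75
```

A sharp drop between “started” and “completed” is the signal to dig into the responses and find the question where people are bailing out – like the forced food-quality question that ended my coffee-shop survey.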
Once we are comfortable that the survey looks correct, we send the rest. But it doesn’t end there. Be vigilant. Watch the results regularly. Look for irregularities that you may have missed.
A survey is only valuable if someone actually responds to it. Duh. It’s hard enough getting your target respondents to open and take the survey. You don’t need to shoot yourself in the foot by confusing them. Take your time and get it right. Read closely.
Let’s make sure your organization has a solid path to the future. Please feel free to reach out, and let’s explore how I can help you and your business succeed. No pressure. Just an informal discussion to explore some ideas.