In Steve Levine’s most recent blog post, he challenges us to think about the changes in marketing research. Has it been an evolution or a revolution? From my perspective, it might just be devolution.
First, a little context. I am a dinosaur. When I started in marketing research we were still doing door-to-door interviewing, and CATI was the hot new topic. I can remember using checklists to randomize attributes in a brand rating (yup, questionnaires actually had flaps, and someone would manually check off which attribute to ask first). Data was stored on cards (and later tapes) back then. And I bet I can still recite how to do a stratified block sampling design (when “block” literally meant a city block).
Technology has changed all of that. Now a majority of our work is done online. But the funny thing is, the way we ask questions really hasn’t changed since 1979. Of course, we can now easily randomize attributes and even build checks into our survey engines to identify falsification. But when you think about it, we’re still asking consumers to rate our brands, products and services on a 7- or 9-point scale – the same as we did in 1979!
Now you may be asking yourself, is that a bad thing? To answer that question, think about what you do when you’re working or playing online at home. What’s going on? You may be streaming Pandora, your preferred IM is running, your email is alerting you to the newest offer from (insert your favorite online retailer), Facebook is pinging you, and you’re surfing to book your next vacation. My question is: do you think the average consumer is doing anything different when completing our surveys? I think not.
So our surveys have to exist in a multimedia, real-time environment – and I think they’re failing. Very simply, our surveys are boring and getting lost in the digital clutter. How can we expect respondents to truly pay attention when they are rating 3 brands on 25 attributes using the same scale we used in 1979? Quite simply, we can’t.
The days of arguing the merits of online versus telephone interviewing are long gone. Online is here to stay, so let’s utilize the technology to the fullest to increase engagement with our surveys.
MSI recently launched a 5,000-interview examination of alternative methods of asking questions. We fielded a number of cells in which we systematically increased the degree of interaction the respondent had with the survey. While not quite gamification, we looked at increasingly interactive methods of collecting brand ratings. The question is: what impact does this have on the data itself, respondent engagement, mid-interview terminates, and respondent satisfaction? Our hypothesis is that the more interactive the survey, the more engaged the respondent and the better the data (and possibly different data as well).
Results should be available in late October. Check back here for results as they become available, or contact us for the complete insights into what we found.