Having spent the best part of 25 years at a research agency directly involved in designing questionnaires, I very rarely got the chance to be on the receiving end of a survey.
I was either excluded for being in the industry, or was working such long hours that I was never around to pick up the phone at home or walk past interviewers in the street. That changed when I became a consultant, and I've done more surveys in the past two years than in the previous two decades.
It's the last three surveys I took part in that prompted this blog. All were perfectly fine and dandy, but each rested on an underlying assumption – took something for granted – that was never actually addressed in the questioning itself. So at the end of all three I was left with a 'vague ennui': a slight feeling of frustration, a suspicion that what I actually thought had been obscured by the questionnaire design itself. Let me explain why.
It’s about not leaving out the one question that sets the context for all the other answers.
I won't name the clients or the agencies involved, as that would be unfair, and the examples add up to a more general point rather than a critique of the individual surveys. Also, working in the industry, I know I am the respondent from hell: the gamekeeper turned poacher.
It’s just that in each case the addition of one simple question at the start could have transformed their understanding of my answers.
Exhibit One is an online survey I completed this very morning, for a UK broadcaster, mainly about their On Demand service and their App. It was a well-designed questionnaire, did not overstay its welcome and was quite enjoyable to fill in, but at the end of it I was left frustrated that they had missed one important fact.
There was a lot of talk about the nice functionalities in the App, about personalised experiences and various things that would make me more likely to use it, but the survey ignored the central point, the thing that would explain why I hardly ever use it: the channel doesn't show programmes I particularly like, or indeed that are even aimed at me. The thing is, I am a 50-year-old male and lie outside their target market, so no amount of customised, personal experiences and design enhancements is going to make me watch a programme about the love lives of Welsh teenagers, botched plastic surgery or going on tour with One Direction.
There was just one open-ended question into which I managed to squeeze this point, the one fundamental reason I don't use the App: their channel isn't aimed at me, and I don't have a problem with that, as there is plenty of other great TV to watch. So I am not sure how my data will help them now. Nice app, shame about the content.
Exhibit Two. Last month I was invited to attend a focus group being run on behalf of our local council. It was the classic format of ten strangers in a room in a community centre after work. It was well moderated, but the central objective soon emerged. The council needed to make cuts and we, the assembled council tax payers, were going to help them by deciding what to cut.
At one point the moderator said – and I paraphrase – that in this period of austerity, obviously cuts had to be made somewhere. To give him his due, when I contested this statement (using an argument that's a whole other blog) he did concede that it was not a politically neutral one. Indeed, the broad sentiment was that the group would actually tolerate a rise in council tax if they thought it would go on better services, but they believed that all politicians were corrupt and the money would be wasted on council salaries (the cynicism was palpable – again, a whole other blog).
So, regardless, we were given a list of local services and forced to tick the five that we thought should be saved. Next time the bin man is late it's my fault, as I thought that street lighting was more important! A ridiculous situation, but again it was all based on two assumptions – that everyone wants lower council tax, and that we all accept that cuts have to be made in local services rather than city bonuses.
Exhibit Three is slightly more benign and was a street interview just after I stepped off a bus. The interviewer was fine, but again the interview about my ‘experience’ was frustrating. I was asked about endless characteristics of my trip, all based on the assumption that people pay attention whilst commuting – as opposed to attempting to block it all out as most Londoners do. So I was asked to rate the bus driver on a ten-point scale for a variety of characteristics when I couldn’t even recall what gender they were, and the quality of the paint job on the bus when all I could say for certain was that it was, indeed, red.
The only thing I recall about the journey at all was that I had to sit downstairs at the back, right over the engine, which made the whole trip unpleasantly like sitting on a washing machine on spin cycle. It highlights that for many zombie-like commuters the only things we notice are when something goes wrong; otherwise we block it all out.

The one thing I could actually remember was not covered. So the central assumption of this interview could also be questioned.
My point – and I do have one – is that sometimes in designing a questionnaire, moderating a focus group or conducting an interview, researchers need to take a step back and start at the widest point – how much does this actually matter to the respondent? – as opposed to diving straight in. Does it really matter how well an App is designed if I don’t like the content? How can I prioritise cuts if I don’t think cuts should be made? Can I answer ten minutes of questions about a journey I can barely remember making?
All of this reminds me of a rare piece of vital feedback I did get when designing questionnaires at a major agency. A national broadcaster had a survey in the field in the 90s about its news content, and one of the respondents took the time to get through to 'the person who designed the questionnaire' – me. Coincidentally – or perhaps not – they were a recently departed employee of the organisation (as the conversation went on I started to recognise their voice), so clearly the subject matter meant a lot to them. We had asked a detailed series of questions about the quality of the broadcaster's regional news service. The respondent had answered 'Don't Know' to each of them. This was not because they didn't know – after all, they had actually worked for the organisation – it was for a more fundamental reason:
"You ask me to rate the quality of the coverage, but you never asked me if I felt that they should have a regional service in the first place! I don't believe they should. If I say the coverage is excellent – which it is – then that will be seen as an endorsement to continue. They do a great job, but it is a job they should leave to others. So I put 'Don't know'."
If it is any consolation to that anonymous respondent, whilst I couldn't do anything about that survey, it has stuck in my head ever since when designing questionnaires. Don't make assumptions.
So…. at the risk of being overly didactic, two things for market researchers out there to bear in mind:
1. Don't just assume that the respondent is as invested and interested in the subject matter as you and the client are. This is difficult: we want to show our clients how enthusiastic we are, and we get caught up in the enthusiasm for their business. It is hard to take a step back and ask them to take a reality check.
2. Is the questionnaire based on an assumption that you don't actually test? If so, shouldn't you ask, even if it may seem obvious to you?
In my case, to come back to my three recent market research experiences, three simple questions early on could have greatly aided the understanding of all my answers:
- Do you pay much attention to your surroundings when you use public transport?
- Does our channel offer content you like to watch?
- Do you think local services should be cut?
All three were assumed to be 'yes' but were actually 'no', making my subsequent answers of debatable use without that context.
My frustration led to this blog, but for other frustrated respondents not working in MR, it may leave them more cynical and less likely to say 'yes' to taking part in the future.
That’s a worry.