A colleague emailed me this article about inaccuracies in UK political polls.
This was an interesting quote.
Prof Patrick Sturgis, director of the National Centre for Research Methods at the University of Southampton and chair of the panel, told the BBC: “They don’t collect samples in the way the Office for National Statistics does by taking random samples and keeping knocking on doors until they have got enough people.
“What they do is get anyone they can and try and match them to the population… That approach is perfectly fine in many cases but sometimes it goes wrong.”
For the record, this is not the approach used by all New Zealand political polls. Two New Zealand polls that I know of do not target specific numbers of people (quotas) by age, gender or ethnic identification. Instead, they randomly generate phone numbers, randomly select a person within each household to interview, and try repeatedly to reach that particular person until they say whether or not they want to take part. Where possible, interviewers make appointments with that person, or will even call them on a different number or outside normal shift times if requested. No substitution is made within the household under any circumstances. This approach is called ‘approximating a probability sample’.
The key benefit of this approach is that it seeks to generate a sample that is representative across both the known and the unknown factors that contribute to party support. Sure, age, gender and ethnicity might predict preference for some parties, but what about the countless other factors that shape one’s political preference (eg, socialisation, personality, peer group)? A methodology that focuses solely on age, gender and ethnicity may be ignoring these unknown factors.
Yes, response rates are certainly an issue. For that reason no poll can be said to be ‘truly random’ – that’s where weighting comes in, to (hopefully) adjust for non-response and non-coverage.
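To make the weighting idea concrete, here’s a minimal sketch of cell (post-stratification) weighting, with entirely made-up figures: each respondent gets a weight equal to their cell’s population share divided by its sample share, so under-represented groups count for more. The cell labels and proportions below are illustrative assumptions, not from any actual poll.

```python
def poststratify(sample_cells, population_shares):
    """Return one weight per respondent: population share / sample share of their cell."""
    n = len(sample_cells)
    sample_shares = {c: sample_cells.count(c) / n for c in set(sample_cells)}
    return [population_shares[c] / sample_shares[c] for c in sample_cells]

# Toy example: the sample is 70% landline-reachable but the population is 50/50.
sample = ["landline"] * 7 + ["mobile-only"] * 3
pop = {"landline": 0.5, "mobile-only": 0.5}

weights = poststratify(sample, pop)
# Landline respondents are down-weighted (0.5/0.7), mobile-only up-weighted (0.5/0.3),
# and the weights still sum to the sample size.
```

Real polls typically weight on several variables at once (often by raking rather than simple cells), but the principle is the same.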
Why am I making this point?
Recently I’ve come across some very senior people within the research industry telling clients “weighting is bad”, and recommending that a well-thought-out methodology be changed to a non-random quota approach for that reason alone. BAH!
I recommend quota sampling myself in some circumstances – it can be very useful. However, the ‘weighting = bad’ idea is throwing the baby out with the bathwater.
Sure, extreme weights are bad: they increase variance, and can make party support appear quite volatile. However, if you’re carrying out high-quality fieldwork in New Zealand among the general population, your weights shouldn’t need to be extreme. If your weighting approach is logical and scientific, your poll results should be close to the mark – as borne out by poll results close to the last couple of New Zealand general elections.
Whenever I’m polling I’m looking carefully at my final survey weights. When the weights become extreme (perhaps due to non-coverage of landlines, etc), that’s one signal that it may be time to adjust the polling methodology.
If you’re telling people “weighting = bad”, either a) you don’t know what you’re talking about, or b) you do know what you’re talking about but you don’t have the ability to weight your data (I’m guessing due to a lack of software, or of understanding of weighting).