I read the odd blog. Every now and then people comment, quite rightly, about sources of error in surveys and polls. Comments typically focus on sampling-related matters, such as the landline/cellphone issue (which is why I’ve written three posts about it – see here and here and here).
The thing is, though, that anyone pointing to just one or two potential sources of survey error is only seeing the tip of the iceberg. Someone recently said to me, "Polls are easy to do, but difficult (and expensive) to do well." That sums up my point of view. The potential sources of error in surveys are almost endless. To name a few…
- question order effects
- question wording effects
- interviewer effects
- interviewer variance
- non-response (this is where we can’t get hold of the selected person, where people decide not to take part, or where people can’t be interviewed due to language difficulties or having a disability)
- response bias (this is where certain types of people are more/less likely to respond than others)
- sampling error (even perfectly selected random samples will deviate from the target population – the cited ‘margin of error’ and ‘confidence interval’ typically relate only to this type of error)
- design effects (any deviation from a simple random sample increases variance – margins of error and confidence intervals can be adjusted for this)
- sample (non) coverage error (this is where the cell phone/landline issue sits)
- data processing error.
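To make the last two bullet points on sampling error and design effects concrete: under simple random sampling, the cited margin of error for a proportion comes from its standard error, and any complex design inflates it by the square root of the design effect. A minimal sketch follows; the 95% z-value of 1.96 and the illustrative design effect of 1.5 are my assumptions, not figures from this post.

```python
import math

def margin_of_error(p, n, z=1.96, deff=1.0):
    """Approximate margin of error for a proportion p from a sample of n.

    deff is the design effect: 1.0 for a simple random sample; values
    above 1.0 widen the margin for complex designs (e.g. weighting,
    clustering). z=1.96 corresponds to a 95% confidence level.
    """
    se = math.sqrt(p * (1 - p) / n)   # standard error under simple random sampling
    return z * math.sqrt(deff) * se   # inflate by sqrt of the design effect

# A 1,000-person poll reading 50%, assuming simple random sampling:
print(round(margin_of_error(0.5, 1000), 3))              # ≈ 0.031, i.e. ±3.1 points

# The same poll with an assumed design effect of 1.5:
print(round(margin_of_error(0.5, 1000, deff=1.5), 3))    # ≈ 0.038, i.e. ±3.8 points
```

Note that this interval reflects sampling error only; none of the other error sources listed above (non-response, coverage, question effects, and so on) are captured by it.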
Each of these may exert a very small influence on a survey’s results, some may cancel each other out, and (if left unchecked) some may exert a larger influence.
I subscribe to a ‘survey methodologist’ approach. It’s the job of a good researcher to try to identify and understand all sources of error, and to reduce error as much as is practically possible (there are always constraints). It’s not easy, because changing one thing can influence some other aspect of a survey. Also, different errors manifest in different ways: some can lead to an apparent bias, while others can result in wave-on-wave volatility (I know which I prefer).
The search for survey error needs to be continuous and systematic, and each researcher/pollster needs to make an informed judgement about where and how to address sources of error. I’m fortunate to work at a company that runs its own fieldwork division, which means that our researchers and our field workers can really put their heads together to try to identify and address sources of error.
In my view, the biggest issue for telephone surveys in New Zealand today isn’t non-coverage, it’s declining response rates. Things like this do not make our job any easier.
*This post replaces one I tried to publish earlier today, but somehow managed to permanently delete.