Really interesting article in Nature.
Don’t polling companies take these factors into account?
Not really, and it would be unfair to blame polling companies for the way things are done. They always state quite openly that they are only taking a snapshot of public opinion at a particular point in time.
Important point! A poll can only measure public sentiment at the time of the poll.
…But pollsters use quota samples, in which you try to create a representative mini-population based on a number of criteria: gender, age group, region and social class. In that case it’s problematic to talk about margins of error.
When you do quota sampling, you are making assumptions about which characteristics actually matter. People’s psychological behaviour isn’t strongly correlated with gender, religion and so on, so a quota sample can be biased from the point of view of psychological behaviour.
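For context, the familiar “±3%” figure comes from simple-random-sample theory, which is exactly why quoting one for a quota sample is dubious. A minimal sketch of that textbook formula (the function name is mine, not from any poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents reporting 50% support:
print(round(margin_of_error(0.5, 1000), 3))  # 0.031, i.e. about ±3.1 points
```

The formula’s derivation assumes every member of the population had an equal chance of selection; a quota sample doesn’t satisfy that, so the number it produces has no strict justification there.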
Why don’t polls use a random sample?
Random samples are much more expensive.
Quota sampling has its uses, especially in situations where it’s not really possible to try to approximate a random sample.
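To make the difference concrete, here is a toy sketch (invented demographics, quota on gender only) contrasting the two selection procedures:

```python
import random

random.seed(1)

# Hypothetical population: each person tagged with demographic attributes.
population = [{"gender": random.choice(["F", "M"]),
               "age": random.choice(["18-34", "35-54", "55+"])}
              for _ in range(10_000)]

# A random sample simply draws n people, each equally likely:
random_sample = random.sample(population, 500)

# A quota sample instead fills pre-set targets per demographic cell --
# here, proportional quotas on gender alone:
quotas = {"F": 250, "M": 250}
quota_sample, counts = [], {"F": 0, "M": 0}
for person in population:  # interviewing stops once every quota is full
    g = person["gender"]
    if counts[g] < quotas[g]:
        quota_sample.append(person)
        counts[g] += 1
    if len(quota_sample) == sum(quotas.values()):
        break

print(len(quota_sample))  # 500 respondents, balanced on gender only
```

Note the quota sample is balanced only on the criteria you chose; anything you didn’t set a quota for (here, age, or any psychological trait) is left to however the interviewing happened to proceed, which is where the bias can creep in.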
Interestingly, things are not quite the same in New Zealand. The two public polls that have come closest to the New Zealand election result (for the last two elections) do not use quota sampling. They use the (more expensive) random sampling approach. (Well, I know one of them does, and I’m 90% sure the other does too.)
One of these polls did get things quite wrong in 2005, but that had very little to do with their sampling approach. It would be a mistake to assume polling accuracy all comes down to the sampling method.
A friend at Auckland University and I are working on a paper where we’re modelling the demographic vs psychological determinants of political preference in New Zealand. We’re using a year’s worth of polling data (with permission from the client) and data from the longitudinal New Zealand Attitudes and Values study. When it’s published, I’ll blog about it here.
EDITS: Because iPads suck for blogging.