Just posted this in reply to an article on LinkedIn. Thought it could also go here…
Folks trying to explain polls often seem to look for one or two sources of error. Even the UK polling inquiry seemed to be looking for that ‘one big thing’, and appeared to discount small issues that could have made a small difference to only some polls.
Having tinkered with polls and surveys over the last decade or so, I can confidently state that there is never just one source of error. The sources are effectively infinite: they often interact and amplify each other, and they can also cancel each other out.
It’s not the job of a pollster to carry out the perfect poll. It’s their job to understand why they can’t, and to minimise sources of error as much as possible within time and budget constraints.
Companies can make a lot of money from polling and surveys under the illusion that their polls are ‘professionally conducted’, when they may not be doing anything at all to systematically understand and minimise error.
The real problem is that polling is easy to do, but very hard to do well.