Tags: push polling
Categories : Methodology, Real polls
The AAPOR has a good discussion of this.
A so-called “push poll” is an insidious form of negative campaigning, disguised as a political poll. “Push polls” are not surveys at all, but rather unethical political telemarketing — telephone calls disguised as research that aim to persuade large numbers of voters and affect election outcomes, rather than measure opinions. This misuse of the survey method exploits the trust people have in research organizations and violates the AAPOR Code of Professional Ethics and Practices.
The main thing to note is that a ‘push poll’, despite its name, is not actually a poll at all. It is a form of campaigning conducted under the guise of a poll. Essentially, push polling involves making very short telephone calls to a very large number of people, specifically to influence their views. To be effective, it requires calling far more people than a typical random political poll would.
The fact that a poll contains negative information about one or more candidates does NOT in and of itself make it a ‘push poll.’ Political campaigns routinely sponsor legitimate “message-testing” surveys that are used by campaign consultants to test out the effectiveness of various possible campaign messages or campaign ad content, often including negative messages. Political message-testing surveys may sometimes be confused with fake polling, but they are very different.
If it’s a random survey by an established company, and/or the results are made public, it’s probably not a push poll.
Tags: Cell phones, Landlines
Categories : Sampling
Why don’t you poll cellphones?
This question, or variations on it, is the one I’m asked most frequently. I’ve answered it before on this blog, but this time I thought I’d share some data to help explain my view.
Firstly, let me state that the company I work for does call cellphones. We just don’t randomly dial them for the political poll. As I’ve mentioned before, this has very little to do with the actual cost of calling cells. For a polling company, the cost isn’t that much more than it is for landline calls.
I’d like to start by addressing the misconception that it is just low income or ‘young’ households (for lack of a better term) that don’t have a landline telephone.
Please look at the chart below, which I created using data from Statistics New Zealand’s 2012 Household Use of Information and Communications Technology Survey. This is a very robust door-to-door survey of New Zealand households. You can find out more about the methodology here. As you can see in the chart, relative to all NZ households there is a greater proportion of non-landline households in the lower income (and likely younger) groups. However, what’s also clear is that there are substantial proportions of non-landline households in higher income groups too.
Read the rest of this entry »
Tags: design effect
Categories : Sampling
Thomas Lumley over at StatsChat has used Peter Green’s polling average code to estimate the actual margin of error for political polls after adjusting for design effects. I had no idea how this could be attempted across non-probability samples (EDIT: To be fair, I had no idea how this could be attempted across multiple polls – at all).
If the theoretical maximum margin of error is about 3.1%, the added real-world variability turns that into about 4.2%, which isn’t that bad. This doesn’t take bias into account — if something strange is happening with undecided voters, the impact could be a lot bigger than sampling error.
That last point is a fairly important one. There are many potential sources of error in a poll other than the sampling error.
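To make the inflation concrete, here is a minimal sketch of the arithmetic. The sample size of 1,000 and the design effect of about 1.84 are my assumptions, chosen because they reproduce the 3.1% and 4.2% figures quoted above; they are not taken from Lumley’s actual code.

```python
import math

def max_margin_of_error(n, z=1.96):
    """Maximum margin of error for a simple random sample of size n,
    evaluated at p = 0.5 where the binomial variance is largest."""
    return z * math.sqrt(0.25 / n)

n = 1000                      # assumed typical poll size
moe = max_margin_of_error(n)  # ~0.031, the textbook "3.1%" figure

# A design effect (deff) inflates the sampling variance, so the
# margin of error scales by sqrt(deff).
deff = 1.84                   # implied by (4.2 / 3.1) squared
adjusted_moe = moe * math.sqrt(deff)

print(f"Textbook maximum MoE: {moe:.1%}")          # 3.1%
print(f"Design-effect adjusted MoE: {adjusted_moe:.1%}")  # 4.2%
```

The key point is the square-root scaling: a design effect of 1.84 inflates the margin of error by only about 36%, not 84%.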
Tags: undecided voters
Categories : Interpretation
Something I neglected to mention in my last post is that polls can actually be designed to try to maximise the number of undecideds.
My view is that non-response is probably the most important source of error for political polls. Part of the problem is that the average person is not obsessed with politics, and they are harder to survey for this reason (because they are less inclined to take part in a poll). By targeting as high a response rate/as low a refusal rate as possible, polls are trying to maximise coverage of non-politically-obsessed people.
So if you follow this through…
- Non-politically-obsessed people are more likely to be undecided (they are more likely to say ‘don’t know’ at the party vote question).
- Poll response rates can improve a wee bit in an election year, so the proportion of undecideds may go up a bit (this is a good thing, because it’s a sign a poll is reaching those who are less interested in politics).
- The undecideds may actually then decrease a bit the closer you get to the election (because some of these people start deciding).
- So the change in undecideds may have nothing at all to do with people party-switching.
In reality – a whole bunch of things will be going on, including party-switching and improved response rates.
One last thing to mention – the different polls use different definitions of ‘undecided’ so it’s not easy to compare the level of undecideds across polls, and it’s not appropriate to use this as a way to decide on the quality of a poll.
Tags: undecided voters
Categories : Interpretation, Reporting
I’ve had some interesting posts forwarded to me over the past few weeks about polls, and how they exclude undecided voters from the party support result.
Posts at Sub-Zero Politics and The Political Scientist illustrate that poll results for party support can look quite different if undecided and/or unlikely voters are included in the base.
This is not a critique of their analyses or conclusions. I found these posts interesting, and The Political Scientist’s post inspired me to look at my own data in a different way (and that’s always a good thing). I simply want to add a few points about polling and undecided voters:
- A poll is commissioned to estimate party support if an election was held at the same time as the poll. Given that this is the purpose, it doesn’t make sense to include those unlikely to vote in the results for party support. Also it’s not possible to include the undecideds in that result because they are, well, undecided.
- Yes, the results would look very different if unlikely voters and undecided voters were included. But those results would look nothing at all like the result of an election held at the time of the poll, and they would be misleading in this regard. It would not be possible to translate the result into parliamentary seats – which can help to show how close an election might be under MMP.
- Undecided voters are important. As far as I know, most polls probe undecided voters to try to get an idea of their preference. This may not make a big difference to a poll result quite far from an election – but I think it’s very important during the week prior to an election. During that week, some of the undecided voters will be paying closer attention to politics and will be starting to lean one way or another.
- Having made the above point, it’s important to keep in mind that a large proportion of undecided voters won’t vote in an election. Based on my own analysis, about a quarter of undecided voters openly state that they don’t plan to vote. I think the true proportion would be higher than this.
- All poll reports should state the percentage of undecided voters. It has come to my attention that these results can be hard to find. They shouldn’t be.
- Here’s the biggie – a poll should not be expected to perfectly predict the result of the General Election. The pollsters will do their best to measure party support at the time they are polling – but they do not poll on Election Day, they do not ask ‘who will you vote for?’, they cannot predict what undecided voters will do (or whether they will vote), and there are many other factors outside their control.
- Factors outside their control include the weather, and what politicians and political commentators do and say leading up to the election. Let’s take the 2011 election as an example. Most poll data were collected 5-7 days out from the election. In the interim, there were reports of the Prime Minister telling John Banks that NZ First supporters weren’t going to be around for much longer! It was no surprise to me that most polls tended to over-estimate National and underestimate NZ First.*
Update: *I’m not suggesting this is the sole factor behind this pattern of results.
Categories : Methodology
Someone I know was listening to talkback last week (the morning the IPSOS poll was released). She said there were claims that IPSOS used Computer Assisted Telephone Interviewing in their latest poll!
I don’t work at IPSOS – but I’m fairly sure these claims are accurate. What’s inaccurate, however, is that respondents were interviewed by Skynet, c3PO, HAL2000, Orac, Cylons, or any other type of computer.
Computer Assisted Telephone Interviewing (CATI) is when you are called by a real person, and that person is assisted by a computer. The computer dials the number, displays the interviewer’s script, and collects the data. There are a number of advantages to this. For example, there’s no data entry needed after fieldwork is completed, auditors can check calls to make sure interviewers are recording responses correctly, we can get an instant overview of sampling success rates, interviewer refusal rates, and interviews per hour per interviewer, and we can see the results instantly as they roll in.
It’s actually kinda fun.
There are other similar acronyms used by the industry, for example:
WAPI – Web assisted personal interviews
PAPI – Paper assisted personal interviews
CAPI – Computer assisted personal interviews