Polls and cell phones… again…

7 07 2014

Why don’t you poll cellphones?

This question, or variations on it, is the one I’m asked most frequently. I’ve answered it before on this blog, but this time I thought I’d share some data to help explain my view.

Firstly, let me state that the company I work for does call cellphones. We just don’t randomly dial them for the political poll. As I’ve mentioned before, this has very little to do with the actual cost of calling cells. For a polling company, the cost isn’t that much more than it is for landline calls.

I’d like to start by addressing the misconception that it is just low income or ‘young’ households (for lack of a better term) that don’t have a landline telephone.

Please look at the chart below, which I created using data from Statistics New Zealand’s 2012 Household Use of Information and Communications Technology Survey. This is a very robust door-to-door survey of New Zealand households. You can find out more about the methodology here. As you can see in the chart, relative to all NZ households there is a greater proportion of non-landline households in the lower income (and likely younger) groups. However, what’s also clear is that there are substantial proportions of non-landline households in higher income groups too.

[Chart: proportion of non-landline households by household income group, relative to all NZ households – Statistics New Zealand 2012 Household Use of ICT Survey]





What’s the actual margin of error?

2 07 2014

Thomas Lumley over at StatsChat has used Peter Green’s polling average code to estimate the actual margin of error for political polls after adjusting for design effects. I had no idea how this could be attempted across non-probability samples (EDIT: To be fair, I had no idea how this could be attempted across multiple polls – at all).

If the perfect mathematical maximum margin of error is about 3.1%, the added real-world variability turns that into about 4.2%, which isn’t that bad. This doesn’t take bias into account: if something strange is happening with undecided voters, the impact could be a lot bigger than sampling error.

That last point is a fairly important one. There are many potential sources of error in a poll other than the sampling error.
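To put numbers on this: for a simple random sample, the maximum margin of error is z·√(0.25/n), and a design effect (deff) inflates it by √deff. Here’s a minimal sketch of that arithmetic in Python; the sample size of 1,000 is an assumption for illustration, and a deff of roughly 1.8 is simply what’s implied by moving from 3.1% to 4.2%.

    import math

    def max_margin_of_error(n, deff=1.0, z=1.96):
        # Worst-case (p = 0.5) margin of error at 95% confidence,
        # inflated by the design effect for non-ideal sampling.
        return z * math.sqrt(deff * 0.25 / n)

    n = 1000  # assumed poll size, typical of NZ political polls
    print(f"Ideal simple random sample: {max_margin_of_error(n):.1%}")           # ~3.1%
    print(f"With a design effect of 1.8: {max_margin_of_error(n, deff=1.8):.1%}")  # ~4.2%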





Alternative reason for undecideds increasing in an Election Year

2 07 2014

Something I neglected to mention in my last post is that polls can actually be designed to try to maximise the number of undecideds.

My view is that non-response is probably the most important source of error for political polls. Part of the problem is that the average person is not obsessed with politics, and for that reason is harder to survey (less inclined to take part in a poll). By targeting as high a response rate, and as low a refusal rate, as possible, polls try to maximise coverage of non-politically-obsessed people.

So if you follow this through…

  1. Non-politically-obsessed people are more likely to be undecided (they are more likely to say ‘don’t know’ at the party vote question).
  2. Poll response rates can improve a wee bit in an election year, so the proportion of undecideds may go up a bit (this is a good thing, because it’s a sign a poll is reaching those who are less interested in politics).
  3. The undecideds may actually then decrease a bit the closer you get to the election (because some of these people start deciding).
  4. So the change in undecideds may have nothing at all to do with people party-switching.

In reality – a whole bunch of things will be going on, including party-switching and improved response rates.

One last thing to mention – the different polls use different definitions of ‘undecided’ so it’s not easy to compare the level of undecideds across polls, and it’s not appropriate to use this as a way to decide on the quality of a poll.





Undecided voters and political polls

28 06 2014

I’ve had some interesting posts forwarded to me over the past few weeks about polls, and how they exclude undecided voters from the party support result.

Posts at Sub-Zero Politics and The Political Scientist illustrate that poll results for party support can look quite different if undecided and/or unlikely voters are included in the base.

This is not a critique of their analyses or conclusions. I found these posts interesting, and The Political Scientist’s post inspired me to look at my own data in a different way (and that’s always a good thing). I simply want to add a few points about polling and undecided voters:

  • A poll is commissioned to estimate party support if an election was held at the same time as the poll. Given that this is the purpose, it doesn’t make sense to include those unlikely to vote in the results for party support. Also it’s not possible to include the undecideds in that result because they are, well, undecided.
  • Yes, the results would look very different if unlikely voters and undecided voters were included. But those results would look nothing at all like the result of an election held at the time of the poll, and they would be misleading in this regard. It would not be possible to translate the result into parliamentary seats – which can help to show how close an election might be under MMP (see the seat-allocation sketch at the end of this post).
  • Undecided voters are important. As far as I know, most polls probe undecided voters to try to get an idea of their preference. This may not make a big difference to a poll result quite far from an election – but I think it’s very important during the week prior to an election. During that week, some of the undecided voters will be paying closer attention to politics and will be starting to lean one way or another.
  • Having made the above point, it’s important to keep in mind that a large proportion of undecided voters won’t vote in an election. Based on my own analysis, about a quarter of undecided voters openly state that they don’t plan to vote. I suspect the true proportion is higher than this.
  • All poll reports should state the percentage of undecided voters. It has come to my attention that these results can be hard to find. They shouldn’t be.
  • Here’s the biggie – a poll should not be expected to perfectly predict the result of the General Election. The pollsters will do their best to measure party support at the time they are polling – but they do not poll on Election Day, they do not ask ‘who will you vote for?’, they cannot predict what undecided voters will do (or whether they will vote), and there are many other factors outside their control.
  • Factors outside their control include the weather, and what politicians and political commentators do and say leading up to the election. Let’s take the 2011 election as an example. Most poll data were collected 5-7 days out from the election. In the interim, there were reports of the Prime Minister telling John Banks that NZ First supporters weren’t going to be around for much longer! It was no surprise to me that most polls tended to over-estimate National and underestimate NZ First.*

Update: *I’m not suggesting this is the sole factor behind this pattern of results.
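To illustrate the point above about translating party support into seats: New Zealand allocates its 120 seats using the Sainte-Laguë method, with a 5% party-vote threshold that is waived for parties winning an electorate seat. Here’s a minimal sketch of that allocation in Python, ignoring overhang seats and tie-breaking; the party names and vote counts are made up purely for illustration.

    def sainte_lague(votes, seats=120, threshold=0.05, electorate_winners=()):
        # Allocate seats by the Sainte-Laguë highest-averages method.
        # `votes` maps party -> party vote count; parties below the
        # threshold are excluded unless they won an electorate seat.
        total = sum(votes.values())
        eligible = {p: v for p, v in votes.items()
                    if v / total >= threshold or p in electorate_winners}
        # Each party's quotients are v/1, v/3, v/5, ...;
        # the top `seats` quotients across all parties win a seat each.
        quotients = [(v / d, p) for p, v in eligible.items()
                     for d in range(1, 2 * seats, 2)]
        quotients.sort(reverse=True)
        result = {p: 0 for p in eligible}
        for _, p in quotients[:seats]:
            result[p] += 1
        return result

    # Hypothetical party-vote counts (illustrative only).
    poll = {"Party A": 470_000, "Party B": 320_000,
            "Party C": 130_000, "Party D": 40_000}
    print(sainte_lague(poll, electorate_winners=("Party D",)))

This is also why undecideds can’t simply be left in the base: the allocation needs each party’s share of the party vote, and ‘undecided’ isn’t a party.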





Skynet and Computer Assisted Telephone Interviewing (CATI)

25 06 2014

Someone I know was listening to talkback last week (the morning the IPSOS poll was released). She said there were claims that IPSOS used Computer Assisted Telephone Interviewing in their latest poll!

I don’t work at IPSOS – but I’m fairly sure these claims are accurate. What would be inaccurate is any suggestion that respondents were interviewed by Skynet, C-3PO, HAL 9000, Orac, Cylons, or any other type of computer.

Computer Assisted Telephone Interviewing (CATI) is when you are called by a real person, and that person is assisted by a computer. The computer dials the number, displays the interviewer’s script, and collects the data. There are a number of advantages to this. For example:

  • there’s no data entry needed after fieldwork is completed;
  • auditors can check calls to make sure interviewers are recording responses correctly;
  • we can get an instant overview of sampling success rates, interviewer refusal rates, and interviews per hour per interviewer;
  • we can see the results instantly as they roll in.

It’s actually kinda fun.
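As a rough illustration of that ‘instant overview’: the kind of summary a CATI system shows can be computed from its call log with a few lines of code. The record layout and field names below are hypothetical, not those of any real CATI package.

    from collections import Counter

    # Hypothetical call-log records: (interviewer, outcome, minutes spent).
    calls = [
        ("Ana", "interview", 14), ("Ana", "refusal", 1),
        ("Ben", "interview", 12), ("Ben", "no_answer", 0),
        ("Ana", "interview", 15), ("Ben", "refusal", 2),
    ]

    outcomes = Counter(outcome for _, outcome, _ in calls)
    contacts = outcomes["interview"] + outcomes["refusal"]
    print(f"Refusal rate (of contacts): {outcomes['refusal'] / contacts:.0%}")

    for who in sorted({w for w, _, _ in calls}):
        minutes = sum(m for w, _, m in calls if w == who)
        done = sum(1 for w, o, _ in calls if w == who and o == "interview")
        print(f"{who}: {done} interviews in {minutes} min "
              f"({60 * done / minutes:.1f} per hour)")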

There are other similar acronyms used by the industry, for example:

WAPI – Web-assisted personal interviews

PAPI – Paper-assisted personal interviews

CAPI – Computer-assisted personal interviews






Do polls manipulate voters?

21 06 2014

Recently I was questioned on whether polls influence voters and whether, because of polls, people are less likely to vote based on issues and policies. I found this line of questioning difficult because, in reality, voters base their decisions on a wide range of factors, and I’m sure that polls are a factor for some people. There’s a fair amount of academic literature about the influence of polls, but the most NZ-relevant and accessible piece I’ve found is a series of guest posts on Bryce Edwards’ blog, written by then Honours student Michelle Nicol. Unfortunately, I don’t think the last part of the series was ever posted.

Essentially, polls might influence a person’s expectations about the outcome of an election, and this could become a factor in their voting decision. They may decide to back the winner or support the underdog, or they may decide to vote tactically based on poll results (this is voting for a party that isn’t your preferred one, in an effort to influence the outcome in some way). Even worse, people may decide not to vote at all, because the election looks to be a foregone conclusion.

It’s not easy to know exactly what part the polls play, because it’s difficult to disentangle the many factors people consider when making their voting decisions. It seems reasonable to suggest, though, that polls play some part. If I’m being honest, I feel a bit uncomfortable about this. What I like most about Election Year is that it’s a time when, as a country, we discuss and debate important issues – those that help shape who we are as a nation. It’s depressing to think that polls might detract from this.

Here’s the thing though. Imagine if there were no polls. Do you think New Zealand would be more democratic or less? With no independent measure of public sentiment, imagine how much more potential there would be for politicians or political commentators to influence the views of the public – to tell us ‘what the public really think’.

Let’s take the debate about same-sex marriage as an example. Some of the loudest voices in this debate were telling us the public didn’t want same-sex marriage, and they were gathering signatures to prove their point. Yet independent polls showed quite the opposite.

Simply put, it would be a danger to democracy if independent polls were abolished in New Zealand.

UPDATE: Typo corrections – there vs their! How embarrassing.





Internet Mana Party petition on MMP threshold

15 06 2014

Reported by Newstalk ZB:

Internet MANA’s making a move on the five percent MMP threshold.

The Party wants the law on the threshold changed and is running an online petition to test public opinion.

Internet Party leader Laila Harre says there’s a lot of politics being played on the issue and it’s time for the people to have a say.

She says the five percent threshold means more than 100,000 voters for a single party can have their vote ignored.

Ms Harre says collectively it’s a significant share of the party vote.

I’ve no idea if what the Internet Mana Party is doing has been reported correctly, but a petition will only tell you that a bunch of people care enough about something to sign a petition about it. It will not measure (or test) public opinion on that thing. For that, you need a well-designed survey.







