Political party Opening Statement evaluations (parties on the left)

24 08 2014

Polling is really a very small part of what I do. One of my main jobs is measuring the effectiveness of different public sector social marketing campaigns.

I thought it might be interesting to think about each Opening Statement from a comms-development perspective. Okay, so I’ve not gone out and actually evaluated the Opening Statements among each target audience (which is what I’d usually do), but the work I’ve done over the past 10 years has given me a pretty good feel for the strengths and weaknesses of different campaigns, messages and executions.

Below I’ve rated how well I think each Opening Statement (of the left) would perform against norms for engagement, message relevance, message believability, and brand ascription among their target audiences. If I get some more time, I’ll look at Opening Statements from parties on the right.

Labour

Engagement: Below average (passive positive)
Message relevance: Above average
Message believability: Average
Brand ascription: Above average

Overall assessment – Labour’s messages are right on target, and there’s no mistaking that this was the Labour Party. However, the message is let down by the execution – it’s what we call a passive execution, which (unless highly enjoyable) does not hold people’s attention. Viewers will either not notice many of the key messages, or will forget them. The execution also felt very scripted and unnatural, which lowers message believability.

Green

Engagement: Above average (clever mix of active positive and active negative)
Message relevance: Above average
Message believability: Above average
Brand ascription: Above average

Overall assessment – Messages right on target and clearly conveyed. Very good use of imagery – there’s no mistaking this was the Green Party. Moving between emotive positive and negative messaging was a really clever way to hold people’s attention for a long time. The negative National Party messaging probably made sense to the Greens, who presumably don’t see potential National voters as their target audience. They need to be careful here, though, if they want to remain a viable alternative for more centrist voters who don’t want to keep supporting National but who don’t currently see Labour as an option (I digress, as this is more about their strategy than their Opening Statement).

Internet-Mana

Engagement: Above average (active positive)
Message relevance: Average
Message believability: Average to below average
Brand ascription: Average

Overall assessment – The animation, the cat, and the whole Jetsons thing were pretty clever. This definitely encourages the audience to look and to keep watching. I do wonder, though, whether young people will find the message patronising. I’m not sure the ‘We will fix everything! Cool! Radical! Awesome!’ message will fly with a lot of young potential voters.

The branding was fairly average too. Sure, they show the logo, and they mention Internet-Mana, Laila, and Hone, but the creative execution isn’t tied to the brand as well as it is for the other Opening Statements (which you instantly know are for Labour or Green). Strong brand ascription is REALLY important for a new brand, so this should have been much stronger.





What is a push poll? (HINT – It’s not a poll.)

22 08 2014

The AAPOR has a good discussion of this.

A so-called “push poll” is an insidious form of negative campaigning, disguised as a political poll. “Push polls” are not surveys at all, but rather unethical political telemarketing — telephone calls disguised as research that aim to persuade large numbers of voters and affect election outcomes, rather than measure opinions. This misuse of the survey method exploits the trust people have in research organizations and violates the AAPOR Code of Professional Ethics and Practices.

The main thing to note is that a ‘push poll’, despite its name, is not actually a poll at all. It is a form of campaigning under the guise of a poll. Essentially, push polling involves making very short telephone calls to a very large number of people, specifically to influence their views. For it to be effective, you’d need to call a far larger number of people than is typically called for a random political poll.
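To put rough numbers on that, here’s a back-of-envelope sketch. The electorate size, the poll sample size, and the ‘reach’ target are my own illustrative assumptions, not figures from AAPOR or from any actual campaign:

```python
# Back-of-envelope comparison of a random political poll vs. a push-poll
# operation. All numbers are illustrative assumptions only.

electorate = 3_000_000   # rough size of the NZ electorate (assumption)
poll_sample = 1_000      # typical sample size for a random political poll (assumption)

# To nudge even 1% of the electorate, a push-poll operation has to actually
# reach that many people (and in practice many more, since not every call
# changes a mind).
target_share = 0.01
calls_needed = int(electorate * target_share)

print(f"Random poll: ~{poll_sample:,} calls, results published")
print(f"Push poll:   ~{calls_needed:,}+ calls, results never published")
```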

The fact that a poll contains negative information about one or more candidates does NOT in and of itself make it a ‘push poll.’ Political campaigns routinely sponsor legitimate “message-testing” surveys that are used by campaign consultants to test out the effectiveness of various possible campaign messages or campaign ad content, often including negative messages. Political message-testing surveys may sometimes be confused with fake polling, but they are very different.

If it’s a random survey by an established company, and/or the results are made public, it’s probably not a push poll.





Polls and cell phones… again…

7 07 2014

Why don’t you poll cellphones?

This question, or variations on it, is the one I’m asked most frequently. I’ve answered it before on this blog, but this time I thought I’d share some data to help explain my view.

Firstly, let me state that the company I work for does call cellphones. We just don’t randomly dial them for the political poll. As I’ve mentioned before, this has very little to do with the actual cost of calling cells. For a polling company, the cost isn’t that much more than it is for landline calls.

I’d like to start by addressing the misconception that it is just low income or ‘young’ households (for lack of a better term) that don’t have a landline telephone.

Please look at the chart below, which I created using data from Statistics New Zealand’s 2012 Household Use of Information and Communications Technology Survey. This is a very robust door-to-door survey of New Zealand households. You can find out more about the methodology here. As you can see in the chart, relative to all NZ households there is a greater proportion of non-landline households in the lower income (and likely younger) groups. However, what’s also clear is that there are substantial proportions of non-landline households in higher income groups too.

[Chart: proportion of households without a landline, by household income group – Statistics NZ 2012 Household Use of Information and Communications Technology Survey]





What’s the actual margin of error?

2 07 2014

Thomas Lumley over at StatsChat has used Peter Green’s polling average code to estimate the actual margin of error for political polls after adjusting for design effects. I had no idea how this could be attempted across non-probability samples (EDIT: To be fair, I had no idea how this could be attempted across multiple polls – at all).

If the perfect mathematical maximum-margin-of-error is about 3.1%, the added real-world variability turns that into about 4.2%, which isn’t that bad. This doesn’t take bias into account — if something strange is happening with undecided voters, the impact could be a lot bigger than sampling error.

That last point is a fairly important one. There are many potential sources of error in a poll other than the sampling error.
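For anyone who wants to see where numbers like that come from, here’s a minimal sketch of the arithmetic. It assumes the standard simple-random-sampling formula and a sample size of 1,000 (my assumption), and the ‘design effect’ is just the ratio implied by the two quoted figures, not Lumley’s actual estimate:

```python
import math

# Maximum margin of error for a simple random sample (worst case p = 0.5),
# at the 95% confidence level.
def max_margin_of_error(n, z=1.96):
    return z * math.sqrt(0.25 / n)

n = 1000  # assumed sample size; NZ political polls are typically 750-1000
moe_srs = max_margin_of_error(n)           # ~0.031, i.e. about +/-3.1%

# A design effect (deff) inflates the variance, so the margin of error
# scales by sqrt(deff). Working backwards from the quoted 3.1% -> 4.2%:
moe_real = 0.042
deff = (moe_real / moe_srs) ** 2           # roughly 1.8

print(f"Textbook maximum MoE:  {moe_srs:.1%}")
print(f"Implied design effect: {deff:.1f}")
print(f"Adjusted MoE:          {moe_srs * math.sqrt(deff):.1%}")
```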





Alternative reason for undecideds increasing in an Election Year

2 07 2014

Something I neglected to mention in my last post is that polls can actually be designed to try to maximise the number of undecideds.

My view is that non-response is probably the most important source of error for political polls. Part of the problem is that the average person is not obsessed with politics, and they are harder to survey for this reason (because they are less inclined to take part in a poll). By targeting as high a response rate/as low a refusal rate as possible, polls are trying to maximise coverage of non-politically-obsessed people.

So if you follow this through…

  1. Non-politically-obsessed people are more likely to be undecided (they are more likely to say ‘don’t know’ at the party vote question).
  2. Poll response rates can improve a wee bit in an election year, so the proportion of undecideds may go up a bit (this is a good thing, because it’s a sign a poll is reaching those who are less interested in politics).
  3. The undecideds may actually then decrease a bit the closer you get to the election (because some of these people start deciding).
  4. So the change in undecideds may have nothing at all to do with people party-switching (the sketch after this list puts some illustrative numbers on points 1 and 2).
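Here’s that sketch – a toy calculation in which every number is invented for illustration, not taken from any real poll:

```python
# Toy illustration of points 1 and 2 above. All figures are invented.

def undecided_share(reach_not_obsessed):
    """Overall share of undecideds in a sample, given how well the poll
    reaches the non-politically-obsessed relative to the obsessed."""
    pop_obsessed, pop_not = 0.4, 0.6      # population mix (assumption)
    und_obsessed, und_not = 0.05, 0.25    # undecided rates (assumption)

    # Sample composition: the less-engaged group is under-represented
    # unless the poll reaches them as well as it reaches the engaged.
    w_obsessed = pop_obsessed * 1.0
    w_not = pop_not * reach_not_obsessed
    total = w_obsessed + w_not
    return (w_obsessed * und_obsessed + w_not * und_not) / total

print(f"Poor reach (50%):   {undecided_share(0.5):.1%} undecided")
print(f"Better reach (80%): {undecided_share(0.8):.1%} undecided")
```

With these made-up numbers, better coverage of less-engaged people pushes the measured undecided share up from about 13.6% to about 15.9% – without a single voter changing their mind about a party.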

In reality – a whole bunch of things will be going on, including party-switching and improved response rates.

One last thing to mention – the different polls use different definitions of ‘undecided’, so it’s not easy to compare the level of undecideds across polls, and it’s not appropriate to use this as a way to judge the quality of a poll.





Undecided voters and political polls

28 06 2014

I’ve had some interesting posts forwarded to me over the past few weeks about polls, and how they exclude undecided voters from the party support result.

Posts at Sub-Zero Politics and The Political Scientist illustrate that poll results for party support can look quite different if undecided and/or unlikely voters are included in the base.
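To show what re-basing does to the headline numbers, here’s a minimal sketch. The party figures below are invented for illustration; they are not taken from either post or from any real poll:

```python
# How re-basing changes the headline numbers. All figures are invented.

decided = {"Party A": 460, "Party B": 300, "Party C": 120, "Others": 60}
undecided = 180
unlikely_to_vote = 110

decided_total = sum(decided.values())                      # 940
full_base = decided_total + undecided + unlikely_to_vote   # 1230

for party, n in decided.items():
    headline = n / decided_total   # the usual published figure
    rebased = n / full_base        # with undecideds/unlikely voters included
    print(f"{party}: {headline:.1%} of decided likely voters, "
          f"{rebased:.1%} of everyone surveyed")
```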

This is not a critique of their analyses or conclusions. I found these posts interesting, and The Political Scientist’s post inspired me to look at my own data in a different way (and that’s always a good thing). I simply want to add a few points about polling and undecided voters:

  • A poll is commissioned to estimate party support if an election were held at the same time as the poll. Given that this is the purpose, it doesn’t make sense to include those unlikely to vote in the results for party support. Also, it’s not possible to include the undecideds in that result because they are, well, undecided.
  • Yes, the results would look very different if unlikely voters and undecided voters were included. But those results would look nothing at all like the result of an election held at the time of the poll, and they would be misleading in this regard. It would not be possible to translate the result into parliamentary seats – which can help to show how close an election might be under MMP (see the seat-allocation sketch at the end of this post).
  • Undecided voters are important. As far as I know, most polls probe undecided voters to try to get an idea of their preference. This may not make a big difference to a poll result quite far from an election – but I think it’s very important during the week prior to an election. During that week, some of the undecided voters will be paying closer attention to politics and will be starting to lean one way or another.
  • Having made the above point, it’s important to keep in mind that a large proportion of undecided voters won’t vote in an election. Based on my own analysis, about a quarter of undecided voters openly state that they don’t plan to vote. I think the true proportion would be higher than this.
  • All poll reports should state the percentage of undecided voters. It has come to my attention that these results can be hard to find. They shouldn’t be.
  • Here’s the biggie – a poll should not be expected to perfectly predict the result of the General Election. The pollsters will do their best to measure party support at the time they are polling – but they do not poll on Election Day, they do not ask ‘who will you vote for?’, they cannot predict what undecided voters will do (or whether they will vote), and there are many other factors outside their control.
  • Factors outside their control include the weather, and what politicians and political commentators do and say leading up to the election. Let’s take the 2011 election as an example. Most poll data were collected 5-7 days out from the election. In the interim, there were reports of the Prime Minister telling John Banks that NZ First supporters weren’t going to be around for much longer! It was no surprise to me that most polls tended to overestimate National and underestimate NZ First.*

Update: *I’m not suggesting this is the sole factor behind this pattern of results.
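As promised above, here’s a simplified sketch of how a party-vote result gets translated into seats under MMP using the Sainte-Laguë method. It ignores electorate seats, overhang, and the electorate-seat exemption from the 5% threshold, and the vote counts are invented:

```python
import heapq

def sainte_lague(votes, seats=120, threshold=0.05):
    """Simplified Sainte-Lague allocation of the NZ party vote.
    Ignores electorate seats, overhang, and threshold exemptions."""
    total = sum(votes.values())
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}

    # Highest-averages method: repeatedly give the next seat to the party
    # with the largest quotient v / (2*s + 1), where s is its seats so far.
    allocation = {p: 0 for p in eligible}
    heap = [(-v, p) for p, v in eligible.items()]  # quotient with s = 0 is v/1
    heapq.heapify(heap)
    for _ in range(seats):
        _, p = heapq.heappop(heap)
        allocation[p] += 1
        heapq.heappush(heap, (-eligible[p] / (2 * allocation[p] + 1), p))
    return allocation

# Invented vote counts, purely for illustration.
print(sainte_lague({"A": 470_000, "B": 310_000, "C": 110_000, "D": 40_000}))
```

Even this stripped-down version shows why the undecideds can’t just be dropped into the base: a seat projection only makes sense for people who express a party preference.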





Skynet and Computer Assisted Telephone Interviewing (CATI)

25 06 2014

Someone I know was listening to talkback radio last week (the morning the IPSOS poll was released). She said there were claims that IPSOS used Computer Assisted Telephone Interviewing in their latest poll!

I don’t work at IPSOS – but I’m fairly sure these claims are accurate. What’s inaccurate, however, is the implication that respondents were interviewed by Skynet, C-3PO, HAL 9000, Orac, Cylons, or any other type of computer.

Computer Assisted Telephone Interviewing (CATI) is when you are called by a real person, and that person is assisted by a computer. The computer dials the number, displays the interviewer’s script, and collects the data. There are a number of advantages to this. For example, there’s no data entry needed after fieldwork is completed, auditors can check calls to make sure interviewers are recording responses correctly, we can get an instant overview of sampling success rates, interviewer refusal rates, and interviews per hour per interviewer, and we can see the results instantly as they roll in.

It’s actually kinda fun.
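To give a feel for the kind of fieldwork reporting this makes possible, here’s a toy sketch. The call-log format and the numbers are invented, and this is not how any particular CATI package is implemented:

```python
from collections import Counter

# Toy call log; the record format and all figures are invented.
calls = [
    {"interviewer": "A", "outcome": "complete",  "minutes": 18},
    {"interviewer": "A", "outcome": "refusal",   "minutes": 1},
    {"interviewer": "A", "outcome": "no_answer", "minutes": 0},
    {"interviewer": "B", "outcome": "complete",  "minutes": 20},
    {"interviewer": "B", "outcome": "complete",  "minutes": 16},
    {"interviewer": "B", "outcome": "refusal",   "minutes": 2},
]

# Refusal rate among answered calls.
outcomes = Counter(c["outcome"] for c in calls)
answered = outcomes["complete"] + outcomes["refusal"]
print(f"Refusal rate: {outcomes['refusal'] / answered:.0%}")

# Completes and completes-per-hour for each interviewer.
for person in sorted({c["interviewer"] for c in calls}):
    own = [c for c in calls if c["interviewer"] == person]
    done = sum(1 for c in own if c["outcome"] == "complete")
    hours = sum(c["minutes"] for c in own) / 60
    rate = done / hours if hours else 0
    print(f"Interviewer {person}: {done} completes, {rate:.1f} per hour")
```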

There are other similar acronyms used by the industry, for example:

WAPI – Web-assisted personal interviews

PAPI – Paper-assisted personal interviews

CAPI – Computer-assisted personal interviews

 







