How to improve pie charts (via Stats Chat)

3 06 2015

Just brilliant – link.

(Via Stats Chat)





Article in Nature about UK polling accuracy

10 05 2015

Really interesting article in Nature.

Don’t polling companies take these factors into account?
Not really, and it would be unfair to blame polling companies for the way things are done. They always state quite openly that they are only taking a snapshot of public opinion at a particular point in time.

Important point! A poll can only measure public sentiment at the time of the poll.

…But pollsters use quota samples, in which you try to create a representative mini-population based on a number of criteria: gender, age group, region and social class. In that case it’s problematic to talk about margins of error.

When you do quota samples, you are making assumptions about what actually matters. From the point of view of the psychological behaviour of people, this isn’t correlated very much with gender, religion and so on. So the quota samples could be biased from the point of view of psychological behaviour.

Why don’t polls use a random sample?
Random samples are much more expensive.

Quota sampling has its uses, especially in situations where it’s not really possible to try to approximate a random sample.
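To make the distinction concrete, here’s a minimal sketch in Python (entirely made-up data and quota targets, not any pollster’s actual procedure). A random sample draws respondents with a known chance of selection; a quota sample keeps recruiting whoever turns up until pre-set cells such as gender by age group are full, so selection probabilities are unknown.

```python
import random

# Hypothetical population with two of the usual quota characteristics.
population = [
    {"gender": random.choice(["F", "M"]),
     "age_group": random.choice(["18-34", "35-54", "55+"])}
    for _ in range(10_000)
]

def random_sample(pop, n):
    """Simple random sample: every person has the same, known chance of selection."""
    return random.sample(pop, n)

def quota_sample(pop, targets):
    """Quota sample: recruit whoever 'turns up' until each cell's target is met.
    Selection probabilities are unknown, so the textbook margin of error doesn't apply."""
    counts = {cell: 0 for cell in targets}
    sample = []
    for person in random.sample(pop, len(pop)):  # people arrive in arbitrary order
        cell = (person["gender"], person["age_group"])
        if counts[cell] < targets[cell]:
            sample.append(person)
            counts[cell] += 1
        if len(sample) == sum(targets.values()):
            break
    return sample

targets = {(g, a): 100 for g in ["F", "M"] for a in ["18-34", "35-54", "55+"]}
print(len(random_sample(population, 600)), len(quota_sample(population, targets)))
```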

Interestingly, things are not quite the same in New Zealand. The two public polls that have come closest to the New Zealand election result (for the last two elections) do not use quota sampling. They use the (more expensive) random sampling approach. (Well, I know one of them does, and I’m 90% sure the other does too.)

One of these polls did get things quite wrong in 2005, but that had very little to do with their sampling approach. It would be a mistake to assume polling accuracy all comes down to the sampling method.

A friend at Auckland University and I are working on a paper where we’re modelling the demographic vs psychological determinants of political preference in New Zealand. We’re using a year’s worth of polling data (with permission from the client) and data from the longitudinal New Zealand Attitudes and Values study. When it’s published, I’ll blog about it here.

EDITS: Because iPads suck for blogging.





Inquiry to be held into UK polling accuracy

9 05 2015

Report by Sky News.

An independent inquiry is to be carried out into the accuracy of election polls after they consistently underestimated the Conservatives’ lead over Labour.

Predictions of a neck-and-neck race, a near-balanced parliament, and a potential constitutional crisis following the General Election put forward by all major pollsters during the campaign have been proved drastically wrong.

My impression is that, relative to NZ, there’s more information available about individual poll methodologies in the UK. This should assist the inquiry.

The British Polling Council (BPC), which acts as the association for opinion pollsters, will look into the causes of the “apparent bias” and make recommendations for future polls.

The BPC, which counts all major UK pollsters among its members, said in a statement: “The final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation.”

It would be wrong to assume political preference is stable during the week before an election, so ‘apparent bias’ is the correct term.

Survation said it conducted a telephone poll on Wednesday evening which showed the Tories on 37% and Labour on 31% but “chickened out” of publishing it as it appeared so out of line with other surveys.

Case in point.

Meanwhile, ICM director Martin Boon appeared to sum up the mood among Britain’s pollsters, tweeting “oh s**t” after the publication of the exit poll showing the Tories would be by far the largest party.

Heh… that’s what he said in public.

UPDATE: Article by Survation. (Thank you Matthew Beveridge)

Survation conducted a voting intention telephone poll the day before the election (Wednesday) with three specific attributes:

  • Naming candidates through a ballot prompt specific to the respondents’ constituency based on their postcode.
  • Carefully balancing our sample based on age, region, sex, and past vote prior to weighting, from a nationally representative sample frame
  • And crucially, speaking only to the named person from the dataset and calling mobile and landline telephone numbers to maximise the “reach” of the poll.

This was conducted over the afternoon and evening of Wednesday 6th May, as close as possible to the election to capture any late “swing” to any party – the same method we used in our telephone polls during the Independence Referendum that produced a 54% and a 53% figure for “no”.
This poll produced figures of:

Survation Telephone, Ballot Paper Prompt:
CON 37%
LAB 31%
LD 10%
UKIP 11%
GRE 5%
Others (including the SNP) 6%

Which would have been very close to the final result.

But!

We had flagged that we were conducting this poll to the Daily Mirror as something we might share as an interesting check on our online vs our telephone methodology, but the results seemed so “out of line” with all the polling conducted by ourselves and our peers – what poll commentators would term an “outlier” – that I “chickened out” of publishing the figures – something I’m sure I’ll always regret.

So in short, there will be no internal review of polling methodology for Survation post this General Election result.

Really? Based on the result of a single poll that was a better predictor? Let’s hope their model holds true for the next election.

Online polling is very effective and useful for a wide range of reasons, however we’re now clearer than ever that this method of telephone polling as described above will be our “Gold Standard” for voting intention going forward.

I think online polling can be effective for political polling. However, the data-collection mode is just one of the many factors that determine how accurate a poll turns out to be.





Annoying article about polling error

31 03 2015

#BeginRant

The Royal Statistical Society has published a ‘report card’ detailing the problems with various polling methods.

Actually, that’s all the article really does – points out all the problems with different survey methodologies, and suggests folks should keep all these in mind when interpreting results.

NEWS FLASH: The potential sources of error for any methodology are endless. All survey methodologies have advantages and disadvantages. I could write pages and pages on error sources within each of the methodologies they mention.

The article might give a reader the impression it’s actually possible to carry out a ‘perfect’ survey. It’s not possible. It has never been possible since the dawn of population surveys. The data are flawed!

As I’ve said numerous times on this blog, the pollster’s job is not to carry out the perfect survey. It’s their job to understand why they can’t.

#EndRant





Non-probability sampling and sampling error

29 03 2015

Just noticed this infographic on Twitter, and I’m blogging it mainly so I don’t forget where I saw it. It has a wee note at the bottom:

“This online survey is not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated.”

I may use this note in future when I don’t use probability sampling.

A probability sample is one where every person in the target population has a chance of being selected for the survey, and you can accurately determine the probability of that happening. Due to things like poor response rates, it’s never actually possible to collect a true probability sample. However survey researchers can try to closely approximate it.

The margin of error that is often reported with survey results is based on the assumption that the survey used probability sampling. However, quite a lot of the time the researchers have made no effort to approximate one, or it’s simply not practical to do so.
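For reference, here’s the textbook calculation behind that reported figure, as a minimal Python sketch; it assumes a simple random sample, which is exactly the assumption that frequently isn’t met.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random sample of n people,
    at roughly the 95% confidence level. Only meaningful if the survey at least
    approximates a probability sample."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. a party polling at 45% in a survey of 1,000 respondents
print(round(100 * margin_of_error(0.45, 1000), 1))  # about 3.1 percentage points
```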

One of the reasons some political polls weight by household size is that people in larger households have a lower probability of being selected than people in smaller households. So, if they’re trying to approximate a probability sample, pollsters will apply an inverse probability weight to adjust for this. In my experience, if the survey has been well conducted, this weight alone goes a fair way toward adjusting for Māori, Pacific and low-income under-representation.
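A minimal sketch of that weighting step (Python, with hypothetical respondents): if one adult is interviewed per contacted household, a person’s chance of selection is inversely proportional to the number of eligible adults in the household, so the design weight is proportional to household size.

```python
# Hypothetical respondents: the number of eligible adults in each household.
respondents = [
    {"id": 1, "eligible_adults": 1},
    {"id": 2, "eligible_adults": 2},
    {"id": 3, "eligible_adults": 4},
]

for r in respondents:
    # One adult is interviewed per contacted household, so
    # P(selected | household contacted) = 1 / eligible_adults,
    # and the inverse-probability (design) weight is proportional to eligible_adults.
    r["design_weight"] = float(r["eligible_adults"])

# Rescale so the weights average to 1 across the sample.
mean_weight = sum(r["design_weight"] for r in respondents) / len(respondents)
for r in respondents:
    r["design_weight"] /= mean_weight

print(respondents)
```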





Not about polls and surveys anymore

19 12 2014

I like to always have something to do.

One Sunday I was feeling bored, and I came across this tweet by Vaughn Davis (@vaughndavis).

After a brief discussion about his technique for those fantastic-looking panel lines, I was off to Toy World in J-ville. My first build was this 1/72 scale Spitfire Mk Ia.

[Photo: the completed 1/72 Spitfire]

There’s a massive learning curve, and clearly I’ve got a way to go. I was never patient enough to build scale models when I was a kid. I’m very slightly more patient now, and I’m totally addicted to this. I’ve just bought a second hand compressor and airbrush, and am two thirds of the way through an ME109 (sticking with the WW2 theme for now). I’ve also bought the kit for a Douglas C-47 Skytrain – and will be painting it in D-Day colours.

So, as well as posting about polls and surveys, I’ll now post the occasional completed scale model. :)





How did the polls do? The final outcome.

4 10 2014

Now we have the final election result, I’ve updated the table from my previous post. In addition, I’ve included a similar table for the polls-of-polls, and a pretty graph!

UPDATE: I’ve revised the chart and first table with UMR’s pre-election poll result, published by Gavin White on SAYit Blog. I’ve checked all my numbers fairly carefully, but if any pollster, pundit, or media organisation spots any errors please let me know and I’ll update this post accordingly.

[Chart: final election result compared with the pre-election polls]

How I calculated the above results.

[Table: individual polls vs. the final result]

[Table: polls-of-polls vs. the final result]

The overall picture remains similar.

  1. Well done DigiPoll and DPF (Curia poll-of-polls)
  2. Still no evidence, this election, of the ‘National bias’ that some people talk about.
  3. If there is any poll bias, it appears to be toward the Green Party.
  4. The landline bias/non-coverage issue is a red herring – the polls that came closest only call landlines. It’s just one of many potential sources of error that pollsters need to consider. Here’s another post about this, if anyone is interested in finding out why it’s not such a big deal.
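For anyone wanting to run this kind of comparison themselves, here’s a rough sketch of the general idea in Python. The figures below are placeholders and this isn’t the exact method behind my tables, just one common way to score polls: compare each poll’s party shares with the official result and summarise the differences as a mean absolute error.

```python
# Illustrative placeholder figures only, not the actual poll or election numbers.
final_result = {"NAT": 47.0, "LAB": 25.1, "GRE": 10.7, "NZF": 8.7}

polls = {
    "Poll A": {"NAT": 46.0, "LAB": 26.5, "GRE": 12.5, "NZF": 7.5},
    "Poll B": {"NAT": 48.0, "LAB": 24.5, "GRE": 11.0, "NZF": 9.0},
}

def mean_absolute_error(poll, result):
    """Average absolute difference between a poll's party shares and the final result."""
    return sum(abs(poll[party] - result[party]) for party in result) / len(result)

# Rank the polls from closest to furthest from the final result.
for name, shares in sorted(polls.items(), key=lambda kv: mean_absolute_error(kv[1], final_result)):
    print(f"{name}: mean absolute error = {mean_absolute_error(shares, final_result):.2f} points")
```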






