Initial thoughts on buying lotto tickets at checkouts

6 04 2014

I don’t think it’s good to be encouraging people to treat lotto as part of their weekly grocery shopping.

Personally I think lotto is the least harmful type of gambling and yes, buying a lotto ticket is a personal choice. However, gambling also has a direct influence on those personal choices. It uses a kind of reinforcement schedule that maximises the chances of that personal choice being made again and again. Now it will be even more effective, because you don’t need to make an active decision to walk over to the lotto counter.

Yes it will be more convenient for some. But do we really need to make it as easy as possible for people to spend part of their weekly grocery budget on lotto?

And you know what else? No matter where you sit on this, at least people are talking about issues that are important. That’s instead of who’s tricky, who has made a mistake in some speech, which MPs the Internet Party has been talking to, and other stuff that turns non-politics-obsessed New Zealanders off politics altogether.





Not about ‘The impact of Labour’s GOTV efforts’

3 04 2014

Sigh… Rob has put up a post about my analysis being ‘wrong’.

“Here is why I think they are both wrong.”

With respect, here is why Rob’s reasoning that we are ‘both wrong’ is poorly constructed.

“1. 25.8% of people did not vote in the last election, but only 8.2% of the population admitted it in the survey Grumpollie was using. That’s a very big discrepancy.”

Yes it is, and that is one of the reasons why I said “There are a bunch of caveats to this analysis, including the small sample size and how representative the sample of non-voters was.” Also, it is extremely hard to solicit participation from non-voters in a survey about voting. Having worked on the post-election survey twice, I can attest to this. The discrepancy is not that unusual. Sample bias among the non-voters who did respond matters more than how few of them responded.

 “2. There is a long-standing tradition to lying to pollsters about whether you voted. It is based on “social desirability bias.” And the people most susceptible to it, people who often do vote and are embarrassed that they did not in 2011, are also in my view among the most likely to have voted for Labour in 2008.”

There is a difference between lying and forgetting. Rob has clearly read some social psychology papers. However, there is a considerable body of research in cognitive psychology showing that people forget events very easily, including events they claim to remember ‘as clear as day’. If people don’t accurately report their vote at the previous election, it’s not fair to assume they were all lying.

 “3. The analysis, and David Farrar’s conclusion, is based on the idea that Labour will go hunting for non voters randomly around the country, convincing non-voters in the bluest parts of Clutha-Southland to vote just as much as we do in Labour stronghold areas. We are a bit smarter than that.”

My analysis was based on no such thing. No mention of the Labour party was made in my post, and I would not assume for a second that Labour were silly enough to be using up valuable resources trying to target non-voters in National strongholds. The post was calling into question the assumption I’ve seen made that most non-voters would vote Labour. When people say things like ‘we just need to get that 800,000 out to vote’ or ‘if even a portion of that 800,000 get out to vote we will see a change of government’, that’s the assumption they’re making (or at least helping to reinforce).

Even though the sample is very small and there are a bunch of caveats, the results (at the very least) call that assumption into question.

What these people could be saying instead is ‘We need to focus on the needs of the people in areas with higher deprivation, show them how we can make a difference in their lives and give them a better chance, and encourage them to get out and vote.’

Extra note:

Because I know people will make assumptions about my political leanings based on this and Rob’s post, I’d like to reiterate that I’m a left-of-centre voter. I voted for National once, when I was 18 or 19, but have not voted in that direction since. There’s more about why on the ‘about’ page.

UPDATE 6/4/14: Added a link to a Stuff.co.nz article reporting on assumptions about the 800,000.





What IS a pollster’s job anyway?

3 04 2014

Wow, there has been a lot of online discussion about polls in the last few days. I’d like to make a point.

It seems a lot of people assume that pollsters are trying desperately to get a representative sample, and that they fail if they do not achieve that. To tell a pollster their data are flawed, it seems, is the ultimate insult!

That’s an incorrect assumption.

Any pollster, ever, who tells you their sample is perfectly representative either doesn’t know what they’re doing, or is lying. Yes! The data are flawed, and they always have been. This is nothing new, and has not just come about in recent years due to increasing non-coverage. No pollster can get a representative sample of eligible voters. It’s simply not possible.

The pollster’s job, in my view, is to try to understand why they can’t, and to attempt to disentangle the signal from the noise. It’s the job of a good pollster to spend hours thinking about sources of error, and considering ways to reduce it, cancel it out, or otherwise adjust for it. They can’t always get it right, but that’s the nature of measurement in a context where there are so many variables.
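
To give a flavour of what that kind of adjustment can look like (a generic textbook example, not a description of what my company or any other pollster actually does), here is a minimal sketch of post-stratification weighting: each respondent is scaled up or down so the sample matches known population shares. All categories and numbers are made up for illustration.

# Minimal post-stratification sketch: weight respondents so the sample matches
# known population shares on one variable (here, a made-up age grouping).
# All categories and numbers are hypothetical, purely for illustration.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed population targets

# Hypothetical respondents: (age_group, party_preference)
respondents = [
    ("18-34", "A"), ("35-54", "A"), ("35-54", "B"),
    ("55+", "B"), ("55+", "B"), ("55+", "A"),
]

n = len(respondents)
sample_share = {g: sum(1 for a, _ in respondents if a == g) / n for g in population_share}

# Each respondent is weighted by population share / sample share for their group,
# so under-represented groups count for more and over-represented groups for less.
weights = [population_share[a] / sample_share[a] for a, _ in respondents]

# Weighted vs unweighted estimate of support for party "A"
weighted_A = sum(w for (a, p), w in zip(respondents, weights) if p == "A") / sum(weights)
unweighted_A = sum(1 for _, p in respondents if p == "A") / n
print(f"Unweighted support for A: {unweighted_A:.2f}")
print(f"Weighted support for A:   {weighted_A:.2f}")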

A poll is a measurement tool, and all measurements contain error.

That is all.





Do party preferences change during an Election Year?

3 04 2014

Another excellent post by Gavin White at UMR’s SayIt blog.

Gavin’s analysis of both UMR and Colmar Brunton poll results going back to 1993 and 1996, respectively, shows that party preferences do actually change considerably during an Election Year.

“I’ve heard it argued from time to time that party votes don’t tend to shift much in election year, when in actual fact nothing could be further from the truth.  I looked at UMR polls back to 1993, Colmar Brunton polls back to 1996, and other public polls since those dates, where they were available.  I simply compared the first poll of election year with the actual election result, and here’s what I found

- The vote for the incumbent party (i.e. the main party of government at the time) has fallen during election year at 6 of the last 7 elections.  It has fallen by an average of 5%.

- The vote for the party leading the polls at the beginning of election year has fallen by the election for every one of those seven elections.  The average decline is 6%.

- The major beneficiaries of those drops have generally been minor parties (e.g. United Future, NZ First and ACT in 2002, NZ First in 2011).”

In another post, Gavin examines poll results industry-wide to show that, on average, the final mainstream media polls prior to elections have tended to get a result:

  • 2.4% too high for National
  • 0.5% too low for Labour
  • 1.5% too high for Green
  • 1.1% too low for NZ First

That’s an average across all polls, so some will have results in the opposite direction, or more strongly in the same direction, for some parties. Another thing worth noting is that some polls appear to be more volatile than others, at different times, which can make it hard to know whether support is increasing or decreasing at any given moment. There could be a number of reasons for this. I really dislike poll volatility – it makes it impossible to separate the signal from the noise.
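
For anyone wondering how an average like that is put together, it’s just the difference between each final poll and the actual election result, averaged across polls for each party. A rough sketch of that arithmetic follows; the poll and result figures in it are invented placeholders, not the real data behind Gavin’s numbers.

# Sketch of how an average final-poll error per party might be computed.
# The poll and election figures below are invented placeholders, NOT the
# real data behind Gavin White's numbers.

election_result = {"National": 47.3, "Labour": 27.5, "Green": 11.1, "NZ First": 6.6}

final_polls = [  # hypothetical final pre-election polls from different companies
    {"National": 50.0, "Labour": 26.5, "Green": 13.0, "NZ First": 5.5},
    {"National": 49.0, "Labour": 27.5, "Green": 12.5, "NZ First": 5.0},
    {"National": 48.5, "Labour": 26.0, "Green": 12.0, "NZ First": 6.0},
]

for party, actual in election_result.items():
    errors = [poll[party] - actual for poll in final_polls]
    avg_error = sum(errors) / len(errors)
    direction = "too high" if avg_error > 0 else "too low"
    print(f"{party}: on average {abs(avg_error):.1f} points {direction}")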

UPDATE: Corrected some typos.





Rob Salmond’s post on cell phone polling

13 03 2014

I agree with Rob Salmond that within five years polling methodologies will likely change.

Some thoughts, though…

1. The proportion of cell only households is much higher in the US than in New Zealand (where it’s around 12% to 14%), so of course there are more cell only polls in the US. Just because ‘they do it in the States’ doesn’t mean we should do the same here. The sorts of people who live in ‘cell only households’ in New Zealand are not necessarily the same as the sorts of people who live in cell only households in the States.

2. The company that I work for has no policy on “…refusing to call cell phones.” In fact, they do randomly dial cell phones for telephone surveys. They will also call them for the poll if a non-qualifying person in the household gives them a cell number to call.

3. Our decision about not randomly dialing cell phones (yet) has very little to do with the cost of calling cell phones (in fact the cost is not that substantial). It’s due more to a) the degree to which the additional sample frame will reduce bias versus increase variance (produce less stable results), and b) the structure of the New Zealand cell phone system. There’s a rough sketch of that bias-versus-variance trade-off below the list.

4. When the decision is made to change polling approaches it may or may not be a decision to change to cell phone polling (Rob hints at this in his post).

5. At present my view is that, in New Zealand, non-response is a far far bigger source of error than non-coverage. If non-coverage of cell only households is such a big issue, how come most polls seem to over-state support for the Green party? And why don’t they under-state support for the Labour Party?

6. In New Zealand, does calling cell phones decrease non-response or increase it? Don’t underestimate the importance of this.

Calling cells is not, and will never be, the magic bullet for opinion polling.
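
To put some rough numbers around the trade-off in point 3, here is a toy calculation. Every figure in it is invented, and this is not how any polling company actually models the decision: the idea is simply that a cell phone frame may remove some coverage bias, but the unequal weighting it usually requires inflates variance, so whether it helps depends on how big that bias really is.

# Toy illustration of the bias-versus-variance trade-off in adding a cell phone
# frame. Every figure here is invented; this is not how any company models it.

def rmse(bias, variance):
    """Root mean squared error combines systematic bias with sampling variance."""
    return (bias ** 2 + variance) ** 0.5

n = 1000               # interviews per poll (hypothetical)
p = 0.40               # true support for some party (hypothetical)
coverage_bias = 0.01   # assumed 1-point bias from missing cell-only households

# Landline-only design: carries the coverage bias, but simple sampling variance.
var_landline = p * (1 - p) / n
print("Landline only:", round(rmse(coverage_bias, var_landline), 4))

# Dual-frame design: assume the bias is removed, but unequal weights inflate
# variance. Kish's approximation: design effect ~= 1 + (cv of weights)^2.
cv_weights = 0.5       # assumed coefficient of variation of the weights
deff = 1 + cv_weights ** 2
var_dual = deff * p * (1 - p) / n
print("Dual frame:   ", round(rmse(0.0, var_dual), 4))

# Whether the dual frame wins depends on how big the coverage bias really is
# relative to the extra variance the weighting introduces.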

UPDATE: I’ve read, here and there, some comments that polls use a) published landline listings, or b) an outdated list of number banks for RDD sampling. I can categorically state that ‘a’ is absolute rubbish. None of the main media-client public polls use published listings. At the company I work for, ‘b’ is also rubbish. It’s quite possible to uncover new number ranges.

For those interested, RDD works by randomly generating numbers within number banks, then connection testing them, and then re-sampling the connected numbers.
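
As a rough illustration of that three-step process (made-up number banks and a stubbed-out connection test, not anyone’s actual sampling code):

import random

# Rough sketch of RDD as described above: generate random numbers within known
# number banks, connection-test them, then re-sample from the connected numbers.
# The banks and the connection test here are made-up placeholders.

number_banks = ["0412", "0498", "0933"]  # hypothetical prefixes defining number banks

def generate_candidates(banks, per_bank=500):
    """Randomly generate candidate numbers within each bank."""
    return [f"{bank}{random.randint(0, 9999):04d}" for bank in banks for _ in range(per_bank)]

def is_connected(number):
    """Placeholder for a real connection test (e.g. an automated dialler check)."""
    return random.random() < 0.3  # pretend roughly 30% of generated numbers are live

candidates = generate_candidates(number_banks)
connected = [num for num in candidates if is_connected(num)]

# The final calling sample is a re-sample drawn from the connected numbers.
calling_sample = random.sample(connected, k=min(100, len(connected)))
print(f"{len(candidates)} generated, {len(connected)} connected, {len(calling_sample)} sampled to call")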





Explaining how a minimum wage rise can reduce inequality

13 03 2014

Inequality is not when some people earn more or less than other people. It’s when entire groups are under-resourced (e.g. earn less, have poorer health, or have lower education levels) relative to other groups.

This infographic does a great job of explaining how a minimum wage rise can reduce a gender pay gap.

[Infographic: 20140313-081657.jpg]





Which would you choose?

26 02 2014

Who would have thought? An example of something that’s worse than a self-selecting poll.






