Why don’t you poll cellphones?
This question, or variations on it, is the one I’m asked most frequently. I’ve answered it before on this blog, but this time I thought I’d share some data to help explain my view.
Firstly, let me state that the company I work for does call cellphones. We just don’t randomly dial them for political polls. As I’ve mentioned before, this has very little to do with the actual cost of calling cells. For a polling company, the cost isn’t that much more than it is for landline calls.
I’d like to start by addressing the misconception that it is just low income or ‘young’ households (for lack of a better term) that don’t have a landline telephone.
Please look at the chart below, which I created using data from Statistics New Zealand’s 2012 Household Use of Information and Communications Technology Survey. This is a very robust door-to-door survey of New Zealand households. You can find out more about the methodology here. As you can see in the chart, relative to all NZ households there is a greater proportion of non-landline households in the lower income (and likely younger) groups. However, what’s also clear is that there are substantial proportions of non-landline households in higher income groups too.
In fact, you can see in the next chart that nearly half (47%) of non-landline households receive more than $40,000 per year, and a quarter (25%) receive more than $70,000 per year. So sure, absolutely, there is a skew toward lower income (and likely younger) households, but don’t assume that cell-only households all contain lower income people and young people.
To help illustrate my next point I’ve combined these data with what we know from the most recent Census – that 85.5% of households have a landline. This chart is not perfect because the survey was carried out at a different time to the Census, and things will have changed a bit in between the two. The chart breaks all New Zealand households down by income band and whether they are covered by an up-to-date RDD (random digit dialing) sample frame.
Most polling companies will weight their data to (EDIT: try to) correct for things like non-coverage of age and socioeconomic groups. The problem with weighting is that it assumes the people you have surveyed (and are weighting) are similar to the people you haven’t surveyed.
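To make the mechanics of weighting concrete, here is a minimal sketch of cell weighting by age group. All the group shares and support levels below are invented for illustration; they are not from any real survey. The point is in the final comment: the adjustment only works if the respondents you up-weight resemble the people you never reached.

```python
# Hypothetical example: sample shares vs. population (census) shares by age group.
# All numbers are made up for illustration.
sample_share = {"18-34": 0.15, "35-54": 0.40, "55+": 0.45}
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Each respondent in a group gets weight = population share / sample share,
# so under-represented groups count for more.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical party support within each age group of the sample:
support = {"18-34": 0.50, "35-54": 0.40, "55+": 0.30}

unweighted = sum(sample_share[g] * support[g] for g in sample_share)      # 0.37
weighted = sum(population_share[g] * support[g] for g in sample_share)    # 0.395

# The weighted figure is only correct if the 18-34s you DID survey
# hold the same views as the 18-34s you couldn't reach at all --
# which is exactly the assumption the post is questioning.
```

The weighted estimate moves toward the young respondents you did reach; it cannot tell you anything about young people with no landline at all.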
But here’s what you need to think about if you’re wondering how big the cellphone non-coverage issue really is:
- Do voters in the under $40k income band who are not covered differ in party support from voters in the under $40k income band that are covered?
- Do voters in the $40-70k income band who are not covered differ in party support from voters in the $40-70k income band that are covered?
- Do voters in the $70k+ income band who are not covered differ in party support from voters in the $70k+ income band that are covered?
When I say differ, I don’t mean differ ‘just a little’. When you factor in the size of each non-covered income group (8%, 3%, and 4% of all households), they would have to differ massively from each covered income group, when it comes to party support, for them to make much difference at all to a total poll result.
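The scale of this can be sketched with simple arithmetic. The non-covered shares (8%, 3%, 4%) come from the post; the support levels are hypothetical. If every non-covered voter differed from the covered voters by some amount, the total poll result would shift by roughly that amount times the total non-covered share:

```python
# Shares of ALL households not covered by the RDD frame, by income band
# (figures from the post: 8% + 3% + 4% = 15% of households in total).
non_covered_share = {"<40k": 0.08, "40-70k": 0.03, "70k+": 0.04}

def poll_shift(difference):
    """Approximate shift in the total result if non-covered voters in every
    band differ from covered voters by `difference` in party support.
    (Hypothetical uniform difference, for illustration only.)"""
    return sum(share * difference for share in non_covered_share.values())

# Even an implausibly large 10-point difference in every band moves
# the headline number by about 0.15 * 0.10 = 1.5 points.
shift = poll_shift(0.10)
```

A 10-point gap between covered and non-covered voters, in every income band at once, would move the total by only about 1.5 points, which is within the margin of error of a typical poll.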
Now add to this the complexities of calling cells, such as cell response rates and the difficulty of nailing down where people actually live (to make sure you’re covering all rural, provincial and urban areas in the correct proportions), and there’s a possibility that including a cell number sample frame won’t add to the robustness of your poll at all.
I’m not saying that polling companies shouldn’t call cell phones. I’m saying each polling company needs to do their own analysis of all the potential sources of error in a poll, and make their own decision about how best to address them. As I’ve said before, calling cells is not, and will never be, the magic bullet for opinion polling.