The public sector fixation on high response rates

In my current role I spend a good portion of my time responding to requests for proposals from government agencies. I really like working with social sector clients, because their research often needs to be highly defensible and robust, and because I prefer to spend my time on research that can make a positive difference.

One concern I have is that when selecting their preferred research design (and preferred supplier!), public sector clients can place too much weight on the design that will deliver the highest response rate. Please don’t misunderstand me: response rates are important. A high response rate is an indication that your sample is a good reflection of the population it was drawn from, and is not skewed toward or away from any particular type of person. However, the response rate is not the only indicator of sample quality.

Here are two examples of situations where a higher response rate went hand in hand with a poorer quality sample or ‘suspect’ survey estimates. (Note: I have never been directly or indirectly involved with the two studies I describe below.)

Example One: Lower response rate, higher quality sample.

Lifestyle survey of school children

This is an annual overseas survey of tens of thousands of school pupils about alcohol, smoking and substance use. It uses a self-completion questionnaire, administered by teachers.

The survey designers became concerned about school administrators selecting the participating classrooms for the study, and thought this might be influencing their estimates of alcohol, smoking and substance use. Their solution was to select classrooms randomly using a computerised telephone script (ie, taking this decision out of the administrators’ hands).
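(For illustration only, here is a minimal Python sketch of this kind of random draw, using hypothetical school and class identifiers and a seeded random number generator so the selection can be repeated and audited. The actual survey used a computerised telephone script, so this only sketches the principle of taking the choice away from administrators.)

    import random

    # Hypothetical: each school supplies the list of eligible classes.
    classes_by_school = {
        "School A": ["A-101", "A-102", "A-103"],
        "School B": ["B-201", "B-202"],
    }

    # Fixed seed so the same draw can be reproduced later for auditing.
    rng = random.Random(2024)

    # One randomly selected classroom per school, no administrator input.
    selected = {school: rng.choice(classes)
                for school, classes in classes_by_school.items()}
    print(selected)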

This new approach resulted in a lower response rate among schools, presumably because school administrators didn’t like being told what to do. However, despite the lower response rate, the data itself suggests the sample was of higher quality.

The chart below shows the ‘design effect’ over three waves of the survey. The design effect is calculated from the degree of weighting required to adjust for non-random sampling and non-response, and it feeds directly into the margin of error: the higher the design effect, the higher the margin of error (ie, the smaller the design effect, the better).

[Chart: design effect by survey wave]

As can be seen in the chart, the wave with the lowest response rate produced the sample with the lowest design effect. This is an example of a situation where the better methodology was the one that produced the lower response rate.
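For readers who want the mechanics, one common way to approximate the design effect due to weighting is Kish’s formula, deff = n × Σw² / (Σw)². The Python sketch below uses hypothetical weights to show how heavier weighting inflates the design effect and, with it, the margin of error (which grows with the square root of the design effect).

    import math

    def kish_design_effect(weights):
        """Kish's approximation: deff = n * sum(w^2) / (sum(w))^2."""
        n = len(weights)
        return n * sum(w * w for w in weights) / sum(weights) ** 2

    # Hypothetical weights: an evenly weighted sample vs one needing
    # heavy adjustment for non-random sampling and non-response.
    even_weights = [1.0] * 100
    skewed_weights = [0.4] * 50 + [1.6] * 50

    for label, w in [("even", even_weights), ("skewed", skewed_weights)]:
        deff = kish_design_effect(w)
        print(f"{label}: design effect = {deff:.2f}, "
              f"margin of error inflated by a factor of {math.sqrt(deff):.2f}")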

Example Two: Higher response rate, suspect data.

Department of Conservation Visitation Survey

This example is a little closer to home. The Department of Conservation measured visitation three different ways over eight years.

Initially, the questions were placed in an omnibus survey, alongside questions from a range of other clients. Before answering the questions, omnibus respondents had no knowledge of the survey’s content.

Then the survey moved to a customised telephone approach. Under the customised approach, potential respondents were randomly selected, and were sent a pre-notification letter explaining what the survey was about and that an interviewer may call. This method increased the response rate over the omnibus approach.

Finally, the survey moved to a sequential mixed-method approach, where people were selected at random from the electoral roll and sent a postal invitation to complete an online survey. Those who didn’t complete the online survey within a given timeframe were then sent a paper self-completion questionnaire. This method delivered the highest response rate, but under this approach potential respondents found out about the topic, and in many cases could see all the questions, before deciding whether to take part.

The risk of giving people a lot of information about a survey is that those more involved with the topic are more likely to take part. The chart below shows the approximate response rates for each approach, and recreation area visitation over time. You can decide for yourself which results are more plausible, but it looks to me like the visitation figure is correlated with the amount of pre-interview information given to respondents.

[Chart: approximate response rates and recreation area visitation, by survey approach]

So what’s the take home message here?

The take home message is that the response rate should not be seen as the main driver of sample quality.[1] Reducing non-response bias is far more important for producing reliable population estimates. I could probably increase the response rate for every single survey I carry out by making the topic sound exciting and interesting. This is true for social surveys, political polls and commercial research. I don’t do this, though, because I like my research to be robust.

[1] For non-random quota surveys, the response rate isn’t even a particularly relevant indicator of sample quality, but that would require an entirely separate post.
