How many poll questions is too many?
It's not nice to club your survey's respondents over the head. It might not help the reliability of your results, either.
- Yesterday a Twitter discussion broke out among several of us about a situation political pollsters often face: how to do ballot tests in a race with a large number of potential candidates, many of whom may have low name recognition. It's tough to cram a lot of trial heats into one poll. It's also tough for me to express some of my thoughts on this within Twitter's 140-character constraint. So here I'll collect yesterday's tweets and say a few things on top of that. It started with this post from Logan Dobson, a research analyst at Republican polling firm The Tarrance Group:
- To which I replied (yeah I used a milder verb than Dobson did; hey, my 15-yo son follows me):
- Now, to jump ahead to my bottom line, I think the answer to my own question is: potentially the former; almost certainly the latter.
- Pollsters always run risks when asking a lot of questions about which respondents may know little or nothing. This is especially true with IVR (interactive voice response: automated telephone polls, with questions recorded and answers given by touch-tone), which is what PPP does, but it's also the case with live-interviewer phone surveys.
- There are two main risks. First, the respondent gets annoyed or frustrated, or for some other reason feels compelled to hang up ("break off") rather than complete the poll, potentially leaving a sample that is biased in some way because those who complete the poll aren't representative of the surveyed population. Second, if the respondent feels sufficiently invested (for whatever reason) in completing the poll, as time goes on s/he may "satisfice" and give whatever the easiest or quickest answer may be to any question, without giving it much thought. (In a political poll it seems clear to me that many partisans will simply pick whoever their party's candidate is in any trial heat - more on that in a minute.)
- As an aside, outside of the political polling realm, I confronted this respondent-burden problem in a survey I project-managed for the Pew Forum on Religion & Public Life in 2010, measuring Americans' religious knowledge. We ended up with a total of 32 religious knowledge questions. A Pew Forum FAQ addressed the breakoff issue:
- Anyway, back to yesterday's tweets, which started to fly faster than I was able to type (Steven Shepard is polling editor at National Journal Hotline):
- In the middle of this, Mark Blumenthal, senior polling editor at the Huffington Post, made an observation that I'd been researching at that very moment. SUSA is SurveyUSA, which pioneered IVR polling in the early 1990s.
- Now, judging from the question lists that SUSA posts on its website for each of its polls, some of its election polls do run as long as 12 or so questions. But other IVR pollsters often run much longer: the PPP poll Logan brought up had as many as 28 questions (for a Republican primary voter), and a PPP poll released the day before had 36 (I incorrectly said 38):
- The first part of that tweet gets at one key issue: if those who break off differ in non-random ways from those who complete a long survey, non-response bias may result. The whole sample of completes may have a skew to it. (In the religious knowledge survey example above, it left us with a sample that probably was slightly more knowledgeable about religion than the public overall.) A separate issue is whether there is varying reliability by question within a sample that may or may not be skewed overall. Shepard's reply ...
- ... raises the issue of "satisficing" (well, Dobson alluded to that earlier, too - he called it "button mashing"). Natalie Jackson, an analyst for the Marist Poll, correctly noted this is not only an issue for IVR polling:
- In reply to Shepard I pointed out that the PPP poll results seemed pretty consistent internally - with clear evidence of partisan response:
- This type of result also is not unique to IVR by any means. Political polls of all kinds often begin by asking for favorable/unfavorable ratings of various political figures, which among other things serves to indicate how many respondents aren't familiar enough with the politician to give a rating. Those same polls then pit various hypothetical candidates against each other, typically - and importantly - noting which is a Democrat and which is a Republican. Thus party-line voters can and do say they'd vote for their party's candidate even if they'd never heard of that person before.
- That exercise is useful in determining minimum likely support for a party's eventual nominee. Is it necessary to repeat the exercise for a potentially large number of hypothetical candidates? Dobson says no; Steve Koczela, pollster for the MassINC Polling Group, notes the PR/political problem when you start to exclude candidates from a poll. (And I note one alternative question structure that helps minimize respondent burden, though at the cost of later being able to trend the results with a traditional trial heat question.)
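The partisan-floor idea above can be sketched with a toy simulation. All the numbers here are hypothetical (the party shares and familiarity rates are made up for illustration, not taken from any actual poll): if partisans default to their own party's candidate regardless of name recognition, a completely unknown nominee's trial-heat support still lands near his or her party's share of the sample.

```python
import random

random.seed(0)

# Hypothetical party-ID shares of the sample (illustrative only).
PARTY_SHARES = {"D": 0.35, "R": 0.32, "I": 0.33}

def trial_heat(n_respondents: int, dem_known: float, rep_known: float) -> dict:
    """Simulate a two-way ballot test in which partisans default to their
    party's candidate; dem_known / rep_known are the shares of independents
    familiar enough with each candidate to pick them."""
    tallies = {"D": 0, "R": 0, "undecided": 0}
    for _ in range(n_respondents):
        party = random.choices(list(PARTY_SHARES),
                               weights=list(PARTY_SHARES.values()))[0]
        if party in ("D", "R"):
            tallies[party] += 1  # party-line default, even for an unknown name
        else:
            r = random.random()
            if r < dem_known / 2:
                tallies["D"] += 1
            elif r < (dem_known + rep_known) / 2:
                tallies["R"] += 1
            else:
                tallies["undecided"] += 1
    return tallies

# Even with a Democrat whom no independent has heard of (dem_known=0.0),
# the Democratic column still polls near the party's 35% share.
print(trial_heat(10_000, dem_known=0.0, rep_known=0.5))
```

This is a sketch of the mechanism, not a model of any real electorate; the point is only that the trial heat's floor for an unknown candidate is roughly the party label's share, which is why repeating the exercise across many low-recognition candidates adds questions without adding much information.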