Thursday, January 10, 2008

The Problems with Polls: A Brief Primer

In view of the inaccuracy of the poll projections in the Democratic Presidential Primary in New Hampshire on Tuesday - the polls showed Obama winning by 10 or more percentage points, when in fact he lost by about three - I thought I'd don my hat as a professor (I just finished teaching a graduate course in Media Research Methods at Fordham University this Fall) and offer a little primer on why poll projections can and do go wrong.

To begin with, New Hampshire this Tuesday is hardly the first time that a poll's predictions have been wrong.

The most famous example is the Literary Digest Poll of 1936, which predicted Alf Landon beating President Franklin Delano Roosevelt, 57 to 43 percent - a near landslide. In fact, FDR went on to win nearly 61 percent of the vote.

What was wrong with the poll?

The people polled were drawn from lists of Americans who owned automobiles and had telephones in their homes - and, in 1936, cars and phones were much more likely to be in the hands of the rich than of the poor or even the middle class. The sample, in other words, was biased towards upper-income respondents, who had no love for Roosevelt.
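This kind of sampling-frame bias is easy to see in a quick simulation. The numbers below are purely illustrative assumptions, not historical measurements: suppose 30 percent of voters are affluent enough to own a car and a phone, and that this group leans heavily against FDR while everyone else leans heavily for him. A poll drawn only from the affluent frame will miss the true result badly, no matter how many people it reaches.

```python
import random

random.seed(1936)

# Hypothetical electorate. All proportions here are assumptions chosen
# for illustration: affluent voters (the only ones in the poll's frame)
# favor FDR at 40%, everyone else at 70%.
def make_voter():
    affluent = random.random() < 0.30          # owns a car and a phone
    if affluent:
        prefers_fdr = random.random() < 0.40
    else:
        prefers_fdr = random.random() < 0.70
    return affluent, prefers_fdr

electorate = [make_voter() for _ in range(100_000)]
true_share = sum(fdr for _, fdr in electorate) / len(electorate)

# The Digest-style poll: sample only from the affluent frame.
frame = [fdr for affluent, fdr in electorate if affluent]
polled_share = sum(random.sample(frame, 2_000)) / 2_000

print(f"True FDR share:   {true_share:.1%}")
print(f"Polled FDR share: {polled_share:.1%}")
```

With these assumed numbers, FDR's true support is about 61 percent (0.30 × 0.40 + 0.70 × 0.70), but the biased poll reports roughly 40 percent - and a bigger sample from the same frame would not help at all.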

Pollsters learned not to make that particular sampling error again. But the prediction of human behavior - or the attempt to measure it based on what people say they did - is still vulnerable to all kinds of errors.

Here are some perennial pitfalls of polling - with solutions or precautions, where possible:

1. People polled before voting can either be lying, or change their mind. Remedy: none.

2. People polled after voting can be lying (since they already voted, they can't change their mind). Remedy: none.

There are strategies, however, that might or might not help with lying - such as interviewing people separately from their spouses. Of course, that creates a new problem: people unwilling to participate, or to answer questions:

3. People refuse to answer the pollster. Remedy: none.

Now a courteous pollster might have more success than an abrasive one, but no one can force an uncooperative person to participate in a poll. The best that surveyors can do is report the number of people who refuse to participate. If that percentage gets too high, it can invalidate the results. For example, if 10 people are polled, 5 refuse to answer, 4 answer "a," and 1 answers "b" - what does this tell you? Not much, because who knows how the 5 who refused would have answered.
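The arithmetic behind that example is worth spelling out. With 5 of 10 people refusing, option "a" could hold anywhere from 4 to 9 of the 10 votes, depending on how the refusers would have answered - a range far too wide to support any conclusion. A minimal sketch, using the numbers from the example above:

```python
# Bounds on option "a"'s possible share when 5 of 10 people refuse:
# 4 answered "a", 1 answered "b", 5 are unknown.
answered_a, answered_b, refusals = 4, 1, 5
total = answered_a + answered_b + refusals

low = answered_a / total                # worst case: every refuser was "b"
high = (answered_a + refusals) / total  # best case: every refuser was "a"

print(f"Support for 'a' lies somewhere between {low:.0%} and {high:.0%}")
# prints: Support for 'a' lies somewhere between 40% and 90%
```

The reported result (4 out of 5 respondents, or 80 percent) sits inside a 40-to-90-percent band of possibility - which is why a high refusal rate can render a poll meaningless.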

This problem can be aggravated by pollsters who under-report non-participants - because they think (perhaps correctly) that a high number of non-participants makes them (the pollsters) look bad.

And, to make matters even worse, in addition to the above problems, every poll faces the challenge of getting a genuinely random sample to answer the questions. The 1936 poll wasn't random - it was biased towards the rich. We can see that now. But what biases might be at work in our current polls that we are not yet aware of?

In sum: polls do succeed, most of the time.

But the above problems are formidable and in principle intractable. Which means that, forever and anon, we are wise to take all poll results with a nice, big, sparkling grain of salt.