CHRISTOPHER TAYLOR'S BOOKS

Tuesday, May 03, 2011

POLL DANCING

"People prefer beef that is 80% lean to beef that is 20% fat, or medical treatments that will save 80% to those that would let 20% die."

Anyone who knows me knows how little I care for or trust polls. I've tried to make that clear on my blog and I try to avoid relying on polling data or surveys for any hard conclusions. Polling is, at best, carefully controlled guesswork, little better than gamblers trying to work out the spread for Monday night's game in advance. They get pretty good at it, but they are still only guessing.

That said, polls can be quite accurate at times. Over at Legal Insurrection, William Jacobson had a guest blogger, Matthew Knee, a pollster and expert on polling, do a series on the subject. The five-part series was quite informative and I encourage reading the whole thing.

What Knee didn't cover very much was how to do bad polling. He touched on the subject a few times, but for the most part the series was about how polling is supposed to work and how it works when done right.

What he didn't do is go into detail on how much bad polling skews results or how often it happens, even at allegedly more reputable polling organizations. Jacobson asked Knee to cover the topic after looking at a particularly outlying People Polling result for Daily Kos which was quite poorly done. Yet although the topic comes up a few times, Knee never really did an in-depth analysis of a bad poll to show the problems.

However, reading the series you can get a pretty good idea of how to run a bad poll to get specific results. There are eight ways to go about this, and they give any poll reader tools for analyzing polling. First off, you have to understand that most people don't care about politics as much as you do, and hence do not know as much about politics as you do. Knee writes:
Polling is not data retrieval. Rather, it is a process of activating associations and considerations to produce a response. People do have opinions and emotions about particular figures or values, but often do not have consistent views on specific policies. This means that, frustratingly, there is no one correct answer to what certain people think about certain topics – only how answering specific questions about them illuminates the complexities and contradictions within.
Another fact is that although they are interested, most people aren't paying close enough attention to the poll to carefully analyze each question. They are distracted by children, television, their dinner, the raid on Icecrown Citadel, and so on. Because of this, people tend to focus on specific words or phrases that stand out, and will often answer based on those rather than the question as a whole.

Because people are involved, they can be manipulated unless they are very clear on the topic, and sometimes even then. Here are ways to manipulate a poll:

1. Question order. You can use a series of questions, or the sequence of what you ask and when, to adjust the responses people give. Knee notes that "people respond differently to questions on abortion when they are preceded by questions about women's rights than when they are preceded by questions about religion." Listing a candidate's name first in a list of candidates helps that candidate's results. 'Push' polling follows this general pattern by making a statement about the subject before asking questions. In my experience, most polling follows this pattern, with leading questions designed to produce a particular answer.

2. The wording used in questions changes the answers you get. An example is the infamous New York Times poll which asked about the "Bush plan for Social Security" and later asked about the same plan without the name. The Bush plan got less than 30% support, while the actual plan without the name got more than 60%. People reacted negatively to the "Bush" label, but actually liked the plan. Naturally, the NYT publicized the negative result. Again, a lot of polling uses this technique, with wording designed to reach a conclusion - often not on purpose, but through unrecognized bias in the poll writer: "anti-abortion" rather than "pro-life," for example. Also, people tend to react positively to positive wording and negatively to negative wording. The quote at the top of this piece is one such example.

3. People like free things, or being given things. Despite the fact that most people prefer smaller government and less spending, they will consistently support big spending programs and oppose cuts to specific programs. Why? Because if people are asked whether they want to cut x program, that means less for them and the nation, so they are less likely to support it. They will almost always support tax cuts, because they like to have more money in their pocket.

4. Dividing up the answer options can manipulate results, particularly when the polls are reported on. Knee says:
For instance, the recent NYT/CBS poll on public sector unions asks if people prefer balancing the budget by raising taxes, cutting public employee benefits, cutting roads, or cutting education. The pollsters note that a plurality prefer to raise taxes. In dividing spending cuts into multiple options, while only having one tax increase option, the poll creates the illusion that more people back tax increases than spending cuts, when in fact more people opted for the latter.
News organization polls do this a lot, because it gives them the technically accurate result they're looking to report on.
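
To see how the arithmetic of that illusion works, here is a minimal sketch with made-up numbers (the percentages below are hypothetical, not the actual NYT/CBS figures):

# Hypothetical poll numbers: one tax-increase option versus spending cuts
# split across three separate options.
responses = {
    "raise taxes": 35,
    "cut public employee benefits": 25,
    "cut roads": 22,
    "cut education": 18,
}

# The single tax option wins the plurality...
plurality = max(responses, key=responses.get)
print(f"Plurality choice: {plurality} ({responses[plurality]}%)")

# ...but adding the divided spending-cut options back together tells the opposite story.
cuts_total = sum(pct for option, pct in responses.items() if option.startswith("cut"))
print(f"Any spending cut: {cuts_total}%  vs  raise taxes: {responses['raise taxes']}%")

With these numbers the headline reads "plurality prefers raising taxes," even though 65% of respondents chose some form of spending cut.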

5. While statisticians are pretty confident about the "margin of error" based on sample size, that applies only to the poll as a whole. Because individual groups within a given poll are smaller, any polling that examines these subgroups is typically unusable. The problem is that if you get a poll big enough to shrink the theoretical margin of error to 3% or so (again, this is guesswork, supported by experience and some math), the subgroups within that poll will necessarily be smaller samples. These smaller samples have a much larger margin of error, often so much larger that the answers to the poll questions become meaningless.

For example, if you take a poll of around 800 people - typical for many polls - and 49% of that poll is male, then your sample for men is 392. Now suddenly the results are questionable. Smaller subgroups such as homosexuals, Hispanics, independents, and so on end up being useless because they include so few people. Yet those numbers are often heavily relied on by politicians and policy makers: women want this result! The largest minority group in America hates this policy! The polls usually don't support that, but that's what people take from them.
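
For a rough sense of the math, the standard 95% margin of error on a roughly 50/50 question is about 0.98 divided by the square root of the sample size; the sketch below applies that textbook approximation to the hypothetical 800-person poll above (the 10% subgroup is made up for illustration):

from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a simple random sample of size n.
    return z * sqrt(p * (1 - p) / n)

samples = [
    ("full poll", 800),     # roughly 3.5%
    ("men (49%)", 392),     # roughly 5%
    ("10% subgroup", 80),   # roughly 11% - far too wide to mean much
]

for label, n in samples:
    print(f"{label}: n={n}, margin of error = {margin_of_error(n) * 100:.1f}%")

The whole-poll number looks respectable, but the subgroup margins balloon quickly, which is the point of this item.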

6. General questions such as "what do you think about unions" are less useful than specific questions such as "what do you think about the Wisconsin Teacher's Union." By asking general and vague questions you get results whose reporting can be spun however you like, whereas the more specific a question is, the more exact the answer and the reporting have to be.

7. The longer a poll is, the more annoyed the respondent gets and the more likely they are to just blow it off at the end, hang up, or answer without thinking. That means you can end-load a poll with questions you want people to think less about or answer hastily just to get rid of the pollster.

8. People can agree or disagree with a topic without feeling very strongly about it. Just because you, for example, view President Obama favorably doesn't mean you care much about the guy, will vote for him, or would defend him against anyone. Thus, a poll which shows favor vs disfavor on a topic or person doesn't necessarily mean policy should be driven by that position. How the questions are asked can manipulate this by either avoiding or pinpointing how deep or significant the favor really is.

More important, in my opinion, than the polling itself is the reporting on polling. Few people actually sit down and examine or analyze the polling results (bloggers tend to, but news reporters rarely do). So when you read a news report about the polling, you get their version of the results, which might leave off key information, ignore the sampling breakdowns, and even misrepresent results as shown in #5.

That means that a poll can be perfectly done with the best of intentions, yet the reporting can be skewed to show a specific result (e.g. majority reject Bush Social Security plan). As a result, good polling can seem to be poorly done because of how reporters write about the results. And ultimately, since that's where most people get polling data from, that's what matters more than the actual polls. Politicians are no less lazy or hasty than any other reader, and they have less time to analyze polls than most people. That means the legacy media can still use polls to manipulate public opinion and policy even if the polling is actually well done.
