
October 7, 2004

The (Ratings) Bounce

Lane Core asks someone to explain the origin of the support that supposedly has Kerry in a statistical tie with President Bush:

Let's see now. The Catholic vote went for Gore in 2000 (Kyrie eleison) and Catholics polled for Kerry a few months ago. But Catholics now poll for Bush. And the Gender Gap is narrowing, if not quite disappearing. (And don't forget the "battleground" states that are already being abandoned by the Kerry campaign as losses, nor the "blue" states that are being hotly contested by the Bush campaign.)

So, tell me, somebody: how can the race be so close? What group(s) have shifted seismically towards Kerry to offset the shift towards Bush in two very large voting blocs?

Personally, I've been hard-pressed to resist a cynical smile with each bit of polling data. Before the debates began, one was apt to hear — from parents who've followed politics for decades, from Fox News, from blogs — how the media would be anxious for Kerry to close the widening gap, so as to make the race exciting and, therefore, closely followed, with all the media revenue that political photo-finishes attract.

What groups have "shifted seismically towards Kerry"? I can't say for sure, but I wonder if the polls' samples have been taken from among the MSM's phantom readers.

Posted by Justin Katz at October 7, 2004 10:37 PM
News Media
Comments

I had understood that serious polling involved conforming your sample to a model of who was expected to show up at the polls. For example, suppose that historically equal numbers of men and women actually vote, but in your random phone poll you happen to draw 60% men and 40% women. There is no reason to let this deviation in your sample skew your poll results, so you weight the women's responses up and the men's down to fit your model.
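Here's a minimal sketch of that adjustment in Python. All the numbers (the 60/40 sample draw, the candidate preferences within each group) are invented purely for illustration:

    # Post-stratification weighting: adjust a lopsided sample to match
    # an assumed model of the electorate. All numbers are hypothetical.
    sample = {
        "men":   {"count": 600, "bush": 0.55, "kerry": 0.45},
        "women": {"count": 400, "bush": 0.45, "kerry": 0.55},
    }
    model = {"men": 0.50, "women": 0.50}  # assumed turnout shares

    def weighted_topline(sample, model):
        """Weight each group's responses so its share matches the model."""
        total = sum(g["count"] for g in sample.values())
        bush = kerry = 0.0
        for group, data in sample.items():
            raw_share = data["count"] / total   # e.g. 0.60 for men
            weight = model[group] / raw_share   # e.g. 0.50 / 0.60 for men
            bush  += data["bush"]  * raw_share * weight
            kerry += data["kerry"] * raw_share * weight
        return bush, kerry

    print(weighted_topline(sample, model))  # roughly (0.50, 0.50)
    # Unweighted, the 60%-male sample would show Bush ahead 51-49;
    # weighted to the 50/50 model, the race is a dead heat.

The raw_share * weight product just collapses to the model's share for each group; I've left both factors in so you can see where the weighting happens.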

This technique is obviously open to abuse, as is just about anything in statistics, but the basic idea is sound. One major weakness is that the model may be wrong. In our example, suppose that the 50/50 assumption about men and women voting is wrong for a particular election, and instead the voters are 45% men and 55% women. In that case, all the poll's numbers will be off systematically.
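Continuing the hypothetical sketch above: with those same preferences, the 50/50 model reports a dead heat, but plug in an electorate that is actually 45% men and 55% women and the true result looks different.

    # Same hypothetical sample, but suppose the 50/50 turnout model is
    # wrong and the real electorate is 45% men / 55% women.
    true_model = {"men": 0.45, "women": 0.55}
    print(weighted_topline(sample, true_model))  # (0.495, 0.505)
    # The poll published a tie; the actual electorate favors Kerry
    # 50.5 to 49.5. Every number the poll reports is shifted by the
    # same one-point modeling error.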

Even though it has its weaknesses, conforming the data to a model like that is probably the best practice. Recently I learned that at least some of the polls we hear about aren't doing this: someone was complaining about the much-ballyhooed Newsweek poll taken right after the first debate, pointing out that its sample happened to contain an unusually large share of Democrats.

That stunned me. It is grossly irresponsible to simply generate random phone numbers, limit the respondents to likely voters, and then use the resulting unadjusted group statistics to draw conclusions about voters. That methodology will warp the data like crazy. Different types of people are more or less likely to be near a phone at different times of the day. Different types of people are more or less likely to hang up on a pollster. And, as described above, the numbers of each type of person in the sample may not match the likely numbers for each type who actually show up at the polls.
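To put hypothetical numbers on the party-composition problem: hold everyone's preferences fixed and change only who happens to land in the sample, and the unadjusted topline moves several points on its own.

    # An unadjusted topline tracks whoever happened to answer the phone.
    # The party preferences below are hypothetical and held constant;
    # only the sample's party mix changes between the two runs.
    kerry_pref = {"dem": 0.90, "rep": 0.10, "ind": 0.50}

    def raw_kerry_share(mix):
        """Kerry's unweighted topline for a given sample composition."""
        return sum(share * kerry_pref[party] for party, share in mix.items())

    balanced = {"dem": 0.34, "rep": 0.34, "ind": 0.32}
    skewed   = {"dem": 0.41, "rep": 0.27, "ind": 0.32}  # Democrats over-drawn

    print(raw_kerry_share(balanced))  # 0.50
    print(raw_kerry_share(skewed))    # about 0.556: a five-to-six point
    # swing from sampling luck alone, with no actual change of opinion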

I haven't studied how the current polls are being done. Many different companies are polling, and I'm sure their methodologies differ. But I lost a lot of confidence in the widely publicized polls on learning that the vaunted Newsweek poll was done so very sloppily.

Posted by: Ben Bateman at October 8, 2004 1:12 AM

Thanks.

Posted by: ELC at October 8, 2004 10:18 AM

Really, Ben? I lost confidence in widely publicized polls upon hearing that President Clinton used them in decision-making. Public opinion polls might try to rely on statistical principles, but they are nothing remotely as simple as counting whether a flipped nickel lands heads or tails. The poll can be skewed not only by the sampling group, as you pointed out, but also by the sampling probe: the question, in other words. Public opinion polls can be manipulated just by phrasing the questions toward the poll-taker's bias.

I don't know if anybody is familiar with the British series "Yes, Minister," but one episode delved into that very subject: one character instructs another in how to phrase the questions to get the answers they desire. If anybody wants the episode title, let me know and I will find it.

Posted by: smmtheory at October 8, 2004 1:16 PM

Polls are inherently unreliable, but there really isn't much better out there. Looking at several polls (and at trends rather than the actual percentages) is probably a better indicator, but nothing is all that reliable. And if it's a CBS poll, they probably just make it up anyway.
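One crude way to do that, sketched with invented toplines: pool several polls and smooth the candidate's lead over a sliding window, so you're watching the direction of movement instead of any single number.

    # Watching the trend across several polls rather than any one topline.
    # Every poll result below is invented for illustration.
    polls = [  # (day, Bush %, Kerry %)
        (1, 52, 44), (2, 49, 47), (3, 51, 45),
        (4, 48, 48), (5, 50, 46), (6, 47, 49),
    ]

    def rolling_lead(polls, window=3):
        """Bush's average lead over a sliding window of recent polls."""
        leads = [bush - kerry for _, bush, kerry in polls]
        return [sum(leads[i - window + 1:i + 1]) / window
                for i in range(window - 1, len(leads))]

    print(rolling_lead(polls))
    # roughly [5.33, 2.67, 3.33, 0.67]: single polls bounce between +8
    # and -2, but the smoothed lead drifts downward toward even.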

Posted by: c matt at October 8, 2004 5:57 PM