parados wrote: A quick check of the exit polling from VNS in 2000 showed a different story from what this table claimed. [..] In actuality the VNS exit polls were within .1% for each candidate and had the difference between their percentages correct. This raises serious questions about the statements made by the author. If his numbers are inaccurate then his conclusions are no more accurate than his numbers.
Nah, his numbers are not inaccurate; they just refer to something different. (It's OK, I initially got confused on this one too. Just ask Foxfyre, whom I repeatedly and wrongly hit over the head on this score before I found out myself ;-))
How it works is like this: over the course of the day, more and more numbers come in from the pollsters out in the precincts and are folded into sets of raw exit poll data that are sent to the networks (and this year, leaked out to the blog community by Slate and Drudge). The mid-day raw data are usually the furthest out of sync with what the real results will turn out to be, but as further numbers come in, the exit poll data usually get more accurate. Also, in the course of the day the exit poll company may already apply weightings if it finds that certain population groups are under- or overrepresented in the samples, which will also make the data more accurate (sketched below). Even so, they remain a "blunt tool", as someone in that article I linked to noted, and usually end up quite a bit off. The degree to which they still ended up off is recorded in the table I just linked to.
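To make that weighting idea concrete, here's a toy sketch of the kind of mid-course adjustment described above (post-stratification). The group names and all the numbers are made up purely for illustration, not taken from any actual poll:

```python
# Toy sketch of mid-course weighting (post-stratification).
# Group names and all numbers are hypothetical, for illustration only.

# Known share of each group in the electorate (e.g. from census data)
population_share = {"men": 0.48, "women": 0.52}

# Share of each group among the exit-poll interviews collected so far
sample_share = {"men": 0.43, "women": 0.57}  # women overrepresented

# Each respondent's answers get a weight that corrects the imbalance
weights = {g: population_share[g] / sample_share[g] for g in population_share}

print(weights)  # men ~1.12, women ~0.91: a man's answers now count a bit
                # more, a woman's a bit less, so the weighted sample
                # matches the known population mix.
```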
Now what happens at the end of the day, once the actual election results are mostly in, is that they reweight the exit poll results and make them fit the actual election results by proportionally multiplying/dividing the numbers. Say the exit poll totals said D56/R44 and the actual results turn out to be D53/R47: they then recalculate all the exit poll numbers to fit the real results. And those are the numbers published online (and the ones you now refer to).
Now why do they do this, you may ask; doesn't it amount to falsifying the numbers? Well, they do it because exit polls are not intended as a way to second-guess the actual election results. They were devised as a tool to analyse the actual election results, in terms of their breakdown by demographic and political group. And the only way you can say something credible about how, say, Afro-Americans or regular church-goers voted in the elections is if your total numbers do actually fit the actual election results. Hence why they weight them into conformity with the official count, and why the exit polls you now find online about 2000 or 2004 are within 0.1% of the actual results.
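To see what that proportional adjustment looks like, here's a minimal sketch using the D56/R44 vs. D53/R47 example from above. The demographic breakdown is invented for illustration, and the pollsters' real procedure is more involved than this; the point is only the arithmetic:

```python
# Toy sketch of end-of-day reweighting: scale each candidate's column
# so the poll totals match the official result. Demographic groups and
# their numbers are hypothetical.

# Raw exit poll: shares per candidate within each (made-up) group,
# summing to a D56/R44 split overall.
raw = {
    "church-goers":     {"D": 20.0, "R": 24.0},
    "non-church-goers": {"D": 36.0, "R": 20.0},
}

actual = {"D": 53.0, "R": 47.0}          # the official result
poll_total = {c: sum(row[c] for row in raw.values()) for c in actual}

# One multiplier per candidate: actual total / polled total
factor = {c: actual[c] / poll_total[c] for c in actual}   # D ~0.95, R ~1.07

reweighted = {g: {c: v * factor[c] for c, v in row.items()}
              for g, row in raw.items()}

for g, row in reweighted.items():
    print(g, {c: round(v, 1) for c, v in row.items()})
# church-goers     {'D': 18.9, 'R': 25.6}
# non-church-goers {'D': 34.1, 'R': 21.4}   -> totals now D53/R47
```

The group-level numbers shift a little, but the relative story they tell stays intact, and the totals now agree with the official count, which is what makes the demographic breakdowns publishable.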
E.g., Freeman, the analyst who tried to substantiate the case for fraud by pointing to discrepancies between the initial exit poll raw data and the recorded election results, could only do so because he quickly saved screenshots of the exit poll data from the CNN site late in the evening, just before they were taken offline and later replaced by reweighted exit poll numbers that fitted the actual election results.
It's complicated stuff, it's true; I'm also just learning as I'm reading. For example, I have several times here asserted that exit polls are conducted by polling voters in a selected set of precincts that historically have turned out to provide a representative cross-section of the state's electorate. I have used that as an argument for why the raw exit poll data may have underestimated some voter groups this year (e.g., conservatives), noting that one election's set of representative precincts may turn out not to be representative at all in the next election as turnout increases or decreases variably from one population group to another. Alas, it turns out that's bull. The exit polls do not use some historically based selection of precincts; they use a totally random set of precincts. (Thought I'd set the record straight before it comes back to haunt me ... ;-))