In a nation that counts its votes in secret, polls take on a special significance. The United States is such a nation, and our polls, both pre-election and exit, serve as parallel vote counts, establishing the baseline against which the electoral results themselves are seen to fall somewhere on the spectrum from “just what we expected” to “shocking.” The polls, in other words, serve as a kind of “smell test”: Miscounted elections, whose vote counts veer widely or consistently away from the polling numbers, emit a certain odor, whether or not their results are actually challenged or investigated.
As a veteran analyst of election forensics, I have crunched polling and voting data from elections dating to 2002, when the Help America Vote Act hastened the computerization of voting in the United States. During this period, a pervasive pattern characterized that data: a “red shift” in which official vote counts in competitive electoral contests were consistently and significantly to the right of polling results, including both pre-election and exit polls.
Unfortunately, the standard response to our forensic red-flagging of such patterns was “the polls are ‘off’ again; they must have oversampled Democrats.” It did not seem to occur to anyone to actually examine the polling samples, nor did it make any impression when we did analyze the samples and found that they had not, in fact, oversampled Democrats. But any explanation that might point to corruption of the computerized vote-counting mechanism was strictly verboten. One suspect election after another managed to pass the stuffed-nose smell test based on the premise, as unshakable as it was irrational, that election rigging could never happen here in the beacon of democracy.
Now the polls tracking the upcoming election (“E2014”) are telling us to expect a resounding Republican victory, including control of the US Senate and reinforcement of the GOP House majority. Such results on November 4, 2014, will therefore not be shocking, as was the GOP sweep in 2010, which none of the pollsters predicted. No alarm will sound, even though there would be ample reason to scratch our heads: a party with which a dwindling minority of voters identifies, and whose intransigent political behavior has dragged Congress down to single-digit levels of approval (the lowest in history), would be rewarded with strengthened control over that very same Congress. Odd, yes, but just what the polls have been predicting, so no surprise at all.
Everything fits neatly – too neatly. Polls and vote counts form a feedback loop, and corruption of one ultimately expresses itself in corruption of the other. Pollsters stay in business by predicting election outcomes accurately. A “Certificate of Methodological Purity” may make a nice wall ornament, but matters not a whit when it comes to success within the highly competitive polling profession. If election returns in competitive races were being systematically manipulated in one direction over a period of several biennial elections, we would expect pollsters to make methodological adjustments necessary to match those returns. Indeed, it would be nothing short of professional suicide not to make those adjustments and turn whatever methodological handsprings were required to continue “getting elections right.”
Enter the likely voter cutoff model, or LVCM for short. Introduced by Gallup about 10 years ago (after Gallup came under the control of a right-wing, Christianist heir), the LVCM has gathered adherents until it is now all but universally employed. The LVCM uses a series of screening questions – about past voting history, residential stability, intention to vote, and the like – to qualify respondents for, or disqualify them from, the sample. The problem with surveying registered voters without screening for likelihood of voting is obvious: You wind up surveying a significant number of voters whose responses register on the survey but who then don’t vote. If this didn’t-vote constituency has a partisan slant, it throws off the poll relative to the election results – generally to the left, since the likelihood of voting rises as you move to the right on the political spectrum.
But the problem with the LVCM as a corrective is that it far overshoots the mark. It eliminates from the sample individuals who will in fact cast a vote, and the respondents so eliminated are, as a group, acknowledged by all to be to the left of those who remain, skewing the sample to the right. (A sound methodology, employed for a brief time by The New York Times/CBS poll, would solve the participation problem by down-weighting, rather than eliminating, the responses of interviewees less likely to vote.) So the LVCM – which disproportionately eliminates members of the Democratic constituency, including many who will in fact go on to cast a vote, by falsely assigning them a zero percent chance of voting – should get honestly tabulated elections consistently wrong. It should over-predict the Republican vote and under-predict the Democratic vote – by just about enough to cover the margins in the kind of tight races that determine control of Congress and key state legislatures.
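To make that overshoot concrete, here is a minimal simulation sketch. Every number in it is invented for illustration: the 52/48 partisan split, the turnout-propensity distributions, and the hypothetical 0.5 cutoff threshold are assumptions, not measurements from any real survey. The point is only to show the arithmetic by which a hard cutoff should land to the right of both the actual electorate and a down-weighted estimate when the count is honest.

```python
# Illustrative simulation (all numbers invented): compare three estimates of the
# Democratic-minus-Republican margin from the same synthetic sample of registered
# voters: the actual electorate, a hard likely-voter cutoff, and down-weighting.
import random

random.seed(1)

def make_respondent():
    """One synthetic registered voter: party preference plus turnout propensity."""
    party = random.choices(["D", "R"], weights=[0.52, 0.48])[0]
    # Assumption taken from the article: likelihood of voting rises toward the right.
    propensity = random.betavariate(5, 3) if party == "R" else random.betavariate(3, 3)
    return party, propensity

sample = [make_respondent() for _ in range(100_000)]

def dem_margin(weighted_parties):
    """D-minus-R margin, in points, for a list of (weight, party) pairs."""
    d = sum(w for w, p in weighted_parties if p == "D")
    r = sum(w for w, p in weighted_parties if p == "R")
    return 100 * (d - r) / (d + r)

# 1) "True" electorate: each respondent actually votes with probability = propensity.
electorate = [(1.0, party) for party, prop in sample if random.random() < prop]

# 2) Hard cutoff (LVCM-style): drop everyone below a hypothetical 0.5 threshold.
cutoff = [(1.0, party) for party, prop in sample if prop >= 0.5]

# 3) Down-weighting: keep everyone, weighted by turnout propensity.
downweighted = [(prop, party) for party, prop in sample]

print(f"actual voters        D-R margin: {dem_margin(electorate):+.1f} pts")
print(f"hard cutoff (>=0.5)  D-R margin: {dem_margin(cutoff):+.1f} pts")
print(f"down-weighted sample D-R margin: {dem_margin(downweighted):+.1f} pts")
```

In runs of this sketch, the down-weighted estimate tracks the simulated electorate closely, while the hard cutoff lands several points to its right – which is precisely the overshoot described above.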
Instead it performs brilliantly and has therefore been universally adopted by pollsters, no questions asked, setting expectations not just for individual electoral outcomes, but for broad political trends, contributing to perceptions of political mojo and driving political dynamics – rightward, of course. In fact, the most “successful” likely voter cutoff models are now the ones that are strictest in limiting participation, including those that eliminate all respondents who cannot attest that they have voted in the three preceding biennial elections, cutting off a slew of young, poor and transient voters.
There is something very wrong with this picture, and basic logic tells us that the methodological contortion known as the LVCM can get election results so consistently right only if those election results are consistently wrong – that is, shifted to the right in the darkness of cyberspace.
A moment to let that sink in, before adding that, if the LVCM shift alone is not enough to catch up with the “red-shifted” vote counts, polling (and exit polling) samples are also generally weighted by partisanship, or party ID. The problem with this is that those party ID numbers are drawn from prior elections’ final exit polls – exit polls that were “adjusted,” in virtually every case rightward, to conform to vote counts that were to the right of the actual exit polls, the unshakable assumption being that the vote counts are gospel and the exit polls therefore wrong.
In the process of “adjustment” – also known as “forcing” – the demographics (including party ID, age, race, etc.) are dragged along for the ride and shift to the right. These then become the new benchmarks and baselines for current polling, shifting the samples to the right and enabling prior election manipulations to mask forensic and statistical evidence of current and future manipulations. Specifically, the dramatically red-shifted and highly suspect 2010 election sets the sampling model for the upcoming 2014 election (off-year elections serve as the model for off-year elections, presidential elections for presidential elections).
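To see how forcing drags the demographics along, consider a minimal worked sketch. The exit-poll crosstab, the 53-47 poll result, and the 49-51 official count below are all invented numbers chosen only to make the arithmetic visible; they do not come from any actual exit poll.

```python
# Hypothetical worked example (invented numbers) of "forcing": exit poll weights are
# scaled so the weighted candidate vote matches the official count, and party ID is
# dragged rightward as a byproduct.

# Raw exit poll: share of respondents by (candidate vote, party ID).
raw = {
    ("D_vote", "Dem"): 0.40, ("D_vote", "Ind"): 0.09, ("D_vote", "Rep"): 0.04,
    ("R_vote", "Dem"): 0.05, ("R_vote", "Ind"): 0.10, ("R_vote", "Rep"): 0.32,
}

raw_d_vote = sum(v for (vote, _), v in raw.items() if vote == "D_vote")  # 0.53
raw_r_vote = 1 - raw_d_vote                                              # 0.47

# Official count the poll is forced to conform to (to the right of the poll itself).
official_d, official_r = 0.49, 0.51

# Forcing: scale D-voting respondents down and R-voting respondents up until the
# weighted candidate vote equals the official count.
factors = {"D_vote": official_d / raw_d_vote, "R_vote": official_r / raw_r_vote}
adjusted = {key: share * factors[key[0]] for key, share in raw.items()}

def party_id(table):
    """Weighted party-ID distribution implied by a (vote, party) share table."""
    return {p: round(sum(v for (_, q), v in table.items() if q == p), 3)
            for p in ("Dem", "Ind", "Rep")}

print("party ID before forcing:", party_id(raw))       # Dem-heavier
print("party ID after forcing: ", party_id(adjusted))  # shifted toward Rep
# It is the "after" numbers, not the raw ones, that become the party-ID benchmarks
# used to weight the next cycle's polling samples.
```

With these invented figures, forcing a four-point exit-poll “error” into the count moves the implied party-ID benchmark roughly two and a half points away from the Democrats and toward the Republicans, and it is that shifted benchmark, not the raw one, that seeds the next cycle’s samples.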
To sum up, we have a right-shifting, tunable fudge factor in the LVCM, now universally employed with great success to predict electoral outcomes, particularly when tuned to its highest degree of distortion. And we have the incorporation of past election manipulations into current polling samples, again pushing the results to the right. These methodological contortions and distortions could not be successful absent a consistent concomitant distortion of the vote counts in competitive races – noncompetitive races tend neither to be polled (no horserace interest) nor rigged (an outcome reversal wouldn’t pass the smell test).
Since polls and election outcomes are, after some shaky years following the advent of computerized vote counting, now in close agreement, everything looks just fine. But it is a consistency brought about by the polling profession’s imperative to find a way to mirror or predict vote counts (imagine, if you will, the professional fate of a pollster stubbornly employing undistorted methodology, who insisted that his/her polls were right and both the official vote counts and all the other pollsters wrong!). It is a consistency which, though achieved without malice on the part of the pollsters, is capable of concealing computerized election theft on a scale grand enough to equate to a rolling coup. On Election Day, accurate polls should be seen as a red flag.
Reprinted with permission of Truthout. May not be reprinted without permission.