Is there a shy Tory factor in 2015?

As the election approaches, a hotly debated topic has been the comparison to 1992, when a catastrophic opinion polling failure led almost everyone to predict a hung parliament, with Neil Kinnock as Prime Minister, only for John Major’s Conservatives to win with a 21-seat majority.
The debate has been heavy on conjecture, with many qualitative arguments put forward, mainly around social, economic or political factors, and the superficial similarities and differences (there being plenty of both, relevant and otherwise) between now and 23 years ago.
In this report, I’ve analysed electoral data and polling internals from the last 35 years, and toplines from the last 50, to try to quantify the shy Tory effect. My investigation into polling accuracy has actually been in progress for some time, and is the reason why I’ve been holding off making formal projections.
Almost everything we think we know about the state of public opinion is predicated on the assumption of unbiased polling. There is evidence counter to that assumption, but because a generation has passed since polling bias last caused a general election to be called wrongly, the public and the media have become relaxed about the risk of it ever happening again.
It’s important to distinguish between late swing (where voters actually change their minds) and polling error (where voters’ intentions are measured inaccurately). They are very different, and the distinction matters from the pollsters’ perspective, but the electoral consequence is the same: the result differs from what the polls said beforehand. Lord Ashcroft repeatedly reminds his followers that a poll is a snapshot, not a prediction. Late swing is where the scene changes after the snapshot is taken, whereas polling error is a misleading snapshot.
Furthermore, the so-called shy Tory factor is really a statistical pattern, for which shyness or dishonesty are merely possible explanations. As will be discussed, there is evidence for the various theories as to its causes, but we don’t know, and may never know, how big their individual impacts are (or were). We just know that polls have shown evidence of bias. The terms “shy Tory factor” and “shy Tory effect” are therefore used in a far broader sense than their literal meanings would imply.
This analysis is not intended as a critique of opinion polling but rather as a cross-check to determine whether we should be concerned about systematic bias. Pollsters are very smart people who do a tricky job. But we, as observers, forecasters, media or the public, need to be realistic about how accurate we can expect polls to be.
Here are the key findings:

  • Opinion polls at British general elections are usually biased against the Conservatives and in favour of Labour. In 10 of the last 12 elections, the Conservative vote share has been underestimated and in 9 of the last 12, Labour’s share has been overestimated. The spread between the two has been biased in Labour’s favour in 9 of the last 12 elections, including 5 of the last 6 (the spread calculation is illustrated in the sketch after this list).
  • At the last general election in 2010, both the Conservatives and Labour were underestimated by the polls, with the Liberal Democrats overestimated. My analysis of polling internals from 2010 suggests that this incident was a one-off, rather than an end to the traditional pattern of bias.
  • In terms of risk factors, the unusual fluidity of the electorate in the 2010-2015 parliament may have severely blunted the effectiveness of some of the adjustments introduced after 1992. This is a very significant concern for pollsters, some of whom have even gone on record to say so.
  • Every one of the 16 opinion polls in the last two years that can be compared against an actual election result has shown a pro-Labour bias in terms of the spread. This closely matches the period during which the Labour lead has been falling.
  • There are other warning signs – conflicting poll internals, as was the case in 1992, plus intelligence suggesting that things on the ground are not going the way the polls imply they should be.
  • In an attempt to quantify the shy Tory effect and/or the potential for late swing, I’ve created three statistical models, all of which significantly outperform “face value” opinion poll numbers in historical tests. The first model is based on adjusted topline numbers, the second uses polling internals and the third uses only real votes. All three suggest that the Conservatives will achieve a much stronger result than current polling averages or forecasts suggest.
  • While none of these models are guaranteed to be “right”, the second and third in particular highlight significant anomalies in the relationship between current opinion poll topline numbers and “fundamental” measures that have historically been extremely strong predictors of election outcomes. It is possible that the changes in the political landscape since 2010 have caused previously very strong patterns to break down. But it is also possible – for reasons that will be discussed – that the changes have reduced the accuracy of opinion polling, particularly the effectiveness of some of the changes introduced in the 1990s.
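For readers who want to see the arithmetic behind “bias on the spread”, here is a minimal sketch, written in Python purely for illustration (not necessarily the tool used in this analysis). The poll and result figures in it are rough illustrative approximations, not the dataset behind the findings above.

    # Per-party polling error and bias on the spread, using rough illustrative figures.
    polls_vs_results = {
        # election: (final_poll_con, final_poll_lab, actual_con, actual_lab), vote share %
        "1992": (38.0, 39.0, 42.8, 35.2),
        "2010": (35.0, 28.0, 37.0, 29.7),
    }

    for election, (poll_con, poll_lab, act_con, act_lab) in polls_vs_results.items():
        con_error = act_con - poll_con    # positive => Conservatives underestimated
        lab_error = act_lab - poll_lab    # positive => Labour underestimated
        # Bias on the spread: actual Conservative lead minus polled Conservative lead.
        # Positive => the polls flattered Labour (a pro-Labour bias on the spread).
        spread_bias = (act_con - act_lab) - (poll_con - poll_lab)
        direction = "pro-Labour" if spread_bias > 0 else "pro-Conservative"
        print(f"{election}: Con error {con_error:+.1f}, Lab error {lab_error:+.1f}, "
              f"spread bias {spread_bias:+.1f} points ({direction})")

Working with the spread nets the two per-party errors against each other, which is why a year like 2010, where both main parties were underestimated, looks different from the traditional pattern of one party being flattered at the other’s expense.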
