Polling divergence – phone versus online and established versus new

This article was first published on General Election 2015, the election blog of the London School of Economics and Political Science.
It has been suggested in recent weeks that one source of variation in the polls of vote intention for the general election is the difference between traditional phone polls and those conducted online. In this post, Matt Singh recasts the debate by focussing on the difference between newer and more established pollsters. He finds that polls from more established pollsters have tended to favour the Conservatives over recent months, while measuring somewhat lower shares for UKIP.
The debate around the relative merits of online and telephone polling has intensified in recent weeks, as the apparent closeness of next month’s election has brought the divergence into focus. With the two largest parties so close in the polls, a relatively small disparity between polling modes has created a situation where Labour leads among most online pollsters, while the Conservatives lead in most phone polls. This is contrary to the received wisdom that the Conservatives should do better in online surveys than phone polls, the logic being that the “interviewer effect” of the latter is more likely to produce a “shy” response.
This divergence wasn’t present at the last election – Pickup et al. found no statistically significant difference in accuracy between online and phone polls in 2010. The following chart shows that the pattern has been reasonably consistent since early 2011. All pollsters have been equally weighted (with a monthly average taken for those that poll multiple times a month). The source for the historic polling in this piece is Mark Pack’s opinion poll database.
[Chart: Conservative and Labour vote shares, phone versus online polls, since early 2011]
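As a rough illustration of that aggregation, here is a minimal pandas sketch; the file name and column names (`pollster`, `date`, `con`, `lab`, `ukip`) are assumptions for illustration, not the actual structure of Mark Pack's database:

```python
import pandas as pd

# Hypothetical input: one row per published poll.
polls = pd.read_csv("polls.csv", parse_dates=["date"])
polls["month"] = polls["date"].dt.to_period("M")

# Average within each pollster-month first, so prolific pollsters
# (e.g. daily trackers) don't dominate the monthly figure...
by_pollster = polls.groupby(["month", "pollster"])[["con", "lab", "ukip"]].mean()

# ...then average across pollsters, giving each one equal weight.
monthly = by_pollster.groupby(level="month").mean()
print(monthly.head())
```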
Turning to the reasons for divergence during the current parliament, one lay theory that can probably be discarded immediately is that the demographics of landlines create a bias towards the Conservatives. While it may be true that older people are both more likely than the wider electorate to have and to answer landlines, and more likely to vote Conservative, modern polls are always demographically weighted, so there is no reason to think that this would be a factor.
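For readers unfamiliar with how that weighting works, here is a minimal sketch of simple post-stratification by age band; the bands, population targets and toy sample are invented for illustration, and real pollsters weight on several variables at once:

```python
import pandas as pd

# Hypothetical raw sample: each respondent has an age band and a vote intention.
sample = pd.DataFrame({
    "age_band": ["65+", "65+", "65+", "18-34", "35-64"],
    "vote":     ["con", "con", "lab", "lab",   "con"],
})

# Illustrative population targets (assumed shares, not real census figures).
targets = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

# Weight = target share / sample share, so an over-represented group
# (here 65+, 60% of the sample versus 25% of the population) is downweighted.
sample_shares = sample["age_band"].value_counts(normalize=True)
sample["weight"] = sample["age_band"].map(lambda b: targets[b] / sample_shares[b])

# Weighted vote shares.
print(sample.groupby("vote")["weight"].sum() / sample["weight"].sum())
```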
Various alternative theories have been put forward (see, for example, this discussion on UK Polling Report). One is a variation on the interviewer effect – that some shy UKIP supporters are either telling phone pollsters they intend to vote Conservative, or are being reallocated back to the Conservatives via the "spiral of silence" adjustment (having voted Conservative in 2010), artificially inflating the Conservative vote share; a simplified sketch of that adjustment appears after the next paragraph. This is plausible, but doesn't seem to be supported by the data. The bias appears to have existed for about a year before the UKIP surge began (in spring 2012) and didn't change much for a year or so thereafter. In that two-year period, UKIP support rose from around 4 percent to around 13 percent without any great effect on the phone-online spread. However, the spread did react noticeably when the Conservatives began to regain ground relative to Labour in spring 2013.
It’s also notable, despite the limited data, that the by-election polls (all of which are conducted by phone) that have underestimated UKIP’s vote share have tended to be in more “Labour” areas (Barnsley, Wythenshawe & Sale East and Heywood & Middleton), whereas in contests with the Conservatives (Newark, Clacton and Rochester & Strood) the polls have relatively accurately measured the UKIP vote share, or even modestly overestimated it.
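Here is the promised sketch of the general idea behind the "spiral of silence" reallocation; the 50 percent rate and the toy data are assumptions, and the pollsters' actual procedures differ in detail:

```python
import pandas as pd

# Hypothetical respondents: current intention ("dk" = don't know / refused)
# and recalled 2010 vote.
df = pd.DataFrame({
    "intention": ["con", "lab", "dk", "dk", "ukip", "dk"],
    "vote_2010": ["con", "lab", "con", "lab", "con", "con"],
})

REALLOCATION_RATE = 0.5  # assumed; real adjustments vary by pollster

# Count stated intentions, then add back a fraction of the don't-knows
# according to their 2010 vote -- the step that could shift "shy"
# UKIP-leaners who voted Conservative last time back to the Conservatives.
stated = df[df["intention"] != "dk"]["intention"].value_counts()
reallocated = df[df["intention"] == "dk"]["vote_2010"].value_counts() * REALLOCATION_RATE
adjusted = stated.add(reallocated, fill_value=0)
print(adjusted / adjusted.sum())
```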
The other theory concerns the representativeness or otherwise of online panels. Rather than being random probability samples, online polls are statistical models based on non-random samples, which attempt to estimate population-level figures by adjusting for known biases. The problem arises where there are biases that are unknown (or difficult to quantify). If, for instance, members of an online panel happen to be more politically engaged than the average voter, or – more seriously – if party activists deliberately infiltrate a panel in an attempt to manipulate its results, then ceteris paribus a systematic bias arises. In a recent example, one online pollster, which shall remain unidentified, reported that 56 percent of its respondents intended to watch the recent BBC leaders' debate, compared with actual viewing figures equivalent to slightly less than 10 percent of the electorate.
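A toy simulation illustrates why weighting on known demographics cannot correct an unknown bias of this sort; every number here (engagement rate, party lean, recruitment probabilities) is an invented assumption:

```python
import random

random.seed(1)

# Simulated electorate: politically engaged voters are a minority and,
# by assumption, lean slightly differently from the disengaged.
def voter():
    engaged = random.random() < 0.2
    leans_lab = random.random() < (0.55 if engaged else 0.45)
    return engaged, leans_lab

population = [voter() for _ in range(100_000)]

# An online panel that (by assumption) over-recruits the engaged.
panel = [v for v in population if random.random() < (0.8 if v[0] else 0.2)]

lab_share = lambda voters: sum(leans for _, leans in voters) / len(voters)
print(f"population: {lab_share(population):.3f}, panel: {lab_share(panel):.3f}")
# The gap survives any demographic weighting, because engagement is not
# observed as a weighting variable.
```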
Pollsters can also be segmented by their longevity. The data does seem to show that the more established pollsters are displaying different patterns from the newer ones. In the next chart, any pollster/mode combination that has polled at least the last two general elections is classified as "established" [1] (which in practice means phone pollsters plus YouGov), and the others, including online polls by established pollsters, as "new" [2].
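Before turning to the chart, that classification can be expressed as a simple rule; the membership list in this sketch is an illustrative assumption, not a complete roster:

```python
# Pollster/mode combinations that have polled at least the last two
# general elections count as "established"; everything else is "new".
ESTABLISHED = {
    ("ICM", "phone"),
    ("Ipsos MORI", "phone"),
    ("ComRes", "phone"),
    ("YouGov", "online"),
}

def classify(pollster: str, mode: str) -> str:
    return "established" if (pollster, mode) in ESTABLISHED else "new"

print(classify("YouGov", "online"))  # established
print(classify("ComRes", "online"))  # new: an established pollster's online
                                     # polls still count as a new pollster/mode
```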
[Chart: Conservative and Labour vote shares, established versus new pollsters]
Splitting YouGov out from the newer online polls shows that the phone-online difference is, in part, really a new-versus-established pollster difference. The one established online pollster is in some cases more in line with phone polls than with other online polls, though in the chart of the Labour-Conservative spread it seems to behave like a mixture of the two. But in a scenario where relatively few voters are switching directly between the two largest parties, the phone-online difference in the Labour-Conservative spread is essentially driven by the net of the spreads for each of the smaller parties.
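A worked example with invented shares shows how that netting-out operates, assuming no direct Conservative-Labour switching between modes:

```python
# Hypothetical headline shares (percentages; invented for illustration).
phone  = {"con": 34, "lab": 33, "ukip": 12, "other": 21}
online = {"con": 32, "lab": 32, "ukip": 15, "other": 21}

spread = lambda p: p["con"] - p["lab"]
print(spread(phone) - spread(online))  # 1 point: the phone-online gap in Con-Lab

# With no direct Con-Lab switching, that gap is just the net of how the
# smaller parties' mode differences fall on the big two: online's extra
# 3 UKIP points came (by assumption) 2 from Con and 1 from Lab.
con_gap = phone["con"] - online["con"]  # 2
lab_gap = phone["lab"] - online["lab"]  # 1
print(con_gap - lab_gap)                # the same 1 point
```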

(Continued…)