How to spot a voodoo poll

Recently there's been a spate of voodoo polls being reported as measures of public opinion. This is not (just) coming from ultra-partisan blogs but from traditional media. This is worrying, because unscientific open-access polls are at best meaningless, and arguably pollute the public debate with fake data.
Arguments used to defend these include their size (for an explanation of why it's better to have a representative sample of hundreds than a voodoo sample of millions, see this from Anthony Wells) or, sometimes, the recent mishaps of scientific polls. This “equivalence” is, of course, nonsense.
Scientific polls have a margin of error and can be subject to additional error beyond that. But voodoo polls don’t even attempt to represent the profile of the population, or adjust to match it. They often appear on websites or social media with very unrepresentative traffic. The same person can usually answer as many times as they like, or forward it to their friends. If one of these “polls” gets anything right, it’s down to luck.
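To see why a representative 1,000-person sample beats a self-selected 200,000, it helps to look at the arithmetic. For a genuinely random sample, the 95 per cent margin of error is roughly 1.96 × √(p(1−p)/n), about ±3 points at n = 1,000. Here's a minimal sketch of that calculation (the function name and the standard worst-case assumption p = 0.5 are my own illustration, not anything from a polling house):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    p=0.5 is the worst case; z=1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 2_000, 200_000):
    print(f"n = {n:>7,}: +/- {margin_of_error(n):.1%}")

# n =   1,000: +/- 3.1%
# n =   2,000: +/- 2.2%
# n = 200,000: +/- 0.2%
```

The catch is that this formula only holds if the sample is random (or weighted to be representative). A self-selected sample of 200,000 has a tiny nominal margin of error, but its selection bias is unknown and doesn't shrink as n grows, which is the whole problem with voodoo polls.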
Treating this junk as public opinion isn’t just bad statistics, it’s bad journalism, and I’m glad that it’s being criticised as strongly as it has been. But given that some generally respectable media are doing just that, how do you spot a voodoo poll?
There are a few tell-tale signs. Not all of these are definitive – some have legitimate explanations – but they are all things to watch out for:

  • No mention of a polling company or organisation that knows what it’s doing (unless, of course, the writeup is being done by the polling company itself!)
  • Phrases like “of our readers” or “website visitors”. Such polls may or may not be a good measure of that audience's opinion, but they are certainly not representative of the country.
  • Ridiculously large sample size. Scientific polls in the UK rarely have samples of much more than 2,000, and anything over 10,000 is very rare. SurveyMonkey polls sometimes run larger, and the BES online panel is usually about 30,000 a wave to enable proper analysis of subsamples. But no-one commissions legitimate polls of 200,000 people.
  • Excessive focus on sample size in the writeup, or the use of terms like “megapoll”. This sometimes arises in writeups of scientific polls (in which case it’s merely a bad writeup rather than a bad poll). But it’s often used to imply that “it’s big so it must mean something”.
  • Absurdly one-sided results. No, 99 per cent of Brits do not agree with the paper’s editorial line.
  • Being in a local paper. A bit harsh, perhaps, but local papers are very unlikely to have the resources to commission a legitimate poll (though it is theoretically possible).

At the other end of the spectrum, NatCen has some interesting data on what Britain thinks about the customs union. The short version is that there are quite a few contradictions.
Also, Roger Scully has shared some thoughts on the Welsh Labour leadership contest. And Peter Yeung has some interesting historical dataviz, including some very old school election maps.

Get it before it’s on the website – sign up for this briefing.