There were no local by-elections this week and no polls yesterday (besides the ScotCen survey already mentioned in Thursday’s briefing). But one thing yesterday brought home was the poor understanding that many (including people who should know better) have of the relationship between sample size and the accuracy we can expect from polls.
In particular, the ScotCen survey had a sample size of 859, compared with typical sizes of 1,000-2,000. Apparently some people think that anything less than 1,000 isn’t reliable and should be dismissed.
Let’s first remind ourselves why sample size matters. The bigger the sample, the smaller the margin of error (MoE). The figures below assume the usual 95 per cent confidence level and a reported share near 50 per cent, the worst case. For a sample of 500 the MoE is ±4.4 points, for 1,000 it’s ±3.1, for 2,000 ±2.2, for 100,000 ±0.3. If the sample size is 859, as with the ScotCen poll, the MoE is ±3.3, meaning that 95 per cent of the time the difference between the poll result and the truth due to chance alone will be less than 3.3 percentage points.
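Those figures all come from the standard formula MoE = z·√(p(1−p)/n), with z = 1.96 for 95 per cent confidence and p = 0.5 as the worst case. A minimal sketch (the function name and defaults are mine, not from any polling library):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """95% margin of error in percentage points for a simple random
    sample of size n, at the worst-case reported share p = 0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (500, 859, 1000, 2000, 100_000):
    print(f"n = {n:>7,}: ±{margin_of_error(n):.1f} points")
```

Note the square root: to halve the margin of error you must quadruple the sample, which is why the gap between 859 and 1,000 respondents is worth only about a fifth of a point.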
But sample size doesn’t address the biggest problem conventional polls have – making sure the sample is representative, so as to avoid systematic errors. Probability samples largely avoid non-random error by design; conventional pollsters try to correct for it, but as recent experience (and history generally) has demonstrated, they aren’t always 100 per cent successful.
That means that overall polling accuracy isn’t just about sample size – it’s also about sample quality. If a sample is unrepresentative, making it bigger won’t help, and if it is representative, making it smaller (within reason) won’t hurt. Therefore treating a high-quality sample as less reliable than a conventional sample that’s fractionally bigger is very, very silly.
Bottom line, I would rather have a probability sample of 859 than a biased sample of 1,000. Actually, I’d rather have a probability sample of 859 than a biased sample of millions.