This year’s election was meant to be one of the most unpredictable nights in British politics. Instead – by Friday lunchtime – the Conservatives had a 12-seat majority, and Ed Miliband and Nick Clegg had both resigned as party leaders.
Central to the conversation and subsequent fallout has been a focus on the credibility of the opinion polls leading up to the election, all of which confidently predicted a hung parliament, with Labour and the Tories neck and neck. So confident was Paddy Ashdown in the opinion poll data that he said he would eat his hat should the exit poll turn out to be correct.
Paddy was not available for comment at the time, so we asked Edward Cyster, Managing Director of Atomik Research, what happened with the opinion polls and what they can do in the future to repair their reputation.
Firstly, the difference between a poll and an exit poll is extremely important. As you are probably already aware, an exit poll is taken straight after respondents have cast their votes, while opinion polls can be taken well before the ballot has been cast, which can lead to massive discrepancies. An exit poll is commissioned on polling day itself using face-to-face methodology: the interviewer asks voters as they leave the polling station – hence the ‘exit’ prefix – “Who did you vote for?” The answer is no longer a presumption but a fact. That’s one big difference, as opinion polls ask about an intention: “Who would you vote for?”
Secondly, opinion polls are usually conducted online or over the phone with smaller, less representative samples, whereas exit polls are conducted with a 20,000+ sample across key representative seats. And thirdly, timing… you’ll notice that as polling day approaches, opinions start to change as the pressure of making a decision starts to weigh on the voter.
There are two main issues with comparing polls and exit polls. Firstly and most obviously, people can easily change their minds in the run-up to polling day (if they couldn’t, there wouldn’t be much point in campaigning). But there’s also the question of whether the candidate you say you’ll vote for in public is actually the one you’ll put an ‘X’ next to when nobody’s watching. The simple fact is that a smaller sample size produces less reliable results, while a repeated survey of a similar group of people can’t possibly give you the full picture of a nation’s attitudes or changing opinions.
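As a rough illustration of the sample-size point, the standard 95% margin of error for a polled proportion shrinks with the square root of the sample size. The sketch below assumes a simple random sample, which real polls only approximate, so treat the figures as an upper-bound intuition rather than a claim about any specific pollster’s method:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical pre-election online/phone poll of ~1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))   # percentage points

# An exit-poll-scale sample of 20,000:
print(round(margin_of_error(20000) * 100, 1))  # percentage points
```

On these assumptions, a 1,000-person poll carries roughly a ±3.1-point margin, while a 20,000-person exit poll narrows that to about ±0.7 – before even counting the advantage of asking about an actual vote rather than an intention.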
Good question, and I guess one that the BPC and the MRS are now trying to answer. Personally, I believe a combination of factors should be considered: samples need to be changed to include a more politically and economically diverse population. Timings need to be consistent. Ask the same set of questions every month for at least six months using both river sample and panel sample, getting a more holistic angle from established panellists but also from one-timers. Of course, this is a much more complex debate. Times are changing; people are able to get information from more channels than ever before, right up until the second they step into the voting booth. The way the British vote is changing, and I think that is why it will become more and more challenging to ask a question today and get the same results tomorrow.