Another triumph of expert predictions

One theme of this blog has been the failure of the predictions made by expert climate scientists, together with the failure to acknowledge or investigate this failure.

Last night we had another very interesting example of expert predictions failing. With all the results now in, we know that the Conservatives have 331 seats, and Labour 232.

How does this compare with the various predictions made just before the vote?

                         Con   Lab   Con − Lab
Final Result             331   232      99
YouGov (Peter Kellner)   284   263      21
Bookies (oddschecker)    287   267      20
Nate Silver (538)        278   267      11
Guardian                 273   273       0
British Election Study   274   278      −4
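The ordering in the table can be reproduced with a short script – a minimal sketch using only the numbers above, ranking each forecast by the absolute error in its predicted Con−Lab seat gap:

```python
# Rank the forecasts by how far their predicted Con-Lab seat gap
# fell from the actual gap of 331 - 232 = 99 seats.
ACTUAL_GAP = 331 - 232

forecasts = {
    "YouGov (Peter Kellner)": (284, 263),
    "Bookies (oddschecker)":  (287, 267),
    "Nate Silver (538)":      (278, 267),
    "Guardian":               (273, 273),
    "British Election Study": (274, 278),
}

ranked = sorted(forecasts.items(),
                key=lambda kv: abs((kv[1][0] - kv[1][1]) - ACTUAL_GAP))

for name, (con, lab) in ranked:
    print(f"{name}: predicted gap {con - lab:+d}, "
          f"error {abs(con - lab - ACTUAL_GAP)} seats")
```

Running this lists YouGov first and the British Election Study last, matching the order in the table.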

I’ve listed here some of the predictions made yesterday, in decreasing order of accuracy (Con–Lab difference). The “Bookies” row comes from Oddschecker, which lists odds provided by 20 or so bookies in a neat table (currently showing, for example, the options for next Labour Leader). You’ll have to take my word for it that I copied down their most likely outcome correctly. Nate Silver’s prediction is still online; he is sometimes regarded as a guru of great wisdom, despite having got the 2010 UK election spectacularly wrong (he predicted about 100 Lib Dem seats). The final projection from the Guardian was a dead heat between Labour and the Conservatives. The British Election Study is a group of, um, expert UK academics. Their final forecast is here.

The first thing to note of course is that everyone got it badly wrong, greatly underestimating the Conservative support. Reasons for this include
(a) the “Closet Conservative” factor – there is a tendency for people not to own up to supporting the Conservative party, and
(b) incorrect sampling by the pollsters – perhaps quiet conservatives stay at home, don’t answer the phone much and aren’t as eager as some others to express their opinions.
However, I thought that the pollsters were well aware of these factors, particularly since the 1992 election when something very similar happened, and compensated for it.

But what I found most interesting is that of all the predictions, the worst was that given by the team of expert university academics. Roger Pielke wrote a post about their predictions back in March, when their average prediction was similar to that in the table above, suggesting a small lead for Labour. There was a consensus – in fact not a 97% consensus, but a 100% consensus – among the experts that the Conservatives would get less than 300 seats. But the consensus was wrong.
Why does a team of experts perform worse than the bookies, who presumably base their odds mainly on the money placed, i.e. on public opinion? One possible explanation for this apparent contradiction is suggested by the work of Jose Duarte and others on the effects of the well-known left-wing bias in academia: it may be that the researchers are inadvertently building their own political bias into the assumptions they make in their model, and that this influences their results.

Other possible explanations for the surprise election results and the apparent failure of the expert predictions are as follows:

  • This is just a short-term fluctuation – a hiatus, or pause, in the Labour vote – that the models cannot be expected to predict correctly. The experts have much more confidence in their projection for the 2100 election. (HT David)
  • The raw data from the election results is not reliable, and needs to be adjusted by the experts. After suitable UHI and homogeneity adjustments have been applied, the results are in line with the expert predictions, and Ed Miliband is declared the new Prime Minister.
  • More funding and bigger computers are urgently needed, so that we can get more accurate predictions.
  • The missing Labour voters are hiding at the bottom of the oceans.

Finally, Feynman’s rule applies again:

Science is the belief in the ignorance of experts.

Updates and links:

Roger Pielke has published his evaluation of the predictions: “… mass carnage for the forecasters”. He notes a really interesting point, that asking people who they think will win in the constituency is more effective than asking them who they will vote for.
He also has an article in the Guardian.

The BBC has a post-mortem How did pollsters get it so wrong? which asks many questions but offers few answers beyond mentioning the “shy conservative” effect.

One Survation poll was very accurate – but was not published because it was so out of line with all the others!

Both the Tories and Labour had their own internal polls in the final week suggesting that the seat split would be about 300 – 250 (The Times, 9 May). But they kept this to themselves, either doubting it or in Labour’s case so as not to discourage the faithful.

Paddy Ashdown argues that the inaccurate opinion polls were a factor in the Lib Dem collapse – if the polls had shown the true Tory lead, the SNP fear factor would have been diminished and the value of the Lib Dems as a moderating influence would have been enhanced.

Tory MEP Daniel Hannan says the answer to why the polls got it wrong is given in this quote from Edmund Burke, a more poetic version of my answer (b).

Frank Furedi in Spiked goes for answer (a): “Is it not worrying that in a free society ordinary citizens feel uncomfortable with publicly expressing their true opinions?”

Josh has produced a cartoon

Josh also links to a Dan Hodges piece from April 30th predicting a Tory lead of 6-7 points – spot on (Andrew Lawrence got it right too – see also Ian Woolley’s comment below).


Newsnight on 11 May looked into why the polls did badly. Survation thought there was simply a late swing. Labour’s internal poll had shown they were behind for months – more details here and here. The “shy conservative” and “poor sampling” factors were also mentioned.

Lord Ashcroft says he did not make a prediction, but then contradicts himself by saying he got it right regarding Scotland and UKIP. Acknowledging the underestimate of the Con vote, he suggests late swing, Tory micro-targeting of key seats, and Shy Tory as factors. (In my marginal constituency there was no effective Tory micro-targeting.)

In The Conversation there’s a jaw-dropping apologia for the failure of the pollsters by two academics who seem to be in denial. They come up with a confidence intervals excuse that doesn’t survive the simplest scrutiny – see my comment there. There’s a climate analogy here again – the group defends itself and refuses to acknowledge its errors.

538 are much more honest, admitting straight away that they got it wrong. They say they adjusted for the “stick with what we know” factor, but nowhere near enough. A second article says it’s all down to getting the vote share wrong, but doesn’t say why they got that wrong.

Matt Singh has a post-mortem saying that factors may be electoral flux (meaning things were very different this time because of UKIP and the SNP), shy voters, and overestimated turnout. He also wrote a very detailed blog post on the shy Tory effect the day before the election, ending with a spot-on prediction of a Con lead of 6 points (HT botzarelli in comments).

In the Mail, an Ipsos Mori pollster claims that the problem with the polls was mainly that the Labour supporters just didn’t bother to vote. I don’t find that explanation at all convincing.

David Spiegelhalter says he got it wrong and acknowledges Matt Singh’s success. He praises the exit poll, discusses some suggestions for improvement but sits on the fence regarding what actually went wrong.

The Guardian says that more accurate results are obtained if you ask people other questions about their values first, rather than just leaping in with “who are you going to vote for”. This sounds odd to me – like steering. It also repeats the claim that the Tory internal polls had told them they’d win comfortably.

55 thoughts on “Another triumph of expert predictions”

  1. There is a good point to be made here about the faith we put in objective-sounding methods, especially our own, but the actual charge of academic wishful thinking made here is unfair – the exit poll (which was uncannily accurate) is also designed and analysed by academics. The key difference is that, with good sampling (a big qualifier, of course), an exit poll should, for obvious reasons, be more powerful than any opinion poll. All academic psephologists would, I guess, recognise the advantages of the exit poll’s methodology and resources over any opinion poll.

  2. Kieron, thanks for your comment, yes, the exit poll was remarkably accurate. The exit poll of course immediately solves problem (b) as long as you have a fairly good sample of polling stations. I think Curtice said on the BBC last night that they do it by giving people slips of paper to put into a box, in which case problem (a) would be solved as well. This means that much less in the way of adjustment, modelling and parameter choosing is needed for the exit poll, so I think there’s less scope for subconscious bias in the researchers’ methodology.

  3. Some of us called it right 🙂

    The exit poll tends to be better because the mimic poll slip gives anonymity and catches more Tory voters, who are less likely to answer polls over the phone, in the street and online. The slight under-representation was probably due to the missing postal vote contributions, which favour the Conservatives.

    They also underestimated that this was a triple Shy Conservative situation. 1) People hide being members of the ‘nasty party’ * 2) They hide even more when it’s a second term because they could be forgiven for voting Tories in as a change but to make the same mistake twice in a row? 3) The term the Conservatives served through was a grim one. Many blame them for the hardships, regardless of how bad it could have been, so voting for them was wrong for so many reasons… except that’s what people wanted to vote for.

    On the other side of the scale, we have a very socialist leader who was threatening to spend his way out of the dregs of the recession. Unlike Tony Blair’s early years, where a Labour government became Labour Lite. (That’s where they still spend like mad but they do it by selling the family silver instead of taxes). Not entirely happy with that prospect, in comes the SNP who brag they’re going to play Miliband like a cross between a string puppet and a cash machine. Undecided voters thought ‘I can’t vote in the Tories, I can’t, I can’t… I can’t give Scotland the keys to the piggy bank. X’ As the last decision was probably made on the way to the voting station if not in the booth, it would be hard to predict.

    So, much like climate models, the polls are missing vital data and rules that influence the output (the ones above are only the tip of the iceberg). The output might be correct, but you have no real way of knowing whether it will be, so it is worse than a guess: you might be fooled into giving it more weight than it deserves because it imitates science.

    * The TV media are very anti-Conservative. It pervades their whole output, from casual comments and jokes to serious discussions. It’s a wonder there are any right-wing voters at all, let alone a voting majority.

  4. Anecdotally, I wouldn’t agree that political science is a particularly left-wing branch of academia. Certainly within our local School of Politics, I could identify individuals from both sides, politically speaking. So I’m not buying the line re baking in their own biases.

    The ‘missing’ Survation poll that was much closer to the final result suggests that a very late swing (or, probably, “don’t knows” breaking to Con) forms at least part of the explanation.

    The exit poll was a triumph, but also reminds us that opinion polls can *never* give us unimpeachable info about the future. Having said that, they are normally much closer than they proved this year, which suggests to me that ‘late swing’ formed part of the story.

  5. I think if there was a late swing, it was as Tiny and Warren say, the undecideds reluctantly sticking with the status quo rather than risking chaos and SNP blackmail. That may be part of the answer, the other part being just that the polls were wrong and have been for months.

  6. I do see parallels with the climate “experts”, notably they are not actually experts at all except within their own minds and those of their followers. I use my usual litmus test and look for a track record of being right in similar circumstances; if absent, no respect for their views at all. I’ve read Nate Silver’s book, I liked it and I agree in principle with the gist of his methods…but as you say after a good run with the US elections (luck?) he failed miserably with UK elections last time suggesting different factors at play. Same again this time, epic fail, sorry. And yet they still make a living doing this and people listen to them.

    I found the differences in the betting markets against the polls interesting. Despite the level polls, in recent weeks the Tories hovered around 1/4 to win most seats versus 4/1 Labour. Even an outright Tory majority was only on offer around 10/1 to 16/1 on Betfair (so hardly the greatest “shock” as portrayed); from your link Silver had this as a 500/1 shot!

    Once you cultivate a healthy disdain for the “experts” and are also aware of the media bias, it gives a great opportunity for an informed independent mind to take them on and make some cash. For example, Miliband was clearly a weird geek whose own side rated him as a rubbish leader; you don’t need polls to tell you that when push comes to shove floating voters are not going to vote for him as PM.

    I just wish there was similar opportunity to take on the even more hopeless Climate soothsayers. Think how much you could have made over the last 15 years. Remember Lynas with his 6 degrees, Myles Allen with up to 10 degrees was it and even Richard Betts, who now tries to be relatively reasonable, has some ludicrous claims to his name not that long ago. Luckily the internet doesn’t forget so we can see them for the nincompoops they are.

    My wish is that the media stops referring to these failed forecasters as experts.
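For readers unused to fractional odds: odds of a/b imply a probability of b/(a + b), before the bookmaker’s margin. A minimal sketch using the figures quoted in the comment above:

```python
# Convert fractional odds (as quoted in the comment) into the implied
# probability b / (a + b), ignoring the bookmaker's built-in margin.
def implied_probability(odds: str) -> float:
    a, b = (int(x) for x in odds.split("/"))
    return b / (a + b)

for label, odds in [("Tories most seats", "1/4"),
                    ("Labour most seats", "4/1"),
                    ("Outright Tory majority", "10/1"),
                    ("538's figure, as quoted", "500/1")]:
    print(f"{label}: {odds} implies {implied_probability(odds):.1%}")
```

So the markets were pricing a Tory plurality at roughly 80%, and even the “shock” outright majority at around 9% – against the 0.2% implied by the quoted 500/1 figure.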

  7. Bad sampling and late swings are/sound like excuses to let the pollsters off the hook. The mere fact that they could all get together and be so damningly wrong is a facet of their own reality.

  8. They are excuses but then nobody should pretend that anything with a huge number of unknown inputs is ever much more than a guess. The polls themselves often provide an input to the system. Probably a lot of people voted because they feared another hung parliament and party patch up, like the polls were indicating. The Conservative victory might have been even greater if the weather had been bad, since fine weather increases the Labour vote.

    There are still a lot of questions that nobody has answered – did the Lib Dems change their vote, or did they just not vote at all, either because they resented the party or just assumed it would lose anyway? Were the changes at the polls the result of more Labour and Conservative voters and not swing voters? The papers gave very good information about how close the vote had been in some wards last time, spurring some to make an effort, others to stay home and others to vote tactically.

    It’s quite funny to watch the confused reaction at the BBC, trying to pretend the left wing victory in Scotland will still bring the Conservatives to their knees.

  9. Tiny, yes, I have been puzzled by the BBC’s apparent obsession with Scotland. It seemed to be the main issue of last night’s Question Time, and it was much the same on Today this morning. The far more important issue of what this new government is going to do hardly gets a mention. Maybe your suggestion that Scotland is the only place where the Beeb political view holds is correct!

  10. I thought this was amusing (definitely from Andrew Lawrence’s perspective, anyway):

    Did Lawrence have much confidence in the bet, or was it more that he was willing to lose £200 on the very slight off-chance that he might get to rub it in the faces of the liberal, metropolitan comedy elite (who rottweilered him 6 months back for making positive noises about UKIP)? Anyway, shows he’s a bit cleverer than Russell Brand.

  11. Shub, I have no desire to let them off the hook. Just that my hypothesis/guess is that there must have been more than one factor at play to cause such a discrepancy. Not all the pollsters were wrong: the exit poll was incredibly accurate, which I don’t think can be solely put down to the increased sample size. There’s a brief vid on the methodology at

    Intuitively, you will get more accurate info from “who did you just vote for?” than “who will you vote for?”, an important health warning that should be put on any polls asking the latter. BUT this doesn’t explain why the polls in the run-up to the election provided the worst predictions since 1992.

  12. Interesting news in The Times today. Apparently both Labour and Conservative had their own private poll results in the last few days that indicated the way things were going – suggesting Con ~ 300, Lab ~ 250 seats. But both kept this to themselves.

  13. I am well removed from the intricacies of British politics. But it appears that Labour lost seats and the Conservatives gained seats, both, whereas even the most successful of predictors foresaw only one of the two.

    What does this mean for the climate position of the UK in Paris?

  14. The Telegraph ran some polls that asked quite simple questions, like which of the two parties you would trust to run the economy.
    The results showed off-the-scale support for the Conservatives, 10% against 90%; those polls somehow showed the underlying support for the Tories.
    I am in NZ; we also had a stunning election last year, and the mini polls showed the same bias.


  15. “Following the crowd, let alone fearing the possible criticism of politicians, is not a sensible strategy. Instead we leave aside all expectations, follow the data, and trust in the method we have developed and which has proven itself at recent elections.”

    Seems like the guy who ran the successful exit poll has some good advice that climate scientists should follow.

  16. I’d love to have a conversation with Tamsin on how to handle uncertainties in opinion polls.

  17. @Warren Pearce @TinyCO2 are wrong – it is comparing apples and oranges to set an exit poll, taken when people have actually voted, against a poll conducted the day before with people who may or may not vote.

  18. Simon W comments: “Miliband was clearly a weird geek whose own side rated as a rubbish leader; you don’t need polls to tell you that when push comes to shove floating voters are not going to vote for him as PM.” I think the Tories were regarded as the safest and surest anti-Miliband vote.

  19. Hang on guys, put on your “full colour heads”. Up until now I have only seen over-simplistic black and white thinking:
    “The first thing to note of course is that everyone got it badly wrong, greatly underestimating the Conservative support. Reasons for this include…” “‘Closet Conservative’ factor”… “incorrect sampling”
    …step right back. You’ve made a big ASSUMPTION that pollsters were trying to be HONEST and accurate, but publishers (just like climate data publishers) have a wide variety of motivations, including DECEPTION.
    As ever, there is a difference between what we ARE SURE ABOUT and what we don’t know and are GUESSING/ASSUMING.
    We know the forecasts they published, we know the actual result.
    Yet there are dozens of things we don’t know : we aren’t SURE about Survation’s motives for not publishing the accurate survey, whether Conservatives and Labour actually had very accurate polls etc..
    Motivation fallacy is when you dismiss ALL of someone’s data out of hand cos they have a motivation, eg “they are BigOil”; but when we are not sure, it is not wrong to consider scenarios of motivation as to why accurate surveys were not published.
    1. Conservative motivation : Cameron wouldn’t want a survey published that says he’ll have a clear win, cos the risk is then some voters would stay at home. It’s preferable to have the illusion that your vote can make all the difference between “dangerous socialist loonies” winning or losing.
    2. Labour motivation: Likewise, their narrative is that they are building up momentum, so you don’t want the public to realise that momentum is stuck and that they might be voting for a losing side.
    – Survation’s motivation – You have to consider the scenario that their explanation is not true. A quick phonecall to Cameron might have led to it being unpublished.
    – Pollsters motives : Polls are not just about collecting data, but also about spinning opinion. 1. I can write a survey to get the results I want.
    2. I can write a survey to train voters along a particular path by making questions like “you are not going to vote for that nasty Tory party, are you ?” So that when they come to vote for real, they vote for my party.
    – Publishers’ motives: linked to the above, there is a PUBLICATION BIAS – since the media pay for most polls, it’s them who often hype up or suppress the results to drive their desired narrative.
    – Groupthink BIAS: Survation said they didn’t publish their poll cos it was out of line with other pollsters. Hang on: you are training your model against other models! Shouldn’t you be testing them against REALITY?
    There’s probably CONFIRMATION BIAS involved in that case, ie pollsters saying their tuning is good enough cos they remember the times it is confirmed by other surveys and ignore all the times it fails against reality.

    – One final thing: I wonder if the Daily Mail’s tactical voting suggestions affected results – it appeared to tell UKIP voters to always vote Tory… I also wonder if the Mail’s suggestions turned out to be propaganda for the Conservatives rather than truth.

  20. Oh one more thing ..”It was the Sun that swung it ?”
    or “the Sun that was Swung by it ? (secret polling )
    ..a few days before the election the Sun declared its support bizarrely
    ..In England vote Tory, In Scotland vote SNP
    perhaps the heavily financed and resourced Sun already had access to accurate polling data
    and just simply wanted to be on the winning side.

  21. another bias I forgot to add, that is very relevant to the climate debate v media reporting is
    THE INTERESTING NARRATIVE media BIAS – That the media will ignore the boring REAL world in favour of reporting an exciting fictitious world.
    News reporting is no education it is ENTERTAINMENT. Fake Narratives get reported over true real world.
    So the BBC actually has teams of people sitting around in offices ignoring Climate News,
    20 years ago they weren’t bothering to report on data to make headlines “In 20 Years Time the Planet and Climate will be Pretty Much The same as Now”, even today they sit around doing nothing, until that 1 report comes in, that CONFIRMATION BIAS style confirms their existing narrative that doom is coming soon. They will leap up and do that one news story to death ..oh the Cyclone is going to be “worse than ever etc”.

    ..Well the same would go for Elections ..they created some narratives “UKIP is evil and must be kept out” ..that was the lefty reporters pet narrative rather than “Labour must win”
    – Then when it seemed it could be possible that we would get rid of the evil Tories with a Labour SNP coalition that became their second narrative and they reported every story that confirms it and ignore the rest. Hence pushing the reporting of opinion polls in one direction.

    You’ll notice the BBC is so wedded to that narrative that even tho it didn’t happen it continues with the same reporting plans they formed in their minds beforehand. Hence they report breathlessly on Sturgeon as if she really does hold the reins of power.

    You can imagine that before the election there was a polling expert saying “we could adjust the figs to account for the closet Tory effect” and the media bods replying “no ..neck and neck is an exciting story”

  22. Nope stewgreen, if you watched the gobsmacked look on BBC and political faces (all sides) you’d know it came as a huge shock. However, I think the Tories did benefit from the neck and neck narrative. The polling agencies are now in hot water, so I doubt they planned it either.

    We haven’t had a hard-left UK government since before Maggie (I wasn’t voting before she started, so I’ve no idea how far back it goes). How many times do the wider public have to vote to show Labour that they’re not interested any more? Scotland, on the other hand, has drifted further left. How curious.

    Or it could be a nationalistic thing. A lot of people above and below the border are trying to make this into a racist issue. The English are supposed to hate the Scots so much they’d vote tactically to keep them downtrodden. Sigh. Perhaps they think in terms of race and not politics?

    But the English public have grown up somewhat in recent years. They know that the government isn’t a magical money pot, and when a government spends loads of money, the working classes (from top to bottom) end up paying the bill. The relative equanimity with which they’ve accepted austerity was a sign. They have lessons from Europe to learn from. It was odd how little the Conservative campaigners made of that.

  23. Don’t finalize your results until the groundswell you mention registers at the polls.

    Some of the votes hiding in the deep ocean must be riding the Antarctic drift, whose dense and frigid waters take 700 years to turn the seas upside down, and barely one century has passed since Scott’s companion, Captain Oates, gave us fair warning that it may be some time before we hear in full from Antarctic Tories.

  24. Stew, interesting thought – I always like it when someone spots and challenges an undeclared assumption! At first I thought that sounded like a bit of a conspiracy theory, but now I can see your point. Of course it’s well known that pollsters can steer a poll to get the result they want. There may be something in your ‘entertaining narrative’ ‘exciting story’ point. Reiner Grundmann says much the same thing at Pielke’s blog, “The storyline of a neck-to-neck race seems to have been the most appealing to all parties concerned, and to the pollsters.” Tiny’s reply is a good one though – everyone was genuinely gobsmacked when that exit poll came out.

  25. It’s kind of difficult, for me, to compare apples with grapes. In this case, it’s strange to compare politics and elections with something real, like climate change. I know that climate change is real, that it is due mainly to the oceans and to man’s influence over the oceans. As for voters and politicians…. they come and go, we’re having a variable here.

  26. Is it just me or is all this polling analysis a bit…well naff. Asking people who they’ll vote for then messing about interpreting sampling methods, “shy” effects and so on. What a rubbish method.

    If I was paid to forecast who was going to win the election I’d be focussing on the Why (do people vote the way they do) not the How (did you vote). Work out what the first order effects are, ignore the noise and take advantage of rubbish polls that put people off the scent.

    I don’t claim any great insight but my starting point for research would be to ignore all those who would vote for a donkey with a rosette (of any colour). Contrary to what the political bubble think, those remaining possibly take 5 minutes to make up their mind (listen to office talk to confirm this) and I reckon the two biggest factors are:

    1) Charisma and leadership qualities of party leaders.
    Just think who wins (Clinton, Obama, Blair, Thatcher) and who loses (Hague, IDS, Brown, forgotten the US ones). Cameron scores moderately highly on this but Miliband totally bombs (lower than even Brown at one point).
    2) It’s the Economy stupid (perception thereof).
    Have they got a job and is it more secure? How much disposable income have they got?
    Again very good employment figures and a general feeling we are turning a corner (you can argue whether this is true of course) was good for Tories. Also a general perception that it was Miliband and the toxic Balls that messed it up in the first place.

    That’s it, ignore the rest. Tories clear winners. For this election overlay the one-off SNP effect but that also pointed to voting Tory to keep out the Lab/SNP coalition. Enough people must have had a similar idea as Tory majority was only around 10/1 whereas the pollsters had virtually written it off as unthinkable.

    I think similarly about forecasting the Climate. What are the first-order effects? I remember when I first looked into this being flabbergasted that the climate “scientists” had such little understanding of the Oceans (massive heat sinks) and Clouds (water, the biggest greenhouse contributor, which you can easily feel on a sunny day when a cloud goes over). Yet they were trying to convince us they can predict the future pretty much via the concentration of CO2 alone. Preposterous, as it has since proved.

    Predicting future climate by focussing on CO2 is like predicting future election results by focussing on voters’ views on foxhunting and gay marriage. Sure you might be able to tease out a bit of correlation here or there but it will be totally swamped by other factors (as yet not understood).

    (I see others have mentioned similar effects above)

  27. Pollsters, at least the ones you would pay money to, take extreme care to not bias their results with leading questions.

    I worked for one for years, and the question was always “If an election were held today, what party would you vote for?”. Even if biased, which it isn’t, the fact that the question never changed reduced the bias to a consistent factor from poll to poll.

    My belief is that it is the voters who bias opinion polls, in favour of encouraging people to vote. That means some deliberately say they will vote for someone they won’t, in order to make the race seem closer – enhancing the concept of a contest, which is key to democracy. That is why polls always seem to narrow in the lead-up to an election.

    There is also not much of a “swing” vote. I vote for one party, and I “swing” by not voting, not by voting for anyone else. Polls would be more accurate if they tried to determine which “supporters” are not going to turn up rather than which are going to change votes.

  28. Bottomline : Putting narrative before evidence : The Narrative cart driving the evidence horse : is the common thread between UK election and climate stories.
    – It’s probably how Newsfotainment biz works, identify a narrative (a good story) and collect the evidence that supports it.
    For the UK election, the narrative “Labour could win, and nasty Tories be out” was met by grasping at and publicising evidence confirming this, and ignoring contradictory evidence.
    For Climate news the narrative “CO2 is sure to bring Climate catastrophe” is met in a similar manner.

    BTW – One standard for climate stories is: with a narrative of “CO2 drives temperature drives catastrophe”, evidence of rising CO2 leads to true believers desperately seeking confirmation that temperature is going up and that catastrophes are occurring.

    Faith in the narrative leads to errors.
    1. The published polls got the Conservative percentage way too low *
    You can imagine that when the polling experts turned up in the BBC office with the graph showing Labour had a slightly higher percentage of public support than the Conservatives, the BBC journalists grasped the narrative “Looks like a Labour win; add onto that a probable SNP landslide, so surely the nasty Tories will be out”,
    but any experienced person would have said :
    1. “in the UK it’s seats that drive UK election wins, national percentages are only a rough guide” **
    2. “So let’s see a SEAT graph taking account of such regional variations”*
    3. “At first instinct only a close Labour higher percentage is not good enough” ***
    And indeed look how party % turned into seats:
    Labour got 30.4% of the votes but took 35.7% of the seats
    …the Tories' 36.8% gave them 51% of seats
    add in the SNP's 8.6% + the LibDems' 1.2%
    : meaning the combined left-wing opposition got 45.5% of seats
    (see how the Tories have a much higher "% to seat multiplier": 1.38x vs Labour's 1.17x)

    *** Faith in Experts – Cos the media never showed regional seat predictions properly accounting for such variations, many of us discounted our instincts, thinking the media must have a higher level of expertise on tap.

    * (One poll, Communicate's poll for the Independent, was outside the margin of error! Labour 39%, Tories 31%.)
    ** Votes are not evenly spread: a party's heartlands can show huge percentages while other areas show much less.
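The vote-to-seat multipliers quoted above are simple ratios. A minimal sketch, using the vote shares quoted in the comment and the seat totals from the post (331 Con, 232 Lab, out of 650 UK seats):

```python
# Seat multiplier: % of seats won per % of the national vote.
# Vote shares (36.8%, 30.4%) are the figures quoted in the comment above.
def seat_multiplier(vote_pct: float, seats: int, total_seats: int = 650) -> float:
    """Return how many % of seats each 1% of the vote translated into."""
    seat_pct = 100.0 * seats / total_seats
    return seat_pct / vote_pct

con = seat_multiplier(36.8, 331)   # ~1.38x
lab = seat_multiplier(30.4, 232)   # ~1.17x
print(f"Con {con:.2f}x vs Lab {lab:.2f}x")
```

This reproduces the comment's 1.38x vs 1.17x figures: under first-past-the-post, a party whose vote is efficiently distributed across constituencies gets a better vote-to-seat conversion.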

  29. Reply to TinyCO2: You can imagine that before the election there was a polling expert saying "we could adjust the figs to account for the closet Tory effect" and the media bods replying "no.. neck and neck is an exciting story".

    TinyCO2 : “Nope stewgreen, if you watched the gobsmacked look on BBC and political faces (all sides) you’d know it came as a huge shock”
    ..Yes, I agree MOST BBC bods were surprised. I imagined only key BBC producers knew: having found the "right-for-the-BBC-narrative" result, the pollster, for fear of upsetting them, whispered about "adjusting to account for the closet Tory effect", and that producer patted him on the head and said "shush".

    UPDATE May 11, 2015 at 3:52 am: I began with a quote but forgot to attribute it.
    It was Nate Silver on his webpage: "The first thing to note of course is that everyone got it badly wrong, greatly underestimating the Conservative support. Reasons for this include… the 'Closet Conservative' factor… incorrect sampling".

  30. I'll shut up eventually, but 2 (no, 7) more common factors I just remembered (after Josh's hint):
    2. The EXTRAPOLATIONS away from primary evidence
    : CO2 -> temp ->Climate catastrophe
    : vote ->seats **

    3. The FAKE CERTAINTY beyond evidence in these extrapolations
    – Climate science is quoted with made-up certainty figs like "95% certain" and "97% of scientists say"
    – And note: although the pollsters' % predictions only once strayed outside the margin of error (in the Independent on Sunday),
    when the media extrapolated them to SEATS they did it with CERTAINTY ..and I never saw them quote the increased margin of error.
    ** Note that Conservative percentages transfer to SEATS at a multiplier about 20% better than Labour's (1.38x vs Labour's 1.17x, see my comment above)

    4. Groupthink BIAS: they tuned the MODELS against each other, instead of against the real world,
    i.e. when Survation's poll seemed at odds with the others, they declined to publish it.

    5. Faith in Experts – Cos the media never showed regional seat predictions properly accounting for such variations, many of us discounted our instincts, thinking the media must have a higher level of expertise on tap.

    6. No Turning Back / The Science is Settled: once the media accept a narrative it becomes so ingrained that they continue with it, i.e. the BBC bods were convinced that "UKIP are racists who must be kept out, and the SNP's Sturgeon is going to hold the reins of power". Note how the BBC radio agenda continued after the election as if Sturgeon did indeed win the balance of power.

    7. Possible Data Cheating: it is not conspiracy theory to consider these scenarios.
    Once people are wedded to a narrative, the actual data they collect can be dishonest.
    e.g. I can design polls to get the answers I want. I can design polls to train opinion.
    : I can choose to adjust/collect climate data in one way and not another

    I can choose to publish/hype some polls and not others. Both Labour & Conservative had different motives for this that I outlined above May 11, 2015 at 3:52 am
    : same goes for climate stories

  31. – oh and it’s SIMPLISTIC narrative vs COMPLEX reality
    ie the media want to pick up the most simplistic story they can.

  32. More likely possible reasons are that a sizeable proportion of the public finds out facts like:

    – Conservatives pay down debt in good times, whereas Labour Leader (ex-Marxist Economics lecturer) doesn’t even know you’re supposed to
    – Conservatives create jobs during their government, whereas Labour destroy them

  33. The pollsters are in somewhat hot water over this. These are businesses and won’t do well financially if they get things regularly wrong. They claim they did adjust for shy Tories, ‘just not enough’.

    I imagine that political bias might make a difference to how a pollster develops their rules, but I doubt they'd try to inaccurately skew the results when their employment/credibility was at stake. They couldn't even be sure they wouldn't influence the public the wrong way.

    As yet, there are no serious conclusions about why the polls were wrong and what makes a shy Tory become a voting Tory. I think that it's a confluence of reasons that change from election to election. I suspect the internet news sites will have the best idea of what the public mood is: which issues are clicked on, which get a lot of comments, what the tone of those comments is. These will give pollsters an idea of which issues are really important and which are just headlines. E.g. news people and Labour thought Russell Brand was influential, but comments on news sites were almost all negative. Turns out we don't want our leader to be down with the boyz; we want him or her to be a grown-up.

  34. stew, the news stories ring hollow enough when considered as items of reportage. But much of the news media are aware they are at least marginally capable of manufacturing consent and opinion (to borrow from Noam Chomsky), so this sort of reporting serves a dual purpose.

    If the reported ‘news’ comes true, you go ‘See, told you’, and if it turns out to be completely wrong, you go ‘well, at least we tried’.

    If you read the sociological literature, authors frequently stress not to mistake news coverage of an event for the event itself. A lesson that is frequently forgotten.

  35. Paddy Ashdown argues that the inaccurate opinion polls were a factor in the Lib Dem collapse – if the polls had shown the true Tory lead, the SNP fear factor would have been diminished and the value of the Lib Dems as a moderating influence would have been enhanced.

    Had that been the case, the party that picked up the extra votes would not have been last GE’s protest party – the Limp Dims, but the new protest party – UKIP.

    THAT would have been very interesting.

  36. Quick aside: I had a theory, but when I checked it seems wrong. There are 2 stats issues.
    1. That the pollsters were vastly wrong in saying the Lab/Con percentages would be the same.
    2. The different vote-to-seat multipliers Lab/Con have, i.e. Con vote percentages turn into more seats.
    My theory was that this was accounted for by Labour votes being "wasted" in Scotland as seats just flipped over to become SNP. But when I checked the numbers for Scotland, the Conservatives also had more "wasted" votes than I anticipated, so not a huge difference from Labour: i.e. Con 434K vs Lab 707K, and that 273K difference is not much in a pool of 33 million.

    BTW : The BBC webpage today says the turnout was 375 million !
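The Scottish wasted-vote check above is simple arithmetic. A sketch using the figures quoted in the comment (434K and 707K are the commenter's numbers, not independently verified totals, and ~33 million is the quoted vote pool):

```python
# "Wasted" votes in Scotland as quoted in the comment above:
# votes cast for a party that returned no MP in that seat.
con_wasted = 434_000
lab_wasted = 707_000

difference = lab_wasted - con_wasted        # 273,000 votes
share_of_pool = difference / 33_000_000     # fraction of ~33M votes cast
print(f"{difference:,} wasted-vote gap = {share_of_pool:.1%} of the pool")
```

At under 1% of the national vote pool, the Scottish wasted-vote gap is indeed far too small to explain the 1.38x vs 1.17x multiplier difference.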

  37. Well much good and interesting comment on this topic.

    I feel that it was just like 1992 again. Many uncommitted voters simply looked at the Labour leader and said "Eh! NO". Still, it hasn't done Kinnock any harm, has it? Done nothing but politics since he left the Welsh Valleys, but is worth millions. Who says that Labour doesn't understand aspiration?!

    I can only be trivial on this occasion – we all know the definition of an “expert”.

    "Ex" is a has-been

    "spert" is a drip under pressure

    I demonstrated this truism during my long and not too illustrious career – being the role model for many who said to themselves if they will pay “Dave” to do that, they will pay me.

  38. Very interesting blog. From a personal perspective, the phenomenon of shy Tories is quite real.

    On the reliability of polls it is worth having a read of a revealing analysis posted online the evening before the General Election.

    This focused on how the opinion polls had understated the Tory vote share in 10 of the previous 12 elections, not just in the disastrous predictions of 1992. It concluded with a prediction of a Tory lead of around 6%. On that basis, it'll be interesting to look at the local election results of 2019 if you want to get a good idea about who'll win in 2020!

  39. 2 more recent podcasts about problems of prediction science.
    (Oops, the spam filter has deleted a post I made 4 hours ago, so I will repeat it and elaborate.)
    * 1. The ABC Radio prog Future Tense has a radio series, "The prediction predicament", which by coincidence first aired on Sunday (seemingly made before the election):
    "So why do we get it so wrong, so often? Particularly when it comes to the social sciences."
    In it David Hand gives some good observations.
    Transcript:
    Highlights: Philip Tetlock, who classically showed experts are bad at prediction, said we go back to them cos we can't stand not knowing.
    : The media like certain and simplistic pundits; this makes those pundits relatively poor forecasters.
    David Hand: "Highly improbable events are commonplace, they happen every day", e.g. one guy was struck many times by lightning, cos he is a Park Ranger.

    * 2. Spiegelhalter and the actual poll experts spoke on the BBC's prog More or Less on Friday the 8th.
    One quote: "No, you can't keep saying it's the margin of error; (if you have enough samples) if it's all in one direction it's systematic bias."
    The pollsters themselves declined to apologise properly, but it seems to me they took a contract to make predictions which they didn't have the ability to do. Their models can't predict the government; the voter response rates are very low (30% on telephone polls).

    Don't forget to check next Sunday's show: it's about hyping up the Fear Factor ..very relevant for climate.
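The More or Less point, that repeated one-direction misses indicate bias rather than sampling noise, can be illustrated with the standard margin-of-error formula. This is a simple-random-sample approximation (real quota/weighted polls have larger effective error), and the 1,000-respondent sample size is an assumed typical figure, not one from the programme:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% sampling margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical ~1,000-person poll reporting 34% support: roughly +/-3 points.
moe = margin_of_error(0.34, 1000)
print(f"+/- {moe:.1%}")
# Sampling error is symmetric, so if many independent polls all miss
# in the SAME direction, that points to systematic bias, not noise.
```

The symmetry is the key point: random sampling error should overshoot the Tory share about as often as it undershoots it, so 10 misses out of 12 in one direction (as the comment below notes) is hard to attribute to the margin of error alone.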

  40. Did Labour have secret TRUE polls? Well, Alastair Campbell genuinely showed he was "in a BUBBLE" and didn't expect the result, on Radio 3, May 12th.
    Direct audio link

    He explained Andrew Neil kept the exit poll secret from him and just said "you will be very shocked".
    AC says he thought "oh my God, we've won (outright); that is cos I was inside my own little BUBBLE" (and couldn't consider anything else).
    – He also revealed the Labour strategists had not been considering anything other than the possible Labour government coalition options, i.e. that was their bubble.

    podcast version
    (The segment, about 'gut instinct', actually starts at 8.30 mins)

  41. #1 This media election prediction failure is a VERY big deal; it has such big parallels with climate.
    "THEY CAN'T ALL BE WRONG! All the institutions, all the experts, all the authorities!
    .. So you climate skeptics are the ones who are clearly wrong"
    is a main argument people use when you challenge green dogma.
    – BUT THEY WERE ALL WRONG when it came to UKElection2015.

    #2 Don't let the FAILed media industry blame it on the pollsters.
    A good question came up on the R4 Media Show: the BBC keep blaming it on the pollsters, yet you have thousands of media staff; how come none of them DETECTED that the narrative was wrong?
    ..Well, surely that is the channelled thinking you get into: CONFIRMATION bias means you accept anything that confirms your narrative and dismiss anything that contradicts it. You don't look for stuff that challenges your 'confirmed' narrative.

    #3 3% wrong is the IMPORTANT context. The pollsters said it was neck and neck at 34% Labour, 34% Tory, but the pollsters' error may have been as little as 3%,
    i.e. if 3 out of 100 intended Labour voters put their cross next to the Conservative box instead, that explains the 31%/37% result.
    (There is no evidence that the surprise vote came from the other minor parties.)
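The #3 arithmetic checks out directly: moving 3 points of stated Labour support across to the Conservatives turns the pollsters' 34/34 tie into the comment's 31/37 split. A sketch of the comment's own numbers:

```python
# Shift `points` of stated support from Labour to the Conservatives,
# modelling intended-Labour voters who actually voted Conservative.
def apply_shift(lab_pct: float, con_pct: float, points: float):
    return lab_pct - points, con_pct + points

lab, con = apply_shift(34.0, 34.0, 3.0)
print(f"Labour {lab:.0f}%, Conservative {con:.0f}%")  # 31% / 37%
```

Note this assumes the whole error is a direct Labour-to-Conservative transfer, which is the scenario the comment describes; the actual result (30.4%/36.8%) is close to but not exactly this split.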
