Monthly Archives: May 2015

Making things up about Jeb Bush

I’m no fan of the Bush family. I couldn’t believe it when those dumb Americans voted for W, then did it again 4 years later. Now apparently his brother Jeb is going to stand for President. Those Yanks must be so glad they got rid of hereditary monarchy.

But to climate activists, it seems that Jeb Bush is so evil that misrepresenting what he said is perfectly acceptable, and normal standards of integrity don’t apply.

Here’s what Bush said, according to Reuters:

“Look, first of all, the climate is changing. I don’t think the science is clear what percentage is man-made and what percentage is natural. It’s convoluted. And for the people to say the science is decided on, this is just really arrogant, to be honest with you.”

“It’s this intellectual arrogance that now you can’t even have a conversation about it. The climate is changing, and we need to adapt to that reality.”

Now here’s how the climate propaganda brigade reported what he said. First up is Mat Hope, Associate Editor at Nature Climate Change with a focus on the social sciences.

A double misrepresentation – Bush didn’t refer to scientists, nor did he say that talking about climate science was arrogant. When challenged on this, Hope claimed that it was “fair paraphrasing” and that his tweet was an “analysis” of what Bush had said.

Not quite as bad was climate scientist Michael Oppenheimer, who said

Here he’s making an unjustified presumption – there’s no evidence he was talking about climate scientists. In fact “The science is settled” tends to be a claim made by journalists and politicians, not scientists. Since Bush is a politician, he’s probably talking about Obama.

Then there was the ubiquitous Bob Ward,

Quite what is meant to be “denial” in Bush’s statement isn’t clear.

Finally, HuffPost writer Kate Sheppard says

https://twitter.com/kate_sheppard/status/601398556566630401

Bizarrely, she quotes Bush in her article, so anyone reading it can see that her tweet and the headline of her HuffPo article are, in the words of one of the climategate emails, “not especially honest”.


Tangentially related to this, and following up from the previous post on why the election opinion polls were wrong, there’s an interesting article in the Times Higher by an academic, Diana Beech, who confesses to the sin of voting Conservative. She says she was a floating voter but was driven to the right by the “self-righteous and intolerant nature of the comments I saw from colleagues on my Facebook feed”. She goes on to say “The belligerence of the Left’s intelligentsia in the social media sphere – at least in my circles – left no room for the balanced, honest debate which could have ultimately brought undecided voters into the fold.”

Climate communication experts could perhaps benefit from reading this and giving some thought to how this might work in the case of public opinion on climate change, where the belligerence and intolerance of the activist left is just as bad, if not worse. It’s unlikely that Hope, Oppenheimer, Ward and Sheppard will take any notice.

Another triumph of expert predictions

One theme of this blog has been the failure of the predictions made by expert climate scientists, together with the failure to acknowledge or investigate this failure.

Last night we had another very interesting example of expert predictions failing. With all the results now in, we know that the Conservatives have 331 seats, and Labour 232.

How does this compare with the various predictions made just before the vote?

                          Con   Lab   Con – Lab
Final Result              331   232      99
YouGov (Peter Kellner)    284   263      21
Bookies (oddschecker)     287   267      20
Nate Silver (538)         278   267      11
Guardian                  273   273       0
British Election Study    274   278      −4

I’ve listed here some of the predictions made yesterday, in decreasing order of accuracy (Con–Lab difference). The “Bookies” row comes from Oddschecker, which lists odds provided by 20 or so bookies in a neat table (currently showing, for example, the options for next Labour leader). You’ll have to take my word for it that I copied down their most likely outcome correctly. Nate Silver’s prediction is still online; he is sometimes regarded as a guru of great wisdom, despite having got the 2010 UK election spectacularly wrong (he predicted about 100 Lib Dem seats). The final projection from the Guardian was a dead heat between Labour and the Conservatives. The British Election Study is a group of, um, expert UK academics. Their final forecast is here.
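As a sanity check on the ordering in the table, here is a minimal Python sketch, using only the seat numbers quoted above, that ranks the forecasts by the absolute error in their predicted Con – Lab margin:

```python
# Rank the election forecasts by how far their predicted Con - Lab
# seat margin fell from the actual result (Con 331, Lab 232).

FINAL = {"Con": 331, "Lab": 232}

forecasts = {
    "YouGov (Peter Kellner)": {"Con": 284, "Lab": 263},
    "Bookies (oddschecker)":  {"Con": 287, "Lab": 267},
    "Nate Silver (538)":      {"Con": 278, "Lab": 267},
    "Guardian":               {"Con": 273, "Lab": 273},
    "British Election Study": {"Con": 274, "Lab": 278},
}

actual_margin = FINAL["Con"] - FINAL["Lab"]  # 99 seats

def margin_error(pred):
    """Absolute error in the predicted Con - Lab seat margin."""
    return abs((pred["Con"] - pred["Lab"]) - actual_margin)

# Print forecasts from best to worst by margin error
for name in sorted(forecasts, key=lambda k: margin_error(forecasts[k])):
    pred = forecasts[name]
    print(f"{name:24s} margin {pred['Con'] - pred['Lab']:+4d}  "
          f"error {margin_error(pred)}")
```

Running this reproduces the ordering in the table: YouGov closest (error 78 seats on the margin), the academics of the British Election Study last (error 103).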

The first thing to note of course is that everyone got it badly wrong, greatly underestimating the Conservative support. Reasons for this include:
(a) the “Closet Conservative” factor – there is a tendency for people not to own up to supporting the Conservative party, and
(b) incorrect sampling by the pollsters – perhaps quiet conservatives stay at home, don’t answer the phone much and aren’t as eager as some others to express their opinions.
However, I thought that the pollsters were well aware of these factors, particularly since the 1992 election when something very similar happened, and compensated for it.

But what I found most interesting is that of all the predictions, the worst was that given by the team of expert university academics. Roger Pielke wrote a post about their predictions back in March, when their average prediction was similar to that in the table above, suggesting a small lead for Labour. There was a consensus – in fact not a 97% consensus, but a 100% consensus – among the experts that the Conservatives would get less than 300 seats. But the consensus was wrong.
Why does a team of experts perform worse than the bookies, who presumably base their odds mainly on the money placed, i.e. on public opinion?! One possible explanation for this apparent contradiction is suggested by the work of Jose Duarte and others on the effects of the well-known left-wing bias in academia: the researchers may be inadvertently building their own political bias into the assumptions of their model, and this may be influencing their results.

Other possible explanations for the surprise election results and the apparent failure of the expert predictions are as follows:

  • This is just a short-term fluctuation – a hiatus, or pause, in the Labour vote – that the models cannot be expected to predict correctly. The experts have much more confidence in their projection for the 2100 election. (HT David)
  • The raw data from the election results is not reliable, and needs to be adjusted by the experts. After suitable UHI and homogeneity adjustments have been applied, the results are in line with the expert predictions, and Ed Miliband is declared the new Prime Minister.
  • More funding and bigger computers are urgently needed, so that we can get more accurate predictions.
  • The missing Labour voters are hiding at the bottom of the oceans.

Finally, Feynman’s rule applies again:

Science is the belief in the ignorance of experts.

Updates and links:

Roger Pielke has published his evaluation of the predictions: “… mass carnage for the forecasters”. He notes a really interesting point, that asking people who they think will win in the constituency is more effective than asking them who they will vote for.
He also has an article in the Guardian.

The BBC has a post-mortem, “How did pollsters get it so wrong?”, which asks many questions but offers few answers beyond mentioning the “shy conservative” effect.

One Survation poll was very accurate – but was not published because it was so out of line with all the others!

Both the Tories and Labour had their own internal polls in the final week suggesting that the seat split would be about 300 – 250 (The Times, 9 May). But they kept this to themselves, either doubting it or in Labour’s case so as not to discourage the faithful.

Paddy Ashdown argues that the inaccurate opinion polls were a factor in the Lib Dem collapse – if the polls had shown the true Tory lead, the SNP fear factor would have been diminished and the value of the Lib Dems as a moderating influence would have been enhanced.

Tory MEP Daniel Hannan says the answer to why the polls got it wrong is given in this quote from Edmund Burke, a more poetic version of my answer (b).

Frank Furedi in Spiked goes for answer (a): “Is it not worrying that in a free society ordinary citizens feel uncomfortable with publicly expressing their true opinions?”

Josh has produced a cartoon

Josh also links to a Dan Hodges piece from April 30th predicting a Tory lead of 6-7 points – spot on (Andrew Lawrence got it right too – see also Ian Woolley’s comment below).


Post-mortems

Newsnight on 11 May looked into why the polls did badly. Survation thought there was simply a late swing. Labour’s internal poll had shown they were behind for months – more details here and here. The “shy conservative” and “poor sampling” factors were also mentioned.

Lord Ashcroft says he did not make a prediction, but then contradicts himself by saying he got it right regarding Scotland and UKIP. Acknowledging the underestimate of the Con vote, he suggests late swing, Tory micro-targeting of key seats, and Shy Tory as factors. (In my marginal constituency there was no effective Tory micro-targeting.)

In The Conversation there’s a jaw-dropping apologia for the failure of the pollsters by two academics who seem to be in denial. They come up with a confidence-interval excuse that doesn’t survive the simplest scrutiny – see my comment there. There’s a climate analogy here again – the group defends itself and refuses to acknowledge its errors.
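For what the “simplest scrutiny” amounts to: assuming a typical poll sample of around 1,000 respondents (an illustrative figure, not one taken from their article), the standard 95% margin of error on a single vote share is only about ±3 points. One poll missing by that much is conceivable; a dozen polls all missing in the same direction is systematic error, not sampling noise.

```python
# Sanity check on the "it's just sampling error" defence.
# Assumption (illustrative): a typical poll samples n = 1000 people.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated vote share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
moe = margin_of_error(0.5, n)  # worst case, at p = 0.5
print(f"single-share 95% margin of error: ±{moe:.1%}")  # about ±3.1 points
```

The final polls showed Con and Lab roughly level, while the actual vote gap turned out to be about 6.5 points, and every pollster erred the same way – well outside what random sampling variation can explain.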

538 are much more honest, admitting straight away that they got it wrong. They say they adjusted for the “stick with what we know” factor, but nowhere near enough. A second article says it’s all down to getting the vote share wrong, but doesn’t say why they got that wrong.

Matt Singh has a post-mortem saying that factors may be electoral flux (meaning things were very different this time because of UKIP and the SNP), shy voters, and overestimated turnout. He also wrote a very detailed blog post on the shy Tory effect the day before the election, ending with a spot-on prediction of a Con lead of 6 points (HT botzarelli in comments).

In the Mail, an Ipsos Mori pollster claims that the problem with the polls was mainly that the Labour supporters just didn’t bother to vote. I don’t find that explanation at all convincing.

David Spiegelhalter says he got it wrong and acknowledges Matt Singh’s success. He praises the exit poll, discusses some suggestions for improvement but sits on the fence regarding what actually went wrong.

The Guardian says that more accurate results are obtained if you ask people other questions about their values first, rather than just leaping in with “who are you going to vote for”. This sounds odd to me – like steering. It also repeats the claim that the Tory internal polls had told them they’d win comfortably.