Lies and statistics

This week The Economist has an interesting article, Unreliable research: trouble at the lab, on the worrying level of poor quality scientific research, and the weak mechanisms for correcting mistakes. Recently a drug company, Amgen, tried to reproduce 53 key studies in cancer research, but could reproduce the original results in only six. This does not appear to be untypical of attempts to reproduce research findings. The Economist points to a number of aspects of this problem, such as the way in which scientific research is published. But of particular interest is how poorly the logic of statistics is understood, not only in the world at large, but in the scientific community itself. This, of course, applies particularly to the economic and social science research so beloved of political policy think tanks.

One particular aspect of this is the significance of a concept generally known as “prior probability”, or just “prior” for short, in interpreting statistical results. This is how inherently likely or unlikely a hypothesis is considered to be, absent any new evidence. The article includes an illustrative example. Hypotheses are usually tested to a 95% confidence level (a can of worms in itself, but let’s leave that to one side). Common sense might suggest that this means there is only a 5% chance of a false positive result – i.e. that the hypothesis is incorrect in spite of experimental validation. But the lower the prior (i.e. the less inherently probable the hypothesis), the higher the chance of a false positive (at the extreme, if the prior is zero, no positive experimental result would convince you, as any positive result would be false – the product of random effects). If the prior is 10%, and the test has a statistical power of 80% (i.e. an 80% chance of detecting a true hypothesis), then 4.5% of all tests will yield a false positive, compared to an 8% chance of a true positive. So there is a 36% chance that any given positive result is false (and, for completeness, a 97% chance that a negative result is truly negative). Very few people appreciate this.
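The arithmetic behind these figures can be sketched in a few lines. The numbers are the article's illustrative ones; the 80% power figure is an assumption needed to make them consistent:

```python
# Illustrative example: hypotheses tested at a 5% significance level,
# with a 10% prior and an assumed statistical power of 80%.
prior = 0.10   # proportion of hypotheses that are actually true
alpha = 0.05   # significance level: false positive rate among false hypotheses
power = 0.80   # chance a true hypothesis produces a positive result (assumed)

true_positive = prior * power             # 0.08  -> 8% of all tests
false_positive = (1 - prior) * alpha      # 0.045 -> 4.5% of all tests

# Share of positive results that are in fact false
false_discovery = false_positive / (false_positive + true_positive)
print(f"{false_discovery:.0%}")           # 36%

# Share of negative results that are truly negative
true_negative = (1 - prior) * (1 - alpha)   # 0.855
false_negative = prior * (1 - power)        # 0.02
print(f"{true_negative / (true_negative + false_negative):.1%}")  # 97.7%
```

The key point is that the 36% figure depends as much on the prior as on the significance level: repeat the calculation with a 50% prior and the false discovery rate falls below 6%.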

The problem is this: an alternative description of “low prior” is “interesting”. Most of the attention goes to results with low priors. So most of the experimental results people talk about are much less reliable than many people assume – even before other weaknesses in statistical method (such as false assumptions of data independence, for example) are taken into account. There is, in fact, a much better statistical method for dealing with the priors problem, called Bayesian inference. This explicitly recognises the prior, and uses the experimental data to update it to a “posterior”. So a positive experimental result would raise the prior, to something over 10% in the example depending on the data, while a negative one would reduce it. This would then form the basis for the next experiment.
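A minimal sketch of such a Bayesian update, using the same illustrative numbers (the 80% power figure is again an assumption, not stated in the article):

```python
def update(prior, p_pos_if_true=0.80, p_pos_if_false=0.05, positive=True):
    """Return the posterior probability that the hypothesis is true,
    given one positive or negative experimental result (Bayes' rule)."""
    if positive:
        likelihood_true, likelihood_false = p_pos_if_true, p_pos_if_false
    else:
        likelihood_true, likelihood_false = 1 - p_pos_if_true, 1 - p_pos_if_false
    evidence = prior * likelihood_true + (1 - prior) * likelihood_false
    return prior * likelihood_true / evidence

print(f"{update(0.10, positive=True):.0%}")   # 64% -- a positive result raises the prior
print(f"{update(0.10, positive=False):.1%}")  # 2.3% -- a negative result lowers it
```

With these assumed numbers a single positive result lifts a 10% prior to a 64% posterior, which then serves as the prior for the next experiment.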

But the prior is an inherently subjective concept, albeit one that becomes less subjective as the evidence mounts. The scientific establishment hates to make such subjective elements so explicit, so it is much happier to go through the logical contortions required by the standard statistical method (to accept or reject a null hypothesis up to a given confidence level). This method has now become holy writ, in spite of its manifest logical flaws. And, as the article makes clear, few people using the method actually seem to understand it, so errors of both method and interpretation are rife.

One example of the scope for mischief is interesting. The UN’s Intergovernmental Panel on Climate Change (IPCC) presented its conclusion recently in a Bayesian format. It said that the probability of global warming induced by human activity had been raised from 90% to 95% (from memory). This is, of course, the most sensible way of presenting its conclusion. The day this was announced the BBC’s World at One radio news programme gave high prominence to somebody from a sceptical think tank. His first line of attack was that this conclusion was invalid because the standard statistical presentation was not used. In fact, if the standard statistical presentation is ever appropriate, it is for the presentation of a single set of experimental results, and even there it conceals much about the thinness or otherwise of its conclusion. But the waters had been muddied; neither the interviewer nor anybody else was able to challenge this flawed line of argument.

Currently I am reading a book on UK educational policy (I’ll blog about it when I’m finished). I am struck by how much emphasis is being put on a very thin base of statistical evidence – and indeed by how statistical analysis is being applied to inappropriate questions. This seems par for the course in political policy research.

Philosophy and statistics should be part of every physical and social science curriculum, and politicians and journalists should bone up too. Better still, scientists should bring subjectivity out into the open by adopting Bayesian statistical techniques.


The Euro end game

It’s been a tough year for Europhiles, especially those, like me, who have always supported the single currency and thought Britain should have been part of it.  Most of them have been very quiet, and no wonder.  Whatever one says quickly has the feel of being out of touch and in denial.  And now this week the Economist asks, in a leading article, Is this really the end? – a piece that has been tweeted over 1,200 times and picked up over 500 comments.  In today’s FT Wolfgang Munchau’s article is headlined: The Eurozone really has only days to avoid collapse (paywall).  Is now the moment to finally let go, and admit that the whole ill-fated enterprise is doomed?

There is no doubting the seriousness of the current crisis.  While most of the headlines have been about sovereign debt (especially Italy’s) what is actually threatening collapse is the banking system.  It seems to be imploding in a manner reminiscent of those awful days of 2007 and 2008.  The Germans’ strategy of managing the crisis on the basis of “just enough, just in time” seems to be heading for its inevitable denouement.  Unless some of their Noes turn to Yeses soon there could be a terrible unravelling.

The most urgent issue is to allow the European Central Bank (ECB) to open the floodgates to support both banks and governments suffering a liquidity crisis.  “Printing money” as this process is often referred to, seems the least bad way to buy time.  Two other critical elements, both mentioned by Mr Munchau, are the development of “Eurobonds” – government borrowing subject to joint guarantee by the member states – and fiscal integration – a proper Euro level Finance Ministry with real powers to shape governments’ fiscal policy in the zone.  Most commentators seem to be convinced that some sort of steps in both these directions will be necessary to save the Euro.

I have a lingering scepticism about these last two.  I thought that the original idea of allowing governments to default, and so allowing the bond markets to act as discipline, had merit.  The problem was that the ECB and other leaders never really tried it before the crisis, allowing investors to think that all Euro government debt was secure.

Still, the short term crisis is plainly soluble, and most people will bet that the Germans will give the ECB enough room to avert collapse.  But that leaves the zone with a big medium term problem, and two long term ones.  The medium term one is what to do about the southern members whose economies are struggling: Spain, Portugal and Greece especially, with Italy lurching in that direction.  The stock answer, which is to enact reforms such that their economies become more competitive, seems to involve such a degree of dislocation that we must ask if it is sustainable.  This treatment is not dissimilar to that meted out by Mrs Thatcher to Britain in the 1980s (an uncompetitive currency was part of the policy mix here, deliberately or not), for which she is still widely loathed.  And she was elected (though “democratically” is a stretch given Britain’s electoral system).  How will people react to unelected outsiders imposing such treatment?  Better than Britons would, no doubt, since there is so little confidence in home grown politicians, but it’s still asking a lot.

And that leads to one of the two long-term problems: the democratic deficit.   A lot of sovereignty is about to be shifted to central institutions, and it won’t be possible to give electors much say.  The second long term issue is dealing with the root cause of the crisis in the first place, which is how to deal with imbalances of trade that develop within the Euro economy.  Germany simply cannot have a constant trade surplus with the rest of the zone without this kind of mess occurring at regular intervals.  But there is no sense that German politicians, still less their public, have the faintest grasp of this.  For them the crisis is the fault of weak and profligate governments elsewhere.

So if the Euro survives the current crisis, there is every prospect of another one down the road, either political (one or more countries wanting to leave the Euro and/or the Union) or financial (say an outbreak of inflation).

My hope earlier in the crisis was that it was part of a learning curve for the Euro governments.  As they experienced the crisis, institutions would be changed and expectations made more realistic, such that the zone could get back to something like its original vision.  I am afraid that there is a lot more learning to do.


The Economist shoots itself in the foot. Twice.

This week The Economist has come out for a No vote in a leader on the UK’s referendum on the voting system.  It argues that AV is no improvement on FPTP, so we should vote no.  It wants the system to be more proportional, with 20% of parliament’s seats selected by proportional representation (PR), and the rest on FPTP.  It dismisses the argument that a Yes vote would make further change more difficult, without really saying why, beyond “It might exhaust the national appetite for reform.”  This is pretty weak stuff, but on two counts the paper has undermined its own argument.

Update.  Having read more of this week’s edition, the Economist’s hidden reasoning looks a bit clearer.  They are worried that a Yes vote would make the Conservatives so angry that they would derail other reforms, such as that for the House of Lords.  Alternatively, if there is a No vote then these reforms are more likely to go through as a consolation prize.  They appear to think that these other reforms are more important.

Also they are making a big deal out of the fact that because many voters will not rank all candidates, some candidates will be elected with less than 50% of the vote.  They think this is a major problem with the Yes case; yet I think an unused preference is simply equivalent to an abstention.  Nobody suggests that MPs should be elected by 50% of the whole electorate.  And enough people will cast preference votes, especially if there is a major left-right polarisation, to make the change in system worthwhile.  This article on NSW and Queensland state elections, which use the same preferential system envisaged here, makes that quite clear.

In the first instance, in the very same edition, the paper covers the Canadian general election, held under FPTP, with the sub-headline “A last-minute surge for the left might end up benefiting the right.”  This is exactly the sort of perverse outcome that AV can do much to prevent, because it reduces the problem of the split vote.  In Australia, which uses AV, the rise of the Green party has not benefited the right.  Under FPTP, using first preferences, the right would have benefited royally from the Greens’ success in the last Australian general election.  So, “no improvement”?  The Economist makes the case against its own editorial with wonderful succinctness.
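The split-vote mechanics can be illustrated with a toy count. The constituency and ballot numbers below are entirely hypothetical; the counting rule is the standard instant-runoff procedure that AV uses:

```python
from collections import Counter

def av_winner(ballots):
    """Instant-runoff (AV) count: repeatedly eliminate the last-placed
    candidate and transfer those ballots to their next surviving
    preference, until one candidate has a majority of continuing votes."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:           # highest surviving preference
                if choice in candidates:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.remove(min(tally, key=tally.get))

# Hypothetical constituency where the left vote splits between two parties.
ballots = ([("Con",)] * 40 +            # right bloc, no further preferences
           [("Lab", "Green")] * 35 +    # left bloc, split two ways...
           [("Green", "Lab")] * 25)     # ...but preferring each other second

fptp_winner = Counter(b[0] for b in ballots).most_common(1)[0][0]
print(fptp_winner)          # Con -- wins on 40% thanks to the split vote
print(av_winner(ballots))   # Lab -- Green preferences transfer under AV
```

Under FPTP the right-bloc candidate wins on 40% of first preferences; under AV the eliminated Green ballots transfer and the majority left bloc prevails.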

Next, the paper has changed its view.  It used to be a strong advocate for AV in the UK.  Admittedly that was a long while back (and certainly before 1997, when the paper’s online archive starts).  But the paper routinely refers to previously held positions, sometimes dating back to the 19th century.  And yet the paper’s leader makes no reference to this earlier view, and why it has changed its mind.

I have been reading the Economist since 1984 (and its position on AV swayed me in its favour back in the 1980s – one of the reasons that my support is not as lukewarm as some).  This is very disappointing.  The editorial team was probably deeply split.  Its Bagehot columnist, burnt by experiences elsewhere in Europe, hates PR and fears that AV might eventually lead to PR because more people will cast first preference votes for minor parties.  Others no doubt favour full PR.  I can imagine the poor leader writer trying to reconcile all this.  And failing.