The Economist advocates turning the clock back on trade

Last week’s Economist led on the dangers of changing political attitudes to world trade. The paper suggested that the rise of “zero-sum thinking” threatens capitalism, liberal democracy and the livelihoods of many. But we live in a world where the conventional wisdom of economists is being challenged – from inflation to interest rates to economic growth. The conventional wisdom on trade needs to be challenged too: not because the economics is wrong, but because the context has changed.

There are two central foundations to economists’ understanding of the benefits of trade. One is the logic of comparative advantage, one of the earliest insights of modern economics, dating from its beginnings more than two centuries ago. What matters when resources are constrained (as they almost always are) is opportunity costs, not absolute costs. It can be more efficient for a less productive supplier to make a good, if that frees a more productive supplier to make other goods that are in short supply. It is one of the first things economics students are taught, and one of the most important challenges to “zero-sum thinking”, which suggests that imports are bad because they put local people out of work. Those people can be redeployed to make things more productively in world terms, meaning that everybody can benefit.
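To make the logic concrete, here is a minimal sketch with invented countries, goods and productivity figures (all hypothetical): even when one country is absolutely more productive at both goods, specialising along the lines of opportunity cost raises world output of both.

```python
# Toy illustration of comparative advantage, with invented numbers.
# Output per worker: country A is more productive at BOTH goods in
# absolute terms, yet what matters is opportunity cost.
output_per_worker = {
    "A": {"cloth": 10, "wine": 20},
    "B": {"cloth": 8, "wine": 2},
}

for country, o in output_per_worker.items():
    # Opportunity cost of one unit of cloth, measured in wine forgone.
    print(f"{country}: one unit of cloth costs {o['wine'] / o['cloth']:.2f} units of wine")
# A: 2.00 wine per cloth; B: 0.25 wine per cloth -> B should make the cloth.

WORKERS = 100  # workforce per country

def world_output(a_cloth_workers, b_cloth_workers):
    """Total world output for a given allocation of workers to cloth."""
    a, b = output_per_worker["A"], output_per_worker["B"]
    cloth = a_cloth_workers * a["cloth"] + b_cloth_workers * b["cloth"]
    wine = (WORKERS - a_cloth_workers) * a["wine"] + (WORKERS - b_cloth_workers) * b["wine"]
    return {"cloth": cloth, "wine": wine}

print(world_output(50, 50))   # no specialisation: 900 cloth, 1100 wine
print(world_output(20, 100))  # B specialises in cloth: 1000 cloth, 1600 wine
```

More of both goods is produced, even though B is the less productive country across the board.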

The second foundation for the economic benefits of trade is economies of scale and the benefits of specialisation (or economies of scope). Industries may not have critical mass in their own market – but through trade they can access bigger markets, benefiting everybody. This idea can work alongside comparative advantage (the concentration of watchmakers in Switzerland creates economies of scope and scale, which in turn lead to comparative advantage, for example). That makes them easy to muddle. But economies of scale and scope do not necessarily lead to comparative advantage, and you can have comparative advantage in a particular area without economies of scale or scope. This needs to be picked through with care – which alas The Economist seldom does. Now let’s step back and look at how world trade has evolved in the last 40 years or so.

The massive explosion in global trade in the 1990s and 2000s is mainly explained by comparative advantage. The thing to understand about comparative advantage is that it is driven by differences in economic structure, which create differences in opportunity costs: it is a function of difference. China, the largest driver of this surge in global trade, was a very different place from the developed countries it traded with in 1990. A vast number of its people were still employed on the land, in highly inefficient agriculture; in the developed world the agricultural workforce was nearly insignificant, yet produced much more food than those countries could consume. By shifting workers from agriculture to manufacturing, China could produce far more manufactured goods, with any shortfall in agricultural production made up by developed world farms at a negligible increase in their workforce. This meant that Chinese manufactured goods were dirt cheap, while its agricultural produce was expensive – a colossal opportunity for world trade, even if Chinese manufacturing productivity was much lower than in the developed world. The process worked something like this: low agricultural productivity ensured low wages; low wages meant cheap manufactured products, even with low manufacturing productivity. The picture was a lot more complicated than this – it wasn’t actually a case of China importing grain while exporting washing machines (it imported more capital goods than food) – but comparative advantage was the driver.

That is all Economics 101. But while economists understand how comparative advantage works in principle, they are surprisingly ignorant of how it works in practice. It has proved impossible to model the dynamics of comparative advantage in a way that produces the detailed results and predictions that are most economists’ day job. So, after they have completed their undergraduate studies, few economists think much about it. If they did they would be more alive to the issue of convergence. The Chinese economy, like the Japanese and South Korean economies before it, did not stand still. Productivity shot up, especially in agriculture, and the agricultural workforce rapidly diminished, while that of manufacturing and services rose. Convergence with the developed world happened at astonishing speed, and as that happened the differences that drove comparative advantage diminished. Developed countries started to find Chinese products becoming more expensive. The incentives for long-range trade between China and the rest of the world diminished. This hurt developed countries much more than it did the Chinese – as the Chinese benefited directly from increased productivity and rising wages. It is, I believe, one of the reasons for sluggish growth in the developed world since the great financial crisis of 2007-09, though it is almost never mentioned as a factor (and certainly not by The Economist), in spite of the great economist Paul Samuelson drawing attention to it.

This is where the second factor can come into play – economies of scale and scope. In Europe, for example, the leading economies converged in the late 19th and early to mid-20th centuries. Comparative advantage diminished. But, after the Second World War, trade within the continent flourished, so clearly something else was behind it. Why, for example, did Germany, France, Britain and Italy all have substantial car industries, all with a lot of cross-border trade? This shouldn’t happen under comparative advantage, unless the cars each country made were somehow very different from each other. In fact the economics of motor manufacture meant that consolidation into larger and larger firms was required to be competitive. Cross-border trade gave consumers more choice, and the other benefits of competition, if their own country only had room for one or two car firms. As Europe developed its single market, economic benefits flowed – but nothing on the scale that flows between more diverse economies. There are two sorts of benefit here. The first is economies of scale lifting productivity; this can work in quite a similar way to comparative advantage, with some countries specialising and others happy to have cheaper products (the aero industry is a bit like this). The second derives from good old-fashioned competition between businesses in different countries. This is very different, as it only works if multiple countries are making similar products. We must also bear in mind that, as the gains are more limited, it requires a level playing field to work; if one country suffers a systemic disadvantage, such as high transport costs because it is two oceans away, then the benefits of trade diminish. That is one reason that Europe had to develop detailed rules for free trade, while the Asian economies’ rise was based on much cruder arrangements, such as World Trade Organisation (WTO) rules.

The important thing to realise about economies of scale and scope is that they depend on technology, and not on any iron logic of economics, as is the case for comparative advantage – although you wouldn’t think it from the way many executives from large businesses talk. And technology changes with time. The late 20th century was particularly good for economies of scale, but that is changing, for two reasons. The first is the rise of technologies that diminish the costs of short production runs and individualisation (indeed the same edition of The Economist featured this in its business section – a new theory of the firm – one of the articles featured on the cover). The second is the diminishing importance of manufactured products in the economy as a whole, compared to services, such as healthcare, which are largely untradeable. All this points to reduced benefits from trade, especially between big geographical blocks like America, Europe and East Asia, as opposed to within them.

Where does that leave the current debate on trade? The first point is that I don’t think the benefits of trade are diminishing because of political obstacles; I think those political obstacles are arising because the benefits of trade are diminishing. The second is that, for developed economies, there is no great box of goodies that can be unlocked through trade liberalisation to help flagging growth along. Doubtless there are further benefits to be had – especially with less developed countries, if done in the right way – but not on the scale that saw the economic transformation of the 1990s and 2000s.

Now let’s look at the biggest specific issue bothering The Economist – the problem of US government subsidies for green industries. Europeans are worried that these will make their own industries uncompetitive. That is a legitimate worry, but if Europeans match those subsidies with their own, the damage will be limited – and, indeed, it might hasten the transition to clean technology, with the benefits that will flow from that. The Economist worries that it will lead to inefficiency and, horror, duplication. And yet duplication is a prerequisite of competition.

Still, trade remains integral to the modern way of life and deserves continued political attention. For some things, the importance of both comparative advantage and economies of scale and scope remains undiminished. Only a few countries have direct access to metals such as cobalt and lithium, which play a critical role in modern industries. And serious economies of scale or scope remain in others, such as the mining of iron ore (Australia has unmatched scale economies), or the manufacture of advanced microchips (Taiwan leads in scope economies). But the key issue is not just economic cost; it is the potential for serious dislocation if supplies are interrupted. The modern economy contains many bottlenecks. We have to balance the benefits of short-term cost savings against the risks of natural disaster and conflict. Alas, the solutions are likely to make manufactured products yet more expensive.

The reason for the rise of “zero-sum thinking” is that the economics of trade is moving in that direction too, though the benefits of free trade remain substantial. It is not surprising that other issues loom larger than trade freedom, such as security of supply and the need to accelerate the transition to clean energy. It is easy to understand why The Economist wants to turn the clock back to the days of easy trade gains and steady economic growth – but it does not help prepare its readers for the hard choices ahead.

Lies and statistics

This week The Economist has an interesting article, Unreliable research: trouble at the lab, on the worrying level of poor quality scientific research, and the weak mechanisms for correcting mistakes. Recently a drug company, Amgen, tried to reproduce 53 key studies in cancer research, but could reproduce the original results in only six. This does not appear to be untypical of attempts to reproduce research findings. The Economist points to a number of aspects of this problem, such as the way in which scientific research is published. But of particular interest is how poorly the logic of statistics is understood, not only in the world at large, but in the scientific community. This applies particularly, of course, to the economic and social science research so beloved of political policy think tanks.

One particular aspect of this is the significance of a concept generally known as “prior probability”, or just “prior” for short, in interpreting statistical results. This is how inherently likely or unlikely a hypothesis is considered to be, absent any new evidence. The article includes an illustrative example. Hypotheses are usually tested to a 95% confidence level (a can of worms in itself, but let’s leave that to one side). Common sense might suggest that this means there is only a 5% chance of a false positive result – i.e. that the hypothesis is incorrect in spite of experimental validation. But the lower the prior (i.e. the less inherently probable the hypothesis), the higher the chance of a false positive (at the extreme, if the prior is zero, no positive experimental result would convince you, as any positive result would be false – the product of random effects). If the prior is 10%, then – assuming a test with 80% statistical power, as in the article’s example – there is a 4.5% chance of a false positive (5% of the 90% of hypotheses that are false), compared with an 8% chance of a true positive (80% of the 10% that are true). So there is a 36% chance that any positive result is false (and, for completeness, a 97% chance that a negative result is truly negative). Very few people appreciate this.
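The arithmetic is easy to check. A minimal sketch, assuming (as the article’s example does) a 5% significance level and 80% statistical power:

```python
# Checking the article's arithmetic: hypotheses tested at a 95%
# confidence level (5% false positive rate), with 80% power.
prior = 0.10  # fraction of tested hypotheses that are actually true
alpha = 0.05  # chance a false hypothesis still tests positive
power = 0.80  # chance a true hypothesis tests positive

true_positive = prior * power               # 0.08
false_positive = (1 - prior) * alpha        # 0.045
true_negative = (1 - prior) * (1 - alpha)   # 0.855
false_negative = prior * (1 - power)        # 0.02

# Share of positive results that are in fact false: 36%.
print(false_positive / (false_positive + true_positive))
# Share of negative results that are truly negative: ~97%.
print(true_negative / (true_negative + false_negative))
```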

The problem is this: an alternative description of “low prior” is “interesting”. Most of the attention goes to results with low priors. So most of the experimental results people talk about are much less reliable than many people assume – even before other weaknesses in statistical method (such as false assumptions of data independence) are taken into account. There is, in fact, a much better statistical method for dealing with the priors problem, called Bayesian inference. This explicitly recognises the prior, and uses the experimental data to update it to a “posterior”. So a positive experimental result would raise the prior, to something over 10% in the example depending on the data, while a negative one would reduce it. This would then form the basis for the next experiment.
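A sketch of the update on the numbers above, using only the bare significant-or-not outcome of the test (real Bayesian inference would use the full experimental data, so treat this as illustrative):

```python
def update_prior(prior, positive, alpha=0.05, power=0.80):
    """Bayes' rule: posterior probability that the hypothesis is true,
    given only whether the test came out significant ("positive")."""
    p_true = power if positive else 1 - power    # P(result | hypothesis true)
    p_false = alpha if positive else 1 - alpha   # P(result | hypothesis false)
    evidence = p_true * prior + p_false * (1 - prior)
    return p_true * prior / evidence

print(update_prior(0.10, positive=True))   # 0.64: one positive result
print(update_prior(0.64, positive=True))   # ~0.97: a successful replication
print(update_prior(0.10, positive=False))  # ~0.02: a negative result lowers it
```

The posterior from one experiment becomes the prior for the next, which is why replication matters so much.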

But the prior is an inherently subjective concept, albeit one that becomes less subjective as the evidence mounts. The scientific establishment hates to make such subjective elements so explicit, so it is much happier to go through the logical contortions required by the standard statistical method (to accept or reject a null hypothesis up to a given confidence level). This method has now become holy writ, in spite of its manifest logical flaws. And, as the article makes clear, few people using the method actually seem to understand it, so errors of both method and interpretation are rife.

One example shows the scope for mischief. The UN’s Intergovernmental Panel on Climate Change (IPCC) presented its conclusions recently in a Bayesian format. It said that the probability that global warming is induced by human activity had been raised from 90% to 95% (from memory). This is, of course, the most sensible way of presenting its conclusion. The day this was announced, the BBC’s World at One radio news programme gave high prominence to somebody from a sceptical think tank. His first line of attack was that this conclusion was invalid because the standard statistical presentation was not used. In fact, if the standard statistical presentation is ever appropriate, it would be for the presentation of a single set of experimental results, and even that would conceal much about the thinness or otherwise of its conclusion. But the waters had been muddied; neither the interviewer nor anybody else was able to challenge this flawed line of argument.

Currently I am reading a book on UK educational policy (I’ll blog about it when I’m finished). I am struck by how much emphasis is being put on a very thin base of statistical evidence – and indeed how statistical analysis is being applied to inappropriate questions. This seems par for the course in political policy research.

Philosophy and statistics should be part of every physical and social science curriculum, and politicians and journalists should bone up too. Better still, scientists should bring subjectivity out into the open by adopting Bayesian statistical techniques.

The Euro end game

It’s been a tough year for Europhiles, especially those, like me, who have always supported the single currency and thought Britain should have been part of it. Most of them have been very quiet, and no wonder. Whatever one says quickly has the feel of being out of touch and in denial. And now this week the Economist asks, in a leading article, Is this really the end? – a piece that has been tweeted over 1,200 times and picked up over 500 comments. In today’s FT, Wolfgang Munchau’s article is headlined The Eurozone really has only days to avoid collapse (paywall). Is now the moment to finally let go, and admit that the whole ill-fated enterprise is doomed?

There is no doubting the seriousness of the current crisis.  While most of the headlines have been about sovereign debt (especially Italy’s) what is actually threatening collapse is the banking system.  It seems to be imploding in a manner reminiscent of those awful days of 2007 and 2008.  The Germans’ strategy of managing the crisis on the basis of “just enough, just in time” seems to be heading for its inevitable denouement.  Unless some of their Noes turn to Yeses soon there could be a terrible unravelling.

The most urgent issue is to allow the European Central Bank (ECB) to open the floodgates to support both banks and governments suffering a liquidity crisis. “Printing money”, as this process is often called, seems the least bad way to buy time. Two other critical elements, both mentioned by Mr Munchau, are the development of “Eurobonds” – government borrowing subject to joint guarantee by the member states – and fiscal integration – a proper Euro-level finance ministry with real powers to shape governments’ fiscal policy in the zone. Most commentators seem convinced that some sort of steps in both these directions will be necessary to save the Euro.

I have a lingering scepticism about these last two.  I thought that the original idea of allowing governments to default, and so allowing the bond markets to act as discipline, had merit.  The problem was that the ECB and other leaders never really tried it before the crisis, allowing investors to think that all Euro government debt was secure.

Still, the short-term crisis is plainly soluble, and most people will bet that the Germans will give the ECB enough room to avert collapse. But that leaves the zone with a big medium-term problem, and two long-term ones. The medium-term one is what to do about the southern members whose economies are struggling: Spain, Portugal and Greece especially, with Italy lurching in that direction. The stock answer, which is to enact reforms to make their economies more competitive, seems to involve such a degree of dislocation that we must ask if it is sustainable. This treatment is not dissimilar to that meted out by Mrs Thatcher to Britain in the 1980s (an uncompetitive currency was part of the policy mix here, deliberately or not), for which she is still widely loathed. And she was elected (though “democratically” is a stretch given Britain’s electoral system). How will people react to unelected outsiders imposing such treatment? Better than Britons would, no doubt, since there is so little confidence in home-grown politicians, but it’s still asking a lot.

And that leads to one of the two long-term problems: the democratic deficit. A lot of sovereignty is about to be shifted to central institutions, and it won’t be possible to give electors much say. The second long-term issue is dealing with the root cause of the crisis: how to handle the imbalances of trade that develop within the Euro economy. Germany simply cannot run a constant trade surplus with the rest of the zone without this kind of mess occurring at regular intervals. But there is no sense that German politicians, still less their public, have the faintest grasp of this. For them the crisis is the fault of weak and profligate governments elsewhere.

So if the Euro survives the current crisis, there is every prospect of another one down the road, either political (one or more countries wanting to leave the Euro and/or the Union) or financial (say an outbreak of inflation).

My hope earlier in the crisis was that it was part of a learning curve for the Euro governments. As they experienced the crisis, institutions would be changed and expectations made more realistic, such that the zone could get back to something like its original vision. I am afraid that there is a lot more learning to do.

The Economist shoots itself in the foot. Twice.

This week The Economist has come out for a No vote in a leader on the UK’s referendum on the voting system. It argues that AV is no improvement on FPTP, so we should vote No. It wants the system to be more proportional, with 20% of parliament’s seats selected by proportional representation (PR), and the rest by FPTP. It dismisses the argument that a Yes vote would make further change more difficult, without really saying why, beyond “It might exhaust the national appetite for reform.” This is pretty weak stuff, but on two counts the paper has undermined its own argument.

Update. Having read more of this week’s edition, the Economist’s hidden reasoning looks a bit clearer. They are worried that a Yes vote would make the Conservatives so angry that they would derail other reforms, such as that for the House of Lords. Alternatively, if there is a No vote, these reforms are more likely to go through as a consolation prize. They appear to think that these other reforms are more important.

They are also making a big deal out of the fact that, because many voters will not rank all the candidates, some candidates will be elected with less than 50% of the vote. They think this is a major problem with the Yes case; and yet an incomplete ranking is simply equivalent to an abstention. Nobody suggests that MPs should have to win the support of 50% of the whole electorate. And enough people will cast preference votes, especially if there is a major left-right polarisation, to make the change in system worthwhile. This article on NSW and Queensland state elections, which use the same preferential system envisaged here, makes that quite clear.

In the first instance, in the very same edition, the paper covers the Canadian general election, held under FPTP, with the sub-headline “A last-minute surge for the left might end up benefiting the right.” This is exactly the sort of perverse outcome that AV can do much to prevent, because it reduces the problem of the split vote. In Australia, which uses AV, the rise of the Green party has not benefited the right. Under FPTP, using first preferences, the right would have benefited royally from the Greens’ success in the last Australian general election. So, “no improvement”? The Economist makes the case against its own editorial with wonderful succinctness.
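A minimal sketch of how AV defuses the split vote, with invented ballots (the party names and numbers are purely illustrative): the left splits its first preferences, so FPTP elects the right, but under AV the eliminated candidate’s preferences transfer and the majority-preferred candidate wins.

```python
from collections import Counter

# 100 invented ballots, each a ranking of candidates: the left-of-centre
# vote splits between Labor and Green.
ballots = (
    [["Conservative"]] * 42
    + [["Labor", "Green"]] * 33
    + [["Green", "Labor"]] * 25
)

def av_winner(ballots):
    """Instant-runoff count: eliminate the last-placed candidate and
    transfer each of their ballots to its next preference, until
    someone has a majority of the ballots still live."""
    ballots = [list(b) for b in ballots]  # work on copies
    while True:
        counts = Counter(b[0] for b in ballots if b)
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > sum(counts.values()):
            return leader
        loser = min(counts, key=counts.get)
        for b in ballots:
            if b and b[0] == loser:
                b.pop(0)  # transfer to the next preference

fptp = Counter(b[0] for b in ballots).most_common(1)[0][0]
print("FPTP winner:", fptp)               # Conservative, on 42 first preferences
print("AV winner:", av_winner(ballots))   # Labor, after Green transfers
```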

Next, the paper has changed its view. It used to be a strong advocate of AV in the UK. Admittedly that was a long while back (and certainly before 1997, when the paper’s online archive starts). But the paper routinely refers to previously held positions, sometimes dating back to the 19th century. And yet this leader makes no reference to its earlier view, nor to why it has changed its mind.

I have been reading the Economist since 1984 (and its position on AV swayed me in its favour back in the 1980s – one of the reasons that my support for AV is not as lukewarm as some people’s). This is very disappointing. The editorial team was probably deeply split. Its Bagehot columnist, burnt by experience elsewhere in Europe, hates PR and fears that AV might eventually lead to PR, because more people will cast first preference votes for minor parties. Others no doubt favour full PR. I can imagine the poor leader writer trying to reconcile all this. And failing.