Seeing through the hype and reality of artificial intelligence

The 1968 film 2001: A Space Odyssey foresaw threatening general AI by the year 2001

Last week a group called the “Future of Life Institute” published an open letter urging governments to pause research on artificial intelligence so that a new worldwide regulatory framework can be agreed to prevent the technology taking a highly destructive route. There are quite a few distinguished signatories, of whom the most commented upon is the technology entrepreneur Elon Musk. On the same day the British government published a paper on AI strategy suggesting a minimum of regulation in order for this country to gain a technical edge. The first development bespeaks fear and panic amongst intellectuals, and the second the political reality of countries wanting to win the race. Is the world going to hell in a handcart?

The text of the letter is here. The core of it is this series of fears:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? 

from “Pause Giant AI Experiments: an Open Letter” – Future of Life Institute

Now let’s take a deep breath. AI is important and will have a profound effect on human life. Recent progress has been dramatic, and its latest capabilities have astonished. Its impact is not necessarily benign, and there are some serious risks attached to its development. But to describe current AI capabilities as a “mind”, and to suggest that it is on the road to replacing humanity in general understanding, judgement and control, is to misunderstand what it is – it is classic anthropomorphism of the sort that imagines your cat to be a scheming villain. If you want to unpack this a little, a good place to go is this blog piece by Melanie Mitchell, an AI academic. The interesting question is why so many intelligent people are being led astray by the hype.

For a long time, people have dreamed of building intelligence that replicates human autonomy, command and the ability to learn new tasks, in such a way that any human rôle could be replaced by it. This is referred to as “general AI”. Once you leave behind the understanding that humans, and indeed animals, are animated by some form of supernatural spirit, you are left with the logical possibility of humans building something lifelike in all its capabilities. It’s just atoms and molecules after all. And of course, if we can do that, then we can make robots stronger and better than the original, because we can engineer them that way. This has particularly appealed to the military, who could develop robot-powered weapon systems to replace frail soldiers, sailors and airmen, and doubtless spacemen too. And such is the confidence of modern humans that it is widely assumed that doing so is not so hard. During the Cold War both America and the Soviet Union worried that their opponents were close to developing just such a capability, a fear doubtless promoted by people in search of funding. Such is the grasp of military “intelligence”.

And people continue to believe that such an ability is just around the corner. In the film and novel 2001: A Space Odyssey, released in 1968, the author Arthur C Clarke and film-maker Stanley Kubrick speculated that out-of-control general AI would be developed by 2001. In the 2010s it was widely assumed that self-driving cars would be on the road by the early 2020s. But in 2023, instead of mounting excitement about their imminent rollout, there is silence. And now generative AI, which can manufacture very human-sounding bullshit from minimal instructions, is sparking this panic. Each wave of AI generates astonishing advances, and then seems to stop at the bits its promoters assumed were mere details. Researchers and technologists have a strong incentive to hype their latest achievements, and one of the most effective ways of doing this is to claim that you are close to bridging the gap to general AI. The people with the money often have little idea of what is or is not technically possible – not that the developers often have much idea themselves – and investing in the first successful general AI project sounds like a good deal. Just how hard can it be, after all? General AI occurs naturally.

Behind this lies a striking failure to understand what humans are. It is assumed that humans are naturally-occurring robots – machines created by an intelligence to fulfil particular tasks. Back in the days when I followed evangelical Christians there was a popular idea called “intelligent design”. This posited that the world, and animals and people in particular, were far too clever and complicated to have been created by dumb processes like natural selection, so they must have been designed by an intelligence – God. It seems that most people instinctively believe this, even if they disregard the divine revelation of the Abrahamic religions. Evolution is often described as if it were an intelligent process; DNA is described as a blueprint; and so on. Man has created God in his own image, and assumed he (well, maybe she) would create things in the way that men would. But the most important thing to understand about humans and the world is that there is no intelligent creator; there is no thread of intent; no design. It just happened. Over a very long time. It follows that trying to reproduce human intelligence through a process of intelligent design is going to be at best slow and frustrating, and at worst not feasible. Indeed a lot of breakthroughs in AI design, such as neural networks, are based on the idea that the thing should build itself, rather than be designed. But that has limits. Development is not going to be an orderly progression of achievements.
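The idea that a neural network “builds itself” can be shown with the simplest possible case – a sketch of my own, not from the letter or the strategy paper. A single perceptron learns the logical AND function purely by adjusting its weights in response to examples; nobody designs the answer in, it emerges from the training data.

```python
# A minimal perceptron that learns logical AND from examples.
# The weights start at zero and are nudged towards correct answers;
# the final behaviour is learned, not designed.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, initially "blank"
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out          # how wrong were we?
            w[0] += lr * err * x1       # nudge weights towards the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The training data: all four cases of logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

The limits the post describes show up even here: the perceptron can only learn what its training data and structure allow (famously, it cannot learn XOR), and scaling the same principle up is what demands the massive data and computing power discussed below.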

And indeed, each phase of AI development comes with inherent limitations. Generative AI, the current craze, requires massive computer processing power, and a huge database of knowledge in digital form. It is easy to see how this might be useful; harder to see how it leads to general AI. The battle over modern AI seems to revolve around training data and computer firepower. One of China’s advantages is that it has access to cheap labour to produce training databases. There is a paradox there. The idea is that in due course the robots will find and produce their own training data – but problems abound. Indeed for AI to be useful (including self-driving cars) I think it will be necessary to simplify the world so that it can be represented more accurately in training data. But that makes it of limited use in the wild.

And so the hype cycle goes on. At each turn we will be distracted by the prospect of general AI rather than tackle the more important issues that each iteration throws up. In the case of generative AI, that issue is how to deal with the misinformation and prejudice lurking in the training data, which the technology itself cannot recognise – and its “reasoning” is so opaque that it can be hard to spot what is causing the problem.

And what of the British government’s AI strategy? I haven’t read it so I can’t comment with authority. Generally I am suspicious of politicians jumping on bandwagons – and any call to invest in this or that technology because otherwise we will be left behind in the global race is suspect. Economic advantage usually accrues either from developing things in areas where people are not already heavily invested, or from copycatting after somebody else has done the difficult bits. But in seeking a light-touch regulatory advantage while the European Union and the US are tangled in moral panic, and China in fears of loss of party control, Britain might be onto something – though it is hard to see that the US will in the end be weighed down by over-regulation. On the other hand the US and China have advantages in access to finance, computing power and data – while the multilingualism of the EU perhaps offers it an advantage too. Britain may simply become a base for developing intellectual property owned by others.

Anyway, it is hard to see what a six-month pause could possibly achieve. Humanity may reach general AI in due course. But not for a while yet.

Automation should not lead to a workless society. Bad economic management could.

Featured on Liberal Democrat Voice

The current issue of Liberator, an anti-establishment house magazine for the Lib Dems, bemoans the lack of policy on the advance of automation and robotics:

If we are heading for a world in [which] relatively few people conventionally work – because machines can perform tasks better and cheaper – how will the non-working population be paid, and what will it be paid for?

Such thoughts are prevalent in chattering circles these days. On one occasion I expressed a little scepticism in a Facebook conversation, given that current employment rates have never been higher – and I was quickly shouted down for having my head in the sand. As it happens the Lib Dems are developing policy here, with a working group on the “21st Century Economy” in the advanced stages of deliberation, having already held a consultative session at Conference. But what should liberals be thinking?

The claims about automation and robotics are not all hype, though there is a fair bit of that too. Artificial Intelligence (AI), and its harnessing of “Big Data”, is in the process of revolutionising many areas of work. This has proceeded faster than I personally expected, after AI had gone through decades of marginal progress combined with absurd hype. Many jobs are now threatened, including vehicle drivers and many professional roles. Just how far this is going is very hard to say. Colossal resources are being ploughed in, and there have been some spectacular achievements, and yet few of the breathless boosters of the technology appear to have much idea of what intelligence actually is – they just project present progress into the future and assume that AI will catch up with humans. There is no magic in the human brain, after all, just cells, chemicals and electrical connections. My guess is that at some point AI will hit diminishing returns, and progress will slow.

But not for a while yet. Meanwhile many jobs will undoubtedly be replaced, and there will be a lot of economic disruption. But are we heading towards a near-jobless society? Here I struggle. Quite a bit of progress has been made already, after all, and yet in many countries, Britain in particular, employment has never been higher. And not just that. We are constantly aware of jobs that need to be done but are being cut. This is a lot of what is behind the fuss about austerity. Not enough care workers, doctors, or police officers: the list goes on. And there are plenty of new fields of endeavour that are opening up: cancer treatments, mental health care, green energy and so on. So the jobless society looks like self-harm rather than an inevitability.

Indeed a conventional economist would say that there is nothing much to worry about. Similar things were being said in the 19th century, as new technology cut swathes through agriculture and textiles. That didn’t work out too badly in the end, did it? As one industry becomes more efficient, it simply creates demand for others. I believe that a lot of this has been happening already. A lot of the new jobs are being created in low-productivity sectors of the economy, meaning that overall economic growth is not advancing as much as many expected. That’s just the nature of the beast.

But that is too complacent. A closer examination of 19th century economic development reveals a lot of human misery as workers were thrown out of modernising industries. It was not at all clear for many years that human wellbeing was being advanced. In any case economists tend to overlook the specifics of how particular technologies affect the overall economy. The massive advances in working-class and middle-class welfare in many economies from 1945 until the 1970s are often attributed to good management of overall demand in the economy. And yet they had everything to do with the advance of light industry and office work, based on technological breakthroughs made in the war years, which were particularly good for the creation of medium-skilled jobs. We do have to look closely at the specific implications of the technologies of the age, and adjust our economic management to ensure the best outcome for overall human wellbeing.

The post-war boom was particularly happy on two fronts. It threw up a lot of new things that people wanted to buy, from cars to washing machines to cosmetics and new textiles. And producing them was intensive in mid-level jobs on the factory floor, in distribution and in administration. Further, other industries, like insurance, produced the same mix: things people wanted, and lots of jobs. But modern technology is focused on efficiency rather than labour-intensive new products. A lot of it is about replacing labour with capital, without necessarily producing much more product or service.

But what are people going to need more of? Overall we do not need to consume more things or eat more food, though for some parts of our society more is clearly needed. The technology sector itself will generate a lot of demand, in the development of new systems and in teaching people how to make best use of them. A second obvious area is health and care. The health economy has huge potential to expand, especially here in Britain – it is much larger in the US, even though so much of the population is excluded from it by lack of insurance cover. And an ageing population will not only need more health services, but general care too. Automation and AI in health and care is also likely to generate more work, by opening up new treatments and by making diagnoses more available, faster than it destroys work by making things more efficient. That has been our consistent experience to date. And a third area of potential expansion is what is being called “experiences” – entertainment, travel, games and so on. This is mixed up with a further wrinkle: a possible increase in leisure time. People may want to work fewer days and hours. This will both create work (depending on what they do in that leisure time) and reduce the time available to work.

But there is an obvious problem with all this: money. And by money I do not mean limits to real resources, but the distribution of spending power. If spending power is too unevenly spread, too many people will not be able to afford these things, while a minority will have more money than they can actually spend on things that create work – leaving it to churn in a cycle of finance and property. That cycle of finance, incidentally, will also create some jobs – but this looks more like part of the problem than the solution. The problem is that the jobs being created, in sectors with low productivity, are often too badly paid or insecure for people to buy enough services and things. This is the way things seem to be heading in too many developed societies.

Technological advance should be a good thing. It allows us to do more while consuming fewer of the world’s scarce resources. But a skewed distribution of income means that the changes to work patterns risk suffocating the economy rather than advancing wellbeing. That is one of the central challenges of our times.