Thinking intelligently about artificial intelligence

by Margaret A. Boden

8 November 2018

This article is published in British Academy Review No. 34 (Autumn 2018).




Margaret A. Boden is Research Professor of Cognitive Science at the University of Sussex. She was elected a Fellow of the British Academy in 1983, and served as Vice-President 1989-1991.



‘There’s a whole lot of nonsense talked about AI,’ says Professor Margaret Boden FBA, when asked at what point artificially intelligent machines will take over the planet. ‘If you look at the history of AI over the last 50 years, there have been at least half a dozen instances of very widespread hype, where not only some people in the field said things that were really over the top, but the journalists and the public in general got really worked up about it.’


But, if she had to guess? ‘Well, I don’t think that the robots will take over. And there are two reasons why I don’t think that. One is that I don’t think they will be intelligent enough. And another is that they won’t want to. They don’t want anything, they do what they are designed to do. So they’re not going to turn around and want to do things that we don’t want them to do.


‘But, of course, in trying to solve certain problems that we give them, they might come up with solutions which don’t suit us ...’



Professor Margaret Boden is a world authority in the field of artificial intelligence, having spent a lifetime attempting to answer philosophical questions about the nature of the human mind, but from a computational viewpoint. She is Research Professor of Cognitive Science at the University of Sussex, where she helped to pioneer the world’s first academic programme in cognitive science and AI. She is also a technical adviser on the social implications of robotics and machine learning for the All-Party Parliamentary Group on AI. Her career was the subject of an episode of BBC Radio 4’s The Life Scientific in October 2014.


Professor Boden’s most recent book is Artificial Intelligence: A Very Short Introduction, published in August 2018 in Oxford University Press’s Very Short Introduction series. The book presents, in just 150 pages, a rounded and accessible account of artificial intelligence – its history, successes, limitations and future goals – and the political, philosophical and legal questions that it raises.


For while the machines may not yet pose a danger to the existence of the human race, as Professor Boden says, the rise of AI is going to bring about some ‘very real changes’ in the not-too-distant future, and in so doing pose a host of unprecedented challenges to our society.



At the beginning of her new book, Professor Boden says that ‘Artificial intelligence seeks to make computers do the sorts of things that minds can do,’ and lists the many and varied benefits of artificially intelligent systems. Such systems, she explains, can be found in the home, the car (and the driverless car), the office, the bank, the hospital, the sky, the Internet, and what is often called ‘the Internet of Things’, which connects the ever-multiplying physical sensors in our gadgets, clothes, and environments. Some AI systems even lie outside our planet, such as satellites orbiting in space and robots sent to the Moon or roving across Mars.


And, speaking from her home in Brighton, Professor Boden is keen to emphasise just how useful AI can be for the ordinary person.


‘It’s already improving your life in all sorts of ways,’ she says. ‘Take medicine, for example. Already there are computer systems which are better at diagnosing certain conditions than even the best human doctors. And parts of the world don’t have access to even average human doctors. So AI systems for use by people who are not expert in whatever area we are talking about – medicine is just one example – are beneficial.


‘Then, all the apps you have on your phone – I don’t know if you regard those as beneficial – but if you do, then put them on the list, because they’re all AI.’



So, the current practical applications of AI may be clear. What is less clear is how we are going to use artificially intelligent systems in the future, and, more to the point, whether we will be able to use them responsibly.


For example, there are the legal and ethical dilemmas inherent in the use of new AI technologies – such as driverless cars.


‘This is partly why Google is terrified of having a young child, or a baby, killed by one of its driverless cars,’ says Professor Boden. ‘Can you imagine the reaction to that? These things are going to have to be settled in the law courts. Who should be responsible? Should it be the manufacturers who made the car? Should it be the people who did the programming (who may be dead)? Should it be the designers? The retailers who sold it to the person who used it? Or should it be the person who used it, for deciding to use it? All of this will have to be sorted out.


‘And how much responsibility do the politicians have, in terms of regulations? Again, that’s something which is subject to different political opinions: a right-wing person and a left-wing person are likely to give very different answers, because they’ll have different ideological views on the role of government in life in general, never mind AI.’


More dramatically, will the rise of AI affect geopolitics? The United States, Russia and China have all recently announced huge investments in military AI, which is likely to result in new, more destructive weaponry.


‘Maybe you can rely on countries to be sensible and restrained with such weapons,’ says Professor Boden, ‘just as the US and the Soviet Union were with respect to nuclear weapons in the Cold War. But, of course, there are other nations that may have very different agendas, and there may be other groups – or even what we might regard as crazy individuals – who might want to use this stuff.’



Meanwhile, AI will have a huge impact on the future of work. While many projections of how many jobs will be lost, gained or changed by AI have been published over the last five years, a consensus has begun to emerge that 10-30 per cent of the tasks done by employed people in the UK are automatable.


And, says Professor Boden, such changes are already occurring. ‘Some people say it will be like the industrial revolution. There will be some jobs that will go – like jobs dealing with horses – and there will be lots of new jobs that didn’t exist before – like, for example, car mechanics. And this is already happening. If you mentioned the term “data scientist” or “data analyst” a few years ago, people would say, “What does that mean? Never heard of it.” Now there’s a desperate need for people to fill these roles, because we haven’t got enough people trained in that area. Those jobs didn’t even exist five years ago, never mind 10 years ago.


‘Another example is looking for precedents in law, which now can be largely done not just faster and more cheaply, but – in many cases – better by machines than by young lawyers.’


As a solution to the unemployment this could cause, many politicians and policy-makers are touting the introduction of a universal basic income (UBI). But that raises more questions.


‘First of all,’ Professor Boden says, ‘where is the money going to come from for UBI? If things carry on as they are, where an increasing amount of capital and financial power is in the hands of a relatively small number of companies, and if those companies don’t pay all their taxes, where is that money going to come from? So, is it actually going to be possible to provide everybody with a non-means-tested basic income which is sufficient for them to live on? That’s not at all clear.


‘And besides, will people even want UBI? Will they vote for that? And if they do vote for it, how do you ensure people lead satisfying lives when they’re not working? There could be huge social disruption – I mean, very nasty social disruption. I’m not saying it will happen, but it could.


‘So, there are all sorts of questions about UBI. It isn’t straightforward at all. And the economists don’t agree about it either.’



In October 2015, a computer program developed by Google DeepMind beat a professional human Go player for the first time in history. Go is widely considered to be the most complex board game ever devised. Five months later, the same program, AlphaGo, defeated the second most decorated Go player in history, Lee Sedol, 4-1. During the second game, AlphaGo made a move that no human ever would, a move that was described as ‘beautiful’ by onlookers and which forced Sedol to leave the room for fifteen minutes to gather himself before responding – he now uses that move in his own play.


In recent years, machines have also been programmed to paint, write poetry and compose music so convincingly that human test subjects, when shown the work, have no idea of its artificial origins.


But is this real creativity? For Professor Boden, this is a philosophical question that depends on understanding the related concepts of intelligence and consciousness – and we still know very little at the neuroscientific level about the mechanisms involved in the mind, about how the brain really works.


‘Now, it is true that there are programs which can write poetry – although I don’t know of any AI programs that can write good poetry – and produce very interesting and, in a few cases, I would say, very aesthetically satisfying graphics, including coloured paintings. There are even programs which can write prose – for example, news reports describing a football game.


‘But, if you think of a report about a football match, I don’t think that there’s any system, at the moment, that could visually recognise what was so special about that goal by David Beckham against Wimbledon [in 1996], when he scored from inside his own half. And even if it could realise how special it was, could it find the language to describe it?


‘If somebody were to try to describe that goal, they aren’t going to just say, “Oh, Beckham then scored a goal from the other side of the pitch.” They’re going to write more than that, because it was very special. And they’d not only have to have a good understanding of football, they’d have to have a good understanding of language – which, at the moment, these programs don’t have. They don’t understand language at all. They just either use canned phrases or they rely on statistics for word clusters – words that tend to appear together in human written prose – to pick their words, but they don’t understand any of the language that they use.’
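The ‘statistics for word clusters’ that Professor Boden mentions can be made concrete with a toy sketch. The Python snippet below is purely illustrative – a made-up miniature corpus and an invented generate function, not a description of any real system – but it shows the general idea: each next word is chosen only from words observed to follow the current one in the training text, with no model of what any of the words mean.

```python
import random
from collections import defaultdict

# Toy corpus standing in for 'human written prose'; a real system would
# be built from vastly more text. (Illustrative example only.)
corpus = (
    "the striker scored a goal from the halfway line and "
    "the crowd cheered as the striker celebrated the goal"
).split()

# Record which words have been seen following which: simple co-occurrence counts.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start_word, length=12):
    """Pick each next word at random from those seen after the current word."""
    words = [start_word]
    for _ in range(length - 1):
        candidates = followers.get(words[-1])
        if not candidates:  # dead end: this word was never seen followed by anything
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

A generator like this can string together plausible-looking phrases, which is exactly why, as Boden notes, surface fluency is no evidence of understanding.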



All the questions and challenges posed by the rise of AI, involving issues of philosophy, ethics, politics, law … so much to fit into Artificial Intelligence: A Very Short Introduction. How easy was it to write?


‘You have to think very hard about what the intellectual priorities are. And obviously, the less room you’ve got to say stuff, the more difficult it is to decide what is important and what should be communicated. It isn’t easy!


‘One thing that helped was that, a few years ago, I wrote Mind as Machine, a two-volume history of cognitive science, which included a lot about AI. Those 1,300 pages captured my life’s work. So, I’d done a lot of the serious thinking already, deciding what was important, and what related to what.’


And an interdisciplinary approach is key. ‘You have to read a hell of a lot of stuff in different disciplines,’ she says. ‘My two-volume history, for example, draws on classical times, and involves philosophy, psychology, linguistics, anthropology, neuroscience, theoretical biology, computer science and AI. And that involves straddling the arts-science divide. You have to have a sense for language and the arts and various human aspects of psychology, as well as being able to understand scientific language in neuroscience or computer science. You have to be a very queer fish – and I am a very, very queer fish!


‘My first degree was in medical sciences – I was planning to be a psychiatrist at that point – my second degree was, in effect, in philosophy, and my PhD was in psychology. That’s a very unusual background, but it is why I am able to write about artificial intelligence in the way that I do.’


Interdisciplinarity is something about which Professor Boden is passionate. ‘I’m very much against this increasingly narrow specialisation that’s creeping into academia everywhere. If you ask me which side of the arts-science fence I sit on, my answer would be that I don’t sit on either side. I sit in the middle! I jump down to one side from time to time, and then immediately jump back up and over to the other side. I identify with both sides and neither.’



So, while reassuring us that the machines won’t actually take over the planet, how does Professor Boden see the future for humans and AI?


‘Well,’ she says, ‘I have four grandchildren and I don’t envy a single one of them. I think that with AI (and other problems like global warming), they’re going to have a very hard life, and I think that their children, when they have them, will have it even harder. As I’ve said, AI is already improving your life in all sorts of ways. It’s just that there are a huge number of open questions and the people who take the time to think about them frequently disagree about the answers.


‘We simply need to make sure that AI is put to good human use.’



Margaret Boden was speaking to Joe Christmas.

