We live in a world that is being transformed by vast volumes of data and an increasing awareness of its value, by growing computing power and, perhaps above all, by the algorithms of artificial intelligence (AI). In combination, these elements are changing the ways we all live and how organisations deliver goods and services.
New demands on the workforce
AI ‘machines’ are good, and continually becoming better, at reading, hearing, seeing, translating and taking over any process that is rules-based. This facilitates the re-engineering of many processes and has deep implications for the future of work. There will be increasing demand for a highly skilled workforce that can develop and implement these systems, and there will be a continuing demand for the kinds of service jobs that can’t be taken over by machines. In the ‘middle’, large-scale ‘hollowing out’ is predicted, sometimes estimated at as much as 50% of employment, as jobs are automated. How many new jobs will be created is a matter of speculation: will this ‘revolution’, like past ones such as the shift to a post-industrial economy, create an equivalent number of new jobs, or will this time be different? There are many uncertainties here, and they are explored in a recent report by the British Academy and the Royal Society. Whatever the outcome, there are huge implications for the future of education.
Technical and ethical challenges
The impact of AI is further complicated and challenged by a range of ethical questions. Many processes demand access to personal data, and organisations make the biggest advances by linking such data from different sources. What are the implications of this for data access, ownership and privacy? Machines will ‘make decisions’, such as medical diagnoses or the determination of insurance premiums. Is such decision-making transparent, and therefore defensible? Is it fair and free from bias? These questions place responsibilities on individuals as citizens and as workers, and on organisations of all kinds. AI-driven machines will also be used to influence us – nudging and recommending. This has always been a part of marketing and electioneering, but the sophistication of new systems, particularly with the propagation of ‘fake news’ through social media, raises new issues. All these challenges manifest themselves in different ways in different sectors.
Rethinking the curriculum
At least one substantial part of our response takes us back to what is always one of the fundamentals: the education system. Education has to provide the basis for individuals to lead good lives, to contribute to others leading good lives, and to have skills as employees – perhaps to carry good ‘values’ into employment? Organisations need help to build a value system that will guide their planning and development in this new world. This implies that we certainly need a population skilled in mathematics and the STEM subjects, with data science and AI as new elements in an interdisciplinary curriculum. But we also need education that facilitates understanding of this world and responds to the implicit questions of what ‘good’ means in these contexts – so we need not just AI science, but also the social sciences and the humanities. This will demand some urgent rethinking about curricula at all levels: there will be a need – not a new one, but one that has rarely been met effectively – to balance breadth and depth. Can this kind of thinking about education help us to develop and understand a meta-system that sits above the AI world? Thinking caps on!
Sir Alan Wilson FBA FRS was until recently Chief Executive of the Alan Turing Institute and is now Director, Special Projects; he is Executive Chair of the Ada Lovelace Institute at the Nuffield Foundation.
The impact of AI on work, produced by the British Academy and the Royal Society, is available to read in full online.