We are now in an era in which AI could profoundly shape the ways we live and work.
It is therefore crucial that we thoughtfully address, and put in place systems to mitigate, both the immediate impacts of AI on people's lives and livelihoods and the longer-term concerns surrounding existential risk. This is essential to realising the significant benefits and opportunities these technologies can offer, while managing risks and preventing harm.
AI is not solely a technical matter but one that cuts across science, engineering, the humanities, the social sciences, and the medical sciences. It is essential that all these disciplines come together to address the complex challenges and opportunities presented by AI. A collaborative, interdisciplinary approach to defining how AI is governed will ensure its responsible and safe development and deployment, as well as its ethical use. It is crucial that the public is meaningfully involved in these discussions from the outset.
Together, the UK’s four national academies are committed to championing comprehensive AI safety standards that incorporate ethical and societal considerations into AI development and deployment, balancing the risks of AI against its potential to deliver great benefits to society.
While the longer-term risks are important, the near-term risks associated with AI, which have the potential to erode trust and hinder adoption of beneficial new technologies, must also be addressed. This is vital to ensuring the benefits of AI are distributed across all of society, from advances in healthcare and the delivery of critical public services to the scale-up of AI companies and wider improvements to productivity and work across the economy.
International collaboration between governments, AI companies, researchers, and civil society is also critical to understanding and effectively managing the risks posed both by established technologies and by those at the frontier of AI. The challenge of governing future AI technology on a global scale will require sustained effort by all involved parties. The UK’s four national academies welcome the progress made at the AI Safety Summit. As national and international bodies and future summits are established to build safe AI, the Academies will ensure that experts and professionals across science, social sciences, humanities, engineering and health can support horizon scanning and responses to emerging risks.
Signatories: the Academy of Medical Sciences, the British Academy, the Royal Academy of Engineering and the Royal Society.