A new House of Lords report says the UK government needs to better coordinate its artificial intelligence (AI) policy and the use of data and technology by national and local government.
The report examines the progress made by the government in the implementation of the recommendations made by the select committee in its 2018 report, ‘AI in the UK: ready, willing and able?’.
Lord Clement-Jones, chair of the select committee on Artificial Intelligence, said: “The Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency. There must be more and better coordination and it must start at the top.
“A Cabinet Committee must be created whose first task should be to commission and approve a five-year strategy for AI. The strategy should prepare society to take advantage of AI rather than be taken advantage of by it.
“The Government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the UK.”
Other findings and conclusions include:
- The increase in reliance on technology caused by the Covid-19 pandemic has highlighted the opportunities and risks associated with the use of technology, and in particular, data. Active steps must be taken by the Government to explain to the general public the use of their personal data by AI.
- The government must take immediate steps to appoint a Chief Data Officer, whose responsibilities should include acting as a champion for the opportunities presented by AI in the public service, and ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.
- A problem remains with the general digital skills base in the UK. Around 10 per cent of UK adults were non-internet users in 2018. The government should take steps to ensure that the UK's digital skills are brought up to speed, and that people have the opportunity to reskill and retrain so they can adapt to the changes in the labour market brought about by AI.
- AI will become embedded in everything we do. It will not necessarily make huge numbers of people redundant, but when the Covid-19 pandemic recedes and the government has to address its economic impact, the nature of work will change and there will be a need for different jobs and skills. This will be complemented by opportunities for AI, and the government and industry must be ready to ensure that retraining opportunities take account of this. In particular, the AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific national training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
- The Centre for Data Ethics and Innovation (CDEI) should establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses.
- For its part, the Information Commissioner’s Office (ICO) must develop a training course for use by regulators to give their staff a grounding in the ethical and appropriate use of public data and AI systems, and their opportunities and risks. Such training should be prepared with input from the CDEI, the government’s Office for AI and the Alan Turing Institute.
- The Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.
- The UK remains an attractive place to learn, develop and deploy AI. The government must ensure that changes to the immigration rules promote rather than obstruct the study, research and development of AI.
The report identifies growing investment in AI, as well as its deployment. In 2015, the UK saw £245m invested in AI; by 2018, this had increased to over £760m, and in 2019 it was £1.3bn. Uses of AI have been seen in everything from agriculture and healthcare, to financial services, through to customer service, retail and logistics. AI is being used to help tackle the Covid-19 pandemic, but also to underpin facial recognition technology, deep fakes and other ethically challenging uses.
The report states: “AI has become such a prevalent feature of modern life, that it is not always clear when, and how, it is being used. It is all the more important that we understand its opportunities and risks”.
Quoted in the report, Professor Michael Wooldridge, head of the Department of Computer Science at the University of Oxford and also programme director for AI at the Alan Turing Institute, said that “data and privacy” was one of the risks for artificial intelligence in the next five years.
He added that since the select committee’s original inquiry in 2017, “we have seen endless examples, every week, of data abuse. Here is the thing: for current AI techniques to work with you, they need data about you. That remains a huge challenge. Society has not yet found its equilibrium in this new world of big data and ubiquitous computing.”
Echoing this concern, Dr Daniel Susskind, a fellow in Economics at Balliol College, Oxford, said: “Before the pandemic, there was a very lively public debate about issues of data privacy and security. At the start of the pandemic, a ‘do what it takes’ mentality took hold with respect to developing technologies to help us to track and trace the virus. Technology companies around the world were given huge discretion to collect smartphone data, bank account statements, CCTV footage and so on in a way that would have been unimaginable eight or nine months ago.
“There is an important task in the months to come, once the pandemic starts to come to an end, in reining back the discretion and power we have granted to technology companies and, indeed, to states around the world.”
Caroline Dinenage MP, minister for digital and culture, said that “the public feel deeply suspicious of some parts of AI. There seems to be no rhyme or reason as to how we will embrace some aspects of it and not others, on that aspect of trust”, adding that the government wants “to ensure the public understand AI, its powers, its limitations and its opportunities, but also its risks”.
The report also identified concerns about the effect of AI and automation on jobs, anticipating “further significant disruption”. Professor Wooldridge said that “AI will change the nature of work” and that “AI will become embedded in everything we do. It will not necessarily make huge numbers of people redundant, but it will make people redundant.”
This has a knock-on effect on the skills shortage and general readiness of the UK to cope with, and take advantage of, the changing nature of work. The report quotes a Microsoft survey from August 2020, ‘AI Skills in the UK’, which stated: “Only 17 per cent of UK employees say they have been part of re-skilling efforts (far less than the 38 per cent globally) and only 32 per cent of UK employees feel their workplace is doing enough to prepare them for AI (well below global average of 42 per cent)”.
Similarly, concerns over the effects of Brexit on the AI field in the UK were raised. Serious problems in retaining leading international researchers on AI were laid bare, with many world-class personnel already reported to have left the UK and returned to mainland Europe, disillusioned by the tone of conversation around the UK’s exit from the EU.
The report states: “The Government must ensure that the UK offers a welcoming environment for students and researchers, and helps businesses to maintain a presence here. Changes to the immigration rules must promote rather than obstruct the study, research and development of AI”.
However, there was an overall positive perception of the progress being made. Professor Wooldridge said: “I give a big thumbs up for what has happened nationally in AI in the UK over the last few years. We have done the right things.”
Looking forward, he added a note of caution: “The one thing I would ask is to understand that this is a long-term project. Let us hold this course and be aware that this is not a one-year or 18-month project.”