Meeting of the Parliament 13 November 2019
I welcome the fact that the debate has been brought to the chamber. Politics can be very short term. Perhaps especially during an election campaign, we are all a little bit guilty of looking just at what is immediately ahead of us. As a society, we need a chance to have a forward-looking debate about what the future has in store for us, as well.
When Parliament debates things such as digital participation, we sometimes focus only on the positives—how many people are online and how fast their broadband connections are—instead of thinking about how we are using the technology and how it is changing society. That is not a criticism of any one party or of government, as opposed to the private sector; it is something that we are doing as a society.
Technological change constantly forces us to think differently about how we will deal with the new opportunities and challenges that lie ahead. Einstein wrote, in his time, that
“Today the atomic bomb has altered profoundly the nature of the world as we knew it, and the human race consequently finds itself in a new habitat to which it must adapt its thinking.”
I defy anyone to suggest that the digital world and the prospect of artificial intelligence will not alter our world every bit as profoundly. Our thinking rarely keeps pace with the changes around us, and we are only just beginning to come to terms with a connected and networked world.
When I was a kid, I read science fiction stories about the idea that we would all have a device like my smartphone, with which we could, at the touch of a screen, communicate with any person anywhere in the world and access the sum total of human knowledge. It was a utopian idea, and I never dreamed that it would unleash the social-media bin fire that we now live in, or of how it has opened up opportunities for unscrupulous people to hack our democracy.
We need to begin to think about such issues, and we have a great deal of catching up to do on new developments, including AI. An open question faces us all: will artificial intelligence be a tool to help us all to expand our capabilities and intelligence, or will it become a way for us to outsource our intelligence, our thinking and our human agency to technology that we do not really control?
So much of the development in AI is being done by the private sector, which is focused on the opportunities and the economic benefits that it might gain, but not so much on the potential downsides for society.
Some of the biggest challenges might come from possibilities that we cannot predict and from questions that we do not even know how to ask, although we must do our best to ask them. Having this conversation is not a rejection of the positives. I see more upsides than downsides, but if we are to truly maximise the social benefit that technology offers us, and minimise the risk of harm, the conversation is necessary.
I was, therefore, not happy to see the debate being framed purely in terms of opportunities, so I lodged an amendment that sets out some of the risks. I welcome the work that is under way and that the Scottish Government and the UK Government have tentatively begun to do. There is recognition that we need an ethical framework, but we also need to acknowledge that we do not yet have it, even at the theoretical level, and that even if we achieve it at the theoretical level, we are still far from having the regulatory tools that can enforce such a framework. I was interested to look at the Data Lab’s website on that work, but there is no mention of what an ethical framework might encompass.