AI @ Surrey workshop

Surrey is an excellent place to study and research central issues relating to Artificial Intelligence (AI) and the technologies that utilise it. As an ethicist and political philosopher with an interest in AI, I was privileged to contribute to this year’s workshop on Law and AI hosted by the School of Law at Surrey, a major hub for AI activity in the University. My contribution to the workshop was twofold.

First, I was honoured to be the discussant of Professor David Thaw’s (Law, Pittsburgh) thought-provoking paper ‘Hacking Democracy’. In the paper, Prof Thaw considers how foreign governments can utilise AI-based technologies, such as bots and other forms of cyber-interference, to manipulate elections, primarily through Changing Opinion Operations (COOs) – operations that change voters’ minds and influence how they cast their ballots. Responding to Prof Thaw’s paper from a philosophical, rather than legal, perspective, I argued that COOs generate two basic theoretical problems. The first is that they usually involve a degree of deception. Bots on social media, for instance, will masquerade as citizens of the state in which elections are taking place, rather than reveal that they have been programmed by a foreign – and possibly hostile – government. Without this deception, it is doubtful that COOs would work, for voters would be likely to become suspicious of the intentions behind the messages they receive.

The second problem – mostly for the theory of democracy – is that COOs reveal a weakness of current theoretical approaches to democracy. In particular, there is a tendency to assume that voters arrive at the ballot box with relatively solid, preconceived preferences. This, one could argue, makes some forms of representative democracy especially susceptible to COOs. Paying more attention to how voters form their preferences, and giving them the opportunity to deliberate with others on those preferences, might be the best way to counter the potentially pernicious effects of COOs.

My second contribution to the workshop was a personal reflection on the state of the public discourse on AI, drawing in particular on my experience with the academic and public debate on autonomous weapons – or what some refer to, rather sensationally, as Killer Robots. The use of AI-based technologies by the military is a hugely contentious issue. Not only that; the structure of the debate surrounding it, I argued at the workshop, also sheds light on public engagement with AI more generally. Based on my experience with the ‘Killer Robots’ debate, I pointed out two ways in which AI is often presented to the public. First, it is often left unclear what AI is and how it is utilised in specific technological contexts. Second, AI tends to be presented as a more or less binary issue: either AI is unequivocally a force for good or it is a force for ill. However, such generalisations, I argued, are far too crude when it comes to something as complex as AI. Academics working on AI, I concluded, need to look for ways of adding nuance to the various debates on the subject. Policy makers and the general public deserve accurate and technologically realistic assessments of AI; without them, it will be impossible to create sound regulation for AI, and AI will not win acceptance among the general public.

Alex Leveringhaus, March 2019