Artificial Intelligence (AI) offers both a wealth of opportunity and a significant challenge for a society seeking to handle its power responsibly. The Digital Societies research grouping in the Department of Sociology at Surrey carries out research across multiple substantive domains, looking beyond the hype to explore the impacts of AI and to inform policy and practice. Our research applies social science expertise to understand and shape the growing influence of AI across public services, healthcare and commercial sectors, with a particular view to highlighting unanticipated impacts in relation to enduring and emerging social inequalities.
Across all these domains, our research goes beyond studying downstream societal impacts: we seek to understand the active ways in which people engage with AI and shape its outcomes, and the power relations that make some people better able than others to take on those active roles. We also examine processes of development and implementation, both engaging with technology developers and working on ways of bringing stakeholder voices actively into the developments that will affect them. PJ Annand, for example, is working on approaches to foster participatory AI research and systems development. Christine Hine is a Fellow of the interdisciplinary Surrey Institute for People-Centred AI, forging connections between research on AI in Sociology and wider networks within and beyond Surrey.
Our research also encompasses the ethical dimensions of AI, broadly construed: how we handle concerns about what AI should and should not do, whether at the level of policy and strategy or in the everyday dilemmas we encounter when using AI in decision-making. Below we introduce some current projects across four substantive domains: policing; health and wellbeing; homes, families and media; and public policy. We are keen to take part in ongoing conversations and collaborations across these areas with those working in technical disciplines, policy development and implementation.
Policing
The use of AI in policing can be understood as a site of contestation. Police forces across the world hold vast amounts of data, some of which could be used to create new insights into crime, help catch wanted persons and prevent crime from occurring. Yet police forces must balance these possibilities with the realities of using AI in police work: namely, its legal, ethical, social and organizational impacts. Jonathan Allen and Tyler Dadge explore such issues in their respective projects on two specific applications of AI in policing.
AI is being used by police forces to predict places where crime could occur, identify likely offenders and anticipate victims of crime. Jonathan Allen is researching how the police use predictive analytics in their work, how it is integrated into practice and its implications for job roles and responsibilities. Using a case study approach, this research aims to inform police decision-making, training and best practice. It also seeks to offer other police forces in England and Wales considerations for the use and integration of predictive analytics in police work.
Facial recognition uses AI algorithms to detect, analyse and match faceprints from live or still images against a database of known persons. In policing it is used mainly to identify known or suspected criminals. Police forces use three types: Live, Retrospective and Operator Initiated. Tyler Dadge is conducting interviews with people within policing, companies developing facial recognition, civil rights groups and researchers on a range of related topics, seeking to build an understanding of how this technology can be used within policing while limiting the negative impact on the human and data protection rights of the citizens the police serve and protect.
Health and wellbeing
Chatbots use AI and Natural Language Processing to simulate human conversations, and can offer new ways of interfacing with support services. Rob Meadows and Christine Hine are carrying out research on the use of chatbots in mental health and wellbeing apps, exploring where people find it acceptable to engage with a chatbot-enabled app and how this experience compares with conventional forms of mental health support delivered by humans.
In healthcare settings, AI can be deployed to detect patterns in large quantities of data in support of clinical decision-making. Christine Hine is working with the Care Research and Technology Centre of the UK Dementia Research Institute, exploring how we handle ethical issues that arise in machine-learning-enabled remote monitoring for people living at home with long-term conditions such as dementia. The aim is to develop better ways both to reflect ethical concerns within the development process and to support potential users in making informed choices; the research has also produced resources to help potential users make informed decisions about smart care.
Homes, families and media
Our everyday lives are increasingly permeated by AI-enabled services that shape our access to information and entertainment. Ranjana Das is currently researching how parents (of 0-18 year olds) navigate AI algorithms in relation to parenthood, with specific attention to algorithms used in search engines, feeds and filters, and recommendation systems.
Ranjana Das has recently completed a project on media personalisation with Philip Jackson of FEPS and Rhianne Jones of the BBC, with various journal papers now under review. The team conducted three waves of citizens' councils built around technical use cases of AI in data-driven media personalisation, investigating citizens' views about how their personal data was handled in these and other algorithmic systems.
AI in public policy
Policymakers in a growing number of countries use AI algorithms in the provision of public services and state benefits. This raises ethical, philosophical and social questions, and leads to important issues of responsibility, accountability, transparency and the quality of social decision-making.
Nigel Gilbert and Martha Bicket are contributing to AI FORA, a research project funded by the Volkswagen Foundation that studies the use of AI systems in public goods provision around the world. Using a participatory and emancipatory research approach, AI FORA aims to contribute to better AI and a more just use of digital technology.
Please note that articles published on this blog reflect the views of the author(s) and do not necessarily reflect those of the Department of Sociology.