By Tyler Dadge
On the 22nd of June 2022, I presented at the University of Surrey Doctoral College Conference, discussing my PhD research on the police’s use of facial recognition, and its relationship with citizens’ human and data protection rights. I was delighted to be awarded the prize for the best afternoon parallel session oral presentation.
Here I offer a brief summary of some of the key points from the talk, followed by some reflections on the experience of presenting at the event.
What types of facial recognition do the police use?
The police currently use three types of facial recognition. The most popular is Retrospective Facial Recognition (RFR), which South Wales Police, Gwent Police, Nottinghamshire Police, the Met Police, and Cheshire Constabulary have all adopted. RFR is used after an event has happened: it compares still images of the faces of unknown persons against reference images on a database of police-accessible images known as a “watchlist”.
The second most common type of facial recognition used in policing is Operator Initiated Facial Recognition (OIFR), which is used by South Wales Police, Gwent Police, and Cheshire Constabulary. When I explained how this type of facial recognition worked, it was met with a few worried faces from the audience. OIFR works through a facial recognition app on a police officer’s phone: when the officer takes a photo of a person, the app compares that photo against a “watchlist”, potentially identifying that person to the officer.
The third and final type of facial recognition used by the police is probably the best known but least used: Live Facial Recognition (LFR). It is currently used by the Met and South Wales Police. LFR compares a live camera feed of faces against a tailored “watchlist”; if a match is generated, it is reviewed by a police officer, who decides whether or not to stop the person.
What is a watchlist and who is on one?
A watchlist is a database of police-accessible images of people who are of interest to the police or courts. Those who can be included are people who:
- Are wanted by the courts or police
- Are suspected of having committed, or being about to commit, an offence
- Have breached bail conditions, a court order, or other location-based restrictions
- Are missing persons deemed to be at risk of harm
- Present a risk of harm to themselves or others
- Are victims of an offence
- Are believed to have important information about a suspect
- Are close associates of a suspect
- Are witnesses to an offence.
Scanning the audience for their reactions, I noticed a lot of shocked faces. I went on to explain that whether an individual fits these criteria is completely down to police discretion, and that the College of Policing offers forces very little guidance on who can and cannot be included. I obviously cannot testify to the thoughts of the audience members, but I do know that the shocked expressions reflect criticisms of the College of Policing’s guidance on inclusion on a watchlist, with many arguing that it isn’t specific enough and risks harming vulnerable people.
Key Themes
Once I had explained the types of facial recognition the police use, what a watchlist is, and who can be on one, I went on to discuss what the current literature has to say about the police’s use of facial recognition. To do this, I broke the discussion down into three main themes that I have identified as being regularly discussed in the literature:
1) Discrimination
The first theme that I discussed was the issue of discrimination within the technology. I discussed Buolamwini and Gebru’s 2018 study of facial recognition algorithms from companies like Microsoft and IBM. They found that lighter-skinned males had a false positive match rate (the rate at which someone is wrongly matched with a person on a watchlist) of 0.8%, whereas darker-skinned females had a false positive match rate of up to 34.7%. They also found that most facial recognition algorithms were less accurate for females than for males. The US National Institute of Standards and Technology has noted that accuracy across racial groups is improving; however, others argue that the algorithms still discriminate against transgender and non-binary persons, children, older persons, and disabled persons.
2) Human and Data Protection Rights
During my presentation, I explained how the College of Policing had acknowledged that Live Facial Recognition could impact our human rights. I also gave the example of a facial recognition company, Clearview AI, which was used by police forces in England and Wales and was fined £7.5m after it was revealed to be responsible for one of the biggest breaches of UK citizens’ biometric data. This concern from the literature was the focus of the questions asked by the audience during the Q&A session. The audience asked how facial recognition is allowed to be used if it impacts our human rights, and how the police themselves ensure that our biometric data is protected. I answered these questions by explaining that the police rely on the fact that some rights are qualified, meaning the police can interfere with individual rights in order to protect the wider community; this is how they justify their use of facial recognition. I also explained that compliance with data protection law is largely left to the facial recognition software companies.
3) The need for more regulation and/or legislation
This third concern, the need for more regulation and/or legislation, is arguably the biggest to come out of the literature, and it is often cited as necessary because of the two previous concerns: discrimination, and human and data protection rights. Fussey and Murray argue that the legal basis underpinning live facial recognition is vague and allows for too much police interpretation and discretion. In March 2022, the College of Policing released its guidance document for LFR (ignoring the other two types). It is 38 pages long and confirms what Fussey and Murray had said about the legal basis two years previously: it often refers police forces to other policy documents without expanding on how those policies apply to police use of facial recognition. I had printed out the College of Policing document and held it up so that the audience could have a visual representation of how small it is; it shocked the audience!
Presenting at the Doctoral College Conference was a great experience. Not only did it help me to identify some of the key aspects of my research topic that I wanted to discuss (and to condense what I wanted to say into 10 minutes!), it was also helpful to engage with people outside of the topic and be questioned, making me think about my research from different perspectives. Being questioned also gave me a confidence boost, as I realised that I know more than I think I do!
*Please note that articles published on this blog reflect the views of the author/s and do not necessarily reflect those of the Department of Sociology.