Facial recognition helps fight serious crime, but for minor UK offences it should be off limits | Asress Adimi Gikay


Privacy campaigners have long considered the UK an outlier among democracies for its stance on facial recognition technology, since we allow the police to regularly deploy it in public spaces. Further concerns were raised last week when it emerged that the new criminal justice bill could allow the police to use the technology to run a search on millions of driving licence holders.

Campaigners have argued that the bill puts all UK drivers on a “permanent police lineup”. A day later, the Times reported that the government is increasingly calling for police forces to adopt the technology nationwide.

Facial recognition uses artificial intelligence software to compare an individual’s biometric facial features with existing records and estimate their similarity. Live facial recognition compares, in real time, an image captured by a live camera with an existing police “watchlist”; retrospective facial recognition involves matching after the fact, for instance once a suspect has left a crime scene.
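At its core, this is a similarity search: the system converts each face into a numerical “embedding” and flags watchlist entries whose embeddings score above a threshold. The Python sketch below is a hypothetical illustration of that matching step only; the embedding size, the 0.6 threshold and the function names are my assumptions for the example, not a description of any force’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings, ranging from -1 to 1.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_against_watchlist(probe: np.ndarray,
                             watchlist: dict[str, np.ndarray],
                             threshold: float = 0.6) -> list[tuple[str, float]]:
    # Score one captured face against every watchlist entry and return
    # only the matches that clear the alert threshold, strongest first.
    # The threshold is an operator-chosen setting: raising it reduces
    # false alerts at the cost of missed matches.
    scores = [(name, cosine_similarity(probe, ref))
              for name, ref in watchlist.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Illustrative usage with random stand-in embeddings.
rng = np.random.default_rng(0)
watchlist = {"suspect_a": rng.normal(size=128), "suspect_b": rng.normal(size=128)}
probe = watchlist["suspect_a"] + rng.normal(scale=0.1, size=128)  # noisy capture
print(screen_against_watchlist(probe, watchlist))
```

Note that the output is a ranked list of candidates, not a verdict: in a well-run deployment a human officer reviews any alert before acting on it, a point the debate below returns to.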

The push for the technology in policing is motivated by its public safety benefit. The Metropolitan police’s latest publicly available data shows that between 2020 and 2023, 34 people were apprehended through live facial recognition. In October 2023, the force identified 149 retail crime suspects using retrospective facial recognition. And after a deployment in south London on 14 December, police arrested 10 people for offences including “threats to kill, recall to prison for robbery and possession of an offensive weapon”. These results support the claim that the technology is effective in assisting the police and freeing up resources.

But it also poses risks. It could misidentify people, leading to wrongful arrests; invade people’s privacy; expand surveillance; and help governments curtail democratic rights such as peaceful protest. These risks, however, stem largely from how the technology is used, and they could be mitigated by adopting safeguards.

The technology is known to misidentify people, especially women of colour. Big Brother Watch claims that the facial recognition systems used by the Met and South Wales police were more than 89% inaccurate between 2016 and 2023, and a 2019 review of the Met’s deployments by the University of Essex’s Human Rights Centre found an accuracy rate of only 19.05%.

However, an audit of police facial recognition systems published this year by the UK’s National Physical Laboratory concluded that, although the software performed worse on black female faces, at optimal settings the difference was not statistically significant – that is, no larger than could arise by chance.

The technology’s accuracy will improve, but its potential harms depend on how it is used. Despite UK police using facial recognition for more than six years, there has not been a single reported case of wrongful arrest, in contrast with the US, where several incidents have been documented. The danger lies in treating the technology as an ultimate decision-maker rather than as a tool subject to human oversight.

Campaigners assert that facial recognition violates privacy rights, labelling it “dangerously authoritarian”. Yet the use of such systems is governed by existing privacy law, which permits necessary and proportionate interference with the right to privacy for valid reasons – and law enforcement is one of them.

Its advocates argue that it can be used while respecting people’s privacy – by choosing deployment locations based on necessity and by automatically deleting personal information acquired during a deployment.

When it comes to the criminal justice bill, the worries revolve around section 21, which could allow the police to search driving licence records to identify suspects in a wide range of crimes. Giving police unlimited access to the biometric data of millions of people to investigate minor offences would likely conflict with privacy legislation: in 2020, in a challenge to South Wales police’s deployments, the court of appeal found the use of facial recognition unlawful partly because it lacked constraints on where the technology could be deployed and who could be targeted.

Society’s historical unease with emerging technologies could also explain the anxiety about facial recognition. During the 1980s and 90s, European countries resisted CCTV cameras because of privacy concerns. Today, the UK is estimated to have 5m surveillance cameras, with London alone housing 942,000 of them.

As the arrest of Sarah Everard’s murderer demonstrated, CCTV footage can play a critical role in solving serious crimes.

Contrary to the perception among campaigners, facial recognition already enjoys some public support. In a 2022 joint survey by the Ada Lovelace Institute and the Alan Turing Institute, 86% of participants believed that police use of the technology is beneficial.

While some safeguards exist within UK law to tackle disproportionate use of the technology, loopholes remain.

For example, there are currently no rules restricting the use of live facial recognition to the investigation of serious crimes – it can equally be deployed against minor ones.

Although campaigners portray the UK as an outlier in its approach to facial recognition, trends in the US and the European Union suggest growing acceptance of proportionate uses of the technology. The focus should be on urging policymakers to adopt a nuanced approach that allows society to reap the benefits while mitigating the risks.

Asress Adimi Gikay is a senior lecturer in AI, disruptive innovation and law, Brunel University London
