The story of facial recognition in UK policing starts in a place familiar to many: Notting Hill. But it’s not the romantic comedy everyone remembers. It’s a far more complex tale about a technology’s shaky start.
In 2016, police first tested Live Facial Recognition (LFR) in Notting Hill. The technology was crude, the legal framework was undeveloped, and public awareness was minimal. The Metropolitan Police have since improved the technology, even subjecting their algorithm to independent testing. Yet, the old statistics from those initial trials still haunt the conversation. Citing that outdated data is like “jousting with fossils.”

The legal and political landscape has also matured. The government and regulators have taken a pragmatic stance on this controversial technology.
Artificial intelligence is changing our world, and the impact of facial recognition hinges on a delicate balance between what's technologically possible, what's legally permissible, and what society will accept. The third element, public consent, is often the most crucial: police cannot use these tools effectively without public trust.
Those early trials at the Notting Hill Carnival caused enough public backlash for the Mayor’s Office to promise Londoners the technology wouldn’t return the following year. It was a time when law enforcement agencies around the world were experimenting clumsily with technology, even repurposing algorithms designed to model earthquake aftershocks to predict crime. That clumsy start has made it harder for police to adopt facial recognition, and public resistance to the technology has become a barometer of trust in policing itself.
The police cannot simply build smarter biometric systems and tell citizens the systems are good for them; it won’t work. Police and public alike need a new approach, one that values transparency and accountability.
A real-world example in Denmark Hill, a different part of London, shows how far the technology has come. In January, the police had a near-perfect use case. An LFR camera van spotted a convicted paedophile, David Cheneler, who was under a court order not to have contact with children. The camera matched his face against an image on its watchlist as he walked with a six-year-old girl. The case was a clear success for biometric policing.
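For readers curious about the mechanics, LFR systems broadly work by converting each passing face into a numerical “embedding” and comparing it against embeddings of watchlist images, raising an alert only when the similarity clears a confidence threshold. The sketch below is a minimal illustration of that threshold logic, not the Met’s actual system: the function names, the 128-dimension embeddings, and the 0.62 threshold are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: real LFR systems use trained neural networks
# to produce face embeddings. Here, random vectors stand in for them.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray, watchlist: dict, threshold: float = 0.62):
    """Return the best watchlist match scoring above the threshold, or None.

    `threshold` is an assumed tuning parameter: raising it reduces false
    matches at the cost of missing genuine ones.
    """
    best_name, best_score = None, threshold
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name else None

# Demo with made-up embeddings: subject_b resembles the probe face.
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
watchlist = {
    "subject_a": rng.normal(size=128),
    "subject_b": probe + rng.normal(scale=0.1, size=128),
}
print(match_against_watchlist(probe, watchlist))  # matches subject_b
```

That threshold is the policy debate in miniature: set it too low and innocent passers-by are flagged; set it too high and someone on a watchlist walks past unnoticed.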
Looking ahead, balancing these issues requires open dialogue and vigilance. Consider the policy that requires police to announce where and when they will use LFR. This mandatory advance notice limits the technology’s effectiveness, but what is the alternative? Both the community and the police must work together to find answers.
We need a shared understanding of what police accountability in AI looks like. We also need to retire the tired “if you’ve done nothing wrong, you’ve nothing to worry about” argument. It misses the point entirely: the question is not whether any individual has something to hide, but how much scrutiny the state should be able to apply to everyone.
In 1999, the great tech fear was the Millennium Bug, not facial recognition. Since then, UK policing has successfully introduced Taser and body-worn video. Today, London faces a steep rise in street robbery, knife crime, and theft. The police need help, and technology like facial recognition offers powerful options.
The initial trials are over, and the technology has proven its merit, especially as a deterrent to rising retail crime. Now, the police must show they have grown with the technology.
As the cameras return to Notting Hill, the police will be under a microscope. They have made progress, but will they earn the public’s confidence this time?