Industry Focus
Prima Facie
The Case and the Race for the Roll-Out of Facial Recognition
The maxim prima facie is a legal presumption underpinning UK jurisprudence: it is the minimum, essential evidential threshold upon which a case can be pursued through the criminal justice system.
Literally meaning “on the face of it”, or “at face value”, it refers to evidence that, on initial examination, is sufficient to establish a particular point or fact unless rebutted. It signifies that enough has been presented for a prosecution to proceed, with guilt then to be proven “beyond reasonable doubt” or the case dismissed in light of more compelling counter-evidence presented to a court of law.
It is therefore interesting that this ancient maxim has itself come face-to-face with a very modern technology—facial recognition (FR)—where the judge and jury of public opinion are arguably still deliberating.
At face value, FR is a game changer for the security and retail loss prevention industry, not only identifying prolific and persistent offenders but actively preventing crimes from occurring in the first instance.
However, it ironically remains caught in the crosshairs of controversy over its perceived legality and use of personal data, factors that have arguably held back its widespread adoption for fear of legal sanction or reputational damage.
Indeed, it would be fair to say that something of a Mexican stand-off has developed between the retail community and civil rights groups around the widespread adoption of FR technology, and the stakes have never been higher in relation to whose “poker face” blinks first.
Rather like the tension between retailers and privacy campaigners over radio frequency identification (RFID) technology in the early noughties, when fears over so-called “spy tags” arguably held back its widespread adoption for a number of years, FR and biometric technology have been consumed by the same arguments around privacy invasion and the rights and wrongs of big business acting as Big Brother.
Indeed, one of the key civil rights opponents of FR technology is “Big Brother Watch”, a libertarian pressure group which, in its own “Mexican stand-off” way, is keeping a watchful eye on any potential abuse of FR and has successfully gained traction in some media quarters.
The ICO
The Information Commissioner’s Office (ICO), the independent body established to uphold the information and privacy rights of UK citizens, has been widely recognised as the litmus test of the reach of FR into people’s lives, ensuring businesses are not acting beyond the scope and intent of current and future data protection and privacy laws such as the GDPR, the new EU AI Act, and any subsequent UK equivalent when that becomes law.
The ICO is also the enforcer of any regulatory breaches, and with fines of up to £17.5 million, who blinks first represents an existential threat to the raft of industries looking to take advantage of live, real-time facial recognition, while critics home in on the likelihood of false negatives, such as failing to identify the right person, or false positives, such as erroneously flagging a completely innocent party as a person of interest.
With the rapid advancement of machine learning and AI, according to several respected sources, the accuracy of live FR is now between 95 and 99.7 per cent.
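To put such headline figures in context, the short sketch below (written in Python purely for illustration) shows how even a 99.7 per cent accuracy rate interacts with the low base rate of genuine offenders among everyday shoppers. The daily scan volume and watchlist rate used here are invented assumptions for the sake of the example, not audited figures from Facewatch, the ICO, or any retailer.

# Illustrative arithmetic only: the scan volume and watchlist rate below are
# assumptions invented for this example, not Facewatch, ICO, or retailer data.

daily_scans = 10_000      # assumed faces scanned per day across a retail estate
watchlist_rate = 0.001    # assumed share of shoppers who are genuine subjects of interest
accuracy = 0.997          # headline accuracy figure quoted above, applied to both match types

expected_true_matches = daily_scans * watchlist_rate * accuracy
expected_false_alerts = daily_scans * (1 - watchlist_rate) * (1 - accuracy)

print(f"Expected true matches per day: {expected_true_matches:.0f}")   # roughly 10
print(f"Expected false alerts per day: {expected_false_alerts:.0f}")   # roughly 30

Even at that level of accuracy, false alerts can outnumber genuine matches when only a tiny fraction of shoppers are subjects of interest, which is precisely why the human verification and proportionality safeguards discussed later in this piece carry so much weight.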
It is therefore no surprise that headlines have strongly suggested that both the last and the present Government are supportive of a wide range of measures to reduce the scourge of retail crime, with figures highlighting two thousand incidents of violence and aggression each day and a cost to the sector of £2.2 billion, leading the British Retail Consortium to comment that high street crime was “spiralling out of control”.
Facewatch
One company, Facewatch, has been front and centre of the debate as the only business with a mature working model of FR technology in the UK. Operating as a data controller on behalf of a number of retailers, Facewatch’s proprietary cloud-based FR system remains the only shared national facial recognition database and acts to safeguard subscribers against crime by sending alerts to businesses the instant that a subject of interest—a recognised and verified individual—enters their premises.
The business was established as a crime reporting platform, was reinvented as an FR business in 2016, and has very much grown up in public, submitting its model to ICO scrutiny. The ICO, as referred to earlier, is an honest broker in the debate, advising businesses and interpreting and enforcing the data protection and privacy laws rather than making them.
Those organisations looking to introduce FR, for example, must conduct Data Protection Impact Assessments (DPIA) and Legitimate Interest Assessments (LIA) to ensure the systems they introduce are compliant and meet the substantial public interest threshold for holding and processing the data.
Failure to carry out this essential groundwork (establishing whose data is being harvested and for what reason, and ensuring subjects can find out what specific information is stored and request its deletion) could ultimately result in punitive sanctions, with fines of up to €20 million or 4 per cent of a business’s global turnover, as well as the risk of substantial brand reputational damage.
On 31 March 2023, the ICO ruled that while Facewatch’s system was permissible under data protection laws, it needed to make changes to comply. Facewatch subsequently made improvements, including reducing the amount of personal data collected and focusing on repeat offenders or individuals involved in significant offences. After reviewing these changes, the ICO concluded that Facewatch had a legitimate purpose for using facial recognition and that no further regulatory action was necessary.
However, the ICO has added greater nuance to the ruling and provided more context behind its decision through a blog from Stephen Bonner, its Deputy Commissioner for Regulatory Supervision.
In an extract from a blog entitled “Balancing People’s Privacy Rights with the Need to Prevent Crime”, Bonner, who leads programmes of work to develop strategic ICO positions on technology issues such as data, supervision of the large technology platforms, and online harm, said:
“Is it really necessary to have our faces scanned when we are simply buying some milk and a bag of frozen peas?
Some would say yes, live facial recognition can help the police catch suspects or speed our way through border control checks at ports. But as with many things, where there are benefits, there are problems too. Others argue people’s privacy and freedoms are violated, the technology is imperfect, and innocent people can be adversely affected.
It is against this backdrop that we have considered the live facial recognition technology provided to the retail sector by security company Facewatch whose product aims to help businesses protect their customers, staff, and stock. The system scans people’s faces in real time as they enter a store and alerts if a “subject of interest” has entered.
Innovative solutions helping businesses prevent crime is in the public interest and a benefit to society. Data protection law recognises this, allowing personal information—in this case facial images—to be used if there is a legitimate interest, such as for the detection and prevention of crime. However, these benefits must always be balanced against the privacy rights of the individual.
Throughout our dealings with Facewatch, we considered whether its product complied with data protection legislation. Whilst we agreed the company had a legitimate interest in using people’s personal data, we identified various areas of concern.
We highlighted these areas of concern and gave Facewatch time to address them. In response to our concerns, Facewatch made, and continues to make, improvements to its product. These include reducing the personal data they collect by focusing on repeat offenders or individuals committing significant offences, improving their procedures by appointing a Data Protection Officer, and protecting those classified as vulnerable by ensuring they do not become a “subject of interest”.
Based on the information provided by Facewatch about improvements already made and the ongoing improvements it is making, we are satisfied the company has a legitimate purpose for using people’s information for the detection and prevention of crime. We’ve therefore concluded that no further regulatory action is required.
Our decision covers the specific aspects of data protection law discussed as it applied to Facewatch at a point in time. It is not a blanket approval of Facewatch, nor of live facial recognition (LFR) use.
The closure of our Facewatch investigation does not bring our involvement in this space to an end. Nor does this decision give a green light to the blanket use of this technology. Each new application must be considered on its own merits, balancing the privacy rights of people with the benefits of preventing crime. We will continue to monitor the evolution of live facial recognition technology to ensure its use remains lawful, transparent, and proportionate.”
Vindication and Validation
The ICO is not a poster boy for FR or its opponents, but as discussed, an independent and agnostic body that has established the essential guard rails by which companies such as Facewatch must abide.
Nick Fisher, Facewatch’s CEO, said the ICO position made it clear to all in the industry that they must “engineer privacy” into their solutions.
“We are the only ones doing this to date,” said Nick, a former retailer and chief operating officer for Phones4U.
As a result of the ruling, a number of larger retailers, including Mike Ashley’s Frasers Group, began trials of FR, and the wider retail community has begun to rally its budgets towards exploring a widespread roll-out of the technology.
“We are the only company to go through this test which took four years to conclude—we are not in the business of cutting corners,” he said.
In addition to the Facewatch algorithm, the platform also uses “super recognisers” for human verification, and in June 2025 recorded 42,000 accurate alerts through its system, resulting in double-digit reductions in offences across many of its customers’ estates.
“It simply stops crime in the first instance—it is easier to stop criminals doing what they plan to do than trying to get stolen goods back from them afterwards,” said Nick.
Facewatch—which has been audited with a 99.98 per cent FR accuracy reading—boasts more than 100 different retail clients, from multi-nationals to charity and corner shops, who are now signed up to the platform in order to better protect colleagues.
“It’s about employee welfare and protecting these most important of assets as well as preventing criminals from stealing and smashing up stores, resulting in colleagues not wanting to work there anymore.”
Nick, a proud Yorkshireman from Leeds who believes in plain speaking, argues that controlling the data and sharing it proportionately provides the USP for Facewatch in terms of building credibility and trust. He also maintains that there has been a lot of misreporting about the platform in certain sections of the press.
“It has been exaggerated and sensationalist in many cases—I don’t mind fair reporting, but it has got to be balanced,” he said.
“We are not in the business of Orwellian behaviour or stopping legitimate shoppers from shopping—we are simply protecting the store colleagues and customers. Our biggest fans are those store colleagues who previously did not want to come to work because they were afraid. They now tell us every day that they feel miles better because those who were causing the issues don’t come into the stores anymore—they feel empowered and safe.”
Now the business is experiencing double-digit growth with a number of new customers in the pipeline.
Frasers Group
One of the largest customers of Facewatch has been Frasers Group, which includes Sports Direct and the high-end fashion brand Flannels in its portfolio.
Ben Rudd, head of loss prevention for Frasers Group, began a trial with Facewatch in 2021, during the COVID years when face coverings were still mandatory.
“We were impressed with Facewatch during this challenging period because the technology also worked in identifying individuals wearing masks,” said Ben.
“It was also attractive to us because of other factors including the challenges of finding good calibre guarding. I wanted to assess Facewatch in the round because it was not all about shoplifting, but also the violence and aggression that often comes with it.”
“We realised that it was unrealistic to try and get stock back once it had left the store, so instead we focussed upon deterrence. Facewatch acted as an extension to the store manager by keeping a watchful eye on who was coming into the store.”
“This wasn’t just local offenders when it came to Flannels but travelling gangs from places like London and as far north as Scotland,” he said.
“Facewatch has been like a member of staff who never misses a shift and always turns up on time—it is a gold standard.”
“If it identifies someone on the watchlist, the alert allows us to deter that person before they’ve had the chance to commit a crime,” said Ben, who has now rolled the Facewatch platform out across 350 different branded stores.
“The results have been impressive. Violence across Frasers Group is down by 30 per cent year on year, and stock loss has also reduced by 25 per cent in the stores that have had Facewatch for over a year.”
“It is a war of attrition—we stop those who cause us problems when they come into the store. For me though, it is all about staff safety as well as reducing stock loss.”
“I’m so impressed with the fall in incidents of violence as a result of the shared watch list which Facewatch as the data controller can set the radius for.”
“Yes, we also give staff body-worn cameras, but in my mind, Facewatch is the best thing to come onto the market for a long time. It allows us to manage an acceptable level of loss for the business and avoids the unnecessary confrontations,” added Ben, who has worked for Frasers Group for the last twenty-two years.
The Future ICO Role in Managing FRT
The results are impressive and set against a backdrop of robust governance that must remain watertight in the age of ever-evolving technology such as AI. Such developments open up another frontier of risk for businesses going down the FR route, with even more players coming into the market, albeit new entrants will more likely adopt a data processor role as opposed to the unique catch-all data controller position taken by Facewatch. This means businesses themselves will need to stick rigidly to the guard rails provided by the ICO in policing their own data capture.
The governance organisation will take a wide but light-touch approach in supporting business compliance, recognising that in most cases everyone wants to do “the right thing”. The ICO will therefore default to the role of educator rather than enforcer, unless enforcement is absolutely necessary.
The ICO sets out the general principles for the use of biometric technology: it must be proportionate, lawful, fair, and rational. In looking at applications of AI, it would weigh both aggravating and mitigating factors in the conduct of the business, such as whether there has been a history of similar activity, the nature of the issue in question, and the sensitivity of the data and context, for example whether data was being captured around vulnerable people to whom a duty of care is owed and where “real world harm” was a risk.
In terms of mitigation, the ICO would look at the response from the organisation and what steps it had taken to reduce the impact of any harm, although co-operation with any ICO enquiry would not be viewed as a mitigating factor.
In short, there is no ban on the use of FR, but its use has to meet the tests of necessity, proportionality, and legality, with the risks to individuals demonstrably minimised, in order to comply with the principles of the GDPR and the up-and-coming AI Act and its UK equivalent.
Substantial public interest, such as the prevention and detection of crime, would be a legitimate exemption from the privacy rules, as would well-publicised signage in order to generate implicit consent.
However, in the absence of any case law precedent in the field, the ICO continues to play the role of honest broker in relation to specific challenges, but it is not a general resource for retailers to bombard with requests to confirm whether their specific DPIAs have its regulatory blessing to proceed lawfully in the first instance.
Retailers and their legal teams are still wrestling with the challenges of FR and its live application, and the potential risks of scanning the faces of minors, for example. Mitigation of the potentially ultra vires use of FR could lie in the application of retrospective, as opposed to live, face-matching.
Where retailers are looking at novel approaches to FR, they can also seek advice through the ICO’s “iAdvice” service, which provides “fast and frank” feedback to establish whether the course of action is worth pursuing.
At face value, facial recognition is here to stay; like toothpaste enthusiastically squeezed from the tube, it cannot go back in. The best-case scenario is that businesses look beyond a prima facie case and conduct their own due diligence in advance of any FR introduction, in order to future-proof the organisation against data breach claims and any collateral brand reputational damage, and so prove the case for the face race.