LP Magazine EU


Designing out crime

Minority Report

How Artificial Intelligence Will Help Identify Store Thieves

By John Wilson, Executive Editor

The plot of the 2002 dystopian film Minority Report starring Tom Cruise may have seemed fantastical, but a reality beyond the imagination of the scriptwriters could soon be coming to a retail store near you.

In the film it is the year 2054, and crime has been virtually eliminated from Washington, DC, thanks to an elite law-enforcement squad called 'Precrime.' It uses three gifted humans, 'pre-cogs', with special powers to see into the future and predict crimes before they take place.

The reality, however, is even harder to comprehend: advanced artificial intelligence (AI) technology can learn from everyday human behaviour and may soon be able to detect crime before it happens.

A research project involving graduates of the University of Cambridge, the University of Warwick, and City, University of London is seeking to engage with the retail community to let supercomputers analyse tens of thousands of hours of CCTV footage and store-thief behaviour, building algorithms that predict when thieves are going to steal.

Working out of a former biscuit factory in Bermondsey, south-east London, ThirdEye is an ambitious high-tech start-up fronted by Razwan Ghafoor, who has developed algorithms for intelligent behaviour understanding. He and co-founder Thomas Purchas, a full-stack developer with seven years of software development and image-processing experience, have been assisted by Entrepreneur First, Europe's first pre-seed investment programme for start-up technical founders. Fifty per cent of the programme's founders were educated at Oxford, Cambridge, or Imperial College London, and many have cut their teeth at some of the biggest companies in the world, including Google, Amazon, and Apple.

Purchas, for example, previously worked for EA Technology where he built CCTV software to prevent copper theft in substations. In addition, he carried out projects handling large datasets and front-end development for the National Nuclear Laboratory and digital agency Potato (acquired by WPP). He also specialised in high-performance GPU computing while earning his electronic engineering degree from Warwick University.

They have already engaged with a number of retailers who are happy to share their intelligence, but the project is data-hungry in its quest to learn more about shoplifting techniques and modus operandi (MO).

The technology is also complementary to the drive towards facial recognition software, which builds accurate images of suspicious individuals, even when they attempt to obscure their features with hats or hoodies.

Ghafoor, who was educated at Imperial College, argues that four years ago there was a wide gap between the intuitive way our brains could analyse behaviour and the learned observations of computers. But AI advances have closed the gap to the point that computers are now as good as humans at classifying scenarios.

Autonomous Cars

Probably the best-known example of AI in action has been the research into autonomous cars, with early adopters such as Google at the forefront of the technology. In the world of the Internet of Things, where devices talk to other devices, this technology is seen as the test case for AI, and researchers are already predicting that traditional insurance models as we know them will be among the first casualties of the switch. Instead, it is likely that responsibility and the cost of accidents will be driven by product warranties, because an accident, by definition, cannot be the driver's fault where there is no driver.

US regulators already say that Google's self-driving car can be considered the driver under federal law, a big step towards approval for self-driving cars to take to the roads. 

Google submitted a proposed design for a self-driving car back in November, which has 'no need for a human driver.' The response from the National Highway Traffic Safety Administration (NHTSA) was that it will interpret 'driver' in the context of Google's described motor vehicle design as referring to the self-driving system and not to any of the vehicle occupants.

The regulator said, "We agree with Google that its self-driving car will not have a 'driver' in the traditional sense that vehicles have had drivers during the last more than one hundred years." Google and many car companies are looking to free up safety rules that are slowing down testing and the eventual roll-out of autonomous vehicles. 

It's not all an easy ride from here, however. There are still rules that require braking systems activated by foot controls inside the vehicle, as well as open questions over 'whether and how Google could certify that the system meets a standard developed and designed to apply to a vehicle with a human driver.'

Google told the NHTSA that human controls could paradoxically be a danger if passengers attempt to override the car's own judgements and driving decisions. 

Rules about steering wheels and brake pedals would have to be formally rewritten before cars without them would be allowed on the roads, and changing the law will take months if not years. Last month, the government agency said it may waive some vehicle safety rules to better enable self-driving cars on the roads, promising to write guidelines for these vehicles within the next six months.

It would therefore appear that the machine is overtaking the human brain in many areas of everyday life.

This was highlighted recently when a Google AI programme beat the European and world champions of the board game Go, a significant feat because the Chinese game is viewed as a much tougher challenge than chess for computers. A Go match can play out in many more ways, so the computer has learned to mimic the multi-layered nuances of the human brain.

Other examples are in some ways even more extraordinary. Microsoft researchers have already announced a major advance in technology designed to identify the objects in a photograph or video, showcasing a system whose accuracy meets and sometimes exceeds human-level performance.

The software giant relied on so-called deep neural networks, as much as five times deeper than any previously used, to train computers to recognise the images.
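To give a flavour of what 'training a deep neural network to recognise images' involves, the sketch below shows a small convolutional network and a single training step in Python using the PyTorch library. The architecture, layer sizes, and random stand-in data are purely illustrative and bear no relation to Microsoft's actual system.

# Minimal sketch of a deep convolutional image classifier (PyTorch).
# Architecture and data are illustrative; this is not Microsoft's system.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # A stack of convolutional layers: "deeper" networks simply add more of these.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallConvNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 16 random 32x32 colour images with random labels.
images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

# One training step: predict, measure the error, adjust the weights.
optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print(f"training loss after one step: {loss.item():.3f}")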

In another example of deep neural networks at work, a computer's analysis of courtroom videos has taught it to predict whether someone is telling the truth or lying. A machine-learning algorithm trained on the faces of defendants in recordings of real trials correctly identified honesty in about 75 per cent of cases. Humans managed just 59.5 per cent, while the most skilled interrogators can only manage around 65 per cent.

This breakthrough at the University of Michigan looked at 121 videos from sources such as the Innocence Project, a non-profit group in Texas dedicated to exonerating people with wrongful convictions. 

Transcripts of videos that included the speaker's gestures and expressions were fed into a machine-learning algorithm, along with the trial's outcome. Such a system, experts predict, could one day spot liars in real-time court scenarios.
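In outline, such a system is a standard supervised classifier: each video is reduced to a set of measurable features, and the trial outcome supplies the label. The sketch below illustrates the idea in Python with scikit-learn; the feature names and numbers are hypothetical and are not the Michigan team's actual data or model.

# Toy sketch of a supervised "deception" classifier (not the Michigan pipeline).
# Each row is one courtroom video: hypothetical features plus a truth/lie label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features per video: [hand_gestures, gaze_aversions, smiles, hedging_words]
X = np.array([
    [12, 3, 1, 5],
    [4, 9, 4, 14],
    [15, 2, 0, 3],
    [6, 11, 5, 12],
    [10, 4, 2, 6],
    [5, 8, 3, 13],
    [14, 1, 1, 4],
    [3, 10, 6, 15],
])
# Labels taken from the trial outcome: 1 = truthful, 0 = deceptive.
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=4)  # accuracy on held-out videos
print(f"mean cross-validated accuracy: {scores.mean():.2f}")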

Beyond law enforcement, AI is being put to work in the field of medicine. In the US, an AI program called Ellie was designed to help diagnose post-traumatic stress disorder and depression; it interviews combat veterans about their families, feelings, and biggest regrets.

Although emotions might seem too hard for a machine to comprehend, the psychologist behind the programme, Skip Rizzo, said Ellie is programmed to listen to tone of voice and observe facial movements and expressions.

"Contrary to popular belief, depressed people smile as many times as non-depressed people," Rizzo said. "But their smiles are less robust and of less duration. It's almost like polite smiles rather than real or robust. It is a coming-from-your-inner-soul type of a smile."

Ellie then compares the subject in front of her with a database of soldiers who have returned from combat, detecting signs of post-traumatic stress disorder (PTSD) and depression and scoring the subject much as a large pool of psychologists would.

Jody Mitic served with the Canadian forces in Afghanistan, where he lost both of his feet to a bomb. He remembers that Ellie's 'robotness' helped him open up.

"Ellie seemed to just be listening," Mitic said. "A lot of therapists, you can see it in their eyes, when you start talking about some of the grislier details of stuff that you might have seen or done, they are having a reaction." With Ellie, he said he didn't have that problem.

IBM's aspirations for its artificially intelligent supercomputer are now less about being a quiz-show champion and more about fine-tuning the next medical genius.

Watson, the supercomputer that famously became world Jeopardy champion, went to medical school after its quiz-show win. The Massachusetts Institute of Technology's Andrew McAfee, co-author of The Second Machine Age, said, "Watson was a game changer. I'm convinced that if it's not already the world's best diagnostician, it will be soon."

Watson is already capable of storing far more medical information than doctors, and unlike humans, its decisions are all evidence-based and free of cognitive biases and overconfidence. It's also capable of understanding natural language, generating hypotheses, evaluating the strength of those hypotheses, and learning.
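The 'generate hypotheses, then weigh the evidence' behaviour described here can be illustrated with a deliberately simple Python sketch in which candidate diagnoses are ranked by how much supporting evidence each accumulates. This is a toy illustration using assumed symptoms and weights, not IBM's actual Watson pipeline.

# Toy illustration of hypothesis generation and evidence scoring
# (deliberately simplified; not IBM's actual system).

# Hypothetical knowledge base: candidate diagnoses with weighted evidence rules.
EVIDENCE_WEIGHTS = {
    "influenza":    {"fever": 0.6, "cough": 0.5, "fatigue": 0.3},
    "strep throat": {"fever": 0.4, "sore throat": 0.8, "cough": -0.2},
    "common cold":  {"cough": 0.4, "sore throat": 0.3, "fatigue": 0.2},
}

def rank_hypotheses(observed_symptoms):
    """Score every candidate diagnosis by summing the weight of matching evidence."""
    scores = {}
    for diagnosis, weights in EVIDENCE_WEIGHTS.items():
        scores[diagnosis] = round(sum(weights.get(s, 0.0) for s in observed_symptoms), 2)
    # Highest-scoring hypothesis first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_hypotheses({"fever", "cough", "fatigue"}))
# [('influenza', 1.4), ('common cold', 0.6), ('strep throat', 0.2)]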

As IBM scientists continue to train Watson to apply its vast stores of knowledge to actual medical decision-making, it's likely just a matter of time before its diagnostic performance surpasses that of even the sharpest doctors.

And unlike human doctors, it is consistent. Inconsistency is a surprisingly common flaw among human medical professionals, even experienced ones. Its programmers also argue that Watson is always available and never annoyed, sick, nervous, hungover, upset, in the middle of a divorce, or sleep deprived.

It is also cheaper and can be offered anywhere in the world. If a person has access to a computer or mobile phone, 'Dr. Watson' is on call for them, according to IBM.

The supercomputer's potential is huge, but as the Wall Street Journal reported earlier this year, "just a handful of customers are using Watson in their daily business." And it's far from performing at the level and in the range of domains that should be possible in the future.

So far, IBM's most high-profile AI partnerships are with cancer hospitals, where Watson helps recommend leukaemia treatments, and with the insurer WellPoint, where Watson helps evaluate doctors' treatment plans.

Applications in Retail

Razwan Ghafoor said, "There is so much going on in AI that we were really surprised that no one had applied it to retail as yet. We have £200,000 worth of supercomputing time to use on retailers' data, so that the CCTV footage can enable the system to understand what it is looking for. The more data we can get, the more accurate we can be at detecting theft in the stores of participating retailers. To act like a human, our algorithms need to be taught like a human. This is why we need the videos showing theft in stores. The system adjusts itself and does this thousands and thousands of times by using neural networks.

"This is machine vision, which I believe this year could beat humans. Retailers have all these CCTV cameras that are not being optimised, and they cannot spend the time viewing all of the footage because it involves billions and billions of numbers. And we only now have the computer capacity to interrogate this."

According to Ghafoor, if computers can read people's faces and dreams and even write Shakespearean sonnets by mimicking human intuition, the possibilities are endless.

"Deep learning is enabling computers to do things that only humans can do, and we are now at that stage where AI can master human instinct. 

"There is consequently a huge debate about job losses across industry, but my view is that we do not have enough people to do the tasks that we need doing, so it is a good use of the technology.

"There is a huge AI rush at the moment, it is about who will get the first breakthrough working in the real world."

AI is the future, no doubt, especially if it plugs a gap in our knowledge and reduces the time of investigations when used in collaboration with retailers already familiar with technology such as facial recognition. In a world where police cannot respond to store-theft call-outs, AI might even be considered a replacement strategy, not only for investigation but also for interrogation and prosecution, with no human interaction along the way. Thomas Purchas, however, said the ThirdEye initiative will stop well short of this vision: "This is a very controversial idea and not something that we would want to be part of."

Indeed, he mirrors the thinking of one of the world's leading scientists, Professor Stephen Hawking. Although not opposed to low-level developments of AI, Hawking has said that efforts to create thinking machines could ultimately pose a threat to our very existence.

"The development of full artificial intelligence could spell the end of the human race," the told the BBC earlier this year.

The warning is even more surprising because the theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new AI system developed by Intel to speak. 

Machine-learning experts from the British company SwiftKey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.
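The 'suggests the words he might want to use next' behaviour can be illustrated with the simplest possible predictive-text model: a bigram counter built from previously typed text. The Python sketch below is illustrative only; SwiftKey's real models are far more sophisticated.

# Minimal sketch of next-word suggestion from a user's own text (bigram counts).
# Illustrative only; not SwiftKey's actual model.
from collections import Counter, defaultdict

def build_bigram_model(text: str):
    """Count which word tends to follow which in the user's past writing."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def suggest(model, current_word: str, k: int = 3):
    """Return the k words most often seen after the current word."""
    return [w for w, _ in model[current_word.lower()].most_common(k)]

history = "the development of full artificial intelligence could spell the end of the human race"
model = build_bigram_model(history)
print(suggest(model, "the"))   # e.g. ['development', 'end', 'human']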

Professor Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

Professor Hawking's warning is perhaps closer to the picture painted in Minority Report, but such technology is unlikely to be rolled out unchecked. Instead it is likely, as with the AI used to advance autonomous vehicle technology, that a regulatory framework will have to be developed to police its ethical and economic use. AI is here to stay, but its application is likely to be slower than many enthusiasts predict. CCTV monitoring is a positive development, but it is at the foothills of the climb towards AI. Whether Professor Hawking's predictions or Tom Cruise's bleak futuristic landscape comes to fruition by 2054 remains to be seen.

Retailers wishing to be part of the AI research project should contact Razwan Ghafoor at raz@thirdeye.io. 
