LP Magazine EU


Industry focus

Buy, Lie And AI

From Chatbots to Synthetic Deepfakes, How the Risk Narrative Around Artificial Intelligence is Shifting

A SWOT (strengths, weaknesses, opportunities, and threats) analysis has for many years been a useful guidance tool for business positioning purposes. Of course, it is not a neat linear exercise: strengths can also be weaknesses, and threats are often seen as opportunities. However, a SWOT does give us a useful line in the sand of where and how a business sits and can help determine corporate courses of action.

This is especially the case in the fast-moving world of retail, particularly when it comes to game-changing technologies, all of which have their associated acronyms—RFID (radio frequency identification), EAS (electronic article surveillance), SCO (self-checkout), FR (facial recognition) and latterly, and most critically, the generic use of AI (artificial intelligence). AI has provided the power behind many of the latest technological developments through machine learning: the ability to teach technology to understand and manage industrial-scale levels of predictive data intelligence, helping to drive decision-making and reduce manual and menial tasks and human interventions.

Artificial Intelligence in Retail

Putting aside the potential for headcount reductions at a time when staff recruitment and retention are at an all-time low, the business view of many technological developments has usually been a muted "proceed with extreme caution", except in the case of AI, where the industry has, until recently, recognised only the technology's "strengths".

The sector has largely embraced AI's cost-saving and customer-journey-enhancing potential and has only been held back from going further by regulatory restrictions, such as those imposed around AI and facial recognition, and the potential risks around false positives and data protection.

But there is no smoke without fire, and AI's rapid development, fuelled by its marketing as the "next best thing", has meant its benefits are recognised far more readily than its drawbacks.

The launch of ChatGPT last November saw retailers begin to evaluate how they could take advantage of the latest developments in generative AI, which is capable of generating text, images, and other media and whose potential has yet to be fully realised or appreciated. However, many retailers have so far only recognised the strengths in the context of their retail eco-systems, from the online customer experience to warehouse management.

The British Retail Consortium (BRC) says that while the technology has already had a transformative impact on the retail industry through personalised and immersive experiences like virtual try-on platforms and automated content generation, it will only continue to drive far-reaching changes in the market as it develops further.

“The future holds immense promise as generative AI continues to evolve, driving innovative products and customer experiences and enhancing operational efficiency,” says Kris Hamer, director of insight at the BRC. 

“By harnessing generative AI on their digital transformation journey, retailers can optimise processes, reduce waste, and support their transition to a net-zero future while offering customers even greater value.”

While there is little doubt that generative AI will affect the retail space as it becomes increasingly sophisticated, retailers are still in the early stages of understanding how its more advanced versions can be used commercially to meet further business needs. 

Product recommendations created by generative AI are a feature already being explored by several companies in the e-commerce space.

Buy Now, Pay Later giant Klarna, for example, says it was the first European company and the first FinTech in the world to collaborate with OpenAI on a plug-in for ChatGPT, which it launched in March. 

“The integrated plug-in delivers a highly personalised and intuitive shopping experience by providing curated product recommendations to users who ask the platform for shopping advice and inspiration, along with links to shop those products via Klarna’s search and compare tool,” says Klarna.

Users can search for a certain product or theme, and the AI programme will return a selection of ideas. They can also send feedback directly to ChatGPT if the ideas don't meet their expectations, with the platform sending new recommendations in their place.

Shopify also recently launched its own generative AI model designed to address the millions of retailers using its online platform that currently don’t have descriptions for their product catalogue. The company says that “Shopify Magic” can create product descriptions with a consistent tone across a retailer’s portfolio “in seconds”. 
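The "consistent tone in seconds" idea can be sketched as a prompt-templating step: pin one shared tone guide into every per-product prompt before it is sent to a text-generation model. Everything below—the field names, the catalogue entries, and the tone guide—is invented for illustration and is not Shopify's actual implementation.

```python
# Sketch: batch-building product-description prompts with one shared tone
# guide, so generated copy stays consistent across a retailer's portfolio.
# The generate step itself (an LLM API call) is deliberately omitted.

TONE_GUIDE = "Friendly, concise, no superlatives, British English."

def build_prompt(product: dict) -> str:
    """Compose one prompt per product, pinning the same tone guide each
    time so descriptions stay consistent across the whole catalogue."""
    return (
        "Write a two-sentence product description.\n"
        f"Tone: {TONE_GUIDE}\n"
        f"Name: {product['name']}\n"
        f"Attributes: {', '.join(product['attributes'])}"
    )

catalogue = [
    {"name": "Linen cushion", "attributes": ["45cm", "natural", "machine washable"]},
    {"name": "Oak side table", "attributes": ["solid oak", "matte finish"]},
]

prompts = [build_prompt(p) for p in catalogue]
```

Because the tone guide lives in one constant rather than in each prompt by hand, changing the brand voice is a one-line edit that propagates to every description on the next generation run.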

Personal Shopping 

Diarmuid Gill, chief technology officer (CTO) at digital retail ad company Criteo, predicts that consumers will likely be able to ask AI systems to find certain items within the store and provide iterated results based on customer feedback.

“For example, if the consumer queries a supermarket’s AI about a recipe for Sunday dinner, then the AI can assume the role of a “personal shopper”, searching the product catalogue for the best-fit items,” explains Gill. 

“Another example would be leveraging “text to visual” AI, which enables consumers to describe what they’re looking for, with AI creating the visual and iterating it based on the customers’ feedback,” he added.

Some experts say that generative AI could even be used to reduce the carbon footprint of a retailer’s business by minimising over-production, for example in the customer returns cycle. 

“Through a combination of generative AI and augmented reality, consumers will be able to virtually try on clothing, minimising the volume of returns from online purchases,” says Robyn Duffy, senior markets analyst at business consultancy RSM UK, who adds that AI could also help revolutionise the retail supply chain when combined with technologies such as RFID.

Criteo is using generative AI to enable retailers to use data gathered from their target markets to provide predictive modelling. With access to 2.5 billion unique identities, its AI lab team can build consumer profiles based on shopping habits and interactions. 

“Just as Netflix develops and launches new programmes based on user data, brands in the apparel, home furnishings and jewellery industries will soon predict future sales and launch new product lines that match their target market,” explains Diarmuid Gill.

Over the months and years ahead, retailers will continue to take advantage of the plethora of opportunities generative AI can bring to address some of their common pain points, from supply chain disruption to climate change and create new and improved experiences for their customers.

But while the technology will create more opportunities for business owners and their customers, it remains to be seen just how much this surge towards automation will impact the retail workforce. 

Risks

Indeed, the honeymoon period for generative AI may soon be over: a series of leading technology experts, including some of those involved in creating the technology, have called for a moratorium on its development because of its potential direction of travel.

Geoffrey Hinton, the so-called “Godfather of AI”, left his job at Google in May this year to warn the industry that the technology could soon outsmart humans. The US and the EU have both launched investigations into the further development of the technology.

While the industry has tried to play down such dystopian visions of the future, governments across the world have been attempting to play catch-up in terms of regulatory control of AI.

In June, the European Consumer Organisation (BEUC), representing consumer groups from over thirteen countries, called for a halt to its use, citing generative AI’s ability to “spread disinformation, entrench bias and discrimination and create scams”.

This follows plans from the European Union to introduce the EU AI Act, which is intended to regulate the technology so as to harness its potential benefits whilst bridling its abuses and excesses. If passed, it would represent the world’s first body of law directly targeting AI, but the BEUC wants action now.

“We call on safety, data, and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action,” said Ursula Pachl, deputy director general of the BEUC.

In the same month, US Senator Richard Blumenthal opened a Congressional hearing into the advance of AI by playing a cloned simulation of his own voice reading remarks generated by ChatGPT, a chilling reminder of the potential threat of synthetic “deepfake” technology, which could easily be harnessed by fraudsters to access account details via a call centre, for example.

In July, the University of Cambridge Judge Business School (CJBS) launched the world’s first “Disinformation Summit”, a two-day global online event looking at a wide range of threats including those posed by generative AI. 

CJBS Executive Education has wide experience in this field of research and learning and insight into where AI “goes wrong”. It also offers bespoke programmes relating to the mitigation of disinformation in mainstream society, from its definition to its drivers—the “chaos actors” adept at exploiting entrenched belief systems to manipulate markets or polarise political discourse as part of an agenda to divide and disrupt. 

Audits, fact-checks, regulation, and “business inoculation” are integral components of the ethical toolkit offered to CJBS Executive Education delegates; they also formed part of a snapshot view of the disinformation landscape at the summit, which convened global thought leaders from psychology, journalism, financial reporting, political science, and related information fields.

But is it too late? The rise of AI-generated identity fraud is already causing alarm, with 37 per cent of organisations already experiencing deepfake voice fraud and 29 per cent falling victim to deepfake videos, according to a survey by Regula, a global developer of forensic devices and IDV (identity verification) solutions. 

The increasing accessibility of artificial intelligence technology for creating deepfakes is raising the stakes, posing a significant challenge for businesses and individuals alike.

Indeed, fake biometric artifacts such as deepfake voice or video are perceived as a real problem by 80 per cent of companies, according to Regula’s survey, with US businesses registering the most concern: 91 per cent of organisations in the States already consider it a growing threat, particularly given the ease with which individuals with malicious intent can create deepfakes, amplifying the danger to businesses and individuals alike.

“AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so,” said Ihar Kliashchou, chief technology officer at Regula.

“While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other anti-fraud measures that focus on physical and dynamic parameters, such as face liveness checks and document liveness checks via optically variable security elements.”

“Frankenstein” Identity

According to Regula’s survey of more than 1,000 fraud detection and prevention decision-makers from the financial services sector across Australia, France, Germany, Mexico, Turkey, the UAE, and the UK, nearly half of organisations globally (46 per cent) experienced synthetic identity fraud in the past year.

Also known as “Frankenstein” identity, this is a type of scam where criminals combine real and fake ID information to create totally new and artificial identities. It’s usually used to open bank accounts or make fraudulent purchases.
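One way such a "stitched together" identity can be surfaced is a consistency check: every field of an application matches some known record, but no single record matches them all. The sketch below uses invented reference data and is purely illustrative, not any vendor's actual method.

```python
# Toy synthetic-identity signal: each field looks legitimate on its own,
# but the combination of fields has never been seen together.
# KNOWN_RECORDS is invented illustration data.

KNOWN_RECORDS = [
    {"name": "A. Jones", "dob": "1984-02-11", "postcode": "SW1A 1AA"},
    {"name": "B. Patel", "dob": "1990-07-30", "postcode": "M1 2AB"},
]

def synthetic_identity_signal(application: dict) -> bool:
    """True when every submitted field matches SOME known record but no
    single record matches them all: the 'Frankenstein' pattern."""
    fields = application.keys()
    each_field_known = all(
        any(r[f] == application[f] for r in KNOWN_RECORDS) for f in fields
    )
    whole_known = any(
        all(r[f] == application[f] for f in fields) for r in KNOWN_RECORDS
    )
    return each_field_known and not whole_known
```

A real deployment would run this kind of cross-check against credit bureau and identity-graph data at far greater scale, alongside the document and biometric checks discussed above.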

The banking sector is obviously the most vulnerable to such types of identity fraud, with nearly all the companies in the industry (92 per cent) perceiving synthetic fraud as a real threat, and almost half (49 per cent) recently coming across this scam.

Voice verification specialist Pindrop’s 2023 Voice Intelligence and Safety report, chillingly titled “The Fraudster’s Strike Back” said: “Following recent economic changes, fraudsters have shifted focus away from government pay-outs and back to their traditional targets—contact centres.” 

“However, today’s fraudsters are also armed with new tactics, including the use of personal user data available on the dark web, advancements in artificial intelligence (AI) for creating synthetic audio, and an increased willingness to work in teams. This has led to a 40 per cent increase in fraud rates on contact centres in 2022 compared to the previous year.”

The business, whose technology correctly identified Senator Blumenthal’s generated voice stunt as a “synthetic deepfake”, recognises the potential for AI to generate attacks on call centres.

“Our technology can detect AI-generated synthetic voices—these voices may be an attempt to impersonate a real customer or simply to disguise the voice of the fraudster, in which case the voice does not belong to anyone,” said Nikolay Gaubitch, director of research for Pindrop.

“The media presents AI as a new problem but our whole business since around 2011 has been predicated on authenticating voices simply because of the necessity for trust—we all want to know who we are speaking to.”

“In the case of Senator Blumenthal, he had generated a synthetic version of his voice using publicly available software and he generated the content of the speech using ChatGPT.” 

“Our technology was able to accurately distinguish between his real voice and the synthetic portions of it. While it is difficult for human listeners to hear the difference, there are subtleties in the speech signal that make it easier for machines to detect. The same holds for those who simply want to disguise their voice when trying to defraud a call centre,” he continued.
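As a toy illustration of "subtleties in the speech signal" that machines can measure, the sketch below computes frame-to-frame energy variability, on the invented assumption that an over-smoothed synthetic signal fluctuates less than natural speech. Real detectors such as Pindrop's use learned models over far richer spectral and temporal features; nothing here reflects their actual approach.

```python
import math

# Toy acoustic cue: natural speech energy tends to fluctuate frame to
# frame, while an over-smoothed signal does not (invented assumption,
# for illustration only).

def frame_energies(samples: list[float], frame: int = 4) -> list[float]:
    """RMS energy per fixed-size frame of the sample stream."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame]) / frame)
        for i in range(0, len(samples) - frame + 1, frame)
    ]

def energy_variability(samples: list[float]) -> float:
    """Mean absolute frame-to-frame energy change; higher values
    indicate a more naturally fluctuating signal under this toy cue."""
    e = frame_energies(samples)
    return sum(abs(a - b) for a, b in zip(e, e[1:])) / max(len(e) - 1, 1)
```

A perfectly steady tone scores zero while a fluctuating one scores high, which is the shape of reasoning, if not the substance, behind machine detection of voices that sound identical to human ears.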

Pindrop was one of the early participants in the international ASVspoof challenges, a global academic community working in tandem with business to test speaker verification technology and fraud countermeasures.

“Anyone can enter this to test the capability of their systems—and we have always come up high in the rankings,” said Nikolay.

“Yes, there are dangers in the use of AI synthetic voice development, but there are also the right tools out there to combat this and these are future-proofed,” he added.

Fighting Fire with Fire

Many businesses have chosen to fight fire with fire and are using AI extensively in the battle against fraud.

According to a whitepaper by computer scientists from the University of Jakarta, machine learning algorithms achieved up to 96 per cent accuracy in detecting fraud for e-commerce businesses.

Hungarian fraud prevention business Seon Technologies, for example, has provided guidance on how to train AI to identify fraud trends on an industrial scale, beyond the capability of human investigators. In its advice, it says: “Machine learning is a collection of artificial intelligence (AI) algorithms trained with your historical data to suggest risk rules. You can then implement the rules to block or allow certain user actions, such as suspicious logins, identity theft, or fraudulent transactions.”

“When training the machine learning engine, you must flag previous cases of fraud and non-fraud to avoid false positives and to improve your risk rules’ precision. The longer the algorithms run, the more accurate the rule suggestions will be. It will also provide:

Faster and more efficient detection: the system gets to quickly identify suspicious patterns and behaviours that might have taken a human agent months to establish.

Reduced manual review time: similarly, the amount of time spent on manually reviewing information can be drastically reduced when you let machines analyse all the data points for you.

Better predictions with large datasets: the more data you feed a machine learning engine, the more trained it becomes. While large datasets can sometimes make it challenging for humans to find patterns, it’s the opposite with an AI-driven system.

Cost-effective solution: unlike hiring more risk operations agents, you only need one machine-learning system to go through all the data you throw at it, regardless of the volume. This is ideal for businesses with seasonal ebbs and flows in traffic, checkouts, or sign-ups. A machine learning system is a great ally to scale up your company without drastically increasing risk management costs at the same time. Last but not least, algorithms don’t need breaks, holidays, or sleep.

“Fraud attacks can happen 24/7, but even the best fraud managers might come to work on Monday morning with a backlog of manual reviews. Machines can ease up the process by sorting through the obviously fraudulent or acceptable cases,” the company argues.
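The rule-suggestion idea Seon describes can be reduced to a minimal single-feature sketch: given historical transactions flagged as fraud or non-fraud, pick the amount cutoff that best separates the two classes. The data and the one-feature rule below are invented; production engines learn over many signals at once.

```python
# Minimal "suggest a risk rule from flagged history" sketch.
# HISTORY pairs are invented: (transaction amount, was it fraud?).

HISTORY = [
    (12.0, False), (30.0, False), (45.0, False),
    (400.0, True), (520.0, True), (48.0, True),
]

def suggest_amount_rule(history: list[tuple[float, bool]]) -> float:
    """Return the cutoff maximising correctly classified transactions
    under the rule 'flag as fraud if amount >= cutoff'."""
    def score(cut: float) -> int:
        # Count transactions where the rule's verdict matches the label.
        return sum((amt >= cut) == fraud for amt, fraud in history)

    candidates = sorted({amt for amt, _ in history})
    return max(candidates, key=score)
```

As the quoted guidance notes, the more flagged history the engine sees, the better such suggested rules become; here that simply means `score` is computed over a longer, more representative `HISTORY`.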

Many businesses have outsourced their machine-learned fraud solutions to India, one of the world’s leading locations for artificial intelligence development. As a global hub for AI advancement, the Indian sub-continent has a talented work pool and a thriving start-up eco-system with a current market value of $6.4 billion, according to recent analysis.

In a LinkedIn blog about the potential risks for retail fraud, Somsubhra Sikdar, head of data science at Anko, India’s largest platform for AI and analytics, said: “Machine learning is a powerful tool for fraud detection, enabling retailers to quickly identify and prevent fraudulent activities. By leveraging machine learning algorithms, retailers improve the accuracy and efficiency of their fraud detection processes, protect their customers from financial losses, and reduce the costs associated with fraudulent activities. Retailers continue to monitor and analyse their data to identify new patterns of fraud and update their fraud detection models to stay ahead of fraudulent activities.”

Once the toothpaste is out of the tube, it can’t be put back: the retail industry that embraced AI now finds itself in the invidious position of helping to foster the development of Frankenstein-like technologies, which in the wrong hands have as much power to destroy as to create. Retailers need to swot up on their SWOT analysis and make sure they recognise the weaknesses and threats as much as the strengths and opportunities before unwittingly handing the keys to the shop over to the bots as part of the relentless rise of the machines.
