Generative artificial intelligence (AI) poses a slew of cybersecurity risks, particularly in social engineering scams that are prevalent in Hong Kong, according to a cybersecurity firm that is also deploying AI to counter the threat.
The emergence of advanced generative AI tools such as ChatGPT will enable certain types of scams to become more common and effective, said Kim-Hock Leow, Asia CEO of Wizlynx Group, a Switzerland-based cybersecurity services company.
“We can see that AI voice and video mimicking continues to seem more genuine, and we know that it can be used by actors looking to gain footholds in a company’s information and cybersecurity [systems],” he said.
Social engineering scams, such as those conducted over the phone or through phishing emails, are designed to fool victims into believing they are conversing with an authentic person on the other end.
In Hong Kong, scams conducted through online chats, phone calls and text messages have swindled people out of HK$4.8 billion (US$611.5 million). AI-generated audio, video and text are making these types of scams even harder to detect.
In one example from 2020, a Hong Kong-based manager at a Japanese bank was fooled into authorising a US$35 million transfer request by deepfake audio that mimicked his director’s voice, according to a court document first reported by Forbes.
The scammers ultimately made off with US$400,000, as the manager had also received emails that appeared to confirm the director’s request.
Just a year earlier, a similar scam led a British energy company to wire US$240,000 to an account in Hungary.
Governments are starting to wake up to the new threat. In February, Beijing’s municipal public security bureau warned in a statement on WeChat that “villains” may use generative AI to “commit crimes and spread rumours”.
In March, the US Federal Trade Commission issued a warning about scammers weaponising AI-cloned voices to impersonate people, adding that all they need is a short audio clip of the person’s voice from online.
A much likelier scenario than faked audio or video, however, is the use of AI-generated text in phishing emails, according to Leow.
“Everyone gets phishing attacks, but they are sometimes easily detectable due to the length, typos, or because they lack relevant context to you and your job,” he said. “But now, cybercriminals can use new AI language models to increase the sophistication of their phishing emails.”
One way this might work is to use a tool like ChatGPT to clean up and professionalise the language of a message. Scammers can also use such tools to quickly research an entity or industry, adding contextual detail that personalises their phishing emails, Leow explained.
Wizlynx, whose clients include finance companies and government-affiliated entities in Hong Kong, is among the firms now deploying ChatGPT themselves to better fight these more sophisticated scams.
Tactics include using the chatbot to generate mock phishing emails for staff training exercises. Wizlynx also uses it to identify vulnerabilities and research cybersecurity systems.
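For illustration, a training exercise like the one Leow describes could be scripted against the same models that power ChatGPT. The sketch below is a minimal example assuming the OpenAI Python SDK; the model name and prompt are invented for illustration, and a real programme would route the output through an approved phishing-simulation platform, never live email.

```python
# Hypothetical sketch: drafting a mock phishing email for an internal
# security-awareness exercise. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a short, professional-sounding email for an internal "
    "phishing-awareness exercise. It pretends to come from the IT "
    "helpdesk of a mid-sized Hong Kong finance firm and asks staff to "
    "'verify' their credentials via a [TRAINING LINK] placeholder."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[
        {"role": "system",
         "content": "You help a security team create training material."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

The point of such exercises is that employees learn to spot fluent, well-personalised lures of exactly the kind Leow warns attackers can now produce.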
“Based on the knowledge and data that AI can gather and generate over time, cybersecurity professionals can use it to get more accurate identification of a security system’s risk and vulnerability areas,” Leow said.
“We have to encourage cybersecurity professionals and other industries to make use of ChatGPT itself to improve defences,” he added. “In a way, it is a double-edged sword that will be used for both cybersecurity and cybercrime.”
Some of the threats cybersecurity firms point to remain hypothetical for now. Digitpol, a global provider of digital risk solutions, has warned that AI models can be trained to quickly write malware and malicious code that bypasses security filters and detection signatures.
Leow said he is unsure whether AI-generated malware has been used in recent attacks, but noted that currently available security measures can only reduce, not eliminate, the risk from such highly sophisticated threats.
“We will have to hope that the owners of ChatGPT and other generative AI models will do everything they can to minimise the chances of abuse by bad actors,” he said.
The terms of service from ChatGPT creator OpenAI prohibit the use of its technology for illicit purposes, and the company has technical safeguards in place. But there is a risk that bad actors could bypass ChatGPT’s filters, according to Leow.
Dark web forum posts identified in a January report from Check Point Research indicated that cybercriminals have figured out how to manipulate ChatGPT into producing basic but viable malware.
OpenAI did not respond to a request for comment.
Cybercrime is expected to cost US$8 trillion globally this year in damages that include stolen funds, property loss and reduced productivity, according to a report from Cybersecurity Ventures.
The losses would be larger than the gross domestic product of every country except the US and China.
In the face of this threat, cybersecurity experts will keep adapting their defences in what is becoming an AI arms race, according to David Fairman, Asia-Pacific chief information officer for Netskope, who previously held top security roles at global banks including Royal Bank of Canada and JPMorgan Chase.
“In the coming years and months, we will see security teams effectively embracing AI to improve threat identification and automate much of the defence process,” he said. “AI is commonly used in many of the latest cybersecurity products that are used by security teams today, and we will see this continue to evolve.”
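To make Fairman’s point concrete: one common AI building block in security products is anomaly detection, which flags events that deviate from a learned baseline. The toy sketch below is not any vendor’s actual method; it uses a scikit-learn isolation forest on synthetic login telemetry, and every feature and number is invented for illustration.

```python
# Toy illustration of AI-assisted threat identification: an isolation
# forest learns a baseline from "normal" logins, then flags outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic normal logins: [hour of day, MB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(13, 2.5, 500),   # mostly business hours
    rng.normal(20, 5, 500),     # modest data transfer
    rng.poisson(0.2, 500),      # failed attempts are rare
])

# A few suspicious events: 3am logins, large transfers, many failures
suspicious = np.array([[3, 450, 9], [2, 600, 12], [4, 520, 7]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for outliers and 1 for inliers
for event, label in zip(suspicious, model.predict(suspicious)):
    status = "ALERT" if label == -1 else "ok"
    print(f"{status}: hour={event[0]:.0f} mb={event[1]:.0f} fails={event[2]:.0f}")
```

Production systems combine many such signals with far richer features, but the underlying idea of automated, learned baselines is the kind of AI-driven defence Fairman describes.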