Researchers at Check Point used ChatGPT and Codex, both of which accept plain-English instructions, to generate code that could be used in spear-phishing attacks.
Cybersecurity firm Check Point said in a research note published on Tuesday that OpenAI’s ChatGPT, a text generator built on a large language model (LLM), can be made to produce code for malicious tasks. To build code for launching spear-phishing attacks, Check Point’s researchers used ChatGPT together with Codex, another OpenAI model that translates standard English instructions into code.
The most significant risk posed by AI code generators is that they lower the barrier to entry for malicious hackers. Because these natural language processing (NLP) tools require no prior coding experience, anyone with access to the open web can describe the logic of a malicious tool in plain English and have the generator produce working code for it.
Check Point’s demonstration illustrated the problem by showing how a phishing email template was created with the AI code generator and then refined through further plain-English instructions. Because the researchers were able to assemble an entire attack campaign this way, any user with malicious intent could plausibly do the same.
Sergey Shykevich, threat intelligence group manager at Check Point, said that programs like ChatGPT have the “potential to significantly alter the cyber threat landscape.”
“Hackers can also iterate on malicious code with ChatGPT and Codex. AI technologies represent another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities,” he added.
While openly accessible language models can be useful for building cyber defense tools, the lack of safeguards against their use in building malicious ones is cause for concern. Check Point noted that although ChatGPT sometimes warns that such requests are “against our policy,” no technical restrictions actually prevent the platform from being used to develop hacking tools.
The potential for abuse of AI language and image-generation services is not new. Lensa, an AI-based image editing tool from US-based Prisma, drew similar attention when its lack of filtering around body imagery and nudity was shown to allow explicit images of a person to be generated without their consent.