OpenAI has disrupted more than 20 adversarial operations leveraging its ChatGPT service for tasks including malware debugging, target reconnaissance, vulnerability research and generation of content for influence operations, the company revealed in a report published Wednesday.
The generative AI (GenAI) company also uncovered a spear-phishing campaign targeting its own employees, conducted by a threat actor that additionally used ChatGPT for various tasks. Several case studies of threat actors found to be using ChatGPT are outlined in the report, along with lists of tactics, techniques and procedures (TTPs) and indicators of compromise (IoCs) for some of the attackers.
Overall, OpenAI reported that the use of ChatGPT by cyber threat actors remained limited to tasks that could alternatively be performed using search engines or other publicly available tools, and that few of the election-related influence operations leveraging ChatGPT scored higher than Category Two on the Brookings Institution’s Breakout Scale.
“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report states.
CyberAv3ngers use ChatGPT to research default credentials for ICS devices
One of the known threat actors identified in the OpenAI report is CyberAv3ngers, a group suspected to be affiliated with the Iranian Islamic Revolutionary Guard Corps (IRGC). CyberAv3ngers is known to target critical infrastructure including water and wastewater, energy and manufacturing facilities, especially in the United States, Israel and Ireland.
OpenAI discovered the group using the ChatGPT service to research information on industrial control systems (ICS) used in critical infrastructure, including by searching for default credentials for Tridium Niagara and Hirschmann devices.
The threat actors also researched vulnerabilities in CrushFTP, the Cisco Integrated Management Controller and Asterisk Voice over IP software. Additionally, they sought guidance on how to create a Modbus TCP/IP client (a minimal illustration appears below), debug bash scripts, scan networks and ZIP files for exploitable vulnerabilities, and obfuscate provided code, among other inquiries related to detection evasion and post-compromise activity.
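The Modbus request in particular illustrates OpenAI’s point that such queries add little beyond public resources: the protocol is openly specified, and a basic client fits in a few lines. The following Python sketch is written against the published Modbus TCP spec; the host address and unit ID are illustrative placeholders, not values from the report.

```python
import socket
import struct

# Minimal Modbus TCP "read holding registers" (function 0x03) client,
# built from the public protocol specification. Modbus TCP conventionally
# listens on port 502; 192.0.2.x is a reserved documentation address range.
HOST, PORT, UNIT_ID = "192.0.2.10", 502, 1  # placeholders for illustration

def read_holding_registers(start_addr: int, count: int) -> list[int]:
    # PDU: function code (1 byte), start address (2 bytes), quantity (2 bytes)
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0),
    # remaining length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, UNIT_ID)
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(260)
    # Response: 7-byte MBAP, function code, byte count, then register values
    byte_count = resp[8]
    return list(struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count]))

if __name__ == "__main__":
    print(read_holding_registers(0, 2))
```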
The report noted that the activity on CyberAv3ngers’ OpenAI accounts, which have since been deleted by OpenAI, suggested the group may be seeking to target industrial routers and programmable logic controllers (PLCs) in Jordan and Central Europe, in addition to its usual targets.
OpenAI stated that the interactions between CyberAv3ngers and ChatGPT did not provide the threat actors with “any novel capability, resource, or information, and only offered limited, incremental capabilities that are already achievable with publicly available, non-AI powered tools.”
OpenAI employees targeted in spear-phishing malware campaign
The report also revealed a spear-phishing campaign that was conducted against OpenAI employees by a suspected China-based threat actor known as SweetSpecter. OpenAI investigated the campaign after receiving a tip from a “credible source,” finding that the threat actor, posing as a ChatGPT user seeking assistance with errors encountered on the service, had sent emails to both the personal and company accounts of OpenAI employees.
The emails came with a ZIP attachment containing an LNK file that, when opened, would display a document listing various errors to the user; in the background, however, the file would launch the SugarGh0st remote access trojan (RAT) on the victim’s machine.
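A common defensive countermeasure to this delivery chain is to flag archive attachments that contain Windows shortcut files before they reach users. The sketch below assumes attachments are already available as files on disk; the filename in the usage example is hypothetical, and a real mail gateway would inspect far more than file extensions.

```python
import zipfile
from pathlib import Path

# Flag ZIP email attachments that contain Windows shortcut (.lnk) files,
# the lure format described in the SweetSpecter campaign.
SUSPICIOUS_EXTENSIONS = {".lnk"}

def zip_contains_shortcut(attachment: Path) -> list[str]:
    """Return the names of any shortcut files found inside the archive."""
    if not zipfile.is_zipfile(attachment):
        return []
    with zipfile.ZipFile(attachment) as archive:
        return [name for name in archive.namelist()
                if Path(name).suffix.lower() in SUSPICIOUS_EXTENSIONS]

if __name__ == "__main__":
    hits = zip_contains_shortcut(Path("incoming_attachment.zip"))  # hypothetical
    if hits:
        print(f"Quarantine candidate, shortcut files found: {hits}")
```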
OpenAI found that its email security systems prevented the spear-phishing emails from ever reaching the inboxes of company email accounts. Additionally, OpenAI discovered that SweetSpecter was separately using ChatGPT for vulnerability research (including on Log4j versions vulnerable to Log4Shell), target reconnaissance, script debugging and help writing social engineering content.
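For defenders, one widely used heuristic for spotting Log4Shell-exposed Log4j copies is to look for the JndiLookup class inside deployed JAR files. A minimal sketch of that check follows; the scan root is illustrative, and class presence alone does not prove exploitability.

```python
import zipfile
from pathlib import Path

# Heuristic Log4Shell (CVE-2021-44228) exposure check: vulnerable Log4j 2
# builds ship org/apache/logging/log4j/core/lookup/JndiLookup.class inside
# the JAR. A sketch only: it does not unpack nested ("fat") JARs, and
# patched versions may still carry the class in a disabled state.
MARKER = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def find_suspect_jars(root: Path) -> list[Path]:
    suspects = []
    for jar in root.rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as archive:
                if MARKER in archive.namelist():
                    suspects.append(jar)
        except (zipfile.BadZipFile, OSError):
            continue  # unreadable archive, skip it
    return suspects

if __name__ == "__main__":
    for jar in find_suspect_jars(Path("/opt")):  # scan root is illustrative
        print(f"Possible Log4Shell-vulnerable library: {jar}")
```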
Threat actor leaks its own malware code through ChatGPT
In a third cyber operation uncovered in the ChatGPT report, an Iran-based threat actor known as STORM-0817 was found to be developing a new Android malware not yet deployed in the wild.
STORM-0817 provided code snippets to ChatGPT for debugging and development support, revealing a “relatively rudimentary” surveillanceware designed to retrieve contacts, call logs, installed packages, screenshots, device information, browsing history, location and files from external storage on Android devices.
Piecing together information sent to ChatGPT by the threat actor, OpenAI found that STORM-0817 was creating two Android packages containing the malware – com.example.myttt and com.mihanwebmaster.ashpazi. The group was also attempting to use ChatGPT to help develop server-side code to facilitate connections between compromised devices and a command-and-control (C2) server with a Windows, Apache, MySQL and PHP/Perl/Python (WAMP) setup, using the domain stickhero[.]pro for testing.
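Because the report publishes the testing domain as an indicator, defenders can sweep outbound logs for it. The sketch below assumes a plain-text proxy or DNS log (the file name and line format are hypothetical) and refangs the defanged indicator before matching.

```python
from pathlib import Path

# Match the C2-related indicator published in the report against outbound
# DNS/proxy logs. The report defangs the domain as stickhero[.]pro; it is
# refanged here so it can match real log lines.
IOC_DOMAINS = {"stickhero[.]pro".replace("[.]", ".")}

def scan_log(log_path: Path) -> list[str]:
    """Return log lines that reference any indicator domain."""
    hits = []
    with log_path.open(encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(domain in line for domain in IOC_DOMAINS):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for line in scan_log(Path("proxy.log")):  # hypothetical log file
        print(line)
```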
Indicators for the unfinished malware were included in the report, along with information about another tool STORM-0817 was seeking to develop to scrape information from Instagram. OpenAI found that STORM-0817 sought ChatGPT’s assistance in scraping information about the Instagram followers of an Iranian journalist critical of the Iranian government, as well as in translating into Persian the LinkedIn profiles of individuals working at the National Center for Cyber Security in Pakistan.
“We believe our models only offered limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools,” OpenAI concluded.
AI-driven election influence campaigns fail to gain momentum
The report also contained numerous case studies on election-related influence campaigns targeting elections in the United States, Rwanda and the European Union, but noted that none of these campaigns managed to garner significant engagement on social media.
Threat actors based in Russia, Iran, the United States, Israel and Rwanda used ChatGPT to generate content ranging from short replies to longer-form articles aiming to sway political opinion on a range of topics, including upcoming elections.
For example, one US-origin influence network known as “A2Z” generated short comments and stylized political images to post on about 150 accounts on X and Facebook, mostly focused on praising the government of Azerbaijan using fake personas. After the OpenAI accounts associated with A2Z were closed, the affiliated social media accounts stopped posting; the largest following among all of the accounts was just 222 followers at the time the campaign was disrupted.
Another campaign, dubbed “Stop News,” conducted by a Russia-origin threat actor, extensively used OpenAI’s DALL-E image generator to create imagery accompanying social media posts and articles promoting Russian interests. While its social media activity saw little engagement, the report noted that fake news sites produced by the campaign managed to gain some attention through “information partnerships” with a few local organizations in the United Kingdom and United States, and the influence operation was scored as Category Three on the Brookings Breakout Scale.
This latest OpenAI report follows an earlier report published in May that described the use of ChatGPT in five influence campaigns originating from Russia, China, Iran and Israel, as well as the disruption of another Iranian election-related influence campaign leveraging ChatGPT in August.
In February, Microsoft and OpenAI revealed the use of ChatGPT by Russian, North Korean, Iranian and Chinese nation-state threat actors for basic research, scripting and translation tasks, with Microsoft first proposing the integration of large language model (LLM)-related TTPs into MITRE frameworks.