Dive Brief:
- OpenAI said it terminated the accounts of five state-affiliated threat groups that were using the company’s large language models to lay the groundwork for malicious hacking campaigns. The disruption was carried out in collaboration with Microsoft threat researchers.
- The threat groups — linked to Russia, Iran, North Korea and the People’s Republic of China — were using OpenAI’s models for a variety of precursor tasks, including open-source research queries, translation, searching for errors in code and running basic coding tasks, according to OpenAI, the company behind ChatGPT.
- Cybersecurity and AI analysts warn the activity uncovered by OpenAI and Microsoft is just an early sign of state-linked and criminal groups rapidly adopting generative AI to scale their attack capabilities.
Dive Insight:
The threat activity disclosed by OpenAI and Microsoft seems to confirm widespread concerns about the potential abuse of generative AI by malicious threat groups.
Concerns center on the technology’s ability to scale and accelerate attacks faster than network defenders can hunt for potential intruders and take mitigation measures.
“It’s very significant,” said Avivah Litan, VP distinguished analyst at Gartner. “GenAI basically puts the attackers on steroids. They can scale attacks, they can spread their attacks much more quickly.”
OpenAI cautioned, however, that prior red-team assessments showed GPT-4 offered malicious actors only limited, incremental improvements over what they could already achieve with publicly available non-AI tools.
The actors include Russia-linked Forest Blizzard, North Korea-linked Emerald Sleet, Iran-linked Crimson Sandstorm and China-linked Charcoal Typhoon and Salmon Typhoon.
Microsoft said it has not yet observed any uniquely novel methods or significant attacks using large language models, but the company is tracking the activity and pledged to promptly issue alerts about any observed misuse of the technology.
“Actions like those by the state-affiliated groups highlight the need by cyber defenders to leverage AI’s existing benefits as it pertains to cybersecurity and further innovate in the space to ensure that we stay ahead of adversaries,” Brandon Pugh, director of cybersecurity and emerging threats at R Street Institute, said via email.