ChatGPT prompts experts to consider AI’s mark on cybersecurity
The rapid ascendance of ChatGPT sparked a scramble in the cybersecurity space as multiple vendors rushed to embed the generative artificial intelligence tool into products designed to protect organizations from cyberattacks.
Microsoft combined its security network and global threat intelligence capabilities with GPT-4 to launch Security Copilot in March, and Cohesity last week announced plans to integrate its data security products with Microsoft services and OpenAI to better detect and classify threats.
Microsoft is a major backer of ChatGPT maker OpenAI, having invested more than $11 billion in the company to date.
OpenAI is not exclusive to Microsoft. Recorded Future released a tool last week that uses OpenAI’s large language models to help its threat intelligence customers detect threats automatically in real time by identifying breach indicators, vulnerabilities and misconfigurations.
It’s early days for the technology, but as technology vendors race to plug in generative AI, cybersecurity vendors will respond in kind. ChatGPT and generative AI won’t apply, or warrant consideration, in every layer of cybersecurity, but they are widely expected to affect critical areas, including product development and synthesizing disjointed streams of data into insights that spur organizations to action.
“I think we're starting to see the early, early uses of it in the security world,” said Jon France, CISO at (ISC)2.
Security vendors have woven AI and machine learning into products and threat detection processes for decades, France said, and it’s proven quite successful at improving the signal-to-noise ratio, spotting anomalies or patterns at a scale humans can’t match.
ChatGPT and generative AI are expected to strengthen defensive capabilities by assisting with coding and alerting organizations to the most critical vulnerabilities or misconfigurations.
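As a rough illustration of what that kind of assist could look like, the sketch below asks a GPT-4-class model to rank a handful of scanner findings by urgency. It assumes the pre-1.0 OpenAI Python client and an API key in the environment; the findings and the prompt wording are hypothetical, not drawn from any vendor’s product.

```python
# Minimal sketch: ask a GPT-4-class model to triage scanner findings.
# Assumes the pre-1.0 "openai" Python package (openai.ChatCompletion) and an
# OPENAI_API_KEY environment variable; findings and prompt are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

findings = [
    "CVE-2021-44228 (Log4Shell) detected on an internet-facing web server",
    "Storage bucket 'backups-prod' allows public read access",
    "TLS 1.0 still enabled on an internal mail relay",
]

prompt = (
    "You are assisting a security analyst. Rank these findings from most to "
    "least urgent and justify each ranking in one sentence:\n- "
    + "\n- ".join(findings)
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the triage output as repeatable as possible
)

print(response["choices"][0]["message"]["content"])
```

Even in a toy example like this, the model’s output is a starting point for an analyst to verify, not a decision, a caveat the experts below keep returning to.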
The flip side, of course, is that threat actors are also using the technology to craft seemingly authentic phishing lures and test its ability to write malware code.
The potential breakthroughs ChatGPT could achieve in cybersecurity are still largely taking shape. Justin Fier, SVP of red team operations at Darktrace, expects ChatGPT and generative AI to deliver more positive outcomes than negative, but he says it’s still too early to pin down exactly what those outcomes will be or how they’ll arrive.
“Am I seeing the whole market being taken over by it? No. Do I think six months from now, 12 months from now, that’ll probably be a different case? Absolutely," Fier said. "I think we’re all still letting the dust settle.”
Potential use cases for good
ChatGPT may not completely reshape the underpinnings of cybersecurity defense, but many expect generative AI to make its mark in security.
Keeper Security CEO and Co-Founder Darren Guccione expects products like ChatGPT to accelerate the development of new security products and processes.
“Everything is going to remain the same from an A-to-Z perspective, functionally the same, but things are going to catalyze and move much quicker with the advent of this technology as it embeds itself into product development lifecycles,” Guccione said.
Previous AI advancements pulled into cybersecurity tools and practices, such as autonomous threat and vulnerability detection, could be precursors of what’s to come.
Some cybersecurity professionals are already asking ChatGPT questions or prompting the tool to write code, suggesting generative AI could serve as another indicator of health that professionals will have to contextualize for broader use, France said.
Conceptualizing and understanding data will be one of the biggest wins for ChatGPT, but professionals still have to ask the right questions, Fier said. Reaping the value of tools like ChatGPT requires technologists to understand what the technology can and cannot do.
“It’s not a sentient being. It’s not just going to run your [security operations center] for you and make decisions on your behalf,” Fier said.
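Fier’s point about asking the right questions can be made concrete. The hypothetical snippet below, which reuses the same assumed OpenAI setup as the earlier sketch, submits the same SSH log excerpt twice: once with a vague question, and once with a framed one that tells the model what the asset is and what shape the answer should take.

```python
# Illustrative only: the value of a generative AI query depends on its framing.
# Same assumed pre-1.0 "openai" package and OPENAI_API_KEY as the earlier sketch.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

log_excerpt = (
    "Apr 14 02:13:07 sshd[8812]: Failed password for admin from 203.0.113.9\n"
    "Apr 14 02:13:09 sshd[8812]: Failed password for admin from 203.0.113.9\n"
    "Apr 14 02:13:12 sshd[8815]: Accepted password for admin from 203.0.113.9"
)

# A vague question invites a vague answer.
vague = "What do you think of these logs?\n" + log_excerpt

# A framed question supplies context and asks for an actionable answer.
framed = (
    "These are SSH auth logs from an internet-facing jump host. "
    "Identify any sign of credential abuse, state how confident you are, "
    "and suggest one immediate containment step:\n" + log_excerpt
)

for prompt in (vague, framed):
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(reply["choices"][0]["message"]["content"], "\n---")
```

The model call is identical in both cases; only the analyst’s question changes, which is the skill Fier argues security teams will still have to bring.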
Where AI shines in cybersecurity, according to many experts, is by taking on tasks that humans can’t.
“Us mere mortals, we can't really operate at the levels we would need to to work and operate efficiently,” Fier said.
This is where generative AI tools like ChatGPT can step up, by making the more rote or overwhelming parts of cyber defense easier.
“Things are changing so quickly within the environment, that us humans are just not up for that task," Fier said. "It's not our fault, we just can't, we don't have the compute power.”