Google Cloud’s VP and CISO Phil Venables is unequivocally convinced generative AI can and will give defenders an advantage over attackers, and by quite a significant margin in the next three to five years.
Venables and his colleagues at Google argue AI can reverse the defender’s dilemma, a “classic truism, if not cliche, that attackers only have to be right once but defenders have to be right all the time,” he told Cybersecurity Dive.
Google launched the AI Cyber Defense Initiative last month to advance the development of AI for digital security. A corresponding report published at the same time proposes developing autonomous cyber defenses and researching how AI systems can be designed, built and used safely.
Most of the report is visionary, with forward-looking examples that have yet to materialize: AI-integrated defensive systems with access to telemetry and analysis, tools that manage and fix an organization’s attack surface, and systems that learn from global attack data and find vulnerabilities more comprehensively than attackers can.
The structural characteristics of AI fuel Venables’ optimism that foundation models trained on threat data, institutional data and security knowledge, combined with the right tooling, can add up to at least a tenfold transformation in capability and productivity.
“Fundamentally, when you think about what an AI is and does it’s something that’s trained on a set of data and then fine tuned with an organization’s own data, plus a lot of the tuning of the organization’s context and expertise,” Venables said.
“This is exactly the asymmetry in the favor of the defender, because we as defenders can fine tune an AI to be our perfect assistant to the whole of the environment. Whereas an attacker has to be generic across everything,” Venables said.
“By the way, if the attacker had all the data that the organization has then they don’t need to attack anyway because they’ve already won.”
Attackers’ AI use remains narrow, by design
Executives at many cybersecurity firms view generative AI as a viable mechanism to boost defense and lift the performance of their respective businesses, but not everyone is convinced the technology will deliver significant benefits.
Defenders can already reorient systems around resilience and reclaim attacker advantages for themselves with modern software engineering practices, according to Kelly Shortridge, senior principal engineer in the CTO office at Fastly.
“I think it’s pretty much a solution in search of a problem, especially in cybersecurity,” Shortridge told Cybersecurity Dive last year, commenting on generative AI technology broadly. “I think we want to be part of the new hotness, but we don’t really understand how it applies.”
Security researchers have yet to observe evidence of threat actors using generative AI to substantially improve their operations or initiate cyberattacks. But just because AI hasn’t proven effective for attackers to date doesn’t mean the same will hold true for defenders.
“What excites me about the opportunity, particularly in security going forward, is just the ability for us to augment both human operations capabilities to analyze threats more quickly, and to generate more automated defenses ever more quickly,” Venables said.
The “slightly cynical reason” attackers haven’t extensively added AI to their arsenal of tools thus far is because they haven’t had to, Venables said. Threat actors are achieving their goals without AI.
“I like to think that attackers are rational economic actors. They're not going to spin up a whole bunch of activity when they’re being successful every day with their traditional techniques,” Venables said.
Threat actors are using AI to craft more accurate and convincing phishing campaigns, which is the “perfect use case because it can mass produce” messages where attackers get the maximum economic return as they social engineer their way into some form of unauthorized access, Venables said.
Rewards outweigh risks
His predictions for AI in security contrast with the current outlook of his counterpart at AWS. AWS CISO Chris Betz told Cybersecurity Dive last week it’s too early to say whether the advantages afforded by generative AI rest with defenders or attackers.
“There's clearly a long way to go in terms of adoption, integration, fine tuning and making it real inside organizations,” Venables said. “But I definitely think that, medium to long term, it gives defenders a pretty significant structural advantage.”
Flipping the script on security with a big assist from AI requires some degree of in-house expertise, particularly for integrating AI models with an organization’s tools, internal data and context.
“For many organizations, there's still a lot of basic vulnerabilities that have yet to be closed down,” Venables said.
AI can help businesses identify problematic vulnerabilities, he said. The technology can also serve as a guide for proper configurations in cloud or on-premises infrastructure and the right way to construct software.
“The common pattern across all of this is, AI is great at many things when they're acting on their own, but they're typically great at most things when you use them as a complement to a human activity,” Venables said. “You're using it to amplify skills and productivity of an existing human, and I think that’s pretty exciting.”
There are a few signals, or leading indicators, Venables is looking for as evidence of defenders gaining the upper hand with AI:
- Increased productivity of the security team’s efforts
- Expanded sensory coverage with the same amount of people and tools
- Faster resolution of newly known threats or vulnerabilities
If and when these outcomes are realized with AI for cyber defense, the lagging indicators of fewer cyberattacks and security incidents are all but guaranteed to follow, according to Venables.
While Venables has broad concerns about how organizations can safely adopt AI, he doesn’t foresee risks akin to the overcorrected images Google’s next-generation large language model, Gemini 1.5, generated soon after its release in mid-February.
“We're literally just using its very deep structural analysis capability to look at a context of data against a foundation set of knowledge,” Venables said. “You do have to be careful to fine tune its output but you're generally not subject to the same types of risks that you would do if you're just running it as a general image generator or a general chatbot out there being subject to any type of prompt.”