SAN FRANCISCO — Businesses hoping AI can automate away their security woes should think again, because the technology isn’t a cure-all and is actually introducing new risks, experts warned at the RSAC 2026 Conference here.
“We’re seeing advantages [with AI for defense], but we’re also seeing a lot of hiccups as we figure out how to get there,” Adam Pennington, who oversees MITRE’s ATT&CK framework, said during a panel about how AI is changing the push-and-pull between attackers and defenders.
Security teams are using AI in a lot of the same ways as hackers, Pennington said, especially rapid code-writing. “There does need to be some caution, though, in using it directly in defense,” he said. “False positives have always been a problem in trying to apply machine learning and AI to defense.”
The warnings from Pennington and others on the panel come as businesses rush to purchase AI security services, often with seemingly little regard for their efficacy or tradeoffs.
Paul McCarty, the head of security research for the threat intelligence database OpenSourceMalware, said he had heard from many organizations at the conference that they were “shifting budgets from existing security tools to AI because they don't want to be left behind.”
“That is a terrible, terrible policy,” he said. “Do not shift budgets to AI.”
What AI can and can’t do
McCarty said he used to doubt AI’s value for defense but has now “seen the light.” Even so, he added, AI doesn’t make his job easier — if anything, it makes it harder.
“What it does is fill my pipeline,” he said, with new data and reports to analyze. “I am working harder than I ever have, because I just have so much more data, so much more good, high-quality stuff to look at.”
AI can be immensely valuable for that first-stage data collection, McCarty said, but no company should trust it to do more sophisticated work without human oversight.
“If Claude wrote your YARA rules, they’re probably crap,” he said. “You need to go and test them, you need to verify them, you need to fine-tune them.”
Most AI tools “will need humans in the loop for a long, long time, if not forever,” McCarty added. “We’re going to always have to realize that a human has to be there, but we also have to modify these systems to better handle this volume of things that are coming in.”
Creating new risks
Beyond being an imperfect resource, AI is a source of risk in its own right.
“It’s a new surface area that is exposed that we are not thinking about,” said Seeyew Mo, director of training for the DEF CON conference and a former White House cybersecurity staffer.
Pennington agreed that AI, still a relatively immature technology, lacks many of the security guardrails that emerge only after years of evaluation and refinement.
“There [are] a lot of new risks that are coming along with it, especially in the cases where it’s fairly immature technology, where people don’t have the decades of building up, ‘How do you deal with threats against it?’” he said. “Everything from prompt injection to trying to manipulate results coming out of these systems.”
McCarty joked that, in the future, every team in capture-the-flag cybersecurity exercises would have one person dedicated to spinning up AI agents just to launch prompt-injection attacks on competing teams’ agents.
Aiding adversaries, with impact if not sophistication
Many businesses buying AI tools are likely steeling themselves for exquisite AI-powered attacks leveraging never-before-seen techniques. But that’s not what’s actually happening, experts said during the panel.
“We don’t have to look for the super new, crazy things,” McCarty said. “AI is helping [hackers] do the things that are already successful.” One example he offered: AI tools are using speed and scale to find exposed security keys for software development platforms. The problem of those exposed keys isn’t new, but they’re often needles in a haystack, waiting for someone to discover them. “This is all stuff that already existed,” McCarty said, “but AI is just hammering it so much.”
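The scanning McCarty describes is, at its core, pattern matching run at machine scale. As a minimal illustrative sketch (the regexes, threshold of what counts as a key, and file-walking approach here are assumptions for demonstration, not the rule sets any real attacker or scanner uses):

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners carry far larger rule sets.
KEY_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
}

def scan_text(text: str, source: str = "<memory>"):
    """Return (source, pattern_name, matched_text) for every hit in `text`."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((source, name, match.group(0)))
    return hits

def scan_tree(root: str):
    """Walk a directory tree and scan every readable file for exposed keys."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                hits.extend(scan_text(path.read_text(errors="ignore"), str(path)))
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

The point of the sketch is the economics, not the code: the patterns have been findable for years, but an automated sweep across millions of repositories turns the needles-in-a-haystack problem into a volume problem.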
Still, AI’s novel qualities are creating new problems for defenders.
“The speed and pace has become almost unmanageable,” Pennington said. “In some cases, we’re seeing actors move so fast that we’re detecting something and our loop for getting in there and starting to do something about it doesn’t work anymore. The exfiltration’s already happened; they’ve already gotten across a lot of systems.”
One solution to the speed conundrum, Pennington said, is to refocus on resilience instead of betting so heavily on intrusion prevention. “It’s important that we start getting a little bit better at some of our fundamental controls, some of our mitigations. … If you can’t deal with the speed, you have to make sure that [attacks] don’t work in the first place.”
And on the detection front, AI’s unnaturally fast operations could actually help businesses modernize their detection processes to confront new attacks.
“The reason why so many of these companies are fairly sure that AI is being used in generating activity against them is that they’re seeing … lateral movement happen too fast to be a human or a team of humans,” Pennington explained. “That may give us an opportunity to do some tuning on our existing stuff.”
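One simple form of the tuning Pennington hints at is flagging host-to-host pivots spaced closer together than a human operator could plausibly act. A sketch of that heuristic, where the event format and the 30-second threshold are illustrative assumptions rather than any vendor's actual detection logic:

```python
from datetime import datetime, timedelta

# Illustrative tuning knob: moves closer together than this are
# treated as too fast for a human at a keyboard.
HUMAN_MIN_PIVOT = timedelta(seconds=30)

def flag_fast_pivots(events):
    """Flag consecutive lateral moves spaced closer than a human could act.

    `events` is an iterable of (datetime, source_host, dest_host) tuples,
    assumed sorted by time and tied to a single session or credential.
    """
    flagged = []
    prev_time = None
    for when, src, dst in events:
        if prev_time is not None and when - prev_time < HUMAN_MIN_PIVOT:
            flagged.append((when, src, dst))
        prev_time = when
    return flagged

events = [
    (datetime(2026, 1, 1, 9, 0, 0), "ws-01", "srv-01"),
    (datetime(2026, 1, 1, 9, 0, 2), "srv-01", "srv-02"),  # 2s later: machine speed
    (datetime(2026, 1, 1, 9, 5, 0), "srv-02", "db-01"),   # 5 min later: plausibly human
]
```

Run against the sample events, only the two-second pivot is flagged, which is exactly the self-degrading tell Pennington describes: speed that helps the attacker move also makes the activity stand out.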
“Some of the stuff that adversaries have gotten good at, in terms of evading defenses — they may actually be degrading themselves,” he added.