Dive Brief:
- As cyberattacks against major companies become more sophisticated, a growing number of IT security leaders are worried about the potential use of artificial intelligence by cybercriminals, according to a report released Thursday by MIT Technology Review Insights on behalf of cybersecurity firm Darktrace.
- The report — based on a survey of 300 global executives, directors and managers — shows that 60% of respondents say human responses are failing to keep up with the pace of cyberattacks, while 96% have in some way begun to protect their companies against AI-based threats.
- About 75% of respondents were most fearful of email threats, while 68% were most concerned about advanced spear phishing and impersonation threats.
Dive Insight:
The report highlights a larger debate about the role of automation in the development of ever more sophisticated campaigns against major U.S. companies, security firms and critical infrastructure.
"We know AI can be used for good, but it can be also used for malicious intent," Justin Fier, director of cyber intelligence & analytics at Darktrace. "Offensive AI attacks allow criminals to contextualize relevant information, scale up operations, self-propagate to blend in with regular operations."
AI also makes attribution and detection more difficult, he said.
The report cited the nation-state attack against SolarWinds, which was uncovered in December 2020, and the Oldsmar, Florida, water system hack in February as examples of recent attacks where traditional security tools failed to detect the threat ahead of time.
During the Oldsmar incident, unidentified threat actors gained access to the supervisory control and data acquisition (SCADA) system at a water treatment facility and altered dosing amounts of sodium hydroxide before a plant operator noticed the change. The attackers took advantage of flaws in desktop sharing software, and the facility was running an outdated Windows 7 operating system.
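For illustration, the kind of automated guardrail that can catch such a change is conceptually simple; the Python sketch below flags any setpoint change that falls outside a safe operating band. The `check_setpoint` helper and the dosing limits are invented for the example, not details of Oldsmar's actual systems.

```python
# Hypothetical illustration (not Oldsmar's actual software): a basic
# guardrail that flags setpoint changes falling outside a safe band.

SAFE_NAOH_PPM = (50.0, 200.0)  # invented safe dosing range, parts per million

def check_setpoint(name, old_value, new_value, safe_range):
    """Return True if the new setpoint is within the safe band, else alert."""
    low, high = safe_range
    if low <= new_value <= high:
        return True
    print(f"ALERT: {name} changed {old_value} -> {new_value}, "
          f"outside safe band [{low}, {high}]; blocking and paging operator")
    return False

# A tampered value far above the band trips the alert immediately.
check_setpoint("sodium_hydroxide_ppm", 100.0, 11_100.0, SAFE_NAOH_PPM)
```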
"Bad actors have always used automation to 'spray and pray' cyberattacks to find vulnerable targets and/or pernicious new attack signatures," Mike Gualtieri, vice president and principal analyst at Forrester Research, which was not involved in the report. "AI enables cyberattackers to test attack hypotheses faster by using machine learning."
The use of AI in cybersecurity has been discussed in conceptual form for many years. Last year, Gartner researchers cited AI security as a Top 10 strategic technology trend, noting that attackers were beginning to use machine learning and other AI techniques to develop their attack methods. In response, the researchers said, companies need to use AI and machine learning to anticipate potential attack techniques.
"The best approach is a robust layered security approach including threat hunting, endpoint and network security, user and entity behavior analytics and much more — that themselves leverage AI to protect the enterprise and detect the attacks," said Avivah Litan, distinguished VP analyst at Gartner, via email.
While many in the industry have discussed the potential for AI in cybersecurity, not all have been convinced.
Kevin O'Brien, co-founder and CEO of GreatHorn, considers nearly all of the talk about AI and cyberattacks to be marketing hype designed to sell new security technologies to the enterprise customer. He said AI is a buzzword that primarily "lives in the fever dreams of marketers" looking to exaggerate the capabilities of existing machine learning algorithms.
"The fact is that most cybersecurity breaches are hand-crafted by incredibly targeted threat actors," he said.