Dive Brief:
- Artificial intelligence could change the scale and scope of hacking in the near future, as new technologies advance hacking from a human activity to one that could be dominated by computers, Bruce Schneier, a security technologist, researcher and lecturer at the Harvard Kennedy School, said during a virtual keynote Monday at the RSA Conference 2021.
- When AIs are able to discover new software vulnerabilities in computer code, it will be a major boon to hackers everywhere, he said. They will be able to exploit those vulnerabilities to hack computer networks on a global scale, far beyond human capability.
- However, AI could also be useful for defense, he said. A software company could identify and patch all automatically discoverable vulnerabilities in its software code before releasing that product for sale.
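The discover-then-patch loop Schneier describes can be sketched with a toy example: a naive random fuzzer hammering a deliberately buggy function and collecting the inputs that crash it. Everything below is invented for illustration; real automated discovery tools, let alone future AI systems, are far more sophisticated.

```python
import random

def parse_record(s: str) -> int:
    """Toy target with a planted bug: assumes every record has two fields."""
    parts = s.split(",")
    return int(parts[1])  # crashes on records with fewer than two fields

def fuzz(target, trials: int = 500, seed: int = 0):
    """Naive random fuzzer: feed random inputs, collect the ones that crash."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice("0123456789,") for _ in range(rng.randint(0, 6)))
        try:
            target(s)
        except Exception as exc:
            crashes.append((s, type(exc).__name__))
    return crashes

found = fuzz(parse_record)  # each entry is a discoverable vulnerability
```

In this sketch, the defensive use Schneier points to amounts to running a loop like this in-house and fixing every crash it surfaces before the software ships.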
Dive Insight:
AI is becoming part of everyday decisions in society, but there is a growing risk that the technology will outpace humans' ability to manage it, Schneier said. Organizations are using AI for everything from screening job applicants to approving loan applications to deciding who gets accepted or rejected for college entry or government benefits.
Schneier compared hacking to the current tax code. While not computer code, the tax code is essentially a series of algorithms, which include inputs and outputs. The tax code has vulnerabilities called tax loopholes, as well as exploits labeled as tax avoidance strategies.
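The analogy can be made concrete with a toy "tax algorithm." All rules and figures below are invented for illustration and are not drawn from the talk:

```python
# Toy illustration of the tax code as an algorithm with inputs and outputs.

def tax_owed(income: float, donations: float) -> float:
    """Naive rule: deduct donations from taxable income, flat 20% rate."""
    taxable = max(income - donations, 0)
    return 0.2 * taxable

# The "loophole": the deduction is uncapped, so declaring donations equal
# to income zeroes out the bill -- permitted by the rules, but unanticipated
# and unwanted by their designers.
exploit = tax_owed(100_000, 100_000)  # -> 0.0

def tax_owed_capped(income: float, donations: float) -> float:
    """Closing the loophole: cap the deduction at 50% of income."""
    deduction = min(donations, 0.5 * income)
    return 0.2 * (income - deduction)
```

Closing the loophole, like patching software, changes the rules so the exploit no longer works.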
"And there's an entire industry of Black Hat hackers, looking for exploitable vulnerabilities in the tax code," he said. "We call them tax accountants and tax attorneys."
Based on that reasoning, Schneier offered two definitions of what a hack really is:
- "Something that a system permits, but is unanticipated and unwanted by the designers," he said.
- "A clever, unanticipated exploitation of a system" which subverts the rules of the system at the expense of some other part of the system.
Security researchers and CISOs have debated the rewards and potential risks of AI more frequently in recent years, as cyberattacks have become far more sophisticated and nation-states have increasingly used new technologies to attack government and private sector computer systems on a scale never before seen.
Researchers have raised concerns among IT security leaders about the potential use of AI by cybercriminals. Sixty percent of executives say human responses are failing to keep pace with cyberattacks, according to a study released in April by MIT Technology Review Insights on behalf of Darktrace.
The concern with AI is that it doesn't incorporate context, moral values or restraint the way humans do, Schneier said. It simply works to find a way to exploit a loophole and will attempt unusual methods of achieving that goal.
"So while a world filled with AI hackers is still science fiction, it's not stupid science fiction," he said. "And we better start thinking about its implications."