AI has hit an inflection point. For years it lingered beneath the surface, useful to many technologies and innovations, but it was controlled by engineers and computer scientists. Machine-driven tools improved cybersecurity systems by allowing AI to handle the most tedious, repetitive tasks.
Then came generative AI, with OpenAI's ChatGPT and other chatbots.
Now AI is available to everyone, those with both good and bad intent. While the adoption of AI language models is an exciting push forward, it has also highlighted the limitations of the technology, according to Vijay Bolina, CISO with Google DeepMind, which researches and produces AI technology.
We’re seeing things like distributional bias and AI hallucinations, Bolina told an audience at RSA Conference 2023 in San Francisco in April. This will force organizations to come to terms with the ethical standards of AI, and the lack of a responsible or trustworthy AI creates new security risks.
As organizations learn more about the ethics surrounding generative AI and how the technology will impact everything from customer interaction to business operations and cybersecurity, there is still a lot of uncertainty around what the overall impact will be today and in the future.
Merging of ethics and security
There is a misconception that when AI is sharing incorrect information, whether purposefully or by accident, it is automatically a security problem. But that’s not the case.
Ethics and security aren’t the same, Rumman Chowdhury, co-founder of Bias Buccaneers, told an audience at RSA.
“There’s one very specific distinction: Most of cybersecurity thinks about malicious actors, but a lot of irresponsible AI is built around unintended consequences and unintentionally implementing bad things,” said Chowdhury.
Disinformation is a good example of this. Bad actors will create a malicious deepfake — and a security problem — but if people are sharing them because they believe the information, now you’ve moved into an ethics problem.
“You have to address both problems,” said Chowdhury. An ethics approach will focus on the context on how something is used, but the security approach is meant to flag any potential problem.
AI red teams
Organizations regularly use red and blue teams to help find points of weakness in the network infrastructure. Red teams go on the offensive and simulate attacks, while the blue team’s job is to defend the organization’s assets from these attacks.
Organizations like Microsoft, Facebook and Google now utilize AI red teams, and the trend is gaining popularity as cybersecurity analysts turn to AI red teams to investigate vulnerabilities in AI systems. They are useful for anyone who is working with large computational models or general purpose AI systems that have access to multiple applications, said Bolina.
“It’s an important way to challenge some of the safety and security controls that we have, using an adversarial mindset,” said Bolina.
The red teams should have a mix of cybersecurity and machine learning backgrounds to work together to understand what vulnerabilities in AI would look like. The problem with building an AI red team is the lack of skilled AI cybersecurity professionals.
And yet, AI — or more specifically, machine learning — can help solve the talent shortage, according to Vasu Jakkal, corporate vice president with Microsoft Security Business and speaker at RSA.
Generative AI can become an ally for new security professionals who may otherwise feel overwhelmed. For more seasoned security analysts, generative AI offers time to develop their skills through automation of repetitive tasks. They can integrate their experience and expertise into the AI tool, essentially sharing those skills with someone who lacks them.
“Imagine if a tier 1 SOC analyst, security operation center analyst, who is just starting out had AI with them to help them learn about investigation or reverse engineering or threat hunting, without any other help, and just learn with the tool,” Jakkal said.
Where AI can harm security
One of the dangers of generative AI is knowing the origin of information. There are few safeguards in place right now; AI hallucinations, when the technology provides incorrect information, can create real security risks.
Sometimes generative AI is careful about what it will not share or it doesn’t have enough information to produce a complete answer, and this often results in bias in its answers, according to Chowdhury.
Security teams need to consider how large language models are trained to not only provide correct information, but also to not disclose sensitive or regulated data.
Of course, there are no perfect security models, so AI security should be built with the future in mind. Inevitably what AI is taught today will be wrong in the future. That could end up creating security risks if organizations aren’t prepared for the shifts in language and technology.
AI is always learning. It has the power to absolutely change the game of security, to tip the scales in favor of defenders — and it will, said Jakkal.