Editor’s note: The following is a guest post from Anton Chuvakin, security advisor at the Office of the CISO at Google Cloud.
AI presents a fascinating paradox for security leaders — it's a powerful technology promising immense benefits, but also a minefield of risks, new and old. To truly harness its potential, these lingering risks need to be proactively addressed with effective risk management strategies.
By implementing guardrails that combine human oversight, strong underlying security architecture, and technical controls — backed by a carefully refined cyber strategy — organizations can reap bigger benefits from AI. Let’s take a closer look at each of these strategies.
1. Build guardrails to ensure secure and compliant AI
To start, organizations need to use existing risk and governance frameworks as a foundation to build AI-specific guardrails.
Security teams should review and refine existing security policies, identify and mitigate new threat vectors introduced by generative AI, refine the scope of risk oversight, and update training programs to keep up with the rapid advancements in AI capabilities.
In essence, a critical review of current security capabilities can provide the groundwork for the required AI policies.
Human involvement remains crucial in overseeing AI systems that the organization builds and operates, and for establishing effective frameworks. A "human-in-the-loop" approach can help mitigate risks and promote responsible AI use across three key areas:
- Assessing the risks of AI usage: Ranking the risks of AI use cases by categorizing them by factors like data sensitivity, impact on individuals, and importance to mission-critical functions, which can help assess consequences and uncertainties linked with business decisions around AI.
- Technical or operational triggers: Once risks are identified and ranked, security teams should implement technical or operational triggers that require human intervention for critical decisions.
- AI Do’s and Don’ts: To mitigate the risk of unauthorized use of generative AI tools (such as “shadow AI”), organizations should create an Acceptable Use Policy, building an agreed upon “do’s and don'ts” of how an organization and its employees will use AI in the work environment.
2. Prioritize security architecture and technical controls to support AI
To implement secure AI, it’s necessary to have infrastructure and application-level controls that support AI security and data protection. This involves prioritizing security architecture and employing technical controls using the infrastructure/application/model/data approach:
- Building a secure infrastructure: Bolster security with traditional measures like network and endpoint controls, and prioritize updates to address vulnerabilities throughout the AI supply chain.
- Prioritizing application security: Embed secure development practices into your workflow, use modern scanning tools, and enforce strong authentication authorization measures. While some focus on AI-specific issues like prompt injection, a classic SQL injection can “get” them all the same.
- Securing the AI model: Train models to resist adversarial attacks, detect and mitigate bias in training data, and conduct regular AI red team exercises to identify problems. Models are also both very portable and very costly to create, and subject to theft by malicious actors. Test the model, and subsequently, guard the model.
- Implementing data security: Enact robust protocols including encryption and data masking, maintain detailed data records for integrity, and enforce strict access controls to protect sensitive information. Focus on training data provenance, model inputs/outputs, and other related data sets.
By prioritizing and enforcing these measures, organizations can help ensure the security of their AI systems and data.
3. Expand your security strategy to shield AI from cyber threats
A live, constantly refined strategy is essential to mitigating cybersecurity threats against AI as the field changes very rapidly. That is why it’s important to build strong and resilient defenses, as outlined in Google's Secure AI Framework.
When building a resilient cyber strategy to cover AI systems, organizations need to understand the risks of AI — including attacks on prompts, training data theft, model manipulation, adversarial examples, data poisoning and data exfiltration.
They should also explore using AI for security such as for their threat detection and response initiatives.
Finally, organizations should build cyber resiliency through a comprehensive incident response plan that addresses AI-specific issues and outlines clear protocols for detecting, containing, and eradicating security incidents involving AI. Doing this will equip organizations with the right education and tools to safeguard their AI deployments against evolving cyber threats.
In navigating the complex landscape of AI, security leaders are tasked with balancing rapid technology advancement and increased risk. By adopting a multilayered approach that combines strong safeguards, human oversight, technical security controls and a proactive threat defense strategy, organizations can set themselves up for a secure and innovative future.