Generative AI has created unprecedented challenges for CISOs, increasing the complexity of enterprise environments while providing bad actors with the tools to create more sophisticated attacks.
Security leaders now face the dual task of mastering this evolving technology and integrating AI into their defense strategies, all while ensuring the security and privacy of AI tools.
The data shows that these challenges are top of mind for CISOs. In Tines’ survey on AI adoption, 94% of respondents said they are concerned that AI will increase pressure on their teams.
But how are leading CISOs leveraging AI today? Are they feeling optimistic or underwhelmed by AI’s impact so far?
These are some of the questions I posed to Mandy Andress, CISO of Elastic, and Matt Hillary, VP of Security and CISO of Drata, during our webinar, How to make AI an accelerator, not a blocker.
Today, I’m sharing seven key takeaways from our conversation, including some valuable insights for forward-thinking security leaders.
1. AI is already providing security teams with significant benefits
Both CISOs I interviewed said AI is helping them reduce repetitive and manual tasks, such as responding to large volumes of security alerts.
Elastic’s CISO Mandy Andress told me, “We could automate bringing in asset data, owner data, application criticality to the business, IoCs, etc. using today's tools. But what we couldn't always do was tie that into what's happening in the threat environment around us, because that's always changing. Having some of that accessible via an LLM allows you to apply better context in a world that's changing quickly.”
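To make that pattern concrete, here is a minimal sketch (not from the webinar, and not tied to any specific product) of what alert enrichment plus LLM-supplied threat context can look like. The field names, the asset inventory, and the call_llm helper are all illustrative placeholders you would swap for your own data sources and approved LLM provider.

```python
# Illustrative sketch: enrich a raw alert with asset context, then ask an LLM
# for threat context. All names and fields here are hypothetical placeholders.

ASSET_INVENTORY = {
    "web-prod-01": {"owner": "platform-team", "criticality": "high"},
    "dev-sandbox-3": {"owner": "eng-tools", "criticality": "low"},
}

def enrich_alert(alert: dict) -> dict:
    """Attach asset owner and business criticality to a raw alert."""
    asset = ASSET_INVENTORY.get(alert["host"], {})
    return {
        **alert,
        "owner": asset.get("owner", "unknown"),
        "criticality": asset.get("criticality", "unknown"),
    }

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM endpoint your organization has approved."""
    raise NotImplementedError("Wire this to your approved LLM provider.")

def summarize_with_context(alert: dict) -> str:
    """Combine the enriched alert with a prompt asking for current threat context."""
    enriched = enrich_alert(alert)
    prompt = (
        "Given this security alert and current threat activity, explain the "
        f"likely impact and recommend next steps:\n{enriched}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    raw_alert = {"host": "web-prod-01", "rule": "suspicious_powershell", "ioc": "203.0.113.7"}
    print(enrich_alert(raw_alert))
```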
2. Ensuring the quality and security of AI’s output remains a challenge
Both CISOs were concerned about visibility into the “black box” of AI.
But as CISO of the leading platform for search-powered solutions, Andress said she was encouraged to see security teams being vigilant about the security and privacy features of their AI tools. “I see a desire for more transparency in the AI space,” she said. “From a product perspective, it's about being explicit and letting customers use what works best and what's approved by them and helps their environment. It’s not us dictating what needs to be there.”
3. An AI steering committee can help organizations ensure strong governance
To help govern AI usage, both security leaders suggested forming cross-functional AI committees.
Andress explained, “It's representation from technology, from security, from legal, compliance, business and bringing all of those perspectives together. I think some companies will put accountability on a Chief AI Officer, but they'll still bring together these same groups to understand what we need to watch out for, and what ideas we have for utilizing AI in the business.”
4. AI is helping bad actors with phishing (but not much else)
While Andress and Hillary have concerns about AI, they haven’t seen it significantly change cybercriminal tactics.
Hillary explained that, while bad actors use AI, human creativity remains the biggest threat. “There are still humans behind these [phishing emails and deep fakes], creating content, creating misinformation. I think they have much greater impact, as far as what might hurt us in the long term.”
5. The pressure to adopt AI is real
Security leaders are facing significant pressure from leadership and employees to adopt AI. Executives hope AI will “supercharge” their organizations, while employees from across teams are eager to use AI in their roles.
As CISO of security and compliance automation platform Drata, Hillary knows how big this challenge is. “AI has added a whole new domain to the already extensive list of things that CISOs have to worry about today,” he said. “There’s lots of additional domain-level knowledge that we'll need to increase on our teams and individually.”
6. CISOs are excited by AI’s potential to help them strengthen defenses
When asked about the future of AI, both security leaders hoped AI could become the connective tissue between the tools (76 on average) that their teams use to protect their environments.
Hillary shared, “One thing I haven't seen yet, but I'm excited for as a CISO, is asking an LLM, ‘How is my posture? What are the things that are exposed today that weren't yesterday? Give me a dossier of my own perimeter that a hacker might use to come at me.’”
Products like the newly released Tines Workbench, an AI chat interface that allows practitioners to access proprietary data and take action in real time, are already helping teams achieve this kind of visibility.
7. Keeping humans in the loop is critical to a security team’s success
Both CISOs I spoke with are eager to find the right balance between leveraging AI and maintaining human oversight.
“I've always been biased towards the automation of problems,” Hillary says. “But you still have humans to herd the bots, right? The creativity, the inspiration, the thinking out of the box, all the things that we bring as humans, I don't think they’re going to be materially replaceable. But AI is going to increase our capability and capacity on the automation side, more than I think we've seen before.”
To learn more about how leading CISOs are approaching AI, read the full results of Tines’ survey.