SAN FRANCISCO — The AI era is forcing critical infrastructure sectors to rethink how they share information, weighing the risks and opportunities of advanced technology in situations where critical services are on the line.
“When we talk about [how] people's money or lives are at stake, that's where things get very, very real very quickly,” Pam Lindemoen, the chief security officer and vice president of strategy at the Retail & Hospitality Information Sharing and Analysis Center (RH-ISAC), said during a panel at the RSAC 2026 Conference here on Monday.
Speaking alongside representatives from two other ISACs, Lindemoen said the information-sharing groups were scrambling to understand how AI could help them better analyze and distribute threat intelligence, a vital service at a time when nation-state and cybercriminal hackers are becoming more sophisticated and aggressive.
“How do we balance new technology, getting things faster, without diluting the message and without it becoming noise?” Lindemoen asked.
If automated threat information-sharing led to lower-quality output, she warned, critical infrastructure organizations would lose trust in their ISACs.
“If we lose that center of trust,” she said, “that's where I think things are going to go really poorly for us and for our peers in the industry as well.”
Her colleagues from the other ISACs agreed.
“People have to see the value in the information that's being shared,” said Denise Anderson, the president and CEO of the Health-ISAC.
John Denning, the CISO of the Financial Services Information Sharing and Analysis Center (FS-ISAC), added that “trust underpins everything, but the thing that makes it valuable is converting trust into action.” He said his ISAC was focused on offering implementable advice to members.
The ISAC leaders’ conversation about preserving trust in the AI era comes as their organizations still strive to convince members to open up and share information beyond what they may initially find comfortable.
Anderson recounted the early days of the FS-ISAC, when banks and other institutions were reluctant to attach their name to the information they shared, which in turn discouraged others from sharing. “We made it easier for people to share,” she said, “and when they actually saw who was sharing, they shared more.”
AI alarm bells ring
All three ISAC leaders acknowledged that AI automation could jeopardize the quality of the threat intelligence streaming across their platforms, but RH-ISAC’s Lindemoen was the most vocal skeptic of the technology.
“When I’m in circles amongst friends,” she said, “we’re talking about the collateral damage that having AI managing your workloads could have at scale.” She expressed particular concern about AI analysis of threat data breaking the “chain of custody” of that information, undermining participants’ confidence in the integrity of the data.
“That's what we have to think about as practitioners — how we put guardrails around what access [we give to AI], what level, and its scale and its speed,” Lindemoen said. “And I'm just not seeing anything on the market right now that people trust.”
Critical infrastructure organizations are “thinking about [AI] very optimistically,” she added, “but cautiously, which I think is important in our business.”
Limited embrace of AI
Some ISACs, however, have found ways to improve their work using automation.
The Health-ISAC is looking into whether AI can speed up its distribution of threat alerts and its collation of open-source intelligence for daily reports. Such uses involve “taking a lot of the noise out of the work and making it much more efficient,” Anderson said.
In addition to leading the Health-ISAC, Anderson chairs the National Council of ISACs, and she said the council was considering setting up a cross-sector AI working group to exchange information and best practices about how to responsibly use the technology.
However ISACs end up using AI, the panelists agreed that the groups need to maintain their spirit of mutual aid.
“If you're a big enterprise and you're part of a community, I almost feel like it's your duty to help the smaller teams,” Lindemoen said. Just as critical infrastructure organizations banded together to figure out the best ways to adopt cloud technology in an earlier era, she said, they will also coalesce around strategies for safely using AI.
Anderson offered a real-world example of that collective spirit.
When Iran-linked hackers launched distributed denial-of-service attacks on U.S. banks in 2012, they accidentally targeted a small Midwestern community bank that shared the name of a large financial institution. After the hackers “took them down hard,” Anderson recalled, FS-ISAC members sent personnel to the bank and helped it restart operations.
“That's the power that we have as a community to help each other out,” Anderson said, “because one day that could be you on the ropes.”