Discover Financial Services is taking a calculated approach to generative AI.
From experiments and pilots to use cases across the business, the financial institution evaluates how best to use generative AI by assigning specific guardrails based on risk. The process enables adoption with a clear view of value, helping the company prioritize projects whether the technology is customer-facing or intended for back-office tasks.
The approach also grants Discover more protection from the outsized risks generative AI brings.
“All of that is meeting our standards, expectations and our policies around that, but it’s still ‘human in the loop,’” Discover CIO Jason Strle told CIO Dive. “That’s a really big part of how we mitigate that risk, [and] that will last for a certain period of time.”
Discover’s risk reduction strategy closely follows the guidance laid out by the National Institute of Standards and Technology, which finalized its generative AI risk management framework in July.
“The NIST AI risk management framework is very, very consistent with financial risk management, non-financial risk management or the operational risk management that banks need to do,” Strle said. “The pattern is very familiar.”
As enterprises approach generative AI with caution, NIST’s risk mitigation guidance is a jumping-off point for businesses trying to determine the best place to start as the technology rapidly evolves. Even as leaders are eager to reap the potential rewards of wide-reaching, large-scale generative AI integration, they are prioritizing efforts to avoid missteps and shape holistic adoption plans.
The popularity of the NIST framework is not coincidental. The government agency has worked for years to fortify standards for cybersecurity, which are recognized broadly, and is now setting the stage to become the standards body for generative AI, too.
An abundance of options
For Discover, Strle distilled NIST’s voluntary framework into three steps (illustrated in the code sketch after the list):
- Identify where capabilities create risk.
- Prove the organization understands how to quantify and mitigate the risk.
- Monitor on a daily basis.
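In code, that cycle might look something like the minimal sketch below. It is an illustration only, not Discover's or NIST's actual tooling; the class, fields, scoring and the 0.2 tolerance are all hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class GenAIRisk:
    """One identified risk tied to a generative AI capability (step 1)."""
    capability: str           # e.g. "contact-center reply drafting"
    description: str          # e.g. "hallucinated account details"
    likelihood: float         # 0.0-1.0, estimated by risk owners
    impact: float             # 0.0-1.0, normalized business impact
    mitigations: list[str] = field(default_factory=list)

    def score(self) -> float:
        """Quantify the risk (step 2) as likelihood times impact."""
        return self.likelihood * self.impact


def daily_review(register: list[GenAIRisk], tolerance: float = 0.2) -> list[GenAIRisk]:
    """Monitor on a daily basis (step 3): flag anything above tolerance."""
    flagged = [risk for risk in register if risk.score() > tolerance]
    for risk in flagged:
        print(f"{date.today()}: '{risk.description}' on {risk.capability} "
              f"scores {risk.score():.2f}, above tolerance {tolerance}")
    return flagged
```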
The final version of NIST’s text, produced in response to President Joe Biden’s executive order last October, offers just over 200 risk-mitigating actions for organizations deploying and developing generative AI. It’s a slimmed-down version of the 400 actions in the initial draft published in April.
The NIST AI guidance focuses on a set of a dozen broad risks, including information integrity, security, data privacy, harmful bias, hallucinations and environmental impacts. The framework provides organizations with ways to contextualize and mitigate risks.
To prevent incorrect generated outputs, for example, NIST provides around 19 actions enterprises can take, such as establishing minimum thresholds for performance and review as part of deployment approval policies.
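One of those actions, minimum performance thresholds in deployment approval policies, can be expressed as a simple gate. The sketch below is a hypothetical illustration; the metric names and threshold values are not NIST's:

```python
# Hypothetical deployment gate: block a release unless offline eval
# metrics clear the minimums set by the approval policy.
MIN_THRESHOLDS = {
    "groundedness": 0.90,      # share of outputs supported by source documents
    "answer_accuracy": 0.85,   # share of eval questions answered correctly
    "toxicity_rate_max": 0.01, # "_max" suffix marks ceilings, not floors
}


def approve_deployment(eval_metrics: dict[str, float]) -> bool:
    """Return True only if every policy threshold is satisfied."""
    failures = []
    for metric, threshold in MIN_THRESHOLDS.items():
        value = eval_metrics.get(metric)
        if value is None:
            failures.append(f"{metric}: not measured")
        elif metric.endswith("_max") and value > threshold:
            failures.append(f"{metric}: {value} above ceiling {threshold}")
        elif not metric.endswith("_max") and value < threshold:
            failures.append(f"{metric}: {value} below minimum {threshold}")
    if failures:
        print("Deployment blocked:", "; ".join(failures))
    return not failures
```

A gate like this blocks releases automatically rather than relying on reviewers to remember the policy.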
NIST is not alone in its effort to provide generative AI adoption guidance.
As vendors rushed to embed generative AI into solutions, industry groups and advocacy agencies worked to clear the confusion around model evaluations, risk management and responsible processes.
Those efforts have resulted in an abundance of guidelines, policy recommendations and guardrail options, but no single source of truth.
The International Organization for Standardization released an AI-focused management system standard in December. In August, MIT launched an AI risk database calling attention to more than 700 threats, and several professional services firms have created governance frameworks.
Whether the growing list of options muddies the waters for CIOs or actually helps depends on who you ask.
“I don’t think it’s a straightforward answer,” Strle said. Having more ways to mitigate threats is not always inherently productive, so it’s up to enterprise leaders to determine what the business needs to protect.
Standing on the sidelines is only an option for so long.
Executives are contending with tightening AI regulations around the world, from the European Union’s AI Act to California’s contentious Senate Bill 1047, which Gov. Gavin Newsom vetoed Sunday. The majority of leaders expect stricter requirements in the future, and businesses are reviewing and updating their existing practices to get on track.
“I have to stay prepared because, eventually, it’s going to make it to the other states,” said Shohreh Abedi, EVP, chief operations and technology officer, membership experience at AAA - The Auto Club Group. The organization, which operates across 14 states, a Canadian province, Puerto Rico and the U.S. Virgin Islands, has focused on embedding generative AI over the last year.
“We can’t put our heads in the sand,” Abedi said.
Where CIOs draw the line
CIOs are growing tired of seemingly empty promises of what generative AI might do and want to turn talk into action. The technology’s laundry list of risks, however, calls for a more meticulous security review, requiring new frameworks, best practices and training.
While there are hundreds of ways to mitigate generative AI’s risk, technology leaders don’t necessarily need to rush to deploy them all, analysts told CIO Dive.
CIOs should identify the most critical risks, whether that’s reputational damage or intellectual property exposure, said Thomas Humphreys, compliance expert and content manager at Prevalent. “Thinking like that will start to help shape which of those mitigation techniques are most useful to a business.”
Protecting intellectual property when using generative AI tools has become a persistent point of contention as ease of access to third-party tools and employee eagerness have led to a proliferation of shadow AI.
Gartner analysts predict enterprise spending to curb IP loss will hurt ROI and slow adoption in the next two years.
NIST recommends organizations periodically monitor and address sensitive data exposure. At The Auto Club Group, the second-largest AAA club in North America, Abedi said the organization forbids employees from freely putting sensitive information into models or using proprietary data to train them.
“The first thing we said was, you can’t use any of our assets to go do your own generative AI,” Abedi told CIO Dive. “We will be monitoring, and if we see that you’ve done an account off my assets, we’re going to come to you and shut it down.”
Employees are encouraged to bring forward use case ideas that solve pain points, Abedi said, but the organization isn’t willing to risk giving unauthorized third-party providers full access to its trove of proprietary information.
That balance was struck after conversations with stakeholders and risk assessments, a strategy NIST highlighted in its guidance as well.
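A minimal version of the monitoring NIST describes is a pre-prompt scan that blocks obvious sensitive data before it reaches any model. The patterns below are illustrative placeholders, not AAA's actual controls; a production control would lean on a dedicated data-loss-prevention service with far broader coverage:

```python
import re

# Illustrative patterns only; real coverage would be much broader.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Check a prompt before it reaches any model; return (allowed, findings)."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)


allowed, findings = screen_prompt("Member SSN is 123-45-6789")
if not allowed:
    print("Blocked and logged for review:", findings)  # e.g. ['ssn']
```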
Understanding risk tolerance
NIST recommends organizations base risk mitigation on their level of risk tolerance as a core governing principle.
“An amicable and acceptable approach will be to first evaluate the business needs where AI is implemented and not just dump all AI risk mitigation guidance as a silver bullet,” said Rahul Vishwakarma, senior member of the Institute of Electrical and Electronics Engineers.
When Discover considers adding generative AI to workflows, the business keeps in mind where it currently draws the line.
“If it’s completely autonomous and it’s answering where the nearest ATM is, that’s one kind of risk profile,” Strle said. “Complete autonomy when you’re making a decision that’s going to affect the customer’s financial livelihood or financial outcomes, well, that’s a very, very high set of risk profiles to manage and we’re not there yet.”
Discover has controls and guardrails in place, but it relies on its workers, who have gone through training and have access to usage policies and procedure guidelines, to distinguish the value of generative AI’s outputs. It’s a tactic NIST recommends in its guidance, too.
“A lot of what we’re doing in the contact center is ‘human in the loop,’ where you can leverage these generative AI capabilities and that’s happening parallel to a contact center agent doing their job,” Strle said. “The final decision is with the human, who’s adhering to all the training and processes.”
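A bare-bones version of that pattern routes every model suggestion through an agent before anything reaches a customer. The type and function names below are hypothetical, a sketch of human-in-the-loop review rather than Discover's implementation:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Suggestion:
    draft: str         # model-generated reply, visible to the agent only
    confidence: float  # a score the agent can weigh, never an auto-send trigger


def generate_draft(query: str) -> Suggestion:
    # Stand-in for a real model call; hypothetical, for illustration.
    return Suggestion(draft=f"Suggested reply to: {query}", confidence=0.7)


def handle_query(query: str, agent_review: Callable[[Suggestion], str]) -> str:
    """The model proposes in parallel; the trained agent makes the final call."""
    suggestion = generate_draft(query)
    return agent_review(suggestion)  # nothing reaches the customer unreviewed


# Usage: the agent callback can accept, edit or discard the draft entirely.
reply = handle_query("Where is my statement?", lambda s: s.draft.strip())
```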
When generative AI does have a level of autonomy in a particular use case, CIOs need a plan for what happens if models go awry. For some tech leaders, having an off-switch is vital.
The City of Glendale, Arizona, turned to generative AI to solve a pressing support issue as the city moved to approve a major renovation of its City Hall, according to Feroz Merchhiya, Glendale’s former CIO and CISO, who is now CIO of the City of Santa Monica, California.
“I had full control of the data and I had control of the system in terms of if it didn’t work or fired off wrong advice, I could turn it off,” Merchhiya said, referring to the city’s enterprise-focused IT support copilot tool. “And I had a mechanism to rectify the problem by deploying a human resource to solve the problem.”
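A minimal sketch of such an off-switch, assuming a feature flag checked in front of every model call (the flag name and functions are hypothetical):

```python
import os


def genai_enabled() -> bool:
    """Central kill switch: one flag disables the tool everywhere at once."""
    return os.environ.get("GENAI_SUPPORT_COPILOT", "on") == "on"


def route_to_human(ticket: str) -> str:
    return f"Escalated to IT support staff: {ticket}"


def model_answer(ticket: str) -> str:
    # Stand-in for the copilot's model call; hypothetical.
    return f"Copilot suggestion for: {ticket}"


def answer_ticket(ticket: str) -> str:
    # Every request checks the switch first, so wrong advice can be
    # stopped immediately and routed to the human fallback path.
    return model_answer(ticket) if genai_enabled() else route_to_human(ticket)
```

Because every request checks the flag, disabling the tool takes one configuration change rather than a redeploy.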
Risk mitigation and implementation plans work best when devised together, technology leaders told CIO Dive.
Strle said Discover's upfront work to understand how to best use generative AI in the contact center was coupled with an assessment of the risks tied to identified use cases.
“All controls that we create — and financial services have to be sustainable over an indefinite period of time — it [must] take into account all the dynamics of the industry in which we operate, which is constantly changing,” Strle said. “The NIST framework is an extension, in my mind, of that same basic pattern.”
Next up for CIOs
While there are enterprises making progress in risk management, studies have shown consistent discrepancies between the number of businesses deploying generative AI and the prevalence of responsible, secure practices.
Analysts attribute the lag to the quick pace of technological innovation and adoption.
“What I’m seeing with CIOs is that they are more challenged because they are having to make very difficult decisions about technology, even more than they always have because of how quickly these tools, techniques and models are developing,” Rowan Curran, senior analyst at Forrester, said.
Though it commands enterprise interest, generative AI is still evolving and its best practices are not yet solidified.
Plus, managing risks isn’t always simple. More than 3 in 5 executives expect to see a significant increase in the level of risk they will be responsible for in the next three to five years, according to a recent KPMG survey. Around 2 in 5 anticipate more than half of their risk management budget will go to technology.
“There are no prescriptive standards set yet, but these will evolve over time,” Freshworks CIO Ashwin Ballal told CIO Dive. “Right now, it’s like we all have a hammer with AI and we think everything is a nail.”
But a shift is afoot.
Enterprises are quickly growing tired of experimentation and pilot stages. The veil of hype is lifting for early adopters as leaders connect use cases to metrics of success.
Interest in generative AI has dipped among senior executives and boards of directors since the beginning of this year, according to Deloitte research published in August.
Fortune 500 companies are also more likely to cite AI as a potential risk factor in securities filings than to highlight its benefits or use cases, according to Arize AI research, which analyzed each business’s most recent annual report.
The dip in enthusiasm comes as most organizations grapple with adoption roadblocks related to tech debt and inadequate infrastructure, on top of risk management. Still, enterprises are hopeful their AI initiatives can deliver results, using frameworks like NIST's suggested actions to curb adoption risks.
“You have to come to the leadership table with recommendations and suggestions,” Curran said. “Be the one that educates about how this technology can make a difference, how it ties to the business goals and what’s the path to get there.”