Dive Brief:
- The unclear legal ramifications of using generative AI tools, along with fears of proprietary information leaks, will ultimately impede enterprise progress, according to Gartner research released Wednesday.
- By 2026, enterprise spending to curb copyright infringement and avoid loss of intellectual property will hurt ROI and slow generative AI adoption, Gartner analysts predict.
- When assessing the broad sets of risks that could erode value realization, generative AI’s IP and copyright problems represent a notable hurdle, Rita Sallam, distinguished VP analyst at Gartner, told CIO Dive.
Dive Insight:
Generative AI has a tendency to supercharge existing risks.
Procter & Gamble kept IP leakage risks in mind when it built chatPG, an internal generative AI tool built using OpenAI’s API. The company added stronger safeguards before rolling the tool out to employees across the business.
American Honda, similarly, was looking to give teams access to a conversational AI tool, like ChatGPT, without the risk of losing intellectual property. Copyright infringement and protection against bias were also deciding factors in tool selection. The company ended up rolling out Microsoft’s Copilot, formerly known as Bing Chat Enterprise, in November.
“Copyright infringement risk was kind of limited, for most organizations, to people who were just exposed to secrets,” Sallam said. “Now, it can actually affect everyone.”
As the courts work through a slew of copyright lawsuits against big tech companies, analysts expect changes to operations, costs or how tools are developed to be on the horizon.
Several authors and news organizations, including The New York Times, have sued OpenAI and its partner Microsoft over alleged copyright and IP violations. Anthropic has been sued by Universal Music Group and other music publishers for copyright violations, and a group of artists sued generative AI vendors including Midjourney and Stability AI for similar issues.
Nvidia is the most recent target of litigation as three authors alleged last week their works were used to train the chipmaker's NeMo platform. Most of the lawsuits claim LLM providers have trained models on information found on the internet without compensating the owners.
“Large language model vendors, what they now get for free, may at some point have to pay for, based on the outcomes of some of these lawsuits,” Sallam said. “That could potentially add to their costs and change how they monetize.”
While still a few steps away from enforcement, the new AI Act — approved by the European Union Wednesday — will require vendors of general-purpose AI systems and models to publish detailed summaries of the content used for training and comply with EU copyright law.
Vendors have already started to shift the way they address legal liability for businesses. PC manufacturers are positioning on-premises generative AI implementations as safer and more secure for IP and sensitive data in a bid to win over enterprise customers. OpenAI, Microsoft, Google and AWS have pledged to protect customers from legal claims of copyright infringement.
Protecting customers via indemnities in vendor contracts isn’t new, but how copyright indemnity would play out is yet to be determined, according to some experts.
There are liabilities across the AI value chain, Enza Iannopollo, principal analyst at Forrester, said. Vendors offering copyright indemnities is a positive for businesses using their tools, but the indemnities shouldn’t be viewed as a blanket defense.
“That shouldn’t be taken in any way as to say all the risks are removed and you’re safe to go,” Iannopollo said. “Any organization will still have liability across the value chain even if a vendor is taking some away.”
The ambiguity coupled with growing regulatory scrutiny on the technology makes generative AI adoption difficult terrain for enterprise tech leaders.
“What we are recommending for CIOs is to at least recognize the potential of not only loss of intellectual property but also copyright infringement,” Sallam told CIO Dive. “But in the end, success in actually protecting against this risk, and others, will certainly be about managing people, change and how to work responsibly.”