Generative AI has rapidly cemented itself as a cornerstone of modern life. It can be essential for work, whether generating code or fine-tuning a project plan, and it can bring new insight to personal tasks, like planning a vacation itinerary (yes, you can do that!).
You’ve likely already seen the benefit of generative AI (GenAI) in the workplace. While it holds enormous power to spur innovation, its adoption presents new data security challenges. A recent Gartner survey found that 55% of organizations are either piloting or fully implementing generative AI technologies. That figure is a testament to everything this technology brings to the table, but the speed of its adoption worries many.
Accounting for Data Risk
To spit out a characteristically quick and accurate response, generative AI platforms rely on data inputs from millions of users. This means that any sensitive data entered into these tools can reappear and be used by an untold range of other users. The features that have made AI revolutionary, like speed, automation, and learning capabilities, can also put critical business IP at risk.
This risk isn’t limited to public AI platforms. Even internally developed AI models, trained on proprietary datasets, can inadvertently expose confidential information to unauthorized individuals within the organization.
These concerns were voiced in the 2024 Data Exposure Report, an annual survey of cybersecurity leaders and practitioners. In this year’s survey, the use and risk of GenAI emerged as a key theme. Nearly three-quarters (73%) of respondents indicated they were looking to AI to bridge skills gaps. However, 86% of security leaders said they worry that employees will leak data to GenAI tools, exposing it to competitors.
Faced with this risk, some companies are choosing data protection over innovation, opting simply to shut down GenAI usage. A recent BlackBerry survey found that 75% of organizations have already instituted or are contemplating bans on generative AI applications, with 67% of those companies citing data security and privacy risks as the primary reason.
While this strategy may, at face value, achieve a security team’s end goal of locking down its data, it both forfeits the potential value of GenAI and fails to account for the human element.
Employees who see the benefits of these tools often find stringent policies cumbersome. Some may comply with the restrictions, but many will find a workaround so they can do their jobs efficiently, for example, pasting work data into a personal AI account on an unmanaged device. That behavior is even riskier because it happens outside the security team’s visibility. Not to mention, GenAI is a boon for boundary-pushing organizations in every industry. Choosing to opt out of progress by banning this tech can leave companies behind the competitive curve.
A Secure Pathway Forward
So, what’s the sweet spot for GenAI use? There are a few key methods security teams can focus on to foster innovation from these new tools while maintaining robust data protection.
- Establish People-Centric Training: Effective data security depends on employees not only grasping security policies but also understanding the why behind the rules. Nearly all (98%) companies, regardless of how often they conduct training, believe it requires improvement. Regular training that is transparent and corrects risky behavior in real time promotes better retention of, and adherence to, policies. Especially in the case of GenAI, employees need to know how they can use it and why risky behavior can threaten the business.
- Isolate and Protect Source Code Repositories: Source code is one of the most valuable data sets today’s organizations own. As AI tools become deeply embedded in the development of new products, it’s critical to keep repository access controlled: once IP is fed into an AI model, it’s virtually impossible to filter out. Ensuring that those who handle this sensitive data do so securely helps diminish the risk of a leak.
- Invest in Tailored Data Protection Solutions: The full extent to which GenAI could put corporate data at risk remains to be seen, but we know that depending solely on yesterday’s programs and tools isn’t enough. As AI adoption accelerates, robust data protection requires capabilities like real-time monitoring, automated threat detection, and context-aware response (see the sketch after this list for a simplified illustration). To truly keep data secure, security practices must keep pace with the rate of change new technologies bring.
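To make the last two points concrete, here is a minimal sketch, in Python, of the kind of real-time prompt guardrail a tailored data protection tool might apply before text leaves the organization for a GenAI service. Everything here is an illustrative assumption rather than a reference to any specific product: the rule patterns, the hypothetical corp.example.com internal domain, and the function names are all invented for the example.

```python
import re
from dataclasses import dataclass

# Illustrative detection rules only. A real data protection tool would use a
# far richer, organization-specific ruleset plus contextual classifiers.
RULES = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),  # hypothetical internal domain
    "source_code": re.compile(r"(?:\bdef |\bclass |#include |\bimport )"),
}

@dataclass
class Finding:
    rule: str
    matched_text: str

def scan_prompt(text: str) -> list[Finding]:
    """Return every rule match found in an outbound prompt."""
    return [
        Finding(rule=name, matched_text=m.group(0))
        for name, pattern in RULES.items()
        for m in pattern.finditer(text)
    ]

def guard_prompt(text: str) -> str:
    """Inspect a prompt before it is sent to an external GenAI service."""
    findings = scan_prompt(text)
    if any(f.rule == "private_key" for f in findings):
        # Hard block: secret material should never cross the boundary.
        raise PermissionError("Prompt blocked: private key material detected")
    for f in findings:
        # Redact lower-severity matches and surface an alert in real time.
        text = text.replace(f.matched_text, f"[REDACTED:{f.rule}]")
        print(f"ALERT: {f.rule} redacted from outbound prompt")  # stand-in for real alerting
    return text

if __name__ == "__main__":
    prompt = "Debug the deploy script on build01.corp.example.com with key AKIAABCDEFGHIJKLMNOP"
    print(guard_prompt(prompt))
```

The design choice worth noting is the split between hard blocks (secrets never leave) and redact-and-alert (lower-severity matches are scrubbed while the security team gets a real-time signal). That split mirrors the balance this article argues for: protecting IP without making every GenAI interaction a dead end.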
We’re currently in the Wild West of generative AI, and it’s never been a more exciting time to innovate, no matter your industry. With all the promise this tech brings, it’s easy to get lost in the noise and either go full steam ahead or slam on the brakes. The right answer, as with most things, lies somewhere in the middle. By establishing critical security policies, bringing employees into the fold, securing the most essential data, and deploying intelligent data protection tools, organizations can harness the benefits of GenAI while keeping their IP safe.