Dive Brief:
- Users of OpenAI’s ChatGPT public model can now turn off chat history within the tool, the company announced Tuesday. Users can also export their ChatGPT data to “understand what information ChatGPT stores,” the company said in a blog post.
- “We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users,” the company said.
- ChatGPT Business data policies will mimic those of the ChatGPT API where end-user data will not be used to train models by default. The company expects to release its business version of the tool in the coming months, according to the blog post.
Dive Insight:
If users disable chat history in ChatGPT, the system will retain new conversations for 30 days; they are reviewed only “when needed” to monitor for abuse before being permanently deleted, the company said.
“These controls, which are rolling out to all users starting today, can be found in ChatGPT’s settings and can be changed at any time,” OpenAI said. “We hope this provides an easier way to manage your data than our existing opt-out process.”
Data privacy is a top concern for CIOs when it comes to employees using publicly available large language models that use data put into the system to inform future responses.
Recent incidents, including data privacy leaks and an open-source library bug found in ChatGPT, illustrate the concern. Samsung Electronics employees in the company’s semiconductor business unit recently put sensitive corporate data into ChatGPT, according to reports, leading the company to limit upload capacity per prompt.
As businesses craft policies for employee use of generative AI, the challenge for OpenAI is balancing the data it needs to improve its models against what users are comfortable sharing, especially as the company seeks to reach enterprise customers.
Following the open-source library bug, the Italian Supervisory Authority imposed a temporary limitation prohibiting OpenAI from processing Italian users’ data after suspecting the company was breaching the European Union’s General Data Protection Regulation.
Italy’s data protection watchdog then published a list of demands that, if met, would result in Italy lifting the ban. Among them, the authority asked OpenAI to add easily accessible tools allowing users and non-users to obtain their personal data and to give users the right to object to their personal data being used to train the model.