OpenAI has said that one of the ways ChatGPT gets better is through interactions with users. But as the mass experiment underway since the AI chatbot's public launch last year moves past novelty, the company has signaled it is closely considering safety and trust.
ChatGPT users are already greeted with a pop-up alert that their conversations can be seen by “AI trainers,” and are warned not to type in “sensitive information.”
In April, OpenAI said it is also giving users the choice to disable their chat history with ChatGPT, seeking to offer more visibility and control over data. (Insider's Sarah Jackson has a helpful explainer on how to do that.)
Keeping chat history turned off means those conversations won't be used to train the tool, and that the company will delete any conversations held under that higher privacy mode after 30 days, according to OpenAI's website.
The stakes can be high for both regular users and companies dealing with confidential information, who should also consider their own policies for how such tools should be used at work, said Duane Pozza, a partner at Wiley Rein LLP who advises on privacy, data, and other matters.
“When looking at AI chatbots, there is a potential for these tools to collect a lot of consumer personal information that could include things like conversation histories,” he told Insider, speaking generally about such tools and not about any specific company.
“Average consumers and businesses using these tools have to make sure they understand the privacy policies,” he added. “They should understand if they have options or settings to understand how data is collected by these tools.”
A representative for OpenAI did not comment beyond pointing to the company's resources on its website.
The popularity of AI websites raises similar concerns about the sheer number of users now wrangling with privacy questions, said Rudina Seseri, founder and managing partner of Glasswing Ventures, a venture firm that invests in AI.
“I reiterate best practices here — absolutely don’t share with ChatGPT what you don’t want the world to know,” she said.
“And this is not somehow grounded on any mal-intent from OpenAI, or, forget ChatGPT, any large language model,” she said. “It also has to do with the fact that, the more surface — if you were to think of the digital world as a surface — the more surface, the more reach, the more opportunity for exploitation.”
Microsoft’s new Bing search bot, launched in February, has also rapidly gained ground, amassing more than 100 million daily users in March.
The company offers a “privacy dashboard” where users can get a sense of how their search history is used, and explore options to clear that history.