
OpenAI Shifts AI Strategy, ChatGPT Training Excludes Customer Data: Sam Altman


OpenAI CEO Sam Altman has announced a significant shift in the company’s strategy for training its GPT family of large language models. In response to customer feedback and concerns, OpenAI will no longer use client data for training purposes. The decision comes after customers expressed their desire to protect their data, prompting OpenAI to reconsider its approach. Altman confirmed the change, stating, “Customers clearly want us not to train on their data, so we’ve changed our plans: We will not do that.”

While the revised strategy applies primarily to client data used for training via OpenAI’s API services, it’s important to note that ChatGPT, the company’s chatbot, may still incorporate information from external sources. OpenAI’s focus on data privacy and protection aims to address customer concerns and align with evolving privacy standards. By respecting customer preferences, OpenAI aims to foster trust and transparency in its AI development process.


The decision holds significance for OpenAI’s corporate clients, including industry giants like Microsoft, Salesforce, and Snapchat, which frequently use OpenAI’s APIs. The modified approach reflects OpenAI’s commitment to prioritizing customer needs and respecting data privacy. While the use of AI models continues to raise broader questions and concerns within various industries, OpenAI’s shift in strategy demonstrates a willingness to adapt and respond to customer feedback.

The ongoing debate surrounding large language models extends beyond privacy concerns. The use of ChatGPT to write or edit scripts, for example, has become a flashpoint in a strike by the Writers Guild of America, highlighting concerns about the impact of AI technologies on creative industries. Intellectual property considerations also emerge as a prominent issue. As businesses grapple with these challenges, OpenAI’s decision to discontinue training on client data represents a notable step toward addressing customer concerns and fostering responsible AI development practices.

ChatGPT causing an ‘existential crisis’?

Barry Diller, an entertainment-industry businessman and the head of IAC, said that media corporations might pursue their claims and possibly sue AI firms for using their original content.

This week, the Writers Guild of America (WGA), which represents more than 10,000 writers in the American film industry, went on strike amid an “existential crisis” over the possibility of AI taking their jobs.

Amazon reportedly warned staff members not to divulge sensitive information to ChatGPT for fear that it would appear in chat responses to other users.

On Monday, Samsung Electronics Co. barred employees from using generative AI tools such as ChatGPT, Google Bard, and Bing AI.

According to media sources with access to the company’s internal memo, the tech giant alerted staff at one of its main divisions to the new policy, citing concerns about the security of critical code.

“We ask that you diligently adhere to our security guideline, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo warned employees.

The issue of data privacy and protection is becoming more pressing as the use of large language models grows. According to a CNBC report, AI companies are working hard to preserve client privacy and to be transparent about how they use customer data.