
Samsung Bans ChatGPT-like AI After Security Breach, Warns of Employee Termination

Samsung Electronics Co. has prohibited its employees from using generative AI tools such as ChatGPT, Google Bard, and Bing AI. 

Citing concerns about the security of sensitive code, the tech giant informed personnel at one of its largest divisions of the new policy on Monday, according to media outlets with access to the company’s internal memo. 

“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung wrote to its staff in the memo. 

“While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Because data supplied to these AI platforms is stored on external servers, it is difficult to recover or delete.

The South Korean company found that employees had uploaded sensitive code to the platforms, raising fears that the data may be made available to other users.

Samsung engineers inadvertently uploaded internal source code to ChatGPT in April. A company official acknowledged that a message prohibiting the use of generative AI services had been sent last week, according to a Bloomberg report.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” further read the memo. 

“However, until these measures are prepared, we are temporarily restricting the use of generative AI.” 

Samsung isn’t the only one banning generative AI 

A poll on AI tools that Samsung conducted last month found that 65% of respondents believed using such services poses a security risk.

The new regulations forbid the use of generative AI systems on the company’s internal networks, as well as on the company’s laptops, tablets, and phones. 

Samsung’s consumer devices, such as its Windows laptops and Android smartphones, are unaffected. The company warned that violating the new rules could result in termination.

Samsung is not alone in restricting generative AI. Several Wall Street financial institutions, including JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., either banned or restricted its use in February.

Meanwhile, Samsung is developing its own internal AI tools for software development, translation, and document summarization. The business is also attempting to prevent critical company data from being uploaded to outside services. 

Earlier, OpenAI added an “incognito” mode to ChatGPT that lets users prevent their chats from being used to train its models, addressing privacy concerns.

Samsung warns employees of termination 

Samsung HQ is reviewing its security procedures to establish a safe environment in which generative AI can be used to boost employee productivity and efficiency. 

Until those measures are ready, however, the company is temporarily restricting its use, the Bloomberg report noted. 

“We ask that you diligently adhere to our security guideline, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo warned employees.

Samsung workers accidentally leaked trade secrets via ChatGPT

Never forget that anything you share with ChatGPT is retained and used to further train the model. Samsung employees learned this the hard way after accidentally leaking top-secret Samsung data.

Samsung employees accidentally shared confidential information while using ChatGPT for help at work. Samsung’s semiconductor division had allowed engineers to use ChatGPT to check source code.

But The Economist Korea reported three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check for errors. Another employee shared code with ChatGPT and “requested code optimization.” A third shared a recording of a meeting to be converted into notes for a presentation. That information is now out in the wild for ChatGPT to feed on.

The leak is a real-world example of hypothetical scenarios privacy experts have long been concerned about. Other scenarios include sharing confidential legal documents or medical records for the purpose of summarizing or analyzing lengthy text, which might then be used to improve the model. Experts warn that such use may violate the GDPR, which is why Italy recently banned ChatGPT.

Samsung has taken immediate action by limiting ChatGPT uploads to 1,024 bytes per person, and it is investigating the people involved in the leak. It is also considering building its own internal AI chatbot to prevent future embarrassing mishaps. But Samsung is unlikely to recover any of its leaked data: ChatGPT’s data policy says it uses submitted data to train its models unless users request to opt out, and ChatGPT’s usage guide explicitly warns users not to share sensitive information in conversations.

Consider this a cautionary tale to be remembered the next time you turn to ChatGPT for help. Samsung certainly will.