Samsung is banning employees from using artificial intelligence programs like ChatGPT after a reported data leak exposed some of the company’s sensitive source code.
Bloomberg reported Tuesday that some staff members uploaded sensitive code to ChatGPT, raising concerns that information uploaded to the AI software could be exposed to other users. A memo obtained by Bloomberg News informed employees that they were prohibited from using AI programs like ChatGPT due to cybersecurity concerns, noting that uploaded data could also be difficult to retrieve and delete.
“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung told staff in the memo reported by Bloomberg. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
In an internal survey conducted by the company last month, 65 percent of respondents said that AI tools pose a security risk, the report said.
The company said that the use of AI programs would be banned on company devices and asked employees not to submit company information to these programs on their personal devices. Samsung is developing its own AI program and said in the memo that it is working “to create a secure environment for safely using generative AI.”
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the company said in the memo.
The Hill reached out to Samsung to confirm the memo.