Samsung this week became the latest big name to ban its employees from using generative AI tools such as ChatGPT and Google Bard, warning staff that they could be fired if they’re caught using them.
In an internal memo obtained by Bloomberg, Samsung said the ban was prompted by the discovery that an engineer had leaked sensitive internal source code by uploading it to ChatGPT last month. According to earlier reports, one Samsung employee asked the chatbot to check sensitive database source code for errors, while another fed a recorded meeting into ChatGPT and asked it to generate minutes.
The Korean tech giant is the latest company to crack down on the use of ChatGPT. American banking giant JPMorgan recently restricted its use among employees due to compliance concerns, and Amazon has reportedly urged staff not to share code with the AI chatbot. Verizon and Accenture have taken similar steps, and Italy briefly banned ChatGPT last month over concerns that the service breached EU data protection laws.
Even Microsoft, which has a multibillion-dollar stake in ChatGPT owner OpenAI, has doubts. According to a new report, Microsoft’s Azure cloud unit plans to sell an alternative version of ChatGPT that runs on dedicated cloud servers, where each customer’s data will be kept separate from that of other customers.
These concerns are by no means unfounded. Not only could tools such as ChatGPT help attackers write legitimate-sounding phishing emails and malicious code, they also carry a data breach risk. That risk has already materialized: OpenAI admitted in March that ChatGPT suffered its first significant data breach, which exposed the personal and partial payment data of ChatGPT Plus subscribers.
Cutting-edge AI, legacy tech
Generative AI tools like ChatGPT bring powerful capabilities to non-technical users and represent a huge leap forward both in what AI can do and in its potential to revolutionize everything from the way we work to the way we make decisions. For non-technical users who are now using the technology to generate human-like text for essays and social media copy, it might feel like the future has arrived. Indeed, some have even called it a new industrial revolution.
While it might feel like some sort of magic eight ball, the underlying infrastructure behind generative AI is nothing new. Much like a cloud storage service, all of the data you share with ChatGPT is stored on OpenAI’s servers. Along with prompts and chat conversations, OpenAI saves other data, too, such as your account details, approximate location, IP address, payment details and device information. This data is used to train and improve the model, according to OpenAI, so it can better understand and respond to natural language queries.
OpenAI has taken steps to protect user data: It says that all conversations are encrypted and that user data is stored on secure servers that are regularly monitored for vulnerabilities. However, as with any data stored on what is essentially someone else’s computer, it carries a major data breach risk — as OpenAI has already proven.
That’s not to say using generative AI is a bad thing. Even Samsung, which is threatening to fire employees for using the technology, has accepted that generative AI holds potential, pledging in its memo to create its own tools for translation and document summary.
Chances are, generative AI will be a significant time-saver for your startup, too. It can help your business automate tasks, carry out research and streamline workflows. But before hopping on the ChatGPT bandwagon, it’s essential that you — and your employees — understand where your data is going and who controls it.
According to data protection company Cyberhaven, about 10% of surveyed employees had used ChatGPT in the workplace, while 7.5% had pasted company data into the chatbot since it launched. The company’s analysis found that 4% of employees had pasted sensitive data into the tool at least once, including source code and client data.
“An executive inputs bullet points from the company’s 2023 strategy document into ChatGPT and asks it to rewrite it in the format of a PowerPoint slide deck. In the future, if a third party asks ‘what are [company name]’s strategic priorities this year,’ ChatGPT could answer based on the information the executive provided,” Cyberhaven wrote.
As tools like ChatGPT continue to balloon in popularity and as your employees look to integrate generative AI into their workflows (potentially without your knowledge), it’s important that your staff are aware of the risks so that they are not held responsible for exposing critical company data.
“Businesses that use ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage, and legal action taken against them,” Richard Forrest, legal director at the U.K.-based data breach solicitors Hayes Connor, told TechCrunch+.
It’s also critical that companies set boundaries. Assume that anything you enter could later become accessible in the public domain, and spell out for your employees which data they can and cannot share — source code and client records, for example.
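One lightweight way to enforce such boundaries is to screen prompts before they ever leave the company. Below is a minimal, hypothetical sketch in Python: the pattern names, regexes and function are illustrative assumptions, not any vendor’s actual tooling, and a real policy filter would need a far more thorough rule set.

```python
import re

# Hypothetical deny-list a company might check prompts against before
# they are sent to an external chatbot. Real policies would be broader.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "source_code": re.compile(r"\b(def |class |import |#include|function\s*\()"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of every policy rule the prompt trips."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    # A prompt containing code trips the "source_code" rule.
    print(flag_sensitive("Check this for bugs: def connect(db): ..."))
```

A filter like this won’t catch everything — a strategy memo contains no regex-friendly markers — but it can block the most obvious leaks, such as pasted credentials or source files, and log the attempt for training purposes.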
Think of generative AI as the new phishing threat: human error is the leading cause of data breaches, and the only way to prevent it is sufficient cybersecurity training. If your employees aren’t aware of the risks, a resulting breach of company data is not their fault.