Generative AI tools like OpenAI’s ChatGPT and Microsoft’s Copilot are advancing quickly, sparking concerns about potential privacy and security issues, especially in the workplace.
A case in point is Microsoft’s new Recall tool, whose ability to periodically take screenshots of a user’s screen has alarmed privacy advocates and prompted the United Kingdom’s Information Commissioner’s Office to ask Microsoft for more information about the feature’s safety ahead of its launch on Copilot+ PCs.
Similar worries surround OpenAI’s ChatGPT and its upcoming macOS app, which experts fear could capture sensitive data through screenshotting. Meanwhile, the United States House of Representatives has banned its staff from using Microsoft’s Copilot after the Office of Cybersecurity identified a risk of data leaking to unauthorised cloud services.
Gartner has also warned about the risks of using Copilot for Microsoft 365, highlighting the danger of sensitive information being exposed both internally and externally. And Google was recently forced to make changes to its brand-new AI Overviews feature after screenshots of odd and misleading search results went viral.
Overexposed
One of the biggest challenges of using generative AI in the workplace is the risk of unintentionally exposing sensitive data. Camden Woollven, group head of AI at GRC International Group, says most generative AI systems act like “big sponges”, absorbing vast amounts of information from the internet to train their language models.
Steve Elcock, CEO and founder of Elementsuite, points out that AI companies are eager for data to train their models, and make handing it over behaviourally enticing.
Jeff Watkins, the chief product and technology officer at xDesign, warns that this extensive data collection opens the door for sensitive information to be shared with outside parties or extracted through clever manipulation.
Moreover, there is a growing concern about the security of AI systems themselves, with the potential for hackers to target and exploit them. Woollven explains that if an attacker gains access to a company’s AI tools powered by a large language model (LLM), they could steal sensitive data, manipulate outputs, or use the AI to distribute malware.
Even “proprietary” AI tools such as Microsoft Copilot, which are broadly considered safe for work use, raise concerns, according to Phil Robinson, principal consultant at security consultancy Prism Infosec. If security measures and access privileges are not properly implemented, these tools could be used to pry into sensitive data.
There is also concern that AI tools could be used to monitor staff and intrude on their privacy. Microsoft says its Recall feature keeps snapshots on the user’s PC and keeps them private, but sceptics worry it is only a matter of time before this kind of technology is used for employee surveillance.
Self-censorship
Generative AI presents potential risks, but businesses and individuals can improve privacy and security by taking certain precautions.
Lisa Avvocato, vice president of marketing and community at data firm Sama, advises against putting confidential information into publicly available tools such as ChatGPT or Google’s Gemini.
When crafting prompts, be generic to avoid revealing too much. Avvocato suggests using AI to produce a first draft, then adding the sensitive information yourself afterwards.
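One way to make that “add it later” step concrete is to have the model draft around placeholders and substitute the real values locally. The sketch below is a minimal illustration, assuming the official OpenAI Python client; the placeholder names, example model, and the fill_placeholders helper are illustrative, not part of any product.

# Minimal sketch: the model only ever sees placeholders; the real values are
# substituted locally after the draft comes back. Assumes the official
# `openai` Python package; placeholder names and fill_placeholders are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Confidential values that never leave the machine.
SENSITIVE = {
    "{{CLIENT_NAME}}": "Acme Holdings Ltd",
    "{{CONTRACT_VALUE}}": "1.2m GBP",
}

prompt = (
    "Draft a short, polite contract-renewal email to {{CLIENT_NAME}} "
    "for a contract worth {{CONTRACT_VALUE}}. "
    "Keep the placeholders exactly as written."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content


def fill_placeholders(text: str, values: dict[str, str]) -> str:
    """Substitute the real values into the AI-generated draft locally."""
    for placeholder, value in values.items():
        text = text.replace(placeholder, value)
    return text


print(fill_placeholders(draft, SENSITIVE))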
Additionally, when using AI for research, it is important to validate the information provided by asking for references and links to sources. It is also crucial to review AI-generated code rather than assuming it is error-free.
Microsoft has emphasised the importance of configuring Copilot correctly and applying the “least privilege” principle, so that users have access only to the information they need.
Prism Infosec’s Robinson says this is a crucial point: organisations must lay the groundwork for these systems themselves rather than relying on the technology alone to guarantee security. It is also worth noting that ChatGPT uses the data you share to train its models, unless you turn this off in the settings or use the enterprise version.
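As a rough illustration of the least-privilege idea, the sketch below filters documents by the requesting user’s existing permissions before any assistant sees them. It is illustrative only; in a real Copilot deployment this is enforced through Microsoft 365 permissions and admin configuration rather than application code, and the document store and group names here are hypothetical.

# Minimal sketch of "least privilege": the assistant can only retrieve what the
# requesting user is already allowed to read. Hypothetical data and group names.
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    title: str
    allowed_groups: frozenset


# Hypothetical document store with per-group access controls.
DOCUMENTS = [
    Document("Q3 all-hands slides", frozenset({"all-staff"})),
    Document("Executive pay scales", frozenset({"hr", "executive"})),
]


def retrieve_for_user(user_groups: set, query: str) -> list:
    """Return only documents the user could already open, before any AI sees them."""
    return [
        doc
        for doc in DOCUMENTS
        if user_groups & doc.allowed_groups and query.lower() in doc.title.lower()
    ]


# An employee in "all-staff" asking about pay scales gets nothing back,
# so the assistant has nothing sensitive to summarise.
print(retrieve_for_user({"all-staff"}, "pay scales"))  # -> []
print(retrieve_for_user({"hr"}, "pay scales"))         # -> the pay-scales document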
List of assurances
The companies building generative AI into their products insist they are doing everything they can to protect security and privacy. Microsoft is keen to outline the security and privacy provisions in its Recall product, and how users can manage the feature under Settings > Privacy and Security > Recall & Snapshots.
Google clarifies that information is not used for advertising and states that generative AI in Workspace “does not change our foundational privacy protections for giving users choice and control over their data.”
OpenAI reiterates how it maintains security and privacy in its products, and points to enterprise versions that come with additional controls.
It says it gives users control over how their data is used, offering self-service tools to access, export, and delete personal information, as well as the option to opt out of having content used to improve its models. According to the company, it does not train its models on data or conversations from ChatGPT Team, ChatGPT Enterprise, or its API, and its models do not learn from usage by default.
As AI technology becomes more advanced and prevalent in the workplace, the associated risks are likely to increase, warns Woollven. The emergence of multimodal AI such as GPT-4o, capable of analysing and generating images, audio, and video, means companies must now safeguard more than just text-based data.
Given this reality, individuals and businesses should treat AI like any other external service provider, advises Woollven. It is important to refrain from sharing sensitive information that you would not want to be made public.
Meanwhile, OpenAI has removed language from its usage guidelines that specifically forbade the use of its powerful language technologies, such as ChatGPT, for military purposes. As the Intercept reported, the firm’s prior policies explicitly prohibited the use of its AI tools and services for “weapons development” as well as “military and warfare”.
The explicit ban was quietly lifted in a revised usage policy. Experts say the change could open the door to direct use by military forces and defence departments. Although harmful activities are still broadly prohibited, military applications are no longer specifically excluded under the new policy.
When questioned about the change, an OpenAI representative said the intention was to distil the policy into “universal principles” such as “Don’t harm others”, though it remains unclear what this means for possible military applications.
“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs. A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples,” OpenAI’s Niko Felix said.