
Employing Generative AI: Unpacking the Cybersecurity Implications of Generative AI Tools

It’s fair to say that generative AI has now caught the attention of every boardroom and business leader in the land. Once a fringe technology that was difficult to wield, much less master, generative AI is now open to everyone thanks to applications such as ChatGPT and DALL-E. We’re now witnessing a wholesale embrace of generative AI across all industries and age groups as employees figure out ways to leverage the technology to their advantage.

A recent survey indicated that 29% of Gen Z, 28% of Gen X, and 27% of Millennial respondents now use generative AI tools as part of their everyday work. In 2022, large-scale generative AI adoption was at 23%, and that figure is expected to double to 46% by 2025.

Generative AI is a nascent but rapidly evolving technology that leverages trained models to generate original content in various forms, from written text and images right through to videos, music, and even software code. Using large language models (LLMs) and enormous datasets, the technology can instantly create unique content that is almost indistinguishable from human work, and in many cases more accurate and compelling.

However, while businesses are increasingly using generative AI to support their daily operations, and employees have been quick on the uptake, the pace of adoption and the lack of regulation have raised significant cybersecurity and regulatory compliance concerns.

According to one survey of the general population, more than 80% of people are concerned about the security risks posed by ChatGPT and generative AI, and 52% of those polled want generative AI development to be paused so regulations can catch up. This wider sentiment has also been echoed by businesses themselves, with 65% of senior IT leaders unwilling to condone frictionless access to generative AI tools due to security concerns.

Generative AI is still an unknown unknown

Generative AI tools feed on data. Models, such as those used by ChatGPT and DALL-E, are trained on external or freely available data on the internet, but in order to get the most out of these tools, users need to share very specific data. Often, when prompting tools such as ChatGPT, users will share sensitive business information in order to get accurate, well-rounded results. This creates a lot of unknowns for businesses. The risk of unauthorized access or unintended disclosure of sensitive information is “baked in” when it comes to using freely available generative AI tools.
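To make that exposure concrete, the sketch below shows the kind of pre-submission screening a business could place in front of a generative AI tool. It is a minimal illustration only: the patterns and the screen_prompt helper are assumptions invented for this example, not features of ChatGPT or of any DLP product.

```python
import re

# Illustrative patterns only; a real deployment would rely on a proper
# data loss prevention (DLP) engine rather than hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

prompt = "Summarise this contract for jane.doe@acme.com before Friday"
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt contains", ", ".join(findings))
else:
    print("Prompt cleared for submission")
```

In practice a dedicated DLP engine would replace the regular expressions, but the principle is the same: inspect what leaves the business before it reaches a model you don’t control.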

This risk, in and of itself, isn’t necessarily a reason to avoid the technology. The issue is that these risks have yet to be properly explored. To date, there has been no real business impact analysis of using widely available generative AI tools, and global legal and regulatory frameworks around generative AI use have yet to reach any form of maturity.

Regulation is still a work in progress

Regulators are already evaluating generative AI tools in terms of privacy, data security, and the integrity of the data they produce. However, as is often the case with emerging technology, the regulatory apparatus to support and govern its use is lagging several steps behind. While the technology is being used by companies and employees far and wide, the regulatory frameworks are still very much on the drawing board.

This creates a clear and present risk for businesses which, at the moment, isn’t being taken as seriously as it should be. Executives are naturally interested in how these platforms will introduce material business gains such as opportunities for automation and growth, but risk managers are asking how this technology will be regulated, what the legal implications might eventually be, and how company data might become compromised or exposed. Many of these tools are freely available to any user with a browser and an internet connection, so while they wait for regulation to catch up, businesses need to start thinking very carefully about their own “house rules” around generative AI use.

The role of CISOs in governing generative AI

With regulatory frameworks still lacking, Chief Information Security Officers (CISOs) must step up and play a crucial role in managing the use of generative AI within their organizations. They need to understand who is using the technology and for what purpose, how to protect enterprise information when employees are interacting with generative AI tools, how to manage the security risks of the underlying technology, and how to balance the security tradeoffs with the value the technology offers.
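Gaining that visibility usually starts with telemetry the security team already collects. As a minimal sketch, assuming web proxy logs in a simple tab-separated format and an illustrative, deliberately incomplete list of generative AI domains:

```python
from collections import Counter

# Illustrative, non-exhaustive set of generative AI endpoints to watch for.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "labs.openai.com"}

def genai_usage(log_lines: list[str]) -> Counter:
    """Count requests per user to known generative AI domains.

    Assumes each log line is formatted as 'user<TAB>domain<TAB>timestamp'.
    """
    usage = Counter()
    for line in log_lines:
        user, domain, _timestamp = line.split("\t")
        if domain in GENAI_DOMAINS:
            usage[user] += 1
    return usage

logs = [
    "alice\tchat.openai.com\t2023-06-01T09:14",
    "bob\texample.com\t2023-06-01T09:15",
    "alice\tapi.openai.com\t2023-06-01T09:20",
]
print(genai_usage(logs))  # Counter({'alice': 2})
```

A report like this doesn’t answer the harder question of purpose, but it tells a CISO who is using the tools and how often, which is the prerequisite for any sensible policy conversation.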

This is no easy task. Detailed risk assessments should be carried out to weigh the negative and positive outcomes of, first, deploying the technology in an official capacity and, second, allowing employees to use freely available tools without oversight. Given the easy-access nature of generative AI applications, CISOs will need to think carefully about company policy surrounding their use. Should employees be free to leverage tools such as ChatGPT or DALL-E to make their jobs easier? Or should access to these tools be restricted or moderated in some way, with internal guidelines and frameworks about how they should be used? One obvious problem is that even if internal usage guidelines were to be created, given the pace at which the technology is evolving, they might well be obsolete by the time they are finalized.

One way of addressing this problem might actually be to shift the focus away from the generative AI tools themselves and toward data classification and protection. Data classification has always been a key aspect of protecting data from being breached or leaked, and that holds true in this particular use case too. It involves assigning a level of sensitivity to data, which determines how it should be treated. Should it be encrypted? Should it be blocked or contained? Should access to it trigger a notification? Who should have access to it, and where is it allowed to be shared? By focusing on the flow of data, rather than the tool itself, CISOs and security officers will stand a much greater chance of mitigating some of the risks mentioned.
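As a hedged sketch of what classification-driven handling might look like in code, the sensitivity levels, handling rules, and policy_for helper below are hypothetical examples, not taken from any specific standard:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

@dataclass
class HandlingPolicy:
    encrypt_at_rest: bool   # Should it be encrypted?
    block_external: bool    # Should it be blocked from leaving the org?
    notify_on_access: bool  # Should access trigger a notification?
    shareable_with: str     # Who may it be shared with?

# Hypothetical mapping from classification level to treatment.
POLICIES = {
    Sensitivity.PUBLIC: HandlingPolicy(False, False, False, "anyone"),
    Sensitivity.INTERNAL: HandlingPolicy(False, True, False, "all employees"),
    Sensitivity.CONFIDENTIAL: HandlingPolicy(True, True, True, "named teams"),
    Sensitivity.RESTRICTED: HandlingPolicy(True, True, True, "named individuals"),
}

def policy_for(level: Sensitivity) -> HandlingPolicy:
    """Look up how data at a given sensitivity level must be treated."""
    return POLICIES[level]

print(policy_for(Sensitivity.CONFIDENTIAL))
```

The point of the structure is that the handling decision (encrypt, block, notify, share) attaches to the data’s classification, so the same rule applies whether the data is headed for an email, a file share, or a generative AI prompt.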

Like all emerging technology, generative AI is both a boon and a risk to businesses. While it offers exciting new capabilities such as automation and creative conceptualization, it also introduces some complex challenges around data security and the safeguarding of intellectual property. While regulatory and legal frameworks are still being hashed out, businesses must take it upon themselves to walk the line between opportunity and risk, implementing their own policy controls that reflect their overall security posture. Generative AI will drive business forward, but we should be careful to keep one hand on the wheel.
