How to Write a Generative AI Cybersecurity Policy

Many C-level executives I speak with are concerned about shadow AI within their organization … and they should be! Every technology provider on the planet either has an AI story or is rushing to release one in order to remain relevant. That means many vendors, afraid of falling behind, are bolting AI onto their technology without addressing the risks tied to responsible AI. I have spoken about responsible AI HERE.

Trend Micro released an article focused on creating an AI policy, which can be found HERE. These are the four minimum recommendations they call out.

“Four key AI security policy considerations

Given the nature of the risks outlined above, protecting the privacy and integrity of corporate data are obvious goals for AI security. As a result, any corporate policy should, at a minimum:

1. Prohibit sharing sensitive or private information with public AI platforms or third-party solutions outside the control of the enterprise. “Until there is further clarity, enterprises should instruct all employees who use ChatGPT and other public generative AI tools to treat the information they share as if they were posting it on a public site or social platform,” is how Gartner recently put it.

2. Don’t “cross the streams”. Maintain clear rules of separation for different kinds of data, so that personally identifiable information and anything subject to legal or regulatory protection is never combined with data that can be shared with the public. This may require establishing a classification scheme for corporate data if one doesn’t already exist.

3. Validate or fact-check any information generated by an AI platform to confirm it is true and accurate. The risk to an enterprise of going public with AI outputs that are patently false is enormous, both reputationally and financially. Platforms that can generate citations and footnotes should be required to do so, and those references should be checked. Otherwise, any claims made in a piece of AI-generated text should be vetted before the content is used. “Although [ChatGPT] gives the illusion of performing complex tasks, it has no knowledge of the underlying concepts,” cautions Gartner. “It simply makes predictions.”

4. Adopt—and adapt—a zero trust posture. Zero trust is a robust way of managing the risks associated with user, device, and application access to enterprise IT resources and data. The concept has gained traction as organizations have scrambled to deal with the dissolution of traditional enterprise network boundaries. While the ability of AI to mimic trusted entities will likely challenge zero-trust architectures, if anything, that makes controlling untrusted connections even more important. The emerging threats presented by AI make the vigilance of zero trust critical.”

This isn’t bad advice. To address the first point, you need a way to identify every form of AI in use and only allow the AI you have approved; Microsoft AI Hub is one example of a solution, and something like it is needed so AI can’t simply be switched on and quietly put you at risk. For the data security point, aka don’t cross the streams, you need a data security solution with eDiscovery and data classification. I completely agree with number 3: AI can’t be a closed box, and any output needs a way to validate its sources. And finally, everybody in security knows zero trust is important. A few minimal sketches of what the first three points could look like in practice follow below.
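To make the shadow AI point concrete, here is a minimal sketch (not a production tool) of the kind of check a proxy or firewall log review could perform: compare outbound destinations against an allowlist of approved AI services and flag everything else. The domain lists, log format, and function names here are illustrative assumptions, not a complete inventory or a specific product's API.

```python
# Minimal sketch: flag outbound traffic to generative AI services that are
# not on the approved list. Domains and log format are assumptions for
# illustration only.

APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}  # services IT has sanctioned
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "perplexity.ai",
}

def find_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for AI traffic outside the approved list."""
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <destination-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample = [
        "2024-05-01T09:14:02 alice chat.openai.com",
        "2024-05-01T09:15:10 bob copilot.microsoft.com",
    ]
    for user, domain in find_shadow_ai(sample):
        print(f"Unapproved AI use: {user} -> {domain}")
```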
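For the "don't cross the streams" point, the same idea can be expressed as a pre-send gate: classify content before it ever reaches a public AI endpoint and block anything that looks like regulated or personally identifiable data. The two regex patterns below are a rough, assumption-heavy stand-in; a real data security or eDiscovery product would use much richer classifiers.

```python
import re

# Illustrative patterns only; a real DLP or classification engine would use
# tuned detectors, not two regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> str:
    """Return 'restricted' if the text matches any sensitive pattern, else 'public'."""
    for _label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            return "restricted"
    return "public"

def send_to_public_ai(prompt: str) -> str:
    """Hypothetical gate in front of an approved public AI service."""
    if classify(prompt) == "restricted":
        raise PermissionError("Blocked: prompt contains data classified as restricted.")
    # ...forward the prompt to the approved AI service here...
    return "ok"
```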
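And for point 3, here is a hedged sketch of a very basic citation check: pull the URLs out of an AI-generated draft and confirm they at least resolve before a human reviews the claims themselves. A link that resolves obviously does not prove the claim is accurate, so treat this as a first filter ahead of manual fact-checking, not a replacement for it.

```python
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def check_citations(ai_text: str, timeout: float = 5.0):
    """Return (url, reachable) pairs for every link found in the AI-generated draft."""
    results = []
    for url in URL_PATTERN.findall(ai_text):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status < 400))
        except Exception:
            # Unreachable or error response: flag it for human review.
            results.append((url, False))
    return results
```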

Check out the full post for more details on these policy recommendations.
