Artificial Intelligence is sweeping into our organizations for all sorts of purposes. Sometimes people turn it on just to try it out; other times it's a capability that appears as vendors innovate. As an AI consultant, the number one ask I receive is how to build an AI policy. The driver is the fear of shadow AI: the concern that AI systems are being used within an organization without any understanding of their risk.
Why care about this? What is the actual risk? AI is not human, so it doesn't have feelings or care about anything beyond what it is instructed to do. That means that without controls, AI can get you in trouble. One example is allowing AI to reason over the public internet. Without controls in place, a publicly available AI could pull in copyright-protected material, putting the user at fault for copyright infringement.
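As a rough illustration, one control for that scenario is restricting retrieval to sources whose licensing has already been reviewed. The sketch below is a minimal Python example; the document structure, the field names, and the allowlisted domains are all assumptions made for illustration, not part of any particular product.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of sources whose licensing terms have been reviewed.
# These domain names are placeholders, not a recommendation.
APPROVED_SOURCES = {
    "docs.internal.example.com",
    "licensed-content.example.org",
}

def filter_retrieved_sources(documents):
    """Drop any retrieved document whose source domain is not on the
    reviewed allowlist before it is passed to the model as context."""
    allowed = []
    for doc in documents:
        domain = urlparse(doc["url"]).netloc
        if domain in APPROVED_SOURCES:
            allowed.append(doc)
    return allowed

# Example: only the first document would ever reach the model.
docs = [
    {"url": "https://docs.internal.example.com/policy.html", "text": "..."},
    {"url": "https://random-blog.example.net/article", "text": "..."},
]
print([d["url"] for d in filter_retrieved_sources(docs)])
```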
Without controls, an AI system could also pick up biased behavior and become a bad source of truth. Consider asking it who to hire for a role. A bad outcome would be the AI learning from its training data that males tend to get the role and automatically removing every female candidate from the resume pile. That obviously isn't acceptable practice, but it is something an AI system could end up doing if responsible AI controls are not put in place.
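One concrete check for this kind of bias is comparing selection rates across groups, for example with the common four-fifths (0.8) rule of thumb for disparate impact. The following is a minimal sketch that assumes you can export the screener's decisions as (group, selected) pairs; it is an illustration, not a complete fairness audit.

```python
def selection_rates(decisions):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        if was_selected:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest selection rate to the highest; values well below
    1.0 (e.g. under the common 0.8 rule of thumb) flag possible bias."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy data: (group, advanced_to_interview) pairs produced by an AI screener.
screening_log = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
print(selection_rates(screening_log))         # {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(screening_log))  # ~0.5 -> worth investigating
```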
The following are some of the concerns to work through as you decide what controls to require for any AI you allow in your environment (a sketch of one such control follows this list):
- How do you prevent copyright infringement?
- How do you ensure the AI is unbiased and fair?
- How do you identify hallucinations and validate that anything generated is truthful?
- How do you prevent AI from causing data leakage or privilege escalation?
- How does responsible AI fit into developing applications that use AI?
- What protections and insurance are in place when AI fails to meet responsible AI practices? Who is liable?
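To make the list less abstract, here is a minimal sketch of an output-review control that touches two of these questions, data leakage and hallucinations. The regex patterns and the idea of passing retrieved sources alongside the answer are assumptions made for illustration; real deployments would rely on purpose-built guardrail and DLP tooling rather than a handful of regexes.

```python
import re

# Very rough illustrative checks, not a substitute for real DLP tooling.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def review_output(answer, sources):
    """Return a list of policy findings for a generated answer.
    `sources` is assumed to hold the citations the system retrieved."""
    findings = []
    if EMAIL.search(answer) or SSN.search(answer):
        findings.append("possible personal data in output (data leakage)")
    if not sources:
        findings.append("no supporting sources (hallucination risk)")
    return findings

answer = "Contact jane.doe@example.com for approval."
print(review_output(answer, sources=[]))
# ['possible personal data in output (data leakage)',
#  'no supporting sources (hallucination risk)']
```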
I expect many guidelines to become available as various providers work to define what should be considered the standard for responsible AI. Some available references include NIST's AI Risk Management Framework found HERE. Microsoft has also published its six pillars for responsible AI found HERE.
I'll end this post with a simple thought: I'm not a lawyer, but if a tool couldn't answer the bullet points covered in this post, I wouldn't put it in my environment. I would be afraid of the legal trouble it could cause.
You will need to develop your own policies for AI and ensure any system using AI meets those policies before allowing it to operate. Hopefully this post helps with deciding what those policies should include. I'll cover how to identify shadow AI in a future post, since it's also a common question I receive when I speak on responsible AI. As a teaser, think about a cloud access security broker (CASB) and the other tools you would use to identify any form of shadow IT.
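For a flavor of what that future post will cover, here is a minimal sketch of spotting shadow AI from an exported proxy or CASB log. The CSV column names and the list of AI service domains are assumptions made for illustration; you would maintain your own inventory of services and log fields.

```python
import csv
from collections import Counter

# Illustrative list only; maintain your own inventory of AI SaaS domains.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(proxy_log_csv):
    """Count hits to known AI services per user from an exported proxy log.
    Assumes the export has 'user' and 'domain' columns."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_SERVICE_DOMAINS:
                hits[(row["user"], row["domain"])] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy_export.csv").items():
        print(f"{user} -> {domain}: {count} requests")
```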