AI safety and the risks surrounding the use of AI are popular topics. Last year Microsoft released an open source AI pen testing tool called PyRIT (the Python Risk Identification Tool for generative AI) and has been one of the first companies to red team AI systems. The team behind it recently posted about what they have learned from pen testing more than 100 generative AI products. You can see the post HERE.
The three key takeaways are the following:
Takeaway 1: Generative AI systems amplify existing security risks and introduce new ones. I have run into this many times when showcasing AI to organizations. One demo I do is ask the AI what my boss has been working on over the last week. Since my boss and I both use corporate-issued computers with permissions that let us see what others are working on (without exposing sensitive details), the AI can use that privilege to answer my question. Sensitive details are still removed; what I gain is visibility I was always allowed to have but could never practically get without AI. Responsible AI solutions leverage some form of role-based access control (RBAC) to ensure the AI isn't causing privilege escalation. If your RBAC isn't strong, AI will expose that weakness, so AI ends up surfacing the gaps in a weak security program. The same concept applies to all of your security capabilities.
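To make the RBAC point concrete, here is a minimal, hypothetical Python sketch (the names and structure are mine, not anything from the Microsoft post) of the kind of gate an AI assistant should call through: the model only ever sees what the requesting user's role already entitles them to see, with sensitive details stripped before anything reaches it.

```python
# Hypothetical sketch: enforcing RBAC before an AI assistant answers on a
# user's behalf. Role, ActivitySummary, etc. are illustrative names only.
from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    EMPLOYEE = auto()
    MANAGER = auto()
    ADMIN = auto()


@dataclass
class ActivitySummary:
    owner: str
    summary: str           # high-level description, safe to share
    sensitive_notes: str   # details that must never leave the record


# Roles allowed to see a colleague's high-level activity.
ACTIVITY_READ_ROLES = {Role.EMPLOYEE, Role.MANAGER, Role.ADMIN}


def fetch_activity_for_ai(requesting_role: Role, record: ActivitySummary) -> str:
    """Return only what the requesting user's role is entitled to see.

    The AI layer calls this instead of reading the record directly, so the
    model can never surface more than the user could already access.
    """
    if requesting_role not in ACTIVITY_READ_ROLES:
        raise PermissionError("Role is not allowed to view colleague activity")
    # Sensitive details are stripped before anything reaches the model.
    return f"{record.owner} worked on: {record.summary}"


if __name__ == "__main__":
    record = ActivitySummary(
        owner="my boss",
        summary="Q3 planning deck and two customer escalations",
        sensitive_notes="salary review spreadsheet",
    )
    # The AI assistant only ever sees the filtered view.
    print(fetch_activity_for_ai(Role.EMPLOYEE, record))
```

The point of the sketch is that the access decision lives outside the model: if the RBAC layer is weak, the AI simply makes that weakness easier to find.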
Takeaway 2: Humans are at the center of improving and securing AI. AI is about helping people do more by using machines to handle tedious tasks and by skilling people up and across roles. In other words, AI is about people, so if people are your weakest link, that weakness will be amplified by AI.
Takeaway 3: Defense in depth is key for keeping AI systems safe. This finding shouldn't be a surprise to any security professional.
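As a rough illustration of what defense in depth can look like around a model call (a generic sketch of mine, not anything from the article; `model_call` is a stand-in for your provider's client), here is a Python example that layers an input filter, an output scrubber, and an audit log so no single control is the only thing standing between an attacker and the model.

```python
# Hypothetical sketch of defense in depth around a generative AI call:
# several independent layers, each of which can stop a bad request or
# response on its own.
import re

BLOCKED_PATTERNS = [r"ignore previous instructions", r"system prompt"]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)


def input_filter(prompt: str) -> str:
    """Layer 1: reject obvious prompt-injection attempts before they reach the model."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("Prompt rejected by input filter")
    return prompt


def model_call(prompt: str) -> str:
    """Stand-in for the actual model; replace with your provider's client."""
    return f"(model response to: {prompt})"


def output_filter(response: str) -> str:
    """Layer 2: scrub anything that looks like a leaked credential from the response."""
    return SECRET_PATTERN.sub("[REDACTED]", response)


def audit_log(prompt: str, response: str) -> None:
    """Layer 3: keep a record so misuse can be detected after the fact."""
    print(f"AUDIT prompt={prompt!r} response={response!r}")


def guarded_ai_call(prompt: str) -> str:
    safe_prompt = input_filter(prompt)
    raw = model_call(safe_prompt)
    safe_response = output_filter(raw)
    audit_log(safe_prompt, safe_response)
    return safe_response


if __name__ == "__main__":
    print(guarded_ai_call("Summarize last week's project updates"))
```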
Check out the full article for all of the dirty details.