AI is a huge topic right now, and so is the security associated with the technology. As an AI security advisor, I’m constantly asked about the risks of using AI as well as how threat actors could abuse it. I’ve seen many variations on how to secure AI, most of which break things down into securing the applications that use AI, protecting the AI model being used, and protecting the data that AI leverages.
Zeropath posted an article showing AI pen testing results against AI used by Netflix, Hulu, and Salesforce. That post can be found HERE. It breaks down many of the vulnerabilities discovered, minus the ones the vendors are still patching, since Zeropath doesn’t want to expose those until the risk of exploitation has been handled. I like the following diagram, which summarizes the types of vulnerabilities found and the percentage each represents. I would have guessed authentication flaws would be a top issue, and they indeed made up over 50% of the vulnerabilities found.
Check out the full post to get the dirty details. My response to these findings: organizations need to focus on securing applications, LLMs, and data just as we have in the past to reduce the risk of compromise. The security industry has long-standing guidelines and recommendations for securing AAA (authentication, authorization, and accounting), which not only reduce the risk of a compromised authorization flow but also prevent compromised accounts from being used beyond a very limited scope. Zero Trust is a popular concept in this space. Application security is nothing new either, nor are best practices for securing code, preventing data loss, and protecting data in motion. AI may be a newer topic, but the elements that make up AI have been secured for years.
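To make that "very limited scope" idea concrete, here is a minimal sketch of scope-limited authorization in front of an AI endpoint, written in Python with the PyJWT library. The secret, scope name, and subjects are hypothetical stand-ins, and a real Zero Trust deployment would use short-lived, asymmetrically signed tokens from an identity provider rather than a static shared secret.

```python
# Minimal sketch: enforce least-privilege scopes on an AI-backed endpoint.
# Assumptions (hypothetical): HS256 shared secret and an "ai:query" scope.
import time
import jwt  # PyJWT: pip install pyjwt

SECRET = "replace-with-managed-secret"  # hypothetical; pull from a KMS in practice
REQUIRED_SCOPE = "ai:query"             # hypothetical scope name

def issue_token(subject: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived token carrying only the scopes the caller needs."""
    now = int(time.time())
    claims = {"sub": subject, "scope": " ".join(scopes),
              "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def authorize(token: str) -> dict:
    """Validate the token and confirm it grants the AI-query scope.

    Rejecting anything without the exact scope is what keeps a stolen
    or compromised token from being reused beyond its narrow purpose.
    """
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # also checks exp
    if REQUIRED_SCOPE not in claims.get("scope", "").split():
        raise PermissionError(f"token lacks required scope {REQUIRED_SCOPE!r}")
    return claims

if __name__ == "__main__":
    good = issue_token("analyst@example.com", ["ai:query"])
    print(authorize(good)["sub"])  # allowed: token carries the required scope
    bad = issue_token("intruder@example.com", ["reports:read"])
    try:
        authorize(bad)
    except PermissionError as err:
        print("blocked:", err)     # over-scoped or stolen token is denied
```

The point of the sketch is the design choice, not the specific library: every request is re-verified, tokens expire quickly, and a credential compromised in one context cannot reach the AI endpoint without the exact scope it requires.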