AI is the latest big security buzzword, and it brings plenty of confusion about both its real potential and its associated risks. Not all AI is built the same way: some offerings use your data to train their systems, expose you to AI-specific attacks, or overpromise what they actually deliver. Others follow responsible AI practices and can provide your SOC a ton of value.
The U.S. National Institute of Standards and Technology (NIST) published a warning about the risks associated with AI, including specific call-outs of how threat actors can abuse AI systems.
The attacks, which can have significant impacts on availability, integrity, and privacy, are broadly classified as follows:
- Evasion attacks, which craft adversarial inputs after a model is deployed to change how it responds (see the sketch after this list)
- Poisoning attacks, which target the training phase of the algorithm by introducing corrupted data
- Privacy attacks, which aim to glean sensitive information about the system or the data it was trained on by posing questions that circumvent existing guardrails
- Abuse attacks, which insert incorrect information into a legitimate source the system ingests, such as a web page, in order to repurpose the system’s intended use
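To make the first category more concrete, here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM) against a toy logistic-regression model. The weights, inputs, and perturbation budget are invented for illustration and are not drawn from the NIST publication.

```python
# A minimal evasion-attack sketch: FGSM against a toy logistic-regression
# model. All weights, inputs, and the epsilon budget below are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "deployed" model: w and b define its decision boundary.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    # Probability that x belongs to the positive class.
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    # For logistic loss, the gradient of the loss w.r.t. the input x is
    # (p - y_true) * w. FGSM steps eps in the sign of that gradient,
    # i.e., the direction that most increases the loss.
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.4, 1.0])          # benign input, correctly classified
x_adv = fgsm(x, y_true=1.0, eps=0.5)    # small, bounded perturbation

print(f"clean score:       {predict(x):.3f}")      # ~0.85 (positive class)
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.43 (flipped)
```

The same idea scales up to deep models: the attacker nudges each input feature a small, bounded amount in the direction that most increases the model's loss, which is why evasion attacks are hard to catch by inspecting inputs alone.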
You can learn more about these concerns in the NIST publication HERE, and you should ask about these threats as you evaluate AI solutions for your security practice.