Microsoft just announced the release of an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI), to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems. Why care? Red teaming AI has been a hot topic for many meetings I’ve been in since most organizations have concerns about the risk associated with AI. I recently posted about my thoughts on responsible AI HERE, however that doesn’t address the risks around vulnerabilities.
What is involved with this offering? Microsoft’s AI Red Team leverages a dedicated interdisciplinary group of security, adversarial machine learning, and responsible AI experts. The Red Team also leverages resources from the entire Microsoft ecosystem, including the Fairness center in Microsoft Research; AETHER, Microsoft’s cross-company initiative on AI Ethics and Effects in Engineering and Research; and the Office of Responsible AI. Our red teaming is part of our larger strategy to map AI risks, measure the identified risks, and then build scoped mitigations to minimize them.
To learn about this cool new offer, check out this blog post found HERE. Also, sign up for the webinar going into details on this March 5th. Register HERE.