NIST recently released its first AI guideline, known as the AI RMF 1.0 (Artificial Intelligence Risk Management Framework). Its purpose is to help AI actors (organizations and/or individuals) increase the trustworthiness of AI systems. As with any guideline, nothing is required; it should be seen as a set of recommended best practices. To understand how this NIST framework can be used, consider its stated goal:
As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.
The first part of the framework addresses how to frame risk. This is by far the top concern I hear when speaking about AI. Risk management is defined, and recommendations are given to help understand and quantify risk. This part serves as a primer for the second section.
The second section presents the AI RMF Core. Its purpose is to provide outcomes and actions that enable dialogue and concrete steps to manage AI risk. This is the key value of the publication, and it is likely to be referenced in many requirements emerging across all aspects of business. As with other NIST publications, I predict that requirements for various AI topics will call out specific mappings to the AI RMF Core, since NIST is an industry-respected source of guidelines.
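To make that prediction concrete, here is a minimal, hypothetical sketch of what mapping internal requirements to the AI RMF Core could look like. The four Core function names (Govern, Map, Measure, Manage) come from the AI RMF 1.0 itself; the requirement IDs, their descriptions, and the mapping are invented purely for illustration and are not part of the publication.

```python
# Hypothetical sketch: tracking how internal AI requirements map to the
# four AI RMF Core functions. Function names are from AI RMF 1.0; the
# requirements below are made up for illustration.

AI_RMF_CORE_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

# Invented internal requirements, each tagged with the Core function(s)
# it maps to.
requirements = {
    "AI-POL-01: Maintain an AI risk management policy": ["Govern"],
    "AI-INV-02: Inventory AI systems and their use cases": ["Map"],
    "AI-TST-03: Track model performance and bias metrics": ["Measure"],
    "AI-RSP-04: Define incident response for AI failures": ["Manage"],
}


def coverage(reqs):
    """Group requirements by the Core function they map to, so gaps
    (functions with no mapped requirement) are easy to spot."""
    by_function = {fn: [] for fn in sorted(AI_RMF_CORE_FUNCTIONS)}
    for req, functions in reqs.items():
        for fn in functions:
            if fn not in AI_RMF_CORE_FUNCTIONS:
                raise ValueError(f"Unknown Core function: {fn}")
            by_function[fn].append(req)
    return by_function


for fn, mapped in coverage(requirements).items():
    print(f"{fn}: {len(mapped)} requirement(s)")
```

A real mapping would go a level deeper, referencing the Core's categories and subcategories, but the shape of the exercise (requirements on one side, Core functions on the other, with coverage gaps surfaced) would be the same.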
Check out the publication HERE.