On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0), which sets out principles designed to equip organizations and individuals with approaches that increase the trustworthiness of artificial intelligence (AI) systems. NIST also released a companion NIST AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk (mappings of the AI RMF 1.0 to other standards and frameworks) and various Perspectives (published statements from interested organizations and individuals). While NIST is an agency of the United States, it has a significant presence on the international stage, and many of its standards, including the AI RMF, have been developed in collaboration with, and adopted by, stakeholders worldwide.
The release of the AI RMF 1.0 and the accompanying materials coincided with a launch event hosted by NIST, which included speakers and panelists from the White House, the U.S. Chamber of Commerce and the Information Technology Industry Council, as well as various technology and AI initiatives and companies.
AI Risk Management Framework 1.0
The AI RMF was created in response to a directive from Congress to develop a framework to manage AI risks and to promote trustworthy and responsible development of AI systems. In addition to introducing AI risks and characteristics of trustworthy AI systems, the core of the AI RMF establishes four high-level functions that are key to understanding and managing AI risk: Govern, Map, Measure and Manage. (These functions are described in greater detail in our previous article, which introduces the framework.)
Released only four months after the second draft of the AI RMF, version 1.0 expands on most of the material from that draft and addresses new considerations as well, reinforcing its goal of being practical, flexible and adaptable to various AI technologies and to organizations of all sizes.
AI Risk Management Framework 1.0 launch event
The AI RMF 1.0 launch event included addresses from Don Graves (U.S. Deputy Secretary of Commerce), Dr. Alondra Nelson (Deputy Assistant to the President and Principal Deputy Director for Science and Society in the White House Office of Science and Technology Policy) and Zoe Lofgren (Ranking Member of U.S. House Committee on Science, Space, and Technology). It also featured panels to discuss the capabilities of the AI RMF and where it may fall short.
Key takeaways from the speakers and the panelists include:
- AI systems are inherently socio-technical systems, with people often at the centre: they are used by people, governed by people and have impacts on people.
- The AI RMF has the flexibility to scale and fit the needs of both large and small organizations in the public, private and non-profit sectors.
- More specific use cases should be built into the AI RMF. Extending its current capabilities to a broader range of specific, real-world situations is crucial both to expanding the adoption of AI to more applications and to addressing and mitigating AI-specific risks, such as perpetuating bias and disseminating misinformation.
Overall, there is considerable anticipation in the AI community about implementing the AI RMF in AI practices and deployment across all relevant industries, including employment and housing. Multiple panelists, including Navrina Singh (founder and CEO of Credo AI), emphasized the importance of providing widespread, international education on the AI RMF, as it could become the “gold standard” if implementation is achieved across the globe.
NIST plans to incorporate feedback from the AI community and update the NIST AI RMF Playbook periodically, with the next update slated for spring 2023. NIST also plans to launch a Trustworthy and Responsible AI Resource Center to provide guidance and assistance to organizations using the AI RMF 1.0. NIST’s recently released AI RMF Roadmap contains a full list of its top priorities for further developing the AI RMF.