Actionable-guidance and roadmap recommendations for the NIST AI Risk Management Framework

https://www.lesswrong.com/posts/JNqXyEuKM4wbFZzpL/actionable-guidance-and-roadmap-recommendations-for-the-nist-1

Background on the NIST AI RMF

The National Institute of Standards and Technology (NIST) is currently developing the NIST Artificial Intelligence Risk Management Framework, or AI RMF. NIST intends the AI RMF as voluntary guidance on AI risk assessment and other AI risk management processes for AI developers, users, deployers, and evaluators. NIST plans to release Version 1.0 of the AI RMF in early 2023.

As voluntary guidance, the AI RMF would not impose "hard law" mandatory requirements on AI developers or deployers. However, AI RMF guidance would form part of "soft law" norms and best practices, which AI developers and deployers would have incentives to follow as appropriate. For example, insurers or courts may expect AI developers and deployers to show reasonable use of relevant AI RMF guidance as part of due care when developing or deploying AI systems in high-stakes contexts, in much the same way that NIST Cybersecurity Framework guidance can be used to demonstrate due care for cybersecurity. In addition, elements of soft-law guidance are sometimes adapted into hard-law regulations, e.g., by mandating that particular industry sectors comply with specific standards.

Summary of our Working Paper

In this document, we provide draft elements of actionable guidance, focused primarily on identifying and managing risks of events with very high or catastrophic consequences, intended to be easily incorporated by NIST into the AI RMF. We also describe our methodology for developing these recommendations. We provide actionable-guidance recommendations for AI RMF 1.0 on:

Key Sections of our Working Paper

Readers considering catastrophic risks as part of their work on AI safety and governance may be most interested in the following sections:

Next Steps

As mentioned above, feedback to Tony Barrett (anthony.barrett@berkeley.edu) by May 31, 2022 would be most helpful, though we will also appreciate feedback received after that date. We will consider feedback as we work on revised versions. These revisions will inform our recommendations to NIST on how best to address catastrophic risks and related issues in the NIST AI RMF, as well as our follow-on work for standards-development and AI governance forums.