WASHINGTON: To promote the development and use of AI technologies and systems that are trustworthy and responsible, NIST is seeking public comment on an initial draft of the AI Risk Management Framework (AI RMF). The draft addresses risks in the design, development, use, and evaluation of AI systems.
The voluntary framework is intended to improve understanding of, and help manage, enterprise and societal risks related to AI systems throughout the AI lifecycle, offering guidance for developing and using trustworthy and responsible AI. NIST is also developing a companion guide to the AI RMF with additional practical guidance.
This draft builds on the concept paper released in December and an earlier Request for Information. Feedback received by April 29 will be incorporated into a second draft to be issued this summer or fall. On March 29-31, NIST will hold its second workshop on the AI RMF. The first two days will address all aspects of the AI RMF; Day 3 will offer a deeper dive into issues related to mitigating harmful bias in AI.
This week, NIST also released "Towards a Standard for Identifying and Managing Bias within Artificial Intelligence" (SP 1270), which offers background and guidance for addressing one of the major sources of risk affecting the trustworthiness of AI. That publication explains that, beyond the machine learning processes and data used to train AI software, bias is related to broader societal factors, human and systemic institutional in nature, which influence how AI technology is developed and deployed.