NIST releases new AI risk management framework for ‘trustworthy’ AI

Today the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released the first version of its new AI Risk Management Framework (AI RMF 1.0), a “guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.”

The NIST AI Risk Management Framework is accompanied by a companion playbook that suggests ways to navigate and use the framework to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems.”

Congress directed NIST to develop the AI Risk Management Framework in 2020

Congress directed NIST to develop the framework through the National Artificial Intelligence Initiative Act of 2020, and NIST has been developing it since July 2021, soliciting feedback through workshops and public comments. The most recent draft was released in August 2022.

A press release explained that the AI RMF is divided into two parts. The first discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions — govern, map, measure and manage — to help organizations address the risks of AI systems in practice.
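To make the core concrete, here is a minimal, illustrative sketch of how a team might track work against those four functions. Only the function names (govern, map, measure, manage) come from the framework itself; the RmfActivity class, the status field and the example activities are hypothetical stand-ins, not part of NIST's guidance.

# Hypothetical checklist keyed by the AI RMF's four core function names.
from dataclasses import dataclass

@dataclass
class RmfActivity:
    description: str      # what the organization does under this function
    status: str = "todo"  # illustrative tracking state: todo / in_progress / done

rmf_core: dict[str, list[RmfActivity]] = {
    "govern": [RmfActivity("Assign accountability for AI risk decisions")],
    "map": [RmfActivity("Inventory AI systems and their intended contexts")],
    "measure": [RmfActivity("Track metrics for bias, robustness and privacy")],
    "manage": [RmfActivity("Prioritize and act on the highest-rated risks")],
}

for function, activities in rmf_core.items():
    for activity in activities:
        print(f"{function}: {activity.description} [{activity.status}]")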

In a live video announcing the RMF launch, undersecretary of commerce for standards and technology and NIST director Laurie Locascio said, “Congress clearly recognized the need for this voluntary guidance and assigned it to NIST as a high priority.” NIST is counting on the broad community, she added, to “help us refine these roadmap priorities.”

Deputy secretary of commerce Don Graves pointed out that the AI RMF comes not a moment too soon. “I’m amazed at the speed and extent of AI innovations just in the brief period between the initiation and the delivery of this framework,” he said. “Like many of you, I’m also struck by the enormity of the potential impacts, both positive and negative, that accompany the scientific, technological, and commercial advances.”

However, he added, “I’ve been around business long enough to know that this framework’s true value will depend upon its actual use and whether it changes the processes, the cultures, our practices.”

A holistic way to think about and approach AI risk management

In a statement to VentureBeat, Courtney Lang, senior director of policy, trust, data and technology at the Information Technology Industry Council, said that the AI RMF offers a “holistic way to think about and approach AI risk management, and the associated Playbook consolidates in one place informative references, which will help users operationalize key trustworthiness tenets.”

Organizations of all sizes will be able to use the flexible, outcomes-based framework, she said, to manage risks while also harnessing the opportunities AI presents. But given that standardization efforts are ongoing, she added that the framework will also need to evolve “in order to reflect the changing landscape and foster greater alignment.”

Some criticize the RMF’s ‘high-level’ and ‘generic’ nature

While the NIST AI RMF is a starting point, “in practical terms, it doesn’t mean very much,” Bradley Merrill Thompson, an attorney focused on AI regulation at law firm Epstein Becker Green, told VentureBeat in an email.

“It is so high-level and generic that it really only serves as a starting point for even thinking about a risk management framework to be applied to a specific product,” he said. “This is the problem with trying to quasi-regulate all of AI. The applications are so vastly different with vastly different risks.”

Gaurav Kapoor, co-CEO of governance, risk and compliance solution provider MetricStream, agreed that the framework is just a starting point. But he added that it helps “put sustainable processes around ongoing performance management, risk monitoring, risk of AI-induced bias and even measures to ensure PII is secure.” It’s clear, he added, that “all stakeholders should be involved when it comes to best practices in risk management.”

Will the NIST AI RMF foster a false sense of security?

Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab, told VentureBeat that organizations are more likely to successfully manage their AI risks by empowering their data science teams to develop, implement and continuously improve their best practices and platforms.

“Hopefully, this framework can provide some guidance to these efforts,” he said, but he added that many organizations will be “tempted to apply a framework like this, from the top down, in initiatives run by risk management professionals that are not experienced with AI technologies.”

Such efforts, he maintained, are “likely to result in the worst of all worlds — a false sense of security, no actual reduced risk, and additional wasted effort that stifles both adoption and innovation.”

NIST ‘uniquely positioned’ to fill the void

Still, widely accepted best practices around AI risk management are lacking, and practitioners on both the technical and legal sides need clear guidance, Andrew Burt, managing partner at law firm BNH.AI, told VentureBeat.

“When it comes to AI risk management, practitioners feel, all too often, like they are operating in the Wild West,” he said. “NIST is uniquely positioned to fill that void, and the AI Risk Management Framework consists of clear, effective guidance on how organizations can flexibly but effectively manage AI risks. I expect the RMF to set the standard for how organizations manage AI risks going forward, not just in the U.S., but globally as well.”
