Alumna Ellie Sakhaee writes for EqualAI about the steps to establish a framework for managing risks associated with AI systems
Despite their astonishing capabilities, today’s AI systems come with various societal risks, such as discriminatory outputs and privacy violations. Minimizing such risks can therefore lead to AI systems that are better aligned with societal values and, as a result, more trustworthy. Directed by Congress, NIST has taken important steps to establish a framework for managing risks associated with AI systems by creating a process to identify, measure, and minimize those risks.
More than 167 guidelines and sets of principles have been developed for trustworthy, responsible AI. They generally lay out high-level principles. The NIST framework, however, stands apart from many others because it aims to translate principles “into technical requirements that can be used by designers, developers, and evaluators to test the systems for trustworthy AI,” said Elham Tabassi, Chief of Staff at NIST’s Information Technology Laboratory (ITL), on the In AI we Trust? podcast with EqualAI and the World Economic Forum.