It is widely acknowledged that “trustworthiness” in artificial intelligence (AI) systems is critical to their development and appropriate use in all parts of our society. That is easier said than done, of course: there is little agreement on what constitutes trustworthy AI, or on the research, standards, and policy steps needed to define and achieve it.
This workshop kicks off a NIST (National Institute of Standards and Technology) initiative engaging private and public sector organizations and individuals in discussions about the building blocks of trustworthy AI systems, along with the measurements, methods, standards, and tools needed to implement those building blocks when developing, using, and testing AI systems. NIST’s effort will be informed by a series of workshops that will follow this initial session.
The second workshop, set for August 18, 2020, aims to develop a shared understanding of one characteristic of trustworthiness – bias in AI, what it is, and how to measure it. Future workshops on other technical requirements of trustworthy AI will be announced. All workshops for the immediate future will be virtual and are open to the public at no cost.
This launch event will bring together experts from the private and public sectors to engage in collaborative discussions. Details about the program and speakers will be announced shortly.
Please check back for updated information or sign up to receive email updates about NIST’s AI activities by sending an email to: [email protected]