Anything goes when it comes to artificial intelligence—it’s the proverbial Wild West. At least, that’s what concerns many skeptics. They worry that companies are rolling out algorithms for everything from life insurance to driverless cars with little oversight or understanding of the technology’s safety, and that this could put people’s lives and well-being at risk if something goes wrong. But the truth is that industry and government are moving to establish robust technical and professional standards for artificial intelligence, much like those already in place in fields such as medicine, law, and civil engineering. For example, many companies, especially large ones, are instituting internal review processes and consulting with external advisory boards, among other measures, to ensure the algorithms they develop are safe, effective, and ethical. And President Trump has issued an executive order directing the National Institute of Standards and Technology (NIST) to develop a plan to further engage the federal government in creating these standards for AI.
So, what comes next? Will this standards-making process address skeptics’ concerns? What role does the standards-setting process play in creating the underlying scientific understanding needed to regulate AI? And what does U.S. leadership in AI standards look like?
Please join ITIF’s Center for Data Innovation, in partnership with NIST, for a conversation about the state of play in developing standards and oversight for AI, and the importance of these initiatives for AI innovation, adoption, and governance.