How (and How Not) to Fix AI
While artificial intelligence was once heralded as the key to unlocking a new era of economic prosperity, policymakers today face a wave of calls to ensure AI is fair, ethical, and safe. Unfortunately, the two most popular proposals, requiring companies to disclose the source code of their algorithms and to explain how those algorithms make decisions, would do more harm than good: they would regulate the business models and the inner workings of the algorithms of companies using AI rather than hold those companies accountable for outcomes.
In an opinion piece for TechCrunch, Josh New writes that policymakers should instead insist on algorithmic accountability: the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e., the party responsible for deploying the algorithm) can verify that it acts as intended and can identify and rectify harmful outcomes should they occur. Algorithmic accountability offers a better path toward ensuring that organizations use AI responsibly so that it can truly be a boon to society.