Public Interest Group Proposes New Liability Regime for Closed AI Foundation Models
Developers of closed artificial intelligence foundation models should bear greater responsibilities than developers of "open" models, with a potential safe harbor tied to testing and certification processes offering an adaptive regulatory approach to the technology, Public Knowledge says in comments to the National Telecommunications and Information Administration.
[ ... snip ... ]
But R Street, along with industry groups such as the Center for Data Innovation, says NTIA should take an "agnostic" approach to placing any requirements on the purveyors of open vs. closed models.
While NTIA has noted that openness in AI foundation models exists on a gradient rather than as a binary, Google's DeepMind and the Microsoft-affiliated OpenAI are considered to sit toward the closed end of the spectrum. Meta's Llama 2, by contrast, is considered more toward the open end, though it isn't truly "open source" because, for example, it doesn't reveal its training data.
All three companies are supporters of the Center for Data Innovation, which acknowledges the accountability benefits of open models.
"Concerns about fairness and bias could impede adoption of AI in fields such as criminal justice or public administration, but using open models could mitigate some concerns because anyone would have the opportunity to conduct their own independent third-party testing," CDI wrote in its own March 27 comments.
But, CDI adds, "History shows that there is a market for both open and closed approaches, such as in operating systems, software, and mobile device ecosystems, for both consumers and enterprises. Competition between open and closed models encourages innovation that benefits users. Policymakers should provide a level playing field, and neither penalize nor favor open models over closed models."
