Korea Needs to Slow Down Regulation, Speed Up Support for AI
As Rob Atkinson writes in The Korea Times, it seems everyone is all in a lather about AI, especially after the release of the most recent ChatGPT large language model. Elon Musk says AI will kill us all if we don't act now. More restrained voices predict just the end of work and the end of truth. Now the best way to get attention is to "cry AI." Indeed, we are rapidly ascending to peak AI panic.
Although the vast majority of these breathless claims are nonsense, policymakers around the world are panicking, rushing to be the first out of the gate in regulating this menacing technology. With its longstanding embrace of the precautionary principle, Europe is leading the charge. For some bizarre reason, countries are competing not to be the best in AI, but to be the best (or worst, depending on how you view it) at AI regulation.
Korea is certainly trying with its proposed new Law on Nurturing the AI Industry and Establishing a Trust Basis. To be sure, the "nurturing" part is positive. And to his credit, President Yoon Suk Yeol has committed to supporting AI research and entrepreneurship, including spurring cooperation on AI education and research.
But the "trust" component appears more problematic. Indeed, the legislation mirrors the EU's in that both rest on the faulty premise that regulation fosters trust, which in turn fosters AI use. But that holds only if regulation does not harm AI innovators or users, whether by restricting needed capabilities (such as limiting data use) or by imposing significant compliance costs.