Korea's Basic AI Act Risks Stalling the Engine It Seeks to Build
Editor’s note: This column appeared in the South Korean publication TechM and is published here in English with permission.
Last December, South Korea did something no other nation has attempted. It passed the world’s first law to merge AI strategy, industrial promotion, and regulation into a single statute: the Framework Act on the Promotion and Trust of Artificial Intelligence.
On paper, it looks like a triumph. Korea can now claim to be the first to have an integrated framework while the United States debates voluntary guidelines and Europe wrangles over its AI Act. It is the kind of move that cements a nation’s reputation as a policy pioneer.
But being first carries its own risks. By binding promotion and regulation together, Korea has created a paradox: The same law designed to drive industrial growth could also stall it.
The promotional side of the Framework Act is impressive. It promises investment in data infrastructure, new AI clusters, talent training, and internationalization. These are smart, forward-looking measures that other countries will envy. They lay the foundation for an AI ecosystem capable of competing globally.
The trouble comes with the regulatory provisions. Instead of targeting genuine harms, they rely on blunt, symbolic rules. Most problematic are the criteria for designating “high-impact AI.” One test is the so-called compute threshold, which automatically classifies any AI model trained with more than a set amount of computational power as high-impact. The metric sounds precise, but it is meaningless. Compute is not a proxy for danger. Regulating by raw processing power is like regulating airplanes by fuel tank size: It looks objective, but it tells you little about whether they are safe to fly.
Articles 33 through 35, which cover high-impact AI, add another layer of concern. They impose exhaustive self-assessments, documentation, and reporting obligations. Yet endless paperwork does not guarantee safety. Results—not reports—should be the standard.
That means shifting to performance-based oversight. Regulation should rest on measurable outcomes and be enforced by ministries that already have the expertise and laws to manage sector-specific risks. The Transport Ministry knows how to regulate autonomous driving, the Financial Regulator understands market algorithms, and the Health Ministry oversees medical safety. Each can set relevant benchmarks, while technical bodies such as the Korea Research Institute of Standards and Science can design protocols to test compliance.
Other provisions are equally misguided. Obligations for watermarking and labeling AI outputs may sound reassuring, but they will not stop disinformation, deepfakes, or intellectual property theft. What they will do is create the illusion of safety while burying firms in paperwork.
The right metaphor is a clock. If the regulatory clock ticks too fast, industry stalls; if it ticks too slowly, society is left unprotected. At present, Korea’s regulatory clock is running ahead of its industrial one, and the gears are grinding.
Fortunately, time remains. The law does not take effect until January 2026. The Ministry of Science and ICT is now drafting the Enforcement Decrees that will determine how it works in practice. Lawmakers can still narrow the overly broad definition of “AI system,” scrap prescriptive R&D mandates, and eliminate SME-first rules that punish scale. Regulators can still drop the compute threshold, introduce a grace period before fines, and redesign high-risk AI obligations around measurable outcomes.
Korea has a choice. It can show the world how to integrate strategy, promotion, and regulation in a way that builds both trust and competitiveness. Or it can serve as a cautionary tale of how regulatory overreach strangles innovation.
The question is not whether AI should be regulated, but how—and at what tempo. Korea’s AI Framework Act has put its clock on display. For the Lee Jae-myung administration, which has pledged to make Korea one of the world’s top three AI powers, the challenge is not to wind the clock faster. It is to craft a precision policy clock—one in which strategy, promotion, and regulation mesh and keep time together.