Memorization Won’t Prepare Students for the Age of Agentic AI
Editor’s note: This column appeared in the Chosun Ilbo, a South Korean publication, and is published here in English with permission.
For decades, Korea’s education system has been highly effective at training students to avoid mistakes.
Success has traditionally meant memorizing large amounts of information and selecting the correct answer from a set of five choices. Students who quickly identified the right answer and minimized errors were rewarded. This model worked well during Korea’s period of rapid industrialization, when the economy needed workers who could absorb standardized knowledge and apply it accurately.
But the type of knowledge needed to succeed in the emerging AI economy is changing.
As AI systems reduce the cost of generating answers, they increase the importance—and difficulty—of verifying them. In many domains, the bottleneck is shifting from producing outputs to evaluating whether those outputs are reliable, relevant, and aligned with real-world objectives.
This shift is creating demand for a new kind of role: an “AI middle manager” who supervises the outputs of multiple AI systems. Rather than coordinating people, this role involves interpreting conflicting results, identifying flawed assumptions, and determining when outputs are sufficient to inform decisions.
For example, a company might use one AI system to optimize pricing and resource allocation for short-term revenue growth, while another flags increased long-term customer churn under that strategy. The task is not simply to compare outputs or check for errors. The human supervisor must determine whether the optimization objective is correctly specified, reconcile tradeoffs between short-term gains and long-term stability, and decide whether to override or refine the system’s recommendation. In some cases, they may need to reframe the problem entirely—for instance, shifting from pure revenue maximization to lifetime customer value.
In this environment, the critical skill is not retrieving known answers, but evaluating what counts as a good answer and deciding which outputs to trust, revise, or reject.
Some education systems are better prepared for these changes than others.
In the United States, many universities have adopted “test-optional” admissions policies, allowing students to apply without submitting standardized test scores. Instead, these universities are placing greater emphasis on sustained work such as research projects, design activities, or long-term problem-solving experiences.
Educational institutions themselves are also changing how they teach students.
Olin College of Engineering in Massachusetts, for example, organizes much of its curriculum around design and project-based learning. Students begin working on collaborative design projects early in their studies and complete real-world capstone projects before graduation. Failure is not treated as something to avoid, but as part of the learning process.
At the secondary education level, programs such as the International Baccalaureate (IB) Diploma Programme require students to complete an Extended Essay, an independent research project in which students define their own research question and document their investigative process. The goal is not simply to produce a correct answer, but to demonstrate how a problem evolves through inquiry and revision.
These examples reflect a broader shift in educational priorities. Today, most information is accessible instantly through digital tools and AI systems. The key challenge is no longer access to established knowledge, but the ability to interpret, evaluate, and verify it.
In this context, Korea’s education system faces an important challenge.
A system designed to reward accuracy and speed of recall may struggle in an environment where questioning and revision are more valuable than immediate correctness. Students trained primarily to avoid mistakes may find it difficult to adapt to workplaces where identifying and managing errors is essential.
Reform does not require abandoning the existing system entirely. But several practical changes deserve consideration.
First, university admissions should gradually incorporate long-term project work or research experience into evaluation criteria. Without changes to admissions, classroom practices are unlikely to change.
Second, high school curricula should expand AI-based project learning. This should go beyond basic tool use to include working with AI systems in open-ended tasks, where students must interpret outputs and make decisions based on them.
Third, secondary education should emphasize understanding the limits of AI systems. As these tools produce increasingly plausible answers, students need to know how and why they fail—so they can identify weaknesses, challenge underlying assumptions, and refine outputs rather than accept them at face value.
Korea’s memorization-centered system was once highly efficient. During industrialization, it helped produce large numbers of capable workers in a short period of time. But in an era defined by intelligent systems that generate answers automatically and with authority, book learning alone is no longer enough.
Competitive advantage in the age of AI will depend less on how much individuals know and more on how well they understand these systems’ limits, question their outputs, identify weaknesses, and refine results to create value under uncertainty.
What Korea’s education system needs today is not more practice in getting answers right, but more opportunities to get answers wrong, learn from those mistakes, and improve.
