The Department of Education Shouldn’t Treat Human in the Loop as a Silver Bullet for AI

July 19, 2023

In May 2023, the Department of Education released a report, “Artificial Intelligence and the Future of Teaching and Learning,” to guide policymakers at the federal, state, and district levels in creating education-focused AI policies. While the report makes several good recommendations for how to identify and prioritize AI applications in education, it falls short with its misguided call for human oversight of all of these applications.

The report makes seven recommendations to education leaders: (1) adopt “humans in the loop” as a key criterion for using AI for education; (2) choose AI use cases that best serve educational priorities; (3) incorporate modern pedagogical approaches in AI applications; (4) foster dialogue about AI with all educational stakeholders; (5) prepare teachers to use AI-enabled technologies; (6) focus R&D on addressing the needs of different types of learners; and (7) develop education-specific guardrails for AI.

However, the report’s first and self-described “central recommendation”—encouraging humans in the loop to be the default for all education-related applications—is the most problematic. The guidance states that “teachers, learners, and others need to retain their agency to decide what patterns mean and to choose courses of action” and envisions a “technology-enhanced future more like an electric bike and less like robot vacuums. On an electric bike, the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by a complementary technological enhancement. Robot vacuums do their job, freeing the human from involvement or oversight.”

But the report’s analogy reflects a misunderstanding of what the human-in-the-loop approach is supposed to be. As Stanford’s Human-Centered AI Institute itself explains, the approach is about incorporating useful, meaningful human interaction into a system. In the case of a robot vacuum, humans are involved in the initial setup and oversight: they decide where to use the vacuum and what type of cleaning it will do. But having humans involved in the cleaning process itself would be counterproductive. Similarly, many aspects of education, particularly remedial learning and after-school work, require some initial human interaction but would then benefit from operating autonomously. For example, if humans must be involved in every aspect of an AI-powered after-school tutor, students who have no parent or guardian available to help with homework could be left to fend for themselves.

Indeed, defaulting to always having a human involved can foreclose opportunities for AI to improve equity. For example, in 2018 the Boston public school system proposed using an algorithmic system to improve school busing. Using an algorithm developed at the Massachusetts Institute of Technology (MIT), the district optimized bus routes and addressed a longstanding equity problem: under the previous human-designed system, poorer and minority students disproportionately shouldered earlier start times, whereas the algorithm distributed advantageous start times equitably across major racial groups. Despite this obvious improvement over a system plagued by human bias, public pushback against a purportedly opaque algorithm ultimately led the district to scrap its plans.

While a careful approach is important for addressing the potential risks of AI-enabled education technologies, a stance that downplays the benefits of removing humans from certain processes may inadvertently hinder progress and deny students the advantages of these tools and systems. Policymakers can strike a better balance between risk mitigation and innovation, and unlock the full potential of AI in K–12 education, if they do not default to human-in-the-loop approaches.
