
Measuring Digital Literacy Gaps Is the First Step to Closing Them

April 26, 2024

Digital literacy—the ability to use the Internet for work, education, and communication—is now a necessary skill on par with the ability to read or write. And yet we have no clear system for measuring this type of literacy, nor a comprehensive dataset that tells us where the U.S. population stands. Instead, there’s a piecemeal landscape of data measuring various aspects or interpretations of digital literacy, and studies often cover only particular groups rather than the population at large. Without consensus on how to measure digital literacy rates, we have no clear way of taking a data-driven approach to the problem—which is necessary if we want to solve it.

Examining existing digital literacy research can help point us toward that consensus. These studies usually take one of three approaches: practical exams, self-reported skills assessments, and knowledge testing.

The 2019 OECD data assessing digital literacy as “proficiency in problem-solving in technology-rich environments” is a good example of the first. Participants are asked to complete online tasks of increasing complexity, such as sending an email or troubleshooting bugs in an unfamiliar application, and are divided into skill groups based on their performance. These “grades,” therefore, simultaneously assess abilities—reasoning, comfort with new platforms, and problem-solving—that are distinct from the core question of digital literacy: the specific ability to use technology.

EveryoneOn, meanwhile, surveyed low-income U.S. households to assess their digital literacy through respondents’ self-reported comfort levels with digital tasks like creating a resume or applying for a government program. The self-reported survey approach is popular among institutions that measure digital skills: many U.S. states, for example, have their own surveys assessing their populations’ online competency. Though practical, self-reported assessments produce less standardized data.

Finally, there is basic knowledge testing. Pew Research Center, for example, published a survey assessing the population’s knowledge of certain Internet-relevant information like cybersecurity norms. However, knowledge evaluation doesn’t account for all online skills. Someone who knows the theory behind two-factor authentication but struggles to send an email isn’t digitally literate.

Some of the wide variation in these approaches can probably be attributed to the broadness of the term “digital literacy.” The most current definition, from the National Digital Inclusion Alliance, covers the ability to access information online, communicate through platforms like email or Facebook, and complete other now-everyday tasks like online banking or attending virtual classes. Since the point of measuring digital literacy rates is largely to address gaps, it is important to identify the reasons behind digital illiteracy—for example, underdeveloped problem-solving skills versus unfamiliarity with a particular program—because each would call for a different intervention. From that perspective, a largely outcome-based definition leaves room for ambiguity.

Researchers also need to contend with the fact that digital skills themselves, no matter how narrowly defined, can be difficult to measure. Any attempt to measure practical and not always outwardly discernible skills—such as comfort with or understanding of a particular process—often relies on self-assessment or otherwise self-reported data. That opens digital literacy studies up to a measurement problem: different people’s evaluations of their own competency at the same task may reflect different understandings of what the standard for that competency should be. In other words, people don’t always know what they don’t know.

There are some broad frameworks meant to universalize these standards, but they’re largely targeted to specific populations. The International Society for Technology in Education, for example, has a framework for digital technology usage in classrooms that articulates appropriate benchmarks for students, such as setting learning goals that use technology and collaborating online with learners from other cultures. But these benchmarks fit within the context of a broader educational system in which students are consistently observed for long periods of time; no equivalent exists for the general population. There needs to be a universal “standardized test” for digital literacy.

One potential instrument for this type of data collection may already exist. Digital skills institutions—either standalone organizations or initiatives within broader ones—have cropped up across the country to provide personalized digital skills training, help navigating the Internet or connected devices, and on-site tech assistance. These organizations often use entry and exit interviews to assess users’ proficiency with technology and their digital skills gaps. So finding some way to collect and standardize this rich trove of on-the-ground data may be a natural solution.

However, there are two problems with this approach. First, digital skills institutions already suffer from a lack of funding and scalability. From a practical standpoint, it will be difficult to distribute the necessary materials to each relevant institution, and to funnel their data up to top-level organizations for analysis, without overburdening staff who are already stretched thin.

Second, some of the success of more local-level digital skills institutions stems from their ability to interpret and teach digital literacy in a way that resonates with their target populations. It will be difficult to standardize questionnaires in a way that preserves that flexibility while still allowing for comparison across programs.

The one clear takeaway from all of this is that digital literacy needs to be better studied and better understood. Simply connecting to the Internet is no longer enough. To really benefit from the information age, populations need the ability to navigate the Internet and use connected devices with some baseline level of skill. To equip our population with essential twenty-first-century skills, we need to standardize that baseline and figure out how to assess whether and when it’s met.
