---
title: "Calling Timeout on Social Media Time Limit Policies "
summary: |-
  Virginia’s one-hour social media limit for minors is a misguided policy that undermines parental authority, raises constitutional concerns, and fails to effectively address the real drivers of youth online harm.
date: "2026-04-08"
issues: ["Internet", "State and Local", "Public Safety"]
authors: ["Alex Ambrose"]
content_type: "Commentary"
canonical_url: "https://itif.org/publications/2026/04/08/calling-timeout-on-social-media-time-limit-policies/"
---

# Calling Timeout on Social Media Time Limit Policies 

Virginia’s [latest social media law](https://law.lis.virginia.gov/vacode/title59.1/chapter53/section59.1-577.1/), which limits children under 16 to one hour of social media use per app per day, is a misguided attempt to protect young people online. By imposing a government-set time cap, the law substitutes a blunt mandate for more targeted approaches. It undermines parental authority, creates new compliance burdens, and expands reliance on age verification, all while distracting from more effective ways to keep young people safe online. Policymakers should reject this model in favor of solutions that empower parents and target specific, actual harms to children.

Virginia’s recently enacted [SB 854](https://lis.virginia.gov/bill-details/20251/SB854/text/SB854ER) illustrates the shortcomings of this approach. The law amends the Virginia Consumer Data Protection Act to limit social media use to one hour per app per day for users under 16, although parents can raise or lower the time limit. Allowing parents to set screen-time limits and manage access for their children is a reasonable goal, but since [most major platforms already offer](https://consumer.ftc.gov/articles/how-use-parental-controls-keep-your-kid-safer-online) this feature, it is unclear what the law accomplishes.

Supporters might argue that it at least sets a default one-hour limit for children, but in practice the limit will have little effect. The law caps use at one hour per app, not one hour across all social media, so a teen using X, Instagram, Snapchat, and YouTube can still spend four hours online. Nor does the measure keep youth off social media when they can simply switch apps or log in to a different account to extend their session. Parents are much better positioned than governments to make individual judgment calls about their children’s use of social media.

Furthermore, one-size-fits-all policies that treat every young user the same are ineffective. Children do not reach particular cognitive, emotional, and social thresholds simply because they hit a certain birthday, and the types of content children should have access to, and for how long, vary significantly from child to child. Just as all children are different, all social media platforms are different. Social media includes plenty of educational content alongside everything else, from opportunities to speak with scientific experts on X and Twitch to science and technology channels on YouTube. Treating such a wide range of content as uniformly harmful ignores that diversity.

This time limit requirement also [raises First Amendment concerns](https://itif.org/events/2024/05/16/social-media-and-first-amendment/). Enforcing that limit requires platforms to verify the age of all users—children and adults alike—to determine who falls within the law’s scope. Social media platforms host a broad range of socially and politically valuable speech that both adults and minors have a constitutional right to access. While protecting children is a legitimate government interest, [courts have repeatedly found](https://truthonthemarket.com/2026/03/04/coppa-age-verification-and-the-ftcs-enforcement-end-run/) that broad age verification requirements are not the least restrictive means of achieving that goal. Virginia’s law fits squarely within this line of cases.

That constitutional concern is not theoretical. Although the law took effect on January 1, 2026, a lawsuit—*[NetChoice v. Jones](https://netchoice.org/netchoice-v-miyares-virginia/)*—has delayed enforcement. The lawsuit, filed by the technology trade association NetChoice, argues that the law violates the First Amendment by restricting access to free speech online, echoing previous lawsuits. For example, [Utah passed legislation](https://le.utah.gov/~2023/bills/static/SB0152.html) in 2023 requiring age verification for social media platforms and imposing a nighttime “curfew” for users under 18. Courts halted enforcement of that law as well, citing [similar constitutional concerns](https://netchoice.org/netchoice-v-reyes/).

Time-limit policies stem from a flawed but prevailing narrative that social media is inherently addictive, particularly for young users, and that youth therefore need strict limits on their time online. But the research [does not support](https://itif.org/publications/2026/02/19/the-flawed-narrative-driving-tech-bans-for-kids/) such sweeping conclusions. While excessive or problematic use can certainly harm some individuals, much of the existing research relies on correlational or self-reported screen-time data, which makes it difficult to draw clear causal links between usage duration and mental health outcomes. [Research](https://digitalwellnesslab.org/cimaid/) published in 2024 by the Digital Wellness Lab at Boston Children’s Hospital and Harvard Medical School finds that excessive time spent playing video games, using social media, and using smartphones is typically a coping mechanism for psychological issues rather than the cause of them.

As a result, the guidance about screen time continues to change. The American Academy of Pediatrics recently updated and lowered its [youth screen time recommendations](https://publications.aap.org/pediatrics/article/doi/10.1542/peds.2025-075320/206129/Digital-Ecosystems-Children-and-Adolescents-Policy). This addiction narrative likewise falls short in court. For example, [in the Utah case](https://www.axios.com/local/salt-lake-city/2024/09/11/utah-meta-tiktok-google-lawsuit-age-restrictions-injunction-netchoice), the ruling states, “[Utah officials] have not provided evidence establishing a clear, causal relationship between minors' social media use and negative mental health impacts.”

Blunt time limit mandates do not meaningfully improve youth safety online. A more productive approach would be for policymakers to focus on helping parents understand and use the tools already available to them. Complicating that existing process for parents and muddying compliance for platforms is the wrong approach to protecting youth from social media harms.
