Age Gating Won’t Fix Social Media Harms in Canada
As Canada considers joining a growing list of countries moving to ban social media for teenagers, lawmakers have framed the debate around limiting access to online spaces rather than improving conditions within them. The evidence suggests this focus is misplaced: Harm is not driven by access alone, but by specific experiences. A blanket social media ban does not address those dynamics; it just removes some users from view while leaving the underlying sources of harm unchanged.
The evidence doesn’t support the premise behind a ban. Studies often cited to justify restricting access to social media don’t show a clean relationship between time spent and harm. A large longitudinal study from the University of Manchester tracking 25,000 adolescents found no evidence that time spent on social media predicts emotional or behavioural struggles. Reviews from the American Psychological Association reach the same conclusion: Social media can be both harmful and beneficial to children. Outcomes depend on what content users encounter, the design of the platforms, and how young people engage with them.
Canadian evidence points the same way. Mental Health Research Canada finds that specific online experiences, not general access to social media, are linked to negative mental health outcomes. Cybervictimization correlates with sharp increases in distress and suicidal ideation. Persistent social comparison correlates with worse psychological outcomes, whereas positive relationships and resilience dampen these effects.
Ottawa’s current policy debate misses this nuance. The problem is not that young people use social media; it is what they encounter there. A ban does not engage with that reality; it operates on the assumption that less access automatically means less harm. That logic is too shallow to guide policy.
This distinction will determine whether policy actually reduces harm in Canada or merely reduces its visibility. The drivers of harm will not disappear when access is restricted. They will simply move, reappearing on adjacent platforms, in private channels, and across harder-to-observe spaces.
There is no “Harassment App™” policymakers can ban and call the problem solved. The primary issues facing children on social media—social comparison, bullying, child sexual abuse, and exposure to age-inappropriate content—predate the Internet itself. Historically, virtually every major media innovation has triggered a moral panic resembling today’s debate over children’s access to social media.
Banning social media will not change the rates of depression, anxiety, or other mental health challenges among teenagers. If only it were that easy.
Blanket restrictions also assume a level of compliance that does not exist. Age limits are easy to route around. Teens can use shared accounts, misstate their age, or move to adjacent platforms, pushing risk into spaces that are harder for parents, platforms, and policymakers to monitor or influence.
And bans weaken the incentives needed to improve platform safety, because the users most at risk are no longer officially on the platform. If young users are excluded, platforms face less pressure to design safer experiences for them. Responsibility moves away from system design and toward user exclusion.
The same flaw appears in more invasive proposals. Age verification systems based on identity checks or biometric data promise precision but deliver new problems. They create privacy risks, impose compliance costs, and remain easy to circumvent. The bigger problem is that they are built on the same premise as bans: They assume the central policy challenge is deciding who may participate rather than how participation is structured.
The alternative isn’t inaction; it is regulation at the level where harm actually occurs—through platform design, content exposure, and user interactions—while preserving flexibility for families and users.
One practical approach is a device-level child flag system. Instead of requiring users to prove their age through intrusive verification, operating systems could allow parents to signal that an account belongs to a minor. Platforms would then be required to recognize that signal and apply appropriate safeguards, which could include safer defaults, limits on features, and restrictions on age-inappropriate content. This system would not attempt to determine age perfectly; instead, it would ensure that platforms respond differently to accounts flagged as belonging to minors.
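The mechanics are simple enough to sketch. Below is a minimal illustration, in TypeScript, of how a platform might consume such a signal. Everything in it is hypothetical: the header name, the specific settings, and the flag itself are illustrative assumptions, since no such standard exists today.

```typescript
// Hypothetical sketch only: no OS-level minor flag is standardized today.
// Assumes the operating system exposes a parent-set flag that reaches the
// platform as a request header (here called "x-device-minor-flag").

interface AccountSettings {
  directMessages: "everyone" | "contacts-only" | "off";
  algorithmicFeed: boolean;
  matureContentFilter: "off" | "standard" | "strict";
  discoverableInSearch: boolean;
}

// Default settings for an ordinary adult account.
const ADULT_DEFAULTS: AccountSettings = {
  directMessages: "everyone",
  algorithmicFeed: true,
  matureContentFilter: "standard",
  discoverableInSearch: true,
};

// Safer defaults a platform might be required to apply when the
// device-level flag indicates the account belongs to a minor.
const MINOR_DEFAULTS: AccountSettings = {
  directMessages: "contacts-only",
  algorithmicFeed: false,
  matureContentFilter: "strict",
  discoverableInSearch: false,
};

// Reads the hypothetical flag from incoming request headers and returns
// the settings the platform should start the session with. The platform
// never learns the user's identity or exact age: only the parent's signal.
function settingsForSession(headers: Map<string, string>): AccountSettings {
  const flag = headers.get("x-device-minor-flag");
  return flag === "1" ? { ...MINOR_DEFAULTS } : { ...ADULT_DEFAULTS };
}

// Example: a request arriving from a device whose parent enabled the flag.
const headers = new Map([["x-device-minor-flag", "1"]]);
console.log(settingsForSession(headers));
// -> strict filters, no algorithmic feed, contacts-only messaging
```

The design point is that the sensitive fact, that an account belongs to a minor, travels as a single parent-controlled bit, while the safety logic stays on the platform side, where regulators can audit it.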
Crucially, this model allows for variation. Young people develop at different rates, and families have different expectations and risk thresholds. A uniform ban overrides those differences. A signalling-based system would allow them to be managed.
More broadly, effective policy should concentrate on addressing specific harms. It should support parental control tools that are usable and consistent across platforms. It should prioritize transparency where it improves accountability, not impose sweeping mandates that create new risks. The objective should not be to eliminate social media from young people’s lives, but instead to make those environments safer in ways that reflect how harm actually occurs.
The momentum behind these proposals is not accidental. Governments across Europe and Australia are moving in the same direction, driven by knee-jerk reactions and public pressure rather than by evidence. Other countries may be converging on responses that are easy to explain, not ones designed to work, but Canada does not have to follow the trend. Policy diffusion is not validation.
Ottawa has a habit of choosing policies that are easy to announce and difficult to justify. A youth social media ban would fit that pattern, offering a clear story: “remove the source of harm.” But for anyone willing to read past the headlines, the evidence is clear and points elsewhere. Harm is driven not by access, but by how these systems operate. A blanket ban would do little to address the root causes of harm and risks pushing young people into less visible online environments. A serious policy response should start there.
