Social Media Ban for Children Is a Step Backward for Australia

November 19, 2024

As jurisdictions around the world grapple with how to protect children from potential online harms, the Australian government announced on November 8, 2024, its intention to introduce legislation this year banning users under 16 from social media platforms. As is the case with similar legislation at the state level in the United States, blocking an entire age group from social media is the equivalent of using a regulatory sledgehammer instead of a scalpel to address complex and evolving online safety issues. This approach fails to consider the benefits of social media for young people and the pitfalls of online age verification requirements.

Australia’s social media ban would be the strictest in the world. Prime Minister Anthony Albanese says responsibility for enforcing the age limit will lie with individual tech companies and platforms, which must “demonstrate they are taking reasonable steps to prevent access” for underage users. Thankfully, the legislation does include some platform exemptions for educational and informative content, such as YouTube Kids, but there will be no exemptions for children who have parental consent or who already have accounts. The country’s eSafety Commissioner will have enforcement power, though it is unclear exactly how it will enforce the ban.

There are multiple problems with this approach. Australia may require ID-based age verification, a process in which users hand over some form of government-issued identification to a platform to confirm they are over 16. This method would mean all users, not just teens, would have to give up their personal information to social media platforms. Though ID verification is the most accurate method of age verification, it is also the most invasive. Requiring users to turn over personal information in order to use a tool that is increasingly vital to social and political activism and to everyday communication and expression raises serious privacy and free speech concerns. Many users would likely be unwilling to hand over their personal information to access social media, particularly on platforms like X, Reddit, and many others that allow users to remain anonymous.

As an alternative, Albanese has raised the possibility of using biometric scanning technology, such as facial recognition, to verify social media users’ ages and identities. Biometric information includes data derived from a user’s physical characteristics, such as a facial scan or fingerprints, and behavioral characteristics, like voice recognition. This process would be more privacy-protective if online services used age estimation, which estimates a user’s age based on a facial scan, and were then required to delete those facial scans once verification is complete. This process would also be more inclusive of children, who typically lack a government-issued ID. However, it does not solve all the problems associated with banning social media for children.
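
To make the trade-off concrete, here is a minimal Python sketch of the age-estimation pattern described above. The estimate_age_from_image model call is a hypothetical placeholder, not any platform’s actual implementation; the key property it illustrates is that the facial scan is used only transiently and only a pass/fail result is retained.

```python
MINIMUM_AGE = 16  # threshold under Australia's proposed ban


def estimate_age_from_image(image_bytes: bytes) -> float:
    """Placeholder for a real age-estimation model (hypothetical)."""
    # A production service would run a trained computer-vision model here;
    # this stub exists only to illustrate the data flow.
    return 0.0


def verify_age(image_bytes: bytes) -> bool:
    """Return whether the user appears old enough, retaining no biometrics."""
    estimated_age = estimate_age_from_image(image_bytes)
    # Only the pass/fail outcome leaves this function: the scan is never
    # written to storage, and the local reference is dropped immediately.
    del image_bytes
    return estimated_age >= MINIMUM_AGE
```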

A total ban causes youth to lose out on the many benefits of social media. According to the American Psychological Association, which wrote in a May 2023 health advisory that “[u]sing social media is not inherently beneficial or harmful to young people,” some of the benefits of social media include social interaction, connection to peers in similar circumstances, promotion and reinforcement of positive attitudes and behaviors, and support for members of marginalized groups. In Australia, one teenager said social media is the only way she can communicate with loved ones who live in other countries, and that a ban would “mean losing a direct line to the most important person to her.”

Instead, a more balanced approach would continue to allow children to benefit from social media while giving them and their parents more control over their online experience. One alternative to age verification is a child flag system: device operating systems would create a “trustworthy child flag” for user accounts that signals to apps and websites that a user is underage, and apps and websites that serve age-restricted content would be required to check for this signal and block underage users from that content. Rather than relying on ID checks or biometric verification to determine whether to activate the child flag, this would be an opt-in process built into existing parental controls on devices, as sketched below. Parents could activate or disable the flag depending on their own values and the maturity of their children. Additionally, devices could default to certain parental controls recommended for children, with different settings recommended for different age groups, much like movie and video game rating systems.
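
For illustration, the sketch below shows how such a child flag might work in practice. The DeviceSettings object and the can_show_restricted_content check are hypothetical, not an existing operating system API; the point is that an app serving age-restricted content only ever receives a yes/no signal set through parental controls, never an ID or a biometric.

```python
from dataclasses import dataclass


@dataclass
class DeviceSettings:
    """Hypothetical parental-control state exposed by a device's operating system."""
    child_flag: bool = False  # opt-in; parents toggle this in device settings


def can_show_restricted_content(settings: DeviceSettings) -> bool:
    """An app serving age-restricted content checks the flag and nothing else."""
    # No ID, birthdate, or biometric data changes hands; the app only learns
    # whether the account has been marked as a child's.
    return not settings.child_flag


# Example: a parent has enabled the child flag on their teen's device.
teen_device = DeviceSettings(child_flag=True)
adult_device = DeviceSettings()  # default: flag off, adults browse as usual

assert can_show_restricted_content(teen_device) is False
assert can_show_restricted_content(adult_device) is True
```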

Because this approach does not require anyone to disclose or verify their identity, it does not create the privacy risks posed by forcing users to share their government IDs. It is also a low-impact approach, allowing adults to continue using the Internet as they do today. It would also alleviate some concerns Australians have expressed about the “potentially complicated, time-consuming and risky ramifications of requiring up to 40 different apps to enforce the legislation.” With this opt-in, largely voluntary system, users would not face the same disruptions caused by a blanket, age-gated ban.

Enacting a complete social media ban for all users under 16 would send Australia backward in time, to an age when online communication and community-building were much more difficult. There are easier, less drastic measures Australia can take to give children and their parents important safety tools and, importantly, a choice in how best to protect themselves online.
