Australia’s under-16 social media ban: A global test case for digital regulation

Australia’s first-of-its-kind social media ban for children under the age of 16 took effect on December 10, 2025. This nationwide law represents a landmark shift in digital regulation and youth protection. Whether it marks the beginning of a broader international trend or instead exposes the practical and ethical limits of age-based social media bans remains to be seen.

Enacted as the Online Safety Amendment (Social Media Minimum Age) Act 2024, which amends the Online Safety Act 2021, the legislation requires companies that operate age-restricted social media platforms to take “reasonable steps” to ensure that Australians under 16 cannot create or maintain accounts. While the Act defines “age-restricted social media platform” by reference to a platform’s purpose rather than by name, platforms understood to fall within its scope include Facebook, Instagram, TikTok, Snapchat, X (formerly Twitter), Reddit, and YouTube. Notably, the legislation carves out exceptions for platforms whose primary purpose is education or health, as well as for messaging applications and online gaming services.

To satisfy the “reasonable steps” requirement, covered platforms must prevent Australians under the age of 16 from creating new accounts. Importantly, the Act does not exempt existing accounts; platforms are therefore expected to identify and remove accounts already held by Australian users under 16 years old.

This blanket prohibition is accompanied by significant compliance pressure. While the legislative intent is clear, the Act does not mandate specific technologies or verification methods that will qualify as “reasonable.” Instead, compliance measures will depend on factors such as the size, nature, and risk profile of each platform, along with evolving guidance from the eSafety Commissioner. In the interim, companies face civil penalties of up to AU$49.5 million for systemic failures to take reasonable steps, all while navigating an uncertain and rapidly developing regulatory environment.

Supporters of the law liken social media regulation to other age-based restrictions, such as those governing driving or alcohol consumption, arguing that such measures are necessary to protect young people during critical periods of emotional and cognitive development. At the core of this rationale is the concern that algorithm-driven feeds, addictive design features, and exposure to harmful content can negatively affect children’s mental health, self-esteem, and overall well-being.

Critics, however, point to substantial challenges facing both the implementation and effectiveness of the Act. The absence of prescribed compliance methods creates uncertainty for platforms, while underage users may attempt to circumvent restrictions through false birthdates, shared accounts, or VPN usage. Additionally, opponents argue that the law may discourage transparency and digital literacy, driving young users toward less regulated, and possibly more harmful, online spaces.

In contrast, the scope of federal protections for children in the United States remains largely limited to the Children’s Online Privacy Protection Act (COPPA), which applies to children under 13 and focuses primarily on data collection and parental consent rather than access restrictions. Against this backdrop, Australia’s reform is likely to influence global discussions on digital governance and child protection, serving as a case study in how far governments can, or should, go in regulating minors’ online lives.

If you have questions about your company’s compliance with cyber regulations, concerns about vulnerability to cyberattacks or data breaches, or would like to learn more about proactive cybersecurity defense, please contact a member of our national data privacy and cybersecurity team.
