Australia’s world-first social media “ban” has been in the global spotlight since its introduction in late 2025. As other jurisdictions look to follow suit, parents and tech giants alike continue to grapple with a key question: how will the ban be practically enforced?

Application of the “social media ban”

On 10 December 2025, the Online Safety Amendment (Social Media Minimum Age) Act 2024 (the Act) came into effect, requiring social media platforms to take “reasonable steps” to prevent individuals under the age of 16 from creating and keeping social media accounts.

The Act casts a wide net, applying to electronic services that:

  1. Have the sole or significant purpose of enabling online social interaction between 2 or more users;
  2. Allow end users to link to, or interact with, some or all of the other end users; and
  3. Allow end users to post materials on the service.

The regulator, eSafety, has formally announced that it considers 10 popular social media platforms to fall within the scope of the restrictions, including Facebook, Instagram, Snapchat, Threads, and YouTube. However, this list is non-exhaustive, and other platform operators must assess for themselves whether the ban applies to them.

The Online Safety (Age-Restricted Social Media Platforms) Rules 2025 (the Rules) set out exemptions to the ‘ban’, including electronic services with the “sole or primary purpose” of:

  1. messaging, emailing, or calling;
  2. enabling end users to play online games;
  3. professional networking or development; and
  4. supporting the education or health of users.

eSafety does not consider platforms like Discord, GitHub, Google Classroom, Messenger, Steam, Steam Chat, WhatsApp or YouTube Kids to fall within scope of the ban.

Neither children nor their parents will be penalised for circumventing the ban, but social media platforms may be fined up to AUD 49.5 million for systemic non-compliance.

Regulatory guidance for age verification

On 16 September 2025, the eSafety Commissioner published regulatory guidance to assist social media platforms with meeting their obligations under the Online Safety Act 2021 (Cth) (Regulatory Guidance), including what constitutes “reasonable steps”. The Regulatory Guidance draws on the results of the Australian Government’s Age Assurance Technology Trial, which independently assessed the range of technologies capable of verifying or estimating user age.

Key insights from the Regulatory Guidance include the following:

  • As expected, the Act does not mandate the use of any specific technology. However, the eSafety Commissioner, Julie Inman Grant, has confirmed that platforms are expected to take a “successive validation” or “waterfall” approach to age verification, in which less intrusive checks are applied first and inconclusive results are escalated to stronger ones, and to avoid relying on self-declaration as the sole or primary means of verifying age. The Act prohibits platforms from requiring “government identity materials” as the only means of age verification, but otherwise platforms have broad discretion in how they discharge their obligations. This could include the use of AI-driven models that assess age from facial scans, or analysis of users’ online behaviour;
  • Platforms are to prioritise the deactivation of active accounts belonging to under-16s and prevent those users from immediately creating a new account;
  • Measures must be proportionate to the risk profile of the relevant services. Platforms with a higher risk profile are expected to employ more robust measures, but there is no requirement to verify the age of every user;
  • Platforms must offer accessible, fair and timely review mechanisms for disputes over “adverse outcomes” of age assurance processes, reports of underage accounts, or account removals; and
  • Platforms are expected to continuously monitor, uplift and improve the integrity of their measures over time, which includes maintaining awareness of changes in circumvention methods, user behaviour, demographics, and community expectations of privacy, among other things.
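The “successive validation” or “waterfall” approach described above can be illustrated with a minimal sketch: cheaper, less intrusive checks run first, and only inconclusive results fall through to stronger ones. The check names, confidence values and thresholds below are purely illustrative assumptions — neither the Act nor the Regulatory Guidance prescribes any particular technology or pipeline.

```python
# Illustrative "waterfall" (successive validation) age-assurance pipeline.
# All check names, confidence scores and thresholds are hypothetical;
# the Act does not mandate any specific technology.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgeSignal:
    estimated_age: Optional[float]  # None means the check was inconclusive
    confidence: float               # 0.0 - 1.0, how much to trust the estimate

Check = Callable[[dict], AgeSignal]

def self_declaration(user: dict) -> AgeSignal:
    # Self-declared age cannot be the sole or primary basis under the
    # guidance, so it carries low confidence and merely seeds the waterfall.
    age = user.get("declared_age")
    return AgeSignal(age, 0.2 if age is not None else 0.0)

def behavioural_signals(user: dict) -> AgeSignal:
    # Hypothetical inference from on-platform behaviour.
    age = user.get("inferred_age")
    return AgeSignal(age, 0.6 if age is not None else 0.0)

def facial_estimation(user: dict) -> AgeSignal:
    # Hypothetical AI-based facial age estimate (with user consent).
    age = user.get("facial_estimate")
    return AgeSignal(age, 0.9 if age is not None else 0.0)

def waterfall(user: dict, checks: list[Check],
              min_confidence: float = 0.5, min_age: int = 16) -> str:
    """Run checks in order; stop at the first sufficiently confident signal."""
    for check in checks:
        signal = check(user)
        if signal.estimated_age is not None and signal.confidence >= min_confidence:
            return "allow" if signal.estimated_age >= min_age else "deactivate"
    # No confident signal: escalate to a stronger check or manual review
    # rather than silently allowing access.
    return "escalate"

checks = [self_declaration, behavioural_signals, facial_estimation]
print(waterfall({"declared_age": 25, "inferred_age": 14}, checks))  # "deactivate"
print(waterfall({"declared_age": 25}, checks))                      # "escalate"
```

Note that in this sketch a self-declared age of 25 is never decisive on its own: the low-confidence declaration falls through, and the behavioural estimate of 14 triggers deactivation in the first example, while the absence of any confident signal in the second routes the account to escalation rather than access.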

The Regulatory Guidance also specifies that measures will not be considered as “reasonable steps” if they:

  • rely entirely on self-declaration to determine the age of existing or prospective account holders;
  • rely on users holding an account for an unreasonable period of time before detection (such as if users are required to engage with a platform for an extended period of time to collect sufficient data to assess their age);
  • do not prevent age-restricted users whose accounts have been deactivated or removed from immediately reactivating or creating a new account and regaining access to the platform; and
  • result in substantial numbers of users who are not subject to the ban being removed or blocked.

As at mid-January 2026, more than 4.7 million social media accounts judged to be held by individuals under 16 had been deactivated, removed or restricted, according to the Australian Government.