Australia’s internet regulator has accused the world’s largest social media companies of failing to adequately implement the country’s prohibition on under-16s accessing their platforms, despite the laws taking effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting practices such as permitting blocked users to make repeated attempts at age verification and insufficient safeguards against the creation of new underage accounts. In its first compliance report since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, cautioning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Exposed in First Major Review
Australia’s eSafety Commissioner has documented a worrying pattern of non-compliance amongst the world’s most prominent social media platforms in her inaugural review since the ban took effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively failed to implement appropriate safeguards to stop minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification, noting that some platforms have permitted children who originally declared themselves to be under 16 to later claim they were older, undermining the law’s intent.
The findings represent a notable intensification of regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has made clear that compliance will not be judged simply on whether some children still hold accounts; platforms must instead provide concrete evidence that they have put in place comprehensive systems and procedures designed to prevent under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. Among the shortcomings identified in the report:
- Permitting previously blocked users to re-verify their age and regain account access
- Allowing repeated attempts at the same age assurance method with no repercussions
- Insufficient mechanisms to block new under-16 accounts from being opened
- Inadequate complaint mechanisms for parents and the general public
- Absence of clear information about compliance actions and account deletions
The Magnitude of the Issue
The sheer scale of social media use amongst young Australians underscores the compliance challenge confronting both the authorities and the platforms in question. With millions of accounts already removed or restricted since the ban came into force, the figures point to widespread underage account holding when the law took effect. The eSafety Commissioner’s findings indicate that the operational and technical barriers to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false ones. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are fit for purpose.
Beyond the operational challenges lies a wider question about companies’ readiness to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing data protection concerns and the genuine difficulty of verifying age online. However, the regulatory report suggests that some platforms may not be investing adequately in the systems the law requires. The move to active enforcement represents a critical juncture: either platforms significantly enhance their compliance infrastructure, or they risk substantial fines that could reshape their business models in Australia and possibly influence compliance frameworks internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian regulators reported that 4.7 million accounts had been restricted or removed. Whilst this number initially appeared to signal regulatory success, subsequent analysis reveals a more layered picture. The sheer volume of account removals implies that many under-16s had managed to establish accounts in the first place, revealing that preventative measures were lacking. Additionally, the data casts doubt on whether the removed accounts represent genuine enforcement or merely users voluntarily deleting their own accounts in response to the new restrictions.
The limited transparency around these figures has frustrated independent observers attempting to evaluate the ban’s true effectiveness. Platforms have disclosed little data about their enforcement methodologies, success rates, or the profile of suspended accounts. This lack of clarity makes it hard for regulators and the wider public to judge whether the ban is working as intended or whether younger users are simply finding alternative routes onto social media. The Commissioner’s insistence on detailed evidence of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to share complete data.
Sector Reaction and Pushback
The major tech platforms have responded to the enforcement push with a mixture of assurances of compliance and doubts about the ban’s practicality. Meta, which operates Facebook and Instagram, stressed its commitment to complying with Australian law whilst arguing that precise age verification remains a major challenge across the industry. The company has called for a different approach, suggesting that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This position reflects wider industry concern that the existing regulatory framework places an impractical burden on individual platforms.
Snap, the company behind Snapchat, has taken a more proactive public stance, stating that it has suspended 450,000 accounts since the ban took effect and that it continues to lock more each day. However, sector analysts question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which have historically relied on maximising user engagement and growth, and the statutory obligation to systematically exclude an entire age demographic remains unresolved. Companies have consistently opposed rigorous age verification methods, pointing to privacy issues and technical constraints, creating a standoff between authorities and platforms over who bears responsibility for implementation.
- Meta contends age verification ought to take place at app store level instead of on individual platforms
- Snap says it has locked 450,000 user accounts since the ban took effect in December
- Industry groups highlight privacy concerns and technical obstacles as impediments to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Impact
As Australia’s under-16 social media ban enters its enforcement phase, key questions persist about whether the law will achieve its stated objectives or merely drive young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that, despite months of implementation, significant loopholes remain: children continue to find ways around age verification systems, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply migrate to alternative services and encrypted messaging apps, or use VPNs to conceal their location and evade age checks.
The ban’s international ramifications add further complexity to any assessment of its success. Countries such as the United Kingdom, Canada, and various European states are watching Australia’s experiment closely as they consider similar laws for their own populations. If the ban proves ineffective at reducing children’s digital engagement or fails to protect them from harmful online content, it could weaken the case for similar measures elsewhere. Conversely, if enforcement becomes robust enough to genuinely restrict underage usage, it may embolden other nations to adopt comparable measures. The outcome will likely shape global regulatory trends for the foreseeable future, ensuring that Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health advocates and child safety organisations have backed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators contend that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational content, and engaging with online communities around shared interests. The regulatory framework assumes the harm outweighs the benefit, a calculation that some young people and their families dispute.
The ban’s real-world effects go beyond individual users to touch content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally favours large technology companies with the resources to build age verification infrastructure, potentially reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the straightforward goal of child protection.
What Comes Next for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a notable shift from passive oversight to direct intervention, marking a pivotal moment in the enforcement of the youth access prohibition. The watchdog will now gather information to establish whether platforms have failed to take “reasonable steps” to prevent underage access, a regulatory test that goes beyond simply noting that young people remain on these services. This approach requires demonstrable proof that platforms have established appropriate systems and processes designed to exclude minors. The Commissioner’s office has indicated it will conduct its enquiries methodically, building cases that could trigger substantial penalties for non-compliance. This shift from monitoring to enforcement reflects growing frustration with the platforms’ existing measures and signals that voluntary cooperation alone is insufficient.
The enforcement phase raises significant questions about the adequacy of penalties and the practical mechanisms for holding tech giants accountable. Australia’s regulatory framework provides the enforcement tools, but their success hinges on the eSafety Commissioner’s readiness to pursue formal proceedings and the platforms’ capacity to respond meaningfully. Regulators elsewhere, particularly in the UK and EU, will closely track Australia’s implementation tactics and results. A successful enforcement campaign could establish a model for other countries considering equivalent prohibitions, whilst failures might undermine the broader regulatory model. The coming period will be critical in determining whether Australia’s pioneering approach produces real safeguards for young people or proves largely performative in its impact.
