Australia has launched a major investigation into global tech giants including Meta, TikTok, Snapchat and Google, amid allegations that they have failed to comply with the country’s strict ban on social media use by children under the age of 16.
The probe, led by Australia’s eSafety Commissioner, comes just months after the world-first legislation came into force in December 2025, requiring platforms to prevent underage users from creating or maintaining accounts. The law applies to major platforms such as Facebook, Instagram, TikTok, Snapchat and YouTube, placing the responsibility on companies to enforce age restrictions or face heavy penalties.
Government officials have accused the companies of not doing enough to implement effective safeguards. Communications Minister Anika Wells said the findings so far were “unacceptable” and warned that firms must follow Australian law if they want to continue operating in the country.
A recent survey of around 900 parents has raised serious concerns about the effectiveness of the ban. It found that about 31% of children still had access to social media accounts after the restrictions were introduced, down from 49% before the law took effect. More strikingly, nearly 70% of under-16 users who previously held accounts on platforms like Instagram, Snapchat and TikTok were still able to maintain access despite the ban.
Regulators say weaknesses in age-verification systems are at the heart of the problem. Technologies such as facial age estimation have been described as inadequate, and some platforms allegedly allow users to repeat verification attempts until they succeed. Authorities have also pointed to “lax guardrails” that make it easy for minors to bypass restrictions or rejoin after being blocked.
The eSafety Commissioner has indicated a shift towards stricter enforcement, with potential legal action on the table. Companies found in breach of the law could face fines of up to A$49.5 million per violation.
While some tech firms claim to have removed millions of underage accounts since the law came into effect, regulators argue that these efforts have not gone far enough. Reports suggest that many children have been able to create new accounts or bypass verification checks altogether, undermining the intent of the legislation.
The investigation is being closely watched globally, as Australia’s approach is seen as a test case for stricter regulation of social media platforms and their responsibility towards child safety. The outcome could influence similar laws in other countries and reshape how tech companies design and enforce age restrictions on their platforms.