Australia launches new online safety standards

With the announcement of new draft standards, Australia’s online safety regulator appears to have headed off a potential confrontation with Apple over its encrypted messaging service iMessage. The regulator says the new rules, aimed at combating terrorist content and child sexual abuse material, will not compromise end-to-end encryption. Apple has been a vocal critic of the proposed standards.

In June, eSafety commissioner Julie Inman Grant rejected two industry-drafted regulatory codes because they did not require cloud storage, email, or encrypted messaging services to detect known child abuse material. The regulator instead began drafting mandatory standards of its own, which were published in draft form on Monday.

The draft standards would compel operators of cloud and messaging services to detect and remove known child abuse material and pro-terror material “where technically feasible,” and to disrupt and deter the creation of new material of the same kind.

In announcing that detection would only be required where technically feasible, eSafety emphasized that it “does not advocate building in weaknesses or back doors to undermine privacy and security on end-to-end encrypted services.”

“eSafety is not requiring companies to break end-to-end encryption through these standards,” Inman Grant said, “nor do we expect companies to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services.”

“But operating an end-to-end encrypted service does not absolve companies of responsibility and cannot serve as a free pass to do nothing about these criminal acts,” she said.

By tying the requirements to what is “technically feasible,” the regulator may avoid a fight similar to the one Apple had with the United Kingdom government earlier this year.

Apple, along with other providers of encrypted communications apps, threatened to withdraw iMessage from the UK if message-scanning requirements were included in the country’s online safety rules. The UK government ultimately shelved the proposals in September, saying content scanning would only be required where “technically feasible.”

The commissioner pointed to hashing as one technically feasible approach: it assigns known content a unique value that can be stored in a database and matched against new uploads.
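As a rough illustration of the mechanism Inman Grant is describing, the sketch below checks an upload’s fingerprint against a database of known material. It is a minimal sketch only: the digest, function names, and database are invented placeholders, and production systems typically rely on perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, rather than the plain cryptographic digest used here.

```python
import hashlib

# Hypothetical database of fingerprints of known, already-verified material.
# The digest below is a made-up placeholder, not a real entry.
KNOWN_CONTENT_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(data: bytes) -> str:
    """Assign content a unique value: here, its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

def is_known_content(data: bytes) -> bool:
    """Check an upload's fingerprint against the database of known content."""
    return fingerprint(data) in KNOWN_CONTENT_HASHES

# Example: a service screening a file before storing or forwarding it.
upload = b"...file bytes..."
if is_known_content(upload):
    print("Match found: remove the file and report it.")
```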

Inman Grant cited Meta, the parent company of Facebook, Instagram, and WhatsApp, as an example of a company that deploys hashing technology across its platforms to identify known content. In 2022, Meta submitted 27 million reports of child sexual exploitation and abuse to the National Center for Missing and Exploited Children; Apple submitted just 234.

Where detection is deemed not technically feasible, eSafety says the standard will require alternative measures, including clear and identifiable user reporting mechanisms and the detection of patterns in user behavior. Reviewing encrypted communications is not among these requirements.

The draft standards are open for public comment until 21 December, and are scheduled to become law in April next year.

Samantha Floreani, program lead at Digital Rights Watch, said the organization remains concerned about the detection approaches alluded to in the draft standards.

“Such approaches have been widely criticized by privacy and security researchers for their questionable effectiveness, risk of false positives, increased vulnerabilities to security threats, and the ability to expand the use of such systems to police other categories of content,” she said.

In her view, introducing such standards would put consumers’ digital security at risk.

The draft standards also include provisions aimed at companies using generative artificial intelligence, intended to prevent AI from producing pro-terror content or child sexual exploitation material.

They require companies to use word lists, hashes, or other technology to detect and block the generation of such content, and to warn users who enter prompts associated with child sexual abuse material about the risks and illegality of doing so.
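A hedged sketch of what such a safeguard might look like in practice follows. The term list, warning text, and function name are hypothetical, and a naive substring check stands in for the curated lists, hashes, and classifiers a real deployment would use.

```python
# Minimal sketch: screen a user's prompt against a list of flagged terms
# before it reaches a generative model. All names and strings here are
# invented placeholders for illustration.
FLAGGED_TERMS = {"flagged phrase one", "flagged phrase two"}

WARNING = (
    "Warning: your prompt contains terms associated with illegal material. "
    "Creating or seeking such content is a criminal offense."
)

def screen_prompt(prompt: str) -> str | None:
    """Return a warning to show the user, or None if the prompt may proceed."""
    lowered = prompt.lower()
    if any(term in lowered for term in FLAGGED_TERMS):
        return WARNING
    return None

# Example: refuse generation and surface the warning instead.
warning = screen_prompt("an example user prompt")
if warning:
    print(warning)
else:
    print("Prompt passed screening; forwarding to the model.")
```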
