The Australian Senate approved this Thursday the law that bans under-16s from social media and contemplates fines of up to 49.5 million Australian dollars (30.6 million euros) for platforms that systematically violate it. The text, which will come into force in twelve months and will affect the networks most used by young people, such as TikTok, Instagram and Facebook, passed the Senate with 34 votes in favor and 19 against, with support from the opposition, one day after being approved in the House of Representatives by 101 votes to 13. The text must return to the House for final approval, incorporating a series of amendments. Below are some of the keys to this measure.
Why has the Australian Government promoted the new law?
The Australian Prime Minister, Anthony Albanese, pointed out a week ago the harm suffered by minors, particularly girls, from damaging representations of body image and from the misogynistic content circulating on these platforms. “For too many young Australians, social media can be harmful. Almost two-thirds of Australians aged 14 to 17 have seen extremely harmful content online, such as drug use, suicide or self-harm,” said Communications Minister Michelle Rowland in Parliament.
What does the law entail?
As Labor senator Jenny McAllister explained this Thursday, the Government is seeking to ensure that digital platforms (Meta's Instagram and Facebook, TikTok and the like) prevent those under 16 from opening or holding an account on those social networks. Otherwise, they could face “intentionally large” fines of up to 30.6 million euros. The law seeks to protect minors from harassment and potential mental health problems, and to do so it introduces a new category, “age-restricted social media platforms,” on which those under 16 are barred both from creating new accounts and from keeping existing ones. The law does not impose sanctions on users or on the parents of minors who violate it. This is the highest age limit proposed by any country to date, and there will be no exemption for parental consent (even if parents authorize it, those under 16 will not be able to create an account). Despite the new regulation, minors will still have access to “low-risk” platforms such as Google Classroom or YouTube.
What system does Australia want to implement for age verification?
This has been precisely one of the most controversial points of the legislation. Australia is the first country to raise the legal age for creating accounts to 16 and to impose sanctions on companies that fail to comply, but, as in other parts of the world, enforcement is the open question: technology companies have argued in recent years that it is not technically viable to build a tool of these characteristics without conflicting with users' right to privacy, since it would require reviewing personal documents that would hypothetically be associated with the consumption of a certain type of content. The Australian Government plans to pilot an age verification system that may include biometrics or government identification to ensure that users are the age they claim to be. Furthermore, in the case of platforms with adult content, such as those offering pornography, companies will be obliged to verify that the people trying to access them are of legal age to do so.
What other countries have implemented similar measures?
The clearest precedent is the United Kingdom, which in October 2023 approved the Online Safety Act, a text that obliges content providers, whether the content is their own or user-generated as on social networks, to monitor the legality or suitability of the material. In cases where identification through an official government-issued document is not possible (the United Kingdom has no equivalent of the Spanish National Identity Document, nor is carrying one mandatory), the regulations establish that platforms must use technology “now within reach” to perform biometric verification based on the facial features and age of the potential user.
Why have some parties in Australia opposed it?
An example is the Green Party. One of its spokespeople in Parliament, Senator David Shoebridge, called the proposal “deeply flawed” and “dangerous”, arguing that it will negatively impact vulnerable young people who find support on social networks, such as LGBTQI+ youth in rural areas, as well as the privacy of all users, including adults.
How have technology companies reacted?
Both Meta and X have been critical of the measure. X noted that the proposal “will have a negative impact on the human rights of children and young people, including their right to freedom of expression and access to information.”
How is this measure different from the one announced by Spain?
In Spain, the age required to open an account on social networks is 14, but, as the Government has included in its draft law on the protection of minors in digital environments (which the Council of Ministers approved last June and which is still in the public consultation phase), it will be raised to 16. Right now, those under 14 need the consent of their mothers, fathers or legal guardians to do so. Regardless of the regulations (the European one, the GDPR, leaves it to Member States to set that minimum age between 13 and 16), each social network establishes a minimum access age in its terms and conditions: 13 in the case of TikTok, Twitter or Twitch, and 14 on Instagram.
For its part, the European Commission has launched an initiative with the different Member States to develop age verification tools that are respectful of privacy and that make it possible to ensure that a minor is of the required age and, if not, to block access or the creation of accounts.