Big tech companies were open to online safety regulation – why did NZ’s government scrap the idea?
- Written by Fiona Sing, Research Fellow, Population Health, University of Auckland, Waipapa Taumata Rau
The coalition government has scrapped efforts to modernise New Zealand’s out-of-date online safety rules[1], despite qualified support for change from social media and tech giants.
The aim of the Safer Online Services and Media Platforms[2] project, led by the Department of Internal Affairs, was to develop a new framework to regulate what can be published on online platforms and other forms of media (such as news) in New Zealand.
It addressed the sharing of harmful online content such as child sexual exploitation, age-inappropriate material, bullying and harassment, and the promotion of self-harm. It also aimed to improve the regulation of online services and media platforms more generally.
Announcing a halt to the project in May, Internal Affairs Minister Brooke van Velden argued[3] that illegal content was already being policed, and the concepts of “harm” and “emotional wellbeing” were subjective and open to interpretation. She also said it was a matter of free speech.
The principle of free speech is important to this coalition government and is an essential factor to consider in the digital world. On this basis, the Department will not be progressing with work to regulate online content.
However, when we looked at submissions[4] from tech and social media companies on the proposed framework, we found companies such as Facebook, Reddit and X (formerly Twitter) were broadly supportive of regulations – within certain limits.
Regulating online media
The Safer Online Services and Media Platforms[5] project had been in development since 2021. Internal Affairs invited public submissions last year.
The proposed rules would have created a new, more streamlined model of industry regulation: codes of practice overseen by an independent regulator to control online harm and protect public safety. The safety standards would have applied to online and other media platforms.
Currently, at least ten different government organisations have some level of responsibility for governing online services and responding to harmful content, often with overlapping remits. And some areas are barely regulated at all. Social media companies, for example, are not required under New Zealand law to meet safety standards.
Other countries have also been looking at how to regulate harmful digital content, online services and media platforms. Ireland[6], Canada[7], the United Kingdom[8] and Australia[9] have all progressed laws of this kind to regulate online spaces.
Outdated regulations
We examined the submissions from some of the dominant companies in the technology sector[10]: Google (including YouTube), Meta, Snap, Reddit, TikTok and X Corp. Our aim was to look at what these companies had to say about regulations that would directly affect their core business.
All of them agreed the current system is outdated and needs revamping. Google, for example, argued:
Content regulation has been developed for a different era of technology, focusing on mediums such as radio and television broadcasting. It is therefore appropriate that regulatory frameworks be updated to be fit for purpose to reflect both technological and societal changes.
These companies have already introduced their own protection policies and signed up to the voluntary Aotearoa New Zealand code of practice for online safety and harms[11].
Importantly, none of the companies argued their efforts towards self-regulation were sufficient.
The only acceptable option, according to the companies’ submissions, was a code focused on objectives rather than hard rules, which they saw as too prescriptive. Submissions insisted the new code had to be a “proportionate” system to implement and enforce.
Snap stated:
online regulation is most effective when it is based on broad principles that companies of all sizes are able to follow and implement proportionately.
Proportionality is usually a legal test used to decide whether a right, such as freedom of expression, can be limited in the interests of another public concern. However, only Meta and X Corp mentioned protecting freedom of expression in their submissions.
Most submissions stated they would trust an independent regulator to design one overarching code, with the caveat that the regulator needed to be truly independent of both the industry and the government of the day.
Reddit stated:
we are also concerned with the proposal for industry to develop codes of practice, rather than the government or an appropriate regulatory agency.
Submissions also noted there needed to be consultation with industry actors throughout the design process.
A missed opportunity
In the submissions on the proposed regulatory framework, each company had its own views on how codes should be designed, whether legal but harmful content would be included in a regulatory code, who should carry the burden of implementation, and what penalties should look like.
But notably, they were all supportive of a regulatory overhaul.
The decision to scrap the framework is a missed opportunity to protect future generations from some of the harms of online media.
References
- ^ out-of-date online safety rules (www.dia.govt.nz)
- ^ Safer Online Services and Media Platforms (www.dia.govt.nz)
- ^ Brooke van Velden argued (www.facebook.com)
- ^ submissions (www.dia.govt.nz)
- ^ Safer Online Services and Media Platforms (www.dia.govt.nz)
- ^ Ireland (www.cnam.ie)
- ^ Canada (www.canada.ca)
- ^ United Kingdom (www.gov.uk)
- ^ Australia (onlinesafety.org.au)
- ^ dominant companies in the technology sector (www.dia.govt.nz)
- ^ code of practice for online safety and harms (thecode.org.nz)