The regulation of online disinformation and harmful content is the subject of global attention, with Australia and the United Kingdom (UK) progressing world-first reforms in this area. In a key milestone, the Australian Online Safety Bill 2021 (Australian Bill) was introduced into Parliament on 24 February 2021 and, following amendments by the Senate Environment and Communications Legislation Committee, is currently before the Senate. This article lays out key developments in Online Harms legislation in Australia and the UK.
Background on Online Harms legislation
As the Australian Minister for Communications, Cyber Safety and the Arts, Paul Fletcher, put it: ‘The internet has brought great social, educational and economic benefits. But just as a small proportion of human interactions go wrong offline, so too are there risks online’.
The Australian Government commenced its reform process in 2018 with reviews of the existing regulatory framework and the announcement of a $17 million online safety package, which funded the development of an Online Safety Charter. Last month, the Government published an Industry Code of Practice on Disinformation (the Code), a self-regulatory code aiming to manage online disinformation in Australia. The Code was prompted by the UK’s Online Harms White Paper of April 2019, which proposed what became the Online Harms Bill (UK Bill).
The UK Bill seeks to address a wide range of ‘online harms’ – including online disinformation, terrorist propaganda and pro-suicide content – within a ‘coherent, single regulatory framework’. Significantly, the UK Bill imposes a statutory duty of care on online service providers to protect their users. Online companies that breach this duty by failing to identify, remove and limit the spread of illegal content may be fined up to £18 million or 10% of annual global turnover – whichever is higher – by Ofcom, the appointed regulator. The law applies to all companies with UK-based users through a tiered system: companies with large audiences (such as Facebook, TikTok, Instagram and Twitter) will be classified as Category 1 and face more stringent obligations than Category 2 services (such as private messaging apps and platforms hosting dating services or pornography).
The Australian Bill and UK Bill align with the international momentum towards strengthening safeguards against illegal online content. This goal gained traction following the Christchurch terrorist attack of March 2019: in May 2019, New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron hosted a political summit that united countries and tech companies to end the online organisation and promotion of terrorism and violent extremism. Australia joined 17 countries in signing the summit pledge known as the ‘Christchurch Call’ and, one month later, Prime Minister Scott Morrison issued a similar call at the G20 leaders’ summit.
Key developments in Australia
The Australian Online Safety Bill 2021
- Legislates industry guidelines for digital platforms (‘Basic Online Safety Expectations’), including periodic reporting requirements. Providers of digital products and services will be penalised for failing to respond to a notice from the eSafety Commissioner (the Commissioner) requiring them to report on their adherence to the Government’s guidelines;
- Updates Australia’s Online Content Scheme so that removal notices can be issued to online services and link deletion notices can be issued to internet search engines;
- Expands the cyber-bullying scheme beyond social media platforms to include electronic services such as games, websites, messaging and hosting services;
- Introduces a new adult cyber-abuse scheme which allows the Commissioner to order the takedown of seriously harmful online abuse where a digital platform has failed to respond to a complaint. This scheme applies the same standard as the Criminal Code, which is higher than the standard applied to the cyberbullying of an Australian child;
- Reduces the take down response timeframe from 48 hours to 24 hours;
- Grants the Commissioner the power to rapidly block websites hosting abhorrent violent or terrorist material during online crises such as the Christchurch massacre; and
- Grants the Commissioner the power to compel search engines and app stores to remove access to a website or app that systematically ignores take down notices for Class 1 material (for example, child sexual abuse material).
Key developments in the UK