Social media is a key communication tool for children to engage with each other and the outside world. Thousands of businesses use social media to provide a platform to interact with children and young people and to allow children to communicate with each other. The issue of cyber-bullying has long been a concern for children, parents and the operators of these social media platforms.

On 24 March 2015, the Federal Parliament of Australia passed the Enhancing Online Safety for Children Act 2015 (the Act), with support from all major Australian political parties.

The Act seeks to enhance online safety for children through the establishment of a Children’s e-Safety Commissioner (Commissioner) and the implementation of a complaints system to remove cyber-bullying material targeted at Australian children from social media sites, such as Facebook. Failure to comply with the Act can result in significant fines.

CYBER-BULLYING PROTECTION FOR SOCIAL MEDIA AND BLOG SITES

Social media is entrenched in modern life. Companies rely on it to attract, advertise to, retain and communicate with their clients and the world at large. At an individual level, it is a primary method of communication between people of all ages, and particularly between children and young people. Social media takes many forms: from ubiquitous applications like Facebook, Twitter and Instagram, to “in game” social forums that are a regular part of online gaming, to simple chat rooms, blogs and even websites that allow people to communicate with each other in “comments” sections.

In ‘first of its kind’ legislation passed by the Australian Federal Government, the Enhancing Online Safety for Children Act 2015 requires all organisations that provide a platform for people to post communications on the internet to have specific child protection terms of use that prohibit cyber-bullying, and to provide a mechanism for complaints of cyber-bullying to be received and for offending material to be removed. The Act targets material directed at a particular Australian child that would have the effect of seriously threatening, intimidating, harassing or humiliating that Australian child (cyber-bullying material).

The Act establishes the office of a Children’s e-Safety Commissioner to administer a complaints system, monitor compliance and require organisations to remove social media posts consisting of cyber-bullying material. The Commissioner can also seek injunctions and levy fines.

The Act applies to any social media service – that is, any electronic service with a primary purpose of enabling social interaction between 2 or more end-users, where end-users can post material on the service.

WHAT ARE THE COMMISSIONER’S FUNCTIONS?

The Commissioner is an independent statutory office within the Australian Communications and Media Authority (ACMA). Alastair MacGibbon has been announced as Australia’s first Commissioner. Mr MacGibbon was the founder of the Australian Federal Police’s High Tech Crime Centre.

The Commissioner’s primary role is to administer a complaints system for cyber-bullying material targeted at Australian children. In conjunction with this, the Commissioner also oversees the compliance of social media services with the Act’s basic online safety requirements (see below for more detail).

The Commissioner is also responsible for promoting online safety for children; coordinating activities of federal government departments, authorities and agencies in this regard; and reporting to the Minister for Communications (Minister) on children’s online safety issues.

WHAT ARE THE BASIC ONLINE SAFETY REQUIREMENTS?

To comply with the Act’s basic online safety requirements, a social media service must:

  • include a provision in its terms of use that prohibits end-users from posting cyber-bullying material on the service, or an equivalent provision;
  • have a complaints scheme under which end-users of the service can request the removal of cyber-bullying material that breaches the service’s terms of use;
  • designate an employee or agent as the service’s contact person for the purposes of the Act, whose details must also be notified to the Commissioner.

There is an expectation in the Act that each social media service will comply with the basic online safety requirements. While the Commissioner can publish a statement of non-compliance on its website, non-compliance is not otherwise enforceable.

HOW DOES THE COMPLAINTS SYSTEM WORK?

A complaint can be made by or on behalf of an Australian child if the complainant believes cyber-bullying material targeted at an Australian child is accessible to, or has been delivered to, one or more end-users of a social media service.

As above, “cyber-bullying material targeted at an Australian child” is material that an ordinary reasonable person would conclude was likely intended to have the effect of seriously threatening, seriously intimidating, seriously harassing or seriously humiliating a particular Australian child, regardless of whether that child accessed the material.

A complaint to the Commissioner can only be made if a complaint has already been made to the relevant service, and evidence of that complaint must be provided to the Commissioner. The Commissioner will only proceed in cases where the service has not removed the offending material within 48 hours of receiving the original complaint.

The Commissioner has powers to investigate each complaint and to conduct the investigation as the Commissioner sees fit. Following investigation, the Commissioner can:

  • request a tier 1 social media service to remove the material; or
  • issue a notice to a tier 2 social media service to remove the material; and/or
  • issue a notice to an end-user who posted the material to remove it, refrain from posting further cyber-bullying material and give an apology.

The difference between tier 1 and tier 2 is described further below.

If a tier 2 service or end-user who receives a notice fails to remove the relevant material within 48 hours, the Commissioner can take enforcement action. For a tier 2 service, that can include a fine of up to $17,000, an enforceable undertaking or an injunction. The Commissioner can also issue a formal warning and publish a statement of non-compliance on its website.

WHAT IS THE DIFFERENCE BETWEEN TIER 1 AND TIER 2 SOCIAL MEDIA SERVICES?

The key difference between tier 1 and tier 2 services is that the Commissioner can take enforcement action against a tier 2 service, but not a tier 1 service.

A social media service can apply for tier 1 status, and that status will be granted if the Commissioner is satisfied the service meets the basic online safety requirements.

A social media service will only be determined to be a tier 2 service if the Minister makes a declaration to that effect by legislative instrument. This will only occur if it is a large social media service and the Commissioner makes a recommendation it should be categorised as such, or if the service requests tier 2 status.

IMPLICATIONS

The new legislation has important implications both for social media providers and for corporations and institutions that work with children.

For companies that provide social media services (in any of their many forms), there are a range of matters that must be acted upon to ensure compliance with the Act.

For organisations that work with children, it will be important to understand what can be done to prevent cyber-bullying, in order to minimise harm to children.