The Tenth Circuit recently affirmed, and the Ninth Circuit reversed, lower court rulings regarding eligibility for the safe harbour protection provided under Section 512(c) of the Digital Millennium Copyright Act. Both rulings concern the Digital Millennium Copyright Act safe harbour that protects eligible online service providers in the United States from liability for "infringement of copyright by reason of storage at the direction of a user". Both rulings hinge on the identity of the user who contributed the infringing content: if the user is an agent or employee of the service provider, or the service provider exercises undue influence over the user, the safe harbour evaporates. The safe harbour shields a fully compliant service provider from liability for copyright infringement.
The Digital Millennium Copyright Act was passed in 1998 to balance the rights of copyright owners with the needs of US online service providers in the face of rapidly developing online technology. Nearly 20 years later, many users who share content online are unaware that posting content owned by a third party to a social media platform constitutes copyright infringement. Other online users understand copyright well and rely on the doctrine of fair use to avoid licensing content from copyright owners. Copyright owners will target both these users and the service providers for copyright infringement. Without the Digital Millennium Copyright Act safe harbours, US service providers would find online services too legally troublesome to operate. As US law, the Digital Millennium Copyright Act cannot be used to take down infringing content in other countries, but copyright owners can access similar notice and takedown systems in other countries, including Canada and EU member states.
There are several safe harbours listed in Section 512 that shelter qualifying online service providers from copyright infringement. Section 512(c) is intended to protect interactive platforms that attract users who upload and share:
- audio tracks;
- videos; and
- other digital content.
Such interactive platforms are now found in every industry, including platforms for business services, advocacy, membership organisations, non-profits, entertainment, news media, government, education and scientific research. With the advent of social media, Section 512 is more important than ever, as copyright infringement continues to be rampant.
Some interactive platforms use moderators and monitoring systems to detect and remove infringing content, but large interactive platforms, which handle hundreds to millions of posts a minute, cannot police every copyright infringement in real time. Qualifying for the Section 512(c) safe harbour is therefore a business requirement for interactive platforms in the United States. Many interactive platforms provide services ‘in the cloud’ through a user interface and use social media technology to communicate with customers. These platforms may not realise that they face the same threat of vicarious copyright infringement as large social media platforms and should consider qualifying for the Section 512(c) safe harbour.
Copyright owners of pirated content are increasingly pushing back against safe harbour defences raised by US interactive websites by targeting weaknesses in service providers' compliance with the safe harbour requirements. If all of the safe harbour requirements under Section 512(c) are not fully met, the service provider may be liable to the copyright owner for its users' infringing content.
More recently, two paparazzi cases, BWP Media USA, Inc v Clarity Digital Group, LLC (820 F3d 1175 (CA10 2016)) and Mavrix Photographs LLC v LiveJournal, Inc (14-56956 (CA9 2017)), focused on the identity of the ‘user’ in the Section 512(c) safe harbour where online service providers used moderators or curators to monitor postings on their platforms.
In 2016 the Tenth Circuit affirmed a lower court ruling that the Section 512(c) safe harbour protected the service provider from liability for 75 photos of celebrities posted to a user-generated news platform (Examiner) by volunteer and paid independent contractor users. The lower court held that the term ‘user’ should be interpreted by its plain and natural meaning: "a person or entity who avails themselves of the online service provider's system or network to store material." The appeals court rejected BWP's argument that a ‘user’ should exclude the owners, employees and agents of the service provider. BWP had contended that Examiner's contractors were employees or agents because:
- Examiner had vetted them and provided them with assignments; and
- editors had provided them with guidance and compensated them with web traffic.
The court found no basis in the law for BWP's narrower definition of ‘user’.
In early 2017 the Ninth Circuit reversed the lower court's summary judgment ruling, which had found that the Section 512(c) safe harbour applied because the photos were submitted by users. Mavrix had argued in the district court that the safe harbour did not apply because LiveJournal's volunteer community moderators were acting as agents of LiveJournal. The Ninth Circuit remanded the case to the district court to determine whether the moderators who approved posting the images were in fact acting as LiveJournal's agents.
In order to qualify for Digital Millennium Copyright Act safe harbour protection under Section 512(c), a service provider must provide a solid foundation by:
- publishing an appropriate copyright policy in its website agreements;
- designating an agent to receive notifications of claimed copyright infringement; and
- registering the designated agent in the Copyright Office Digital Millennium Copyright Act agent directory.
To be fully eligible for the Section 512(c) safe harbour, the service provider must have a thorough understanding of how to comply with the Digital Millennium Copyright Act and its notice and takedown procedures. The service provider must also clearly understand its duty to remove infringing content even without notice from the copyright owner. Under Section 512(c)(1), the safe harbour applies only if:
- the service provider does not have actual knowledge that the content, or an activity using the content, on the system or network is infringing;
- in the absence of such actual knowledge, the service provider is not aware of facts or circumstances from which infringing activity is apparent (so-called ‘red flag’ knowledge);
- on obtaining such knowledge or awareness, the service provider acts expeditiously to remove or disable access to the content;
- the service provider does not receive a financial benefit directly attributable to the infringing activity, where the service provider has the right and ability to control such activity; and
- on notification of claimed infringement as described above, the service provider responds expeditiously to remove or disable access to the content that is claimed to be infringing or to be the subject of infringing activity.
Interactive websites whose moderators are agents or employees have no duty to investigate posted content for potential infringement, but moderators should delete or disable access to content that is obviously infringing.
This article first appeared in IAM. For further information please visit www.iam-media.com.