The issue of policing social media has long attracted criticism. Balancing freedom of expression with the safety of the online community is a difficult and unenviable task. There has recently been significant political pressure focused on the apparent failures of social media giants to moderate content adequately, and the recently leaked manual allegedly for the use of Facebook’s moderators shines a light on the true scale of the problem.
In March this year journalists from the BBC reported 100 Facebook images that appeared to breach its guidelines for sexualised images of children, yet only 18 were removed. Automated responses for the other 82 posts suggested that there was no breach of community standards.
The leaked alleged Facebook moderators’ manual and associated documents now provide some insight into why some disturbing and abusive content might be permitted to remain online; and they raise some alarming inconsistencies which will undoubtedly shape the future debate on what more can be done to protect online communities.
Much has been reported over the use of Facebook Live to broadcast crimes such as sexual assaults and murder to a real-time audience. Last month the German Government backed a proposal to levy fines against social media platforms that fail to remove hate speech promptly, and a report by the UK’s Home Affairs Select Committee this month called for similar fines.
However, Facebook’s policy is to review some potentially offensive posts, such as non-sexual child abuse, “upon report only”, meaning that a post will only be considered once reported by a user of the site. The lack of proactive searches, by software or any other means, allows for an abusive image to be viewed an endless number of times before it is eventually reviewed by a moderator.
Further, Facebook reportedly employs 4,500 moderators at present and has announced plans to hire 3,000 more, but with 1.3 million posts every minute, moderating content effectively could be an uphill challenge. The Guardian reports that some moderators claimed to have just 10 seconds to make a decision on a reported image, and that there is a high turnover of moderators, who say that they have suffered from anxiety and post-traumatic stress.
Even if more time were available to the moderators, they would still have to contend with a number of peculiar inconsistencies within the leaked manual if it is all they have to consult.
The Guardian reports that the policies on sexual content are the most complex and confusing, with some moderators apparently suggesting that this is the area in which most mistakes are made. The leaked documents show that there has been a review of the policy on nudity since the backlash after Facebook removed a post depicting the Napalm Girl, an iconic Vietnam War photograph, because the child in the picture was naked.
There is now an emphasis on the context in which a violent image is shared, as it is felt that users ought to be allowed to discuss “global and current events.”
Additionally, portrayals of nudity and sexual activity are allowed if the image is “handmade” art, but not if it has been digitally created.
This is apparently because digitally created nudity is more likely to be pornographic. However, it is easy to imagine a scenario in which the distinction between handmade and digital art is indiscernible to the human eye, leading a moderator to a decision which may be contrary to the guidelines in place.
In an article written for the Guardian by Facebook’s head of global policy management there is an acknowledgement that some policies can appear inconsistent and that mistakes are made.
Universal legal standards are reportedly rare and it is claimed that Facebook needs to remain objective to provide consistency worldwide. But can more be done?
Part of the published extracts of the leaked manual covers “revenge porn”, i.e. the sharing of intimate, nude or near-nude photographs of a person without their consent.
According to The Guardian, Facebook says that this is a high-priority area, but it acknowledges that the line between acceptable and unacceptable sexual content is difficult to draw.
The manual states that there must be confirmation of the lack of consent, by either the vengeful context (captions, comments etc.) or an independent source such as the media.
However, this does not appear to cover a situation whereby an image is shared without a caption or comment and remains unreported.
Facebook has reportedly confirmed that it uses image-matching technologies in an attempt to stop some explicit images from ever being published on the website. Yet, according to a leaked document seen by The Guardian, in January alone moderators flagged in excess of 51,300 posts relating to “revenge porn”, leading to 5,110 accounts being disabled.
Child abuse images
Additionally, the manual states that photographs of non-sexual physical abuse and bullying of children do not need to be deleted or actioned unless there is a sadistic or celebratory element. Instead, videos of such abuse are marked as “disturbing”.
Some examples of non-sexual physical abuse provided within the manual include “videos of biting through skin/burn/cut wounds inflicted on minor by adult” and “videos of poisoning/strangling/suffocating/drowning inflicted on minor by adult”.
The justification for allowing the material to remain online is that it allows for “the child to be identified and rescued”; but I would question how this is effective unless the moderators are required to notify safeguarding professionals rather than simply marking images of illegal abuse as “disturbing”.
Further, comments containing threats that are either generic or not credible are permissible on the website, but the examples of permissible threats contained in the leaked manual include the disturbing and subjectively direct comment, “little girl needs to keep to herself before daddy breaks her face”.
Child protection experts have said that there should be “no grey areas” when it comes to child abuse, amidst calls for an independent regulator who can deal with extremist content online.
Concerns have also been raised that private companies appear to be making their own decisions over the permissibility of disturbing and sometimes illegal images.
In recent years there has been an alarming rise in the number of cases involving different forms of online abuse, as organisations and institutions in the UK have become more aware of the risk and prevalence of abuse.
At Leigh Day we have represented a number of clients in cases which involve indecent images being circulated on social media, and it is important that measures are put in place to protect those who may be subject to such abuse from re-victimisation.
Some social network platforms appear to have outgrown their own resources to monitor content effectively and ensure the safety of their users, and there seems to be a struggle to decide where the line should be drawn. With new challenges being posed by posts involving “revenge porn” or the use of Facebook Live to commit sexual offences, it is clear that new and innovative technologies will need to be developed, and fast, if such issues are ever to be rectified.
Additionally, with a greater focus on the “context” in which an image of violence or physical abuse is shared, some companies are seemingly permitting material to remain online which could be evidence of an illegal act.
There needs to be more transparency and open debate over the guidelines for moderators, sufficient staffing levels to cope with the demand put upon them and automatic referrals to the relevant safeguarding authorities in the event that abusive images are uploaded. Only then will we be on the way to protecting the online community.
If you have been the victim of “revenge porn” or the subject of online images of child abuse you can contact the abuse team at Leigh Day to seek legal advice on the different options available to you.