What does the paper say?
Not much. The paper does not define harmful content, but instead provides an initial list of harms that would be in scope, including clearly defined harms such as child sexual exploitation, as well as less clearly defined harms such as disinformation, cyberbullying and trolling. The paper also lists certain harms that are out of scope, such as harms suffered by legal entities rather than individuals and breaches of data protection legislation. The ‘initial’ list of harms in scope is stated to be neither exhaustive nor fixed, but the paper does not explain how the list would be amended or extended, implying that the regulator would have the final say as to what constitutes harmful content.
The paper does not go into detail as to what each harm entails. For instance, it gives no indication as to what would amount to trolling, stating only that “Cyberbullying, including trolling, is unacceptable.” But where do we draw the line between banter and trolling? Malicious tweets about singer Gary Barlow’s stillborn baby were clearly unacceptable, but are comments left on an ex’s social media page shaming their infidelity acceptable? The paper does not help answer these questions.
The guidance provided on how to fulfil your duty of care does shed some light on a few of the ambiguous harms. For example, the guidance explains that reading false or misleading information could encourage us to make decisions that damage our health, undermine our respect and tolerance for each other and confuse our understanding of what is happening in the wider world. It further states that the code of practice addressing disinformation will have to focus on protecting users from harm, not judging what is true or not, but admits that this will be a difficult judgement call. This may limit the effectiveness of technological solutions for identifying content that falls within scope; whilst algorithms may exist to distinguish truth from falsity, an assessment of harm would seem to require more human input. We will look at the role of technology as a solution later in this series.
Can you define harmful content?
Harm is an ambiguous concept and by nature is hard to define. Certain harms are more easily identifiable than others, such as child sexual abuse material, but more subjective harms such as “intimidation” will require more critical analysis. An individual’s religion, age and culture will play a large part in interpretation. What we can stomach as a society also evolves as we mature: what was offensive 10 years ago may not be so today. Given the difficulty in defining harmful content, it comes as no surprise that the paper has been vague on the subject. But how do you base legislation on such a nebulous concept?
In the Government’s eyes, uncertainty may not necessarily be a bad thing. Keeping the notion of “harmful” content vague allows the regulator’s interpretation of it to evolve with the times. There seems to be a trend for this kind of approach in broadcast regulation. The Communications Act 2003 refers to ‘offensive and harmful material’ but does not define it, leaving the regulator Ofcom to decide what it means. Ofcom has since had plenty of experience tackling harmful content in broadcasting, while simultaneously attempting to protect freedom of expression in the UK communications industry. The story is now repeating itself some 16 years later. The independent regulator tasked with assessing online harmful content will have to use the parameters set by legislation and ensure the right balance between freedom of speech and protection from harm is struck.