The UK’s Online Safety Bill has been controversial. It contains measures designed to increase safety by requiring social media platforms, and high-risk or high-reach gaming platforms, that enable user-to-user interactions or user-generated content to prevent or remove illegal content. Under the legislation, illegal content includes (amongst other things) content that is potentially harmful to children, that facilitates animal cruelty or torture, that promotes self-harm, or that depicts controlling or coercive behaviour or extreme sexual violence. These are laudable aims, but there are concerns that, despite the existence of “free expression provisions”, the legislation will curtail the dynamic and creative environment of the internet.
Currently, online intermediaries are protected from liability provided they act quickly to take down unlawful content once they are put on notice. The new legislation, however, will oblige them to implement proactive systems and processes to limit the likelihood of unlawful content being hosted or posted. The regime will be overseen by Ofcom, which will have the power to impose fines of up to £18 million or 10% of annual turnover (whichever is higher), and tech company executives could also face prison.
The Online Safety Act, which has been the subject of debate for six years, will run to some 300 pages and cover a range of areas, from implementing age-verification requirements to giving bereaved parents the right to access information about their children. It places significant obligations on social media companies and others, and it remains to be seen whether the UK Government can successfully balance user safety against its ambitions for the UK as a technology hub.