Last night, the European Parliament approved a controversial law that will require platforms - broadly defined, and potentially including any website in the EU that has a comment section or chat function - to remove terrorist content within one hour of being requested to do so.

How could that possibly be controversial? The importance of combatting terrorist content goes without saying.

I won't dive deeply into the concerns that the law (1) could be weaponised by authoritarian regimes to silence dissidents across Europe, and (2) will drown smaller platforms in unsustainable compliance costs.

Instead, I'd point to the fact that, by imposing strict takedown deadlines, the law could have the unintended consequence of incentivising platforms to set aside free speech and censorship concerns in favour of blunt algorithmic tools that minimise the number of removal requests they receive. Whilst the law doesn't require platforms to use content filters, its broad definition of terrorist content makes it inevitable that they will increase their reliance on automation, removing material such as satire and activist speech that their systems can't distinguish from genuine terrorist content.

This move towards greater automation also comes as the regulation of AI has taken centre stage in Europe – requiring companies to identify and address the biases that are baked into most machine learning tools. Balancing these competing interests is going to be a real challenge for all platforms, but particularly the smaller, poorly funded sites where terrorist groups often make their home.

Hosting service providers will have to remove or disable access to flagged terrorist content in all member states within one hour of receiving a removal order from the competent authority.