In the aftermath of the Christchurch massacre, Facebook and other social media platforms have come under renewed pressure from politicians to improve their regulation mechanisms, both to limit the ability of would-be terrorists to live-stream their atrocious acts and to crack down on hate speech and the transmission of racist, misogynist and bigoted ideologies in general.
The scale of the task is daunting – Facebook says it removed 1.5 million videos of the attack in the 24 hours after it was streamed live, 1.2 million of them blocked at the point of upload. But the nature of the task is complicated by the unpredictability of these attacks, by the prompts the terrorist built into his video and “manifesto” to ensure it travelled online, and by the need to ensure that news organisations can cover events in good faith without falling foul of the restrictions.
When it comes to the expression of hate, a further problem lies in how regulation would in fact work. Those who propagate jihadist ideologies online are adept at deploying the symbols of conventional Islam for their purposes, while the denizens of the alt-right message boards have developed an entire subculture of oblique references and ironic detachment to cover their tracks.
Attempts to restrict their presence could quickly degenerate into the kind of absurd regimes of suppression we normally associate with China’s restrictions on online speech, such as preventing mentions of Winnie the Pooh due to the character’s association with President Xi Jinping.