Content removal by social media companies, as recently implied by Scott Morrison, is not the solution to countering online violent extremism. Instead, policymakers must consider ways to further engage the public in decision-making processes as a means to restore their faith in the political system, Isaac Kfir writes.
The G20 countries have signed up to Scott Morrison’s initiative demanding that social media and tech companies take a more active role in countering online violent extremism.
The initiative is clearly part of a larger belief that these companies aren’t proactive enough in countering online violent extremism, when in fact the situation is much more complex than many policymakers make it out to be.
Challenges arise for several reasons. To begin, banning content seems to encourage disseminators to migrate to new platforms. Such a move may actually have an adverse effect on security operations, as it may limit open-source data collection. The Israelis, for example, rely on algorithms to collect vast amounts of data as part of their predictive intelligence program aimed at identifying potential terrorists.
What Morrison’s proposal seems to lack is an understanding of the ‘supply side’ and the ‘demand side’ of terrorism that pertain to online platforms. The former refers to the environment that feeds the extremist narrative, while the latter refers to how violent extremists use tools such as the Internet to carry out their activities.
For example, some might be a part of online extremist communities, while others might actively use the Internet to research how to carry out attacks.
Secondly, it seems that governments across the world want to regulate online spaces without truly understanding the implications of defining what is considered radical, anti-social, or disruptive – especially when such terms are applied far less often to those engaged in far-right activities.
With the proposal’s attempts to regulate social media, many people, especially those that pursue extremist ideas, have moved either to the dark web or the decentralised web (DWeb), or now use VPNs and privacy tools such as the Tor browser to limit supervision.
Consequently, those that are mainly affected by the proposals are the law-abiding citizens whose data is collected – not the extremists.
In countering online violent extremism, we must recognise that social network and tech companies have greatly altered their operations to address the relevant issues over the last few years.
Facebook, which has its own definition of what constitutes terrorism, recently released a report providing greater insight into how it moderates content and how that content will go through a Facebook ‘Supreme Court’.
The report lacks any substantive detail on the ‘how’, however. This is in part because Facebook is wary of giving away too many trade secrets, as well as being concerned that its policies will drive people away from its platforms.
Further, the company must deal with the challenges that come with trying to moderate copious amounts of content put together by tech-savvy extremists. This was evident when the perpetrator of the recent New Zealand shootings allegedly broadcast the attack on Facebook, where it was seen by around 200 people – none of whom reported the stream.
YouTube also struggled during this time as its platform was inundated with uploads of the violent footage, leading it at one point to disable several search functions so as to stop people searching for the material.
It’s worth recalling that, in 2010, the video site – once it finally recognised that it had a ‘terrorism problem’ and that its platform was being used to disseminate hateful, violent ideas – gave users the option to flag videos that they thought were abusive.
This was done to take advantage of the sheer number of users on YouTube. Despite several thousand videos of Anwar al-Awlaki, the al-Qaeda cleric, having been removed from the site, he continues to feature in many clips – clips that have a greater chance of being taken down once a user reports them.
On top of this, extremists retain the power to migrate to other platforms. Thus, when Twitter banned Paul Golding and Jayda Fransen, key leaders of the far-right group Britain First, their followers were encouraged to migrate to Gab, a private social networking service based in Texas.
In other words, banning users doesn’t put an end to the dissemination of violent extremist content or divisive and inflammatory material either.
One issue with Morrison’s counter-extremism initiative, along with others implemented in the past, is that it’s ‘easy politics’. Decision-makers act without understanding how complicated the issue actually is. Knowing that there was already pushback against big tech and social media companies, policymakers have simply taken advantage of the situation.
Decision-makers, however, appear to be acting for the sake of acting, instead of acting to actually address the issue. Content removal fails to acknowledge the overarching ecosystem that feeds the violent extremist narrative.
Clearly, social media and tech companies must do more to regulate content and enhance transparency when it comes to content removal. They must be held more accountable, ensuring that they don’t subcontract tasks to third parties that do very little to protect the health and safety of their employees.
Morrison’s proposal doesn’t address the fact that there is a desperate need for Western governments to change their tune, as what lies at the epicentre of violent extremism is a deep dissatisfaction with the current socio-economic-political system.
There is a pervasive sense of disillusionment with the established political class and system. One source found that, in 2018, only 41 per cent of voters were satisfied with Australia’s democracy – compared to 86 per cent in 2007. It also noted that, should this trend continue, fewer than 10 per cent of Australians might trust their politicians and political institutions by 2025.
Decision-makers must focus more on increasing public faith in the political system. This could be done by ensuring that parliament sits more often and that there is more substantive and meaningful debate – instead of what often turns into an undignified shouting match.
Additionally, using a citizens’ assembly to further involve people in the political system – beyond compulsory voting – would prove useful too.
Violent extremism must be countered through a multipronged strategy. While more must be demanded of the tech and social media companies, the public must also be further empowered through active participation in their governance. Only then will we take a step in the right direction.