How should the world tackle far-right extremism?

A different problem that requires different solutions

Isaac Kfir

Government and governance, Law, National security | Australia, The World

21 August 2019

Far-right extremism poses a unique set of challenges distinct from those posed by Salafi-jihadism. It would be inappropriate to apply the same tactics against the two, Isaac Kfir writes.

Following the mass shootings in Christchurch, El Paso, and Dayton, calls for greater action to combat online far-right extremism have been getting louder. As expected, many heads have turned towards technology and social media companies, demanding that they adopt a more robust de-platforming regime.

Social media companies could – and should – do more, whether by hiring more people to help monitor and remove extremist content or by increasing investment in machine learning and artificial intelligence to do so.

There have also been suggestions to improve information-sharing networks such as the database used by the Global Internet Forum to Counter Terrorism (GIFCT). This industry-led initiative maintains a massive shared database of 'hashes' – digital fingerprints of extremist content – ensuring that if one member of GIFCT removes a piece of content and shares its hash, the others remove matching content as well.
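To illustrate the basic mechanics of such a shared database, here is a minimal sketch in Python. The class and method names are hypothetical, and real systems use perceptual hashes rather than the simple cryptographic hash shown; this is an assumption-laden toy, not GIFCT's actual design.

```python
import hashlib

class SharedHashDatabase:
    """Toy model of a shared fingerprint set: one member flags content,
    and every member can then match uploads against the shared hashes."""

    def __init__(self):
        self.hashes = set()

    def report(self, content: bytes) -> str:
        # A member removes content and shares its fingerprint with the group.
        digest = hashlib.sha256(content).hexdigest()
        self.hashes.add(digest)
        return digest

    def is_flagged(self, content: bytes) -> bool:
        # Any member checks an upload against the shared set.
        return hashlib.sha256(content).hexdigest() in self.hashes

db = SharedHashDatabase()
db.report(b"known extremist manifesto")             # member A flags content
print(db.is_flagged(b"known extremist manifesto"))  # member B matches it: True
print(db.is_flagged(b"unrelated post"))             # no match: False
```

Note that exact hashing only catches byte-identical copies; a single changed pixel or character defeats it, which is one reason real systems rely on fuzzier perceptual matching.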

Other recommendations include enforcing terms of service and reporting abusers to the authorities. The Tech Against Terrorism project mandated by the United Nations is one such example. It aims to assist and support tech and social media companies in developing useful terms of service and in sharing information with each other.

There is also a demand that these companies adopt a consistent content moderation policy, particularly because of the mystery that shrouds their processes. Ultimately, however, these measures amount to nothing but a Band-Aid solution on a gushing wound.


Social media companies are looking for efficient ways to moderate and engage with content. Facebook’s initiative aimed at using an algorithm to identify individuals possibly considering suicide is one example. The algorithm looks at a whole cache of information that the individual posts, which may include words like ‘kill’, ‘die’, and ‘goodbye’, as well as looking at the rate at which the content is uploaded.

It is much harder to apply the same logic to identifying a far-right activist, let alone someone planning a far-right terrorist attack. It would therefore be inappropriate to apply the tactics used against Salafi-jihadi online activity to the far-right, not only because of their notable differences but also because 'far-right' is an umbrella term used all over the world.

Difficulties also arise because the language of far-right ideology has become part of a larger conversation on political correctness, liberalism, globalisation, gender roles, and readings of history.

This is best seen in the reactions to far-right terrorist attacks compared to those carried out by Salafi-jihadis. For example, the Christchurch shooter was portrayed by the Daily Mirror as an 'angelic boy', whereas the perpetrator of the Pulse nightclub shooting was described as an 'ISIS maniac'.


Similarly, US Senator Lindsey Graham described Dylann Roof, the suspect in the Charleston church massacre, as not much more than a 'whacked out kid'. On the other hand, he argued that Dzhokhar Tsarnaev, the suspect in the Boston Marathon bombings, should be considered an 'enemy combatant' like those imprisoned in facilities such as Guantanamo Bay.

Tackling online far-right extremism is likely to be more difficult because there is uncertainty as to what the far-right actually is. At best, it can be described as a network of individuals and groups adhering to non-mainstream ideologies and attitudes.

The network includes populists, anti-establishment agents, and white supremacists, operating through what has been described as a 'leaderless resistance'. These extremists seek to exploit existing laws on civil and political rights in their propaganda campaigns, as their language skirts the boundaries of what is permissible.

Secondly, much of far-right violent extremism originates in the US, although there is evidence that Ukraine, Russia, and the Balkans have become safe havens for these activists too.

In the US, one must contend with First Amendment protections and a lack of consensus over how to define domestic terrorism, on top of a history of far-right activism that includes the Ku Klux Klan and the sovereign citizen movement.

Recently proposed legislation, pushed by President Trump and others, is premised on the claim that conservative voices are being silenced by social media companies. This has made an appropriate regulatory response even less likely.


One solution that has gathered momentum over the last few years is de-platforming – a tactic social media companies use to address violent extremist content and messaging that undermines social cohesion.

There are several problems with this approach. Firstly, many of those who operate in extremist spaces already prefer alternative channels, especially platforms such as Discord, 8chan, and Telegram, to spread their ideology.

Secondly, they also use the decentralised web and 'Alt-Tech' platforms, which encourage and support extremist messaging, often under the guise of free speech.

Thirdly, there are issues with the use of language itself. Certain comments may not be banned because they don't violate terms of service per se, and banning them could raise concerns about social media companies acting as censors.

Moreover, such alternative platforms often emerge precisely when a mainstream one is shut down. Regulating mainstream social media sites doesn't address the root problem.

To address the roots of far-right online activity, we must recognise the toxic nature of political discussions and debates and the perpetuation of a 'clash of civilisations' narrative. Combined, these inspire ideas such as the 'Great Replacement' and the 'manosphere'.

By allowing slogans such as 'It's Okay To Be White' to go unchallenged, we fuel the far-right network and embolden these extremists, with mainstream policymakers whitewashing intolerant language.

Closer to home, George Brandis’ emotional reaction to Pauline Hanson’s decision to enter the Senate wearing a burqa should serve as a reminder of what supports our democracies and what destroys them.

In response to what he condemned as an attempt to inflame tension and sow discord, the then attorney-general emphasised that ideas that attack the core tenets of Australia's democracy must be identified and challenged. Though it's not yet clear how countries should go about this, what's certain is that it will require new ideas and policies.
