Across the world, governments are exploring whether to intervene in online media, and this may affect how Australia approaches issues of platform neutrality and harmful expression, Jelena Gligorijevic writes.
Recent online behaviour, by both platforms and users, has brought into even sharper focus the boundaries of online speech and its regulation. In most liberal democracies, including Australia, the United States, and the United Kingdom, the online sphere remains largely untouched by overarching platform-targeting laws that either protect or censor unwanted or harmful speech.
In Australia, there is still no general law – a 'neutrality law' – prohibiting online platforms such as Google, Twitter, and Facebook from regulating the content their users post. Conversely, there is also no overarching legal duty on these platforms to censor harmful online speech – what might be called a 'duty of care' law.
There are, and always have been, various laws that affect individuals' freedom of expression, including defamation, harassment, incitement, and privacy. These laws touch platforms to varying extents in different jurisdictions, but they are, for the most part, of limited use in holding platforms liable for third-party content.
This is because the law continues to characterise platforms as predominantly neutral space-providers or 'town squares', rather than as active publishers or editors in the way traditional media are.
For example, it is unclear whether a search engine or social media platform is liable for defamatory content posted by its users. In the United Kingdom, the courts have held that platforms could indeed be liable, but only where they have been put on notice that the content is defamatory. In such cases, the statutory defence for ‘secondary publishers’ is unlikely to provide strong or conclusive immunity for platforms.
In Australia, the High Court has indicated that courts should follow this path, holding platforms liable in individual cases where the notice conditions are made out. Still, a definitive judgment on the matter is yet to be made, and the recent amendments to the model defamation provisions do not conclusively address the issue.
In other jurisdictions, notably the United States, platforms are specifically shielded from liability by provisions – most prominently section 230 of the Communications Decency Act – originally enacted to protect the free flow of online information and to ensure the internet would not be encumbered by lawsuits and take-down orders.
Apart from such instances of publisher liability (or immunity) for specific content that is itself unlawful, some governments have enacted, or are considering, overarching platform duty of care laws, or, conversely, overarching platform neutrality laws. These might present models to other countries, including Australia, where there are no such generic laws applicable to online platforms.
Germany was a pioneer in 2017 when it enacted the Network Enforcement Act (NetzDG), imposing a duty (and concomitant penalties) on online platforms in respect of content deemed to be hate speech or 'fake news'. A similar law exists in France.
The United Kingdom Government has been considering imposing a statutory duty of care on platforms. Most recently, in its official response to the public consultation on its White Paper on Online Harms, the government indicated that it intends to draft an Online Harms Bill, which would impose a duty of care on platforms to deal with 'harmful' online content and give the communications regulator, Ofcom, additional enforcement powers.
Significantly, under such a law, whether online content is ‘harmful’ (and covered by the duty of care) is not determined by whether it is already unlawful (for example, under existing defamation, incitement, or harassment laws). Content can be deemed harmful, and removed, even when it is not in and of itself unlawful.
At the opposite end of the spectrum, the Polish Government has proposed a neutrality law that would penalise platforms that refuse to reinstate deleted users or content.
In Italy, the Court of Rome recently applied a provision of the Italian Constitution relating to the diversity of political parties in a way that forced Facebook to reinstate the profile and promotional content of an established political party, CasaPound Italia, even though that party celebrates and advocates for the fascist legacy of the former dictator Benito Mussolini.
The court reasoned that, because there was no evidence of direct incitement to violence in the posts and no evidence of any breach of the criminal law, the platform was not entitled to deprive the party of its participation in national politics, nor to deprive the electorate of information about that party.
If Australia wishes to legislate in the direction of Germany's law or the United Kingdom's proposed duty of care – imposing legal burdens on platforms to monitor content deemed harmful, even where that content is not otherwise unlawful – then the implied constitutional freedom of political communication will have to be considered carefully.
Specifically, any such law must not disproportionately encroach on the freedom of individuals to express or to receive political communication. That freedom is narrower than a general 'freedom of expression': it is limited to political communication. It is also neither absolute nor superior to other laws. This differentiates it from the United States' First Amendment, and it leaves space for federal legislation to impose duties of care on platforms of the kind being considered by the British Government.
However, the implied freedom in Australia still presents a limitation on the extent of any potential legislation and is likely to pose a hurdle for any broadly framed legislation that permits or obliges platforms to remove users or content on the basis of political messages or consequences.
Any such laws would have to be framed carefully to target clearly defined categories of harmful content, as do, for example, Australia's existing racial vilification laws, criminal incitement laws, and the recently enacted criminal laws against abhorrent violent content.
It is, of course, open to Australia to legislate in the opposite direction. Given that the implied freedom binds only governmental acts and cannot be invoked directly against private companies such as social media platforms, there may be political appetite to reinforce internet neutrality and freedom of speech online through laws that prohibit platforms from removing content or users that they deem harmful, irrespective of how the law treats that content.