While cracking down on the rise of ‘fake news’ is crucial for the future of democracy, enforcing fair laws on disinformation is easier said than done, Martin Kwan writes.
Fake news, misinformation, and disinformation can mislead the public, and their spread is amplified by the prevalence of social media and communication technologies. Carefully disguised as the truth, they often involve unverifiable claims that fact-checkers may find hard to dispel.
This has caused many problems, undermining trust in journalism, disrupting elections and democracy, and even inciting hatred and violence. The issue has become particularly pressing recently, as some countries have experienced an ‘infodemic’ of incorrect health advice amidst the COVID-19 pandemic. In some countries, governments have cynically politicised pandemic news and even enforced emergency censorship.
Various solutions have been proposed to tackle this, such as providing education on media literacy, enacting laws to punish repeated spreaders of disinformation, utilising artificial intelligence technology to detect fake news, and putting in place regulations on social media platforms.
It remains open to debate which approach is the best, in part because there is still no solid consensus on the definition and exact scope of fake news, misinformation, and disinformation. Some countries have even deliberately chosen not to define these terms to broaden the scope of regulations.
Still, a definition is important. Generally, disinformation involves intentional sharing of false, deceptive, or misleading information. Alongside this is misinformation, which is the inadvertent sharing of the same information in the belief it is true. Fake news then broadly refers to these two notions without making any specific distinction between them.
Simply ignoring these definitions to increase the scope of a disinformation law would be a mistake: cast too wide, such a law can be used as a tool of oppression, whether to target honest journalism or to suppress opposing political views.
This is why the issue of how the law should apply to disinformation is widely debated, but existing analysis has paid inadequate attention to one crucial aspect of it: whether such laws should define incomplete information as disinformation.
Again, there is no universally accepted definition of this concept, but whilst most disinformation involves some truth and some fabrication, surely policymakers can agree that incomplete information – information that is wholly true but omits crucial details or context – is different.
Technically, this often falls within various definitions of fake news, misinformation, or disinformation. Incomplete information can be regarded as legally false because it may have a deceptive effect. In Singapore, for example, under section 2(2)(a) of the Protection from Online Falsehoods and Manipulation Act 2019, “a statement is false if it is…misleading…in part”.
This raises a tricky problem for governments. On the one hand, a law which covers incomplete information may be excessive and open to use as a tool of oppression. On the other hand, a law which excludes such information may be rendered ineffective.
There are five main problems with regarding incomplete information as disinformation.
First, incomplete information is not necessarily bad. Just as lawyers represent only one side of a case, sometimes a point can only be made by offering a one-sided perspective.
Second, the provider of incomplete information does not necessarily have a culpable intention. They may share it inadvertently and without any intention to mislead, and they may not be aware – or may be able to argue they were not aware – of the significance of the omitted details.
Third, incomplete information can serve positive social functions. Pieced together with other perspectives, it sometimes helps ascertain the complete picture.
Fourth, the case for prohibiting disinformation at all is controversial in some places because of its potential to undermine constitutional freedoms. Courts in the United States, for example, have endorsed the view that, for the press to be truly free, even false information warrants protection. Since incomplete information consists of true information, the case against it is even weaker under some constitutions.
Fifth, disinformation laws may pose a dilemma for the rule of law. Even the government itself may sometimes – intentionally or not – present incomplete information.
It may do so without any malign motive, yet still be caught by regulations that treat incomplete information as disinformation.
While these problems make it very difficult to include incomplete information as disinformation, a law which fails to cover it at all may simply be ineffective.
There are many situations where context is omitted from reporting with an intention to mislead, and this is the exact kind of deception targeted when cracking down on disinformation.
On top of this, incomplete information is dangerous. Deliberately or not, omitting crucial facts can incite hatred, encourage harmful or irresponsible behaviour, and damage democracy.
As such, it is not feasible to exclude incomplete information entirely. Doing so would open loopholes and create room for abuse, and so some definition of incomplete information that addresses the problems outlined above has to be part of such legislation.
In any case, the definition of disinformation adopted in any law or regulation must be framed broadly in order to work, but the inevitable coverage of incomplete information brings with it many problems.
This means policymakers must consider the implications of casting such a wide net when choosing how to tackle disinformation. They must also be aware of the many potential unintended consequences of a definition that is drawn too tightly, and seek a balance that protects the public from the dangers of misleading claims.