The artificial intelligence arms race

Why states need to start thinking about how to win a battle of the bots

Andrej Zwitter

Government and governance, National security, Science and technology | Australia, Asia, East Asia, South Asia, Southeast Asia, The Pacific, The World

27 July 2017

Cyberspace is now a territory where politics, economics, and foreign affairs are all contested – and Internet bots driven by artificial intelligence have emerged as key new actors, Andrej Zwitter writes.

Artificial intelligence (AI) is pervading all aspects of our lives. As one of the primary methods of analysing unstructured and messy data sets, it has become synonymous with big data. And with much of the globally produced data being transmitted via the Internet, a new cyber landscape has emerged, a parallel digital world that still requires the carving out of territories and rules.

These territories are currently dominated by large corporate actors, such as search engines and social media networks, which themselves compete over access to the new raw material – data. This landscape is also roamed by vigilantes and cyber criminals. But states, too, need to define their own role in this new world.

In recognition of this need, states are increasingly investing in artificial intelligence. China recently announced an IT strategy focused on artificial intelligence, virtual reality, and robotics. In doing so, the country is trying to ride the wave of renewed interest in emerging technologies around cyberspace, hoping to gain an economic advantage and position itself as a technology leader by 2030. This foresight in national strategy might give China a decisive competitive edge over countries that neglect the importance of artificial intelligence and robotics for the emerging data economy.

States are slowly coming to realise the importance of becoming a tangible, regulatory actor in anarchic cyberspace. It will not be easy. For instance, while it is already hard to establish criminal liability for international war crimes in the real world, cyber warfare elevates the burden of proof to a whole new level: agents can operate from anywhere in the world, do not require large-scale facilities such as military compounds, and do not even have to be human at all – they can be bots, viruses, or worms.


Microsoft recently initiated a discussion about a Digital Geneva Convention, also calling for the private sector to become involved in what used to be the exclusive domains of states: international law and warfare. But so far states have largely not succeeded in assuming a regulatory role in cyberspace.

Being a regulative actor in cyberspace requires more than just enacting barely enforceable laws on data protection, scraping the web for intelligence purposes, and patrolling the dark web. It also requires states to develop cyber policy that takes cyberspace seriously in its own right and under its own conditions.

Cyberspace has its own ontology that does not conform to the material world. This ontology involves digital globality, because cyberspace is inherently global in nature and does not lend itself to regulation on the territorial principle; digital anarchy, because laws that try to regulate the web struggle with territorial limitations and are difficult to enforce; and digital agency, because new cyber-native actors are emerging – smart bots, worms, and viruses acting as proxies for real agents are increasingly used to carry out cyber crime and cyber warfare.

An encompassing cyber strategy for foreign policy would, therefore, have to include at the very least the economy, the justice sector, foreign relations, and defence. Given that cyberspace operates in accordance with its very own principles of digital globality, anarchy and agency, for governments to attach passages on digital policy to already existing economic, political and military strategy would be to remain merely responsive – and would not go far enough in actually tackling the problem.

Examples of this can be found in a range of sectors. In the justice sector, in order to digitally and physically shut down two of the biggest dark net marketplaces, AlphaBay and Hansa Market, police forces of the UK, the US, Thailand, Lithuania, Canada, France and the Netherlands had to cooperate. This is just one example of the inherently transboundary nature of cyberspace.


In the economic sector, the 2010 flash crash was allegedly engineered by quant-hackers exploiting the interaction between regulations and high-frequency trading algorithms. Strategies used to manipulate high-frequency trading, such as spoofing, layering and front running, have since been banned. This, however, could not prevent the 2016 pound flash crash, which, analysts suggested, was caused by AI-driven algorithms going rogue in response to a press statement by François Hollande about a hard Brexit.

In the military sector, cyberattacks have become incredibly sophisticated, as Stuxnet and the Mirai botnet's attacks on Internet of Things (IoT) devices demonstrated. Mirai almost broke the Internet by launching distributed denial of service (DDoS) attacks from more than 1.2 million infected IoT devices. Less well known is a vigilante bot called Hajime, which was designed to counteract Mirai and similar botnets by infecting IoT devices itself and blocking some of the ports Mirai uses for its attacks.

A looming fear is what bots, worms, and viruses can accomplish when enhanced by artificial intelligence. This is not limited to polymorphic viruses designed to escape virus detection.

Bots – small programmes executing tasks as virtual agents – are already responsible for more than half of all Internet traffic. The logic for their use is clear: they are cheaper and available in larger quantities than human agents, faster in specialised tasks, and navigate cyberspace natively.
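To make the notion concrete: a bot, at its simplest, is just a small program that performs a task autonomously – a crawler, for instance, scans pages for links to follow. A minimal sketch of that link-scanning step, using only Python's standard library (the page content and function names here are purely illustrative):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """A minimal crawler-style bot component: scans HTML for links to follow."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the target of every hyperlink encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    """Return all hyperlink targets found in an HTML document."""
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

page = '<p>See <a href="/about">about</a> and <a href="https://example.org">example</a>.</p>'
print(extract_links(page))  # ['/about', 'https://example.org']
```

A real crawler would fetch each discovered link in turn and repeat – which is precisely why such agents scale so cheaply compared to human operators.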

Not all bots are benign, like chatbots, crawlers, or trading bots. In fact, more than half of them are malicious – spambots, impersonators, scrapers, and hackerbots. Using pattern recognition to circumvent CAPTCHAs a decade ago was only the beginning of AI-driven bots. With AI, the prospect of smart malicious bots discovering ever-new attack paths through trial and error might become a real problem for law enforcement and the cyber security sector alike.

At the same time, as illustrated with Hajime, bots can also patrol cyberspace for good. For example, they might scan Internet traffic for attempted cyberattacks and autonomously launch counter measures, alerts, and investigations.
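The traffic-scanning step of such a defensive bot can be sketched very simply. The fragment below is a crude, illustrative stand-in – flagging source addresses whose request volume exceeds a threshold – not a description of how Hajime or any real intrusion-detection system works:

```python
from collections import Counter

def flag_suspects(requests, threshold=100):
    """Flag source addresses whose request volume exceeds a threshold.

    A crude stand-in for the traffic-scanning step of a defensive bot:
    `requests` is a list of (source_address, path) pairs.
    """
    counts = Counter(src for src, _ in requests)
    return {src for src, n in counts.items() if n > threshold}

# Simulated traffic: one noisy source hammering a login page among normal visitors.
traffic = [("10.0.0.5", "/login")] * 150 + [("10.0.0.7", "/home")] * 20
print(flag_suspects(traffic))  # {'10.0.0.5'}
```

A production system would of course use far richer signals than raw volume, but the autonomous loop – observe, score, act – is the same.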

Internet Relay Chat bots (IRCbots) can, for instance, intervene when malicious users try to hijack chat conversations with profanities or for violent ideological purposes, responding to trigger phrases in a moderating function (for example, by banning users automatically). Enhanced with artificial intelligence capabilities like natural language processing, such bots can become indistinguishable from human agents and take on a wide range of tasks. They might even be used by law enforcement to automatically investigate online criminals, including sexual predators, narcotics traders, and weapons traffickers.
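The phrase-matching moderation logic described above can be sketched in a few lines. This is an illustrative toy, not the behaviour of any real IRC bot – the trigger phrases and function names are invented for the example:

```python
BANNED_PHRASES = {"spam link", "buy followers"}  # illustrative trigger rules

def moderate(user, message, banned_users):
    """Rule-based moderation: ban users whose messages match trigger phrases.

    Returns the action taken, mimicking how a moderating IRC bot
    might respond to phrases in a channel.
    """
    if user in banned_users:
        return "ignored"  # already banned, message dropped
    if any(phrase in message.lower() for phrase in BANNED_PHRASES):
        banned_users.add(user)
        return f"banned {user}"
    return "ok"

banned = set()
print(moderate("mallory", "Buy followers here!", banned))  # banned mallory
print(moderate("alice", "hello everyone", banned))         # ok
print(moderate("mallory", "hi again", banned))             # ignored
```

The AI-enhanced version the article envisions would replace the fixed phrase set with a learned language model – which is exactly what makes such bots hard to distinguish from human moderators.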

States, the private sector and white, grey, and black hat hackers are already building bot armies. Their potential purpose is only limited by our imagination. We’ve already seen such armies undertake DDoS attacks, campaign on Twitter (disguised as humans) for President Trump, and be used for criminal financial gain.

States need to develop strategies that go further than artificial intelligence and virtual reality alone. A forward-looking cyber strategy will also have to account for the new actors of cyberspace – bots – in all policy domains, including defence, justice, economics and foreign affairs.

It will not, however, be up to states alone to determine whether we will soon see intelligent bots as guardians of global peace and justice, or a global bot arms race and an AI battle over who gets to rule cyberspace.

Now more than ever, the tech industry, civil society, online interest groups, and hacktivists have a say in our joint digital future, and a moral responsibility to aim for a peaceful and just digital society.


