With the increasing role that science and technology play in society, the need for philosophers and scientists to collaborate will only grow, Lachlan Walmsley writes.
Every so often a popular scientist will, as Stephen Hawking did back in 2011, dismiss philosophy as dead, made obsolete by science. As a philosopher, I want to tell you that the truth is almost the exact opposite.
To start with a philosophical exercise – an intuition pump – how would you feel if the person making a decision that might change your life did not know right from wrong? You’d probably prefer that they were familiar with the distinction and were motivated to make fair and just choices wherever they could.
With the rise of machine learning algorithms, the person making a life-changing decision on society’s behalf might be a computer program.
There’s nothing inherently bad about automation. If a machine can do a job better than a human, it should do it. But while computer programs excel at tasks like identifying patterns in data and calculating probabilities, they are not very good at taking moral and humanistic considerations into account.
Consider the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool, already used to assist many judges in the United States in their sentencing and probation decisions. Although the algorithm does not explicitly measure race, it measures other factors that often do correlate with race, like geographical location and the marital status of the defendant’s parents.
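To see how a nominally race-blind score can still encode race, here is a minimal sketch in Python. Everything in it is invented for illustration – the “postcode” proxy, the correlation strength, and the scoring rule bear no relation to how COMPAS actually works:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0 or 1) and a proxy that correlates with it.
group = rng.integers(0, 2, size=n)
postcode_risk = 0.7 * group + 0.3 * rng.random(n)

# A "blind" score built only from the proxy; `group` itself is never an input.
score = postcode_risk + 0.1 * rng.standard_normal(n)

# Yet the score still separates the two groups it was never shown.
print(f"mean score, group 0: {score[group == 0].mean():.2f}")
print(f"mean score, group 1: {score[group == 1].mean():.2f}")
```

Dropping the protected attribute from the inputs does nothing here, because the proxy carries the same information in by the back door.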
The technology is reported to make racially biased evaluations of the risk of re-offending. Analysis of risk assessments made in 2013 and 2014 in Florida showed that the algorithm’s false positives – those it flagged as likely re-offenders but who did not re-offend – were disproportionately African American. On the other hand, its false negatives – those it flagged as unlikely re-offenders but who did re-offend – were disproportionately Caucasian.
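To make the two error rates concrete, here is a small worked example in Python with invented confusion-matrix counts – these are not the Florida figures, only numbers chosen so the disparity runs in the direction described above:

```python
def rates(tp, fp, tn, fn):
    """False positive and false negative rates from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # flagged high-risk among those who did not re-offend
    fnr = fn / (fn + tp)  # flagged low-risk among those who did re-offend
    return fpr, fnr

# Invented counts: tp = correctly flagged, fp = wrongly flagged,
# tn = correctly cleared, fn = wrongly cleared.
groups = {"group A": (500, 450, 550, 150),
          "group B": (300, 230, 770, 350)}

for name, counts in groups.items():
    fpr, fnr = rates(*counts)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

In this toy example, group A is wrongly flagged at roughly twice the rate of group B, while group B is wrongly cleared far more often – the same pattern of disparity the Florida analysis reported.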
The COMPAS case is just one very salient example of AI-assisted decision-making. AI also plays a role in automated vehicles, resume selection, and surveillance and policing.
A recent discussion paper on an ethical framework for AI provides a catalogue of decision tasks where AI is already being used in Australia and abroad. The paper, prepared by CSIRO’s Data61 for the Australian Government Department of Industry, Innovation and Science, also identifies domains in which AI will be increasingly deployed in the future.
But the aim of this piece is not to pass judgment on AI-assisted decision-making. I only want to use this topic to show that scientific progress does not put philosophy out of a job.
On the contrary, it creates fertile new ground for philosophical enquiry and judgment. More importantly, it shows how science and philosophy need not – indeed, should not – compete, but can – and should – collaborate.
The Humanising Machine Intelligence (HMI) Grand Challenges Project, launched publicly on 9 August, brings together an interdisciplinary team headed by Associate Professor Seth Lazar from the School of Philosophy at the Australian National University.
The HMI project aims to work out how to build moral machines: where and why AI succeeds or fails to promote social justice, how to represent moral considerations to computers, and, ultimately, how to construct programs that can make good decisions in the face of risk and uncertainty.
Importantly, it presents a great model for collaboration between philosophy and science in the space of policy development. Along with the Australian Academy of Science, Lazar and Professor Bob Williamson from the Research School of Computer Science – also a member of the project – have already made a submission to the public consultation on AI, responding to the discussion paper mentioned above.
Science – and the technology it creates – has, in general, moral and ethical consequences for the public. Philosophy can play a key role in shaping those consequences through policy to ensure that they produce a collective benefit.
Nowhere is this more salient than in the domain of AI, where the technology not only has indirect ethical consequences but is also directly involved in making moral decisions.
Science will not kill philosophy. Rather, science inspires philosophers and creates practical problems that philosophers and scientists can solve together through collaboration on policy.