A lack of understanding is causing Australia’s aid to underperform in the Pacific, Terence Wood, Sabit Otor and Matthew Dornan write.
A recent report co-authored by one of us (Terence) highlighted Australian aid transparency issues. However, the Aid Program does deserve credit for placing some very useful information in the public domain. A good example can be seen in its Aid Program Performance Reports, which contain a suite of helpful information, including appraisals of the performance of individual aid projects.
Appraisals are only available for projects over a certain size and only for larger partner countries, but in making this type of data available, Australia joins a select group of donors that allow researchers the chance to study how projects are performing. Two of us (Sabit and Terence) have already conducted fruitful work with World Bank and ADB project data (we blogged about our findings here). So, when the opportunity to use Australian aid project data presented itself, we jumped at the chance to learn more.
We have just published our analysis of Australian aid project performance based on these data in a paper in Asia and the Pacific Policy Studies (the paper is open access). In the paper we use the data to analyse which types of aid projects are more likely to work, and where. We also compare Australia with other donors.
Project appraisal data aren’t perfect: they’re a product of staff assigning ratings of project performance. (Various aspects of performance are appraised; in our paper we focused solely on effectiveness.) Although there are clear criteria for ratings, and checks built into the system, there is an inevitable degree of subjectivity that goes into appraising aid projects.
Fortunately, if the subjectivity is effectively random – some projects are scored more generously than others but there’s no systematic bias – it isn’t a big issue for the type of large N analysis we conducted. And although it’s possible projects are appraised too generously across the board, this isn’t a problem for our work either. Our analytical leverage comes from comparing differences between appraisal scores. If all scores are inflated equally, we can still learn from differences between different types of project, or projects in different places.
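The logic here can be illustrated with a toy simulation (entirely hypothetical numbers, not the paper's data): if appraisal noise is unbiased and any generosity is applied uniformly, both group means are inflated, but the difference between them still recovers the true gap.

```python
import random

random.seed(0)

# Hypothetical illustration only: two groups of projects with assumed true
# mean effectiveness of 4.0 and 4.4 on a one-to-six scale. Each appraisal
# adds random, unbiased noise plus a uniform "generosity" inflation.
N = 100_000
INFLATION = 0.3  # assumed: every rater scores generously by the same amount

def appraise(true_score):
    noise = random.gauss(0, 0.5)  # effectively random subjectivity
    return true_score + noise + INFLATION

pacific = [appraise(4.0) for _ in range(N)]
elsewhere = [appraise(4.4) for _ in range(N)]

mean_p = sum(pacific) / N
mean_e = sum(elsewhere) / N

# Both means sit above their true values, yet their difference recovers
# the true 0.4 gap (up to sampling error).
print(round(mean_p, 2), round(mean_e, 2), round(mean_e - mean_p, 2))
```

The same reasoning is why across-the-board generosity in real appraisal data need not undermine comparisons between regions or project types.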
Analysing a dataset of project appraisals still comes with challenges, but it also brought a real strength: rather than focusing in depth on an individual aid project, or simply drawing on our own intuitions, we were able to zoom out and look for systematic differences in the performance of Australian aid projects.
When we did this, much of what we found was interesting simply because of what we didn’t find. We found no good evidence, for example, that Australia faces clearly different challenges from other donors. We found no good evidence that Australian aid is particularly effective in certain sectors (although humanitarian emergency projects appear more effective than long-term development projects).
However, one clear and important finding did emerge from our analysis. This was that Australian aid projects perform less well in the Pacific. You can see this in the chart below, which plots the average Australian project appraisal score both in the Pacific and elsewhere in the world.
The finding proved remarkably robust: projects in the Pacific continued to perform less well even when we controlled for project attributes (sector, project size, and so on).
It’s true that the magnitude of the difference is not massive: projects are nominally assessed on a one-to-six scale, and the difference in the chart is less than half an increment on that scale. However, Aid Program staff are clearly averse to giving projects very high or very low scores (almost all projects were scored four or five). Given the diversity of aid projects in the real world, this clustering is surely an artefact of risk aversion when appraisals are made. The Australian Aid Program isn’t unique in this: we found the same clustering in other donors’ data. But a likely consequence is that the difference in project performance between the Pacific and other countries is understated. The real difference is probably much greater.
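Why clustering would understate the gap can be sketched with another toy simulation (this is our assumption about rater behaviour, with made-up numbers, not the paper's model): if raters shrink true scores toward the middle of the scale before settling on a rating, the observed gap between regions comes out smaller than the true one.

```python
import random

random.seed(1)

# Hypothetical sketch: risk-averse raters pull extreme scores toward the
# middle of the one-to-six scale before rounding to a whole-number rating.
N = 50_000
CENTRE, SHRINK = 4.5, 0.4  # assumed rater behaviour: shrink toward 4.5

def rate(true_score):
    compressed = CENTRE + SHRINK * (true_score - CENTRE)
    return min(6, max(1, round(compressed)))  # clip to the 1-6 scale

# Assumed true effectiveness: a full one-point gap between regions.
true_pacific = [random.gauss(3.8, 0.8) for _ in range(N)]
true_elsewhere = [random.gauss(4.8, 0.8) for _ in range(N)]

true_gap = (sum(true_elsewhere) - sum(true_pacific)) / N
obs_gap = (sum(map(rate, true_elsewhere)) - sum(map(rate, true_pacific))) / N

# The observed gap in ratings is much smaller than the true gap,
# and almost all ratings land on four or five.
print(round(true_gap, 2), round(obs_gap, 2))
```

Under these assumed numbers the rated scores cluster on four and five, and the gap visible in the ratings is roughly half the true gap, which is the sense in which risk-averse scoring would compress real differences.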
Australia is not unique in suffering worse project performance in the Pacific. Other researchers have found it in ADB data. Two of us (Sabit and Terence) have shown the same gap exists with ADB and World Bank loans. The issue of the Pacific emerges in other analysis too. Under-performance can be seen (page 5) in the Australian Aid Program’s assessments of how well it meets country objectives in recipient countries. The practical experiences of some aid workers point to similar issues.
What does all this mean? We’re working on a new project using a large multi-donor dataset to gather insights on why projects are less effective in the Pacific. (Stay tuned for some answers in a blog post soon.)
As far as aid practice goes, lower project effectiveness in the Pacific shouldn’t mean less aid is given to the region. The need for aid is high, particularly in the smaller countries and in the poorest parts of Melanesia.
Rather, we think the obvious lesson is that all donors (not just Australia) need to sharpen their focus on giving aid well in the Pacific. More needs to be learnt about context, and more emphasis should be placed on gold-standard evaluations: despite lower aid effectiveness, there is a dearth of robust impact evaluations in the Pacific.
Effective aid in the Pacific requires more work. But if we truly want to be a good partner to the region, it’s the least we can do.
The authors gratefully acknowledge the Australian Aid Program’s willingness to make its data available, its advice on those data, and the interest it has taken in our research thus far.
This article was published in partnership with Devpolicy Blog and is based on the authors’ paper, ‘Australian aid projects: what works, where projects work and how Australia compares’, published in the Asia and the Pacific Policy Studies journal.