Simon Maxwell

Looking behind the headlines of DFID’s bilateral and multilateral aid reviews



Fewer countries to receive support. Aid to India frozen. No more funding for UNIDO or ILO. FAO in special measures. These are some of the headlines from DFID’s bilateral and multilateral aid reviews, newly published together under the banner of ‘Changing Lives, Delivering Results’. More headlines are in the pipeline: the humanitarian review led by Lord Ashdown is still to be published. I am involved in that (unpaid), and am sure it will be interesting.

In the meantime, what do we have so far, and do the conclusions make sense? These reviews are important, because they deliver on Andrew Mitchell’s commitment, not just to achieve better value for money and more concrete results, but also to build greater momentum on the MDGs, including issues like maternal mortality. They also provide a vehicle to deliver on the commitment to spend a greater proportion of aid in conflict and post-conflict situations. The reviews need to be read in conjunction with other recent policy statements, like the speech at Chatham House on emerging powers, which set the scene for decisions about the future of aid to countries like India and China. There are some big changes embedded in these various public announcements, which reflect an unusually careful and systematic review.

Inevitably, the devil is in the detail, some of which is published, but quite a bit of which is not.

To start with the bilateral review, the main thrust is to cut the number of countries receiving bilateral aid (to 27), and focus higher levels of assistance, in fewer countries, on tangible results in terms of DFID priorities.  These include delivery of the Millennium Development Goals, wealth creation, governance, and climate change. The number and choice of countries seems to have been largely political, meaning exogenous to the review, but within the group of 27, a methodical exercise was undertaken, in which DFID country offices were asked to make ‘results offers’, saying what they could achieve and what it would cost. These were sifted internally and by Ministers, who then ‘agreed to take up a set of costed offers for each regional and country programme’: these have become the country and regional programme budgets listed at the back of the document. Note that account was taken (in ways not yet explained) of the results which could be achieved via the multilateral system. Also, the results offers were aggregated thematically, to make sure that a balance was maintained across the portfolio.

There are questions to ask about this, and issues to debate, and no doubt more detail will emerge, or be asked for through parliamentary scrutiny.

First, it would have been interesting to learn more about country selection. Why 27, I wonder, and why these 27? Attention has focused on the termination of aid to China and Russia, the freezing of aid to India, and also the phasing out of aid to Burundi, Gambia and Lesotho, among others. The countries which remain represent quite a range, with several middle income countries, including South Africa, Kenya and Ghana, among them.

Second, the results expected have been published, but the ‘results offers’ have not, so far. It would be fascinating to know how much DFID thinks it will cost to put 2 million children into school in Ethiopia by 2015, or support 800,000 children in school in Nigeria. Does that include investment costs as well as recurrent costs? School buildings? Training of teachers? The Ministry of Education, to support and supervise?

Third, some of the results are much harder to cost than others. Schooling is actually at the easy end of the spectrum. DFID is promising to create 144,000 jobs in Ghana and raise the incomes of 600,000 poor people in Nigeria by 50%. It is also promising 45,000 jobs in Somalia and 200,000 in Afghanistan. We are told that strong ‘offers’ were made to support this kind of result, through spending on financial services, trade promotion and better performing agricultural markets. Presumably, DFID has a series of country-specific growth calculations in which specific amounts of money spent in these areas result in higher levels of economic activity and hence formal and informal sector employment. These should certainly be published, and their logic reviewed by the new Independent Commission for Aid Impact.

Fourth, there must be some interesting cross-country comparisons. What did DFID do if it was cheaper to deliver lower maternal mortality in country A as opposed to country B – perhaps because country B is landlocked and generally more expensive, or because country B is fragile, with weak state capacity, high levels of insecurity and poor infrastructure? Were funds allocated disproportionately to country A? Or not? Overall, no explanation is given of the country allocations listed in the review. Were these obtained by adding up the best offers available globally until the money had run out, or was there some adjustment to give country allocations which managers or their political leaders could accept as reasonable?

It is a bit of a surprise to see the aid programmes in Burma and Kenya growing quite fast, and to see such a large investment in the DRC. Nigeria is another case worth discussing: the aid programme there is set to double. In general, there are some risks implied by the level of funding to some countries. Pakistan and Ethiopia will each receive about 10% of the bilateral aid programme in 2014/15. It would have been interesting to have some analysis of projected aid dependency, and dependency on UK aid, in the 27 countries.

Fifth, by the same token, how were the themes managed? What adjustments were required, and on what basis, to assure thematic balance as between wealth creation, the different MDGs and the rest? Did it just happen that fragile and conflict-affected states emerged with 30% of spend by 2014/15, as promised in the Strategic Defence and Security Review?

Sixth, the multilateral comparison is not explained. In fact, the bilateral-multilateral balance is not at all discussed in the documents provided, with the two reviews apparently having been conducted independently.

Seventh, and this is not the first time I have made this point, aid does not always work in the way that a simple payment for results model would suggest. When the UK provides pounds sterling to Tanzania, that may or may not enable the Government of Tanzania to increase spending in local currency, depending on the fiscal space available to the Government after taking into account aid flows, the impact of the spending on demand for foreign exchange, and other factors. It will be interesting to have an IMF view on the analysis.

No doubt there are answers to all these questions, and they do not undermine the value of a systematic review of the kind undertaken. There is much to be positive about, and the main conclusions are not unreasonable:

  • A bilateral programme is definitely worth maintaining, alongside a multilateral programme. Personally, I would limit it to 25% or so of total aid spending, but that requires a political judgement, as well as a careful analysis of effectiveness and value for money across bilateral and multilateral aid.
  • It’s fine to cut the number of countries receiving bilateral aid, provided that the multilateral system is equipped to avoid the emergence of aid orphans. No tears should be shed for China today, nor for India in five years’ time – though in both cases, as Andrew Mitchell argued at Chatham House, funds should be made available for global partnerships and joint action on topics like climate change. It is even safe to leave Niger and Burundi, provided that the multilaterals step in. Whether 27 is the right number of bilateral recipients for a programme of this size, and whether these 27 are the right countries, are questions which have no automatic answers.
  • The list of thematic priorities is defensible, though the account of what needs to happen under each head could usefully be expanded. Infrastructure, for example, seems to be missing from ‘wealth creation’, though household energy is a theme of the South Africa country programme. Presumably, someone else will build the roads, power stations and ports needed to deliver growth: a model of division of labour and comparative advantage as between donors is implicit in the analysis and could usefully be made explicit.
  • Finally, a focus on results is of course desirable, as long as it does not become mechanistic.

How about the multilateral review? This was a different kind of exercise, in which 43 organisations were assessed on a range of criteria, summarised in terms of their contribution to UK development objectives and their organisational strength. There were 10 ‘components’ of the assessment and a total of 41 specific criteria, ranging from promotion of gender equality to the commitment to change of the Governing Body. All these were investigated, scored and then aggregated into a single ‘value for money’ chart, which classifies agencies into four categories: very good, good, adequate, and poor. This is a comprehensive and systematic exercise and adds to the work of other reviews, like the multi-donor MOPAN network.

Nine of the 43 organisations qualify as ‘very good’. These are the Asian Development Bank, ECHO, the European Development Fund, GAVI, the Global Fund, the ICRC, the IDA, PIDG (the Private Infrastructure Development Group), and UNICEF. Funding increases are promised to most of these (though not at this time, interestingly, the organs of the EU).

Nine organisations qualify as ‘poor’: the Commonwealth Secretariat, FAO, Habitat, ILO, IOM, ISDR, UNESCO, UNIDO, and UNIFEM. Some of these are to be placed in ‘special measures’ (including FAO and UNESCO). Some are to have funding withdrawn completely: Habitat, ILO, UNIDO and ISDR (the Secretariat for disaster reduction).

The methodology used for this exercise will no doubt spawn a small industry of commentary, tackling issues like the choice of criteria, the weighting, the aggregation procedures, and the quality of the evidence base. It is not easy to score, on a single scale, agencies which have very different constitutions, functions and levels of resources. The IDA, for example, which exists to channel funding to poor countries, is a very different kind of organisation to FAO, which has important normative and technical supervisory functions, and only provides a small amount of aid, mostly in the form of technical assistance. No doubt, the exercise will be refined and extended in future years.

Methodology apart, the review throws up some practical issues.

First, it leaves open the question of whether the current multilateral share (29% if only core funding is counted, according to the MAR; closer to 50%, according to the DAC, if trust funds and other multi-bi operations are included) is about right, or not. The previous Government had pledged (in the 2009 White Paper) to increase the multilateral share year on year. This policy appears to have lapsed. It is a pity the bilateral and multilateral reviews were not integrated, so that the question could have been explored.

Second, it is notable that the highest-scoring agencies or organisations tend to be quite specialised, in several cases being vertical funds, like GAVI or the Global Fund, in other cases being specialist humanitarian organisations, like ECHO or ICRC. I do have a suspicion that vertical funds, in particular, are more popular with donors than recipients, partly because they give donors greater control.

Third, there is little discussion of why some agencies score poorly, and whether donor behaviour might not be to blame. The One-UN Panel, for example, emphasised the damage that is done to multilateral organisations, especially the UN, by over-reliance on trust funds and special purpose vehicles, at the expense of predictable core funding. The UK is certainly guilty in this area, though not alone.

Fourth, in general, the UN does not come out of this exercise as well as the multilateral development banks. It is interesting to ask how the results would have differed if universality and accountability had been more highly weighted.

Fifth, the low scores for some organisations present a real difficulty, because they are key to the international architecture and/or represent international initiatives on issues of real importance. There are few alternatives in some cases, and in other cases the alternatives present problems of universality and accountability. Both ‘ending core support’ and ‘special measures’ are unpalatable options, and have a punitive tone, the first of course more than the second. Thus:

  • FAO scores as ‘poor’, which may or may not be justified, but the world is facing a food crisis in 2011 which absolutely needs the kind of coherent management which FAO was set up to provide, and which only FAO can deliver, for example through the Committee on Food Security. This is presumably why the organisation was placed in ‘special measures’, rather than having its funding cut. But rather than a focus on special measures, the alternative might have been to highlight the (alleged) problems, and then reinvigorate the commitment to FAO, which is in the process of electing a new Director General and which will need more, and more secure, funding. ‘Tough love’ needs to be the strategy, but a bit less ‘tough’ and a bit more ‘love’ might have been appropriate.
  • UNIDO also scores as poor, and is to have its core funding ended. This seems a real pity, given the importance of manufacturing to the growth prospects of the poorest countries, but also given the new impetus UNIDO has received since Kandeh Yumkella became Director General. He has also led for the whole of the UN on energy, a key ingredient of the poverty reduction package, and a major item on the international agenda, with 2012 having been designated as the International Year of Sustainable Energy for All.
  • ISDR falls into the same category, with the review apparently concluding that the same impact can be achieved by channelling resources to a World Bank Trust Fund on disaster risk reduction. This moves UK funding, and, if followed, the global centre of gravity, away from the UN and into the World Bank – a strange move in the year of the Global Platform for Disaster Risk Reduction, marking five years since the adoption of the Hyogo Framework.

Overall, there is a risk with exercises like the multilateral review that the technical wizardry of the scoring might displace common sense or obscure the politics of decision-making. Andrew Mitchell will, of course, need to use the results of the analysis carefully, and make sure the messages are delivered in the right tone. He has been more supportive of FAO than the review might have suggested, for example.

Are we better off as a result of these reviews? More focused, certainly, both geographically and in terms of results. Better informed, certainly, both as development specialists and as UK taxpayers. Inquisitive also, with many questions still to be answered. That is one benefit of the exercise. The reviews have helped inform policy. They also contribute to ongoing debate about how best the UK can help to reduce poverty in the world.
