Reflections on ICAI’s first year
The Independent Commission on Aid Impact was announced by Andrew Mitchell in a speech to Oxfam in June 2010, and became operational in May 2011, with a mission ‘to provide greater independent scrutiny of UK aid spending, thereby maximising its value for money and impact.’ Andrew Mitchell framed this in the wider context that ‘transparency, accountability, responsibility, fairness and empowerment will be our watchwords’. The announcement was linked to a commitment to the UK Aid Transparency Guarantee. With specific reference to independent evaluation, Andrew Mitchell said:
‘Independent evaluation of British aid is absolutely crucial. There is something a bit too cosy and self-serving about internal evaluation. Reviews that focus on process and procedure miss the real issue: what did the money achieve? What change resulted from it? How were lives made better? We need a fundamental change of direction – we need to focus on outputs and outcomes, not just inputs. Sweden has been using independent evaluation for years and others, including the MIT Poverty Lab, have shown that we can be much more scientific about measuring what works. Aid spending decisions should be made on the basis of evidence, not guesswork. . . We will never maintain support for our growing aid budget unless we can offer to the British public independently verified evidence that it is being well spent.’
Two years on from the speech and one year on from the formal launch of ICAI, it is possible to see whether progress is being made. ICAI has published eleven reports, on topics ranging from programme management in Afghanistan to election support through UNDP, to education programmes in East Africa (see Box 1). It is shortly to publish its first Annual report, and to be cross-examined on its work so far by the International Development Select Committee of the House of Commons in the UK Parliament. A declaration of interest: I am a paid specialist adviser to the Committee.
A selective approach by ICAI
The first thing to say is that ICAI does not pretend to offer a comprehensive analysis of DFID’s performance, from either a results or operational perspective. It does not answer directly a question I posed back in January 2011, ‘what kind of shape is DFID in?’. Instead, ICAI is selective in its approach.
This is in contrast, for example, to the work of the Independent Evaluation Group of the World Bank, which carries out many specialist studies, like ICAI - but which also attempts a comprehensive and systematic overview. For example, its most recent annual report, for 2011, reviews all country programmes and projects from the perspective of relevance, efficacy and efficiency. A typical comment, in this case on expanded economic opportunities for the poor, reads as follows:
‘In FY08–10, 85 percent of all WBG operations aimed to help expand economic opportunities. Among 64 country programs reviewed in FY08–11, objectives relating to expanding economic opportunities were substantially achieved in 69 percent. Eighty percent of Bank-supported projects that aimed at expanding economic opportunities completed in FY08–10 had satisfactory project outcomes.’
ICAI will surely say that this kind of synthetic overview lies outside its current mandate, but DFID might reflect on the fact that its own published data do not include systematic project scoring of this kind. The formal DFID Annual Report does report on results at country level and globally (with some brave assumptions about attribution, as I have argued previously, in connection with the Bilateral Aid Review), but not on project performance. I am reluctant to suggest loading more work on to DFID officials, but ICAI itself has produced a useful report on value-for-money, which DFID could apply to its own work.
Despite the partial nature of its coverage, ICAI has provided a range of snapshots over the past year: bilateral and multilateral; governance and economic; substantive and management-focused; Africa and Asia. Its reports are short, and it is not always clear quite how much work has gone into them; but they appear well-informed and analytical. They are certainly readable. For my taste, there are too many focused on early stage interventions, not yet ready for evaluation; and there is too much on management and fiduciary controls, not enough on impact. There is also very little on policy work: the World Bank report, for example, focuses on the spending work of the Bank, rather than its advisory role. These biases can be corrected in later years.
The scope and cost of ICAI reports
On the question of the reports being ‘short’, this is mainly about presentation (and short is good). However, it is worth making the point that these are ‘evaluation’ reports, not ‘research’ reports. In other words, there is no primary research involved, and no measurement, of the kind carried out by the MIT Poverty Lab and praised by Andrew Mitchell (for my views on that approach, propagated by Banerjee and Duflo in their recent book, Poor Economics, see here).
There was a discussion about this when the Chief Commissioner of ICAI, Graham Ward, and the Permanent Secretary of DFID, Mark Lowcock, were interviewed by the International Development Select Committee in December 2011. The Chair of the Committee, Malcolm Bruce, remarked that
‘For those of us who have to process the reports on top of everything else, they have the virtue of being short and snappy reports, if they do the job. But on the other hand, people can say they are, in the quote I have here, "quick and dirty"; in other words they are too short, too concise. It does not tell you how much went into the report. You have a policy of having a short report, but you do not know how many person days were involved.’
Graham Ward replied as follows:
‘I can certainly tell you how many person days were involved in terms of the different reports. I hope that they are not dirty; we were certainly not quick in putting them together. ICAI’s Approach to Effectiveness and Value for Money was 99.25 days; DFID’s Approach to Anti-Corruption was 287.25 days; DFID’s Climate Change Programme in Bangladesh was 144 days; and DFID’s Support to the Health Sector in Zimbabwe was 161.5 days. Those are the numbers of days that were taken by the contractor to do the fieldwork. There was then, of course, a considerable amount of input that came from ICAI’s own secretariat and from the commissioners personally.’
The Permanent Secretary, Mark Lowcock, commented later in the session that
‘On the evaluation department, that used to be the bit of DFID from which we ran our programme of independent evaluation studies. At the time ICAI was established, we closed that business down. We do not ourselves, from the central department, produce those internal independent evaluations any more. We have recycled the money from that operation into other things. We still have a small team at the centre that deals with evaluation, but the main thing they do is provide advisory services to the several dozen evaluation specialists who are dotted around the wider Department, who commission, for example, randomised control trials and a lot of the longer running research and evaluation programmes of the sort that Mr Ward explained are not really within the resource environment or the mandate of ICAI. We have closed down the bit of the organisation that used to do what Mr Ward’s team now does, but we are still inside DFID financing more, especially longer-term evidence generation and evaluation material, than we have ever done in the past.’
These exchanges confirm that ICAI is not expecting to generate primary evidence on impact or value-for-money. From evidence given in December, it appears that its reports may be costing up to £200k each, perhaps less as the number carried out per year rises and the fixed costs of ICAI are spread more thinly. Mark Lowcock commented that
‘the cost of evaluations of the sort that are maybe broadly comparable with ICAI’s reports varies between something like £100,000 and £150,000 if we do them inside the Department. That is not way out of line with the ICAI numbers. If we are doing much more complex evaluations, for example of the sort involving randomised control trials . . . the cost can be significantly higher there. If you are running a randomised control trial over several years, affecting tens of thousands of people, that is obviously very expensive, but as a kind of core starting point for those complex evaluations, they might cost around £250,000.’
Personally, I would be impressed if a full-scale RCT could be conducted for £250k, at least by UK-based researchers. In any case, these exchanges establish ICAI as carrying out mid-range evaluations: more detailed than classic DFID output-purpose reviews, but considerably less detailed than research studies.
ICAI’s judgement on DFID’s track-record
Overall, DFID comes out reasonably from the first year set. Six reviews award an overall rating equivalent to 2 on a four-point scale. Four award a rating equivalent to 3. One (on DFID’s approach to effectiveness and value-for-money) had no score. There are no scores of 1 (green) and none of 4 (red): this may reflect real performance, but may equally be the result of the evaluators’ unwillingness to be really outspoken. If this were a university examination process, the external examiners would encourage internals to make more use of the tails.
There is plenty of interest in each of the reports: how to work around the Government in Zimbabwe, for example; or how to partner with a commercially-sponsored private foundation like the Nike Foundation. More interesting is to read the reports as a set, and identify cross-cutting issues. Leaving aside the fiduciary preoccupations which seem to loom preternaturally large in ICAI’s world view, there are five of these which caught my attention.
What is ‘impact’?
First, the reports contain interesting insights into the much-debated question of what should be considered ‘impact’ in aid evaluation: should evaluation only be concerned with final outcomes, like ‘educational accomplishments’, or should it be concerned with intermediate outputs, like ‘numbers of children at school’, or with more indirect outcomes, like the strength of the Ministry of Education and other education institutions? The right answer is ‘all three’, but sometimes the emphasis on governance and institutions is submerged in what Andrew Mitchell has called ‘bean-counting’. In support of the wider view, I have drawn a distinction between Fordist and post-Fordist approaches to results, or Results 1.0 and Results 2.0.
ICAI is sensitive to post-Fordist approaches, as exemplified by its approach paper on effectiveness and value for money. In some of its country work, it emphasises the need to examine educational outcomes as well as numbers in school, for example in its review of education programmes in Tanzania, Ethiopia and Rwanda. In its review of health and education in India, it goes further. In Bihar, the primary contribution that DFID makes is not, according to ICAI, the financing of services, but rather support to the political process of reform, the design of new policy, and the strengthening of institutions. Technical assistance, founded in DFID’s expertise on the ground, turns out to be more useful than money. ICAI concludes that
‘DFID’s particular contributions to improving development in India are its knowledge, skills, networks and its critical yet supportive approach. DFID’s partners in India consistently pointed out that the UK’s support was valued for more than its technical capacities . . . We are not convinced . . . that DFID can only have influence if it is seen to provide large sums of finance at the same time. We believe that DFID should consider spending a greater proportion of its finance to India on technical assistance.’
Similar analysis underpins the evaluation of budget support. ICAI is sensitive to the opportunities for influence that come with budget support. It concludes that
‘While it is legitimate to report on the crude financing effect of budget support, the main reporting on results should focus on transformational effects (the changes brought about by UK budget support) and should capture changes in the quality of services provided (real impact on citizens).’ (para 2.76)
It will be important to pursue this nuanced line of thinking in future reports. At the ODI/IDS workshop on results which led me to reflect on post-Fordist approaches, I observed that ‘there was enthusiastic engagement with the idea that better information was needed on results – and also lots of talk of social process, beneficiary perception, learning-by-doing, unexpected consequences, and what was described as the ‘excess certitude’ associated with technocratic approaches to results’. There’s a challenge to ICAI!
Buying a seat at the table
If influence matters as much as or even more than money, a follow-on question is whether money is needed at all. This is a question which preoccupies ICAI in a number of its reports, and which is particularly relevant in countries which could in principle mobilise their own resources. India is a case in point. ICAI reports that ‘DFID staff often argue that, if the UK wishes to influence change, it needs to provide money to ‘get a seat at the table’ with government and partners that enables DFID to influence policies, practice and standards of financial management on a large scale’. However, ICAI concludes that ‘we are not convinced . . . that DFID can only have influence if it is seen to provide large sums of finance at the same time’. A similar point is made in the report on budget support.
This is a question that ICAI will need to return to, and that perhaps DFID itself will need to examine. The question could also be on the agenda of the follow-up report currently being carried out on DFID’s Multilateral Aid Review by the National Audit Office. There is a substantial literature on the policy process and on donor-recipient relationships. A recent contribution is on Knowledge, Policy and Power in International Development, by a group of ODI authors.
The staffing and skills needed to deliver high quality programmes
A related issue is that if DFID is to engage in post-Fordist ways with institutions and political processes, then it needs the staff in place locally, backed up by professional cadres in London. ICAI makes this point strongly in the India report, for example, praising the quality and level of engagement of DFID staff, and drawing unfavourable comparisons with the establishment in East Africa. Members of the New Delhi-based DFID health team, many of whom were locally contracted, and some of whom were on secondment from the Government, made as many as 72 visits to Bihar during 2011!
This is another topic that has long been on the agenda, and not just for DFID. David Booth is one who has long argued that donors need much greater and better trained representation on the ground if they are to engage seriously with political and institutional questions.
DFID staffing is a constant preoccupation of its friends. Numbers have fallen overall, but ingenious steps have been taken to protect ‘front-line services’, for example by reclassifying officials as programme staff rather than administrative staff. Thus, and as a result of fossicking about in DFID Departmental Reports, I was able to inform the International Development Select Committee that in 2010/11, 615 people were reclassified from admin to programme at a cost of £27 million. In the previous year, 703 people were reclassified at a cost of £32 million.
I wonder whether overall staffing and skill distribution is a question that ICAI could take up in its own right? Or perhaps it can become a running agenda item in every report, on which the ICAI Commissioners can comment in their annual overview.
Bilateral versus multilateral aid
Asking about DFID’s staffing raises the question of whether DFID should have its own technical capacity or rely on that in the multilateral agencies. Some – see the IDC report of April 2012 on the EU – have very little capacity; and if this were the only basis on which the allocation of aid between bilateral and multilateral channels was to be decided, the EU could expect a miserly settlement from DFID. Others, however, have more to offer. There is evidence of this in the ICAI reports: on electoral support, in UNDP; on working with girls, in the Nike Foundation; and perhaps, though this is not really discussed, in the World Bank.
There are other criteria in play, however. The Multilateral Aid Review, published in early 2011, identified ten separate criteria, ranging from strategic performance to focus on poor countries and likelihood of change. Other multilateral aid assessment frameworks, like the Multilateral Organisation Performance Assessment Network, MOPAN, use similar criteria.
ICAI does not systematically ask whether bilateral or multilateral channels would be best suited to achieve the range of objectives DFID has set in different countries. However, the choice of UNDP as a partner in the area of electoral support is seen as ‘credible and to an extent inevitable’. It would be valuable if allocation issues were explored more systematically in the future – for example in the Afghanistan report, in which the performance of different DFID partners is extensively discussed, but without recommendations as to the reallocation of funding.
DFID as a venture capitalist
Finally, and cutting across many of the topics already raised, is the idea of DFID as a risk-taker, a venture capitalist. There is praise in a number of the ICAI reports for DFID’s innovation and risk-taking. There is perhaps too little sympathy for failure. As Tim Harford argued in Adapt (and see my review here), development agencies need to foster a multiplicity of experiments, so that evolutionary pressure will identify long-term successes. Remember the Palchinsky Principles: ‘to try new things, in the expectation that some will fail; to make failure survivable, because it will be common; and to make sure you know when you have failed’. It would be interesting to ask ICAI to define some operational implications of this approach.
In conclusion, and to repeat, there is much of interest in the individual ICAI reports, but the real value-added is in the opportunity they offer to address cross-cutting issues. The Commissioners have an opportunity to take these up in their Annual Report, which will also give DFID ministers the opportunity to reply. It would be helpful if all parties, including the International Development Select Committee, could contribute to focusing the debate at this higher level of aggregation.