Throwing a rock at Evidence?

Originally posted at www.politicsandideas.org by Nilakshi De Silva on the challenges of data, evidence and development policy & practice:

I always feel a little uncomfortable when we refer to research findings as ‘evidence’. The term is very close to the idea of ‘proof’, and conjures up images of courtrooms and lawyers presenting Exhibit A with a lot of fanfare to prove the case beyond any shadow of a doubt. This image is a far cry from development research and how it actually happens. We work in the imperfect real world, which is changing even as we study it. Researchers are also hampered by funding and other constraints, and research designs are often simply the best we can manage under those limits. Yet even if we do manage the best possible design, how much of what we do is really, beyond any doubt, ‘the truth’ – indisputable and constant?

One of the criteria by which we judge the quality of research is generalisability, or the extent to which the findings may be applied to places and cases which have not been studied. For example, can we use the findings from an impact evaluation of an employment generation scheme in Colombo and apply them to other cities and towns in Sri Lanka? In theory, of course we can – especially if the study has provided for impact heterogeneity and can show effects on different subgroups and types. Yet there is a need to be cautious. The evaluation is limited to a specific time and place, and the world moves even as we speak; what worked in this place may not work even in this same place in a few years’ time, let alone in other, non-studied locations. In Sri Lanka, for example, there is no other place that is exactly like Colombo, though some of its characteristics can be seen elsewhere. The city is also undergoing palpable change, and many who return to it after a few years find it has transformed, sometimes beyond recognition. Would a programme that performed successfully here 10 years ago still work? It is hard to say, which is why a good evaluation needs to tell us not just what worked here, but under what circumstances it worked.

Evidence of what worked in a particular place and time doesn’t travel well for another reason. Pilot interventions on which research findings are based may be implemented in a particular way, one that is often not replicated (and sometimes not replicable) in a scaled-up version of the same intervention. Important checks and balances may be missing in the larger version, and the impact that was seen in the studied intervention may not appear under the scaled-up policy. This does not mean that the idea on which the intervention was based is a bad one, but too often this kind of outcome ends in the idea being thrown out as bad policy based on bad evidence.

In our conversations about evidence-based policymaking, we often ignore the contested concept of ‘knowledge’. I was at a discussion forum on post-structuralism recently where the question came up of whether there is anything that can be termed ‘objective’ knowledge. Derrida, for example, suggests that all knowledge is subjective, in that there is always someone who goes out, interacts with the objects of the study, gathers information and subjects it to interpretation. Knowledge is not, and he suggests it never can be, a direct account of the object studied; it is always shaped by the encounter between the object and the person who studies it, and by that person’s interpretation. Many of us who are long past our university days may no longer engage with the philosophy of science, but these ideas are critical to what we do every day. Knowledge is our business, but do we really understand enough about how knowledge (a.k.a. evidence) is produced? In other words, as development researchers, it is not enough that we are development specialists; we also need to understand what words like ‘knowledge’ and ‘evidence’ mean.

From this perspective, a healthy skepticism is a good way to approach evidence intended to inform policy and practice. Our knowledge is always time-bound and context-specific. Rather than jumping through increasingly complex hoops in an effort to make the evidence watertight, a more useful way forward may be to focus on how we recommend that this knowledge be used.

What does all this mean, if anything, for research influencing policy? I think it means that we need to be very careful about the policy recommendations we make and about how replication and scaling up are rolled out. Whatever the evidence on which a policy is based, its implementation still needs to be carefully crafted to monitor what is and is not working, and to adjust and adapt as we go. If we accept that there are limitations and uncertainties in the knowledge that research produces, this becomes even more important. There are no guarantees, and this needs to be borne in mind and incorporated into recommendations as well as scale-up plans. In the end, this may very well mean that the seed of the idea remains the same as that from which the evidence was drawn, but that the intervention or policy that is implemented is more grounded in the context to which it applies. A good example is conditional cash transfer programmes, which are implemented in many parts of the world but often with different designs that take local realities into account.

As development researchers, our work is driven by the implicit goal of changing the world for the better. The underlying premise is that we can ‘know’ what works and what doesn’t by carefully designing and carrying out research studies, which can then provide ‘evidence’ for better policies and practice. While this version of the world is naturally attractive and helps to inspire and validate our work, there is also a need to be realistic. We study a complex world, in which intricate relationships interact continuously in multiple ways. This understanding should not cripple us or, alternatively, push us towards oversimplifying the world in order to be able to study it. Rather, it should enrich our work and help development research to be a genuinely useful service.

Some thoughts on tackling corruption

I’ve been seeing a few things coming up on corruption lately, so I thought I would try to pull them together with some thoughts of my own.  A recent post on the World Bank’s “People, Spaces, Deliberation” blog discusses the state of global corruption.  The author notes that there are some indications that global corruption has increased over the past several years.  At the same time, however, there are a number of more promising trends in the struggle against corruption, including:

  • Moves by rich countries to target tax havens
  • A multitude of novel approaches to addressing corruption around the globe
  • Perhaps most importantly, a new generation of largely middle-class social movements rejecting the resigned acceptance of corruption as ‘business as usual’ and taking to the streets in India, Brazil, Turkey and elsewhere

The article concludes by noting that international organizations seeking to address corruption are more effective when they support these existing efforts than when they try to create such initiatives from nothing.

However, in order to address corruption, we must know where it exists and how it impacts citizens. Alex Cobham of the Center for Global Development argues that the most widely cited corruption indicator (which is also utilized by the World Bank blog’s author), Transparency International’s Corruption Perceptions Index (CPI), is deeply flawed. He observes that the CPI draws on too few ‘expert’ opinions and leaves too much room for bias, and instead calls for ‘barometers’ that draw on a diverse population and use a range of questions to reveal citizens’ experiences of corruption in each country. While citizens certainly experience demands for bribes and other direct expressions of corruption, they may not have a full understanding of the depth of corrupt practices that occur behind closed doors. Corruption is often a hidden practice, after all. Thus, a broad survey of citizen perceptions could be buttressed by a more robust expert analysis, perhaps along the lines of the Global Integrity Report, in order to guide policymakers towards the most prevalent, and pernicious, forms of corruption in each country.

Finally, once the dynamics and sources of corruption have been established, international organizations need effective tools to combat it. As mentioned, building on existing efforts and movements is key. Too often, international actors have focused exclusively on helping countries strengthen formal laws and institutions to address corruption. Yet corruption persists. According to the Global Integrity Report, Guatemala’s legal framework for combating corruption scores 90 out of 100, while the implementation of those laws rates only 43 – a massive gap between form and function.

Over the past several years, however, research and analysis have revealed improved principles for catalyzing meaningful institutional change through foreign aid. The Fragile States Research Center highlights several recent books, articles and reports that argue for new thinking about how to support institutional reform. The authors of these pieces agree that promoting real institutional change involves focusing on the informal aspects of institutional functioning, such as organizational culture, power relationships and incentive structures. They also note that international aid agencies have struggled to put these new insights into practice, despite new analytical tools for examining political economy factors. Thus, the challenge remains for development decision-makers and practitioners to incorporate the latest learning in order to help support what may be a rising tide against global corruption.