Throwing a Rock at Evidence?

Originally posted at www.politicsandideas.org by Nilakshi De Silva on the challenges of data, evidence and development policy & practice:

I always feel a little uncomfortable when we refer to research findings as ‘evidence’. The term is very close to the idea of ‘proof’, and conjures up images of courtrooms and lawyers presenting Exhibit A with great fanfare to prove the case beyond any shadow of a doubt. This image is a far cry from how development research actually happens. We work in an imperfect real world, which is changing even as we study it. Researchers are also hampered by funding and other constraints, and research designs are often the best we can manage within them. Yet even if we do produce the best possible design, how much of what we find is really, beyond any doubt, ‘the truth’ – indisputable and constant?

One of the criteria by which we judge the quality of research is generalisability: the extent to which the findings can be applied to places and cases that have not been studied. For example, can we take the findings from an impact evaluation of an employment generation scheme in Colombo and apply them to other cities and towns in Sri Lanka? In theory, of course we can – especially if the study has allowed for impact heterogeneity and can show effects on different subgroups and types. Yet there is a need for caution. The evaluation is limited to a specific time and place, and the world moves even as we speak; what worked in this place may not work even here in a few years’ time, let alone in other, unstudied locations. In Sri Lanka, for example, there is no other place exactly like Colombo, though some of its characteristics can be seen elsewhere. The city is also undergoing palpable change, and many who return to it after a few years find it transformed, sometimes beyond recognition. Would a programme that performed successfully here 10 years ago still work? It is hard to say, which is why a good evaluation needs to tell us not just what worked here, but also under what circumstances it worked.

Evidence of what worked in a particular place and time doesn’t travel well for another reason. The pilot interventions on which research findings are based may be implemented in a certain way that is often not replicated (and sometimes not replicable) when the same intervention is scaled up. Important checks and balances may be missing in the larger version, and the impact seen in the studied intervention may not appear under the scaled-up policy. This does not mean that the idea behind the intervention was a bad one, but too often this kind of outcome ends with the idea being thrown out as bad policy based on bad evidence.

In our conversations about evidence-based policymaking, we often ignore the contested concept of ‘knowledge’. I was at a discussion forum on post-structuralism recently where the question came up of whether there is anything that can be termed ‘objective’ knowledge. Derrida, for example, suggests that all knowledge is subjective, in that there is always someone who goes out, interacts with the objects of study, gathers information and subjects it to interpretation. Knowledge is not, and he suggests never can be, directly that of the object studied; it is always shaped by the object’s contact with the person who studies it, and by that researcher’s interpretation. Many of us who are long past our university days may no longer engage with the philosophy of science, but these ideas are critical to what we do every day. Knowledge is our business, but do we really understand enough about how knowledge (a.k.a. evidence) is produced? In other words, as development researchers it is not enough that we are development specialists; we also need to understand what words like ‘knowledge’ and ‘evidence’ mean.

From this perspective, a healthy scepticism is a good way to approach evidence that informs policy and practice. Our knowledge is always time-bound and context-specific. Rather than jumping through increasingly complex hoops in an effort to make the evidence watertight, a more useful way forward may be to focus on how we recommend that this knowledge be used.

What does all this mean, if anything, for research influencing policy? I think it means that we need to be very careful about the policy recommendations we make and about how replication and scaling up are rolled out. Whatever the evidence on which a policy is based, its implementation still needs to be carefully crafted to monitor what is working and what is not, and to adjust and adapt as it goes. If we accept that there are limitations and uncertainties in the knowledge that research produces, this becomes even more important. There are no guarantees, and this needs to be borne in mind and incorporated into recommendations as well as scale-up plans. In the end, this may well mean that the seed of the idea stays the same as the one from which the evidence was drawn, while the intervention or policy that is implemented is more firmly grounded in the context to which it applies. Conditional cash transfer programmes are a good example: they are implemented in many parts of the world, but often with different designs that take local realities into account.

As development researchers, our work is driven by the implicit goal of changing the world for the better. The underlying premise is that we can ‘know’ what works and what doesn’t by carefully designing and carrying out research studies, which can then provide ‘evidence’ for better policies and practice. While this version of the world is naturally attractive and helps to inspire and validate our work, there is also a need to be realistic. We study a complex world in which intricate relationships interact continuously in multiple ways. This understanding should not paralyse us or, alternatively, push us towards oversimplifying the world in order to study it. Rather, it should enrich our work and help development research become a truly useful service.