News from the initiative

Moving beyond evaluation to learning and doing

Tagged in evaluation, evidence, impact


Can we develop a tool-box of policies?

“Policymakers act when they see an opportunity to tell a good story … They also act when they see an opportunity to avoid criticism. The good news is that the world is changing. Right now we have a growing movement for accountability, with social media playing a lead role,” commented Felipe Kast, Chile’s Minister of Planning, at the 3ie conference “Mind the Gap: From Evidence to Policy Impact”.

Policy is influenced by good evidence and bad evidence alike. To ensure that good evidence influences policy, researchers need to ‘get away from their comfort zone’ and actively engage with policymakers to help them answer the ‘big questions’. Doing so means addressing the “tension between learning and doing” mentioned by Ruth Levine (Hewlett Foundation) during the conference opening. Researchers face pressure to deliver results too quickly, while for policymakers “there is still a pronounced hunger for success stories but a tendency to choke on failure”.

However, participants in Cuernavaca did speak of a real shift in the political discourse and demand for evidence, with a growing “impact evaluation envy”, according to Esther Duflo (J-PAL). In particular, countries in Latin America have been championing the use of evidence to improve the effectiveness of their social policies. Impact evaluation has become part of the democratic dialogue. Gonzalo Hernandez-Licona (CONEVAL) talked about a real change in agencies’ behaviour and noted that “citizens are demanding evidence” (For more, view video). Koldo Echebarria (IDB) mentioned the growing institutionalisation of impact evaluation in the region: “This is part of a movement towards accountability which comes with more quality democracy and evaluation is playing a bigger role in relation to that” (For more, view video). For Kast, this movement in Latin America stems partly from the fact that “people don’t believe in politicians anymore. Since the credibility is so low, politicians must use good evidence to convince citizens that programmes are working” (For more, view video).

This political reality ties into the issue of incentives and the challenge of getting buy-in from decision makers. How can policymakers take credit for demanding the evaluation of their programmes? How should researchers communicate with policymakers? One problem raised during the discussions was overcoming a common perception among decision makers that evaluations are a threat. It is important to separate the issue of evaluating the policymakers’ performance from the actual evaluation of the intervention design.

Many speakers stressed the importance of finding the right incentives. The creation of institutions like 3ie and CONEVAL in Mexico contributes to advancing the conduct and use of impact evaluations. The issue of incentives is not limited to the ‘demand side’ of policymakers; it also concerns the ‘suppliers’ of evidence, in other words researchers, who need the right tools to use the findings and translate them into policy action. More importantly, Miguel Szekely Pardo from the Monterrey Technological Institute noted that “both incentives – those of users and producers – need to be aligned” (For more, view video).

A final key challenge is knowing how to communicate when an evaluation shows that a programme has “no effect”. This issue was addressed by Paul Gertler (University of California, Berkeley, and 3ie Chairman), who mentioned five important elements that can help researchers engage policymakers around “no effect” findings:

1. Using rigorous methods is essential for the robustness and credibility of the findings.

2. Involving policymakers from the outset. “Policymakers need to be at the table when researchers start designing the evaluation” said Gertler.

3. Finding a positive action that policymakers can apply. Rather than simply establishing whether a programme works or not, it is more useful to compare various versions of the programme to understand which strategy works better.

4. Conducting more multi-site evaluations. These not only increase the validity of the findings, but also allow researchers to shift their message from the failed performance in a particular location to the fact that a particular programme design does not work anywhere (For more, view video).

5. Telling policymakers that they can and should take credit for initiating a rigorous study that exposes failure they can act upon.

We invite you to join this debate online. Tell us about your experiences and views as policymakers, programme managers or researchers.

 

Author: Christelle Chapoy

Source: Mind The Gap Forum 2011

Date: 1 July 2011

 
