Meet the brain behind Co-Impact’s Monitoring, Evaluation, and Learning (MEL) practice, Varja Lipovsek. As the Director of Learning, Measurement, and Evaluation at Co-Impact, Varja brings more than two decades of experience in organizational learning with civil society and public sector initiatives. Her previous stints include leading Twaweza East Africa’s MEL portfolio and working as a Research Scientist at MIT GOV/LAB, where she helped to bridge the academic and practitioner worlds working on Transparency, Participation, and Accountability (TPA). Interestingly, she has also authored some of TAI’s learning pieces! In this Full Disclosure Series, we get to understand Varja’s drive for TPA as she talks about why trust is important to bringing research and evidence to practice.
Tell us about yourself, your interests, and why you work in the TPA field? Was your foray into the field accidental or a deliberate career path?
Does anyone really follow a deliberate career path? Mine meandered along. When I was ten years old I wanted to be a ballerina; at twenty I became a bus driver; at thirty I got a Ph.D. in public health. Currently, I am the Director for Learning, Measurement, and Evaluation for Co-Impact, a funder collaborative that supports systems change, gender-equitable outcomes, and women’s leadership in the Global South. I now live in the Netherlands but have lived in Mexico, Guatemala, the United States, Brazil, Russia, Tanzania, and more. I think of myself as working broadly in the field of social justice and human rights. Linking this with TPA – whatever the sector, I think the ability of citizens and civil servants to actively scrutinize power (that is, demand transparency) and actively participate in decision-making are key to achieving some measure of accountability. Accountable systems are inherently more just, more equitable.
Prior to joining Co-Impact, you worked with MIT GOV/LAB to bridge the academic and practitioner worlds on TPA. Why is that important and what more can be done?
Throughout my career (well at least post the bus-driver phase), I have tried to bring research closer to practice and to get practice to apply some of the tools and approaches from research. The inquisitive, curious mindset and skills of research can be super useful to better understand real-world problems and generate insights and data that help practitioners who are working on those problems. They are also useful in questioning our own assumptions: I love how research thrives on hypotheses and open-ended questions, which it then seeks to answer through inquiry. Practitioners can get stuck on their convictions and strategies, and a healthy dose of curiosity is helpful to break out of a mold. Researchers on the other hand can get stuck in theorizing, going down rabbit holes of knowledge – so thinking about how knowledge is useful to practice helps to keep a perspective.
A lot can be done to strengthen research-practice linkages, but let me highlight three big factors: within research, applied work needs to be valued as much as theoretical or academic work. On the practitioner side, those who are driven to act and improve the world also need to be able to slow down, reflect, ask some tough questions, and think open-mindedly about what evidence, data, stories are telling us. And on the funder side – get out of the kitchen of implementers and researchers; support people and organizations who know what they are doing in their context; do not mandate measurement for the donor’s sake, but truly support learning for practitioners’ sake.
Critical to learning is putting evidence into action. What challenges exist in achieving this, and what are potential solutions to addressing this issue?
That’s the million-Shilling question. Everyone talks about “evidence-informed decision-making,” but we don’t seem to know quite how to support this. Riffing on a great speech given recently by Ruth Levine (from IDinsight), I’d say evidence is used when it is trusted; and evidence is trusted when it is co-created. In other words, those who have the mandate (and interest, and power) to make decisions need to be invested in the evidence from the start, from the design phase onward, not just receive the shiny report at the end. And those who actually generate the evidence, i.e., the “populations of interest” (and this is particularly important for people who are traditionally marginalized), should participate not only in creating the data bytes, but in setting the questions, interpreting the answers, and deciding how the information is used.
A lot of things have changed from last year because of the pandemic. How have these changes impacted MEL work, given that MEL people often depend on being “in the room” to support reflection and learning? What adaptations/reviews have you had to make?
In this pandemic, we’ve realized how much we miss being in a physical space together to do our work. On the other hand, going virtual has forced all of us, including those of us doing learning or measurement or evaluation, to really think about how we communicate and engage. It’s all about building trust in order to be in a learning mindset, and trust is hard to build virtually. Some things I’ve learned: less is definitely better – i.e., focus on 1-3 key points and let the rest go. Use visuals. Listen deeply and lead with inquiry. Assume good intent. Be comfortable with saying you don’t know. Don’t take yourself too seriously and laugh a lot. And be in it for the long ride – trust, openness to learn, all these take a long time to build in the best of times.
What key things have you learnt in two decades of learning with and for civil society and public sector initiatives? What do you wish you had known at the beginning of your career?
One of the big lessons I’ve learned is that measurement doesn’t equal learning. There are organizations (civil society, government, NGOs, others) with superb measurement teams and tools, but these teams don’t really connect with the rest of the organization; they are on the margins, often seen as an “auditor” or someone who is there to expose mistakes. When this happens, everyone loses: the measurement teams feel alienated and misunderstood, the rest of the organization doesn’t benefit from the insights and data they can generate, and as a result, the organization doesn’t really learn. So, to work on learning in an organization is really to work on dialogue, trust, relationships, motivations, and incentives, as much as it is on tools, data, and evidence. In other words, lead with purpose: why is it that the organization wants to learn? What will this enable it to do? How does it want to evolve? All else – mechanisms, methods, etc. – follows purpose.
Finally, if you could go back in time to change one thing, what would it be?
I am not sure there is one specific thing I would change, because all decisions are interconnected and they have led me here, which is not a bad place to be. But if there was one thing I could change, I would go back to first grade, when a mean teacher told me that I could not join the choir because I couldn’t carry a tune well enough. She was a bossy, cruel teacher who did not like children one bit. It devastated me at the time. I learned later on that anyone can sing, you just need to practice enough and need a supportive mentor. Who knows where that would have led me?… 🙂
Want to hear more from Varja? Connect with her on Twitter at @vlipovsek