What works, what doesn’t & what do we need more of?

By Rania Fazah

During one of my interviews (meetings) with a woman volunteer active in deradicalization programs in Nigeria, I sat with a woman in her mid-40s wearing a traditional outfit, covered from head to toe in a colorful pattern. My assumption when I met her was that this was a happy woman! (In conflict contexts, don’t assume peace fighters are happy.) I asked her what her motivation was to join the program to deradicalize youth, and why she chose to get involved with the Boko Haram men. She looked me in the eye and said: “I did not choose them, they chose me! They took my baby and my man! They won the minds and hearts of both my kid and husband and recruited them… I lost two men to them, and I want them back. I am bedfellows with Boko Haram!” She continued: “I was nobody. I suffered stigma and exclusion for being the mother of a dead Boko Haram fighter and the widow of a Boko Haram fighter. No one would talk to me, no one would even sell me rice! But I know why my son went, and why my man went: they wanted to be seen! I want to be seen also… I want to be heard! This program did not only deradicalize 23 young men; it revived my soul and made me visible!”

She was one of the very first women I talked to while conducting an evaluation of a CVE (countering violent extremism) program in Nigeria. In this post, I share insights from my experience doing evaluations in fragile contexts, where evaluators face a myriad of complications, sensitivities, and challenges to our knowledge, practices, and assumptions.

  • The more fragile and volatile a situation is, the greater the need to understand the people, the context and dynamics on the ground, the power structures, and the blind spots, and to crack the code of how change happens in that specific context.
  • In fragile contexts, communities suffer severe political polarization and deep-rooted grievances (before, during, and after the conflict). Some grievances remain unaddressed and become masked as political dissent, and concepts become contradictory or contested (anarchist or freedom fighter, rebel or terrorist), which in turn constitutes the complexity within which the evaluator needs to maneuver definitions, concepts, theories of change, interconnections, linkages, and biases.
  • Fragility and volatility, when combined with violence and hostility (dictatorships, authoritarian repressive regimes, the rule of a brutal security apparatus, corrupt systems, deep-rooted discrimination against ethnicities, races, genders…), usually make the avenues for accessing people and information at the local level narrower, more ambiguous, and more dangerous for all stakeholders. They also push people underground, so they become harder to reach, more hostile themselves, and invisible.
  • In conflict and hostile environments, evaluators, researchers, and professionals in international development, peacebuilding, human rights, or economic development are perceived negatively as outsiders. The population, local governments, and local actors are rather suspicious and attribute such efforts to “the workings of aliens,” so the data shared with you is tinted with distrust and conspiracy beliefs.
  • Widespread fear and disinformation challenge the practice of evaluation in fragile and conflict contexts, tamper with the authenticity of the data collected, and undermine its credibility.

What Worked:

The picture is not grim; there were rays of hope. Elephas experimented with certain approaches that worked, and I would like to share them:
  • Onboarding local partners, stakeholders, and beneficiaries in the evaluation process by introducing evaluative thinking from the early phases of design and planning. Engaging local partners, actors, and beneficiaries in identifying what results should look like, and in highlighting nuances that are useful to the evaluation and not visible to other actors (national or international).
  • This requires building a valid conceptual framework reflecting the conditions that enhance or inhibit the realization of those identified results in that specific context, further understanding the system around the project or intervention, and tailoring existing constructs to the realities of people: what works well, for whom, in which context.
  • Avoiding blind spots through continuous assessment of stakeholders, relations, power structures, conflict dynamics, gender relations, and ethnic and religious makeup. This is essential to understand the changing conditions, dynamics, and mechanisms that affect the interventions, as experienced in Nineveh, Iraq, where it was impossible to evaluate the relevance and effectiveness of the interventions unless we developed an understanding of the principal sources of conflict at both the national and local levels. This effort involved community members, union representatives, former government officials, militia leaders, youth, CSO activists, women, journalists, entrepreneurs, etc. We continued monitoring trends in conflict through a rubric of measures.
  • In fragile contexts, and specifically when evaluating interventions during conflicts, it is important to avoid approaches that are one-off or ad hoc, looking at specific issues at a single, fixed point in the conflict. Instead, it is worth considering contribution analysis (examining the various factors and conditions that enhance or distress the overall situation), developmental evaluation approaches, participatory evaluation, and plausible triangulation of information from various sources.
  • Use of tailored mixed methods that take into consideration underlying fears, mistrust, and long-term repression, to assess the dynamics and mechanics of how interventions worked in a specific context with specific people to achieve results. We took measures to ensure privacy and confidentiality when accessing sensitive populations. We had a particularly interesting experience with a longitudinal methodology whereby a range of indicators was tracked over a long period, allowing patterns and trends to emerge and reveal themselves over time rather than capturing the spontaneous, random sentiment of a moment (a minimal sketch of this kind of tracking follows this list). It also allows measuring the impact of certain initiatives implemented by the government and/or donor community. This puts the qualitative and the quantitative in tandem: qualitative data is indispensable for unwrapping critical outlier information and nuances in responses, providing the insights needed into how things happened and how effects play out in reality.
  • Some data-gathering tools that can expose respondents or raise safety and confidentiality concerns can backfire, given the context of fear and mistrust in which beneficiaries and stakeholders live. Examples Elephas encountered are focus group discussions and phone interviews where people’s identities and contacts are revealed or kept in databases. Participants might be wary of expressing their genuine thoughts during a focus group, especially in repressive contexts, and would not raise matters that are of significance. The flexibility and adaptability of the data-gathering tools are indispensable when planning the evaluation.
  • We used data-gathering tools that are embedded in people’s experiences and provided safe space for individual and group reflection in both collaborative and private settings. We used semi-structured discussions, allowing participants to steer the conversation to reflect their biggest concerns and those of their respective communities; we avoided blind questionnaires; and we conducted frequent, periodic trips to the communities of implementation throughout the life of the project. We conducted case studies, longitudinal studies, and quasi-experiments that yielded statistics that also described people, trends, and situations.
  • Teaming up with local consultants and actors, not only to ensure cultural comprehension and sensitivity, but also to provide local consultants with a platform to evolve the practice in line with the realities people are living, and to embed an evaluation culture into local and national developmental practice.
  • Factoring in access to “hard to reach” populations and “unheard and unseen” groups is an approach we used and tested in Nigeria and Libya, where evaluators reached out to stigmatized women who had been abducted by Boko Haram, using the Success Case Method and Most Significant Change (MSC) to elaborate the workings of the intervention and the results achieved. This approach builds on participatory developmental evaluation, which ensured that women participated from the conception phase and were included in the intervention design.
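
The sketch below illustrates the longitudinal idea mentioned above: the same indicators are collected over repeated rounds, and trends are read from the whole series rather than from any single moment. This is a minimal, hypothetical example, not the actual tooling or data used by Elephas; the indicator names, the CSV layout, and the thresholds are assumptions made only for illustration.

```python
# Minimal sketch of longitudinal indicator tracking (illustrative only).
# Assumes a hypothetical CSV with columns: round, community, indicator, score
# where "score" is a 0-10 rubric rating collected in each survey round.
import csv
from collections import defaultdict
from statistics import mean

def load_rounds(path):
    """Group scores by indicator and survey round."""
    rounds = defaultdict(lambda: defaultdict(list))
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rounds[row["indicator"]][int(row["round"])].append(float(row["score"]))
    return rounds

def trend(series):
    """Least-squares slope of mean score across rounds:
    positive = improving over time, negative = deteriorating."""
    xs = sorted(series)
    if len(xs) < 2:
        return 0.0
    ys = [mean(series[x]) for x in xs]
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

if __name__ == "__main__":
    rounds = load_rounds("indicator_rounds.csv")  # hypothetical file name
    for indicator, series in rounds.items():
        slope = trend(series)
        direction = "improving" if slope > 0.1 else "deteriorating" if slope < -0.1 else "stable"
        print(f"{indicator}: slope per round = {slope:+.2f} ({direction})")
```

Reading a slope across many rounds, rather than a single round's average, is what allows the pattern to emerge over time instead of capturing the sentiment of one moment; the qualitative work then explains why a given indicator moved.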

 What we need more of:

  • Tailoring the existing theoretical constructs to the realities of people. The conceptual framework, or theory of change, should reflect the contextual definitions, perspectives, and perceptions of various stakeholders. Some theories and paradigms do not fully or accurately apply to certain contexts, and unless we try to understand how change happens in a specific context, it would be unfair to measure actions against a theory base alien to the ecosystem.
  • Continuous conflict analysis and attention to how the conflict is affecting the achievement, quality, or sustainability of results. Also looking at the risks and harms that the interventions pose to people, and acknowledging the failures, risks, and needed trade-offs.
  • Deploying better data collection methods to access tacit and not only explicit knowledge, where the less “privileged” are provided spaces to present alternative views to those of more privileged or powerful voices (Blackman & Sadler-Smith, 2009).
  • Good-quality, unbiased, reliable data and access to information at the local level, coupled with data collection methods tailored to local cultures, scales of values, ethnographic composition, and acceptable social communication models, while observing “do no harm” strategies. These also need to take into consideration gender, ethnic, and minority constituents.
  • Assessing the relevance and appropriateness of results against measures discussed and agreed upon with stakeholders and affected populations, not only with donors and program implementers. At present, a skewed power relation controls the evaluation of programs in conflict and fragile contexts.
  • More local partnerships and more in-house capacities within organizations and communities to evaluate in conflict and fragile contexts.
  • Fewer donor-restricted evaluations that identify questions, tools, and approaches in a transactional manner, and more learning and out-of-the-box, innovative approaches to understanding what works and how.

Subsequently

  • More budget to fund evaluation processes, and fewer one-off, ad hoc, or add-on evaluations that look at specific results at a single, fixed point in the conflict.

Conclusion

  • It is indeed a challenging and daunting undertaking to evaluate in fragile settings where tensions and sensitivities are intensified, skepticism is heightened, and trust is low.
  • Despite the challenges, there is a richness in engaging with the local level of society, where perceptions, attitudes, grievances, and optimism strongly indicate the attitudes and aspirations of the society.
  • It is important to find ethical and practical ways to approach the local levels of society in fragile contexts to gain a better understanding of issues and the trust of stakeholders and constituents. Ultimately, neglecting this level of society makes us miss a significant portion of the overall picture when it comes to fragility issues.
  • Gathering valuable information about the inner, most localized dynamics helps improve the situation of local populations, by raising awareness of their realities or by providing recommendations to governmental stakeholders on how to improve their lives.
