How Good Is Good Enough?
A Discussion of M&E Standards in Crisis and Conflict Zones
The USAID Education in Crisis and Conflict Network (ECCN) and its partners are actively working to identify specific strategies for optimizing research, evaluation, and data collection processes in diverse, conflict-affected contexts. ECCN member School-to-School International (STS) is working with ECCN and the larger development community to raise pertinent questions and spark the discussion on measurement standards in crisis and conflict (CC) zones, to be showcased in an interactive webcast in September.
In recent years, STS has conducted monitoring and evaluation (M&E) work in conflict and crisis zones in Afghanistan, Pakistan, Guinea, the Democratic Republic of the Congo (DRC), northern Nigeria, and southern Sudan (before independence). Our experience has raised a number of questions about M&E where the usual rules do not apply.
What are the usual rules? As with any project, we try to ensure a reasonable M&E design and check that data are collected and reported in a way that adequately represents the larger picture from which our data are drawn.
In this blog post, we discuss challenges we have faced when trying to follow these rules in CC zones while still reporting useful, generalizable results. Every project faces challenges, but in CC contexts they are harder to resolve because there are no clear rules about what is acceptable, which leads to the question: How good is good enough?
The Challenge of Reporting Generalizable Results in Crisis and Conflict Contexts, and Strategies for Addressing It
M&E plans use various strategies to ensure that results are representative of a larger population. Where possible, results should be generalizable, with an acceptable level of sampling error (usually around 5 percent for rigorous evaluations). In CC zones, this can be a high bar. Here are a few reasons why, and examples of how we have navigated these challenges:
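To make the "5 percent" bar concrete: for a simple random sample, the margin of error shrinks with sample size, and the standard formula implies roughly 385 completed observations for a 5 percent margin at 95 percent confidence. A rough sketch in Python (the function names are ours, and real evaluations must also adjust for clustering and nonresponse, which this ignores):

```python
import math

def sample_size_for_margin(margin=0.05, p=0.5, z=1.96):
    """Minimum simple-random-sample size for a given margin of error
    at 95% confidence, using the conservative assumption p = 0.5."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error achieved by a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 385 observations are needed for a 5% margin of error;
# a sample of 400 yields a margin just under 5%.
needed = sample_size_for_margin()        # 385
achieved = margin_of_error(400)          # about 0.049
```

In CC zones the practical problem is rarely the arithmetic; it is completing that many observations when schools drop out for security reasons.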
Issue: Obtaining usable data sets
In CC zones, Education Management Information System (EMIS) data can be incomplete, inaccurate, or nonexistent. In northern Nigeria, electronic datasets did not exist, so we asked local education officials to write down the names and locations of their schools, and the project entered them electronically for the first time. The most significant challenge in administering the Early Grade Reading Assessment (EGRA) in Afghanistan was the rate of discrepancies between official education statistics and the realities of schools on the ground. Discrepancies appeared in the types of schools (e.g., schools listed as boys' schools were in fact girls' schools), the size of schools (e.g., entire grade cycles missing), or, in some cases, whether a school existed at all.
Solution: Establish standards for data quality
To date, we have generalized results when the discrepancy rate between data and field realities is low (5 percent or less) but have opted not to generalize when it is unacceptably high (e.g., 30 percent or higher).
Issue: Selecting replacement schools
In CC zones, as elsewhere, researchers draw lists of sampled schools along with replacement schools and rules for selecting them in the event that sampled schools cannot participate. In CC zones, however, the rate of nonparticipating schools can rise because of security concerns. Under these pressures, data collection teams may make replacement decisions that fall outside the replacement rules (out of fear of entering dangerous zones) and purposively select schools in safer areas. Such behavior is understandable, but it changes the nature of the sample: results from purposively selected schools cannot be generalized.
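One way to keep replacement honest is to pre-draw an ordered replacement list at the same time as the sample, and consume it strictly in order when a school is inaccessible. A rough sketch in Python (the function names and seeded shuffle are our illustration, not a prescribed protocol, and even rule-based replacement only approximates the original design if insecurity correlates with outcomes):

```python
import random

def draw_sample_with_replacements(school_ids, n_sample, n_replacements, seed=42):
    """Randomly draw a sample plus an ordered replacement list.
    Replacements are taken from the front of the list, never hand-picked,
    so the sample stays approximately random under dropout."""
    rng = random.Random(seed)          # fixed seed makes the draw auditable
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    sample = shuffled[:n_sample]
    replacements = shuffled[n_sample:n_sample + n_replacements]
    return sample, replacements

def replace_school(sample, replacements, inaccessible_id):
    """Swap an inaccessible school for the next pre-listed replacement."""
    if not replacements:
        raise ValueError("replacement list exhausted")
    sample = [s for s in sample if s != inaccessible_id]
    sample.append(replacements.pop(0))
    return sample
```

The key design choice is that the field team never chooses the replacement; the list, drawn before anyone knows which schools will prove inaccessible, chooses for them.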
Solution: Conduct school verification checks
However adequate the EMIS dataset, we routinely conduct school checks by physically visiting sampled schools. School data include school name, location, and unique identification codes and, increasingly, GPS coordinates, photos, and time stamps when data are collected (which is useful for monitoring data collection teams). If time or financial constraints make a census of all schools impossible, we draw a sample of schools to verify–especially in the most contested zones–or review documentation of school verification exercises carried out by other partners.
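Where GPS coordinates exist in both the EMIS dataset and the verification visit records, part of this check can be automated by flagging schools whose visited location falls too far from the recorded one. A minimal sketch in Python using the haversine great-circle distance (the dictionary keys and the 1 km threshold are hypothetical choices for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def flag_location_discrepancies(schools, max_km=1.0):
    """Return IDs of schools whose visited GPS point lies farther than
    max_km from the coordinates recorded in the EMIS dataset.
    Each school is a dict with (hypothetical) keys:
    'id', 'emis_lat', 'emis_lon', 'visit_lat', 'visit_lon'."""
    return [s["id"] for s in schools
            if haversine_km(s["emis_lat"], s["emis_lon"],
                            s["visit_lat"], s["visit_lon"]) > max_km]
```

Flagged schools still need a human decision (a moved school is not the same as a phantom school), but the flag list tells the verification team where to look first.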
Issue: Reaching agreement on the rules of generalizability
In some instances, we have deemed datasets too inaccurate or incomplete to adequately reflect "the universe of schools" and have therefore opted not to generalize, which also means forgoing the weighting of results.
Solution: Follow highly structured procedures for monitoring
Because time in-country in CC zones can be limited, more of our work must be conducted at a distance. In these instances, we ask our in-country partners to provide detailed documentation of tool administration, sampling, and replacement school selection procedures so that we can review in absentia the extent to which administration procedures are followed consistently and sampling and replacement school procedures produce truly random samples. Cases that cannot demonstrate consistent administration and random sampling are excluded from the analysis or reported separately.
So, How Good Is Good Enough?
When conducting M&E in CC zones, we reference one main rule of thumb, "Do the best you can," and its corollary, "Use your best judgment." This is because clear rules for much of the work we do simply do not exist, or are still evolving. Here are three topics we will discuss with a panel of seasoned experts in an upcoming ECCN webcast:
- What are the major areas of concern when conducting M&E in a CC zone?
- What are the threats to conducting “business as usual” M&E in CC zones?
- What are the alternatives to “business as usual”?