Rethinking Evaluation: From Reporting to Learning

As social impact consultants, we believe the real purpose of evaluation is not just to report metrics, but to guide better decisions that create greater impact. Too often, evaluations focus on narrow metrics to demonstrate accountability but miss the bigger picture. Programs operate in complex, shifting contexts, and without understanding the deeper factors shaping outcomes, decision-makers may misjudge what works and why.

[Figure: two circles connected by arrows, labeled Strategy and Learning and Measurement, feeding into Increased Social Impact]

A new approach: Evaluations, whether formative (conducted early in or during implementation) or summative (conducted at or after completion), should be learning-oriented: helping funders and implementers adapt, avoid wasted effort, and strengthen real impact.

Why Learning-Oriented Evaluation Matters

Consider FSG’s PIPE program, which aimed to improve early years learning outcomes by replacing rote learning with activity-based learning (ABL) through interventions such as developing curricula and lesson plans, introducing learning aids, and training teachers.

What traditional metrics may have missed: Evaluation during implementation discovered that many teachers recognized ABL’s value, but hesitated to adopt it for fear of being reprimanded by school owners, who themselves catered to parents preferring rote learning.

Had this barrier gone unidentified, metrics alone could have painted an incomplete, or even misleading, picture of accountability and impact. Insights like this show why evaluation must go beyond counting activities and reach to uncover what truly drives or hinders change.

Learning-oriented evaluation applies to all types of programs, big or small, charitable or systemic. Barriers such as limited resources or an existing focus on inputs should not deter us, because at its heart evaluation is about asking the right questions, to the right people, in the right way. Methods can always be adapted in scope and scale.

Three Shifts for Better Evaluations

Ask the Right Questions

Unbounded learning can yield many insights, but their value is diminished if decision-makers remain unclear about which actions would improve strategy. Instead, scope your inquiry so it directly links to validating or iterating the program’s strategy or theory of change. Key questions might include:

  • Are we addressing the issues with plausible links to desired outcomes?
  • What interventions are creating impact and how? Are our partners well-chosen?
  • Are we effectively supporting our target stakeholders?
  • What are the unintended consequences, if any?

Match the questions to the program phase: The right questions are also a function of a program’s phase or evolution, which defines what ‘issues’ and ‘impact’ mean in the questions above. Since outcomes can take years to emerge, formative evaluation should identify and assess intermediate markers along the pathway to change.

Example: In the post-pilot phase of iDE Cambodia’s market-based sanitation program, active marketing and sales by small toilet businesses, along with their profitability, served as intermediate outcomes toward the end goal of increased toilet adoption and universal basic sanitation. Ongoing evaluation confirmed profits but flagged weak promotion efforts and identified why owners engaged less than expected, prompting a pivot to a managed salesforce model.

Make root cause identification an explicit part of every evaluation: Don’t stop at surface-level questions; dig deeper to understand why interventions are (or aren’t) working, and what makes a partner effective. While root cause analysis (delving into relationships, connections, and mental models) is often associated with systems change, it is a universally applicable principle: simply asking “why?” can help identify the critical issues and the actors whose involvement could yield outsized impact. Example: India’s well-known mid-day school meal scheme was developed after identifying food insecurity as a primary root cause of low enrolment and attendance. The intervention simultaneously addressed nutrition and unlocked the value of other educational investments (e.g., infrastructure, teacher training, curriculum development).

Focus evaluation on contribution, not attribution: While attribution (isolating a program’s unique impact) can be appealing, its conclusions are likely to be limited, because most programs operate in complex systems alongside other programs, actors, and contextual factors. For instance, to what extent are learning outcomes attributable to one program’s school infrastructure improvements versus another program’s curriculum and teacher training interventions? By contrast, contribution analysis helps illuminate your program’s unique value within the broader ecosystem, informing choices about activities and partnerships that will maximize impact and ROI, and strengthening communication with stakeholders. Continuing the example, contribution can be shown through plausibility: learning aids complemented curriculum and teacher training, and together they improved learning outcomes. The emphasis is on each intervention being necessary but not sufficient on its own, and on the plausibility of its contribution to outcomes, provided stakeholders value the intervention.

Ask the Right People

Actor mapping is indispensable. A superficial mapping of immediate stakeholders is unlikely to reveal root causes or identify leverage points. It could also miss actors who do not have a ‘stake’ in the outcomes (thus, not ‘stakeholders’) but may wield influence over an initiative or be influenced by it. The goal is not to capture every actor, but to identify and illuminate:

  • Well-informed voices: Which actors can speak knowledgeably about core issues and the target group? Engaging them during evaluation helps gauge their degree of influence, identify other relevant actors, and map the relationship dynamics among them. Moreover, the target group and other actors may shed light on who is being marginalized and why. For instance, while an initiative may target households, its members (e.g., women, children) could differ in their behaviours or experiences.
  • Critical influencers: Who are the few actors with disproportionate influence over desired outcomes? Actor maps help uncover big-picture leverage points for targeted action. Example: In the PIPE case above, evaluation identified the hundreds of school administrators who exercise significant influence over thousands of teachers and parents; engaging this group helped drive adoption of ABL.
  • Evolving dynamics: How is the actor map evolving? As programs progress, new actors may emerge or change roles as barriers and opportunities shift. Regularly revisiting the map ensures strategies remain relevant and that we are speaking to the right actors. In the examples above, programs leveraged actors in different roles in response to emerging barriers (e.g., teachers’ reluctance to adopt ABL, and toilet business owners’ limited engagement in marketing and sales, which shifted to a professionalized salesforce).

Ask the Right Way

Participatory approaches are fundamental to understanding ground realities and refining strategy. They go beyond interviewing end users or beneficiaries to learning from other key actors and their contexts.

  • Start with the actor map to identify diverse voices and to recognize power imbalances that can bias or mute responses. Together, these insights help select information sources and identify where stakeholders need safe spaces to contribute.
  • Understand how last-mile program staff and community members who engage with the target group can be involved in learning. Their contextual knowledge, lived experience, and trust-based relationships are powerful assets; they are often best placed to provide insights about, and collect data from, the target group.
  • Apply a human-centred design lens to assess whether participation is accessible. For instance, use a series of questions rather than a single question to elicit, validate, and infer an accurate answer, or offer rating scales and visual aids for respondents with low literacy.

A Call to Action

There is no perfect evaluation design, but progress starts small. What matters is building a culture of learning: staying curious, open, and adaptive. By treating evaluation as a journey of discovery rather than compliance, funders and implementers can strengthen strategies, uncover hidden barriers, and ultimately drive greater impact.
