
Seeing the “Whole Elephant”: Systems Thinking in Evaluation

My name is Srik Gopalakrishnan and I’m the new Director supporting Strategic Learning and Evaluation at FSG. I have spent the last nine years working in evaluation-related roles, first at a foundation, and then at a national nonprofit. One observation that has stayed with me throughout my work in the evaluation field is the lack of alignment between what systems theory tells us and how evaluation is practiced. How we define, implement, and learn from evaluation is often disconnected from what we know about how systems work. This disconnect is best illustrated through a parable that I first read as a child growing up in India.

Many of us are familiar with the parable of the “blind men and the elephant”. In various versions of the story, a group of blind men touch an elephant to learn what it is like. Each one feels a different body part, but only one part, and comes up with an explanation. For example, the one who feels the trunk claims that the elephant is like a tree branch, while the one who feels the tail swears that the elephant is just like a rope. Others go with pillar (legs), wall (abdomen), hand fan (ear), and spear (tusk). This parable has been used to illustrate several lessons, but the one that appeals to me the most, of course, is what it means for evaluation.

In evaluation, we often fail to see the whole elephant for various reasons. Traditionally, the field of evaluation has been led by a “reductionist” view of how the world works. We attempt to break complex phenomena down into neat boxes and arrows, isolate variables, control for factors, and largely draw from Newtonian models of cause-effect and directionality. More recently, however, the field has become acutely aware of the limitations of the traditional approach as the following tenets of the complex systems we work in have become more and more apparent:

  1. Everything is connected; hence what happens in one part of the system affects another. For example, evaluation of a school improvement initiative may have to examine not just what takes place inside the school system, but also what happens in the broader community outside.
  2. Cause and effect is not a linear, one-directional process; it is much more iterative. Does improving the health of families improve their economic productivity, or the other way around? The answer is probably both. We are increasingly learning that homing in on causation and attempting attribution is a herculean task, as there aren’t clear and straightforward links.
  3. Context matters; a lot! What used to be considered “noise” in models of social change is now recognized as a core factor that can make or break an intervention. Teachers in schools with supportive conditions, for example, have been shown to perform at higher levels than teachers (even highly qualified ones) in non-supportive schools.

What can we do?

So what do we, as evaluators and social change practitioners, do to be more cognizant of the “whole elephant”? How can we move from evaluating tusks and tails to really taking the entire pachyderm into consideration? Here are a few ways:

  1. Create evaluation and learning systems, not just stand-alone evaluations: Sound evaluation and learning systems articulate what needs to be evaluated, when, how, by whom, with what resources, etc. in ways that will enhance learning throughout the organization and ensure that evaluation resources are spent effectively to boost organizational effectiveness.
  2. Move towards shared, rather than just individual, measures of success: In an increasingly inter-connected world, it becomes imperative for organizations to work together to tackle complex and chronic social problems. Evaluation serves this scenario best when the measures used to track progress are common, shared and transparent. We can draw from examples of successful shared measurement systems.
  3. Use innovative evaluation approaches that recognize complexity: The traditional paradigm of formative evaluation (to improve an approach or model) and summative evaluation (to prove that the approach or model works) may not quite take into account the complex and emergent nature of social change. Hence, other innovative approaches such as developmental evaluation, social network analysis, and human systems dynamics may be more suitable for certain interventions or at certain times in an intervention’s lifecycle.

Just as with the blind men of the story, evaluators and practitioners continue to apply a simplistic lens to understand a complex beast. Unless we make an intentional change in how we think about evaluation from a systems perspective, the field will continue to spend precious resources in ways that aren’t productive. Let’s all endeavor to move in a direction that truly recognizes the systemic nature of our work and treat the complex beast of social change in the holistic manner in which it deserves to be treated!

Srikanth "Srik" Gopal

Former Managing Director, FSG