
Emergence & Developmental Evaluation - Webinar Questions Answered

In a recent FSG and Stanford Social Innovation Review webinar discussion on complexity in collective impact, John Kania, coauthor of SSIR's "Embracing Emergence: How Collective Impact Addresses Complexity," Blair Taylor of Memphis Fast Forward, and Mark Cabaj of the Tamarack Institute explored how leaders of successful collective impact initiatives have embraced a new way of collectively seeing, learning, and doing that marries emergent solutions with intentional outcomes.

This is the first post in a 5-part blog series in which Blair and Mark continue the discussion, answering questions submitted by webinar participants on emergence in action and Developmental Evaluation in collective impact. In this post, Mark discusses Developmental Evaluation as it relates to funders.

Q: How can we incorporate Developmental Evaluation into our work amid pressure from funders that still require formative/summative evaluation results?

A: The response depends on the nature of the funder's resistance.

  • Scenario #1: The funders are not aware of the limitations of traditional evaluation practices for innovative, complexity-based strategies and models. Funders may not realize that asking for a fully completed logic model and a commitment to a specific set of measures in advance of evaluation – hallmarks of formative and summative evaluation – can short-circuit innovation or the emergence of a complex change initiative. In this scenario, let funders know that Developmental Evaluation employs rigorous evaluation processes, many of which (though not all) are also employed in traditional evaluation. These processes will lead to a more stabilized model, at which point traditional evaluation practices become appropriate.
  • Scenario #2: The funders believe that Developmental Evaluation does not focus on outcomes. This is a myth that is important – and easy – to dispel. Developmental Evaluation does track the outcomes and learnings of an emerging strategy. In this scenario, explain to funders that the difference between the various forms of evaluation lies in how decision-makers use the feedback. In Developmental Evaluation, the feedback is used to inform the development and direction of a strategy or model; in formative evaluation, it is used to improve the strategy; and in summative evaluation, it is used to determine whether a strategy should be continued, discontinued, or scaled.
  • Scenario #3: The funders really don't want to invest in innovation. The urge to use formative and summative evaluation may be a symptom of a funder's deeper desire to invest in tangible – possibly already proven – strategies and models with predictable outcomes. Their disinterest in employing Developmental Evaluation, therefore, is really a reluctance to invest in a process of innovation where the model and goals evolve and the effects are unpredictable. In these cases, the grantee faces a choice: adopt a more fully fledged model or strategy to please a cautious and risk-averse funder (thus reducing the scope of innovation and impact), or continue the search for a funder interested in being a productive partner in a developmental initiative.

Developmental Evaluation – like complexity-based approaches to community change and innovation – is a demanding niche for would-be change makers. It is equally demanding for the funders who may wish to support them.

Q: Are there ways to do a partial Developmental Evaluation if the budget does not allow for the full process?

A: In the real world, all decision-makers make do with whatever evaluative data is at hand, whether or not that data is high quality, sufficient, and scrutinized through sound sense-making processes. It's no different for collective impact initiatives or other types of developmental processes. As such, even modest investments in evaluation can be productive.

Here are three ideas for how to use a modest budget in a collective impact initiative:

  • Target Developmental Evaluation work for use in a discrete and manageable piece of your collective impact work (e.g., a sub-strategy or a newly emerging collaborative program). You will learn a bit about Developmental Evaluation in the process and, if successful, create demand for broader use of Developmental Evaluation across the collective impact effort.
  • Facilitate and support a team of researchers, drawn from collective impact partners or seconded from larger institutions, to develop the capacity to provide your members with real-time and user-friendly feedback – a critical feature of collective impact initiatives. 
  • Focus on developing, testing, and refining the "sense-making" processes in your collective impact initiative (e.g., After Action Reviews, Challenge Panels, Beneficiary Archetypes) that can be facilitated by in-house staff.

Mark Cabaj is President of the company From Here to There and an Associate of the Tamarack Institute. His current focus is on developing practical ways to understand, plan, and evaluate efforts to address complex issues. He is particularly involved in expanding the practice of developmental evaluation, an approach that emphasizes evaluation and learning in emerging, messy, and sometimes fast-moving environments.
