
Developmental Evaluation in Practice – Webinar Questions Answered

In a recent FSG and Stanford Social Innovation Review webinar discussion on complexity in collective impact, John Kania, coauthor of SSIR’s “Embracing Emergence: How Collective Impact Addresses Complexity,” Blair Taylor of Memphis Fast Forward, and Mark Cabaj of the Tamarack Institute explored how leaders of successful collective impact initiatives have embraced a new way of collectively seeing, learning, and doing that marries emergent solutions with intentional outcomes.

This is the second post in a 5-part blog series in which Blair and Mark continue the discussion, answering questions submitted by webinar participants on emergence in action and Developmental Evaluation in collective impact. In this post, Mark answers questions about what Developmental Evaluation looks like in action and how shared measurement systems can evolve to enable emergence.

Q: What does Developmental Evaluation look like in action? From the article on emergence, it sounds like a kind of ethnographic storytelling. Is this accurate?

A: As Michael Quinn Patton notes, Developmental Evaluation does not rely on a particular set of methods. Any method will do, as long as it:

  • Provides decision-makers with the real-time information and good “sense-making” support they need to innovate and/or navigate a complex environment

  • Meets the standards of good evaluation (e.g., utility, feasibility, accuracy, etc.)

Developmental Evaluation is methodologically agnostic. I have seen randomized controlled trials, social return on investment calculations, network mapping, participant observation, and content analysis all used during the process. However, ethnographic storytelling may be a particularly useful method in contexts where decision-makers need new ways to (a) understand and interpret the complex environments in which they are operating and (b) make sense of the feedback on their emerging strategy.

Q: Given that setting up the initial Shared Measurement data is difficult, time-consuming, and expensive – how much can the measurement systems evolve once set up? Is there flexibility (to incorporate emergence) or do you need to use what you have in different ways?

A: There are countless examples of people spending a great deal of time developing elaborate measurement systems only to find out that:

  1. They are no longer relevant (i.e. they have not kept up with emergence)

  2. They are very expensive 

  3. They are so demanding that people ignore them

It is no surprise that the failure rate of information technology projects in the private sector is very high. 

The rule of thumb for designing anything in complex emerging systems is to begin with something simple, and to build more elaborate systems over time. 

For shared measurement systems, this means selecting a modest number of useful measures to begin, and some basic processes for gathering, analyzing and interpreting them.  As a group learns more about what works through trial and error, and adjusts their strategy to reflect emergence, they can add, drop and change measures and processes as appropriate. 

This is easier said than done. People who spend a long time developing a system or model are rarely keen to adjust or change it (commitment bias) and tend to ignore feedback that the process is not working or useful (confirmation bias). However, the “start small and grow organically” approach is more likely to encourage people to adapt their shared measurement systems over time than a more traditional “big bang” approach. 

Mark Cabaj is President of the company From Here to There and an Associate of the Tamarack Institute. His current focus is on developing practical ways to understand, plan, and evaluate efforts to address complex issues. He is particularly involved in expanding the practice of developmental evaluation, an approach that emphasizes evaluation and learning in emerging, messy, and sometimes fast-moving environments.
