What Are the Products of a Developmental Evaluation?

If it is not a report, what is it? A written report is one of the concrete products of many evaluations, but that is not always the case for developmental evaluation, a new approach to evaluating social innovation.

This week we continue the conversation we started with you during FSG’s webinar on “Evaluating Social Innovation.” See last week’s post on how team structure and dynamics can differ in the practice of developmental evaluation.

Several questions posed during the webinar asked us to further describe what the products of developmental evaluation look like. Here John Cawley and Hallie Preskill share their experiences.

Re-think the long narrative report – or scrap it entirely. “To ensure that the feedback, insights, and key learnings are shared and used by the stakeholders, DE evaluators use a variety of methods, including formal and informal conversations, learning memos and briefs, email updates, impromptu phone conversations, working sessions, quarterly updates, and yes, even year-end reports,” says Hallie. “However, these reports look different from the traditional formative or summative evaluation reports – they focus on what has been developed and what has been discovered.”

John Cawley agrees, based on his experience with the developmental evaluation of YouthScape, where the McConnell Family Foundation scrapped the report entirely. “From our experience, a useful developmental evaluation product is often not a report. Too often, reports are done for accountability purposes to give cover to the giver rather than useful support to the recipient of a grant,” explains John. Instead, he favored weekly check-ins to learn about the information young people were sharing with evaluators at all times of day, even while doing the dishes together! As John describes, this gave the evaluator an opportunity to “share her observation that the team might want to pay attention to a certain dynamic and suggested a safe way to surface the misunderstanding and untangle the knot that existed.” John adds, “A written report, regardless of how ‘sanitized’ and diplomatic, would likely have been tardy and could have created a defensive reaction. I can think of several instances where developmental evaluators effectively surfaced racial, generational, or class assumptions and behaviors that were blocking collaboration. A written report could have been counterproductive.”

Consider that “progress” is more than adhering to a set of outcomes. When working with a social innovation, an intervention that is new, experimental, and untested, a new definition of progress and success is required. John explains, “ ‘Progress’ in a complex ecosystem, in particular at the earliest stages of an initiative, may be more about developing robust feedback loops, relationships of trust that did not exist previously, and a capacity to admit failure and learn from it. From our experience, these are more accurate indicators of long-term success than ‘quick wins’ in terms of outputs. Accordingly, our reporting framework encourages the capturing and dissemination of learning rather than a cataloging of activities and outputs. We need to know that we are on the right track, not that we have arrived at a pre-determined spot.”

Thank you to John and Hallie for sharing their experiences, and to the participants in the Evaluating Social Innovation webinar for their interest and questions. Do you have more practical questions about developmental evaluation? Feel free to share them below and we will respond!
