Finally! An Approach to Evaluating Complexity

A number of years ago, I had the privilege of conducting a three-year evaluation of a new and promising educational reform effort called the Saturn School of Tomorrow in St. Paul, Minnesota. It was touted as an innovative and transformational approach to education and was written up in Newsweek, Time, and several national newspapers. The school was an inspiration for the New American Schools Development Corporation competition and initiative in 1992, which was announced by then-President George H.W. Bush when he visited the Saturn School.

The school described itself as "high tech, high teach, high touch." One of the first schools of its kind, it had one computer for every two students. Given a great deal of latitude to experiment, along with various state waivers, the school was teacher-led, based on a collaborative model, and focused on project-based learning activities. Other details include:

  • 170 students in grades 4-8 (mixed-grade classes)
  • The curriculum changed every 9 weeks based on student interests and needs 
  • Two-tier staff structure – a lead teacher and three associate teachers held year-round, higher-salaried positions; the others were "generalists" with regular nine-month appointments and lower salaries
  • It was housed in a renovated downtown building so students could learn in the community
  • 500 applications for 160 slots (originally grades 4-6; grades 7-8 added in year 2)
  • The expectation was that many teachers would be interested in teaching at the school; in the end, the school had to search high and low for teachers

At the time, it made sense to conduct a formative evaluation; the teachers could use the evaluation findings to refine, adjust, and improve the program. The comprehensive evaluation was designed to address 18 key evaluation questions, and data were collected through interviews, surveys, tests, observation, focus groups, photographs, and document review. As was common practice in the field, 75- to 100-page year-end reports were written and delivered in August, preceding the next school year.

It wasn’t long before the evaluation data uncovered serious problems in the school, including the following:

  • The lead teachers didn’t like each other
  • Students in grades 4-6 were developmentally very different from those in grades 7-8; this created unexpected instructional and disciplinary challenges 
  • The school attracted more academically challenged students; the teachers were not prepared 
  • The teachers described their world as a “speeding train out of control” 
  • There were accusations of racism among the staff 
  • The school hired and fired three organization development consultants 
  • Both local newspapers repeatedly wrote negative stories about the school 
  • Parents started withdrawing their children within a couple of years 
  • By the end of the 4th year, the year-round teaching positions were discontinued; the three lead staff were transferred to other schools 
  • By the end of the 5th year, the school was having trouble maintaining enrollment
  • At the end of the 7th year, it became a traditional school 
  • At the end of the 14th year, the school closed its doors

As the problems were becoming more and more visible and then entrenched, I became increasingly frustrated that the evaluation was not helping. The lengthy reports I delivered in August were just too late to be useful – the teachers thought they had already fixed the problems. And, though I did try to meet with the teachers to share what I was learning along the way, they perceived my evaluation role as the school’s “historian” and “objective observer.” As a result, the evaluation provided little value to them.

Over the years, this experience significantly influenced my thinking, research, writing, and practice. What I’ve come to realize is that the Saturn School was a complex solution to a complex problem, and that the formative evaluation was no match for the developmental, chaotic, emergent, unpredictable, relational, and dynamic nature of the school. Unfortunately, the evaluation had no mechanism or role for helping the teachers learn and adapt as they developed various aspects of the school.

What I came to realize was that another type of evaluation was needed – Developmental Evaluation. At that time, however, the idea of Developmental Evaluation (DE) was just being formulated by Michael Quinn Patton and had not yet resulted in any articles or books. Since then, Patton has published his book, Developmental Evaluation, and has presented and written widely on the topic. He defines DE “as an approach to evaluation that is grounded in systems thinking and supports innovation by collecting and analyzing real time data in ways that lead to informed and ongoing decision making as part of the design, development, and implementation process.”

In an organic, learning-oriented, and adaptive way, Developmental Evaluation explores the following questions:

  • What is developing or emerging as the innovation takes shape?
  • How is the larger system or environment responding to the innovation? 
  • What seems to be working and not working as the innovation unfolds? 
  • How should the innovation be adapted in response to changing circumstances?

In this regard, a Developmental Evaluation is responsive to the emerging information needs of stakeholders as the initiative unfolds. The evaluation team provides continuous real-time feedback using multiple formats to inform strategy during the design, development, and implementation process, and ensures that learning processes are embedded throughout the evaluation.

With a grant from the Center for Evaluation Innovation, FSG has been studying potential uses and lessons learned from funders of Developmental Evaluations and evaluators who are conducting them. In addition to conducting interviews, we have scanned the literature looking for examples, insights, and guidance for using a DE approach. While our results will be shared in a white paper at the end of the year, as well as at the American Evaluation Association conference in November, we have learned that there are particularly appropriate times to use Developmental Evaluation, including:

  • When the right path isn’t clear – there are several ways to go forward
  • In environments that are emergent, changing, dynamic 
  • When it is difficult to plan or predict all possible outcomes 
  • When evaluating systems change and/or community-based change initiatives 
  • When evaluating public policy and advocacy efforts 
  • When working in diverse cultural contexts 
  • When an initiative is being implemented in multiple sites by multiple actors

We are increasingly excited about the value Developmental Evaluation adds to the repertoire of evaluation approaches. While I can’t say what difference it would have made to evaluate The Saturn School of Tomorrow using a DE approach, I do know that given the complexity of today’s social issues, and the need for innovative solutions, Developmental Evaluation offers much promise.

If you have been involved in a Developmental Evaluation, we would love to hear about your experiences!
