Strengthening a Struggling Collaboration through Evaluation: Lessons from the Robert R. McCormick Foundation

For many years now, interest in nonprofit and philanthropic collaborations has been growing. Whether organizations embrace collaboration because of shrinking resources and the need for efficiency, because many heads are better than one, or because they believe collaborations are particularly effective for advancing diversity, equity, and inclusion, collaboration has become a critically important approach to achieving social change.

The Center for Effective Philanthropy’s (CEP) recent report, Shifting Winds: Foundations Respond to a New Political Context, highlights an increasing interest in collaboration and the challenges that come with it. A key finding of CEP’s survey of 162 U.S. independent and community foundation CEOs was that nearly half of all respondents reported an increased emphasis on collaborating with other funders. This echoes insights in Building Collaboration from the Inside Out, a 2015 report by Grantmakers for Effective Organizations (GEO): “The call for greater collaboration has been a persistent drumbeat in the nonprofit and philanthropic sectors… A go-it-alone mentality will not result in meaningful impact.”

If philanthropic leaders double down on collaborations in the next few years, it will be imperative that they evaluate their efforts and investments. As many who have developed, nurtured, and supported collaborations have found, “collaboration is hard and messy, and many grantmakers and nonprofits are uncertain about the best way to move forward.” This messiness and uncertainty underscore the importance of ongoing learning throughout a collaboration’s design and implementation.

However, if we wait to evaluate a collaborative effort until it’s fully established—perhaps a year or more in—we miss an opportunity to understand what is working where and for whom, what assumptions are holding true or not, and most importantly, the chance to learn, adapt, and change the effort in its early days. Developmental evaluation, where the evaluation team acts as a guide on the side, offers much promise for evaluating and learning about collaborations as the work unfolds.

Because collaborations ideally are built on trust, relationships, shared mental models, agreed upon goals, and open and transparent communications but often fall short of these ideals, evaluation practices must focus on the process as well as the intended outcomes. Fortunately, developmental evaluation (DE) is particularly well-suited to evaluate collaboration efforts. DE is an evaluation approach that positions the evaluation team as a partner in the initiative’s design and implementation. Evaluation activities provide real-time and continuous feedback about the design and implementation of a program or initiative using a variety of data collection approaches. It is particularly effective for new or untested strategies and tactics and in complex and dynamic environments.

Because collaborations involve multiple organizations, each with its own history, culture, goals, and financial and reputational needs, and individual representatives with different experiences, cultural contexts, perspectives, and agendas, being able to learn early and often is essential. And because collaborations themselves are complex structures, the evaluation team might examine several different aspects of the collaborative effort, including:

  • How people are developing and implementing various collaboration roles and responsibilities
  • The implementation structures that are developing and how effective they are
  • Participants’ interests and motivations, to see where they are competing and/or complementary
  • Trust and power dynamics between and among individuals and organizations
  • The types of leadership that are supporting or hindering the collaboration’s development and progress
  • The extent to which and how there is a diversity of representation and inclusion in the collaboration
  • The types and level of engagement in the collaboration and what is supporting or hindering participation
  • The ways in which and how ongoing learning is embedded in the collaboration’s processes
  • The quality, frequency, and format of communications between and among collaboration participants
  • How personal and financial resources are being allocated and expended, and the effects on the health of the collaboration
  • How and to what degree the collaboration is adaptive, experimental, and flexible in its complex environment
  • How effective the early decision-making processes and structures are

A good developmental evaluation would provide information to support continuous learning, help uncover blind spots, suggest alternative tactics when a course correction is needed, and could surface a set of effective principles of practice that could not only be tested later with formative and summative evaluation approaches, but could also be used for other planned collaborations.

For example, in 2015 the Robert R. McCormick Foundation began supporting a new collaboration between two established nonprofits to improve the quality and scope of their work addressing extraordinarily high rates of violence in one Chicago community. Funds were included to hire an external evaluator charged with partnering with the collaborative team to help develop and implement the initiative. Like most collaboratives, this one had high expectations, not only for what the collaboration could accomplish but for the type of partnership that would be created between the two organizations.

However, these two organizations had significantly different cultures, ways of working, priorities, approaches to community engagement, staff roles and hierarchies, and levels of flexibility and formal bureaucracy. And with Illinois’ ongoing fiscal crisis, general funding for the organizations was intermittently threatened and lost, making this collaboration even more challenging than it would have been under less dire circumstances.

While the collaboration was fraught with problems that impeded its effectiveness, the developmental evaluation was critical in helping the two organizations salvage the year and the program as a source of learning and guidance for the changes that would have to be implemented going forward. By meeting frequently with staff from each organization and attending an array of community and organizational meetings and activities, the evaluator gained a perspective on the many factors affecting the project and the collaboration, which he shared at the regular all-staff evaluation meetings. His role included being a mediator, a truth-sharer, an independent reflector on the challenges facing the participants, and a source of ideas and suggestions about how those challenges could be addressed. His role as documenter of the process was also a critical component of the learning that came out of the project, with participants from each organization reviewing his assessments and shaping the narrative of what happened and why. As they approach their second year, both organizations have a much clearer sense of how they need to proceed given the external and internal constraints in which they work.

As we pursue and invest in collaborations to address the significant challenges facing our communities, let’s make sure that evaluation is used wisely and robustly, both to help inform and guide these initiatives as they unfold and to provide the information and processes that ensure we learn from them, whether or not they meet our expectations.

Learn more about FSG’s Strategic Evaluation services >

Learn more about the Robert R. McCormick Foundation >
