Although most foundations don’t like to admit it, mistakes in philanthropy happen all the time. “Failure” is a dirty word in the social sector. So it’s no surprise that evaluations suggesting a program has failed are often passionately debated on both sides.
This topic gets raised in relatively closed conversations among funders, but public statements of errors in judgment, planning, and execution are still rare. In 2010, the Robert Wood Johnson Foundation (RWJF) published a series of articles on programs that “didn’t work out as expected” in their Anthology: To Improve Health and Health Care. RWJF is among a select group of grantmakers that talk publicly about failures (whether they decide to use the word or not) and try to learn from them. Robert Hughes, one of the Anthology authors, further examines failure as a “Key to Foundation Effectiveness” in a series of blog posts on the topic. And recently, the field has begun to engage in dialogue about failure in a new way, with organizations hosting “fail fairs,” such as the one hosted by USAID Microlinks at the World Bank in DC.
What I’ve come to realize in reading accounts of foundation failures is the important role evaluation plays in identifying failure, diagnosing its cause, and making corrections (when feasible and appropriate). Three questions that confront failure head-on can improve how foundations and nonprofits understand, learn from, and use failure to be more effective.
Are we achieving what we set out to do?
Identifying failure begins by asking a simple question: Are we achieving what we set out to do? It takes courage to ask that question and answer it honestly.
Evaluators are well-equipped to help answer this question. We work with organizations to refine their strategies and increase clarity around their goals and the outcomes of their work. Then we use what we’ve learned to design and carry out an evaluation that helps us answer whether the organization is actually achieving what it has set out to do. For some organizations, this requires a simple monitoring or performance measurement system built on routinely collected metrics (e.g., blood pressure monitoring in health care settings); for others, it requires a mixed methods approach that draws on numerous data collection activities (e.g., population-based surveys, interviews with key stakeholders).
Why isn’t it working?
Diagnosing the reasons for failure is just as important as identifying that something isn’t working. Once you understand what the problem is, you are in a much better position to address it.
Again, evaluators play an important role in helping organizations better understand the root cause(s) of an issue. We conduct interviews, observe programs, and do environmental scans to gather information about why an effort is (or is not) making progress. David Colby, Stephen Isaacs, and Robert Hughes have identified four reasons why programs do not succeed:
- Strategy or design flaws
- Challenging environments
- Faulty execution
- The inability to adapt in a timely fashion
These categories are helpful to consider (in the context of gathering evaluative information through a planned, systematic process) when diagnosing why an effort is not working out as expected.
What should we do next?
It might seem obvious that the next step after identifying something that is not working, and figuring out why, is deciding what to do about it. There is a tendency, however, to deny, dismiss, or avoid any evidence of failure. Especially in the social sector, admitting that a decent program is “failing” can be hard to accept. RWJF Anthology authors David Colby and Stephen Isaacs remind us, “Even programs that do not achieve their overall goals can have positive effects on the people they touch.”
While in some cases failure does mean the end of a program or initiative, there are numerous examples of when failure was just a short detour on the road to success. Learning from mistakes so that programs can be improved is essential to “constructive failure” in philanthropy.
It’s no accident that these questions are phrased as though the work is ongoing. Learning whether a program has worked shouldn’t happen only at the end. It’s exciting to work with foundations that are interested in understanding what’s really taking place as an initiative unfolds. By asking these questions throughout the process from a position of genuine learning, evaluators and key decision makers can make the changes necessary to get a program back on course. This doesn’t guarantee, of course, that the ultimate goals will be achieved. Nonetheless, it provides an important opportunity for mid-course corrections that could mean the difference between success and failure.
So, if you’ve failed… Admit it! Share what you’ve learned with others, so they can grow in their knowledge of what it takes to make a difference. As Tessie San Martin, President/CEO of Plan International USA, bravely wrote in a blog post following the World Bank Fail Fair, “We do not celebrate failure often enough. But we should. …We are failing. And in that failure we are learning, adapting and advancing, and therefore improving our ability to improve the lives of children around the world.” When realized in the context of learning, failure isn’t the end of success. It’s the beginning.