
Project Evaluation

The Ruckelshaus Center has long recognized that in collaborative governance, as in other fields, it is important to evaluate the results of our work in order to ask and answer fundamental questions: Did we do what we planned and said we would do? Did it have the intended effects? Did those effects lead to positive outcomes and impacts “on the ground”? Were there any unintended consequences? What can we learn to help us improve processes and outcomes in the future? What are best practices? What can others learn from our experience? Does this work make a difference? Is our approach better than other approaches? Despite the importance of these questions, relatively little evaluation occurs in the field, for many reasons, including (but not limited to) the following:

  1. Collaborative processes can be difficult to measure and evaluate: results tend to be subjective rather than objective and qualitative rather than quantitative, and attaining scientific rigor, precision, and replicability can be at odds with attaining richness and nuance;
  2. Practitioners need to avoid compromising processes and outcomes by revealing inappropriate or sensitive details;
  3. It is nearly impossible to conduct a randomized experiment, since no two processes are alike;
  4. It can take years or decades to answer questions like those mentioned above;
  5. There is a great deal of pressure and desire to move on to the next policy challenge;
  6. Practitioners, sponsors, participants, and academics are all interested in different aspects or elements of a collaborative process;
  7. While everyone wants evaluation, few are willing to pay for it as part of project scopes and budgets.

For these and other reasons, the Center (like its peers) has struggled with how to incorporate evaluation into its work.

In 2011, with oversight from University of Washington Evans School of Public Policy and Governance Professor C. Leigh Anderson, Evans student Alan Foster helped the Center begin to address this by completing a degree project (provided on the right) that took a first cut at an evaluation methodology. His primary methods were: 1) examination of evaluation instruments created by other practitioners in the academic, non-governmental, and private sectors; and 2) interviews with these sources and with other leaders, practitioners, and stakeholders about what questions need to be addressed and how best to get the answers. He created four evaluation instruments, but practitioners reported difficulty fitting the instruments into the conduct of projects, and the instruments did not get used.

Two core faculty broadened the Center’s thinking from 2013-15 by looking into what else exists, and is possible and desirable, in evaluating collaborative governance. This included how project evaluation can roll up into program evaluation, and fit within logic models.

In 2016, the Center decided to focus specifically on project outcomes, lessons learned, and process improvements, and on what is of most value to the Center and the communities involved in its projects, rather than on everything else it is possible or desirable to evaluate. This “post-project” qualitative evaluation builds on the baseline “pre-process” evaluation the Center already conducts via situation assessments, using semi-structured interviews with key informants to identify key themes, findings, and recommendations. The Center’s university affiliation provides access to graduate students who, when paired with faculty advisors, constitute a pool of talented potential evaluators able to conduct evaluations that are robust but also affordable and replicable. This approach provides the right combination of familiarity with the Center and its work, plus distance and perspective: these evaluators are not strangers, but the Center is also not evaluating itself. It also contributes to the Center’s teaching, training, and capacity-building missions.

In 2016, Evans graduate student Trevor Robinson conducted an evaluation of the Walla Walla Watershed Management Partnership as a pilot of this methodology. This post-project evaluation fulfilled two purposes. First, the evaluation investigated the effectiveness and applicability of the Center’s previous contributions to the development of the Walla Walla Watershed Management Partnership. Second, the evaluation explored the outcomes, challenges, and opportunities that emerged during the implementation of the Partnership. The evaluation was well-received by the Walla Walla community and the Center’s Advisory Board, and earned the American Society for Public Administration’s Evergreen Chapter MPA Project of the Year Award. Read the full report (pdf), also provided on the right.

In 2018, an Evans Consulting Team consisting of three graduate students and faculty advisors conducted a second evaluation, on the Center’s contributions to the Nurse Staffing Steering Committee. The evaluation documented project successes and challenges, as well as recommendations to improve the Center’s services and future evaluations. Read the Executive Summary (pdf); the full report is available on the right.

For more information on these evaluations or the Center’s evaluation program, contact Project and Program Manager Molly Stenovec.

Updated on September 4, 2018