- Collaborative processes can be difficult to measure and evaluate: results tend to be subjective rather than objective and qualitative rather than quantitative, and attaining scientific rigor, precision, and replicability can be at odds with attaining richness and nuance;
- Practitioners need to avoid compromising processes and outcomes by revealing inappropriate or sensitive details.
- It is nearly impossible to conduct a randomized experiment (no two processes are alike);
- It can take years or decades to answer questions like those mentioned above;
- There is a great deal of pressure and desire to move on to the next policy challenge;
- Practitioners, sponsors, participants, and academics are all interested in different aspects or elements of a collaborative process.
- While everyone wants evaluation, few are willing to pay for it as part of project scopes and budgets.
For these and other reasons, the Center (like its peers) has struggled with how to incorporate evaluation into its work.
In 2011, with oversight from University of Washington Evans School of Public Policy and Governance Professor C. Leigh Anderson, Evans student Alan Foster helped the Center begin to address this through a degree project (provided on the right) that took a first cut at an evaluation methodology. His primary methods were: 1) examination of evaluation instruments created by other practitioners in the academic, non-governmental, and private sectors; and 2) interviews with these sources and other leaders, practitioners, and stakeholders about what questions need to be addressed and how best to answer them. He created four evaluation instruments, but practitioners reported difficulty fitting the instruments into their project work, and the instruments went unused.
Two core faculty members broadened the Center's thinking from 2013 to 2015 by looking into what else exists, and what is possible and desirable, in evaluating collaborative governance. This included how project evaluation can roll up into program evaluation and fit within logic models.
In 2016, the Center decided to focus specifically on project outcomes, lessons learned, and process improvements, and on what is of most value to the Center and the communities involved in its projects, rather than on everything else it might be possible or desirable to evaluate. This "post-project" qualitative evaluation builds on the baseline "pre-process" evaluation the Center already conducts via situation assessments, using semi-structured interviews with key informants to identify key themes, findings, and recommendations. The Center's university affiliation provides access to graduate students who, when paired with faculty advisors, constitute a pool of talented potential evaluators able to conduct evaluations that are robust, affordable, and replicable. This approach provides the right combination of familiarity with the Center and its work, plus distance and perspective: these evaluators are not strangers, but the Center is also not evaluating itself. It also contributes to the Center's teaching, training, and capacity-building missions.
In 2016, Evans graduate student Trevor Robinson conducted an evaluation of the Walla Walla Watershed Management Partnership as a pilot of this methodology. This post-project evaluation fulfilled two purposes. First, it investigated the effectiveness and applicability of the Center's previous contributions to the development of the Walla Walla Watershed Management Partnership. Second, it explored the outcomes, challenges, and opportunities that have emerged during the implementation of the Partnership. The evaluation was well received by the Walla Walla community and the Center's Advisory Board, and earned the American Society for Public Administration's Evergreen Chapter MPA Project of the Year Award. Read the full report (pdf), also provided on the right.
In 2018, an Evans Consulting Team consisting of three graduate students and faculty advisors conducted a second evaluation, on the Center's contributions to the Nurse Staffing Steering Committee. The evaluation documented project successes and challenges, as well as recommendations to improve the Center's services and future evaluations. The report will be available soon.
For more information on these evaluations or the Center’s evaluation program, contact Project and Program Manager Molly Stenovec.
- Evaluating ADR Projects: Full Report (pdf, 250 KB)
- Walla Walla Evaluation Overview (144 KB)
- Revisiting Many Waters: An Evaluation of the Walla Walla Water Management Initiative (1.38 MB)