Drawing on data from surveys and interviews, the EvalFest team found high rates of evaluation use among its science festival partners across the United States.
Recently published in Evaluation and Program Planning, this article uses data from festival team interviews (n=23) and surveys completed by individual partners (n=45) to document examples of evaluation use within a community-created multisite evaluation (MSE) embedded in a community of practice (CoP).
As a community-created MSE, EvalFest differs from traditional MSEs in important ways. MSEs typically encompass two or more sites and include systematic cross-site data collection across non-uniform contexts, but variability across sites and inconsistency in evaluation design are often cited as limitations of MSE results (Sinacore & Turpin, 1991; Straw & Herrell, 2002). The EvalFest community circumvented many of these challenges by applying a negotiated centralized evaluation model that entailed: (1) creating local evaluations; (2) forming the central evaluation team; and (3) negotiating and collaborating on the participatory MSE.
Through an arduous process of negotiation and collaboration, the EvalFest community co-created a set of shared measures, with room for site-specific variation, that partners collected annually using shared data collection systems along with the data analysis and reporting tools provided by the central evaluation team. The result is a dataset of over 30,000 data points from festival attendees across the U.S. and an online dashboard that gives each partner access to its own data as well as the larger sector-level dataset.
The community-created MSE approach promotes a broader range of evaluation use than traditional models. Drawing on the evaluation use literature, the team coded four types of evaluation use: instrumental, conceptual, symbolic, and process. All partners cited at least one type of use, and 11 partners (48%) provided examples of all four. One in five partners also reported using the sector-level data generated by the entire EvalFest community, underscoring the unique value of the design characteristics of EvalFest's community-created MSE.
Although the EvalFest case study provides evidence of evaluation use within a single sector, the EvalFest team believes the design characteristics of its community-created MSE can be applied broadly to promote evaluation use in other sectors.
Peterman, K., & Gathings, M. J. (2019). Using a community-created multisite evaluation to promote evaluation use across a sector. Evaluation and Program Planning, 74, 54-60.