Date of Award

Summer 2024

Degree Type

Open Access Dissertation

Degree Name

Psychology, PhD

Program

School of Social Science, Politics, and Evaluation

Advisor/Supervisor/Committee Chair

Stewart I. Donaldson

Dissertation or Thesis Committee Member

J. Bradley Cousins

Dissertation or Thesis Committee Member

Anna Woodcock

Dissertation or Thesis Committee Member

Melvin M. Mark

Terms of Use & License Information

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Rights Information

© 2024 Courtney M Koletar

Keywords

evaluation influence, intergroup sensitivity effect, negative evaluation findings, participatory evaluation, research on evaluation, social identity theory

Subject Categories

Psychology

Abstract

For decades, evaluators have noted that it is difficult for stakeholders to accept negative evaluation results (Carter, 1971; Taut & Brauns, 2003). Additional research on evaluation is needed to better understand when and why stakeholders reject negative or critical evaluation findings. Drawing on social identity theory (SIT), the intergroup sensitivity effect (ISE) shows that group members are equally accepting of group-directed praise from ingroup and outgroup members, but they are much more accepting of group-directed criticism from an ingroup member than from an outgroup member (Hornsey et al., 2002). The SIT logic underlying the effect is that ingroup members are seen as criticizing the group for the purpose of improvement, while outgroup members are seen as criticizing the ingroup to denigrate its status and elevate their own. Applying the ISE to evaluation, two studies were conducted to explore the conditions under which negative evaluation results are accepted. It was hypothesized that the ISE pattern would apply not just to opinion-based criticism but also to empirically based evaluation results. In addition, given that practical participatory evaluation has been linked to process use and instrumental use of evaluation results (Cousins, 2003), participation was included as a factor in the design to examine whether a participatory approach could overcome resistance to negative results from an outgroup evaluator.

Study 1 employed an experimental design with a crowdsourced sample representing the perspective of broader community members indirectly impacted by a program (e.g., taxpayers). Participants were presented with a vignette about a fictional elementary school math tutoring program within their state. The vignette manipulated evaluator group membership relative to the program (i.e., ingroup/resident's state vs. outgroup/neighboring state), results direction (i.e., positive vs. negative), and participatory design (i.e., participatory vs. non-participatory). Participants reported greater overall agreement with positive findings than with negative findings (p = .003). There was a marginal interaction between group membership and findings direction (p = .057), but it differed slightly from the pattern found in the ISE: agreement was greatest for positive findings presented by an outgroup evaluator and lowest for negative findings presented by an outgroup evaluator. Notably, the explanatory power of this effect was small. Qualitative comments from participants in the outgroup conditions were more likely to describe the evaluation as fair and unbiased. There were no effects related to participatory design. Perception of the evaluator's constructive intent was the strongest predictor of agreement (p < .001), explaining a large proportion of variance; however, unlike in the ISE, it did not mediate the relationship between group membership and agreement with negative findings.

Study 2 employed a correlational design with an applied sample representing the primary users of an evaluation. Recipients of U.S. Department of Education Title III and Title V funds were invited to respond to a survey about their experiences with the required evaluation component of their grant project. They reported the extent to which they perceived their evaluator as part of their group, the extent to which their evaluation was implemented in a participatory fashion, their agreement with both positive and negative findings, and their perceptions of the evaluator's intentions. It was hypothesized that both closeness with the evaluator and a participatory design would lead to greater acceptance of negative findings. This hypothesis was partially supported. Consistent with the ISE, there was a marginally significant relationship between perceptions of the evaluator's group membership and agreement with negative findings (p = .055), such that negative findings received greater acceptance when the evaluator was perceived as part of the ingroup. However, there was no relationship between perceived group membership and agreement with positive findings, and no relationship between participatory design and agreement with either negative or positive findings. Perceived constructive intent was the strongest predictor of agreement with both positive (p < .001) and negative (p = .032) findings. Qualitative data show that perceptions of closeness with the evaluator are based not simply on organizational affiliation but on shared group membership defined by shared goals.

While Study 2 findings aligned with the ISE, Study 1 findings differed slightly. That respondents in Study 1 described the outgroup evaluator as objective and unbiased suggests that evaluation represents a unique type of message, different from a simple individual opinion. However, group identity processes may still be a factor, given that the ingroup evaluator received greater agreement with negative findings than the outgroup evaluator. For peripheral stakeholders, the extent to which the evaluation was participatory did not affect their agreement with findings. For primary users, a participatory design appears to be one way to demonstrate shared group membership, but it is not the only way. Findings from the present studies suggest that demonstrating constructive intent is key to successfully communicating negative evaluation results. When working with primary intended users, building a positive, friendly relationship through ongoing contact can aid in developing a sense of shared group membership between evaluator and stakeholder. Evaluators may do well to take guidance from the saying that "People don't care how much you know until they know how much you care." Believing that programs need to change and improve is often the crucial first step toward social betterment. If we can foster greater acceptance of negative findings, we can increase the influence of our evaluations, ultimately bettering society as a whole.

ISBN

9798383699829
