Picture source: https://www.differencebetween.com/difference-between-social-exclusion-and-vs-vulnerability/

Inclusive or rational participatory evaluation?

Bojan Radej
5 min read · Apr 25, 2024

Part 1 (Introduction)

Available also here.

Traditional result-based policy impact evaluation, which compares actual achievements against initial plans, proves inadequate for evaluating complex interventions (Powell et al.; Deprez; Wilson-Grau; Davies, Dart). Multifaceted interventions elude complete and unique description. Their impacts lack a common metric for comparison (rendering them incommensurable) and produce a multitude of effects that cannot be definitively attributed to specific causes or beneficiaries. Such is the case with complex interventions aimed at catalyzing human and social behavioural change, driving social transformation, enhancing interactions between societal groups, or developing internal group dynamics (Wilson-Grau). Complex interventions are always accompanied by uncertainty, so they can be assessed only by an epistemically blind and biased evaluator. At least, that is the prevailing belief within the current mainstream evaluation doctrine. Upon closer examination, however, it becomes clear that the mainstream response remains committed to the traditional standards of realist evaluation. When addressing such misrepresented situations, it becomes imperative to tackle the challenge at the foundational level, even though the issues at hand are often conceived in purely pragmatic terms.

Over the past few decades, evaluative approaches have shifted away from traditional result-based logic. The new school redirects the focus towards stakeholder-driven and design-based constructivist approaches, urging the development of dialogical forms of policy impact evaluation (Patton, 2002). Stufflebeam defines participatory evaluation as a collaborative assessment process that appraises multifaceted interventions by engaging a broad spectrum of stakeholders with their diverse perspectives and by considering various types of data, both quantitative and qualitative. This novel approach strives for the inclusivity of multiple viewpoints. However, it lacks a robust theoretical framework and contributes only inconsistently to enhancing collective rationality.

Novel approaches to evaluating multifarious interventions must be grounded in rationality and democratic principles, which situates this endeavour within the purview of collective choice theory (Arrow). The theory examines how societies, communities, or organizations navigate the labyrinth of conflicting options and derive collective meaning from fragmented individual actions or claims. Theoretically, collective choice confronts a fundamental limitation framed by Arrowʼs impossibility theorem: no choice method can simultaneously satisfy the tenets of inclusivity and rational deliberation, or reconcile democracy with rational judgment in collective choice.
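The tension behind Arrowʼs theorem can be seen in its simplest classical instance, the Condorcet cycle. The sketch below (an illustrative aside, with hypothetical voter preferences not drawn from the paper) shows fully inclusive pairwise majority voting producing a cyclic, and hence irrational, collective preference:

```python
# The Condorcet cycle: three voters rank three options; pairwise
# majority voting counts every voice equally (inclusive), yet the
# collective ranking it yields is cyclic, i.e. not rational.

# Each voter's ranking, best to worst (hypothetical preferences).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank option x above option y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# A beats B, B beats C, yet C beats A -- a cycle, so no option
# emerges as a rational collective 'best' choice.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

No aggregation rule escapes this in general; Arrowʼs theorem proves the conflict between inclusivity and collective rationality is structural, not a defect of any particular voting method.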

The concept of democracy intersects with efforts to resolve the intrinsic tension between demos and kratos in societies. The demos is the collective body of citizens, who value inclusivity and a diversity of views. Kratos, on the other hand, refers to the practical governance that enacts collective decisions. Kratos seeks to ensure that choices are coherent, just, and functional, often at the cost of excluding diverse viewpoints. The tension is evident, for instance, in debates around freedom of speech on social media platforms: while the demos values the free exchange of ideas, kratos necessitates measures to prevent the spread of misinformation or hate speech. The democratic legitimacy of political authority depends on finding a middle ground that intersects the aspirations of citizens with the need for efficient and effective governance.

Policy impact evaluation emerged as a critical response to the limitations of conventional collective choice theory. It helps the demos to participate in kratos. It also enables kratos to become less exclusionary in enforcing unity, simply through a more connective understanding of complex contradictions in collective choice.

Participatory evaluation applies various tools for achieving inclusivity and collective rationality, with varying success. Two assessment criteria are proposed to examine these differences. Inclusivity can be examined by how different tools incorporate epistemic blindness (Fricker) as a driver of bias in participatory evaluation. Epistemic blindness is the opposite of epistemic certainty. It arises from ignorance or lack of knowledge, or from the exclusion of anything that does not match established patterns of understanding social phenomena (Kahneman). Epistemic blindness cannot always be eliminated: our knowledge is limited, or bounded (Simon), especially in evaluations of complex interventions, due to uncertainty. Uncertain things carry a void at their core. Such things can be better understood by a blindsighted than by an enlightened evaluator.

The second criterion examines how effectively, from the aspect of the collective, these tools aggregate the contributions gathered through the participatory process. Various aggregation procedures exist, ranging from the micro level (focusing on individual data points) to the macro level (looking at broader concerns), and from the meso level (observing intermediate processes and correlations) to the meta level (focusing on the overlap between shared concerns). These procedures are designed with dissimilar logics of synthesis and yield strikingly different results (Radej, 2021a). This suggests that the selection of an aggregation method in evaluation must not be arbitrary but must correspond to the complex nature of interventions.
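How strongly the choice of aggregation level can shape the result is illustrated by Simpsonʼs paradox. The sketch below is my illustrative analogy, not an example from the paper, and the approval figures are hypothetical: micro-level (within-group) and macro-level (pooled) aggregation of the very same data point in opposite directions.

```python
# Simpson's paradox: the same data, aggregated at two levels, yields
# contradictory conclusions. Hypothetical approval counts for a pilot
# intervention vs a control, in two stakeholder groups.
#                        (approvals, responses)
group_data = {
    "group_1": {"pilot": (8, 10),  "control": (70, 90)},
    "group_2": {"pilot": (20, 90), "control": (2, 10)},
}

# Micro level: within EACH group, the pilot outperforms the control.
for name, d in group_data.items():
    pilot_rate = d["pilot"][0] / d["pilot"][1]
    control_rate = d["control"][0] / d["control"][1]
    print(name, pilot_rate > control_rate)  # True for both groups

# Macro level: pooled across groups, the ranking reverses.
pilot_total = (8 + 20) / (10 + 90)      # 28/100 = 0.28
control_total = (70 + 2) / (90 + 10)    # 72/100 = 0.72
print(pilot_total > control_total)      # False
```

Neither level is simply wrong; they answer different questions, which is why the aggregation logic must be matched to the nature of the intervention rather than chosen by convenience.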

To explore both the inclusiveness and the collective rationality of design-based approaches, this paper examines four popular tools in participatory evaluation: SenseMaker by Cognitive Edge, Outcome Harvesting by Wilson-Grau, Most Significant Change by Rick Davies, and Causal Mapping (Copestake et al., Goddard, Powell).

The paper begins by introducing the four tools. It then develops the two key assessment criteria: epistemic blindness and the aggregation problem. The core of this meta-evaluative exercise delves into how well the tools satisfy the two criteria. While the paper acknowledges the toolsʼ strengths and positive contributions, it argues that they fail to meet the selected criteria in complex circumstances. It concludes with a call for an anti-postmodern, or post-constructivist (Barkin), turn in evaluation theory, one that achieves inclusiveness and collective rationality by intersecting them in the empty middle and then reading the obtained evaluation messages as if blindsighted.

The four tools were presented and discussed at the 5th Western Balkans Evaluatorʼs Network (WBEN) conference in Ljubljana in late September 2023.[1] The Slovenian Evaluation Society hosted the event on behalf of WBEN.

[1] Conference webpage. Accessed May 2024.

>>>This is the opening chapter of a forthcoming Working Paper of Slovenian Evaluation Society 1/XVIII; Spring 2024. For a list of sources used, see the original work. <<<

Acknowledgements: Freely accessible artificial intelligence tools were employed in translating this text from Slovene to English and in English-language editing: Google* (Translate, Gemini), Grammarly, OpenAI ChatGPT*, and MS Bing*. (*Also employed in the literature review.)

Bojan Radej

A methodologist in social research from Ljubljana, Slovenia; evaluator. Author of "Social Complexity" (Vernon Press, 2021).