Policy evaluation (in)activity unravelled. A configurational analysis of the incidence, number, locus and quality of policy evaluations in the Flemish public sector

Publication date: 2014-05-13

Author:

Pattyn, Valérie; Brans, Marleen; Hondeghem, Annie

Keywords:

policy evaluation, evaluation capacity building, Flanders, configurational comparative methods, QCA, New Public Management

Abstract:

Despite a general and growing consensus about the intrinsic value of policy evaluation for professional government, evaluation activity varies widely across public sector organizations. In explaining this variance, the evaluation literature offers only fragmented insights. This study aims to fill this void: we examine which combinations of conditions explain policy evaluation (in)activity.

By focusing on the Flemish public sector, the research offers an alternative to the many studies that focus on the world leaders in evaluation practice. In international comparison, Flanders belongs to the so-called second wave of countries (regions) that generally introduced policy evaluation around the turn of the millennium. The Flemish administration constitutes an interesting case of a public sector where New Public Management-oriented reforms have put policy evaluation on the governance agenda. In 2006, Flanders implemented a major reform, coined Beter Bestuurlijk Beleid, which formally anchored policy evaluation in the task prescriptions of departments and agencies. Departments are charged with evaluating the instruments, outputs and outcomes of policy implementation; agencies, in turn, are required to provide the necessary input for these evaluations. Triggered by the reforms, evaluation is now widely practised by departments and agencies, although major discrepancies in evaluation activity remain. Twenty-seven organizations are included in the analysis.

In this study, we conceive of evaluation activity in a multi-dimensional way, distinguishing between (a) the incidence of evaluation; (b) the regularity of evaluation practice; (c) the locus of evaluation implementation; and (d) the application of evaluation quality measures. For each dimension, we ask two specific questions. Which conditions have the most explanatory potential?
And how can they be combined to provide a sufficient explanation?

We started from an unconventionally large number of potentially relevant conditions and organized them along the principles of the actor-centered neo-institutionalist framework. This framework combines characteristics of the two broad research traditions in the social sciences: the one that stresses agency and the one that emphasizes structure. The actor-centered conditions were grouped in two categories: (i) evaluation capabilities (i.e. evaluation skills, evaluation budget, evaluation staff, pool of external evaluators, and availability of monitoring information); and (ii) actor orientations (i.e. evaluation demand from organizational management, from the sector minister, from parliament, and from other organizations, and organizational support for evaluations). The structural conditions were categorized in the following groups: (iii) conditions related to the institutional setting (i.e. organizational size, organizational status, autonomy, anchorage of the evaluation function, participation of staff in training and networking, and evaluation requirements); (iv) policy issue characteristics (i.e. issue salience, competition, and measurability of outputs and outcomes); and (v) conditions concerning the path of the organization (i.e. pre-reform evaluation experience, organizational stability, and ministerial stability).

We relied extensively on configurational comparative methods, combining two methods that seldom feature together in a single study: the Most Similar Different Outcome/Most Different Similar Outcome technique (MSDO/MDSO) and crisp-set Qualitative Comparative Analysis (csQCA). To this end, we translated our cases into configurations of Boolean (0/1) conditions and outcomes. MSDO/MDSO helped us to identify the conditions most relevant for each activity dimension; via csQCA we unravelled the different combinations of conditions that can be made from them.
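The core csQCA logic described above can be illustrated in a few lines of code: cases are coded as Boolean (0/1) configurations of conditions, configurations consistently linked to a positive outcome are flagged as sufficient, and pairs of sufficient configurations differing in exactly one condition are merged to eliminate logically redundant conditions. This is a minimal sketch, not the study's actual analysis; the cases and condition values are invented placeholders.

```python
# Minimal sketch of the crisp-set QCA idea used in the study.
# Cases are coded as Boolean (0/1) configurations of conditions;
# the data below are hypothetical placeholders, not the study's coding.

from itertools import combinations

# (condition_A, condition_B, condition_C) -> outcome (1 = evaluation activity)
cases = [
    ((1, 1, 1), 1),
    ((1, 1, 0), 1),
    ((1, 0, 1), 1),
    ((0, 1, 1), 0),
    ((0, 0, 1), 0),
    ((0, 0, 0), 0),
]

def truth_table(cases):
    """Group cases by configuration and collect the outcomes observed."""
    table = {}
    for config, outcome in cases:
        table.setdefault(config, set()).add(outcome)
    return table

def sufficient_configs(cases):
    """Configurations observed only with a positive outcome (fully consistent)."""
    return sorted(c for c, outs in truth_table(cases).items() if outs == {1})

def reduce_once(configs):
    """One Quine-McCluskey-style minimization pass: merge pairs of
    configurations that differ in exactly one condition, marking that
    condition as logically redundant ('-')."""
    merged = set()
    for a, b in combinations(configs, 2):
        diff = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diff) == 1:
            i = diff[0]
            merged.add(a[:i] + ('-',) + a[i + 1:])
    return sorted(merged, key=str) if merged else configs
```

With the placeholder data, three configurations are sufficient for the outcome, and one minimization pass reduces them to two simpler terms in which one condition each is redundant. Full csQCA software additionally iterates the minimization to a fixed point and handles logical remainders, which this sketch omits.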
The csQCA analysis was conceived in two steps: only when the structural conditions could not provide a fully consistent explanation did we include the actor-centered categories in the analysis. Of the remote (structural) conditions, the anchorage of the evaluation function, pre-reform evaluation experience and the perceived measurability of outcomes stand out in explaining evaluation activity. The skills to outsource evaluations, evaluation demand from organizational management, and evaluation demand from the sector minister are the most influential actor-related conditions. We strongly emphasize, however, that the role of each condition can only be fully captured by taking the broader configurational picture into account.

Beyond a contribution to theory-building about evaluation, the research offers multiple lessons for practitioners and policy-makers. As such, we hope this study is a step towards more evidence-based evaluation capacity building. If policy evaluation is a genuine concern for governments, a comprehensive evaluation capacity building strategy should be developed. The most important actions to consider include establishing evaluation units, providing appropriate evaluation training, testing evaluation skills when recruiting staff for policy units, and building evaluation awareness among organizational managers and the policy advisors of ministers.