Conducting Quality Evaluations: Four Generations of Meta-Evaluation

Number 13, EVALUATION

Evaluation, as an applied science, entails a perpetual quest for improvement, as evaluators seek the codes and instruments that will allow them to ensure the quality and validity of their conclusions and recommendations. Given the range of evaluation objectives, a number of quality issues arise. First, the evaluation must centre on a specific need for information. Second, it must lead to a judgment on public actions that is based on explicit criteria. Third, it must generate useful, evidence-based recommendations. And last, it must provide information as an input for the decision-making process. As a result, the literature on evaluation quality is rich and constantly evolving.

This literature outlines four approaches to evaluation quality assurance. The first is the structural approach developed by Schwartz and Mayne (2005), which involves the elaboration of standards and other guiding principles to orient evaluative practice. The second is the systemic approach, which consists of ensuring the reliability of information collection mechanisms during the evaluative process (Bornmann et al. 2006). Third, unlike the systemic approach, which focuses on the information collection system, the formative approach allows evaluators to ensure the quality of the information at the time of its collection and production. Fourth, the summative approach is similar to the formative approach in that it also concentrates on information quality, but only once the information has been produced (Daigneault 2008). These approaches rely on precise instruments for their operationalization.

One method that links several quality assurance approaches together is meta-evaluation. Meta-evaluation, also referred to as “second-level evaluation”, is an application of the formative and summative quality assurance approaches (Daigneault 2008) because it allows evaluators to ensure the quality of their work both before and after an evaluation. Meta-evaluation also mobilizes instruments such as standards and guiding principles for its implementation; in this respect, it may also be viewed as a structural approach.

The origin of the term “meta-evaluation” is attributed to Michael Scriven and dates back to the 1960s (Cook 1978; Reineke and Welch 1986; Stufflebeam and Shinkfield 2007; Stufflebeam 2001a; 2011). Meta-evaluation is defined as the evaluation of an evaluation and, indirectly, of the evaluator (Scriven 1991). In an editorial entitled Meta-Evaluation Revisited, Scriven explains, “I published my first article about ‘meta-evaluation’ (Scriven 1969), a term I had invented somewhat earlier in a report to the Urban Institute, who had asked me for help in dealing with the non-comparability of the evaluations they had commissioned for several housing projects” (Scriven 2009, p. iii). The main rationale for meta-evaluation is to respond to criticisms and concerns about the value of evaluations. Reineke and Welch (1986) identify two expectations regarding meta-evaluation in Stufflebeam’s writings: on the one hand, evaluators are increasingly required to demonstrate the quality of their work; on the other hand, it is fitting that they should evaluate their own work. Meta-evaluation allows evaluators to meet this double expectation.

Despite the simplicity of the concept’s definition, the operationalization of meta-evaluation has taken place in several stages and has included the development of several tools for its implementation and several theoretical approaches for its analysis. Nevertheless, it seems essential to develop characteristics common to the profession in order to better identify the issues at stake in meta-evaluation (Cooksy and Caracelli 2008), which is now presented by several authors as a professional obligation (Hanssen et al. 2008; Jacob and Boisvert 2010; Stufflebeam and Shinkfield 2007; Stufflebeam 2011). When we speak of meta-evaluation, three recurrent questions emerge:

- (i) exactly what it is,
- (ii) how it can be justified, and
- (iii) when and how it should be used (Scriven 2009, p. iii).

Based on a review of the literature, our research will attempt to answer these questions. The objective of this exploratory study is to describe meta-evaluative practice.



February 2015
