
Consumer-Oriented Approach

 Typically a summative evaluation approach

 This approach advocates consumer education and independent reviews of products

 Scriven’s contributions grew out of the groundswell of federally funded educational programs in the 1960s

 Differentiation between formative/summative eval.

 Typically used by gov’t. agencies and consumer advocates (e.g., EPIE)

 What does one need to know about a product before deciding whether to
adopt or install it?

 Process information

 Content information

 Transportability information

 Effectiveness information

 Strengths: valuable info for those who don’t have time to study products themselves; advances consumers’ knowledge of appropriate criteria for selecting programs/products

 Weaknesses: can increase product cost, stringent testing may “crimp” creativity, local initiative lessened b/c of dependency on outside consumer services

Expertise-Oriented Approach

 Depends primarily upon professional expertise to judge an institution, program, product, or activity

 This is the first view that relies heavily on subjective expertise as the key
evaluation tool

 Examples: doctoral exams, board reviews, accreditation, reappointment/tenure reviews, etc.

 Strengths: those well-versed in the field make decisions, standards are set, encourages improvement through self-study
 Weaknesses: whose standards? (personal bias), expertise credentials, can
this approach be used with issues of classroom life, texts, and other
evaluation objects or only with the bigger institutional questions?

Participant-Oriented Approach

 Heretofore, the human element was missing from program evaluation

 This approach involves all relevant interests in the evaluation

 This approach encourages support for representation of marginalized, oppressed, and/or powerless parties

 Depend on inductive reasoning [observe, discover, understand]

 Use multiple data sources [subjective, objective, quant, qual]

 Do not follow a standard plan [process evolves as participants gain experience in the activity]

 Record multiple rather than single realities [e.g., focus groups]

 Stake’s Countenance Framework

 Description and judgment

 Responsive Evaluation

 Addressing stakeholders’ concerns/issues

 Case studies describe participants’ behaviors

 Naturalistic Evaluation

 Extensive observations, interviews, documents, and unobtrusive measures serve as both data and reporting techniques

 Credibility vs. internal validity (cross-checking, triangulation)

 Applicability vs. external validity (thick descriptions)

 Auditability vs. reliability (consistency of results)

 Confirmability vs. objectivity (neutrality of evaluation)

 Participatory Evaluation

 Collaboration between evaluators & key organizational personnel for practical problem solving

 Utilization-Focused Evaluation
 Base all decisions on how everything will affect use

 Empowerment Evaluation

 Advocates for societies’ disenfranchised, voiceless minorities

 Advantages: training, facilitation, advocacy, illumination, liberation

 Unclear how this approach is a unique participant-oriented approach; some in the evaluation field argue it is not even ‘evaluation’

 Strengths: emphasizes human element, gains new insights and theories, flexibility, attention to contextual variables, encourages multiple data collection methods, provides rich, persuasive information, establishes dialogue with and empowers quiet, powerless stakeholders

 Weaknesses: too complex for practitioners (more for theorists), political element, subjective, “loose” evaluations, labor intensive (which limits the number of cases studied), cost, potential for evaluators to lose objectivity

Alternative Approaches Summary

Five cautions about collective evaluation conceptions presented so far

1) Writings in evaluation are not models/theories

 Evaluation is a transdiscipline (not yet a distinct discipline)

 “Theoretical” underpinnings in evaluation lack important characteristics of most theories

 Information shared is: sets of categories, lists of things to think about, descriptions, etc.

2) “Discipleship” to a single ‘model’ is dangerous

 Use of different approaches as heuristic tools, each appropriate for the situation, is recommended

3) Calls to consolidate evaluation approaches into a single model are unwise

 These efforts are based on attempts to simplify evaluation

 Approaches are based on widely divergent philosophical assumptions

 Development of a single omnibus model would prematurely close a divergent phase in the field
 Just because we can does not mean we should; would evaluation be
enriched by synthesizing the multitude of approaches into a few
guidelines?

4) The choice of an evaluation approach is not empirically based

 Single most important impediment to development of more adequate theory and models in evaluation

5) Negative metaphors underlying some approaches can cause negative side effects

 Metaphors shared in ch. 3 are predicated on negative assumptions in two categories:

 Tacitly assume something is wrong in the system being evaluated (short-sighted indictment)

 Based on assumptions that people will lie, evade questions, or withhold information as a matter of course

 Approaches shared in ch. 4-8 influence evaluation practices in important ways

 Help evaluators think diversely

 Present & provoke new ideas/techniques

 Serve as mental checklists of things to consider, remember, or worry about

 Alternative approaches’ heuristic value is very high, but their prescriptive value is less so

 Avoid mixing evaluation’s philosophically incompatible ‘oil/water’ approaches; eclectic use of alternative approaches can be advantageous to high-quality evaluation practices
