
Running head: KOSAN PROGRAM EVALUATION PLAN

Program Evaluation Plan for Emergency Medical Response for the Athletic Trainer
Course
Brian C. Kosan
EDAE 639.801
Colorado State University
25 November 2015

Systematic instructional design has many important steps, which include
significant amounts of planning for the various stages of an instructional event. One of
the final steps and stages, if using the simple ADDIE model, is the evaluation of the
instructional program. Appropriate and effective evaluation of the instructional
design is essential to making informed decisions in refining and/or improving that
design (Smith & Ragan, 2005). Instructional design simply cannot take on its true
cyclic nature without evaluation. Quality evaluation will typically address four levels
(learner reaction, learning, behavior, and results) as explained by Kirkpatrick (1975),
and it should have both formative and summative components (Smith & Ragan,
2005; Angelo & Cross, 1993). Quality evaluation is also essential in accountability
to one's stakeholders.
Formative evaluation of an instructional program primarily focuses on various
forms of feedback that lend themselves to informing change. Aurora University has
a system in place for its adjunct faculty where a faculty mentor will observe at least
one class meeting per course per semester. My faculty mentor happens to be the
program director of the Athletic Training Education Program, Dr. Oscar Krieger.
Utilizing a standardized rubric from the University, Oscar will provide formative
evaluation on areas such as pedagogy, organization, and learner engagement. Also
during this class observation, the faculty mentor will have the students complete an
anonymous "happy sheet" that has numerous 5-point Likert scales as well as areas
for comments. The results from Oscar's observations and the student responses are
combined into a single formative report, which Oscar and I meet to discuss at some
point during the semester. In the past two meetings, I have taken the feedback and
developed action plans to implement the improvements suggested by the formative
evaluation in either the current instructional event or for the next semester.

At Aurora University, summative evaluation typically comes at the end of the
course. We essentially utilize two types of summative program
evaluation: another set of happy sheets and review of achievement of stated
learning outcomes. The end-of-course questionnaire seeks to quantify Kirkpatrick's
(1975) four evaluation levels. The scores from these questionnaires are tabulated
and they are then compared to our peers within a particular academic program, our
department, and then across the entire university. These questionnaires align fairly
well with the first three levels, but the final level, results, is more directly measured
through data we supply to the University. I am required, at the end of a semester,
to submit a report to my program director detailing how many of my students
achieved each stated learning outcome as well as what the average score was for
the particular assessment. If there were any deficiencies noted, we would need to
take corrective action. Thankfully, I have never encountered this issue; if I ever do, I
am sure I will need to rely on my faculty mentor to navigate the policies of the
university.
Improving accountability to stakeholders is another important component of
quality program evaluation. Especially with information gathered through
summative evaluation, stakeholders can see program efficacy through learner
satisfaction and achievement. Program efficacy ties directly into the concept of
return on investment as well (Phillips & Phillips, 2005). There are several sets of
stakeholders for my course, which include the learners themselves, my program
director, the broader hierarchy of Aurora University, and our educational accrediting
body, the Commission on Accreditation of Athletic Training Education (CAATE).
However, there is an additional, latent set of stakeholders that cannot be
adequately involved in the design process: future patients being cared for by the

graduating students. The learners are paying good money for the credits they are
taking at the University and will also need to effectively apply course material later
in clinical practice. The program director and the University proper want
well-prepared students graduating from their institution. CAATE needs to ensure
that our program conforms to established national norms and expectations.
Each of these stakeholders is engaged and included in the systematic
instructional design process (including evaluation) in different ways. The learners
are engaged through audience analysis initially, formative evaluation throughout
the semester, and summative evaluation at the end of the course. As discussed
previously, the program director is my faculty mentor, and he is actively involved
with the formative and summative evaluation of the course as well as directing
improvements and changes in instruction. Through participation in the
aforementioned evaluations, data become available to the University as well as to
CAATE through the program director. The program director can then serve as the
conduit through which feedback from these governing bodies is delivered and
appropriately implemented.
Quality evaluation of an instructional program provides many sound benefits.
These benefits include more informed decisions regarding improving, changing, or
otherwise augmenting the instructional design and increased accountability to the
event stakeholders. The evaluation plan for my course satisfies criteria set forth by
Kirkpatrick (1975) for quality program evaluation; it also includes formative and
summative components as suggested by Smith and Ragan (2005) as well as Angelo
and Cross (1993). The continued and diligent use of effective evaluation will allow
for the organic and cyclic progression of my systematic instructional design of my
course, Emergency Medical Response for the Athletic Trainer.

References

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook
for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass.
Kirkpatrick, D. L. (1975). Techniques for evaluating training programs. In D. L.
Kirkpatrick (Ed.), Evaluating training programs. Alexandria, VA: American
Society for Training & Development.
Phillips, J. J., & Phillips, P. P. (2005). ROI at work: Best-practice case studies from
the real world. Alexandria, VA: American Society for Training & Development.
Smith, P. L., & Ragan, T. J. (2005). Instructional design (3rd ed.). Hoboken, NJ: John
Wiley & Sons.
