Program Evaluation Plan for Emergency Medical Response for the Athletic Trainer
Course
Brian C. Kosan
EDAE 639.801
Colorado State University
25 November 2015
Summative evaluation typically comes at the end of our courses at Aurora University. We essentially utilize two types of summative program evaluation: another set of happy sheets and a review of achievement of stated learning outcomes.
The end-of-course questionnaire seeks to quantify Kirkpatrick's (1975) four evaluation levels. The scores from these questionnaires are tabulated and then compared against our peers within a particular academic program, our department, and across the entire university. These questionnaires align fairly well with the first three levels, but the final level, results, is more directly measured through data we supply to the University. I am required, at the end of a semester,
to submit a report to my program director detailing how many of my students
achieved each stated learning outcome as well as what the average score was for
the particular assessment. If any deficiencies were noted, we would need to take corrective action. Thankfully, I have never encountered this issue; were it to arise, I am sure I would need to rely on my faculty mentor to navigate the policies of the university.
Improving accountability to stakeholders is another important component of quality program evaluation. Especially with information gathered through
summative evaluation, stakeholders can see program efficacy through learner
satisfaction and achievement. Program efficacy ties directly into the concept of
return on investment as well (Phillips & Phillips, 2005). There are several sets of
stakeholders for my course, which include the learners themselves, my program
director, the broader hierarchy of Aurora University, and our educational accrediting
body, the Commission on Accreditation of Athletic Training Education (CAATE).
However, there is an additional, latent set of stakeholders that cannot be
adequately involved in the design process: future patients being cared for by the
graduating students. The learners are paying good money for the credits they are taking at the University and need to apply course material effectively in later clinical practice. The program director and the University proper want well-prepared students graduating from their institution. CAATE needs to ensure that our program conforms to established national norms and expectations.
Each of these stakeholders is engaged and included in the systematic
instructional design process (including evaluation) in different ways. The learners
are engaged through audience analysis initially, formative evaluation throughout
the semester, and summative evaluation at the end of the course. As discussed
previously, the program director is my faculty mentor, and he is actively involved
with the formative and summative evaluation of the course as well as directing
improvements and changes in instruction. Through participation in the aforementioned evaluations, data are made available to the University and to CAATE via the program director. The program director can then serve as the conduit through which feedback from these governing bodies is delivered and appropriately implemented.
Quality evaluation of an instructional program provides many sound benefits.
These benefits include more informed decisions regarding improving, changing, or otherwise augmenting the instructional design and increased accountability to the program's stakeholders. The evaluation plan for my course satisfies criteria set forth by
Kirkpatrick (1975) for quality program evaluation; it also includes formative and
summative components as suggested by Smith and Ragan (2005) as well as Angelo
and Cross (1993). The continued and diligent use of effective evaluation will allow for the organic and cyclic progression of the systematic instructional design of my course, Emergency Medical Response for the Athletic Trainer.
References
Angelo, T.A., & Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers (2nd ed.). San Francisco, CA: Jossey-Bass.
Kirkpatrick, D.L. (1975). Techniques for evaluating training programs. In D.L. Kirkpatrick (Ed.), Evaluating Training Programs. Alexandria, VA: American Society for Training & Development.
Phillips, J.J., & Phillips, P.P. (2005). ROI at Work: Best-Practice Case Studies from the Real World. Alexandria, VA: American Society for Training & Development.
Smith, P.L., & Ragan, T.J. (2005). Instructional Design (3rd ed.). Hoboken, NJ: John Wiley & Sons, Inc.