Assessment
Victoria Acevedo
CCHE 687
Dr. King
December 8, 2016
Assessing student learning may require both direct and indirect measures (VanDerLinden, 2016). Direct
measures of student learning provide compelling evidence that learning outcomes were
achieved while indirect measures show signs that students may have learned but the
evidence is less convincing (VanDerLinden, 2016). After reviewing the Northern Arizona
University [NAU] Course Evaluation structure, it appears NAU would benefit from
improving the questions on course evaluations to obtain more qualitative information and
direct evidence of learning (NAU, 2013). Along with grades and course evaluations,
colleges and departments could use focus groups and rubrics to gauge student learning,
gathering information from multiple sources and using the results to guide student learning
improvement (Keeling, Wall, Underhile, & Dungy, 2008). While it is important to meet
assessment requirements, measures created without faculty input can create institutional problems. Therefore,
institutions that are able to explain the benefits of assessment and involve faculty in its
creation and implementation may have less pushback from faculty regarding
assessment measures (Perrine, Sweet, Blythe, Kopacz, Combs, Bennett, Street, & Keeley,
2010).
Grades and course evaluations are frequently used to assess student learning.
However, grades and course evaluations are indirect measures of assessment. A grade can
be used as a sign that a student has learned something, but we, as educators, do not know
for sure what they learned (VanDerLinden, 2016). The grade does not allow us to
determine the strengths and weaknesses of a student's performance (Perrine et al., 2010).
Direct measures, by contrast, provide tangible evidence of what the student has learned as
a result of his or her participation in a course.
Because course evaluations are self-reports of student learning and may be biased, it would be
beneficial to get deeper information from students regarding why they respond a certain
way in course evaluations. If course evaluations pose appropriate questions, they can be a
meaningful and useful tool for faculty, departments, and colleges within an institution to
assess and improve student learning (Bresciani, Gardner, & Hickmott, 2009). With the 2013
format of NAU course evaluations, all responses are indicated on a
four-point scale from "strongly disagree" to "strongly agree" (NAU, 2013). The answers
to these questions would be informative but superficial. If the course evaluations offered
students the opportunity to freely write responses, the results of the evaluations could be
more telling of the occurrence of student learning. Open-ended questions do not lead
individuals to certain responses, but allow them to describe their experiences, which
could provide rich, detailed information on what students did or did not learn and why.
Questions on NAU course evaluations such as, "Tell us about what was helpful (or
not) in your learning experiences in this course" or "Describe what would be useful in
helping students learn the material for this course" could be very beneficial. These
questions would allow students to explain what they feel are their strengths or
weaknesses as students. Similarly, these questions would allow students to explain the
strengths and weaknesses of the curriculum and assist in improvement efforts. Such
qualitative measures are excellent for gathering rich detail and a deeper understanding of survey data.
The 2013 NAU Course Evaluations provided quantitative data because ordinal scales
were used to measure student responses (Schuh et al., 2001). Conducting focus groups of
students would add qualitative information to the quantitative data from the evaluations.
Therefore, focus groups would make the use of multiple assessment methods possible.
Using purposeful, stratified sampling for focus groups would provide important
insight into the experiences of different groups of students. The stratified samples would
consist of subgroups of the population who completed and submitted course evaluations.
The students would participate in semi-structured interviews to learn the reasons behind
their evaluation responses and whether learning outcomes were
achieved (Bresciani et al., 2004). Themes and trends would be discovered during the
analysis of results (Schuh et al., 2001), and rubrics could be used to measure whether a
department was successful in achieving its learning outcomes. Rubrics assist in the
triangulation of data when used with surveys (Levy, McKelfresh, & Donavan, 2012).
The sole use of grades and course evaluations should no longer be an acceptable
assessment practice. Institutions are expected to
establish clear learning outcomes and evidence that evaluation of learning outcomes has
been conducted (Bresciani et al., 2009). This information must be used to improve
programs. Patrick Callan said, "It is an embarrassment that we can tell people anything
about education except how well students are learning" (VanDerLinden, 2016). NAU
should strive to utilize effective assessment measures to gauge and improve student
learning.
References
http://www.rpajournal.com/dev/wp-content/uploads/2013/11/SF1.pdf
Bresciani, M., Zelna, C., & Anderson, J. (2004). Assessing student learning and development: A handbook for practitioners. Washington, DC: NASPA.
Bresciani, M. J., Gardner, M. M., & Hickmott, J. (2009). Demonstrating student success: A practical guide to outcomes-based assessment of learning and development in student affairs. Sterling, VA: Stylus.
Higher Learning Commission [HLC]. (2016). The criteria for accreditation and core components. Retrieved from https://www.hlcommission.org/Criteria-Eligibility-and-Candidacy/criteria-and-core-components.html
Keeling, R. P., Wall, A. F., Underhile, R., & Dungy, G. J. (2008). Assessment reconsidered: Institutional effectiveness for student success. Washington, DC: International Center for Student Success and Institutional Accountability.
Levy, J. D., McKelfresh, D. A., & Donavan, J. A. (2012). A scale for success. Talking Stick.
Northern Arizona University [NAU]. (2013). Fall 2013: NAU course evaluation. Retrieved from 4383285-dt-content-rid-36653456_1/courses/1167-NAU00-CCHE-687-SEC001-7017.NAU-PSSIS/FALL%202013%20NAU%20Course%20Eval%20Questions%282%29.pdf
Perrine, R., Sweet, C., Blythe, H., Kopacz, P., Combs, D., Bennett, O., Street, S., & Keeley, J. (2010).
Schuh, J., Upcraft, M. L., & Associates (2001). Assessment practice in student affairs: An applications manual. San Francisco, CA: Jossey-Bass.
VanDerLinden, K. (2016). Type of assessment in higher education part II [PowerPoint]. Retrieved from 4383249-dt-content-rid-36653476_1/courses/1167-NAU00-CCHE-687-SEC001-7017.NAU-PSSIS/Module%203_Learning%20Outcomes%20Assessment.mp4