
A Paradigm for Assessing Conceptual and Procedural Knowledge in Engineering Students

Journal of Engineering Education, October 2007, by Roman Taraban, Alli DeFinis, Ashlee G. Brown, Edward E. Anderson, and M.P. Sharma

Conceptual and procedural knowledge are two mutually-supportive factors associated with the development of engineering skill. The present study
extends previous work on undergraduate learning in engineering to provide
further validation for an assessment paradigm capable of quantifying
engineering students' conceptual and problem-solving knowledge. Eight
students who were enrolled in an introductory thermodynamics course and
four who were enrolled in the course sequel provided verbal protocol data
as they used instructional software. They were compared to existing data
from a cohort of eleven science and engineering majors who had not taken
thermodynamics. The results replicated earlier findings showing more
cognitive activity on computer screens requiring overt user interaction
compared to text-based screens. The data also indicated that higher- versus
lower-performing students, based on course grades, engaged in more
higher-order cognitive processing. There was no evidence that students engaged in deeper cognitive processing as they advanced through the engineering curriculum.

Keywords: cognitive processing, instructional software, skill development, assessment

I. INTRODUCTION

A pressing goal of engineering education today is to find ways to draw more students
at all levels into the culture and practices of engineers. In order to do this, we need
innovative assessments capable of telling us in more detail about how students
approach and think about engineering concepts and problems. This position is
consistent with the conclusions of other researchers:

Conventional metrics such as standardized examinations or accumulation of credit hours are no longer adequate to assess fully the complex outcomes of engineering
education. Instead, we need measures that examine the qualitative changes in students'
thinking processes [1, p. 39].

The goal of this research was to extend our work on the development of an
assessment paradigm capable of quantifying engineering students' conceptual and
problem-solving knowledge [2]. The development of this assessment tool has drawn
on several contemporary theories and approaches to learning.

A principle of constructivism in student learning [3] has been adopted by researchers in engineering education. A constructivist approach considers the preconceptions [4]
that students bring to the learning situation, because students build on what they
already know. Through active, hands-on learning [3, 4], students extend and refine increasingly adaptive cognitive representations and associated skills in their domain of
training. Effective problem solving [5] is closely associated with concept learning,
making inferences, and categorization [6], which represent distinct components of
engineering skill.

When students understand a concept or problem, they do so along a continuum that can be characterized as extending from shallow to deep knowledge. A distinction
between shallow and deep knowledge has been well articulated in the research
literature on text representation and comprehension [7,8].

Shallow knowledge consists of explicitly mentioned ideas in a text that refer to: lists
of concepts, a handful of simple facts or properties of each concept, simple definitions
of key terms, and major steps in a procedure (not the detailed steps). Deep knowledge
consists of coherent explanations of the material that fortify the learner for generating
inferences, solving problems, making decisions, integrating ideas, synthesizing new
ideas, decomposing ideas into subparts, forecasting future occurrences in a system,
and applying knowledge to practical situations. Deep knowledge is presumably
needed to articulate and manipulate symbols, formal expressions, and quantities,
although some individuals can master these skills after extensive practice without
deep mastery [7, p. 6].

The most prominent outcomes of deep knowledge are longer-term retention of information due to more elaborated cognitive representations of the knowledge, and
significant advantages in transferring the knowledge to novel situations because the
knowledge is not tied to specific rote situations and procedures. Classic studies on the
development of expertise in physics showed that novice undergraduates are easily
misled by surface problem features. For instance, when given a sorting task, they
readily sorted together problems involving inclined planes, or problems involving
pulleys, with little regard for the underlying principles involved in the problem (e.g., conservation of energy) that would allow sorting on a deeper, more meaningful level [9]. When given a story problem, novices are also known to "work backward" from the unknown variable value, patching together equations that come to mind involving that variable [10; see also 11], whereas experts show forward-thinking reasoning,
categorizing the problem and identifying the relevant physical principles as they read
through the problem. Experts pay attention to what information is already given in a
problem, and they anticipate what they might have to calculate and possible ways of
carrying out those calculations, in advance of actually pursuing a specific solution.

Research in engineering education has suggested that students strive to develop conceptual knowledge, but, unfortunately, do so at low cognitive levels. In a study of the learning effects of a computer-based module on the topic of control systems [12],
the researchers found greater gains at lower cognitive levels of Bloom's taxonomy
[13] (Level 2: Comprehension; Level 3: Application) than at higher levels (Level 4:
Analysis; Level 6: Evaluation). Other research has identified misconceptions held by engineering students regarding basic engineering concepts, like rate and energy [14], and concept inventories have been developed to identify misconceptions within single domains [15, 16] and across multiple domains [17]. Engineering students often lack deep
understanding of the concepts and principles that underlie their areas of training [18].
Equally important, engineering faculty often underestimate the difficulty that students
face in understanding many of these concepts [17].

In an effort to better understand the specific factors underlying learning in engineering
undergraduates, our research program has taken a "bottom-up" approach [19], probing
the details of engineering students' study behaviors [20], the effects of specific
resources on classroom performance, such as on-line homework exercises [21],
students' knowledge and growth in metacognitive comprehension strategies and
epistemological orientations toward knowledge [22], and their cognitions while
working through interactive learning content and exercises [2]. This paper is an
extension of previous work on learning thermodynamics in an interactive, computer-
based context [23,24]. Earlier work tested science and engineering majors who had
not previously taken a thermodynamics course [2]. The current study includes
students enrolled in introductory and advanced thermodynamics courses. The major
goal was to apply our assessment method to students who were currently studying the
topics in our materials and to a group who had covered the material in an earlier
course and who were taking the course sequel, and thereby to test the applicability of
the assessment method to a broader range of engineering students. Two main
questions were addressed:

* Using students in Thermodynamics I and II, could we confirm the reliability of our
rubric for assessing cognitive processing?

* Using students in Thermodynamics I and II, could we replicate an earlier finding that interactive learning contexts were relatively more evocative of higher-order cognitive processes than text-oriented contexts?

As a further test of the validity of this paradigm, we compared the extent to which
students engage in higher-order cognitive processing depending on course level (no
thermodynamics course, Thermo I, Thermo II) and their grades in the
thermodynamics courses.

Because the results presented here are based on a small number of participants, the
reader should treat the findings with caution. This study is largely exploratory,
attempting to build a foundation on which to base more comprehensive investigations
with larger student samples.

II. CASE STUDY

The questions in this study were addressed through the collection and analysis of
verbal protocol ("think-aloud") data. Verbal protocols are open-ended think-aloud
reports, through which participants are asked to verbalize what they are thinking as
they work through a task, without attempting to interpret or summarize the materials
for the experimenter, unless those interpretations or summaries are a natural part of
their thought processes. There are established precedents for using students' overt
verbalizations to identify the cognitive representations that they construct while
completing a task [10,11,25-29].

A verbal protocol methodology raises concerns about differences in students' abilities to verbalize their thoughts. It is important to note that concurrent verbalization of
spontaneous thoughts is qualitatively different than explanation or introspection. The
latter may be more difficult for knowledgeable people who are not skilled in
verbalizing. The former, which represents the verbal protocol task, should minimize
individual differences in language ability.

Students in this study read text, listened to narrations, interacted with simulations, and
solved problems using instructional software. This software implemented active-
learning methods and exploited state-of-the-art technology and authoring tools for
learning. These materials differed from traditional lecture and textbook learning
resources in their ability to require overt responses from the student and to provide
immediate feedback to student inputs.

A. Participants, Materials, and Data-Collection and Coding Procedures

Eight undergraduate students who were currently enrolled in Thermodynamics I at the University of Wyoming and four undergraduate students enrolled in Thermodynamics
II at Texas Tech University were recruited by their course instructors. To allow for a
contrast of higher-performing and lower-performing students in the data analysis, the
instructors identified an equal number of the two types of students. Table 1 shows
percent total course points and final course grade (A-F) for each student. To reduce
bias in course grading, the instructors were blind to the results of the verbal protocol
analyses until the end of the semester in which participants were recruited and all
grades had been assigned.

The students were each paid $20 for approximately one hour of participation. These
two cohorts were selected because they represented students for whom the material
was either current (Thermo I) or for whom the material had already been covered
(Thermo II). The data from these participants were complemented with data from an
earlier study [2]. Participants in the earlier study were science or engineering majors
who had not taken a thermodynamics course, but who were at an academic level
(generally, sophomore level) appropriate for taking thermodynamics.

The materials were computer-based instructional supplements authored by E. E. Anderson for the textbook Thermodynamics: An Engineering Approach, 4th Edition
[30]. The computer screens present students with text content, tables, figures, and
graphs. They also include active-learning screens with interactive exercises, graphical
modeling, physical world simulations, exploration, and quiz screens. More extensive
descriptions of these materials can be found elsewhere [2, 31]. The data for this study
were collected while students interacted with Chapter 2: Thermodynamic Properties.
Prior to the start of this study, the individual computer screens were classified by the experimenters as (a) text only, (b) text plus table, figure, or graphic, (c) interactive, or (d) quiz, so that these distinctions among learning environments could be used in the data analyses.
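
As an illustration of how such a screen inventory might be represented for analysis, the sketch below defines the four categories and a screen-to-category lookup in Python. The labels and the example mapping are our assumptions for illustration, not the study's actual classification of the Chapter 2 screens.

from enum import Enum

class ScreenType(Enum):
    """The four screen categories used to contextualize utterances."""
    TEXT_ONLY = "text only"
    TEXT_PLUS_GRAPHIC = "text plus table, figure, or graphic"
    INTERACTIVE = "interactive"
    QUIZ = "quiz"

# Hypothetical mapping from screen number to category; the study's actual
# inventory of Chapter 2 screens is not reproduced here.
screen_types = {
    1: ScreenType.TEXT_ONLY,
    2: ScreenType.TEXT_PLUS_GRAPHIC,
    3: ScreenType.INTERACTIVE,
    4: ScreenType.QUIZ,
}

print(screen_types[3].value)  # "interactive"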

We chose to hold the materials constant in this experimental design in order to allow
us to compare differences in the cognitive processing of students with more or less
experience with the thermodynamics content. This is a typical manipulation in
developmental and expert-novice studies, in which individuals of differing abilities
are given identical materials to process. These specific materials were chosen out of convenience: they had a variety of learning contexts and were readily available. We
acknowledge that there are other types of materials that could be used effectively in
place of these.

Participants took part in the experiment through individual meetings in a quiet room
with the experimenter. They were given standard instructions for the verbal protocol
("think aloud") task. Specifically, they were asked to say out loud what they were
thinking as they worked through the materials, and not to try to explain or summarize
the material, unless that was a natural part of their thought processing. They were told
that whatever they said should simply reflect what was going through their minds
while working through the materials. The data were tape-recorded for later
transcription, with the permission of participants. During data collection, the primary
role of the experimenter was to prompt participants to continue to verbalize their
thoughts if they fell silent for an extended period.

The twelve protocols were transcribed by an assistant. A small number of corrections were made to the transcript in the course of coding by two independent coders. The
coders had been trained on the coding rubric in an earlier study [2]. Throughout the
process of coding the new transcripts, the coders were aware that they could add new
codes to the previously established coding rubric from [2] if necessary. No new codes
were added, indicating that the established rubric was sufficient for analyzing the
responses of students with more exposure to thermodynamics than the original group.

Coding the transcript involved two components: parsing the utterances (i.e., segmenting the protocols) and assigning labels to the segments that corresponded to
cognitive processes. The convention adopted for parsing was to segment idea units,
which were often indicated by noticeable pauses in a participant's speech pattern or a
change in thought. The parsed segments were typically clauses or sentences. Codes were assigned based on the context of the utterance (i.e., the four screen types), as indicated in the columns of Table 2. The codes also identified the cognitive processes that the student was carrying out in order to comprehend the materials, to interact with the materials, and to complete questions, as indicated in the rows of Table 2.
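
To make the unit of analysis concrete, a parsed and coded utterance can be thought of as a small record pairing an idea unit with its context and rubric code. The Python sketch below is hypothetical: the field names and example values are our own, and the actual codes are those listed in Table 2 and [2].

from dataclasses import dataclass

@dataclass
class CodedSegment:
    participant: str   # anonymized participant identifier
    screen_type: str   # one of the four screen categories
    text: str          # the parsed idea unit (typically a clause or sentence)
    code: int          # rubric code for the cognitive process verbalized

# Hypothetical example: a higher-level cognition (Codes 4-5) on a quiz screen
segment = CodedSegment(
    participant="P01",
    screen_type="quiz",
    text="So the pressure has to stay constant during the phase change.",
    code=4,
)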

III. RESULTS

A. Reliability of the Assessment Rubric

The process of transcription and coding of the twelve protocols was time-intensive
and took approximately 400 person-hours. The dataset consisted of 3,693 coded
utterances. The first analysis was carried out in order to confirm the reliability of the
coding rubric, which had been established and tested in prior research [2]. In the
initial coding, the raters agreed on parsing decisions 86.35 percent of the time. This
means that one rater coded a piece of text while the other rater combined the text with
a contiguous piece of text less than 14 percent of the time. For the 3,189 cases in
which both raters assigned a code, the raters agreed on codes 73.85 percent of the
time, which is a moderately high level of agreement. To further analyze the raters'
initial level of agreement, a Kappa statistic [32] was calculated. The use of Kappa is
often advocated by researchers because it adjusts the agreement measure for chance.
The Kappa statistic for these data was 0.69, which falls in the range of Substantial Agreement (0.61-0.80) [32]. Thus, with respect to the first goal of the present study, which was to test the reliability of the coding rubric established in a prior study, the results indicated that the rubric was reliable when used with a cohort of students who had more exposure to thermodynamics than the original group.
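
For readers who want the computation spelled out, the sketch below shows how percent agreement and the chance-corrected Kappa statistic [32] can be derived from two coders' parallel label sequences. The data are invented, and scikit-learn's cohen_kappa_score is a stand-in for whatever statistical software the study used.

from sklearn.metrics import cohen_kappa_score

coder_a = [1, 2, 4, 4, 3, 5, 1, 2, 3, 4]  # hypothetical rubric codes, rater A
coder_b = [1, 2, 4, 5, 3, 5, 1, 1, 3, 4]  # hypothetical rubric codes, rater B

# Raw percent agreement (analogous to the 73.85 percent reported above)
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Chance-corrected agreement (the study reports Kappa = 0.69)
kappa = cohen_kappa_score(coder_a, coder_b)

def landis_koch(k):
    """Interpretation bands for Kappa from Landis and Koch [32]."""
    for upper, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if k <= upper:
            return label

print(f"agreement = {agreement:.2%}, kappa = {kappa:.2f} ({landis_koch(kappa)})")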

Discrepancies in parsing and coding were resolved through discussion among the
raters, followed by mutual agreement. These final codes were used in subsequent
analyses. Representative examples of verbalizations can be found in the Appendix.
(An extensive sample of verbalizations can be found in [2] and the complete transcript
can be obtained from the first author.) In the remaining analyses, verbalizations about
moving between screens (e.g., "OK, on to the next page") were not considered because they related to using the software and not directly to the content. These represented a very small portion of the final codes (N = 132; 3.65 percent).

B. Differences in Cognitive Processing as a Function of Screen Type

For the analyses that follow, effects are deemed significant when p ≤ 0.05. The next
analysis considered the University of Wyoming participants, with the goal of
confirming the results of [2] with an independent cohort of students. The raw
frequencies of the final codes are shown in Table 2, with column sums indicating the
frequencies of codes for the four contexts in which an utterance was made, and row
sums indicating the frequencies of specific kinds of utterances, summed across
contexts. In agreement with [2], column sums show that Quiz screens evoked over 50
percent of the total verbalizations. The other screen types evoked 15-20 percent of the
verbalizations. In order to examine these data more carefully, specifically with respect
to cognitive processing, they were separated into comments associated with lower-
level cognitions (Codes 1, 2, and 3 in Table 2) and higher-level cognitions (Codes 4 and 5 in Table 2), based on a review of the cognitive research on comprehension and problem solving [8, 9, 33-37]. The mean number of verbalizations made on each screen
type was calculated and is summarized in Table 3. These data were subjected to
statistical tests by first confirming that the data for each of the screen types were
normally distributed, using the Kolmogorov-Smirnov test [38]. The data were then
submitted to a repeated measures analysis of variance using screen type and cognition
level (lower- and higher-level cognitions) as factors. The analysis showed a
significant effect for screen type [F(3, 42) = 53.38, p
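
The sketch below illustrates the two-step pipeline just described (normality check, then repeated-measures ANOVA), using scipy and statsmodels as stand-ins for the SPSS procedures [38] that were actually used; the file name and column layout are assumptions made for the example.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Long format: one row per participant x screen type x cognition level, with
# the mean number of verbalizations as the dependent variable (hypothetical file)
df = pd.read_csv("verbalization_means.csv")

# Kolmogorov-Smirnov normality check for each screen type
for screen, grp in df.groupby("screen_type"):
    x = grp["verbalizations"]
    d, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{screen}: D = {d:.3f}, p = {p:.3f}")

# Repeated-measures ANOVA with screen type and cognition level as
# within-subject factors
result = AnovaRM(df, depvar="verbalizations", subject="participant",
                 within=["screen_type", "cognition_level"]).fit()
print(result)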

C. Increases in Higher-Level Cognitions Based on Course Level

The remaining analyses are exploratory in nature and are pursued here to establish a
foundation for more comprehensive studies with larger samples. In the first analysis
we considered whether students process information more deeply as they advance
through the engineering curriculum. Data from [2] were combined with the present
data, specifically, think-aloud verbalizations from eleven science and engineering
students who had not taken a thermodynamics course and who responded to the same
material as the present participants, under identical experimental conditions. This
sample is labeled the pre-Thermo group. Data from those participants were coded by
the same coders as the present data and were not re-coded for the present analyses. In
order to exactly equate the content for the analyses, only that portion of the CD
materials that all participants completed was considered, specifically, the first 24
screens of Chapter 2 (Thermodynamic Properties) on the CD, which included seven
Text Only screens, six Text plus Table, Figure, or Graphic screens, four Interactive
screens, and seven Quiz screens. The subtopics in this portion are Pure Substances,
Phases, Phase Conversions, Pressure-Volume Processes, and Property Tables. The
analyses considered only higher-level and lower-level cognitions, because these
addressed the theoretical question of interest.

To begin to address the question at hand, related to increases in higher-level cognitions based on course level, Kolmogorov-Smirnov tests [38] confirmed that the data at the three course levels (pre-Thermo, Thermo I, Thermo II) were normally distributed, as were the data for higher-level and lower-level cognitions. An analysis of variance using course level (pre-Thermo, Thermo I, Thermo II), cognition level (lower- and higher-level cognitions), and screen type as factors showed significant effects for cognition level [F(1, 40) = 11.13, p

D. Higher-Level Cognitions As Related to Course Grades

A compelling issue is whether academically successful students process information at higher cognitive levels than less successful students. At the mid-semester point, the instructors for Thermo I and Thermo II selected volunteers for this study whom they perceived as higher-performing or lower-performing students, with the intention of testing this possibility. Final course grades were provided at the end of the semester
with the students' permission. Instructor ratings were consistent with final grades,
which were used to classify participants as higher- and lower-grade students (see
Table 1). An analysis of variance using grade-performance level, cognition level
(lower- and higher-level cognitions), and screen type as factors showed significant
effects for cognition level [F(1, 20) = 7.47, p

An additional confirmation of these effects used the final course points. Specifically,
nonparametric correlations were carried out separately between students' percent
course points (see Table 1) and (a) the mean number of lower-level cognitions
(Spearman ρ = -.532, p
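
As a brief illustration, such a nonparametric correlation can be computed as in the sketch below; the values are invented, and scipy's spearmanr stands in for the study's actual software.

from scipy.stats import spearmanr

# Hypothetical per-student values: percent course points and mean number of
# lower-level cognitions (the study reports Spearman rho = -.532 for this pair)
course_points = [92.1, 88.4, 76.0, 81.3, 69.5, 74.2, 90.0, 65.8]
lower_level = [27.0, 30.5, 41.0, 35.7, 44.1, 39.8, 28.3, 46.0]

rho, p = spearmanr(course_points, lower_level)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")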

IV. DISCUSSION

Research on learning from instructional software with novice participants [2] revealed significant differences in the number and kinds of cognitions
students engaged in, depending on the nature of the content on the computer screens.
Learning situations requiring overt actions, like interactive exercises and quiz problems, evoked more cognitive activity than purely text-based
materials. The present data extend those basic findings to engineering students who
were completing a first or second course in thermodynamics. Across a range of
academic experience, from pre-Thermo to Thermo II, students exhibited a highly
similar pattern of preference for interactive and non-text-oriented materials. Overall,
these findings lend credence to a basic principle of active learning that content and
context are interrelated [3,4]. In other words, students react differently to learning
material depending on the context in which the material is presented.

There are two other findings in this study, and together they have the potential to
significantly impact engineering education. On the one hand, there was no significant
increase in higher-level cognitions when one looked at the data comparing pre-
Thermo, Thermo I, and Thermo II levels. There was a trend but no statistical evidence
that students engaged in more higher-order thinking as they advanced through their
coursework. On the other hand, when higher- and lower-performing students within
courses were compared, there were significant effects showing that academically
successful students used less lower-level and more higher-level thinking. This is the
first study that we are aware of that directly ties student success within an engineering
course to higher levels of thinking. Advancing through the curriculum is in itself
apparently insufficient to guarantee becoming a more reflective and analytic thinker.
This finding, while still tentative and in need of confirmation in additional studies,
raises several essential questions for future research on engineering pedagogy. A
fundamental question is to what extent any curriculum could change a student from
being a lower-level thinker to a higher-level thinker, and to what extent these
differences may reflect more basic individual differences and styles of learning and
adaptation. The quotation at the beginning of this paper asserts that some individuals
are capable of developing skills in manipulating symbols, quantities, and equations
after extensive practice but without deep mastery of the content [7]. The lower-
performing students here may be like this latter group: struggling to master the
problem-solving required of engineers, but doing so through lower-level thinking and
without attaining deep knowledge of the content. These findings also raise questions
about how educational innovations, like curricula using integrated and structured
problem-solving [25,41,42] and inquiry-based methods [14], should be implemented.
If there are individual differences within courses involving lower- and higher-level
thinking, perhaps specific innovations would be best applied selectively and not to all
individuals in a class.

It may be that the most important contribution of the present work is in refining a
paradigm for subsequent research on learning that can target both conceptual learning
and problem-solving skill, and the relation between the two. A limitation in the present study is the low number of participants. However, even under these circumstances, the experimental and analytic rubric for collecting, coding, and analyzing student cognitions is quite robust and capable of differentiating between
groups. By tightly coupling conceptual information in text and figures to the
interactions and problems on the computer screens, researchers can obtain very fine-
level data regarding how and when students draw on background knowledge, connect
information in the learning context, make inferences, make predictions, and anticipate
and evaluate outcomes. A broader application of this paradigm across a variety of
engineering courses could lead to important insights into how students learn across
courses and across their undergraduate careers.

V. CONCLUSION

Knowledge of the ways in which conceptual and procedural knowledge interact and
contribute to the development of skill and expertise is essential to advancing
pedagogical practice [18, 25, 43, 44]. The present study extends the original test of this paradigm, which assessed novice students [2], and affirms the paradigm's reliability and validity with students who had taken thermodynamics courses. The
paradigm draws on state-of-the-art computer-based instructional technology and
combines text-oriented and interactive displays with the intention of evoking
responses from students to these materials and the goal of categorizing and
quantifying those responses in a way that allows us to better understand the cognitive
development of engineering undergraduates. Our past and present methods have drawn heavily on "think-aloud" methodologies from experimental psychology [26, 33, 45], an approach that has proven highly demanding and time-intensive. However, it
may be possible in future work to streamline and simplify data collection and analysis
without reducing the insights that this approach is providing about the learning
behaviors and cognitive processing of engineering students.

The findings presented here, while still tentative, raise important questions about the
organization of pedagogical innovation [1, 25, 46]. If it turns out, as suggested in
these data, that large individual differences persist throughout the engineering
curriculum, then it would be advisable, for the sake of maintaining the attention and
motivation of students, to tailor instructional materials and approaches with more
sensitivity to the differences in what and how students learn. The present paradigm is
assisting in identifying those differences. It remains to determine the appropriate instructional tactics to pursue in light of the knowledge of those differences.

ACKNOWLEDGMENT

We would like to thank Curtis Craig and Thomas Benitscheck III for discussing and
commenting on this research.

REFERENCES

[1] Marra, R.M., B. Palmer, and T.A. Litzinger, "The Effects of a First-Year Engineering Design Course on Student Intellectual Development as Measured by the Perry Scheme," Journal of Engineering Education, Vol. 89, No. 1, 2000, pp. 39-45.

[2] Taraban, R., E.E. Anderson, A. DeFinis, A. Brown, A. Weigold, and M.P. Sharma, "First Steps in Understanding Engineering Students' Growth of Conceptual and Procedural Knowledge in an Interactive Learning Context," Journal of Engineering Education, Vol. 96, No. 1, 2007, pp. 57-68.

[3] Bransford, J.D., A.L. Brown, and R.R. Cocking (Eds.), How People Learn: Brain, Mind,
Experience, and School (Expanded Edition), Washington, D.C., National Academy Press, 2000.

[4] Wankat, P.C., "Improving Engineering and Technology Education by Applying What is Known About How People Learn," Journal of Science, Mathematics, Engineering, Technology Education, Vol. 3, Nos. 1 and 2, 2002, pp. 3-8.

[5] ABET (Engineering Accreditation Commission), "Criteria for Accrediting Engineering
Programs," accessed on 02/01/07 at www.abet.org/forms.shtml.

[6] Fisher, D., and J.P. Yoo, "Categorization, Concept Learning, and Problem-Solving: A
Unifying View," in G.V. Nakamura, R. Taraban, and D.L. Medin (Eds.), The Psychology of
Learning and Motivation: Categorization by Humans and Machines (Vol. 29), San Diego, CA:
Academic Press, 1993.

[7] Graesser, A.C., J.A. Leon, and J. Otero, "Introduction to the Psychology of Science Text Comprehension," in J. Otero, J.A. Leon, and A.C. Graesser (Eds.), The Psychology of Science Text Comprehension, Mahwah, NJ: Erlbaum Associates, 2002.

[8] Kintsch, W., Comprehension, New York, NY: Cambridge University Press, 1998.

[9] Chi, M.T.H., P.J. Feltovich, and R. Glaser, "Categorization and Representation of Physics Problems by Experts and Novices," Cognitive Science, Vol. 5, 1981, pp. 121-152.

[10] Larkin, J.H., "Enriching Formal Knowledge: A Model for Learning to Solve Textbook
Physics Problems," in J.R. Anderson (Ed.), Cognitive Skills and Their Acquisition (pp. 321-335),
Hillsdale, NJ: Erlbaum Associates, 1981.

[11] Priest, A.G., and R.O. Lindsay, "New Light on Novice-Expert Differences in Physics Problem Solving," British Journal of Psychology, Vol. 83, 1992, pp. 389-405.

[12] Zywno, M.S., and M.F. Stewart, "Learning Styles of Engineering Students, Online Learning
Objects and Achievement," Proceedings of the American Society for Engineering Education
Annual Conference & Exposition, Portland, OR, 2005.

[13] Bloom, B.S., and D.R. Krathwohl, Taxonomy of Educational Objectives: The Classification
of Educational Goals, New York, NY: Longmans, Green, and Co., 1956.

[14] Prince, M., and M. Vigeant, "Using Inquiry-Based Activities to Promote Understanding of
Critical Engineering Concepts," Proceedings of the American Society for Engineering Education
Annual Conference & Exposition, Chicago, IL, 2006.

[15] Steif, P.S., "An Articulation of Concepts and Skills Which Underlie Engineering Statics,"
Proceedings of the 34th Frontiers in Education Conference, Savannah, GA, 2004.

[16] Miller, R.L., R.A. Streveler, B.M. Olds, M.A. Nelson, and M.R. Geist, "Concept Inventories
Meet Cognitive Psychology: Using Beta Testing as a Mechanism for Identifying Engineering
Student Misconceptions," Proceedings of the American Society for Engineering Education Annual
Conference & Exposition, Portland, OR, 2005.

[17] Streveler, R., M.R. Geist, R. Ammerman, C. Sulzbach, R.L. Miller, B.M. Olds, and M.
Nelson, "Identifying and Investigating Difficult Concepts in Engineering Mechanics and Electric
Circuits," Proceedings of the American Society for Engineering Education Annual Conference &
Exposition, Chicago, IL, 2006.

[18] Miller, R.L., R.A. Streveler, B.M. Olds, M.T.H. Chi, M.A. Nelson, and M.R. Geist,
"Misconceptions about Rate Processes: Preliminary Evidence for the Importance of Emergent
Conceptual Schemas in Thermal and Transport Sciences," Proceedings of the American Society
for Engineering Education Annual Conference & Exposition, Chicago, IL, 2006.

[19] Strauss, A.L., and J.M. Corbin, Basics of Qualitative Research, Newbury Park, CA: Sage
Publications, 1990.

[20] Taraban, R., M.W. Hayes, E.E. Anderson, and M.P. Sharma, "Giving Students Time for the Academic Resources That Work," Journal of Engineering Education, Vol. 93, No. 3, 2004, pp. 205-210.

[21] Taraban, R., E.E. Anderson, M.W. Hayes, and M.P. Sharma, "Developing On-Line Homework for Introductory Thermodynamics," Journal of Engineering Education, Vol. 94, No. 3, 2005, pp. 339-342.

[22] Taraban, R., "The Growth of Text Literacy in Engineering Undergraduates," Proceedings of
the American Society for Engineering Education Annual Conference and Exposition, Chicago, IL,
2006.

[23] Taraban, R., E.E. Anderson, M.P. Sharma, and A. Weigold, "Developing a Model of Students' Navigations in Computer Modules for Introductory Thermodynamics," Proceedings of the American Society for Engineering Education Annual Conference & Exposition, Nashville, TN, 2003.

[24] Taraban, R., A. Weigold, E.E. Anderson, and M.P. Sharma, "Students' Cognitions When Using an Instructional CD for Introductory Thermodynamics," Proceedings of the American Society for Engineering Education Annual Conference and Exposition, Portland, OR, 2005.

[25] Litzinger, T., P. Van Meter, M. Wright, and J. Kulikowich, "A Cognitive Study of Modeling During Problem Solving," Proceedings of the American Society for Engineering Education Annual Conference & Exposition, Chicago, IL, 2006.

[26] Ericsson, K.A., and H.A. Simon, Protocol Analysis: Verbal Reports as Data, Cambridge,
MA: MIT Press, 1984.

[27] Pressley, M., and P. Afflerbach, Verbal Protocols of Reading: The Nature of Constructively
Responsive Reading, Hillsdale, NJ: Erlbaum Associates, 1995.

[28] Atman, C.J., and K.M. Bursic, "Verbal Protocol Analysis as a Method to Document Engineering Student Design Processes," Journal of Engineering Education, Vol. 87, No. 2, 1998, pp. 121-132.

[29] Hmelo-Silver, C.E., and M.G. Pfeffer, "Comparing Expert and Novice Understanding of a
Complex System from the Perspective of Structures, Behaviors, and Functions," Cognitive
Science, Vol. 28, 2004, pp. 127-138.

[30] Cengel, Y.A., and M.A. Boles, Thermodynamics: An Engineering Approach, 4th Edition,
Boston, MA: McGraw-Hill, 2001.

[31] Anderson, E.E., R. Taraban, and M.P. Sharma, "Implementing and Assessing Computer-Based Active Learning Materials in Introductory Thermodynamics," International Journal of Engineering Education, Vol. 21, No. 6, 2006, pp. 1168-1176.

[32] Landis, J.R., and G.G. Koch, "The Measurement of Observer Agreement for Categorical Data," Biometrics, Vol. 33, 1977, pp. 159-174.

[33] Pressley, M., and P. Afflerbach, Verbal Protocols of Reading: The Nature of Constructively Responsive Reading, Hillsdale, NJ: Erlbaum Associates, 1995.

[34] Taraban, R., K. Rynearson, and M. Kerr, "College Students' Academic Performance and Self-Reports of Comprehension Strategy Use," Journal of Reading Psychology, Vol. 21, 2000, pp. 283-308.

[35] Saumell, L., M. Hughes, and K. Lopate, "Underprepared College Students' Perceptions of Reading: Are Their Perceptions Different Than Other Students'?," Journal of College Reading and Learning, Vol. 29, 1999, pp. 123-135.

[36] Nist, S.L., and J.L. Holschuh, "Comprehension Strategies at the College Level," in R. Flippo
and D. Caverly (Eds.), Handbook of College Reading and Study Strategy Research (pp. 75-104),
Mahwah, NJ: Erlbaum Associates, 2000.

[37] Chi, M.T.H., M. Bassok, M. Lewis, P. Reimann, and R. Glaser, "Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems," Cognitive Science, Vol. 13, 1989, pp. 145-182.

[38] SPSS Inc., SPSS Advanced Statistics User's Guide, Chicago, IL: Author, 1990.

[39] Box, G., H. Hunter, and J. Hunter, Statistics for Experimenters, New York, NY: John Wiley
& Sons, 1978.

[40] Conover, W.J., Practical Nonparametric Statistics (3rd Ed.), New York, NY: John Wiley,
1999.

[41] Gray, G.L., F. Costanzo, and M.E. Plesha, "Problem Solving in Statics and Dynamics: A
Proposal for a Structured Approach," Proceedings of the American Society for Engineering
Education Annual Conference & Exposition, Portland, OR, 2005.

[42] Woods, D.R., "An Evidence-Based Strategy for Problem Solving," Journal of Engineering
Education, Vol. 89, No. 3, 2000, pp. 443-459.

[43] VanLehn, K., "Cognitive Skill Acquisition," Annual Review of Psychology, Vol. 47, 1996, pp. 513-539.

[44] Chi, M.T.H., "Common Sense Conceptions of Emergent Processes: Why Some Misconceptions Are Robust," Journal of the Learning Sciences, Vol. 14, 2005, pp. 161-199.

[45] Ericsson, K.A., N. Charness, P.J. Feltovich, and R.R. Hoffman (Eds.), The Cambridge Handbook of Expertise and Expert Performance, New York, NY: Cambridge University Press, 2006.

[46] Pavelich, M.J., and W.S. Moore, "Measuring the Effect of Experiential Education Using the Perry Model," Journal of Engineering Education, Vol. 85, No. 4, 1996, pp. 287-292.
