
Multimedia Courseware for Teaching Dynamic Concepts: Assessment of Student Learning

C. E. Hmelo (EduTech), E. Y. Lunken (Psychology), K. Gramoll, and I. Yusuf (Aerospace), Georgia Institute of Technology, Atlanta, GA 30332-0280. Email: ceh@cc.gatech.edu

Abstract:
The rise of multimedia has spurred the development of innovative packages for educational use. But what are students learning? To make the best use of this technology, we need to examine how it can enhance student learning, which makes issues of assessment very important. Multimedia is particularly powerful in engineering education because it allows students to visualize dynamic physical phenomena such as vibrations. In this paper, we discuss the assessment of learning for students who used the GT-VIBS multimedia package and an associated case study to learn about engineering vibrations. We also evaluated the software itself. We did so by conducting an experiment in which students who used a multimedia tutorial module or an engineering case study were compared with students who read text. We present some of our preliminary results.

Introduction
Advances in multimedia have facilitated the development of innovative educational software, but it is not always clear what students are learning or whether the effects of the software are being adequately assessed. To make the best use of this technology, we need to understand and assure students' learning; issues of assessment are therefore critical. An evaluative process needs to take into account two aspects: (1) assessment of student learning and (2) evaluation of the learning environment. To examine issues in assessment and evaluation, we use the GT-VIBS software, developed at the Georgia Institute of Technology. GT-VIBS is a set of multimedia (MM) tutorial modules designed to teach engineering vibrations through simulation and visualization. The modules are tutorials that use animations to illustrate concepts and equations. Each tutorial contains animations of mechanical systems together with graphs of their motion, allowing the learner to work with multiple representations of the same physical phenomena.
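
To make this dual representation concrete, the following minimal sketch (written for this discussion and not part of GT-VIBS; the function name and parameter values are illustrative assumptions) computes the free response of a damped single-degree-of-freedom system, the kind of quantity that could drive both an animation of the system and the accompanying graph of its motion:

```python
# Illustrative sketch only; not the GT-VIBS implementation.
# Underdamped free vibration released from rest at displacement x0.
import math

def damped_response(t, x0=1.0, wn=2.0 * math.pi, zeta=0.1):
    """Displacement at time t of an underdamped system released from rest at x0."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)                 # damped natural frequency
    phi = math.atan(zeta / math.sqrt(1.0 - zeta ** 2))   # phase from initial conditions
    amp = x0 / math.cos(phi)                             # amplitude from initial conditions
    return amp * math.exp(-zeta * wn * t) * math.cos(wd * t - phi)

# Compare the effect of two damping ratios, as a learner might in the simulation.
for zeta in (0.05, 0.3):
    trace = [damped_response(0.1 * k, zeta=zeta) for k in range(11)]
    print(f"zeta={zeta}: " + " ".join(f"{x:+.2f}" for x in trace))
```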

The screen is composed of three panes. The panes on the left provide the visualizations, for example, a pendulum on top and a graph of its movement on the bottom. On the right are text and equations, for example, a definition of harmonic motion and several equations that describe harmonic systems. On some screens there is a button that allows the learner to jump into a simulation, manipulate parameters, and compare the effects of changing different parameters.

In addition, these multimedia modules can be placed in the context of a case study by using a simulated laboratory approach. In the case study, the learner enters a simulated laboratory that resembles an actual laboratory and must explore and solve an engineering problem with minimal instructions. The simulated laboratory contains a bookshelf from which the MM modules can be accessed, a file cabinet that contains specifications, and a computer for doing computations. For example, the case that we used for this study involves designing shock absorbers for a motorcycle traveling over bumpy terrain. There is good evidence in the cognitive science and educational research literature that contextualized learning helps students learn not only scientific principles but also when those principles are applicable [3].

Assessment needs to consider issues of comprehension and problem-solving transfer. Evaluation of the learning environment considers not only what the students are learning but also the usability of the software. Rather than considering learning and transfer as all-or-none phenomena, we prefer to consider different kinds of learning and to target our assessments toward those phenomena that we expect to be affected by the multimedia software. For example, because multimedia is particularly good at conveying dynamic phenomena such as those involved in learning about engineering vibrations, we would expect the students' qualitative reasoning to improve but would expect no change in their quantitative reasoning [3]. To measure this, we gave students qualitative and quantitative problems. Another indicator of qualitative understanding is the students' conceptual knowledge. To measure this, we asked students to define the concepts covered in the tutorial and to explain why the concepts were important. They were also asked to generate real-world examples.

Our approach is guided by cognitive theories of learning, which make several predictions. The case study (CS) embeds the multimedia tutorials (MM) in a problem-solving context, which should enhance student learning and subsequent transfer [1]. In addition, the use of visualization and simulation supports the development of qualitative understanding, because these modalities allow learners to experiment and receive comparative feedback [5]. We therefore predicted that students who use the software would develop a better qualitative understanding than students learning from text, as measured by the conceptual definitions they were asked to provide and by their understanding of why these concepts are relevant. The multimedia tutorials and case study, GT-VIBS, are designed for advanced engineering undergraduates, so it is critical that they understand why the knowledge is important.

Moreover, it is important that they know how and when to apply their knowledge. So, in addition to assessing students' understanding, it is valuable to assess their knowledge of the relevance of the material they are learning. In this study, we expected an increase in the students' knowledge of why the concepts are important, and, for the case study, we expected students to be able to generate examples as well. Moreover, modern theories of assessment suggest that students need to make their thinking about such issues (such as importance) visible so that the depth of their understanding can be assessed [4]. Evaluation consists of examining the students' attitudes toward the software, their ratings of usability, and their learning as a result of using the software. The remainder of this paper reports on the instruments that have been developed, the rationale for the measures, and some preliminary results of a comparative study of multimedia and text.

Methods
In this study we compared three groups of students: those who learned from a multimedia tutorial, those who used the case study with an embedded multimedia module, and those who learned from text. We also report their attitudes toward using these materials.

Subjects
The subjects in this study were 31 students who were juniors or seniors in engineering at the Georgia Institute of Technology. The students, drawn from the Aerospace, Civil, and Mechanical Engineering disciplines, were paid $15 to participate in a 2-hour session. They were randomly assigned to one of three conditions: text, multimedia, or multimedia plus case study.

Materials
Multimedia Modules and Case Study
The software tested was GT-VIBS and an associated case study. In this study, the multimedia modules (MM) for Elementary Vibrations (EV) and Transient Vibrations (TV) were used, as well as the motorcycle case study (CS). Elementary Vibrations covers the steady-state response of a system when an external force is applied, including topics such as periodic motion, frequency, damping, and vibration analysis. The Transient Vibrations module covers systems that are temporarily perturbed by an external force; once the force is removed, the vibration dies out. For example, a car is always vibrating in its steady state, but if it hits a bump, its vibration will increase temporarily. The latter is an example of a transient vibration. This module covers the different kinds of inputs that lead to transient vibrations.
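
The distinction between the two modules can be summarized with the standard damped single-degree-of-freedom model; the notation below is ours and is not taken from GT-VIBS or from [7]:

m\ddot{x}(t) + c\dot{x}(t) + kx(t) = F(t), \qquad x(t) = x_h(t) + x_p(t)

Here the homogeneous part x_h(t) is the transient response, which decays because of damping once the input is removed (the focus of the TV module), while the particular part x_p(t) is the steady-state response that persists as long as the forcing F(t) is applied (the focus of the EV module).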

Text
Material from a standard vibrations text was matched to the material in the MM modules [7]. The matching was done by the first author and subsequently checked by an expert in the subject matter.

Assessment Instruments
The assessment instruments were geared toward the kinds of understanding that we expected to be affected by these modules, as well as toward what we believe students need to know. This allows us to examine the learning that is and is not occurring [2]. The students received pre- and post-testing on a measure of conceptual understanding that asked them to: (1) define the concepts covered in the module, (2) discuss why they are important, and (3) provide examples of when these concepts are used. We expected that students' qualitative understanding of the concepts covered would improve, as indicated by the accuracy of their answers. Because the modules provide no concrete examples, we expected students to show only a minimal increase in their understanding of the applicability of the concepts. Because future versions of the software may include more examples, we felt it was important to establish a baseline of what students do and do not learn as features are added. In addition to the conceptual knowledge measure, we examined how students applied their knowledge by solving problems. The strength of this software lies in its simulation and visualization features, which suggests that there should be an improvement in the students' qualitative understanding but not necessarily in their quantitative understanding. To measure this, we gave students qualitative and quantitative problems. These measures are still being scored and analyzed and are not reported here, but they are mentioned to point out the importance of developing multiple measures to understand what students are learning.

Evaluation Instruments
To evaluate the software, students completed a usability questionnaire. This instrument asked students to evaluate different aspects of the software, such as navigation and graphics, how they would like to see the software used in a course, and how they thought it was helpful. Table 1 contains sample items. In the actual questionnaire, students received each item in both a positively and a negatively worded form. Students used a scale from 1 to 7 to indicate their agreement or disagreement with each item. All negatively worded items were recoded so that, for every item, a higher score indicates a more positive rating.
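
As a concrete illustration, the following minimal sketch assumes standard reverse-scoring on a 1-to-7 scale; the exact recoding rule used in the study is not specified here, and the function name is ours:

```python
# Hypothetical sketch of reverse-scoring on a 1-7 agreement scale; the exact
# recoding rule used in the study is an assumption.
def recode(rating: int, negatively_worded: bool) -> int:
    """Map a raw 1-7 rating so that higher always means a more positive view."""
    return 8 - rating if negatively_worded else rating

# Agreeing strongly (7) with a negatively worded item counts as very negative (1).
assert recode(7, negatively_worded=True) == 1
assert recode(5, negatively_worded=False) == 5
```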

Procedure
Subjects first completed a pretest on the concepts to be covered. The students in the text and MM conditions then either read the text or used the MM tutorials, respectively; students in these conditions used both the EV and TV materials. The students in the CS condition used the motorcycle case, during which they accessed the EV module via the simulated bookshelf. Following this, the students completed a posttest, a problem-solving test, and the usability questionnaire.

Results and Discussion


This section presents the results of the usability questionnaire and the conceptual pre- and post-tests, along with a discussion of the implications of these results.

Usability
In general, the students' ratings were most positive about the multimedia tutorials and least positive toward the text, with the case study in the middle. The means and standard deviations for the usability results are reported in Table 2. The questionnaire was modified for the text condition: items that were not applicable were deleted for those subjects. Scores above 4 indicate positive ratings and scores below 4 indicate negative ratings. In general, the students preferred using the computer to reading the text. Moreover, they preferred the MM tutorials to the CS, which surprised us because we had assumed that the context provided by the case would help make the concepts more interesting and more concrete.

The students' comments indicated that the relationship between the EV module and the case study was not always apparent. This may have resulted from not being able to keep both the case study window and the EV window on screen simultaneously. Some of the students noted that the goal of the case study and the means for accomplishing it were not clear. The students' ratings were neutral for the navigation questions in both the case and multimedia conditions; however, there were several negative comments related to navigation. There were also many positive comments about other aspects of the MM condition. The strong point of the multimedia software is clearly its role in helping students visualize dynamic phenomena, as demonstrated by the usability ratings and the students' comments. The students were dissatisfied with the way the equations were presented, noting that they needed more explanation.

Pre- and Post- Test Conceptual Questions


The conceptual questions were scored for accuracy of definition, knowledge of why the concepts were important, and ability to generate examples. The maximum score for any of these measures was eight points. Recall that the definition scores examine the students' conceptual knowledge, whereas the importance and example scores reflect the students' knowledge of how the concepts are applied. For elementary vibrations, all three groups learned the definitions (pretest mean 3.66, standard deviation (SD) 1.73; posttest mean 5.16, SD 1.73; p < .001), but there were no differences among the three groups. For the importance measure, the means and standard deviations are shown in Table 3. The students in the two computer conditions learned more about the importance of the concepts than the students in the text condition (p < .08). In addition, the MM students learned more than the CS students (p < .05). For the use of examples, all the students learned from pretest to posttest (pretest mean 2.97, SD 1.94; posttest mean 4.94, SD 1.97; p < .001), but there were no differences among the MM, CS, and Text students. For the TV module, only the MM and text conditions were compared. The results for the definitions are shown in Table 4. The MM students learned more than the Text students (p < .05). The importance scores, shown in Table 5, indicate that the MM group learned why the concepts were important but the text group did not (p < .03). Neither group learned to generate examples.

In summary, we provide evidence that the multimedia modules tested here:

Helped students understand concepts qualitatively, as demonstrated by the improvement in the students' ability to generate conceptual definitions.

Helped students understand the concepts' importance in engineering, as indicated by the improvement in the importance scores.

Did not help students generate examples of when these concepts come into play.

This pattern is consistent with the design of the software, so the measures used were sensitive enough to detect the expected learning outcomes as well as to note where the multimedia failed to have an effect. In addition, this information is useful as formative evaluation of the software, identifying both strengths and weaknesses and providing information that can be used in subsequent refinement. Future iterations of the software need to include more real-world examples and more explanation of the concepts covered. Additional information will come from the analyses of student problem-solving. The usability questionnaire provides further information as part of the formative evaluation. The students noted the excellent use of visualization techniques and identified the importance of navigational considerations. They also noted where more explanation is needed and made some valuable suggestions about how the modules might be integrated. We were surprised that the case study did not have the benefits we had predicted. This suggests that more research is needed to understand how case studies might best be used; it does not mean that case studies should not be used. One explanation for the lack of effects is that this is an unfamiliar format. An alternative explanation is that it is difficult for an individual working alone to manage the complexity inherent in the problem. In other domains, this type of contextualized learning has been demonstrated to be quite effective, but in those areas case-based learning has generally been used in a group setting [2, 6].

Conclusions
One of the advantages of multimedia is that students can visualize and simulate dynamic systems, and the multimedia software for teaching vibrations deals with exactly such systems. The assessment and evaluation model described in this paper provides insight into what students are learning, ways the software could be improved, and how multimedia might be useful in learning how dynamic systems function. In particular, these results suggest that the visualizations and simulations used in the software were useful in enhancing conceptual understanding and notions of why the concepts are important. The multimedia tutorials and the case both need more explanation of the equations presented, and the tutorials need more concrete examples. In addition, this study suggests future directions for research on how innovative multimedia might be optimally utilized. We need to learn where visualization and simulation are most effective and what kinds of supporting explanations are necessary. In GT-VIBS, the visualizations were often accompanied by sparse text and a list of equations, and the simulations did not have any explanatory text. From the students' comments and the usability questionnaire, it was clear that this was not sufficient. Prompting the students to reflect on their learning may help as well. Another important research area involves the case study. Students may need additional support, either from a group or from the software. Future versions of the software need to allow students to view both the case and the tutorial at the same time, to help them maintain their attention on what they need to learn. Given the power of a case for making learning more real, it is important to understand what is needed to make this work. This study suggests that multimedia can help students learn dynamic concepts, but it was a short-term laboratory experiment. To fully understand how students work with multimedia, it needs to be integrated into a course on vibrations that uses the full range of multimedia modules and case studies, in order to understand the role that multimedia might play over the longer term.

Acknowledgments
Research reported here has been supported by ONR under contract N00014-92-J1234, and by the Woodruff Foundation's support of the EduTech Institute.

References
1. Brown, J. S., Collins, A., and Duguid, P. "Situated cognition and the culture of learning." Educational Researcher, vol. 18, pp. 32-41, 1989.
2. Cognition and Technology Group at Vanderbilt. "Looking at technology in context: A framework for understanding technology and education research." In The Handbook of Educational Psychology, D. Berliner and R. C. Calfee, Eds. New York: Macmillan, in press.
3. Cognition and Technology Group at Vanderbilt. "Technology and the design of generative learning environments." Educational Technology, vol. 20, pp. 35-40, 1991.
4. Glaser, R. and Silver, E. "Assessment, testing, and instruction: Retrospect and prospect." In Review of Research in Education, vol. 20, L. Darling-Hammond, Ed. Washington, DC: American Educational Research Association, 1994, pp. 393-419.
5. Hegarty, M., Just, M. A., and Morrison, I. R. "Mental models of mechanical systems: Individual differences in qualitative and quantitative reasoning." Cognitive Psychology, vol. 20, pp. 191-235, 1988.
6. Hmelo, C. E. Development of independent thinking and learning skills: A study of medical problem-solving and problem-based learning. Unpublished doctoral dissertation, Vanderbilt University, Nashville, TN, 1994.

7. Rao, S. Mechanical Vibrations, 2nd ed. Reading, MA: Addison-Wesley, 1990.
