
Multimed Tools Appl

DOI 10.1007/s11042-016-3774-7

A system for efficient motor learning using multimodal augmented feedback

Grega Jakus 1 & Kristina Stojmenova 1 & Sašo Tomažič 1 & Jaka Sodnik 1

Received: 26 February 2016 / Revised: 9 June 2016 / Accepted: 7 July 2016


© Springer Science+Business Media New York 2016

Abstract Numerous studies have established that using various forms of augmented feedback
improves human motor learning. In this paper, we present a system that enables real-time
analysis of motion patterns and provides users with objective information on their performance
of an executed set of motions. This information can be used to identify individual segments of
improper motion early in the learning process, thus preventing improperly learned motion
patterns that can be difficult to correct once fully learned. The primary purpose of the proposed
system is to serve as a general tool in research on the impact of different feedback modalities
on the process of motor learning, for example, in sports or rehabilitation. The key advantages
of the system are high-speed and high-accuracy tracking, as well as its flexibility, as it supports
various types of feedback (auditory and visual, concurrent or terminal). The practical application of the proposed system is demonstrated through the example of learning a golf swing.
Keywords Motor learning . Feedback . Visualization . Sonification . Golf swing

1 Introduction
Augmented or extrinsic feedback is information about the implementation of a motion pattern,
formed and transmitted to humans by means of an external source, i.e. separately and
independently of the internal human perceptual processes. Appropriately formed and conveyed
augmented feedback upgrades the internal (intrinsic) feedback and contributes to a more
efficient motor learning process, whose purpose is persistent improvement of a specific motion
pattern [24, 25, 28]. Augmented feedback can be communicated to the user by an instructor
(human expert) or it can be transmitted through specialized automatic systems. Both ways of
providing feedback involve observation of the person learning the specific motion (hereinafter

* Grega Jakus
grega.jakus@fe.uni-lj.si

Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000 Ljubljana, Slovenia


referred to as a user), an evaluation of the motion based on existing knowledge, and a response in the form of augmented feedback. Augmented feedback can be transmitted through three perceptual channels (modalities): visual (e.g. presenting information on a display), auditory (e.g. playing audio recordings), or haptic (e.g. guiding a user through the motion with the help of a robot); it can also be multimodal (a combination of several channels).

1.1 Visual feedback


Vision is often regarded as the most important sense in humans' everyday interaction, and it is also virtually irreplaceable in sports (e.g. tracking and terrain adaptation, measuring, observing the opponent, etc.). Compared to the other human senses, vision has the greatest information throughput, as visual information is perceived in parallel [21]. Because of these characteristics, visual feedback is the most important and the most widely researched and practically used type of feedback in motor learning. It is used in various forms to provide instructors and therapists with visual representations of motion patterns, or to visually present relevant motion-related information.
Values of motion variables can be visually presented in an abstract or a natural form [28].
Abstract visual displays in the form of symbols, colors, scales, graphs, or imitations of measuring instruments have a greater effect when performing less demanding motion tasks (e.g. tasks with fewer degrees of freedom and fewer relevant variables). In physiotherapy, for example, values of force, torque, pressure, angle or time can be shown in an abstract form to the patient in order to achieve a better learning outcome. For example, a color scale can be used to present the pressure distribution under a patient's foot [9].
Abstract visual displays can also be used for more complex motion. Graphical scales and measures have been used to represent deviation from equilibrium in balance training [26]. Abstract visual displays also often take the form of graphs showing the time course of motion variables (e.g. acceleration in running [6], force onset in skiing [35], or forces in manual therapies [31]), or graphs presenting the relationship between two different motion variables (e.g. force as a function of the paddle's angle in rowing) [30].
For highly demanding spatial tasks, natural visualizations are more appropriate, as they can convey information that would otherwise be lost. In a natural visualization, the reference motion and the user's motion are displayed simultaneously, next to each other or one overlaying the other. Examples of the latter are virtual environments in which the motion of the observed part of the body is presented through a video recording of the correct motion (e.g. a recording of the instructor's motion), and the learning process is based on imitation. The efficiency of virtual
environments in the process of human motor learning has been confirmed in rowing [29], table
tennis [33] and rehabilitation [11, 17].

1.2 Auditory feedback


Because auditory information is perceived sequentially, human hearing has a lower information throughput than human vision. While humans can choose to focus on one piece of information among a range of visual stimuli (e.g. with a simple shift of the eyes), it is much harder to concentrate on a particular sound when several auditory stimuli are present simultaneously [15]. On the other hand, perceiving sound introduces fewer cognitive distractions, because it does not require focused attention and is not spatially directed (spatial orientation is not crucial for perceiving sound). The use of auditory augmented feedback


for motor learning has received less research attention than visual augmented feedback, and
has been mostly used for short repetitive motion patterns [28].
Auditory augmented feedback can be transmitted in the form of alarms or sonified motion variables. Alarms are simple, unchanging audio recordings that are played on discrete events, for example when the observed motion variable exceeds a certain threshold value. An alarm can thus provide users with feedback on the success of their motion or the direction of the motion's deviation; however, it cannot convey the amplitude of the deviation. The effectiveness of using alarms has been confirmed in rehabilitation for walking (e.g. the pressure distribution under a foot [9], the ratio of muscular activity between the legs [22]), in dancing (foot pronation during barre exercise) [5], and in circular movements performed on a pommel horse (the bending angle of the hip muscles) [2].
Sonification is the process of translating motion variables into tone properties (e.g. intensity, pitch, color, and rhythm) or into the virtual spatial position of the tone source. It is mainly used to represent large quantities of multidimensional motion data. Vertical motion, speed, and acceleration are usually communicated through the tone's pitch, as a metaphor for height above the ground or as an analogy to the revolution frequency of a car's engine. Distance is usually communicated by volume or reverberation time, while horizontal motion is typically presented using the balance of the stereo channels or the azimuth of the spatial source. Augmented auditory feedback can include time information in the form of rhythmic patterns, and key events can be represented by a change in tone volume [28]. Tone color (e.g. different-sounding musical instruments) is used to sonify two or more simultaneous information flows. The effectiveness of sonification has been confirmed in freestyle swimming [28] and breaststroke swimming [8], with sonified variables including the hydrodynamic pressure of hand paddles [3] and the speed of the swimmer's belt in crawl swimming [4]. Other studies also report the use of sonification for boat speed and acceleration [26], paddle placement while rowing [29], carving in alpine skiing [13], wrist motions in karate [32], and the velocity of the club head in golf swings [1, 14].
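The mappings described above can be summarized in a minimal sketch. The function and the scaling constants below are purely illustrative choices of ours, not taken from any of the cited systems:

```python
def sonify(height_m, distance_m, lateral_m):
    """Map motion variables to tone parameters, following the common metaphors:
    height -> pitch, distance -> volume, horizontal position -> stereo balance."""
    pitch_hz = 220.0 * 2.0 ** height_m            # one octave per meter of height (illustrative)
    volume = 1.0 / (1.0 + distance_m)             # farther sources sound quieter
    pan = max(-1.0, min(1.0, lateral_m / 2.0))    # -1 = full left, +1 = full right
    return pitch_hz, volume, pan
```

In a real system, the returned parameters would drive a continuously playing synthesizer voice rather than be computed once per call.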
Instead of sonifying the motion variables themselves, it is also possible to sonify a variable's deviation from the desired value. Studies have confirmed the efficiency of such augmented feedback for maintaining a stable stance (the deviation presented with pitch [7, 19] and volume [7]), speed skating (the deviation of the ankle motion presented proportionally by volume [10]), rifle shooting (the deviations of the aiming point and the barrel position presented with pitch) [16, 19], golf swings (the relative rotation of the shoulders with respect to the hips) [14], and rowing [27]. The latter is a typical example of the use of augmented auditory feedback for complex motion. It includes the simultaneous sonification of three axes of a paddle's motion: horizontal (balance between the stereo channels), vertical (tone pitch), and rotation around the paddle axis (tone color).
Although numerous studies have shown that appropriately generated augmented feedback can positively affect the process of motor learning [28], there are very few general guidelines for designing augmented feedback for learning motion patterns. The choice of the perceptual channel, of an appropriate strategy for providing feedback, and of an accurate and detailed feedback design largely depends on the complexity of the motion and on the user's skills. It is therefore no surprise that most of the available studies report on systems that provide tailored augmented feedback for one type of motion or for a set of predefined motion patterns.
Research results also show that in the early learning stages the most efficient feedback is concurrent, real-time visual feedback [28]. In later stages, the frequency and intensity of the feedback must be reduced and adapted to the complexity of the motion and to the user's skills.


Nevertheless, many studies concentrate only on a specific level of user skill, neglecting the need to adapt feedback to different learning stages.
Considering all the positive effects of augmented feedback on motor learning, we were motivated to design a flexible system that supports various motion patterns and takes into consideration the motion and user characteristics needed for effective motor learning. Thanks to its high-speed, high-accuracy tracking, the system can provide the user with objective feedback on an arbitrary motion pattern. Another significant advantage of the proposed system is that it supports more than one type of feedback (auditory and visual, concurrent or terminal) and can therefore be adapted to the user's needs or to the characteristics of the learned motion pattern, regardless of their complexity.
This paper presents the system architecture and an example of a use case scenario for improving
a golf swing with the help of concurrent audio and visual terminal feedback. The paper concludes
with a discussion on the use of multimodal feedback and sets the direction of our future work.

2 System architecture
The proposed system for motor learning using multimodal augmented feedback consists of the
following modules [12]:

- optical motion tracking system
- application for motion analysis
- feedback modules

The system's architecture is illustrated in Fig. 1. The optical motion tracking system
provides the application for motion analysis with the data on the position of the monitored
object or body parts. Based on the analysis of the received data, the application for motion
analysis provides the user with feedback through the auditory or visual feedback module.

2.1 Motion capture system


An optical motion capture system from Qualisys [23] is used for tracking and recording
movements of selected body or object parts. It consists of eight cameras and motion capture
software QTM (Qualisys Track Manager). Each camera has its own infrared light source, whose light is reflected from highly reflective passive markers (attached to the selected body parts) back to the camera's imaging sensor. These reflections are used to compute the distances between the markers as seen from the individual camera's point of view. The data from the cameras are sent to the QTM, which estimates the spatial positions of all markers. The tracking system supports a tracking rate of up to 500 samples per second, with a spatial accuracy better than a tenth of a millimeter for stationary objects and approximately one millimeter for moving objects.
QTM's user interface visualizes the 3D position of each marker in a Cartesian coordinate system in the form of a colored dot. Individual markers, or a group of markers, can be labeled and connected into a structure called "a model". Each pair of markers with a constant distance between them can be connected with a line called "a bone" (due to its rigid structure). When tracking human motion, the QTM bones correspond to the bones of the human body. The created model, which consists of a set of markers and bones, can be saved for further measurements as
an Automatic Identification of Markers (AIM) model.

Fig. 1 System architecture for multimodal augmented feedback

Figure 2 shows a screenshot of the QTM with an example of an AIM model (the model represents the upper part of a human body).
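The defining property of a "bone" — a pair of markers whose mutual distance stays constant across frames — can be sketched as a simple check over recorded marker positions. The helper below is our own illustration, not part of QTM:

```python
import numpy as np

def find_bones(frames, labels, tol_mm=2.0):
    """Return label pairs whose mutual distance stays (nearly) constant
    across all frames, i.e. candidates for QTM-style "bones".

    frames: array of shape (n_frames, n_markers, 3), positions in mm."""
    bones = []
    n_markers = frames.shape[1]
    for i in range(n_markers):
        for j in range(i + 1, n_markers):
            # distance between markers i and j in every frame
            dist = np.linalg.norm(frames[:, i] - frames[:, j], axis=1)
            if dist.max() - dist.min() < tol_mm:   # effectively rigid link
                bones.append((labels[i], labels[j]))
    return bones
```

The tolerance absorbs the tracker's spatial noise (roughly millimeter-scale for moving objects, as noted above).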

2.1.1 QTM real-time (RT) protocol


The QTM Real-Time (RT) protocol enables retrieval of processed real-time data from the QTM using the TCP or UDP transport protocols [20]. The QTM RT protocol provides features such as auto-discovery, settings changing, streaming, and error messaging. A client connected to the server can request data in several formats, such as 2D or 3D, six degrees of freedom (6DOF), with or without marker labels, etc.
In our case, a motion pattern is described as a set of sequential positions of the AIM model. The latter consists of selected time-sampled markers defined in a Cartesian coordinate system. The application for motion analysis requests the server to stream the model data as "3D" messages.
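As a rough sketch of the framing such a client deals with: the little-endian size-and-type header and the command packet type below are assumptions based on the RT protocol documentation [20], and the command string is illustrative; both should be checked against the protocol version in use.

```python
import struct

PACKET_TYPE_COMMAND = 1  # assumed type code for ASCII command packets

def pack_command(command: str) -> bytes:
    """Frame an ASCII command as <size><type><payload>, with 4-byte
    little-endian size (header included) and type fields."""
    payload = command.encode("ascii") + b"\x00"
    return struct.pack("<ii", 8 + len(payload), PACKET_TYPE_COMMAND) + payload

def unpack(packet: bytes):
    """Split a received packet back into (type, text payload)."""
    size, ptype = struct.unpack_from("<ii", packet)
    return ptype, packet[8:size].rstrip(b"\x00").decode("ascii")
```

A client would send such packets over a TCP socket to the QTM server and parse the streamed frames with the inverse of the same framing.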

2.2 Motion analysis


The application for motion analysis in real time (MART) implements the client side of the QTM RT protocol. The MART application analyses the streamed data of the performed motion pattern in real time and compares it to the reference pattern. The reference pattern is a prerecorded motion pattern representing the desired execution of the motion. The MART application can be configured to detect various deviations between the two patterns, and it provides the feedback modules with the information needed to form appropriate feedback.
The application includes different algorithms for calculating these deviations. A deviation can, for example, be calculated merely as a mismatch of spatial position, or it can also include a temporal dimension (the time lag between the two motion patterns). In any case, the deviation is computed as the Euclidean distance between the (last known) spatial position of a selected marker and a reference position, i.e. one of the recorded positions of the same marker in the reference trace. The two methods, however, differ in how the reference position involved in the calculation is selected.

Fig. 2 Visualization of an AIM model in QTM software, consisting of a set of markers attached to different parts of the human body [18]
When spatial mismatch is used as the measure of deviation, the reference position is the position of the monitored marker in the reference trace that is nearest to the marker's current position in the real-time trace. Such a calculation does not constrain the timing of the motion, as the two positions involved in the calculation may have been recorded at different time instants relative to the start of the motion pattern. In other words, when spatial mismatch is selected as the deviation criterion, the system allows the user to execute the motion at an arbitrary pace, as long as they remain within the predefined margin from the reference motion.
When a mismatch in the timing of the performed motion is used as the criterion, the reference position is the position of the marker in the reference trace that was recorded at the same (or the nearest) time instant, relative to the start of the motion pattern, as the marker's current position in the real-time trace.
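The two selection rules can be summarized in a short sketch. The function and argument names are hypothetical; only the two selection rules themselves come from the text above:

```python
import numpy as np

def spatial_deviation(position, ref_positions):
    """Pace-independent criterion: Euclidean distance to the nearest
    reference position of the same marker, regardless of when it was recorded."""
    dists = np.linalg.norm(ref_positions - position, axis=1)
    return float(dists.min())

def temporal_deviation(position, t, ref_positions, ref_times):
    """Timing criterion: Euclidean distance to the reference position recorded
    at the same (or nearest) time instant relative to the start of the motion."""
    i = int(np.argmin(np.abs(ref_times - t)))
    return float(np.linalg.norm(ref_positions[i] - position))
```

Note how the same marker position yields very different deviations under the two rules: a user moving along the correct path, but slowly, scores well spatially and poorly temporally.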

2.3 Feedback modules


The performance of the motion pattern is submitted to the feedback modules, which present this information to the user through the auditory and/or visual channel. Although the information throughput of the auditory channel is limited, auditory feedback immediately draws the user's attention. The audio channel is therefore mostly intended to provide basic real-time feedback in the form of alarms (e.g. when the deviation exceeds the acceptable margins or


when a motion pattern is successfully completed). In addition to just playing simple sounds to
announce basic events, sonification of selected motion variables is also supported using basic
tone properties (e. g. intensity, pitch, color) or parameters of spatial sound (azimuth, elevation
and radius) to provide the user with additional spatial information.
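For example, a 3D deviation vector can be turned into the spatial-sound parameters mentioned above. This is a minimal sketch; the coordinate convention is an arbitrary choice for illustration:

```python
import math

def deviation_to_spatial_sound(dx, dy, dz):
    """Map a 3D deviation vector to spatial sound parameters:
    azimuth (horizontal direction), elevation (vertical direction),
    radius (deviation magnitude, e.g. driving source distance)."""
    radius = math.sqrt(dx * dx + dy * dy + dz * dz)
    if radius == 0.0:
        return 0.0, 0.0, 0.0           # no deviation: centered, silent source
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.asin(dz / radius))
    return azimuth, elevation, radius
```

Rendering the tone at this virtual position lets the user hear not only that they deviated, but roughly in which direction.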
The visual channel is used to visualize motion patterns and motion deviations after their
execution. Visual feedback provides very valuable and extensive information, which helps the
user to eliminate certain errors and correct motion patterns in the following repetitions. The
visual feedback module provides the user with abstract and natural visualizations of the
performed motion and is based on a tool for real-time visualization of QTM data [34].

3 Usage scenario
The proposed system is currently used for studying and learning golf swings based on the
following learning procedure:
1. A model of the tracked object is defined (e.g. the golf club, Fig. 3).
2. The user makes a series of swings, which are recorded with the QTM system and an ordinary video camera.
3. Both recordings are reviewed, and the user or the instructor selects the recording of one individual swing that most closely approximates the ideal swing (the desired motion pattern); this recording is saved as the reference motion in the MART application.
4. The user starts practicing the swing under the supervision of the MART application.

3.1 Recording the reference swing


The first step includes defining (or selecting) the model of the club that will be used for
practice. As golf players can choose among different types of clubs, a separate model for each
club must be created. The MART application requires the markers to be placed at exact predefined positions on the club and given predefined labels, so that they can be correctly identified and the club's position and orientation can be computed correctly.

Fig. 3 A simple computer model of a golf club

The model shown in Fig. 3, for example, is missing some of the required markers, thus allowing only the calculation of the club's pitch and yaw, but not its roll (the rotation around the club's longitudinal axis).
When an appropriate club model is selected, the player can begin executing a series of
swings, which are recorded using the QTM system. In this phase, it is very helpful if the swings
are simultaneously recorded using a video camera. After the end of the session, the recorded
swings are reviewed, preferably with the help of a golf instructor. The swing that most closely
replicates the optimal swing is saved in the MART application as a reference motion that the
player will try to replicate when practicing with the help of augmented feedback.

3.2 Practicing the swing


During practice with augmented feedback, the MART application constantly monitors the reported positions of the golf club markers, analyses them, and compares them to the prerecorded reference trace. The analysis includes the identification of the typical phases of a swing, such as the start of the swing, lower backswing, upper backswing, top of the swing, swing, impact, etc. Such identification is important because the parameters for correct swing execution differ according to the phase of the swing. When the trajectory of the practiced motion diverges beyond the margin predefined for the current swing phase (e.g. 50 mm for the club's head in the backswing, 80 mm at the top of the swing, etc.), the feedback modules respond using audio and visual feedback.
As the user gains experience and is able to consistently execute swings within the preset margins of error, these margins can be reduced. Finally, a new practice cycle is started by recording a new reference swing that approximates the ideal swing even more closely.
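The phase-dependent margin check described above can be sketched as follows. The margin values are the ones quoted in the example; the phase names, the default margin, and the function itself are hypothetical:

```python
# Illustrative per-phase margins (mm) for the club's head,
# matching the example values quoted in the text.
PHASE_MARGINS_MM = {
    "backswing": 50.0,
    "top_of_swing": 80.0,
}

def check_swing(phase, deviation_mm, margins=PHASE_MARGINS_MM, default_mm=60.0):
    """Return True (i.e. trigger the feedback modules) when the deviation
    exceeds the margin predefined for the current swing phase."""
    margin = margins.get(phase, default_mm)
    return deviation_mm > margin
```

Reducing the margins as the user improves amounts to simply shrinking the values in the table between practice cycles.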

3.3 Golf swing-specific feedback


The audio feedback can be presented in the form of alarms and sonified motion variables. The alarms are mostly used to mark important events (e.g. the start of a swing, the detection of an inaccurate pattern, the successful completion of a swing, lost visibility of the object, etc.). This information is transmitted instantly when there is an excessive deviation. This way, the player can relate the "unpleasant" sound of the failure alarm to their current posture and the club's position and orientation.
The sonification of motion variables related to the club or the golfer is used to convey the deviation of the practiced swing from the desired one. Typical sonification configurations include using stereo balance to present the deviation of the club's roll, pitch for presenting the deviation of the swing tempo and the club's roll, and volume for deviations of various body and club parts from their desired positions.
Along with the audio feedback, visual information is displayed on a dedicated monitor within the user's sight. The user can thus obtain information about the position and direction of the deviation after each completed swing. The visual information includes an animated visualization of the swing's deviation from the reference swing in space and time. The user can replay the animation at various speeds or jump to a specific time instant. The information to be presented is configurable: it can include only the values relevant for a specific time instant or the complete history of the pattern. The whole model of the club can be displayed or, alternatively, only its most relevant parts can be shown.
Figure 4, for example, shows the deviation of the club shaft from its proper trajectory. The reference swing (green) and the training swing (red) are shown in the planes perpendicular (above) and parallel (below) to the plane of the swing. For a clearer illustration, only a part of the model was configured to be displayed, representing the club shaft and omitting the club head. Figure 4 also shows the time instants at which the different parts of the swing are identified, as well as the moments when the training swing diverges from the reference swing beyond the permissible margin. At these exact moments, the sound alarm is triggered.

3.4 Analysis of results


Values of various variables, such as the positions, velocities, and accelerations of the observed body parts, angle mismatches, etc., can be analyzed based on the data obtained in a training session. These variables can be compared to the outcome of the performed motion (e.g. whether the ball rolls into the hole) in order to identify the variables that are most relevant to present through feedback. At the same time, the efficiency of the feedback configuration can be evaluated and, based on the results, the feedback can be reconfigured to be as efficient as possible.
An example of result analysis is shown in Fig. 5, with two examples of audio feedback using sonification. Here, sonification is used to represent the deviation of the club head's angle from the angle perpendicular to the line between the ball and the hole. The deviation was sonified using the pitch of a continuous tone (the higher the deviation, the higher the pitch). No tone was played when the deviation of the club head was less than one degree from the angle perpendicular to the line between the ball and the hole.
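This configuration can be sketched as a simple mapping with a one-degree dead zone. The base frequency and the scaling factor are our own illustrative choices; only the dead zone and the "higher deviation, higher pitch" rule come from the text:

```python
def angle_to_pitch(deviation_deg, dead_zone_deg=1.0, base_hz=220.0, hz_per_deg=110.0):
    """Sonify the club head's angular deviation as the pitch of a continuous
    tone: the larger the deviation, the higher the pitch. Inside the dead
    zone no tone is played (None)."""
    if abs(deviation_deg) < dead_zone_deg:
        return None                        # stroke is accurate enough: silence
    return base_hz + hz_per_deg * (abs(deviation_deg) - dead_zone_deg)
```

The dead zone keeps the feedback silent when the stroke is already accurate enough, so the tone itself signals that correction is needed.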
The players involved in this short experiment performed so-called "putting" strokes: short strokes intended to roll the ball into the hole from a short distance. Figure 5 shows two typical examples of a stroke from its beginning until the point of impact, without (Fig. 5a) or with (Fig. 5b) augmented feedback. The left part of the figure shows the trajectories of the three markers representing the motion of the club's head in the ground plane. The right side of the figure shows the head's angle during the stroke (the purple line marks the impact with the ball). It can be seen that the trajectory of the stroke in Fig. 5b is much straighter than the trajectory in Fig. 5a, and the angle of the stroke is closer to zero. In the second example, the ball rolled into the hole, which was not the case in the first example.

Fig. 4 The traces of training (red markers) and reference (green markers) golf swing


Fig. 5 The analysis of a golf stroke without (a) and with feedback (b)

4 Conclusion
Although properly designed augmented feedback positively affects the process of learning motion patterns, general guidelines can be applied only after taking into consideration the level of complexity of the motor task and the user's skills. As presented earlier, it is generally known that the effect of augmented feedback on the learning process increases with the complexity of the task. The frequency and intensity of augmented feedback are reduced as the user's skills increase, up until the last stage, in which the augmented feedback is completely left out.
Augmented feedback has been used, and its effects have been researched, for many different motion patterns; however, its use still presents many challenges. Currently available studies mainly report on visual and auditory augmented feedback; for a better understanding of the effects of this kind of feedback on the learning process, further research on haptic and multimodal feedback should also be considered. Furthermore, the use of multimodal feedback should be preferred if we want to take advantage of the specific benefits of each modality. Visual feedback can be used mainly for providing a vivid representation of a motion pattern, due to sight's superior capability of perceiving spatial information. In the case of a golf swing learning system, visual feedback enables students and instructors to visualize and extensively review the entire motion pattern or its subparts and to identify potential errors and deviations. Auditory and haptic feedback, on the other hand, can be used mainly throughout the motion execution phase, when real-time alerts and warnings can guide a student to correct wrong patterns on the go. Auditory and haptic feedback also immediately attract the user's attention.
The proposed real-time motion-capture system with multimodal feedback is an accurate and highly efficient learning tool, especially for situations where repetitive and consistent motion patterns are required and desired. In our future work, we will evaluate the system in an extensive field study with a large number of golf players. We expect the system to have a positive effect on their learning and to enable them to improve more quickly.

References
1. Arvind, D. K., & Bates, A. (2008, March). The speckled golfer. InProceedings of the ICST 3rd international
conference on Body area networks (p. 28). ICST (Institute for Computer Sciences, Social-Informatics and
Telecommunications Engineering)
2. Baudry L, Leroy D, Thouvarecq R, Chollet D (2006) Auditory concurrent feedback benefits on the circle
performed in gymnastics. J Sports Sci 24(2):149156

Multimed Tools Appl


3. Chollet, D., Micallef, J. P., & Rabischong, P. (1988). Biomechanical signals for external biofeedback to
improve swimming techniques. Swimming Science V. Champaign, IL: Human Kinetics Books, 389396
4. Chollet, D., Madani, M., & Micallef, J. P. (1992). Effects of two types of biomechanical bio-feedback on
crawl performance. Biomechanics and Medicine in Swimming, Swimming Science VI, 4853
5. Clarkson PM, James R, Watkins A, Foley P (1986) The effect of augmented feedback on foot pronation
during Barre exercise in dance. Res Q Exerc Sport 57(1):3340
6. Crowell HP, Davis IS (2011) Gait retraining to reduce lower extremity loading in runners. Clin Biomech
26(1):7883
7. Dozza M, Horak FB, Chiari L (2007) Auditory biofeedback substitutes for loss of sensory information in
maintaining stance. Exp Brain Res 178(1):3748
8. Effenberg AO (2000) Zum Potential komplexer akustischer Bewegungsinformationen fr die
Technikansteuerung. Leistungssport 5:1925
9. Femery VG, Moretto PG, Hespel JMG, Thvenon A, Lensel G (2004) A real-time plantar pressure feedback
device for foot unloading. Arch Phys Med Rehabil 85(10):17241728
10. Godbout, A., & Boyd, J. E. (2010, June). Corrective sonic feedback for speed skating: a case study. In
Proceedings of the 16th international conference on auditory display (pp. 2330)
11. Holden MK, Dyar T (2002) Virtual environment training: a new tool for neurorehabilitation. J Neurol Phys
Ther 26(2):6271
12. Jakus, G., Tomai, S., Sodnik, J. (2015). Sistem za podporo uenju gibalnih vzorcev z uporabo vemodalne
povratne informacije v realnem asu. In Proceedings of the 24th International Electrotechnical and Computer
Science Conference, 103106
13. Kirby R (2009) Development of a real-time performance measurement and feedback system for alpine skiers.
Sports Technology 2(12):4352
14. Kleiman-Weiner, M., & Berger, J. (2006). The sound of one arm swinging: a model for multidimensional
auditory display of physical motion
15. Koch, K., McLean, J., Segev, R., Freed, M. A., Berry, M. J., Balasubramanian, V., & Sterling, P. (2006).
How much the eye tells the brain. Curr Biol, 16(14), 14281434
16. Konttinen N, Mononen K, Viitasalo J, Mets T (2004) The effects of augmented auditory feedback on
psychomotor skill learning in precision shooting. J Sport Exerc Psychol 26(2):306316
17. Koritnik, T., Bajd, T., & Munih, M. (2008). Virtual environment for lower-extremities training. Gait Posture,
27(2), 323330
18. Kraek, A. & Sodnik, J. (2014). Qualisys Web Tracker A web-based visualization tool for realtime data of an optical tracking system. In Proceedings of ICIST 2014, 155160
19. Mullineaux DR, Underwood SM, Shapiro R, Hall JW (2012) Real-time biomechanical biofeedback
effects on top-level rifle shooters. Appl Ergon 43(1):109114
20. Nilsson, L. (2011). QTM Real-time Server Protocol Documentation, V1.9
21. Perrott, D. R., Sadralodabai, T., Saberi, K., & Strybel, T. Z. (1991). Aurally aided visual search in
the central visual field: effects of visual load and visual enhancement of the target. Hum Factors ,
33(4), 389400
22. Petrofsky J (2001) The use of electromyogram biofeedback to reduce Trendelenburg gait. Eur J Appl Physiol
85(5):491495
23. Qualysis Motion Capture Systems. (2015 September) Retrieved 6th , from: http://www.qualisys.com/
24. Schmidt, R. A. (1991). Motor learning principles for physical therapy. Contemporary management of motor control
problems: Proceedings of the II STEPConference: Alexandria, Va: American Physical Therapy Association, 4963
25. Schmidt, R. A., & Wrisberg, C. A. (2008). Motor learning and performance: A situation-based learning
approach. Champaign, IL: Human Kinetics
26. Shea CH, Wulf G (1999) Enhancing motor learning through external-focus instructions and feedback. Hum
Mov Sci 18(4):553571
27. Sigrist R, Schellenberg J, Rauter G, Broggi S, Riener R, Wolf P (2011) Visual and auditory augmented
concurrent feedback in a complex motor task. Presence: Teleop Virt Environ 20(1):15–32
28. Sigrist R, Rauter G, Riener R, Wolf P (2013) Augmented visual, auditory, haptic, and multimodal feedback
in motor learning: a review. Psychon Bull Rev 20(1):21–53
29. Sigrist R, Rauter G, Marchal-Crespo L, Riener R, Wolf P (2015) Sonification and haptic feedback in addition
to visual feedback enhances complex motor task learning. Exp Brain Res 233(3):909–925
30. Smith RM, Loschner C (2002) Biomechanics feedback for rowing. J Sports Sci 20(10):783–791
31. Snodgrass SJ, Rivett DA, Robertson VJ, Stojanovski E (2010) Real-time feedback improves
accuracy of manually applied forces during cervical spine mobilisation. Man Ther 15(1):19–25
32. Takahata M, Shiraki K, Sakane Y, Takebayashi Y (2004) Sound feedback for
powerful karate training. In: Proceedings of the 2004 Conference on New Interfaces for Musical
Expression. National University of Singapore, pp 13–18



33. Todorov E, Shadmehr R, Bizzi E (1997) Augmented feedback presented in a virtual environment accelerates
learning of a difficult motor task. J Mot Behav 29(2):147–158
34. Wulf G, Shea CH (2002) Principles derived from the study of simple skills do not generalize to complex skill
learning. Psychon Bull Rev 9(2):185–211
35. Wulf G, Hörger M, Shea CH (1999) Benefits of blocked over serial feedback on complex motor skill
learning. J Mot Behav 31(1):95–103

Grega Jakus received his Ph.D. in electrical engineering from the University of Ljubljana, Slovenia, in 2012. Since
2013, he has been an Assistant Professor at the Faculty of Electrical Engineering, University of Ljubljana, where he is a
member of the Information and Communications Technology Department and the Laboratory for Information
Technologies. His research interests include knowledge representation, motor learning using augmented feedback,
and human-computer interaction in desktop, vehicle and virtual environments. He teaches courses in the fields of
telecommunications, human-computer interaction, web technologies and software development.

Kristina Stojmenova is a PhD student at the University of Ljubljana (Slovenia). Her field of research is human-computer interaction in in-vehicle information systems. She holds an International Baccalaureate (IBO) diploma,
as well as a Bologna Bachelor's degree and a Bologna Master's degree in industrial engineering from the
University of Maribor (Slovenia). She is currently a junior researcher at the Information and Communications
Technology Department at the Faculty of Electrical Engineering, University of Ljubljana (Slovenia). She is also a
facilitator at the Demola Network, where she actively contributes to the management of student teams, content creation
and the modelling of workshops. Before becoming a junior researcher at the University of Ljubljana, she worked as a
graphical user interface designer for interfaces of dispatching systems at Iskratel (Slovenia). She is also the chair
of the IEEE Student Branch of the University of Maribor and the vice-chair of IEEE Women in Engineering Slovenia.


Sašo Tomažič received the Ph.D. degree in electrical engineering from the University of Ljubljana in 1991. Since
2002, he has been a full professor at the Faculty of Electrical Engineering, Ljubljana. He is the head of the Laboratory of
Information Technologies and the head of the Department of Information and Communication Technologies. He was
an adviser for information and telecommunication systems at the Ministry of Educational System and Sport from 1992 to
1998, a member of the Strategic Council at the Ministry of Defence from 1999 to 2000, and the national coordinator of
research in the field of telecommunications at the Ministry of Educational System and Sport from 2000 to 2003. His
research interests include ICT, signal processing, information theory, data mining and knowledge discovery, and
sensors. He has authored and/or coauthored 5 textbooks, 10 chapters in research monographs, and more than 200
journal and conference papers. He was an associate editor of Electrotechnical Review and he is currently the
chief editor of the Faculty of Electrical Engineering Publisher. He has been the leading researcher of 15 R&D projects and he
is the head of the research program Algorithms and Optimization Methods in Telecommunications, one of the two
research programs that have each time been named among the best research programs in Slovenia.

Jaka Sodnik is an Associate Professor of Electrical Engineering at the Faculty of Electrical
Engineering, University of Ljubljana. As a member of the ICT Department and the Laboratory for Information
Technologies, he is an active researcher and supervisor in the fields of human-machine interaction, web
technologies and acoustics. His early research focused on the analysis and generation of spatial sound and its use
in human-machine interaction. As a visiting researcher at the HIT Lab New Zealand, he was also involved in
several research projects in the field of virtual and augmented reality. He also advises and supervises the R&D
department of NervTeh d.o.o., a company that develops state-of-the-art motion driving simulators and offers various
methods of evaluating drivers' psychophysical states and their driving performance and abilities.
