
A Natural and Immersive Virtual Interface for the Surgical Safety Checklist Training


Andrea Ferracani, Daniele Pezzatini, Alberto Del Bimbo
Università degli Studi di Firenze - MICC
Firenze, Italy
[name.surname]@uni.it
ABSTRACT
Serious games have been widely exploited in medical training and rehabilitation. Although many medical simulators exist with the aim of training the personal skills of medical operators, only a few of them take into account cooperation between team members. After the introduction of the Surgical Safety Checklist by the World Health Organization (WHO), which has to be carried out by surgical team members, several studies have proved that the adoption of this procedure can remarkably reduce the risk of surgical crisis. In this paper we introduce a natural interface featuring an interactive virtual environment that aims to train medical professionals in following the safety procedures proposed by the WHO, adopting a serious game approach. The system presents a realistic and immersive 3D interface and allows multiple users to interact using vocal input and hand gestures. Natural interactions between users and the simulator are obtained exploiting the Microsoft Kinect™ sensor. The game can be seen as a role-playing game in which every trainee has to perform the correct steps of the checklist according to his/her professional role in the medical team.
Categories and Subject Descriptors
H.5.2 [Information Systems Applications]: Information Interfaces and Presentation - Miscellaneous; J.3 [Computer Applications]: Life and Medical Sciences - Health
General Terms
Gaming, edutainment, human factors, health
Keywords
Serious games, medical training, immersive environments,
Surgical Safety Checklist
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
SeriousGames'14, November 07 2014, Orlando, FL, USA
Copyright 2014 ACM 978-1-4503-3121-0/14/11. $15.00.
http://dx.doi.org/10.1145/2656719.2656725.
1. INTRODUCTION
The concept of serious games has been in use since the 1960s to refer to solutions adopting gaming for educational purposes rather than pure player entertainment [1]. Among the several fields in which serious games have been exploited, medicine is one of the most prolific [12], counting a large number of applications that feature Immersive Virtual Environments (IVEs). Improvements in medical training using gaming and IVEs have been brought about especially in the field of surgical education, where IVEs already play a significant role in training programmes [20, 7].
The Off-Pump Coronary Artery Bypass (OPCAB) game [8] and the Total Knee Arthroplasty game [9], for example, focus on the training of decision steps in a virtual operating room. Serious games featuring IVEs on different topics are Pulse! [6], for acute care and critical care, CAVE™ triage training [3] or Burn Center™ for the treatment of burn injuries [17]. Though all these medical training simulators focus on individual skills, an important aspect of healthcare to be taken into account is that, for the most part, it has to be provided by teams. Common techniques of team training in hospitals include apprenticeship, role playing and rehearsal, and involve high costs due to the required personnel, simulated scenarios that often lack realism, and the large amount of time needed. Several serious games featuring IVEs for team training have also been developed in recent years. 3DiTeams [6], CliniSpace™ [18], HumanSim [22], Virtual ED [26] and Virtual ED II [14] are some examples of games for team training in acute and critical care whose main objective is to identify and reduce weaknesses in operational procedures. A virtual environment for training combat medics has been developed by Wiederhold and Wiederhold [25] to prevent the eventuality of post-traumatic stress disorder. Medical team training for emergency first response has been developed as systems distributed over the network by Alverson et al. [2] and Kaufman et al. [16].
IVEs have proved to be an effective educational tool. Tasks can be repeated in a safe environment and as often as required. IVEs, in fact, make it possible to realistically experience a wide range of situations that would be impossible to replicate in the real world due to danger, complexity and impracticability. Though IVEs have traditionally been associated with high costs due to the necessary hardware, especially to provide real-time interactivity (multiple projectors, input devices, etc.), and have always presented a difficult setup, in recent years these issues have been partially solved by the availability of cheap and easily deployable devices such as Nintendo's Wii or Microsoft Kinect™.
Moreover, 3D technologies used in IVEs offer several advantages in medical education and training. In this context it is essential to provide a realistic representation of the environment and several points of view in order to create in learners the correct mental model and to reinforce the memorability of a specific procedure. On the basis of constructivist theory [11], one of the main features of educational IVEs is the possibility of providing highly interactive experiences capable of intellectually engaging trainees in carrying out tasks and activities they are responsible for. The opportunity to navigate educational scenarios as first-person controllers gives learners a more direct awareness of the interaction context and of their responsibilities than sessions mediated through an unrelated element such as a graphical user interface or another symbolic representation. In this regard IVEs require less cognitive effort in elaborating the context and stimulate imagination. This is even more true in scenarios where not only are skills learned in the environment where they will be applied, but the tasks can also be carried out by the trainee acting in a natural way, without the mediation of any device (e.g. keyboard, mouse or other controllers). It is also pointed out by constructivists that learning is enhanced when gaming, group-work activity and cooperation are provided [10]. Interacting with humans is more interesting and involving than interacting with a computer, and it implies personal responsibility. Each learner is expected by others to account for his/her actions and to contribute to the achievement of the team goal. The use of gaming techniques in education is called edutainment, and it is preferred by trainees over traditional techniques because it increases students' motivation and engagement in the learning context [15].
However, despite the perceived advantages and the good appreciation of serious games featuring IVEs for training, the adoption of such systems is still poor in real medical facilities. This is due to the fact that some open issues still exist in system usability, such as the ease of getting lost due to the possibility of free movement, or the difficulty of providing a contextual and significant set of options in the environment without switching to traditional device-controlled 2D interfaces (panels visualizing multiple choices or drop-down menus). In particular, an essential negative aspect of all the mentioned systems for team training is that, for the most part, trainees have to interact with the interface via computer monitors or head-mounted displays. This reduces the immersive effect and hinders the desired natural collaboration of participants in the simulation. In this regard we think that serious games could benefit greatly from adopting natural interaction techniques and exploiting the new low-cost devices available on the market for navigating and controlling IVEs.
In this paper we introduce a serious game set in an immersive virtual environment with the aim of training professionals in surgery to carry out efficiently the Surgical Safety Checklist (SSC) introduced by the World Health Organization in 2009. The system features natural interaction techniques and is easily deployable, requiring only a standard PC, a projector and a Kinect™ sensor. The proposed system is thus part of the family of vocational training systems, where the need is to train an activity that is usual in the everyday job of surgeons. The paper is organized as follows: Sect. 2 describes in detail the SSC guidelines and defines the proposed simulation scenario. In Sect. 3 the architecture and the main modules of the system are discussed along with some implementation details. Finally, conclusions are drawn in Sect. 4.
2. SURGICAL CHECKLIST SIMULATION
Surgical care is a central part of health care throughout the world, counting an estimated 234 million operations performed every year [24]. Although surgical operations are essential to improve patients' health conditions, they may lead to considerable risks and complications. In 2009, the World Health Organization (WHO) published a set of guidelines and best practices in order to reduce surgical complications and to enhance team-work and cooperation [13]. The WHO summarised many of these recommendations in the Surgical Safety Checklist, shown in Fig. 1. Starting from the proposed guidelines, many hospitals have implemented their own version of the SSC in order to better match the requirements of internal procedures.
The SSC identifies three main distinct phases, each corresponding to a specific period during the execution of an operation: before the induction of anaesthesia, before the incision of the skin, and before the patient leaves the operating room. In each phase, the surgical team has to complete the listed tasks before it proceeds with the procedure. All the actions must be verified by a checklist coordinator who is in charge of guaranteeing the correctness of the procedure. The goal is to ensure patient safety by checking machinery state and patient conditions, verifying that all the staff members are identifiable and accountable, and avoiding errors in patient identity, site and type of procedure. In this way, risks endangering the well-being of surgical patients can be efficiently minimized.
Arriaga et al. [4] conducted several simulations of a surgical crisis scenario in order to assess the benefits obtained by the adoption of the SSC. Results have shown that the execution of the SSC improves medical teams' performance and that failure to adhere to best practices during a surgical crisis is remarkably reduced.
Figure 1: The Surgical Safety Checklist proposed by
the WHO in 2009.
The proposed system is a serious game featuring an IVE to train users in the accomplishment of the surgical checklist. It adopts natural interaction via gestures and voice. The system de facto acts as the checklist coordinator of the surgical team.
The simulation involves multiple trainees (up to three), each of them associated with his/her role, and guides them throughout a complete SSC in a surgical operation scenario. The three actors (surgeon, anesthesiologist and nurse) expected to complete the checklist are automatically detected by the system when approaching the IVE and can interact by gesturing and speaking as in real life. Interactions and choices are not mediated by haptic devices or other controllers but mimic how the real procedure is carried out.
2.1 Simulation scenario
The virtual simulation allows the three health professionals who are going to execute a surgical operation to complete the SSC, each in accordance with his/her professional role. The IVE was designed with the objective of helping the trainees to understand the correct procedures to be followed.
Professionals (i.e. trainees) stand in front of the simulation interface (see Fig. 2). The simulator associates users with a professional role on the basis of their position in the physical space. In practice, the user standing on the left will be associated with the anesthesiologist, the one in the centre will be the surgeon and the one on the right will be the nurse.
Figure 2: Avatar selection: the trainee can choose which role to enact by standing on the left, in the centre or on the right of the IVE.
Once every user is associated with a role, the IVE is shown and the simulation can start. The first environment represents the pre-operating room, where the before the induction of anaesthesia part of the SSC usually takes place, with the patient and the three professionals' avatars (see Fig. 3). From this moment, a user can take control of the simulation by taking a step forward. When one of the users has control, the environment is shown from his/her first-person point of view (POV). Hence, the active user can interact via voice or gesture in order to carry out the specific step of the SSC procedure.
Interactions can be performed both by voice and by hand gestures. Voice-based interactions are used during the SSC when one of the professionals is expected to communicate with the patient or with another team member. For instance, one step of the SSC simulation involves the nurse confirming to the other team members the site of the operation, let's imagine it to be the right arm. Accordingly, the user enacting the nurse should take a step forward and say something like "we are going to operate on the right arm" or a similar sentence. As soon as the sentence is pronounced, the system verifies whether the site of the operation is correct, updates the SSC and gives feedback in order to continue with the simulation.
Figure 3: A 3D pre-operating room environment from a first-person POV, showing the patient and the other professionals' avatars.
Hand gestures (e.g. hand pointing and push) are instead used for other types of interaction, such as touching the patient, checking the state of the medical equipment or activating virtual menus. In the before the induction of anaesthesia part of the procedure the nurse has to indicate with his/her hand the part of the body to be operated on. The IVE displays the patient lying on the bed, seen from above, in order to allow the trainee better precision of movement in the 3D space. Contextually, a hand pointer is shown, mapped to the real position of the nurse's hand, which allows him/her to select the body part as shown in Fig. 4.
Figure 4: The nurse indicates the patient's body part to be operated on by pointing his/her hand.
Furthermore, during the simulation, the active user can perform a swipe gesture with his/her hand to activate a virtual overlay containing all the information about the patient's clinical history and the status of the SSC to be carried out. The overlay simulates the patient's medical card, usually available in the operating room. When all the steps of the checklist are correctly performed, trainees are given feedback on the successful outcome of the simulation, and a summary is presented with the time spent completing the procedure and a record of all the errors committed by each user.
The system exploits these data in order to provide the simulation with some gaming aspects: 1) the team members share the goal of correctly completing the checklist in the shortest time (the system provides a ranking table of the best team scores); 2) each trainee competes with the other team members, and his/her performance is measured by a scoring system which keeps track of all the individual errors and shows the results at the end of the simulation.
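The two gaming aspects above can be sketched as follows. This is an illustrative sketch in JavaScript (one of the languages the system is built with); the error weight and the time penalty are hypothetical values, not taken from the actual scoring system.

```javascript
// Illustrative per-trainee and per-team scoring, as described in
// the text: individual errors lower a trainee's score, and the
// team score rewards finishing the checklist quickly.
// weightPerError and the time penalty are assumed values.
function scoreTrainee(errors, weightPerError = 10) {
  return Math.max(0, 100 - errors.length * weightPerError);
}

function scoreTeam(trainees, elapsedSeconds) {
  // Sum the individual scores and subtract one point per minute
  // spent, so faster teams rank higher in the ranking table.
  const individual = trainees.map(t => scoreTrainee(t.errors));
  const timePenalty = Math.floor(elapsedSeconds / 60);
  return individual.reduce((a, b) => a + b, 0) - timePenalty;
}
```

A team that completes the checklist with one individual error in two minutes would, under these assumed weights, score 188 out of a possible 200.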
Although the SSC has been strictly defined and formalised by the WHO at a high level, the content of the simulation sessions (i.e. the patient's anamnesis and clinical card) is fully configurable in the system. This means that instructors can simulate, and trainees experience, all possible operations and risks, and therefore that specific scenarios can be created for teams of medical professionals in every field.
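As a concrete illustration, a configurable scenario of this kind could be expressed in JSON along the following lines. The field names and values are hypothetical, since the paper does not publish the actual Scenario Configuration schema.

```javascript
// Hypothetical Scenario Configuration in the JSON syntax the
// system loads at start-up. Every field below is an assumed
// example, not the authors' actual schema.
const scenarioConfig = JSON.parse(`{
  "patient": {
    "name": "J. Doe",
    "anamnesis": "No known allergies, anticoagulant therapy",
    "procedure": "osteosynthesis",
    "site": "right arm"
  },
  "phases": [
    {
      "name": "before the induction of anaesthesia",
      "steps": [
        { "role": "nurse", "interaction": "voice",
          "keywords": ["right arm"] },
        { "role": "nurse", "interaction": "gesture",
          "target": "right arm" }
      ]
    }
  ]
}`);
```

Changing the patient data and the step lists in such a file is what would let instructors build scenarios for different operations and specialties.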
3. THE SYSTEM
The proposed simulation system is a natural interface that exploits a realistic 3D environment and natural interaction paradigms in order to train users to correctly execute the SSC. Users stand in front of a large-sized screen or projection and can interact without using any wearable or handheld device: instead, they use their voice and body as the interface controller. The system is composed of two main modules:
- The Interaction Detection Module (IDM).
- The Immersive 3D Interface and the associated Game Status Controller (GSC).
Figure 5 shows how these modules are organized from a logical point of view in the simulator. The remainder of this section describes the modules in detail. Finally, some technical implementation details are provided.
Figure 5: Logical system architecture: the IDM analyses motion and depth data coming from the Kinect™ sensor, detects actions and communicates with the GSC, which updates the IVE on the basis of the scenario's configuration.
3.1 Interaction Detection
The interaction between users and the IVE is obtained by tracking movements and identifying users' actions through the Microsoft Kinect™ sensor [27]. In particular, the interface can be controlled by trainees using their position in the physical space, hand gestures and voice (see Fig. 6). The three different types of interaction can be executed concurrently or are turned on/off depending on the phase of the simulated procedure. For instance, if the simulator is waiting for a gesture input from the active user, the speech recognition module is temporarily switched off. The IDM is responsible for recognizing the user actions and notifying them to the GSC module in order to proceed with the simulation.
Figure 6: Different types of interaction the system is able to detect.
In detail, the IDM is able to detect:
Active user. The Kinect™ sensor is able to identify up to six human figures standing in front of the camera, but it can only track two skeletons simultaneously. Since the system is designed for three users, a policy is needed to dynamically define which of them is controlling the simulation (i.e. the interface). From the depth map obtained by the sensor, the IDM detects which user is closest to the interface. When a trainee takes a step forward, resulting in a reduction of the distance on the z-axis, the module notifies a change of the active user. This detection is active during the whole simulation session.
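The step-forward policy can be sketched as follows. The data shapes and the step threshold are assumptions for illustration; the actual IDM works directly on the Kinect™ depth map.

```javascript
// Sketch of the active-user policy: the trainee closest to the
// sensor on the z-axis (i.e. who has taken a step forward)
// takes control. users: [{ role, z }], z in metres (assumed).
function detectActiveUser(users, stepThreshold = 0.3) {
  const closest = users.reduce((a, b) => (b.z < a.z ? b : a));
  const others = users.filter(u => u !== closest);
  // Control changes only if the closest user is clearly ahead of
  // everyone else, avoiding jitter when all three stand in line.
  const ahead = others.every(u => u.z - closest.z >= stepThreshold);
  return ahead ? closest.role : null;
}
```

Returning `null` when nobody is clearly ahead models the idle state in which no trainee has control of the interface.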
Hand gestures. Once the active user is identified, skeleton tracking is exploited to detect his/her movements and, in particular, to track his/her hand position in space. The hand position is used to map a hand pointer in the IVE used to interact with interface elements. The IDM tracks the active hand of the trainee and sends its spatial coordinates in the 3D space in order to update the interface. When the user needs to activate some virtual element on the interface, he/she must perform a push gesture with the open hand. This is somewhat similar to a click on a mouse-based interface. Hand tracking and gesture updates can be switched on/off by the GSC, depending on the phase of the simulation. Furthermore, a swipe gesture has been provided that allows the user to open a virtual 2D overlay on the interface containing the patient's case history and clinical card. This gesture is performed by moving the right arm from right to left.
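A right-to-left swipe of this kind can be recognized from a short window of hand-position samples, as in the following sketch. The distance and time thresholds are illustrative assumptions, not the system's actual parameters.

```javascript
// Sketch of swipe recognition: the right hand moving from right
// to left across a minimum horizontal distance within a short
// time window. samples: [{ x, t }] with x = hand position in
// metres and t = timestamp in ms, oldest first (assumed shapes).
function detectSwipe(samples, minDistance = 0.4, maxMillis = 500) {
  if (samples.length < 2) return false;
  const first = samples[0];
  const last = samples[samples.length - 1];
  const movedLeft = first.x - last.x >= minDistance;  // right-to-left
  const fastEnough = last.t - first.t <= maxMillis;
  return movedLeft && fastEnough;
}
```

A push gesture could be detected analogously on the z-coordinate of the hand relative to the body.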
Speech inputs. The Kinect™ features an array of four separate microphones spread out linearly at the bottom of the sensor. By comparing each microphone's response to the same audio signal, the microphone array can be exploited to determine the direction from which the signal is coming and therefore to pay more attention to sound from that direction rather than from another. The module checks whether the angle from which the vocal input is detected corresponds to the direction of the active user, in order to ignore unexpected speech from other users. So, when the IDM has identified the active user, it tries to understand his/her specific audio signal. In order to achieve this, background noise removal algorithms are applied [23]. The Microsoft Speech SDK is used to verify the correctness of trainees' answers when interacting via voice with the system. In particular, it is exploited to assess whether the user's vocal input corresponds to a correct value for the current SSC step. The GSC dynamically loads the sets of possible keywords that are correct for the current step from the Scenario Configuration file. Based on these sets, the module checks the audio input and computes a confidence value for the speech-to-text match. If the detected confidence is greater than a threshold value, the correct interaction is notified to the GSC and the simulation can continue to the next step.
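The keyword check can be sketched as follows. In the real system the confidence value comes from the Microsoft Speech SDK; here a simple token-overlap score stands in for it, purely as an assumption for illustration.

```javascript
// Sketch of the confidence-thresholded keyword match: given the
// recognized utterance and the keyword set loaded for the
// current SSC step, return the matched keyword or null.
function matchKeyword(utterance, keywords, threshold = 0.7) {
  const words = utterance.toLowerCase().split(/\s+/);
  for (const keyword of keywords) {
    const tokens = keyword.toLowerCase().split(/\s+/);
    const hits = tokens.filter(t => words.includes(t)).length;
    // Stand-in confidence: fraction of the keyword's tokens heard.
    if (hits / tokens.length >= threshold) return keyword;
  }
  return null; // below threshold: nothing is notified to the GSC
}
```

With the nurse's example sentence from Sect. 2.1, the keyword "right arm" would clear the threshold and the interaction would be notified to the GSC.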
3.2 Game Status Controller
The GSC is the module responsible for the logic behind the whole simulation system. On simulation start-up the controller parses an external Scenario Configuration file containing all the phases and correct answers for the training session. On the basis of the configuration, it initializes the 3D interface, setting up the environment (e.g. the pre-operating room) and the characters. For each simulation phase and each SSC procedure step, it communicates to the IDM which type of interaction must be detected. In the case of vocal input, the controller also provides the set of correct keywords to be recognised in the current phase of the simulation. The GSC receives interaction information and uses it to verify whether the actions are correct or wrong according to the Scenario Configuration and the SSC. The controller then updates the interface in order to give positive/negative feedback to trainees and, consequently, to trigger animations and 3D camera/point-of-view changes. All the actions and errors are recorded and stored in order to provide teams and trainees with performance feedback.
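The control flow just described can be condensed into a minimal step machine, sketched below. The class and field names are illustrative assumptions; the actual GSC is part of the Unity3D application.

```javascript
// Minimal sketch of the Game Status Controller loop: it walks
// the configured steps in order, tells the IDM which interaction
// type to listen for, and records every wrong action.
class GameStatusController {
  constructor(steps) {
    this.steps = steps;   // from the Scenario Configuration file
    this.index = 0;       // current SSC step
    this.errors = [];     // stored for performance feedback
  }
  expectedInteraction() {
    // Communicated to the IDM so it can switch speech/gesture
    // detection on or off for the current step.
    const step = this.steps[this.index];
    return step ? step.interaction : null;
  }
  onInteraction(role, value) {
    const step = this.steps[this.index];
    if (!step) return 'finished';
    if (role === step.role && value === step.expected) {
      this.index += 1;  // positive feedback, advance to next step
      return this.index === this.steps.length ? 'finished' : 'ok';
    }
    this.errors.push({ role, value, step: this.index }); // negative feedback
    return 'error';
  }
}
```

Keeping the step list external to the class is what makes the simulation content configurable without touching the controller logic.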
3.3 Implementation details
The system is composed of two modules: the GSC/IVE and the IDM. The GSC module and the 3D immersive interface have been developed within the Unity3D (http://unity3d.com/) environment, using both the C# and JavaScript programming languages to add interactivity support. The controller is in charge of loading and parsing a Scenario Configuration file that contains the SSC procedure structure, the patient's case history and clinical card in JSON syntax. The IDM exploits the Microsoft Kinect™ SDK to detect users' interactions. It is developed in C# and uses a network socket to communicate with the main program. Scenes and characters have been created with Autodesk Maya. The system can run on any standard Windows-based workstation.
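The socket traffic between the IDM process and the Unity3D main program could be framed as newline-delimited JSON messages, as in the following sketch. The message fields are assumptions; the paper only states that a network socket is used.

```javascript
// Sketch of one possible wire format for IDM -> GSC messages:
// one JSON object per detected interaction, newline-terminated.
function encodeInteraction(type, payload) {
  return JSON.stringify({ type, payload }) + '\n';
}

function decodeInteractions(buffer) {
  // Split a received chunk into complete messages, keeping any
  // trailing partial message for the next socket read.
  const parts = buffer.split('\n');
  const rest = parts.pop(); // '' if the chunk ended on a newline
  return { messages: parts.map(p => JSON.parse(p)), rest };
}
```

Newline framing keeps the two processes decoupled: the C# IDM only needs to emit one line per event, and the Unity side can parse whatever complete lines have arrived.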
4. CONCLUSIONS
In this paper we present a work-in-progress immersive virtual environment, featuring gaming techniques, designed to train medical operators and professionals in the adoption and correct execution of the procedures suggested by the WHO in the Surgical Safety Checklist. The importance of the adoption of the SSC during surgical operations has been proved by several studies, showing improvements in medical teams' performance and a reduction of surgical crises. The main objective of the proposed system is therefore the design and implementation of a natural and immersive interface, sufficiently realistic and easy to use, to be actually adopted in medical education courses of study. In the immersive interface there is no possibility of free navigation, because the environment is controlled and exploited as a means for the trainee to play his/her role in the game and to follow a strictly defined procedure. Trainees are required to decide who is in charge of carrying out each of the activities of the SSC, and this improves awareness and sense of responsibility.
According to its functionalities, the developed system can be described using the taxonomy proposed by Wattanasoontorn et al., as shown in Table 1. Wattanasoontorn et al. extended the classification system defined by Rego et al. [19], who identified a taxonomy of criteria for the classification of serious games for health (application area, interaction technology, game interface, number of players, game genre, adaptability, progress monitoring, performance feedback and game portability).
Table 1: System classification by functionality according to Wattanasoontorn's taxonomy.
Functionality Solution
Application area Cognitive
Interaction Technology Microsoft Kinect
Game Interface 3D
Number of Players Multiplayer (up to 3 roles)
Game Genre Role Play
Adaptability No
Performance Feedback Yes
Progress monitoring No
Game portability Yes
Game Engine Unity 3D / Kinect SDK
Platform PC
Game Objective Professionals training
Connectivity No
During the entire design and development process of the serious game, medical professionals have been involved in order to reproduce scenarios and procedures that are highly compliant with real surgical operations. Future work will include several assessments of the system, exploiting both usability tests [5] and heuristic evaluations of the natural and immersive virtual interface [21].
5. REFERENCES
[1] C. C. Abt. Serious Games. Viking Press, New York, 1970.
[2] D. C. Alverson, S. S. Jr, T. P. Caudell, K. Summers,
Panaiotis, A. Sherstyuk, D. Nickles, J. Holten,
T. Goldsmith, S. Stevens, K. Kihmm, S. Mennin,
S. Kalishman, J. Mines, L. Serna, S. Mitchell,
M. Lindberg, J. Jacobs, C. Nakatsu, S. Lozano, D. S.
Wax, L. Saland, J. Norenberg, G. Shuster, M. Keep,
R. Baker, H. S. Buchanan, R. Stewart, M. Bowyer,
A. Liu, G. Muniz, R. Coulter, C. Maris, and D. Wilks.
Distributed immersive virtual reality simulation
development for medical education. Journal of
International Association of Medical Science
Educators, 15(1), 2005.
[3] P. B. Andreatta, E. Maslowski, S. Petty, W. Shim, M. Marsh, T. Hall, S. Stern, and J. Frankel. Virtual reality triage training provides a viable solution for disaster-preparedness. Academic Emergency Medicine, 17(8):870–876, 2010.
[4] A. F. Arriaga, A. M. Bader, J. M. Wong, S. R. Lipsitz, W. R. Berry, J. E. Ziewacz, D. L. Hepner, D. J. Boorman, C. N. Pozner, D. S. Smink, et al. Simulation-based trial of surgical-crisis checklists. New England Journal of Medicine, 368(3):246–253, 2013.
[5] D. A. Bowman, J. L. Gabbard, and D. Hix. A survey of usability evaluation in virtual environments: classification and comparison of methods. Presence: Teleoperators and Virtual Environments, 11(4):404–424, 2002.
[6] M. W. Bowyer, K. A. Streete, G. M. Muniz, and A. V. Liu. Immersive virtual environments for medical training. In Seminars in Colon and Rectal Surgery, volume 19, pages 90–97. Elsevier, 2008.
[7] D. A. Cook, R. Hatala, R. Brydges, B. Zendejas, J. H. Szostek, A. T. Wang, P. J. Erwin, and S. J. Hamstra. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA, 306(9):978–988, 2011.
[8] B. Cowan, H. Sabri, B. Kapralos, F. Moussa, S. Cristancho, and A. Dubrowski. A serious game for off-pump coronary artery bypass surgery procedure training. In MMVR, pages 147–149, 2011.
[9] B. Cowan, H. Sabri, B. Kapralos, M. Porte, D. Backstein, S. Cristancho, and A. Dubrowski. A serious game for total knee arthroplasty procedure, education and training. Journal of CyberTherapy & Rehabilitation (JCR), 3(3), 2010.
[10] C. Dede. The evolution of constructivist learning environments: Immersion in distributed, virtual worlds. Educational Technology, 35(5):46–52, 1995.
[11] T. M. Duffy and D. H. Jonassen. Constructivism: New implications for instructional technology. Constructivism and the Technology of Instruction: A Conversation, pages 1–16, 1992.
[12] M. M. Hansen. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning environments: a review of the literature. Journal of Medical Internet Research, 10(3), 2008.
[13] A. B. Haynes, T. G. Weiser, W. R. Berry, S. R. Lipsitz, A.-H. S. Breizat, E. P. Dellinger, T. Herbosa, S. Joseph, P. L. Kibatala, M. C. M. Lapitan, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. New England Journal of Medicine, 360(5):491–499, 2009.
[14] W. L. Heinrichs, P. Youngblood, P. M. Harter, and P. Dev. Simulation for team training and assessment: case studies of online training with virtual worlds. World Journal of Surgery, 32(2):161–170, 2008.
[15] J. G. Hogle. Considering games as cognitive tools: In search of effective edutainment. 1996.
[16] M. Kaufman. Team training of medical first responders for CBRNE events using multiplayer game technology. In Proceedings of Medicine Meets Virtual Reality, 2006.
[17] S. N. Kurenov, W. W. Cance, B. Noel, and D. W. Mozingo. Game-based mass casualty burn training. Studies in Health Technology and Informatics, 142:142–144, 2008.
[18] D. Parvati, W. L. Heinrichs, and Y. Patricia. CliniSpace: A multiperson 3D online immersive training environment accessible through a browser. Medicine Meets Virtual Reality 18: NextMed, 163:173, 2011.
[19] P. Rego, P. M. Moreira, and L. P. Reis. Serious games for rehabilitation: A survey and a classification towards a taxonomy. In Information Systems and Technologies (CISTI), 2010 5th Iberian Conference on, pages 1–6. IEEE, 2010.
[20] H. W. Schreuder, G. Oei, M. Maas, J. C. Borles, and M. P. Schijven. Implementation of simulation in surgical practice: minimally invasive surgery has taken the lead: the Dutch experience. Medical Teacher, 33(2):105–115, 2011.
[21] A. Sutcliffe and B. Gault. Heuristic evaluation of virtual reality applications. Interacting with Computers, 16(4):831–849, 2004.
[22] J. M. Taekman and K. Shelley. Virtual environments in healthcare: immersion, disruption, and flow. International Anesthesiology Clinics, 48(3):101–121, 2010.
[23] J. Webb and J. Ashley. Beginning Kinect Programming with the Microsoft Kinect SDK. Apress, 2012.
[24] T. G. Weiser, S. E. Regenbogen, K. D. Thompson, A. B. Haynes, S. R. Lipsitz, W. R. Berry, and A. A. Gawande. An estimation of the global volume of surgery: a modelling strategy based on available data. The Lancet, 372(9633):139–144, 2008.
[25] B. K. Wiederhold and M. Wiederhold. Virtual reality for posttraumatic stress disorder and stress inoculation training. Journal of CyberTherapy & Rehabilitation, 1(1):23–35, 2008.
[26] P. Youngblood, P. M. Harter, S. Srivastava, S. Moffett, W. L. Heinrichs, and P. Dev. Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simulation in Healthcare, 3(3):146–153, 2008.
[27] Z. Zhang. Microsoft Kinect sensor and its effect. IEEE MultiMedia, 19(2):4–10, 2012.
