
Available online at www.sciencedirect.com

Biomedical Signal Processing and Control 3 (2008) 1–18

www.elsevier.com/locate/bspc

Review

Human motion tracking for rehabilitation: A survey


Huiyu Zhou a, Huosheng Hu b,*
a Brunel University, Uxbridge UB8 3PH, United Kingdom
b University of Essex, Colchester CO4 3SQ, United Kingdom

Received 21 May 2007; received in revised form 21 August 2007; accepted 19 September 2007. Available online 31 October 2007.

Abstract

Human motion tracking for rehabilitation has been an active research topic since the 1980s. It has been motivated by the increased number of patients who have suffered a stroke, or some other motor function disability. Rehabilitation is a dynamic process which allows patients to restore their functional capability to normal. To reach this target, a patient's activities need to be continuously monitored, and subsequently corrected. This paper reviews recent progress in human movement detection/tracking systems in general, and existing or potential applications for stroke rehabilitation in particular. Major achievements in these systems are summarised, and their merits and limitations individually presented. In addition, bottleneck problems in these tracking systems that remain open are highlighted, along with possible solutions.
© 2007 Elsevier Ltd. All rights reserved.
Keywords: Stroke rehabilitation; Sensor technology; Motion tracking; Biomedical signal processing; Control

Contents

1. Introduction
2. Generic sensor technologies
   2.1. Non-visual tracking systems
   2.2. Visual based tracking systems
        2.2.1. Visual marker based tracking systems
        2.2.2. Marker-free visual based tracking systems
   2.3. Combination tracking systems
3. Non-visual tracking systems
   3.1. Inertial sensor based systems
   3.2. Magnetic sensor based systems
   3.3. Other sensors
   3.4. Intersense
   3.5. Glove-based analysis
4. Visual marker based tracking systems
   4.1. Passive
   4.2. Active
   4.3. Non-commercialized systems
5. Marker-free visual tracking systems
   5.1. 2-D approaches
        5.1.1. 2-D approaches with explicit shape models
        5.1.2. 2-D approaches without explicit shape models
   5.2. 3-D approaches
        5.2.1. Model-based tracking

* Corresponding author. Tel.: +44 20 872297; fax: +44 20 872788. E-mail address: hhu@essex.ac.uk (H. Hu).
1746-8094/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.bspc.2007.09.001


        5.2.2. Feature-based tracking
        5.2.3. Camera configuration
   5.3. Animation of human motion
6. Robot-aided tracking systems
   6.1. Typical working systems
        6.1.1. Cozens
        6.1.2. MIT-MANUS
        6.1.3. Taylor and improved systems
        6.1.4. MIME
        6.1.5. ARM Guide
        6.1.6. Others
   6.2. Haptic interface techniques
   6.3. Other techniques
        6.3.1. Gait rehabilitation
7. Discussion
8. Conclusions
Acknowledgement
References

1. Introduction

Evidence shows that, during 2001–2002, 130,000 people in the UK experienced a stroke [72] and required admission to hospital. More than 75% of these people were elderly, and required locally based multi-disciplinary assessment and appropriate rehabilitative treatment after they were discharged from hospital [29,54]. This resulted in increased demand on healthcare services and expense in the national health service. Reducing the need for face-to-face therapy might lead to an optimal solution for therapy efficiency and expense issues. Therefore, more and more interest has been drawn toward the development of home based rehabilitation schemes [4,61].

The goal of rehabilitation is to enable a person who has experienced a stroke to regain the highest possible level of independence so that they can be as productive as possible [182,183]. In fact, rehabilitation is a dynamic process which uses available facilities to correct any undesired motion behaviour in order to reach an expectation (e.g. an ideal position) [150]. Therefore, in a rehabilitation course the movement of stroke patients needs to be continuously monitored and rectified so as to maintain a correct motion pattern. Consequently, detecting/tracking human movement becomes vital and necessary in a home based rehabilitation scheme [179].

This paper provides a survey of technologies embedded within human movement tracking systems, which consistently update spatiotemporal information with regard to human movement. Existing systems have demonstrated that, to some extent, proper tracking designs help accelerate recovery in human movement. Unfortunately, many challenges remain open, due to the complexity of human motion and the existence of error or noise in measurement.

2. Generic sensor technologies

Human movement tracking systems are expected to generate real-time data that dynamically represents the pose changes of a human body (or a part of it), based on well developed motion-sensor technologies [9]. Fig. 1 illustrates a proposed motion tracking system, where human movements can be detected using available visual and on-body sensors. Motion sensor technology in a home based rehabilitation environment involves accurate identification, tracking, and post-processing of movement. Currently, intensive research addresses the application of position sensors, such as goniometers, pressure sensors and switches, magnetometers, and inertial sensors (e.g. accelerometers and gyroscopes).

Data acquisition is usually subject to noise or error. It is essential to study the structure and characteristics of individual sensors so that we can identify noise or error sources. To proceed with a relevant analysis, we first summarise the overall sensor technologies, followed by a detailed description. In general, a tracking system can be non-visual, visual based (e.g. marker based or markerless) or a combination of both. Fig. 2 illustrates a classification of available sensor techniques that will be introduced later in this paper. Performance of the systems based on these techniques is outlined in Table 1.

2.1. Non-visual tracking systems

Sensors employed within these systems adhere to the human body in order to collect movement information. These sensors are commonly categorised as mechanical, inertial, acoustic,

Fig. 1. An illustration of a proposed human movement tracking system (courtesy of Zhang et al. [175]).

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118

Fig. 2. Classification of human motion tracking using sensor technologies.

Table 1
Performance comparison of different motion tracking systems according to Fig. 2

System         Accuracy  Compactness  Computation  Cost    Drawbacks
Inertial       High      High         Efficient    Low     Drift
Magnetic       Medium    High         Efficient    Low     Ferromagnetic materials
Ultrasound     Medium    Low          Efficient    Low     Occlusion
Glove          High      High         Efficient    Medium  Partial posture
Marker         High      Low          Inefficient  Medium  Occlusion
Marker-free    High      High         Inefficient  Low     Occlusion
Combinatorial  High      Low          Inefficient  High    Multidisciplinary
Robot          High      Low          Inefficient  High    Limited motion

radio or microwave, and magnetic based. Some of them have such small footprints that they can detect small-amplitude movements, such as finger or toe movements. Generally speaking, each kind of sensor has its own advantages and limitations. Modality-specific, measurement-specific, and circumstance-specific limitations accordingly affect the use of particular sensors in different environments [162].

One example is an inertial accelerometer (piezoelectric [136], piezoresistive [97] or variable capacitive [167]), which normally converts linear or angular acceleration (or a combination of both) into an output signal [21]. An accelerometer is illustrated in Fig. 3. An accelerometer is physically compact and lightweight, and has therefore been frequently accommodated in portable devices (e.g. head-mounted devices). Furthermore, the outputs of accelerometers are immediately available without complicated computation. This feature plays a great role if people only need to obtain basic acceleration information from accelerometers. Unfortunately, accelerometers suffer from a drift problem if they are used to estimate velocity or orientation, due to sensor noise or offsets. Therefore, external correction is demanded throughout the tracking stage [17].

Even though each sensor has its own drawbacks, other available sensors may be used as a complement. For example, to improve the accuracy of location computation, people have exploited odometers, instead of accelerometers, in the design of mobile robots.

Recently, voluntary repetitive exercises administered with the mechanical assistance of robotic rehabilitators have proven effective in improving arm movement ability in post-stroke populations. Through these robot-aided tracking systems, human movements can be measured using electromechanical or electromagnetic sensors that are integrated in the structures. Electromechanical sensor based systems prohibit free human movement, but the electromagnetic approach permits motion freedom. It has been justified that robot-aided tracking systems provide a stable and consistent relationship, over a limited period, between system outputs and real measurements. An introduction to such robot-aided tracking systems will be provided in a later section.

2.2. Visual based tracking systems

Optical sensors (e.g. cameras) are normally applied to improve accuracy in position estimation. Tracking systems can be classified as either visual marker based or marker-free, depending on

Fig. 3. Entran's family of miniature accelerometers [77].


whether or not indicators need to be attached to body parts. First, we provide a brief description of visual marker based systems.

2.2.1. Visual marker based tracking systems

Visual marker based tracking is a technique in which cameras are used to track human movements, with identifiers placed upon the human body. As the human skeleton is a highly articulated structure, twists and rotations generate movement with many degrees of freedom. As a consequence, each body part follows an unpredictable and complicated motion trajectory, which may lead to inconsistent and unreliable motion estimation. In addition, cluttered scenes or varied lighting most likely distract visual attention from the real position of a marker. Visual marker based tracking is preferable in these circumstances.

Visual marker based tracking systems, e.g. VICON or Optotrak, are quite often used as a gold standard in human motion analysis due to their accurate position information (errors are around 1 mm). This accuracy motivates popular applications of visual marker based tracking systems in medicine. For example, a MacReflex Motion Capture System was used in a study to evaluate the relationship between the body-balancing movements and anthropometric characteristics of subjects while they stood on two legs with eyes open and closed [95]. Another application example is a study that induced slips in healthy young subjects and determined whether subjects who recovered after the slip could be discriminated, using selected lower extremity kinematics, from those who fell [19]. One major drawback of using optical sensors with markers is that rotated joints or overlapped body parts cannot be detected, and hence 3-D rendering is not available [146]. This situation could happen in a home environment, where a patient lives against a cluttered background.

2.2.2. Marker-free visual based tracking systems

Marker-free visual based tracking systems exploit only optical sensors to measure movements of the human body. This approach is motivated by the flaws of using visual marker based systems [81]: (1) identification of standard bony landmarks can be unreliable; (2) the soft tissue overlying bony landmarks can move, giving rise to noisy data; (3) the marker itself can wobble due to its own inertia; (4) markers can even come adrift completely.

A camera can have a resolution of a million pixels, indicating a high accuracy in the detection of object movements. In addition, cameras nowadays can be obtained at low cost, while the camera parameters can be flexibly configured by the user. These merits have made cameras popular in surveillance applications. A slight disappointment is that this technique requires intensive computation to conduct 3-D localisation and error reduction, in addition to the minimisation of data latency [22]. Furthermore, high speed cameras are required, as conventional cameras (with a sampling rate of less than sixty frames a second) provide insufficient bandwidth for accurate data representation [12].
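The bandwidth remark above follows directly from the sampling theorem: a camera must sample at more than twice the highest frequency component present in the motion. A minimal sketch of this check (the 30 Hz motion figure is an illustrative assumption, not a value from the survey):

```python
def min_frame_rate(max_motion_hz):
    """Minimum camera frame rate (frames/s) needed to represent motion whose
    highest frequency component is max_motion_hz, by the Nyquist criterion."""
    return 2.0 * max_motion_hz

# If rapid limb motion contains components up to ~30 Hz, a conventional
# 60 fps camera sits exactly at the theoretical limit, leaving no headroom.
print(min_frame_rate(30.0))
```

In practice, trackers sample well above this bound, which is why high speed cameras are preferred for accurate data representation.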

2.3. Combination tracking systems

These systems take advantage of both marker based and marker-free technologies. This combination strategy helps reduce errors arising from using the individual platforms. For example, the boundaries or silhouettes of human body parts can still be captured in a motion trajectory when markers mounted on these parts are not in the field of view of the cameras. This strategy requires intensive calibration and computation, and hence will not be discussed further in this paper. Interested readers can refer to literature such as ref. [152].

3. Non-visual tracking systems

Tracking human actions is an effective method which consistently and reliably represents motion dynamics over time [177]. In a rehabilitative course, the limbs of a patient must be localised so that undesirable patterns can be corrected. For this purpose, it is possible to make use of non-visual sensors, e.g. electromechanical or electromagnetic sensors. In fact, non-vision based tracking systems have been commonly used, as they do not suffer from the line-of-sight problem, which cannot be effectively dealt with in a home based environment. In this paper, we will focus on systems with inertial, magnetic, ultrasonic, and other similar sensing techniques. Additionally, glove based techniques are included, due to their employment of modern sensing techniques.

3.1. Inertial sensor based systems

Inertial sensors like accelerometers and gyroscopes have been frequently used in navigation and augmented reality modelling [157,176,115,172,15]. They provide an easy to use and cost-efficient way for full-body human motion detection. The motion data of the inertial sensors can be transmitted wirelessly to a work base for further processing or visualisation. Inertial sensors can offer high sensitivity and large capture areas.
However, the position and angle of an inertial sensor cannot be correctly determined, due to the fluctuation of offsets and measurement noise, which leads to integration drift. Therefore, designing drift-free inertial systems is the main target of current research.

The MT9 (now MTx) is a digital measurement unit that measures 3-D rate-of-turn, acceleration, and the earth's magnetic field [85] (Fig. 4). In a homogeneous earth-magnetic field, the MT9 system has 0.05° root-mean-square (RMS) angular resolution, 1.0° static accuracy, and 3° RMS dynamic accuracy. Using such a commercially available inertial sensor, Zhou and Hu developed a novel tracking strategy for human upper limb motion [180,178]. Human upper limb motion was represented by a kinematic chain, in which there were six joint variables to be considered. A simulated annealing based optimization method was adopted to reduce measurement error [180]. To effectively suppress measurement noise, Zhou and Hu [181] exploited an extended Kalman filter that fused the data from the on-board accelerometers and gyroscopes.
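The fusion idea behind such filters can be illustrated with a minimal complementary filter, a much simpler stand-in for the extended Kalman filter described above (the bias value and geometry below are illustrative assumptions): integrating the gyroscope rate gives a smooth but drifting angle, while the accelerometer's gravity reading supplies a drift-free reference that pulls the estimate back.

```python
import math

def fuse_tilt(gyro_rates, accel_xz, dt, alpha=0.98):
    """Fuse gyro rates (rad/s) and accelerometer gravity components into a
    single tilt-angle estimate (rad) using a complementary filter."""
    angle = math.atan2(*accel_xz[0])          # initialise from gravity
    for rate, (ax, az) in zip(gyro_rates, accel_xz):
        gyro_angle = angle + rate * dt        # short-term: integrate rate
        accel_angle = math.atan2(ax, az)      # long-term: gravity reference
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
    return angle

# A sensor held at a constant 30-degree tilt, with a gyro bias of 0.05 rad/s:
true = math.radians(30.0)
n, dt, bias = 1000, 0.01, 0.05
gyro = [bias] * n                              # true rate is 0; only bias remains
accel = [(math.sin(true), math.cos(true))] * n
fused = fuse_tilt(gyro, accel, dt)
drifted = true + bias * n * dt                 # pure integration drifts by 0.5 rad
```

After 10 s of simulated data, pure integration of the biased gyro is off by roughly 0.5 rad, while the fused estimate stays within a few hundredths of a radian of the true tilt, which is the qualitative behaviour the drift-correction literature above targets.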


Fig. 4. An MT9 sensor [85].

Experimental results demonstrated a reduction of drift and noise.

G-Link is an inertial sensor similar to the MT9 (Fig. 5). G-Link has two acceleration ranges, ±2 G and ±10 G, and its battery lifespan can be 273 h. Furthermore, this product has a small transceiver size of 25 × 25 × 5 mm. Many G-Links can be linked together to form a wireless sensor network [78]. Literature about the use of G-Link can be found in refs. [101,5].

Luinge [104] introduced the design and performance of a Kalman filter to estimate inclination from the signals of a triaxial accelerometer. Empirical evidence shows that inclination errors are less than 2°. Unfortunately, the problem of integration drift around the global vertical direction still appears. Foxlin et al. [55] revealed the first prototype of the FlightTracker, a hybrid tracking platform that fuses ultrasonic range measurements with inertial tracking in order to overcome the shortcomings addressed above. Experimental results show that drift was slower than 1 mm/s or 1°/min. Lobo and Dias [103] presented a framework for using inertial sensor data in vision systems. Using the vertical reference provided by the inertial sensors, the image horizon line could be determined. The main weakness of this method was that vertical world features were not available in some circumstances, e.g. flat surfaces, cluttered scenes, etc. Similar work has also been described in refs. [113,117].

Applications of inertial sensors in medicine are widely observed to date. Steele et al. [144] provided an overview of the potential applications of motion sensors to

detect physical activity in persons with chronic pulmonary disease in the setting of pulmonary rehabilitation. They used StayHealthy RT3 triaxial accelerometers to measure activity over 1 min epochs, collecting bouts of activity over 21 days. The study showed that, in general, the sensors' outcomes corresponded to the real activity. Similarly, an accelerometer based wireless body area network system was proposed by Jovanov et al., which provided ambulatory health monitoring using two perpendicular dual axis accelerometers for extended periods of time, with near real-time updates of patients' medical records through the Internet [92]. Najafi et al. proposed to use a miniature gyroscope to conduct a study on falls among the elderly [114]. The experimental results showed that the sensor measurements enabled falls to be predicted according to the previous history of elderly subjects with high and low fall-risk. Patrick et al. reported that parkinsonian rigidity could be assessed by monitoring force and angular displacements imposed by the clinician onto the limb segment distal to the joint being evaluated [118].

3.2. Magnetic sensor based systems

Magnetic motion tracking systems have been widely used for tracking user movements in virtual reality, due to their small size, high sampling rate, lack of occlusion, etc. Despite great successes, magnetic trackers have inherent weaknesses, e.g. latency and jitter [100]. Latency arises from the asynchronous nature by which sensor measurements are conducted. Jitter appears in the presence of ferrous or electronic devices in the surroundings, and of noise in the measurements. A number of research projects have been launched to tackle these problems, using Kalman filtering or other predictive filtering methods [170,107,173].

MotionStar is a magnetic motion capture system produced by the Ascension Technology Corporation in the United States [73] (Fig. 6).
Its performance includes: (1) translation range: ±3.05 m; (2) angular range: all-attitude, ±180° for azimuth and roll, ±90° for elevation; (3) static resolution (position): 0.08 cm at 1.52 m range; (4) static resolution (orientation): 0.1° RMS at 1.52 m range. This system applies direct current (dc) magnetic

Fig. 5. A G-Link unit [78].

Fig. 6. A MotionStar Wireless 2 system [73].


Fig. 7. Illustration of LIBERTY by Polhemus [80].

Fig. 8. Illustration of the InterSense IS-300 Pro [75].

tracking technologies, which are significantly less susceptible to metallic distortion than alternating current (ac) electromagnetic tracking technologies.

Another example is LIBERTY from Polhemus [80] (Fig. 7). LIBERTY computes at a rate of 240 updates per second per sensor, and can be upgraded from four sensor channels to eight via the addition of a single circuit board. It also has a latency of 3.5 ms, a positional resolution of 0.038 mm at a 30 cm range, and an orientation resolution of 0.0012°. Molet et al. [111,112] presented a real-time conversion of magnetic sensor measurements into human anatomical rotations. Using solid-state magnetic sensors and a tilt sensor, Caruso [27,28] developed a new compass that could determine an accurate heading. Suess et al. presented a frameless system for intraoperative image guidance [147]. This system generated and detected a dc pulsed magnetic field for computing the displacement and orientation of a localizing sensor. The entire tracking system consists of an electromagnetic transmitting unit, a sensor, and a digitizer that controls the transmitter and receives the data from the localizing sensor. Experiments revealed that the mean localisation errors are less than 2 mm. An image guided intervention system was proposed by Wood et al. [164]. A tetrahedral-shaped weak electromagnetic field generator was designed in combination with open-source software components. The registration and tracking errors are less than 5 mm.

3.3. Other sensors

Acoustic systems collect signals by transmitting and sensing sound waves, where the flight duration of a brief ultrasonic pulse is timed and calculated. These systems are used in medical applications [48,120,133], but have not been used in motion tracking. This is due to the following drawbacks: (1) the efficiency of an acoustic transducer is proportional to the active surface area, so large devices are needed; (2) to improve the detection range, the frequency of the ultrasonic waves must be low (e.g. 10 Hz), but this affects system latency in continuous measurement; (3) acoustic systems require a line of sight between emitters and receivers.

Radio and microwaves are normally used in navigation systems and airport landing aids [162]. They have very low resolutions, and therefore cannot be applied to human motion tracking. Electromagnetic wave-based tracking approaches can provide range information, since the radiated energy dissipates with radius r as 1/r². For example, using a delay-locked loop (DLL), a Global Positioning System (GPS) can achieve a resolution of 1 m. Obviously, this is not enough to discriminate human movements of 0–50 cm displacement per trial. A radio frequency-based precision motion tracker can detect motion over a few millimetres; unfortunately, it uses large racks of microwave equipment which must be accommodated in a large room.

The electromyogram (EMG) is an analysis of the electrical activity of contracting muscles. It is often used to detect which muscles are or are not working, and in what sequence they work to respond to the needs of a movement. EMG can quantify the intensity of muscle activity, and the technique has commonly been used in rehabilitation exercises. Wang et al. designed a wearable training unit which collected signals such as heart rate and EMG. By inspecting these biosignals, one can select optimal control signals corresponding to a proper workload for the device [160]. Mavroidis et al. [109] introduced several smart rehabilitation devices developed by the Northeastern University Robotics and Mechatronics Laboratory. Among these devices, Biofeedback is a device that uses EMG to monitor muscle activity after knee surgery, and provides quantitative information on how a patient responds to a delivered stimulation. Patten et al. applied EMG to explore the biomechanical variations of locomotor activities when patients received vestibular rehabilitation [119].

3.4. Intersense

Ultrasonic systems can be combined with other techniques so as to overcome the acoustic drawbacks listed above. InterSense produced the IS-600 Motion Tracker [75] (Fig. 8), which effectively eliminates jitter. It is a hybrid acousto-inertial system, where orientation and position are generated by integrating the outputs of its gyros and accelerometers, and drift is corrected using an ultrasonic time-of-flight range system.

3.5. Glove-based analysis

Since the late 1970s, people have studied glove-based devices for the analysis of hand gestures. Glove-based devices


Fig. 9. Illustration of a glove-based prototype (image courtesy of KITTY TECH [76]).

Fig. 10. An operating Qualisys system [82].

adopt sensors attached to a glove (Fig. 9) that transduce finger flexion and abduction into electrical signals, in order to determine hand pose. These devices may be used to reconstruct motor function in the case of hand impairment, and their use in hand therapy is encouraged by their flexibility, easy donning and removal, light weight and accuracy.

The Dataglove (originally developed by VPL Research) was a neoprene fabric glove with two fiber optic loops on each finger. At one end of each loop is an LED, and at the other end a photosensor. The fiber optic cable has small cuts along its length. When the user bends a finger, light escapes from the fiber optic cable through these cuts. The amount of light reaching the photosensor is measured and converted into a measure of how much the finger is bent. The Dataglove requires recalibration for each user [184]. The CyberGlove system included one CyberGlove [84], an instrumentation unit, a serial cable to connect to a host computer, and an executable version of the VirtualHand graphic hand model display and calibration software. Based on the design of the Dataglove, the PowerGlove was developed by Abrams-Gentile Entertainment. The PowerGlove consists of a sturdy Lycra glove with flat plastic strain gauge fibers, coated with conductive ink, running up each finger; these measure the change in resistance during bending, giving the degree of flex for the finger as a whole. It employs an ultrasonic system to track the roll of the hand, where the ultrasonic transmitters must be oriented toward the microphones in order to obtain an accurate reading. Drawbacks appear when a pitching or yawing hand changes the orientation of the transmitters, and the signal is lost by the microphones. Simone and Kamper [141] reported a wearable monitor that measures finger posture using a data glove made of a Lycra and Nylon blend and containing five bend sensors. A repeatability test showed an average variability of 2.96% in the gripped hand position.
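Converting a raw bend-sensor reading (light loss in the Dataglove, resistance in the PowerGlove) into a joint angle typically relies on a per-user two-point calibration, which is why the Dataglove must be recalibrated for each wearer. A minimal sketch of the idea, with hypothetical resistance values that do not come from the survey:

```python
def make_bend_to_angle(r_flat, r_fist, fist_angle_deg=90.0):
    """Return a function mapping a bend-sensor resistance (ohms) to a finger
    flexion angle, assuming a linear sensor response between two calibration
    poses: a flat hand (0 degrees) and a closed fist (fist_angle_deg)."""
    scale = fist_angle_deg / (r_fist - r_flat)
    def to_angle(r):
        return (r - r_flat) * scale
    return to_angle

# Hypothetical calibration: 10 kOhm with the hand flat, 30 kOhm in a fist.
to_angle = make_bend_to_angle(10e3, 30e3)
half_bent = to_angle(20e3)   # midway resistance -> 45 degrees of flexion
```

Real gloves use more calibration poses and non-linear fits, but the per-user mapping step is the same.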
A force-feedback glove called the Rutgers Master was integrated into an orthopedic telerehabilitation system by Burdea et al. [23].

4. Visual marker based tracking systems

In 1973, Johansson conducted his famous Moving Light Display (MLD) psychological experiment on the perception of biological motion [91]. He attached small reflective markers to the joints of human subjects, so that the trajectories of these markers could be monitored. This experiment became a

milestone in human movement tracking. Marker-based tracking systems are capable of minimising the uncertainty of a subject's movements, due to the unique appearance of the markers. This basic idea is still embedded in current state-of-the-art motion trackers. These tracking systems can be passive, active or hybrid in style: a passive system uses a number of markers that do not generate any light, but only reflect incoming light. In contrast, the markers in an active system produce light, i.e. infrared, which is then collected by a camera system.

4.1. Passive

Qualisys is a motion capture system consisting of 1–16 cameras, each emitting a beam of infrared light [82] (Fig. 10). Small reflective markers are placed on the object to be tracked. Infrared light is flashed from close to, and then picked up by, the cameras. The system then computes the 3-D position of the reflective target by combining 2-D data from several cameras. A similar system, VICON, was specifically designed for use in virtual and immersive environments [83] (Fig. 11). Applications of these passive optical systems are often found in medical science. For example, Davis et al. reported a study using a VICON system for gait analysis [42]. A VICON system was also used to calculate joint centers and segment orientations by optimizing skeletal parameters from the trials [31].
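The core computation behind such passive systems is triangulation: each calibrated camera contributes a viewing ray toward a marker, and the 3-D position follows by intersecting the rays. A toy two-camera sketch, assuming the camera centers and ray directions are already known from calibration (the values below are illustrative, not the calibration of any commercial system):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + s*d1 and
    c2 + t*d2 -- the usual least-squares marker position when the two
    rays do not intersect exactly (e.g. due to image noise)."""
    w0 = tuple(a - b for a, b in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    det = b * b - a * c
    s = (c * dot(d1, w0) - b * dot(d2, w0)) / det
    t = (b * dot(d1, w0) - a * dot(d2, w0)) / det
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Toy setup: two cameras 1 m apart on the X axis, each reporting the
# viewing ray toward a marker actually located at (0.5, 0.2, 4.0).
c1, c2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
d1, d2 = (0.5, 0.2, 4.0), (-0.5, 0.2, 4.0)
print(tuple(round(v, 6) for v in triangulate(c1, d1, c2, d2)))  # -> (0.5, 0.2, 4.0)
```

With more than two cameras the same idea extends to a larger least-squares problem, which is what commercial systems solve internally.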

Fig. 11. Reflective markers used in a real-time VICON system [83].

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118

Fig. 13. A Polaris system [79].

4.2. Active

One of the active visual tracking systems is CODA (Fig. 12). CODA comes pre-calibrated for 3-D measurement, without the need to recalibrate using a space-frame [74]. Up to six sensor units can be used together, which enables the system to track 360° movement. Active markers can be identified by virtue of their positions during a time-multiplexed sequence. At a 3 m distance, the system achieves peak-to-peak deviations from the actual position of 1.5 mm in the X and Z axes, and 2.5 mm in the Y axis. CODA's measurements have been commonly used as ground truth to evaluate motion measurements [177,182]. In addition, the system was employed in an instrumented assessment of muscle overactivity and spasticity, with dynamic polyelectromyography and motion analysis for treatment planning [49]. It was used to measure 3-D lower limb kinematics, kinetics and surface electromyography (EMG) of the rectus femoris, tibialis anterior, peroneus longus and soleus muscles in all subjects during a lateral hop task, for the period 200 ms pre- and post-initial contact (IC) [43].

Fig. 12. A CODA system [74].

Another example is Polaris (Fig. 13). The Polaris system (Northern Digital Inc.) [79] combines simultaneous tracking in both wired and wireless states. The whole system can be divided into two parts: position sensors, and passive or active markers. The former consist of a pair of cameras that are sensitive only to infrared light. This design is particularly useful when background lighting varies and is unpredictable. Passive markers are covered with reflective materials, and are illuminated by arrays of infrared light-emitting diodes surrounding the position sensor lenses. With proper calibration, the system may achieve 0.35 mm RMS accuracy in position measurements.

4.3. Non-commercialized systems

Using established techniques, people have developed hybrid strategies for human motion tracking. Such systems, although still at an experimental stage, have already demonstrated promising performance. Lu and Ferrier [106] presented a digital-video based system for measuring human motion in repetitive workplace tasks. A single camera was exploited to track colored markers placed on the upper limbs. From the marker locations, one could recover a skeleton model of the arm under investigation. However, this system was not able to separate lateral movements of the arm. Mihailidis et al. [110] designed a vision-based agent for an intelligent environment that assists older adults with dementia during daily living activities. A color-based motion tracking strategy was used to estimate upper limb motion. The weakness of this agent was the lack of a three-dimensional representation of real movements. Tao et al.
[152,151] proposed a visual tracking system which exploited both marker-based and marker-free tracking methods (Fig. 14). Unfortunately, like other marker-based motion

Fig. 14. Demonstration of Tao and Hu's approach: (a) markers attached to the joints; (b–d) marker positions captured by three cameras [152].


Fig. 15. Demonstration of Pfinder by Wren et al. [165].

trackers, this system required calibration and professional intervention.

5. Marker-free visual tracking systems

In the previous section, we described the features of marker-based tracking systems, which are restricted to limited degrees of freedom due to the mounted markers. As a less restrictive motion capture technique, markerless systems are capable of overcoming the mutual occlusion problem, as they are mainly concerned with the boundaries or features of human bodies. This has been an active and challenging research area for the past decade, and research is still ongoing because of unsolved technical problems. Applications of these marker-free visual tracking systems have demonstrated promising performance. For example, a Camera Mouse system was developed to provide computer access for disabled people [10]. This system tracks the user's movements with a video camera and translates them into movements of the mouse pointer on the screen. Twelve people with severe cerebral palsy or traumatic brain injury used this system, and nine of them showed success. Human motion analysis can be divided along three axes [1]: body structure analysis (model and non-model based), camera configuration (single and multiple), and correlation platform (state-space and template matching). We provide a brief description as follows.

5.1. 2-D approaches

As a commonly used framework, 2-D motion tracking is only concerned with human movement in the image plane, where the tracking system may adapt flexibly and respond rapidly due to the reduced spatial dimensions. This approach can be employed with or without explicit shape models. Model-based tracking involves matching generated object models with acquired image data.
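The Camera Mouse idea described above reduces to relocating a small image template from frame to frame and mapping its displacement to pointer displacement. A minimal sketch using an exhaustive sum-of-squared-differences (SSD) search; the actual Camera Mouse employs its own correlation-based tracker, and the frame sizes, feature and search radius here are purely illustrative:

```python
def ssd(frame, template, x, y):
    """Sum of squared differences between the template and the frame
    patch whose top-left corner is at column x, row y."""
    return sum(
        (frame[y + r][x + c] - template[r][c]) ** 2
        for r in range(len(template))
        for c in range(len(template[0]))
    )

def track(frame, template, prev_xy, search=5):
    """Relocate the template near prev_xy by exhaustive SSD search
    within a (2*search+1)^2 window, staying inside the frame."""
    th, tw = len(template), len(template[0])
    h, w = len(frame), len(frame[0])
    px, py = prev_xy
    candidates = [
        (x, y)
        for y in range(max(0, py - search), min(h - th, py + search) + 1)
        for x in range(max(0, px - search), min(w - tw, px + search) + 1)
    ]
    return min(candidates, key=lambda xy: ssd(frame, template, *xy))

# Toy 20x20 frames: a bright 3x3 feature moves 2 px right, 1 px down.
def blank():
    return [[0.0] * 20 for _ in range(20)]

f0, f1 = blank(), blank()
for r in range(3):
    for c in range(3):
        f0[5 + r][5 + c] = 1.0
        f1[6 + r][7 + c] = 1.0
template = [row[5:8] for row in f0[5:8]]
print(track(f1, template, (5, 5)))  # -> (7, 6): move the pointer by (+2, +1)
```

The per-frame displacement, scaled by a gain, becomes the mouse-pointer motion.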

5.1.1. 2-D approaches with explicit shape models

In the presence of arbitrary human movements, self-occlusion commonly appears in rehabilitation environments. To address this problem, one normally uses a priori knowledge of human movement in 2-D, segmenting the human body. For example, Wren et al. [165] presented a region-based approach, in which the human body is regarded as a set of blobs, each described by a spatial and color Gaussian distribution (see Fig. 15). Ju et al. [93] proposed a cardboard human body model using a set of jointed planar ribbons. Niyogi and Adelson [116] examined the braided pattern that the lower limbs of a pedestrian trace when projected into a spatio-temporal domain, followed by identification of the joint trajectories.

5.1.2. 2-D approaches without explicit shape models

Since human movements are non-rigid and arbitrary, the boundaries or silhouettes of a human body are variable and deformable, leading to difficulty in describing them. Tracking the human body, e.g. the hands, is normally achieved by means of background subtraction or color detection. Furthermore, with no model available, one has to utilize low-level image processing (such as feature extraction). Baumberg and Hogg [7] considered using an Active Shape Model (ASM) to track pedestrians (Fig. 16). A Kalman filter was then applied to accomplish the spatio-temporal operation, similar to the work of Blake et al. [14]. Their work was later extended by generating a physical model from a training set of examples of object deformation, tuning the elastic properties of the object to reflect how it actually deformed [8]. Freeman et al. [56] developed an on-chip detector for computer games, used to infer useful information about the position, size, orientation, and configuration of human body parts (Fig. 17). Cordea et al.
[37] discussed a 2.5-D tracking method, allowing the real-time recovery of the 3-D position and orientation of a head moving in its image plane. Fablet and Black [50] proposed a

Fig. 16. Human tracking results using Baumberg and Hogg's approach [7].


Fig. 17. Computer game on-chip by Freeman et al. [56].

solution for the automatic detection and tracking of human motion, using 2-D optical flow information. A particle filter was used to represent and predict non-Gaussian posterior distributions over time. Chang et al. [30] considered tracking cyclic human motion by decomposing complex cyclic motions into components, while maintaining the coupling between components. Wong and Wong [163] proposed a wavelet-based tracking system, in which the human body is located within a small search window using color and motion as heuristics. The window's location and size were estimated using the proposed wavelet estimation.

5.2. 3-D approaches

2-D frameworks have natural restrictions, due to their viewing angle. To improve a tracker in an unpredictable environment, 3-D modelling techniques have been promoted as an alternative. In essence, these approaches attempt to recover 3-D articulated poses over time [60]. In some circumstances, the 3-D model is projected onto a 2-D image for later processing.

5.2.1. Model-based tracking

Modelling human movements allows the tracking problem to be constrained: the future movements of a human body can be predicted regardless of self-occlusion or self-collision. Model-based approaches include stick figures, volumetric models, and mixtures of the two.

5.2.1.1. Stick figure. A stick figure is a representation of a skeletal structure, normally regarded as a collection of segments and joint angles (refer to Fig. 18). Bharatkumar et al. [11] used stick figures to model the lower limbs, e.g. hip, knees, and ankles. They applied a medial-axis transformation to extract 2-D stick figures of the lower limbs. Huber's human model [86] was a refined version of the stick figure representation: joints were connected by line segments, with a certain degree of constraint that could be relaxed using virtual springs. By modelling a human body with 14 joints and 15 body parts, Ronfard et al.
[135] attempted to find people in static video frames, using learned models of both the appearance of body parts (head, limbs, hands) and the geometry of their assemblies. They built on Forsyth and Fleck's

Fig. 18. Stick figure of a human body [71].

general body plan methodology, and Felzenszwalb and Huttenlocher's dynamic programming approach, to efficiently assemble candidate parts into pictorial structures. Karaulova et al. [94] built a hierarchical model of human dynamics, encoded using hidden Markov models (HMMs); this approach allows view-independent tracking of a human body in monocular image clips. Sullivan et al. [149] combined automatic tracking of rotational body joints with well-defined geometric constraints associated with a skeletal articulated structure. This work was based on heuristically tracked points [102], with tracking corrected using the method in ref. [148]. Further similar work is reported in refs. [116,58,70,88,124].

5.2.1.2. Volumetric modelling. Elliptical cylinders are one of the volumetric primitives used to model the human body. Rohr [134] extended the work of Marr and Nishihara [108], which used elliptical cylinders to represent the human body. Rehg and Kanade [126] represented two occluded fingers using several cylinders; the center axes of the cylinders were projected onto the center line segments of the 2-D finger images. Goncalves et al. [64] modelled both the upper and lower arm as truncated circular cones, with the shoulder and elbow presumed to be spherical joints. Chung and Ohnishi [34] proposed a 3-D model-based motion analysis using cue circles (CC) and cue spheres (CS). Theobalt et al. [154] suggested combining efficient real-time optical feature tracking with reconstruction of the volume of a moving subject, in order to fit a sophisticated humanoid skeleton to video footage. A scene was observed with four video cameras, two of which were connected to a PC. In addition, a voxel-based approximation to the visual hull was computed for each time step. Fig. 19 illustrates the final outcome. Other research projects have been carried out using 3-D volumetric models, e.g. cones [45,44], super-quadrics [142], and cylinders.
Volumetric modelling requires more parameters to build up an entire model, resulting in intensive computation throughout registration. Similar results can be found in refs. [134,59,159,132].
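Computationally, the stick-figure representation of Section 5.2.1.1 reduces to forward kinematics: given segment lengths and relative joint angles, the joint positions follow by chaining planar rotations. A minimal sketch; the segment lengths and angles below are illustrative, not drawn from any cited model:

```python
import math

def forward_kinematics(lengths, angles, origin=(0.0, 0.0)):
    """Chain planar segments: each angle is relative to the previous
    segment's direction. Returns the 2-D position of every joint,
    root first."""
    x, y = origin
    theta = 0.0
    joints = [(x, y)]
    for length, angle in zip(lengths, angles):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        joints.append((x, y))
    return joints

# Two-segment "arm": upper arm 0.3 m, forearm 0.25 m;
# shoulder raised 90 degrees, elbow flexed back 90 degrees.
joints = forward_kinematics([0.3, 0.25], [math.pi / 2, -math.pi / 2])
print([(round(x, 2), round(y, 2)) for x, y in joints])
# -> [(0.0, 0.0), (0.0, 0.3), (0.25, 0.3)]
```

Tracking with such a model is the inverse problem: estimating the angle vector whose predicted joint positions best match the image observations.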

Fig. 21. Applications of multiple cameras in human motion tracking by Ringer and Lasenby [131].

Fig. 19. Volumetric modelling by Theobalt [154].

In contrast, hierarchical modelling techniques are believed to remedy the deficiencies highlighted in the systems described above. For example, Plankers et al. [121] presented a hierarchical human model for achieving more accurate tracking results, in which four layers were engaged: a skeleton, ellipsoid metaballs for tissue and fat, a polygonal surface for the skin, and shaded rendering.

5.2.2. Feature-based tracking

This approach starts by extracting significant characteristics, and then matches them across images. In this context, both 2-D and 3-D features are adopted. Hu et al. [87] suggested that feature-based tracking algorithms fall into three groups, based on the nature of the selected features: global feature-based [41,122,137,156], local feature-based [35,128], and dependence-graph-based algorithms [51,62,63,57].

5.2.3. Camera configuration

The line-of-sight problem can be partially tackled with a proper camera setup, using either a single camera [6,18,46,123,142,13,139,140,168,169] (see Fig. 20) or a distributed-camera configuration [16,26,33,90,131]. Using multiple cameras does require a common spatial reference to be employed, whereas a single camera has no such requirement. However, a single camera readily suffers occlusion of the human body, due to its fixed viewing angle. Thus, a distributed-camera strategy is a better option for minimising such risk. One example of using two cameras is illustrated in Fig. 21.

5.3. Animation of human motion

Video capture virtual reality (VR) uses a video camera and software to track human movements, without the need to place markers at specific body locations. The user's image is generated within a simulated environment, such that it is possible to interact with animated graphics in a completely natural manner. This technology first became available 25 years ago, but it was not applied to rehabilitation practice until five years ago [161]. Recently, VR has been commonly used in stroke rehabilitation, e.g. refs. [89,174,171]. Holden and Dyar [69] pre-recorded the movements of a virtual teacher, and then asked patients to imitate the movement templates, in order to conduct upper limb repetitive training using a VR system. Evidence shows that the Vivid GX video capture technology employed can be used to improve upper extremity function [96]. Rand et al. [125] designed a Virtual Mall (VMall), using the available GX platform, in which stroke patients could carry out daily activities such as shopping in a supermarket. A comprehensive survey on this topic is available in ref. [150].

Fig. 20. Human motion tracking by Sidenbladh et al. [139].


It is necessary to bear in mind that the marker-free visual tracking techniques described above have been only partially successful in real situations. The main problem is that the proposed algorithms/systems still need improvement to balance robustness and efficiency. This bottleneck inevitably affects the further development of home-based motion detection systems.

6. Robot-aided tracking systems

Robot-aided tracking systems, a subset of therapeutic robots, are valuable platforms for delivering neuro-rehabilitation to human limbs following stroke [68,143]. Rendering the position/orientation of the limbs is necessarily required in order to guide limb motion. In this section, one can find a rich variety of rehabilitation systems driven by electromechanical or electromagnetic tracking strategies. These systems incorporate individual sensor technologies to implement sense-measure-feedback strategies.

6.1. Typical working systems

6.1.1. Cozens

To establish whether motion tracking techniques can assist simple active upper limb exercises for patients recovering from neurological diseases (e.g. stroke), Cozens [38] reported a pilot study using torque applied to an individual joint, combined with EMG measurements indicating the pattern of arm movements during exercise. Evidence highlighted that greater assistance was given to patients with more limited exercise capacity. This work was only able to demonstrate the principle of assisting single-limb exercises using a 2-D based technique.

6.1.2. MIT-MANUS

To find out whether exercise therapy influences plasticity and recovery of the brain following a stroke, a tool is needed to control the amount of therapy delivered to a patient, while objectively measuring the patient's performance where appropriate. To address these problems, a novel automatic system named MIT-MANUS (Fig. 22) was designed to move, guide, or perturb the movement of a patient's upper limb, while recording

motion-related quantities, e.g. position, velocity, or forces applied [99]. Comparing robot-assisted treatment with standard sensorimotor treatment, Fasoli et al. found a significant reduction in motor impairment in the robot-assisted group [52]. Ferraro et al. also reported similar improvements after a 3-month trial [53]. However, it was also stated that the biological basis of recovery, and individual patients' needs, should be further studied in order to improve the performance of the system under different circumstances. These findings were also supported in ref. [98].

6.1.3. Taylor and improved systems

Taylor [153] described an initial investigation in which a simple two-DOF arm support was built to allow movements of the shoulder and elbow in a horizontal plane. Based on this simple device, he then suggested a five-DOF exoskeletal system, to allow the activities of daily living (ADL) to be performed in a natural way. The design was validated by tests showing that the configuration interfaces properly with the human arm, making the addition of goniometric measurement sensors for the identification of arm position and pose trivial. Another good example was provided in ref. [130], where a device was designed to assist elbow movement. This elbow exerciser was strapped to a lever that rotated in a horizontal plane. A servomotor driven through a current amplifier drove the lever, and a potentiometer indicated the position of the motor. The position of the lever was obtained using a semi-circular array of light-emitting diodes (LEDs) around the lever. However, this system required a physiotherapist to activate the arm movement, using a force handle to measure the forces applied. To deal effectively with the problems faced by individuals with spinal cord injuries, Harwin and Rahman [65] explored the design of head-controlled force-reflecting master-slave telemanipulators for rehabilitation applications.
A test-bed power-assisted orthosis consisted of a six-DOF master, with its end effector replaced by a six-axis force/torque sensor. A splint assembly was mounted on the force/torque sensor and supported the person's arm [145]. Using a similar technique, Chen et al. [32] provided a comprehensive justification for their proposal and testing protocols.

6.1.4. MIME

Burgar et al. [24,105] summarised systems for post-stroke therapy developed at the Department of Veterans Affairs Palo Alto, in collaboration with Stanford University. The original principle had been established with two- or three-DOF elbow/forearm manipulators. Amongst these systems, MIME was the most attractive, due to its ability to fully support a limb during 3-D movement and its self-guided modes of therapy (see Fig. 23). Subjects were seated in a wheelchair close to a height-adjustable table. A PUMA-560 manipulator was mounted beside the table, and was attached to the limb via a wrist-forearm orthosis (splint) and a six-axis force transducer. Shor et al. [138] also investigated the effects of MIME on pain and passive ranges of movement, finding no negative impact of MIME on joint passive range of movement, or pain in the

Fig. 22. The MANUS system in MIT [99].


Fig. 23. The MIME system [24].

paretic upper limb. The disadvantage of this system is that it does not allow a subject to move his/her body freely.

6.1.5. ARM Guide

A rehabilitator named the ARM Guide [127] was presented to diagnose and treat arm movement impairment following stroke and other brain injuries. Vital motor impairments, such as abnormal tone, lack of coordination, and weakness, were evaluated. Pre-clinical results showed that this therapy produced quantifiable benefits in a chronic hemiparetic arm. In the design, the subject's forearm was strapped to a specially designed splint, which slid along a linear constraint. A motor drove a chain drive attached to the splint, and an optical encoder mounted on the motor indicated the arm position. The forces produced by the arm were measured by a six-axis load cell, located between the splint and the linear constraint. Although it achieved considerable success, the system requires further development for efficacy and practicality.

6.1.6. Others

Engelberger introduced rehabilitation applications for the HelpMate robot by Pyxis Co., San Diego, US [47]. The Handy 1 robot was first created in 1987 as a research project at Keele University, and now supports make-up, shaving, and painting operations [155]. OxIM (Oxford, UK) developed the RT-series robots for rehabilitation applications [25].

6.2. Haptic interface techniques

Haptic interfaces are a type of robot designed to interact with a human being via touch, normally through kinaesthetic and cutaneous channels. Haptic interface techniques are becoming an important area for assistive technologies; for example, they provide a natural interface for people with visual impairment, and a means to aid target reaching for post-stroke patients. The technique is potentially useful in home-based environments, due to its reliable performance, e.g. refs. [20,150]. Amirabdollahian et al.
[3] proposed the use of a haptic device (the Haptic Master by Fokker Control Systems) for errorless learning techniques and intensive rehabilitation treatment in post-stroke patients. This device can teach correct movement patterns, as well as correcting and aiding in

achieving point-to-point movements using virtual, augmented, and real environments. The significant contribution of this proposal was the implementation of a model that minimised jerk during movement. Allin et al. [2] described their preliminary work on the use of a virtual environment to derive just noticeable differences (JNDs) for force. A JND is a measure of the minimum difference between two stimuli necessary for the difference to be reliably perceived; stroke patients normally show significantly increased JNDs. Their experimental results indicated that visual feedback distortions in a virtual environment can be created to encourage increases in force production of up to 10%. This threshold can help discriminate stroke patients from healthy groups, and predict the outcome of rehabilitation. To improve the performance of haptic interfaces, e.g. their stability and flexibility, researchers have developed successful prototype systems, e.g. refs. [3,66]. Hawkins et al. [66] set up experimental apparatus consisting of a frame with one chair, a wrist connection mechanism, two embedded computers, a large computer screen, an exercise table, a keypad, and a 3-DOF haptic interface arm. A user was seated on the chair, with their wrist connected to the haptic interface via the wrist connection mechanism. The device's end-effector consisted of a gimbal providing an extra three DOF to facilitate wrist movement.

6.3. Other techniques

6.3.1. Gait rehabilitation

Gait rehabilitation for post-stroke patients challenges researchers, due to the demands of trunk balance and proper force distribution. Training and the functional recovery of the lower limbs are attracting more and more interest. The Jet Propulsion Laboratory of NASA and UCLA have designed a robotic stepper that uses a pair of robotic arms, resembling knee braces, to guide a patient's legs. Attached sensors can measure a patient's force, speed, acceleration and resistance [166].
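The minimum-jerk model exploited in the haptic work of Section 6.2 has a well-known closed form (the Flash-Hogan fifth-order polynomial): for a point-to-point reach of duration T, position depends only on normalized time t/T. A sketch with illustrative reach distance and duration:

```python
def minimum_jerk(x0, xf, t, T):
    """Minimum-jerk position at time t for a reach from x0 to xf over
    duration T: x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    with tau = t / T. Velocity and acceleration vanish at both ends."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Midpoint in time of a 0.4 m reach lasting 1 s:
print(minimum_jerk(0.0, 0.4, 0.5, 1.0))  # -> 0.2 (midpoint, by symmetry)
```

A therapy device can use the deviation of a patient's measured trajectory from this template as a smoothness measure, or as the reference for corrective guiding forces.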
A virtual reality (VR) walking simulator was developed to allow individuals post-stroke to practise ambulation in a variety of virtual environments. This system, built around Stewart platforms, was based on the original design of the Rutgers Ankle six-DOF pneumatic robot; a user strapped into a weightless frame stood on two such devices placed side-by-side [129]. Colombo et al. [36] built a robotic orthosis to move the legs of spinal cord injury patients during rehabilitation training on a treadmill. Van der Loos et al. [158] used a servomotor-controlled bicycle to study lower limb biomechanics in terms of resistance. Hesse and Uhlenbrock [67] introduced a newly developed gait trainer, allowing wheelchair-bound subjects to perform repetitive practice of gait-like movement without overstressing therapists. It consisted of two footplates positioned on two bars, two rockers, and two cranks that provided propulsion. The system generated a different movement at the tip and rear of the footplate during swing, while the crank propulsion was controlled via a planetary gear system, which provided a ratio of 60/


40% between the stance and swing phases. Two cases of non-ambulatory patients who regained their walking ability after 4 weeks of daily training on the gait trainer were positively reported. Reviews of robot-guided rehabilitation systems are given in refs. [39,40].

7. Discussion

Existing rehabilitation and motion tracking systems have been comprehensively summarised in this paper, and the advantages and weaknesses of these systems presented. All of these rehabilitation or tracking systems require professionals to perform calibration and sampling; without their help, none of these systems would work properly. These systems do not provide patient-oriented therapy, and hence cannot yet be used directly in home-based environments. The second challenge is cost. Designers have tended to build complicated tracking systems in order to satisfy multiple purposes, which imposes expensive components on the resulting systems. Some of these systems also rely on specifically designed sensors, which limits their further development and broad application. The application or use of a device is very important. Most people who have suffered a stroke have significant loss of function in the affected limbs, and therefore sensor systems need careful consideration. It has been suggested that devices should be as easy as possible to apply and handle. Existing rehabilitation systems occupy large spaces; as a consequence, people with limited accommodation space are prevented from using these systems to regain their mobility. A telemetric and compact system which overcomes the space problem should instead be proposed. Poor performance in human-computer interface (HCI) design has been recognised in both rehabilitation and motion tracking systems; unfortunately, this issue is rarely discussed in the literature. From a practical point of view, an attractive interface may encourage participants to carry out device manipulation. Real-time feedback has not yet been achieved.
For example, some patients with a visual impairment may require an auditory signal, while others with hearing problems would need visual feedback. The idea is that a simple system is required to indicate correct or incorrect movements; such a system should allow a patient to adjust his/her movements immediately. In summary, when one considers a recovery system, six issues need to be taken into account: cost, size, weight, function, operation, and automation.

8. Conclusions

This paper has reviewed the development of human motion tracking systems and their application in stroke rehabilitation. State-of-the-art tracking techniques have been classified as non-visual, visual marker based, markerless visual, and robot-aided systems, according to sensor location. In each subgroup, we have described commercialized and non-commercialized

platforms, taking into account technical feasibility, workload, size, and cost. In particular, we have focused on markerless visual systems, as they offer positive features such as reduced restriction, robust performance, and low cost. Evidence shows that existing motion tracking systems are, to some extent, able to support various rehabilitation settings and training delivery. These systems could therefore possibly be used to replace face-to-face therapy on-site. Unfortunately, evidence also reveals that the complicated physiological nature of human motion leads to unsolved problems beyond previous tracking systems' functional capability, e.g. occlusion and drift. There is therefore a need to develop insight into the characteristics of human movement. Finally, it was highlighted that a successful design has to address all of these factors: real-time operation, wireless properties, easy manipulation, correctness of data, a friendly graphical interface, and portability.

Acknowledgments

This work was supported in part by the UK EPSRC, under Grant GR/S29089/01. We are grateful for the provision of partial literature sources by Miss Nargis Islam at the University of Bath, and Dr. Huiru Zheng at the University of Ulster. The authors also acknowledge Dr. Liam Cragg and Ms. Sharon Cording for proofreading this manuscript.

References
[1] J. Aggarwal, Q. Cai, Human motion analysis: a review, Comput. Vis. Image Understand.: CVIU 73 (3) (1999) 428–440.
[2] S. Allin, Y. Matsuoka, R. Klatzky, Measuring just noticeable difference for haptic force feedback: a tool for rehabilitation, in: Proceedings of IEEE Symposium on Haptic Interfaces for Virtual and Teleoperator Systems, March 2002.
[3] F. Amirabdollahian, R. Loureiro, W. Harwin, A case study on the effects of a haptic interface on human arm movements with implications for rehabilitation robotics, in: Proceedings of the First Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT), 2002.
[4] C. Anderson, C. Mhurchu, S. Robenach, M. Clark, C. Spencer, A. Winsor, Home or hospital for stroke rehabilitation? Results of a randomized controlled trial. II: cost minimization analysis at 6 months, Stroke 31 (2000) 1032–1037.
[5] R. Bajcsy, J. Smith, Exploratory research in telerobotic control using ATM networks, Tech. Rep., Computer and Information Science, University of Pennsylvania, Philadelphia, 2002.
[6] C. Barron, I. Kakadiaris, A convex penalty method for optical human motion tracking, in: First ACM SIGMM International Workshop on Video Surveillance, 2003.
[7] A. Baumberg, D. Hogg, An efficient method for contour tracking using active shape models, in: Proceedings of IEEE Workshop on Motion of Non-Rigid and Articulated Objects, 1994.
[8] A. Baumberg, D. Hogg, Generating spatiotemporal models from examples, Image Vis. Comput. 14 (1996) 525–532.
[9] T. Beth, I. Boesnach, M. Haimerl, J. Moldenhauer, K. Bos, V. Wank, Characteristics in Human Motion: From Acquisition to Analysis, Humanoids, Germany, 2003.
[10] M. Betke, J. Gips, P. Fleming, The camera mouse: visual tracking of body features to provide computer access for people with severe disabilities, IEEE Trans. Neural Syst. Rehab. Eng. 10 (2002) 1–10.

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118 [11] A. Bharatkumar, K. Daigle, M. Pandy, Q. Cai, J. Aggarwal, Lower limb kinematics of human walking with the medial axis transformation, in: Proceedings of IEEE Workshop on Non-Rigid Motion, 1994. [12] D. Bhatnagar, Position trackers for head mounted display systems, Tech. rep., TR93010, Department of Computer Sciences, University of North Carolina, 1993. [13] M. Black, Y. Yaccob, A. Jepson, D. Fleet, Learning parameterized models of image motion, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1997. [14] A. Blake, R. Curwen, A. Zisserman, A framework for spatio-temporal control in the tracking of visual contour, Int. J. Comput. Vis. (1993) 127 145. [15] M. Boonstra, R. van der Slikke, N. Keijsers, R. van Lummel, M. de Waal Malejt, N. Verdonschot, The accuracy of measuring the kinematics of rising from a chair with accelerometers and gyroscopes, J. Biomech. 39 (2006) 354358. [16] E. Borovikov, L. Davis, A distributed system for real-time volume reconstruction, in: Proceedings of International Workshop on Computer Architecture for Machine Perception, September 2000. [17] C. Bouten, K. Koekkoek, M. Verduim, R. Kodde, J. Janssen, A triaxial accelerometer and portable processing unit for the assessment daily physical activity, IEEE Trans. Biomed. Eng. 44 (3) (1997) 136147. [18] R. Bowden, T. Mitchell, M. Sarhadi, Reconstructing 3D pose and motion from a single camera view, in: British Machine Vision Conference, 1998. [19] R. Brady, M. Pavol, T. Owing, M. Grabiner, Foot displacement but not velocity predicts the outcome of a slip induced in young subjects while walking, J. Biomech. 33 (2000) 803808. [20] J. Broeren, K. Sunnerhagen, M. Rydmark, A kinematic analysis of haptic handled stylus in a virtual environment: a study in healthy subjects, J. NeuroEng. Rehab. 4 (2007) 13. [21] T. Brosnihan, A. Pisano, R. 
Howe, Surface micromachined angular accelerometer with force feedback, in: Digest ASME International Conference and Expo, 1995. [22] S. Bryson, Virtual reality hardware, in: Implementating Virtual Reality, ACM SIGGRAPH 93, 1993. [23] G. Burdea, V. Popescu, V. Hentz, K. Colbert, Virtual reality-based orthopedic telerehabilitation, IEEE Trans. Rehab. Eng. 8 (2000) 430 432. [24] C. Burgar, P. Lum, P. Shor, H. Machiel Van der Loos, Development of robots for rehabilitation therapy: the palo alto va/stanford experience, J. Rehab. Res. Dev. 37 (6) (2000) 663673. [25] M. Busnel, R. Cammoun, F. Coulon-Lauture, J.-M. Detriche, G. Claire, B. Lesigne, The robotized workstation master for users with tetraplegia: Description and evaluation, J. Rehab. Res. Dev. 36 (3) (1999) 217230. [26] Q. Cai, J.K. Aggarwal, Tracking human motion using multiple cameras, in: International Conference on Pattern Recognition, 1996. [27] M. Caruso, L. Withanawasam, Vehicle detection and compass applications using AMR magnetic sensors, in: Sensor Expo Proceedings, May 1999. [28] M. Caruso, Applications of magnetic sensors for low cost compass systems, in: Proceedings of IEEE on Position Location and Navigation Symposium, San Diego, 2000. [29] J. Cauraugh, S. Kim, Two coupled motor recovery protocols are better than one electromyogram-triggered neuromuscular stimulation and bilateral movements, Stroke 33 (2002) 15891594. [30] C. Chang, R. Ansari, A. Khokhar, Cyclic articulated human motion tracking by sequential ancestral simulation, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2003. [31] I. Charlton, P. Tate, P. Smyth, L. Roren, Repeatability of an optimised lower body model, Gait Post. 20 (2004) 213221. [32] S. Chen, T. Rahman, W. Harwin, Performance statistics of a headoperated force-reecting rehabilitation robot system, IEEE Trans. Rehab. Eng. 6 (1998) 406414. [33] K. Cheung, T. Kanade, J. Bouguet, M. 
Holler, A real time system for robust 3D voxel reconstruction of human motions, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2000.

15

[34] J. Chung, N. Ohnishi, Cue circles: Image feature for measuring 3-D motion of articulated objects using sequential image pair, in: Proceedings of the Third International Conference on Face & Gesture Recognition, 1998. [35] B. Coifman, D. Beymer, P. McLauchlan, J. Malik, A real-time computer vision system for vehicle tracking and trafc surveillance, Transpot. Res. C 6 (1998) 271288. [36] G. Colombo, M. Joerg, R. Schreier, V. Dietz, Treadmill training of paraplegic patients using a robotic orthosis, J. Rehab. Res. Dev. 37 (6) (2000) 693700. [37] M. Cordea, E. Petriu, N. Georganas, D. Petriu, T. Whalen, Real-time 21/ 2d head pose recovery for model-based video-coding, in: IEEE Instrumentation and Measurement Technology Conference, 2000. [38] J. Cozens, Robotic assistance of an active upper limb exercise in neurologically impaired patients, IEEE Trans. Rehab. Eng. 7 (2) (1999) 254256. [39] J. Dallaway, R. Jackson, P. Timmers, Rehabilitation robotics in Europe, IEEE Trans. Rehab. Eng. 3 (1995) 3545. [40] K. Dautenhahn, I. Werry, Issues of robot-human interaction dynamics in the rehabilitation of children with autism, in: Proceedings of FROM ANIMALS TO ANIMATS, The Sixth International Conference on the Simulation of Adaptive Behavior (SAB2000), 2000. [41] J. Davis, Hierarchical motion history images for recognizing human motion, in: IEEE Workshop on Detection and Recognition of Events in Video, 2001. [42] R.I. Davis, S. Ounpuu, D. Tyburski, J. Gage, A gait data collection and reduction technique, Hum. Mov. Sci. 10 (1991) 575587. [43] E. Delahunt, K. Monaghan, B. Cauleld, Ankle function during hopping in subjects with functional instability of the ankle joint, Scand. J. Med. Sci. Sports (2007). [44] Q. Delamarre, O. Faugeras, 3d articulated models and multi-view tracking with physical forces, Comput. Vis. Image Understand. 81 (2001) 328357. [45] Q. 
Delamarre, F.O., 3d articulated models and multi-view tracking with silhouettes, in: Proceedings of International Conference on Computer Vision, 1999. [46] S. Dockstader, M. Berg, A. Tekalp, Stochastic kinematic modeling and feature extraction for gait analysis, IEEE Trans. Image Process. 12 (8) (2003) 962976. [47] G. Engelberger, Helpmate, A service robot with experience, Ind. Robot. Int. J. 25 (2) (1998) 101104. [48] F. Escolano, M. Cazorla, D. Gallardo, R. Rizo, Deformable templates for tracking and analysis of intravascular ultrasound sequences, in: Proceedings of First International Workshop of Energy Minimization Methods in IEEE Conference on Computer Vision and Pattern Recognition, Venecia, Mayo 1997. [49] A. Esquenazi, N. Mayer, Instrumented assessment of muscle overactivity and spasticity with dynamic polyelectromyographic and motion analysis for treatment planning, Am. J. Phys. Med. Rehab. 83 (2004) S19S29, Supplement:. [50] R. Fablet, M.J. Black, Automatic detection and tracking of human motion with a view-based representation, in: European Conference on Computer Vision, 2002. [51] T. Fan, G. Medioni, G. Nevatia, Recognizing 3-d objects using surface descriptions, IEEE Trans. Pattern Recogn. Mach. Intell. 11 (1989) 1140 1157. [52] S. Fasoli, H. Krebs, J. Stein, W. Frontera, N. Hogan, Effects of robotic therapy on motor impairment and recovery in chronic stroke, Arch. Phys. Med. Rehab. 84 (2003) 477482. [53] M. Ferraro, J. Demaio, J. Krol, C. Trudell, L. Edelstein, P. Christos, J. England, S. Dasoli, M. Aisen, H. Krebs, N. Hogan, B. Volpe, Assessing the motor status score: a scale for the evaluation of upper limb motor outcomes in patients after stroke, Neurorehab. Neural Rep. 16 (2002) 301307. [54] H. Feys, W. De Weerdt, B. Selz, C. Steck, R. Spichiger, L. Vereeck, K. Putman, G. 
Van Hoydonck, Effect of a therapeutic intervention for the hemiplegic upper limb in the acute phase after stroke: a single-blind, randomized, controlled multicenter trial, Stroke 29 (1998) 785792.

16

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118 [88] M. Ivana, M. Trivedi, E. Hunter, P. Cosman, Human body model acquisition and tracking using voxel data, Int. J. Comp. Vis. 53 (3) (2003) 199223. [89] D. Jack, R. Boian, A. Merians, G. Tremaine, M. Burdea, S. Adamovich, M. Recce, H. Poizner, Virtual reality-enhanced stroke rehabilitation, IEEE Trans. Neural Syst. Rehab. Eng. 9 (2001) 308318. [90] O. Javed, S. Khan, Z. Rasheed, M. Shah, Camera handoff: tracking in multiple uncalibrated stationary cameras, in: IEEE Workshop on Human Motion, HUMO-2000, 2000. [91] G. Johansson, Visual motion perception, Sci. Am. 232 (1975) 7688. [92] E. Jovanov, A. Milenkovic, C. Otto, P. de Groen, B. Johnson, S. Warren, G. Taibi, A wban system for ambulatory monitoring of physical activity and health status: applications and challenges, in: Proceedings the 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2005. [93] S. Ju, M. Black, Y. Yaccob, Cardboard people: a parameterised model of articulated image motion, in: Proceedings of IEEE International Conference Automatic Face and Gesture Recognition, 1996. [94] I. Karaulova, P. Hall, A. Marshall, A hierarchical model of dynamics for tracking people with a single video camera, in: British Machine Vision Conference, 2000. [95] P. Kejonen, K. Kauanen, H. Vanharanta, The relationship between anthropometric factors and body-balancing movements in postural balance, Arch. Phys. Med. & Rehab. 84 (2003) 1722. [96] R. Kizony, N. Katz, P. Weiss, Adapting an immersive virtual reality for rehabilitation, J. Vis. Comput. Anim. 14 (2003) 261268. [97] A. Kourepenis, A. Petrovich, M. Meinberg, Development of a monotithic quartz resonator accelerometer, in: Proceedings of 14th Biennial Guidance Test Symposium, Hollman AFB, NM, 1989. [98] H. Krebs, N. Hogan, M. Aisen, B. Volpe, Robot-aided neurorehabilitation, IEEE Trans. Rehab. Eng. 6 (1) (1998) 7587. [99] H. Krebs, B. Volpe, M. 
Aisen, N. Hogan, Increasing productivity and quality of care: robot-aided nero-rehabilitation, J. Rehab. Res. Dev. 37 (6) (2000) 639652. [100] J. Lenz, A review of magnetic sensors, Proc. IEEE 78 (1990) 973989. [101] F. Lewis, Wireless sensor networks, in: D.J. Cook, S.K. Das (Eds.), Smart Environments: Technologies, Protocols, and Applications, John Wiley, New York, 2004. [102] D. Liebowitz, S. Carlsson, Uncalibrated motion capture exlpoting articulated structure constraints, in: International Conference on Computer Vision, 2001. [103] J. Lobo, J. Dias, Vision and inertial sensor cooperation using gravity as a vertical reference, IEEE Trans. Pattern Anal. Mach. Intell. 25 (12) (2003) 15971608. [104] H. Luinge, Inertial sensing of human movement, Ph.D. Thesis, Twente University Press, Netherlands, 2002. [105] P. Lum, D. Reinkensmeyer, R. Mahoney, W. Rymer, C. Burgar, Robotic devices for movement therapy after stroke: current status and challenges to clinical acceptance, Top Stroke Rehab. 8 (4) (2002) 4053. [106] C. Lu, N. Ferrier, A digital video system for the automated measurement of repetitive joint motion, IEEE Trans. Info. Tech. Biomed. (2004) 399 404. [107] A. Malkawi, R. Scrinivasan, Building performance visualiztion using augmented reality, in: Proceedings of 14th International Conference on Computer Graphics, Moscow, Russia, Se 6-10, 2004. [108] D. Marr, K. Nishihara, Representation and recognition of the spatial organization of three dimensional structure, in: Proceedings of the Royal Society of London 200, 1978, pp. 269294. [109] C. Mavroidis, J. Nikitczuk, G. Weinberg, B. Danaher, K. Jensen, J. Pelletier, P. Prugnarola, R. Stuart, R. Arango, M. Leahey, R. Pavone, A. Provo, D. Yasevac, Smart portable rehabilitation devices, J. NeuroEng. Rehab. 2 (2005) 18. [110] A. Mihailidis, B. Carmichael, J. Boger, The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home, IEEE Trans. Info. Tech. Biomed. 
8 (2004) 238247. [111] T. Molet, R. Boulic, D. Thalmann, A real time anatomical convertor for human motion capture, in: Proceedings of Seventh Eurographics International Workshop on Animation and Simulation, 1996.

[55] E. Foxlin, Y. Altshuler, L. Naimark, M. Harrington, Flighttracker: a novel optical/inertial tracker for cockpit enhanced vision, in: Proceedings of International Symposium on Augmented Reality, November 25, 2004. [56] W. Freeman, K. Tanaka, J. Ohta, K. Kyuma, Computer vision for computer games, in: Proceedings of IEEE International Conference on Automatic Face and Gesture Recognition, 1996. [57] R. Frezza, A. Chiuso, Learning and exploiting invariants for multi-target tracking and data association, in: Submission for 44th IEEE Conference on Decision and Control and European Control Conference, 2005. [58] H. Fujiyoshi, A. Lipton, Real-time human motion analysis by image skeletonisation, in: Proceedings of the Workshop on Application of Computer Vision, 1998. [59] A. Galata, N. Johnson, D. Hogg, Learning behaviour models of human activities, in: British Machine Vision Conference, 1999. [60] D. Gavrila, The visual analysis of human movement: a survey, Comput. Vis. Image Understand.: CVIU 73 (1) (1999) 8298. [61] J. Geddes, M. Chamberlain, Home-based rehabilitation for people with stroke: a comparative study of six community services providing coordinated, multidisciplinary treatment, Clin. Rehab. 15 (2001) 589599. [62] G. Gennari, A. Chiuso, F. Cuzzolin, R. Frezza, Integrating shape and dynamic probabilistic models for data association and tracking, in: IEEE Conference on Decision and Control, 2002. [63] G. Gennari, A. Chiuso, F. Cuzzolin, R. Frezza, Integrating shape constraint in data association lter, in: IEEE Conference on Decision and Control, 2004. [64] L. Goncalves, E. Bernardo, E. Ursella, P. Perona, Monocular tracking of the human arm in 3d, in: International Conference on Computer Vision, 1995. [65] W. Harwin, T. Rahman, Analysis of force-reecting telerobotics systems for rebalitation applications, in: Proceedings of the First European Conference on Disability, Virtual Reality and Associated Technologies, 1996. [66] P. Hawkins, J. Smith, S. Alcock, M. 
Topping, W. Harwin, R. Loureiro, F. Amirabdollahian, J. Brooker, S. Coote, E. Stokes, G. Johnson, P. Mark, C. Collin, B. Driessen, Gentle/s project: design and ergonomics of a stroke rehabilitation system, in: Proceedings of the First Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT), 2002. [67] S. Hesse, D. Uhlenbrock, A mechanized gait trainer for restoration of gait, J. Rehab. Res. Dev. 37 (6) (2000) 701708. [68] M. Hillman, Rehabilitation robotics from past to presenta historical perspective, in: Proceedings of the International Conference on Rehabilitation Robotics, April 2003. [69] M. Holden, T. Dyar, Virtual environment training: a new tool for rehabilitation, Neurol. Rep. 26 (2002) 6271. [70] T. Horprasert, I. Haritaoglu, D. Harwood, L. Davies, C. Wren, A. Pentland, Real-time 3d motion capture, in: PUI Workshop, 1998. [71] N. Howe, M. Leventon, W. Freeman, Bayesian reconstruction of 3d human motion from single-camera video, in: NIPS, 1999. [72] http://bmj.bmjjournals.com/cgi/reprint/325/. [73] http://www.ascensiontech.com/products/motionstar.pdf [74] http://www.charndyn.com/ [75] http://www.isense.com/products/prec/is600/ [76] http://www.kittytech.com/about/kitty.html [77] http://www.korins.com/m/ent/atoc.htm [78] http://www.microstrain.com/ [79] http://www.ndigital.com/polaris.php [80] http://www.polhemus.com/ [81] http://www.polyu.edu.hk/cga/faq/ [82] http://www.qualisys.se/ [83] http://www.vicon.com/ [84] http://www.vrealities.com/cyber.html [85] http://www.xsens.com/ [86] E. Huber, 3d real-time gesture recognition using proximity space, in: Proceedings of Interantional Conference on Pattern Recognition, 1996. [87] W. Hu, T. Tan, L. Wang, S. Maybeck, A survey on visual surveillance of object motion and behaviors, IEEE Trans. Syst. Man and Cyber. C: Appl. Rev. 34 (2004) 334352.

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118 [112] T. Molet, Z. Huang, R. Boulic, D. Thalmann, A real time anatomical convertor for human motion capture, in: Proceedings of Computer Animation Conference, 1997. [113] T. Mukai, N. Ohnishi, The recovery of object shape and camera motion using a sensing system with a video camera and a gyro sensor, in: Proceedings Seventh International. Conference Computer Vision, Sep, 1999. [114] B. Naja, K. Aminian, F. Leow, Y. Blanc, P. Robert, Measurment of stand-sit and sit-stand transitions using a miniature gyroscope and its application in fall risk evaluation in the elderly, IEEE Trans. Biomed. Eng. 49 (2002) 843851. [115] E. Nebot, H. Durrant-Whyte, Inertial calibration and alignment of low cost inertial navigation units for land vehicle applications, J. Robot. Syst. 16 (2) (1999) 8192. [116] S. Niyogi, E. Adelson, Analyzing and recognizing walking gures in xyt, in: Proceedings of the Conference on Computer Vision and Pattern Recognition, 1994. [117] F. Panerai, G. Metta, G. Sandini, Visuo-inertial stabilization in pacevariant binocular systems, Robot. Autonom. Syst. 30 (12) (2000) 195 214. [118] S. Patrick, A. Denington, M. Gauthier, D. Gillard, A. Prochazka, Quantication of the updrs rigidity scale, IEEE Trans. Neural Syst. Rehab. Eng. 9 (2001) 3141. [119] C. Patten, F. Horak, D. Krebs, Head and body center of gravity control strategies: adaptions following vestivular rehabilitation, Acta Otolaryngol. 123 (2003) 3240. [120] X. Pennec, P. Cachier, N. Ayache, Tracking brain deformations in time sequences of 3D US images, in: Proceedings of International Conference Information Processing in Medical Imaging, 2001. [121] R. Plankers, Articulated soft objects for video-based body modeling, in: International Conference on Computer Vision, 2001. [122] R. Polana, R. Nelson, Low level recognition of human motion, in: Proceedings IEEE Workshop Motion of Non-rigid and Articulated Objects, 1994. [123] R. 
Polana, R. Nelson, Low level recognition of human motion, in: Proceedings of Workshop on Non-rigid Motion, 1994. [124] R. Qian, T. Huang, Estimating articulated motion by decomposition, in: Cappellini (Ed.), Time-Varying Image Processing and Moving Object Recognition, vol. 3, 1994. [125] D. Rand, M. Katz, N. Shahar, R. Kizony, P. Weiss, The virtual mall: development of a functional virtual environment for stroke rehabilitation, in: The 55th Annual Conference of the Israel Associtation of Physical and Rehabilitation Medicine, 2004. [126] J. Rehg, T. Kanade, Model-based tracking of self-occluding articulated objects, in: International Conference on Computer Vision, 1995. [127] D. Reinkensmeyer, L. Kahn, M. Averbuch, A. McKenna-Cole, B. Schmit, W. Rymer, Understanding, treating arm movement impairment after chronic brain injury: progress with the arm guide, J. Rehab. Res. Dev. 37 (6) (2000) 653662. [128] L. Ren, G. Shakhnarovich, J. Hodgins, H. Pster, P. Viola, Learning silhouette features for control of human motion, ACM Trans. Graph. 24 (4) (2005) 13031331. [129] R.F. Boian, H. Kourtev, K.M. Erickson, J.E. Deutsch, J.A. Lewis, G.C. Burdea, Dual stewart-platform gait rehabilitation system for individuals post-stroke, in: Proceedings of the International Workshop on Virtual Rehabilitation, 2003, p. 92. [130] R. Richardson, M. Austin, A. Plummer, Development of a physiotherapy robot, in: Proceedings of the International Biomechanics Workshop, Enschede, 1999. [131] M. Ringer, J. Lasenby, Modelling and tracking articulated motion from multiple camera views, in: British Machine Vision Conference, 2000. [132] T. Robert, S. McKenna, I. Ricketts, Adaptive learning of statistical appearance models for 3d human tracking, in: British Machine Vision Conference, 2002. [133] A. Roche, X. Pennec, G. Malandain, N. Ayache, Rigid registration of 3D ultrasound with MR images: a new approach combining intensity and gradient information, IEEE Trans. Med. Imag. 20 (10) (2001) 1038 1049.

17

[134] K. Rohr, Toward model-based recognition of human movements in image sequences, CVGIP: Image Understand. 59 (1994) 94115. [135] R. Ronfard, C. Schmid, B. Triggs, Learning to parse pictures of people, in: European Conference on Computer Vision, LNCS 2553, vol. 4, 2002. [136] P. Scheeper, J. Gullv, L. Kofoed, A piezoelectric triaxial accelerometer, J. Micromech. Microeng. 6 (1996) 131133. [137] B. Schiele, Vodel-free tracking of cars and people based on color regions, in: Proceedings IEEE International Workshop on Performance Evaluation of Tracking and Surveilance, 2000. [138] P. Shor, P. Lum, C. Burgar, H. Van der Loos, M. Majmundar, R. Yap, The effect of robot-aided therapy on upper extremity joint range of motion and pain, in: Proceedings of International Conference on Rehabilitation Robots, 2001. [139] H. Sidenbladh, M.J. Black, D. Fleet, Stochastic tracking of 3d human gures using 2d image motion, in: European Conference on Computer Vision, 2000. [140] H. Sidenbladh, M.J. Black, L. Sigal, Implicit probabilistic models of human motion for synthesis and tracking, in: European Conference on Computer Vision, 2002. [141] L. Simone, D. Kamper, Design considerations for a wearable monitor to measure nger posture, J. NeuroEng. Rehab. 2 (2005) 5. [142] C. Sminchisescu, B. Triggs, Covariance scaled sampling for monocular 3d body tracking, in: Proceedings of the Conference on Computer Vision and Pattern Recognition, 2001. [143] J. Speich, J. Rosen, Medical robotics, in: G. Wnek, G. Bowlin (Eds.), Encyclopedia of Biomaterials and Biomedical Engineering, Marcel Dekker, Inc., 2004, pp. 983993. [144] B. Steele, B. Belza, K. Cain, C. Warms, J. Coppersmith, J. Howard, The relationship between anthropometric factors and body-balancing movements in postural balance, J. Rehab. Res. Dev. 40 (2003) 4558. [145] S. Stroud, A force controlled external powered arm orthosis, Masters Thesis. [146] D. Sturman, D. Zeltzer, A survey of glove-based input, IEEE Comput. Graph. Appl. 
(1994) 3039. [147] O. Suess, S. Suess, S. Mularski, B. Kuhn, T. Picht, S. Hammersen, R. Stendel, M. Brock, T. Kombos, Study on the clinical application of pulsed dc magnetic technology for tracking of intraoperative head motion during frameless stereotaxy, Head Face Medicine 2 (2006) 10. [148] J. Sullivan, S. Carlsson, Recognizing and tracking human action, in: European Conference on Computer Vision, 2002. [149] J. Sullivan, M. Eriksson, S. Carlsson, D. Liebowitz, Automating multiview tracking and reconstruction of human motion, in: European Conference on Computer Vision, 2002. [150] H. Sveistrup, Motor rehabilitation using virtual reality, J. NeuroEng. Rehab. 1 (2004) 10. [151] Y. Tao, H. Hu, H. Zhou, Integration of vision and inertial sensors for 3d arm motion tracking in home-based rehabilitation, Int. J. Robot. Res. 26 (2007) 607624. [152] Y. Tao, H. Hu, Building a visual tracking system for home-based rehabilitation, in: Proceedings of the 9th Chinese Automation and Computing Society Conference In the UK, 2003. [153] A. Taylor, Design of an exoskeletal arm for use in long term stroke rehabilitation, in: International Conference on Rehabilitation Robotics97, University of Bath, 1997. [154] C. Theobalt, M. Magnor, P. Schueler, H. Seidel, Combining 2d feature tracking and volume reconstruction for online video-based human motion capture, in: Proceedings of Pacic Graphics 2002, 2002. [155] M. Topping, M. Mokhtari, Handy 1, A Robot Aid to Independence for Severely Disabled People, IOS, Netherland, 2001, pp. 142147. [156] P. Tresader, I. Reid, Uncalibrated and unsynchronized human motion capture: a stereo factorization approach, in: IEEE Conference on Computer Vision and Pattern Recognition, 2004. [157] Y. Uno, M. Kawato, R. Suzuki, Formation and control of optimal trajectory in human multijoint arm movement: Minimum torque-change model, Biologocal Cybernetics 61 (1989) 89101. [158] M. Van der Loos, S. Kautz, D. Schwandt, J. Anderson, G. Chen, D. 
Bevly, Servomotor-controlled bicycle ergometer design for studies in human biomechanics, in: ICIRS, 2002.

18

H. Zhou, H. Hu / Biomedical Signal Processing and Control 3 (2008) 118 [173] D. Zetu, P. Banerjee, D. Thompson, Extended-range hybrid tracker and applications to motion and camera tracking in manufacturing systems, Tech. rep., Department of Mechanical Engineering, University of Illinois at Chicago 1999. [174] L. Zhang, B. Abreu, G. Seale, B. Masel, C. Christiansen, K. Ottenbacher, Virtual reality environment for evaluation of a daily living skill in brain injury rehabilitation: reliability and validity, Arch. Phys. Med. Rehab. (2003) 11181124. [175] Y. Zhang, H. Hu, H. Zhou, Study on adaptive kalman lter algorithms in human movement tracking, in: Proceedings of the International Conference on Information Acquisition, JuneJuly 2005. [176] J. Zhao, N. Badler, Inverse kinematics positioning using nonlinear programming for highly articulated gures, ACM Trans. Graph. 13 (4) (1994) 313336. [177] H. Zhou, H. Hu, N. Harris, J. Hammerton, Applications of wearable inertial sensors in estimation of upper limb movements, Biomed. Signal Process. Control 1 (2006) 2232. [178] H. Zhou, H. Hu, N. Harris, Wearable inertial sensors for arm motion tracking in home-based rehabilitation, in: Proceedings of Intelligent Autonomous Systems (IAS), Japan, 2006. [179] H. Zhou, H. Hu, A surveyhuman movement tracking and stroke rehabilitation, Tech. rep., CSM-420, Department of Computer Science, University of Essex, UK, 2004. [180] H. Zhou, H. Hu, Inertial motion tracking of human arm movements in stroke rehabilitation, in: Proceedings of IEEE International Conference on Mechatronics and Automation, Canada, 2005. [181] H. Zhou, H. Hu, Kinematic model aided inertial motion tracking of human upper limb, in: Proceedings of International Conference Info. Acqu, Hong Kong, 2005. [182] H. Zhou, H. Hu, Inertial sensors for motion detection of human upper limbs, Sens. Rev. 27 (2007) 151158. [183] H. Zhou, H. Hu, Upper limb motion estimation from inertial measurements, Int. J. Inform. Technol. 
12 (2007) 114. [184] T. Zimmerman, J. Lanier, Computer data entry and manipulation apparatus method, Patent Application 5,026,930 (1992).

[159] S. Wachter, H.-H. Nagel, Tracking persons in monocular image sequences, Comput. Vis. Image Understand. 74 (1999) 174192. [160] Z. Wang, T. Kiryu, N. Tamura, Personal customizing exercise with a wearable measurement and control unit, J. NeuroEng. Rehab. 2 (2005) 14. [161] P. Weiss, D. Rand, N. Katz, R. Kizony, Video capture virtual reality as a exible and effective rehabilitation tool, J. NeuroEng. Rehab. 1 (2004) 12. [162] G. Welch, E. Foxlin, Motion tracking survey, IEEE Comput. Graph. Appl. (2002) 2438. [163] S.-F. Wong, K.-Y.K. Wong, Reliable and fast human body tracking under information deciency, in: Proceedings of IEEE Intelligent Automation Conference, Hong Kong, 2003. [164] B. Wood, H. Zhang, A. Durrani, S.L.D.L.E. Glossop, N. Ranjan, F. Banovac, J. Borgert, S. Krueger, J. Kruecker, A. Viswanathan, K. Claery, Navigation with electromagnetic tracking for interventional radiology procedures: a feasibility study, J. Vasc. Interv. Radiol. 16 (2005) 493505. [165] C. Wren, A. Azarbayejani, T. Darrell, A. Pentland, Pnder: Real-time tracking of the human body, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 780785. [166] http://www.jpl.nasa.gov/releases/2000/stepper.html [167] H. Xie, G. Fedder, A cmos z-axis capacitive accelerometer with combnger sensing, Tech. rep., The Robotics Institute, Carnegie Mellon University, 2000. [168] Y. Yaccob, M. Black, Parameterized modeling and recognition of activities, in: International Conference on Computer Vision, 1998. [169] Y. Yacoob, L. Davies, Learned models for estimation of rigid and articulated human motion from stationary or moving camera, Int, J. Comput. Vis. 36 (1) (2000) 530. [170] Z. Yao, H. Li, Is a magnetic sensor capable of evaluating a vision-based face tracking system, in: Proceedings of FaceVideo04, 2004. [171] S. You, S. Jang, Y. Kim, M. Hallett, S. Ahn, Y. Kwon, J. Kim, M. 
Lee, Virtual reality-induced cortical reorganization and associated locomotor recovery in chronic stroke: an experimenter-blind randomized study, Stroke (2005) 11661171. [172] S. You, U. Neumann, R. Azuma, Hybrid inertial and vision tracking for augmented reality registration, in: Proceedings of IEEE Virtual Reality, Houston, TX, March 1999.

You might also like