
Robotics and Autonomous Systems 56 (2008) 385–395

www.elsevier.com/locate/robot

A visual 3D-tracking and positioning technique for stereotaxy with CT scanners

C. Doignon, B. Maurin, B. Bayle, M. de Mathelin
LSIIT (UMR ULP-CNRS 7005) - Control, Vision and Robotics Group, University of Strasbourg, Bd. Brant, 67412 Illkirch, France

Received 24 May 2007; received in revised form 28 September 2007; accepted 4 October 2007
Available online 17 October 2007

Abstract

In this paper, we present a 3D-tracking technique inspired by visual servoing, specifically designed for Computed Tomography (CT). This work has been developed within the framework of robot- and computer-assisted interventional radiology, using stereotactic external fiducials made of radiopaque rods. These fiducials produce a set of image feature points that are used in a pose estimation algorithm, with only one slice. The patient's movements can then be tracked with the proposed algorithm by means of a motion field approach, so as to update the 2D/3D registration. Therefore, the proposed method solves a fundamental safety issue associated with the robotic assistance of CT-guided interventions.

The contributions of the paper are threefold. First, the stereotactic visual feedback is modelled using the Plücker representation for 3D straight lines, while the CT plane slice provides corresponding image points. It is shown that the number of features needed to compute the pose is minimal compared with previously known techniques. Second, the Jacobian matrix which relates the image point displacements to the velocity screw of the stereotactic frame is computed, providing the CT motion field. Third, the update of the Jacobian matrix is investigated; it requires the online stereotactic registration. Since, with the CT imaging modality, the 2D/3D registration is highly inaccurate while the fiducials are moving, this paper provides a CT visual 3D-tracking method inspired by image-based visual servoing, which may alleviate the Jacobian matrix updates.

To validate this technique, we first present simulations of the CT visual tracking. Finally, the proposed method is applied to real images obtained with stereotactic markers mounted on a robotic platform and placed in a CT scanner.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Medical image-based guidance; Stereotactic registration; CT motion field; Computed tomography

1. Motivation and background

1.1. Motivation

For the last two decades, robotic and computer-assisted surgery have gained increasing interest. Successful achievements have shown that both patients and surgeons can benefit from improvements in many kinds of interventions, from diagnosis to therapy, e.g. biopsies, tumour detection and resection, real-time visualization, etc. Medical robots can provide significant help in surgery, most notably to improve positioning accuracy and to perform intraoperative image guidance [1]. Among minimally invasive procedures, interventional radiology is a developing field in which percutaneous needle insertions are performed thanks to visual feedback. This technique is suitable both for diagnosis and therapy [2]. It relies on the ability to accurately position and guide a surgical needle to an anatomical target by means of a CT imaging device. The guidance of the needle during the insertion is usually performed using the set of images (slices) provided by the CT scanner. The practitioner inserts the needle and manually corrects its position in order to follow a preoperatively planned trajectory. Although this technique has many advantages, such as reduced patient trauma or shorter procedure duration, it has the drawback of exposing the radiologist to X-rays, which is harmful and dangerous in the case of repeated interventions.

To improve the guidance accuracy and to protect the radiologist against X-ray radiation, several robotic systems have already been proposed [3-5]. These pioneering systems are mainly dedicated to interventions on motionless organs, such as the kidney [7], with CT scanners or C-arms. Indeed, although

Corresponding author. Tel.: +33 0 388119111.
E-mail addresses: doignon@lsiit.u-strasbg.fr (C. Doignon), maurin@eavr.u-strasbg.fr (B. Maurin), bayle@lsiit.u-strasbg.fr (B. Bayle), demathelin@lsiit.u-strasbg.fr (M. de Mathelin).

0921-8890/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.robot.2007.10.003

breathing or other position disturbances have a limited influence on such organs, it is not the case for moving organs, such as the liver. To reduce physiological motion effects, electromagnetic tracking systems can be used [10]. Body-attached systems were also designed for that purpose [8,9,11]. As they move with the patient's body, these robotic systems partly prevent relative motions between the surgical tools and the organs. However, this may not be sufficient for applications like breast needle biopsy, where the patient's respiratory motions are really important. In this case, the limited number of image acquisitions and the resulting lack of online registration result in poor guiding precision and potential safety risks for the patient.

1.2. CT-guided robotized procedures

In medical robotics, particularly in minimally invasive interventions, image-guided robotic assistance can potentially improve and facilitate the positioning of surgical tools [12]. Generally, the ability to use medical imaging for robot control depends on whether real-time imaging is possible or not. In addition, these robotized procedures also have some drawbacks. In the case of CT imaging, the most important problem is related to outliers that may appear in the medical image because of the presence of several metallic parts of the robot. This notably disturbs the automatic registration process. Computer vision modules dedicated to CT imaging have recently been developed for single-slice registration [13,31]. However, they require a structured environment in order to be reliable enough. To do so, stereotaxy is commonly used. This registration method was initially proposed by Brown [14] twenty-seven years ago. Originally developed for neurosurgery, it consisted in a set of patterns screwed onto the patient's skull. More generally, stereotactic fiducials have to contain radiopaque material so as to provide a set of spots in the CT slices. Brown developed both stereotactic fiducials and the first mathematical formulation for this patient-to-modality registration. With this technique, it is possible to compute the pose (position and orientation) of the fiducials' reference frame with respect to the CT plane, which is the sensing area of a CT scanner. This can be achieved with a small number of fiducials, either manually [13,15,16] or automatically [11].

The last generation of CT scanners embeds new capabilities and allows up to a dozen slices per second to be acquired. This functionality, called CT fluoroscopic mode, is characterized by a low resolution and a reduced X-ray energy. Its benefit is to make it possible to acquire a video sequence but, unfortunately, the associated signal-to-noise ratio still remains limited. In particular, it is significantly lower than in the case of the one-shot slice mode. With this working mode, Siddique and Jaffray [6] have studied the relation between the applied dose and the uncertainty in localization. They have developed a localization algorithm based on particle filtering, with interframe smoothing. This specific functionality of CT scanners should not be confused with X-ray fluoroscopic imaging, provided by C-arms. Actually, in this case, radiographic images emanate from projections instead of slices (see Navab et al. [5] about needle positioning by means of a visual servoing using cross-ratios). Additionally, standard CT scanner interfaces do not allow a whole image sequence to be easily captured: the output video is limited by the manufacturers to display purposes. However, such images will be available in the very near future, thus allowing automatic visual feedback and then visual servoing with robots. Anticipating these technological innovations, we propose in this paper to study the visual tracking and servoing problem with CT scanners, using a set of stereotactic markers.

1.3. Related work on model-based visual servoing

Since the CT motion field in the current work is largely inspired by the visual feedback methods used with classical TV cameras, we briefly recall the key points, assumptions and methodologies of the early techniques developed for robot control and visual tracking purposes. Two main approaches are commonly used to design a visual servoing scheme. Sanderson and Weiss [17] introduced a classification which exhibits the two oldest approaches: Position-Based Visual Servoing (PBVS, or 3D visual servoing), based on successive rigid registrations, and Image-Based Visual Servoing (IBVS, or 2D visual servoing), based on the motion field. The PBVS approach requires a full geometrical model of the object of interest in order to use geometrical features extracted from successive images. The correspondence between image and object features, as well as the pose computation, has to be done for every acquired image. For a desired 3D pose, an appropriate task function [18] can be chosen to express the pose error as a global vector. Then, an appropriate control law is derived to obtain asymptotically stable convergence to the desired pose. Some relevant modifications to the initial approach have been proposed. In particular, Cervera and Martinet [19] proposed to change the reference frame in which the pose variations are expressed (object frame or camera frame). Indeed, the choice of this reference frame has a significant impact on the 2D trajectory of the visual features [20], so that in some cases the target may get out of the camera's field of view.

In the IBVS approach [21-23], the error that has to be minimized is directly expressed in the image plane. In the case of a rigid object of interest, the relation between the velocity screw and the displacements of geometrical image features is expressed by a matrix called the interaction matrix [24]. This matrix can be estimated either online or only once, with the desired image feature arrangement, for instance. Contrary to the PBVS approach, the pose can be only partially estimated, since only the depth is required for the online update of the matrix components. However, the depth signal is often noisy, and a visual servoing based on a precomputed depth is often sufficient, provided the convergence is ensured. Following this approach, the Plücker representation of a set of 3D lines was used in a visual servoing scheme by Rives et al. [27,24] for pinhole cameras. The motion analysis was further studied by Mitiche [25], and projective alignment methods were proposed for large displacements by Bartoli and Sturm [26]. Recently, a normalized version of the Plücker representation due to Navab
was incorporated in a 2D visual servoing with a sequential decoupling in orientation and 3D position control [28].

Generally, IBVS methods are very accurate, since the task function is expressed in the sensor plane. A major drawback lies in the difficulty of detecting the singularities of the Jacobian matrix, in order to avoid numerical problems in the control law. Besides, while image trajectories are close to straight lines, the 3D trajectories of the system may sometimes be complex space curves. For these reasons, the IBVS approach is suitable for high-precision objectives and when the initial position is in the neighbourhood of the desired position, as it is for the compensation of small 3D motions considered here. Some other hybrid techniques also exist, like the 2D 1/2 visual-servoing method proposed by Malis and Chaumette [29].

The above-mentioned visual-servoing techniques have been mostly validated with classical visual sensors like CCD/CMOS cameras with perspective projection models. In the last few years, we have noticed the adaptation of these algorithms to less familiar visual sensors, e.g. omnidirectional vision, ultrasound sensors, endoscopic images, etc. Of course, for such sensors, perspective projection no longer applies and appropriate geometrical modelling has to be done for 3D-reconstruction or visual-control purposes. This is the case of the CT scanners that we consider in this paper. As far as we know, this is the first time such a study has been proposed in the context of CT imaging and stereotaxy.

1.4. Outline

In Section 2, we model the stereotactic visual feedback with the Plücker representation of 3D straight lines and we calculate the matrix which relates the image point displacements to the velocity screw of the stereotactic frame. These are the key contributions of this work. The proposed modelling is generic enough, since it does not depend on any robotic architecture nor on the arrangement of the rods of the stereotactic fiducials. It could also be used for other medical imaging devices such as MRI, with of course appropriate fiducial materials.

In Section 3, we propose a strategy to perform the 3D tracking with CT images and without Jacobian matrix updates. We then present simulations of this 3D tracking scheme, inspired by visual servoing. Finally, the proposed method is applied to CT images of stereotactic markers placed onto a mannequin.

2. Modelling the scanner-to-world relations

2.1. Background and objectives

Most previous works proposed closed-form solutions or numerical optimizations for the stereotactic registration based on special arrangements of the rods (a set of 9 N-shaped rods in [13], a set of 12 V-shaped rods in [15]). Lee et al. [16] provide a comparative study of several numerical approaches for 6 rods. All these methods have been designed with no distinction between the fiducial model and the imaging model. This paper rather aims at understanding the embedded geometrical properties with a suitable modelling of each ingredient. For this modality, we provide an image-based visual servoing approach which can be applied either for numerical optimization purposes (as it is for virtual visual servoing based on IBVS for pinhole cameras [30]) or for vision-based robot control with significantly fewer features (only 3 or 4). Hence, it can be thought of as a numerical optimization technique driven by the stereotactic-frame-to-CT-image mapping, and also as a smoothing filter with intraframe interpolation when abrupt interframe displacements occur.

Furthermore, our work takes advantage of the object model, as it is available most of the time in stereotaxy. In the next subsection, we provide the geometrical relationships between a 3D line and its corresponding image point. A closed-form solution for the pose is briefly presented, and the kinematic analysis is then detailed in the second part.

2.2. Geometrical modelling

In CT imaging, landmarks made of radiopaque objects are used to register a 3D object with respect to the scanner frame. These landmarks can either be internal organs, e.g. bones, or artificial parts fixed to the patient. In the latter case, fiducials with radiopaque materials are commonly used to carry out a stereotactic registration. The geometry of these fiducials is generally simple for modelling convenience: they are typically built from a set of straight rods (see Fig. 1(a)). It was designed to allow a robust registration with a limited number of fiducials [31]. The intersections of the rods with the CT plane provide spots. Notice that we assume that the CT-scanner sensing zone is a plane, whereas it is in fact a few millimetres thick.

A geometrical transformation composed of anisotropic scalings, a Euclidean transformation and an orthogonal projection is proposed to model the relationship between the 3D straight lines and the corresponding image spot centres [31]. To this end, we consider the Plückerian coordinates ${}^0 L_i = ({}^0 y_i^T, {}^0 w_i^T)^T$ of a straight line $\Delta_i$ expressed in the world reference frame $F_0$ (the superscript $x$ refers to the frame $F_x$). Vector ${}^0 y_i$ is supported by the line direction and may be set to unity. The vector ${}^0 w_i$ is defined by the product ${}^0 w_i = \mathbf{OO}_i \times {}^0 y_i$ and embeds the position and inclination of the plane $(O, \mathbf{OO}_i, {}^0 y_i)$; it thus satisfies ${}^0 y_i^T\, {}^0 w_i = 0$ (see Fig. 2). As a consequence, ${}^0 L_i$ has exactly four free parameters, providing a suitable representation to express the 4 dof of a 3D straight line. Let $[a]_\times$ stand for the skew-symmetric matrix of a vector $a$, and let $[u_i\ v_i]^T$ be the vector of the coordinates of the intersection of $\Delta_i$ with the CT plane, denoted ${}^I Q_i$ in the image $I$ and $P_i$ in space. It was shown in [31] that

$${}^{ct}\mathbf{OP}_i = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}^T S_{ct}\; {}^I Q_i \qquad (1)$$

which leads to the fundamental equation of the 2D/3D stereotactic registration [33]:

$$[{}^0 y_i]_\times \left( [\,r_1\;\; r_2\,]\, S_{ct} \begin{bmatrix} u_i \\ v_i \end{bmatrix} + t_0 \right) = {}^0 w_i. \qquad (2)$$
Fig. 1. The stereotactic landmarks used for the experiments. A set of metallic rods is embedded in a plexiglass cube of approximately 30 × 30 × 40 mm. (a) The two cubes screwed together and placed on a guide rail. (b) One of the stereotactic cubes mounted on the CT-Bot platform.

Fig. 2. A 3D line Δ_i crossing the CT plane. The pair of vectors (0y_i, 0w_i) is the Plückerian representation of the line.

$R^T = (r_1, r_2, r_3)$ and $t_0 = -R^T t$ are respectively the 3D rotation matrix and the position vector of the object reference frame with respect to the CT scanner reference frame $F_{ct}$. The scalings, which are the diagonal terms of the 2D diagonal matrix $S_{ct}$, act as intrinsic parameters for the CT-imaging device.

Generally, the intersections of the straight lines with the cutting plane should provide as many spots as there are lines. In practice, several spots may be missing in the image or, on the contrary, some artifacts may appear [16]. Line fiducials $\Delta_j$ are bounded ($\lambda_{\min,j} \le \lambda_j \le \lambda_{\max,j}$). It is easy to compute these extremal values for any displacement $(R, t)$ and to check for a real intersection, and hence the relevance of the corresponding spot, since

$$\lambda_j = {}^0 y_j^T\,[\,r_1\;\; r_2\;\; t_0\,]\begin{bmatrix} S_{ct} & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} u_j \\ v_j \\ 1 \end{bmatrix}. \qquad (3)$$

2.3. A linear solution for the pose

We assume that a calibration of the scanner and the line/point correspondences were previously obtained (see [31] for more details). Now consider the problem of the pose recovery, which consists in solving Eq. (2) for the unknowns $r_1$, $r_2$ and $t_0$, gathered in a vector $x$. Eq. (2) can be written as

$$\underbrace{\left((u_i, v_i, 1) \otimes [{}^0 y_i]_\times\right) S_9}_{a_i^T}\begin{bmatrix} r_1 \\ r_2 \\ t_0 \end{bmatrix} = {}^0 w_i, \qquad (4)$$

where $\otimes$ is the Kronecker product and where the $9 \times 9$ matrix

$$S_9 = \begin{bmatrix} S_{ct} & 0 \\ 0 & 1 \end{bmatrix} \otimes I_{3\times 3} \qquad (5)$$

only depends on the intrinsic parameter matrix $S_{ct}$.

With $n$ 3D lines and the corresponding visible image points, a $(3n \times 9)$ matrix $A = (a_1, a_2, \ldots, a_n)^T$ and a $(3n \times 1)$ vector $b$ can be built so as to write Eq. (4) as a deficient-rank linear system of the form $Ax = b$. If $n \ge 3$, $\operatorname{rank}(A) \le 8$ and the system can be solved thanks to the SVD (Singular Value Decomposition) of $A$. Enforcing the rank to be equal to 8 when $n = 4$ provides a one-parameter family of solutions for $x$ and allows one to include one of the three quadratic constraints between the first two columns of the rotation matrix $R^T$: $r_1^T r_2 = 0$, $\|r_1\| = 1$ and $\|r_2\| = 1$. Finally, the solution for $R$ is decomposed with the SVD to get the closest orthonormal matrix in the sense of the Frobenius norm.

Note that, with an uncalibrated scanner, the diagonal entries $s_x$ and $s_y$ of the matrix $S_{ct}$ of intrinsic parameters can be recovered simultaneously with the pose parameters, with at least 5 lines, by solving a linear system similar to (2) for the 9-vector $(l_1^T = s_x r_1^T,\ l_2^T = s_y r_2^T,\ t_0^T)^T$, but now with the three quadratic constraints

$$l_1^T l_1 = s_x^2, \qquad l_2^T l_2 = s_y^2, \qquad l_1^T l_2 = 0. \qquad (6)$$

2.4. Expression of the apparent displacement with stereotactic markers

To achieve a PBVS, the previous pose determination algorithm and a line/point matching process must be used
Fig. 3. The stereotactic markers intersect the CT plane at different locations in the image. (left) A set of three space configurations obtained with large and small displacements of the markers. (right) The corresponding displacements of the image points (spots) inside the scanner image.

simultaneously to perform a 2D/3D registration. However, in practice, disturbances induce small displacements which must be taken into account during a robotized image-guided procedure. As we mentioned before, IBVS is more suitable for the compensation of small displacements (see Fig. 3). We therefore chose this approach in preference to the PBVS in order to achieve the 3D tracking. To do so, a kinematic analysis based on the geometrical modelling is described in this paragraph. It leads to the expressions of a CT motion field.

Consider Eq. (2) expressed in the motionless scanner reference frame $F_{ct}$. Rearranging Eq. (2) and premultiplying both sides by the transpose of the rotation matrix, we get

$$R\,[{}^0 y_i]_\times R^T \left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} u_i' \\ v_i' \end{bmatrix} - t \right) = R\;{}^0 w_i \qquad (7)$$

with $[u_i'\ v_i']^T = S_{ct} [u_i\ v_i]^T$. Assuming that the intrinsic parameters in $S_{ct}$ remain constant and differentiating Eq. (7), we obtain

$$\left(\dot R\,[{}^0 y_i]_\times R^T + R\,[{}^0 y_i]_\times \dot R^T\right)\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} u_i' \\ v_i' \end{bmatrix} - t\right) + R\,[{}^0 y_i]_\times R^T\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \dot u_i' \\ \dot v_i' \end{bmatrix} - \dot t\right) = \dot R\;{}^0 w_i. \qquad (8)$$

In the above equation, the first left-hand-side term is equal to the right-hand-side one according to Eq. (2). Then, premultiplying the remaining terms of Eq. (8) by the transpose of the rotation matrix, it comes

$$[{}^0 y_i]_\times \dot R^T\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} u_i' \\ v_i' \end{bmatrix} - t\right) + [{}^0 y_i]_\times R^T\left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \dot u_i' \\ \dot v_i' \end{bmatrix} - \dot t\right) = 0. \qquad (9)$$

Let us now denote by $V = \dot t$ (resp. $\Omega$) the translational (resp. rotational) velocity of frame $F_0$, attached to the object fiducials, with respect to frame $F_{ct}$, expressed in $F_{ct}$. Applying the following classical properties

$$[u]_\times v = u \times v = -\,v \times u = -[v]_\times u, \qquad [\Omega]_\times = \dot R\, R^T, \qquad [\Omega]_\times^T = -[\Omega]_\times$$

to Eq. (9), we get

$$F_i \begin{bmatrix} \dot u_i' \\ \dot v_i' \end{bmatrix} = [{}^0 y_i]_\times\left(V + [\Omega]_\times\left(\begin{bmatrix} u_i' \\ v_i' \\ 0 \end{bmatrix} - t\right)\right) \qquad (10)$$

with

$$F_i = [{}^0 y_i]_\times \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}.$$

By premultiplying Eq. (9) by $F_i^T$, we finally obtain the relationship between the variations of the image point positions and the velocity screw $\tau = [V^T\ \Omega^T]^T$:

$$\begin{bmatrix} \dot u_i \\ \dot v_i \end{bmatrix} = (J_{ct})_i\, \tau \qquad (11)$$

where the elementary Jacobian matrix $(J_{ct})_i$ can be written

$$(J_{ct})_i = \big[\,(J_v)_i \;\; (J_\omega)_i\,\big] \qquad (12)$$

with

$$(J_v)_i = S_{ct}^{-1}\,(F_i^T F_i)^{-1} F_i^T\,[{}^0 y_i]_\times, \qquad (J_\omega)_i = -(J_v)_i \left[\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} S_{ct} \begin{bmatrix} u_i \\ v_i \end{bmatrix} - t\right]_\times. \qquad (13)$$

2.5. Kinematic analysis

Let us now analyse the properties of the matrix $(J_{ct})_i$. First of all, $(J_{ct})_i$ is a rank-2 $(2 \times 6)$ matrix, and three pairs of corresponding 3D lines/2D points are necessary to get the velocity screw. Hence, the Jacobian matrix $J_{ct} = [(J_{ct})_1^T\ (J_{ct})_2^T\ (J_{ct})_3^T]^T$ is obtained by stacking the submatrices $(J_{ct})_i$. To compute the inverse of the $(6 \times 6)$ matrix $J_{ct}$, the following conditions have to be taken into account:

- if the three 3D lines are parallel, the first three columns of $J_{ct}$ are linearly dependent,
Fig. 4. The visual CT tracking based on image-based visual servoing, with simulated data and the proposed scanner modelling. Image trajectories of the markers (top), angular errors (middle) and position errors (bottom) of the stereotactic frame for the two studied cases: during the servoing, either the Jacobian matrix J_ct is updated (left column, figures (a), (c) and (e)) or J_ct remains constant (right column, figures (b), (d) and (f)).

- if the three 3D lines are coplanar, the rows of $J_{ct}$ are linearly dependent.

These singularities have to be avoided from the start of the design of the stereotactic cube. Considering the first condition, $(J_{ct})_i$ can be seen as the product

$$(J_{ct})_i = \underbrace{J_v([{}^0 y_i]_\times)}_{(2\times 3)}\;\underbrace{\big[\, I_{3\times 3},\ -[T({}^I Q_i, t)_i]_\times \,\big]}_{(3\times 6)} \qquad (14)$$
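As a numerical sketch of Eqs. (12)-(14) and of the singularity conditions above, the elementary Jacobians can be assembled and stacked, and the rank of the resulting (6 × 6) matrix checked. All numerical values (rod directions, spot coordinates, pose translation, scalings) are hypothetical, and the helper below simply transcribes the printed formulas; it is not the authors' implementation.

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a]_x, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

E = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # the (1 0 0; 0 1 0)^T matrix

def elementary_jacobian(y, uv, t, S_ct):
    """(J_ct)_i of Eqs. (12)-(13): maps the velocity screw to (du, dv)."""
    y = y / np.linalg.norm(y)
    F = skew(y) @ E                                  # F_i, a 3x2 matrix
    Jv = np.linalg.inv(S_ct) @ np.linalg.inv(F.T @ F) @ F.T @ skew(y)
    m = E @ S_ct @ uv                                # spot position in the CT plane (mm)
    Jw = -Jv @ skew(m - t)                           # rotational part of Eq. (13)
    return np.hstack([Jv, Jw])                       # a rank-2 (2x6) matrix

t = np.array([3.0, -2.0, 40.0])                      # hypothetical pose translation
S_ct = np.diag([0.5, 0.5])                           # hypothetical pixel spacings

# Three non-parallel, non-coplanar rods and their (hypothetical) spot centres.
rods  = [np.array([0.2, 0.1, 1.0]),
         np.array([-0.1, 0.3, 1.0]),
         np.array([0.3, -0.2, 1.0])]
spots = [np.array([20.0, -10.0]),
         np.array([-16.0, 12.0]),
         np.array([0.0, 24.0])]

J = np.vstack([elementary_jacobian(y, uv, t, S_ct) for y, uv in zip(rods, spots)])
print(np.linalg.matrix_rank(J))    # full rank away from the singular configurations

# Parallel rods: identical (J_v)_i blocks make the left 6x3 block rank-deficient.
J_par = np.vstack([elementary_jacobian(rods[0], uv, t, S_ct) for uv in spots])
print(np.linalg.matrix_rank(J_par))
```

Stacking three blocks built from one common direction reproduces the structure of Eq. (15): the left (6 × 3) submatrix collapses to rank 2 and the stacked Jacobian becomes singular, which is the rank test a cube designer would run on candidate rod layouts.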
Fig. 5. The visual CT tracking based on image-based visual servoing, with simulated data corrupted by zero-mean Gaussian noise. The standard deviation is set to 0.5 pixel and the noise is added to the coordinates of the image points. Image trajectories of the markers (top), angular errors (middle) and position errors (bottom) of the stereotactic frame for the two studied cases: the Jacobian matrix J_ct is updated (left column, figures (a), (c) and (e)) or it remains constant (right column, figures (b), (d) and (f)). At the end of the servoing, the orientation error is about 3 degrees on average and the 3D position error is less than 1 mm.

which points out the decoupling between the left factor, which depends only on the object model ($^0y_i$), and the right factor, which depends on the 3D position and the image point coordinates. If three lines $\Delta_i$, $\Delta_j$ and $\Delta_k$ are parallel, the vectors $^0y_i$, $^0y_j$ and $^0y_k$ are equal up to a scale, and the matrices $F_i$, $F_j$ and $F_k$ are also equal up to a scale. Then $(J_v)_i$, $(J_v)_j$ and $(J_v)_k$
Fig. 6. (a) The experimental setup. (b) The stereotactic cube mounted onto the CT-Bot platform (the needle holder).

are equal, and the resulting interaction matrix

$$J_{ct} = \begin{bmatrix} (J_v)_i & (J_\omega)_i \\ \alpha\,(J_v)_i & (J_\omega)_j \\ \beta\,(J_v)_i & (J_\omega)_k \end{bmatrix} \qquad (15)$$

is singular whatever the values of the scalars $\alpha$ and $\beta$ are, because the left $(6 \times 3)$ submatrix has a rank equal to or less than two.

3. 3D tracking approach

This section deals with the validation of the CT motion field developed in the previous paragraphs. To do so, a task function is used [18], as for an image-based visual servoing, so as to compensate for displacements of the fiducials with respect to the CT scanner reference frame. We denote by $s(t)$ the $(2n \times 1)$ vector of the coordinates of the image points at time $t$. To obtain an asymptotically stable convergence of $s(t)$ to a desired $s^\star$, we use the following kinematic control law

$$\tau = -g\,\widehat{J_{ct}^{+}}\,(s(t) - s^\star) \qquad (16)$$

that provides an estimate of the velocity screw at any time. The parameter $g$ can either be a constant gain, a function or, more generally, a matrix. $\widehat{J_{ct}^{+}}$ is an estimate of the pseudo-inverse of $J_{ct}$, since errors in the scaling factors and in the image measurements imply that the real value of $J_{ct}$ remains unknown. In that case, a well-known sufficient condition to ensure the asymptotic stability was given by Samson [18] and can be expressed for an IBVS as

$$\widehat{J_{ct}^{+}}\,J_{ct}(t) > 0, \quad \forall t. \qquad (17)$$

3.1. Simulation results

Two control strategies have been applied for the implementation of the visual CT tracking. They differ in the two possible choices for $\widehat{J_{ct}^{+}}$:

- $\widehat{J_{ct}^{+}} = J_{ct}^{+}(s(t), \hat R, \hat t)$: the Jacobian matrix is updated continuously thanks to new image measurements and to actualized estimations $\hat R$ and $\hat t$ of the rotation matrix and the position vector, using relations (12). This solution seems interesting since, ideally, it implies $\widehat{J_{ct}^{+}}\, J_{ct} = I$, $\forall t$. In that case, the trajectory of each image point is a straight line¹ (see Fig. 4(a)).
- $\widehat{J_{ct}^{+}} = J_{ct}^{+}(s^\star, \hat R^\star, \hat t^\star)$: in that case, the interaction matrix is constant during the servoing and computed in advance with the desired values of the visual features. The stability condition stated in Eq. (17) is then ensured only in a neighbourhood of these desired values. The resulting image trajectory is no longer straight, as one can see in Fig. 4(b).

For these two cases, we implemented the visual CT tracking with a scalar gain value of g = 0.1 and a geometrical model of the stereotactic markers corresponding to the cube displayed in Fig. 1. Angular errors, corresponding to 3D errors in the orientation of the stereotactic frame, are reported in Fig. 4(c) and (d), and position errors are reported in Fig. 4(e) and (f).

The robustness of the two control strategies was tested. The term robustness used here must be understood as the convergence accuracy with respect to noise. To do so, a Gaussian white noise has been added onto the image point coordinates (see Fig. 5). With the help of these figures, one can notice that, even far away from the desired position (the length of the trajectories is about 70 pixels), both methods work very well and the desired position is reached with a mean residual of 3 degrees for the orientation error (sum of the three absolute error angles) and less than 1 mm for the 3D position error (Euclidean distance between the position reached and the 3D position corresponding to the desired 2D visual features).

The two control strategies provide similar results; however, the second one does not require updating the Jacobian matrix. The full stereotactic registration is then not required to compensate for small displacements, which is very convenient for real-time purposes.

3.2. Results with real images

To carry out the experiments, the cube of stereotactic markers has been placed onto the base of a small robotic

¹ In the presence of noisy data, this trajectory is straight up to the least-mean-squares residual. Strictly speaking, the 2D trajectory is straight when exactly three image points are used to compute $J_{ct}^{-1}$.
Fig. 7. Experimental results for the compensation of small displacements. (a)-(b) Initial and desired images. (c) Image trajectories. (d) Zoom-in of (c). (e) Angular errors. (f) 3D position errors.

platform named CT-Bot, which was designed for percutaneous procedures with CT scanners [32] (see Fig. 6). The whole system will be used in the near future to position the needle in real time (as already mentioned, the fluoroscopic mode is unfortunately not yet available to perform the visual servoing with real-time visual feedback). Currently, the platform together with the cube has been installed onto the mannequin, which in turn has been placed on the operation table and moved
so as to bring it into the CT scanner tunnel. Images have been References


acquired with a Siemens Somatom Volume Zoom scanner at the department of radiology, Strasbourg hospital (see Fig. 7(a) and (b)), and collected with FTP commands to a server.

To validate the proposed CT visual tracking approach for accurate repositioning purposes, two CT slices have been acquired, the images have been segmented and the spot centres have been estimated. The differences between the two images in Fig. 7(a) and (b) correspond to a displacement (a translation) of the operation table. For the initial positioning, a full 2D/3D registration was performed.

In Fig. 7(c), image trajectories are reported (Fig. 7(d) is a zoom-in of Fig. 7(c)). They correspond to the case of a constant Jacobian matrix pseudo-inverse J+ct. These trajectories in the image are not perfectly straight. However, the proposed correction gives satisfactory results for the compensation of small displacements, as the 3D position error is less than 1 mm when the desired position is reached (see Fig. 7(e)).
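The constant-Jacobian correction evaluated here is a standard image-based servoing update: the image feature error is mapped to a pose correction through a fixed pseudo-inverse of the Jacobian. Below is a minimal sketch of that loop, assuming a toy 8 x 6 Jacobian (four fiducial spots, two image coordinates each, a 6-DOF screw) and a linear image model used only to test the iteration; these values are illustrative, not the paper's stereotactic Jacobian.

```python
import numpy as np

def servo_step(J_pinv, s, s_star, gain=0.5):
    """One image-based servoing step: map the feature error s - s*
    back to a pose correction through a constant Jacobian pseudo-inverse."""
    return -gain * (J_pinv @ (s - s_star))

# Toy validation with a fictitious 8x6 Jacobian (illustrative values only).
rng = np.random.default_rng(0)
J = rng.standard_normal((8, 6))
J_pinv = np.linalg.pinv(J)

q = np.zeros(6)                  # current pose parameters
q_star = rng.standard_normal(6)  # desired pose
for _ in range(100):
    s, s_star = J @ q, J @ q_star      # linear image model, for the test only
    q = q + servo_step(J_pinv, s, s_star)
print(np.linalg.norm(q - q_star))      # residual error, far below 1e-6
```

With an exact, constant Jacobian the error contracts by the gain at every step; the slightly curved image trajectories of Fig. 7(c) reflect the fact that the true feature-to-pose mapping is only locally linear.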
4. Conclusion 23272334.
In this paper, we have presented a new kinematic modelling to update the stereotactic registration with CT scanners. The main contribution of this work is the determination of the relationships between the 3D motion of a set of generic stereotactic markers and the apparent displacements (or CT motion field) of the corresponding visual features extracted from successive CT images. The corresponding Jacobian matrix Jct has been derived, thanks to the Plücker representation for the rigid set of 3D lines and to an appropriate geometrical modelling. The proposed method, based on this representation, requires fewer features to solve the stereotactic registration than previously proposed in the literature.

To emphasize the effectiveness of the model, we have used it for the compensation of small displacements, both in simulation and with real images extracted from a sequence of CT images corresponding to motions of the operation table. To this end, we have presented two control strategies which allow us to perform the 3D tracking with existing image-based visual servoing methods. While both control strategies provide similar results, the second one, which does not require an update of the Jacobian matrix, is very convenient for real-time purposes.

Natural perspectives of this work include the use of the proposed modelling to register in real time the end-effector of a robotic system placed in the CT scanner, thanks to its fluoroscopic mode.

The geometrical modelling we proposed here is sufficiently accurate for most applications involving a stereotactic registration. It is another step toward robotic assistance by means of external fiducials and dynamic visual feedback. However, we wish to investigate the geometrical modelling further. In particular, since the CT image slice results from the intensity distribution and integration inside an active volume bounded by two planes, rather than a single (active) plane, it could be helpful to conceive a geometrical model taking this active volume into account, so as to better understand the formation of artifacts and spot deformations.
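To illustrate the geometry underlying this modelling: each fiducial rod can be represented as a Plücker line (direction u, moment m = p x u), and its image spot is the intersection of that line with the slice plane, taken here as z = 0. The sketch below is only an illustration under these assumptions, with made-up rod coordinates, not the authors' implementation.

```python
import numpy as np

def plucker_plane_z0(u, m):
    """Intersect a 3D line in Pluecker coordinates (direction u, moment
    m = p x u for any point p on the line) with the plane z = 0.
    Assumes the line is not parallel to the slice plane (u[2] != 0)."""
    u, m = np.asarray(u, float), np.asarray(m, float)
    p0 = np.cross(u, m) / np.dot(u, u)  # point of the line closest to the origin
    t = -p0[2] / u[2]                   # line parameter where z vanishes
    return (p0 + t * u)[:2]             # (x, y) coordinates in the slice

# A vertical rod through (1, 2, -3) pierces the slice z = 0 at (1, 2).
p = np.array([1.0, 2.0, -3.0])
u = np.array([0.0, 0.0, 1.0])
m = np.cross(p, u)
print(plucker_plane_z0(u, m))  # → [1. 2.]
```

Differentiating such intersection points with respect to a rigid motion of the line set is what yields the CT motion field and the Jacobian Jct.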
C. Doignon et al. / Robotics and Autonomous Systems 56 (2008) 385–395

C. Doignon received the B.S. degree in Physics in 1987 and the Engineer diploma in 1989, both from the Ecole Nationale Supérieure de Physique de Strasbourg, France. He received the Ph.D. degree in Computer Vision and Robotics from Louis Pasteur University, Strasbourg, France, in 1994. In 1995 and 1996, he worked with the Department of Electronics and Computer Science at Padua University, Italy, for the European Community under the HCM programme "Model Based Analysis of Video Information". Since 1996, he has been an associate professor in computer engineering at the Louis Pasteur University, Strasbourg, France. His major research interests include computer vision, signal and image processing, visual servoing and robotics.

B. Maurin graduated from the Ecole Nationale Supérieure de Physique de Strasbourg (ENSPS, France) in 2001. He was then a Ph.D. student at the Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT, France) and obtained a doctorate in medical robotics and computer vision from the University Louis Pasteur (ULP, France) in 2005. Since 2006, he has been a Research and Development Engineer for Cerebellum Automation SAS. His research interests are motion control for high-speed robotic systems and computer vision.

B. Bayle was born in 1972. He studied electrical engineering at the Ecole Normale Supérieure de Cachan, France, from 1992 to 1995. He studied robotics and control at the LAAS/CNRS in Toulouse and obtained his M.S. and Ph.D. degrees from the University of Toulouse, in 1996 and 2001 respectively. Since 2002, he has been an Associate Professor with the Ecole Nationale Supérieure de Physique, Strasbourg. His research interests include medical robotics, haptics and teleoperation.

M. de Mathelin received the Electrical Engineering degree from Louvain University, Louvain-La-Neuve, Belgium, in 1987, and the M.S. and Ph.D. degrees in electrical and computer engineering from Carnegie Mellon University, Pittsburgh, PA, in 1988 and 1993, respectively. During the 1991–1992 academic year, he was a Research Scientist with the Electrical Engineering Department, Polytechnic School of the Royal Military Academy, Brussels, Belgium. In 1993, he became Maître de Conférences with the Université Louis Pasteur, Strasbourg, France, where, since 1999, he has been a Professor with the Ecole Nationale Supérieure de Physique de Strasbourg (ENSPS). He is Associate Editor of the IEEE Transactions on Control Systems Technology. His research interests include adaptive and robust control, visual servoing, and medical robotics. He received the King-Sun Fu Memorial award for the best 2005 Transactions on Robotics paper. Dr. de Mathelin is a Fellow of the Belgian American Educational Foundation.
