IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 12, NO. 1, FEBRUARY 1996
A Self-Calibration Technique
for Active Vision Systems
Song De Ma, Senior Member, IEEE
Abstract—Many vision research groups have developed the active vision platform whereby the camera motion can be controlled. A similar setup is the wrist-mounted camera for a robot manipulator. This head-eye (or hand-eye) setup considerably facilitates motion stereo, object tracking, and active perception. One of the important issues in using the active vision system is to determine the camera position and orientation relative to the camera platform. This problem is called the head-eye calibration in active vision, and the hand-eye calibration in robotics.

In this paper we present a new technique for calibrating the head-eye (or hand-eye) geometry as well as the camera intrinsic parameters. The technique allows camera self-calibration because it requires no reference object and directly uses the images of the environment. Camera self-calibration is important especially in circumstances where the execution of the underlying visual tasks does not permit the use of reference objects. Our method exploits the flexibility of the active vision system, and bases camera calibration on a sequence of specially designed motions. It is shown that if the camera intrinsic parameters are known a priori, the orientation of the camera relative to the platform can be solved using three pure translational motions. If the intrinsic parameters are unknown, then two sequences of motion, each consisting of three orthogonal translations, are necessary to determine the camera orientation and intrinsic parameters. Once the camera orientation and intrinsic parameters are determined, the position of the camera relative to the platform can be computed from an arbitrary nontranslational motion of the platform. All the computations in our method are linear. Experimental results with real images are presented in this paper.
Manuscript received December 31, 1993; revised March 28, 1994. This work was supported by the Chinese National Science Foundation and the Chinese National High Technology Program.
The author is with the National Pattern Recognition Laboratory, Institute of Automation, Chinese Academy of Sciences, Beijing 100080 P.R. China (e-mail: masd@prlsun2.ia.ac.cn).
Publisher Item Identifier S 1042-296X(96)01062-7.

I. INTRODUCTION

MANY artificial vision research groups have developed the active vision platform whereby the camera motion can be controlled [1]. A similar setup is the wrist-mounted camera on a robot manipulator. It has been shown [2], [3] that this head-eye (or hand-eye) setup considerably facilitates many visual tasks. One of the important issues in using such a system is to determine the orientation and position of the camera relative to the camera platform (or the robot's end effector). The camera platform's motion is expressed in a coordinate frame attached to the platform's base, whereas the image data are described in the camera coordinate system. Although the platform's motion can be read from the controller, the transformation between the two coordinate systems (called the head-eye geometry) has to be determined in order to compute the camera's motion from the platform's motion. This problem has been elegantly solved by Shiu and Ahmad [4], Tsai and Lenz [5], and Chen [6]. Although the motion representations adopted are different, these approaches are mathematically equivalent. In all these methods, the head-eye geometry is determined by activating a sequence of platform movements and simultaneously measuring the induced camera motion by observing an object of known structure (called the calibration reference). If A (or B) represents the transformation between the camera frame and a world coordinate system attached to a calibration reference before (or after) the movement, then the induced camera motion is equal to AB^{-1} (see Fig. 1), where A and B are called the camera extrinsic parameters (or the pose of the camera) and can be determined independently by means of camera calibration using a calibration reference ([7]-[13]). (Note, in this paper, camera calibration means the determination of both intrinsic and extrinsic camera parameters, while the head-eye calibration refers to the determination of the head-eye geometry.) It has been proved [4]-[6] that, for the head-eye calibration, two motions are necessary. Therefore, four calibration processes are needed to determine A_1 and B_1 (after the first motion), and A_2 and B_2 (after the second motion). Besides the computation cost involved, the previous methods also require a reference object, which is not always available in practice.

The goal of the current work is to develop a head-eye camera self-calibration technique which determines both the head-eye geometry and the camera intrinsic parameters. The self-calibration technique requires no known reference object and performs calibration directly using the images of the environment. This is important since in many cases the camera needs to be recalibrated but reference objects cannot easily be placed in the environment (e.g., for the remote control robot). Our method combines the head-eye calibration (the determination of the head-eye geometry) and the camera calibration (the determination of the intrinsic camera parameters) in one process. (Note we do not use the calibration reference, which is a known object in the world coordinate system. Therefore, the camera extrinsic parameters, i.e., the position and orientation of the camera relative to the world coordinate system, are not involved in our method.)

Few self-calibration methods have been presented in the literature. Maybank and Faugeras have described such a method for a moving camera [14]. They proved that from two sequential images of a moving camera, one can obtain two quadratic equations (called the Kruppa equations) of the camera intrinsic parameters. Therefore, at least two motions are required to determine the camera intrinsic parameters.
[Fig. 1. The head-eye setup: the camera coordinate system, the platform coordinate system, and the world coordinate system attached to a calibration reference.]
II. NOTATIONS AND PROBLEM DEFINITION
In an active vision system (or a robot hand-eye system), the camera is rigidly mounted on a pan-tilt-translation platform. The platform can be controlled to move (rotate and/or translate) along any desirable path, and the motion parameters can be read from the controller. Since the camera coordinate system does not coincide with that of the platform (see Fig. 1), the known motion parameters described in the platform's coordinate system are not, in general, identical to those described in the camera frame. The transformation between these two coordinate systems can be described by a rotation R and a translation t, where R represents the orientation of the camera, and t represents the position of the optical center relative to the platform's coordinate system. Therefore, if x_c and x_p denote the coordinates of a point expressed in the camera frame and in the platform frame, respectively, then

x_p = R x_c + t.  (1)

Let (u, v) be the pixel coordinates of the image of a point (x_c, y_c, z_c) given in the camera frame. Under the pinhole model,

(u - u_0) d_x = \frac{f x_c}{z_c}  (2)

(v - v_0) d_y = \frac{f y_c}{z_c}  (3)

where (u_0, v_0) is the principal point, f is the focal length, and d_x and d_y are the pixel sizes along the two image axes. Writing f_x = f/d_x and f_y = f/d_y, we call u_0, v_0, f_x, and f_y the camera intrinsic parameters.

In this paper, we present a head-eye self-calibration technique which allows us to determine both the head-eye geometry (R and t) and the camera intrinsic parameters u_0, v_0, f_x, and f_y.
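For concreteness, (1)-(3) can be exercised in a few lines of code. The following numpy sketch (the routine and its interface are illustrative, not part of the paper) maps a platform-frame point to pixel coordinates, using R^T = R^{-1} for a rotation matrix:

```python
import numpy as np

def project(x_p, R, t, u0, v0, fx, fy):
    """Project a platform-frame point into the image via (1)-(3)."""
    # Invert (1): x_c = R^{-1} (x_p - t), with R^{-1} = R^T for a rotation.
    xc, yc, zc = R.T @ (np.asarray(x_p, dtype=float) - t)
    # Pinhole model (2)-(3), written with f_x = f/d_x and f_y = f/d_y.
    u = u0 + fx * xc / zc
    v = v0 + fy * yc / zc
    return np.array([u, v])
```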
III. CAMERA ORIENTATION FROM 3 PURE TRANSLATIONS
Let a platform motion map a point x_p to x_p' = R_pl x_p + t_pl, and let (R_cl, t_cl) denote the same motion described in the camera frame, i.e., x_c' = R_cl x_c + t_cl. Substituting (1) into these relations gives

R_cl = R^{-1} R_pl R  (6)

t_cl = R^{-1}((R_pl - I) t + t_pl).  (7)

Since the camera is mounted rigidly on the platform, the camera motion (R_cl, t_cl) and the platform motion (R_pl, t_pl) are in fact the same motion, but described in different coordinate frames. We see from (6) and (7) that the two representations of the same motion are related by the head-eye geometry R and t, which we want to determine in our calibration process.

In the previous work on head-eye calibration [4]-[6], it is assumed that the platform motion can be obtained from the controller and the induced camera motion can be computed by observing a known reference object in front of the camera. In this case, R and t can be solved from (6) and (7). But in our method, no reference object is used, and new ways have to be found to compute the induced camera motion.
In order to simplify the method and to take advantage of the active vision system, we design a sequence of special motions of the platform. From (6), we see that if the motion is a pure translation, then R_cl = R_pl = I, where I is the 3 × 3 identity matrix. Hence both the platform motion and the induced camera motion are pure translations. In this case, we have from (7)

t_pl = R t_cl.  (8)
Under a pure translation, the image of a static scene expands from (or contracts toward) a single image point, the focus of expansion (FOE), and the translation direction in the camera frame is along the ray from the optical center O to the FOE. Let (u_i, v_i) be the image coordinates of the FOE F_i induced by the i-th translation. From (2) and (3),

OF_i = ((u_i - u_0) d_x, (v_i - v_0) d_y, f)^T = f ((u_i - u_0)/f_x, (v_i - v_0)/f_y, 1)^T  (9)

where i = 1, 2, 3.
Since we assume in this section that the intrinsic camera parameters (u_0, v_0, f_x, f_y) are given, the directions of OF_i can be computed from the image coordinates of the three FOE's and (9). Let t_ci (i = 1, 2, 3) be the normalized vectors of OF_i (i = 1, 2, 3). Then t_ci and t_pi are the same normalized translation vectors, described respectively in the camera frame and the platform frame. From (8), we have t_pi = R t_ci (i = 1, 2, 3), or in matrix form,

(t_p1, t_p2, t_p3) = R (t_c1, t_c2, t_c3).  (10)

If the three translation directions are chosen to be linearly independent, the matrix (t_c1, t_c2, t_c3) is invertible, and the orientation of the camera follows directly:

R = (t_p1, t_p2, t_p3)(t_c1, t_c2, t_c3)^{-1}.  (11)
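A direct numerical transcription of (9)-(11) is straightforward. The sketch below assumes the three FOE's have already been located in the images and that the sign ambiguity of each FOE (translation toward versus away from it) has been resolved beforehand; all names are illustrative:

```python
import numpy as np

def head_eye_rotation(foes, t_platform, u0, v0, fx, fy):
    """Estimate R from three pure translations, per (9)-(11).

    foes       : FOE image coordinates (u_i, v_i), shape (3, 2)
    t_platform : the corresponding platform translation directions,
                 one per column, shape (3, 3)
    """
    # Translation directions in the camera frame, from (9), normalized.
    t_cam = []
    for (u, v) in foes:
        d = np.array([(u - u0) / fx, (v - v0) / fy, 1.0])
        t_cam.append(d / np.linalg.norm(d))
    T_c = np.column_stack(t_cam)
    T_p = t_platform / np.linalg.norm(t_platform, axis=0)
    R = T_p @ np.linalg.inv(T_c)   # (11); needs independent directions
    U, _, Vt = np.linalg.svd(R)    # project onto the nearest orthonormal
    return U @ Vt                  # matrix, as a guard against noise
```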
IV. CAMERA INTRINSIC PARAMETERS FROM 6 PURE TRANSLATIONS
The camera often needs to be recalibrated since its intrinsic parameters may change during the execution of the visual task (e.g., after adjusting the focal length). We present a self-calibration approach by which we can compute the camera intrinsic parameters without using a known reference object. It will be shown that our self-calibration process is almost the same as that for the head-eye geometry calibration described in the previous section. However, since more parameters need to be determined, more motions are required: we use two sets of pure translations, each set consisting of three mutually orthogonal translations (see Table I).
Suppose the three pure translations of one set are mutually orthogonal, so that t_ci · t_cj = 0 for i ≠ j. Substituting (9) into these orthogonality conditions, the image coordinates (u_i, v_i) of the three FOE's must satisfy

\frac{(u_1 - u_0)(u_2 - u_0)}{f_x^2} + \frac{(v_1 - v_0)(v_2 - v_0)}{f_y^2} + 1 = 0  (12)

\frac{(u_1 - u_0)(u_3 - u_0)}{f_x^2} + \frac{(v_1 - v_0)(v_3 - v_0)}{f_y^2} + 1 = 0  (13)

\frac{(u_2 - u_0)(u_3 - u_0)}{f_x^2} + \frac{(v_2 - v_0)(v_3 - v_0)}{f_y^2} + 1 = 0.  (14)

These equations are quadratic in the unknown intrinsic parameters. Subtracting them pairwise eliminates the quadratic terms: with x = u_0, y = (f_x^2/f_y^2) v_0, and z = f_x^2/f_y^2, the differences (12)-(13) and (12)-(14) yield

(u_2 - u_3) x + (v_2 - v_3) y - v_1 (v_2 - v_3) z = u_1 (u_2 - u_3)  (15)

(u_1 - u_3) x + (v_1 - v_3) y - v_2 (v_1 - v_3) z = u_2 (u_1 - u_3).  (16)

Each set of three orthogonal translations thus provides two independent linear equations in x, y, and z (the third pairwise difference is linearly dependent on the other two), so a second set of three orthogonal translations is needed to determine them. Once x, y, and z are known, u_0 = x and v_0 = y/z, and substituting back into (12) gives f_x^2 = -[(u_1 - u_0)(u_2 - u_0) + z (v_1 - v_0)(v_2 - v_0)] and f_y^2 = f_x^2 / z.
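The linear system (15)-(16), stacked over the two sets of orthogonal translations, is small enough to solve directly. A sketch under the assumption that all six FOE's have been measured (the input format and function name are ours):

```python
import numpy as np

def intrinsics_from_two_sets(foe_sets):
    """Recover (u0, v0, fx, fy) from two sets of three FOE's via (12)-(16).

    foe_sets : two arrays of shape (3, 2), each holding the FOE image
               coordinates of one set of mutually orthogonal translations.
    """
    A, b = [], []
    for (u1, v1), (u2, v2), (u3, v3) in foe_sets:
        # Equations (15) and (16), with x = u0, y = (fx^2/fy^2) v0,
        # z = fx^2/fy^2; each set contributes two independent rows.
        A.append([u2 - u3, v2 - v3, -v1 * (v2 - v3)]); b.append(u1 * (u2 - u3))
        A.append([u1 - u3, v1 - v3, -v2 * (v1 - v3)]); b.append(u2 * (u1 - u3))
    x, y, z = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                              rcond=None)[0]
    u0, v0 = x, y / z
    (u1, v1), (u2, v2), _ = foe_sets[0]
    fx2 = -((u1 - u0) * (u2 - u0) + z * (v1 - v0) * (v2 - v0))  # back into (12)
    return u0, v0, np.sqrt(fx2), np.sqrt(fx2 / z)
```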
[Figure: the camera at position A.]

[TABLE I. Two sets of the pure translation directions.]
V. CAMERA POSITION FROM A NONTRANSLATIONAL MOTION

Once the camera orientation R and the intrinsic parameters have been determined, the position t of the camera relative to the platform can be computed from an arbitrary nontranslational motion of the platform. Rearranging (7) gives a linear equation in t:

(R_pl - I) t = R t_cl - t_pl.  (17)
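In code, (17) is a standard linear least-squares problem. The sketch below stacks the constraints from one or more nontranslational motions (two motions with different rotation axes give a well-conditioned system) and assumes the induced camera translations t_cl have been measured, scale included; the interface is illustrative:

```python
import numpy as np

def head_eye_translation(motions, R):
    """Solve (17), (R_pl - I) t = R t_cl - t_pl, in the least-squares sense.

    motions : iterable of (R_pl, t_pl, t_cl) triples; R_pl and t_pl are
              read from the controller, t_cl is the measured induced
              camera translation.
    """
    A = np.vstack([R_pl - np.eye(3) for R_pl, _, _ in motions])
    b = np.hstack([R @ t_cl - t_pl for _, t_pl, t_cl in motions])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```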
VII. CONCLUSION

We have presented a self-calibration technique for active vision (head-eye) and robot hand-eye systems. The technique determines the head-eye geometry and the camera intrinsic parameters from sequences of specially designed platform motions; it requires no known reference object and uses the images of the environment directly. All the computations involved are linear. Experimental results with real images support the approach.