Let us consider P, a 3D point located on the positive side of the zc axis, with coordinates [x y z]T expressed in the camera frame Rc (figure 2). It can easily be shown that the transformation between this 3D point and its corresponding point p = [up vp]T in the image plane is given by:

up = αu · x/z + u0
vp = αv · y/z + v0    (5)

where αu, αv, u0, v0 are constants corresponding to the intrinsic parameters of the camera model. Since the mobile robot moves on a plane π, the depth z, i.e. the distance along the optical axis zc, is constant (figure 2). Under this consideration, the projection (5) becomes linear:

up = λu x + u0
vp = λv y + v0    (6)

with λu = αu/z and λv = αv/z.

All the system states can be expressed in terms of the flat output and its derivatives (eq. 10 and 11). For the input vector, squaring and adding the expressions of ẋ and ẏ (eq. 3) and substituting the derivative of eq. (10) with respect to time, we obtain:

v = √( u̇p²/λu² + v̇p²/λv² )    (12)

Similarly, differentiating eq. (11) with respect to time, we obtain the second control input:

w = ( v̈p λu u̇p λv − üp λv v̇p λu ) / ( u̇p² λv² + v̇p² λu² )    (13)

All the system states (eq. 10, 11) and the control inputs (eq. 12, 13) can thus be expressed in terms of the flat output and its derivatives.

As a consequence of the previous flatness result, the flat system (7) can be transformed into the linear Brunovsky canonical form by dynamic feedback and a change of coordinates. Differentiating (6) twice with respect to time gives:

üp = λu v̇ cos θ − λu v θ̇ sin θ = v1
v̈p = λv v̇ sin θ + λv v θ̇ cos θ = v2    (14)

where the vector V = [v1 v2]T is the new control input in the flat space. The previous relation (14) can be written in the following linear Brunovsky canonical form:

Ż = A Z + B V    (15)

where

Z = [up u̇p vp v̇p]T,
A = [0 1 0 0 ; 0 0 0 0 ; 0 0 0 1 ; 0 0 0 0],
B = [0 0 ; 1 0 ; 0 0 ; 0 1]    (16)

Our objective is to design a controller in the flat space such that the mobile robot tracks a reference trajectory given by:

imageref = [up,ref  vp,ref]T    (17)

In the next section, we propose an MPC controller in the flat space to perform the trajectory tracking.

4. FLATNESS-BASED MPC FOR VISUAL SERVOING

A Nonlinear Model Predictive Controller formulates a trajectory tracking problem as a nonlinear optimization problem. A cost function, defined by a quadratic form of the error between the desired trajectory and the predicted model output, has to be minimized with respect to a control sequence ũ. Since the control law is computed in discrete time, the model must be discretized. Combined with the well-known Internal Model Control (IMC) structure (figure 5), the NMPC problem is mathematically written as follows:

min over ũ:  J = Σ (j = k+1 … k+Np)  [yd(j) − ym(j)]T Q [yd(j) − ym(j)]    (18)

subject to the state, input and output constraints, where Q is a positive definite symmetric matrix, Np is the prediction horizon, x ∈ X ⊂ Rn, u ∈ U ⊂ Rm, y ∈ Y ⊂ Rp, and X, U, Y are compact sets defined by the constraints respectively on the states, the inputs and the outputs.

[Figure 5: IMC structure — the optimization algorithm computes u(j) from the desired trajectory yd(j) = yref(j) − e(j) and applies it both to the process (output yp(j)) and to the internal model (output ym(j)); the modelling error e(j) is fed back.]

4.1 Application to Image-Based Visual Servoing

MPC can easily address both tasks of IBVS: minimization of an image error and constraint handling. The cost function is defined by:

Jvs = Σ (j = k+1 … k+Np)  error(j)T Q error(j)    (20)

with
- error(j) = imaged(j) − imagem(j)
- imaged(j) = [udp(j) vdp(j)]T: desired image
- imagem(j) = [up(j) vp(j)]T: image predicted by the nonlinear global model (7).

Two kinds of constraints can be taken into consideration:
- mechanical constraints, such as actuator limitations in amplitude or velocity:

umin ≤ uj ≤ umax
∆umin ≤ uj − uj−1 ≤ ∆umax    (21)

- visibility constraints, such as image limitations, which ensure that the visual features stay in the image plane. Visibility constraints, the weak point of classical IBVS, can easily be added to the optimization problem.

The constrained optimization problem (18) has to be solved at each sampling period, which is very time consuming. To reduce the computational time, the flatness property of the considered model is used: the predicted image is given by the linear model (15) instead of the nonlinear model (7). Once the inputs V = [v1 v2]T in the flat space have been computed by the minimization of (20), the dynamic feedback (22) is used to obtain the inputs of the original nonlinear system (7), which are applied to the real process:

[v̇ ; w] = [ cos θ/λu ,  sin θ/λv  ;  −sin θ/(v λu) ,  cos θ/(v λv) ] · [v1 ; v2]    (22)
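As a quick numerical consistency check, the flat-input expressions (12)–(13), the flat accelerations (14) and the dynamic feedback (22) can be sketched as below. The gains λu, λv and the state values are illustrative assumptions only, not values taken from the paper.

```python
import math

# Assumed camera gains lambda_u = alpha_u/z, lambda_v = alpha_v/z (illustrative).
LAM_U, LAM_V = 600.0, 550.0

def flat_inputs(dup, dvp, ddup, ddvp):
    """Robot inputs recovered from the flat output derivatives, eqs. (12)-(13)."""
    v = math.sqrt((dup / LAM_U) ** 2 + (dvp / LAM_V) ** 2)
    w = (ddvp * LAM_U * dup * LAM_V - ddup * LAM_V * dvp * LAM_U) / \
        (dup ** 2 * LAM_V ** 2 + dvp ** 2 * LAM_U ** 2)
    return v, w

def dynamic_feedback(v1, v2, theta, v):
    """Eq. (22): map the flat-space inputs (v1, v2) back to (dv/dt, w)."""
    dv = math.cos(theta) * v1 / LAM_U + math.sin(theta) * v2 / LAM_V
    w = -math.sin(theta) * v1 / (v * LAM_U) + math.cos(theta) * v2 / (v * LAM_V)
    return dv, w

# Round trip on an assumed unicycle state: speed v, heading theta,
# acceleration dv/dt and angular speed w.
v_true, w_true, dv_true, theta = 1.0, 0.5, 0.2, 0.7
# Flat accelerations from eq. (14):
v1 = LAM_U * (dv_true * math.cos(theta) - v_true * w_true * math.sin(theta))
v2 = LAM_V * (dv_true * math.sin(theta) + v_true * w_true * math.cos(theta))
dv_rec, w_rec = dynamic_feedback(v1, v2, theta, v_true)
# First derivatives of the flat output feed eq. (12):
v_rec, _ = flat_inputs(LAM_U * v_true * math.cos(theta),
                       LAM_V * v_true * math.sin(theta), v1, v2)
print(round(dv_rec, 6), round(w_rec, 6), round(v_rec, 6))  # -> 0.2 0.5 1.0
```

The round trip recovers (v̇, w, v) exactly, confirming that (22) inverts (14) and that (12) depends only on the first derivatives of the flat output.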
[Figure: image-plane trajectory (u, v in pixels) and control input time histories (time in k·Te seconds)]
Simulation 3: To illustrate the capability of handling visibility constraints (figures 9, 10, 11), we define a forbidden area in the image, converted into visibility constraints and describing an obstacle in the robot workspace. The mobile robot tracks the reference as closely as possible while avoiding the obstacle. The control constraints are still satisfied.
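A forbidden image area of this kind can be checked against predictions of the linear Brunovsky model (15). The sketch below uses an exact per-axis double-integrator discretization; the sampling period, horizon, initial state and box coordinates are illustrative assumptions, not values from the paper.

```python
# Predict image features over the horizon with the linear model (15),
# then flag flat-input sequences whose predicted features enter a
# forbidden image box (the visibility constraint of Simulation 3).
TE = 0.04   # sampling period Te (s), assumed
NP = 10     # prediction horizon Np, assumed

def predict(z, v_seq):
    """z = [up, dup, vp, dvp]; v_seq = [(v1, v2), ...] flat inputs.
    Exact double-integrator step: p += dp*Te + v*Te^2/2, dp += v*Te."""
    up, dup, vp, dvp = z
    feats = []
    for v1, v2 in v_seq:
        up, dup = up + dup * TE + 0.5 * v1 * TE * TE, dup + v1 * TE
        vp, dvp = vp + dvp * TE + 0.5 * v2 * TE * TE, dvp + v2 * TE
        feats.append((up, vp))
    return feats

def violates_visibility(feats, box=(240.0, 260.0, 120.0, 140.0)):
    """True if any predicted feature enters the forbidden image box
    (u_min, u_max, v_min, v_max)."""
    u_min, u_max, v_min, v_max = box
    return any(u_min <= u <= u_max and v_min <= v <= v_max for u, v in feats)

z0 = [230.0, 50.0, 130.0, 0.0]     # current flat state, drifting toward the box
straight = [(0.0, 0.0)] * NP       # no correction: enters the forbidden area
steer    = [(0.0, 800.0)] * NP     # accelerates vp upward, clears the box
print(violates_visibility(predict(z0, straight)))  # -> True
print(violates_visibility(predict(z0, steer)))     # -> False
```

In the MPC of Section 4.1, a candidate flat-input sequence failing this check would simply be rejected (or penalized) by the constrained minimization of (20).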
Fig. 6. Constrained control inputs

[Figure: angular speed (rad·s−1) versus time (k·Te seconds); reference trajectory (black) and mobile robot trajectory (red) in the image plane, u–v in pixels]
REFERENCES

G. Allibert, E. Courtial, and Y. Touré. Visual predictive control. IFAC Workshop on Nonlinear Predictive Control for Fast Systems, Grenoble, France, October 2006.
F. Chaumette. Potential problems of stability and convergence in image-based and position-based