
Master of Science in Aerospace Mechanics and Avionics

Research Project Report

Robust Navigation of UAV’s by


Fusion of Vision, INS and GPS
Tutor: Mohamed Sahmoudi

Navya Krishna Ramuni | Major S2 & S3: Aircraft controls & guidance | March 17, 2016
ABSTRACT
To make a UAV autonomous and practical, a reliable estimate of its position and orientation, as well as information about its surrounding environment, is required. To enable this kind of operation at any time and in any environment, the UAV needs a navigation capability that is robust and not solely dependent on GPS, which is unavailable in harsh environments (urban and indoor).

The solution is to develop a navigation system that can fuse the information from different sensors and provide a reliable result. In this project, we propose two different architectures for the fusion of Vision and IMU: 1) tightly coupled Vision-IMU based on the trifocal tensor [13], and 2) loosely coupled Vision-IMU. In the tightly coupled Vision-IMU, the Kalman gain is computed by an Unscented Kalman filter (UKF), whereas in the loosely coupled case an Extended Kalman filter (EKF) is employed. To make the navigation global, we further fuse with GPS using a linear Kalman filter.

Compared with the traditional way of fusing Vision-Inertial or Inertial-GPS, the proposed method balances both approaches: it involves a two-step Kalman filter, is capable of reliable pose estimation during GPS and Vision outages, and thereby reduces the drift of the IMU.

In this report, we first review the available ways of fusing Vision-Inertial. We then introduce the algorithms for tight Vision-Inertial fusion using the trifocal tensor and for loose Vision-Inertial fusion. We also present the algorithm for fusing with GPS. All the code is implemented in the Matlab environment and the validation is performed on the KITTI dataset [21].



ACKNOWLEDGEMENT
I want to thank my supervisor, Mohamed Sahmoudi, for giving me the chance to work on this project, and I really appreciate his help throughout. In the past months I gained new knowledge and new skills and overcame many problems under his guidance. I also thank my past colleagues (Sachin FERNANDES and Bharath Gopala Krishnan) for discussing the issues and for their help with the loose fusion algorithm with GPS.

DECLARATION BY STUDENT

This assignment is entirely my own work. Quotations from the literature are properly indicated with appropriate references in the text. All literature used in this piece of work is listed in the bibliography at the end. I confirm that no sources have been used other than those stated.

I understand that plagiarism (copying without mentioning the reference) is a serious examination offence that may result in disciplinary action being taken.



Table of Contents
ABSTRACT
ACKNOWLEDGEMENT
CHAPTER 1 – INTRODUCTION
1.1 Introduction
1.2 Methodology
1.3 Review of Existing Vision-Inertial Fusion from Literature
CHAPTER 2 – Review and Continuation of Previous Work
2.1 Camera Coordinate Systems
2.2 Trifocal Tensor
2.3 Feature Detection, Extraction & Matching
2.4 Bucketing
2.5 RANSAC
2.6 Visual Odometry (VO)
CHAPTER 3 – Coordinates Map and IMU Mechanization
3.1 Geometric Location of IMU-Camera-Global Coordinates
3.2 IMU Mechanization
CHAPTER 4 – Tight Fusion of Vision-IMU by Trifocal Tensor
4.1 Formulation of State Vector
4.2 Filter Propagation
4.3 Measurement Update
4.4 Kalman Gain and State Update
4.5 Algorithm
CHAPTER 5 – LOOSE VISION-IMU
5.1 Formulation of State Vector
5.2 Filter Propagation
5.3 Measurement Update
5.4 Algorithm
CHAPTER 6 – Fusion with the GPS
6.1 Algorithm
CHAPTER 7 – RESULTS
7.1 Robust Analysis
CHAPTER 8 – CONCLUSION
Bibliography
Appendix 1
Appendix 2
Appendix 3
Appendix 4
Legacy



CHAPTER 1 –INTRODUCTION
1.1 Introduction
As cameras have become cheaper over the past decades, vision-aided inertial navigation has become an interesting topic, especially for platforms carrying low-cost IMUs. There is extensive research on ego-motion estimation by camera. Visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) are the most proven methods for ego-motion estimation; a detailed comparison of both methods is given in [1]. Each method has its unique advantages, and the choice between VO and V-SLAM depends on the trade-off between performance, consistency and simplicity of implementation.

Inertial navigation systems (INS) have been extensively used for the estimation of the 6 degrees of freedom (DOF) in GPS-denied environments (indoors, urban canyons, multipath). Most classical INS rely on an Inertial Measurement Unit (IMU). Low-cost IMUs are usually employed in many land robots and small UAVs. The IMU measures tri-axis accelerations and tri-axis angular velocity of the sensing platform with respect to the inertial frame. Simple integration of these measurements gives the pose of the UAV, but they are often corrupted by noise and bias, resulting in an unreliable pose estimate. These errors accumulate over time, causing the INS to drift.

Cameras, on the other hand, are inexpensive, lightweight and small, and provide rich information about the surroundings in the form of high-definition images. Processing the optical flow from these images can aid the INS for navigation.

1.2 Methodology
Fusing several sensors is one solution to provide real-time data to the navigation system even if one sensor of the system fails.

The main purpose of this project is reliable pose estimation of low-cost drones in all kinds of environments. Therefore, we propose a multisensor fusion architecture in which the pose is first estimated by IMU-Vision (either tight or loose) and then loosely coupled with the GPS using a linear Kalman filter. This kind of architecture is well suited to the navigation of low-cost drones and robots. As it involves both tight and loose fusion of Vision-IMU, there is the possibility of switching the architecture from tight to loose or vice versa depending upon the surroundings or mission type (indoor, urban, underground, GPS outages).

Figure 1 shows the proposed navigation architecture for the fusion of Vision, INS and GPS.



Figure 1: Description of fusion architecture

The goal of the proposed method is a robust estimate of the pose in global coordinates in environments with GPS outages and multipath, eventually reducing the drift of the IMU with the aid of Vision.

The main difference between tightly coupled and loosely coupled fusion is that in tight fusion we employ raw measurements such as acceleration, angular rate and feature points, while in loose fusion we combine the pose from visual odometry (position and attitude) with the INS.

The following are some interesting points about both fusion methods.

Loosely coupled fusion:
- Simple and easy to implement.
- Suitable in open clear skies and moderate urban environments.

Tightly coupled fusion:
- Better navigation performance
- Better INS calibration and alignment
- Better instrument calibration
- Better attitude estimates
- Reliable tracking under high dynamics
- Good jamming resistance
- Longer operation after jamming
- Suitable in deep urban canyons, indoors and under water.
- As raw measurements are used, the complexity of the algorithm increases.

Table 1: Difference between loose and tight coupling



1.3 Review of Existing Vision-Inertial Fusion from Literature
Processing navigation information from camera images is called vision-based navigation (VBN). VBN, or visual odometry (VO), can be aided by an IMU, and the most convenient way of fusing Vision with the IMU is the Kalman filter.

In the work of Jean-Philippe Tardif et al. [8], the VO pose is optimized by sparse bundle adjustment (SBA) and then loosely coupled with the IMU, while in Mingyang Li et al. [9] the VO pose is optimized by an EKF and then fused with the IMU. Both methods have shown good results.

Tightly coupled vision-aided IMU, in contrast, can be found in [10]-[12], where the 3D points of the features are used in the measurement rather than the pose itself. In the work of Mourikis et al. [12] the measurement residual is obtained from the measured 3D points and the predicted 3D points (from body pose to camera pose), while in [10] and [11] the measurement residual is obtained from measured and predicted 3D Distinguishable Features (DF). Here the state vector is composed of the IMU state and the DF points.

Another, newer approach to tightly coupled vision-aided IMU uses the trifocal tensor; this can be found in [13]-[15]. In this approach, a Multi-State Constraint Kalman Filter (MSCKF) is employed, where the filter state is composed of the current and previous INS poses, while the measurement residual is obtained from the measured feature and the feature predicted by the trifocal tensor. The results from this approach appear more reliable because the epipolar constraint gives a strong relation between the feature points and also eliminates the need to compute the 3D point.

However, this kind of fusion architecture may increase the computation time, as three consecutive images are required to build the trifocal relation. A similar implementation is seen in [14], where the work is extended to the stereo case and both point and line features are taken into account. In both cases an Unscented Kalman Filter is used to handle the nonlinear dynamics.

In our project, we employ a tight fusion architecture as proposed in [13]-[15]. Since vision-aided inertial navigation using the trifocal tensor enforces a strong relation between feature points, the Kalman gain computed through this method yields a better estimate than other tight methods.



CHAPTER 2 – Review and Continuation of Previous Work
2.1 Camera coordinate systems

Figure 2: Camera coordinate system


In the above figure, C is the camera centre of the pinhole camera model and $X_c, Y_c, Z_c$ are the axes of the camera coordinate system with origin at C. Let $(u_0, v_0)$ be the projection of the camera centre onto the image plane. Let $P_c(x_c, y_c, z_c)$ be a scene point in the camera coordinate system; its corresponding projection in the image coordinate system is given by $p(u, v)$.

By the definition of perspective camera projection,


$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K X_c = \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \qquad \text{...eq 1}$$
Here:
$K$ = 3x3 intrinsic matrix defining the camera parameters
$\alpha_x = f s_x$ and $\alpha_y = f s_y$
$f$ = focal length of the camera
$s_x, s_y$ = number of pixels per unit length along the x and y axes of the image
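As a quick illustration, the following Matlab sketch applies eq 1 to one scene point; the intrinsic values (f, sx, sy, u0, v0) are made-up examples, not calibration values from this project.

f  = 0.004;                 % focal length [m] (assumed)
sx = 250000; sy = 250000;   % pixels per unit length along x and y (assumed)
u0 = 320; v0 = 240;         % principal point [pixels] (assumed)

K  = [f*sx   0    u0;
      0    f*sy   v0;
      0      0     1];

Pc  = [1.2; -0.5; 10];      % scene point in camera coordinates
p_h = K*Pc;                 % homogeneous image point
uv  = p_h(1:2)/p_h(3);      % perspective division gives the pixel coordinates (u, v)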



2.2 Trifocal Tensor
The trifocal tensor is governed by epipolar geometry. The figure below demonstrates the epipolar constraint. Let us consider two camera views $C_1$ and $C_2$ of a 3D point M.

Figure 3: Epi-polar geometry between two views


where:
e' and e'' are the epipoles,
$m_1$ and $m_2$ are the corresponding projections of the 3D point M in cameras $C_1$ and $C_2$,
$l_{m1}$ and $l_{m2}$ are the epipolar lines.

The epipolar geometry constraint is given by

$$m_2^T F m_1 = 0 \qquad \text{...eq 2}$$

where $F = K^{-T} R^T \lfloor t \times\rfloor K^{-1}$ is the fundamental matrix, with $t$ and $R$ the position and rotation of frame $C_1$ with respect to frame $C_2$, and $K$ the intrinsic matrix defined in eq 1.

The trifocal tensor is used to transfer a point correspondence in two views to the corresponding point in a third view.

Figure 4: The point-line-point correspondences between three views

Robust Navigation of UAV’s by Fusion of Vision, INS and GPS : RAMUNI 9


The trifocal tensor encapsulates the geometric relationships between the three different viewpoints and is independent of the scene structure.

Let $P_1 = [I|0]$, $P_2 = [A|a_4]$ and $P_3 = [B|b_4]$ be the projection matrices of the camera from three different viewpoints $C_1$, $C_2$ and $C_3$, where $A$ and $B$ are 3x3 matrices.

Then the trifocal tensor is derived by

$$T_i = b_4 a_i^T - a_4 b_i^T \qquad \text{...eq 3}$$

This is a very useful relation, which allows us to estimate the feature point $m_3$ knowing $m_1$, $m_2$ and $T_i$:

$$\tilde m_3 = \Big(\sum_i \tilde m_1^i\, T_i^T\Big)\, l_2 \qquad \text{...eq 4}$$

where $\sim$ denotes normalized coordinates and $l_2$ is the line through $\tilde m_2$ perpendicular to the epipolar line.
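The following Matlab sketch transcribes eq 3 and eq 4 under assumed relative poses; A, a4, B and b4 are invented for illustration, and the helper skew follows the skew-symmetric operator of Appendix 2. It is not the project code.

% Trifocal tensor slices and point transfer (eq 3 and eq 4), for normalized cameras
% P1 = [I|0], P2 = [A|a4], P3 = [B|b4] with made-up values.
A  = eye(3);                         a4 = [0.5; 0; 0];
B  = expm(skew([0; 0.05; 0]));       b4 = [1.0; 0; 0.02];

T = zeros(3,3,3);
for i = 1:3
    T(:,:,i) = b4*A(:,i)' - a4*B(:,i)';     % eq 3
end

M  = [0.2; -0.1; 4];                 % a 3D point
m1 = M/M(3);                         % normalized homogeneous projection in view 1
m2 = (A*M + a4);  m2 = m2/m2(3);     % projection in view 2

% Line l2 through m2, perpendicular to the epipolar line of m1 in view 2
F21 = skew(a4)*A;                    % fundamental matrix for P1 = [I|0], P2 = [A|a4]
le  = F21*m1;
l2  = [le(2); -le(1); -m2(1)*le(2) + m2(2)*le(1)];

m3 = zeros(3,1);                     % transferred point in view 3 (eq 4)
for i = 1:3
    m3 = m3 + m1(i)*T(:,:,i)'*l2;
end
m3 = m3/m3(3);

function S = skew(t)                 % skew-symmetric operator of Appendix 2
S = [0 -t(3) t(2); t(3) 0 -t(1); -t(2) t(1) 0];
end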

2.3 Feature Detection, Extraction & Matching


A feature in an image is a point of interest, such as a corner or an edge. All feature detection, extraction and matching are performed on grey-scale images only.

Features are also referred to as landmarks in SLAM problems. The mathematical operations are performed only on these features rather than on the whole image.

Many open-source algorithms for feature detection are available and any of them can be chosen; I have tested SIFT, FAST, BRISK, Harris and SURF.

Functions in Matlab for feature detection:


 detectBRISKFeatures
 detectFASTFeatures
 detectHarrisFeatures
 detectSURFFeatures
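A minimal usage sketch with these Computer Vision Toolbox functions is given below; the image file names are placeholders and SURF is chosen only as an example detector.

I1 = imread('frame_k.png');       % grey-scale images assumed (apply rgb2gray otherwise)
I2 = imread('frame_k1.png');

pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);

[f1, vpts1] = extractFeatures(I1, pts1);
[f2, vpts2] = extractFeatures(I2, pts2);

idxPairs = matchFeatures(f1, f2);            % putative correspondences
matched1 = vpts1(idxPairs(:,1));
matched2 = vpts2(idxPairs(:,2));

showMatchedFeatures(I1, I2, matched1, matched2, 'montage');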

In our previous work, we concluded that motion estimation using SIFT takes a lot of computational time, while FAST and Harris yield a limited number of correspondences.

Although SIFT has a higher computation time, it is best suited for tight fusion with the IMU; it is therefore a trade-off between performance and efficiency.

In our previous work, we also implemented RANSAC from the Matlab toolbox for outlier removal. Before applying RANSAC in the current work, we add the concept of bucketing to distribute the features uniformly over the image.



2.4 Bucketing
After we obtain the matched features from the feature matching step, a subset is chosen by means of bucketing [27]. The image is divided into several non-overlapping rectangles (see Figure 5), and in every bucket the features are reduced according to their respective weights.

This benefits us in several ways (a minimal bucketing sketch follows the table below).

- First, the smaller number of features reduces the computational complexity of the algorithm, which is an important prerequisite for real-time applications.
- Second, this technique guarantees that the used image features are well distributed along the z-axis, i.e. the roll axis of the vehicle, which turns out to be important for a good estimation of the linear and angular velocities.
- Third, the used image features are uniformly distributed over the whole image.

This is done by subsampling the features according to their respective weights in each bucket and limiting the total number of features.

Figure 5: Uniform distribution of features by bucketing concept

Before bucketing (553 features):
20 10  0 30 13  1  4  0
 5 19 33 73 83 62 34  3
 1  1 14 34 23 45 34  0
 0  1  0  1  8  1  0  0

After bucketing (50 features):
 4  3  0  3  2  1  3  0
 3  1  1  2  2  3  4  2
 1  1  2  0  5  2  3  0
 0  1  0  0  1  0  0  0

Table 2: Number of features per bucket before and after bucketing


In the left (before) grid there are 553 feature points. In the right (after) grid the features are uniformly distributed by random sampling with equal weights. In our work, the total number of features is limited to 50.
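A minimal sketch of the bucketing idea is shown below; the bucket size, the per-bucket cap and the helper name bucketFeatures are our own choices, and equal weights are assumed for the random subsampling.

% Example call: kept = bucketFeatures(double(matched1.Location), size(I1), 64, 48, 2);
function kept = bucketFeatures(pts, imgSize, bucketW, bucketH, maxPerBucket)
% pts: N x 2 [u v] pixel locations; returns indices of the retained features
col = floor(pts(:,1)/bucketW);               % bucket column of every feature
row = floor(pts(:,2)/bucketH);               % bucket row of every feature
nCols    = ceil(imgSize(2)/bucketW);
bucketId = row*nCols + col;                  % linear bucket index

kept = [];
for b = unique(bucketId)'
    idx = find(bucketId == b);
    if numel(idx) > maxPerBucket             % randomly subsample crowded buckets
        idx = idx(randperm(numel(idx), maxPerBucket));
    end
    kept = [kept; idx];                      %#ok<AGROW>
end
end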



2.5 RANSAC
Even after uniform distribution of the features by bucketing, some matches may still be wrong despite thresholding. These wrong matches are called 'outliers' and the correct ones 'inliers'. Outliers can greatly diminish the performance of the vision part, so we continue with our previous work of using RANSAC for outlier removal.

Figure 6: RANSAC outlier detection and removal

Since we are dealing with the trifocal tensor, the matched features from three different camera views are considered for RANSAC outlier removal. In the above figure, the red points are the outliers.
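A minimal two-view sketch of this step, using the toolbox function estimateFundamentalMatrix, is shown below; for the trifocal case the inlier sets of the two consecutive image pairs are simply intersected. matched1 and matched2 are the matched points from the earlier feature-matching sketch.

p1 = matched1.Location;                      % M x 2 pixel coordinates
p2 = matched2.Location;
[~, inliers] = estimateFundamentalMatrix(p1, p2, ...
    'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 1e-2);
in1 = p1(inliers, :);                        % inlier correspondences
in2 = p2(inliers, :);                        % the rejected rows are the outliers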

2.6 Visual Odometry (VO)

The monocular VO pipeline operates on the grey-scale image sequence: feature detection (SURF/Harris/BRISK/SIFT), feature extraction and matching between consecutive camera frames, outlier removal using RANSAC, and finally motion estimation using the essential matrix.

Figure 7: 2D-2D motion estimation using monocular VO [1]



In our previous work, we stopped at motion estimation with the essential matrix using the 8-point algorithm,

$$E = R_c \times P_c$$

where $P_c$ is the 3x1 translation (position) vector, $R_c$ is the 3x3 rotation matrix and $E$ is the essential matrix. All the computed transformations $T_c = [R_c\,|\,P_c]$ are concatenated to the initial camera frame (i.e. the first pose).
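A minimal sketch of this concatenation is shown below, assuming the per-frame outputs are stored in cell arrays Rc_all and Pc_all (our own names) and that T_c maps points from frame k to frame k+1; with the opposite convention the inverse is dropped.

C = eye(4);                                  % pose of the first camera frame
trajectory = zeros(3, numel(Rc_all));
for k = 1:numel(Rc_all)
    Tk = [Rc_all{k}, Pc_all{k}; 0 0 0 1];
    C  = C / Tk;                             % C = C * inv(Tk): accumulate the motion
    trajectory(:,k) = C(1:3,4);              % camera position in the first camera frame
end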

However, this VO is not optimal without any optimisation of the feature points and of the pose. From the literature, we found that bundle adjustment (BA) or filtering techniques such as the EKF are used for pose optimisation.

These optimisation techniques are outside the scope of this project, so we have used the open-source VO of Andreas Geiger et al. [22] for the loose coupling with the IMU (see Chapter 5). In their work, the optimisation of the pose is done by an Extended Kalman filter.

CHAPTER 3 – Coordinates Map and IMU Mechanization


3.1 Geometric Location of IMU-Camera- Global Coordinates

Figure 8: IMU- Camera – Global Coordinates Map


To make the navigation global, our proposed method estimates the pose of the IMU in global coordinates. Figure 8 shows the geometric relation between the camera coordinate frame {C}, the IMU coordinate frame {I} and the global frame {G}, where

$T_c^I = (R_c^I\,|\,p_c^I)$ is the 4x4 transformation matrix from the camera frame {C} to the IMU frame {I}, and
$T_I^G$ is the transformation matrix from the IMU frame {I} to the global frame {G}.



In our method, the vision pose is first expressed in the IMU frame for the Vision-IMU fusion (both tight and loose), and this pose is then expressed in the global frame for the fusion with the GPS. The final pose is therefore expressed in the global frame {G}.

3.2 IMU Mechanization


The IMU measures 3-axis acceleration and angular rate along its body frame. We adopt a similar IMU formulation to the one in [13] and [14]. The following equations govern the IMU acceleration and angular velocity:

$$a_m = R^T(\bar q^I)(a^I + g^I) + b_a + n_a, \qquad \omega_m = \omega_b + b_g + n_g \qquad \text{...eq 5}$$

where
$R^T(\bar q^I)$ = direction cosine matrix of the quaternion $\bar q^I$ (see [16] and Appendix 2)
$a^I$ = acceleration in the body frame
$g^I$ = gravity in the inertial frame
$\omega_b$ = angular rate of the IMU in the body frame
$\omega_m$ = measured angular rate of the IMU body frame with respect to the inertial frame
$b_a$, $b_g$ = biases of the accelerometer and gyroscope
$n_a$, $n_g$ = Gaussian white noises

Let us build the IMU state as

$$x_{IMU} = [p^I\ \ \bar q^I\ \ v^I\ \ b_a\ \ b_g]^T \qquad \text{...eq 6}$$

where $[p^I\ \bar q^I\ v^I\ b_a\ b_g]^T$ is the (16x1) IMU state, consisting of position, quaternion, velocity and the biases of the accelerometer and gyros in the IMU frame. We assume the noises are zero-mean and the biases are modelled as Gaussian random-walk processes. The state kinematics of the IMU evolve as shown below:

$$\dot p^I = v^I, \qquad \dot{\bar q}^I = \tfrac{1}{2}\,\bar q^I \otimes [0\ \ \omega_b^T]^T, \qquad \dot v^I = a^I, \qquad \dot b_a = n_{ba}, \qquad \dot b_g = n_{bg} \qquad \text{...eq 7}$$


The state $x_{IMU}$ is called the true state of the IMU.

To reduce the complexity, we decompose the true state into a nominal state and an error state, as proposed in [13]:

$$x_{IMU} = \hat x_{IMU} + \tilde x_{IMU} \qquad \text{...eq 8}$$
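A minimal Matlab sketch of these kinematics is given below; the helper names (imuKinematics, quatToDcm, skewSym) are ours, the quaternion is stored scalar-first as in Appendix 2, and the acceleration expression follows from eq 5 with the noise terms dropped.

% f(.) used by the Runge-Kutta prediction of Chapter 4: x = [p; q; v; ba; bg] (16 x 1)
function xdot = imuKinematics(x, a_m, w_m, g_I)
q  = x(4:7);   v  = x(8:10);
ba = x(11:13); bg = x(14:16);

w = w_m - bg;                                % bias-corrected angular rate
R = quatToDcm(q);                            % R(q) as defined in Appendix 2
a = R*(a_m - ba) - g_I;                      % acceleration resolved in the navigation frame (from eq 5)

Omega = [0, -w'; w, -skewSym(w)];            % quaternion kinematic matrix
qdot  = 0.5*Omega*q;                         % equivalent to q_dot = 1/2 q (x) [0; w_b]

xdot = [v; qdot; a; zeros(6,1)];             % bias derivatives are the zero-mean random-walk noises
end

function R = quatToDcm(q)
q0 = q(1); q1 = q(2); q2 = q(3); q3 = q(4);
R = [q0^2+q1^2-q2^2-q3^2, 2*(q1*q2-q0*q3),     2*(q0*q2+q3*q1);
     2*(q1*q2+q0*q3),     q0^2-q1^2+q2^2-q3^2, 2*(q2*q3-q0*q1);
     2*(q1*q3-q0*q2),     2*(q0*q1+q2*q3),     q0^2-q1^2-q2^2+q3^2];
end

function S = skewSym(t)
S = [0 -t(3) t(2); t(3) 0 -t(1); -t(2) t(1) 0];
end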

CHAPTER 4 – Tight fusion of Vision-IMU by trifocal tensor


4.1 Formulation of State Vector
The filter state vector comprises the IMU state and a history of the last two camera poses. The filter state is also divided into a nominal state and an error state. The filter nominal state is given by



$$\hat x_k = [\hat x_{IMU_k}^T\ \ \hat p_{I1}^I\ \ \hat{\bar q}_{I1}^I\ \ \hat p_{I2}^I\ \ \hat{\bar q}_{I2}^I]^T \qquad \text{...eq 9}$$

where the pair $(p_{I1}^I,\ \bar q_{I1}^I)$ denotes the nominal-state pose of the IMU corresponding to the last-but-one camera pose, while the pair $(p_{I2}^I,\ \bar q_{I2}^I)$ corresponds to the last camera pose. The filter error state is given by

$$\tilde x_k = [\tilde x_{IMU_k}^T\ \ \tilde p_{I1}^I\ \ \delta\theta_{I1}^I\ \ \tilde p_{I2}^I\ \ \delta\theta_{I2}^I]^T \qquad \text{...eq 10}$$

Remember that the true state is $x_k = \hat x_k + \tilde x_k$.

Since the past poses have no dynamics in the filter prediction step, we assume their process model is zero:

$$\dot{\hat p}_{I1}^I = 0,\ \ \dot{\hat{\bar q}}_{I1}^I = 0,\ \ \dot{\tilde p}_{I1}^I = 0,\ \ \delta\dot\theta_{I1}^I = 0, \qquad \dot{\hat p}_{I2}^I = 0,\ \ \dot{\hat{\bar q}}_{I2}^I = 0,\ \ \dot{\tilde p}_{I2}^I = 0,\ \ \delta\dot\theta_{I2}^I = 0 \qquad \text{...eq 11}$$

4.2 Filter Propagation


In the filter prediction step, the nominal state of the IMU is predicted using 4th order Runge-Kutta.

Algorithm: nominal state prediction by 4th order Runge-Kutta

Input: $\hat x_{k-1}$ = previous nominal state
       $a^I(k)$ = acceleration data in the body frame
       $\omega_b(k)$ = angular velocity in the body frame
Output: $\hat x_k$ = predicted nominal state

$$k_1 = \Delta t \cdot f(\hat x_{k-1},\ a^I(k),\ \omega_b(k))$$
$$k_2 = \Delta t \cdot f(\hat x_{k-1} + \tfrac{1}{2} k_1,\ a^I(k),\ \omega_b(k))$$
$$k_3 = \Delta t \cdot f(\hat x_{k-1} + \tfrac{1}{2} k_2,\ a^I(k),\ \omega_b(k))$$
$$k_4 = \Delta t \cdot f(\hat x_{k-1} + k_3,\ a^I(k),\ \omega_b(k))$$
$$\hat x_k = \hat x_{k-1} + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
The IMU state propagation is considered as a continuous model, and equations similar to those defined in [14] and [18] are implemented:

$$\dot{\tilde x}_k = F_c\, \tilde x_k + G_c\, n_{IMU} \qquad \text{...eq 12}$$

To compute the state covariance we need to discretize $F_c$ into $F_d$ with a Taylor series, as proposed in [13], which gives the following sparse structure:



$$F_d = \begin{bmatrix}
I_3 & A & I_3\,\Delta t & -R^T(\bar q^I)\,\Delta t^2/2 & D & O_{3\times12} \\
O_3 & B & O_3 & O_3 & E & O_{3\times12} \\
O_3 & C & I_3 & -R^T(\bar q^I)\,\Delta t & -A & O_{3\times12} \\
O_3 & O_3 & O_3 & I_3 & O_3 & O_{3\times12} \\
O_3 & O_3 & O_3 & O_3 & I_3 & O_{3\times12} \\
O_{12\times3} & O_{12\times3} & O_{12\times3} & O_{12\times3} & O_{12\times3} & I_{12\times12}
\end{bmatrix} \qquad \text{...eq 13}$$

where

$$A = -R^T(\bar q^I)\lfloor\hat a\times\rfloor\Big(\tfrac{\Delta t^2}{2} - \tfrac{\Delta t^3}{3!}\lfloor\hat\omega\times\rfloor + \tfrac{\Delta t^4}{4!}\lfloor\hat\omega\times\rfloor^2\Big)$$
$$B = I_3 - \Delta t\,\lfloor\hat\omega\times\rfloor + \tfrac{\Delta t^2}{2!}\lfloor\hat\omega\times\rfloor^2$$
$$C = -R^T(\bar q^I)\lfloor\hat a\times\rfloor\Big(\Delta t - \tfrac{\Delta t^2}{2!}\lfloor\hat\omega\times\rfloor + \tfrac{\Delta t^3}{3!}\lfloor\hat\omega\times\rfloor^2\Big)$$
$$D = -R^T(\bar q^I)\lfloor\hat a\times\rfloor\Big(-\tfrac{\Delta t^3}{3!} + \tfrac{\Delta t^4}{4!}\lfloor\hat\omega\times\rfloor - \tfrac{\Delta t^5}{5!}\lfloor\hat\omega\times\rfloor^2\Big)$$
$$E = -\Delta t + \tfrac{\Delta t^2}{2!}\lfloor\hat\omega\times\rfloor - \tfrac{\Delta t^3}{3!}\lfloor\hat\omega\times\rfloor^2$$
The noise covariance is

$$Q_c = E\big[n_{IMU} n_{IMU}^T\big] = \mathrm{diag}\big[\sigma_g^2 I_3,\ \sigma_a^2 I_3,\ \sigma_{ba}^2 I_3,\ \sigma_{bg}^2 I_3\big]$$

where $\sigma_g^2, \sigma_a^2, \sigma_{ba}^2, \sigma_{bg}^2$ are the noise variances. The corresponding discrete noise covariance is given by

$$Q_d = \int_{\Delta t} F_d(\tau)\, G_c\, Q_c\, G_c^T\, F_d(\tau)^T\, d\tau \qquad \text{...eq 14}$$

Finally, the predicted error-state covariance is obtained from $F_d$ and $Q_d$:

$$P_{k|k-1} = F_d\, P_{k-1|k-1}\, F_d^T + Q_d \qquad \text{...eq 15}$$

4.3 Measurement Update


The measurement model employed for updating the filter state estimate is given by the epipolar geometry and the trifocal tensor discussed in Section 2.2. The trifocal tensor encapsulates the geometric relationships between the three viewpoints and is independent of the scene structure.

Figure 9: Point-line-point correspondences among three views

The trifocal tensor is then computed from $T_i = b_4 a_i^T - a_4 b_i^T$, which allows us to estimate the feature in the third view (from eq 3 and eq 4):

$$\tilde m_3 = \Big(\sum_i \tilde m_1^i\, T_i^T\Big)\, l_2$$

This estimated $\tilde m_3$ is compared with the measured $\tilde m_3$ to obtain the measurement residual. Thus the measurement model comprises the epipolar geometry and the trifocal tensor:

$$z_i = h(x_k, \{m_1, m_2, m_3\}) = \begin{bmatrix}
\tilde m_2^T\, R_{12}^T\, \lfloor t_{12}\times\rfloor\, \tilde m_1 \\
\tilde m_3^T\, R_{23}^T\, \lfloor t_{23}\times\rfloor\, \tilde m_2 \\
K\big(\sum_i \tilde m_1^i\, T_i^T\big)\, l_2
\end{bmatrix} \qquad \text{...eq 16}$$

with $\tilde m_1 = K^{-1} m_1$, $\tilde m_2 = K^{-1} m_2$, $\tilde m_3 = K^{-1} m_3$ and

$$R_1^I = R(\hat{\bar q}_{I1}^I)\, R_c^I, \qquad R_2^I = R(\hat{\bar q}_{I2}^I)\, R_c^I, \qquad R^I = R(\hat{\bar q}^I)\, R_c^I$$
$$t_1^I = \hat p_{I1}^I + R(\hat{\bar q}_{I1}^I)\, p_c^I, \qquad t_2^I = \hat p_{I2}^I + R(\hat{\bar q}_{I2}^I)\, p_c^I, \qquad t^I = \hat p^I + R(\hat{\bar q}^I)\, p_c^I$$
$$R_{12} = (R_1^I)^T R_2^I, \qquad R_{23} = (R_2^I)^T R^I$$
$$t_{12} = (R_1^I)^T (t_2^I - t_1^I), \qquad t_{23} = (R_2^I)^T (t^I - t_2^I)$$

where $x_k$ is the true state obtained from the nominal and error states by eq 8. Since this measurement model is nonlinear, we use the Unscented Kalman Filter (UKF); the sigma-point approach (see Appendix 3) is used to update the filter state estimate.

4.4 Kalman Gain and State Update


In the UKF, the covariances $P^{Z_j \hat z_j}$ and $P^{X z_j}$ are computed with the sigma-point approach (see Appendix 3). The Kalman gain is

$$K_j = P^{X z_j}\,\big(P^{Z_j \hat z_j} + R_j\big)^{-1}$$

where $R_j$ is the measurement noise covariance. The error state and the state covariance are updated as

$$\tilde x_{k|k} = \tilde x_{k-1|k-1} + K_j\,(z - \hat z), \qquad P_{k|k} = P_{k|k-1} - K_j\, P^{Z_j \hat z_j}\, K_j^T$$

and we use $\tilde x_{k|k}$ to correct the nominal state:

$$\hat x_{k|k} = \hat x_{k|k-1} + \tilde x_{k|k}$$
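A minimal Matlab sketch of this UKF update is given below. The measurement function handle h, its dimension mdim, the predicted covariance P_pred, the nominal state x_nom, the matched features m1, m2, m3, the measurement z_meas and the noise Rmeas are placeholders for quantities produced elsewhere in the filter; the sigma-point generation follows Appendix 3, and the covariance is updated in the standard UKF form (the measurement noise included in the subtracted term).

L      = 27;                                  % error-state dimension
alpha  = 1e-3; beta = 2; kappa = 0;           % usual UKF tuning values (assumed)
lambda = alpha^2*(L + kappa) - L;

S    = chol((L + lambda)*P_pred, 'lower');    % square root of (L+lambda)*P_{k|k-1}
Xsig = [zeros(L,1), S, -S];                   % 2L+1 error-state sigma points
Ws   = [lambda/(L+lambda), repmat(1/(2*(L+lambda)), 1, 2*L)];
Wc   = Ws;  Wc(1) = Wc(1) + (1 - alpha^2 + beta);

nSig = 2*L + 1;
Z    = zeros(mdim, nSig);
for l = 1:nSig
    Z(:,l) = h(x_nom, Xsig(:,l), m1, m2, m3); % measurement model of eq 16 (assumed signature)
end
zhat = Z*Ws';

Pzz = zeros(mdim);  Pxz = zeros(L, mdim);
for l = 1:nSig
    dz  = Z(:,l) - zhat;
    Pzz = Pzz + Wc(l)*(dz*dz');
    Pxz = Pxz + Wc(l)*(Xsig(:,l)*dz');
end

K     = Pxz/(Pzz + Rmeas);                    % Kalman gain K_j
dx    = K*(z_meas - zhat);                    % error-state correction
P_upd = P_pred - K*(Pzz + Rmeas)*K';          % updated error covariance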



4.5 Algorithm
Algorithm: Tight Vision-IMU

Initialize $\hat x_{0|0}$, $P_{0|0}$ and $\tilde x_{0|0} = O_{27\times1}$

{Time update}
for k = 1, 2, ...
1. Use 4th order Runge-Kutta to predict $\hat x_{k|k-1}$ (Section 4.2):
   $\hat x_{k|k-1} = \hat x_{k-1} + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$
%................... Propagate the state error ...................%
2. Compute $F_d$ and $Q_d$ by eq 13 and eq 14
3. $\tilde x_{k|k-1} = O_{27\times1}$ and $P_{k|k-1} = F_d P_{k-1|k-1} F_d^T + Q_d$

{Measurement update}
if frame >= 3
4. Match features in the last three images to get $\{m_1, m_2, m_3\}$
5. Use bucketing to distribute the features uniformly and limit the maximum number of features
6. Remove outliers with RANSAC
%............ Generate sigma points to predict the measurement ............%
   $\tilde X_{k|k-1} = 0_{27\times1} \pm \sqrt{(L+\lambda)\,P_{k|k-1}}$
   $Z_j = h\big(g(\hat x_{k|k-1}, \tilde X_{k|k-1}), \{m_1, m_2, m_3\}\big), \qquad \hat z_j = \sum_{l=0}^{2L} W_s^l\, Z_j^l$
   $P^{Z_j \hat z_j} = \sum_{l=0}^{2L} W_c^l\,(Z_j^l - \hat z_j)(Z_j^l - \hat z_j)^T$
   $P^{X z_j} = \sum_{l=0}^{2L} W_c^l\,(\tilde X_{k|k-1}^l - 0_{27\times1})(Z_j^l - \hat z_j)^T$
7. Compute the Kalman gain: $K_j = P^{X z_j}\big(P^{Z_j \hat z_j} + R_j\big)^{-1}$
8. Update the error state and error covariance:
   $\tilde x_{k|k} = \tilde x_{k-1|k-1} + K_j (z_i - \hat z_j)$
   $P_{k|k} = P_{k|k-1} - K_j P^{Z_j \hat z_j} K_j^T$
9. Use $\tilde x_{k|k}$ to correct the nominal state estimate: $\hat x_{k|k} = \hat x_{k|k-1} + \tilde x_{k|k}$
10. Replace the old pose states and revise the error covariance: $\hat x_k = T_n \hat x_k$, $P_{k|k} = T_e P_{k|k} T_e^T$
11. Reset the error state: $\tilde x_{k|k} = 0_{27\times1}$
end
end



CHAPTER 5 – LOOSE VISION-IMU
The geometric relation between the camera and the IMU does not change and is the same as in Section 3.1. The IMU mechanisation is described in Section 3.2.

5.1 Formulation of State Vector


The state vector is composed of the nominal state of the IMU, which has a total of 16 states,

$$\hat x_{IMU} = [\hat p^I\ \ \hat{\bar q}^I\ \ \hat v^I\ \ \hat b_a\ \ \hat b_g]^T \qquad \text{...eq 17}$$

and this corresponds to an error state of 15 states given by

$$\tilde x_{IMU} = [\tilde p^I\ \ \delta\theta^I\ \ \tilde v^I\ \ \tilde b_a\ \ \tilde b_g]^T \qquad \text{...eq 18}$$
The error state is propagated and dynamics are observed in all the states of the IMU. In our work, there is no interaction between the IMU state and the vision measurement in the system model; they only interact through the measurement model. Therefore the system transition is partitioned as

$$F = \begin{bmatrix} F_{IMU} & 0 \\ 0 & F_{vis} \end{bmatrix}$$

where $F_{IMU}$ is the continuous state transition of the error model described in Section 5.2 and $F_{vis}$ ($= H$, see Section 5.3) is the measurement transition model, which is linear in our case. The system noise covariances are partitioned in a similar fashion.

5.2 Filter Propagation


For the prediction of the nominal state of the IMU we employ 4th order Runge-Kutta, as in Chapter 4.

The IMU state propagation is considered as a continuous model, and equations similar to those defined in [14] and [18] are implemented:

$$\dot{\tilde x}_{IMU} = F_{IMU}\,\tilde x_{IMU} + G_{IMU}\, n_{IMU}$$

$$F_{IMU} = \begin{bmatrix}
O_3 & O_3 & I_3 & O_3 & O_3 \\
O_3 & -\lfloor(\omega_m(t) - b_g(t))\times\rfloor & O_3 & O_3 & -I_3 \\
O_3 & -R^T(\bar q^I(t))\lfloor(a_m(t) - b_a(t))\times\rfloor & O_3 & -R^T(\bar q^I(t)) & O_3 \\
O_3 & O_3 & O_3 & O_3 & O_3 \\
O_3 & O_3 & O_3 & O_3 & O_3
\end{bmatrix} \qquad \text{...eq 19}$$

$$G_{IMU} = \begin{bmatrix}
O_3 & O_3 & O_3 & O_3 \\
-I_3 & O_3 & O_3 & O_3 \\
O_3 & -R^T(\bar q^I(t)) & O_3 & O_3 \\
O_3 & O_3 & I_3 & O_3 \\
O_3 & O_3 & O_3 & I_3
\end{bmatrix}$$

$$Q_{IMU} = E\big[n_{IMU} n_{IMU}^T\big] = \mathrm{diag}\big[\sigma_g^2 I_3,\ \sigma_a^2 I_3,\ \sigma_{ba}^2 I_3,\ \sigma_{bg}^2 I_3\big] \qquad \text{...eq 20}$$

To compute the state covariance we discretize the FIMU as



$$\Phi_k = \Phi(t_k, t_{k-1}) = \exp\Big(\int_{t_{k-1}}^{t_k} F_{IMU}(\tau)\, d\tau\Big)$$

where $\Phi_k$ is the discrete state transition matrix and can be computed as the matrix exponential (expm in Matlab) of the matrix $F_{IMU}$ defined in eq 19.

The corresponding discrete noise covariance $Q_d$ is evaluated as described in [19],

$$Q_d = (I + \Delta t\, F_{IMU})\,\big(\Delta t\, G_{IMU}\, Q_{IMU}\, G_{IMU}^T\big) \qquad \text{...eq 21}$$

Finally, the predicted error-state covariance is obtained as

$$P_{k|k-1} = \Phi_k\, P_{k-1|k-1}\, \Phi_k^T + Q_d \qquad \text{...eq 22}$$
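As a minimal Matlab sketch of this discretization (F_imu, G_imu and Q_imu built from eq 19-20, dt the IMU sampling period; the variable names are ours):

Phi = expm(F_imu*dt);                                   % discrete state transition matrix
Qd  = (eye(15) + dt*F_imu)*(dt*G_imu*Q_imu*G_imu');     % eq 21
P   = Phi*P*Phi' + Qd;                                  % eq 22: predicted error covariance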

5.3 Measurement Update


We use the transformation from monocular visual odometry for the measurement update; the work of Andreas Geiger et al. [6] is followed here.

The obtained transformation is a 4x4 matrix composed of the translation and rotation of the camera with respect to a fixed inertial frame,

$$T_c = [R_c\,|\,P_c] \qquad \text{...eq 23}$$

where $R_c = R_z(\theta) R_y(\phi) R_x(\psi)$ is the 3x3 rotation matrix composed of Euler angles and $P_c$ is the 3x1 vector giving the position in the camera coordinate frame.

We compute the vision pose in the IMU frame. The pose of the camera is expressed in the IMU frame as follows:

$$P_c^I = P_c + R_c\, P_C^I, \qquad R_c^I = R_c\, R_C^I \qquad \text{...eq 24}$$

where $T_C^I = [R_C^I\,|\,P_C^I]$ is the transformation of the camera with respect to the IMU. All the measurements are expressed in the IMU frame before the computation of the state innovation.

The measurement equation is governed by

$$z_{vis}^I = H\, x_{vis}^I \qquad \text{...eq 25}$$

Since our measurements are linear, $H = (I_{6\times6}\ \ O_{6\times9})$ and

$$x_{vis}^I = [p_c^I\ \ \theta_c^I]^T \qquad \text{...eq 26}$$

For the state innovation we take the position and attitude from the IMU prediction step and the corresponding position and attitude from the camera measurement:

$$\text{stateInnovation}_{VIMU} = [p_c^I\ \ \theta_c^I]^T - [\hat p^I_{IMU}\ \ \hat\theta^I_{IMU}]^T \qquad \text{...eq 27}$$

where $\hat\theta^I$ is the predicted attitude obtained from the nominal quaternion $\hat{\bar q}^I$ of eq 17.

Robust Navigation of UAV’s by Fusion of Vision, INS and GPS : RAMUNI 20


The Kalman gain is then computed, and the state covariance and $\hat x_{IMU}$ are updated by the EKF:

$$P_z = H\, P^{VIMU}_{k|k-1}\, H^T + R$$
$$K_i = P^{VIMU}_{k|k-1}\, H^T\, P_z^{-1}$$
$$\tilde x_{k|k} = \tilde x_{0|0} + K_i\,(\text{stateInnovation}_{VIMU})$$
$$P^{VIMU}_{k|k} = (I - K_i H)\, P^{VIMU}_{k|k-1} \qquad \text{...eq 28}$$

Update the nominal state,

$$\hat x_{IMU_{k|k}} = \hat x_{IMU_{k|k-1}} + \tilde x_{k|k} \qquad \text{...eq 29}$$
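A minimal Matlab sketch of this update is given below; p_vis_I, th_vis_I (VO pose in the IMU frame), p_hat_I, th_hat_I (predicted pose), P and Rmeas are placeholders for quantities available in the filter.

H  = [eye(6), zeros(6,9)];                            % linear measurement matrix of eq 25

innov = [p_vis_I; th_vis_I] - [p_hat_I; th_hat_I];    % eq 27

Pz = H*P*H' + Rmeas;                                  % innovation covariance
K  = (P*H')/Pz;                                       % Kalman gain
dx = K*innov;                                         % 15 x 1 error-state correction
P  = (eye(15) - K*H)*P;                               % eq 28
% eq 29 then feeds dx back into the 16-element nominal state (position, velocity and
% biases additively; the quaternion through a small-angle correction).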

5.4 Algorithm
Algorithm: Loose Vision + IMU

Initialize $\hat x_{IMU_{0|0}}$, $P^{VIMU}_{0|0}$ and $\tilde x_{0|0} = O_{15\times1}$

for k = 2, ...
[Prediction step]
a) Predict the nominal state $\hat x_{IMU_{k|k-1}}$ by 4th order Runge-Kutta:
   $\hat x_{IMU_{k|k-1}} = \hat x_{IMU_{k-1|k-1}} + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$
b) Compute $\Phi_k$ and $Q_d$ as in Section 5.2
   %%% Propagate the state error and state covariance %%%
   $P_{k|k-1} = \Phi_k P_{k-1|k-1} \Phi_k^T + Q_d$
[Update step]
c) Get the pose of the VO in the IMU frame:
   $P_c^I = P_c + R_c P_C^I$, $\quad R_c^I = R_c R_C^I$, $\quad x_{vis}^I = [p_c^I\ \ \theta_c^I]^T$
d) Compute the innovation:
   $\text{stateInnovation}_{VIMU} = [p_c^I\ \ \theta_c^I]^T - [\hat p^I_{IMU}\ \ \hat\theta^I_{IMU}]^T$
e) Compute the Kalman gain:
   $P_z = H P^{VIMU}_{k|k-1} H^T + R$, $\quad K_i = P^{VIMU}_{k|k-1} H^T P_z^{-1}$
f) Update the state error and covariance:
   $\tilde x_{k|k} = \tilde x_{0|0} + K_i (\text{stateInnovation}_{VIMU})$
   $P^{VIMU}_{k|k} = (I - K_i H) P^{VIMU}_{k|k-1}$
g) Update the nominal state:
   $\hat x_{IMU_{k|k}} = \hat x_{IMU_{k|k-1}} + \tilde x_{k|k}$
h) Reset the error state: $\tilde x_{k|k} = O_{15\times1}$
end



CHAPTER 6 – Fusion with the GPS
In Chapters 4 and 5 we performed tight and loose fusion of Vision-IMU. We now add fusion with GPS to make the navigation more reliable.

We are interested in estimating the pose, i.e. position, attitude and velocity. Keeping in mind our goal of estimating the final pose in global coordinates, we transform the pose obtained in Chapters 4 and 5 into the global system: $x_{VIMU}^I$ in the IMU frame is transformed into $x_{VIMU}^G$ in the global frame using $T_I^G$.


$$x_{VIMU}^G = [p_{VIMU}^G\ \ v_{VIMU}^G\ \ \theta_{VIMU}^G]^T \qquad \text{...eq 30}$$

The state vector for the fusion with GPS is then

$$x_{fus}^G = [p_{fus}^G\ \ v_{fus}^G\ \ \theta_{fus}^G]^T \qquad \text{...eq 31}$$

For the prediction step,

$$x^G_{fus,k|k-1} = x^G_{fus,k-1|k-1} + \big(x^G_{VIMU,k|k} - x^G_{VIMU,k-1|k-1}\big) \qquad \text{...eq 32}$$

In the measurement update, the position and velocity from the GPS are used in the measurement vector,

$$z_{Gps}^G = H\, x_{Gps}^G \qquad \text{...eq 33}$$

where $H = [I_6\ \ O_{6\times3}]$ and $x_{Gps}^G = [p_{Gps}^G\ \ v_{Gps}^G]^T$. The innovation is

$$\text{stateInnovation}_{fus} = [p_{fus}^G\ \ v_{fus}^G]^T - [p_{Gps}^G\ \ v_{Gps}^G]^T \qquad \text{...eq 34}$$


There by Kalman gain is computed and nominal state and state covariance is updated by linear
Kalman filter

𝑓𝑢𝑠 fus
𝑃𝑧 = HPk|k−1 H T + Rfus

𝑓𝑢𝑠 𝑓𝑢𝑠
𝐾𝑓𝑢𝑠 = 𝑃𝑘|𝑘−1 𝐻𝑇 / 𝑃𝑧 ...eq 35

Update the state and covariance,

G G
xfusk|k
= xfusk|k−1
+ 𝐾𝑓𝑢𝑠 (stateInnovationfus )

𝑓𝑢𝑠 𝑓𝑢𝑠
𝑃𝑘|𝑘 = (𝐼𝑃𝑓𝑢𝑠 − 𝐾𝑓𝑢𝑠 . 𝐻)𝑃𝑘|𝑘−1 ...eq 36

6.1 Algorithm
Algorithm: Fusion with GPS

Build $x_{VIMU}^G = [p_{VIMU}^G\ \ v_{VIMU}^G\ \ \theta_{VIMU}^G]^T$ from Chapter 4 or 5 using $T_I^G$
[Prediction step]
a) Predict the fused state:
   $x^G_{fus,k|k-1} = x^G_{fus,k-1|k-1} + (x^G_{VIMU,k|k} - x^G_{VIMU,k-1|k-1})$
[Update step]
b) Get the GPS measurements: $x_{Gps}^G = [p_{Gps}^G\ \ v_{Gps}^G]^T$
c) Compute the innovation:
   $\text{stateInnovation}_{fus} = [p_{fus}^G\ \ v_{fus}^G]^T - [p_{Gps}^G\ \ v_{Gps}^G]^T$
d) Compute the Kalman gain:
   $P_z^{fus} = H P^{fus}_{k|k-1} H^T + R_{fus}$, $\quad K_{fus} = P^{fus}_{k|k-1} H^T (P_z^{fus})^{-1}$
e) Update the state and covariance:
   $x^G_{fus,k|k} = x^G_{fus,k|k-1} + K_{fus}(\text{stateInnovation}_{fus})$
   $P^{fus}_{k|k} = (I - K_{fus} H) P^{fus}_{k|k-1}$

CHAPTER 7 – RESULTS
We evaluate the proposed method using the publicly available KITTI dataset [21]. The data we used consist of monocular image sequences with the corresponding GPS/IMU data; more information on the dataset is given in Appendix 1. We validate both the tight and the loose Vision+IMU fusion on two different dataset cases.

Figure 10: Case 1 (116.9 seconds) and Figure 11: Case 2 (44.7 seconds) – trajectories (latitude vs. longitude) of the reference (GPS-RTK), INS alone, loose VIS-IMU, tight VISION-IMU and mono-VO.

In both cases the INS drifts after some time. It is clearly seen that the tight Vision-IMU fusion trajectory is closer to the reference (RTK-GPS) than the loose Vision-IMU fusion, so we can conclude once again that the tight fusion gives a better estimate than the loose fusion.

Now let us look at the RMSE (root mean square error) for both cases.



Table 3: RMSE comparison for case 1

Algorithm       Position east RMSE (m)   Position north RMSE (m)   Position upward RMSE (m)
Pure INS        102.593                  150.34                    151.36
Loose VIS+IMU   37.0064                  28.3417                   15.284
Tight VIS+IMU   17.5910                  81.357                    2.619

Table 4: RMSE comparison for case 2

Algorithm       Position east RMSE (m)   Position north RMSE (m)   Position upward RMSE (m)
Pure INS        11.675                   17.815                    5.41
Loose VIS+IMU   13.23                    12.29                     4.5
Tight VIS+IMU   7.67                     3.942                     0.6408

It is also observed that the RMSE of the tight fusion is generally lower than that of the loose fusion.

Another validation of the proposed loose and tight fusions is the computation time:

                 Case 1       Case 2
Loose Vis+IMU    224.69 s     87.644 s
Tight Vis+IMU    348.226 s    146.44 s

Figure 12: Computation time comparison


We observe that the computation time of the tight Vision-IMU is approximately 1.5 times that of the loose Vision-IMU. This is because processing raw measurements and the complexity of the tight algorithm make it more time consuming. Another reason is that building the trifocal relation requires matching features across three consecutive images, whereas in loose Vision-IMU features are matched only between two successive images. It is therefore a trade-off between efficiency and performance.

To reduce the number of plots, from here on we present only case 1 with the tight Vis-IMU and loose GPS plots and results.



In Figure 13 we compare the tight Vis-IMU and the tight Vis-IMU with loose GPS against the reference (GPS-RTK) and INS alone. The GPS data in the dataset is obtained through RTK-GPS, so after fusing this GPS into the tight Vis-IMU the trajectory almost coincides with the original GPS-RTK trajectory.

Figure 13: Trajectory comparison (reference GPS-RTK, INS alone, tight VIS-IMU + loose GPS, tight VISION-IMU)

RMSE

The RMSE (see Appendix 2) is computed with respect to the RTK-GPS. We compare the RMSE values for our configurations. In Table 5, the tight VIS-IMU + GPS trajectory has by far the lowest RMSE, as it is fused with the RTK-GPS (there is no erroneous GPS data in this dataset).

Algorithm                   Position east RMSE (m)   Position north RMSE (m)   Position upward RMSE (m)
Tight VIS-IMU + loose GPS   0.0914                   0.0093                    0.00187
Tight VIS+IMU               17.5910                  81.357                    2.619
Pure INS                    102.593                  150.34                    151.36

Table 5: RMSE comparison

7.1 Robust Analysis


To assess the robustness of the proposed navigation, we validate our work in the following scenarios:

- Outages
- Multipath
- Insufficient feature points
- Large bias on the INS

Comparison in outages and multipath

Since the dataset we used has no erroneous GPS data, to prove the robustness of our method we created outages and multipath on the existing RTK-GPS data (see Appendix 4). Figure 14 shows the GPS outages and multipath at random locations. As shown in Figure 15, the fused trajectory during outages and multipath is quite satisfactory: when the navigation detects a GPS outage, the pose is estimated from Vis-IMU alone, and in the case of multipath the filter computes a smaller Kalman gain, thus making the navigation robust.



Figure 14: Creation of GPS outages and multipath. Figure 15: Comparison of the fused trajectory with and without multipath/outages (latitude vs. longitude).

Insufficient feature points

Next we decrease the maximum number of feature points used to build the trifocal tensor relation. We observe that the computation time decreases with the number of feature points, but the fused trajectory drifts slightly compared to the run with more features, as shown in Figure 16.

Figure 16: Comparison with a reduced number of features (reference vs. VIS-INS with maxFeatures = 50 and maxFeatures = 40). Figure 17: Comparison with increased INS biases (reference RTK-GPS vs. INS bias = 1e-2 and 1e-8).
Increase in bias

The accelerometer and gyro biases have a significant effect on the overall pose. In our work, we observed that increasing the accelerometer and gyro bias noises degrades the vehicle trajectory; hence the biases should be properly modelled. Figure 17 shows the effect of increased INS biases.



Figure 18: Comparison of roll. Figure 19: Velocity comparison along east. Figure 20: Comparison of pitch. Figure 21: Velocity comparison along north (curves: GPS, VIS-INS, INS, GPS-VIS-INS and reference).

In the dataset, it is observed that the car advances only in the XY plane, so the velocity variation along the geometric upward direction is very small, which is clearly seen in the upward velocity plot (Figure 22).



Figure 22: Velocity comparison along upward. Figure 23: Comparison of yaw (curves: GPS-VIS-INS, VIS-INS, INS and reference).

Other plots

Figure 24: Error residuals in position, velocity and attitude. Figure 25: Accelerometer and gyro bias estimates.

The above figures show the bias estimates of the accelerometer and gyros and the error residuals in position, attitude and velocity. We observe that the error residuals and biases are of higher magnitude for the pose along the x direction of the global system.



CHAPTER 8 – CONCLUSION
This report discussed the various coupling architectures available for vision-inertial fusion in the literature. We have proposed a navigation configuration using Vision, IMU and GPS for low-cost drones and UAVs. A tight fusion architecture of the IMU with monocular vision using the trifocal tensor was discussed, and a loose fusion architecture of monocular VO and IMU was proposed. The report further proposed a loose fusion architecture with GPS.

The performance and computation time of both the tight and loose configurations were compared. The robustness was verified with insufficient feature points, large INS biases and GPS outages and multipath.

This kind of architecture has the following characteristics: 1) the choice of changing the architecture from loose to tight or vice versa depending on the surroundings (indoor, urban and underground); 2) the capability of reliable pose estimation during GPS and VO outages.



Bibliography
[1] D. Scaramuzza and F. Fraundorfer , "Visual odometry: Part I: The first 30 years and
fundamentals" , Robotics & Automation Magazine, IEEE , vol. 18 , pp.80 -92 , 2011.

[2] R. Hartley and A. Zisserman , Multiple View Geometry in computer vision , 2008 :Cambridge
University Press.

[3] J. Civera, O. G. Grasa, A. J. Davison and J. M. M. Montiel, "1-Point RANSAC for EKF Filtering. Application to Real-Time Structure from Motion and Visual Odometry", Journal of Field Robotics, vol. 27, no. 5, pp. 609-631, 2010.

[4] Hauke Strasdat , J. M. M. Montiel , Andrew J. Davison, Editors Choice Article: Visual SLAM:
Why filter?, Image and Vision Computing, v.30 n.2, p.65-77, February, 2012.

[5] Strasdat, H., Montiel, J.M.M. and Davison, A.J., Real-time monocular SLAM: why filter?. In:
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA),

[6] Andreas Geiger and Julius Ziegler and Christoph Stiller, “StereoScan: Dense 3D
Reconstruction in Real-time”, Intelligent Vehicles Symposium (IV), 2011

[7] Bernd Kitt and Andreas Geiger and Henning Lategahn,”Visual Odometry based on Stereo
Image Sequences with RANSAC-based Outlier Rejection Scheme”, Intelligent Vehicles
Symposium (IV), 2010.

[8] J. P. Tardif, M. George, M. Laverne, A. Kelly, and A. Stentz, " A new approach to vision-aided
inertial navigation," in IEEE/RSJ International Conference on Intelligent Robots and Systems,
pp. 4161- 4168, 2010.

[9] M.Li and A.Mourikis, "Improving the accuracy of EKF-based visual-inertial odometry," in
Proc. IEEE Intl.Conf.on Robotics and Automation (ICRA), 2012.

[10] J. A. Hesch, D. G. Kottas, S. L. Bowman and S. I. Roumeliotis "Towards Consistent Vision-


aided Inertial Navigation", 10th International Workshop on the Algorithmic Foundations of
Robotics 13-15 June 2012, Cambridge, Massachusetts, USA

[11] Guoquan Huang, Michael Kaess, and John J. Leonard, “Towards Consistent Visual-Inertial
Navigation”

[12] A.I.Mourikis , N. Trawny , S. I. Roumeliotis , A. E. Johnson , A. Ansar and L.


Matthies , "Vision-aided inertial navigation for spacecraft entry, descent, and landing" , IEEE
Trans. Robot. , vol. 25 , no. 2 , pp.264 -280 , 2009

[13] J. Hu and M. Chen , "A sliding-window visual-imu odometer based on tri-focal tensor
geometry" , Robotics and Automation (ICRA), 2014 IEEE International Conference
on , pp.3963 -3968.

[14] Kong, Xianglong; Wu, Wenqi; Zhang, Lilian; Wang, Yujie. 2015. "Tightly-Coupled Stereo
Visual-Inertial Navigation Using Point and Line Features." Sensors 15, no. 6: 12816-12833.



[15] E. Asadi and C. L. Bottasso, "Tightly-coupled vision-aided inertial navigation via trifocal constraints", Proc. of the IEEE Intl. Conf. on Robotics and Biomimetics (ROBIO), pp. 85-90.

[16] Joan Sola, “Quaternion kinematics for the error-state KF”, February 2, 2016.

[17] B. P. Zhang , J. Gu , E. Milios and P. Huynh , "Navigation with IMU/GPS/digital compass with
unscented Kalman filter" , Proc. IEEE Int. Conf. Mechatron. Autom. , pp.1497 -1502, 2005.

[18] Frazzoli, E., Lozano-Perez, T., Roy, N., Rus, D, “Algorithmic Foundations of Robotics X”,
Proceedings of the Tenth Workshop on the Algorithmic Foundations of Robotics, 2013.

[19] D. Titterton and J. Weston, “Strapdown Inertial Navigation Technology” (IEE Radar, Sonar,
Navigation and Avionics Series) 2nd Edition, March 24, 2005.

[20] Scott Gleason and Demoz Gebre-Egziabher, “GNSS Applications and Methods” 1st Edition,
July 30, 2009.

[21] Andreas Geiger and Philip Lenz and Christoph Stiller and Raquel Urtasun, “Vision meets
Robotics: The KITTI Dataset”, International Journal of Robotics Research (IJRR), 2013.

[22] A.Geiger, P. Lenz, and R. Urtasun, "Are we ready for Autonomous Driving? The KITTI Vision
Benchmark Suite," in Proc. of the IEEE Intl. Conf. on Computer Vision and Pattern
Recognition (CVPR), Providence, RI, USA, Jun. 16-21, 2012, pp. 3354-3361.)

[23] Amani Ben Aa, Lina Deambrogio, Daniel Salos, Anne-Christine Escher, Christophe Macabiau
et al. “Review and classification of vision-based localisation techniques in unknown
environments", IET Radar, Sonar & Navigation, 2014.

[24] Bassem Ibrahim Sheta, “Vision based Navigation (VBN) of Unmanned Aerial Vehicles
(UAV)’’, Dec. 2012.

[25] F. M. Mirzaei and S. I. Roumeliotis , "A Kalman filter-based algorithm for IMU-camera
calibration: Observability analysis and performance evaluation" , IEEE Trans. Robot. , vol.
24 , no. 5 , pp.1143 -1156 , 2008.

[26] Theodore S. Lindsey , “On the Kalman Filter and Its Variations”, April 18, 2014.

[27] Bassem Ibrahim Sheta, “Vision based Navigation (VBN) of Unmanned Aerial Vehicles (UAV)”,
December, 2012.

[28] Zhengyou Zhang, Rachid Deriche, Olivier Faugeras, Quang-Tuan Luong. “A robust technique
for matching two uncalibrated images through the recovery of the unknown epipolar
geometry”. RR-2273, 1994.



Appendix 1
Datasets

The dataset was published jointly by the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago. The data were collected from a car driving in the streets of Karlsruhe. The sensor setup is shown below.

The dataset also includes:
- time stamps for the collected data,
- camera parameters,
- IMU calibration w.r.t. the vehicle frame,
- camera calibration w.r.t. the vehicle axes.

Figure 26: Sensor system configuration in KITTI

The properties of the sensors in KITTI are as follows,


 grayscale cameras (FL2-14S3M-C), 1.4 Megapixels, 1/2” Sony ICX267 CCD, global shutter
 OXTS RT3003 inertial and GPS navigation system, 6 axis, 100 Hz, L1/L2 RTK, resolution:
0.02m / 0.1◦.
The Dataset includes the Monocular and stereo image sets along with the following data from IMU
and GPS
IMU/GPS data format in KITTI
dataset
1 Lat latitude of the oxts-unit (deg)
2 Lon longitude of the oxts-unit (deg)
3 Alt altitude of the oxts-unit (m)
4 Roll roll angle (rad)
5 Pitch pitch angle (rad)
6 Yaw heading (rad)
7 Vn velocity towards north
8 Ve velocity towards east (m/s)
9 Vf forward velocity, i.e. parallel to earth-surface (m/s)
10 Vl leftward velocity, i.e. parallel to earth-surface (m/s)
11 Vu upward velocity, i.e. perpendicular to earth-surface (m/s)
12 ax acceleration in x, i.e. in direction of vehicle front (m/s^2)
13 ay acceleration in y, i.e. in direction of vehicle left (m/s^2)
14 az acceleration in z, i.e. in direction of vehicle top (m/s^2)
15 af forward acceleration (m/s^2)
16 al leftward acceleration (m/s^2)



17 au upward acceleration (m/s^2)
18 wx angular rate around x (rad/s)
19 wy angular rate around y (rad/s)
20 wz angular rate around z (rad/s)
21 wf angular rate around forward axis (rad/s)
22 wl angular rate around leftward axis (rad/s)
23 wu angular rate around upward axis (rad/s)
24 Pos_accuracy velocity accuracy (north/east in m)
25 Vel_accuracy velocity accuracy (north/east in m/s)
26 navstat navigation status (see navstat_to_string)
27 numsats number of satellites tracked by primary GPS receiver
28 Posmode position mode of primary GPS receiver (see gps_mode_to_string)
29 Velmode velocity mode of primary GPS receiver (see gps_mode_to_string)
30 orimode orientation mode of primary GPS receiver (see gps_mode_to_string)

Appendix 2
General definitions

Quaternions

A unit quaternion can be described as $q = [q_0\ q_1\ q_2\ q_3]^T$ with

$$|q|^2 = q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$$

Euler angles:

$$\begin{bmatrix}\phi\\\theta\\\psi\end{bmatrix} = \begin{bmatrix}
\mathrm{atan2}\big(2(q_0 q_1 + q_2 q_3),\ 1 - 2(q_1^2 + q_2^2)\big) \\
\arcsin\big(2(q_0 q_2 - q_3 q_1)\big) \\
\mathrm{atan2}\big(2(q_0 q_3 + q_1 q_2),\ 1 - 2(q_2^2 + q_3^2)\big)
\end{bmatrix}$$

Quaternion to direction cosine (rotation) matrix:

$$R(\bar q) = \begin{bmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_0 q_2 + q_3 q_1) \\
2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\
2(q_1 q_3 - q_0 q_2) & 2(q_0 q_1 + q_2 q_3) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{bmatrix}$$

Skew symmetric operator

Let $t = [t_1\ t_2\ t_3]^T$ be a column vector; then

$$\lfloor t \times\rfloor = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix}$$

RMSE (Root mean square error)

$$RMSE = \sqrt{\frac{\sum_{t=1}^{n} (\hat y_t - y_t)^2}{n}}$$
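The following Matlab helpers are minimal transcriptions of the definitions above (the function names are ours, not part of the project code):

function eul = quatToEuler(q)                 % q = [q0 q1 q2 q3]'
q0 = q(1); q1 = q(2); q2 = q(3); q3 = q(4);
eul = [atan2(2*(q0*q1 + q2*q3), 1 - 2*(q1^2 + q2^2));
       asin(2*(q0*q2 - q3*q1));
       atan2(2*(q0*q3 + q1*q2), 1 - 2*(q2^2 + q3^2))];
end

function S = skewOp(t)                        % skew-symmetric operator
S = [0 -t(3) t(2); t(3) 0 -t(1); -t(2) t(1) 0];
end

function e = rmse(yhat, y)                    % RMSE per component, yhat and y are d x n
e = sqrt(mean((yhat - y).^2, 2));
end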



Appendix 3
Sigma Points in UKF

In the UKF, the error is predicted by sigma points. The following set of sigma points is selected:

$$\tilde X_{k|k-1} = 0_{27\times1} \pm \sqrt{(L+\lambda)\, P_{k|k-1}}$$

where $L = 27$ is the dimension of the state and $\lambda = \alpha^2 (L + \kappa) - L$, with $\alpha$, $\kappa$ and $\beta$ the tuning parameters of the filter. The corresponding weights are

$$W_s^0 = \frac{\lambda}{L+\lambda}, \qquad W_c^0 = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta), \qquad W_s^i = W_c^i = \frac{1}{2(L+\lambda)}$$

The predicted measurement vector is determined by propagating each sigma point through the nonlinear observation function h(.) of eq 16:

$$Z_j = h\big(g(\hat x_{k|k-1}, \tilde X_{k|k-1}),\ \{m_1, m_2, m_3\}\big), \qquad \hat z_j = \sum_{l=0}^{2L} W_s^l\, Z_j^l$$

$$P^{Z_j \hat z_j} = \sum_{l=0}^{2L} W_c^l\, (Z_j^l - \hat z_j)(Z_j^l - \hat z_j)^T$$

$$P^{X z_j} = \sum_{l=0}^{2L} W_c^l\, (\tilde X_{k|k-1}^l - 0_{27\times1})(Z_j^l - \hat z_j)^T$$

Appendix 4
GPS outages and multipath

In Matlab, the outages in the GPS, i.e. in latitude, longitude and altitude (LLA), are created by masking the original values with [NaN NaN NaN]. These are created at random locations.

To create a multipath scenario, the original LLA is multiplied by a random number (e.g. we used 1e6), making sure that this random number has the same number of decimals as the unaltered LLA.
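A minimal Matlab sketch of this idea is shown below; it uses an additive perturbation for the multipath windows rather than the scaling described above, and the window indices and error magnitude are arbitrary choices.

lla = gps_lla;                                % n x 3 [lat lon alt] from the dataset (assumed variable)

outageIdx = 200:260;                          % an arbitrary outage window
lla(outageIdx, :) = NaN;                      % masked samples are treated as GPS outages

mpIdx = 400:430;                              % an arbitrary multipath window
lla(mpIdx, 1:2) = lla(mpIdx, 1:2) + 1e-4*randn(numel(mpIdx), 2);   % roughly 10 m level jitter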

Legacy
I allow anyone to access the contents of this report and the code if interested.

Since our tight and loose Vis-IMU codes are large, they cannot be included in this report; they are provided on the USB key given to the jury. The loose GPS fusion code, however, is provided below.



Function to fuse with GPS

function [x_fus, P, dxg] = fusion_gps(x_vins, x_prev_vins, x_prev_fus, ...
    RawData, gps_pos_lla, P, R, multipath)
% Performs the loosely coupled fusion of VINS and GPS
%
% Input  = x_vins       % current loosely coupled state of VISION & IMU
%        = x_prev_vins  % previous loosely coupled state of VISION & IMU
%        = x_prev_fus   % previous fused state of VISION+IMU+GPS
%        = RawData      % current GPS measurement
%        = gps_pos_lla  % use this if outages/multipath exist
%        = P            % initial covariance
%        = R            % measurement covariance
%
% Output = x_fus        % current fused state of VISION+IMU+GPS
%        = P            % covariance after measurement update
%        = dxg          % error residual

d2r = pi/180; % Degrees to radians


r2d = 1/d2r; % Radians to degrees

global T0 scale Rnb


global gdt idt cdt gg Tic

tau = min(idt,cdt); % sampling ratio of VINS

% get reference attitude


ref_att=Cbn2eul(Rnb);
yaw = ref_att(3);
pitch = ref_att(2);
roll = ref_att(1);

% get position and velocity from gps


pos_gps = glb_body([gps_pos_lla(1); gps_pos_lla(2); gps_pos_lla(3)], scale, T0);
gps_vel = Rnb'*[RawData(8); RawData(7); RawData(11)];

% get current position, velocity and attitude from x_VINS

pos_VINS = x_vins(1:3);
vel_VINS = x_vins(4:6);%+ref_att;;
att_VINS = x_vins(7:9);

%......................................................................
% get previous state position, velocity and attitude from x_VINS
%......................................................................

pos_prev = x_prev_vins(1:3);
vel_prev = x_prev_vins(4:6);%+ref_att;;
att_prev = x_prev_vins(7:9);



%......................................................................
% previous PVA of fusion
%......................................................................
pos_fus_pre = x_prev_fus(1:3);
vel_fus_pre = x_prev_fus(4:6);
att_fus_pre = x_prev_fus(7:9);

% Initialise variables for kalman filter

H =[eye(3),zeros(3),zeros(3);
zeros(3),eye(3),zeros(3)];

Cbn = eul2Cbn(att_fus_pre'); % rotation matrix from euler angles

%............................prediction...............................&

att_fus = att_fus_pre + tau*(att_VINS - att_prev)/tau;


vel_fus = vel_fus_pre + tau*(vel_VINS - vel_prev)/tau;
pos_fus = pos_fus_pre + tau*(pos_VINS - pos_prev)/tau;

%..............................Update.................................%

posInnov = pos_fus - pos_gps;


velInnov = vel_fus - gps_vel;
stateInnov = [posInnov; velInnov];

% Compute Kalman gain and update state covariance

Pz = H*P*H' + R;
K = (P*H')/(Pz);

P = (eye(9) - K*H)*P;
P = 0.5*(P + P');

if multipath==1
if isnan(gps_pos_lla)==1
x_fus= [pos_fus;vel_fus;att_fus];
stateError = zeros(9,1);
else
stateError = K*stateInnov;

posFeedBack = stateError(1:3);
velFeedBack = stateError(4:6);
attFeedBack = stateError(7:9);

% Update position and velocity using computed corrections

pos_fus = pos_fus - posFeedBack;


vel_fus = vel_fus - velFeedBack;



% Update attitude using computed corrections

Cbn_minus = eul2Cbn(att_fus');
Cbn_plus = (eye(3) + skew(attFeedBack))*Cbn_minus;
att_fus = [Cbn2eul(Cbn_plus)];

% att_fus = att_fus-attFeedBack ;
x_fus= [pos_fus;vel_fus;att_fus];

end
else

if isnan(gps_pos_lla)==1
x_fus= [pos_fus;vel_fus;att_fus];
stateError = zeros(9,1);
else

stateError = K*stateInnov;

posFeedBack = stateError(1:3);
velFeedBack = stateError(4:6);
attFeedBack = stateError(7:9);

% Update position and velocity using computed corrections

pos_fus = pos_fus - posFeedBack;


vel_fus = vel_fus - velFeedBack;

% Update attitude using computed corrections

Cbn_minus = eul2Cbn(att_fus');
Cbn_plus = (eye(3) + skew(attFeedBack))*Cbn_minus;
att_fus = [Cbn2eul(Cbn_plus)];

% att_fus = att_fus-attFeedBack ;
x_fus= [pos_fus;vel_fus;att_fus];

end
end
dxg = x_fus-x_vins;

