
Performance of Integrated Electro-Optical Navigation Systems

Takayuki Hoshizaki, Dominick Andrisani II, Purdue University, School of Aeronautics and Astronautics, West Lafayette, IN 47907-2023 Aaron Braun, Ade Mulyana and James Bethel Purdue University, School of Civil Engineering, West Lafayette, IN 47907-2051

BIOGRAPHY

Takayuki Hoshizaki is a Ph.D. candidate in the School of Aeronautics and Astronautics Engineering at Purdue University. He received his M.E. degree in Aeronautics and Astronautics Engineering from Kyushu University, Japan, in 1998. Dr. Dominick Andrisani is an Associate Professor in the School of Aeronautics and Astronautics Engineering at Purdue University. He received his Ph.D. degree in Electrical Engineering from the State University of New York at Buffalo in 1979. He has worked at NASA Langley Research Center (1970-1972) and at Calspan Advanced Technology Center (1972-1980). He has been at Purdue University since 1980. His current research interests include the design of flight control systems to achieve satisfactory flying qualities, aircraft inertial navigation, GPS-aided navigation systems, and air traffic management. Aaron Braun is a Ph.D. candidate in the Geomatics Department in the School of Civil Engineering at Purdue University. He received his M.S. degree in Civil Engineering from Purdue University in 2000. Ade Mulyana is a Ph.D. candidate in the Geomatics Department in the School of Civil Engineering at Purdue University. He received his M.S. degree in Geodetic Engineering from Delft University of Technology, The Netherlands, in 1996. Dr. James Bethel is an Associate Professor in the School of Civil Engineering at Purdue University. He received his Ph.D. degree in Civil Engineering from Purdue University in 1983. He worked at Teledyne Geotronics and Kern Instruments prior to coming to Purdue University in 1989. His current research interests include photogrammetry/remote sensing data adjustment and digital image processing. He is a coauthor of Introduction to Modern Photogrammetry, Wiley, 2001 [19].

ABSTRACT

Performance analyses are made for the new airborne tightly integrated INS (Inertial Navigation System) / GPS (Global Positioning System) / EO (Electro-Optical imaging system) navigation system, in which aircraft states, sensor biases, and unknown ground object coordinates are estimated simultaneously by a single Kalman filter. These analyses are done by: 1. comparing the new navigation system with existing tightly integrated INS/GPS navigation systems; 2. changing INS and GPS performance to see the dependency of integrated system performance on individual sensor performance; 3. investigating the capability of focusing on known ground objects (control points). The simulation results show that: 1. aircraft yaw angle determination (a weak point common to INS/GPS systems) is greatly improved by focusing on an unknown ground object with the INS/GPS/EO system; 2. GPS performance affects aircraft position, velocity, and orientation determination accuracy, while INS performance affects only aircraft orientation determination accuracy; 3. focusing on a control point results in (1) better navigation accuracy than focusing on an unknown ground object, and (2) the possibility of using control points as an alternative to GPS.

INTRODUCTION
MOTIVATION

The motivation for this research is the need for increased accuracy in aircraft navigation systems. Using the tightly coupled INS/GPS/EO navigation system, Hoshizaki et al. [13] have verified the hypothesis that increased accuracy in navigation results if the navigation and geo-positioning problems are solved simultaneously, and have also investigated the benefits of focusing on known ground points (control points). This study shows how aircraft navigation errors are influenced by various levels of sensor performance in the INS and GPS subsystems.
LITERATURE REVIEW

The use of imaging as an aid to INS-based navigation has traditionally been studied by way of terrain matching methods, in which aerial images are matched to an on-board digital elevation map [20]. When integrating navigation and photogrammetry, the first related problem is observer motion estimation via visual cues, where the camera is the only sensor used. A variety of methods have been reported to solve this problem: Heeger and Jepson [12] use a least squares formulation; Soatto et al. (1996) [22] and Soatto and Perona (1997) [23] use an analysis of topological manifolds to propose the Implicit Extended Kalman Filter (Implicit EKF); Gurfil and Rotstein [9] use an Implicit EKF to estimate aircraft states. The second related problem is object motion estimation via visual cues. There are two distinct approaches to this problem: optical flow methods and feature based methods. Optical flow methods have the advantage that they are free from object recognition problems [1]. A feature based method is taken in Broida et al. [3], where several feature marks on an object are tracked by a stationary camera to estimate the object's states. In their method, a batch process is used to give initial estimates based upon the first few images, and the IEKF (Iterated Extended Kalman Filter) then recursively estimates the object's states. This paper offers the idea of a two-stage estimator, where the first stage does a quick ground object location and passes this to the second stage, thereby improving the initialization of the second-stage estimator. Studies of the integration of observer motion estimation and object position estimation, especially those using knowledge from the INS, are seldom reported. Hagen and Heyerdahl [11] propose to use image position measurements with a digital elevation map for absolute positioning without using an INS; their IEKF tightly couples six DOF aircraft state estimation with the position estimation of one or more stationary ground objects. Hafskjold et al. [10] combine the technique used in Hagen and Heyerdahl [11] with an INS. While these methods depend on a digital elevation map for aircraft absolute positioning, Hoshizaki et al. [13] use GPS for the same purpose in the tightly coupled INS/GPS/EO system, resulting in improved navigation accuracy compared with traditional INS/GPS navigation systems.

TIGHTLY COUPLED INS/GPS/EO SYSTEM

Figure 1 shows the layout of the tightly coupled INS/GPS/EO system.

Figure 1: Layout of the tightly coupled INS/GPS/EO system (block diagram: the aircraft provides accelerations and angular velocities to the IMU; the navigation equation produces the state estimates; the Kalman filter uses pseudorange and pseudorange rate measurements from the GPS receiver and image position measurements from the camera to generate position, velocity, orientation, covariance, and bias corrections).

The simulation of the tightly coupled system consists of the aircraft model, the INS, the GPS, the imaging sensor (Charge Coupled Device camera), and the single Kalman filter. The mechanism of each component is briefly described in the following subsections.
ELLIPSOIDAL EARTH-BASED AIRCRAFT DYNAMICS

To simulate the actual flight trajectory as realistically as possible, we use an ellipsoidal-Earth model based on WGS-84 [6]. The dynamics of the aircraft are described by six DOF equations of motion [25], using a Cessna 182 as the aircraft model. The turbulence model used in the simulation is given by MIL-STD-1797A [7].
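As an illustration (not taken from the paper; it uses standard WGS-84 constants and relations, with function and variable names chosen here for clarity), the sketch below shows the ellipsoidal-Earth bookkeeping such a simulation needs: converting an NED velocity into geodetic latitude, longitude, and altitude rates.

```python
import numpy as np

# Standard WGS-84 defining constants (assumed values, not quoted from the paper)
A_WGS84 = 6378137.0                     # semi-major axis (m)
F_WGS84 = 1.0 / 298.257223563           # flattening
E2 = F_WGS84 * (2.0 - F_WGS84)          # first eccentricity squared

def radii_of_curvature(lat):
    """Meridian and prime-vertical radii of curvature at geodetic latitude lat (rad)."""
    den = 1.0 - E2 * np.sin(lat) ** 2
    r_meridian = A_WGS84 * (1.0 - E2) / den ** 1.5
    r_prime_vertical = A_WGS84 / np.sqrt(den)
    return r_meridian, r_prime_vertical

def geodetic_rates(v_ned, lat, h):
    """Latitude, longitude, and altitude rates from an NED velocity over the ellipsoid."""
    r_m, r_p = radii_of_curvature(lat)
    lat_dot = v_ned[0] / (r_m + h)                   # north velocity component
    lon_dot = v_ned[1] / ((r_p + h) * np.cos(lat))   # east velocity component
    h_dot = -v_ned[2]                                # down velocity is positive down
    return lat_dot, lon_dot, h_dot
```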
INERTIAL NAVIGATION SYSTEM

The Inertial Measurement Unit (IMU) in the INS, which consists of three accelerometers and three rate gyros, measures accelerations and angular velocities. These imperfect measurements contain small errors, often specified by a scale factor, nonlinearity, bias, and white noise [14] [17] [24]. For simplicity, only a bias modeled as a Markov process and the white noise are assumed as errors in both the accelerometers and the rate gyros. Based on the imperfect sensor outputs, the Navigation Equation is used to estimate aircraft velocity, position, and orientation. The most commonly used navigation equation, based on the NED (North-East-Down) coordinate system, is used in this study [26].
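A minimal sketch of this error model is shown below: a first-order Gauss-Markov bias (driving-noise PSD 2*beta*sigma^2, matching the appendix) plus additive white noise. The function name, sampling scheme, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def corrupt_imu(true_samples, dt, sigma_bias, tau, sigma_white, rng):
    """Add a first-order Gauss-Markov bias plus white noise to ideal IMU samples.

    true_samples: (n, 3) ideal accelerations or angular rates
    sigma_bias:   steady-state bias sigma; tau: bias correlation time (s)
    sigma_white:  white-noise sigma per sample
    """
    beta = 1.0 / tau
    phi = np.exp(-beta * dt)                         # exact discrete GM propagation factor
    q_discrete = sigma_bias * np.sqrt(1.0 - phi ** 2)
    bias = rng.normal(0.0, sigma_bias, size=3)       # draw the initial bias
    out = np.empty_like(true_samples)
    for k, sample in enumerate(true_samples):
        bias = phi * bias + rng.normal(0.0, q_discrete, size=3)
        out[k] = sample + bias + rng.normal(0.0, sigma_white, size=3)
    return out
```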

GLOBAL POSITIONING SYSTEM

The absolute measurements of aircraft position and velocity are obtained using a GPS receiver on board the aircraft. For this study, a single-frequency GPS receiver is simulated, which measures pseudoranges and pseudorange rates in real-time mode using the satellite broadcast ephemerides [16].

ELECTRO-OPTICAL SYSTEM

IMAGING GEOMETRY

Figure 2 describes the imaging geometry for a frame camera. (A frame camera is an imaging sensor model which defines how to reconstruct the bundle of rays connecting image points and the corresponding ground objects; see Mikhail et al. [19] for more details.) For purposes of geometry and mathematical modeling, the camera lens is represented by a single point, called the perspective center, even though the lens assembly is composed of many optical elements [19]. The focal length f of a fixed-focus lens is the distance along the optical axis from the perspective center to the image plane. The image coordinate system is defined on the positive image plane, since the projected view is reversed on the negative image plane, which is less convenient to work with. In Figure 2, the image points t1, t2 and t3 correspond to the ground objects T1, T2 and T3, respectively. C denotes the origin of the image coordinate system.

Figure 2: Imaging geometry for a frame camera (perspective center L, focal length f, negative and positive image planes, image points t1, t2, t3, and ground objects T1, T2, T3).

IMAGE POSITION MEASUREMENTS

Let the point t be one of the image points, and let the point T be the corresponding ground object point. Suppose that the image coordinates of the image point t are (x, y, 0). The image coordinates of the perspective center L are, from Figure 2, (x0, y0, f), where x0 and y0 are small offsets. Then

    p_c^{Lt} = Ct - CL = [x - x0, y - y0, -f]^T

where the subscript c denotes that the vector is described in image coordinate unit vectors. Since the LOS (Line of Sight) vector LT is an extension of the vector Lt, the following vector equation holds:

    p_c^{Lt} = k p_c^{LT}

where k is a scale factor. Letting X_T = [X_T, Y_T, Z_T]^T and X_L = [X_L, Y_L, Z_L]^T be the position vectors of the points T and L, respectively, described in the ECEF (Earth Centered Earth Fixed) coordinate system,

    p_c^{Lt} = k T_ce p_e^{LT}

or

    [x - x0, y - y0, -f]^T = k [U, V, W]^T,  where  [U, V, W]^T = T_ce ([X_T, Y_T, Z_T]^T - [X_L, Y_L, Z_L]^T).

Note that T_ce is the transformation matrix from the ECEF to the image coordinate system. Letting xc = x - x0 and yc = y - y0,

    [xc, yc, -f]^T = k [U, V, W]^T                                           (1)

Substituting k = -f/W, which is given by the third row, into the first and second rows, we obtain the image position measurement equations [19]:

    xc = -f U / W                                                            (2)
    yc = -f V / W                                                            (3)

In the image simulation, white noise corresponding to one pixel is added to the x and y image coordinates to simulate image inaccuracy.
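The sketch below shows how a single noisy image position measurement can be generated from Eqs. (2)-(3). It is illustrative only: the function and argument names are assumptions, the sign convention follows the reconstruction above, and the one-pixel value is the sigma = 5e-6 m figure used in the simulations.

```python
import numpy as np

PIXEL_SIGMA = 5e-6   # one pixel on the image plane, in meters (simulation value)

def image_measurement(X_T, X_L, T_ce, f, x0=0.0, y0=0.0, rng=None):
    """Project ground object X_T (ECEF) through the perspective center X_L (ECEF).

    T_ce rotates ECEF vectors into the image (camera) frame; f is the focal length.
    Returns the measured image coordinates (x, y), including the offsets (x0, y0).
    """
    U, V, W = T_ce @ (np.asarray(X_T, float) - np.asarray(X_L, float))
    xc = -f * U / W                       # Eq. (2)
    yc = -f * V / W                       # Eq. (3)
    if rng is not None:                   # one-pixel white measurement noise
        xc += rng.normal(0.0, PIXEL_SIGMA)
        yc += rng.normal(0.0, PIXEL_SIGMA)
    return xc + x0, yc + y0
```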

TIGHT COUPLING

In the tightly coupled INS/GPS/EO navigation system, the single integrated Kalman filter receives the measurements of pseudoranges and pseudorange rates from the GPS receiver and of ground object image coordinates from the imaging sensor. The residuals between these measurements and their estimates from the Kalman filter are used to compute correction signals. The state equation and the output equation of the Kalman filter are given by:

    dx/dt = F x + G v
    z     = H x + w

where x consists of 20 elements: three orientation errors, three velocity errors (north, east and down components), three position errors (longitude, latitude and altitude), three rate gyro biases, three accelerometer biases, the GPS receiver clock bias and drift, and three ground object coordinate errors. The matrix F consists of the linearized navigation equation, the linear bias dynamics, and the stationary ground object dynamics. The matrix H is obtained by linearizing the pseudorange, pseudorange rate, and image position equations. A least squares batch process and the IEKF (Iterated Extended Kalman Filter [15]) are used to allow large initial errors in the ground object position estimates and to reduce linearization errors in the image position equations. While the nonlinear navigation equation is numerically integrated at a high frequency, it is assumed that the small perturbation error states x(t) propagate linearly. Every one second, the aircraft states, sensor biases, and ground object coordinates are updated with a set of correction signals from the Kalman filter [2] [4] [8] [21].
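The sketch below outlines one propagate/correct cycle of such a filter. It is a generic extended Kalman filter step written under the paper's assumption that the small-perturbation error states propagate linearly between the 1 Hz updates; the iterated relinearization of the image equations (IEKF) is not shown, and all names are illustrative.

```python
import numpy as np

def time_update(x, P, F, G, Qv, dt):
    """Propagate the error state and covariance between measurement epochs."""
    Phi = np.eye(F.shape[0]) + F * dt        # first-order state transition matrix
    Qd = G @ Qv @ G.T * dt                   # discretized process noise covariance
    return Phi @ x, Phi @ P @ Phi.T + Qd

def measurement_update(x, P, z, z_pred, H, R):
    """Correct with stacked pseudorange, pseudorange-rate, and image residuals."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - z_pred)             # residual-driven correction
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new
```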

SIMULATION

To investigate the performance of the INS/GPS/EO system and the benefits of focusing on a control point, simulations are conducted according to the following three scenarios: Scenario I, the traditional INS/GPS navigation system; Scenario II, the INS/GPS/EO system focusing on a single unknown ground object; Scenario III, the INS/GPS/EO system focusing on a control point whose location is known with an accuracy of sigma = 0.1 m. In each scenario, six cases of GPS performance as shown in Table 1 and five cases of INS performance as shown in Table 2, for a total of 30 combinations of navigation sensors, are investigated. Additive white noise of one pixel (sigma = 5x10^-6 m) is used as the error model of the imaging sensor and is kept constant through all the simulations.

For each combination of the navigation sensors, an ensemble of 30 experiments is performed, changing the initial conditions and the random noise inputs used to generate the true models (wind gusts, aircraft trajectories, navigation sensor biases, and ground object coordinates). Figure 3 shows the 30 different actual flight trajectories and the 30 different true ground object locations in the local coordinate system; one flight trajectory and one ground object location are used in each experiment. In the local coordinate system, the x, y and z axes are defined along the Easterly, Northerly and Upward directions, respectively, with the origin located on the equator.

Table 1: GPS performance.

    Notation   Pseudorange sigma (m)   Pseudorange rate sigma (m/s)
    RTK1       7.5x10^-4               0.03
    RTK2       0.1                     0.03
    2001H      6.6                     0.05
    2001M      20                      0.275
    2001L      33.3                    0.5
    Broken     1000                    1000

RTK1: A real-time kinematic differential GPS receiver using the carrier phase on a differential channel is assumed. This is considered the best performance available today (February 2003).
RTK2: A real-time kinematic differential GPS receiver using the C/A code phase is assumed. This is considered the second best performance available today.
2001H: A high performance GPS receiver in the year 2001 using the P code is assumed.
2001M: A middle performance GPS receiver between 2001H and 2001L is assumed.
2001L: A low performance GPS receiver in the year 2001 using the C/A code is assumed.
Broken: It is assumed that the GPS receiver is jammed or out of order.

The nominal trajectory in the simulations is approximately a straight path to the North at an altitude of 6096 m (20,000 ft), a velocity of 61 m/s (200 ft/s), and a 60 second flight duration, starting from a point on the equator. The mean location of the ground objects is 1829 m (6000 ft) north and 3048 m (10,000 ft) west of the starting point of the aircraft. The nominal flight trajectory and the mean location of the ground objects form a good aircraft/ground object geometry, since the direction of the line of sight changes substantially during the simulation.

Table 2: INS performance.

    Rate gyro      Bias stability sigma (deg/hr)    Random walk sqrt(PSD) (deg/hr/sqrt(Hz))
    2010           0.0001                           0.00001
    2005           0.001                            0.0006
    2001H          0.003                            0.0015
    2001M          0.1765                           0.036
    2001L          0.35                             0.07

    Accelerometer  Bias stability sigma (10^-6 g)   Random walk sqrt(PSD) (10^-6 g/sqrt(Hz))
    2010           0.1                              0.1
    2005           4                                1
    2001H          25                               5
    2001M          37.5                             27.5
    2001L          50                               50

2010: An imaginary high performance INS, more than 10 times better than the INS 2005, is assumed. This level of accuracy is not available to date (February 2003).
2005: An imaginary high performance INS, much more accurate than the INS 2001H, is assumed. This level of accuracy is still difficult to find in commercial products to date.
2001H: A high performance INS in the year 2001 using ring laser gyros is assumed.
2001M: A middle performance INS between 2001H and 2001L is assumed.
2001L: A low performance INS in the year 2001 using fiber optic gyros is assumed.

After conducting the simulations of one ensemble, corresponding to one of the navigation sensor combinations, the ensemble average (RMS value) of the navigation errors at the 60 second time point is computed and used as a data point in the contour plots. For example, Figure 4 shows the time histories of the aircraft roll angle estimation errors for the 30 experiments. The 2-sigma boundaries are represented by the dotted lines, where sigma^2 is the theoretical variance as computed by the IEKF. The thick solid line is the time history of twice the RMS ensemble average. For position and velocity accuracy, ensemble averages of the magnitude of the error vectors are used. Based upon the 30 data points given by the 30 ensemble averages corresponding to the navigation sensor combinations, contour lines are created as shown in Figures 5-19. The following subsections explain the detailed simulation configurations and the results.
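As a small worked example of how each contour data point is formed, the sketch below computes the RMS ensemble average over the 30 experiments (illustrative code, not the authors'; doubling the same quantity at every time step gives the thick 2xRMS curve of Figure 4).

```python
import numpy as np

def rms_ensemble_average(final_errors):
    """RMS over the 30 experiments of one navigation-sensor combination.

    final_errors: array of shape (30,) for an angle error, or (30, 3) for a
    position/velocity error vector, evaluated at the 60 second time point.
    """
    e = np.asarray(final_errors, float)
    if e.ndim > 1:                       # position/velocity: use the vector magnitude
        e = np.linalg.norm(e, axis=1)
    return float(np.sqrt(np.mean(e ** 2)))
```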

Figure 3: The local coordinate system and the aircraft/ground object geometry (the 30 actual aircraft flight trajectories and the 30 true ground object locations, plotted against x_local Easting, y_local Northing, and z_local Upward, in meters).
I. INS/GPS

The traditional INS/GPS system is simulated to estimate the aircraft states in this scenario. The update of the estimates in the Kalman filter is performed at 1 Hz, as the GPS receiver measurements are obtained. Figures 5-9 show the contour plots describing the ensemble averages of the navigation errors at the 60 second time point as a function of navigation sensor performance. The first two figures, Figures 5 and 6, show the aircraft position and velocity determination errors, respectively. These figures clearly show that GPS performance is the dominant factor for aircraft position and velocity estimation accuracy in the INS/GPS navigation system. The following three figures, Figures 7-9, show the aircraft roll, pitch and yaw angle determination errors, respectively. Notice that: 1. yaw angle accuracy is significantly worse than roll and pitch angle accuracy; 2. roll and pitch angle accuracy depends on both GPS and INS performance, while yaw angle accuracy depends mostly on GPS performance. Note that the thick contour lines track the same accuracy in the corresponding figures over different simulations to help with visualization. For example, the roll angle accuracy of 1x10^-5 radians is emphasized with thick lines in Figures 7, 12 and 17.
II. INS/GPS/EO WITH AN UNKNOWN GROUND OBJECT

The tightly coupled INS/GPS/EO system is simulated in this scenario. The imaging sensor is always boresighting a single unknown ground object precisely, assuming the use of a gimballed image tracking system. During the 60 second flight, the imaging sensor keeps taking images of the ground object at 1 Hz, at the same times as the GPS receiver catches the signals, for a total of 61 images. The batch process uses the first 20 images to obtain the initial estimates of the ground object location. The INS/GPS/EO filter based on an IEKF works on the remaining 41 images, starting with the initial ground object location estimates given by the batch process. Hence, the navigation system is equivalent to the INS/GPS system during 0-19 seconds.

Figure 4: The time histories of aircraft roll angle determination errors corresponding to the data point of (INS, GPS) = (2001H, 2001H) in Figure 12.

Figures 10-14 show the ensemble averages of the navigation errors at the 60 second time point for this scenario. While Figures 10 and 11 (aircraft position and velocity determination errors, respectively) do not show any significant difference from Simulation I, Figure 12 shows that aircraft roll angle determination accuracy is improved significantly in the region of poor GPS performance compared to Figure 7. Roll angle accuracy is largely independent of GPS performance. Meanwhile, the pitch angle accuracy shown in Figure 13 is not improved significantly compared to Figure 8. This is because the unknown ground object is located to the left of the aircraft's straight flight path, where the aircraft roll angle has a larger impact than the pitch angle in the estimation of aircraft position, orientation, and the ground object coordinates. Correspondingly, if the ground object were located along the flight line, these two figures would be switched. Comparing Figure 14 to Figure 9 shows that aircraft yaw angle determination accuracy is dramatically improved for the region where the GPS receiver is working. The comparison between Figures 9 and 14, at the point (INS, GPS) = (2001H, 2001H), shows that the improvement factor is about 26.

III. INS/GPS/EO WITH A CONTROL POINT

Assume now that the INS/GPS/EO system is focusing on the same single ground object as Scenario II in each experiment. However, the ground object location is known with 0.1 meter accuracy from the beginning of the simulation in this scenario. Hence, the tightly coupled mode is activated throughout 0-60 seconds. Notice that when the GPS receiver is Broken, and if INS performance is equivalent to or better than 2001H, a certain level of navigation accuracy is guaranteed for aircraft position, velocity, and orientation, comparable to what we can obtain with a low performance GPS receiver in Simulation I. For instance, the 2.5 meter positioning accuracy obtained at the point (INS, GPS) = (2001H, Broken) in Figure 15 is better than that at the point (INS, GPS) = (2001H, 2001L) in Figure 5. This implies that focusing on a control point with an accuracy of 0.1 m, with a relatively good aircraft/control point geometry and a certain level of INS performance, can substitute for a low performance GPS receiver. Comparing the accuracy with Simulation II for a particular point, e.g., (2001H, 2001H), we notice that focusing on a control point results in better navigation accuracy by a factor of two for position determination and 1.3 for yaw angle determination.

CONCLUSIONS

The following conclusions are based upon: (1) a relatively good aircraft/ground object (control point) geometry; (2) the use of 61 images separated in time by 1 second; (3) 0.1 meter accuracy of the control point.

INS/GPS System

1. GPS performance is the dominant factor for aircraft position, velocity, and yaw angle estimation accuracy. Both GPS and INS performance are important factors for aircraft roll and pitch angle estimation accuracy.

INS/GPS/EO System with an Unknown Ground Object

2. Aircraft yaw angle estimation accuracy is significantly improved compared to the INS/GPS system for the region where the GPS receiver is working. Its accuracy depends more on GPS performance than on INS performance.

3. When the unknown ground object is located to the left or right of the aircraft flight path, aircraft roll angle estimation accuracy is improved significantly compared to the INS/GPS system, becoming largely independent of GPS performance. It is expected that the aircraft pitch angle estimation accuracy would be significantly improved if the unknown ground object were located along the aircraft flight path.

4. GPS performance is still the dominant factor for aircraft position and velocity estimation accuracy.

INS/GPS/EO System with a Control Point

5. Focusing on a control point gives much better navigation accuracy than focusing on an unknown ground object or a simple INS/GPS system, especially when GPS performance is poor.

6. In particular, aircraft yaw angle estimation accuracy becomes 30 times better than the INS/GPS system and 1.3 times better than the INS/GPS/EO system focusing on an unknown ground object for (INS, GPS) = (2001H, 2001H).

7. Focusing on a control point with a certain level of INS performance can substitute for a low performance GPS receiver.

Simulation I. INS/GPS System
Figure 5: Aircraft position determination error.

Figure 6: Aircraft velocity determination error.

Figure 7: Aircraft roll angle determination error.

Figure 8: Aircraft pitch angle determination error.

Figure 9: Aircraft yaw angle determination error.

(Each of Figures 5-19 is a contour plot of the ensemble-average error over INS performance (2010, 2005, 2001H, 2001M, 2001L) and GPS performance (RTK1, RTK2, 2001H, 2001M, 2001L, Broken).)

Simulation II. INS/GPS/EO System with an Unknown Ground Object

Figure 10: Aircraft position determination error.

Figure 11: Aircraft velocity determination error.

Figure 12: Aircraft roll angle determination error.

Figure 13: Aircraft pitch angle determination error.

Figure 14: Aircraft yaw angle determination error.

Simulation III. INS/GPS/EO System with a Control Point (0.1 m accuracy)

Figure 15: Aircraft position determination error.

Figure 16: Aircraft velocity determination error.

Figure 17: Aircraft roll angle determination error.

Figure 18: Aircraft pitch angle determination error.

Figure 19: Aircraft yaw angle determination error.

APPENDIX

A part of the mathematical background for this study is described in this appendix. Readers should refer to [13] for the comprehensive description.

Kalman Filter State Equation

The linearized navigation equation and the linear equations of the sensor biases, the clock bias, and the stationary ground object dynamics are combined to yield the 20-element state equation used in the Kalman filter:

    dx/dt = F(x) x + G v                                                     (4)
    z     = H(x) x + w                                                       (5)

Kalman filter states (20-state Kalman filter):

    x = [ three orientation angle errors,
          vN, vE, vD       : velocity errors,
          lambda, phi, h   : position errors in geodetic coordinates,
          Bx, By, Bz       : rate gyro biases,
          Bax, Bay, Baz    : accelerometer biases,
          b, d             : the clock bias and drift,
          XT, YT, ZT ]^T   : ground object position errors

Process noise:

    v = [ vx, vy, vz         : rate gyro white noises with the PSD value Q,
          vax, vay, vaz      : accelerometer white noises with the PSD value Qa,
          vBx, vBy, vBz      : rate gyro bias white noises with the PSD value 2*beta*sigma_B^2,
          vBax, vBay, vBaz   : accelerometer bias white noises with the PSD value 2*beta_a*sigma_Ba^2,
          vb, vd ]^T         : clock bias and drift white noises with the PSD values Sb and Sd

where sigma_B and sigma_Ba are the sigma-values of the rate gyro and accelerometer biases specified by the sensor performance, and beta and beta_a are the reciprocals of the time constants tau and tau_a. 60 seconds are used for both tau and tau_a.

State matrices:

    F(x) (20x20) = [ F1 (9x9)   G1 (9x6)   0 (9x2)    0 (9x3)  ]
                   [ 0 (6x9)    F2 (6x6)   0 (6x2)    0 (6x3)  ]
                   [ 0 (2x9)    0 (2x6)    F3 (2x2)   0 (2x3)  ]
                   [ 0 (3x9)    0 (3x6)    0 (3x2)    F4 (3x3) ]

    G (20x14)    = [ G1 (9x6)   0 (9x6)    0 (9x2)  ]
                   [ 0 (6x6)    G2 (6x6)   0 (6x2)  ]
                   [ 0 (2x6)    0 (2x6)    G3 (2x2) ]
                   [ 0 (3x6)    0 (3x6)    0 (3x2)  ]

    G1 (9x6)     = [ T_nb (3x3)   0 (3x3)    ]
                   [ 0 (3x3)      T_nb (3x3) ]
                   [ 0 (3x3)      0 (3x3)    ]

where F1 and G1 are given by linearizing the Navigation Equation, F2 and G2 define the bias dynamics (a Markov process is assumed), F3 and G3 define the clock bias dynamics, and F4 is a 3x3 zero matrix since the ground object is assumed to be stationary.

Linearized measurements:

    z    = [ rho_GPS,1, ..., rho_GPS,k, rhodot_GPS,1, ..., rhodot_GPS,k, xc_camera, yc_camera ]^T
    h(x) = [ h1,1(x), ..., h1,k(x), h2,1(x), ..., h2,k(x), h3(x), h4(x) ]^T
    H(x) = [ H1,1(x); ...; H1,k(x); H2,1(x); ...; H2,k(x); H3(x); H4(x) ]
    w    = [ w_rho,1, ..., w_rho,k, w_rhodot,1, ..., w_rhodot,k, w_xc, w_yc ]^T

where rho_GPS,i is the pseudorange measurement given by the GPS receiver, rhodot_GPS,i is the pseudorange rate measurement given by the GPS receiver, k is the number of visible satellites, xc_camera and yc_camera are the image coordinate measurements given by the imaging sensor, hi (i = 1-4) are the nonlinear measurement equations, Hi (i = 1-4) are the linearized measurement equations, w_rho,i is the white pseudorange measurement noise specified by the sigma-value sigma_rho (see Table 3), w_rhodot,i is the white pseudorange rate measurement noise specified by the sigma-value sigma_rhodot (see Table 3), and w_xc and w_yc are the white image coordinate measurement noises.

SENSOR PERFORMANCE

The parameters used in the simulations are summarized as follows:

Table 3: GPS performance.

    Name                            Notation        Value      Unit
    Pseudorange (sigma)             sigma_rho       Table 1    m
    Pseudorange rate (sigma)        sigma_rhodot    Table 1    m/s
    Clock bias white noise (PSD)    Sb              0.009      m^2/Hz
    Clock drift white noise (PSD)   Sd              0.0355     (m/s)^2/Hz

Table 4: INS performance.

    Name                                   Notation     Value      Unit
    Rate gyro bias (sigma)                 sigma_B      Table 2    deg/hr
    Rate gyro random walk (sqrt PSD)       sqrt(Q)      Table 2    (deg/hr)/sqrt(Hz)
    Accelerometer bias (sigma)             sigma_Ba     Table 2    g
    Accelerometer random walk (sqrt PSD)   sqrt(Qa)     Table 2    g/sqrt(Hz)

INITIAL ERROR SIGMAS

The initial error sigma values used in producing the actual aircraft trajectories and biases are summarized in Table 5. The initial values of the covariance matrix in the Kalman filter are summarized in Table 6. Note that the Kalman filter knows only the mean values of the initial conditions and the covariances of the initial errors.

Table 5: Initial error sigma values.

    Notation                  Value                                  Unit
    [orientation errors]_0    according to [E1, E2, E3]_0            rad
    [E1, E2, E3]_0            [0.0001, 0.0001, 0.002]                rad
    [vN, vE, vD]_0            [0.02, 0.02, 0.02]                     m/s
    [lambda, phi]_0           [1.57x10^-7, 1.57x10^-7]               rad
    h_0                       1                                      m
    [Bx, By, Bz]_0            [sigma_B, sigma_B, sigma_B]            rad/s
    [Bax, Bay, Baz]_0         [sigma_Ba, sigma_Ba, sigma_Ba]         g
    b_0                       sqrt(Sb)                               m
    d_0                       sqrt(Sd)                               m/s

Table 6: Initial error covariance values.

    Notation                  Value                                              Unit
    P0(orientation errors)    squared sigmas of the initial orientation errors   rad^2
    P0(vN, vE, vD)            [sigma_vN0^2, sigma_vE0^2, sigma_vD0^2]            (m/s)^2
    P0(lambda, phi)           [sigma_lambda0^2, sigma_phi0^2]                    rad^2
    P0(h)                     sigma_h0^2                                         m^2
    P0(Bi), i = x, y, z       sigma_B^2                                          (rad/s)^2
    P0(Bai), i = x, y, z      sigma_Ba^2                                         g^2
    P0(b, d)                  [Sb, Sd]                                           m^2/Hz, (m/s)^2/Hz

BATCH MODE LEAST SQUARES INITIALIZER

Since the Kalman filter requires reasonably accurate initial ground object coordinates, we initialize the filter after a separate batch process has analyzed the first few images, as described below. Eq. (1) can be written as

    (1/k1) M [xc1, yc1, -f]^T = [XT - XL1, YT - YL1, ZT - ZL1]^T

where M = T_ce^T, with elements m_jk, and the subscript 1 denotes the first image. Looking at the detailed elements,

    (1/k1) [ m11 m21 m31 ] [ xc1 ]   [ XT - XL1 ]
           [ m12 m22 m32 ] [ yc1 ] = [ YT - YL1 ]
           [ m13 m23 m33 ] [ -f  ]   [ ZT - ZL1 ]

The last row gives

    ZT - ZL1 = (1/k1) (m13 xc1 + m23 yc1 - m33 f)                            (6)

Substituting this into the first and second rows, we have

    [(m11 xc1 + m21 yc1 - m31 f) / (m13 xc1 + m23 yc1 - m33 f)] (ZT - ZL1) = XT - XL1     (7)
    [(m12 xc1 + m22 yc1 - m32 f) / (m13 xc1 + m23 yc1 - m33 f)] (ZT - ZL1) = YT - YL1     (8)

Letting

    c11 = (m11 xc1 + m21 yc1 - m31 f) / (m13 xc1 + m23 yc1 - m33 f)
    c21 = (m12 xc1 + m22 yc1 - m32 f) / (m13 xc1 + m23 yc1 - m33 f)

Eqs. (7) and (8) reduce to

    c11 (ZT - ZL1) = XT - XL1
    c21 (ZT - ZL1) = YT - YL1

Rearranging them,

    [ 1  0  -c11 ] [ XT ]   [ XL1 - c11 ZL1 ]
    [ 0  1  -c21 ] [ YT ] = [ YL1 - c21 ZL1 ]
                   [ ZT ]

or equivalently,

    A x = b                                                                  (9)

Since there are three unknowns, (XT, YT, ZT), we need at least three data; one image gives two data, therefore we need at least two images. In general, when i images are available, the equation becomes

    [ 1  0  -c11 ]          [ XL1 - c11 ZL1 ]
    [ 0  1  -c21 ] [ XT ]   [ YL1 - c21 ZL1 ]
    [ 1  0  -c12 ] [ YT ] = [ XL2 - c12 ZL2 ]
    [ 0  1  -c22 ] [ ZT ]   [ YL2 - c22 ZL2 ]
    [     ...    ]          [       ...     ]
    [ 1  0  -c1i ]          [ XLi - c1i ZLi ]
    [ 0  1  -c2i ]          [ YLi - c2i ZLi ]

The least squares solution is given by

    x = [XT, YT, ZT]^T = (A^T A)^-1 A^T b

This convenient method allows ground object locations to be estimated roughly without initial estimates.
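A compact sketch of this batch initializer follows (illustrative only; the data layout and names are assumptions). Each image contributes the two rows of A and b derived above, and the ground object position is recovered through the normal equations.

```python
import numpy as np

def batch_initialize_ground_object(images, f):
    """Least-squares ground object ECEF position from several images.

    images: iterable of (xc, yc, T_ce, X_L), where (xc, yc) are the image
    coordinates with offsets removed, T_ce is the ECEF-to-image rotation, and
    X_L is the perspective-center ECEF position for that image.
    """
    rows, rhs = [], []
    for xc, yc, T_ce, X_L in images:
        den = T_ce[0, 2] * xc + T_ce[1, 2] * yc - T_ce[2, 2] * f   # m13 xc + m23 yc - m33 f
        c1 = (T_ce[0, 0] * xc + T_ce[1, 0] * yc - T_ce[2, 0] * f) / den
        c2 = (T_ce[0, 1] * xc + T_ce[1, 1] * yc - T_ce[2, 1] * f) / den
        rows += [[1.0, 0.0, -c1], [0.0, 1.0, -c2]]
        rhs += [X_L[0] - c1 * X_L[2], X_L[1] - c2 * X_L[2]]
    A, b = np.asarray(rows), np.asarray(rhs)
    return np.linalg.solve(A.T @ A, A.T @ b)     # (A^T A)^-1 A^T b
```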

REFERENCES

[1] Ballard, D. H. and Kimball, O. A., Rigid Body Motion from Depth and Optical Flow, Computer Vision, Graphics, and Image Processing, vol. 22, April 1983, pp. 95-115.
[2] Britting, K. R., Inertial Navigation Systems Analysis, Wiley Interscience, New York, NY, 1971.
[3] Broida, T. J., Chandrashekhar, S. and Chellappa, R., Recursive 3-D Motion Estimation from a Monocular Image Sequence, IEEE Transactions on Aerospace and Electronic Systems, vol. 26, no. 4, July 1990, pp. 639-656.
[4] Brown, R. G. and Hwang, P. Y. C., Introduction to Random Signals and Applied Kalman Filtering, John Wiley & Sons, Inc., New York, NY, 1997.
[5] Buechler, D. and Foss, M., Integration of GPS and Strapdown Inertial Subsystems into a Single Unit, Navigation: Journal of The Institute of Navigation, vol. 34, no. 2, Summer 1987, pp. 140-159.
[6] Department of Defense, World Geodetic System 1984, Its Definition and Relationships with Local Geodetic Systems, National Imagery and Mapping Agency Technical Report, 1984.
[7] Department of Defense, Military Standard for Flying Qualities of Piloted Aircraft, MIL-STD-1797A.
[8] Gelb, A., Applied Optimal Estimation, M.I.T. Press, Cambridge, MA, 1974.
[9] Gurfil, P. and Rotstein, H., Partial Aircraft State Estimation from Visual Motion Using the Subspace Constraints Approach, Journal of Guidance, Control, and Dynamics, vol. 24, no. 5, September-October 2001, pp. 1016-1028.
[10] Hafskjold, B. H., Jalving, B., Hagen, P. E. and Gade, K., Integrated Camera-Based Navigation, Navigation: Journal of the Institute of Navigation, vol. 53, no. 2, 2000, pp. 237-245.
[11] Hagen, E. and Heyerdahl, E., Navigation by Images, Modeling, Identification and Control, vol. 14, no. 3, 1993, pp. 133-143.
[12] Heeger, D. J. and Jepson, A. D., Subspace Methods for Recovering Rigid Motion I: Algorithm and Implementation, International Journal of Computer Vision, vol. 7, no. 2, 1992, pp. 95-117.
[13] Hoshizaki, T., Andrisani II, D., Braun, A., Mulyana, A. and Bethel, J., Optical Navigation Systems, AIAA Guidance, Navigation and Control Conference and Exhibit, August 2003 (to be published).
[14] IEEE, IEEE Standard Specification Format Guide and Test Procedure for Single-Axis Laser Gyros, IEEE Std. 647-1995, 1995.
[15] Jazwinski, A. H., Stochastic Processes and Filtering Theory, Academic Press, Inc., New York, NY, 1970.
[16] Kaplan, E. D., Understanding GPS: Principles and Applications, Artech House, Norwood, MA, 1996.
[17] Lawrence, A., Modern Inertial Technology, Springer-Verlag, New York, NY, 1992.
[18] Martinez, P. and Klotz, A., A Practical Guide to CCD Astronomy, Cambridge University Press, Cambridge, United Kingdom, 1998.
[19] Mikhail, E. M., Bethel, J. S. and McGlone, J. C., Introduction to Modern Photogrammetry, John Wiley & Sons, Inc., New York, NY, 2001.
[20] Rodriguez, J. J. and Aggarwal, J. K., Matching Aerial Images to 3-D Terrain Maps, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 12, December 1990, pp. 1138-1149.
[21] Rogers, R. M., Applied Mathematics in Integrated Navigation Systems, AIAA Education Series, AIAA, Reston, VA, 2000.
[22] Soatto, S., Frezza, R. and Perona, P., Motion Estimation via Dynamic Vision, IEEE Transactions on Automatic Control, vol. 41, no. 3, March 1996, pp. 393-413.
[23] Soatto, S. and Perona, P., Recursive 3-D Visual Motion Estimation Using Subspace Constraints, International Journal of Computer Vision, vol. 22, no. 3, 1997, pp. 235-259.
[24] Stieler, B. and Winter, H., Gyroscopic Instruments and Their Application to Flight Testing, AGARDograph No. 160, vol. 15, NATO/AGARD, 1982.
[25] Stevens, B. L. and Lewis, F. L., Aircraft Control and Simulation, John Wiley & Sons, Inc., New York, NY, 1992.
[26] Titterton, D. H. and Weston, J. L., Strapdown Inertial Navigation Technology, Peter Peregrinus Ltd., Stevenage, Herts., England, U.K., 1997.
