PRINCIPLES OF REMOTE SENSING

INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION
© 2004 ITC

Principles of
Remote Sensing
An introductory textbook

Editors
Norman Kerle

Lucas L. F. Janssen

Gerrit C. Huurneman

Authors
Wim H. Bakker
Karl A. Grabmaier
Gerrit C. Huurneman
Freek D. van der Meer
Anupma Prakash
Klaus Tempfli

Ambro S. M. Gieske
Ben G. H. Gorte
Chris A. Hecker
John A. Horn
Lucas L. F. Janssen
Norman Kerle
Gabriel N. Parodi
Christine Pohl
Colin V. Reeves
Frank J. van Ruitenbeek
Michael J. C. Weir
Tsehaie Woldai


Cover illustration:
Paul Klee (1879–1940), Chosen Site (1927)
Pen-drawing and water-colour on paper. Original size: 57.8 × 40.5 cm.
Private collection, Munich
© Paul Klee, Chosen Site, 2001 c/o Beeldrecht Amstelveen
Cover page design: Wim Feringa
All rights reserved. No part of this book may be reproduced or translated in any form, by
print, photoprint, microfilm, microfiche or any other means without written permission
from the publisher.
Published by:
The International Institute for Geo-Information Science and Earth Observation
(ITC), Hengelosestraat 99, P.O. Box 6, 7500 AA Enschede, The Netherlands
CIP-GEGEVENS KONINKLIJKE BIBLIOTHEEK, DEN HAAG

Principles of Remote Sensing


Norman Kerle, Lucas L. F. Janssen and Gerrit C. Huurneman (eds.)
(ITC Educational Textbook Series; 2)
Third edition
In print: ISBN 90-6164-227-2, ITC, Enschede, The Netherlands
ISSN 1567-5777, ITC Educational Textbook Series


Contents

1  Introduction to remote sensing (L. L. F. Janssen)
   1.1  Spatial data acquisition
   1.2  Application of remote sensing
   1.3  Structure of this textbook

2  Electromagnetic energy and remote sensing (T. Woldai)
   2.1  Introduction
   2.2  Electromagnetic energy
        2.2.1  Waves and photons
        2.2.2  Sources of EM energy
        2.2.3  Electromagnetic spectrum
        2.2.4  Active and passive remote sensing
   2.3  Energy interaction in the atmosphere
        2.3.1  Absorption and transmission
        2.3.2  Atmospheric scattering
   2.4  Energy interactions with the Earth's surface
        2.4.1  Spectral reflectance curves

3  Sensors and platforms (L. L. F. Janssen & W. H. Bakker)
   3.1  Introduction
   3.2  Sensors
        3.2.1  Passive sensors
        3.2.2  Active sensors
   3.3  Platforms
        3.3.1  Airborne remote sensing
        3.3.2  Spaceborne remote sensing
   3.4  Image data characteristics
   3.5  Data selection criteria

4  Aerial cameras (J. A. Horn & K. A. Grabmaier)
   4.1  Introduction
   4.2  Aerial camera
        4.2.1  Lens cone
        4.2.2  Film magazine and auxiliary data
        4.2.3  Camera mounting
   4.3  Spectral and radiometric characteristics
        4.3.1  General sensitivity
        4.3.2  Spectral sensitivity
        4.3.3  True colour and colour infrared photography
        4.3.4  Scanning
   4.4  CCD as image recording device
   4.5  Spatial characteristics
        4.5.1  Scale
        4.5.2  Spatial resolution
   4.6  Relief displacement
   4.7  Aerial photography missions
   4.8  Recent developments in aerial photography

5  Multispectral scanners (W. H. Bakker)
   5.1  Introduction
   5.2  Whiskbroom scanner
        5.2.1  Spectral characteristics of a whiskbroom
        5.2.2  Geometric characteristics of a whiskbroom
   5.3  Pushbroom sensor
        5.3.1  Spectral characteristics of a pushbroom
        5.3.2  Geometric characteristics of a pushbroom
   5.4  Some operational Earth observation systems
        5.4.1  Low-resolution systems
        5.4.2  Medium-resolution systems
        5.4.3  High-resolution systems
        5.4.4  Imaging spectrometry, or hyperspectral systems
        5.4.5  Example of a large multi-instrument system
        5.4.6  Future developments

6  Active sensors (C. Pohl, K. Tempfli & G. C. Huurneman)
   6.1  Introduction
   6.2  Radar
        6.2.1  What is radar?
        6.2.2  Principles of imaging radar
        6.2.3  Geometric properties of radar
        6.2.4  Data formats
        6.2.5  Distortions in radar images
        6.2.6  Interpretation of radar images
        6.2.7  Applications of radar
        6.2.8  INSAR
        6.2.9  Differential INSAR
        6.2.10 Application of (D)InSAR
        6.2.11 Supply market
        6.2.12 SAR systems
        6.2.13 Trends
   6.3  Laser scanning
        6.3.1  Basic principle
        6.3.2  ALS components and processes
        6.3.3  System characteristics
        6.3.4  Variants of Laser Scanning
        6.3.5  Supply Market

7  Remote sensing below the ground surface (C. V. Reeves)
   7.1  Introduction
   7.2  Gamma-ray surveys
   7.3  Gravity and magnetic anomaly mapping
   7.4  Electrical imaging
   7.5  Seismic surveying

8  Radiometric correction (G. N. Parodi & A. Prakash)
   8.1  Introduction
   8.2  From satellite to ground radiances: the atmospheric correction
   8.3  Atmospheric correction in the visible part of the spectrum
        8.3.1  Cosmetic corrections
        8.3.2  Relative AC methods based on ground reflectance
        8.3.3  Absolute AC methods based on atmospheric processes

9  Geometric aspects (L. L. F. Janssen, M. J. C. Weir, K. A. Grabmaier & N. Kerle)
   9.1  Introduction
   9.2  Two-dimensional approaches
        9.2.1  Georeferencing
        9.2.2  Geocoding
   9.3  Three-dimensional approaches
        9.3.1  Orientation
        9.3.2  Monoplotting
        9.3.3  Orthoimage production
        9.3.4  Stereo restitution

10 Image enhancement and visualisation (B. G. H. Gorte & E. M. Schetselaar)
   10.1  Introduction
   10.2  Perception of colour
         10.2.1  Tri-stimuli model
         10.2.2  Colour spaces
   10.3  Visualization of image data
         10.3.1  Histograms
         10.3.2  Single band image display
   10.4  Filter operations
         10.4.1  Noise reduction
         10.4.2  Edge enhancement
   10.5  Colour composites
         10.5.1  Application of RGB and IHS for image fusion

11 Visual image interpretation (L. L. F. Janssen)
   11.1  Introduction
   11.2  Image understanding and interpretation
         11.2.1  Human vision
         11.2.2  Interpretation elements
         11.2.3  Stereoscopic vision
   11.3  Application of visual image interpretation
         11.3.1  Soil mapping with aerial photographs
         11.3.2  Land cover mapping from multispectral data
         11.3.3  Some general aspects
   11.4  Quality aspects

12 Digital image classification (L. L. F. Janssen & B. G. H. Gorte)
   12.1  Introduction
   12.2  Principle of image classification
         12.2.1  Image space
         12.2.2  Feature space
         12.2.3  Image classification
   12.3  Image classification process
         12.3.1  Preparation for image classification
         12.3.2  Supervised image classification
         12.3.3  Unsupervised image classification
         12.3.4  Classification algorithms
   12.4  Validation of the result
   12.5  Problems in image classification

13 Thermal remote sensing (C. A. Hecker & A. S. M. Gieske)
   13.1  Introduction
   13.2  Principles of Thermal Remote Sensing
         13.2.1  The physical laws
         13.2.2  Blackbodies and emissivity
         13.2.3  Radiant and kinetic temperatures
   13.3  Processing of thermal data
         13.3.1  Band ratios and transformations
         13.3.2  Determining kinetic surface temperatures
   13.4  Thermal applications
         13.4.1  Rock emissivity mapping
         13.4.2  Thermal hotspot detection

14 Imaging Spectrometry (F. D. van der Meer, F. J. A. van Ruitenbeek & W. H. Bakker)
   14.1  Introduction
   14.2  Reflection characteristics of rocks and minerals
   14.3  Pre-processing of imaging spectrometer data
   14.4  Atmospheric correction of imaging spectrometer data
   14.5  Thematic analysis of imaging spectrometer data
         14.5.1  Spectral matching algorithms
         14.5.2  Spectral unmixing
   14.6  Applications of imaging spectrometry data
         14.6.1  Geology and resources exploration
         14.6.2  Vegetation sciences
         14.6.3  Hydrology
   14.7  Imaging spectrometer systems
   14.8  Summary

Glossary

A  SI units & prefixes

List of Figures

1.1   Principle of ground-based methods
1.2   Principle of remote sensing based methods
1.3   Remote sensing and ground-based methods
1.4   Sea surface temperature map
1.5   Ocean biomass map
1.6   Sea surface height map
1.7   Ocean surface wind map
1.8   Structure of the textbook
2.1   A remote sensing sensor measures energy
2.2   Electric and magnetic vectors of an electromagnetic wave
2.3   Relationship between wavelength, frequency and energy
2.4   Blackbody radiation curves based on the Stefan-Boltzmann law
2.5   The electromagnetic spectrum
2.6   Energy interactions in the atmosphere and on the land
2.7   Atmospheric transmission expressed as percentage
2.8   Electromagnetic spectrum of the Sun
2.9   Rayleigh scattering
2.10  Rayleigh scattering affects the colour of the sky
2.11  Effects of clouds in optical remote sensing
2.12  Specular and diffuse reflection
2.13  Reflectance curve of vegetation
2.14  Reflectance curves of soil
2.15  Reflectance curves of water
3.1   Overview of sensors
3.2   Example video image
3.3   Example multispectral image
3.4   TSM derived from imaging spectrometer data
3.5   Example thermal image
3.6   Example microwave radiometer image
3.7   DTM derived by laser scanning
3.8   Example radar image
3.9   Roll, pitch and yaw angles
3.10  Meteorological observation system
3.11  An image file comprises a number of bands
4.1   Vertical and oblique photography
4.2   Vertical and oblique aerial photo of ITC building
4.3   Lens cone of an aerial camera
4.4   Auxiliary data annotation on an aerial photograph
4.5   Spectral sensitivity curves
4.6   Effect of focal length
4.7   Illustration of the effect of terrain topography
4.8   Illustration of height displacement
4.9   A survey area for block photography
5.1   Principle of the whiskbroom scanner
5.2   Spectral response curve of a sensor
5.3   Principle of a pushbroom sensor
5.4   NOAA/AVHRR spatial resolution varies significantly
6.1   Principle of active microwave remote sensing
6.2   From radar pulse to pixel
6.3   Microwave spectrum and band identification by letters
6.4   Polarization of electromagnetic waves
6.5   Radar remote sensing geometry
6.6   Slant range resolution
6.7   Geometric distortions in radar imagery due to terrain elevations
6.8   Original and speckle filtered radar image
6.9   Phase differences forming an interferogram
6.10  INSAR geometry
6.11  Surface deformation mapping
6.12  Polar measuring principle and ALS
6.13  DSM of part of Frankfurt/Oder, Germany
6.14  Concept of Laser Ranging
6.15  Multiple return laser ranging
6.16  First and last return DSM
6.17  Devegging laser data
6.18  3D modelling by a TLS
7.1   Abundance map of K, Th and U from gamma-ray measurements
7.2   Sea floor topography as determined by satellite altimetry
7.3   Magnetic anomaly map derived by an airborne magnetometer
7.4   Conductivity measured by airborne measurements
7.5   3D terrain from seismic surveys
8.1   Image pre-processing steps
8.2   Solar radiation going through the atmosphere
8.3   Original Landsat ETM image of Enschede and environs
8.4   Image with line-dropouts
8.5   Image corrected for line-dropouts
8.6   Image with line striping
8.7   Image with spike noise
8.8   Model atmospheric profiles
9.1   Image and map coordinate systems
9.2   Original, georeferenced and geocoded satellite image
9.3   Transformation and resampling process
9.4   Illustration of different image transformations
9.5   Schematic of image resampling
9.6   Effect of different resampling methods
9.7   Illustration of the collinearity concept
9.8   Inner geometry of a camera and the associated image
9.9   Relative image orientation
9.10  The process of digital monoplotting
9.11  Illustration of parallax in stereo pair
10.1  Sensitivity curves of the human eye
10.2  Comparison of additive and subtractive colour schemes
10.3  The RGB cube
10.4  Relationship between RGB and IHS colour spaces
10.5  Standard and cumulative histogram
10.6  Multi-band image displayed on a monitor
10.7  Transfer functions
10.8  Single band display
10.9  Input and output of a filter operation
10.10 Original, edge enhanced and smoothed image
10.11 Multi-band display
10.12 Procedure to merge SPOT panchromatic and multispectral data
10.13 Fused image of Landsat 7 ETM and orthophoto mosaic
10.14 Fused images of an ERS-1 SAR and a SPOT-2 scene
11.1  Satellite image of Antequera area in Spain
11.2  Mud huts of Labbezanga near the Niger river
11.3  The mirror stereoscope
11.4  Panchromatic photograph to be interpreted
11.5  Photo-interpretation transparency
11.6  Land use and land cover
11.7  Sample of the CORINE land cover map
11.8  Comparison of different line maps
11.9  Comparison of different thematic maps
12.1  The structure of a multi-band image
12.2  Two- and three-dimensional feature space
12.3  Scatterplot of a digital image
12.4  Distances in the feature space
12.5  Feature space showing six clusters of observations
12.6  The classification process
12.7  Image classification input and output
12.8  Results of a clustering algorithm
12.9  Box classification
12.10 Minimum distance to mean classification
12.11 Maximum likelihood classification
12.12 The mixed pixel or mixel
13.1  Illustration of Planck's radiation law
13.2  Thermal infrared spectra of a sandy soil and a marble
13.3  Decorrelation stretched colour MASTER image
13.4  ASTER thermal image of coal fires in Wuda, China
14.1  Imaging spectrometry concept
14.2  Kaolinite spectrum at various spectral resolutions
14.3  Effects of different processes on absorption of electromagnetic radiation
14.4  Concept of signal mixing and spectral unmixing

List of Tables

5.1   Meteosat-8/SEVIRI characteristics
5.2   NOAA-17/AVHRR characteristics
5.3   Landsat-7/ETM+ characteristics
5.4   Example applications of Landsat-7/ETM+ bands
5.5   Terra/ASTER characteristics
5.6   SPOT-5/HRG characteristics
5.7   Resourcesat-1/LISS4 characteristics
5.8   Ikonos/OSA characteristics
5.9   EO-1/Hyperion characteristics
5.10  Proba/CHRIS characteristics
5.11  Applications of Envisat's instruments
5.12  Characteristics of Envisat, ASAR and MERIS
5.13  Envisat/MERIS band characteristics
6.1   Airborne SAR systems
6.2   Spaceborne SAR systems
8.1   Characteristics of selected RTMs
9.1   Sample set of ground control points
10.1  Example histogram in tabular format
10.2  Summary histogram statistics
10.3  Filter kernel for smoothing
10.4  Filter kernel for weighted smoothing
10.5  Filter kernel used for edge enhancement
11.1  Example geopedologic legend
11.2  Example soil legend
11.3  CORINE land cover classes
11.4  CORINE's description of mineral extraction sites
12.1  Sample error matrix
12.2  Spectral, land cover and land use classes
14.1  Airborne imaging spectrometer systems
A.1   Relevant SI units in the context of remote sensing
A.2   Unit prefix notation
A.3   Common units of wavelength
A.4   Constants and non-SI units

Preface
Principles of Remote Sensing is the basic textbook on remote sensing for all students enrolled in the educational programmes at ITC. As well as being a basic textbook for the institute's regular MSc and PM courses, Principles of Remote Sensing will be used in various short courses and possibly also by ITC's sister institutes. The first edition is an extensively revised version of an earlier text produced for the 1999–2000 programme. Principles of Remote Sensing and the
companion volume, Principles of Geographic Information Systems [10], are published in the ITC Educational Textbook series. We need to go back to the 1960s
to find a similar official ITC textbook on subjects related to Remote Sensing: the
ITC Textbooks on Photogrammetry and Photo-interpretation, published in English and French [16, 17].
You may wonder why ITC has now produced its own introductory textbook
while there are already many books on the subject available on the market. Principles of Remote Sensing is different in various respects. First of all, it has been developed for the specific ITC student population, thereby taking into account their entry level and knowledge of the English language. The textbook relates to the typical ITC application disciplines and, among other things, provides an introduction to techniques that acquire sub-surface characteristics. As the textbook is
used at the start of the programmes, it tries to stimulate conceptual and abstract
thinking by providing and explaining some fundamental, yet simple, equations
(in general, no more than one equation per chapter). Principles of Remote Sensing
aims to provide a balanced approach towards traditional photogrammetric and
remote sensing subjects: three sensors (aerial camera, multispectral scanner and
radar) are dealt with in more or less the same detail. Finally, compared to other
introductory textbooks which often focus on the technique, Principles of Remote
Sensing also introduces processes. In this sense, it provides a frame to refer to
when more detailed subjects are dealt with later in the programme.


How to use the material


Principles of Remote Sensing has been produced both as a hard-copy textbook and
as an electronic document. In this way, the student is offered an optimal combination for studying the subject and for using the book as a general reference. Each chapter gives a summary and provides questions for self-testing. The book comprises a glossary, an index and a bibliography. The electronic document (PDF
format) enables fast navigation and quick referencing.


Acknowledgements
This textbook is the result of a process to define and develop material for a core
curriculum. This process started in 1998 and was carried out by a working group
comprising Rolf de By, Michael Weir, Cees van Westen and myself, chaired by Ineke ten Dam and supported by Erica Weijer. This group put much effort into the definition and realization of the earlier version of the two core textbooks. Ineke also supervised the process leading to this result. My fellow working
group members are greatly acknowledged for their support.
This textbook could not have materialized without the efforts of the (co-)authors of the chapters: Wim Bakker, Ben Gorte, John Horn, Christine Pohl, Colin
Reeves, Michael Weir and Tsehaie Woldai. Many other colleagues contributed
one way or another to either the earlier version or this version of Principles of Remote Sensing: Paul Hofstee, Gerrit Huurneman, Yousif Hussin, David Rossiter,
Rob Soeters, Ernst Schetselaar, Andrew Skidmore, Dhruba Shrestha and Zoltan
Vekerdy.
The design and implementation of the textbook layout, of both the hard-copy
and electronic document, is the work of Rolf de By. Using the LaTeX typesetting system, Rolf realized a well-structured and visually attractive document to study. Many of the illustrations in the book have been provided by the authors, supported by Job Duim and Gerard Reinink. Final editing of the illustrations was done by Wim Feringa, who also designed the cover.
Michael Weir has done a tremendous job in checking the complete textbook for English spelling and grammar. We know that our students will profit from
this.
The work on this textbook was greatly stimulated through close collaboration

with the editor of Principles of Geographic Information Systems, Rolf de By.
Lucas L. F. Janssen, Enschede, September 2000


Preface to the third edition


The Principles of Remote Sensing book is now in its third edition. After Lucas
Janssen left ITC, the editorial work for the second edition was done by Gerrit
Huurneman. Gerrit added a new chapter on atmospheric aspects, and revised
and updated the entire text, in particular the information on multispectral scanners, the chapter that becomes outdated most quickly.
Since then three years have passed, and again much has changed in the field of remote sensing. I was, therefore, asked to edit a new version of the book. I benefited greatly from the high quality of the existing material, and the elaborate LaTeX typesetting frame developed by Rolf de By. Many ITC colleagues provided suggestions and constructive criticism on how the text could be improved; that input is gratefully acknowledged. The revision of the second edition has also emphasised the dynamic nature of remote sensing and Earth observation. Much had to be added again to Chapter 5, since a number of relevant new satellites were launched in the last few years. In addition to those relatively minor changes, complete sections and chapters were added this time:
- A thorough introduction to airborne laser scanning was added to Chapter 6 to provide a more complete picture of active remote sensing.
- The chapter on atmospheric aspects (Chapter 8) was completely revised, and now contains a more detailed introduction of absolute correction methods, including radiative transfer models.
- A comprehensive overview of image fusion methods was added to Chapter 10.

- A new chapter on thermal remote sensing was added (Chapter 13), providing a more in-depth discussion of thermal concepts first introduced in Chapter 2.
- Lastly, a chapter was added on the concepts of imaging spectrometry (Chapter 14). As in the new Chapters 8 and 13, the more quantitative side of remote sensing is highlighted here.
Those modifications and additions show how fast-paced developments in
remote sensing and Earth observation continue to be, a good reason, in my view,
to study geoinformation science and be part of this exciting field. Some of the
chapters and sections in this book are not part of the ITC core modules, but provide information on more detailed or specific concepts and methods. They are marked by a special icon in the margin, depicting a "dangerous bend" road sign.
Lastly, I would like to thank those colleagues who provided material for the additions to the book. In addition to most of the original authors acknowledged above by Lucas Janssen, I thank the following new authors (in alphabetical order):
Ambro Gieske, Karl Grabmaier, Chris Hecker, Gerrit Huurneman, Freek van der
Meer, Gabriel Parodi, Frank van Ruitenbeek and Klaus Tempfli.
I am indebted to Rolf de By for his help with the LaTeX implementation, and
especially to Wim Bakker for providing a careful review of, and valuable additions to, many of the chapters.
Norman Kerle, Enschede, September 2004


Chapter 1
Introduction to remote sensing


1.1 Spatial data acquisition


All ITC students, one way or another, deal with georeferenced data. They might
be involved in the collection of data, processing of the data, analysis of the data
or actually using the data for decision making. In the end, data are acquired to
yield information for management purposes: water management, land management, resources management, etc. By data we mean representations that can be
manipulated using a computer; by information we mean data that have been interpreted by human beings (see Principles of Geographic Information Systems [10],
Chapter 1). This textbook focusses on the methods used to collect georeferenced,
or geospatial, data. The need for spatial data is best illustrated by some examples:
- An agronomist is interested in forecasting the overall agricultural production of a large area. This requires data on the area planted with different crops and data on biomass production to estimate the yield.
- An urban planner needs to identify areas in which dwellings have been built illegally. The different types of houses and their configuration need to be determined. The information should be in a format that enables integration with other socio-economic information.
- An engineer needs to determine the optimal configuration for siting of relay stations for a telecommunication company. The optimal configuration primarily depends on the form of the terrain and on the location of obstacles such as buildings.
- A geologist is asked to explore an area and to provide a map of the surface mineralogy. In addition, s/he should start to give a first estimation of the
effect of water pumping on the neighbouring agricultural region.
- A climatologist would like to understand the causes of the El Niño phenomenon. For this, s/he would need data on many parameters, including sea currents, sea surface temperature, sea level, meteorological parameters, energy interactions between the land and water surface, etc.
Note that all of the above examples deal with spatial phenomena; in fact,
with spatio-temporal phenomena since time is also an important dimension. To
satisfy the information requirements of the above-mentioned examples, a wide
variety of methods will be used: conducting interviews, land surveying, laboratory measurements of samples, interpretation of satellite images, measurements
by in situ sensors, using aerial photographs, running numerical models, etc. For
our purposes, it makes sense to distinguish between ground-based and remote
sensing (RS) methods.



Ground-based and Remote Sensing Methods
In principle, there are two main categories of spatial data acquisition:
- ground-based methods, such as making field observations, taking in situ measurements and performing land surveying. Using ground-based methods, you operate in the real-world environment (Figure 1.1).

[Figure 1.1: The principle of a ground-based method: measurements and observations are performed in the real world.]

- remote sensing methods, which are based on the use of image data acquired by a sensor such as an aerial camera, scanner or radar. Taking a remote sensing approach means that information is derived from the image data, which form a (limited) representation of the real world (Figure 1.2). Notice, however, that (remote) sensing devices that can acquire data in a fashion similar to air- or spaceborne sensors are increasingly used in the field. Thus, the strict division between ground-based and remote sensing methods is blurring.
This textbook, Principles of Remote Sensing, provides an overview and some
first concepts of the remote sensing process. First some definitions of remote
sensing will be given. In Section 1.2, some of the aspects and considerations
when taking a remote sensing approach are discussed.


[Figure 1.2: The principle of a remote sensing based method: measurement and analysis are performed on image data.]



Remote Sensing Definitions
A number of different, and equally correct, definitions of remote sensing are
given below:
- Remote sensing is the science of acquiring, processing and interpreting images that record the interaction between electromagnetic energy and matter [33].
- Remote sensing is the science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area, or phenomenon under investigation [23].
- Remote sensing is the instrumentation, techniques and methods to observe the Earth's surface at a distance and to interpret the images or numerical values obtained in order to acquire meaningful information about particular objects on Earth [4].
Common to the three definitions is that data on characteristics of the Earth's surface are acquired by a device (sensor) that is not in contact with the objects being measured. The result is usually, though not necessarily, stored as image data (in this book, aerial photographs are also considered image data). The characteristics measured by a sensor are the electromagnetic energy reflected or emitted by the Earth's surface. This energy relates to specific parts of the electromagnetic spectrum: usually visible light, but it may also be infrared light or radio waves.
There is a wide range of remote sensing sensors which, linked to a certain platform, can be classified according to their distance from the Earth's surface:


airborne, spaceborne, or even ground-based sensors. The term remote sensing is
nowadays used for all of these methods. The previously common term aerospace
surveying, synonymous with the combined use of remote sensing and ground-based methods, is somewhat outdated.
Another appropriate term to introduce here is Earth Observation (EO), which
usually refers to spaceborne remote sensing, but strictly speaking would also
include the ground-based use of remote sensing devices.
Before the image data can yield the required information about the objects or
phenomena of interest, they need to be processed. The analysis and information
extraction or information production is part of the overall remote sensing process.
In this textbook, remote sensing refers to all aerospace remote sensing techniques, including aerial photography, but also ground-based methods such as
field spectrometry (see Chapter 14).


1.2 Application of remote sensing


The textbook Principles of Geographic Information Systems [10] introduced the example of studying the El Niño effect to illustrate aspects of database design and functionality of spatial information systems. The example departs from a database table, which stores a number of parameters derived from buoy (i.e. in situ) measurements. These measurements can be analysed as they are (per buoy, over time). Most often, however, spatial interpolation techniques will be used to generate maps that enable analysis of spatial and temporal patterns.
Now, let us consider the analysis of Sea Surface Temperature (SST) patterns and discuss some particular aspects related to taking a remote sensing approach to the El Niño case.
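As a minimal illustration of the interpolation step mentioned above, the Python sketch below applies inverse-distance weighting (IDW) to a handful of invented buoy SST readings. The buoy positions, the temperature values and the choice of IDW are assumptions made purely for illustration; operational SST products are produced with far more elaborate methods.

```python
# Illustrative sketch: inverse-distance weighting (IDW) of made-up buoy SST readings.
import math

# (longitude, latitude) of each buoy and its SST in degrees Celsius (invented values)
buoys = [((-140.0, 0.0), 26.5), ((-125.0, 2.0), 24.1), ((-155.0, -2.0), 27.8)]

def idw_sst(x, y, power=2.0):
    """Estimate SST at (x, y) as a distance-weighted mean of the buoy readings."""
    num = den = 0.0
    for (bx, by), sst in buoys:
        d = math.hypot(x - bx, y - by)
        if d == 0.0:
            return sst                 # exactly at a buoy: return its reading
        w = 1.0 / d ** power
        num += w * sst
        den += w
    return num / den

# Interpolated SST at a point without a buoy, e.g. one cell of a map grid
print(round(idw_sst(-135.0, 1.0), 2))
```

Repeating such an estimate for every cell of a regular grid yields the kind of interpolated SST map referred to in the text.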



Remote sensing typically provides image data
The image data acquired by remote sensing (RS) relate to electromagnetic properties of the Earth. Under many conditions the data can be related to real-world parameters or features. For example, measurements in the thermal infrared wavelengths can be used directly to calculate the surface temperature. This requires some processing in which, among other things, the effect of the atmosphere
is corrected for. SST calculation is a relatively straightforward procedure and
standard SST products are publicly available.



Remote sensing requires ground data
Although remote sensing data can be interpreted and processed without other
information, the best results are obtained by linking remote sensing measurements to ground (or surface) measurements and observations. Highly accurate
SST maps can be derived from satellite image data when combined with in situ
temperature measurements (by buoy). This idea of complementarity between
remote sensing and ground-based surveying is also part of ITC's former name:
aerospace surveying. Figure 1.3 illustrates this concept.

[Figure 1.3: In most situations, remote sensing based data acquisition is complemented by ground-based measurements and observations.]


Remote sensing provides synoptic, i.e. area-covering, data
The spatial pattern of SST can best be studied using map-like products. These can be obtained by interpolation from in situ measurements by buoys. However, it is unlikely that the number and distribution of buoys is adequate to enable detection and identification of relevant surface features. The distance between the buoys in the El Niño network is in the order of 2000 km (in x-direction) and 300 km (in y-direction). Using data from meteorological satellites, SST can be assessed at a level of detail of up to 1 km (Figure 1.4). RS-derived SST data can, for example, be used to determine an optimal density and location of buoys.

[Figure 1.4: Sea surface temperature as determined from NOAA-AVHRR data. Courtesy of NOAA.]



Remote sensing provides surface information
In principle, remote sensing provides information about the upper few millimetres of the Earth's surface. Some techniques, specifically in the microwave domain, relate to greater depths. The fact that measurements only refer to the surface is a limitation of remote sensing. Additional models or assumptions are required to estimate subsurface characteristics. In the case of SST, the temperature derived from RS only tells something about the temperature of the actual surface of the ocean. No information can be derived about subsurface currents (which is possible when using buoys).
Apart from SST, remote sensing can be used for assessing many other surface characteristics. In the context of El Niño, the following parameters may be assessed from RS data: biomass (Figure 1.5), sea level (Figure 1.6), precipitation and surface wind (Figure 1.7).

[Figure 1.5: Ocean (and land) biomass as determined from OrbView-2 data. Courtesy of Orbimage.]

[Figure 1.6: Sea surface height relative to the ocean geoid as determined from spaceborne radar and laser systems. Courtesy of University of Texas.]

[Figure 1.7: Ocean surface wind as determined from scatterometer measurements by QuickScat. Courtesy of NOAA.]


Remote sensing is the only way to do it
The Pacific Ocean is known for its unfavourable weather and ocean conditions.
It is quite difficult to install and maintain a network of measuring buoys in this
region. An RS approach is particularly suited for areas that are difficult to access. A related topic is that of acquiring global or continental data sets. RS allows data to be acquired globally using the same or a similar sensor. This enables methods for monitoring and change detection. Since an increasing number of issues are of global concern, such as climate change, environmental degradation, natural disasters and population growth, synoptic RS has become a vital tool, and for
many applications the only one suitable.



Remote sensing provides multipurpose image data

In this example, SST maps are used to study El Niño. The same data could be used years later to find, for example, a relation between SST and algal blooms around Pacific islands. SST maps are not only of interest to researchers but also to large fishing companies that want to guide their vessels to promising fishing grounds. A data set, thus, can be of use to more than one organization and can also prove useful for historical studies.



Remote sensing is cost-effective
The validity of this statement is sometimes hard to assess, especially when dealing with spaceborne remote sensing. Consider an international scientific project that studies the El Niño phenomenon. Installation and maintenance of buoys cost a lot of money. Meteorological satellites have already been paid for and the data can be considered free. Remote sensing would thus be a cheap technique.
However, as RS can only be used to measure surface characteristics, buoys still
need to be placed to determine subsurface characteristics.
Although the above statements are related to a scientific problem of a global
phenomenon, they are applicable to other application contexts. Consider the
above statements related to the monitoring of (illegal) urban growth around
African cities using aerial photography.
You will probably conclude that you do not have all the knowledge required
to comment on these statements in the urban context. You may even find it
difficult to identify which remote sensing technique meets your own particular information requirements. After studying this textbook, Principles of Remote Sensing, you can expect to be better equipped to consider these issues.



[Figure 1.8: Relationship between the chapters of this textbook and the remote sensing process.]

1.3 Structure of this textbook

Figure 1.8 summarizes the content of Chapters 2 to 12 in relation to the overall remote sensing process.
First of all, the underlying principle of remote sensing is explained: the measurement of electromagnetic energy that is reflected or emitted by the objects and materials on the Earth's surface. The main source of this energy is the Sun (Chapter 2).
In the subsequent chapters (Chapters 3–6), the platform-sensor concept is explained, followed by an overview of the different types of sensors. The aerial camera, the multispectral scanner and the active sensor systems are each



introduced in a dedicated chapter. This part concludes with a summary of some
specific remote sensing techniques that yield information about subsurface characteristics (Chapter 7).
Chapters 8 and 9 introduce the radiometric and geometric aspects of remote
sensing. These are important because the image data often need to be integrated
with other data in a GIS environment. Radiometric corrections are also required if quantitative information, such as surface temperatures, is to be extracted from imagery.
Image data can be visualized in different ways. Chapter 10 introduces the
main concepts required to understand and analyse the main classes of remote
sensing images.
The next two chapters (11 and 12) deal with qualitative and quantitative information extraction methods, respectively: visual image interpretation by a human operator, and digital image classification based upon computer algorithms.
Lastly, two new, and more specialized, chapters were added in the third edition of this book: Chapter 13 explains the concepts and methods of thermal remote sensing, while Chapter 14 provides a detailed overview of imaging spectrometry concepts, techniques and applications.


Summary
Many human activities and interests involve some geographic component. For
planning, monitoring and decision making, there is typically a need for georeferenced (geospatial) data.
In this introductory chapter the concept of remote sensing was explained. A remote sensing approach is usually complemented by ground-based methods and the use of numerical models. For an appropriate choice of remote sensing data acquisition, the information requirements for a given application have to be defined. The chapter gave an overview of how remote sensing can obtain different types of information about the ground surface (under some circumstances even below it), cover large and also less accessible areas, and be considered a cost-effective tool.


Questions
The following questions can help to study Chapter 1.
1. To what extent are GIS applied by your organization (company)?
2. Which ground-based and which remote sensing methods are used by your
organization (or company) to collect georeferenced data?
3. Remote sensing data and derived data products are available on the internet. Locate three web-based catalogues or archives that comprise remote
sensing image data.
These are typical exam questions:
1. Explain, or give an example, how ground-based and remote sensing methods may complement each other.
2. List three possible limitations of remote sensing data.


Chapter 2
Electromagnetic energy and remote sensing


2.1 Introduction
Remote sensing relies on the measurement of electromagnetic (EM) energy. EM
energy can take several forms. The most important source of EM energy at the
Earth's surface is the Sun, which provides us, for example, with (visible) light, heat (that we can feel) and UV light, which can be harmful to our skin.
[Figure 2.1: A remote sensing sensor measures reflected or emitted energy. An active sensor has its own source of energy.]

Many sensors used in remote sensing measure reflected sunlight. Some sensors, however, detect energy emitted by the Earth itself or provide their own
energy (Figure 2.1). A basic understanding of EM energy, its characteristics and
its interactions is required to understand the principle of the remote sensor. This
knowledge is also needed to interpret remote sensing data correctly. For these
reasons, this chapter introduces the basic physics of remote sensing.
In Section 2.2, EM energy, its source, and the different parts of the electromagnetic spectrum are explained. In between the remote sensor and the Earth's
surface is the atmosphere that influences the energy that travels from the Earth's
surface to the sensor. The main interactions between EM waves and the atmosphere are described in Section 2.3. Section 2.4 introduces the interactions that
take place at the Earth's surface.

2.2 Electromagnetic energy

2.2.1 Waves and photons

Electromagnetic (EM) energy can be modelled in two ways: by waves or by energy-bearing particles called photons. In the wave model, electromagnetic energy is considered to propagate through space in the form of sine waves. These
waves are characterized by electrical (E) and magnetic (M) fields, which are perpendicular to each other. For this reason, the term electromagnetic energy is used.
The vibration of both fields is perpendicular to the direction of travel of the wave
(Figure 2.2). Both fields propagate through space at the speed of light c, which
is approximately 299,790,000 m/s and can be rounded off to 3 × 10⁸ m/s.
Figure 2.2: Electric (E) and magnetic (M) vectors of an electromagnetic wave; the wavelength λ is measured along the direction of travel, and both fields propagate at the velocity of light c.

One characteristic of electromagnetic waves is particularly important for understanding remote sensing. This is the wavelength, λ, which is defined as the distance between successive wave crests (Figure 2.2). Wavelength is measured in
metres (m), nanometres (nm = 10⁻⁹ m) or micrometres (μm = 10⁻⁶ m). (For an
explanation of units and prefixes refer to Appendix 1.)
The frequency, ν, is the number of cycles of a wave passing a fixed point over
a specific period of time. Frequency is normally measured in hertz (Hz), which

is equivalent to one cycle per second. Since the speed of light is constant, wavelength and frequency are inversely related to each other:

c = λ · ν.    (2.1)

In this equation, c is the speed of light (3 × 10⁸ m/s), λ is the wavelength (m), and
ν is the frequency (cycles per second, Hz).
The shorter the wavelength, the higher the frequency. Conversely, the longer
the wavelength, the lower the frequency (Figure 2.3).
Figure 2.3: Relationship between wavelength, frequency and energy: short wavelengths correspond to high frequency and high energy, long wavelengths to low frequency and low energy.

Most characteristics of EM energy can be described using the wave model as described above. For some purposes, however, EM energy is more conveniently
modelled by the particle theory, in which EM energy is composed of discrete
units called photons. This approach is taken when quantifying the amount of
energy measured by a multispectral sensor (Section 5.2.1). The amount of energy
held by a photon of a specific wavelength is then given by

Q = h · ν = h · c / λ,    (2.2)

where Q is the energy of a photon (J), h is Planck's constant (6.6262 × 10⁻³⁴ J s),
and ν the frequency (Hz). From Equation 2.2 it follows that the longer the wavelength, the lower its energy content. Gamma rays (around 10⁻⁹ m) are the most
energetic, and radio waves (> 1 m) the least energetic. An important consequence for remote sensing is that it is more difficult to measure the energy emitted in longer wavelengths than in shorter wavelengths.
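
As a simple numerical illustration (not part of the original text), the following Python sketch evaluates Equations 2.1 and 2.2 for a few wavelengths; the example wavelengths are merely illustrative values.

    # Sketch: frequency (Eq. 2.1) and photon energy (Eq. 2.2) for a given wavelength.
    C = 3.0e8        # speed of light (m/s), rounded value used in the text
    H = 6.6262e-34   # Planck's constant (J s)

    def frequency(wavelength_m):
        """Frequency in Hz from wavelength in metres: c = lambda * nu."""
        return C / wavelength_m

    def photon_energy(wavelength_m):
        """Photon energy in joules: Q = h * nu = h * c / lambda."""
        return H * C / wavelength_m

    # Example wavelengths (illustrative): blue light, thermal infrared, microwave.
    for name, lam in [("blue light (0.45 um)", 0.45e-6),
                      ("thermal IR (10 um)", 10e-6),
                      ("microwave (3 cm)", 0.03)]:
        print(f"{name}: nu = {frequency(lam):.3e} Hz, Q = {photon_energy(lam):.3e} J")

The output confirms the statement above: the shorter the wavelength, the higher the frequency and the larger the energy carried by each photon.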

2.2.2 Sources of EM energy

All matter with a temperature above absolute zero (0 K, where n °C = n + 273 K) radiates EM energy due to molecular agitation. Agitation is the movement of
the molecules. This means that the Sun, and also the Earth, radiate energy in the
form of waves. Matter that is capable of absorbing and re-emitting all EM energy
that it receives is known as a blackbody. For blackbodies both the emissivity, ε,
and the absorptivity, α, are equal to (the maximum value of) 1.
The amount of energy radiated by an object depends on its absolute temperature and its emissivity, and is a function of the wavelength. In physics, this
principle is defined by the Stefan-Boltzmann Law. A blackbody radiates a continuum of wavelengths. The radiation emitted by a blackbody at different temperatures is shown in Figure 2.4. Note the units in this figure: the x-axis indicates
the wavelength and the y-axis indicates the amount of energy per unit area. The
area below the curve, therefore, represents the total amount of energy emitted
at a specific temperature. From Figure 2.4 it can be concluded that a higher
temperature corresponds to a greater contribution of shorter wavelengths. The
peak radiation at 400 °C (673 K) is around 4 μm, while the peak radiation at
1000 °C (1273 K) is around 2.5 μm. The emitting ability of a real material compared to that of
the blackbody is referred to as the material's emissivity. In reality, blackbodies
are hardly found in nature; most natural objects have emissivities less than one.
This means that only part, usually between 80% and 98%, of the received energy is re-emitted. Consequently, part of the energy is absorbed. This physical property is
relevant in, for example, the modelling of global warming processes. Chapter 13
provides a more detailed discussion on these concepts.

Figure 2.4: Blackbody radiation curves based on the Stefan-Boltzmann law (with temperatures, T, in K, e.g. 673 K, 873 K and 1273 K): as T increases, the peak moves towards shorter wavelengths and the total area under the curve (spectral radiant exitance against wavelength) increases.
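
As a rough numerical illustration of the behaviour shown in Figure 2.4, the sketch below evaluates the total exitance of a blackbody (Stefan-Boltzmann law, M = σ·T⁴) and the wavelength of peak emission (Wien's displacement law, which is not named in the text but describes the peak shift discussed above). The temperatures match the examples given above; the constants are standard physical values.

    # Sketch: total blackbody exitance and peak emission wavelength at two temperatures.
    SIGMA = 5.6704e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
    WIEN_B = 2898.0     # Wien's displacement constant (um K)

    def total_exitance(T_kelvin):
        """Total emitted energy per unit area (W/m^2): M = sigma * T^4."""
        return SIGMA * T_kelvin ** 4

    def peak_wavelength_um(T_kelvin):
        """Wavelength of maximum emission (micrometres): lambda_max = b / T."""
        return WIEN_B / T_kelvin

    for T in (673.0, 1273.0):   # 400 degC and 1000 degC, as in the text
        print(f"T = {T:.0f} K: M = {total_exitance(T):.2e} W/m^2, "
              f"peak at {peak_wavelength_um(T):.2f} um")

The computed peaks (about 4.3 μm and 2.3 μm) agree with the approximate values quoted above.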

2.2.3 Electromagnetic spectrum

All matter with a temperature above absolute zero (0 K) radiates electromagnetic
waves of various wavelengths. The total range of wavelengths is commonly
referred to as the electromagnetic spectrum (Figure 2.5). It extends from gamma
rays to radio waves.
Remote sensing operates in several regions of the electromagnetic spectrum.
The optical part of the EM spectrum refers to that part of the EM spectrum in which
optical phenomena of reflection and refraction can be used to focus the radiation.
The optical range extends from X-rays (0.02 μm) through the visible part of the
EM spectrum up to and including far-infrared (1000 μm). The ultraviolet (UV)
portion of the spectrum has the shortest wavelengths that are of practical use
for remote sensing. This radiation is beyond the violet portion of the visible
wavelengths. Some of the Earths surface materials, in particular rocks and minerals, emit or fluoresce visible light when illuminated with UV radiation. The
microwave range covers wavelengths from 1 mm to 1 m.
The visible region of the spectrum (Figure 2.5) is commonly called light.
It occupies a relatively small portion in the EM spectrum. It is important to
note that this is the only portion of the spectrum that we can associate with the
concept of colour. Blue, green and red are known as the primary colours or
wavelengths of the visible spectrum. Section 10.2 gives more information on
light and perception of colour.
The longer wavelengths used for remote sensing are in the thermal infrared
and microwave regions. Thermal infrared gives information about surface temperature. Surface temperature can be related, for example, to the mineral composition of rocks or the condition of vegetation. Microwaves can provide information on surface roughness and the properties of the surface such as water
content.
Figure 2.5: The electromagnetic spectrum, after [23]. It extends from gamma rays and X-rays through ultraviolet, the visible region (blue, green and red between roughly 0.4 and 0.7 μm), near-, mid- and thermal infrared, and the microwave region, to television and radio waves.
2.2.4 Active and passive remote sensing

In remote sensing, the sensor measures energy, whereby we distinguish between passive and active techniques. Passive remote sensing techniques employ natural
sources of energy, such as the Sun, or artificial light. Passive sensor systems based
on reflection of the Sun's energy can only work during daylight. Passive sensor
systems that measure the longer wavelengths related to the Earth's temperature
do not depend on the Sun as a source of illumination and can be operated at
any time. In general, passive sensor systems need to deal with the varying
illumination conditions of the Sun, which are greatly influenced by atmospheric
conditions. Active remote sensing techniques, for example radar and laser, have
their own source of energy. Active sensors emit a controlled beam of energy
to the surface and measure the amount of energy reflected back to the sensor
(Figure 2.1). The main advantage of active sensor systems is that they can be
operated day and night, have a controlled illuminating signal, and are typically
not affected by the atmosphere.

2.3 Energy interaction in the atmosphere


The most important source of energy is the Sun. Before the Sun's energy reaches
the Earth's surface, three fundamental interactions in the atmosphere are possible: absorption, transmission and scattering. The energy transmitted is then
reflected or absorbed by the surface material (Figure 2.6).

Figure 2.6: Energy interactions in the atmosphere and at the surface. Incident energy from the Sun is partly absorbed and scattered by the atmosphere and by clouds; direct and scattered radiation reach the Earth, where reflection and emission processes, together with atmospheric and thermal emission, direct energy towards the remote sensing sensor.
2.3.1 Absorption and transmission

Electromagnetic energy travelling through the atmosphere is partly absorbed by various molecules. The most efficient absorbers of solar radiation in the atmosphere are ozone (O3), water vapour (H2O) and carbon dioxide (CO2).
Figure 2.7 gives a schematic representation of the atmospheric transmission
in the 0–22 μm wavelength region. From this figure it may be seen that about half
of the spectrum between 0 and 22 μm is not useful for remote sensing of the Earth's
surface, simply because none of the corresponding energy can penetrate the atmosphere. Only the wavelength regions outside the main absorption bands of
the atmospheric gases can be used for remote sensing. These regions are referred
to as the atmospheric transmission windows and include:
A window in the visible and reflected infrared region, between 0.4 and 2 μm.
This is the window where the optical remote sensors operate.
Three windows in the thermal infrared region, namely two narrow windows around 3 and 5 μm, and a third, relatively broad, window extending
from approximately 8 to 14 μm.
Because of the presence of atmospheric moisture, strong absorption bands are
found at longer wavelengths. There is hardly any transmission of energy in the
region from 22 μm to 1 mm. The more or less transparent region beyond 1 mm
is the microwave region.
The solar spectrum as observed both with and without the influence of the
Earth's atmosphere is shown in Figure 2.8. First of all, look at the radiation curve
of the Sun (measured outside the influence of the Earth's atmosphere), which
resembles a blackbody curve at 6000 K. Secondly, compare this curve with the

radiation curve as measured at the Earth's surface. The relative dips in this curve
indicate the absorption by different gases in the atmosphere.

Figure 2.7: Atmospheric transmission, expressed as a percentage, over the 0–22 μm range; the main absorption bands are due to H2O, CO2 and O3.

Figure 2.8: The electromagnetic spectrum of the Sun as observed with and without the influence of the Earth's atmosphere. The extraterrestrial solar curve resembles a 6000 K blackbody curve; the curve measured at the Earth's surface shows absorption dips caused by O3, O2, H2O and CO2.

2.3.2 Atmospheric scattering

Atmospheric scattering occurs when the particles or gaseous molecules present in the atmosphere cause the EM waves to be redirected from their original path.
The amount of scattering depends on several factors, including the wavelength of
the radiation, the amount of particles and gases, and the distance the radiation
travels through the atmosphere. For the visible wavelengths, 100% (in case of
cloud cover) to 5% (in case of a clear atmosphere) of the energy received by the
sensor is directly contributed by the atmosphere. Three types of scattering take
place: Rayleigh scattering, Mie scattering and non-selective scattering.

Rayleigh scattering
Rayleigh scattering predominates where electromagnetic radiation interacts with
particles that are smaller than the wavelength of the incoming light. Examples
of these particles are tiny specks of dust, and nitrogen (N2) and oxygen (O2)
molecules. The effect of Rayleigh scattering is inversely proportional to the 4th
power of the wavelength: shorter wavelengths are scattered more than longer
wavelengths (Figure 2.9).

Figure 2.9: Rayleigh scattering is caused by particles smaller than the wavelength and is maximal for small wavelengths.

In the absence of particles and scattering, the sky would appear black. During the
day, the Sun's rays travel the shortest distance through the atmosphere. In
that situation, Rayleigh scattering causes a clear sky to be observed as blue, because blue is the shortest wavelength the human eye can observe. At sunrise and
sunset, however, the Sun's rays travel a longer distance through the Earth's atmosphere before they reach the surface. All the shorter wavelengths are scattered
after some distance and only the longer wavelengths reach the Earth's surface.
As a result, the sky appears orange or red (Figure 2.10).
In the context of satellite remote sensing, Rayleigh scattering is the most important type of scattering. It causes a distortion of the spectral characteristics of the
reflected light when compared to measurements taken on the ground: due to
the Rayleigh effect, the shorter wavelengths are overestimated. In colour photos
taken from high altitudes it accounts for the blueness of these pictures. In

general, Rayleigh scattering diminishes the contrast in photos, and thus has a
negative effect on the possibilities for interpretation. When dealing with digital
image data (as provided by multispectral scanners) the distortion of the spectral
characteristics of the surface may limit the possibilities for image classification.
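
The λ⁻⁴ dependence mentioned above can be made concrete with a small sketch that compares the relative Rayleigh scattering strength of blue and red light; the wavelength values are illustrative choices, not taken from the text.

    # Sketch: relative strength of Rayleigh scattering, proportional to 1 / wavelength^4.
    def rayleigh_relative(wavelength_um):
        return 1.0 / wavelength_um ** 4

    blue, red = 0.45, 0.65   # approximate wavelengths (um) of blue and red light
    ratio = rayleigh_relative(blue) / rayleigh_relative(red)
    print(f"Blue light is scattered about {ratio:.1f} times more strongly than red light")
    # (0.65 / 0.45)^4 is roughly 4.4, which is why a clear sky appears blue.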

Figure 2.10: Rayleigh scattering causes us to perceive a blue sky during daytime and a red sky at sunset.
Mie scattering
Mie scattering occurs when the wavelength of the incoming radiation is similar
in size to the atmospheric particles. The most important cause of Mie scattering
is aerosols: a mixture of gases, water vapour and dust.
Mie scattering is generally restricted to the lower atmosphere where larger
particles are more abundant, and dominates under overcast cloud conditions.
Mie scattering influences the entire spectral region from the near-ultraviolet up
to and including the near-infrared, and has a greater effect on the larger wavelengths than Rayleigh scattering.

Non-selective scattering
Non-selective scattering occurs when the particle size is much larger than the radiation wavelength. Typical particles responsible for this effect are water droplets
and larger dust particles.
Non-selective scattering is independent of wavelength, with all wavelengths
scattered about equally. The most prominent example of non-selective scattering
is the effect of clouds (clouds consist of water droplets). Since all wavelengths are scattered equally, a cloud appears white. Optical remote sensing,
therefore, cannot penetrate clouds. Clouds also have a secondary effect: shadowed regions on the Earth's surface (Figure 2.11).
Figure 2.11: Direct and indirect effects of clouds in optical remote sensing: clouds form a zone of no penetration for the Sun's radiance and also cast shadow zones on the Earth's surface.
2.4 Energy interactions with the Earth's surface


In land and water applications of remote sensing we are most interested in the
reflected radiation because this tells us something about surface characteristics.
Reflection occurs when radiation bounces off the target and is then redirected.
Absorption occurs when radiation is absorbed by the target. Transmission occurs when radiation passes through a target. Two types of reflection, which
represent the two extremes of the way in which energy is reflected by a target,
are specular reflection and diffuse reflection (Figure 2.12). In the real world, usually
a combination of both types is found.
Figure 2.12: Schematic diagrams showing (a) specular and (b) diffuse reflection.

Specular reflection, or mirror-like reflection, typically occurs when a surface is smooth and all (or almost all) of the energy is directed away from
the surface in a single direction. It is most likely to occur when the Sun is
high in the sky. Specular reflection can be caused, for example, by a water
surface or a glasshouse roof. It results in a very bright spot (also called hot
spot) in the image.
Diffuse reflection occurs in situations where the surface is rough and the
energy is reflected almost uniformly in all directions. Whether a particular
target reflects specularly or diffusely, or somewhere in between, depends
on the surface roughness of the feature in comparison to the wavelength
of the incoming radiation.

2.4.1 Spectral reflectance curves

Consider a surface composed of a certain material. The energy reaching this surface is called irradiance. The energy reflected by the surface is called radiance.
Irradiance is expressed in W m⁻², radiance in W m⁻² sr⁻¹.
For each material, a specific reflectance curve can be established. Such curves
show the fraction of the incident radiation that is reflected as a function of wavelength. From such a curve you can find the degree of reflection for each wavelength (e.g. at 0.4 μm, 0.41 μm, 0.42 μm, . . . ). Most remote sensing sensors are
sensitive to broader wavelength bands, for example from 0.4 to 0.8 μm, and the
curve can be used to estimate the overall reflectance in such bands. Reflectance
curves, which are very specific for different materials (see Section 14.1), are
typically collected in the optical part of the electromagnetic spectrum (up to
2.5 μm). Large efforts are made to store collections of typical curves in spectral
libraries.
Reflectance measurements can be carried out in a laboratory or in the field
using a field spectrometer. In the following subsections the reflectance characteristics of some common land cover types are discussed.
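
To illustrate how a reflectance curve can be used to estimate the overall reflectance within a broader sensor band, the sketch below simply averages the curve samples that fall inside a given wavelength interval. The curve values are made-up illustrative numbers, not measurements from the text.

    # Sketch: estimate band-averaged reflectance from a sampled reflectance curve.
    # Hypothetical curve: (wavelength in um, reflectance as a fraction).
    curve = [(0.40, 0.04), (0.45, 0.05), (0.50, 0.08), (0.55, 0.12),
             (0.60, 0.09), (0.65, 0.06), (0.70, 0.30), (0.75, 0.45), (0.80, 0.48)]

    def band_reflectance(curve, band_min_um, band_max_um):
        """Mean reflectance of the curve samples inside [band_min, band_max]."""
        values = [r for (wl, r) in curve if band_min_um <= wl <= band_max_um]
        return sum(values) / len(values)

    # Overall reflectance in a broad 0.4-0.8 um band, as mentioned in the text.
    print(f"0.4-0.8 um band reflectance: {band_reflectance(curve, 0.4, 0.8):.2f}")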

Figure 2.13: An idealized spectral reflectance curve of healthy vegetation (percent reflectance against wavelength, 0.4–2.6 μm), showing chlorophyll absorption in the blue and red, leaf reflectance in the green, and the high near-infrared reflectance of the colour-IR sensitive region.
Vegetation
The reflectance characteristics of vegetation depend on the properties of the
leaves, including the orientation and the structure of the leaf canopy. The proportion of the radiation reflected in the different parts of the spectrum depends
on leaf pigmentation, leaf thickness and composition (cell structure), and on the
amount of water in the leaf tissue. Figure 2.13 shows an ideal reflectance curve
of healthy vegetation. In the visible portion of the spectrum, the reflection of
blue and red light is comparatively low, since these portions are absorbed
by the plant (mainly by chlorophyll) for photosynthesis, and the vegetation reflects relatively more green light. The reflectance in the near-infrared is highest,
but the amount depends on leaf development and cell structure. In the middle
infrared, the reflectance is mainly determined by the free water in the leaf tissue; more free water results in less reflectance; these spectral regions are therefore called water
absorption bands. When the leaves dry out, for example during the harvest time
of the crops, the plant may change colour (for example, to yellow). At this stage
there is no photosynthesis, causing reflectance in the red portion of the spectrum to be higher. The drying of the leaves also results in higher reflectance in
the middle infrared, whereas the reflectance in the near-infrared may decrease.
As a result, optical remote sensing data provide information about the type of
plant and also about its health condition.
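
The contrast between low red reflectance and high near-infrared reflectance of healthy vegetation is often summarized in a single number. The sketch below computes a normalized difference of the two values (the widely used NDVI, which is not introduced in this chapter); the reflectance values are illustrative assumptions.

    # Sketch: normalized difference of near-infrared and red reflectance.
    def normalized_difference(nir, red):
        """(NIR - red) / (NIR + red); high for healthy vegetation, lower for stressed."""
        return (nir - red) / (nir + red)

    healthy = normalized_difference(nir=0.45, red=0.05)   # strong NIR, strong red absorption
    dried   = normalized_difference(nir=0.30, red=0.15)   # yellowing, drying leaves
    print(f"healthy vegetation: {healthy:.2f}, dried vegetation: {dried:.2f}")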

Bare soil
Surface reflectance from bare soil is dependent on so many factors that it is difficult to give one typical soil reflectance curve. The main factors influencing
the reflectance are soil colour, moisture content, the presence of carbonates, and
iron oxide content. Figure 2.14 gives some reflectance curves for the five main
types of soil occurring in the USA. Note the typical shapes of most of the curves,
which show a convex shape between 0.5 and 1.3 μm and dips at 1.45 and 1.95 μm. These
dips are so-called water absorption bands and are caused by the presence of soil
moisture. The iron-dominated soil (e) has quite a different reflectance curve that
can be explained by the iron absorption dominating at longer wavelengths.
Figure 2.14: Reflectance spectra (in percent, 400–2400 nm) of surface samples of five mineral soils: (a) organic dominated, (b) minimally altered, (c) iron altered, (d) organic affected and (e) iron dominated (from [24]).
Water
Compared to vegetation and soils, water has the lowest reflectance. Vegetation
may reflect up to 50%, soils up to 30–40%, while water reflects at most 10% of
the incoming radiation. Water reflects EM energy in the visible up to the near-infrared. Beyond 1.2 μm all energy is absorbed. Some curves for different types
of water are given in Figure 2.15. The highest reflectance is given by turbid (silt-loaded) water, and by water containing plants with a chlorophyll reflection peak
at the green wavelength.

Figure 2.15: Typical effects of chlorophyll and sediments on water reflectance (in percent, 400–2400 nm): (a) ocean water, (b) turbid water, (c) water with chlorophyll (from [24]).
Summary
Remote sensing is based on the measurement of Electromagnetic (EM) energy.
EM energy propagates through space in the form of sine waves characterized by
electrical (E) and magnetic (M) fields, which are perpendicular to each other. EM
energy can be modelled either by waves or by energy-bearing particles called photons.
One property of EM waves that is particularly important for understanding remote sensing is the wavelength (λ), defined as the distance between successive
wave crests, measured in metres (m), micrometres (μm, 10⁻⁶ m) or nanometres
(nm, 10⁻⁹ m). The frequency is the number of cycles of a wave passing a fixed
point in a specific period of time and is measured in hertz (Hz). Since the speed
of light is constant, wavelength and frequency are inversely related to each other.
The shorter the wavelength, the higher the frequency and vice versa.
All matter with a temperature above the absolute zero (0 K) radiates EM
energy due to molecular agitation. Matter that is capable of absorbing and re-emitting all EM energy received is known as a blackbody. All matter with a
certain temperature radiates electromagnetic waves of various wavelengths depending on its temperature. The total range of wavelengths is commonly referred to as the electromagnetic spectrum. It extends from gamma rays to radio
waves. The amount of energy detected by a remote sensing system is a function
of the interactions on the way to the object, the object itself and the interactions
on the way returning to the sensor.
The interactions of the Sun's energy with physical materials, both in the atmosphere and at the Earth's surface, cause this energy to be reflected, absorbed,
transmitted or scattered. Electromagnetic energy travelling through the atmosphere is partly absorbed by molecules. The most efficient absorbers of solar
radiation in the atmosphere are ozone (O3 ), water vapour (H2 O) and carbon

dioxide (CO2 ).
Atmospheric scattering occurs when the particles or gaseous molecules present in the atmosphere interact with the electromagnetic radiation and cause
it to be redirected from its original path. Three types of scattering take place:
Rayleigh scattering, Mie scattering and non-selective scattering.
When electromagnetic energy from the Sun hits the Earth's surface, three
fundamental energy interactions are possible: absorption, transmission, and reflection. Specular reflection occurs when a surface is smooth and all of the
energy is directed away from the surface in a single direction. Diffuse reflection
occurs when the surface is rough and the energy is reflected almost uniformly in
all directions.

Questions
The following questions can help you to study Chapter 2.
1. What are advantages/disadvantages of aerial RS compared to spaceborne
RS in terms of atmospheric disturbance?
2. How important are laboratory spectra in understanding the remote sensing images?
These are typical exam questions:
1. List and describe the two models used to describe electromagnetic energy.

2. How are wavelength and frequency related to each other (give a formula)?

3. What is the electromagnetic spectrum?


4. List and define the three types of atmospheric scattering.

5. What specific energy interactions take place when EM energy from the Sun
hits the Earth's surface?
6. In your own words give a definition of an atmospheric window.
7. Indicate True or False: Only the wavelength region outside the main absorption bands of the atmospheric gases can be used for remote sensing.
8. Indicate True or False: The amount of energy detected by a remote sensing
sensor is a function of how energy is partitioned between its source and
the materials with which it interacts on its way to the detector.

Chapter 3
Sensors and platforms

3.1 Introduction
In Chapter 2, the underlying principle of remote sensing was explained. Depending on the surface characteristics, electromagnetic energy from the Sun or from an
active sensor is reflected, or energy may be emitted by the Earth itself. This energy is measured and recorded by the sensors. The resulting data can be used to
derive information about surface characteristics.
The measurements of electromagnetic energy are made by sensors that are
attached to a static or moving platform. Different types of sensors have been
developed for different applications (Section 3.2). Aircraft and satellites are generally used to carry one or more sensors (Section 3.3). General references with
respect to missions and sensors are [18, 21]. ITC's online Database of Satellites and
Sensors provides a complete and up-to-date overview.
The sensor-platform combination determines the characteristics of the resulting data. For example, when a particular sensor is operated from a higher altitude, the total area imaged is increased, while the level of detail that can be
observed is reduced (Section 3.4). Based on your information needs and on time
and budgetary criteria, you can determine which image data are most appropriate (Section 3.5).

3.2 Sensors
A sensor is a device that measures and records electromagnetic energy. Sensors
can be divided into two groups:
Passive sensors (Section 3.2.1) depend on an external source of energy, usually
the Sun, and sometimes the Earth itself. Current operational passive sensors
cover the electromagnetic spectrum in the wavelength range from less than 1 picometer (gamma rays) to larger than 1 meter (microwaves). The oldest and most
common type of passive sensor is the photographic camera.
Active sensors (Section 3.2.2) have their own source of energy. Measurements
by active sensors are more controlled because they do not depend upon varying
illumination conditions. Active sensing methods include radar (radio detection
and ranging), lidar (light detection and ranging) and sonar (sound navigation
ranging), all of which may be used for altimetry as well as imaging.
Figure 3.1 gives an overview of the types of sensors that are introduced in
this section. The camera, the multispectral scanner and the imaging radar are
explained in more detail in Chapters 4, 5 and 6 respectively. Procedures used for
the processing of data acquired with imaging spectrometers and thermal scanners are introduced in Chapter 13. For more information about spaceborne remote sensing you may refer to ITC's Database of Satellites and Sensors.

Figure 3.1: Overview of the sensors that are introduced in this chapter, arranged by wavelength domain: passive sensors include the gamma-ray spectrometer, aerial camera and video camera (visible domain), multispectral scanner, imaging spectrometer and thermal scanner (optical domain), and the passive microwave radiometer (microwave domain); active sensors include the laser scanner, imaging radar and radar altimeter.

3.2.1 Passive sensors
Gamma-ray spectrometer
The gamma-ray spectrometer measures the amount of gamma rays emitted by
the upper soil or rock layers due to radioactive decay. The energy measured
in specific wavelength bands provides information on the abundance of (radio
isotopes that relate to) specific minerals. Therefore, the main application is found
in mineral exploration. Gamma rays have a very short wavelength on the order
of picometers (pm). Because of large atmospheric absorption of these waves,
this type of energy can only be measured up to a few hundred meters above the
Earth's surface. Example data acquired by this sensor are given in Figure 7.1.
Gamma-ray surveys are treated in more detail in Section 7.2.

Aerial camera
The camera system, comprising a lens and film (or, in a digital camera, a CCD), is mostly found in aircraft
for aerial photography. Low orbiting satellites and NASA Space Shuttle missions also apply conventional camera techniques. The film types used in the
camera enable electromagnetic energy in the range between 400 nm and 900 nm
to be recorded. Aerial photographs are used in a wide range of applications.
The rigid and regular geometry of aerial photographs in combination with the
possibility to acquire stereo-photography has enabled the development of photogrammetric procedures for obtaining precise 3D coordinates (Chapter 9). Although aerial photos are used in many applications, principal applications include medium and large scale (topographic) mapping and cadastral mapping.
Today, analogue photos are often scanned to be stored in and processed by digital systems. Various examples of aerial photos are shown in Chapter 4. A recent
development is the use of digital cameras, which bypass the use of film and
directly deliver digital image data (Section 4.8).

Video camera
Video cameras are frequently used to record data. Most video sensors are only
sensitive to the visible spectrum, although a few are able to record the near-infrared part of the spectrum (Figure 3.2). A recent development is the use of
thermal infrared video cameras. Until recently, only analogue video cameras
were available. Today, digital video cameras are increasingly available, some
of which are applied in remote sensing. Mostly, video images serve to provide
low cost image data for qualitative purposes, for example, to provide additional
visual information about an area captured with another sensor (e.g. laser scanner
or radar). Most image processing and information extraction methods useful for
individual images can be applied to video frames.

Figure 3.2: Analogue false colour video image of De Lopikerwaard, the Netherlands. Courtesy of Syntoptics.

first

previous

next

last

back

exit

zoom

contents

index

about

91

3.2. Sensors

Figure 3.3:
Landsat-5
Thematic Mapper image
of Yemen, 1995. False
colour composite of TM
bands 4, 5 and 7 shown
in red, green and blue,
respectively. The image
covers an area of 30 km
by 17 km. The meaning of the colours, and
ways to interpret such
imagery, are provided in
Chapter 10.

Multispectral scanner
An instrument is a measuring device for determining the present value of a
quantity under observation. A scanner is an instrument that obtains observations in a point-by-point and line-by-line manner. In this way, a scanner fundamentally differs from an aerial camera, which records an entire image in only
one exposure.
The multispectral scanner is an instrument that measures the reflected sunlight in the visible and infrared spectrum. A sensor systematically scans the
Earth's surface, thereby measuring the energy reflected by the viewed area. This
is done simultaneously for several wavelength bands, hence the name multispectral scanner. A wavelength band or spectral band is an interval of the electromagnetic spectrum for which the average reflected energy is measured. Typically, a number of distinct wavelength bands are recorded, because these bands
are related to specific characteristics of the Earth's surface.
For example, reflection characteristics in the range of 2 μm to 2.5 μm (for
instance, Landsat TM band 7) may give information about the mineral composition of the soil, whereas the combined reflection characteristics of the red and
near infrared bands may tell something about vegetation, such as biomass and
health.
The definition of the wavebands of a scanner, therefore, depends on the applications for which the sensor has been designed. An example of multispectral
data for geological applications is given in Figure 3.3. Methods to interpret such
imagery are introduced in Chapter 10.

Figure 3.4: Total Suspended Matter (TSM) concentration data of the North Sea derived from the SeaWiFS sensor onboard the OrbView-2 satellite. The image roughly covers an area of 600 km by 500 km. Courtesy of CCZM, Rijkswaterstaat.

Imaging spectrometer or hyperspectral imager
The principle of the imaging spectrometer is similar to that of the multispectral
scanner, except that spectrometers measure many (64–256), very narrow (5 nm to
10 nm) spectral bands. This results in an almost continuous reflectance curve per
pixel rather than the limited number of values for relatively broad spectral bands of
the multispectral scanner. The spectral curves depend on the chemical composition and microscopic structure of the measured material. Imaging spectrometer
data, therefore, can be used, for instance, to determine the mineral composition
of the Earth's surface, the chlorophyll content of surface water, or the total suspended matter concentration of surface water (Figure 3.4).

Thermal scanner
Thermal scanners measure thermal data in the range of 8 μm to 14 μm. Wavelengths in this range are directly related to an object's temperature. For instance,
data on cloud, land and sea surface temperature are indispensable for weather
forecasting. For this reason, most remote sensing systems designed for meteorology include a thermal scanner. Thermal scanners can also be used to study
the effects of drought on agricultural crops (water stress), and to monitor the
temperature of cooling water discharged from thermal power plants. Another
application is in the detection of underground coal fires (Figure 3.5).

Figure 3.5: Night-time airborne thermal scanner image of a coal mining area
affected by underground
coal fires. Darker tones
represent colder surfaces,
while lighter tones represent warmer areas. Most
of the warm spots are due
to coal fires, except for the
large white patch, which is
a lake. Apparently, at that
time of night, the temperature of the water is higher
than the temperature of
the land. Scene is approximately 4 km across.

Figure 3.6: Map of the


Iberian Peninsula showing
the long-term trend in soil
moisture change over a
10-year period. The red
area shows a 5% to 10%
(volume percent) decline
in soil moisture in northeastern Spain. The image
roughly covers an area of
1200 km by 900 km. The
map is based on Nimbus/SMMR observations.
Courtesy of Free University Amsterdam / NASA.

Microwave radiometer
Long wavelength EM energy (1 cm to 100 cm) is emitted from the objects on,
or just below, the Earth's surface. Every object with a temperature above the
absolute temperature of zero Kelvin emits radiation, called the blackbody radiation (Section 2.2.2). Natural materials may emit radiation that is somewhat
lower than the ideal case of a blackbody, which is demonstrated by an emissivity
smaller than 1. A microwave radiometer records this emitted radiation of objects.
The depth from which this emitted energy can be recorded depends on the properties of the specific material, such as the water content. The recorded signal is
called the brightness temperature. The physical surface temperature can be calculated from the brightness temperature, but then the emissivity must be known
(see Section 13.2.2). With an emissivity of 98% to 99% water behaves almost like
a blackbody, while land features may show varying emissivity numbers. Furthermore, the emissivity of materials may vary with changing conditions. For
instance, a wet soil may have a considerably higher emissivity than a dry soil.
Because blackbody radiation is weak, the energy must be measured over relatively large areas, and consequently passive microwave radiometers are characterized by a low spatial resolution. Passive microwave radiometer data can be
used in mineral exploration, soil mapping, soil moisture estimation (Figure 3.6),
and snow and ice detection.
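
The relation between brightness temperature and physical surface temperature mentioned above can be sketched as follows. In the microwave region the brightness temperature is, to a good approximation, the physical temperature multiplied by the emissivity, so the physical temperature can be recovered when the emissivity is known; the numerical values below are illustrative assumptions, not data from the text.

    # Sketch: physical surface temperature from a microwave brightness temperature.
    def physical_temperature(brightness_temp_k, emissivity):
        """T_phys = T_brightness / emissivity (Rayleigh-Jeans approximation)."""
        return brightness_temp_k / emissivity

    # Illustrative values: a near-blackbody surface and a surface with lower emissivity.
    print(physical_temperature(brightness_temp_k=285.0, emissivity=0.98))  # about 290.8 K
    print(physical_temperature(brightness_temp_k=270.0, emissivity=0.90))  # about 300.0 K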

3.2.2 Active sensors
Laser scanner
A very interesting active sensor system, similar in some respects to radar, is
lidar (light detection and ranging). A lidar transmits coherent laser light, at a
certain visible or near-infrared wavelength, as a series of pulses (thousands per
second) to the surface, from which some of the light reflects. Travel time for the
round-trip and the returned intensity of the reflected pulses are the measured
parameters. Lidar instruments can be operated as profilers and as scanners on
airborne and spaceborne platforms, day and night. Lidar can serve either as
a ranging device to determine altitudes and measure speeds, or as a particle
analyser for air. Light penetrates certain targets, which makes it possible to use
it for assessing tree height (biomass) and canopy conditions, or for measuring
depths of shallow waters such as tidal flats.
Laser scanners are typically mounted on aircraft or helicopters and use a
laser beam to measure the distance from the sensor to points located on the
ground. This distance measurement is then combined with exact information
on the sensor's position, using a satellite position system and an inertial navigation system (INS), to calculate the terrain elevation. Laser scanning produces
detailed, high-resolution, Digital Terrain Models (DTM) for topographic mapping (Figure 3.7). Laser scanning can also be used for the production of detailed
3D models of city buildings. Portable ground-based laser scanners can be used
for oblique and transverse measurements.
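
Because a lidar measures the round-trip travel time of each pulse, the distance from the sensor to the ground follows directly from the speed of light. A minimal sketch of that calculation (with an illustrative travel time):

    # Sketch: range from a lidar pulse's round-trip travel time.
    C = 3.0e8   # speed of light (m/s)

    def lidar_range(round_trip_time_s):
        """Distance from sensor to target: the pulse travels the distance twice."""
        return C * round_trip_time_s / 2.0

    # Illustrative example: a pulse returning after 6.67 microseconds
    # corresponds to a range of about 1000 m.
    print(f"{lidar_range(6.67e-6):.0f} m")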

Figure 3.7: Digital Terrain Model (5 m grid) of


the marl-pit on the Sint
Pietersberg, the Netherlands. The size of the pit
is roughly 2 km by 1 km.
Clearly visible is the terraced rim of the pit. The
black strip near the bottom of the image is the
river Maas. Courtesy Survey Department, Rijkswaterstaat.

Imaging radar
Radar (radio detection and ranging) instruments operate in the 1 cm to 100 cm
wavelength range. Different wavelength bands are related to particular characteristics of the Earth's surface. The radar backscatter (Figure 3.8) is influenced
by the emitted signal and the illuminated surface characteristics (Chapter 6).
Since radar is an active sensor system and the applied wavelengths are able to
penetrate clouds, it can acquire images day and night and under all weather
conditions, although the images may be affected somewhat by heavy rainfall.
The combination of two stereo radar images of the same area can provide
information about terrain heights (radargrammetry). Similarly, SAR Interferometry (INSAR) combines two radar images acquired at almost the same locations. These images are acquired either at different moments or at the same
moment using two systems on either end of a long boom, and can be used to
assess changes in height or vertical deformations with great precision (5 cm or
better). Such vertical motions may be caused by oil and gas exploitation (land
subsidence), or crustal deformation related to earthquakes.

Figure 3.8: ERS-1 SAR


image of the Mahakam
Delta, Kalimantan. The
image shows different
types of land cover. The
river itself is black. The
darker patch of land on the
left is the inland tropical
rainforest.
The rest is
a mixed forest of Nipa
Palm and Mangrove on
the delta. The right half
of the image shows light
patches, where the forest
is partly cleared.
The
image covers an area of
30 km by 15 km.

Radar altimeter
Radar altimeters are used to measure the topographic profile parallel to the satellite orbit. They provide profiles, i.e. single lines of measurements, rather than
image data. Radar altimeters operate in the 1 cm to 6 cm wavelength range and
are able to determine height with a precision of 2 cm to 5 cm. Radar altimeters
are useful for measuring relatively smooth surfaces such as oceans and for small
scale mapping of continental terrain models. Sample results of radar altimeter
measurements are given in Figure 1.6 and Figure 7.2.

Bathymetry and Side Scan Sonar
Sonar stands for sound navigation ranging. It is a process used to map sea floor
topography or to observe obstacles underwater. It works by emitting a small
burst of sound from a ship. The sound is reflected off the bottom of the body of
water. The time that it takes for the reflected pulse to be received corresponds to
the depth of the water. More advanced systems also record the intensity of the
return signal, thus giving information about the material on the sea floor.
In its simplest form, the sonar looks straight down, and is operated very
much like a radar altimeter. The body of water will be traversed in paths like
a grid, and not every point below the surface will be monitored. The distance
between data points depends on the ship's speed, the frequency of the measurements, and the distance between the adjacent paths.
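
The depth calculation described above is analogous to lidar ranging: the echo's two-way travel time is converted to depth using the speed of sound in water instead of the speed of light. A small sketch, assuming a nominal sound speed of about 1500 m/s (a typical value for sea water, not stated in the text):

    # Sketch: water depth from the two-way travel time of a sonar pulse.
    SOUND_SPEED_WATER = 1500.0   # assumed speed of sound in sea water (m/s)

    def water_depth(two_way_time_s, sound_speed=SOUND_SPEED_WATER):
        """Depth below the ship: the pulse travels down and back up."""
        return sound_speed * two_way_time_s / 2.0

    # Illustrative example: an echo received after 0.2 s corresponds to about 150 m depth.
    print(f"{water_depth(0.2):.0f} m")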
One of the most accurate systems for imaging large areas of the ocean floor
is called the side scan sonar. This is a towed system that is normally moved in a
straight line. Somewhat similar to side looking airborne radar (SLAR), side scan
sonar transmits a specially shaped acoustic beam perpendicular to the ship's
path, and out to the left and right side. This beam propagates into the water and
across the seabed. The roughness of the floor of the ocean and any objects lying
upon it reflect some of the incident sound energy back in the direction of the
sonar. The sonar is sensitive enough to receive these reflections, amplify them
and send them to a sonar data processor and display. Images produced by side
scan sonar systems are highly accurate and can be used to delineate even very
small (< 1 cm) objects.
The shape of the beam in side scan is crucial to the formation of the final
image. Typically, the acoustic beam of a side scan sonar is very narrow in the horizontal dimension (about 0.1 degree) and much wider (40 to 60 degrees) in the
vertical dimension.
Using sonar data, contour maps of the bottoms of bodies of water are made.
Maps that show the contours under bodies of water through depth soundings
are called bathymetric maps. They are analogous to topographic maps that are
made to show contours of terrestrial areas.

3.3 Platforms
A platform is a vehicle, such as a satellite or aircraft, used for a particular activity
or purpose, or to carry specific kinds of equipment or instruments.
Sensors used in remote sensing can be carried at heights ranging from just
a few centimeters, using field equipment, up to orbits in space as far away as
36,000 km (geostationary orbits) and beyond. Very often the sensor is mounted
on a moving vehicle, which we call the platform, such as aircraft and satellites.
Occasionally, static platforms are used. For example, by using a multispectral
sensor mounted to a pole, the changing reflectance characteristics of a specific
crop during the day or season can be assessed.
Airborne observations are carried out using aircraft with specific modifications to carry sensors. An aircraft needs a hole in the floor or a special remote
sensing pod for the aerial camera or a scanner. Sometimes Ultra Light Vehicles
(ULVs), balloons, helicopters, airships or kites are used for airborne remote sensing. Depending on the platform and sensor, airborne observations are possible
at altitudes ranging from less than 100 m up to 40 km.
The navigation of an aircraft is one of the most crucial parts of airborne
remote sensing. The availability of satellite navigation technology has significantly improved the quality of flight execution as well as the positional accuracy
of the processed data. A recent development is the use of Unmanned Aerial
Vehicles (UAVs) for remote sensing.
For spaceborne remote sensing, satellites and space stations are used. Satellites
are launched into space with rockets. Satellites for Earth Observation are typically
positioned in orbits between 150 km and 36,000 km altitude. The choice of the
specific orbit depends on the objectives of the mission, e.g. continuous observation of large areas or detailed observation of smaller areas.
A recent development is the use of relatively small satellites with a low mass
of 1 kg to 100 kg (mini, micro and nanosatellites), which can be developed and
launched at relatively low-cost. These satellites are so small that they can even
be put into orbit by solid rocket boosters launched from aircraft flying at speeds
of 1000 km/h at 12 km altitude.

3.3.1 Airborne remote sensing

Airborne remote sensing is carried out using different types of aircraft depending on the operational requirements and budget available. The speed of the
aircraft can vary between 150 km/h and 750 km/h and must be carefully chosen
in relation to the mounted sensor system. The selected altitude influences the
scale and the resolution of the recorded images. Apart from the altitude, the aircraft's orientation also affects the geometric characteristics of the remote sensing
data acquired. The orientation of the aircraft is influenced by wind conditions
and can be corrected for to some extent by the pilot. The orientation can be expressed by three different rotation angles relative to a reference path, namely
roll, pitch and yaw (Figure 3.9). A satellite position system and an Inertial Navigation System (INS) can be installed in the aircraft to measure its position and
the three rotation angles at regular intervals. Subsequently, these measurements
can be used to correct the sensor data for geometric distortions resulting from
altitude and orientation errors.

Figure 3.9: The three angles (roll, pitch and yaw) of an aircraft that influence the geometry of the acquired images.

Today, most aircraft are equipped with standard satellite navigation technology, which yields positional accuracies better than 10 m to 20 m (horizontal) or
20 m to 30 m (horizontal plus vertical). More precise positioning and navigation
(1 m to 5 m) is possible using a technique called differential correction, which involves the use of a second satellite receiver. This second system, called the base
station, is located at a fixed and precisely known position. Even better positional
accuracies (1 cm) can be achieved using more advanced equipment. In this textbook we refer to satellite navigation in general, which comprises the American
GPS system, the Russian Glonass system and the forthcoming European Galileo
system. Refer to the Principles of Geographic Information Systems textbook for an
introduction to GPS.
In aerial photography (Chapter 4) the images are recorded on hard-copy material (film, Section 4.3) or, in case of a digital camera, recorded digitally. For
digital sensors, e.g. a multispectral scanner, the data can be stored on tape and
other mass storage devices or transmitted directly to a receiving station.
Owning, operating and maintaining survey aircraft, as well as employing a
professional flight crew is an expensive undertaking. In the past, survey aircraft were owned mainly by large national survey organizations that required
large amounts of photography. There is an increasing trend towards contracting specialized private aerial survey companies. Still, this requires a thorough
understanding of the process involved.

3.3.2 Spaceborne remote sensing

Spaceborne remote sensing is carried out using sensors that are mounted on
satellites, space vehicles and space stations. The monitoring capabilities of the
sensor are to a large extent determined by the parameters of the satellite's orbit.
In general, an orbit is a circular path described by the satellite in its revolution
about the Earth. Different types of orbits are required to achieve continuous
monitoring (meteorology), global mapping (land cover mapping), or selective
imaging (urban areas). For remote sensing purposes, the following orbit characteristics are relevant:
Orbital altitude, which is the distance (in km) from the satellite to the surface
of the Earth. Typically, remote sensing satellites orbit either at 150 km to
1000 km (low-earth orbit, or LEO) or at 36,000 km (geostationary orbit or
GEO) distance from the Earth's surface. This altitude influences to a large
extent the area that can be viewed (coverage) and the details that can be
observed (resolution).
Orbital inclination angle, which is the angle (in degrees) between the orbital
plane and the equatorial plane. The inclination angle of the orbit determines, together with the field of view of the sensor, the latitudes up to
which the Earth can be observed. If the inclination is 60°, then the satellite
flies over the Earth between the latitudes 60° north and 60° south. If the
satellite is in a low-earth orbit with an inclination of 60°, then it cannot observe parts of the Earth at latitudes above 60° north and below 60° south,
which means it cannot be used for observations of the polar regions of the
Earth.
Orbital period, which is the time (in minutes) required to complete one full
orbit. The orbital period and the mean distance to the centre of the Earth
are interrelated (Kepler's third law). For instance, if a polar satellite orbits
at 806 km mean altitude, then it has an orbital period of 101 minutes, and
its ground speed of 23,700 km/h is equal to 6.5 km/s (a sketch of this calculation is given after this list). Compare this figure
with the speed of an aircraft of around 400 km/h: the satellite is roughly
60 times faster. The speed of the platform has implications for the type of
images that can be acquired (exposure time).
Repeat cycle, which is the time (in days) between two successive identical orbits. The revisit time, the time between two subsequent images of the same area, is determined by the repeat cycle together with the pointing capability of the sensor. Pointing capability refers to the ability of the sensor-platform system to look to the side, or fore and aft. The sensors mounted on SPOT, IRS and Ikonos (Section 5.4) have such a capability.
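As an aside, the link between orbital altitude, orbital period and ground speed mentioned under "Orbital period" can be reproduced with Kepler's third law. The following minimal Python sketch (not part of the original text) uses approximate values for the Earth's radius and gravitational parameter:

```python
import math

GM = 3.986e14          # Earth's gravitational parameter (m^3/s^2), approximate
R_EARTH = 6_371_000.0  # mean Earth radius (m), approximate

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit (Kepler's third law): T = 2*pi*sqrt(a^3 / GM)."""
    a = R_EARTH + altitude_km * 1000.0              # semi-major axis (m)
    return 2 * math.pi * math.sqrt(a**3 / GM) / 60

def ground_speed_km_h(altitude_km: float) -> float:
    """Speed of the sub-satellite point over the ground."""
    period_s = orbital_period_minutes(altitude_km) * 60
    return 2 * math.pi * R_EARTH / period_s * 3.6

print(orbital_period_minutes(806))  # ~101 minutes
print(ground_speed_km_h(806))       # ~23,800 km/h, i.e. about 6.6 km/s
```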
The following orbit types are most common for remote sensing missions:
Polar orbit. An orbit with an inclination angle between 80° and 100°. An inclination larger than 90° means that the satellite's motion is in westward direction. (Launching a satellite in eastward direction requires less energy, because of the eastward rotation of the Earth.) Such a polar orbit enables observation of the whole globe, also near the poles. The satellite is typically placed in orbit at 600 km to 1000 km altitude.
Sun-synchronous orbit. This is a near-polar orbit chosen in such a way that the satellite always passes overhead at the same local solar time. To achieve this, the inclination angle must be carefully chosen, typically between 98° and 99°. Most sun-synchronous orbits cross the equator at mid-morning, at around 10:30 hours local solar time. At that moment the Sun angle is low and the
resultant shadows reveal terrain relief. In addition to day-time images, a
sun-synchronous orbit also allows the satellite to record night-time images
(thermal or radar) during the ascending phase of the orbit at the dark side
of the Earth. Examples of polar orbiting, sun-synchronous satellites are
Landsat, SPOT and IRS.
Geostationary orbit. This refers to orbits in which the satellite is placed above the equator (inclination angle: 0°) at an altitude of approximately 36,000 km. At this distance, the orbital period of the satellite is equal to the rotational period of the Earth, exactly one sidereal day. The result is that the satellite is at a fixed position relative to the Earth. Geostationary orbits are used for meteorological and telecommunication satellites.

Figure 3.10: Meteorological observation system comprised of geostationary and polar satellites.

Today's meteorological weather satellite systems use a combination of geostationary satellites and polar orbiters. The geostationary satellites offer a continuous hemispherical view of almost half the Earth (45%), while the polar orbiters offer a higher spatial resolution (Figure 3.10).

Other orbits. For a small satellite in a system of two massive bodies, there are five points where the gravitational pulls of the two bodies are in equilibrium. These points are known as the Lagrangian, or libration, points L1 to L5. At these points, a satellite can be positioned at zero velocity with respect to both bodies. The L1 point of the Sun-Earth system, located between the Earth and the Sun at about 1.5 million kilometres from the Earth, is already in use by a number of solar observation satellites. The Triana Earth observation satellite was developed to be put in this L1 point, but the satellite was never launched. In the future, the Lagrangian points of the Earth-Moon system may also be used for Earth observation.
The data of spaceborne sensors need to be sent to the ground for further analysis and processing. Some older spaceborne systems used film cartridges that fell back to a designated area on Earth. Today, practically all Earth observation satellites use satellite communication technology to downlink the data. The acquired data are sent directly to a receiving station, or to a (geostationary) communication satellite that relays the data to receiving stations on the ground. If the satellite is outside the range of a receiving station, the data can be stored temporarily on an on-board recorder and transmitted later. One of the current trends is that small receiving units, consisting of a small dish with a PC, are being developed for local reception of image data.

3.4 Image data characteristics


Remote sensing image data are more than just a picture; they are measurements of EM energy. Image data are stored in a regular grid format (rows and columns). A single image element is called a pixel, a contraction of "picture element". For each pixel, the measurements are stored as Digital Numbers, or DN-values. Typically, a separate data set is stored for each measured wavelength range; such a data set is called a band or a channel, and sometimes a layer (Figure 3.11).
Figure 3.11: An image file comprises a number of bands. For each band the Digital Numbers, or DN-values, corresponding to the measurements are stored in a row/column system.
The quality of image data is primarily determined by the characteristics of the sensor-platform system. The image characteristics are usually referred to as:
1. Spatial characteristics, which refer to the area measured.
2. Spectral characteristics, which refer to the spectral wavelengths that the sensor is sensitive to.

3. Radiometric characteristics, which refer to the energy levels that are measured by the sensor.
4. Temporal characteristics, which refer to the time of the acquisition.
Each of these characteristics can be further specified by the extremes that are observed (coverage) and the smallest units that can be distinguished (resolution):
Spatial coverage, which refers to the total area covered by one image. With
multispectral scanners this is proportional to the total field of view (FOV) of
the instrument, which determines the swath width on the ground.
Spatial resolution, which refers to the smallest unit-area measured. This
indicates the minimum detail of objects that can be distinguished.
Spectral coverage, which is the total wavelength range observed by the sensor.
Spectral resolution, which is related to the widths of the spectral wavelength
bands that the sensor is sensitive to.
Dynamic range, which refers to the minimum and maximum energy levels
that can be measured by the sensor.
Radiometric resolution, which refers to the smallest differences in levels of
energy that can be distinguished by the sensor.
Temporal coverage, which is the span of time over which images are recorded and stored in image archives.

Revisit time, which is the (minimum) time between two successive image
acquisitions over the same location on Earth. This is sometimes referred to
as temporal resolution.
Some characteristics need a more complicated, sensor-specific explanation. These are explained in the respective chapters (camera, multispectral scanner, radar). Additional (related) properties of image data are:
Pixel size, determined by the image coverage and the image size, is the area
covered by one pixel on the ground. Pixel sizes of different sensor systems
may range from less than 1 meter (high spatial resolution) to larger than
5 km (low spatial resolution).
Number of bands, refers to the number of distinct wavelength-bands stored.
Typical values are, for example, 1 panchromatic band (black/white aerial
photography), 15 multispectral bands (Terra/ASTER), or 220 hyperspectral bands (EO-1/Hyperion).
Quantization, refers to the technique of representing radiometric levels by
a limited set of discrete numbers. Typically, 8, 10, or 12 bits are used for
representing the radiometric levels. For instance, if for each measurement
8 bits (1 byte) are used, then these levels are represented by the integer
values ranging from 0 to 255. These values are also referred to as Digital
Numbers, or DN-values. Using the sensor-specific calibration parameters,
DN-values can be converted into physical parameters, such as measured energy (watt) or reflectance.
Image size, is related to the spatial coverage and the spatial resolution. It
is expressed as the number of rows (or lines) and number of columns (or
samples) in one scene. Typically, remote sensing images contain thousands
of rows and columns.
The image size in bytes can be calculated from the number of rows and columns, the number of bands and the number of bits per pixel. For example, a 4-band multispectral image may require as much as: 3000 rows × 3000 columns × 4 bands × 1 byte = 36 MB of storage.
Geostationary weather satellites, because of their almost continuous observation capability, may generate much more data. For instance, every 15 minutes, Meteosat-8 generates a 3700 by 3700, 12-band, 10-bit image, which is: 3700 rows × 3700 columns × 12 bands × 1.25 bytes/pixel × 96 images/day = 20 GB/day, i.e. 20 gigabytes of image data per day. This means that Meteosat-8 alone generates several terabytes (10¹² bytes) per year. Some data archives are rapidly approaching the petabyte (10¹⁵ bytes) limit.
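The storage figures above follow directly from the image dimensions, the number of bands and the number of bits per pixel. A minimal Python sketch of this arithmetic, using only the numbers quoted in the text:

```python
def image_size_bytes(rows: int, cols: int, bands: int, bits_per_pixel: int) -> float:
    """Uncompressed image size: rows x columns x bands x bytes per pixel."""
    return rows * cols * bands * bits_per_pixel / 8

# 4-band multispectral image, 8 bits (1 byte) per pixel
print(image_size_bytes(3000, 3000, 4, 8) / 1e6)   # 36 MB

# Meteosat-8: a 3700 x 3700, 12-band, 10-bit image every 15 minutes (96 per day)
per_image = image_size_bytes(3700, 3700, 12, 10)
print(per_image * 96 / 1e9)                       # ~20 GB per day
print(per_image * 96 * 365 / 1e12)                # ~7 TB per year
```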

3.5 Data selection criteria

Spatio-temporal characteristics
For the selection of the appropriate data type it is necessary to fully understand
the information requirements for a specific application. Therefore, you have to
analyse the spatio-temporal characteristics of the phenomena under investigation. For example, a different type of image data is required for topographic
mapping of an urban area than for studying changing global weather patterns.
In the case of urban area mapping, because cities contain much spatial detail,
a spatial resolution of 10 cm may be required. Aerial photography and airborne
digital scanners can fulfill this requirement.
Another consideration in your data selection process is the third dimension,
i.e. the height or elevation component. Stereo images or interferometric radar
data can provide height information.
The time of image acquisition should also be considered. In aerial photography, a low Sun angle causes long shadows from tall buildings, which may obscure features of interest. To avoid long shadows, the aerial photographs could be taken around noon.
The type and amount of cloud cover also play a role, and under certain conditions no images can be taken at all. Persistent cloud cover may be the reason for a lack of image data in areas of the tropical belt. The use of radar satellites may be a solution to this problem.
Yet another issue is seasonal cycles. In countries with a temperate climate, deciduous trees bear no leaves in the winter and early spring, and therefore images taken during this time of year allow the best view of the infrastructure. However, during the winter the Sun angle may be too low to take good images, because of long shadows.
In the case of monitoring slow and long-term processes, such as desertification or the El Niño effect, the temporal aspect is very important and can be
translated into conditions for data continuity during many years. Ideally, image
data of comparable characteristics (spatial, spectral, radiometric and temporal)
should be available over longer periods.

Availability of image data
Once the image data requirements have been determined, you have to investigate their availability and costs. Availability depends on whether the data have already been acquired and stored in archives, or need to be acquired at your request. The size and accessibility of image archives are growing at an ever faster rate. If up-to-date data are required, these need to be requested through an aerial survey company or from a remote sensing data distributor.
Examples of current operational spaceborne missions (platform/sensor) are:
High-resolution panchromatic sensors with a pixel size between 0.5 m and
6 m (OrbView/PAN, Ikonos/PAN, QuickBird/PAN, EROS/PAN, IRS PAN,
Spot/PAN).
Multispectral sensors with a spatial resolution between 4 m and 30 m (Landsat/ETM+, Spot/HRG, IRS/Liss3, Ikonos/OSA, CBERS/CCD, Terra/Aster).
A large number of weather satellites and other low-resolution sensors with
a pixel size between 0.1 km and 5 km (GOES/Imager, Meteosat/Seviri,
Insat/VHRR, NOAA/AVHRR, Envisat/Meris, Terra/MODIS, Spot/VEGETATION, IRS/WiFS).
Imaging radar missions with a spatial resolution between 8 m and 150 m
(Envisat/ASAR, ERS/SAR, Radarsat/SAR).
No attempt was made to make this list complete. Around 15 civil remote sensing satellites are launched each year. Check the ITC's Database of Satellites and

Sensors for an up-to-date list. Some of the systems are discussed in more detail
in Chapter 5.
Today, more than 1000 aerial survey cameras are in service and used to acquire (vertical) aerial photography. Per year, an estimated 30 aerial survey cameras are sold.
In addition to film cameras, an increasing number of other types of sensors are used in airborne remote sensing; [21] lists more than 200 types of instruments, including 15 types of imaging spectrometers, 20 radar systems and
20 laser scanners. Worldwide, a growing number of airborne laser scanners
(more than 50) are available, with the majority being flown in North America.
Also a growing number of airborne imaging spectrometers (more than 30) are
available for data acquisition. These instruments are mainly owned by mining companies. The number of operational airborne radar systems is relatively
small. Although many experimental radar systems exist, there are only a few
commercially operated systems.

Cost of image data
It is rather difficult to provide indications of the costs of image data. Costs of different types of image data can only be compared when calculated for a specific project with specific data requirements. Existing data from an archive are cheaper than data that have to be specially ordered. Another reason to be cautious in giving prices is that image data are offered at different processing levels, and hence qualities.
The costs of vertical aerial photographs depend on the size of the area, the photo scale, the type of film and processing used, and the availability of aerial reconnaissance companies. Under European conditions the cost of aerial photography is somewhere between 5 Euro/km² and 20 Euro/km². The cost of optical satellite data varies from free (public domain) to 45 Euro/km². Usually, the images either have a fixed size, or a minimum order applies. Low resolution data (NOAA AVHRR) can be downloaded for free from the Internet. Medium resolution data (Landsat, SPOT, IRS) cost in the range of 0.01 Euro/km² to 0.70 Euro/km². High resolution satellite data (Ikonos, SPIN-2) cost between 15 Euro/km² and 45 Euro/km². For Ikonos-derived information products, prices can go up to 150 Euro/km². Educational institutes often receive a considerable reduction.
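As a rough illustration of how such per-km² prices translate into a project budget, the following sketch multiplies an assumed project area by the indicative price ranges quoted above (the area is an example only; fixed scene sizes and minimum orders are ignored):

```python
# Indicative prices from the text (Euro per km2); actual quotes vary per project
# and specially ordered data are more expensive than archived data.
PRICES_EUR_PER_KM2 = {
    "aerial photography (European conditions)": (5, 20),
    "medium-resolution satellite (Landsat, SPOT, IRS)": (0.01, 0.70),
    "high-resolution satellite (Ikonos, SPIN-2)": (15, 45),
}

area_km2 = 500  # example project area (assumption, not from the text)

for source, (low, high) in PRICES_EUR_PER_KM2.items():
    print(f"{source}: {area_km2 * low:,.0f} to {area_km2 * high:,.0f} Euro")
```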

Summary
This chapter has provided an introduction to the sensors and platforms used for remote sensing observations. Aircraft and satellites are the main platforms used in remote sensing. Both types of platforms have their own specific characteristics. Two main categories of sensors are distinguished: passive and active. Passive sensors depend on an external source of energy, such as the Sun. Active sensors have their own source of energy. A sensor carries out measurements of reflected or emitted (EM) energy. The energy measured in specific wavelength bands is related to the Earth's surface characteristics. The measurements are stored as pixels in image data. The characteristics of image data are related to the characteristics of the sensor-platform system (spatial, spectral, radiometric and temporal). Depending on the spatio-spectral-temporal characteristics of the phenomena of interest, the most appropriate remote sensing data can be determined. To a large extent, data availability and costs may determine which remote sensing data are used.

Questions
The following questions can help you to study Chapter 3.
1. Think of an application, define the spatio-spectral-temporal characteristics
of interest and determine the type of remote sensing image data required.
2. Which types of sensors are used in your discipline or field-of-interest?
3. Which aspects need to be considered to assess whether the statement "RS data acquisition is a cost-effective method" is true?
The following are sample exam questions:
1. Explain the sensor-platform concept.
2. Mention two types of passive and two types of active sensor.
3. What is a typical application of a multispectral satellite image, and what is
a typical application of a very high spatial resolution satellite image?

4. Describe two differences between aircraft and satellite remote sensing and
their implications for the data acquired.
5. Which two types of satellite orbits are mainly used for Earth observation?
6. List and describe four characteristics of image data.

Chapter 4
Aerial cameras

4.1 Introduction
Aerial photography has been used since the early 20th century to provide spatial data for a wide range of applications. It is the oldest, yet most commonly
applied remote sensing technique. The science and technique of making measurements from photos or image data is called photogrammetry. Nowadays,
almost all topographic maps are based on aerial photographs. Aerial photographs also provide the accurate data required for many cadastral surveys and
civil engineering projects. Aerial photography is a useful source of information
for specialists such as foresters, geologists and urban planners. General references for aerial photography are [23, 24].
The aerial camera, mounted in an aircraft and using a lens to record data
on to photographic film, is therefore by far the longest serving sensor and platform system used in remote sensing. Although usually mounted in aircraft, conventional photographic cameras are also carried by some low orbiting satellites
and on the NASA Space Shuttle missions. Also digital cameras (using charge-coupled devices [CCDs] instead of film as sensor) are used nowadays.

Figure 4.1: Vertical (a) and oblique (b) photography.

Two broad categories of aerial photography can be distinguished: vertical and oblique photography (Figure 4.1). In most mapping applications, vertical aerial photography is required. Vertical aerial photography is produced with a camera mounted in the floor of an aircraft. The resulting image is rather similar to a map and has a scale that is approximately constant throughout the image area. Usually, vertical aerial photography is acquired in stereo, in which successive photos have a degree of overlap to enable stereo-interpretation and stereo-measurements (see Section 4.7).
Oblique photographs are obtained when the axis of the camera is not vertical. They can also be made using a hand-held camera and shooting through
the (open) window of an aircraft. The scale of an oblique photo varies from the
foreground to the background. This scale variation complicates the measurement of positions from the image and, for this reason, oblique photographs are
rarely used for mapping purposes. Nevertheless, oblique images can be useful
for purposes such as viewing sides of buildings.

Figure 4.2: Vertical (a) and oblique (b) aerial photo of the ITC building. Photos by Paul Hofstee, 1999.

This chapter focusses on the camera, films and methods used for vertical aerial photography. First of all, Section 4.2 introduces the aerial camera and its main components. Photography is based on exposure of a film, processing and printing. The type of film applied largely determines the spectral and radiometric characteristics of the image products (Section 4.3). Section 4.4 discusses the use of CCDs in digital cameras. Section 4.5 focusses on the geometric characteristics of aerial photography, and Section 4.6 deals with relief displacement. In Section 4.7 some aspects of aerial photography missions are introduced. In Section 4.8, some recent technological developments are discussed.

4.2 Aerial camera


A camera used for vertical aerial photography for mapping purposes is called an aerial survey camera. In this section the standard aerial camera is introduced. At present, there are only two major manufacturers of aerial survey cameras, namely Leica-Helava Systems (LH Systems) and Z/I Imaging. These two companies produce the RC-30 and the RMK-TOP, respectively. There are also digital cameras, and the number of producers of digital aerial survey cameras is growing. Just like a typical hand-held camera, the aerial survey camera contains a number of common components, as well as a number of specialized ones necessary for its specific role. The large size of the camera results from the need to acquire images of large areas with a high spatial resolution. This is realized by using a very large film size. Modern aerial survey cameras produce negatives measuring 23 cm × 23 cm (9 by 9 inch). Up to 600 photographs may be recorded on a single roll of film. For a digital camera to have the same kind of quality as the aerial film camera, it would have to have about 200 million pixels. Today's digital cameras do not have so many pixels (yet), but they do have other advantages, as explained at the end of the chapter.

4.2.1 Lens cone

Perhaps the most important (and expensive) single component within the camera is the lens cone. This is interchangeable, and the manufacturers produce a
range of cones, each of different focal length. Focal length is the most important property of a lens cone since, together with flying height, it determines the
photo scale (Section 4.5.1). The focal length also determines the angle of view of
the camera. The longer the focal length, the narrower the angle of view. Lenses
are usually available in the following standard focal lengths, ranging from narrow angle (610 mm), to normal angle (305 mm) to wide angle (210 mm, 152 mm)
to super-wide angle (88 mm). The 152 mm lens is the most commonly used lens.

Figure 4.3: The lens cone comprises a lens system, focal plane frame, shutter, diaphragm, anti-vignetting and coloured filters.

The lens cone is responsible for projecting an optical image onto the film.
In an ideal lens, all the lines passing the lens can be thought of going through
one central point. Hence, the projection is called the central projection. The accuracy of this projection depends on the quality of the actual lens. Even with high
quality lenses some distortions still take place. These distortions are imperceptible to the human eye, but adversely affect photogrammetric operations where very precise measurements (at µm level) are required. Therefore, the distortion
of lenses is measured on a regular basis and reported in a calibration report.
This report is required in many photogrammetric processes.
In the lens cone (Figure 4.3), the diaphragm and shutter, respectively, control
the intensity and the duration of light reaching the film. Typically, optical filters
are placed over the lens to control two important aspects of the image quality:
Image contrast. A yellow filter is often used (in black and white photography) to absorb the ultra-violet and blue wavelengths, which are the most
highly scattered within the atmosphere (Section 2.3.2). This scattering effect, if not corrected for, normally leads to a reduction in image contrast.
Evenness of illumination. Any image formed by a lens is brightest at the
centre and darkest in the corners. This is known as light fall-off or vignetting, and is related to the angle with the optical axis. The greater the
angle, the darker the image. In wide angle lenses, this effect can be severe
and images appear dark in the corners. The effect can be partially corrected
by an anti-vignetting filter, which is a glass plate, in which the centre transmits less light than the corners. In this way the image illumination is made
more even over the whole image area, and the dark corners are avoided.

4.2.2 Film magazine and auxiliary data

The aerial camera is fitted with a system to record various items of relevant information onto the side of the negative: mission identifier, date and time, flying
height and the frame number (Figure 4.4).
Figure 4.4: Auxiliary data annotation on an aerial photograph: altimeter, watch, message pad, lens number and focal length, fiducial marks, frame number and spirit level.

A vacuum plate is used for flattening the film at the instant of exposure. So-called fiducial marks are recorded in all corners and/or on the sides of the film. The fiducial marks are required to determine the optical centre of the photo, needed to align photos for stereoviewing. The fiducials are also used to record the precise position of the film in relation to the optical system, which is required

in photogrammetric processes (interior orientation, Section 9.3.1). Modern cameras also have an image motion compensation facility (Section 4.8).

4.2.3 Camera mounting

The camera mounting enables the camera to be levelled by the operator. This is usually performed at the start of a mission once the aircraft is in a stable configuration. Formal specifications for aerial survey photography usually require that the majority of images are maintained within a few degrees (less than 3°) of true vertical. Errors exceeding this cause difficulties in photogrammetric processing (Section 9.3). The levelling of the camera can also be done automatically and precisely by a stabilized platform, but this is quite expensive.

4.3 Spectral and radiometric characteristics


Photographic recording is a multi-stage process that involves film exposure and
chemical processing (development). It is usually followed by printing.
Photographic film comprises a light sensitive emulsion layer coated onto a
base material. The emulsion layer contains silver halide crystals, or grains,
suspended in gelatine. The emulsion is supported on a stable polyester base.
Light changes the silver halide into silver metal, which, after processing of the
film, appears black. The exposed film, before processing, contains a latent image.
The film emulsion type applied determines the spectral and radiometric characteristics of the photograph. Two terms are important in this context:
General sensitivity is a measure of how much light energy is required to
bring about a certain change in density of silver in the film. Given specific
illumination conditions, the general sensitivity of a film can be selected, for
example, to minimize exposure time.
Spectral sensitivity describes the range of wavelengths to which the emulsion is sensitive. For the study of vegetation the near-infrared wavelengths yield much information and should be recorded. For other purposes a standard colour photograph normally yields the optimal basis for interpretation.
In the following sections, first the general sensitivity and then the spectral sensitivity are explained. Subsequently, colour and colour infrared photography are discussed. Section 4.3.4 presents some remarks about the scanning of photos.

4.3.1 General sensitivity

The energy of a light photon is inversely proportional to the light wavelength. In the visible range, therefore, blue light has the highest energy. For a normal
silver halide grain, only blue light photons have sufficient energy to form the
latent image, and hence a raw emulsion is only sensitive to blue light.
The sensitivity of a film can be increased by increasing the mean size of the silver grains: larger grains produce more metallic silver per input light photon. The mean grain size of aerial films is in the order of a few µm. There is a problem related to increasing the grain size: larger grains are unable to record small details, i.e. the spatial resolution is decreased (Section 4.5.2). The other technique to improve the general sensitivity of a film is to sensitize the emulsion by adding small quantities of chemicals, such as gold or sulphur.
General sensitivity is often referred to as film speed. For a scene of given average brightness, the higher the film speed, the shorter the exposure time required to record the optical image on the film. Similarly, the higher the film speed, the less bright an object needs to be in order to be recorded upon the film.

4.3.2 Spectral sensitivity

Sensitization techniques are used not only to increase the general sensitivity but
also to produce films that are sensitive to longer wavelengths. By adding sensitizing dyes to the basic silver halide emulsion, the energy of longer light wavelengths becomes sufficient to produce latent images. In this way, a monochrome
film can be made sensitive to green, red or infrared wavelengths (Section 3.3.2).
A black-and-white (monochrome) type of film has one emulsion layer. Using sensitization techniques, different types of monochrome films are available. Most common are the panchromatic and the black-and-white infrared sensitive films. The sensitivity curves of these films are shown in Figure 4.5.

0.6
0.4
0.2

0.4
0.2
Blue + Green + Red + Infra Red Sensitive

0.0

(a)

first

Figure 4.5:
Spectral
sensitivity curves of a
panchromatic film (a) and
a black/white infrared film
(b). Note the difference in
scaling on the x-axis.

0.6

Blue, Green & Red Sensitive


0.0

400

previous

500
600
Wavelength (nm)

next

last

700

back

400

exit

500

600
700
Wavelength (nm)

zoom

800

contents

900

(b)

index

CAUTION

about

140

4.3. Spectral and radiometric characteristics

4.3.3 True colour and colour infrared photography

Colour photography uses an emulsion with three sensitive layers to record three wavelength bands corresponding to the three primary colours of the spectrum, i.e. blue, green and red. There are two types of colour photography: true colour and false colour infrared.
In true colour photography each of the primary colours creates colour particles of the complementary colour, i.e. dyes that remove this colour but allow the other two colours to pass. The result is a colour negative. If we make a colour copy of that negative, we will get the original colours in that copy.
The emulsion used for colour infrared film creates yellow dyes for green light, magenta dyes for red light and cyan dyes for infrared (IR) light. Blue light should be kept out of the camera by a filter. IR, red and green thus play the same roles that red, green and blue play in the normal case. If a copy of this IR-negative is made with a normal colour emulsion, the result is an image which shows blue for green objects, green for red objects and red for IR-reflecting objects. This is called a false colour infrared image.

4.3.4 Scanning

Classical photogrammetric techniques as well as visual photo-interpretation generally employ hard-copy photographic images. These can be the original negatives, positive prints or diapositives. Digital photogrammetric systems, as well
as geographic information systems, require digital photographic images. A scanner is used to convert a film or print into a digital form. The scanner samples
the image with an optical detector and measures the brightness of small areas
(pixels). The brightness values are then represented as a digital number (DN)
on a given scale. In the case of a monochrome image, a single measurement is
made for each pixel area. In the case of a coloured image, separate red, green
and blue values are measured. For simple visualization purposes, a standard
office scanner can be used, but high metric quality scanners are required if the
digital photos are to be used in precise photogrammetric procedures.
In the scanning process, the setting of the scanning resolution is most relevant. This is also referred to as the scanning density and is expressed in dots per inch (dpi; 1 inch = 2.54 cm). The dpi-setting depends on the detail required for the application and is usually limited by the scanner. Office scanners permit around 600 dpi, which gives a dot size of 42 µm (25.4 mm / 600 dots). Photogrammetric scanners, on the other hand, may produce 3600 dpi (7 µm dot size).
For a monochrome 23 cm × 23 cm negative, 600 dpi scanning results in a file size of 9 × 600 = 5,400 rows and the same number of columns. Assuming that 1 byte is used per pixel (i.e. there are 256 grey levels), the resulting file requires 29 Mbyte of disk space. When the scale of the negative is given, the ground pixel size of the resulting image can be calculated. Assuming a photo scale of 1:18,000, the first step is to calculate the size of one dot: 25.4 mm / 600 dots = 0.04 mm per dot. The next step is to relate this to the scale: 0.04 mm × 18,000 = 720 mm in
the terrain. The ground pixel size of the resulting image is, therefore, 0.72 meter.
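The file size and ground pixel size calculations above generalize as follows; this Python sketch simply wraps the arithmetic from the text (the small differences with respect to the quoted 5,400 rows and 0.72 m stem from the rounding used in the text):

```python
INCH_MM = 25.4

def dot_size_mm(dpi: int) -> float:
    """Size of one scanned dot on the film."""
    return INCH_MM / dpi

def scanned_rows(negative_cm: float, dpi: int) -> int:
    """Number of rows (= columns) for a square negative."""
    return round(negative_cm * 10 / dot_size_mm(dpi))

def ground_pixel_size_m(dpi: int, scale_factor: int) -> float:
    """Ground pixel size = dot size on the film multiplied by the photo scale factor."""
    return dot_size_mm(dpi) * scale_factor / 1000

rows = scanned_rows(23, 600)              # 5,433 (the text rounds 23 cm to 9 inch: 5,400)
print(rows, rows * rows / 1e6)            # ~29 Mbyte at 1 byte per pixel
print(ground_pixel_size_m(600, 18_000))   # ~0.76 m (0.72 m when the dot size is rounded to 0.04 mm)
```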

4.4 CCD as image recording device


Charge-coupled devices (CCDs) are light sensitive semiconductor chips. The chip area is subdivided into individual sensors (pixels) and equipped with electronic control devices. When light hits the area of a sensor, a charge is created there. Electronically, this charge is fenced in so that it stays in this area. After exposure of the chip, the charges at each pixel are proportional to the amount of light received. Subsequently, each measurement is immediately converted to a digital number and stored.
The geometric fidelity of such sensors is very high. With respect to radiometry the following is important to know:
Blooming. There is a limit to the charge a pixel can hold. If a pixel is
over-exposed, then the charge will spill over to neighbouring pixels, which
causes an increase in their values. This effect is called blooming. There
are anti-blooming constructions, but this complicates the chip and reduces
the charge capacity of each pixel. Proper exposure control can avoid this.
Uniformity. The individual pixels do not have exactly the same sensitivity.
These differences can be calibrated and corrected for in the resulting digital
image.
Dynamic range. There are also random variations in the collected charge
and its measurement. This random signal is called noise. The ratio between the maximum charge capacity of the pixels and the noise, which
determines the minimum meaningful level, is an important quantity. The
range between the minimum and maximum level that can be measured is
called the dynamic range.
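The ratio between charge capacity and noise is often expressed in decibels or as an equivalent number of bits. A small sketch with assumed, purely illustrative values for the full-well capacity and the noise level (these numbers are not from the text):

```python
import math

def dynamic_range(full_well_electrons: float, noise_electrons: float):
    """Ratio of the maximum charge capacity to the noise, in dB and in equivalent bits."""
    ratio = full_well_electrons / noise_electrons
    return 20 * math.log10(ratio), math.log2(ratio)

# Purely illustrative values: 60,000 electrons capacity, 15 electrons noise
db, bits = dynamic_range(60_000, 15)
print(f"{db:.0f} dB, about {bits:.0f} useful bits")   # ~72 dB, ~12 bits
```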

Normal CCDs have a much higher general sensitivity than film, i.e. they need
less light. CCDs are sensitive to wavelengths up to 1100 nm, which is further into
IR than special IR-film. For a panchromatic image (B/W) one needs to block the
UV and IR light by a filter. For colour, i.e. multispectral images, one needs to
make three (or more) images, one for each colour or band. The image is taken
through filters to block those wavelengths that should not contribute. In most
colour CCDs, each pixel is subdivided into several sensors, one (or more) for
each colour or wavelength range, each sensor being a CCD-cell with its own
filter right on top of it. Three elongated cells next to each other, one for each colour, can together constitute one (square) colour pixel. Also four square cells can be used, where the fourth one might be used to enhance a weak colour, or for a fourth wavelength, such as IR.
Linear CCD arrays are often used in scanning devices, such as those used to convert hard-copy images into digital form, as well as in the multispectral scanners used in airborne and spaceborne remote sensing.

4.5 Spatial characteristics


Two important properties of an aerial photograph are scale and spatial resolution. These properties are determined by sensor (lens cone and film) and platform (flying height) characteristics. Lens cones are produced with different focal lengths.

4.5.1 Scale

The relationship between the photo scale factor, s, flying height, H, and lens focal length, f, is given by

s = H / f .    (4.1)

Hence, the same scale can be achieved with different combinations of focal length
and flying height. If the focal length of a lens is decreased, while the flying height
remains constant, then (also refer to Figure 4.6):
The image scale factor will increase and the size of the individual details in
the image becomes smaller. In the example shown in Figure 4.6, using a
150 mm and 300 mm lens at H =2000 m results in a scale factor of 13,333
and 6,666, respectively.
The ground coverage increases. A 23 cm negative covers a length (and width)
of respectively 3066 m and 1533 m using a 150 mm and 300 mm lens. This
has implications on the number of photos required for the mapping of a
certain area, which, in turn, affects the subsequent processing (in terms of
labour) of the photos.
The angular field of view increases and the image perspective changes. The total field of view in situations (a) and (b) is 74° and 41°, respectively. When wide-angle photography is used for mapping, the measurement of height information (z dimension) in a stereoscopic model is more accurate than when long focal length lenses are used. The combination of a low flying height with a wide-angle lens can be problematic when there are large terrain height differences or high man-made objects in the scene. Some areas
may become hidden behind taller objects. This phenomenon of occlusion is called the dead ground effect. This effect can be clearly seen in Figure 4.8.

Figure 4.6: Effects of using a different focal length (a: 150 mm, b: 300 mm) when operating a camera from the same height.
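The scale factors, ground coverage and fields of view quoted above for the 150 mm and 300 mm lenses at 2000 m flying height can be reproduced from Equation 4.1 and simple trigonometry; the Python sketch below (not part of the original text) assumes a 23 cm square negative:

```python
import math

NEGATIVE_MM = 230  # 23 cm negative (side length)

def scale_factor(flying_height_m: float, focal_length_mm: float) -> float:
    """Equation 4.1: s = H / f."""
    return flying_height_m / (focal_length_mm / 1000)

def ground_coverage_m(s: float) -> float:
    """Side length on the ground covered by one negative."""
    return s * NEGATIVE_MM / 1000

def field_of_view_deg(focal_length_mm: float) -> float:
    """Total angular field of view across the negative."""
    return math.degrees(2 * math.atan((NEGATIVE_MM / 2) / focal_length_mm))

for f in (150, 300):
    s = scale_factor(2000, f)
    print(f, round(s), round(ground_coverage_m(s)), round(field_of_view_deg(f)))
# 150 mm: scale ~13,333, coverage ~3,067 m, FOV ~75 degrees (74 in the text)
# 300 mm: scale ~6,667,  coverage ~1,533 m, FOV ~42 degrees (41 in the text)
```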

4.5.2 Spatial resolution

While scale is a generally understood and applied term, the use of spatial resolution in aerial photography is less straightforward. Spatial resolution refers to the ability to distinguish small adjacent objects in an image. The spatial resolution of monochrome aerial photographs ranges from 40 to 800 line pairs per mm. The better the resolution of a recording system, the more easily the structure of objects on the ground can be viewed in the image. The spatial resolution of an aerial photograph depends on:
the image scale factor: spatial resolution decreases as the scale factor increases,
the quality of the optical system: expensive high quality aerial lenses give much better performance than the inexpensive lenses on amateur cameras,
the grain structure of the photographic film: the larger the grains, the poorer the resolution,
the contrast of the original objects: the higher the target contrast, the better the resolution,
atmospheric scattering effects: this leads to loss of contrast and resolution,
image motion: the relative motion between the camera and the ground causes blurring and loss of resolution.
From the above list it can be concluded that the physical value of resolution
in an aerial photograph depends on a number of factors. The most variable
factor is the atmospheric condition, which can change from mission to mission,
and even during a mission.
Figure 4.7: Illustration of the effect of terrain topography on the relationship between A-B (on the ground) and a-b (on the photograph). Flat terrain (a), significant height difference (b).

4.6 Relief displacement


A characteristic of most sensor systems is the distortion of the geometric relationship between the image data and the terrain, caused by relief differences on the ground. This effect is most apparent in aerial photographs and airborne scanner data. The effect of relief displacement is illustrated in Figure 4.7. Consider the situation on the left, in which a true vertical aerial photograph is taken of flat terrain. The distances (A-B) on the ground and (a-b) on the negative are proportional to the total width of the scene and its image size on the negative, respectively. In the left hand situation, by using the scale factor, we can compute (A-B) from a measurement of (a-b) in the negative. In the right hand situation, there is a significant terrain relief difference. As you can now observe, the distance between a and b in the negative has become larger, although when measured in the terrain system it is still the same as in the left hand situation. This phenomenon does not occur in the centre of the photo but becomes increasingly prominent towards the edges of the photo. This effect is called relief displacement: terrain points whose elevation is above or below the reference elevation are displaced away from or towards the nadir point, A, the point on the ground directly beneath the sensor, respectively. The magnitude of displacement, Δr (mm), is approximated by:
Δr = (r × h) / H .    (4.2)

In this equation, r is the radial distance (mm) from the nadir, h (m) is the height
of the terrain above the reference plane, and H (m) is the flying height above the
reference plane (where nadir intersects the terrain). The equation shows that the
amount of relief displacement is zero at nadir (r = 0), greatest at the corners of
the photograph, and is inversely proportional to the flying height.
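Equation 4.2 translates directly into a small function; the numbers in the example below are assumed values, not taken from the text:

```python
def relief_displacement_mm(radial_distance_mm: float,
                           terrain_height_m: float,
                           flying_height_m: float) -> float:
    """Equation 4.2: displacement = r * h / H (zero at nadir, largest towards the edges)."""
    return radial_distance_mm * terrain_height_m / flying_height_m

# Assumed example: a point imaged 100 mm from the nadir point, 50 m above the
# reference plane, photographed from a flying height of 2000 m.
print(relief_displacement_mm(100, 50, 2000))   # 2.5 mm on the photograph
```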
Figure 4.8: Fragment of a large scale aerial photograph of the centre of Enschede. Note the effect of height displacement on the higher buildings.

In addition to relief displacement you can imagine that buildings and other
tall objects also can cause displacement (height displacement). This effect is, for
example, encountered when dealing with large scale photos of urban or forest
areas (Figure 4.8). In this chapter the subject of height displacement is not further
elaborated.
The main effect of relief displacement is that inaccurate or wrong coordinates
might be determined when, for example, digitizing from image data. Whether
relief displacement should be considered in the geometric processing of the image data depends on its impact on the required accuracy of the geometric information derived from the images. Relief displacement can be corrected for
if information on the terrain topography is available (in the form of a DTM).
The procedure is explained later in Section 9.3. However, it is also important to remember that it is relief displacement that allows us to view overlapping images, i.e. stereo-images, in 3D, and to extract 3D information from such data. For more on those concepts see Sections 9.3 and 11.2.3.

4.7 Aerial photography missions


Mission planning
When a mapping project requires aerial photographs, one
of the first tasks is to select the required photo scale factor, the type of lens to
be used, the type of film to be used and the required percentage of overlap for
stereo viewing. Forward overlap usually is around 60%, while sideways overlap
typically is around 20%. Figure 4.9 shows a survey area that is covered by a
number of flight lines. Furthermore, the date and time of acquisition should
be considered with respect to growing season, light conditions and shadowing
effects.
Figure 4.9: Example survey area for aerial photography. Note the forward overlap along a flight-line and the sideways overlap between flight lines.

If the required scale is defined, the following parameters can be determined:


the flying height required above the terrain,
the ground coverage of a single photograph,
the number of photos required along a flight line,
the number of flight lines required.
After completion of the necessary calculations, mission maps are prepared for
use by the survey navigator in flight.
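A minimal sketch of these mission calculations, combining Equation 4.1 with the standard 60% forward and 20% sideways overlap mentioned above; it assumes a square 23 cm negative and an example project area, and ignores practical margins at the block edges:

```python
import math

def mission_parameters(scale_factor: int, focal_length_mm: float,
                       area_width_km: float, area_length_km: float,
                       forward_overlap: float = 0.60, side_overlap: float = 0.20,
                       negative_cm: float = 23):
    flying_height_m = scale_factor * focal_length_mm / 1000   # from Equation 4.1, H = s * f
    coverage_km = scale_factor * negative_cm / 100 / 1000     # ground coverage of one photo
    base_km = coverage_km * (1 - forward_overlap)             # distance between exposures
    spacing_km = coverage_km * (1 - side_overlap)             # distance between flight lines
    photos_per_line = math.ceil(area_length_km / base_km) + 1
    flight_lines = math.ceil(area_width_km / spacing_km)
    return flying_height_m, photos_per_line, flight_lines

# Assumed example: 20 km x 25 km area, photo scale 1:18,000, 152 mm lens
print(mission_parameters(18_000, 152, 20, 25))   # (2736.0 m, 17 photos per line, 7 lines)
```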
Mission execution
The advent of satellite positioning systems, especially the GPS of the USA, has caused a tremendous change in survey flight missions. Today a computer program determines, after entering a number of relevant mission parameters and the area of interest, the (3D) coordinates of all positions from which photographs are to be taken; these are stored in a job database. On board, the crew can obtain all relevant information from that database, such as the project area, the camera and type of film to be used, the number of images, and constraints regarding time of day or Sun angle, season, and atmospheric conditions.
The list of camera positions is also loaded into a guidance system. The pilot is then guided along the flight lines such that the deviation from the ideal line (horizontal and vertical) and the time to the next exposure station are shown on a display (together with other relevant parameters). If the airplane passes close enough to a predetermined exposure station, the camera is fired automatically at the nearest position. This makes it possible to have the data of several projects on board and to choose a project (or project part) according to the local weather conditions. If necessary, one can also abandon a project and resume it later. The viewing system is still used to align the camera and for visual checks such as the downward visibility.

In the absence of GPS guidance, the navigator has to observe the terrain through the viewing system, check the flight lines against the planned ones (which are shown graphically on topographic maps), give the required corrections to the left or to the right to the pilot, and tune the overlap regulator to the apparent forward speed of the airplane.

4.8 Recent developments in aerial photography


The most significant improvements to standard aerial photography made during the last decade can be summarized as follows:
Global Navigation Satellite Systems (GPS-USA, Glonass-SU and the forthcoming Galileo-EU) provide a means of achieving accurate navigation.
They offer precise positioning of the aircraft along the survey run, ensuring
that the photographs are taken at the correct points. This method of navigation is especially important in survey areas where topographic maps do
not exist, are old, or are of small scale or poor quality. It is also helpful
in areas where the terrain has few features (deserts, forests, etc.) because
in these cases conventional visual navigation is particularly difficult. The
major aerial camera manufacturers (as well as some independent suppliers), now offer complete software packages that enable the flight crew to
plan, execute and evaluate an entire aerial survey mission.
The gyroscopically stabilized camera mounting enables the camera to be
maintained in an accurate level position so that it is continuously pointing
vertically downward. It compensates rapid oscillations of the aircraft and
reduces effects of aircraft and camera vibration and movement.
Forward motion compensation counteracts the displacement of the image projected on the film across the film during the time that the shutter is open (a numerical illustration is given after this list). Forward motion is potentially the most damaging of the effects that disturb the geometric quality of the photograph, and occurs as a direct result of the relative forward motion of the aircraft over the terrain during exposure. If the displacement is such that adjacent grains
become exposed, then the resolution of fine detail will be degraded. Forward motion compensation permits the use of (slower) finer grained films
and results in images of improved spatial resolution.
A recent significant development concerns the new type of digital cameras. Since the middle of the year 2000 the first examples of these modern sensors have been on the market. These include the Airborne Digital
Sensor (ADS) of LH Systems, and the Digital Modular Camera (DMC) of
Z/I Imaging. The sensors developed have characteristics that relate both
to a camera and to a multispectral scanner. Charge-coupled devices (CCD)
are used to record the electromagnetic energy. The design of these cameras
is such that it enables multispectral data acquisition and means that overlapping (multi-angle) images are taken along track, enabling direct generation of digital elevation models. Compared to films, CCD recording allows
a larger spectral domain and smaller wavelength bands to be defined. In
addition, the general sensitivity is higher and more flexible when compared to film-based systems. Another major advantage of a digital camera
is that the resulting data are already in digital format, which avoids the
need for scanning of the film or prints. One main disadvantage of the digital camera is that its spatial resolution is still somewhat lower than that
achieved by film-based systems.
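To see why forward motion compensation matters, the sketch below estimates the smear of the image on the film during a single exposure; the aircraft speed, exposure time and photo scale are assumed example values:

```python
def image_motion_micrometres(ground_speed_km_h: float,
                             exposure_time_s: float,
                             scale_factor: int) -> float:
    """Smear on the film = ground distance travelled during the exposure / scale factor."""
    ground_motion_m = ground_speed_km_h / 3.6 * exposure_time_s
    return ground_motion_m / scale_factor * 1e6

# Assumed example: 400 km/h, 1/250 s exposure, photo scale 1:18,000
print(image_motion_micrometres(400, 1 / 250, 18_000))
# ~25 micrometres, i.e. several times the grain size of a few micrometres
```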

Summary
This chapter provided an overview of aerial photography. First, the characteristics of oblique and vertical aerial photography were distinguished. Vertical
aerial photography requires a specially adapted aircraft. The main components
of an aerial camera system are the lens and film. The focal length of the lens, in
combination with the flying height, determines the photo scale factor. The film
type used determines which wavelengths bands are recorded. The most commonly used film types are panchromatic, black-and-white infrared, true-colour
and false-colour infrared. Other characteristics of film are the general sensitivity, which is related to the size of the grains, and spectral sensitivity, which is
related to the wavelengths that the film is sensitive to. After exposure, the film
is developed and printed. The printed photo can be scanned to use the photo in
a digital environment.
In digital cameras the CCD is used as an image recording device.
Relief displacement is the distortion of the geometric relationship between
the image data and the terrain, caused by relief differences on the ground.
There have been many technological developments to improve mission execution as well as the image quality itself. Most recent in this development is the digital camera, which directly yields digital data. The advent of GPS has enabled automatic flight planning and image acquisition.

Questions
The following questions can help you to study Chapter 4.
1. Consider an area of 500 km² that needs aerial photo coverage for topographic mapping at 1:50,000. Which specifications would you give on film, photo scale, overlap, etc.?
2. Go to the Internet and locate three catalogues (archives) of aerial photographs. Compare the descriptions and specifications of the photographs
(in terms of scale, resolution, format, . . . ).
The following are typical exam questions:
1. Calculate the scale factor for an aerial photo taken at 2500 m height by a
camera with a focal length of 88 mm.
2. Consider a (monochrome) black-and-white film. What determines the general sensitivity of this film and why is it important?

3. Explain spectral sensitivity.


4. A hard copy aerial photograph is to be scanned using a flat bed scanner.
List three factors that influence the choice of the scanner resolution setting,
and explain their significance.
5. Make a drawing to explain the dead ground effect.

Chapter 5
Multispectral scanners

5.1 Introduction
Multispectral scanners measure reflected electromagnetic energy by scanning
the Earths surface. This results in digital image data, of which the elementary unit is a picture element, a pixel. As the name multispectral suggests, the
measurements are made for different ranges of the EM spectrum. Multispectral
scanners have been used in remote sensing since 1972 when the first Landsat
satellite was launched. After the aerial camera it is the most commonly used
sensor. Applications of multispectral scanner data are mainly in the mapping of
land cover, vegetation, surface mineralogy and surface water.
Two types of multispectral scanners can be distinguished: the whiskbroom scanner and the pushbroom sensor. The principles of these scanners and their characteristics are explained in Section 5.2 and Section 5.3, respectively. Multispectral scanners are mounted on airborne and spaceborne platforms. Section 5.4 describes the most widely used satellite-based scanners.

5.2 Whiskbroom scanner


A combination of a single detector plus a rotating mirror can be arranged in such
a way that the detector beam sweeps in a straight line over the Earth across the
track of the satellite at each rotation of the mirror (Figure 5.1). In this way, the
Earth's surface is scanned systematically line by line as the satellite moves forward. Because of this sweeping motion, the whiskbroom scanner is also known as
the across-track scanner. The first multispectral scanners applied the whiskbroom
principle. Today, many scanners are still based on this principle (platform/sensor), such as NOAA/AVHRR and Landsat/TM.

Figure 5.1: Principle of the whiskbroom scanner.


5.2.1 Spectral characteristics of a whiskbroom

Whiskbroom scanners use solid state detectors (i.e. made of semi-conducting material) for measuring the energy transferred by the optical system to the sensor.
This optical system focusses the incoming radiation at the surface of the detector.
Various techniques, using prisms or gratings, are applied to split the incoming
radiation into spectral components that each have their own detector.
The detector transforms the electromagnetic radiation (photons) into electrons. The electrons are input to an electronic device that quantifies the level of
energy into the required units. In digital imaging systems, a discrete value is
used to store the level of energy. These discrete levels are referred to as Digital Number values, or DN-values. The fact that the input is measured in discrete levels is also referred to as quantization. Using the expression introduced
in Chapter 2, Equation 2.2, one can calculate the amount of energy of a photon
corresponding to a specific wavelength, using
Q = h × ν,    (5.1)

where Q is the energy of a photon (J), h is Planck's constant, 6.6260693 × 10⁻³⁴ J s, and ν is the frequency (Hz). The solid state detector measures the amount of energy (J) during a specific time period, which results in J/s, or W (watt).
The range of input radiance, between a minimum and a maximum level, that
a detector can handle is called the dynamic range. This range is converted into the
range of a specified data format. Typically an 8-bit, 10-bit or 12-bit data format
is used. The 8-bit format allows 2⁸ = 256 levels or DN-values. Similarly, the
12-bit format allows 2¹² = 4096 distinct DN-values. The smallest difference in
input level that can be distinguished is called the radiometric resolution. Consider
a dynamic range of energy between 0.5 W and 3 W. Using 100 or 250 DN-values
results in a radiometric resolution of 25 mW and 10 mW respectively.
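The two calculations above can be checked with a short numerical sketch (Python, illustrative only). The dynamic-range figures are the ones used in the text; the wavelength of 0.6 μm is an assumed example value.

    # Photon energy (Equation 5.1) for an assumed wavelength of 0.6 um.
    h = 6.6260693e-34            # Planck's constant (J s)
    c = 2.99792458e8             # speed of light (m/s)
    wavelength = 0.6e-6          # assumed example wavelength (m)
    nu = c / wavelength          # frequency (Hz)
    Q = h * nu                   # energy of one photon (J)
    print(f"photon energy: {Q:.2e} J")

    # Radiometric resolution: dynamic range divided by the number of DN levels.
    dyn_min, dyn_max = 0.5, 3.0  # dynamic range of the detector (W), as in the text
    for levels in (100, 250, 256, 4096):
        step = (dyn_max - dyn_min) / levels
        print(f"{levels:5d} levels -> {step * 1e3:.2f} mW per DN")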


The other main characteristic of a detector is the spectral sensitivity, which is
similar to film sensitivity, as explained in Section 4.3.2. Each detector has a characteristic graph that is called the spectral response curve (Figure 5.2). The bandwidth is usually determined by the difference between the two wavelengths
where the curve is at 50% of its maximum value. A multispectral scanner uses
a number of detectors to measure a number of bands, each band having its own
detector.
Figure 5.2: Normalized (maximum is 100%) spectral response curve of a specific sensor. It shows that the setting of this band (band 1 of the NOAA-14 AVHRR sensor) ranges from approximately 570 nm to 710 nm. (The curve plots normalized spectral response in % against wavelength in nm.)


5.2.2 Geometric characteristics of a whiskbroom

At any instant, via the mirror system, the detector of the whiskbroom scanner
observes a circular area on the ground. Directly below the platform (at nadir),
the diameter, D, depends on the opening angle of a single detector, β, and the
flying height, H:

D = β × H.    (5.2)

D and H should be expressed in metres, β in radians. Consider a scanner with
β = 2.5 mrad (milliradian) that is operated at 4000 m. Using Formula 5.2 one
can easily calculate that the diameter of the area observed under the platform is
10 m.
The angle of the detector (β) is also referred to as the Instantaneous Field of View,
abbreviated as IFOV. The IFOV determines the spatial resolution of a scanner.
The Field of View (FOV) describes the total angle that is scanned. The FOV can
be used to determine the swath width of the image, expressed as the width of the
scanned line on the ground.
Consider a single line scanned by a whiskbroom scanner mounted to a static
platform. This results in a series of measurements along the line. The value
for a single pixel is obtained during a certain time interval that is available for
the measurement. This time interval needed for one measurement is called the
dwell time or integration time. To obtain images of a higher spatial resolution
the available dwell time is reduced, simply because the sensor must measure
more pixels per second. This poses a limit to the spatial resolution that can be
obtained with a scanning sensor.
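A small worked sketch of this geometry (Python, illustrative only). It reproduces the β = 2.5 mrad example from the text and, as an additional assumption not made in the book, also derives a swath width from an assumed total FOV of 90° using simple flat-terrain geometry.

    import math

    H = 4000.0       # flying height (m), as in the example above
    beta = 2.5e-3    # IFOV of a single detector (rad), i.e. 2.5 mrad

    D = beta * H     # ground diameter at nadir, Equation 5.2
    print(f"diameter observed at nadir: {D:.1f} m")                # 10.0 m

    # Swath width from the total FOV (flat-terrain approximation, assumed FOV).
    fov = math.radians(90.0)
    swath = 2.0 * H * math.tan(fov / 2.0)
    print(f"swath width for a 90 degree FOV: {swath / 1000:.1f} km")   # 8.0 km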


5.3 Pushbroom sensor


The pushbroom sensor is based on the use of charge-coupled devices (CCDs) for
measuring the electromagnetic energy (Figure 5.3). A CCD-array is a line of
photo-sensitive, solid state detectors. A single element can be as small as 5 μm.
Today, two-dimensional CCD-arrays are used in digital cameras and video recorders. The CCD-arrays used in remote sensing are more sensitive and have
larger dimensions. The first satellite sensor using this technology was SPOT-1
HRV. High resolution sensors such as IKONOS and OrbView-3 also apply the
pushbroom principle.
The pushbroom sensor records one entire line at a time. The principal advantage over the whiskbroom scanner is that each position (pixel) in the line has its
own detector. This enables a longer period of measurement over a certain area,
resulting in less noise and a relatively stable geometry. Since the CCD-array
builds up images by scanning entire lines along the direction of motion of the
platform, the pushbroom sensor is also referred to as along-track scanner.
In pushbroom sensors there is no need for a scanning mirror, and therefore
they have a higher reliability and a longer life expectancy than whiskbroom
scanners. Because of this, together with the excellent geometrical properties,
pushbroom sensors are used extensively in satellite remote sensing.


Figure 5.3: Principle of a pushbroom sensor.


5.3.1 Spectral characteristics of a pushbroom

To a large extent, the characteristics of the detectors of whiskbroom scanners


are also valid for a CCD-array. In principle, one CCD-array corresponds to a
spectral band and all the detectors in the array are sensitive to a specific range
of wavelengths. With current state-of-the-art technology, CCD-array sensitivity
stops at 2.5 μm wavelength. If longer wavelengths are to be measured, other
detectors (whiskbroom) need to be used.
All the individual detectors of the CCD-array have their own characteristics,
which is a typical problem on CCD sensors due to variability in manufacture.
Furthermore, during the mission life of the satellite, the detectors may show
varying degrees of degradation. Each detector element must therefore be calibrated regularly. Differences between the detectors of linear CCD-arrays may
be visible in the recorded images as vertical banding.


5.3.2 Geometric characteristics of a pushbroom

For each single line, pushbroom sensors have a geometry similar to that of aerial
photos, which have a central projection. Because of the central projection, images from pushbroom sensors exhibit fewer geometric distortions than images
from whiskbroom scanners. In the case of flat terrain, and a limited total field
of view (FOV), the scale is the same over the line, resulting in equally spaced
pixels. The concept of IFOV cannot be applied to pushbroom sensors.
Most pushbroom sensors have the ability for off-nadir viewing. In such a situation, the scanner can be pointed towards areas left or right of the orbit track
(across-track), or fore or aft (along-track). This characteristic has a number of
advantages. Firstly, it can be used to observe areas that are not at nadir of the
satellite, which reduces the time between successive observations (revisit time).
Secondly, it can be used to image an area that is not covered by clouds at that
particular moment. And lastly, off-nadir viewing is used to produce stereo images.
The production of a stereo image pair using across-track stereo viewing needs
a second image taken from a different track. When using along-track stereo
viewing, the second image can be taken in quick succession of the first image by
the same sensor along the same track. This means that the images are taken at
almost the same time and under the same conditions, such as season, weather,
and plant phenology.
When applying off-nadir viewing, similar to oblique photography, the scale
in an image varies and should be corrected for.
As with whiskbroom scanners, an integration over time takes place in pushbroom sensors. Consider a moving platform with a pushbroom sensor. Each
element of the CCD-array measures the energy related to a small area below the
platform. At 10 m spatial resolution and 6.5 km/s ground speed, every
1.5 milliseconds (10 m / 6.5 km/s) the recorded energy (W) is measured to determine the DN-values for all the pixels along the line.
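The 1.5 millisecond figure is simply the along-track pixel size divided by the ground speed; a minimal sketch (Python, illustrative only):

    pixel_size = 10.0       # along-track ground sample distance (m)
    ground_speed = 6.5e3    # ground-track speed of the platform (m/s)

    integration_time = pixel_size / ground_speed    # time available per image line (s)
    print(f"integration time per line: {integration_time * 1e3:.2f} ms")   # about 1.5 ms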


5.4 Some operational Earth observation systems


This section gives some details about specific spaceborne missions that carry
sensors for Earth observation and describes some of their applications. The systems are grouped into the following categories:
- Low-resolution systems, with a spatial resolution of 1 km to 5 km.
- Medium-resolution systems, with a spatial resolution between 10 m and 100 m.
- High-resolution systems, with a spatial resolution better than 10 m.
- Imaging spectrometry systems, with a high spectral resolution.
In addition, an example of a large multi-instrument system is given, carrying an
active microwave sensor. Radar sensors are discussed in Chapter 6.
It should be noted that the distinction between low, medium and high-resolution systems is quite arbitrary, and other groupings are possible. Furthermore,
many satellites carry more than one sensor. If this is the case, then the satellite is
described with what is considered its most important sensor. Additional sensors
may be listed in the text of the particular satellite. No attempt was made to make
this a complete list. For a complete and up-to-date list refer to the ITC's Database
of Satellites and Sensors.


5.4.1 Low-resolution systems



Meteosat-8
Meteosat is a geostationary satellite that is used in the World Meteorological
Organization's (WMO) space programme. The total programme comprises between 30 and 40 polar-orbiting and geostationary satellites. The first Meteosat
satellite was placed in orbit in 1977. Meteosat satellites are owned by the European organisation Eumetsat. Meteosat-8, launched in August 2002, also called
MSG-1, is the first of the Meteosat Second Generation (MSG) satellites and, compared to the first generation, offers considerable improvements in spatial, spectral, radiometric and temporal resolution. At this moment, Meteosat-8 is operational with Meteosat-7 as a back-up.
Table 5.1: Meteosat-8 SEVIRI characteristics.
System: Meteosat-8
Orbit: Geo-stationary, 0° longitude
Sensor: SEVIRI (Spinning Enhanced VIS and IR Imager)
Swath width: Full Earth disc (FOV = 18°)
Off-nadir viewing: Not applicable
Revisit time: 15 minutes
Spectral bands (μm): 0.5–0.9 (PAN), 0.6, 0.8 (VIS), 1.6, 3.9 (IR), 6.2, 7.3 (WV), 8.7, 9.7, 10.8, 12.0, 13.4 (TIR)
Ground pixel size: 1 km (PAN), 3 km (all other bands)
Data archive at: www.eumetsat.de

The spectral bands of the SEVIRI sensor (Table 5.1) are chosen for observing phenomena that are relevant to meteorologists: a panchromatic band (PAN),
mid-infrared bands which give information about the water vapour (WV) present

in the atmosphere, and thermal bands (TIR). In case of clouds, the thermal data
relate to the cloud top temperature, which is used for rainfall estimates and forecasts. Under cloud-free conditions the thermal data relate to the surface temperature of land and sea and are used to detect thermal anomalies, such as forest
fires or volcanic activity.



NOAA-17
NOAA stands for National Oceanic and Atmospheric Administration, which is
a US-government body. The sensor onboard NOAA missions that is relevant for
Earth Observation is the Advanced Very High Resolution Radiometer (AVHRR).
Today, two NOAA satellites (NOAA-16, 17) are operational, with three others
(NOAA-12, 14 and 15) standing by as a backup.
Table 5.2: NOAA-17 AVHRR characteristics.
System: NOAA-17
Orbit: 812 km, 98.7° inclination, sun-synchronous
Sensor: AVHRR-3 (Advanced Very High Resolution Radiometer)
Swath width: 2800 km (FOV = 110°)
Off-nadir viewing: No
Revisit time: 2–14 times per day, depending on latitude
Spectral bands (μm): 0.58–0.68 (1), 0.73–1.00 (2), 1.58–1.64 (3A day), 3.55–3.93 (3B night), 10.3–11.3 (4), 11.5–12.5 (5)
Spatial resolution: 1 km × 1 km (at nadir), 6 km × 2 km (at limb), IFOV = 1.4 mrad
Data archive at: www.saa.noaa.gov

As the AVHRR sensor (Table 5.2) has a very wide FOV (110°) and is at a large
distance from the Earth, the whiskbroom principle causes a large difference in
the ground cell measured within one scanline (Figure 5.4). The standard image
data products of AVHRR are resampled to image data with equally sized ground
pixels.
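The growth of the ground cell towards the swath edge (Figure 5.4, below) can be approximated with standard whiskbroom geometry: for a scan angle θ off nadir the across-track cell size grows roughly with 1/cos²θ and the along-track size with 1/cos θ. The sketch below (Python, illustrative only) uses the orbital height and IFOV from Table 5.2 and deliberately ignores Earth curvature, which is why it underestimates the cell size at the extreme edge of such a wide swath.

    import math

    H = 812e3      # orbital height (m), from Table 5.2
    ifov = 1.4e-3  # AVHRR IFOV (rad), from Table 5.2

    for theta_deg in (0, 15, 30, 45, 55):
        theta = math.radians(theta_deg)
        across = H * ifov / math.cos(theta) ** 2   # across-track cell size (m)
        along = H * ifov / math.cos(theta)         # along-track cell size (m)
        print(f"scan angle {theta_deg:2d} deg: "
              f"{across / 1e3:.1f} km x {along / 1e3:.1f} km")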

Figure 5.4: The NOAA AVHRR sensor observes an area of 1.1 km by 1.1 km at the centre and 6.1 km by 2.3 km at the edge, due to the extreme field of view of the sensor. The solid line shows the across-track resolution, the dashed line the along-track resolution; the ellipses show the shape of the ground cells along a scanned line (pixel size in km plotted against elevation angle in degrees).

AVHRR data are used primarily in day-to-day meteorological forecasting


where they give more detailed information than Meteosat. In addition, there
are many land and water applications.
AVHRR data are used to generate Sea Surface Temperature maps (SST maps), which can be used in climate monitoring, the study of El Niño, the detection of eddies to guide vessels to rich fishing grounds, etc. Cloud cover maps based
on AVHRR data are used for rainfall estimates, which can be input into crop
growing models. Another derived product of AVHRR data is the Normalized
Difference Vegetation Index map (NDVI). These maps give an indication of
the quantity of biomass (t/ha) and the health of the vegetation. NDVI data are
used as input into crop growth models and also for climate change models. The
NDVI data are, for instance, used by FAO, the Food and Agriculture Organization of the United Nations, in their Food-security Early Warning System (FEWS).



AVHRR data are appropriate to map and monitor regional land cover, and to assess the energy balance of agricultural areas.
Another application of Meteosat and NOAA data is the tracking of cloud-free
areas to optimize the data acquisition of high resolution satellites.
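The NDVI mentioned above is computed per pixel as (NIR - Red) / (NIR + Red). A minimal sketch (Python with NumPy, illustrative only; the reflectance values are invented):

    import numpy as np

    # Invented red and near-infrared reflectance values for a 2 x 2 pixel patch.
    red = np.array([[0.08, 0.10],
                    [0.30, 0.25]])
    nir = np.array([[0.45, 0.50],
                    [0.32, 0.28]])

    # NDVI = (NIR - Red) / (NIR + Red); the clip guards against division by zero.
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    print(ndvi)    # vegetated pixels approach +0.7, bare surfaces stay near 0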


5.4.2 Medium-resolution systems



Table 5.3: Landsat-7 ETM+ characteristics.
System: Landsat-7
Orbit: 705 km, 98.2° inclination, sun-synchronous, 10:00 AM crossing, 16-day repeat cycle
Sensor: ETM+ (Enhanced Thematic Mapper)
Swath width: 185 km (FOV = 15°)
Off-nadir viewing: No
Revisit time: 16 days
Spectral bands (μm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 1.55–1.75 (5), 10.4–12.50 (6), 2.08–2.34 (7), 0.50–0.90 (PAN)
Spatial resolution: 15 m (PAN), 30 m (bands 1–5, 7), 60 m (band 6)
Data archives at: earthexplorer.usgs.gov, edcimswww.cr.usgs.gov/imswelcome

Landsat-7
The Landsat programme is the oldest civil Earth Observation programme. It
started in 1972 with the Landsat-1 satellite carrying the MSS multispectral sensor. In 1982, the Thematic Mapper (TM) replaced the MSS sensor. Both MSS
and TM are whiskbroom scanners. In April 1999 Landsat-7 was launched carrying the ETM+ scanner (Table 5.3). Today, only Landsat-5 and -7 are operational.
On its 20th anniversary (March 1st, 2004) Landsat-5 was still generating valuable
data.
There are many applications of Landsat TM data in land-cover mapping,
land-use mapping, soil mapping, geological mapping, sea-surface temperature
mapping, etc. For land-cover and land-use mapping, Landsat TM data are preferred, e.g. over SPOT multispectral data, because of the inclusion of middle
infrared bands. Landsat TM is one of the few non-meteorological satellites that
have a thermal infrared band. Thermal data are required to study energy processes at the Earth's surface, such as temperature variability within irrigated areas due to differences in soil moisture. Table 5.4 lists some example applications
of the various TM bands.



Table 5.4: Example applications of the Landsat-7 ETM+ bands (after [23]).
Band 1, 0.45–0.52 μm (Blue): coastal water mapping (bathymetry & quality), ocean phytoplankton & sediment mapping, atmospheric pollution & haze detection
Band 2, 0.52–0.60 μm (Green): chlorophyll reflectance peak, vegetation species mapping, vegetation stress
Band 3, 0.63–0.69 μm (Red): chlorophyll absorption, plant species differentiation, biomass content
Band 4, 0.76–0.90 μm (NIR): vegetation species & stress, biomass content, soil moisture
Band 5, 1.55–1.75 μm (SWIR): vegetation-soil delineation, urban area mapping, snow-cloud differentiation
Band 6, 10.4–12.5 μm (TIR): vegetation stress analysis, soil moisture & evapotranspiration mapping, surface temperature mapping
Band 7, 2.08–2.35 μm (SWIR): geology (mineral and rock type mapping), water-body delineation, vegetation moisture content mapping
Band 8, 0.50–0.90 μm (15-m PAN): medium-scale topographic mapping, image sharpening, snow-cover classification



Terra
EOS (Earth Observing System) is the centerpiece of NASA's Earth Science mission. The EOS AM-1 satellite, later renamed Terra, is the flagship of the fleet and
was launched in December 1999. It carries five remote sensing instruments, including the Moderate Resolution Imaging Spectroradiometer (MODIS) and the
Advanced Spaceborne Thermal Emission and Reflectance Radiometer (ASTER).
The ASTER instrument (Table 5.5) is designed with three bands in the visible
and near-infrared spectral range with a 15 m resolution, six bands in the shortwave infrared with a 30 m resolution, and five bands in the thermal infrared
with a 90 m resolution. The VNIR and SWIR bands have a spectral bandwidth
in the order of 10 nm. ASTER consists of three separate telescope systems, each
of which can be pointed at selected targets. The near-infrared bands (3N and
3B) can generate along-track stereo image pairs with a large intersection angle
of 27.7°, which means they can be used to generate high-quality digital elevation models. The swath width of the image is 60 km and the revisit time is about
5 days for the VNIR channels.
MODIS observes the entire surface of the Earth every 1–2 days with a whiskbroom scanning imaging radiometer. Its wide field of view (2300 km) provides
daylight images of reflected solar radiation, and day-and-night thermal emissions over the entire globe. Its spatial resolution ranges from 250 m in the visible
bands to 1000 m in the thermal bands. MODIS has 36 narrow spectral bands,
from which a broad range of standard data products are generated, ranging
from atmospheric aerosol concentration, land and sea surface temperature, vegetation indices, thermal anomalies, snow and ice cover, to ocean chlorophyll and
organic matter concentration products.
Terra is revolutionary in that users will receive much more than the raw
satellite data. Until Terra, most satellite data has been available as raw digital
numbers. Imagery from the Terra sensors is easy to browse, purchase and
download via the internet. The data are distributed worldwide at the cost of
reproduction. With more bands, a better resolution and a low price, the ASTER
images are an excellent replacement for commercial Landsat images. MODIS is a
global-scale, multi-spectral instrument useful for addressing questions in many
scientific disciplines.
In addition to the Terra mission, NASA has developed two other major
satellites, called Aqua and Aura. While Terra is mainly focused on land applications, Aqua focuses on water, and Aura on trace gases in the atmosphere
(specifically ozone). Aqua was launched in May 2002. Aura was successfully
put in orbit in July 2004. Together, the satellites form a strong combination for
Earth observation, complementing each other with their data. The satellites are
the backbone of what NASA calls its Earth Science Enterprise (ESE).


Table 5.5: Terra ASTER characteristics.
System: Terra
Orbit: 705 km, 98.2° inclination, sun-synchronous, 10:30 AM crossing, 16-day repeat cycle
Sensor: ASTER
Swath width: 60 km
Off-nadir viewing: across-track 8.5° (SWIR and TIR), 24° (VNIR); along-track 27.7° backwards (band 3B)
Revisit time: 5 days (VNIR)
Spectral bands (μm): VIS (bands 1–2) 0.56, 0.66; NIR 0.81 (3N nadir and 3B backward 27.7°); SWIR (bands 4–9) 1.65, 2.17, 2.21, 2.26, 2.33, 2.40; TIR (bands 10–14) 8.3, 8.65, 9.10, 10.6, 11.3
Spatial resolution: 15 m (VNIR), 30 m (SWIR), 90 m (TIR)
Data archives at: terra.nasa.gov, edcimswww.cr.usgs.gov/imswelcome


5.4.3 High-resolution systems
Table 5.6: SPOT-5 HRG characteristics.
System: SPOT-5
Orbit: 822 km, 98.7° inclination, sun-synchronous, 10:30 AM crossing, 26-day repeat cycle
Sensor: 2 HRG (High Resolution Geometric) and HRS (High Resolution Stereoscopic)
Swath width: 60 km
Off-nadir viewing: 31° across-track
Revisit time: 2–3 days (depending on latitude)
Spectral bands (μm): 0.50–0.59 (Green), 0.61–0.68 (Red), 0.78–0.89 (NIR), 1.58–1.75 (SWIR), 0.48–0.70 (PAN)
Spatial resolution: 10 m, 5 m (PAN)
Data archives at: sirius.spotimage.fr; www.vgt.vito.be (free VEGETATION data, older than 3 months)



SPOT-5
SPOT stands for Système Pour l'Observation de la Terre. The SPOT satellites
are owned by a consortium of the French, Swedish and Belgian governments.
SPOT-1 was launched in 1986. It was the first operational pushbroom CCD sensor with across-track viewing capability to be put into space. At that time, the
10 m panchromatic spatial resolution was unprecedented in civil remote sensing. In March 1998 a significantly improved SPOT-4 was launched. Its HRVIR
sensor has 4 instead of 3 bands, and the VEGETATION instrument was added.
VEGETATION was designed for frequent (1–2 days revisit time) and accurate
monitoring of the globe's landmasses at 1 km resolution. Some VEGETATION
data is freely available from VITO, Belgium. SPOT-5 was launched in May 2002
and has further improved spatial resolutions (Table 5.6).



Resourcesat-1
India puts much effort into remote sensing and has many operational missions
and missions under development. The most important Earth Observation programme is the Indian Remote Sensing (IRS) programme. Launched in 1995
and 1997, two identical satellites, IRS-1C and IRS-1D, can deliver image data
at high revisit times. IRS-1C and IRS-1D carry three sensors, the Wide Field
Sensor (WiFS) designed for regional vegetation mapping, the Linear Imaging
Self-Scanning Sensor 3 (LISS3), which yields multispectral data in four bands
with a spatial resolution of 24 m, and the PAN with a high spatial resolution of
5.8 m. For a number of years, up to the launch of Ikonos in September 1999, the
IRS-1C and -1D were the civilian satellites with the highest spatial resolution.
Applications are similar to those of SPOT and Landsat.
In 2003, Resourcesat-1 was launched. Resourcesat-1 is the most advanced
satellite built by the Indian Space Research Organisation (ISRO), India, bringing
continuity to the current IRS-1C and 1D programs. Resourcesat-1 carries three
sensors (LISS4, LISS3, AWiFS) that deliver an array of spectral bands and resolutions ranging from 6 m to 70 m. In addition, Resourcesat-1 has 60 Gigabits of
onboard memory that allows for out-of-contact imaging. In Table 5.7, the characteristics of the LISS4 sensor are given.

Table 5.7: Resourcesat-1 LISS4 characteristics.
System: Resourcesat-1
Orbit: 817 km, 98.8° inclination, sun-synchronous, 10:30 AM crossing, 24-day repeat cycle
Sensor: LISS4
Swath width: 70 km
Off-nadir viewing: 20° across-track
Revisit time: 5–24 days
Spectral bands (μm): 0.56, 0.65, 0.80
Spatial resolution: 6 m
Data archive at: www.spaceimaging.com

Resourcesat-1 (IRS-P6), which was meant for agricultural applications, is part of India's ongoing remote sensing programme, which consists of many satellites for different applications:

- Oceansat-1 (IRS-P4) was launched in May 1999 to study physical and biological aspects of oceanography.

- The Technology Experiment Satellite (TES) was launched in October 2001. The satellite was intended to demonstrate and validate technologies that could be used in the future cartographic satellite missions of ISRO. TES carries a panchromatic camera with a spatial resolution of 1 m.
- Cartosat-1 (IRS-P5) is planned for launch in 2004–2005 and is intended for advanced mapping applications. It will have two panchromatic cameras with a spatial resolution of 2.5 m and a swath of 30 km each.

- Cartosat-2 will be an advanced remote sensing satellite with a single panchromatic camera capable of providing scene-specific spot images for cartographic applications. The panchromatic camera is designed to provide images with better than one-metre spatial resolution and a swath of 10 km. It is planned for launch in 2005–2006.

- Furthermore, a Radar Imaging Satellite (RISAT) has been taken up for development. The satellite will involve the development of a multi-mode, multi-polarization Synthetic Aperture Radar (SAR), operating in C-band
and providing a 3 m to 50 m spatial resolution. RISAT is expected to be launched in 2006–2007.



Ikonos
Ikonos was the first commercial high resolution satellite to be placed into orbit.
Ikonos is owned by SpaceImaging, a USA-based Earth observation company.
The other commercial high resolution satellites are OrbView-3 (launched in 2003,
owned by OrbImage), Quickbird (launched in 2001, owned by EarthWatch), and
EROS-A1 (launched in 2000, owned by West Indian Space). Ikonos was launched
in September 1999 and regular data ordering has been taking place since March
2000.
The OSA sensor onboard Ikonos (Table 5.8) is based on the pushbroom principle and can simultaneously take panchromatic and multispectral images. In
addition to the high spatial resolution of 1 m panchromatic and 4 m multispectral, it also has a high radiometric resolution using 11-bit quantization.
Table 5.8: Ikonos OSA characteristics.
System: Ikonos
Orbit: 681 km, 98.2° inclination, sun-synchronous, 10:30 AM crossing, 14-day repeat cycle
Sensor: Optical Sensor Assembly (OSA)
Swath width: 11 km
Off-nadir viewing: 50° omnidirectional
Revisit time: 1–3 days
Spectral bands (μm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 0.45–0.90 (PAN)
Spatial resolution: 1 m (PAN), 4 m (bands 1–4)
Data archive at: www.spaceimaging.com

It is expected that, in the long term, 50% of the aerial photography will be
replaced by high resolution imagery from space, and that digital airborne cameras
will largely replace the remaining aerial photography. One of the first tasks of
Ikonos was to acquire imagery of all major USA cities. Previously, the mapping
and monitoring of urban areas from space, not only in America, was possible
only to a limited extent.
Ikonos data can be used for small to medium scale topographic mapping, not
only to produce new maps, but also to update existing topographic maps.
Another application is what is called precision agriculture. This is reflected
in the multispectral band setting, which includes a near-infrared band. Regular
updates of the situation on the field can help farmers to optimize the use of
fertilizers and herbicides.


5.4.4 Imaging spectrometry, or hyperspectral systems

Due to the limited success and availability of imaging spectrometry (or hyperspectral) systems, the Earth Observing-1 (EO-1), which is in fact a multi-instrument satellite, is described first as an example of an imaging spectrometry system. ESA's micro-satellite Proba is discussed thereafter.
Table 5.9: EO-1 Hyperion characteristics.
System: EO-1
Orbit: 705 km, 98.7° inclination, sun-synchronous, 10:30 AM crossing, 16-day repeat cycle
Sensor: Hyperion
Swath width: 7.5 km
Off-nadir viewing: No
Revisit time: 16 days
Spectral bands: 220 bands, covering 0.4 μm to 2.5 μm
Spatial resolution: 30 m
Data archive at: eo1.gsfc.nasa.gov



EO-1
The Earth Observing-1 (EO-1) mission is part of the NASA New Millennium
Program and is focused on new sensor and spacecraft technologies that can directly reduce the cost of Landsat and related Earth monitoring systems. EO-1
was launched in 2000 for a mission of 1 year. The satellite is on an extended
mission that is expected to run at least until the end of 2004.
The EO-1 satellite is in an orbit that covers the same ground track as Landsat 7, approximately one minute later. This enables EO-1 to obtain images of
the same ground area at nearly the same time, so that direct comparison of results can be obtained from Landsat ETM+ and the three primary EO-1 instruments. The three primary instruments on the EO-1 spacecraft are the Hyperion,
the LEISA Atmospheric Corrector (LAC), and the Advanced Land Imager (ALI).
LEISA is the Linear Etalon Imaging Spectrometer Array.
Hyperion is a 220-band imaging spectrometer with a 30 m ground sample
distance over a 7.5 km swath, providing 10 nm (sampling interval) contiguous
bands of the solar reflected spectrum from 400 nm to 2500 nm. Hyperion is the
first spaceborne imaging spectrometer for civil Earth observation. The specifications of EO-1/Hyperion are summarized in Table 5.9.
LAC is an imaging spectrometer covering the spectral range from 900 nm to
1600 nm which is well-suited to monitor the atmospheric water absorption lines
for correction of atmospheric effects in multispectral imagers such as ETM+ on
Landsat.
The EO-1 Advanced Land Imager (ALI) is a technology verification instrument. Operating in a pushbroom fashion at an orbit of 705 km, the ALI provides Landsat-type panchromatic and multispectral bands. These bands have
been designed to mimic six Landsat bands with three additional bands covering
0.433 μm to 0.453 μm, 0.845 μm to 0.890 μm, and 1.20 μm to 1.30 μm.

Table 5.10: Proba CHRIS characteristics.
System: Proba micro-satellite (94 kg)
Orbit: 615 km, 97.9° inclination, sun-synchronous, 10:30 AM crossing, 7-day repeat cycle
Sensor: CHRIS (Compact High-Resolution Imaging Spectrometer)
Swath width: 14 km
Off-nadir viewing: along-track 55°, across-track 36°
Revisit time: less than 1 week, typically 2–3 days
Spectral bands: 19 or 63 bands, 410 nm to 1050 nm
Spatial resolution: 18 m (full spatial resolution), 36 m (full spectral resolution)
Data archive at: www.chris-proba.org.uk



Proba/CHRIS
ESA's micro-satellite Proba (Project for On-Board Autonomy), launched in October 2001, is a technology experiment to demonstrate the onboard autonomy of a
generic platform suitable for small scientific or application missions. Proba carries several instruments. One of these is CHRIS, the Compact High Resolution
Imaging Spectrometer (Table 5.10). CHRIS is used to measure directional spectral reflectance of land areas, thus providing new biophysical and biochemical
data, and information on land surfaces.
It appears that the reflectance of most materials depends on the angle of incidence, the angle of reflectance, and the two azimuthal angles. This dependency
is captured by what is called the Bidirectional Reflectance Distribution Function
(BRDF), which fully describes the directional dependency of the reflected energy. Measuring the BRDF may give us clues about the characteristics of the
observed materials or objects.
In addition to CHRIS, Proba carries a panchromatic High-Resolution Camera
(HRC) with a spatial resolution of 5 m.
Proba was designed to be a one-year technology demonstration mission, but
has since had its lifetime extended as an Earth Observation mission. A follow-on
mission, Proba-2, is scheduled for launch in 2006.


5.4.5 Example of a large multi-instrument system


Table 5.11: Instruments of Envisat-1 listed with their applications. The table relates the instruments (AATSR, ASAR, DORIS, GOMOS, LR, MERIS, MIPAS, MWR, RA-2 and SCIAMACHY) to the parameters they observe: atmosphere (clouds, humidity, radiative fluxes, temperature, trace gases, aerosols), land (surface temperature, vegetation characteristics, surface elevation), ocean (ocean colour, sea surface temperature, surface topography, turbidity, wave characteristics, marine geoid) and ice (extent, snow cover, topography, temperature).



Envisat-1
Envisat-1 is the most advanced satellite ever built by ESA. It was launched in
2002 to provide measurements of the atmosphere, ocean, land and ice. Envisat-1
is a large satellite (8200 kg, in-orbit configuration 26 m × 10 m × 5 m) carrying
an array of different sensors. To meet the mission requirements, a coherent,
multidisciplinary set of sensors was selected, each contributing with its distinct
measurement performance in various ways to the mission, as well as providing
synergy between various scientific disciplines, thus making the total payload
complement more than just the sum of the instruments (Table 5.11).
The Envisat-1 satellite comprises a set of seven ESA-developed instruments supported by three complementary instruments (AATSR, SCIAMACHY and DORIS):

- ASAR, Advanced Synthetic Aperture Radar.
- MERIS, Medium Resolution Imaging Spectrometer.
- RA-2, Radar Altimeter 2.
- MWR, Microwave Radiometer.
- LR, Laser Retro-Reflector.
- GOMOS, Global Ozone Monitoring by Occultation of Stars.
- MIPAS, Michelson Interferometer for Passive Atmospheric Sounding.
- AATSR, Advanced Along Track Scanning Radiometer.
- DORIS, Doppler Orbitography and Radiopositioning Integrated by Satellite.
- SCIAMACHY, Scanning Imaging Absorption Spectrometer for Atmospheric Cartography.



Table 5.12: Characteristics of the Envisat-1 satellite and its ASAR and MERIS sensors.
System: Envisat-1
Orbit: 800 km, 98.6° inclination, sun-synchronous, 10:00 AM crossing, 35-day repeat cycle
Sensor: ASAR (Advanced SAR)
Swath width: 56 km to 405 km
Off-nadir viewing: across-track 17° to 45°
Revisit time: 35 days
Frequency: C-band, 5.331 GHz
Polarization: several modes: HH+VV, HH+HV or VV+VH
Spatial resolution: 30 m or 150 m (depending on mode)
Sensor: MERIS
Swath width: 1150 km
Revisit time: 3 days
Spectral range (μm): 0.39–1.04 (VNIR)
Spectral bandwidth: 1.25 nm to 25 nm (programmable)
Bands: 15 bands (due to limited capacity)
Spatial resolution: 300 m (land), 1200 m (ocean)
Data archive at: envisat.esa.int

Additionally, ESA's Artemis data relay satellite system is used for communication to the ground.
The most important sensors for land applications are the Advanced Synthetic Aperture Radar (ASAR), the Medium-Resolution Imaging Spectrometer (MERIS), and the Advanced Along-Track Scanning Radiometer (AATSR).


Envisat-1's ASAR ensures the continuation of the ERS-1 and ERS-2 radar
satellites. It features enhanced capabilities in terms of coverage, range of incidence angles, polarizations and modes of operation.
In normal image mode, the ASAR generates high-spatial resolution products
(30 m) similar to the ERS SAR products. It can image seven different swaths located over a range of incidence angles from 15° to 45° in HH or VV polarization.
In other modes the ASAR is capable of recording the cross-polarization modes
HV and VH. In addition to the 30-m resolution, it offers a wide swath mode, for
providing images of a wider strip (405 km) with a medium resolution (150 m).

Table 5.13: Envisat-1 MERIS band characteristics (band centre and width in nm, with potential applications).
Band 1: centre 412.5 nm, width 10 nm (yellow substance, turbidity)
Band 2: centre 442.5 nm, width 10 nm (chlorophyll absorption maximum)
Band 3: centre 490 nm, width 10 nm (chlorophyll and other pigments)
Band 4: centre 510 nm, width 10 nm (turbidity, suspended sediment, red tides)
Band 5: centre 560 nm, width 10 nm (chlorophyll reference, suspended sediment)
Band 6: centre 620 nm, width 10 nm (suspended sediment)
Band 7: centre 665 nm, width 10 nm (chlorophyll absorption)
Band 8: centre 681.25 nm, width 7.5 nm (chlorophyll fluorescence)
Band 9: centre 705 nm, width 10 nm (atmospheric correction, vegetation)
Band 10: centre 753.75 nm, width 7.5 nm (oxygen absorption reference, vegetation)
Band 11: centre 760 nm, width 2.5 nm (oxygen absorption band)
Band 12: centre 775 nm, width 15 nm (aerosols, vegetation)
Band 13: centre 865 nm, width 20 nm (aerosols corrections over ocean)
Band 14: centre 890 nm, width 10 nm (water vapour absorption reference)
Band 15: centre 900 nm, width 10 nm (water vapour absorption, vegetation)



The Medium Resolution Imaging Spectrometer Instrument, MERIS, is a 68.5°
field-of-view pushbroom imaging spectrometer that measures the solar radiation reflected by the Earth, at a ground spatial resolution of 300 m, in 15 spectral
bands in the visible and near infra-red. In fact, the bands of MERIS are fully
programmable in width and position, making MERIS a fully-fledged imaging
spectrometer. However, due to the capacity constraints on the entire Envisat-1 system, a choice was made to offer a standard set of 15 bands (Table 5.13).
MERIS allows global coverage of the Earth in 3 days.
The MERIS instrument can be operated either in direct or in averaging observation modes. In averaging mode, the data are spatially averaged onboard to
produce a 1200 m resolution image. In direct mode the instrument delivers the
full resolution product of 300 m resolution. Typically, the direct mode is used on
land and coastal features, while the averaging mode is used over the ocean. The
full spatial resolution is available for up to 20 minutes per orbit, or about 20% of
the time.
The characteristics of the Envisat-1 satellite and its ASAR and MERIS sensors
are listed in Table 5.12.


5.4.6 Future developments

There have been trends towards higher spatial resolution (1-meter detail), higher
spectral resolution (more than 100 bands), higher temporal resolution (global
coverage within 3 days), and higher radiometric resolution (simultaneous observation of dark and bright targets). Although most of these trends were driven
by advancing technology, the trend towards higher spatial resolution was initiated by an act of politics, when US president Clinton signed the US Land
Remote Sensing Policy Act of 1992. The act initiated a new Race to Space, this time
about which private company would be the first to launch their high-resolution
satellite. Nowadays, private companies spend more money on remote sensing
than the governments.
In addition to the traditional spacefaring countries, new countries have launched their own remote sensing satellites. Countries such as India, China and
Brazil, for instance, have a strong ongoing remote sensing program. As remote
sensing technology matures and becomes available at lower cost, still other players may enter the field.
The following list shows a number of technological developments:
Fast development. Nowadays, the time from the drawing board to the launch
in space of a remote sensing satellite can be as short as one year. This means
that satellites can be developed and deployed very fast.
New sensor types. We may see a development of sensor types that were
previously unused in space. One example of this may be P-band radar.
Up to now, no P-band radar was used in space. P-band has a longer wavelength (30 cm to 100 cm) than the radar systems used before and penetrates
deeper into the soil, which means it may offer an unprecedented view on

the subsurface. Other new sensors are hyperspectral sensors (not really
operational yet) and lidar (not used before in space).
Sophisticated radar systems. New systems may offer multi-frequency, multipolarization and multiple look angles (Radarsat-2/SAR, ALOS/PALSAR).
New attitude control methods. Traditionally, a large part of the mass of a
satellite consists of fuel needed for attitude control and orbital corrections.
The use of new electric propulsion methods, such as magnetic torque and
ion motors, may drastically reduce the mass of a satellite, thus opening the
road to miniaturization.
Better pointing capabilities. Many of the current satellites have improved
pointing capabilities, which allows faster revisits. This means that the
same area can be observed twice within a couple of days rather than a
couple of weeks.
Big versus small. Nowadays, satellites come in all sizes ranging from the
huge Envisat-1, with a mass larger than 8000 kg, to the tiny SNAP-1, with
a mass as little as 6 kg. Currently there seem to be two lines of developments. One is the development of large, multi-instrument satellites, such
as Terra and Envisat-1. The other is the development small, mostly singleinstrument satellites. Mini satellites can be developed and launched at a
low cost. For the price of one large satellite, one can have a constellation of
small satellites (see below).
High-speed laser communication. One of the bottlenecks in remote sensing
has always been the capacity of the downlink channel. Basically, the amount

of image data that could be sent to the ground was limited by the microwave channels that were used. The use of modern laser optical communication channels may give rise to an unprecedented flow of data to the
Earth. Already SPOT-4 and Envisat-1 were equipped with experimental
laser-optical communication instruments to send data via ESA's geostationary communication satellite Artemis.
Special missions. Instead of having one multi-purpose satellite, there may
be many specialized remote sensing satellites. For instance, the British-Chinese satellite Tsinghua-1, launched in 2000, was developed to provide
daily world-wide high-resolution imaging for disaster monitoring and mitigation. Tsinghua-1 is the first demonstrator for the Disaster Monitoring
Constellation (see next item).
Constellations of satellites. AlSAT-1, the 90 kg Algerian microsatellite, which
was launched in November 2002, is part of the first dedicated Disaster
Monitoring Constellation (DMC). The DMC will comprise seven Earth observation microsatellites launched into low Earth orbit to provide daily
imaging revisit anywhere in the world.
New spacefaring countries. The DMC Consortium comprises a partnership
between organizations in Algeria, China, Nigeria, Thailand, Turkey, Vietnam and the United Kingdom. Six of the seven microsatellites for the
DMC are being constructed at SSTL (Surrey Satellite Technology Limited)
in the UK. The seventh microsatellite (Thai-Paht-2) is being built at the
Mahanakorn University of Technology (MUT) in Bangkok, Thailand.
Use of new orbits. In the future, satellites may be placed in orbits that were
not used before for Earth observation, such as the Lagrangian points of the
Earth-Moon system, or the Sun-Earth system (Section 3.3.2).
Improved coverage, achieved through having more satellites, more receiving
stations, higher onboard storage capacity, better data throughput, better
pointing capabilities, etc.
Rapid dissemination. By exploiting the capabilities of modern computing
systems and networks, remote sensing data can be distributed to the end-user almost in real time.
Cheap & useful data. Stiff competition on the remote sensing market has
caused prices to drop significantly. On the other hand, the quality of free
data sources has improved considerably. When starting a project, it is advisable to check the free and low-cost data sources first before spending a
huge amount of money on commercial data.


Summary
The multispectral scanner is a sensor that collects data in various wavelength
bands of the EM spectrum. The scanner can be mounted on an aircraft or on
a satellite. There are two types of scanners, the whiskbroom scanner and the pushbroom sensor. They use single solid state detectors and arrays of solid state
detectors (CCD), respectively, for measuring EM energy levels. The resulting
image data store the measurements as Digital Numbers. Multispectral scanners
provide multi-band data.
In terms of geometric distortion and operational reliability the pushbroom sensor is superior to the whiskbroom scanner. However, because of
the limited spectral range of current CCDs, whiskbroom scanners are still used
for spectral ranges above 2.5 μm, such as the mid-infrared and thermal infrared
bands.
Operational remote sensing satellites can be distinguished by the resolution
and the characteristics of their (main) sensor. A number of sensors and platforms
were discussed in the following categories: low resolution, medium resolution,
high resolution, imaging spectrometry, and, finally, a large multi-instrument system with an imaging radar. In addition, some future developments were discussed. Keywords such as small, high resolution, high quality, low cost, agile,
constellation, large coverage, rapid dissemination, describe the trend.


Questions
The following questions can help you to study Chapter 5.
1. Compare multispectral scanner data with scanned aerial photographs.
Which similarities and differences can you identify?
The following are typical exam questions:
1. Explain the principle of the whiskbroom scanner.
2. Explain the principle of the pushbroom sensor.
3. What does CCD stand for and what is it used for?
4. Consider a whiskbroom scanner at 5000 m height and with an IFOV (β)
of 2 mrad. Calculate the diameter of the area observed on the ground.


5. Explain the difference between IFOV and FOV.


6. Explain off-nadir viewing. Which advantage does it offer?
7. What is quantization and to which part of the scanning process does it
relate?
8. Which range of spatial resolutions is encountered with today's multispectral and panchromatic scanners?


Chapter 6
Active sensors


6.1 Introduction
Active remote sensing technologies have the potential to provide accurate information about the land surface by imaging Synthetic Aperture Radar (SAR) from
airborne or spaceborne platforms, and three-dimensional measurement of the
surface by interferometric SAR (INSAR or IFSAR) and airborne laser scanning
(LIDAR). While these technologies are both active ranging systems utilizing precise GPS and INS systems, they represent fundamentally different sensing processes. Radar systems, introduced in Section 6.2, are based upon microwave
sensing principles, while laser scanners are optical sensors, typically operating
in the near-infrared portion of the electromagnetic spectrum. They are introduced in Section 6.3.


6.2 Radar


6.2.1 What is radar?

So far, you have learned about remote sensing using the visible and infrared
part of the electromagnetic spectrum. Microwave remote sensing uses electromagnetic waves with wavelengths between 1 cm and 1 m (Figure 2.5). These relatively longer wavelengths have the advantage that they can penetrate clouds
and are independent of atmospheric conditions, such as haze. Although microwave remote sensing is primarily considered an active technique, passive sensors are also used. They operate similarly to thermal sensors by detecting
naturally emitted microwave energy. They are primarily used in meteorology,
hydrology and oceanography. In active systems, on the other hand, the sensor
transmits microwave signals from an antenna to the Earth's surface where they
are backscattered. The part of the electromagnetic energy that is scattered into
the direction of the antenna is detected by the sensor as illustrated in Figure 6.1.
There are several advantages to be gained from the use of active sensors, which
have their own energy source:
- It is possible to acquire data at any time, including during the night (similar to thermal remote sensing).
- Since the waves are created actively, the signal characteristics are fully controlled (e.g. wavelength, polarization, incidence angle, etc.) and can be adjusted according to the desired application.
Active sensors are divided into two groups: imaging and non-imaging sensors.
Radar sensors belong to the group of most commonly used active (imaging) microwave sensors. The term radar is an acronym for radio detection and ranging.
Radio stands for the microwave and range is another term for distance. Radar
sensors were originally developed and used by the military. Nowadays, radar
first

previous

next

last

back

exit

zoom

contents

index

about

214

6.2. Radar

sensors are widely used in civil applications as well, such as environmental monitoring. To the group of non-imaging microwave instruments belong altimeters,
which collect distance information (e.g. sea surface height), and scatterometers,
which acquire information about the object properties (e.g. wind speed).
This section focuses on the principles of imaging radar and its applications.
The interpretation of radar imagery is less intuitive than that of imagery obtained from
optical remote sensing. This is because of differences in physical interaction of
the waves with the Earth's surface. The section explains which interactions take
place and how radar images can be interpreted.


6.2.2 Principles of imaging radar

Imaging radar systems include several components: a transmitter, a receiver,


an antenna and a recorder. The transmitter is used to generate the microwave
signal and transmit the energy to the antenna from where it is emitted towards
the Earths surface. The receiver accepts the backscattered signal as received by
the antenna, filters and amplifies it as required for recording. The recorder then
stores the received signal.
Imaging radar acquires an image in which each pixel contains a digital number according to the strength of the backscattered energy that is received from
the ground. The energy received from each transmitted radar pulse can be expressed in terms of the physical parameters and illumination geometry using
the so-called radar equation:
Pr =

G 2 2 P t
,
(4)3 R4

(6.1)

where
Pr
G

Pt

=
=
=
=
=

received energy,
antenna gain,
wavelength,
transmitted energy,
radar cross section, which is a function of the object characteristics and the size of the illuminated area, and
range from the sensor to the object.

From this equation you can see that there are three main factors that influence
the strength of the backscattered received energy:

- radar system properties, i.e. wavelength, antenna and transmitted power,
- radar imaging geometry, which defines the size of the illuminated area and is a function of, for example, beam-width, incidence angle and range, and
- characteristics of the interaction of the radar signal with objects, i.e. surface roughness and composition, and terrain topography and orientation.
They are explained in the following sections in more detail.
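A numerical sketch of Equation 6.1 (Python, illustrative only). All parameter values below are invented round numbers; they are chosen merely to show how strongly the received energy falls off with range because of the R⁴ term in the denominator.

    import math

    def received_power(P_t, G, wavelength, sigma, R):
        # Radar equation (Equation 6.1) for a point scatterer.
        return (G**2 * wavelength**2 * sigma * P_t) / ((4 * math.pi)**3 * R**4)

    P_t = 5e3           # transmitted power (W)          -- invented value
    G = 10 ** 3.5       # antenna gain (35 dB)           -- invented value
    wavelength = 0.056  # C-band wavelength (m), ~5.3 GHz
    sigma = 1.0         # radar cross section (m^2)      -- invented value

    for R in (700e3, 800e3, 900e3):    # slant ranges (m)
        P_r = received_power(P_t, G, wavelength, sigma, R)
        print(f"R = {R / 1e3:.0f} km -> P_r = {P_r:.2e} W")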

Figure 6.2: Illustration of how radar pixels result from pulses (received intensity sampled over time). For each sequence shown, one image line is generated.

What exactly does a radar system measure? To interpret radar images correctly, it is important to understand what a radar sensor detects. The physical
properties of a radar wave are the same as those introduced in Section 2.2. Radar
waves, too, are electric and magnetic fields that oscillate in the shape of a sine
wave, oriented perpendicular to each other. The concepts of wavelength and frequency are used as described before. In addition, amplitude, phase and period are
relevant. The amplitude is the peak value of the wave. It relates to the amount
of energy contained in the wave. The phase is the fraction of the period that has
elapsed relative to the start point of the wave. In other words, the phase rotates
360°, i.e. a full circle, for one full wave. The time required to complete that full
wave is the period.
The radar transmitter creates microwave signals, i.e. pulses of microwaves at
a fixed frequency (the Pulse Repetition Frequency [PRF]), that are directed by the
antenna into a beam. A pulse travels in this beam through the atmosphere, illuminates a portion of the Earth's surface, is backscattered and passes through
the atmosphere again to reach the antenna where the signal is received and the
intensity of it is determined. The signal needs to pass twice the distance between object and antenna, and, knowing the speed of light, the distance (range)
between sensor and object can be derived.
To create an image, the return signal of a single pulse is sampled and these
samples are stored in an image line (Figure 6.2). With the movement of the sensor emitting pulses, a two-dimensional image is created (each pulse defines one
line). The radar sensor, therefore, measures distances and detects backscattered
signal intensities.
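The sketch below illustrates these two measurements in code: the samples of one pulse's return form one image line, and each sample position corresponds to a slant range via the two-way travel time. All timing values are assumptions used only for illustration.

```python
import numpy as np

C = 299_792_458.0            # speed of light [m/s]
SAMPLE_INTERVAL = 5e-8       # assumed time between samples of the return signal [s]
FIRST_SAMPLE_DELAY = 5.3e-3  # assumed delay of the first recorded sample [s]

def sample_to_slant_range(sample_index):
    """Each sample corresponds to a two-way travel time; halving the
    travelled path gives the slant range of that pixel."""
    two_way_time = FIRST_SAMPLE_DELAY + sample_index * SAMPLE_INTERVAL
    return 0.5 * C * two_way_time

# Stacking the sampled returns of successive pulses line by line yields
# the two-dimensional radar image (random numbers stand in for real echoes).
pulses = [np.random.rand(6) for _ in range(4)]   # 4 pulses, 6 samples each
image = np.vstack(pulses)                        # (azimuth lines, range samples)
print(image.shape, f"{sample_to_slant_range(0)/1000:.1f} km")
```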
Commonly used imaging radar bands Similarly to optical remote sensing,
radar sensors operate with one or more different bands. For better identification,
a standard has been established that defines various wavelength ranges using
letters to distinguish among the various bands (Figure 6.3). In the description
of different radar missions you will recognize the different wavelengths used if
you see the letters. The European ERS mission and the Canadian Radarsat, for
example, use C-band radar. Just like multispectral bands, different radar bands
provide information about different object characteristics.

Figure 6.3: Microwave spectrum and band identification by letters (frequency in GHz, wavelength in cm).
Microwave polarizations The polarization of an electromagnetic wave is important in the field of radar remote sensing. Depending on the orientation of
the transmitted and received radar wave, polarization will result in different
images (Figure 6.4). It is possible to work with horizontally, vertically or crosspolarized radar waves. Using different polarizations and wavelengths, you can
collect information that is useful for particular applications, for example, to classify agricultural fields. In radar system descriptions you will come across the
following abbreviations:
- HH: horizontal transmission and horizontal reception,
- VV: vertical transmission and vertical reception,
- HV: horizontal transmission and vertical reception, and
- VH: vertical transmission and horizontal reception.

Figure 6.4: A vertically polarized electromagnetic wave; the electric field's variation occurs in the vertical plane in this example.

Figure 6.5: Radar remote sensing geometry (flight path, nadir, azimuth direction, illumination/incidence angle, local incidence angle, slant range, ground range, near range, far range, swath width).

6.2.3 Geometric properties of radar

The platform carrying the radar sensor moves along the orbit in the flight direction (Figure 6.5). You can see the ground track of the orbit/flight path on the
Earth's surface at nadir. The microwave beam illuminates an area, or swath, on the Earth's surface, with an offset from the nadir, i.e. side-looking. The direction
along-track is called azimuth, the direction perpendicular (across-track) is called
range.

Radar viewing geometry
Radar sensors are side-looking instruments. The portion of the image that is
closest to the nadir track of the satellite carrying the radar is called near range.
The part of the image that is farthest from the nadir is called far range (Figure 6.5).
The incidence angle of the system is defined as the angle between the radar beam
and the local vertical. Moving from near range to far range, the incidence angle increases. It is important to distinguish between the incidence angle of the
sensor and the local incidence angle, which differs depending on terrain slope and
earth-curvature (Figure 6.5). It is defined as the angle between the radar beam
and the local surface normal. The radar sensor measures the distance between
antenna and object. This line is called slant range. But the true horizontal distance along the ground corresponding to each measured point in slant range is
called ground range (Figure 6.5).

Figure 6.6: Illustration of the slant range resolution (pulse length PL): the returns from two objects A and B overlap when their separation is less than PL/2.

Spatial resolution
In radar remote sensing, the images are created from the backscattered portion
of transmitted signals. Without further sophisticated processing, the spatial resolutions in slant range and azimuth direction are defined by pulse length and antenna beam width, respectively. This setup is called Real Aperture Radar (RAR).
Due to the different parameters that determine the spatial resolution in range
and azimuth direction, it is obvious that the spatial resolution in the two directions is different. For radar image processing and interpretation it is useful to
resample the image data to regular pixel spacing in both directions.
Slant range resolution In slant range the spatial resolution is defined as the
distance that two objects on the ground have to be apart to give two different
echoes in the return signal. Two objects can be resolved in range direction if
they are separated by at least half a pulse length. In that case the return signals
will not overlap. The slant range resolution is independent of the range (see
Figure 6.6).
Azimuth resolution The spatial resolution in azimuth direction depends on
the beam width and the range. The radar beam width is proportional to the
wavelength and inversely proportional to the antenna length, i.e. aperture; this
means the longer the antenna, the narrower the beam and the higher the spatial
resolution in azimuth direction.
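The two resolution rules just described can be put into two small formulas: the slant range resolution is half the pulse length, and the (real aperture) azimuth resolution is the beam width, roughly wavelength over antenna length, multiplied by the range. The sketch below uses assumed, ERS-like numbers purely to show the orders of magnitude involved, and in particular why a real aperture radar in orbit would have very coarse azimuth resolution.

```python
C = 299_792_458.0  # speed of light [m/s]

def slant_range_resolution(pulse_duration_s):
    """Objects closer together in slant range than half the pulse length
    produce overlapping echoes and cannot be separated."""
    return 0.5 * C * pulse_duration_s

def azimuth_resolution_rar(wavelength_m, antenna_length_m, range_m):
    """Real Aperture Radar: beam width ~ wavelength / antenna length,
    so the illuminated azimuth extent grows linearly with range."""
    return (wavelength_m / antenna_length_m) * range_m

# Assumed values: 0.1 microsecond pulse, 5.6 cm wavelength, 10 m antenna, 850 km range.
print(slant_range_resolution(1e-7))                # about 15 m in slant range
print(azimuth_resolution_rar(0.056, 10.0, 850e3))  # several kilometres in azimuth
```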

Synthetic Aperture Radar (SAR)
RAR systems are limited in the spatial resolution they can achieve, since there is a physical limit to the length of the antenna that can be carried on an aircraft or satellite. Shortening the wavelength, on the other hand, reduces the capability of the signal to penetrate clouds. To improve the spatial resolution, a large antenna is therefore synthesized. This synthesis is achieved by taking advantage of the forward motion of the platform: using all the backscattered signals in which a contribution of the same object is present, a very long antenna can be synthesized, with a length equal to the part of the orbit or flight path in which the object is visible. Most airborne and spaceborne radar systems use this type of radar. Systems using this approach are called Synthetic Aperture Radar (SAR).

6.2.4 Data formats

SAR data are recorded in so-called raw format, which can be processed with a SAR processor into a number of derived products, such as intensity images, geocoded
images and phase-containing data. The highest possible spatial resolution of the
raw data is defined by the radar system characteristics.

Raw data
Raw data contain the backscatter of objects on the ground seen at different
points in the sensor orbit. The received backscatter signals are sampled and
separated into two components, together forming a complex number. The components contain information about the amplitude and the phase of the detected
signal. The two components are stored in different layers. In this format, all
backscatter information is still available in the elements of the data layers and:
- Each line consists of the sampled return signal of one pulse.
- An object is included in many lines (about 1000 for ERS).
- The position of an object in the different lines varies (different range).
- Each object has a unique Doppler history, which is included in the data layers.

SLC data
The raw data are compressed based on the unique Doppler shift and range information for each pixel, which means that the many backscatters of a point are
combined into one. The output of that compression is stored in one pixel which
is still in complex format. Each pixel still contains information of the returned
microwave. The phase and amplitude belonging to that pixel can be computed
from the complex number. If all backscatter information of a point is used in the
compression, then the output data is in Single Look Complex (SLC) format. The
data still have their highest possible spatial resolution.

Multi-look data
In the case of multi-look processing, the total range of the orbit in which an object
can be seen is divided into several parts. Each part provides a look at the object.
Using the average of these multiple looks, the final image is obtained which is
still in complex format. Multi-look processing reduces the spatial resolution but
it reduces unwanted effects (speckle) by averaging.

Intensity image
To get a visually interpretable image, the SLC or Multi-look data need to be
processed. The complex format is transformed into an Intensity image. In fact
the norm (length) of the complex vector gives the intensity of the pixel. The
spatial resolution of the intensity image is related to the number of looks that
are used in the compression step.
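A minimal sketch of these two processing steps is given below: the intensity of a pixel is derived from its complex (SLC or multi-look) value, and multi-looking is shown here as a simple averaging of consecutive azimuth lines. Real SAR processors work differently in detail; the array is random stand-in data and the number of looks is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-in for a small complex SLC image (real data come from an SLC product).
slc = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

amplitude = np.abs(slc)     # length (norm) of the complex vector
intensity = amplitude**2    # conventions differ: products may store |z| or |z|^2

def multilook(img, looks_az=4):
    """Average consecutive azimuth lines: spatial resolution is reduced,
    but speckle is suppressed by the averaging."""
    rows = (img.shape[0] // looks_az) * looks_az
    return img[:rows].reshape(-1, looks_az, img.shape[1]).mean(axis=1)

print(intensity.shape, "->", multilook(intensity, 4).shape)
```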

6.2.5 Distortions in radar images

Due to the side-looking viewing geometry, radar images suffer from serious geometric and radiometric distortions. In radar imagery, you encounter variations
in scale (caused by slant range to ground range conversion), foreshortening, layover and shadows (due to terrain elevation; Figure 6.7). Interference due to the
coherence of the signal causes the so-called speckle effect.

Scale distortions
Radar measures ranges to objects in slant range rather than true horizontal distances along the ground. Therefore, the image has different scales moving from
near to far range (Figure 6.5). This means that objects in near range are compressed with respect to objects at far range. For proper interpretation, the image
has to be corrected and transformed into ground range geometry.
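For flat terrain this conversion is essentially a projection with the sine of the incidence angle, as in the sketch below. The pixel spacing and angles are assumed values, chosen to show how the same slant range spacing covers more ground in near range than in far range.

```python
import math

def ground_range_spacing(slant_spacing_m, incidence_deg):
    """Flat-terrain approximation: project a slant range pixel spacing
    onto the ground. Small incidence angles (near range) give the
    largest ground coverage per pixel, i.e. the strongest compression."""
    return slant_spacing_m / math.sin(math.radians(incidence_deg))

for angle in (20.0, 23.0, 26.0):   # assumed incidence angles across the swath
    print(f"{angle:.0f} deg -> {ground_range_spacing(7.9, angle):.1f} m on the ground")
```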
Figure 6.7: Geometric distortions in radar imagery due to terrain elevation: foreshortening (F), layover (L) and shadow (S).

Terrain-induced distortions
Similarly to optical sensors that can operate in an oblique manner (e.g. SPOT),
radar images are subject to relief displacements. In the case of radar, these distortions can be severe. There are three effects that are typical for radar: foreshortening, layover and shadow (see Figure 6.7).
Foreshortening Radar measures distance in slant range. The slope area facing
the radar is compressed in the image. The amount of shortening depends on the
angle that the slope forms in relation to the incidence angle. The distortion is at
its maximum if the radar beam is almost perpendicular to the slope. Foreshortened areas in the radar image are very bright.
Layover If the radar beam reaches the top of the slope earlier than the bottom, the slope is imaged upside down, i.e. the slope lays over. As you can
understand from the definition of foreshortening, layover is an extreme case of
foreshortening. Layover areas in the image are very bright.
Shadow In the case of slopes that are facing away from the sensor, the radar
beam cannot illuminate the area. Therefore, there is no energy that can be backscattered to the sensor and those regions remain dark in the image.

Radiometric distortions
The above-mentioned geometric distortions also have an influence on the received energy. Since the backscattered energy is collected in slant range, the
received energy coming from a slope facing the sensor is stored in a reduced
area in the image, i.e. it is compressed into fewer image pixels than would be the case if it were obtained in ground range geometry. This results in high digital numbers because the energy collected from different objects is combined. Slopes
facing the radar appear (very) bright. Unfortunately this effect cannot be corrected for. This is why especially layover and shadow areas in radar imagery
cannot be used for interpretation. However, they are useful in the sense that
they contribute to a three-dimensional look of the image and therefore help the
understanding of the terrain structure and topography.
A typical property of radar images is the so-called speckle. It appears as a grainy 'salt-and-pepper' effect in the image (Figure 6.8). Speckle is caused by interference of the backscattered signals coming from the many scatterers contained in one pixel. This interference causes the return signals to be extinguished or amplified, resulting in dark and bright pixels in the image even when the sensor observes a homogeneous area. Speckle
degrades the quality of the image and makes the interpretation of radar imagery
difficult.


Figure 6.8: Original (a) and speckle-filtered (b) radar image.

Speckle reduction
It is possible to reduce speckle by means of multi-look processing or spatial filtering. If you purchase an ERS SAR scene in Intensity (PRI)-format you will
receive a 3-look or 4-look image. Another way to reduce speckle is to apply spatial filters on the images. Speckle filters are designed to adapt to local image
variations, smoothing the values to reduce speckle while preserving lines and edges to maintain the sharpness of the imagery.
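As one example of such an adaptive filter, the sketch below implements a basic Lee filter: it smooths strongly where the local variation is no larger than what speckle alone would explain, and leaves pixels largely untouched where the variation suggests real structure such as edges. The window size and number of looks are assumptions; operational software offers several refined variants.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(intensity, window=7, looks=3):
    """Basic Lee speckle filter on an intensity image.
    Where the local variation is close to the speckle level the local
    mean is returned; where it is much larger the original value is kept."""
    mean = uniform_filter(intensity, window)
    mean_sq = uniform_filter(intensity**2, window)
    var = np.maximum(mean_sq - mean**2, 0.0)
    cu2 = 1.0 / looks                        # speckle variation for L-look data
    ci2 = var / np.maximum(mean**2, 1e-12)   # observed local variation
    weight = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return mean + weight * (intensity - mean)
```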

6.2.6 Interpretation of radar images

The brightness of features in a radar image depends on the strength of the backscattered signal. In turn, the amount of energy that is backscattered depends on various factors. An understanding of these factors will help you to interpret radar
images properly.

Microwave signal and object interactions
For interpreters who are concerned with visual interpretation of radar images,
the degree to which they can interpret an image depends upon whether they
can identify typical/representative tones related to surface characteristics. The
amount of energy that is received at the radar antenna depends on the illuminating signal (radar system parameters such as wavelength, polarization, viewing geometry, etc.) and the characteristics of the illuminated object (roughness,
shape, orientation, dielectric constant, etc.).
Surface roughness is the terrain property that most strongly influences the
strength of radar returns. It is determined by textural features comparable to
the size of the radar wavelength (typically between 5 and 40 cm), such as leaves and
twigs of vegetation, and sand, gravel and cobble particles. A distinction should
be made between surface roughness and topographic relief. Surface roughness
occurs at the level of the radar wavelength (centimetres to decimetres). Topographic relief occurs at a quite different level (metres to kilometres). Snell's law
states that the angle of reflection is equal and opposite to the angle of incidence.
A smooth surface reflects the energy away from the antenna without returning a
signal, thereby resulting in a black image. With an increase in surface roughness,
the amount of energy reflected away is reduced, and there is an increase in the
amount of signal returned to the antenna. This is known as the backscattered
component. The greater the amount of energy returned, the brighter the signal
is shown on the image. Radar imagery is, therefore, a measure of the backscatter
component, and is related to object- or surface roughness.

Complex dielectric constant Microwave reflectivity is a function of the complex dielectric constant. The complex dielectric constant is a measure of the
electrical properties of surface materials. The dielectric constant of a medium
consists of a part referred to as permittivity, and a part referred to as conductivity ([39]). Both properties, permittivity and conductivity, are strongly dependent
on the moisture or liquid water content of a medium. Material with a high dielectric constant has a strongly reflective surface. Therefore, the difference in the
intensity of the radar return for two surfaces of equal roughness is an indication
of the difference in their dielectric properties. In case of soils this could be due
to differences in soil moisture content.
Surface Orientation Scattering is also related to the orientation of the object
relative to the radar antenna. For example the roof of a building appears bright
if it faces the antenna and dark if the incoming signal is reflected away from the
antenna. Thus backscatter depends also on the local incidence angle.
Volume scattering is related to multiple scattering processes within a group of
objects, such as the vegetation canopy of a wheat field or a forest. The cover may
be all trees, as in a forested area, which may be of different species with variation in leaf form and size, or grasses and bushes with variations in form, stalk size, leaf angle, fruiting and a variable soil surface. Some of the energy will be
backscattered from the vegetated surface, but some, depending on the characteristics of radar system used and the object material, will penetrate the object
and be backscattered from surfaces within the vegetation. Volume scattering is
therefore dependent upon the inhomogeneous nature of the object surface and
the physical properties of the object such as leaf size, direction, density, height,
presence of lower vegetation, etc., together with the characteristics of the radar
used, such as wavelength and related effective penetration depth ([3]).
Point objects are discrete objects of limited size that give a very strong radar return. Usually the high backscatter is caused by so-called corner reflection. An example is the dihedral corner reflector, a point-object situation resulting from two flat surfaces intersecting at 90° and situated orthogonal to the incident radar beam. Common forms of dihedral configurations are man-made features, such as transmission towers, railroad tracks, or the smooth side of buildings on a smooth ground surface. Another type of point object is the trihedral corner reflector, which is formed by the intersection of three mutually perpendicular flat surfaces. Point objects of the corner reflector type are commonly
used to identify known fixed points in an area in order to perform precise calibration measurements. Such objects can occur naturally and are best seen in
urban areas where buildings can act as trihedral or dihedral corner reflectors.
These objects give rise to intense bright spots on an image and are typical for
urban areas. Point objects are examples of objects that are sometimes below the
resolution of the radar system, but because they dominate the return from a cell
they give a clearly visible point, and may even dominate the surrounding cells.

6.2.7 Applications of radar

There are many useful applications of radar images. Radar data provide complementary information to visible and infrared remote sensing data. In the case
of forestry, radar images can be used to obtain information about forest canopy,
biomass and different forest types. Radar images also allow the differentiation of
different land cover types, such as urban areas, agricultural fields, water bodies,
etc. In agricultural crop identification, the use of radar images acquired using
different polarization (mainly airborne) is quite effective. It is crucial for agricultural applications to acquire data at a certain point in time (season) to obtain
the necessary parameters. This is possible because radar can operate independently of weather or daylight conditions. In geology and geomorphology the
fact that radar provides information about surface texture and roughness plays
an important role in lineament detection and geological mapping. Other successful applications of radar include hydrological modelling and soil moisture
estimation, based on the sensitivity of the microwave to the dielectric properties
of the observed surface. The interaction of microwaves with ocean surfaces and
ice provides useful data for oceanography and ice monitoring. Radar data is also
used for oil slick monitoring and environmental protection.

6.2.8 INSAR

Radar data provide a wealth of information that is not only based on a derived
intensity image but also on other data properties that measure characteristics of
the objects. One example is SAR interferometry (INSAR), an advanced processing
method that takes advantage of the phase information of the microwave. INSAR is a technique that enables the extraction of 3D information of the Earth's
surface. It is based on the phase differences between corresponding pixels in
two SAR images of the same scene but acquired at a slightly different position.
The different path lengths from these positions to the target on the earth surface
cause the differences in phase. SAR systems can detect the phase of the return
signals very accurately (Figure 6.9).

Figure 6.9: Phase differences forming an interferogram (φ = φ1 - φ2).

Data acquisition modes
Radar data for INSAR can be collected in two different modes:
- Single or simultaneous pass interferometry. In this mode, two images are simultaneously acquired from two antennas mounted on the same platform and separated by a distance known as the baseline. This mode is mainly applied with aircraft systems, but the Shuttle Radar Topography Mission (SRTM) was also based on this principle, with receiving antennas located at the two ends of a 60 m mast that formed the baseline.
- Repeat or dual pass interferometry. In this mode, two images of the same area are taken in different passes of the platform. SAR data acquired from satellites such as ERS-1 and ERS-2, JERS-1 and RADARSAT may be used in this mode to produce SAR interferograms, but some aircraft systems are also based on this mode.

Figure 6.10: Illustration of the InSAR geometry. Two images of the same area are measured from slightly different positions (A1 and A2). Due to the range difference dr, a phase difference between the return signals appears. These phase differences contain height information of the surface point z(x,y) and are used to create the interferograms.

Concept
The phase information of two radar datasets (in SLC format) of the same region is used to create a DEM of the region (Figure 6.10). The corresponding elements of the two SLC datasets are acquired at two slightly different antenna positions (A1 and A2). The connection between these positions forms the baseline, which has a length (B) and an orientation relative to the horizontal direction. The positions A1 and A2 are extracted from the platform orbits or flight lines. The difference in antenna position results in a difference in the range from the target to those positions. This range difference (dr in Figure 6.10) causes a phase difference, which can be computed from the phase differences between corresponding elements of the SLC datasets and is stored in the so-called interferogram. Finally, the terrain height is a function of the phase difference, the baseline B and some additional orbit parameters, and it is represented in a chosen reference system.
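The first processing step, forming the interferogram itself, can be sketched in a few lines: for co-registered SLC data the complex product of one image with the conjugate of the other carries exactly the pixel-wise phase difference. Turning that wrapped phase into terrain heights additionally requires the baseline geometry, orbit data and phase unwrapping, which are beyond this sketch.

```python
import numpy as np

def interferogram(slc1, slc2):
    """Form an interferogram from two co-registered SLC images.
    The phase of the complex product is the (wrapped) phase difference
    between the two acquisitions, pixel by pixel."""
    ifg = slc1 * np.conj(slc2)
    wrapped_phase = np.angle(ifg)   # values in (-pi, pi]
    return ifg, wrapped_phase
```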

Coherence
The basis of INSAR is phase comparison over many pixels. This means that the
phase between scenes must be statistically similar. The coherence is a measure of
the phase noise of the interferogram. It is estimated by window-based computation of the magnitude of the complex cross correlation coefficient of the SAR
images. The interferometric coherence is defined as the absolute value of the
normalized complex cross correlation between the two signals. The correlation
will always be a number between 0 and 1. If corresponding pixels are similar
then the correlation is high. If the pixels are not similar, i.e. not correlated to a
certain degree, then the phase will vary significantly and the coherence is low,
meaning that the particular image part is de-correlated. Low coherence (e.g.
less than 0.1) indicates low phase quality and results in a noisy interferogram
that causes problems in the DTM generation. The coherence decreases with increasing change in the random component of the backscattered fields between passes (temporal decorrelation): physical changes of the surface roughness structure reduce the signal correlation, as do vegetation, water, shifting sand dunes, farm work (planting of fields), etc. Geometric distortions caused by steep topography and orbit inaccuracies also decorrelate the images.
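A window-based coherence estimate of the kind described above can be sketched as follows; the two inputs are assumed to be co-registered complex (SLC) images and the window size is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, window=5):
    """Magnitude of the normalized complex cross correlation of two
    co-registered SLC images, estimated over a local window (0..1)."""
    def box(x):  # local mean of a complex array, real and imaginary parts separately
        return uniform_filter(x.real, window) + 1j * uniform_filter(x.imag, window)
    cross = box(slc1 * np.conj(slc2))
    power1 = uniform_filter(np.abs(slc1) ** 2, window)
    power2 = uniform_filter(np.abs(slc2) ** 2, window)
    return np.abs(cross) / np.sqrt(np.maximum(power1 * power2, 1e-12))
```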

6.2.9 Differential INSAR

Differential interferometry using spaceborne sensors has become an established tool for the analysis of very small surface deformations. The idea is to analyse the
phase differences between SAR interferograms caused by surface displacements
between the data acquisitions. Due to the short wavelength of SAR sensors,
surface movements on the centimetre scale can easily be detected with an orbiting satellite from several hundred kilometres distance. However, atmospheric
effects can also contribute to phase differences, which cannot be easily distinguished from surface displacements.

Concept
Three SAR images of the same area are acquired in three different passes to generate two interferograms. The arithmetic difference of the two interferograms
is used to produce a differential interferogram. The elements of the differential
interferogram, in combination with orbit information and sensor characteristics,
are used to compute surface changes.
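In its simplest form the phase subtraction can be written as below; this is only a sketch, since in practice the topography-related interferogram must first be scaled to account for the different baselines of the two image pairs before the subtraction is meaningful.

```python
import numpy as np

def differential_phase(ifg_topography, ifg_deformation):
    """Subtract the phases of two interferograms of the same area
    (done via a complex product so the result stays correctly wrapped).
    Ideally only displacement, noise and atmospheric effects remain."""
    return np.angle(ifg_deformation * np.conj(ifg_topography))
```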

6.2.10 Application of (D)InSAR

One of the possibilities of interferometry using data from space is the derivation of global DEMs. This allows global topographic elevation mapping in areas such as the tropics, which were previously inaccessible, thanks to radar's ability to penetrate cloud cover and to acquire imagery regardless of sunlight. Utilizing ERS, SRTM and RADARSAT imagery can lead to DEMs with absolute height errors of less than 10 metres (assuming three or more GCPs are collected and optimal circumstances). Horizontal resolutions of 20 metres with ERS data and of 10 metres with RADARSAT Fine Beam mode are possible. Georeferencing
accuracy can be better than 20 metres. In addition, the ability to create DEMs
is beneficial for the generation of 3D imagery to assist in the identification of
targets for military and environmental purposes. DEMs also provide critical
information with reference to geomorphic processes. In this practice interferometry is useful in detecting changes caused by alluvial fans, broad flood plain
sedimentation patterns, sediment extraction, delta extensions and the formation
and movement of large dune fields.
Change detection is an important field of study and is based on the practice
of Differential Interferometry. This allows for highly sensitive change detection, with measurement accuracies down to millimetres. This practice can be used for monitoring landslides and erosion. In addition, it can be used to gather
information on changes in areas where mining and water extraction have taken
place (see Figure 6.11).
SAR can provide high-resolution imagery of earthquake-prone areas, high-resolution topographic data, and a high-resolution map of co-seismic deformation generated by an earthquake. Of these the last one is probably the most useful, primarily because it is unique. Other techniques are capable of generating images of the Earth's surface and topographic data, but no other technique
provides high-spatial-resolution maps of earthquake deformation (Figure 6.11: surface deformation mapping with (D)InSAR). Crust deformation is a direct manifestation of the processes that lead to earthquakes. Consequently, it is one of the most useful physical measurements we can make to
improve estimates of earthquake potential. SAR interferometry can provide the
requisite information.
Interferograms were calculated to help study the activity of volcanoes through
the creation of DEMs and mapping of land deformation. Researchers have used
over 300 images of ERS-1 data to create many interferograms of Mount Etna
(Italy). They were able to measure the increase in the size of the volcano (change
detection and deformation) caused by the pressure of the magma in its interior.
They were also able to follow the shrinking of the volcano once its activity had
subsided as well as the changes in the surrounding topography caused by the
lava flows. This technique can be used to monitor awakening volcanoes to prevent mass destruction, and for local and international relief planning.
Interferometry is useful in coherence-based land use classification. In this

practice, the coherence of a repeat-pass interferogram provides information additional to that contained in the radar backscatter. This information can be used as
an input channel into land use classification. The use of coherence has proven
successful for the separation of forests and open fields.
Interferometry is also useful in polar studies, including the measuring of flow
velocities, tidal displacements and ice sheet monitoring. Researchers at the University of Alaska at Fairbanks were able to use interferometry to calculate the
surface velocity field on the Bagley Icefield (Alaska) before and during the 1993-94 surge of the Bering Glacier using ERS-1 data.
In terms of ocean dynamics, interferometry can be used to study ocean wave
and current dynamics, wave forecasting, ship routing, and placement and design of coastal and offshore installations.
Many major cities are located in areas undergoing subsidence as a result of
withdrawal of ground water, oil or gas, and other minerals. Several metres of
subsidence over several decades are not uncommon. Examples of cities with
significant problems include Houston, Mexico City, Maracaibo, and Katmandu.
High rates of subsidence can have a major impact on flood control, utility distribution, and water supply.
Subsidence is also a result of natural processes such as limestone or marble
dissolution that forms karst topography. In western Pennsylvania, an underground coal fire has burned for many years causing localized subsidence and
threatening a much wider region with similar risks. Successive SAR images in
urban areas over periods of several months may be able to detect subsidence
directly. The surface structure of many parts of urban areas remains unchanged
over several years, suggesting that interferometry over several years may be
possible. Subsidence rates of several centimetres per year or more may be occurring in affected cities and should be detectable.

Satellite-based SAR interferometry has two important roles to play in polar
studies. First, SAR interferometry can provide complete high-resolution, high-accuracy topographic data. Second, repeat-pass interferometry can be used to
measure ice flow and assess other changes. The cover image shows the first
direct measurement of ice flow velocity from space without ground control.

6.2.11 Supply market

Spaceborne SAR interferometry holds great promise as a change-detection tool in the fields of earthquake studies, volcano monitoring, land subsidence detection and glacier and ice-stream flow studies. In a number of other fields, such
as hydrology, geomorphology, and ecosystem studies, generating an accurate,
globally consistent DEM with SAR interferometry is a major goal in itself.
The market for airborne interferometric DEM generation is the same as for laser altimetry. At the time of writing (September 2004) only one company is producing DTMs in this way.

6.2.12 SAR systems

The following SAR systems mounted on airborne (Table 6.1) and spaceborne (Table 6.2) platforms did or do produce SAR data. However, not all of these systems
generate data that can be processed into interferograms because of inaccuracies
in orbit or flight pass data. This list, created in May 2004, is most probably not
complete. For the latest situation refer to ITC's Database of Satellites and Sensors.
Table 6.1: Airborne SAR systems.

Instrument      Band/wavelength   Organization              Owner
Emisar          C, L band         Techn. Univ. of Denmark   Denmark
Pharus          C band            FEL-TNO                   Netherlands
Star-31         X band            Intermap                  Canada
Airsar/Topsar   P, L, C band      Nasa/JPL                  USA
Carabas         3-15 cm           Chalmers University/FOI   Sweden
Geosar          X, P band         JPL and others            USA
WINSAR          4 bands           Metratec                  USA

Table 6.2: Spaceborne SAR systems.

Instrument   Band           Owner    Remarks
Radarsat     C-band         Canada
ERS-1        C-band         ESA      Not operational anymore
ERS-2        C-band         ESA
Envisat      C-band         ESA
JERS-1       L-band         Japan    Not operational anymore
SRTM         C and X band   NASA     Space shuttle mission


6.2.13 Trends

The trend in airborne INSAR is towards multi-frequency and multi-polarization systems. The advantage of long wavebands (L or P) is that they can penetrate the canopy and will probably result in a ground surface height map even in dense forest. The combination of short wavebands (X or C) with long wavebands enables biomass estimation. The use of multi-polarization INSAR enables the creation of optimized interferograms by applying a weighted contribution of the different polarizations (HH, VH, HV, VV). The use of airborne SAR sensors for differential interferometry is also of great interest. Longer wavelengths with better coherence behaviour, such as L- or P-band, offer the possibility of analysing long-term processes even in vegetated areas. The capability
for monitoring of short-term processes is improved by the greater flexibility of
airborne sensors. In particular, the combination of spaceborne interferometric
SAR data with flexibly acquired airborne data is promising.
The future in spaceborne interferometry will be mainly in the direction of Differential INSAR for several applications where change detection is important. In
the coming years two spaceborne systems will be launched: the Japanese PALSAR system on the ALOS satellite, and the German TerraSar, providing radar data in higher spatial and spectral resolution modes.


6.3 Laser scanning

6.3.1 Basic principle

In functional terms, Laser Scanning can be defined as a system that produces digital surface models. The system comprises an assemblage of various sensors,
recording devices, and software. The core component is the laser instrument,
which measures distance, also referred to as laser ranging. When mounted on
an aircraft, the laser range finder measures the near vertical distance to the terrain in very short time intervals. Combining a laser range finder with sensors
that can measure the position (GPS) and attitude (IMU) of the aircraft makes it
possible to determine a model of the terrain surface in terms of a set of (X,Y ,Z)
coordinates, following the polar measuring principle (Figure 6.12).
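A very simplified version of this polar computation is sketched below: the aircraft position comes from GPS, the beam direction from the scan angle and (here) an idealized attitude, and the measured range then gives the ground coordinates. A real system applies full roll/pitch/heading rotation matrices from the IMU plus calibration offsets; the level-platform assumption and all numbers here are only illustrative.

```python
import math

def laser_point(aircraft_xyz, heading_deg, scan_angle_deg, range_m):
    """Idealized polar measurement: level platform (roll and pitch zero),
    scan mirror deflecting the beam across track by scan_angle_deg.
    Coordinates: x east, y north, z up."""
    x0, y0, z0 = aircraft_xyz
    heading = math.radians(heading_deg)
    scan = math.radians(scan_angle_deg)
    across = range_m * math.sin(scan)      # horizontal offset, across track
    down = range_m * math.cos(scan)        # near-vertical component
    x = x0 + across * math.cos(heading)    # across-track direction for this heading
    y = y0 - across * math.sin(heading)
    return x, y, z0 - down

# Aircraft at 900 m elevation, flying north, beam 10 degrees off nadir, 914 m range.
print(laser_point((1000.0, 2000.0, 900.0), 0.0, 10.0, 914.0))
```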
Figure 6.12: Polar measuring principle (a), and concept of airborne laser scanning (b).

We can define the coordinate system in such a way that Z refers to elevation.
The digital surface model (DSM) thus becomes a digital elevation model (DEM),
so we model the surface of interest by providing its elevation at many points
with position coordinates (X,Y ). Do the elevation values, which are produced
by airborne laser scanning (ALS), refer to elevation of the bare ground above
a predefined datum? Not necessarily, since the raw DEM gives us elevation of
the surface the sensor sees (Figure 6.13). Post-processing is required to obtain
a digital terrain relief model (DTM) from the DSM.
The key performance characteristics of ALS are: high ranging precision, yielding high resolution DSMs in near real time, and little dependence on weather
conditions and time of flying. Typical applications of ALS are, therefore, forest surveys, surveying of coastal areas and sand deserts, flood plain mapping,
power line and pipeline mapping, monitoring open-pit mining, 3D city modelling, etc.

Figure 6.13: DSM of part of Frankfurt/Oder, Germany (1 m point spacing). Courtesy of [38].

6.3.2 ALS components and processes

LASER stands for Light Amplification by Stimulated Emission of Radiation. Einstein can be considered the father of the laser, although he did not invent it.
Roughly 80 years ago he postulated photons and stimulated emission; he won
the Nobel Prize for related research on the photoelectric effect. In 1960 Theodore
Maiman at Hughes Research Laboratories developed a device to amplify light,
thus building the first laser (instrument). A laser emits a high-intensity beam of monochromatic light. The light is not really of a single wavelength, but has
a very narrow spectral band (smaller than 10 nm). Laser ranging is mostly done
with light in the near-infrared range.
Today lasers are used for many different purposes, among them for surgery.
Lasers can damage cells (by boiling their water content), so they are a potential
hazard to eye safety. Therefore, safety classes have been established for laser
range finders, which must be observed for surveying applications.
Figure 6.14: Concept of laser ranging (laser source, photodiode, range R) [42].

Laser range finders and scanners come in various forms. A typical airborne
laser instrument emits infrared pulses at a high frequency (e.g. 30,000 pulses per
second). Scanning can be achieved by an oscillating or rotating mirror, which
deflects the laser beam. Adding a scanning device to a ranging device has made
surveying of a large area more efficient; a strip (swath of points) can be captured
with a single flight line instead of just a line of points, as was the case with the earlier versions of laser systems, the laser profilers. The emitted signal is reflected on the ground and its return is sensed by a photodiode. The transmitting
and receiving apertures (commonly of 8 to 15 cm in diameter) are mounted such
that the sent and received signals share the same optical path (thus ensuring
that the photodiode sees the illuminated point and not something else). With
the pulse travelling at the speed of the light, the receiver senses the return pulse
before the next pulse is sent out (Figure 6.14). A time counter is started when a
pulse is sent out and stopped on its return. The elapsed time, measured with a
resolution of 0.1 nanoseconds, can easily be converted to a distance as we know
the speed of light, c:
R = \frac{1}{2} c \, \Delta t \qquad (6.2)
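Expressed in code, the conversion is a one-liner; the example values show that a round trip of a few microseconds corresponds to a flying height of about a kilometre, and that the 0.1 ns timer resolution mentioned above corresponds to roughly 1.5 cm in range.

```python
C = 299_792_458.0  # speed of light [m/s]

def laser_range(elapsed_time_s):
    """Equation (6.2): the pulse travels the sensor-to-ground distance twice."""
    return 0.5 * C * elapsed_time_s

print(laser_range(6.67e-6))   # ~1000 m for a 6.67 microsecond round trip
print(laser_range(0.1e-9))    # ~0.015 m: effect of the 0.1 ns timer resolution
```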

Several laser range finders for airborne applications can record multiple returns
from the same pulse. The cheaper single return sensors only register one return
pulse for every emitted pulse. Some of the range finders equipped with a single return sensor allow the operator to select either first or last return ranging.
The difference is especially relevant when flying terrain with vegetation. Many
returns from first-return systems are from the top of the tree canopy, while the
last returns are more likely to be from the ground. A multiple-return sensor can
either register the first and the last return or even several returned pulses per
emitted pulse. In such systems the beam may hit leaves at the top of the tree canopy, while part of the beam travels farther and may hit branches or even the ground (Figure 6.15). Each return can be converted to (X,Y,Z), facilitating the differentiation between ground surface and tree canopy. An example of a first return and last return DSM is shown in Figure 6.16.
Figure 6.15: Multiple return laser ranging (time in nanoseconds versus distance in metres; pulse emission, first to fourth returns). Adapted from Mosaic Mapping Systems, Inc.
Along with measuring the range, some laser instruments measure the intensity of the returned signal. The benefit of imaging lasers, however, is still a
matter of discussion in the laser community. The obtained images are monochromatic and are of lower quality than panchromatic images. A separate optical
sensor can produce much richer spectral information.
Figure 6.16: First and last return DSM of the same area. Courtesy of [38].

ALS provides 3D coordinates of terrain points. Terrain consists of terrain relief (the ground surface) and terrain features such as forest, trees, buildings,
water bodies, etc. To calculate accurate coordinates we must accurately observe
all needed elements. Measuring the distance from the aircraft to the terrain can
be done very precisely by the laser range finder, within centimetres. We can
accurately determine the position of the aircraft by differential GPS, using dual
frequency receivers (see Principles of Geographic Information Systems for an introduction to GPS). To determine accurately the attitude of the aircraft at the moment a distance is measured, we employ an Inertial Measuring Unit (IMU). An
IMU is an assemblage of gyros and accelerometers. IMUs were originally
used for aircraft navigation only (in INS, i.e. an Inertial Navigation System).
The total weight of the GPS - IMU - scanner equipment is about 150 kg. Most

ALS systems are used in small to medium fixed-wing aircraft. Usually an aircraft that is equipped for an aerial photography mission can be used to fly current commercially available ALS systems. There are several ALS systems that can be mounted on helicopters, and several have been designed exclusively for this type of platform. Helicopters are better suited for very
high-resolution surveys, because they can easily fly slowly. The minimum flying height is among other parameters dependent on the eye-safe distance of the
laser. The major limiting factor of the maximum flying height is energy loss of
the laser beam. Flying heights of 1000 m and less are common, yet there are now
systems that can be flown at up to 8000 m.
Different from an aerial survey for a stereo coverage of photographs (see
chapter 9), where each terrain point should get recorded at least twice, in ALS
a terrain point is only collected once in principle, even if we fly overlapping
strips. This is of advantage for surveying urban areas and forests, but of disadvantage for error detection.
After the flight, the recordings from the laser instrument and the position and
orientation system (i.e. integrated GPS and IMU) are co-registered to the same
time and then converted to (X, Y, Z) for each point that was hit by the laser
beam. The resulting data set may still contain systematic errors and is often
referred to as raw data.
Further data processing then has to solve the problem of extracting information from the uninterpreted set of (X, Y, Z) coordinates. Typical tasks are extracting buildings, modelling trees (e.g. to compute timber volumes), and, most prominently, filtering the DSM to obtain a DTM. Replacing the elevation value at non-ground points by an estimate of the elevation of the ground surface is also referred to as vegetation removal or, for short, 'devegging', a term maintained from the early days when ALS was primarily used for forested areas (Figure 6.17).

Figure 6.17: Devegging laser data: filtering a DSM to a DTM. Courtesy of SCOP.
Critical to getting the right data at the right time are proper system calibration, accurate flight planning and execution (including the GPS logistics), and
adequate software.
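As a flavour of what such filtering involves, the sketch below removes obvious non-ground points from a gridded DSM by comparing each cell with the lowest elevation in its neighbourhood. Operational ground filters (robust interpolation, progressive densification, etc.) are considerably more sophisticated; the window size and height threshold here are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def crude_deveg(dsm, window=21, height_threshold=2.0):
    """Replace cells that stick out more than 'height_threshold' above the
    lowest elevation in their neighbourhood by that local minimum,
    as a crude approximation of DSM-to-DTM filtering ('devegging')."""
    local_min = minimum_filter(dsm, size=window)
    return np.where(dsm - local_min > height_threshold, local_min, dsm)
```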

6.3.3 System characteristics

ALS produces a DSM directly comparable with what is obtained by image matching of aerial photographs. Image matching is the core process of automatically
generating a DSM from stereophotos. Alternatively we can also use microwave
radar to generate DSMs and eventually DTMs. The question is, why go for ALS?
There are several good reasons for using ALS for terrain modelling:
- A laser range finder measures distance by recording the elapsed time between emitting a signal and receiving the reflected signal from the terrain. Hence, the laser range finder is an active sensor (emitting the radiation which is to be sensed). Being an active sensor, the data collection does not depend on reflected sunlight (as opposed to recordings of passive sensors such as aerial cameras), meaning that the laser range finder can be flown by day and night.
- Different from indirect distance measuring as done when using overlapping photographs, laser ranging does not depend on surface/terrain texture.
- Laser ranging is less weather dependent than using passive optical sensors. A laser cannot penetrate clouds as microwave radar can, but it can be flown at low altitude, thus very often below the cloud ceiling.
- The laser beam is very narrow, therefore it can see through the tree canopy, unless it is very dense. ALS can see objects that are much smaller than the footprint of the laser beam; therefore, we can use it for mapping power lines, etc.

- A laser range finder can measure distances very precisely and very frequently; therefore, a DSM with a high density of points and precise elevation values can be obtained. The attainable elevation (vertical coordinate) accuracy is 7 to 30 cm (as reported from various projects under good conditions).
- The first-return and last-return recording facility offers some benefits in feature extraction, especially for forest applications and urban mapping (building extraction), both subjects of active research.
- Moreover, the entire data collection process is digital and can be automated to a high degree, giving it a potential for fast production.
- Another important advantage is that ALS does not need ground control other than a calibration site, which can usually be created near the airfield.
There are two major advantages of laser ranging compared to microwave
radar: high energy pulses can be generated in short intervals and highly directional light rays can be emitted by using small apertures. The latter is possible
because of the short wavelength of lasers (10,000 to 1,000,000 times shorter than
microwave). The consequence is much higher ranging accuracy.
Note that the term radar (radio detection and ranging) is often used as shorthand
for microwave radar; however, in the literature you may also come across the
term laser radar, which is a synonym for laser ranging. The term lidar, which
stands for light detection and ranging, is often used as a synonym for laser
range finding (a lidar instrument being a laser range finder), although there
are also lidar instruments that do not measure distance but the velocity of a target (Doppler lidars).

6.3.4 Variants of Laser Scanning

The first airborne laser ranging experiments were conducted in North America
in the 1970s, aimed at bathymetric applications. There are currently a half dozen
airborne laser bathymetry systems in operation. These are heavy and very expensive systems commonly employing lasers in two different wavelengths, near
infrared and green. Near infrared light has good atmospheric penetration but does
not penetrate water or the ground. The near infrared beam is reflected by a water surface, while the green beam penetrates (clear) water and is reflected from
the bottom of the water body. Measuring the time difference between the returns
of the co-aligned laser pulses allows determining the water depth (shallow water, not deeper than 80 m). For precise topographic surveys (using near infrared
lasers), progress was first required in satellite positioning (GPS), which allowed
the development of laser profiling in the late 1980s.
The idea of creating surface models and true 3D models by laser ranging can
also be applied to problems where an aircraft does not offer the right perspective, and where ground-based cameras would fail to do a good job (e.g. because
the objects/scene to be surveyed have/has little texture). Ground-based laser
scanners, also called terrestrial laser scanners (TLSs), combine the concepts of
tacheometry (polar measuring, e.g. by Total Station; explicit distance) and photogrammetry (bundle of rays, implicit distance). TLSs (e.g. Leica Cyrax 2500,
Riegl 3D-Imaging Sensor LMS-Z210) do not require a prism/reflector at a target point and can yield a very high density of points in a very short time. Such
instruments may have a scanning range of 1-200 m and record 1000 points per
second with an accuracy of 2-6 mm.
TLSs are particularly suited for surveying in dangerous or inaccessible environments, and for high precision work. All kinds of civil engineering, architectural, and archaeological surveys are the prime application areas. Figure 6.18
illustrates a typical TLS process chain, with images related to the various stages of 3D modelling by a TLS (Leica). The TLS scans an object, usually from different positions to enable true 3D modelling of the object of interest (e.g. a bridge,
a building, a statue). For each viewing position of the TLS we obtain a point
cloud, which can be shown as a 2D picture by colour coding the ranges to the
object's surface. We can also co-register all point clouds into one common 3D coordinate system and then fit simple geometric shapes (cylinders, spheres, cubes,
etc.) to the (X,Y ,Z) data. This way we can create CAD models and use computer aided design software, e.g. for intelligible visualisations. An even more
recent application area of ground-based laser scanning is mobile mapping and
automatic vehicle navigation.
There is also satellite laser ranging, used to measure the distance from a
ground station to a satellite with high precision. Moreover, NASA operates laser
systems on spaceborne platforms with a particular interest in ice and clouds.
There are also military/spy laser satellites orbiting.

6.3.5 Supply Market

The acceptance of ALS data is rapidly growing. Growth rates in the commercial
sector in terms of installed instrument base have been averaging about 25 percent per year since 1998, with projections for an installed instrument base of 150
- 200 sensors by 2005. There is also a growing number of value-added resellers
and product developers who include laser mapping and laser data analysis as
an integral part of their activities. The website www.airbornelasermapping.com
tries to maintain a current inventory of ALS organisations; in 2001 there were
already 70 commercial ones throughout the world. By 2004 the ALS supply
landscape has become diverse. A potential client of ALS products has the option of purchasing a total service for getting a tailor-made DSM and products derived from it, purchasing a value-added product (e.g. derived from a national DSM), purchasing a part of a national DSM and processing it further in-house, or buying ALS hardware and software and trying to make it all work.
The Netherlands was the first country to establish a DSM for the entire territory using ALS. The product (AHN) has become available for any part of the
country in 2004, with a density of at least 1 point per 16 m² and 1 point per 32 m² in
forest areas. Several other states in Europe are currently in the process of creating a countrywide high density DSM based on ALS, typically with a spacing
between points in the order of 10 m.
With the increasing availability of systems and services, researchers look into
possibilities of refined processing (e.g. further reduction of systematic errors and
elimination of blunders) and automated post-processing. The latter refers to attempts to derive a DTM from the DSM, and to classify and delineate terrain features, in particular buildings. Another research topic concerns the derivation of
terrain breaklines. Meanwhile, the manufacturers will continue to aim for higher
measuring speeds and higher laser power, so that higher density DSMs can be
produced from higher altitudes. Another trend is toward multimodal 3D systems, in particular integrating a laser scanner with an RGB scanner or a digital
camera.

Summary
In this chapter, the principles of imaging radar, interferometric SAR, laser scanning and their respective applications have been introduced. The microwave
interactions with the surface have been explained to illustrate how radar images are generated and how they can be interpreted. Radar sensors measure
distances and detect backscattered signal intensities. In radar processing, special attention has to be paid to geometric corrections and speckle reduction for
improved interpretability. Radar data have many potential applications in the
fields of geology, oceanography, hydrology, environmental monitoring, land use
and land cover mapping and change detection. The concept and applications of
INSAR were also explained, including how differential INSAR allows the detection of surface deformation.
In the second part, the principle of laser scanning and its historical development have been outlined. The principal product is a digital surface model.
Capabilities of airborne laser scanning and operational aspects have been introduced, and current trends reviewed. Typical applications of laser scanning
data are forest surveys, surveying of coastal areas and sand deserts, flood plain
mapping, power line and pipeline mapping, and 3D city modelling.

Questions
The following questions can help you to study Chapter 6.
1. List three major differences between optical and microwave remote sensing.
2. What type of information can you extract from imaging radar data?
3. What are the limitations of radar images in terms of visual interpretation?
4. What kind of processing is necessary to prepare radar images for interpretation? Which steps are obligatory and which are optional?
5. Search the Internet for successful applications of radar images from ERS-1/2, Radarsat and other sensors.
6. What are the components of an airborne laser scanning system, and what
are the operational aspects to consider in planning and executing an ALS
mission?

7. What are the key performance characteristics of ALS, and what are the
major differences with airborne microwave radar?
8. What makes ALS especially suited for the mentioned applications?

Chapter 7
Remote sensing below the ground surface

7.1 Introduction
When it comes to exploration for the resources of the solid earth, methods are needed that can detect, directly or indirectly, resources of minerals, hydrocarbons and groundwater deep within the Earth. Similar capabilities may be required at a more local scale for foundation studies or environmental pollution problems. Methods that map the Earth's surface in or near the visible spectrum are a good start, and the interpretation of surface geology (for example from RS imagery) allows inference of what may lie below. However, there are other methods that actually probe more deeply into the ground by making use of the physical or chemical properties of the buried rocks. These properties, or changes in properties from one rock-type to another, are detected by carrying out geophysical surveys with dedicated sensors such as gravimeters, magnetometers and seismometers. Interpretation of geophysical surveys, along with all other available data, is the basis for planning of further investment in exploration, such as drilling, or using more expensive, dedicated geophysical techniques to probe in more detail into the most promising locations. Applied geophysical techniques form an important part of practical earth science, and the theory and application of the available methods are set out in several introductory textbooks [27, 19, 12, 37].

7.2 Gamma-ray surveys


Gamma radiation (electromagnetic radiation of very short wavelength; see Section 2.2) arises from the spontaneous radioactive decay of certain naturally occurring isotopes. These gamma rays have sufficient energy to penetrate a few
hundred metres of air and so may be detected conveniently from a low-flying
aircraft. Their ability to penetrate rock and soil is modest, so only gamma rays
from radioactive sources within a few tens of centimetres of the ground surface
ever reach the air in significant numbers. As a result, gamma radiation mapping
is limited to the shallowest sub-surface. However, where soils are derived directly from the underlying bedrock and where bedrock outcrops exist, gamma
rays are useful in mapping large areas for their geology. Where the soil has been
deposited from distant origins, gamma radiation from the underlying bedrock
is obscured, but the method can still reveal interesting features of soil composition and origin that have, so far, been little used. Only isotopes of three elements lead to the emission of gamma rays when they undergo their radioactive decay chains. These are thorium (Th), uranium (U) and potassium (K). While potassium is often present in rocks at the level of a few per cent,
the abundance of Th and U is usually measured in only parts per million. The
energy of a gamma-ray is characteristic of its elemental source. A gamma-ray
spectrometer, therefore, not only counts the number of incoming rays (counts
per second) but also, through analysing the energy spectrum of all incoming
gamma rays, attributes gamma-rays to their source elements and so estimates
the abundances of Th, U and K in the source area. This requires suitable precautions and careful calibration. While the abundance of the radio-elements is
itself of little interest (except, of course, in uranium exploration), in practice it is
found that each rock unit has a relative abundance of Th, U and K that is distinct
from that of adjacent rock units. Hence, if the abundance of each of the three
elements is imaged as a primary colour (say, Th = green, U = blue and K = red)
with appropriate contrast-stretching and the three colours are combined in a visual display, each rock unit appears with its own characteristic hue. The changes
in the hue evident in such an image correspond to geological boundaries and so,
under favourable circumstances, gamma-ray spectrometer surveys can lead to a
kind of instant geological map (Figure 7.1).

Figure 7.1: Ternary image (K = red, Th = green, U = blue) of an area of Archean geology in NW Australia. Domes of gneisses (A) show up as bright and red-orange in colour, largely on account of their potassium content. The layers in an old sedimentary basin or syncline, cut through horizontally by erosion, are visible around B, where each layer has a different hue. Courtesy of Geoscience Australia.
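
A ternary display of this kind can be generated with a few lines of array code. The sketch below is an illustration, not taken from the textbook; it assumes the three abundance grids are available as NumPy arrays, and the percentile-based stretch stands in for whatever contrast stretch a particular processing package applies.

    import numpy as np

    def stretch(band, low=2, high=98):
        # Linear contrast stretch between two percentiles, scaled to the range 0..1.
        lo, hi = np.percentile(band, [low, high])
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    def ternary_image(k, th, u):
        # Combine K, Th and U abundance grids into an RGB ternary image
        # (K = red, Th = green, U = blue), as in Figure 7.1.
        return np.dstack([stretch(k), stretch(th), stretch(u)])

    # rgb = ternary_image(k_grid, th_grid, u_grid)   # each a 2D abundance array
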

7.3 Gravity and magnetic anomaly mapping


The Earth has a gravity field and a magnetic field. The former we experience
as the weight of any mass and its tendency to accelerate towards the centre of
the Earth when dropped. The latter is comparatively weak but is exploited, for
example, in the design of the magnetic compass that points towards magnetic
north. Rocks that have abnormal density or magnetic properties, particularly rocks lying in the uppermost few kilometres of the Earth's crust, distort the broad gravity and magnetic fields of the main body of the Earth by tiny but perceptible amounts, producing local gravity and magnetic anomalies. Careful and
detailed mapping of these anomalies over any area reveals complex patterns
that are related to the structure and composition of the bedrock geology. Both
methods therefore provide important windows on the geology, even when it
is completely concealed by cover formations such as soil, water, younger sediments and vegetation. The unit of measurement in gravimetry is the milligal
(mGal), an acceleration of 10⁻⁵ m/s². The normal acceleration due to gravity (g) is about 9.8 m/s² (980,000 mGal) and, to be useful, a gravity survey must be able
to detect changes in g as small as 1 mGal or about 1 part per million (ppm) of
the total acceleration. This may be achieved easily by reading a gravimeter at
rest on the ground surface, but is still at the limit of technical capability from a
moving vehicle such as an aircraft.
Conventional gravity surveys are ground-based and, therefore, slow and
costly; the systematic scanning of the Earth's surface by gravity survey is still confined largely to point observations that lack the continuity of coverage achievable with other geophysical methods. An exception is over the world's oceans
where radar altimetry of the sea-level surface from a satellite has been achieved
with a precision of better than 10 cm. The sea-surface is an equipotential surface
with undulations of a few metres in height attributable to gravity anomalies.
These arise from density variations in the subsurface, which, at sea, are due
mainly to the topography of the ocean floor. Mapping sea-level undulations has
therefore made possible the mapping of sea-floor topography at the scale of a
5 km pixel for all the world's oceans (Figure 7.2). In 2002 the Gravity Recovery and Climate Experiment (GRACE) satellite was launched. Built jointly by NASA and the German Aerospace Centre (DLR), GRACE is actually a tandem of two satellites equipped with GPS and a microwave ranging system, which allows gravity to be mapped globally with high accuracy in an efficient and cost-effective way.
Mapping of magnetic anomalies from low-flying aircraft has been widely
used in commercial exploration for over 50 years. The Earth's main field has a value that varies between 20,000 and 80,000 nanoteslas (nT) over the Earth's surface. At a ground clearance of 50 to 100 metres, magnetic anomalies due

Figure 7.2: Sea floor topography in the western Indian Ocean, revealed by satellite altimetry of the sea surface (GEOSAT). Note the mid-ocean ridges and the transform faults either side of them. Note also the (largely submarine) chains of volcanic islands between India and Madagascar. Scene is roughly 3800 km across (from [35]).

to rocks are usually no more than a few hundred nT in amplitude. Modern
airborne magnetometers can reliably record variations as small as 0.1 nT in an
airborne profile, about 2 parts per million of the total field. Each reading takes
only 0.1 second, corresponding to an interval of about 6 metres on the ground,
and a normal survey flight of six hours duration can collect 1500 km of profile, keeping costs low. When ground clearance is only 50 m and flight-lines are
closely spaced (200 m), a great deal of geological detail may be revealed by an
aeromagnetic survey (Figure 7.3). In the exploration of ancient terrains that have
been levelled by weathering and erosion and consequently rendered difficult to
map by conventional means, aeromagnetic surveys are invaluable in directing
ground exploration to the most promising location for mineral occurrences.

Figure 7.3: (a) Magnetic anomaly image of an area of Western Australia. Red = high magnetic values, blue = low. Granitic bodies such as A and B are distinct from the tightly-folded greenstone rocks seen at C, which have some highly magnetic and almost vertical layers within them. Faults offset the greenstones at D. Approximately E-W striking dykes (e.g., E) cut all formations. Note that faults such as D predate the emplacement of the dykes. Courtesy of Fugro Airborne Surveys. (b) Conventional geological map of the same area. Courtesy of Geoscience Australia and GSWA.

7.4 Electrical imaging


Solid rocks are normally rather resistive to the passage of electricity. The presence of water (groundwater) in pores, cracks and fissures and the electrical properties of certain minerals nevertheless allow applied currents to flow through the
large volume of the subsurface. This has been exploited in methods developed
to permit the mapping of subsurface electrical conductivity in two and three dimensions. While seldom of such regional (geological mapping) application as
gravity and magnetic methods, electrical methods have found application both
in the search for groundwater and in mineral exploration where certain ore minerals have distinctive electrical properties.
Where the ground is stratified an electrical sounding can be interpreted to reveal the layering in terms of the resistivity or conductivity of each layer. Electrical
profiling can be used to reveal lateral variations in rock resistivity, such as often
occur across fissures and faults. Ground-based methods that require physical
contact between the apparatus and the ground by way of electrodes are supplemented by so-called electromagnetic (EM) methods where current is induced
to flow in the ground by the passage of an alternating current (typically of low
audio frequency) through a transmitter coil. EM methods require no electrical
contact with the ground and can therefore also be operated from an aircraft, increasing the speed of survey and the uniformity of the data coverage. Airborne
EM surveys have been developed largely by the mineral exploration community, since many important ore bodies, such as the massive sulphide ores of the base metals, are highly conductive and stand out clearly from their host rocks
through electrical imaging (Figure 7.4). Other important ore bodies are made
up of disseminated sulphides that display an electrochemical property known
as chargeability. Mapping of chargeability variations is the objective in induced
polarization (IP) surveys.

Figure 7.4: Conductivity cross-sections through the Bushman copper sulphide ore body in Botswana, derived from airborne EM traverses flown east-west across the strike of the body. Red and yellow colours = highly conductive zones, blue = non-conductive zones. Courtesy of Fugro Airborne Surveys.

7.5 Seismic surveying


Virtually all new discoveries of oil and gas are these days made possible by
seismic imaging of the Earth's subsurface. Such surveys probably account for
over 90 per cent of the expenditure on geophysical surveys for all exploration
purposes. Seismic waves are initiated by a small explosion or a vibratory source
at the surface, in a shallow borehole or in the water above marine areas. Energy
in a typically sub-audio frequency range (10 to 100 Hz) radiates from the source
and is reflected off changes in acoustic properties of the rock, typically changes
in lithology from one stratum to the next, and is detectable from depths of
many kilometres.
By deploying a suitable array of seismic sources and receiving reflected energy at a large number of receiving stations known as geophones, an image of the
sub-surface may be built up in three dimensions. This involves processing an
enormous amount of data to correct for multiple reflections and the geometry of
the source-receiver configurations. To achieve the detail necessary for the successful siting of expensive, deep exploratory wells, most surveys now carried
out are known as 3D surveys, though isolated lines of 2D survey, typical of earlier decades, are still carried out for reconnaissance purposes in new areas. The
accuracy and precision of the seismic method in mapping the subsurface (Figure 7.5) is now sufficient not only to find trapped oil and gas but also to assess
the volume and geometry of the reservoir to plan optimum extraction strategies.
Repeated surveys during the production lifetime of a given field (time lapse
seismic) permit the draw-down to be monitored and so maximize the recovery
of the oil and gas in a field. Similar seismic technology, adapted to more modest
scales of exploration, can be applied for shallow investigations (depths of a few
tens of metres), useful in groundwater exploration and site investigation.

Figure 7.5: 3D seismic surveys map the layering in the subsurface, vital in oil exploration. The top surface of the cube is a satellite image draped on topography (20 times vertical exaggeration). Side faces show seismic sections through the underlying strata.

Summary
Geophysical methods therefore provide a wide range of possible ways of imaging the subsurface. Some are used routinely, others only for special applications. All are potentially useful to the alert geoscientist.
Gravity and magnetic anomaly mapping has been carried out for over 50 years.
While most countries have national programmes, achievements to date are somewhat variable from country to country. The data are primarily useful for geological reconnaissance at scales from 1:250,000 to 1:1,000,000. Gamma-ray spectrometry, flown simultaneously with aeromagnetic surveys, has joined the airborne
geophysical programmes supporting geological mapping in the past decade.
All three methods are therefore used primarily by national geological surveys
to support basic geoscience mapping, alongside conventional field and photogeology, and to set the regional scene for dedicated mineral and oil exploration.
It is normal that the results are published at nominal cost for the benefit of all
potential users.
Geophysical surveys for mineral exploration are applied on those more limited
areas (typically at scales 1:50,000 to 1:10,000) selected as being promising for
closer (and more expensive!) examination. Typically this might start with an
airborne EM and magnetometer survey that would reveal targets suitable for
detailed investigation with yet more expensive methods (such as EM and IP)
on the ground. Once accurately located in position (X,Y ) and depth, the most
promising anomalies can be tested further by drilling.
Groundwater exploration has historically relied on electrical sounding and profiling, but has been supplemented in some cases by EM profiling and sounding and shallow seismic surveys. Regrettably, poor funding often dictates that
such surveys are less thorough and systematic than is the case in mineral
exploration, despite the fact that drilling (especially the drilling of non-productive
boreholes!) is such an expensive item.
New technology is emerging to map and quantify the presence of water in
the ground directly using nuclear magnetic resonance (NMR). If protons, such
as those present in water, are aligned by applying a magnetic field, they precess when the applied field is turned off. Detection of the precession signal is,
therefore, a direct indication of the presence of water below the surface. Magnetic resonance sounding (MRS) and mapping promises to be a powerful tool
for efficient groundwater exploration in the future.
Oil exploration relies almost entirely on detailed seismic surveys, once their
location has been selected on the basis of all available geological and regional
geophysical data. The surveys are carried out by highly specialized contractors,
up to date with the latest technology in this complex and sophisticated industry.
Ground penetrating radar. In many circumstances, radar signals of appropriate wavelength may be used to probe several metres, or even tens of metres, into the ground. The profiling and mapping of features such as the bedrock surface, buried pipelines and cables and old building foundations is useful in the
context of highway construction, dam-site investigations and site engineering at
a scale that is very detailed in comparison to many geophysical investigations.
Cost implies that systematic application of such detailed methods to larger areas is seldom an affordable option. Nevertheless, geophysical methods have
successfully been brought to bear on a range of local problems such as archeological sites, unexploded ordnance, shipwrecks and pollution detection. In most
of these cases, methods that do not probe beyond the ground (or water) surface
are of comparatively little value.

Questions
The following questions can help you to study Chapter 7.
1. Make a list of geophysical maps (and their scales) that you are aware of in
your own country (or that part of it you are familiar with).
2. Trace the geophysical features revealed in Figure 7.3(a) on a transparent
overlay and compare your result with the geological map in Figure 7.3(b).
The following are typical exam questions:
1. Why is it necessary to use geophysical methods to explore the subsurface?
What are the limitations of visual RS methods in this respect?
2. Make a list of the physical properties of rocks that have been used as the
basis of geophysical mapping methods.
3. In the process of systematic exploration for Earth resources, why is it important to use inexpensive methods for the reconnaissance of large areas
before using more expensive methods over much smaller ones?

Chapter 8
Radiometric correction

8.1 Introduction
The previous chapters have examined remote sensing as a means of producing
image data for a variety of purposes. The following chapters deal with processing of the image data for rectification, visualization and interpretation. The first
step in the processing chain, often referred to as pre-processing, involves radiometric and geometric corrections (Figure 8.1). The radiometric aspects are dealt
with in this chapter, and the geometric aspects in the following. Three groups of
radiometric corrections are identified:
- cosmetic rectification to compensate for data errors,
- relative atmospheric correction based on ground reflectance properties, and
- absolute atmospheric correction based on atmospheric process information.
The radiance values of reflected polychromatic solar radiation and/or the emitted thermal radiance from a certain specific target (pixel) at the Earth's surface are for researchers the most valuable information obtainable from an RS scanner. In the absence of an atmosphere, these radiances leaving the ground would reach the orbiting sensor practically unaltered at any wavelength; in other words, what is recorded by the satellite would directly correspond to the radiance leaving the target on Earth in the wavelength range (band) under consideration.

Figure 8.1: Image pre-processing steps. Pre-processing comprises radiometric corrections (cosmetic corrections and atmospheric corrections) and geometric corrections.

8.2 From satellite to ground radiances: the atmospheric correction
The presence of a heterogeneous, dense and layered terrestrial atmosphere composed of water vapour, aerosols and gases disturbs the signal reaching the sensor
in many ways. Therefore, methods of atmospheric correction (AC) are needed to clean the images of these disturbances, in order to allow the retrieval of
pure ground radiances from the target. The physics behind the AC techniques
in the visible and in the thermal range is essentially the same, meaning that the
same AC procedures applicable in one also apply to the other. However, there
are a number of reasons and facts that allow a distinction between techniques
applicable to visible and thermal data:
- Incident and reflected solar radiation and terrestrial thermal emission fall into very different parts of the spectrum.
- Solar emission and reflection depend on the position of the sun and the satellite at the moment of image acquisition. Thermal emission is theoretically less dependent on this geometry.
- Solar rays travel twice through the atmosphere before they reach the sensor (Top of the Atmosphere [TOA] - ground - sensor), whereas ground thermal emissions only pass the atmosphere once (ground - sensor; see Figure 8.2).
- Solar reflection from earth surface materials is a function of reflectivity (ρ). However, thermal emission from earth materials depends on their emissivity (ε). The reflective and emissive behaviours are related only at monochromatic level, as described by Kirchhoff's law (see Sections 13.2.2 and 13.4). Since solar reflection and earth thermal emission occur in different wavelengths, the behaviour of one is not an indication of the other.
Figure 8.2: Solar radiation travels twice through the atmosphere before reaching the sensor, whereas emitted thermal energy from the ground target passes the atmosphere only once (incoming, reflected and emitted EMR paths are shown).
- The processes of atmospheric attenuation, i.e. scattering and absorption, are both wavelength dependent, and affect the two sectors of the spectrum differently.
- Because of the previous point, AC techniques are applied at monochromatic level (individual wavelengths). This means that attenuation of energy is calculated at every individual wavelength and then integrated over the spectral band of the sensor by mathematical integration.
- Atmospheric components affect different areas of the spectrum in different
ways, meaning that some components can be neglected when dealing with
thermal or visible imagery.
A classification of different AC methods allows us to assess what kind of effort needs to be applied to the imagery, in light of the application or objective of a given project. Some RS applications do not require AC procedures at all, except for some cosmetics, while others call for rigorous and complex procedures. Many applications require intermediate solutions.
In general, applications where the actual radiance at ground level is not
needed do not require atmospheric correction. Some cosmetic and/or image enhancement procedures may suffice. Among them are cartographic applications, where image geometry is of principal importance, and qualitative visual interpretation.
On the other hand, applications requiring the quantification of fluxes at ground level must include rigorous atmospheric correction procedures. Quantification of evapotranspiration or CO2 sequestration, and surface temperature and reflectivity mapping, are examples.
In between there are applications concerned with the evolution of certain parameters or land properties over time, rather than their absolute quantification.
In those cases knowledge of the relative trend may suffice. These procedures
apply mainly when the mapping parameters do not really have a meaningful
physical value, simply because they were designed primarily for multitemporal
relative comparison. Index evolution and correlation procedures, where radiances are associated with the evolution of a certain parameter (e.g. turbidity), are
examples of this category. Be aware that some indexes such as NDVI typically
require some absolute atmospheric correction.
Nowadays the effort required is synonymous with the amount of information required to describe the components of the atmosphere at different altitudes
(atmospheric profiling) at the moment and position the image is taken, and less
so with the sophistication of the AC procedure itself. Current state-of-the-art atmospheric models allow the cleaning of any cloudless image regardless of the sensor type, as long as atmospheric profile data are available. Unfortunately, such
detailed atmospheric information can only be obtained through atmospheric
sounding procedures, consisting of a series of instruments able to sample the
atmosphere at fixed intervals while transported vertically by a balloon. This
kind of profiling is carried out daily at some atmospheric centres at fixed times,
regardless of the satellite overpass time. However, the atmosphere is dynamic.
Atmospheric processes and composition change rapidly, mainly at low altitude
(water vapour and aerosols), meaning that a sounding made somewhere close to
the target and near the time of the satellite overpass might not be enough to
ensure an adequate atmospheric description. As a rule of thumb regarding AC
techniques, first consider the objectives of the project, then identify the appropriate AC procedure, and finally establish the effort, i.e. the required information
to execute the chosen correction procedure.

8.3 Atmospheric correction in the visible part of the spectrum
As mentioned earlier, the signal correction methods in the visible part of the
spectrum (solar radiation) can be grouped according to the rigour of the final
product required by the application. In increasing order of difficulty, the following methods are discussed below:
- Cosmetic corrections;
- Relative AC methods based on ground reflectance properties;
- Absolute AC methods based on atmospheric process information.

8.3.1 Cosmetic corrections

These procedures are not true AC techniques. Their objective is to correct visible errors and noise in the image data. No atmospheric model of any kind is
involved at all in these correction processes; instead, corrections are achieved
using especially designed filters and image stretching and enhancement procedures. Nowadays these corrections are typically executed (if required) at the
satellite data receiving stations or image pre-processing centres, before reaching
the final user. All applications require this form of correction. True AC methods,
if required, follow these cosmetic modifications.
Typical problems requiring cosmetic corrections are:
- Periodic line dropouts;
- Line striping;
- Random noise or spike noise.
These effects can be identified visually and automatically, and are here illustrated on a Landsat Enhanced Thematic Mapper (ETM) image of Enschede
(Figure 8.3).

Figure 8.3: Original Landsat ETM image of Enschede and environs (a), and corresponding Digital Numbers (DN) of the indicated subset (b).

Periodic line dropouts
Periodic line dropouts occur due to recording problems when one of the detectors of the sensor in question either gives wrong data or stops functioning. The
Landsat ETM, for example, has 16 detectors in all its bands, except the thermal
band. A loss of one of the detectors would result in every sixteenth scan line being a string of zeros that would plot as a black line on the image (see Figure 8.4).
Figure 8.4: The image with periodic line dropouts (a) and the DN-values (b). All erroneous DN-values in these examples are shown in bold.

The first step in the restoration process is to calculate the average DN-value
per scan line for the entire scene. The average DN-value for each scan line is then
compared with this scene average. Any scan line deviating from the average by
more than a designated threshold value is identified as defective. In regions
of very diverse land cover, better results can be achieved by considering the
histogram for sub-scenes and processing these sub-scenes separately.
The next step is to replace the defective lines. For each pixel in a defective
line, an average DN is calculated using DNs for the corresponding pixel in the
preceding and succeeding scan lines. The average DN is then substituted for
the defective pixel. The resulting image is a major improvement, although every
sixteenth scan line (or every sixth scan line, in case of Landsat MSS data) consists
of artificial data (see Figure 8.5). This restoration procedure is equally effective for
random line dropouts that do not follow a systematic pattern.
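
The detection and replacement steps just described translate directly into array operations. The following is a minimal sketch (not from the original text), assuming the band is held in a two-dimensional NumPy array of DN-values; the deviation threshold is a hypothetical parameter that would be tuned per scene, and sub-scene processing is omitted.

    import numpy as np

    def repair_line_dropouts(band, threshold=20.0):
        # band: 2D array of DN-values; threshold: allowed deviation of a line
        # mean from the scene mean before the line is flagged as defective.
        out = band.astype(float).copy()
        scene_mean = out.mean()
        line_means = out.mean(axis=1)
        defective = np.abs(line_means - scene_mean) > threshold
        for row in np.where(defective)[0]:
            above = out[row - 1] if row > 0 else out[row + 1]
            below = out[row + 1] if row < out.shape[0] - 1 else out[row - 1]
            # replace each pixel by the average of the corresponding pixels
            # in the preceding and succeeding scan lines
            out[row] = 0.5 * (above + below)
        return out
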
Figure 8.5: The image after correction for line dropouts (a) and the DN-values (b).

Line striping
Line striping is far more common than line dropouts. Line striping often occurs
due to non-identical detector response. Although the detectors for all satellite
sensors are carefully calibrated and matched before the launch of the satellite,
with time the response of some detectors may drift to higher or lower levels.
As a result, every scan line recorded by that detector is brighter or darker than
the other lines (see Figure 8.6). It is important to understand that valid data are
present in the defective lines, but these must be corrected to match the overall
scene.
Figure 8.6: The image with line striping (a) and the DN-values (b). Note that the destriped image would look similar to the original image.

Though several procedures can be adopted to correct this effect, the most popular is histogram matching. Separate histograms corresponding to each
detector unit are constructed and matched. Taking one response as standard, the
gain (rate of increase of DN) and offset (relative shift of mean) for all other detector units are suitably adjusted, and new DN-values are computed and assigned.
This yields a destriped image in which all DN-values conform to the reference
level and scale.
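
A simplified, linear version of this adjustment can be sketched as follows (an illustration, not the textbook's exact algorithm): the mean and standard deviation of the lines recorded by each detector are matched to those of a reference detector, which amounts to one gain and one offset per detector. Full histogram matching would match the complete cumulative histograms rather than only these two moments. The 16-detector layout of Landsat ETM is used as the default.

    import numpy as np

    def destripe(band, n_detectors=16):
        # Rows 0, n, 2n, ... are assumed to come from detector 0, rows 1, n+1, ...
        # from detector 1, and so on. Detector 0 is taken as the reference.
        out = band.astype(float).copy()
        ref = out[0::n_detectors]
        ref_mean, ref_std = ref.mean(), ref.std()
        for d in range(1, n_detectors):
            rows = out[d::n_detectors]
            gain = ref_std / rows.std()              # adjust the spread of DN-values
            offset = ref_mean - gain * rows.mean()   # shift the mean
            out[d::n_detectors] = gain * rows + offset
        return out
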

Random noise or spike noise
The periodic line dropouts and striping are forms of non-random noise that may
be recognized and restored by simple means. Random noise, on the other hand,
requires a more sophisticated restoration method such as digital filtering.
Random noise or spike noise may be due to errors during transmission of
data or to a temporary disturbance. Here, individual pixels acquire DN-values
that are much higher or lower than the surrounding pixels (Figure 8.7). In the
image these pixels produce bright and dark spots that interfere with information
extraction procedures.
Spike noise can be detected by mutually comparing neighbouring pixel values. If a pixel value differs from those of its neighbours by more than a specified threshold margin, it is designated as spike noise and its DN is replaced by an interpolated DN-value (based on the values of the surrounding pixels).
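
As a sketch of this idea (again illustrative, not prescribed by the text), the comparison with neighbouring pixels can be implemented with a median filter: a pixel deviating from the median of its 3x3 neighbourhood by more than a chosen margin is flagged and replaced by that median, which serves as the interpolated value.

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_spikes(band, threshold=50.0):
        band = band.astype(float)
        local_median = median_filter(band, size=3)        # median of 3x3 neighbourhood
        spikes = np.abs(band - local_median) > threshold  # spike detection
        out = band.copy()
        out[spikes] = local_median[spikes]                # replace by interpolated value
        return out
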
Figure 8.7: The image with spike errors (a) and the DN-values (b).

8.3.2 Relative AC methods based on ground reflectance

Relative AC methods avoid the evaluation of atmospheric components of any


kind. They rely on the fact that for one image band/channel the relation between
the radiances at TOA and at ground level follows a linear trend for the variety
of earth features present in the image. This linear relation is in fact an approximation to reality, but precise enough for practical applications where there are other, more important sources of error. The methods are:
Two reflectance measurements: The output of this method is an absolute atmospherically corrected image, so it can be used on an individual basis, for multitemporal comparison or parameter evolution, and for flux quantification. Absolute means that the image output has physical units, and the calculated ground
radiances are compatible with the actual atmospheric constituents. The application of this method requires the use of a portable radiometer able to measure in
the same wavelength range as the image band to be corrected. If many bands
are to be corrected, then the radiometer should have filters allowing the measurements in all these individual bands separately.
The procedure is straightforward. Prior to the satellite pass, some bright and
dark sample areas in the image, preferably having the size of more than 3 pixels, are selected. It is not necessary that these targets are reflective invariant,
although it is preferable if they are of uniform land cover. Reflective invariant
areas are those that retain their reflective properties over time. Deep reservoir
lakes, sandy beaches or deserts, open quarries, large salt deposits, big asphalted
areas, are examples. A uniform wheat field is not reflective invariant (i.e. it will
change reflection after harvest), but it can still be used for the two reflectance
measurements method. Another fundamental condition is that these pixels are
unambiguously recognizable in the image. During the satellite pass, the reflectance
of these targets is measured in all bands in the field with the radiometer. After
image acquisition the bands are individually calibrated, and the TOA radiance
is read for these sample pixels. By plotting the ground radiance as a function of the
TOA radiance for the target pixels, a linear equation expressing the AC linearity
is found. Then, this equation is applied to the whole original TOA image to obtain the corrected image.
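
In code, the essence of the method is a straight-line fit followed by its application to the whole band. The sketch below is illustrative only; the sample values are hypothetical and would come from the field radiometer and the calibrated TOA image.

    import numpy as np

    def two_reflectance_correction(toa_band, toa_samples, ground_samples):
        # Fit L_ground = a * L_toa + b from the bright and dark sample targets,
        # then apply the line to every pixel of the TOA band.
        a, b = np.polyfit(np.asarray(toa_samples, float),
                          np.asarray(ground_samples, float), deg=1)
        return a * toa_band.astype(float) + b

    # e.g. corrected = two_reflectance_correction(band,
    #                                             toa_samples=[12.0, 95.0],
    #                                             ground_samples=[8.5, 88.0])
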
Two reference surfaces: The output of this method is an image whose reflectance is made compatible with the atmosphere of a similar image taken on another, earlier date. No absolute values of radiances are obtained in either of the
two images, only allowing comparative results. This method works on an individual band/channel basis, and is valid to establish a uniform comparison basis
to study, for example, the evolution of non-flux related parameters like indexes,
or when certain convenient land properties can be derived directly or indirectly
from the normalized radiance values in a band. The method relies on the existence of at least one dark and one bright invariant area. Normally a sizable area
should avoid mixed pixels (mixed land cover). As a rule of thumb it should be a
minimum of 2 or 3 times larger than the image spatial resolution. Reflective invariant areas are considered to retain their reflective properties over time. Deep
reservoir lakes, sandy beaches or deserts, open quarries, large salt deposits, big
asphalted areas, are examples. A uniform wheat field is not reflective invariant
(i.e. it will change reflection after harvest), but it can still be used for the two
reflectance measurements method. It is supposed that for these pixels the reflectance should always be the same, since their composition keeps the reflective
properties with time. If a difference in reflectance occurs at the reflective invariant area in the two date images, it can only be attributed to the different state
of the atmosphere on these dates. The atmospheric composition is unknown in
the two images, but their influence is measurable by calculating the difference
in radiance between the two dates on the reflective invariant areas.
The procedure defines one image as master and the other as slave. The reflectance of an invariant pixel in the master image on date 1 is ρ_M, and the reflectance of the same pixel in the slave image on date 2 is ρ_S. If the atmosphere is the same, then ρ_M = ρ_S. If not, the difference in reflectance produced by the distinct atmospheric conditions can be measured as Δρ = ρ_M - ρ_S. Δρ_i can be calculated for all i invariant reflective pixels found in the image, although two extremes (bright and dark), Δρ_b and Δρ_d, will suffice. A linear relation, Δρ = a·ρ_S + b, is built from these extreme invariant pixels. This equation, Δρ = f(ρ_S), can then be used to correct the slave image to ensure that the invariant pixels match the reflectance of the master image. This is done in a new slave image, ρ'_S = ρ_S + Δρ = ρ_S + a·ρ_S + b. In the end ρ'_S is an image showing the reflection of the second-date image as if it had the atmosphere of the first. In other words, the atmosphere of the master image is artificially imposed on the slave image, so that the evolution of reflectance can now be compared in both images, since they have the same atmosphere.
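
Expressed in array code, the normalisation might look as follows (a sketch only; variable names are illustrative). The differences Δρ for the invariant targets are regressed against ρ_S, and the fitted line is then applied to every pixel of the slave band.

    import numpy as np

    def normalise_to_master(slave_band, slave_invariant, master_invariant):
        rho_s = np.asarray(slave_invariant, float)    # invariant-target reflectances, slave
        rho_m = np.asarray(master_invariant, float)   # same targets in the master image
        a, b = np.polyfit(rho_s, rho_m - rho_s, deg=1)   # drho = a * rho_S + b
        # rho'_S = rho_S + drho = rho_S * (1 + a) + b, applied to the whole band
        return slave_band.astype(float) * (1.0 + a) + b
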

8.3.3 Absolute AC methods based on atmospheric processes

These methods require a description of the components in the atmospheric profile. The output of these methods is an image that matches the reflectance of the
ground pixels with a maximum estimated error of 10%, if the atmospheric profiling is adequate. This image can be used for flux quantifications, parameter
evolution assessments, etc., as mentioned above. The advantage of these methods is that ground reflectance can be evaluated under any atmospheric condition, altitude and relative geometry between sun and satellite. The disadvantage
is that the atmospheric profiling required for these methods is rarely available.
Given this limitation, a sub-classification of absolute AC methods can be based on the accuracy of the method relative to the effort in obtaining the required data.

Table 8.1: Characteristics of LOWTRAN/MODTRAN and 6S RTMs (http://stratus.ssec.wisc.edu).

Numerical approximation method(s):
  LOWTRAN/MODTRAN: two-stream, including atmospheric refraction; discrete ordinates also in MODTRAN3.
  6S: successive orders of scattering.
Spectral resolution:
  LOWTRAN/MODTRAN: 20 cm⁻¹ (LOWTRAN); 2 cm⁻¹ (MODTRAN).
  6S: 10 cm⁻¹, shortwave only.
Clouds:
  LOWTRAN/MODTRAN: eight cloud models; user-specified optical properties.
  6S: no clouds.
Aerosols:
  LOWTRAN/MODTRAN: four optical models.
  6S: six optical models plus user-defined.
Gas absorption:
  LOWTRAN/MODTRAN: principal and trace gases.
  6S: principal and trace gases.
Atmospheric profiles:
  LOWTRAN/MODTRAN: standard and user-specified.
  6S: standard and user-specified.
Surface characteristics:
  LOWTRAN/MODTRAN: Lambertian, no built-in models.
  6S: Lambertian spectral albedo models built-in; bidirectionally reflecting surface possible.
Primary output parameter:
  LOWTRAN/MODTRAN: radiance.
  6S: radiance/reflectance.
User interface:
  LOWTRAN/MODTRAN: formatted input file.
  6S: input file.
Radiative transfer models
Radiative transfer models can be used for computing either radiances (intensities) or irradiances (fluxes) for a wide variety of atmospheric and surface conditions. They require a full description of the atmospheric components at fixed
altitudes throughout the atmosphere. LOWTRAN ([20]), MODTRAN ([2]), Code
5S ([36]) and 6S ([41]) are all reference RTMs. MODTRAN is becoming the standard for research studies in both the thermal and the visible parts of the spectrum. LOWTRAN was developed first, but was later superseded by MODTRAN. Code 6S
is perhaps the most complete in the visible spectrum. Table 8.1 shows some
characteristics of these physically based RTMs.
The use of RTMs is not limited to mere calculation of reflectance and radiances at ground level. Since they operate with a wide variety of parameters, they
also allow the study of optics in many other fields of science. The complexity
of the actual internal physical solutions provided by these models is beyond the
scope of this publication.
Data input to RTMs Although a detailed description of the physical parameters involved in these models is outside the scope of this book, a description of the type of input gives some indication of the effort required to run them.
The user must select:
- Options that allow a more accurate calculation of molecular absorption in the presence of multiple scattering;
- The type of atmospheric path;
- The kind of operating mode (output) desired;

- The kind of model atmosphere: a selection of standard atmospheres (explained below) or a user-defined one;
- The temperature and pressure profile;
- The altitude profiles of water vapour, ozone, methane, nitrous oxide, carbon monoxide and other gases;
- Finally, the altitude and the sensor type.
RTMs are relatively easy to use when the complexity of the atmospheric input is
simplified by using one standard atmosphere as input.
A more detailed description of these inputs, as well as detailed examples of
RTM calculations, are available in a separate, more technical text prepared by
the author of this chapter.
Band transmission models adapted for RS image processing RTMs involve extensive AC calculations and are not normally used for image processing. Instead, they are used in individual cases (e.g. to study a target with certain characteristics), or for research studies where different RTM scenarios are constructed to assess the multitude of possible combinations of reflectance, viewing
geometry and atmospheric conditions.
The results of those investigations can be used to build look-up tables (LUT databases), with which users can estimate the ground reflectance of a certain target without actually involving a complex and time-consuming RTM. This process is as accurate as the RTM used to build the LUT, and several times faster. It
is used in models built especially for image processing, such as ATCOR ([31]),
implemented in ERDAS Imagine. The advantage of these models is not only
speed. The range of available options is very much focused on the problems
routinely occurring during image pre-processing, for example where the atmospheric description is unknown for the image being processed.
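
The mechanics of such a LUT can be illustrated as follows (a sketch under assumed, simplified inputs; the axes, parameter names and table values are placeholders, not the output of any real RTM). In practice the table is filled by running the RTM over a grid of atmospheric states, and the interpolation then replaces a per-pixel RTM call.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder LUT axes: ground reflectance as a function of observed TOA
    # reflectance, aerosol optical depth (AOD) and total water vapour (WV).
    toa_axis = np.linspace(0.0, 0.6, 13)
    aod_axis = np.array([0.05, 0.1, 0.2, 0.4])
    wv_axis = np.array([0.5, 1.0, 2.0, 4.0])     # column totals, g/cm^2
    lut = np.zeros((13, 4, 4))                   # to be filled from RTM runs

    ground_reflectance = RegularGridInterpolator((toa_axis, aod_axis, wv_axis), lut)

    # Applying it to a whole band (2D array `toa`) for one atmospheric state:
    # pts = np.stack([toa.ravel(),
    #                 np.full(toa.size, 0.2),    # AOD
    #                 np.full(toa.size, 1.5)],   # WV
    #                axis=1)
    # rho_ground = ground_reflectance(pts).reshape(toa.shape)
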
An alternative approach conveniently built into the software is the possibility of working backwards from Earth to satellite. Materials on Earth have
unique spectral signatures that are normally distorted by atmospheric interaction along the path towards the satellite. If the material of an observed feature
is known (in a certain recognizable pixel), then the real spectral signature of this
feature can be retrieved from a database of material reflectance, normally included in the software (see also Section 14.1). Due to atmospheric interferences
the satellite reflectance does not match the database information. The model
then provides tools to the user to modify the characteristics of the atmosphere
within certain limits until the TOA reflectance matches the database reflectance.
At this stage the user manages to recreate the atmosphere that produces the
distortion. This atmosphere is applied to the entire image using the LUT in the
software, performing a fast atmospheric correction without the need to measure atmospheric components.
Other simplified RTMs for image processing still rely on the atmospheric
description, but the load of necessary input information is reduced to a few standard parameters more widely measured. The algorithms inside these models
normally make some restrictive assumptions (e.g. related to the spatial resolution of the satellite) that allow a faster calculation, while keeping the error associated with these assumptions at tolerable values, normally reported in the software
documentation.
As an example, SMAC ([30]) is a simplified version of Code 5S and 6S. It
was originally designed for NOAA AVHRR imagery, and has been extended to
include some high-resolution satellites. It still requires information on ozone,
aerosols and water vapour, but in terms of total amounts in vertical columns of
atmosphere, while detailed profiles are not necessary. This information is more
widely available, since it can be synthesized using sun photometers that are operated at ground stations in many countries around the world, and produce the
necessary information on an hourly basis. The SMAC interface is available from
the author, and it has proved easily adaptable for rapid image processing.
Standard atmospheres Due to the rapid dynamics of the atmosphere in terms
of temporal and spatial variation of its constituents, researchers have found the
need to define some often-observed common profiles corresponding to average atmospheric conditions in different parts of Earth. Compilation of these
fixed atmospheres was based on actual radiosoundings carried out at different research sites, resulting in so-called standard atmospheres, for example for
midlatitude summer, midlatitude winter, tropical, desert, arctic, US standard,
etc. Researchers use these well described standards to characterize the typical
on-site atmospherics. RTMs have these standards built into the system, allowing
the influence of different constituents to be compared under strict simulations.
For instance, the influence of water vapour in the thermal, or of aerosols and
air molecules in the visible part of the spectrum can be accurately predicted for
different atmospheres, allowing sensitivity analyses to evaluate the importance
of these constituents in attenuation processes at different wavelengths.
Figure 8.8 shows four standard atmosphere profiles. The left-hand graph of each figure shows the variation of temperature [K] and pressure [mb] with altitude, for the atmosphere up to a height of 100 km. The right-hand graph shows the variation of ozone and water vapour density with altitude on a logarithmic scale [g/cm³]. Note that in general the pressure profile is similar in all atmospheres.
Temperature profiles also have a similar shape, but the absolute values are very



different, showing the importance of a good profile selection when analysing
atmospheric influences in thermal imagery. Ozone is mainly concentrated between 15 to 30 km, where most of the attenuation of the ultraviolet takes place.
Water vapour is concentrated in the lower atmosphere in all cases, with a maximum close to the Earth surface.


[Figure 8.8: Model atmospheric profiles for midlatitude summer (a), midlatitude winter (b), US 62 standard (c) and tropical (d). Each panel shows temperature [K] and pressure [mb] versus altitude on the left, and ozone and water vapour densities versus altitude on a logarithmic scale on the right.]
Summary
Radiometric corrections can be divided into relatively simple cosmetic rectifications and atmospheric corrections. The cosmetic modifications are useful
to reduce or compensate for data errors.
Atmospheric corrections constitute an important step in the pre-processing
of remotely sensed data. Their effect is to re-scale the raw radiance data provided by the sensor to reflectance by correcting for atmospheric influence. They
are important for generating image mosaics and for comparing multitemporal
remote sensing data, but are also a critical prerequisite for the quantitative use of
remote sensing data, for example to calculate surface temperatures (see Chapter 13), or to determine specific surface materials in spectrometer data (Chapter 14).
Following an overview of different techniques to correct data errors, absolute correction methods were explained. Focus was placed on radiative transfer
models. Note that such corrections should be applied with care, and only after
understanding the physical principles behind these corrections.


Questions
The following questions can help to study Chapter 8.
1. Should radiometric corrections be performed before or after geometric corrections, and why?
2. Why is the effect of haze more pronounced in shorter wavelength bands?
3. In a change detection study, if there were images from different years but
from the same season, would it still be necessary to perform atmospheric
corrections? Why or why not?
4. Why is the selection of the appropriate standard atmosphere for a RTM
critical?
The following are typical exam questions:
1. Which are the radiometric errors that can be introduced due to malfunctioning of satellite sensors?
2. What are the differences between line dropouts and line striping?


Chapter 9
Geometric aspects


9.1 Introduction
Chapters 4, 5 and 6 described some of the geometric characteristics of different
sensor and platform systems, and explained how these different characteristics
affect the geometry of the resulting image products. For example, Chapter 4
outlined how the tilt and roll of an aircraft during a photo mission causes gradual changes in scale across an aerial photograph. These distortions are quite
different from those caused by the rotation of the Earth during the scanning of
an image with a spaceborne multispectral scanner (Chapter 5), or from the effects of viewing the terrain under an oblique angle with side-looking airborne radar
(Chapter 6). After considering these different geometric characteristics and correcting for their effects, we are able to use remotely sensed data to:
1. Derive 2-dimensional (X,Y ) and 3-dimensional (X,Y ,Z) coordinate information.
One of the most important applications of remote sensing is to use photographs or images as an intermediate product from which maps are made.
2D geometric descriptions of objects (points, lines, areas) can be derived
relatively easily from a single image or photo through the process of georeferencing. More complicated orientation procedures have to be applied
before 3D geometric descriptions (2.5D terrain relief, 3D objects as volumes) from overlapping, stereoscopic pairs of images or photos can be
derived, or if the relief displacement is too large to be tolerated.
2. Visualize the image data in a GIS environment.
Sometimes it can be helpful to combine image data with vector data, for
example to study cadastral boundaries in the context of land cover. In such
cases, the raster image data form a backdrop for the vector data. To enable

such integration, the image data must be georeferenced to the coordinate
system of the vector data.
3. Merge different types of image data in a GIS for integrated processing
and analysis.
Instead of using them to make maps, image data may be incorporated directly as raster layers in a GIS. For example, in a monitoring project it
might be necessary to compare land cover maps and classified multispectral Landsat and SPOT data of various dates. All these different data sets
need to be georeferenced to the same geometric grid of a given coordinate
system before they can be processed simultaneously. That means that the
pixels of all raster data sets (rasterized maps and image data) have to be
geocoded. This is a rather complex process in which the various sets of
georeferenced pixels are resampled, i.e. rearranged to a common system
of columns and rows.
Traditionally, the subjects of georeferencing and orientation have been addressed
by photogrammetry. This is the discipline concerned with making spatial measurements on images and with creating accurate products from those measurements,
for example Digital Terrain Models (DTMs). Today, not only aerial photographs
but also various other types of image data are used in the field of digital photogrammetry. Most digital image processing software includes at least simple
routines for georeferencing and geocoding. Deriving 3D measurements from
radar data is the focus of the related disciplines of interferometry and radargrammetry (see Chapter 6).
The remainder of this chapter is divided into two sections. Section 9.2 outlines the procedures used to determine 2D coordinates from photos and images.
As noted above, this is a relatively simple procedure in which a transformation
between two 2D coordinate systems (image and terrain) is determined. However, it is only applicable if the relief displacement is small enough to be neglected. Application of this transformation enables the terrain coordinates of all
image points subsequently to be determined. Note that the general concept of
spatial referencing and map projections are introduced in the Principles of Geographic Information Systems textbook.
Section 9.3 discusses the more complex procedures used to deal with the effects of relief differences in the terrain and to determine 3D coordinates from
stereo pairs. These procedures include: monoplotting, which is an approach to correct for terrain relief during digitizing of terrain features from aerial photos and results in relatively accurate (X,Y) coordinates; orthoimage production, which is an approach to correct image data for terrain relief and store the image in a specific map projection (orthoproducts can be used as a backdrop to other data, or used to directly determine the (X,Y) geometry of the features of interest); and stereoplotting, which is used to extract 3D information from stereo pairs, i.e. partially overlapping images, which can be viewed and measured in three dimensions. Examples of 3D products include large-scale databases used for topographic and cadastral mapping, and DTMs, acquired for the design of civil engineering projects.



9.2 Two-dimensional approaches

[Figure 9.1: Coordinate system of the image defined by rows and columns (a), and map coordinate system with x- and y-axes (b).]

In this section we consider the geometric processing of image data in situations where relief displacement can be neglected. An example of such image data is a digitized aerial photograph of a flat area (for practical purposes, flat may be considered as Δh/H < 1/1000, though this also depends on project accuracy requirements). For images taken from space with only a medium resolution, the relief displacement is usually less than a few pixels and thus less important, as long as near-vertical imagery is acquired. These data are stored in a column-row
system in which columns and rows are indicated by index i and j respectively.
The objective is to relate the image coordinate system to a specific map coordinate system (Figure 9.1).


9.2.1 Georeferencing

The simplest way to link an image to a map projection system is to use a geometric
transformation. A transformation is a function that relates the coordinates of two
systems. A transformation relating (x, y) to (i, j) is typically defined by linear
equations, such as x = 3 + 5i and y = 2 + 2.5j.
Using the above transformation, for example, image position (i = 3, j = 4)
relates to map coordinates (x = 18, y = 12). Once such a transformation has been
determined, the map coordinates for each image pixel can be calculated. Images
for which such a transformation has been carried out are said to be georeferenced.
It allows the superimposition of vector data and the storage of the data in map
coordinates when applying on-screen digitizing. Note that the image in the case
of georeferencing remains stored in the original (i, j) raster structure, and that
its geometry is not altered. As we will see later on, transformations can also
be used to change the actual shape of imagery, and thus make it geometrically
equivalent to a true map.
The process of georeferencing involves two steps: selection of the appropriate type of transformation, and determination of the transformation parameters.
The type of transformation depends mainly on the sensor-platform system used.
For aerial photographs (of a flat terrain) usually a so-called perspective transformation is used to correct for the effect of pitch and roll (Section 3.3.1). A more
general type is the polynomial transformation, which enables 1st , 2nd to nth order
transformations. In many situations a 1st order transformation is adequate. A
1st order transformation relates map coordinates (x, y) with image coordinates
(i, j) as follows:

x = a + bi + cj        (9.1)

y = d + ei + fj        (9.2)


Equations 9.1 and 9.2 require that six parameters (a . . . f ) be determined. This is
in fact a linear transformation and is sometimes not considered a true polynomial transformation; higher-order polynomial transformations require more parameters. The transformation parameters can be determined by means of ground control points (GCPs).
GCPs are points that can be clearly identified in the image and in a source that
is in the required map projection system. One possibility is to use topographical maps of an adequate scale. The operator then needs to identify identical
points on both sources e.g. using road crossings, waterways, typical morphological structures, etc. Another possibility is to identify points in the image and
to measure the coordinates in the field by satellite positioning. It is important
to note that it can be quite difficult to identify the GCPs in the imagery, especially in lower-resolution satellite data. To solve the above equations, only three
GCPs are required; however, since the transformation parameters are calculated
based on all points, selecting more than is strictly necessary is preferable. We
also require more points to calculate the error of the transformation. For an
independent error assessment, some GCPs should be excluded from the transformation. The coordinates of those points can then be compared with the final,
transformed image. Refer to the Principles of Geographic Information Systems book
for a more detailed discussion on data quality assessment.
Table 9.1 provides an example in which 5 GCPs have been used. Each GCP
with (X, Y ) is identified in the image at location (i, j). A least-squares adjustment then determines the transformation parameters shown in Equations 9.1
and 9.2. With these parameters the new coordinates (xc , yc ) can be calculated for
each image pixel:
xc = 902.76 + 0.206i + 0.051j,


and yc = 152.579 - 0.044i + 0.199j.

Table 9.1: A set of five ground control points, which are used to determine a 1st order transformation. xc and yc are calculated using the transformation; dx and dy are the residual errors.

GCP     i     j     x     y      xc        yc        dx       dy
1     254    68   958   155   958.552   154.935    0.552   -0.065
2     149    22   936   151   934.576   150.401   -1.424   -0.599
3      40   132   916   176   917.732   177.087    1.732    1.087
4      26   269   923   206   921.835   204.966   -1.165   -1.034
5     193   228   954   189   954.146   189.459    0.146    0.459
For example, for the pixel corresponding to GCP 1 (i=254 and j=68) we can
calculate new coordinates xc and yc as 958.552 and 154.935, respectively. These
values deviate slightly from the measured coordinates of the GCPs, because the
least-squares adjustment provides the best overall fit. The errors that remain
after the transformation are called residual errors, and are listed in the table as dx
and dy . Their magnitude is an indicator of the quality of the transformation. The
residual errors can be used to analyse which GCPs have the largest contribution
to the errors. This may indicate, for example, a GCP that has been inaccurately
identified.
The overall accuracy of a transformation is usually expressed by the Root
Mean Square Error (RMS error), which calculates a mean value from the individual residuals. The RMS error in x-direction, mx , is calculated using the following
equation:
mx = sqrt( (1/n) · Σ dxi² ),  where the sum is taken over the residuals dxi of all n GCPs.        (9.3)



For the y-direction, a similar equation can be used to calculate my . The overall
error, mtotal , is calculated by:
mtotal = sqrt( mx² + my² ).        (9.4)

For the example data set given in Table 9.1, the residuals were calculated.
The respective mx , my and mtotal are 1.159, 0.752 and 1.381. The RMS error is
a quantitative method to check the accuracy of the transformation. However,
the RMS error does not take into account the spatial distribution of the GCPs.
It cannot, therefore, tell us which parts of the image are accurately transformed.
Furthermore, the RMS error is only valid for the area that is bounded by the
GCPs. In the selection of GCPs, therefore, points should be well distributed and
include locations near the edges of the image.
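To make this procedure concrete, a minimal Python/NumPy sketch is given below. It is an illustration only (the variable names and the use of NumPy are our own choices, not prescribed by any particular software package); it fits the six parameters of Equations 9.1 and 9.2 to the GCPs of Table 9.1 by least-squares adjustment and computes the residual errors and the RMS errors of Equations 9.3 and 9.4.

import numpy as np

# Ground control points from Table 9.1
i = np.array([254., 149., 40., 26., 193.])    # image column index i
j = np.array([ 68.,  22., 132., 269., 228.])  # image row index j
x = np.array([958., 936., 916., 923., 954.])  # map x coordinate
y = np.array([155., 151., 176., 206., 189.])  # map y coordinate

# Design matrix for the 1st order transformation of Equations 9.1 and 9.2
A = np.column_stack([np.ones_like(i), i, j])

abc, *_ = np.linalg.lstsq(A, x, rcond=None)   # parameters a, b, c
def_, *_ = np.linalg.lstsq(A, y, rcond=None)  # parameters d, e, f

# Calculated coordinates and residual errors (columns xc, yc, dx, dy of Table 9.1)
xc, yc = A @ abc, A @ def_
dx, dy = xc - x, yc - y

# RMS errors according to Equations 9.3 and 9.4
mx = np.sqrt(np.mean(dx ** 2))
my = np.sqrt(np.mean(dy ** 2))
m_total = np.sqrt(mx ** 2 + my ** 2)

print(abc, def_)        # approximately (902.76, 0.206, 0.051) and (152.58, -0.044, 0.199)
print(mx, my, m_total)  # approximately 1.16, 0.75, 1.38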



[Figure 9.3: Illustration of the transformation and resampling process (original image, transformation, grid overlay, resampling, corrected image). Note that the image after transformation is a conceptual illustration, since the actual pixel shape does not change.]

9.2.2 Geocoding

The previous section explained that two-dimensional coordinate systems, for example an image system and a map projection system, can be related using geometric transformations. This georeferencing approach is useful in many situations. However, in other situations a geocoding approach, in which the row/column structure of the image is also transformed, is required. Geocoding is required when different images need to be combined or when the image data are used in a GIS environment that requires all data to be stored in the same map projection. The effect of georeferencing and geocoding is well illustrated by Figure 9.2.
Geocoding is georeferencing with subsequent resampling of the image raster. This means that a new image raster is defined along the xy-axes of the selected map projection. The geocoding process comprises two main steps: first, each new raster element is projected (using the transformation parameters) onto the original image; second, a (DN) value for the new pixel is determined and stored.



We can consider the relationship between projection/transformation and resampling in Figure 9.3. The original raster image is transformed, whereby its
shape changes. Since raster data are stored in regular rows and columns, the
cells of the new image need to be resampled to re-establish a regular grid.
Imagine that the transformed image is overlaid with a grid corresponding to the
output image, and cell-values for the new image are written out. The shape of
the image to be resampled depends on the type of transformation used. Figure 9.4 shows four transformation types that are frequently used in remote sensing. The types shown increase in complexity, and parameter requirement, from
left to right. In a conformal transformation the image shape, including the right
angles, are retained. Therefore, only 4 parameters are needed to describe a shift
along x and y axis, a scale change, and the rotation. However, if you want to
geocode an image to make it fit with another image or map, a higher-order transformation, such as projective or polynomial, may be required.

[Figure 9.4: Illustration of different image transformation types and the number of required parameters: conformal (4 parameters), affine (6 parameters), projective (8 parameters) and polynomial (>12 parameters).]

Regardless of the shape of the transformed image, a resampling procedure as shown in Figure 9.5 is used. As the orientation and size of original (input) and

required (output) raster are different, there is no exclusive one-to-one relationship between elements (pixels) of these rasters. Therefore, interpolation methods
are required to make a decision regarding the new value of each pixel. Various
resampling algorithms are available (Figure 9.5). The main methods are: nearest
neighbour, bilinear interpolation and cubic convolution. Consider the green grid
to be the output image to be created. To determine the value of the centre pixel
(bold), in nearest neighbour the value of the nearest original pixel is assigned, the
value of the black pixel in this example. Note that always the respective pixel
centres, marked by small crosses, are used for this process. Using bilinear interpolation the weighted mean is calculated for the four nearest pixels in the original
image (dark grey and black pixels). Cubic convolution applies a polynomial approach based on the values of the 16 surrounding pixels (the black and all grey pixels).
The choice of resampling algorithm depends, among other things, on the ratio between input and output pixel size and on the purpose of the resampled image data.

[Figure 9.5: Principle of resampling using nearest neighbour, bilinear interpolation and cubic convolution.]


Nearest neighbour resampling can cause the edges of features to be offset in a step-like pattern. However, since the value of the original cell is assigned to a new cell without being changed, all spectral information is retained, which means that the resampled image is still useful in applications such as image classification (see Chapter 12). The spatial information, on the other hand, may be altered in this process, since some original pixels may be omitted from the output image, or appear twice. Bilinear interpolation and cubic convolution reduce this effect and lead to smoother images; however, because the values from a number of cells are averaged, the spectral information is changed (Figure 9.6).

[Figure 9.6: The effect of nearest neighbour, bilinear and cubic convolution resampling on the original data.]
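The projection and interpolation steps described above can be sketched in a few lines of code. The following simplified Python/NumPy illustration (the function name and the affine inverse mapping used in the example are assumptions for demonstration only) projects each output pixel back into the input image and assigns a DN value by nearest neighbour or bilinear interpolation.

import numpy as np

def resample(img, inverse_transform, out_shape, method="nearest"):
    """Fill an output raster by projecting every output pixel back into the
    input image and interpolating a DN value (nearest neighbour or bilinear)."""
    rows, cols = img.shape
    out = np.zeros(out_shape, dtype=float)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            # inverse mapping: output (r, c) -> fractional input position (i, j)
            i, j = inverse_transform(r, c)
            if not (0 <= i <= rows - 1 and 0 <= j <= cols - 1):
                continue                                  # outside the input image
            if method == "nearest":
                out[r, c] = img[int(round(i)), int(round(j))]
            else:                                         # bilinear: weighted mean of 4 pixels
                i0, j0 = int(np.floor(i)), int(np.floor(j))
                i1, j1 = min(i0 + 1, rows - 1), min(j0 + 1, cols - 1)
                di, dj = i - i0, j - j0
                out[r, c] = (img[i0, j0] * (1 - di) * (1 - dj)
                             + img[i1, j0] * di * (1 - dj)
                             + img[i0, j1] * (1 - di) * dj
                             + img[i1, j1] * di * dj)
    return out

# Example: resample a synthetic image onto a slightly larger output grid
img = np.random.randint(0, 256, (80, 80)).astype(float)
geocoded = resample(img, lambda r, c: (0.8 * r + 2.0, 0.8 * c + 1.5), (100, 100), "bilinear")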


9.3 Three-dimensional approaches


The third dimension needs to be considered in two types of situations:
1. If 2D data are to be collected, but neglecting the third dimension gives positional errors due to relief displacement that are too large to be tolerated;
2. if 3D data are to be collected.
Since a single image in itself only contains 2D information, additional information is required to account for the third dimension. This is generally either
explicit information about the terrain elevation, such as a DTM, or implicit elevation information from a second image taken from a different position.
The following processes for 3D correction or information extraction are explained in the subsequent sections:
Monoplotting, which is an approach to measure/digitize locations in an
image and calculate the corresponding terrain coordinates by taking the
terrain relief into account (using a DTM). The accuracy of the obtained coordinates depends primarily on the accuracy of the DTM used for the correction of the relief displacement. Normally, only the 2D coordinates are
stored, since the height comes from the DTM and thus provides no new
information.
Orthoimage production, which resamples the image into map geometry taking the terrain relief into account, also using a DTM. It can be considered the
3D variant of geocoding (Section 9.2.2). The result is a 2D image with map
geometry. Accuracy considerations are very similar to monoplotting.



Stereorestitution, which uses two images (a stereo pair) to extract 3D information. The method allows the determination of (X, Y , Z) positions from
the two image positions of the same terrain feature. It can be used to obtain
highly accurate 2D data in cases of considerable relief, or to obtain 3D data
(e.g. for the production of a DTM), a database with building heights, or a
3D topographic data base. This method generally gives the most accurate
results, as the height of each individual point is measured rather than interpolated from a DTM, and the planimetric position is determined using
this more accurate height information.
Orientation, which determines the location and the attitude of the sensor,
is a prerequisite to all of the above processes and is thus explained first.
The underlying concept is that an observed terrain point, the sensor projection centre (i.e. the lens), and the corresponding point on the image all
lie on one straight line (possibly curved by atmospheric refraction), also
called collinearity (Figure 9.7). Orientation determines the geometry of
those imaging rays, including the actual location of the sensor, and its
pointing angles with respect to the ground.


[Figure 9.7: Illustration of the collinearity concept, where image point, lens centre and terrain point all lie on one line.]


9.3.1 Orientation

Orientation results in a formula to calculate image coordinates (x, y or row, column) from terrain coordinates (X, Y , Z).
[Figure 9.8: Inner geometry of a camera and the associated image (fiducial marks, principal point, principal distance, lens and projection centre).]

For a single image it can be separated into:


Interior orientation, which reconstructs the position of the projection centre of the sensor with respect to the image, with the help of the principal point and the principal distance as shown in Figure 9.8. For digital
sensors this is known in the pixel domain from calibration and needs no
further consideration. In the case of film cameras, special markings in the
camera, known as fiducial marks, whose locations with respect to the
principal point are precisely calibrated, are imaged on the film (see Figure 9.8). They can be measured in the film, and a transformation can be


found to calculate the calibrated positions in the camera from the measured positions in the image. This transformation can also be used for all
other measured image points to calculate the exact location in the camera,
where the actual exposure has taken place.
Exterior orientation, which reconstructs the position and inclination and rotation of the sensor with respect to the terrain coordinate system. Frame
cameras (film as well as digital ones) acquire the entire image at once, thus
three coordinates for the projection centre (X, Y , Z) and three angles for
the attitude of the sensor (the angles ω, φ and κ) are sufficient to define the exterior orientation of an entire image. In the case of push-broom scanners and
radar images, each image line is taken from a different position due to the
motion of the sensor, whose attitude can also change (slightly) from line to
line. In this case we need a way to calculate the coordinates of each projection centre and the three attitude angles of the sensor for each exposure
moment. Satellites have a very smooth motion, thus low order polynomials will be sufficient to model this movement, but for airborne scanners
rather complicated formulae have to be used.
The parameters for exterior orientation can be found by:
1. Measuring reference points (GCPs; see Section 9.2.1) in the terrain (X, Y
and Z, or at least X and Y), and the row/column locations of the image pixels corresponding to those reference points. In the absence of GPS-measured reference points, the positions of clearly identifiable points can be measured in topographic maps. If the map contains detailed contour lines and/or spot heights, it may be possible to estimate the Z-value of
those points.



2. Direct Sensor Orientation, which means measuring the position and attitude of the sensor at the time of image acquisition, using a GPS and IMU
(Inertial Measurement Unit); no GCPs are collected.
3. A combination of (1) and (2), often called Integrated Sensor Orientation.
Method (3) is generally used for airborne scanner images and satellite images.
Method (1) alone is normally not feasible for airborne scanners, as one would
need a large amount of reference points to reconstruct the flight path and attitude changes with sufficient accuracy. Also for satellite images position and
attitude data (called ephemeris) are typically available. However, the use of
reference points is advisable even in those situations, either to make the orientation more accurate, or to check it independently (see Section 9.2.1).
For frame camera images, method (1) is the one traditionally used. As explained in Section 9.2.1, it requires at least 3 well-distributed reference points
with (X, Y, Z) known, but more are advisable. Method (1) is increasingly being replaced by method (3), and to some extent even by method (2). For very
accurate projects, method (2), which does not make use of ground information,
must be approached with caution, because minor influences such as temperature
or atmospheric refraction can have a significant influence on the result.
After orientation, for any reference point one can use its terrain coordinates
and the orientation parameters to calculate its image position. If the actual location in the image is measured, it should be the same as the calculated one.
The difference between the measured and the calculated positions (the residual
errors) allows one to estimate the quality of the achieved orientation. A least-squares adjustment should find the orientation parameters such that the sum of
the squares of all residuals is as small as possible.



The following procedures are possible if a stereo pair is to be oriented:
Individual orientation, one by one for each image as described above. No
knowledge about the other image is used during the orientation of each
image.
Interior, relative and absolute orientation. The interior orientation is carried
out individually for each image, as described above. For a relative orientation of two overlapping images no GCPs are required. It is only possible
for frame camera images and is done by measuring at least 5 homologous
points (i.e. image positions in the two images of the same terrain feature;
Figure 9.9). The relative orientation will cause the corresponding imaging
rays to intersect as accurately as possible. Using this relative procedure, 5
of the 12 parameters of exterior orientation (6 parameters for each image)
are determined. After the relative orientation has been carried out, a model
of the terrain can be reconstructed from the two images; however, its scale,
location and inclination/rotation are unknown. The absolute orientation
determines these 7 unknowns (scale, 3 angles of rotation, and shifts in each
of the three coordinate directions). In other words, the relative orientation
determines the orientation of one camera with respect to the other (i.e. no
absolute angles), while the absolute orientation calculates the (X, Y , Z)
position and absolute angles (attitude) of the camera during each image
acquisition. For the absolute orientation actual ground information in the
form of GCPs is required. However, once the procedure is finished, the
oriented stereo model can be used to establish the absolute coordinates in
object space, i.e. in the real world, of any point that appears in both images.


[Figure 9.9: Illustration of relative image orientation of two images taken at times t1 and t2.]



[Figure 9.10: The process of digital monoplotting enables accurate determination of terrain coordinates (X,Y,Z) from image coordinates (x,y) of a single aerial photograph, using ground control points and a digital terrain model.]

9.3.2 Monoplotting

Suppose you need to derive accurate planimetric (X, Y ) positions from an aerial
photograph expressed in a specific map projection. This can be achieved for flat
terrain using a vertical photograph and a georeferencing approach. Recall from
the earlier discussion of relief displacement (Figure 4.7) how elevation differences lead to distortions in the image, preventing the use of such data for direct
measurements. Therefore, if there are significant terrain relief differences, the resulting relief displacement has to be corrected for. For this purpose the method
of monoplotting has been developed.
Monoplotting is based on the reconstruction of the position of the camera at
the moment of image exposure with respect to the GCPs, i.e. the terrain. This
is achieved by identification of a number (at least four) of GCPs for which both
the photo and map coordinates are known. The applied DTM should be stored



in the required map projection system and the heights should be expressed in
an adequate vertical reference system. When digitizing features from the photograph, the computer uses the DTM to calculate the effect of relief displacement
and corrects for it (Figure 9.10). A monoplotting approach is possible by using
a hard-copy image on a digitizer tablet, or by on-screen digitizing from a digital
image. In the latter situation, vector information can be superimposed over the
image to update the changed features. Note that monoplotting is a (real time)
correction procedure and does not yield new image data, i.e. no resampling is
carried out.


9.3.3 Orthoimage production

Monoplotting can be considered a georeferencing procedure that incorporates corrections for relief displacement, without involving any resampling. For some applications, however, it is useful to correct the photograph or satellite image itself. In such cases the image should be transformed and resampled into a product with the geometric properties of a specific map projection. Such an image is
applications, however, it is useful to correct the photograph or satellite image itself. In such cases the image should be transformed and resampled into a product with the geometric properties of a specific map projection. Such an image is
called an orthophoto.
The production of orthophotos is quite similar to the process of monoplotting. Consider a digitized aerial photograph. First, the photo is oriented using
ground control points. The terrain height differences are modelled by a DTM.
The computer then calculates the position in the original photo for each output
element (pixel). Using a resampling algorithm, the output value is determined
and stored in the required raster. The result is geometrically equivalent to a true
map, i.e. direct distance or area measurements on the orthoimage can be carried
out.
In the past, special optical instruments were employed for orthophoto production. Their use required substantial effort. Nowadays, application of digital
image data together with digital photogrammetric software enables easy production of orthophotos.


9.3.4 Stereo restitution

Stereo restitution is closely related to stereo observation (Section 11.2.3). Similar to our two eyes allowing us to perceive our environment in 3D, two overlapping images can be arranged under a stereoscope or on a computer screen and viewed stereoscopically, that is, in 3D. The arranging required is somewhat analogous to the relative orientation of an image pair as explained above.
The resulting image pair allows us to form a stereo model of the terrain imaged,
which can be used to digitize features or make measurements in 3D. In effect we
arrange the position and angular orientation (attitude) of the sensor to duplicate
its exact configuration during image acquisition at times t1 and t2 . As described
in Section 4.7, the 60% forward overlap guarantees that the entire surveyed area
can be viewed and processed stereoscopically. Stereo pairs can also be derived
from other sensors, such as multispectral scanners and imaging radar.
The measurements made in a stereo model make use of a phenomenon called
parallax (Figure 9.11). Parallax refers to the fact that an object photographed from
different camera locations (e.g. from a moving aircraft) has different relative positions in the two images. In other words, there is an apparent shift of an object
as it is observed from different locations. Figure 9.11 illustrates that points at
two different elevations, regardless of whether it is the top and bottom of a hill
or of a building, experience this relative shift. The measurement of the difference
in position is a basic input for elevation calculations.
A stereo model enables parallax measurement using a special 3D cursor. If the stereo model is appropriately oriented, the parallax measurements yield (X, Y, Z) coordinates. Analogue systems use hardcopy images and perform the computation by mechanical, optical or electrical means. Analytical systems also use hardcopy images, but do the computation digitally, while in modern digital systems both the images and the computation are digital.

[Figure 9.11: The same building is observed from two different positions. Because of the height of the building, the positions of the building top and base relative to the photo centres (parallaxes pa and pb) are different. This difference (parallax) can be used to calculate its height.]

Using digital instruments we can not only perform spot elevation measurements, but complete DTMs can be calculated for the overlapping part of the two images. Recall, however, that reliable elevation values can only be extracted if the orientation steps were carried out accurately, using reliable ground control points.
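The exact formulas are not worked out in this chapter, but the principle can be illustrated with the standard parallax equation for vertical photographs, in which object height follows from the parallax difference, the absolute parallax at the base of the object, and the flying height above that base. The small Python sketch below uses assumed example values and is only an illustration of this relationship, not a formula quoted from the text.

def height_from_parallax(flying_height, base_parallax, parallax_difference):
    """Standard parallax equation for vertical photographs:
    height = dp * H / (p + dp), with H the flying height above the object base,
    p the absolute parallax at the base and dp the parallax difference."""
    return parallax_difference * flying_height / (base_parallax + parallax_difference)

# Assumed example values: H = 1500 m, p = 88 mm, dp = 2.5 mm
print(round(height_from_parallax(1500.0, 88.0, 2.5), 1))   # about 41.4 m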


Summary
This chapter has introduced some general geometric aspects of dealing with image data. A basic consideration in dealing with remotely sensed images is terrain relief, which can be neglected (2D approaches) or taken into account (3D approaches). In both approaches there is a possibility to keep the image data stored in their (i, j) system and relate them to other data through coordinate transformations (georeferencing and monoplotting). The other possibility is to change the
image raster into a specific map projection system using resampling techniques
(geocoding and orthoimage production). A true 3D approach is that of stereoplotting, which applies parallax differences as observed in stereo pairs to measure (X, Y , Z) coordinates of terrain and objects.


Questions
The following questions can help you to study Chapter 9.
1. Suppose your organization develops a GIS application for road maintenance. What would be the consequences of using georeferenced versus
geocoded image data as a backdrop?
2. Think of two situations in which image data are applied and in which you
need to take relief displacement into account.
3. For a transformation of a specific image into a specific coordinate system,
an mtotal error of two pixels is given. What additional information do you
need to assess the quality of the transformation?
The following are typical exam questions:
1. Compare an image and map coordinate system (give figure with comment).
2. What is the purpose of acquiring stereo pairs of image data?
3. What are ground control points used for?


4. Calculate the map position (x, y) for image position (10, 20) using the two
following equations: x = 10 + 5i - j and y = 5 + 2i + 2j
5. Explain the purpose of monoplotting. What inputs do you need?


Chapter 10
Image enhancement and
visualisation


10.1 Introduction

Many of the figures in the previous chapters have presented examples of remote
sensing image data. There is a need to visualize image data at most stages of
the remote sensing process. For example, the procedures for georeferencing, explained in Chapter 9, cannot be performed without visual examination to measure the location of ground control points on the image. However, it is in the
process of information extraction that visualization plays the most important
role. This is particularly so in the case of visual interpretation (Chapter 11), but
also during automated classification procedures (Chapter 12).
Because many remote sensing projects make use of multispectral data, this
chapter focuses on the visualization of colour imagery. An understanding of
how we perceive colour is required at two main stages in the remote sensing
process. In the first instance, it is required to produce optimal pictures from
(multispectral) image data on the computer screen or as a (printed) hard-copy.
Subsequently, the theory of colour perception plays an important role in the subsequent interpretation of these pictures. To understand how we perceive colour,
Section 10.2 deals with the theory of colour perception and colour definition.
Section 10.3 explains the basic principles you need to understand and interpret
the colours of a displayed image. Section 10.4 introduces some filter operations
for enhancing specific characteristics of the image, while the last section (Section 10.5) introduces the concepts of colour composites and image fusion.


10.2 Perception of colour

Colour perception takes place in the human eye and the associated part of the
brain. Colour perception concerns our ability to identify and distinguish colours,
which, in turn, enables us to identify and distinguish entities in the real world.
It is not completely known how human vision works, or what exactly happens
in the eyes and brain before someone decides that an object is (for example) light
blue. Some theoretical models, supported by experimental results, are, however,
generally accepted. Colour perception theory is applied whenever colours are
reproduced, for example in colour photography, TV, printing and computer animation.


10.2.1 Tri-stimuli model

The eye's general sensitivity is to wavelengths between 400 and 700 nm. Different


wavelengths in this range are experienced as different colours.
The retinas in our eyes have cones (light-sensitive receptors) that send signals
to the brain when they are hit by photons with energy levels that correspond
to different wavelengths in the visible range of the electromagnetic spectrum.
There are three different kinds of cones, responding to blue, green and red wavelengths (Figure 10.1). The signals sent to our brain by these cones, and the differences between them, give us colour-sensations. In addition to cones, we have
rods, which do not contribute to colour vision. The rods can operate with less
light than the cones. For this reason, objects appear less colourful in low light
conditions.
Screens of colour television sets and computer monitors are composed of a
large number of small dots arranged in a regular pattern of groups of three: a
red, a green and a blue dot. At a normal viewing distance from the screen we
cannot distinguish the individual dots. Electron-guns for red, green and blue
are positioned at the back-end of the cathode-ray tube. The number of electrons
fired by these guns at a certain position on the screen determines the amount
of (red, green and blue) light emitted from that position. All colours visible on
such a screen are therefore created by mixing different amounts of red, green
and blue. This mixing takes place in our brain. When we see monochromatic
yellow light (i.e., with a distinct wavelength of, say, 570 nm), we get the same
impression as when we see a mixture of red (say, 700 nm) and green (530 nm). In
both cases, the cones are stimulated in the same way. According to the tri-stimuli
model, therefore, three different kinds of dots are necessary and sufficient.


[Figure 10.1: Visible range of the electromagnetic spectrum (400-700 nm), including the sensitivity curves of the blue, green and red cones in the human eye.]


10.2.2 Colour spaces

The tri-stimuli model of colour perception is generally accepted. This states that
there are three degrees of freedom in the description of a colour. Various three-dimensional spaces are used to describe and define colours. For our purpose the
following three are sufficient.
1. Red Green Blue (RGB) space, based on the additive principle of colours.
2. Intensity Hue Saturation (IHS) space, which is most related to our intuitive perception of colour.
3. Yellow Magenta Cyan (YMC) space based on the subtractive principle of
colours.



RGB
The RGB definition of colours is directly related to the way in which computer
and television monitors function. Three channels (RGB) directly related to the
red, green and blue dots are input to the monitor. When we look at the result,
our brain combines the stimuli from the red, green and blue dots and enables
us to perceive all possible colours from the visible part of the spectrum. During
the combination, the three colours are added. When green dots are illuminated
in addition to red ones, we see yellow. This principle is called the additive colour
scheme. Figure 10.2 illustrates the additive colours caused by bundles of light
from red, green and blue spotlights shining on a white wall in a dark room.
When only red and green light occurs, the result is yellow. In the central area
there are equal amounts of light from all three spotlights, and we experience
white.

[Figure 10.2: Comparison of the (a) additive and (b) subtractive colour schemes.]

In the additive colour scheme, all visible colours can be expressed as combinations of red, green and blue, and can therefore be plotted in a three-dimensional
space with R, G and B along the axes. The space is bounded by minimum and
maximum values (intensities) for red, green and blue, defining the so-called
colour cube (Figure 10.3).

[Figure 10.3: The RGB cube; note the red, green and blue corner points.]



IHS
In daily speech we do not express colours in the red, green and blue of the RGB
system. The IHS system, which refers to intensity, hue and saturation, more
naturally reflects our sensation of colour. Intensity describes whether a colour
is light or dark. Hue refers to the names that we give to colours: red, green,
yellow, orange, purple, etc. Saturation describes a colour in terms of pale versus
vivid. Pastel colours have low saturation; grey has zero saturation. As in the
RGB model, three degrees of freedom are sufficient to describe any colour.
[Figure 10.4: Relationship between the RGB and IHS colour spaces.]

Figure 10.4 illustrates the correspondence between the RGB and the IHS system. Although the mathematical model for this description is tricky, the description is, in fact, more natural. For example, light, pale red is easier to imagine
than a lot of red with considerable amounts of green and blue. The result, however, is the same. Since the IHS scheme deals with colour perception, which is
somewhat subjective, complete agreement about the definitions does not exist.
It is safe to define intensity as the sum of the R, G and B values. On the main


diagonal of the colour cube (R = G = B), running from black to white, where
we find all the grey tones, the saturation equals 0. At the red-green, green-blue
and blue-red sides of the cube (these are the sides that contain the origin), the
saturation is maximum (100%). If we regard colours as (additive) mixtures of
a (saturated) colour and grey, then saturation is the percentage of the colour in
the mixture. For example, (R,G,B) = (100, 40, 40) is a mixture of the saturated
colour (60, 0, 0) and the grey tone (40, 40, 40). Therefore, its saturation equals
60 / 100 = 60%. To define the hue of a colour, we look in the plane, perpendicular to the main diagonal, that contains the colour. In that plane, we can turn
around through 360 degrees, respectively looking towards red, yellow, green,
cyan, blue, magenta and back to red. The angle at which we find our colour is
its hue.
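These definitions can be turned into a small conversion routine. The Python sketch below is an illustrative implementation only: intensity is taken as the sum of R, G and B, saturation as the percentage of saturated colour in the colour/grey mixture, and hue is computed with one common angle formulation that follows the red, yellow, green, cyan, blue, magenta ordering described above; it is not code from any particular image processing package.

def rgb_to_ihs(r, g, b):
    """Intensity, hue and saturation following the definitions in this section:
    intensity = R + G + B; saturation = percentage of saturated colour in the
    colour/grey mixture; hue = angle in degrees, running from red through
    yellow, green, cyan, blue and magenta back to red."""
    intensity = r + g + b
    mx, mn = max(r, g, b), min(r, g, b)
    chroma = mx - mn
    saturation = 0.0 if mx == 0 else 100.0 * chroma / mx
    if chroma == 0:                       # grey tone: hue undefined, return 0
        hue = 0.0
    elif mx == r:
        hue = 60.0 * (((g - b) / chroma) % 6)
    elif mx == g:
        hue = 60.0 * ((b - r) / chroma + 2)
    else:
        hue = 60.0 * ((r - g) / chroma + 4)
    return intensity, hue, saturation

# The example from the text: (R, G, B) = (100, 40, 40) is 60% saturated red
print(rgb_to_ihs(100, 40, 40))            # (180, 0.0, 60.0)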



YMC
Whereas RGB is used in computer and TV displays, the YMC colour description
is used in colour definition on hard copy, for example printed pictures but also
photographic films and paper. The principle of the YMC colour definition is to
consider each component as a coloured filter. The filters are yellow, magenta and
cyan. Each filter subtracts one primary colour from the white light: the magenta
filter subtracts green, so that only red and blue are left; the cyan filter subtracts
red, and the yellow one blue. Where the magenta filter overlaps the cyan one,
both green and red are subtracted, and we see blue. In the central area, all light
is filtered away and the result is black. Colour printing, which uses white paper
and yellow, magenta and cyan ink, is based on the subtractive colour scheme.
When white light falls on the document, part of it is filtered out by the ink layers and
the remainder is reflected from the underlying paper (Figure 10.2).


10.3 Visualization of image data

In this section, various ways of visualizing single and multi-band image data
are introduced. The section starts with an explanation of the concept of an image histogram. The histogram has a crucial role in realizing optimal contrast of
images. An advanced section deals with the application of RGB-IHS transformation to integrate different types of image data.


10.3.1 Histograms

A number of important characteristics of a single-band image, such as a panchromatic satellite image, a scanned monochrome photograph or a single band
from a multi-band image, are found in the histogram of that image. The histogram describes the distribution of the pixel values (Digital Numbers, DN) of
that image. In the usual case, the DN-values range between 0 and 255. A histogram
indicates the number of pixels for each value in this range. In other words, the
histogram contains the frequencies of DN-values in an image. Histogram data
can be represented either in tabular form or graphically. The tabular representation (Table 10.1) normally shows five columns. From left to right these are:
DN: Digital Numbers, in the range [0. . . 255]
Npix: the number of pixels in the image with this DN (frequency)
Perc: frequency as a percentage of the total number of image pixels
CumNpix: cumulative number of pixels in the image with values less than
or equal to DN
CumPerc: cumulative frequency as a percentage of the total number of
image pixels
Histogram data can be further summarized in some characteristic statistics:
mean, standard deviation, minimum and maximum, as well as the 1% and 99%
values (Table 10.2). Standard deviation is a statistical measure of the spread of
the values around the mean. The 1% value, for example, defines the cut-off value
below which only 1% of all the values are found. 1% and 99% values can be used
to define an optimal stretch for visualization.
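These statistics are easily derived from the image data themselves. The following minimal Python/NumPy sketch (an illustration with a synthetic band; the function name is our own) computes the frequencies, the cumulative percentages and the 1% and 99% cut-off values for an 8-bit band.

import numpy as np

def histogram_summary(band):
    """Summary statistics of an 8-bit band, analogous to Tables 10.1 and 10.2."""
    npix = np.bincount(band.ravel(), minlength=256)    # frequency per DN (Npix)
    cumperc = 100.0 * np.cumsum(npix) / npix.sum()     # cumulative percentage (CumPerc)
    dn = np.arange(256)
    return {"mean": float(band.mean()), "stddev": float(band.std()),
            "min": int(band.min()), "max": int(band.max()),
            "1%": int(dn[np.searchsorted(cumperc, 1.0)]),
            "99%": int(dn[np.searchsorted(cumperc, 99.0)])}

# Synthetic example band, roughly comparable to the example of Table 10.2
band = np.clip(np.random.normal(114, 28, 73100), 0, 255).astype(np.uint8)
print(histogram_summary(band))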


In addition to the frequency distribution, the graphical representation shows
the cumulative frequency (see Figure 10.5). The cumulative frequency curve
shows the percentage of pixels with DN that are less than or equal to a given
value.
[Figure 10.5: Standard histogram and cumulative histogram corresponding with Table 10.1.]

Table 10.1: Example histogram in tabular format.

DN    Npix   Perc   CumNpix   CumPerc
0        0   0.00         0      0.00
13       0   0.00         0      0.00
14       1   0.00         1      0.00
15       3   0.00         4      0.01
16       2   0.00         6      0.01
51      55   0.08       627      0.86
52      59   0.08       686      0.94
53      94   0.13       780      1.07
54     138   0.19       918      1.26
102   1392   1.90     25118     34.36
103   1719   2.35     26837     36.71
104   1162   1.59     27999     38.30
105   1332   1.82     29331     40.12
106   1491   2.04     30822     42.16
107   1685   2.31     32507     44.47
108   1399   1.91     33906     46.38
109   1199   1.64     35105     48.02
110   1488   2.04     36593     50.06
111   1460   2.00     38053     52.06
163    720   0.98     71461     97.76
164    597   0.82     72058     98.57
165    416   0.57     72474     99.14
166    274   0.37     72748     99.52
173      3   0.00     73100    100.00
174      0   0.00     73100    100.00
255      0   0.00     73100    100.00

Table 10.2: Summary statistics for the example histogram given above.

Mean     StdDev   Min   Max   1%-value   99%-value
113.79   27.84    14    173   53         165

10.3.2 Single band image display

The histogram is used to obtain optimum display of single band images. Single band images are normally displayed using a grey scale. Grey shades of the
monitor typically range from black (value 0) to white (value 255). When applying grey shades, the same signal is input to each of the three (RGB) channels of
the computer monitor (Figure 10.6).

[Figure 10.6: Multi-band image (Blue, Green, Red, NIR, MIR, TIR bands) displayed on a monitor using the monitor's Red, Green and Blue input channels.]

Using the original image values to control the monitor values usually results in an image with little contrast since only a limited number of grey values are used. In the example introduced in the previous section (Table 10.2) only 173 - 14 = 159 out of 255 grey levels would be used. To optimize the range of grey values, a transfer function maps DN-values into grey shades on the monitor (Figure 10.7). The transfer function can be chosen in a number of ways.



[Figure 10.7: The relationship between image value and monitor grey value is defined by the transfer function, which can have different shapes: no stretch, linear stretch, histogram stretch.]

Linear contrast stretch is obtained by finding the DN-values where the cumulative histogram of the image passes 1% and 99%. DNs below the 1% value become black (0), DNs above the 99% value are white (255), and grey levels for the intermediate values are found by linear interpolation. Histogram equalization, or histogram stretch, shapes the transfer function according to the cumulative histogram. As a result, the DNs in the image are distributed as equally as possible over the available grey levels (Figure 10.7).
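Both transfer functions can be implemented as lookup tables that map every possible DN to a monitor grey value. The Python/NumPy sketch below is an illustrative implementation of the two approaches described above; the function names and the synthetic test band are assumptions, not part of any specific software.

import numpy as np

def linear_stretch_lut(band, low_pct=1.0, high_pct=99.0):
    """Transfer function for a linear contrast stretch: DNs below the 1% value
    become 0 (black), DNs above the 99% value become 255 (white), intermediate
    values are interpolated linearly."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    dn = np.arange(256, dtype=float)
    return np.clip((dn - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)

def equalization_lut(band):
    """Transfer function shaped by the cumulative histogram, so that the output
    DNs are spread as equally as possible over the available grey levels."""
    hist = np.bincount(band.ravel(), minlength=256)
    cdf = np.cumsum(hist) / hist.sum()
    return np.round(cdf * 255.0).astype(np.uint8)

# Applying a transfer function is a lookup; the image file itself is not changed
band = np.clip(np.random.normal(114, 28, (600, 600)), 0, 255).astype(np.uint8)
stretched = linear_stretch_lut(band)[band]
equalized = equalization_lut(band)[band]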

[Figure 10.8: Single band of a Landsat TM image of a polder area in The Netherlands. Image without contrast enhancement (left); histogram equalization (middle) and pseudo-colour representation (right).]

first

previous

next

last

back

exit

zoom

contents

index

about

364

10.3. Visualization of image data


Transfer function manipulations are usually implemented as modifications
of the values in the colour lookup table (in the memory of the monitor). Using
a transfer function for visualization, therefore, does not result in a new image
file on hard disk. You should be aware that transfer functions may already have
been applied when you want to interpret the intensity or colour observed in a
picture.
An alternative way to display single band data is to make use of a pseudo-colour lookup table that assigns colours ranging from blue via cyan, green and yellow to red (Figure 10.8). The use of pseudo-colour is especially useful for displaying data that are not reflection measurements. With thermal infrared data, for example, the association of cold-warm with blue-red is more intuitive than with dark-light.
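A minimal sketch of applying such a pseudo-colour lookup table, here using a Matplotlib colormap that runs from blue via cyan, green and yellow to red; the use of Matplotlib and the array names are assumptions for illustration:

import numpy as np
import matplotlib.pyplot as plt

dn = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # dummy single band

# 256-entry lookup table of RGB triplets (blue -> cyan -> green -> yellow -> red)
lut = plt.get_cmap('jet')(np.arange(256))[:, :3]
pseudo_colour = lut[dn]            # (100, 100, 3) array, ready for display
plt.imshow(pseudo_colour)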

Figure 10.9: Input and output result of a filtering operation: the neighbourhood in the original image determines the value of the output. In this situation a smoothing filter was applied.

10.4 Filter operations

A further step in producing optimal images for interpretation is the use of filter operations. Filter operations are local image transformations: a new image is calculated in which the value of a pixel depends on the values of the neighbouring pixels in the original image. Filter operations are usually carried out on a single band. Filters are used for spatial image enhancement, for example to reduce noise or to sharpen blurred images. Filter operations are extensively used in various semi-automatic procedures that are outside the scope of this chapter.
To define a filter, a kernel is used. A kernel defines the output pixel value as a linear combination of pixel values in a neighbourhood around the corresponding position in the input image. For a specific kernel, a so-called gain can be calculated as follows:

gain = \frac{1}{\sum_i k_i}    (10.1)

where the sum runs over all kernel coefficients k_i. In general, the sum of the kernel coefficients, after multiplication by the gain, should be equal to 1, to result in an image with approximately the same range of grey values as the input. The effect of using a kernel is illustrated in Figure 10.9, which shows how the output value is calculated in the case of average filtering.
The significance of the gain factor is explained in the next two subsections. In these examples only small neighbourhoods of 3 × 3 kernels are considered. In practice other kernel dimensions may be used.
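A minimal sketch of Equation 10.1 in code, assuming NumPy and SciPy are available; the function and variable names are invented, and the 3 × 3 smoothing kernel of the next subsection is used only as an example:

import numpy as np
from scipy import ndimage

def apply_kernel(image, kernel):
    """Convolve the image with the kernel, normalized by the gain of Equation 10.1."""
    gain = 1.0 / kernel.sum()                      # gain = 1 / sum of kernel coefficients
    return ndimage.convolve(image.astype(float), kernel * gain, mode='nearest')

image = np.random.randint(0, 256, size=(50, 50)).astype(float)  # dummy single band
smoothing = np.ones((3, 3))            # all coefficients 1, so the gain is 1/9
smoothed = apply_kernel(image, smoothing)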

10.4.1 Noise reduction

Consider the kernel shown in Table 10.3, in which all coefficients equal 1. This means that the values of the nine pixels in the neighbourhood are summed. Subsequently, the result is divided by 9, so that the overall pixel values in the output image are in the same range as those of the input image. In this situation the gain is 1/9 = 0.11. The effect of applying this averaging filter is that the image will become blurred or smoothed. When dealing with the speckle effect in radar imagery, the result of applying this filter is to reduce the speckle.
Table 10.3: Filter kernel for smoothing.

1  1  1
1  1  1
1  1  1

In the above kernel, all pixels have an equal contribution in the calculation of the result. It is also possible to define a weighted average. To emphasize the value of the central pixel, a larger value can be put in the centre of the kernel. As a result, less drastic blurring takes place. In addition, it is necessary to take into account that the horizontal and vertical neighbours influence the result more strongly than the diagonal ones. The reason for this is that the direct neighbours are closer to the central pixel. The resulting kernel, for which the gain is 1/16 = 0.0625, is given in Table 10.4.
Table 10.4: Filter kernel for weighted smoothing.

1  2  1
2  4  2
1  2  1



10.4.2 Edge enhancement

Table 10.5: Filter kernel used for edge enhancement.

-1  -1  -1
-1  16  -1
-1  -1  -1

Another application of filtering is to emphasize local differences in grey values,


for example related to linear features such as roads, canals, geological faults, etc.
This is done using an edge enhancing filter, which calculates the difference between the central pixel and its neighbours. This is implemented using negative
values for the non-central kernel coefficients. An example of an edge enhancement filter is given in Table 10.5.

Figure 10.10: Original image (middle), edge enhanced image (left) and
smoothed image (right).

The gain is calculated as follows: 1/(16 - 8) = 1/8 = 0.125. The sharpening effect can be made stronger by using smaller values for the centre pixel (with a minimum of 9). An example of the effect of using smoothing and edge enhancement is shown in Figure 10.10.
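A minimal sketch of this edge enhancement, again assuming NumPy and SciPy; the kernel values follow Table 10.5, the rest of the names are illustrative:

import numpy as np
from scipy import ndimage

edge_kernel = np.array([[-1, -1, -1],
                        [-1, 16, -1],
                        [-1, -1, -1]], dtype=float)   # gain 1/(16 - 8) = 1/8

image = np.random.randint(0, 256, size=(50, 50)).astype(float)  # dummy single band
gain = 1.0 / edge_kernel.sum()
edge_enhanced = ndimage.convolve(image, edge_kernel * gain, mode='nearest')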

10.5 Colour composites

The previous section explained visualization of single band images. When dealing with a multi-band image, any combination of three bands can, in principle,
be used as input to the RGB channels of the monitor. The choice should be made
based on the application of the image data. To increase contrast, the three bands
can be subjected to linear contrast stretch or histogram equalization.
Sometimes a true colour composite, where the RGB channels relate to the red,
green and blue wavelength bands of a scanner, is made. A popular choice is
to link RGB to the near-infrared, red and green bands respectively to yield a
false colour composite (Figure 10.11). The results look similar to prints of colour-infrared photography (CIR). As explained in Chapter 4, the three layers in a
false colour infrared film are sensitive to the NIR, R, and G parts of the spectrum
and made visible as R, G and B respectively in the printed photo. The most
striking characteristic of false colour composites is that vegetation appears in
a red-purple colour. In the visible part of the spectrum, plants reflect mostly
green light (this is why plants appear green), but their infrared reflection is even
higher. Therefore, vegetation in a false colour composite is shown as a combination of some blue but even more red, resulting in a reddish tint of purple.
Depending on the application, band combinations other than true or false
colour may be used. Land-use categories can often be distinguished quite well
by assigning a combination of Landsat TM bands 5, 4, 3 or 4, 5, 3 to RGB. Combinations that display near-infrared as green show vegetation in a green colour
and are, therefore, called pseudo-natural colour composites (Figure 10.11).
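A minimal sketch of building such a composite, assuming three co-registered bands are already loaded as NumPy arrays; the band names, the dummy data and the per-band stretch are assumptions for illustration:

import numpy as np

def stretch(band, lower=1.0, upper=99.0):
    """Linear contrast stretch of one band to the 0..1 range (Section 10.3)."""
    lo, hi = np.percentile(band, [lower, upper])
    return np.clip((band.astype(float) - lo) / (hi - lo), 0.0, 1.0)

# nir, red, green: co-registered bands, e.g. Landsat TM bands 4, 3 and 2 (dummy data here)
nir, red, green = (np.random.randint(0, 256, size=(100, 100)) for _ in range(3))

# False colour composite: display R = near-infrared, G = red, B = green
false_colour = np.dstack([stretch(nir), stretch(red), stretch(green)])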



Figure 10.11: Landsat TM colour composites of Enschede and surroundings. Three different colour composites are shown: natural colour (bands 3, 2, 1), pseudo-natural colour (3, 5, 2) and false colour (4, 3, 2). Other band combinations are possible.

10.5.1 Application of RGB and IHS for image fusion

Section 10.3 explained how three bands of a multispectral image dataset can be displayed as a colour composite image using the RGB colour space functions of a computer monitor. Once one becomes familiar with the additive mixing of the red, green and blue primaries, the colours perceived on the computer monitor can be intuitively related to the digital numbers of the three input bands, thereby providing qualitative insight into the spectral properties of an imaged landscape.
This colour coding principle can also be exploited to display images acquired by different sensor systems, an enhancement technique known as image fusion. As with any enhancement technique, the aim of image fusion is to optimize image display for visual interpretation. One objective that uniquely applies to image fusion, however, is to enhance the properties of images acquired by multiple sensors. This additional objective allows the interpreter to instantly extract complementary information from a multi-sensor image dataset by looking at a single image. Various combinations of image characteristics may be exploited, some of which are illustrated in the examples below, such as: (i) the high spatial resolution of a panchromatic image with a multispectral image dataset of lower spatial resolution, (ii) the textural properties of a synthetic aperture radar image with the multispectral properties of an optical dataset, (iii) various terrain properties derived from a digital elevation model with a single band image, (iv) gridded airborne geophysical measurements with a relief-shaded digital elevation model or satellite image data of higher spatial resolution, and (v) image data with multi-temporal resolution for change detection.
The processing technique underlying all image fusion methods is based on applying a mathematical function on the co-registered pixels of the merged image set, yielding a single image display optimized for visual interpretation. Without considering the standard pre-processing steps, such as atmospheric corrections and the removal of striping and other noise artifacts (see Section 8.3.1), the image processing procedures employed in image fusion can be subdivided into four steps:
1. Geometric co-registration of the image data to be fused;
2. Enhancement of individual image channels to be fused;
3. Application of an arithmetic or other mathematical function to the co-registered pixels of the multi-sensor image dataset;
4. Visualization of the fused image dataset on a computer monitor.
The first step refers to the geometric registration procedures that were explained in Chapter 9. Whether one can employ a two-dimensional approach (geometric corrections that ignore relief displacement) is dependent on the image data to be fused. It is crucial to register the image data at sub-pixel accuracy, because mismatches between image channels that are registered at a lower accuracy become obvious as distracting coloured artifacts in the image fusion result. The selection of the final spatial resolution for the fused image is included in this first step. It is a subjective choice that depends on the type of image data involved, and on the interpretation problem to which image fusion is applied. In most applications of image fusion, however, a pixel size is chosen that is equal or similar to that of the image(s) with the highest spatial resolution. This allows one to exploit image details in the fused product that may not be apparent in lower spatial resolution images.
The second step is often used to ensure that features of interest are optimally enhanced in the image fusion result. This step usually employs common enhancement techniques applicable to single band images, such as contrast enhancement (Section 10.3) and spatial filtering (Section 10.4). It also requires sufficient
knowledge of how the properties of an image acquired by a particular sensor system contribute to solving the interpretation problem.

Figure 10.12: Procedure to merge SPOT panchromatic and multispectral data using RGB to IHS conversion and vice versa. The SPOT XS bands 1, 2 and 3 and the SPOT Pan image are resampled and (optionally) contrast stretched; the XS triplet is transformed from RGB to IHS, the intensity component is replaced by the (contrast stretched) panchromatic image, and the result is transformed back to RGB for display as a 10 m resolution colour composite.

One commonly used method to fuse co-registered image channels from multiple sources is to skip the third step and directly display the georeferenced image channels as a colour composite image. Although this is the simplest method to fuse multi-sensor image data, it may result in colours that are not at all intuitive to the interpreter. Considering, for example, Figure 10.13(c) below, it is evidently very difficult to appreciate the characteristics of the original images from the various additive colour mixtures in such a colour composite image. The algebraic manipulations of step 3 aim to overcome this problem. The transformations of step 3 have in common that they map the DN variations in the input images, in one way or another, onto the perceptual attributes of human colour vision. These colour attributes are usually approached by using the IHS colour

space presented in Section 10.2.2, which explains the frequent use of the RGB-IHS colour space forward and inverse transformations in image fusion. Most image processing systems, therefore, provide RGB-IHS colour space forward and inverse transformations as standard spectral enhancement tools.
Regardless of the colour space transformation selected, there are various
ways in which an image data set can be mapped to the perceptual attributes of
the IHS space. Herein we discuss only the most commonly used methods. Complementary information extraction by image fusion often exploits the so-called
image sharpening principle, in which a single band image of higher spatial
resolution is merged with a multispectral band triplet of lower spatial resolution, in effect leading to a sharpened colour composite image. This image fusion
method is also named intensity substitution and is schematically illustrated in
Figure 10.12. First a multispectral band triplet is transformed from RGB to IHS
space. Second, the intensity is replaced by the high spatial resolution image enhanced to a similar dynamic range by a linear contrast stretch or a histogram
match. Third, the original hue and saturation and new intensity images are
transformed back to RGB display for visualization on the computer monitor.
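A minimal sketch of these three steps, using Matplotlib's HSV conversion as a stand-in for the IHS transformation described in the text; the array names and the dummy data are assumptions, and a real implementation would first co-register and resample the inputs:

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

# rgb: multispectral band triplet scaled to 0..1, shape (rows, cols, 3)
# pan: co-registered high resolution (panchromatic) image, also scaled to 0..1
rgb = np.random.rand(100, 100, 3)
pan = np.random.rand(100, 100)

hsv = rgb_to_hsv(rgb)      # step 1: forward transformation (the value channel plays the role of intensity)
hsv[..., 2] = pan          # step 2: substitute the intensity by the high resolution image
fused = hsv_to_rgb(hsv)    # step 3: back to RGB for display on the monitor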
There are many variants to this technique, including the contrast stretch of the
saturation image or the mapping of two image channels on hue and intensity
while setting saturation to a constant value. Usually, however, the hue image
is left untouched, because its manipulation will result in a distorted representation of multispectral information. A simple algebraic operation to carry out an
intensity substitution without saturation enhancement is known as the Brovey
transform, written as:
R' = R \cdot \frac{I'}{I}, \qquad G' = G \cdot \frac{I'}{I}, \qquad B' = B \cdot \frac{I'}{I}

with

I = \frac{1}{3}(R + G + B),

where R, G, B are the contrast stretched bands of the multispectral band triplet, I is the intensity of the multispectral band triplet, I' is the image substituted for the intensity and R', G', B' the band triplet of the fused image dataset.
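A minimal sketch of the Brovey transform under the same assumptions (contrast stretched bands and a co-registered substitute image I', all scaled to a common range); the small epsilon that guards against division by zero is an addition for numerical safety, not part of the formula:

import numpy as np

def brovey(r, g, b, i_new, eps=1e-6):
    """Brovey transform: scale each band by the ratio of the new to the old intensity."""
    i_old = (r + g + b) / 3.0
    ratio = i_new / (i_old + eps)
    return r * ratio, g * ratio, b * ratio

r, g, b = (np.random.rand(100, 100) for _ in range(3))  # stretched multispectral triplet
pan = np.random.rand(100, 100)                           # high resolution substitute I'
r_f, g_f, b_f = brovey(r, g, b, pan)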
An alternative arithmetic algorithm used for image sharpening adds the
high spatial resolution image in equal amounts to each multispectral band in a
technique known as pixel addition ([7]):
R' = aR + bI', \qquad G' = aG + bI', \qquad B' = aB + bI'

with

a, b > 0 \quad \text{and} \quad a + b = 1,

where a and b are scaling factors to balance intensity sharpening from the high spatial resolution image versus the hue and saturation from the multispectral band triplet.
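And a sketch of this pixel addition variant; the equal split a = b = 0.5 is only an example value, and the array names are again illustrative:

import numpy as np

a, b = 0.5, 0.5                                            # scaling factors, a + b = 1
r, g, bl = (np.random.rand(100, 100) for _ in range(3))    # multispectral triplet
pan = np.random.rand(100, 100)                             # high resolution image I'

r_f, g_f, b_f = a * r + b * pan, a * g + b * pan, a * bl + b * pan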
Next to colour space transformations and arithmetic combinations, statistical
transforms such as principal component and regression analysis have been commonly used in image fusion, since they have the theoretical advantage of being
able to combine a large number of images ([14]). In practice, however, fused
images derived from many images are difficult to interpret.
The fourth and final step is essentially equivalent to the display of colour
composite images. Regardless of which fusion method is used, one needs to assure that the results are visualized as unambiguously as possible using the RGB



space properties of the computer monitor. In practice this means that the perceptual colour attributes intensity, hue and saturation should be addressed
proportionally to the dynamic ranges of the input image channels. Therefore,
standard contrast enhancement procedures based on the histogram of each image channel may yield poorly optimized or even misleading displays. Because
the DNs of each fused image channel are actually composite values derived from
two or more images, the contrast enhancement should be uniformly applied to
each of the RGB image channels. This is to avoid ranges in hue out of proportion
to the hue variations in a colour composite image obtained from the multispectral image data ([34]). Ideally, as shown in Figures 10.13 and 10.14, the colour
composite image generated from the fused RGB channels should have the same
hue range as the colour composite image generated from the original multispectral bands.


Figure 10.13: Fused images generated from bands 7, 3 and 1 of a Landsat 7 ETM subscene and an orthophoto mosaic of the Tabernas area, Southeast Spain. (a) colour composite of red = band 7, green = band 3 and blue = band 1; (b) orthophoto mosaic resampled to 5 metre pixels; (c) fused image of ETM bands, its intensity substituted with the orthophoto mosaic using RGB-IHS colour space transformations; (d) as (c) but with preservation of intensity of ETM bands by adding back 50% of the original intensity to the intensity substitute, leading to better preservation of the spectral properties.


Figure 10.14: Fused images generated from an ERS-1 SAR image and bands 3, 2 and 1 of a SPOT 2 subscene acquired over Amazon rain forest in Southeastern Colombia. (a) ERS-1 image; (b) colour composite image of SPOT image data, band 3 = red, band 2 = green and band 1 = blue; (c) colour composite image of ERS-1 image = red, band 3 = green and band 2 = blue; (d) fused image of the three SPOT bands and the ERS-1 image by using the pixel addition technique. Note that in the false colour composite the spectral information has been approximately preserved by using this technique.



Image fusion examples
In this section some examples are presented to illustrate various applications of
image fusion and image processing details by which image fusion results can be
optimized.
Figure 10.13 shows an example of an image sharpening application. Bands 7, 3 and 1 of a Landsat 7 ETM scene acquired over an arid, desert-like area in Southeastern Spain, displayed as a colour composite in Figure 10.13(a), are co-registered and fused with an orthophoto mosaic (b) at a spatial resolution of 5 metres, by using the RGB-IHS colour space transformations. Figure 10.13(c) shows the image fusion result obtained by substituting the intensity computed from the three contrast-enhanced ETM bands with the orthophoto mosaic. The high spatial resolution detail of the orthophoto mosaic helps the interpreter to associate drainage patterns and terrain morphology with recent alluvial sediments and landforms in areas of badland erosion underlain by sedimentary rock units with contrasting spectral properties. Note that the hue and saturation of the colour composite in (a) have been preserved. The intensity information (e.g. areas of high correlation among the ETM bands), however, has been lost in this image. This can be overcome by replacing the intensity by a weighted average of the intensity and the orthophoto mosaic, as illustrated in (d).
Figure 10.14 shows an image fusion application based on an ERS-1 synthetic aperture radar image (a) and SPOT 2 multispectral image data (b) of an area covering the Amazon forest in southern Colombia. Figure 10.14(c) shows a colour composite image derived from these input data, where the ERS-1 image is displayed through the red channel, SPOT band 3 through the green channel and SPOT band 2 through the blue channel. Note that, although this image shows an acceptable colour enhancement, the image colours cannot be intuitively related to the dynamic ranges of the input image channels, thereby hampering interpretation of the multispectral and backscatter properties of the SPOT and ERS-1 data, respectively. By contrast, Figure 10.14(d) shows an image fusion result which respects the perceptual attributes of IHS space. This image has been generated using the pixel addition technique explained above. The terrain morphology of the plateau in the west and the river flood plain, apparent on the ERS-1 image, is enhanced without losing the false colour spectral information modulated by the three SPOT bands, which is vital to the interpretation of vegetation cover. Remember that in areas of high relief, distortions caused by layover and foreshortening (see Section 6.2.5) can be severe, leading to potential problems in pixel-to-pixel registration unless properly rectified.


Summary
The way we perceive colour is most intuitively described by the Hue component
of the IHS colour space. The colour space used to describe colours on computer
monitors is the RGB space.
When displaying an image on a screen (or on hard copy) many choices need
to be made: the selection of bands, the sequence in which these are linked to the
Red-Green-Blue channels of the monitor, the use of stretching techniques and
the possible use of (spatial) filtering techniques.
The histogram, and the derived cumulative histogram, are the basis for all
stretching methods. Stretching, or contrast enhancement, is realized using transfer functions.
Filter operations are based on the use of a kernel. The weights of the coefficients in the kernel determine the effect of the filter which can be, for example,
to smooth or sharpen the original image.


Questions
The following questions can help you to study Chapter 10.
1. How many possibilities are there to visualize a 4 band image using a computer monitor?
2. You are shown a picture in which grass looks green and houses are red: what is your conclusion? Now, you are shown a picture in which grass shows as purple and houses are black: what is your conclusion now?
3. What would be a reason for not using the default application of histogram
equalization for all image data?
4. Can you think of a situation in your own context where you would probably use filters to optimize interpretation of image data?



The following are typical exam questions:
1. List the three colour spaces used in the context of remote sensing and visualization.
2. Which colour space should be applied when using computer monitors?
How is the colour white produced?
3. What information is contained in a histogram of image data?
4. Which technique is used to maximize the range of colours (or grey values)
when displaying an image?
5. Using an example, explain how a filter works.


Chapter 11
Visual image interpretation

11.1 Introduction

Up to now, we have been dealing with acquisition and preparation of image


data. The data acquired still needs to be interpreted (or analysed) to extract the
required information. In general, information extraction methods from remote
sensing imagery can be subdivided into two groups:
Information extraction based on visual analysis or interpretation of the
data. Typical examples of this approach are visual interpretation methods for land use or soil mapping. Also the generation/updating of topographic maps from aerial photographs is based on visual interpretation.
Visual image interpretation is introduced in this Chapter.
Information extraction based on semi-automatic processing by the computer. Examples include automatic generation of DTMs, image classification and calculation of surface parameters. Image classification is introduced
in Chapter 12.
The most intuitive way to extract information from remote sensing images
is by visual image interpretation, which is based on man's ability to relate colours
and patterns in an image to real world features. Chapter 10 has explained different methods used to visualize remote sensing image data.
In some situations pictures are studied to find evidence of the presence of
features, for example, to study natural vegetation patterns. Most often the result
of the interpretation is made explicit by digitizing the geometric and thematic
data of relevant objects (mapping). The digitizing of 2D features (points, lines
and areas) is carried out using a digitizer tablet or on-screen digitizing. 3D features interpreted in stereopairs can be digitized using stereoplotters or digital
photogrammetric workstations.
In Section 11.2 some theory about image understanding is explained. Visual
image interpretation is used to produce spatial information in all of ITC's fields
of interest: urban mapping, soil mapping, geomorphological mapping, forest
mapping, natural vegetation mapping, cadastral mapping, land use mapping
and many others. As visual image interpretation is application specific, it is
illustrated by two examples (soil mapping, land cover mapping) in Section 11.3.
The last section (11.4) addresses some aspects of quality.

11.2 Image understanding and interpretation

11.2.1 Human vision

In Chapter 10 human perception of colour was explained. Human vision goes


a step beyond perception of colour: it deals with the ability of a person to draw
conclusions from visual observations. In analysing a picture, typically you are
somewhere between the following two situations: direct and spontaneous recognition, or using several clues to draw conclusions by a reasoning process (logical
inference).
Spontaneous recognition refers to the ability of an interpreter to identify objects
or phenomena at a first glance. Consider Figure 11.1. An agronomist would immediately recognize the pivot irrigation systems with their circular shape. S/he
would be able to do so because of earlier (professional) experience. Similarly,
most people can directly relate an aerial photo to their local environment. The
quote from people who are shown an aerial photograph for the first time, "I see because I know", refers to spontaneous recognition.
Logical inference means that the interpreter applies reasoning. In the reasoning
the interpreter will use his/her professional knowledge and experience. Logical
inference is, for example, concluding that a rectangular shape is a swimming
pool because of its location in a garden and near to a house. Sometimes, logical
inference alone cannot help you in interpreting images, so that field observations
are required (Section 11.4). Consider the aerial photograph in Figure 11.2. Would
you be able to interpret the material and function of the white mushroom-like
objects? A field visit would be required for most of us to relate the different
features to elements of a house or settlement.


Figure 11.1: Satellite image of the Antequera area in Spain; the circular features are pivot irrigation systems. Area pictured is 5 km wide.

Figure 11.2: Mud huts of Labbezanga near the Niger river. Photo by Georg Gerster, 1972.

11.2.2 Interpretation elements

When dealing with image data, visualized as pictures, a set of terms is required
to express and define characteristics present in a picture. These characteristics
are called interpretation elements and are used, for example, to define interpretation
keys, which provide guidelines on how to recognize certain objects.
The following seven interpretation elements are distinguished: tone/hue,
texture, shape, size, pattern, site and association.
Tone is defined as the relative brightness of a black/white image. Hue refers
to the colour on the image as defined in the intensity-hue-saturation (IHS)
system. Tonal variations are an important interpretation element in an image interpretation. The tonal expression of objects on the image is directly
related to the amount of light (energy) reflected from the surface. Different
types of rock, soil or vegetation most likely have different tones. Variations
in moisture conditions are also reflected as tonal differences in the image:
increasing moisture content gives darker grey tones. Variations in hue are
primarily related to the spectral characteristics of the measured area and
also to the bands selected for visualization (see Chapter 10). The advantage of hue over tone is that the human eye has a much larger sensitivity
for variations in colour (approximately 10,000 colours) as compared to tone
(approximately 200 grey levels).
Shape or form characterizes many terrain objects visible in the image. Shape
also relates to (relative) height when dealing with stereo-images, which
we discuss in Section 11.2.3. Height differences are important to distinguish between different vegetation types and also in geomorphological
mapping. The shape of objects often helps to determine the character of
the object (built-up areas, roads and railroads, agricultural fields, etc.).


Size of objects can be considered in a relative or absolute sense. The width
of a road can be estimated, for example, by comparing it to the size of the
cars, which is generally known. Subsequently this width determines the
road type, e.g. primary road, secondary road, etc.
Pattern refers to the spatial arrangement of objects and implies the characteristic repetition of certain forms or relationships. Pattern can be described by terms such as concentric, radial, checkerboard, etc. Some land
uses, however, have specific and characteristic patterns when observed on
aerospace data. You may think of different irrigation types but also different types of housing in the urban fringe. Other typical examples include
the hydrological system (river with its branches) and patterns related to
erosion.
Texture relates to the frequency of tonal change. Texture may be described
by terms such as coarse or fine, smooth or rough, even or uneven, mottled,
speckled, granular, linear, woolly, etc. Texture can often be related to terrain roughness. Texture is strongly related to the spatial resolution of the
sensor applied. A pattern on a large scale image may show as texture on a
small scale image of the same scene.
Site relates to the topographic or geographic location. A typical example of
this interpretation element is that backswamps can be found in a floodplain
but not in the centre of a city area. Similarly, a large building at the end of a
number of converging railroads is likely to be a railway station; we would
not expect a hospital at this site.
Association refers to the fact that a combination of objects makes it possible
to infer its meaning or function. An example of the use of association is an interpretation of a thermal power plant based on the combined
recognition of high chimneys, large buildings, cooling towers, coal heaps
and transportation belts. In the same way the land use pattern associated
with small scale farming will be characteristically different to that of large
scale farming.
Having introduced these seven interpretation elements you may have noticed a relation with the spatial extent of the feature to which they relate. Tone
or hue can be defined for a single pixel; texture is defined for a neighbouring
group of pixels, not for single pixels. The other interpretation elements relate to
individual objects or a combination of objects. The simultaneous and often implicit use of all these elements is the strength of visual image interpretation. In
standard image classification (Chapter 12) only hue is applied, which explains
the limitations of automated methods compared to visual image interpretation.

11.2.3 Stereoscopic vision

The impression of depth encountered in the real world can also be realized by
images of the same object that are taken from different positions. Such a pair of
images (photographs or digital images) is separated and observed at the same
time by both eyes. These give images on the retinas of the viewer's eyes, in
which objects at different positions in space are projected on relatively different
positions. We call this stereoscopic vision. Pairs of images that can be viewed
stereoscopically are called stereograms. Stereoscopic vision is explained here because the impression of height and height differences is important in the interpretation of both natural and man-made features from image data. Note that
in Chapter 9 we explained that under specific conditions stereo-models can be
used to derive 3D coordinates.
Under normal conditions the human eye can focus on objects between 150 mm
distance and infinity. In doing so we direct both eyes to the object (point) of interest. This is known as convergence, and is how humans normally see in three
dimensions. To view the stereoscopic model formed by a pair of overlapping
photographs, the two images have to be separated so that the left and right eyes
see only the left and right photographs, respectively. In addition one should not
focus on the photo itself but at infinity. Some experienced persons can experience stereo by putting the two photos at a suitable distance from their eyes.
Most of us need some help and different methods have been developed.
Pocket and mirror stereoscopes, and also the photogrammetric plotters, use a
system of lenses and mirrors to feed one image into one eye. Pocket and mirror
stereoscopes are mainly applied in mapping applications related to vegetation,
forest, soil and geomorphology (Figure 11.3). Photogrammetric plotters are used
in topographic and large scale mapping activities.
Another way of achieving stereovision is to project the two images in two


colours. Most often red and green colours are applied; the corresponding spectacles comprise one red and one green glass. This method is known as the
anaglyph system and is particularly suited to viewing overlapping images on
a computer screen. An approach used in digital photogrammetric systems is to
apply different polarizations for the left and right images. Polarized spectacles
make the left image visible to the left eye and the right image to the right eye.

Figure 11.3: The mirror stereoscope enables stereoscopic vision of stereograms. Each photo is projected only onto one eye.

11.3 Application of visual image interpretation

11.3.1 Soil mapping with aerial photographs

Semi-detailed soil mapping is a typical application of visual interpretation of


panchromatic stereopairs. The result is a soil map at scale 1:50,000. This type of
mapping is carried out either systematically or on a project basis, and is rarely
updated unless the soil landscape is radically changed, e.g. by natural disasters
or land reclamation works.
This section explains a method of soil mapping with the aid of aerial photos
that can be used in many areas of the world, i.e. the so-called geo-pedologic or
soil-landscape approach [11, 32, 44]. It starts from the assumption that the type
of soil is highly dependent on the position on the landscape (the relief), the type
of landscape itself (indicating the time at which the present soil formed), and
the underlying geology (so-called parent material in which the soil formed).
Local effects of climate and organisms (vegetation or animals) can also be seen
on aerial photos.
There are some soil landscapes where this method is not satisfactory. At more
detailed scales, field observations become more important than photo interpretation, and at more general scales, monoscopic interpretation of satellite imagery
is often more appropriate.
The overall process for semi-detailed soil mapping is as follows:
In the first phase, a transparency is placed over the working area of one
photo of a stereo pair and Aerial Photo Interpretation (API) units are delineated on it. The minimum size of the delineated units is 10 ha (0.4 cm² on
the final map). Delineation is mainly based on geomorphological characteristics, using a structured legend. Stereopairs allow observation of height
and terrain form. In addition, the usual interpretation elements (colour,
texture, shape, . . . ) may be used. The interpretation is hierarchical: first

Figure 11.4: Panchromatic photograph to be interpreted.

the interpreter finds and draws master lines dividing major landscapes
(mountains, hill land, plateau, valley, . . . ). Each landscape is then divided
into relief types (e.g. sharply-dissected plateau), each of which is further
divided by lithology (e.g. fine-bedded shales and sandstones), and finally
by detailed landform (e.g. scarp slope). The landform consists of a topographic form, a geomorphic position, and a geochronological unit, which
together determine the environment in which the soil formed. A legend
category usually comprises many areas (polygons) with the same photointerpretation characteristics. Figure 11.4 shows a photograph and Fig-

first

previous

next

last

back

exit

zoom

contents

index

about

397

11.3. Application of visual image interpretation

Figure 11.5:
Photointerpretation
transparency related to the
aerial photo shown in
Figure 11.4.

ure 11.5 shows the interpretation units that resulted from its stereo interpretation.
In the next phase, a sample area of the map is visited in the field to study
the soil. The sampled area is between 10–20% of the total area and comprises all legend classes introduced in the previous stage. The soils are
described in the field, and samples are taken for laboratory analysis, to determine their characteristics (layering, particle-size distribution, density,



chemical composition, . . . ). The observations are categorized into a local or more general soil classification system. In this way, each photointerpretation unit is characterized by the soil types it contains, their relative abundance in the unit, and their detailed landscape relation. Some
API units have only one dominant soil in a 10 ha area, but in complex terrain, there may be several. The field check may also reveal a deficiency
in photo-interpretation, in which case the lines are adjusted, both in the
sample area, and according to the same criteria in extrapolation areas.
The final legend of such a map shows two linked tables: the geopedologic
legend (Table 11.1) and the soil legend (Table 11.2). Each map unit belongs
to one geopedologic class, which can be linked to more than one soil class
(1-to-n relationship). It is possible for the same soil class to occur in several
geopedologic classes, in which case the relationship is n-to-m.
Following the field check and possible adjustment of the API lines, the
transparencies are digitized, then individually corrected by ground control points and, preferably, a DEM (for ortho-correction), and merged into
one map. Without ortho-correction, it is necessary to manually edge-match
the georeferenced transparencies and correct for the effects of relief displacement. Another possibility is to manually re-compile lines on either a
topographic map with sufficient detail, or on orthophotos made independently.
Validation of these maps is typically carried out by transects that cross the
most variation over the shortest distance. Soils are observed at points located by photo-reading or GPS at pre-determined intervals of the transect
(for example every 100 m), and the predicted soil class is compared with



the actual class. This provides a quantitative measure of the map accuracy.
Soil mapping is an example of geo-information production with the aid of
aerospace survey, which is based on complementary roles for remote sensing
and ground observations, as well as prior knowledge of the survey area (geology, geomorphology, climate history, ecology, . . . ) and a sound conceptual
framework about the object of study (here, soil forming factors). Remote sensing
is used to provide a synoptic view, based on a mental model of soil-landscape
relationships, and to limit the amount of work in the field using stratification.


Table 11.1: Geopedologic legend, which results in (hierarchical) API-codes.

Landscape   Relief            Lithology                        Landform                              API Code
Hilland     Dissected ridge   Loess                            Summit                                Hi111
                                                               Shoulder & backslope                  Hi112
            Escarpment        Loess over basalt                Scarp                                 Hi211
                              Colluvium from loess / basalt    Toe slope                             Hi212
            Vales             Alluvium from loess              Slope                                 Hi311
                                                               Bottom                                Hi312
            Glacis            Colluvium from loess             Slope                                 Hi411
Plain       High terrace      Loess over old river alluvium    Tread                                 Pl311
                                                               Abandoned channel                     Pl312
            Old floodplain    Old alluvium                     Abandoned floodplain (channelized)    Pl411

Table 11.2: Relationship between the soil types and the geo-pedologic legend is made through the API-codes. Each API code (Hi111, Hi112, Hi211, Hi212, Hi311, Hi312, Hi411, Pl311, Pl312 and Pl411) is linked to its dominant soil type(s); the soil types listed include Eutri-chromic Cambisols (fine silty and coarse silty), Calcaric Regosols (coarse silty), Stagnic Cambisols (silty skeletal), Eutric Gleysols (fine silty), Eutric Luvisols (coarse silty), Silti-calcic Kastanozems (fine silty), Mollic Gleysols (fine loamy), Gleyic Fluvisols (fine loamy) and Eutric Fluvisols (fine loamy). Units containing more than one dominant soil also list the relative abundances (e.g. 60%/40%, 50%/50%).

11.3.2 Land cover mapping from multispectral data

This second example refers to the Coordination of Information on the Environment


(CORINE) land cover project, in which a land cover database for Europe is being
established. CORINE is based on the work done in different countries of the European Union. This example is given to illustrate a situation in which different
organizations and individuals are involved in the production of the total data
set. To enable such an approach, research activities have been carried out that
resulted in a technical guide for producing the land cover data from spaceborne
multispectral data [9, 28].
At this point, the terms land cover and land use should be defined since they
are often used in the context of image interpretation. Land cover refers to the
type of feature present on the surface of the land. It refers to a physical property or material, e.g. water, sand, potato crop, or asphalt. Land use relates to the
human activity or economic function for a specific piece of land, e.g. urban use,
industrial use or nature reserve. Another way to put it is that land cover is more
dynamic than land use. Most vegetated areas (land cover) change over time.
Land use as it is related to the human activity is more stable: land that is being used for irrigated crops or extensive cattle breeding will usually not change
within a year. The difference between land cover and land use is also explained
in Figure 11.6 showing that different land use classes can be composed of the
same land cover classes. Principally, remote sensing image data give information about land cover. Using contextual information the land use sometimes can
be deduced. The distinction between land cover and land use becomes highly
relevant in dealing with, for example, environmental impact assessment studies.
In such studies, not only the presence of a certain type of crop is relevant but also
the way in which this crop is grown and treated (in terms of manure, herbicides,
pesticides etc.) is important.


The overall CORINE mapping process is as follows:
The process starts with the selection of cloud free multispectral satellite
images of an area. Hard copy prints are produced from the image data.
Guidelines are given to yield prints that are similar in terms of scale and
colour enhancement. The prints are the basis for the interpretation, while
on-screen digitizing procedures are becoming increasingly used. Additional information in the form of topographical maps, photo-atlases, soil
maps, etc. is collected to aid the interpretation.
The interpretation of the land cover units is made on transparencies, which are overlaid on the hard copy images and which are digitized at a later stage. In the CORINE project the minimum mapping unit is set at 25 ha; the minimum width of mapping units is set at 100 m. The interpretation is based on the nomenclature, which gives the names of the land cover classes. CORINE has a three level categorization (Table 11.3). The reason for defining the land cover at three different levels is to ensure consistent aggregation of the different classes. At the same time, one can imagine that a map of level-1 classes is more accurate than a map of level-3 classes. For each class, a more extended class description is given, followed by instructions on how to recognize the specific class (Table 11.4), together with some example images of the class. The instructions with respect to discriminating different classes or categories are called an interpretation key.

Figure 11.6: Residential land use (a) and industrial land use (b) may be composed of similar land cover classes (buildings, road, grass, trees).
The line patterns on the transparencies are digitized and the corresponding
class codes are entered into the computer. The result is a polygon database.
One of the attributes is the land cover code. A sample result of the interpretation is shown in Figure 11.7.
The final activity of any mapping project is an independent assessment of
accuracy (validation). In the CORINE project this is done by field check of
a limited number of objects that are selected by a sampling strategy.

Figure 11.7: Example of the result of the CORINE Land Cover classification, showing a part around one of the rivers in the Netherlands. Courtesy of Wageningen-UR.


Table 11.3: Part of the CORINE nomenclature for land cover.

1. Artificial surfaces
   1.1. Urban fabric
        1.1.1. Continuous urban fabric
        1.1.2. Discontinuous urban fabric
   1.2. Industrial, commercial and transport units
        1.2.1. Industrial or commercial units
        1.2.2. Road and rail networks and associated land
        1.2.3. Port areas
        1.2.4. Airports
   1.3. Mine, dump and construction sites
        1.3.1. Mineral extraction sites
        1.3.2. Dump sites
        1.3.3. Construction sites
   1.4. Artificial non-agricultural vegetated areas
        1.4.1. Green urban areas
        1.4.2. Sport and leisure facilities
2. Agricultural areas
   2.1. Arable land
        2.1.1. Non-irrigated arable land
        2.1.2. Permanently irrigated land
        2.1.3. Rice fields
   2.2. Permanent crops
        2.2.1. Vineyards
        2.2.2. Fruit trees and berry plantations


Table 11.4: CORINE's extended description for class 1.3.1 (Mineral extraction sites). Source: [9].

Class 1.3.1 Mineral Extraction Sites

Extended Description: Areas with open-pit extraction of construction material (sandpits, quarries) or other minerals (open-cast mines). Includes flooded gravel pits, except for river-bed extraction.
How to recognize: Quarries are easily recognizable on satellite images (white patches) because they contrast with their surroundings. The same is true for working gravel pits. For open-cast mines, the difference with item 1.3.2 (dump sites) is not always obvious. In such cases, ancillary data will be needed to remove any doubt.
Disused open-cast mines, quarries, sandpits, sludge quarries and gravel pits (not filled with water) are included in this category. However, ruins do not come under this heading.
Sites being worked or only recently abandoned, with no trace of vegetation, come under this heading. Where vegetal colonization is visible, sites are classified under the appropriate vegetal cover category.
This heading includes buildings and associated industrial infrastructure (e.g. cement factories) and small water bodies of less than 25 ha created by mining.

11.3.3 Some general aspects

Mapping, in general, requires an abstraction of the world. The simplest


way to do this is by introducing classes or categories. The land is divided
into (discrete) objects, which are assigned to one class (only). Sometimes such
abstraction is defined beforehand and determined by the required information.
On other occasions, this abstraction is made during the process itself because of
lack of experience with the image data and terrain at hand. For different reasons
hierarchical systems are often used. Among others they provide an easy way
to aggregate data into higher level classes [26]. One of the drawbacks of such
a discrete approach is that the actual phenomenon that is mapped is not discrete. You may imagine that even a qualified photo-interpreter has difficulties
in discriminating between agricultural land including semi-natural areas and
semi-natural areas with limited agricultural activity. In the context of type of
objects, therefore, a distinction can be made between objects with determinate and indeterminate boundaries. The latter category requires quite different
methods and approaches [5].
Another aspect of the interpretation concerns geometric properties. The minimum size (area) or width of the objects (units) to be distinguished needs to be
defined. Another variable is the degree to which boundaries are generalized.
This aspect is directly linked to the characteristics of the objects mapped: only
crisp objects (such as houses, roads, etc.) can be delineated accurately. Drawing
boundaries in natural vegetation areas can be highly problematic if not useless.
In general, all image based mapping processes require field observations. Field
observations can be used:
to gather local knowledge beforehand to guide the interpretation. When
dealing with a new area, some of the features observed on the images will



not be understood (for example Figure 11.2). Field observations will help
to interpret these features.
to gather data about areas or features that cannot be studied from the image data. For example, when dealing with large scale photos of an urban
environment parts of the area will be in dead ground or hidden by other
constructions such as a bridge. The only way to get information about these
areas is to visit them. Possibly, additional information that cannot be derived from the images needs to be collected, e.g. the address of a house.
This can only be done by walking through the streets.
to evaluate the intermediate and final interpretation result. The evaluation
of the final result is called validation. In a validation the accuracy of the established data is determined. For this purpose a limited number of objects
or areas are selected (using a sampling approach) and visited in the field.
The data collected in the field is referred to as ground truth. Comparison
of the ground truth and the interpretation result then is used to calculate
different measures of accuracy.
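A minimal sketch of such an accuracy calculation from ground truth, assuming the interpreted and field-observed classes of the sampled objects are available as two equal-length lists; the class labels below are invented purely for illustration:

from collections import Counter

interpreted = ["forest", "arable", "forest", "urban", "arable", "forest"]
ground_truth = ["forest", "arable", "arable", "urban", "arable", "grass"]

matches = sum(i == t for i, t in zip(interpreted, ground_truth))
overall_accuracy = matches / len(ground_truth)       # fraction of correctly mapped samples

confusion = Counter(zip(ground_truth, interpreted))  # (true class, mapped class) pair counts
print(f"overall accuracy: {overall_accuracy:.2f}")
print(confusion)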

11.4 Quality aspects

The quality of the result of an image interpretation depends on a number of


factors: the interpreter, the image data used and the guidelines provided.
The professional experience and the experience with image interpretation
determine the skills of a photo-interpreter. A professional background is
required: a geological interpretation can only be made by a geologist since
s/he is able to relate image features to geological phenomena. Local
knowledge, derived by field visits, is required to help in the interpretation.
The image data applied limit the phenomena that can be studied, both
in a thematic and geometric sense. One cannot, for example, generate a
reliable database on the tertiary road system using multispectral satellite
data. Likewise, black-and-white aerial photos contain limited information
about agricultural crops.
Finally, the quality of the interpretation guidelines is of great influence.
Consider, for example, a project in which a group of persons is to carry out
a mapping project. Ambiguous guidelines will prevent a consistent mapping in which individual results form a seamless database of consistent
quality.
Especially in large projects and monitoring programmes, all the above three
points play an important role in ensuring the replicability of the work. Replicability refers to the degree of correspondence obtained by different persons for the
same area or by the same person for the same area at different moments in time.
Replicability does not provide information on the accuracy (the relation with
the real world) but it does give an indication of the quality of the class definition
first

previous

next

last

back

exit

zoom

contents

index

about

410

11.4. Quality aspects

Figure 11.8: Two interpretation results derived


by two photo-interpreters
analysing the same image. Note the overall differences but also differences in the generalization
of the lines. (From [13].)

(crisp or ambiguous) and the instructions and methods used. Two examples are
given here to give you an intuitive idea. Figure 11.8 gives two interpretation results for the same area. Note that both results differ in terms of total number of
objects (map units) and in terms of (line) generalization. Figure 11.9 compares
13 individual interpretation results of a geomorphological interpretation. Similar to the previous example, large differences are found along the boundaries.
In addition to this, you can also conclude that for some objects (map units) there
was no agreement on the thematic attribute.

Figure 11.9: Comparison of 13 interpretations of the same image. The grey value represents the degree of correspondence: white indicates agreement of all 13 interpreters, black indicates that all 13 interpreters disagreed on the thematic class for that location. (From [25].)

Summary
Visual image interpretation is one of the methods used to extract information
from remote sensing image data. For that purpose, images need to be visualized on screen or in hard-copy. The human vision system is used to interpret
the colours and patterns on the picture. Spontaneous recognition and logical
inference (reasoning) are distinguished.
Interpretation keys or guidelines are required to instruct the image interpreter. In such guidelines, the (seven) interpretation elements can be used to
describe how to recognize certain objects. Guidelines also provide a classification scheme, which defines the thematic classes of interest and their (hierarchical) relationships. Finally, guidelines give rules on the minimum size of objects
to be included in the interpretation.
When dealing with a new area or a new application, no guidelines are available. An iterative approach is then required to establish the relationship between
features observed in the picture and the real world.
In all interpretation and mapping processes the use of ground observations is essential to (i) acquire knowledge of the local situation, (ii) gather data for areas that cannot be mapped from the images, and (iii) check the result of the interpretation.
The quality of the result of visual image interpretation depends on the experience and skills of the interpreter, the appropriateness of the image data applied
and the quality of the guidelines being used.

Questions
The following questions can help you to study Chapter 11.
1. What is the relationship between image visualization and image interpretation?
2. Describe (to a colleague) how to recognize a road on an aerial photo (make
use of the interpretation elements).
3. Why is it necessary to have a sound conceptual model of how soils form
in the landscape to apply the aerial photo-interpretation method presented
in Section 11.3.1? What are the advantages of this approach in terms of efficiency and thematic accuracy, compared to interpretation element (only)
analysis?
4. Describe a relatively simple method to check the quality (in terms of replicability) of visual image interpretation.
5. Which products in your professional environment are based on visual image interpretation?

6. Consider the CORINE nomenclature; identify three classes which can be accurately mapped; also identify three classes that can be expected to be exchanged (confused) with other classes.
The following are typical exam questions:
1. List the seven interpretation elements.
2. Give three reasons for field observations in the process of image interpretation.
3. Give definitions of land cover and land use. Give an example of each of
them.
4. In what situations is visual image interpretation preferred to semi-automated interpretations?
5. Give two examples of cases where visual interpretation is not possible,
even with good quality imagery.

Chapter 12
Digital image classification

12.1 Introduction

Chapter 11 explained the process of visual image interpretation. In this process,


human vision plays a crucial role in extracting information from image data.
Although computers may be used for visualization and digitization, the interpretation itself is carried out by the operator.
In this chapter digital image classification is introduced. In this process the (human) operator instructs the computer to perform an interpretation according to
certain conditions. These conditions are defined by the operator. Image classification is one of the techniques in the domain of digital image interpretation.
Other techniques include automatic object recognition (for example, road detection) and scene reconstruction (for example, generation of 3D object models).
Image classification, however, is the most commonly applied technique in the
ITC context.
Application of image classification is found in many regional scale projects.
In Asia, the Asian Association of Remote Sensing (AARS) is generating various
land cover data sets based on (un)supervised classification of multispectral satellite data. In the Africover project (by the Food and Agriculture Organization,
FAO), image classification techniques are being used to establish a pan-African
land cover data set. The European Commission requires national governments
to verify the claims of farmers related to subsidized crops. These national governments employ companies to make a first inventory, using image classification
techniques, which is followed by field checks.
Image classification is based on the different spectral characteristics of different materials on the Earth's surface, as introduced in Section 2.4. This chapter
focuses on classification of multispectral image data. Section 12.2 explains the
concepts of image space and feature space. Image classification is a process that

operates in feature space. Section 12.3 gives an overview of the classification
process, the steps involved and the choices to be made. The result of an image
classification needs to be validated to assess its accuracy (Section 12.4). Finally,
two major problems in image classification are addressed in Section 12.5.

12.2 Principle of image classification
12.2.1 Image space

A digital image is a 2D-array of elements. In each element the energy reflected or emitted from the corresponding area on the Earth's surface is stored. The spatial arrangement of the measurements defines the image or image space. Depending on the sensor, data are recorded in n bands (Figure 3.11, repeated here as Figure 12.1). Digital image elements are usually stored as 8-bit DN-values (range: 0-255).
Figure 12.1: The structure of a multi-band image.

12.2.2 Feature space

In one pixel, the values in (for example) two bands can be regarded as components of a two-dimensional vector, the feature vector. An example of a feature vector is (13, 55), which means that DN-values of 13 and 55 are stored for band 1 and band 2, respectively. This vector can be plotted in a two-dimensional graph.
Figure 12.2: Plotting of the values of a pixel in the feature space for a two- and three-band image.

Similarly, this approach can be visualized for a three-band situation in a three-dimensional graph. A graph that shows the values of the feature vectors is called a feature space, also called a feature space plot or scatter plot. Figure 12.2 illustrates how a feature vector (related to one pixel) is plotted in the feature space for two and three bands, respectively. Two-dimensional feature space plots are most common.
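To make the idea of a feature vector and a feature space concrete, the following is a minimal sketch (not part of the original text; the band data are synthetic and purely illustrative) that builds the feature vectors of a two-band image and shows them as a scatter plot:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two hypothetical 8-bit bands of the same scene (100 x 100 pixels).
rng = np.random.default_rng(0)
band1 = rng.integers(0, 256, size=(100, 100))
band2 = rng.integers(0, 256, size=(100, 100))

# Each pixel yields a feature vector (DN in band 1, DN in band 2).
features = np.column_stack((band1.ravel(), band2.ravel()))

# The feature-space (scatter) plot: one point per pixel.
plt.scatter(features[:, 0], features[:, 1], s=1)
plt.xlabel("band 1 (DN)")
plt.ylabel("band 2 (DN)")
plt.title("Two-dimensional feature space")
plt.show()
```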

Figure 12.3: Scatterplot of two bands of a digital image. Note the units (DN-values) along the x- and y-axes. The intensity at a point in the feature space is related to the number of pixels at that point.

Note that plotting values is difficult for a four- or more-dimensional case,


even though the concept remains the same. A practical solution when dealing
with four or more bands is that all the possible combinations of two bands are
plotted separately. For four bands, this already yields six combinations: bands 1
and 2, 1 and 3, 1 and 4, bands 2 and 3, 2 and 4, and bands 3 and 4.
Plotting the combinations of the values of all the pixels of one image yields
a large cluster of points. Such a plot is referred to as a scatterplot (Figure 12.3).
A scatterplot provides information about the combinations of pixel values that
occur within the image. Note that some combinations occur more frequently than others; this can be visualized by using intensity or colour.

Distances and clusters in the feature space

Distance in the feature space is expressed as Euclidean distance and the units are DN (as this is the unit of the axes). In a two-dimensional feature space the distance can be calculated according to Pythagoras' theorem. In the situation of Figure 12.4, the distance between (10, 10) and (40, 30) equals $\sqrt{(40-10)^2 + (30-10)^2}$. For three or more dimensions, the distance is calculated in a similar way.

Figure 12.4: The Euclidean distance between the two points is calculated using Pythagoras' theorem. (Axes: band x and band y, in units of 5 DN.)
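As a quick check of the worked example above, a minimal sketch (illustrative only) that computes the Euclidean distance between the two feature vectors (10, 10) and (40, 30):

```python
import numpy as np

p = np.array([10, 10])   # feature vector of the first point (DN)
q = np.array([40, 30])   # feature vector of the second point (DN)

# Euclidean distance via Pythagoras' theorem; works for any number of bands.
distance = np.sqrt(np.sum((q - p) ** 2))
print(distance)          # about 36.06 DN
```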

12.2.3 Image classification

The scatterplot shown in Figure 12.3 gives information about the distribution
of corresponding pixel values in two bands of an image. Figure 12.5 shows a
feature space in which the feature vectors have been plotted for six specific land
cover classes (grass, water, trees, etc). Each cluster of feature vectors (class) occupies its own area in the feature space. Figure 12.5 shows the basic assumption
for image classification: a specific part of the feature space corresponds to a specific class. Once the classes have been defined in the feature space, each image
pixel can be compared to these classes and assigned to the corresponding class.
Classes to be distinguished in an image classification need to have different
spectral characteristics. This can, for example, be analysed by comparing spectral reflectance curves (Section 2.4). Figure 12.5 also illustrates the limitation of
image classification: if classes do not have distinct clusters in the feature space,
image classification can only give results to a certain level of reliability.
The principle of image classification is that a pixel is assigned to a class based
on its feature vector, by comparing it to predefined clusters in the feature space.
Doing so for all image pixels results in a classified image. The crux of image
classification is in comparing it to predefined clusters, which requires definition of
the clusters and methods for comparison. Definition of the clusters is an interactive process and is carried out during the training process. Comparison of the
individual pixels with the clusters takes place using classifier algorithms. Both are
explained in the next section.

Figure 12.5: Feature space showing the respective clusters of six classes; note that each class occupies a limited area in the feature space.

12.3 Image classification process

The process of image classification (Figure 12.6) typically involves five steps:
1. Selection and preparation of the image data. Depending on the cover types
to be classified, the most appropriate sensor, the most appropriate date(s)
of acquisition and the most appropriate wavelength bands should be selected (Section 12.3.1).
Figure 12.6: The classification process; the most important component is the training in combination with the selection of the algorithm.
2. Definition of the clusters in the feature space. Here two approaches are
possible: supervised classification and unsupervised classification. In a supervised classification, the operator defines the clusters during the training
process (Section 12.3.2); in an unsupervised classification a clustering algorithm automatically finds and defines a number of clusters in the feature
space (Section 12.3.3).
3. Selection of classification algorithm. Once the spectral classes have been
defined in the feature space, the operator needs to decide on how the pixels
(based on their DN-values) are assigned to the classes. The assignment can
be based on different criteria (Section 12.3.4).
4. Running the actual classification. Once the training data have been established and the classifier algorithm selected, the actual classification can be
carried out. This means that, based on its DN-values, each individual pixel
in the image is assigned to one of the predefined classes (Figure 12.7).
5. Validation of the result. Once the classified image has been produced its
quality is assessed by comparing it to reference data (ground truth). This
requires selection of a sampling technique, generation of an error matrix,
and the calculation of error parameters (Section 12.4).

Figure 12.7: The result of classification of a multispectral image (a) is a raster in which each cell is assigned to some thematic class (b).
The above points are elaborated on in the next sections. Most examples deal
with a two-dimensional situation (two bands) for reasons of simplicity and visualization. In principle, however, image classification can be carried out on any
n-dimensional data set. Visual image interpretation, by contrast, is limited to an image that is composed of a maximum of three bands.

12.3.1 Preparation for image classification

Image classification serves a specific goal: converting image data into thematic data. In the application context, one is interested in the thematic characteristics of an area (pixel) rather than in its reflection values. Thematic characteristics such as land cover, land use, soil type or mineral type can be used for further analysis and input into models. In addition, image classification can also be considered as data reduction: the n multispectral bands result in a single-band raster file.
With the particular application in mind, the information classes of interest
need to be defined and their spatio-temporal characteristics assessed. Based on
these characteristics the appropriate image data can be selected. Selection of the
adequate data set concerns the type of sensor, the relevant wavelength bands
and the date(s) of acquisition.
The possibilities for the classification of land cover types depend on the date
an image was acquired. This not only holds for crops, which have a certain
growing cycle, but also for other applications. Here you may think of snow
cover or illumination by the sun. In some situations, a multi-temporal data set is
required. A non-trivial point is that the required image data should be available
at the required moment. Limited image acquisition and cloud cover may force
you to make use of a less optimal data set.
Before starting to work with the acquired data, a selection of the available
spectral bands may be made. Reasons for not using all available bands (for example all seven bands of Landsat TM) lie in the problem of band correlation and,
sometimes, in limitations of hard- and software. Band correlation occurs when
the spectral reflection is similar for two bands. An example is the correlation
between the green and red wavelength bands for vegetation: a low reflectance
in green correlates with a low reflectance in red. For classification purposes,

correlated bands give redundant information and might disturb the classification
process.
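One simple way to spot such band correlation before classification is to compute the correlation coefficient between pairs of bands. The sketch below is only an illustration; the band arrays are synthetic and the function name is an assumption, not from the text:

```python
import numpy as np

def band_correlation(band_a, band_b):
    """Pearson correlation coefficient between two co-registered bands."""
    return np.corrcoef(band_a.ravel(), band_b.ravel())[0, 1]

# Synthetic example: band_red closely follows band_green, so the pair
# carries largely redundant information for classification.
rng = np.random.default_rng(1)
band_green = rng.normal(60, 10, size=(200, 200))
band_red = 0.9 * band_green + rng.normal(0, 2, size=(200, 200))

print(band_correlation(band_green, band_red))  # close to 1: highly correlated
```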

12.3.2 Supervised image classification

One of the main steps in image classification is the partitioning of the feature
space. In supervised classification this is realized by an operator who defines the
spectral characteristics of the classes by identifying sample areas (training areas).
Supervised classification requires that the operator be familiar with the area of
interest. The operator needs to know where to find the classes of interest in the
area covered by the image. This information can be derived from general area
knowledge or from dedicated field observations (Section 11.3.1).
A sample of a specific class, comprising a number of training pixels, forms a cluster in the feature space (Figure 12.5). The clusters, as selected by the operator:
should form a representative data set for a given class; this means that the variability of a class within the image should be taken into account. Also, in an absolute sense, a minimum number of observations per cluster is required. Although it depends on the classifier algorithm to be used, a useful rule of thumb is 30 × n (n = number of bands); a small sketch of such a check is given after this list.
should not or only partially overlap with the other clusters, as otherwise a
reliable separation is not possible. Using a specific data set, some classes
may have significant spectral overlap, which, in principle, means that these
classes cannot be discriminated by image classification. Solutions are to
add other spectral bands, and/or, add image data acquired at other moments.
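As referred to above, the following is a minimal sketch of how per-class training statistics might be collected and the 30 × n rule of thumb checked. The class names, pixel counts and helper function are assumptions for illustration only:

```python
import numpy as np

def training_statistics(samples, rule_of_thumb=30):
    """Per-class mean vectors and covariances, plus a 30 x n sample-size check.

    samples: dict mapping class name -> array of shape (n_pixels, n_bands)
             containing the feature vectors of the training pixels.
    """
    stats = {}
    for name, pixels in samples.items():
        n_pixels, n_bands = pixels.shape
        stats[name] = {
            "mean": pixels.mean(axis=0),          # cluster centre
            "cov": np.cov(pixels, rowvar=False),  # used later by maximum likelihood
            "n_pixels": n_pixels,
            "enough_samples": n_pixels >= rule_of_thumb * n_bands,
        }
    return stats

# Illustrative training set for a two-band image.
rng = np.random.default_rng(2)
samples = {
    "water": rng.normal([20, 15], 3, size=(80, 2)),
    "grass": rng.normal([60, 90], 5, size=(40, 2)),   # fewer than 30 x 2 = 60 pixels
}
for name, s in training_statistics(samples).items():
    print(name, s["mean"].round(1), "enough samples:", s["enough_samples"])
```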

12.3.3 Unsupervised image classification

Supervised classification requires knowledge of the area at hand. If this knowledge is not sufficiently available or the classes of interest are not yet defined,
an unsupervised classification can be applied. In an unsupervised classification,
clustering algorithms are used to partition the feature space into a number of
clusters.
Several methods of unsupervised classification exist, their main purpose being to produce spectral groupings based on certain spectral similarities. In one
of the most common approaches, the user has to define the maximum number
of clusters in a data set. Based on this, the computer locates arbitrary mean vectors as the centre points of the clusters. Each pixel is then assigned to a cluster
by the minimum distance to cluster centroid decision rule. Once all the pixels have been labelled, recalculation of the cluster centre takes place and the
process is repeated until the proper cluster centres are found and the pixels are
labelled accordingly. The iteration stops when the cluster centres do not change
any more. At any iteration, however, clusters with less than a specified number
of pixels are eliminated. Once the clustering is finished, analysis of the closeness
or separability of the clusters will take place by means of inter-cluster distance
or divergence measures. Merging of clusters needs to be done to reduce the
number of unnecessary subdivisions in the data set. This will be done using a
pre-specified threshold value. The user has to define the maximum number of
clusters/classes, the distance between two cluster centres, the radius of a cluster,
and the minimum number of pixels as a threshold number for cluster elimination. Analysis of the cluster compactness around its centre point is done by
means of the user-defined standard deviation for each spectral band. If a cluster
is elongated, separation of the cluster will be done perpendicular to the spectral
axis of elongation. Analysis of closeness of the clusters is carried out by meafirst

previous

next

last

back

exit

zoom

contents

index

about

431

12.3. Image classification process


suring the distance between the two cluster centres. If the distance between two
cluster centres is less than the pre-specified threshold, merging of the clusters
takes place. At each iteration, any cluster with less than a specified number of
pixels is eliminated. The clusters that result after the last iteration are described
by their statistics. Figure 12.8 shows the results of a clustering algorithm on a
data set. As you can observe, the cluster centres coincide with the high density
areas in the feature space.
The derived cluster statistics are then used to classify the complete image
using a selected classification algorithm (similar to the supervised approach).
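The following is a minimal, k-means-style sketch of the iterative clustering idea described above. It deliberately omits the cluster merging, elimination and splitting rules; all names and data conventions are assumptions for illustration:

```python
import numpy as np

def simple_clustering(pixels, n_clusters, n_iter=20, seed=0):
    """Very simple iterative clustering of feature vectors (k-means style)."""
    rng = np.random.default_rng(seed)
    # Arbitrary initial cluster centres drawn from the data itself.
    centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        # Assign every pixel to the nearest centre (minimum distance rule).
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recalculate each cluster centre from the pixels now labelled with it.
        new_centres = np.array([pixels[labels == k].mean(axis=0)
                                if np.any(labels == k) else centres[k]
                                for k in range(n_clusters)])
        if np.allclose(new_centres, centres):   # centres no longer change: stop
            break
        centres = new_centres
    return centres, labels
```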

Figure 12.8: The subsequent results of an iterative clustering algorithm on a sample data set.

Figure 12.9: Principle of the box classification in a two-dimensional situation. (Panels: means and standard deviations; partitioned feature space, box classifier.)

12.3.4 Classification algorithms

After the training sample sets have been defined, classification of the image can
be carried out by applying a classification algorithm. Several classification algorithms exist. The choice of the algorithm depends on the purpose of the classification and the characteristics of the image and training data. The operator
needs to decide if a reject or unknown class is allowed. In the following, three classifier algorithms are explained. The box classifier is explained first because its simplicity helps in understanding the principle, although in practice it is hardly ever used; the Minimum Distance to Mean and the Maximum Likelihood classifiers are the ones normally applied.

Box classifier
The box classifier is the simplest classification method. For this purpose, upper
and lower limits are defined for each class. The limits may be based on the
minimum and maximum values, or on the mean and standard deviation per
class. When the lower and the upper limits are used, they define a box-like
area in the feature space, which is why it is called box classifier. The number
of boxes depends on the number of classes. Box classification is also known as
parallelepiped classification since the opposite sides are parallel (Figure 12.9).
During classification, an unknown pixel will be checked to see if it falls in any of the boxes. It is labelled with the class of the box in which it falls. Pixels that do not fall inside any of the boxes will be assigned the unknown class, sometimes also referred to as the reject class.
The disadvantage of the box classifier is that the boxes of different classes may overlap. In such a case, a pixel is arbitrarily assigned the label of the first box it encounters.
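A minimal sketch of the box classifier, assuming the class limits have already been derived from training samples (the class ids, limits and test pixels below are illustrative, not from the text):

```python
import numpy as np

def box_classify(image_pixels, class_limits):
    """Box (parallelepiped) classifier.

    image_pixels: array (n_pixels, n_bands) of feature vectors.
    class_limits: dict mapping class id -> (lower, upper) DN limits per band.
    Pixels falling in no box get the reject/unknown label 0; if boxes overlap,
    a pixel simply keeps the label of the first box it is found to fall in.
    """
    labels = np.zeros(len(image_pixels), dtype=int)          # 0 = unknown
    for class_id, (lower, upper) in class_limits.items():
        inside = np.all((image_pixels >= lower) & (image_pixels <= upper), axis=1)
        labels[(labels == 0) & inside] = class_id
    return labels

# Boxes defined from, for example, min/max values of the training samples.
limits = {1: (np.array([10, 10]), np.array([30, 40])),
          2: (np.array([50, 70]), np.array([90, 120]))}
pixels = np.array([[20, 25], [60, 80], [200, 10]])
print(box_classify(pixels, limits))    # -> [1 2 0]
```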

Figure 12.10: Principle of the minimum distance to mean classification in a two-dimensional situation. The decision boundaries are shown for a situation without threshold distance (upper right) and with threshold distance (lower right).


Minimum Distance to Mean classifier
The basis for the Minimum Distance to Mean (MDM) classifier is the cluster centres. During classification the Euclidean distances from an unknown pixel to the various cluster centres are calculated. The unknown pixel is assigned to the class for which the distance to the class mean is smallest. Figure 12.10 illustrates how a feature space is partitioned based on the cluster centres. One of the flaws of the MDM classifier is that pixels that are at a large distance from a cluster centre may also be assigned to this centre. This problem can be overcome by defining a threshold value that limits the search distance. Figure 12.10 illustrates this effect; the threshold distance to the centre is shown as a circle.
A further disadvantage of the MDM classifier is that it does not take the class
variability into account: some clusters are small and dense while others are large
and dispersed. Maximum likelihood classification takes class variability into
account.
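A minimal sketch of the Minimum Distance to Mean classifier, including the optional threshold distance described above (class means, threshold value and test pixels are illustrative assumptions):

```python
import numpy as np

def mdm_classify(image_pixels, class_means, threshold=None):
    """Minimum Distance to Mean classifier.

    image_pixels: array (n_pixels, n_bands); class_means: array (n_classes, n_bands).
    Returns labels 1..n_classes, or 0 (unknown) when a threshold distance is
    given and the nearest class mean is farther away than that threshold.
    """
    # Euclidean distance from every pixel to every class mean.
    d = np.linalg.norm(image_pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = d.argmin(axis=1) + 1
    if threshold is not None:
        labels[d.min(axis=1) > threshold] = 0
    return labels

means = np.array([[20.0, 20.0], [80.0, 100.0]])
pixels = np.array([[25, 18], [78, 95], [200, 200]])
print(mdm_classify(pixels, means))                 # -> [1 2 2]
print(mdm_classify(pixels, means, threshold=30))   # -> [1 2 0]
```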

Maximum Likelihood classifier
The Maximum Likelihood (ML) classifier considers not only the cluster centre
but also its shape, size and orientation. This is achieved by calculating a statistical distance based on the mean values and covariance matrix of the clusters.
The statistical distance is a probability value: the probability that observation x belongs to a specific cluster. The pixel is assigned to the class (cluster) to which
it has the highest probability. The assumption of most ML classifiers is that the
statistics of the clusters have a normal (Gaussian) distribution.
For each cluster, so-called equiprobability contours can be drawn around
the centres of the clusters. Maximum likelihood also allows the operator to define a threshold distance by defining a maximum probability value. A small
ellipse centred on the mean defines the values with the highest probability of
membership of a class. Progressively larger ellipses surrounding the centre represent contours of probability of membership to a class, with the probability
decreasing away from the centre. Figure 12.11 shows the decision boundaries
for a situation with and without threshold distance.
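A minimal sketch of a Maximum Likelihood classifier under the Gaussian assumption described above; the class means and covariance matrices are assumed to come from training statistics, and all names are illustrative:

```python
import numpy as np

def ml_classify(image_pixels, class_means, class_covs, min_prob=None):
    """Maximum Likelihood classifier assuming Gaussian class distributions.

    class_means: list of mean vectors; class_covs: list of covariance matrices.
    Each pixel gets the class with the highest probability density; with
    min_prob set, pixels whose best density is lower are labelled 0 (unknown).
    """
    n_bands = image_pixels.shape[1]
    densities = []
    for mean, cov in zip(class_means, class_covs):
        diff = image_pixels - mean
        inv = np.linalg.inv(cov)
        # Squared Mahalanobis (statistical) distance of every pixel to this class.
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        norm = np.sqrt((2 * np.pi) ** n_bands * np.linalg.det(cov))
        densities.append(np.exp(-0.5 * maha) / norm)
    densities = np.array(densities)            # shape (n_classes, n_pixels)
    labels = densities.argmax(axis=0) + 1
    if min_prob is not None:
        labels[densities.max(axis=0) < min_prob] = 0
    return labels
```

In practice the mean vectors and covariance matrices would be taken from the training statistics gathered during the supervised training step.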

Figure 12.11: Principle of the maximum likelihood classification. The decision boundaries are shown for a situation without threshold distance (upper right) and with threshold distance (lower right).
12.4 Validation of the result

Image classification results in a raster file in which the individual raster elements
are class labelled. As image classification is based on samples of the classes, the
actual quality should be checked and quantified afterwards. This is usually done
by a sampling approach in which a number of raster elements are selected and
both the classification result and the true world class are compared. Comparison
is done by creating an error matrix from which different accuracy measures can be
calculated. The true-world classes are preferably derived from field observations.
Sometimes, sources of an assumed higher accuracy, such as aerial photos, are
used as a reference.
Various sampling schemes have been proposed to select pixels to test. Choices
to be made relate to the design of the sampling strategy, the number of samples
required, and the area of the samples. Recommended sampling strategies in
the context of land cover data are simple random sampling or stratified random
sampling. The number of samples may be related to two factors in accuracy
assessment: (1) the number of samples that must be taken in order to reject a
data set as being inaccurate; or (2) the number of samples required to determine
the true accuracy, within some error bounds, for a data set. Sampling theory
is used to determine the number of samples required. The number of samples
must be traded-off against the area covered by a sample unit. A sample unit can
be a point but also an area of some size; it can be a single raster element but may
also include the surrounding raster elements. Among other considerations the
optimal sample area size depends on the heterogeneity of the class.
Once the sampling has been carried out and the data collected, an error matrix can be established (Table 12.1). Other terms for this table are confusion matrix
or contingency matrix. In the table, four classes (A, B, C, D) are listed. A total

of 163 samples were collected. From the table you can read that, for example, 53 cases of A were found in the real world (reference), while the classification result yields 61 cases of a; in 35 cases they agree.

Table 12.1: The error matrix with derived errors and accuracy expressed as percentages. A, B, C and D refer to the reference classes; a, b, c and d refer to the classes in the classification result. Overall accuracy is 53%.

                            A     B     C     D   Total   Error of Commission (%)   User Accuracy (%)
  a                        35    14    11     1      61            43                      57
  b                         4    11     3     0      18            39                      61
  c                        12     9    38     4      63            40                      60
  d                         2     5    12     2      21            90                      10
  Total                    53    39    64     7     163
  Error of Omission (%)    34    72    41    71
  Producer Accuracy (%)    66    28    59    29
The first and most commonly cited measure of mapping accuracy is the overall accuracy, or Proportion Correctly Classified (PCC). Overall accuracy is the
number of correctly classified pixels (i.e., the sum of the diagonal cells in the
error matrix) divided by the total number of pixels checked. In Table 12.1 the
overall accuracy is (35 + 11 + 38 + 2)/163 = 53%. The overall accuracy yields one
figure for the result as a whole.
Most other measures derived from the error matrix are calculated per class.
Error of omission refers to those sample points that are omitted in the interpretation result. Consider class A, for which 53 samples were taken. 18 out of the 53
samples were interpreted as b, c or d. This results in an error of omission of
18/53 = 34%. Error of omission starts from the reference data and therefore relates to the columns in the error matrix. The error of commission starts from the
interpretation result and refers to the rows in the error matrix. The error of commission refers to incorrectly classified samples. Consider class d: only two of the
21 samples (10%) are correctly labelled. Errors of commission and omission are
also referred to as type I and type II errors respectively.
Omission error is the corollary of producer accuracy, while user accuracy is the corollary of commission error. The producer accuracy is the probability that a sample of a certain reference class has also been labelled as that class. The user accuracy is the probability that a point labelled as a certain class on the map really belongs to that class in the field.
Another widely used measure of map accuracy derived from the error matrix is the kappa or κ statistic. Kappa statistics take into account the fact that
even assigning labels at random results in a certain degree of accuracy. Based
on Kappa statistics one can test if two data sets have a statistically different accuracy. This type of testing is used to evaluate different sources (image data) or
methods for the generation of spatial data.
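To make the calculations explicit, the small sketch below reproduces the accuracy measures of Table 12.1. The matrix values are taken from the table; the code itself is an illustration, not part of the original text:

```python
import numpy as np

# Error matrix of Table 12.1: rows a-d (classification), columns A-D (reference).
matrix = np.array([[35, 14, 11, 1],
                   [ 4, 11,  3, 0],
                   [12,  9, 38, 4],
                   [ 2,  5, 12, 2]])

total = matrix.sum()
correct = np.trace(matrix)                    # sum of the diagonal cells
overall_accuracy = correct / total            # 86 / 163 = 0.53

user_accuracy = matrix.diagonal() / matrix.sum(axis=1)      # per row (map classes)
producer_accuracy = matrix.diagonal() / matrix.sum(axis=0)  # per column (reference)

print(round(overall_accuracy, 2))             # 0.53
print((user_accuracy * 100).round())          # [57. 61. 60. 10.]
print((producer_accuracy * 100).round())      # [66. 28. 59. 29.]
```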

12.5 Problems in image classification

Pixel-based image classification is a powerful technique to derive thematic classes


from multi-band image data. However, it has certain limitations that you should
be aware of. The most important constraints of pixel-based image classification
are that it results in (i) spectral classes, and that (ii) each pixel is assigned to one
class only.
Spectral classes are classes that are directly linked to the spectral bands used
in the classification. In turn, these are linked to surface characteristics. In that
respect one can say that spectral classes correspond to land cover classes (see
also Section 11.3.3). In the classification process a spectral class may be represented by several training classes. Among other things, this is due to the variability
within a spectral class. Consider a class such as grass; there are different types
of grass, which have different spectral characteristics. Furthermore, the same
type of grass may have different spectral characteristics when considered over
larger areas due to, for example, different soil and climate conditions. A related
topic is that sometimes one is interested in land use classes rather than land
cover classes. Sometimes, a land use class may be composed of several land
cover classes. Table 12.2 gives some examples of linking spectral land cover and
land use classes. Note that between two columns there can be 1-to-1, 1-to-n,
and n-to-1 relationships. The 1-to-n relationships are a serious problem and can
only be solved by adding data and/or knowledge to the classification procedure.
The data added can be other remote sensing image data (other bands, other moments) or existing spatial data, such as topographic maps, historical land inventories, road maps, etc. Usually this is done in combination with adding expert
knowledge to the process. An example is using historical land cover data and
defining the probability of certain land cover changes. Another example is to

use elevation, slope and aspect information. This will prove especially useful
in mountainous regions where elevation differences play an important role in
variations in surface cover types.
Table 12.2: Spectral classes distinguished during classification can be aggregated to land cover classes. 1-to-n and n-to-1 relationships can exist between land cover and land use classes.

  Spectral Class   Land Cover Class   Land Use Class
  water            water              shrimp cultivation
  grass1           grass              nature reserve
  grass2           grass              nature reserve
  grass3           grass              nature reserve
  bare soil        bare soil          nature reserve
  trees1           forest             nature reserve
  trees2           forest             production forest
  trees3           forest             city park

The other main problem and limitation of pixel-based image classification


is that each pixel is only assigned to one class. When dealing with (relatively)
small pixels, this is not a problem. However, when dealing with (relatively) large
pixels, more land cover classes are likely to occur within this pixel. As a result,
the spectral value of the pixel is an average of the reflectance of the land cover
present within the pixel. In a standard classification these contributions cannot
be traced back, and the pixel will be assigned to only one of the classes present, or even to
another class. This phenomenon is usually referred to as the mixed pixel, or mixel
(Figure 12.12). This problem of mixed pixels is inherent to image classification:
assigning the pixel to one thematic class. The solution to this is to use a different
approach, for example, assigning the pixel to more than one class. This brief
introduction into the problem of mixed pixels also highlights the importance of
using data with the appropriate spatial resolution.
Pixel-based classification, as illustrated in this chapter, is only one of several
possible approaches. An alternative would be to divide the image into spectrally
contiguous segments, which are then classified. This allows contextual information to be incorporated. We can further resort to spectral mixture analysis techniques that allow different surface materials within a pixel to be determined,
using spectrometer data (see Section 14.5.2). Alternatively, fuzzy classification
techniques or artificial neural networks can be used. For an introduction to
those methods see, for example, [23].

Figure 12.12: The origin of mixed pixels: different land cover types occur within one pixel. Note the relative abundance of mixed pixels.

Summary
Digital image classification is a technique to derive thematic classes from image data. Input are multi-band image data; output is a raster file containing
thematic (nominal) classes. In the process of image classification the role of the
operator and additional (field) data are significant. In a supervised classification, the operator needs to provide the computer with training data and select
the appropriate classification algorithm. The training data are defined based on
knowledge (derived by field work, or from secondary sources) of the area being
processed. Based on the similarity between pixel values (feature vector) and the
training classes a pixel is assigned to one of the classes defined by the training
data.
An integral part of image classification is validation of the results. Again,
independent data are required. The result of the validation process is an error
matrix from which different measures of error can be calculated.

Questions
The following questions can help you to study Chapter 12.
1. Compare digital image classification with visual image interpretation in
terms of input of the operator/photo-interpreter and in terms of output.
2. What would be typical situations in which to apply digital image classification?
3. Another wording for image classification is partitioning of the feature
space. Explain what is meant by this.
The following are typical exam questions:
1. Name the different steps of the process of image classification.
2. What is the principle of image classification?

3. What is a classification algorithm? Give two examples.


4. Image classification is sometimes referred to as automatic classification. Do
you agree or disagree? (give argumentation)
5. Draw a simple error matrix, indicate what is on the axes and explain how
the overall accuracy is calculated.

Chapter 13
Thermal remote sensing

13.1 Introduction

Thermal Remote Sensing (TRS) is based on measuring electromagnetic radiation (EM) in the infrared region of the spectrum. Most commonly used are the intervals from 3-5 μm (MIR) and 8-14 μm (TIR), in which the atmosphere
is fairly transparent and the signal is only lightly attenuated by atmospheric absorption. Since the source of the radiation is the heat of the imaged surface itself
(compare Figure 2.1), the handling and processing of TIR data is considerably
different from remote sensing based on reflected sunlight:
The surface temperature is the main factor that determines the amount of
energy that is radiated and measured in the thermal wavelengths. The
temperature of an object varies greatly depending on time of the day, season, location, exposure to solar irradiation, etc. and is difficult to predict.
In reflectance remote sensing, on the other hand, the incoming radiation
from the sun is constant and can be readily calculated, although of course
atmospheric correction has to be taken into account.
In reflectance remote sensing the characteristic property we are interested
in is the reflectance of the surface at different wavelengths. In thermal remote sensing, however, one property we are interested in is how well energy is emitted from the surface at different wavelengths.
Since thermal remote sensing does not depend on incoming sunlight, it can
also be performed during the night (for some applications even better than
during the day).
In Section 13.2 the basic theory of thermal remote sensing is explained. Section 13.3 introduces the fundamental steps of processing TIR data to extract

useful information. Section 13.4 illustrates, with a number of examples, how thermal
remote sensing can be used in different application fields.

13.2 Principles of Thermal Remote Sensing

13.2.1 The physical laws

In Chapter 2 some radiation principles were already introduced. From those we know that all objects above absolute zero temperature radiate EM energy. Planck's Radiation Law describes the amount of emitted energy per wavelength depending on the object's temperature:
$$ M_{\lambda,T} = \frac{C_1}{\lambda^5\left(e^{C_2/(\lambda T)} - 1\right)}, \qquad (13.1) $$

where $M_{\lambda,T}$ is the spectral radiant emittance in W m$^{-3}$, $\lambda$ is the wavelength in m, $T$ is the absolute temperature in K, $C_1$ is the first radiation constant, $3.74151 \times 10^{-16}$ W m$^{2}$, and $C_2$ is the second radiation constant, 0.01438377 m K.
Planck's Radiation Law is also illustrated in Figure 13.1 for the approximate temperature of the sun (about 6000 K) and the ambient temperature of the earth surface (about 300 K), respectively. These graphs are often referred to as blackbody curves. The figure shows that for very hot surfaces (e.g. the sun), the peak of the blackbody curve is at short wavelengths. For colder surfaces, such as the earth, the peak of the blackbody curve moves to longer wavelengths. This behaviour is described by Wien's displacement law:

$$ \lambda_{max} = \frac{2898}{T}, \qquad (13.2) $$

where $\lambda_{max}$ is the wavelength of the radiation maximum in μm, $T$ is the temperature in K and 2898 is a physical constant in μm K.
We can use Wien's law to predict the position of the peak of the blackbody curve if we know the temperature of the emitting object.
Figure 13.1: Illustration of Planck's radiation law for the sun (6000 K) and the average earth surface temperature (300 K). Note the logarithmic scale for both the x- and y-axis. The dashed lines mark the wavelengths of the emission maxima for the two temperatures. As predicted by Wien's law, the radiation maximum shifts to longer wavelengths as temperature decreases. (Axes: wavelength in μm; spectral exitance in W m⁻² μm⁻¹.)

If you were interested in monitoring forest fires that burn at 1000 K, you could immediately turn to bands around 2.9 μm in the SWIR, where the radiation maximum for those fires is expected. For ordinary land surface temperatures around 300 K, wavelengths from 8 to 14 μm are most useful (the TIR range).
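The worked numbers above follow directly from equation 13.2; a minimal sketch (illustrative only) reproducing them:

```python
# Wien's displacement law (equation 13.2): peak wavelength in micrometres.
def wien_peak(temperature_kelvin):
    return 2898.0 / temperature_kelvin

print(wien_peak(6000))   # sun: about 0.48 um (visible)
print(wien_peak(1000))   # forest fire: about 2.9 um (SWIR)
print(wien_peak(300))    # ambient earth surface: about 9.7 um (TIR)
```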
You can now understand why reflectance remote sensing (i.e. based on reflected sunlight) uses short wavelengths in the visible and shortwave infrared, and thermal remote sensing (based on emitted earth radiation) uses the longer wavelengths around 3-14 μm. Figure 13.1 also shows that the total energy (integrated area under the curve) is considerably higher for the sun than for the cooler earth surface. This relationship between surface temperature and total
first

previous

next

last

back

exit

zoom

contents

index

about

455

13.2. Principles of Thermal Remote Sensing


radiant energy is known as the Stefan-Boltzmann law.
$$ M = \sigma T^4, \qquad (13.3) $$

where $M$ is the total radiant emittance in W m$^{-2}$, $\sigma$ is the Stefan-Boltzmann constant, $5.6697 \times 10^{-8}$ W m$^{-2}$ K$^{-4}$, and $T$ is the temperature in K.
The Stefan-Boltzmann law states that colder targets emit only small amounts of EM radiation, and Wien's displacement law predicts that the peak of the radiation distribution will shift to longer wavelengths as the target gets colder. In Section 2.2.1 we learnt that photons at long wavelengths have less energy than those at short wavelengths. Hence, in TRS we are dealing with a small number of low-energy photons, which makes their detection difficult. As a consequence
of that we often have to reduce spatial or spectral resolution when acquiring
thermal imagery datasets to guarantee a reasonable signal-to-noise ratio.
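A minimal sketch (illustrative only) applying equation 13.3 to the two temperatures used in Figure 13.1; it shows how strongly the total emittance depends on temperature:

```python
# Stefan-Boltzmann law (equation 13.3): total radiant emittance of a blackbody.
SIGMA = 5.6697e-8   # W m^-2 K^-4

def total_emittance(temperature_kelvin):
    return SIGMA * temperature_kelvin ** 4

m_sun = total_emittance(6000)     # about 7.3e7 W m^-2
m_earth = total_emittance(300)    # about 4.6e2 W m^-2
print(m_sun / m_earth)            # the sun emits ~160 000 times more per unit area
```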

13.2.2 Blackbodies and emissivity

Blackbody The three laws described above are, strictly speaking, only valid
for an ideal radiator, which we refer to as a blackbody (BB). A BB is a perfect
absorber and a perfect radiator in all wavelengths. It can be thought of as a
black object that reflects no incoming EM radiation. But it is not only black in
the visible bands, but in all wavelengths of interest. If an object is a blackbody, it
behaves exactly as the theoretical laws predict. True blackbodies do not exist in
nature, although some materials (e.g. clean, deep water between 8-12 m) can
be very close.
Greybody Materials that absorb and radiate only a certain fraction compared
to a blackbody are called greybodies. The fraction is a constant for all wavelengths. Hence, a greybody curve is identical in shape to a blackbody curve, but
the absolute values are lower as it does not radiate as perfectly as a blackbody.
Selective Radiator A third group are the selective radiators. They also radiate
only a certain fraction of a blackbody, but this fraction changes with wavelength.
A selective radiator may radiate perfectly in some wavelengths, while acting
as a very poor radiator in other wavelengths. The radiant emittance curve of
a selective radiator can then also look quite different from an ideal blackbody
curve.
Emissivity The fraction of energy that is radiated by a material compared to a
true blackbody is also referred to as the emissivity (ε). Hence, emissivity is defined

as:

$$ \varepsilon_\lambda = \frac{M_{\lambda,T}}{M^{BB}_{\lambda,T}}, \qquad (13.4) $$

where $M_{\lambda,T}$ is the radiant emittance of a real material at a given temperature, and $M^{BB}_{\lambda,T}$ is the radiant emittance of a blackbody at the same temperature.
Most materials are selective radiators. Their emissivity can change quite significantly with wavelength, and many materials have emissivity highs and lows at distinct wavelengths. Therefore, an emissivity spectrum in the thermal infrared can be used to determine the composition of an object, similarly to the way the reflectance spectrum is used in the visible to shortwave infrared. In fact, emissivity and reflectance for a given wavelength are also related to each other: objects that reflect well have a very poor ability to absorb/emit, and vice versa. This behaviour is described by Kirchhoff's law, which is valid as long as the material is opaque:
$$ \varepsilon_\lambda = 1 - \rho_\lambda, \qquad (13.5) $$

where $\rho_\lambda$ and $\varepsilon_\lambda$ are the reflectance and emissivity, respectively, at a given wavelength $\lambda$.


As it is easier to measure reflectance than emissivity with laboratory instruments, Kirchhoff's law is often applied to calculate emissivity spectra from reflectance data rather than measuring them directly. Note that the term broad-band emissivity indicates the average emissivity value over a large part of the thermal spectrum, usually from 8-14 μm. We should indicate this by writing $\varepsilon_{8-14}$. However, the symbol ε is often used without subscript.

Figure 13.2: The thermal infrared spectra of a sandy soil and a marble. Distinct emissivity highs and lows can be observed in the two spectra. Source: Johns Hopkins University Spectral Library. (Axes: wavelength in μm; emissivity.)

13.2.3 Radiant and kinetic temperatures

Radiant temperature The actual measurements acquired by a TIR sensor will be in units of spectral radiance (W m⁻² sr⁻¹ μm⁻¹) that reaches the sensor for a
certain wavelength band. We know that the amount of energy that is radiated
away from an object depends on its temperature and emissivity. That means that
a cold object with high emissivity can radiate just as much energy as a considerably hotter object with low emissivity. We can decide to ignore the emissivity
of the object. With the help of Planck's law we can directly calculate the ground
temperature that is needed to create this amount of radiance in the specified
wavelength of the sensor if the object had a perfect emissivity of 1.0. The temperature calculated is the radiant temperature or Trad . The terms brightness or
top-of-the-atmosphere temperatures are also used frequently.
Kinetic temperature The radiant temperature calculated from the radiant energy emitted is in most cases smaller than the true, kinetic temperature (Tkin ) that
we could measure with a contact thermometer on the ground. The reason is that
most objects have an emissivity lower than 1.0 and radiate incompletely. To calculate the true Tkin from the Trad , we need to know or estimate the emissivity.
The relationship between Tkin and Trad is:
$$ T_{rad} = \varepsilon^{1/4}\, T_{kin}. \qquad (13.6) $$

With a single thermal band (e.g. the Landsat 7 ETM+ sensor), ε has to be estimated from other sources. One way is to do a land cover classification with all available bands and then assign an ε value to each class from an emissivity table (e.g. 0.99 for water, 0.85 for granite).
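A minimal sketch (illustrative only) of equation 13.6 rearranged to obtain the kinetic temperature from a radiant temperature and an estimated broad-band emissivity; the input values are taken from the examples in the text:

```python
# Equation 13.6 rearranged: Tkin = Trad / emissivity**(1/4).
def kinetic_temperature(t_rad_kelvin, emissivity):
    return t_rad_kelvin / emissivity ** 0.25

print(kinetic_temperature(290.0, 0.99))   # water:   about 290.7 K
print(kinetic_temperature(290.0, 0.85))   # granite: about 302.0 K
```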

Separation of emissivity and temperature
In multispectral TIR, bands in several thermal wavelengths are available. With
emissivity in each band as well as the surface temperature (Tkin ) unknown, we
still have an underdetermined system of equations. For that reason, it is necessary to make certain assumptions about the shape of the emissivity spectrum
we are trying to retrieve. Different algorithms exist to separate the influence of
temperature from the emissivity. We will look at this issue in more detail in the
following section.

13.3 Processing of thermal data

As mentioned in the introduction, many processing steps of thermal data are


different from those in the visual and shortwave infrared regions. The different
processing steps strongly depend on the application the thermal data is used for.
In the following sections we will deal with some examples. These can be roughly
classified in two categories: those for which the emissivity values are most important, because these can be used to distinguish between surface geologies,
and those for which the actual surface temperature needs to be determined. For
some applications simple image enhancement techniques are sufficient.

13.3.1 Band ratios and transformations

For some applications, image enhancement of the thermal data bands is sufficient to achieve the necessary outputs. They can be used, for example, to delineate relative differences in surface emissivity or surface temperature.

Band ratios
By ratioing two bands we can highlight the areas where a surface material of interest is predominant. In order to do so, one would ratio two bands near a rapid
change in the emissivity spectrum of that material (e.g. low emissivity around
9 μm in sandstone). We can consult spectral libraries to find out where we can
expect these sharp changes in the emissivity spectrum of a particular material
(compare Figure 13.2). Band ratios also reduce the influence of differences in
surface temperature. This can be an advantage if the study area is affected by
differential heating (hill slopes).
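A minimal sketch of a band ratio; the band names, wavelengths and data are hypothetical and only illustrate the idea of ratioing two bands on either side of a sharp emissivity change:

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio of two co-registered bands; eps avoids division by zero."""
    return band_a.astype(float) / (band_b.astype(float) + eps)

# Hypothetical example: bands on either side of an emissivity low (e.g. near
# 9 um for sandstone) give a ratio image that highlights that material while
# largely cancelling temperature-related brightness differences.
rng = np.random.default_rng(3)
band_8um = rng.uniform(50, 200, size=(50, 50))
band_10um = rng.uniform(50, 200, size=(50, 50))
ratio_image = band_ratio(band_8um, band_10um)
```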

Transformations
In a multispectral, thermal dataset each band shows the influence of the emissivities of the surface materials as well as of the surface temperature. As the
surface temperature does not vary with wavelength, a large percentage of the
information contained in the images is identical in all bands; they are said to be
correlated. By applying image transformations, such as Principal Component
Analysis (PCA) or Decorrelation Stretching, we can minimize the common information (i.e. surface temperature) and enhance the visibility of the differences
in the bands.

13.3.2 Determining kinetic surface temperatures

The actual measurements acquired by a TIR sensor will be in units of spectral radiance (W m⁻² sr⁻¹ μm⁻¹) that reaches the sensor for a certain wavelength
band, as mentioned before. However, for reasons of storage, the radiance values
are usually scaled and stored as so-called digital numbers (DN). In the case of
Landsat the digital numbers range from 0 to 255 (1 byte per pixel, 8 bits per
pixel), whereas in the case of NOAA/AVHRR the digital numbers use 10 bits
per pixel (recall Section 3.4). ASTER uses 2 bytes per pixel in the TIR bands.
Therefore for most satellites the DN values need to be converted back to radiance
first. This is usually done through the use of slope and offset values, which are
always listed in the appropriate satellite manuals and websites.
The next step is to convert the radiance into radiant temperature at the top of
the atmosphere (TOA) using equations such as 13.7:
$$ T = \frac{K_2}{\ln\left(\dfrac{K_1}{L_\lambda} + 1\right)}, \qquad (13.7) $$

where, for example, the Landsat 7 ETM+ constants are $K_1 = 666.09$ W m$^{-2}$ sr$^{-1}$ μm$^{-1}$ and $K_2 = 1282.71$ K.
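A minimal sketch of this conversion chain for a single thermal band. K1 and K2 are the Landsat 7 ETM+ constants given above; the gain and offset are placeholders that would have to be read from the image metadata or the satellite manual, not values from the text:

```python
import numpy as np

K1 = 666.09     # W m^-2 sr^-1 um^-1 (equation 13.7)
K2 = 1282.71    # K (equation 13.7)
GAIN, OFFSET = 0.067, -0.07   # hypothetical scaling values, not from the text

def dn_to_radiance(dn, gain=GAIN, offset=OFFSET):
    """Scale stored digital numbers back to spectral radiance."""
    return gain * dn + offset

def radiance_to_brightness_temperature(radiance):
    """Top-of-atmosphere radiant (brightness) temperature, equation 13.7."""
    return K2 / np.log(K1 / radiance + 1.0)

dn = np.array([120, 140, 160])
print(radiance_to_brightness_temperature(dn_to_radiance(dn)))
```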
The satellite sees only the radiation that emerges from the atmosphere, Lsat, that is the radiance leaving the surface multiplied by the transmissivity, plus the radiance produced by the atmosphere. The radiance leaving the surface is a combination of the reflected downwelling atmospheric radiance, L↓, and the radiance produced by the surface of the Earth. Remember that the Earth's surface is not a perfect black body, but usually a selective radiator with an emissivity less than 1. Therefore, part of the downwelling radiance is reflected back. See also Kirchhoff's Law

(13.5). The radiance from the Earth is given as the emissivity, ε, multiplied by the blackbody radiation, L_BB. These considerations are usually combined in the following equation:
$$ L_{sat} = \tau\left[\varepsilon L_{BB} + (1 - \varepsilon)\, L^{\downarrow}\right] + L^{\uparrow}, \qquad (13.8) $$

In summary, Lsat is measured by the satellite, but we need L_BB to determine the kinetic surface temperature. Equation 13.8 shows that in practice there are two problems in the processing of thermal images:
Determination of the upwelling (L↑) and downwelling (L↓) atmospheric radiances, together with the atmospheric transmittance (τ) in the thermal range of the EM spectrum.
Determination of the emissivities and surface temperature.
The first of these problems requires knowledge of atmospheric parameters
during the time of satellite overpass. Once these are known, software such as
MODTRAN4 (see Section 8.3.3) can be applied to produce the required parameters L , L and .
Because the emissivities are wavelength dependent, equation 13.8 leads to n
equations with n + 1 unknowns, where n is the number of bands in the thermal
image. Additional information is therefore required to solve the set of equations. Most methods make use of laboratory derived information with regard to
the shape of the emissivity spectra. This process is called temperature-emissivity
separation (TES). The algorithms, however, are rather complex and outside the
scope of this chapter. A complete manual on thermal processing, including examples and more details on the mathematics involved, is available from Ambro
Gieske in the Department of Water Resources.
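For a single band with a known broad-band emissivity, Equation 13.8 can simply be inverted once the atmospheric parameters are available (e.g. from MODTRAN4). The sketch below is a minimal, single-band illustration under those assumptions; the temperature inversion reuses the K1/K2 form of Equation 13.7 and is therefore only valid for a sensor for which such constants exist.

```python
import numpy as np

def blackbody_radiance(L_sat, tau, L_up, L_down, emissivity):
    """Invert Eq. 13.8 for the black-body radiance L_BB of the surface."""
    L_surface = (L_sat - L_up) / tau                       # radiance leaving the surface
    return (L_surface - (1.0 - emissivity) * L_down) / emissivity

def surface_temperatures(L_BB, K1, K2, emissivity):
    """Kinetic temperature from L_BB (Eq. 13.7 applied to the black-body radiance)
    and the corresponding radiant temperature from the emitted radiance eps*L_BB."""
    T_kin = K2 / np.log(K1 / L_BB + 1.0)
    T_rad = K2 / np.log(K1 / (emissivity * L_BB) + 1.0)
    return T_kin, T_rad
```

Applied to the atmospheric parameters of question 5 at the end of this chapter, these two functions reproduce the requested black-body radiance and the surface radiant and kinetic temperatures.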
13.4 Thermal applications
This section provides a number of example applications for which thermal data can be used. In general, the applications of thermal remote sensing can be divided into two groups:

- applications in which the main interest is the study of the surface composition, by looking at the surface emissivity in one or several wavelength bands;
- applications in which the focus is on the surface temperature, its spatial distribution or its change over time.

13.4.1 Rock emissivity mapping
As we have seen in Figure 13.2, many rock and soil types show distinct spectra in the thermal infrared. These absorption bands are mainly caused by silicate minerals, such as quartz or feldspars, that make up a large percentage of the world's rocks and soils. By carefully studying thermal emissivity spectra, we can identify the different mineral components of which the target area is composed. Figure 13.3 shows a thermal image taken by the MASTER airborne sensor over an area near Cuprite, Nevada. The original colour composite was decorrelation stretched for better contrast. It clearly shows the different rock units in the area. The labels mark a limestone (lst) in tints of light green in the south and a volcanic tuff (t) in cyan colour in the north. A silica-capped hill (s) shows shades of dark orange near the centre of the image. Several additional units can also be distinguished, based on this band combination alone.

Figure 13.3: Decorrelation stretched colour composite of a MASTER image; RGB = bands 46, 44, 42. Labels: t = volcanic tuff, s = silica cap, lst = limestone; see text for more information on the rock units. Scene is 10 km wide.
13.4.2 Thermal hotspot detection
Another application of thermal remote sensing is the detection and monitoring of small areas with thermal anomalies. The anomalies can be related to fires, such as forest fires or underground coal fires, or to volcanic activity, such as lava flows and geothermal fields. Figure 13.4 shows an ASTER scene that was acquired at night. The advantage of night images is that the Sun does not heat up the rocks surrounding the anomaly, as would be the case during the day. This results in a better contrast between the anomalies and the surrounding rocks. This particular image was taken over the Wuda coal-mining area in China in September 2002. Hotter temperatures are shown by brighter shades of grey. On the right side the Yellow River is clearly visible, since water does not cool down as quickly as the land surface does, owing to its thermal inertia. Inside the mining area (fine, white box), several hotspots are visible with elevated temperatures compared to the surrounding rocks. The inset shows the same mining area slightly enlarged. The hottest pixels are coloured in orange and show the locations of coal fires. If images are taken several weeks or even years apart, the development of these underground coal fires, as well as the effect of fire-fighting efforts, can be monitored quite effectively with thermal remote sensing.
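The flagging of such hotspots amounts to a simple threshold on a surface-temperature (or brightness-temperature) image. The sketch below is a minimal illustration; the 18 °C background value is taken from Figure 13.4, while the array name and margin parameter are assumptions of this example.

```python
import numpy as np

def detect_hotspots(temperature, background_c=18.0, margin_c=0.0):
    """Return a boolean mask of pixels exceeding the background temperature.

    temperature  : 2-D array of surface temperatures in degrees Celsius.
    background_c : background temperature (18 degC in Figure 13.4).
    margin_c     : optional extra margin to suppress noise.
    """
    return temperature > (background_c + margin_c)

# hot_mask = detect_hotspots(temp_image)   # temp_image is an assumed input array
# Labelling connected hot pixels (e.g. with scipy.ndimage.label) would then give
# individual coal-fire locations that can be compared between acquisition dates.
```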

Figure 13.4: ASTER thermal band 10 over Wuda, China. Light coloured pixels inside the mining area (fine, white box) are caused mainly by coal fires. Inset: pixels exceeding the background temperature of 18 °C are coloured in orange for better visibility of the fire locations. Scene is approximately 45 km wide.

Summary
This chapter has provided an introduction to thermal remote sensing (TRS), a passive technique that is aimed at recording radiation emitted by the material or surface of interest. It was explained how TRS is mostly applied in the middle- and thermal-infrared, and how the amount and peak wavelength of the energy emitted are a function of the object's temperature. This explained why reflectance remote sensing (i.e. based on reflected sunlight) uses short wavelengths in the visible and short-wave infrared, while thermal remote sensing uses the longer wavelengths. In addition to the basic physical laws, the concepts of black-body radiation and emissivity were explained. Incomplete radiation, i.e. a reduced emissivity, was shown to account for the radiant temperatures often being lower than the corresponding kinetic temperature.
The subsequent section on the processing of thermal data gave an overview of techniques aimed at the differentiation of surface materials, as well as methods to calculate actual surface temperatures. The last section provided examples of how the surface distribution of different rock types can be mapped and how thermal anomalies can be assessed. The methods are applicable to many different problems, including coal-fire mapping, sea surface temperature monitoring and weather forecasting, but also to the search and rescue of missing persons.

Questions
The following questions can help to study Chapter 13.
1. What is the total radiant energy from an object at a temperature of 300 K?
2. Calculate the peak wavelength of the energy emitted from a volcanic lava flow of about 1200 °C.
3. Is the kinetic temperature higher than the radiant temperature or the other way around? Explain your answer with an example.
4. For a Landsat 7 ETM+ image a certain pixel has a radiance of 10.3 W m⁻² sr⁻¹ µm⁻¹. Determine its radiant temperature.
5. For a sea level summer image (Landsat 5), the following atmospheric parameters are determined with MODTRAN4:
   - The atmospheric transmissivity τ is 0.7.
   - The upwelling radiance L↑ is 2.4 W m⁻² sr⁻¹ µm⁻¹.
   - The downwelling radiance L↓ is 3.7 W m⁻² sr⁻¹ µm⁻¹.
   - The broad-band surface emissivity ε is 0.98 and the radiance Lsat observed at the satellite is 8.82 W m⁻² sr⁻¹ µm⁻¹.
   - Calculate the black-body radiance LBB.
   - Determine the surface radiant and kinetic temperatures.

6. How can you visually enhance thermal images?
7. Given a satellite sensor with multiple thermal bands, can you determine the emissivities? What are the difficulties?

Chapter 14
Imaging Spectrometry

14.1 Introduction
Most multispectral sensors that were discussed in Chapter 5 acquire data in a number of relatively broad wavelength bands. However, typical diagnostic absorption features, which characterize materials of interest in reflectance spectra, are on the order of 20 nm to 40 nm in width. Hence, broadband sensors undersample this information and do not allow us to exploit the full spectral resolving potential available. Imaging spectrometers typically acquire images in a large number of spectral bands (more than 100). These bands are narrow (less than 10 nm to 20 nm in width) and contiguous (i.e. adjacent), which enables the extraction of reflectance spectra at pixel scale (Figure 14.1). Such narrow spectra enable the detection of the diagnostic absorption features.
Different names have been coined for this field of remote sensing, including imaging spectrometry, imaging spectroscopy and hyperspectral imaging.
Figure 14.2 illustrates the effect of spectral resolution on the mineral kaolinite. From top to bottom the spectral resolution increases from 100–200 nm (Landsat), 20–30 nm (GERIS), 20 nm (HIRIS) and 10 nm (AVIRIS) to 1–2 nm (USGS laboratory spectrum). With each improvement in spectral resolution, the diagnostic absorption features and, therefore, the unique shape of kaolinite's spectrum become more apparent.

Figure 14.1: Concept of imaging spectrometry (modified after De Jong). Each pixel has an associated, continuous spectrum (here 224 spectral bands over 0.4–2.5 µm, 614 pixels across-track at 20 m/pixel and 512 pixels along-track per scene) that can be used to identify the surface materials, illustrated by a kaolinite reflectance spectrum.

Figure 14.2: Example of a kaolinite spectrum at the original resolution (source: USGS laboratory) and at the spectral resolutions of various imaging devices (Landsat TM, GERIS, HIRIS, AVIRIS). Note that the spectra are progressively offset upward by 0.4 units for clarity (adapted from USGS).
14.2 Reflection characteristics of rocks and minerals
Rocks and minerals reflect and absorb electromagnetic radiation as a function of the wavelength of the radiation. Reflectance spectra show these variations in reflection and absorption for various wavelengths (Figure 14.3). By studying the reflectance spectra of rocks, individual minerals and groups of minerals may be identified. In the Earth sciences, absorption in the wavelength region from 0.4 µm to 2.5 µm is commonly used to determine the mineralogical content of rocks. In this region various groups of minerals have characteristic reflectance spectra, for example phyllosilicates, carbonates, sulphates, and iron oxides and hydroxides. High-resolution reflectance spectra for studying mineralogy can easily be obtained in the field or in a laboratory using field spectrometers.
The processes that cause absorption of electromagnetic radiation occur at a molecular and atomic level. Two types of processes are important in the 0.4 µm to 2.5 µm range: electronic processes and vibrational processes ([8]). In electronic processes, individual atoms or ions in minerals absorb photons of specific wavelengths, which causes absorptions at certain wavelengths in reflectance spectra. An example is absorption by Fe³⁺ ions in iron oxides and hydroxides, which gives these minerals a red colour. In vibrational processes, molecular bonds absorb photons, which results in vibration of these molecules. Examples of bonds that absorb radiation are Al-OH bonds in clay minerals, bonds in H₂O and OH⁻ in hydrous minerals, and in CO₃²⁻ in carbonate minerals.
Reflectance spectra respond closely to the crystal structure of minerals and can be used to obtain information on the crystallinity and chemical composition of minerals.

Figure 14.3: Effects of electronic and vibrational processes on the absorption of electromagnetic radiation, shown as reflectance (%) versus wavelength (µm) for a) hematite (electronic Fe absorptions), b) kaolinite (Al-OH and OH vibrational absorptions) and c) calcite (CO₃ vibrational absorptions).

14.3 Pre-processing of imaging spectrometer data
Pre-processing of imaging spectrometer data involves radiometric calibration (see Chapter 8), which provides transfer functions to convert DN values to at-sensor radiance. The at-sensor radiance data then have to be corrected by the user for atmospheric effects to obtain at-sensor or surface reflectance data. In Chapter 8 an overview is given of the use of radiative transfer models for atmospheric correction. This correction provides absolute reflectance data, because the atmospheric influence is modelled and removed.
Alternatively, users can perform a scene-dependent, relative atmospheric correction using empirically derived models for the radiance-to-reflectance conversion, based on calibration targets found in the imaging spectrometer data set. Empirical models that are often used include the so-called flat-field correction and the empirical-line correction. The flat-field correction achieves the radiance-to-reflectance conversion by dividing the whole data set, on a pixel-by-pixel basis, by the mean value of a target area within the scene that is spectrally and morphologically flat, spectrally homogeneous and has a high albedo. Conversion of raw imaging spectrometer data to reflectance data using the empirical-line method requires the selection and spectral characterization (in the field with a spectrometer) of two calibration targets (a dark and a bright target). This empirical correction uses a constant gain and offset for each band to force a best fit between the sets of field spectra and image spectra that characterize the same ground areas, thus removing atmospheric effects, residual instrument artefacts and viewing geometry effects.
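Minimal sketches of both empirical approaches are given below, assuming a calibrated radiance cube of shape (bands, rows, cols); the flat-field target mask and the dark/bright field spectra are assumptions of this example, not prescriptions from this book.

```python
import numpy as np

def flat_field_correction(cube, flat_mask):
    """Divide every pixel spectrum by the mean spectrum of a bright,
    spectrally flat target area (flat_mask is a boolean rows x cols mask)."""
    flat_spectrum = cube[:, flat_mask].mean(axis=1)      # one value per band
    return cube / flat_spectrum[:, None, None]

def empirical_line(cube, dark_img, bright_img, dark_field, bright_field):
    """Per-band gain/offset forcing the image spectra of a dark and a bright
    calibration target onto their field-measured reflectance spectra.
    dark_img, bright_img, dark_field, bright_field are 1-D arrays (one value per band)."""
    gain = (bright_field - dark_field) / (bright_img - dark_img)
    offset = dark_field - gain * dark_img
    return gain[:, None, None] * cube + offset[:, None, None]
```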

14.4 Atmospheric correction of imaging spectrometer data
Spectral radiance curves of uncorrected imaging spectrometer data have the general appearance of the solar irradiance curve, with radiance values decreasing towards longer wavelengths and exhibiting several absorption bands due to scattering and absorption by gases in the atmosphere.
The effect of atmospheric calibration algorithms is to re-scale the raw radiance data provided by imaging spectrometers to reflectance by correcting for the atmospheric influence. The result is a data set in which each pixel is represented by a reflectance spectrum that can be directly compared to reflectance spectra of rocks and minerals acquired either in the field or in the laboratory (Figure 14.1). The reflectance data obtained can be absolute reflectance or apparent reflectance relative to a certain standard in the scene; in other words, the calibration to reflectance can be conducted so as to yield either absolute or relative reflectance data. See Chapter 8 for more information on atmospheric correction.

14.5 Thematic analysis of imaging spectrometer data
Once reflectance-like imaging spectrometer data are obtained, the logical next
step is to use diagnostic absorption features to determine and map variations
in surface composition. New analytical processing techniques have been developed to analyse such high-dimensional spectral data sets. These methods are
the focus of this section. Such techniques can be grouped into two categories:
spectral matching approaches and subpixel classification methods.

14.5.1 Spectral matching algorithms
Spectral matching algorithms aim at quantifying the statistical or physical relationship between measurements at pixel scale and field or laboratory spectral responses of target materials of interest. A simple spectral matching algorithm is binary encoding, in which an imaged reflectance spectrum is encoded as

\[ h_i = \begin{cases} 0 & \text{if } x_i \le T \\ 1 & \text{if } x_i > T \end{cases} \tag{14.1} \]

where x_i is the brightness value of a pixel in the i-th channel, T is a user-specified threshold (often the average brightness value of the spectrum is used for T), and h_i is the resulting binary code for the pixel in the i-th band.
This binary encoding provides a simple means of analysing data sets for the presence of absorption features, which can be directly related to similar encoding profiles of known materials.
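A sketch of Equation 14.1 for one pixel spectrum and one reference spectrum is given below; the simple agreement score at the end is just one possible way of comparing the two binary codes and is an assumption of this example.

```python
import numpy as np

def binary_encode(spectrum, threshold=None):
    """Encode a reflectance spectrum as 0/1 per band (Eq. 14.1).
    By default the threshold T is the mean brightness of the spectrum."""
    if threshold is None:
        threshold = spectrum.mean()
    return (spectrum > threshold).astype(np.uint8)

def encoding_similarity(pixel_spectrum, reference_spectrum):
    """Fraction of bands in which the two binary codes agree."""
    return np.mean(binary_encode(pixel_spectrum) == binary_encode(reference_spectrum))
```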
An often used spectral matching technique in the analysis of imaging spectrometer data sets is the so-called spectral angle mapper (SAM). In this approach, the spectra are treated as vectors in a space with a dimensionality equal to the number of bands, n. SAM calculates the spectral similarity between an unknown reflectance spectrum, t (consisting of band values t_i), and a reference (field or laboratory) reflectance spectrum, r (consisting of band values r_i), and expresses the spectral similarity of the two in terms of the vector angle, α, between the two spectra, calculated using all the bands i. Using the vector notation, this is


\[ \alpha = \cos^{-1}\!\left( \frac{\vec{t} \cdot \vec{r}}{\|\vec{t}\|\,\|\vec{r}\|} \right) \tag{14.2} \]
or, using the band notation, as
\[ \alpha = \cos^{-1}\!\left( \frac{\sum_{i=1}^{n} t_i\, r_i}{\sqrt{\sum_{i=1}^{n} t_i^2 \;\sum_{i=1}^{n} r_i^2}} \right) \tag{14.3} \]
The outcome of the spectral angle mapping for each pixel is an angular difference, measured in radians and ranging from zero to π/2, which gives a qualitative measure for comparing known and unknown spectra. The smaller the angle, the more similar the two spectra.
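A sketch of Equations 14.2/14.3 with NumPy follows; the spectra are assumed to be 1-D arrays of band values and the function returns the angle α in radians.

```python
import numpy as np

def spectral_angle(t, r):
    """Spectral angle (radians, 0..pi/2) between an image spectrum t
    and a reference spectrum r, following Eq. 14.2/14.3."""
    t = np.asarray(t, dtype=np.float64)
    r = np.asarray(r, dtype=np.float64)
    cos_angle = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))   # clip guards against round-off

# Applying this per pixel over a (bands, rows, cols) cube yields an angle image;
# the smallest angles mark the pixels most similar to the reference material.
```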

14.5.2 Spectral unmixing
Often the electromagnetic radiation observed as pixel reflectance values results from the spectral mixture of a number of ground classes present at the sensed surface (see Section 12.5). Researchers have shown that spectral mixing can be considered a linear process if:
1. multiple scattering does not occur inside the same material, i.e. no multiple bounces occur,
2. no interaction between materials occurs, i.e. each photon sees only one material, and
3. the scale of mixing is very large compared to the grain size of the materials.
Various sources contribute to spectral mixing:
1. optical imaging systems integrate the reflected light from each pixel,
2. all materials present in the field of view contribute to the mixed reflectance sensed at a pixel, and
3. variable illumination conditions due to topographic effects result in spectrally mixed signals.
In general, there are five types of spectral mixtures:
1. Linear Mixture. The materials in the field of view are optically separated, so there is no multiple scattering between components. The combined signal is simply the sum of the fractional area times the spectrum of each component. This is also called area mixture (Figure 14.4).
Figure 14.4: Concept of signal mixing and spectral unmixing. A single pixel (IFOV) contains three materials A, B and C with fractions 0.25, 0.25 and 0.50; each end-member has a unique spectrum and the mixed spectrum is simply a weighted average: mix = 0.25·A + 0.25·B + 0.5·C.
2. Intimate Mixture. An intimate mixture occurs when different materials are in close contact in a scattering surface, such as the mineral grains in a soil or rock. Depending on the optical properties of each component, the resulting signal is a highly non-linear combination of the end-member spectra.
3. Coatings. Coatings occur when one material coats another. Each coating is a scattering/transmitting layer whose optical thickness varies with material properties and wavelength. Most coatings yield truly non-linear reflectance properties of materials.

4. Molecular Mixtures. Molecular mixtures occur on a molecular level, such as two liquids, or a liquid and a solid, mixed together. The reflection is non-linear.
5. Multiple Scattering. Multiple reflections occur within the same material or between different materials. With multiple reflections, the reflectance spectra are multiplied rather than added (addition being what results from the integration at the sensor). The multiplication results in a non-linear mixture.
The type and nature of the mixing systematics are crucial to the understanding of the mixed signal. In many processing approaches, linearity is assumed. This is also the case in spectral unmixing. From the list above it can be observed that the assumption of linearity is only true in a few cases.
Rather than aiming at representing the landscape in terms of a number of fixed classes, as is done in image classification (see Chapter 12), spectral unmixing (Figure 14.4) acknowledges the compositional nature of natural surfaces. Spectral unmixing strives to find the relative or absolute fractions (or abundances) of a set of spectral components that together contribute to the observed reflectance of a pixel. The spectral components of the set are known reference spectra, which, in spectral unmixing, we call end-members. The outcome of such an analysis is a new set of images that, for each selected end-member, portray the fraction of this end-member spectrum within the total spectrum of the pixel. Mixture modelling is the forward process of deriving mixed signals from pure end-member spectra, while spectral unmixing aims at doing the reverse: deriving the fractions of the pure end-members from the mixed-pixel (sometimes called mixel) signal.
A linear combination of spectral end-members is chosen to decompose the mixed reflectance spectrum of each pixel, R, into fractions of its end-members, Re_j, by

\[ \vec{R} = \sum_{j=1}^{n} f_j\, \vec{Re}_j + \vec{\varepsilon} \quad \text{and} \quad 0 \le \sum_{j=1}^{n} f_j \le 1, \tag{14.4} \]

where R is the reflectance of the mixed spectrum of each pixel, f_j is the fraction of each end-member j, Re_j is the reflectance of the end-member spectrum j, j indicates each of the n end-members, and ε is the residual error, i.e. the difference between the measured and modelled DN values. A unique solution is found from this equation by minimizing the residual error, ε, in a least-squares method.
first

previous

next

last

back

exit

zoom

contents

index

about

490

14.6. Applications of imaging spectrometry data

14.6 Applications of imaging spectrometry data
In this last section, a brief outline is given of current applications in various fields relevant to the thematic context of ITC. These include examples in the areas of geologic mapping and resource exploration, vegetation sciences, and hydrology.

14.6.1 Geology and resource exploration
Imaging spectrometry is used in an operational mode by the mineral industry for surface mineralogy mapping to aid in ore exploration. Other applications of the technology include lithological and structural mapping. The petroleum industry is developing methods for the implementation of imaging spectrometry at the reconnaissance stage as well; the main targets are hydrocarbon seeps and microseeps.
Other application fields include environmental geology (and related geobotany), in which currently much work is done on acid mine drainage and mine waste monitoring. Atmospheric effects resulting from geologic processes, for example the prediction and quantification of various gases in the atmosphere, such as sulfates emitted by volcanoes, for hazard assessment, form another important field. In soil science, much emphasis has been placed on the use of spectrometry for soil surface properties and soil compositional analysis. Major elements, such as iron and calcium, as well as the cation exchange capacity, can be estimated from imaging spectrometry. In a more regional context, imaging spectrometry has been used to monitor agricultural areas (per-lot monitoring) and semi-natural areas. Recently, spectral identification from imaging spectrometers has been successfully applied to the mapping of the swelling clay minerals smectite, illite and kaolinite in order to quantify the swelling potential of expansive soils. It should be noted that mining companies, and to a lesser extent petroleum companies, are operationally exploiting imaging spectrometer data for reconnaissance-level exploration campaigns.

14.6.2 Vegetation sciences
Much research in vegetation studies has focused on leaf biochemistry and structure, and on canopy structure. Biophysical models for leaf constituents are currently available, as are soil-vegetation models. Estimates of plant material, structure and biophysical parameters include carbon balance, yield/volume, nitrogen, cellulose, chlorophyll, etc. The leaf area index and vegetation indices have been extended to the hyperspectral domain and remain important physical parameters for characterizing vegetation. One ultimate goal is the estimation of biomass and the monitoring of changes therein. Several research groups investigate the bidirectional reflectance distribution function (BRDF, see Section 5.4.4) in relation to vegetation species analysis and floristics. Vegetation stress caused by water deficiency, pollution sources (such as acid mine drainage), and geobotanical anomalies in relation to ore deposits or petroleum and gas seepage, links vegetation analysis to exploration. Another upcoming field is precision agriculture, in which imaging spectrometry aids in better agricultural practices. An important factor in vegetation health status is the chlorophyll absorption and, in relation to that, the position of the red edge, determined using the red-edge index. The red edge is the name given to the steep increase in the reflectance spectrum of vegetation between the visible red and the near-infrared wavelengths.
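One common way of locating the red edge is to take the wavelength of the steepest rise (maximum first derivative) of the reflectance spectrum between roughly 680 nm and 750 nm. The sketch below uses that simple definition, which is an assumption of this example rather than the specific red-edge index referred to above.

```python
import numpy as np

def red_edge_position(wavelengths_nm, reflectance, lo=680.0, hi=750.0):
    """Wavelength [nm] of the steepest reflectance increase between lo and hi.
    wavelengths_nm and reflectance are 1-D arrays of equal length."""
    sel = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    wl = wavelengths_nm[sel]
    refl = reflectance[sel]
    slope = np.gradient(refl, wl)        # first derivative of the spectrum
    return wl[np.argmax(slope)]
```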

14.6.3 Hydrology
In the hydrological sciences, the interaction of electromagnetic radiation with water, and the inherent and apparent optical properties of water, are a central issue. Very important in imaging spectrometry of water bodies are the atmospheric correction and the air-water interface corrections. The water quality of freshwater aquatic environments, estuarine environments and coastal zones is of importance to national water authorities. Detection and identification of phytoplankton biomass, suspended sediments and other matter, coloured dissolved organic matter, and aquatic vegetation (i.e. macrophytes) are crucial parameters in optical models of water quality. Much emphasis has been put on the mapping and monitoring of the state and the growth or break-down of coral reefs, as these are important in the CO₂ cycle. In general, many multi-sensor missions such as Terra and Envisat are directed towards integrated approaches for global change studies and global oceanography. Atmosphere models are important in global change studies and aid in the correction of optical data for scattering and absorption due to atmospheric trace gases. In particular, the optical properties and absorption characteristics of ozone, oxygen, water vapour and other trace gases, and the scattering by molecules and aerosols, are important parameters in atmosphere studies. All of these can be, and are, derived from imaging spectrometers.

14.7 Imaging spectrometer systems
In Chapter 5 an overview of spaceborne multispectral and hyperspectral sensors was given. Here we provide a short historical overview of imaging spectrometer systems, with examples of presently operational airborne and spaceborne systems. The first civilian scanning imaging spectrometer was the Scanning Imaging Spectroradiometer (SIS), constructed in the early 1970s for NASA's Johnson Space Center. After that, civilian airborne spectrometer data were collected in 1981 using a one-dimensional profile spectrometer developed by the Geophysical Environmental Research Company, which acquired data in 576 channels covering the 0.4–2.5 µm wavelength range, followed by the Shuttle Multispectral Infrared Radiometer (SMIRR) in 1981. The first imaging device was the Fluorescence Line Imager (FLI, also known as the Programmable Multispectral Imager, PMI), developed by Canada's Department of Fisheries and Oceans in 1981. This was followed by the Airborne Imaging Spectrometer (AIS), developed at the NASA Jet Propulsion Laboratory, which was operational from 1983 onward, acquiring 128 spectral bands in the range of 1.2–2.4 µm. The field-of-view of 3.7° resulted in 32 pixels across-track. A later version of the instrument, AIS-2, covered the 0.8–2.4 µm region, acquiring images 64 pixels wide.
Since 1987, NASA has been operating the successor of the AIS systems, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The AVIRIS scanner collects 224 contiguous bands, resulting in a complete reflectance spectrum for each 20 m by 20 m pixel in the 0.4 µm to 2.5 µm region, with a sampling interval of <10 nm. The field-of-view of the AVIRIS scanner is 30 degrees, resulting in a ground field-of-view of 10.5 km from 20 km altitude. AVIRIS uses scanning optics and four spectrometers to image a 614 pixel swath simultaneously in 224 contiguous spectral bands over the 400 nm to 2500 nm wavelength range.

Table 14.1 provides examples of some currently operational airborne imaging spectrometer systems.
The first civil spaceborne Earth-observing imaging spectrometer was the Hyperspectral Imager (HSI), built by the TRW company and carried on the LEWIS satellite. Lewis was launched in 1997, but failed after three days in orbit. It was intended to cover the 0.4–1.0 µm range with 128 bands and the 0.9–2.5 µm range with 256 bands, of 5 nm and 6.25 nm bandwidth, respectively.
On board Terra there are two spaceborne imaging spectrometer systems: the US-Japanese Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Moderate Resolution Imaging Spectroradiometer (MODIS, see also Chapter 13). Terra was discussed in Chapter 5. In 2000, NASA launched the Earth Observing-1 (EO-1, see also Chapter 5) satellite carrying the Hyperion imaging spectrometer. EO-1 was discussed in Chapter 5.
The European Space Agency (ESA) is currently operating the Medium Resolution Imaging Spectrometer (MERIS) on the ENVISAT platform. MERIS is a fully programmable imaging spectrometer. Envisat-1 was discussed in Chapter 5. Good sources for further reading are [15], [40] and [8].

Table 14.1: Examples of some operational airborne imaging spectrometer systems.

Instrument | Manufacturer | Spectral range (nm) | Bands | Bandwidth (nm)
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) | NASA (US) | 400–2500 | 224 | 10
The Airborne Imaging Spectrometer (AISA) | SPECIM (Finland) | 430–900 | 288 | 1.63–9.8
Compact Airborne Spectrographic Imager (CASI) | ITRES (Canada) | 400–870 | 288 | 1.9
Digital Airborne Imaging Spectrometer (DAIS) | Geophysical Environmental Research Corporation (US) | 500–12300 | 79 | 15–2000
Hyperspectral Mapper (HyMAP) | Integrated Spectronics (Australia) | 400–2500 | 126 | 10–20
Multispectral Infrared and Visible Imaging Spectrometer (MIVIS) | SenSyTech, Inc. (US) | 430–12700 | 102 | 8–500
Reflective Optics System Imaging Spectrometer (ROSIS) | Dornier Satellite Systems (Germany) | 430–850 | 84 | –

14.8 Summary
Chapter 14 has given an overview of the concepts and methods of imaging spectrometry. It was explained how, on the one hand, it is similar to multispectral remote sensing in that a number of visible and NIR bands are used to study the characteristics of a given surface. However, imaging spectrometers typically acquire image data in a much larger number of narrow and contiguous spectral bands. This makes it possible to extract per-pixel reflectance spectra, which are useful for the detection of the diagnostic absorption features that allow us to determine and map variations in surface composition.
The section on pre-processing showed that for some applications a scene-dependent, relative atmospheric correction is sufficient. However, it was also explained when an absolute radiometric calibration, which provides transfer functions to convert DN values to at-sensor radiance, must be applied.
Once the data have been corrected, spectral matching and subpixel classification methods can be used to relate observed spectra to known spectral responses of different materials, or to identify which materials are present within a single pixel. Examples were provided in the areas of geologic mapping and resource exploration, vegetation sciences, and hydrology.

Questions
The following questions can help to study Chapter 14.
1. What are the advantages and disadvantages of hyperspectral remote sensing in comparison with multispectral Landsat-type scanning systems?
2. In which part of the electromagnetic spectrum do absorption bands occur that are diagnostic for different mineral types?
3. Under which conditions can signal mixing be considered a linear process?
4. Assume you design a hyperspectral scanner sensitive to radiation in the 400 to 900 nm region. The instrument will carry 288 bands with a spectral resolution (bandwidth) of 1.8 nm. Does this configuration result in spectral overlap between the bands?

Bibliography
[1] Aronoff, S. Geographic Information Systems: A Management Perspective. WDL Publications, Ottawa, 1989.
[2] Berk, A., Bernstein, L., and Robertson, D. MODTRAN: A Moderate Resolution Model for LOWTRAN. Air Force Geophysics Laboratory, Hanscom AFB, MA, US, 1997.
[3] Bijker, W. Radar for Rain Forest: A Monitoring System for Land Cover Change in the Colombian Amazon. PhD thesis, ITC, 1997.
[4] Buiten, H. J., and Clevers, J. G. P. W. Land Observation by Remote Sensing: Theory and Applications, vol. 3 of Current Topics in Remote Sensing. Gordon & Breach, 1993.
[5] Burrough, P. A., and Frank, A. U. Geographic Objects with Indeterminate Boundaries. GISDATA Series. Taylor & Francis, London, 1996.
[6] Burrough, P. A., and McDonnell, R. Principles of Geographical Information Systems. Oxford University Press, Oxford, 1998.

[7] Chavez, P., Sides, S., and Anderson, J. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering & Remote Sensing 57, 3 (1991), 295–303.
[8] Clark, R. N. Spectroscopy of rocks and minerals, and principles of spectroscopy. In Manual of Remote Sensing: Remote Sensing for the Earth Sciences (New York, 1999), A. Rencz, Ed., vol. 3, John Wiley and Sons, pp. 3–58.
[9] Community, E. CORINE Land Cover Technical Guide. ECSC-EEC-EAEC, Brussels, Belgium, 1993. EUR 12585 EN.
[10] de By, R. A., Ed. Principles of Geographic Information Systems, third ed., vol. 1 of ITC Educational Textbook Series. International Institute for Geo-Information Science and Earth Observation, Enschede, 2004.
[11] Deckers, J. A., Nachtergaele, F. O., and Spaargaren, O. C., Eds. World Reference Base for Soil Resources: Introduction. ACCO, Leuven, Belgium, 1998.
[12] Dobrin, M., and Savit, C. Introduction to Geophysical Prospecting, fourth ed. McGraw-Hill, New York, US, 1988.
[13] Edwards, G., and Lowell, K. E. Modeling uncertainty in photo-interpreted boundaries. Photogrammetric Engineering and Remote Sensing 60, 4 (1996), 337–391.
[14] Harris, J., Bowie, C., Rencz, A., and Graham, D. Computer enhancement techniques for the integration of remotely sensed, geophysical, and thematic data for the geosciences. Canadian Journal of Remote Sensing 20, 3 (1994), 210–221.
[15] Hunt, G. R. Remote sensing in geology. In Electromagnetic Radiation: The Communication Link in Remote Sensing (New York, 1980), B. Siegal and A. Gillespie, Eds., John Wiley and Sons, p. 702.
[16] ITC. ITC Textbook of Photo-interpretation. Four volumes, 1963–1974.
[17] ITC. ITC Textbook of Photogrammetry. Five volumes, 1963–1974.
[18] Jane's. Jane's Space Directory 1997–1998, 13th ed. Alexandria, Jane's Information Group, 1997.
[19] Kearey, P., Brooks, M., and Hill, I. An Introduction to Geophysical Exploration. Blackwell Science, Oxford, UK, 2002.
[20] Kniezys, F., Shettle, E., Abreu, L., Chetwynd, J., Anderson, G., Gallery, W., Selby, J., and Clough, S. User Guide to LOWTRAN 7. Air Force Geophysics Laboratory, Hanscom AFB, MA, US, 1988.
[21] Kramer, H. J. Observation of the Earth and its Environment: Survey of Missions and Sensors, fourth ed. Springer Verlag, Berlin, Germany, 2002.
[22] Laurini, R., and Thompson, D. Fundamentals of Spatial Information Systems, vol. 37 of The APIC Series. Academic Press, London, 1992.
[23] Lillesand, T. M., Kiefer, R. W., and Chipman, J. W. Remote Sensing and Image Interpretation, fifth ed. John Wiley & Sons, New York, NY, 2004.

[24] McCloy, K. R. Resource Management Information Systems. Taylor & Francis, London, UK, 1995.
[25] Middelkoop, H. Uncertainty in a GIS, a test for quantifying interpretation output. ITC Journal 1990, 3 (1990), 225–232.
[26] Molenaar, M. An Introduction to the Theory of Spatial Object Modelling. Research Monographs in GIS Series. Taylor & Francis, London, 1998.
[27] Parasnis, D. Principles of Applied Geophysics. Kluwer Academic Publishing, Dordrecht, The Netherlands, 1996.
[28] Perdigao, V., and Annoni, A. Technical and Methodological Guide for Updating CORINE Land Cover Data Base. EC-JRC, EEA, Brussels, Belgium, 1997. EUR 17288 EN.
[29] Peuquet, D. J., and Marble, D. F., Eds. Introductory Readings in Geographic Information Systems. Taylor & Francis, London, 1990.
[30] Rahman, H., and Dedieu, G. SMAC: A simplified method for the atmospheric correction of satellite measurements in the solar spectrum. International Journal of Remote Sensing 15 (1994), 123–143.
[31] Richter, R. A Spatially-Adaptive Fast Atmospheric Correction Algorithm. ERDAS Imagine ATCOR2 User Manual (Version 1.0), 1996.
[32] Rossiter, D. G. Lecture Notes: Methodology for Soil Resource Inventories, 2nd revised ed. ITC Lecture Notes SOL.27. ITC, Enschede, The Netherlands, 2000.

[33] Sabins, F. F. Remote Sensing: Principles and Interpretation, third ed. Freeman & Co., New York, NY, 1996.
[34] Schetselaar, E. On preserving spectral balance in image fusion and its advantages for geological image interpretation. Photogrammetric Engineering & Remote Sensing 67, 8 (2001), 925–934.
[35] Smith, W., and Sandwell, D. Measured and estimated seafloor topography, version 4.2. Poster RP1, 1997. World Data Center for Marine Geology and Geophysics.
[36] Tanré, D., Deroo, C., Duhaut, P., Herman, M., Morcette, J., Perbos, J., and Deschamps, P. Description of a computer code to simulate the satellite signal in the solar spectrum: the 5S code. International Journal of Remote Sensing 11 (1990), 659–668.
[37] Telford, W., Geldart, L., and Sheriff, R. Applied Geophysics, second ed. Cambridge University Press, Cambridge, UK, 1991.
[38] TopoSys, 2002.
[39] Trevett, J. W. Imaging Radar for Resources Surveys. Chapman and Hall Ltd., London, UK, 1986.
[40] van der Meer, F., and de Jong, S. Imaging Spectrometry: Basic Principles and Prospective Applications. Kluwer Academic Publishers, Dordrecht, the Netherlands, 2001.
[41] Vermote, E., Tanré, D., Deuzé, J., Herman, M., and Morcette, J.-J. Second simulation of the satellite signal in the solar spectrum, 6S: an overview. IEEE Transactions on Geoscience and Remote Sensing 35, 3 (May 1997), 675–686.
[42] Wehr, A., and Lohr, U. Airborne laser scanning - an introduction and overview. ISPRS Journal of Photogrammetry & Remote Sensing 54 (1999), 68–82.
[43] Worboys, M. F. GIS: A Computing Perspective. Taylor & Francis, London, UK, 1995.
[44] Zinck, A. J. Physiography & Soils. ITC Lecture Notes SOL.41. ITC, Enschede, The Netherlands, 1988.

Glossary

A
Absorption The process in which electromagnetic energy is converted within an object into other forms of energy (e.g. heat).
Active sensor Sensor with a built-in source of energy. The sensor both emits and receives energy (e.g. radar and laser).
Additive colours The additive principle of colours is based on the three primary colours of light: red, green and blue. All three primary colours together produce white. Additive colour mixing is used, for example, on computer screens and televisions.

B
Backscatter The microwave signal reflected by elements of an illuminated surface in the direction of the radar antenna.
Band Usually related to wavelength band, which indicates a specific range of the electromagnetic spectrum to which the sensor is sensitive. In general, it can indicate any layer of an n-dimensional image.
Brightness Property of a radar image in which the observed strength of the radar reflectivity is expressed as being proportional to a digital number.

C
Charge-coupled device (CCD) Semi-conductor elements usually aligned as a linear (scanner) or surface array (video, digital camera). CCDs produce image data.
Class Classes, usually defined in visual image interpretation and image classification, are a variable of the nominal type. Class schemes are applied to describe hierarchical relationships between them.
Cluster Used in the context of image classification to indicate a concentration of observations (points in the feature space) related to a training class.
Colour Colloquial term to indicate hue (IHS space).
Colour film Also known as true colour film, used in (aerial) photography. The principle of colour film is to add sensitized dyes to the silver halide. The magenta, yellow and cyan dyes are sensitive to green, blue and red light, respectively.
Colour infrared film Film with specific sensitivity for infrared wavelengths. Typically used in surveys of vegetation.
Corner reflector Combination of two or more intersecting specular surfaces that combine to enhance the signal reflected back in the direction of the radar, e.g. houses in urban areas.

D
Di-electric constant Parameter that describes the electrical properties of a medium. Reflectivity of a surface and penetration of microwaves into the
material are determined by this parameter.
Digital Terrain Model (DTM) Term indicating a digital description of the terrain relief. A DTM can be stored in different manners (contour lines,
TIN, raster) and may also contain semantic, relief-related information
(breaklines, saddlepoints).

E
Earth Observation (EO) Term indicating the collection of remote sensing techniques performed from space.
Electromagnetic energy Energy with both electric and magnetic components. Both the wave model and the photon model are used to explain this phenomenon. The measurement of reflected and emitted electromagnetic energy is an essential aspect in remote sensing.
Electromagnetic spectrum The complete range of all wavelengths, from gamma rays (10⁻¹² m) up to very long radio waves (10¹² m).
Emission Radiation of electromagnetic energy. Each object with a temperature above 0 K (−273 °C) emits electromagnetic energy.
Emissivity The radiant energy of an object compared to the energy of a blackbody of the same temperature, expressed as a ratio.
Error matrix Matrix that compares samples taken from the source to be evaluated with observations that are considered as correct (reference). The error matrix allows calculation of quality parameters such as overall accuracy, error of omission and error of commission.

F
False colour infrared film See Colour infrared film.
Feature space The mathematical space describing the combinations of observations (DN values in the different bands) of a multispectral or multi-band image. A single observation is defined by a feature vector.
Feature space plot A two- or three-dimensional graph in which the observations made in different bands are plotted against each other.
Field of view (FOV) The total swath as observed by a sensor-platform system. Sometimes referred to as total field of view. It can be expressed as an angle or by the absolute value of the width of the observation. See also Instantaneous field of view.
Filter (1) Physical product made out of glass and used in remote sensing devices to block certain wavelengths, e.g. an ultraviolet filter. (2) Mathematical operator used in image processing for modifying the signal, e.g. a smoothing filter.
Foreshortening Spatial distortion whereby terrain slopes facing the side-looking radar are mapped as having a compressed range scale relative to their appearance if the same terrain were flat.

G
Geo-spatial data Data that include positions in geographic space. In this book, usually abbreviated to spatial data.
Geocoding Process of transforming and resampling image data in such a way that they can be used simultaneously with data that are in a specific map projection. Input for a geocoding process are image data and control points; output is a geocoded image. A specific category of geocoded images are orthophotos and orthoimages.
Geographic information Information derived from spatial data and, in the context of this book, from image data. Information is what is relevant in a certain application context.
Geographic Information System (GIS) A software package that accommodates the capture, analysis, manipulation and presentation of georeferenced data. It is a generic tool applicable to many different types of use (GIS applications).
Georeferencing Process of relating an image to a specific map projection. As a result, vector data stored in this projection can, for example, be superimposed on the image. Input for a georeferencing process are image data and coordinates of ground control points; output is a georeferenced image.
Global Navigation Satellite System (GNSS) The Global Navigation Satellite System is a global infrastructure for the provision of positioning and timing information. It consists of the American GPS and Russian Glonass systems. There is also the forthcoming European Galileo system.
Ground control points (GCPs) Points which are used to define or validate a geometric transformation process. The name indicates that these points have been measured on the ground. Ground control points should be recognizable both in the image and in the real world.
Ground range Range direction of the side-looking radar image as projected onto the horizontal reference plane.
Ground truth A term that may include different types of observations and measurements performed in the field. The name is imprecise because it suggests that these are 100% accurate and reliable, and this may be difficult to achieve.

H
Histogram Tabular or graphical representation showing the (absolute and/or relative) frequency of values. In the context of image data it relates to the distribution of the (DN) values of a set of pixels.
Histogram equalization Process used in the visualization of image data to optimize the overall image contrast. Based on the histogram, all available grey levels or colours are distributed in such a way that all occur with equal frequency in the result.

I
Image A digital file comprising pixels that represent measured local reflectance (emission or backscatter) values in some designated part of the electromagnetic spectrum. Typically, images are stored in a row-column system. An image may comprise any number of bands (or channels). After the reflectance values have been translated into some thematic variable, the image becomes a raster. With an image, we talk of its constituent pixels; with a raster we talk of its cells.
Image classification Image classification is the process of assigning pixels to nominal, i.e. thematic, classes. Input is a multi-band image; output is a raster in which each cell has a (thematic) code. Image classification can be realized using a supervised or unsupervised approach.
Image enhancement The process of improving the visual representation of an image, for example by histogram equalization or by using filters.
Image interpretation The process of information extraction from image data. What constitutes information is defined by the application context. Visual interpretation and pattern recognition techniques can be used.
Image processing system A computer system that is specifically designed to process image data and to extract information from it by visualizing the data and applying models and pattern recognition techniques.
Image space The mathematical space describing the (relative) positions of the observations. Image positions are expressed by their row and column index.

Incidence angle Angle between the line of sight from the sensor to an element of an imaged scene and the vertical direction to the scene. One must distinguish between the nominal incidence angle, determined by the geometry of the radar and the Earth's geoidal surface, and the local incidence angle, which takes into account the mean slope of the pixel in the image.
Infrared waves Electromagnetic radiation in the infrared region of the electromagnetic spectrum. Near-infrared (0.7–1.2 µm), middle infrared (1.2–2.5 µm) and thermal infrared (8–14 µm) are distinguished.
Instantaneous field of view (IFOV) The area observed on the ground by a sensor, which can be expressed by an angle or in ground surface units.
Interferometry Computational process that makes use of the interference of two coherent waves. In the case of imaging radar, two different imaging paths cause phase differences from which an interferogram can be derived. In SAR applications, interferometry is used for constructing a DEM.
Interpretation elements The elements used by the human vision system to interpret a picture or image. Interpretation elements are: tone, texture, shape, size, pattern, site, association and resolution.

L
Latent image When exposed to light, the silver halide crystals within the photographic emulsion undergo a chemical reaction, which results in an invisible latent image. The latent image is transformed into a visible image by the development process, in which the exposed silver halide is converted into silver grains that appear black.
Layover Extreme form of foreshortening, i.e. relief distortion in imagery, in which the top of the reflecting object (e.g. a mountain) is closer to the radar than the lower part of the object. The image of such a feature appears to have fallen over towards the radar.
Look angle The angle of viewing relative to the vertical (nadir) as perceived from the sensor.

M
Microwaves Electromagnetic radiation in the microwave window, which ranges from 1–100 cm.
Mixel Acronym for mixed pixel. Mixel is used in the context of image classification where different spectral classes occur within the area covered by one pixel.
Monoplotting Process that enables extraction of accurate (x, y) coordinates from image data by correcting for terrain relief.
Multispectral scanning Remote sensing technique in which the surface is scanned at the same moment in different wavelength bands. This book distinguishes two types of scanners: pushbroom and whiskbroom scanners.

N
Nadir The point (or line) directly under the platform during acquisition of image data.
Noise Any unwanted or contaminating signal competing with the desired signal.

O
Objects Objects are real world features and have clearly identifiable geometric characteristics. In a computer environment, objects are modelled using an object-based approach, in contrast to a field-based approach, which is more suited for continuous phenomena.
Orbit The path of a satellite through space. Types of orbits used for remote sensing satellites are, for example, (near) polar and geostationary.
Orientation Refers to the relative or absolute position of a sensor and/or platform. Interior orientation relates to the components within a camera or sensor. Relative orientation refers to (successive) positions of the sensor-platform system. In photogrammetry, absolute orientation refers to the orientation of the image relative to a spatial reference system.
Orthoimage An orthoimage and orthophoto have been corrected for terrain relief. As a result, an orthoimage can be used directly in combination with geocoded data.

P
Panchromatic Indication of one (wide or narrow) spectral band in the visible
and near-infrared part of the electromagnetic spectrum.
Passive sensor Sensor that records energy that is produced by external sources
such as the Sun and the Earth.
Pattern recognition Term for the collection of techniques used to detect and
identify patterns. Patterns can be found in the spatial, spectral and
temporal domains. An example of spectral pattern recognition is image classification; an example of spatial pattern recognition is segmentation.
Photogrammetry The science and techniques of making measurements from
photos or image data. Photogrammetric procedures are required for
accurate measurements from stereo pairs of aerial photos, image data
or radar data.
Photograph Image obtained by using a camera. The camera produces a negative film, which is can be printed into positive paper product.
Pixel Contraction for picture element, which is the elementary unit of image data. The ground pixel size of image data is related to the spatial resolution of the sensor system that produced it.

Pixel value The representation of the energy measured at a point, usually expressed as a Digital Number (DN) value.
Polarization Orientation of the electric (E) vector in an electromagnetic wave,
frequently horizontal (H) or vertical (V).
Polychromatic Solar radiation composed of a mixture of monochromatic wavelengths.
Pulse Group of waves with a distribution confined to a short interval of time. Such a distribution is described in the time domain by its width and its amplitude.


Q
Quantization The number of discrete levels used to store the energy measured by a sensor; e.g. 8-bit quantization allows 256 levels of energy.
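As an illustration (an added sketch, not part of the original glossary), quantizing continuous radiance measurements to 8-bit DN values could look as follows; the radiance range and sample values are assumptions for the example:

```python
import numpy as np

def quantize(radiance, r_min, r_max, bits=8):
    """Map continuous radiance measurements to integer DN values.

    r_min, r_max define the sensor's dynamic range (assumed known here);
    bits is the number of quantization bits (8 bits -> 256 levels, DN 0..255).
    """
    levels = 2 ** bits
    scaled = (np.asarray(radiance, dtype=float) - r_min) / (r_max - r_min)
    dn = np.clip(np.round(scaled * (levels - 1)), 0, levels - 1)
    return dn.astype(np.uint16)

# Three samples quantized over an assumed 0-300 radiance range -> DN 0, 128, 255
print(quantize([0.0, 150.0, 300.0], 0.0, 300.0))
```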


R
Radar Acronym for Radio Detection And Ranging. Radars are active sensors at wavelengths between 1 and 100 cm.

Radar equation Mathematical expression that describes the average received signal level, compared to the additive noise level, in terms of system parameters. Principal parameters include the transmitted power, antenna gain, radar cross-section, wavelength and range.
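For reference, one commonly cited monostatic form of the radar equation (added here as an illustration; the exact expression used elsewhere in this book may differ) is:

\[
  P_r = \frac{P_t \, G^2 \, \lambda^2 \, \sigma}{(4\pi)^3 R^4}
\]

where \(P_t\) is the transmitted power, \(G\) the antenna gain, \(\lambda\) the wavelength, \(\sigma\) the radar cross-section and \(R\) the range.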
Radiance The energy (flux) leaving an element of the surface.

Radiometric resolution The smallest observable difference in energy.

Range Line of sight between the radar and each illuminated scatterer.

RAR Acronym for Real Aperture Radar.

Raster A regularly spaced set of cells with associated (field) values. In contrast to a grid, the associated values represent cell values, not point values. This means that the value for a cell is assumed to be valid for all locations within the cell.

Reflectance The ratio of the reflected radiation to the total irradiation. Reflectance depends on the wavelength.
Reflection The process of scattering of electromagnetic waves by an object.

Relief displacement A shift of the position of an imaged object in the image due to the local elevation of the object. The shift (magnitude and direction) depends on sensor-platform characteristics and on the elevation of the object.
Remote sensing (RS) Remote sensing comprises the instrumentation, techniques and methods used to observe the Earth's surface at a distance and to interpret the images or numerical values obtained, in order to acquire meaningful information about particular objects on Earth.
Resampling Process to generate a raster with another orientation and/or a different cell size, and to assign DN values using one of the following methods: nearest neighbour selection, bilinear interpolation or cubic convolution.
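A minimal illustrative Python sketch (added here, not from the textbook) of nearest-neighbour resampling, assuming a user-supplied mapping from output to input pixel coordinates:

```python
import numpy as np

def resample_nearest(src, out_shape, to_input):
    """Nearest-neighbour resampling of a raster.

    src       : 2D array of DN values (input raster)
    out_shape : (rows, cols) of the output raster
    to_input  : function mapping output (row, col) to input (row, col),
                e.g. derived from a first-order geometric transformation
    """
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            ri, ci = to_input(r, c)
            ri, ci = int(round(ri)), int(round(ci))   # nearest input pixel
            if 0 <= ri < src.shape[0] and 0 <= ci < src.shape[1]:
                out[r, c] = src[ri, ci]               # DN copied unchanged
    return out

# Example: a 2x blow-up, where output pixel (r, c) takes the DN of input pixel (r//2, c//2)
src = np.array([[10, 20], [30, 40]], dtype=np.uint8)
print(resample_nearest(src, (4, 4), lambda r, c: (r // 2, c // 2)))
```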
Resolution Indicates the smallest observable (measurable) difference at which objects can still be distinguished. In a remote sensing context the term is used for spatial, spectral and radiometric resolution.
RMS error Root Mean Square error. A statistical measure of accuracy, similar
to standard deviation, indicating the spread of the measured values
around the true value.
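As an illustration (an added sketch, not from the textbook; the coordinate values are hypothetical), the RMS error of measured coordinates against their reference values can be computed as:

```python
import math

def rms_error(measured, reference):
    """Root Mean Square error of measured values against reference values."""
    residuals = [m - t for m, t in zip(measured, reference)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Residuals of four hypothetical GCP x-coordinates, in metres -> RMS error of about 0.27 m
print(rms_error([100.4, 250.1, 399.7, 519.8], [100.0, 250.0, 400.0, 520.0]))
```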
Roughness Variation of surface height within an imaged resolution cell. A
surface appears rough to microwave illumination when the height
variations become larger than a fraction of the radar wavelength.


S
SAR Acronym for Synthetic Aperture Radar. The (high) azimuth resolution (direction of the flight line) is achieved through off-line processing. The SAR is able to function as if it had a large virtual antenna aperture, synthesized from many observations with the (relatively) small real antenna of the SAR system.

Scale The ratio of the distance measured on a (printed) map or image to that measured on the ground surface between the same two points. It is expressed as a ratio, e.g. 1 : 10,000. The 10,000 is referred to as the scale factor.
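A trivial added sketch of the scale-factor arithmetic (the distances are hypothetical):

```python
def ground_distance_m(map_distance_cm, scale_factor):
    """Ground distance in metres for a map distance in centimetres at scale 1:scale_factor."""
    return map_distance_cm * scale_factor / 100.0   # centimetres -> metres

# 3.2 cm measured on a 1:10,000 map corresponds to 320 m on the ground
print(ground_distance_m(3.2, 10000))
```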

Scanner (1) Remote sensing sensor that is based on the scanning principle, e.g. a multispectral scanner; (2) office device used to convert analogue products (photo, map) into digital raster format.

Slant range Image direction as measured along the sequence of line of sight
rays from the radar to each reflecting point in the scene.
Spatial data In the broad sense, spatial data is any data with which position is
associated.
Spatial resolution See resolution.

Speckle Interference of backscattered waves stored in the cells of a radar image. It causes the return signals to be extinguished or amplified, resulting in random dark and bright pixels in the image.

Spectral band The range of wavelengths to which a channel (single band) of a multispectral scanner is sensitive.
Spectral resolution See resolution.

Specular reflection Mirror-like reflection; bounced-off radiation.

Stereo model A 3D relief model observed through stereoscopic vision of a stereo pair.
Stereo pair A pair of overlapping photos or images that (partially) cover the same area from different positions. When appropriately taken, stereo pairs form a stereo model that can be used for stereoscopic vision and stereoplotting.
Stereoplotting Process that allows accurate (x, y, z) coordinates to be measured from stereo models.
Stereoscopic vision The ability to perceive distance or depth by observation
with both eyes. In remote sensing, stereoscopic vision is used for the
three-dimensional observation of two images (photos) that are made
from different positions. Stereoscopy is used in visual image interpretation and stereoscopic measurements (stereoplotting). It can yield
3D coordinates.
Subtractive colours The subtractive principle of colours is based on the three
printing colours: cyan, magenta and yellow. All printed colours can
be produced by a combination of these three colours. The subtractive
principle is also used in colour photography.
Sun-synchronous Used to indicate a satellite orbit that is designed in such a way that the satellite always passes the same location on Earth at the same local time.


T
Training stage Part of the image classification process in which pixels representative of a certain class are identified. Training results in a training set that comprises the statistical characteristics (signatures) of the classes of interest.
Transmittance The ratio of the radiation transmitted to the total irradiation.


V
Variable, interval A variable that is measured on a continuous scale, but with
no natural zero. It cannot be used to form ratios.
Variable, nominal A variable that is organized in classes, with no natural order, i.e., cannot be ranked.
Variable, ordinal A variable that is organized in classes with a natural order,
and so it can be ranked.
Variable, ratio A variable that is measured on a continuous scale, and with a
natural zero, so can be used to form ratios.
Viewing angle Angle of observation referring to the vertical (nadir) of the sensor.


W
Wavelength Minimum distance between two successive occurrences of a recurring feature in a periodic sequence, such as the crests of a wave. Wavelength is expressed as a distance (e.g. m or nm).
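As a reminder (added here, not part of the original glossary), wavelength \(\lambda\), frequency \(\nu\) and the speed of light \(c\) (see Appendix A) are related by:

\[
  \lambda = \frac{c}{\nu}
\]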


Index
absorption, 61
absorptivity, 56
active sensor, 60
laser scanning, 255
radar, 213
additive colours, 353
aerial camera, 131
digital, 157
aerial photography
oblique, 129
vertical, 129
aerospace surveying, 33
altimeter, 214
amplitude, 217
aperture, 222
atmospheric window, 63

BRDF, 196, 492


Brovey transform, 373
camera attitude, 335
CCD, 143
blooming, 143
central projection, 132
charge-coupled device, 143, 167
classification
subpixel, 483
classification algorithm
box, 435
maximum likelihood, 438
minimum distance to mean, 437
coal fire, 95
collinearity, 331
colour, 348
hue, 355
IHS system, 354
RGB system, 352

bathymetry, 104
binary encoding, 484
blackbody, 56, 456

spaces, 351
tone, 389
YMC system, 356
constellation, 205
coordinate system, 320
coverage, 115
dynamic range, 115, 164
spatial, 115
spectral, 115
temporal, 115

end-member, 488
error of commission, 441
error of omission, 441
false colour composite, 368
feature space, 420
fiducial marks, 135, 333
field observations, 407
field of view, 166
angular, 147
instantaneous, 166
film, 137
emulsion, 139
false colour infrared, 140
scanning, 141
speed, 138
true colour, 140
filter
kernel, 365
optical, 133
filter operations, 365
averaging, 366
edge enhancement, 367
filtering, 234
flat-field correction, 481
focal length, 132, 146
frequency, 54, 217

dead ground effect, 147


detector, 164
opening angle, 166
digital number, 114, 141
digital photogrammetry, 340
digitizing, 339
DN-value, 114
DTM, 99, 340
dwell time, 166
dynamic range, 115, 143
Earth observation, 33
electromagnetic energy, 53
electromagnetic spectrum, 58
optical wavelengths, 58
emissivity, 56
empirical-line correction, 481
general sensitivity, 137
geocoding, 326
geometric transformation, 321
first order, 322
residual errors, 323
root mean square error, 323
georeferencing, 321
greybody, 456
ground control point (GCP), 322
ground observations, 36
ground truth, 408
ground-based observations, 30

supervised, 429
unsupervised, 430
image data, 32
imaging spectrometry, 475, 492
imaging spectroscopy, 476
Inertial Measuring Unit, 261
Inertial Navigation System, 261
information extraction, 33
interferometry, 318
differential (DINSAR), 245
interpretation, 236
interpretation elements, 389
irradiance, 73

height displacement, 151


histogram, 358
cumulative, 359
equalisation, 363
hue, 355, 389
human vision, 387
hyperspectral imaging, 476

kinetic temperature, 459


Kirchhoff's law, 457
Lagrangian points, 113, 206
land cover, 402
land use, 402, 443
libration points, 113

IFOV, 166
IHS transformation, 373
image, 114
data cost, 123
size, 117
stereo, 170
image classification, 391, 416

mapping, 384
mapping unit
minimum, 403
mirror stereoscope, 392
mixed pixel, 488
mixel, 488

monoplotting, 330, 338
multispectral scanner, 162

overlap, 153

nadir, 150
night-time image, 112
noise, 143
Normalized Difference Vegetation Index (NDVI), 177
off-nadir viewing, 170
orbit, 110
altitude, 110
GEO, 110
geostationary, 112
inclination angle, 110
LEO, 110
period, 110
polar, 111
repeat cycle, 111
sun-synchronous, 112
types of, 110
orientation, 330
exterior, 334
interior, 334
relative, 334
orthoimage, 330, 340
orthophoto, 340
overall accuracy, 441

passive sensor, 60
pattern, 390
phase, 217
photogrammetry, 128, 318
photon, 55, 164
pixel, 114
mixed, 444
size, 116
Planck's constant, 453
platform, 106
airborne, 106
aircraft, 108
operational, 121
satellite, 110
Space Shuttle, 88
spaceborne, 106
pocket stereoscope, 392
quality
image classification, 440
photo-interpretation, 409
quantization, 116, 164
radar, 101
azimuth direction, 220
bands, 217

differential interferometry, 245
equation, 215
foreshortening, 229, 231
ground range, 221
ground range resolution, 222
imaging, 215
incidence angle, 221
interferogram, 243
interferometry (INSAR), 240
layover, 229
multi-look, 227
polarisation, 218
range direction, 220
real aperture (RAR), 222
shadow, 229
Single-Look-Complex (SLC), 243
slant range, 221
slant range resolution, 222
sophisticated, 204
synthetic aperture (SAR), 223
radiant temperature, 459
receiving station, 113
red edge, 492
reflectance
bidirectional, 196
reflectance curve, 73
soil, 76


vegetation, 75
water, 77
reflection, 71
diffuse, 72
specular, 72
relief displacement, 150
Remote Sensing, 32
replicability, 409
resampling, 318, 328
bilinear convolution, 328
cubic convolution, 328
nearest neighbour, 328
resolution, 115
radiometric, 115, 138, 164
spatial, 115, 148, 166
spectral, 115, 165, 169
temporal, 116
revisit time, 111, 116
satellite
ALOS, 204, 253
AlSAT-1, 205
Aqua, 184
Artemis, 200, 205
attitude control, 204
Aura, 184
Cartosat-1, 189
Cartosat-2, 189
communication, 113
laser, 204
constellation, 205
Envisat-1, 199, 204
EO-1, 194, 495
EROS-1A, 191
ERS-1, 102, 201
ERS-2, 201
fast development, 203
GRACE, 278
Ikonos, 191
IRS-1C, 188
IRS-1D, 188
Landsat-5, 91
Landsat-7, 180
LEWIS, 495
Meteosat-8, 117, 174
miniaturization, 204
NOAA-17, 176
Oceansat-1, 188
OrbView-2, 93
OrbView-3, 191
Proba, 196
Quickbird, 191
Radarsat-1, 204
Resourcesat-1, 188
RISAT, 189


SPOT-5, 187
Terra, 183
TES, 188
Thai-Paht-2, 205
Triana, 113
Tsinghua-1, 205
satellite navigation, 109, 156
scale factor, 146
scanner, 162
across-track, 163
along-track, 167
whiskbroom, 163
scattering, 66
Mie, 69
non-selective, 70
Rayleigh, 67
scatterometer, 214
scatterplot, 421
selective radiator, 456
sensor
active, 84
aerial camera, 88, 131
AIS, 494
ALI, 194
ASAR, 201
ASTER, 183, 495
AVHRR, 176

AVIRIS, 476, 494
AWiFS, 188
CHRIS, 196
ETM+, 180
gamma-ray spectrometer, 87, 275
GERIS, 476
gravimeter, 277
HIRES, 476
HRC, 196
HRG, 187
HSI, 495
Hyperion, 194, 495
hyperspectral imager, 94
imaging radar, 101
imaging spectrometer, 94
LAC, 194
laser scanner, 99
lidar, 99
LISS4, 188
magnetometer, 279
MERIS, 202
MODIS, 183, 495
MSS, 180
multispectral, 162
multispectral scanner, 92
new types of, 204
OSA, 191

PALSAR, 204, 253


passive, 84
PMI, 494
pushbroom, 167
radar altimeter, 103
radiometer, 97
SeaWiFS, 93
SEVIRI, 174
side scan sonar, 104
SIS, 494
SMIRR, 494
sonar, 104
Terra, 495
thermal scanner, 95
TM, 180
VEGETATION, 187
video camera, 89
Snell's Law, 236
spatio-temporal characteristics, 119, 427
speckle, 232, 366
spectral angle mapper (SAM), 484
spectral band, 92, 165
spectral matching, 483
spectral sensitivity, 137
spectral unmixing, 485
spectrometer, 494

Stefan-Boltzmann law, 56, 455
stereogram, 392
stereomodel, 341
stereoplotting, 341
stereorestitution, 331
stereoscope, 341
stereoscopic vision, 392
subtractive colours, 356
superimposition, 321
temperature
kinetic, 459
radiant, 459
texture, 390
three-dimensional, 318
transfer function, 362
transmission, 61
two-dimensional, 318
validation, 440
wavelength, 53
wavelength band, 92


Appendix A

SI units & prefixes

Table A.1: Relevant SI units in the context of remote sensing.

  Quantity       SI unit
  Length         metre (m)
  Time           second (s)
  Temperature    kelvin (K)
  Energy         joule (J)
  Power          watt (W) (J/s)

Table A.2: Unit prefix notation.

  Prefix         Multiplier
  tera (T)       10^12
  giga (G)       10^9
  mega (M)       10^6
  kilo (k)       10^3
  centi (c)      10^-2
  milli (m)      10^-3
  micro (µ)      10^-6
  nano (n)       10^-9
  pico (p)       10^-12

Table A.3: Common units of wavelength.

  Unit           SI equivalent
  centimetre     10^-2 m
  millimetre     10^-3 m
  micron         10^-6 m
  micrometre     10^-6 m
  nanometre      10^-9 m

Table A.4: Constants and non-SI units.

  Parameter             Value
  speed of light        2.9979 × 10^8 m/s
  degree Celsius (°C)   (°C + 273.15) K
  inch                  2.54 cm
  foot                  30.48 cm
  mile                  1,609 m
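As an illustration (an added sketch, not part of the book), the multipliers and conversions from Tables A.2–A.4 can be applied directly, for example:

```python
# Conversions based on Tables A.2-A.4 (illustrative only)
SPEED_OF_LIGHT = 2.9979e8    # m/s (Table A.4)
MICRO = 1e-6                 # prefix multiplier for micro (Table A.2)

def celsius_to_kelvin(t_celsius):
    """Degrees Celsius to kelvin, as in Table A.4."""
    return t_celsius + 273.15

def micrometres_to_metres(wavelength_um):
    """Wavelength in micrometres to metres, as in Table A.3."""
    return wavelength_um * MICRO

print(celsius_to_kelvin(20.0))         # 293.15 K
print(micrometres_to_metres(0.55))     # 5.5e-07 m (green light)
```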
