Remote Sensing
International Institute for Geo-Information Science and Earth Observation (ITC)
© 2004 ITC
Principles of Remote Sensing
An introductory textbook

Editors
Norman Kerle
Lucas L. F. Janssen
Gerrit C. Huurneman

Authors
Wim H. Bakker
Karl A. Grabmaier
Gerrit C. Huurneman
Freek D. van der Meer
Anupma Prakash
Klaus Tempfli
Ambro S. M. Gieske
Ben G. H. Gorte
Chris A. Hecker
John A. Horn
Lucas L. F. Janssen
Norman Kerle
Gabriel N. Parodi
Christine Pohl
Colin V. Reeves
Frank J. van Ruitenbeek
Michael J. C. Weir
Tsehaie Woldai
Cover illustration:
Paul Klee (1879-1940), Chosen Site (1927)
Pen drawing and water-colour on paper. Original size: 57.8 × 40.5 cm.
Private collection, Munich
© Paul Klee, Chosen Site, 2001 c/o Beeldrecht Amstelveen
Cover page design: Wim Feringa
All rights reserved. No part of this book may be reproduced or translated in any form, by
print, photoprint, microfilm, microfiche or any other means without written permission
from the publisher.
Published by:
The International Institute for Geo-Information Science and Earth Observation
(ITC), Hengelosestraat 99, P.O. Box 6, 7500 AA Enschede, The Netherlands
CIP-GEGEVENS KONINKLIJKE BIBLIOTHEEK, DEN HAAG
Contents

1  Introduction to remote sensing (L. L. F. Janssen)
   1.1  Spatial data acquisition
   1.2  Application of remote sensing
   1.3  Structure of this textbook

2  Electromagnetic energy and remote sensing (T. Woldai)
   2.1  Introduction
   2.2  Electromagnetic energy
        2.2.1  Waves and photons
        2.2.2  Sources of EM energy
        2.2.3  Electromagnetic spectrum
        2.2.4  Active and passive remote sensing
   2.3  Energy interaction in the atmosphere
        2.3.1  Absorption and transmission
        2.3.2  Atmospheric scattering
   2.4  Energy interactions with the Earth's surface
        2.4.1  Spectral reflectance curves

3  Sensors and platforms (L. L. F. Janssen & W. H. Bakker)
   3.1  Introduction
   3.2  Sensors
        3.2.1  Passive sensors
        3.2.2  Active sensors
   3.3  Platforms
        3.3.1  Airborne remote sensing
        3.3.2  Spaceborne remote sensing
   3.4  Image data characteristics
   3.5  Data selection criteria

4  Aerial cameras (J. A. Horn & K. A. Grabmaier)
   4.1  Introduction
   4.2  Aerial camera
        4.2.1  Lens cone
        4.2.2  Film magazine and auxiliary data
        4.2.3  Camera mounting
   4.3  Spectral and radiometric characteristics
        4.3.1  General sensitivity
        4.3.2  Spectral sensitivity
        4.3.3  True colour and colour infrared photography
        4.3.4  Scanning

5  Multispectral scanners (W. H. Bakker)
   5.1  Introduction
   5.2  Whiskbroom scanner
        5.2.1  Spectral characteristics of a whiskbroom
        5.2.2  Geometric characteristics of a whiskbroom
   5.3  Pushbroom sensor
        5.3.1  Spectral characteristics of a pushbroom
        5.3.2  Geometric characteristics of a pushbroom
   5.4  Some operational Earth observation systems
        5.4.1  Low-resolution systems
        5.4.2  Medium-resolution systems
        5.4.3  High-resolution systems
        5.4.4  Imaging spectrometry, or hyperspectral systems
        5.4.5  Example of a large multi-instrument system
        5.4.6  Future developments

6  Active sensors (C. Pohl, K. Tempfli & G. C. Huurneman)
   6.1  Introduction
   6.2  Radar
        6.2.1  What is radar?
        6.2.2  Principles of imaging radar
        6.2.3  Geometric properties of radar
        6.2.4  Data formats
        6.2.5  Distortions in radar images
        6.2.6  Interpretation of radar images
        6.2.7  Applications of radar
        6.2.8  INSAR
        6.2.9  Differential INSAR
        6.2.10 Application of (D)InSAR
        6.2.11 Supply market
        6.2.12 SAR systems
        6.2.13 Trends
   6.3  Laser scanning
        6.3.1  Basic principle
        6.3.2  ALS components and processes
        6.3.3  System characteristics
        6.3.4  Variants of laser scanning
        6.3.5  Supply market

7  Remote sensing below the ground surface (C. V. Reeves)
   7.1  Introduction
   7.2  Gamma-ray surveys
   7.3  Gravity and magnetic anomaly mapping

8  Radiometric correction

9  Geometric aspects (L. L. F. Janssen, M. J. C. Weir, K. A. Grabmaier & N. Kerle)
   9.1  Introduction
   9.2  Two-dimensional approaches
        9.2.1  Georeferencing
        9.2.2  Geocoding
   9.3  Three-dimensional approaches
        9.3.1  Orientation
        9.3.2  Monoplotting
        9.3.3  Orthoimage production
        9.3.4  Stereo restitution

10 (B. G. H. Gorte & E. M. Schetselaar)
   10.1  Introduction
   10.2  Perception of colour
        10.2.1  Tri-stimuli model
        10.2.2  Colour spaces
   10.3  Visualization of image data
        10.3.1  Histograms
        10.3.2  Single band image display
   10.4  Filter operations
        10.4.1  Noise reduction
        10.4.2  Edge enhancement
   10.5  Colour composites
        10.5.1  Application of RGB and IHS for image fusion

11 (L. L. F. Janssen)

12 (L. L. F. Janssen & B. G. H. Gorte)
   12.1  Introduction
   12.2  Principle of image classification
        12.2.1  Image space
        12.2.2  Feature space
        12.2.3  Image classification
   12.3  Image classification process
        12.3.1  Preparation for image classification
        12.3.2  Supervised image classification
        12.3.3  Unsupervised image classification
        12.3.4  Classification algorithms
   12.4  Validation of the result
   12.5  Problems in image classification

13 Thermal remote sensing (C. A. Hecker & A. S. M. Gieske)
   13.1  Introduction
   13.2  Principles of thermal remote sensing
        13.2.1  The physical laws
        13.2.2  Blackbodies and emissivity
        13.2.3  Radiant and kinetic temperatures
   13.3  Processing of thermal data
        13.3.1  Band ratios and transformations
        13.3.2  Determining kinetic surface temperatures
   13.4  Thermal applications
        13.4.1  Rock emissivity mapping
        13.4.2  Thermal hotspot detection

14 Imaging Spectrometry (F. D. van der Meer, F. J. A. van Ruitenbeek & W. H. Bakker)
   14.1  Introduction
   14.2  Reflection characteristics of rocks and minerals
   14.3  Pre-processing of imaging spectrometer data
   14.4  Atmospheric correction of imaging spectrometer data
   14.5  Thematic analysis of imaging spectrometer data
        14.5.1  Spectral matching algorithms
        14.5.2  Spectral unmixing
   14.6  Applications of imaging spectrometry data
        14.6.1  Geology and resources exploration
        14.6.2  Vegetation sciences
        14.6.3  Hydrology
   14.7  Imaging spectrometer systems
   14.8  Summary

Glossary
List of Figures

2.9   Rayleigh scattering
2.10  Rayleigh scattering affects the colour of the sky
2.11  Effects of clouds in optical remote sensing
2.12  Specular and diffuse reflection
2.13  Reflectance curve of vegetation
2.14  Reflectance curves of soil
2.15  Reflectance curves of water
3.1   Overview of sensors
3.2   Example video image
3.3   Example multispectral image
3.4   TSM derived from imaging spectrometer data
3.5   Example thermal image
3.6   Example microwave radiometer image
3.7   DTM derived by laser scanning
3.8   Example radar image
3.9   Roll, pitch and yaw angles
3.10  Meteorological observation system
3.11  An image file comprises a number of bands
List of Tables

5.1   Meteosat-8/SEVIRI characteristics
5.2   NOAA-17/AVHRR characteristics
5.3   Landsat-7/ETM+ characteristics
5.4   Example applications of Landsat-7/ETM+ bands
5.5   Terra/ASTER characteristics
5.6   SPOT-5/HRG characteristics
5.7   Resourcesat-1/LISS4 characteristics
5.8   Ikonos/OSA characteristics
5.9   EO-1/Hyperion characteristics
5.10  Proba/CHRIS characteristics
5.11  Applications of Envisat's instruments
5.12  Characteristics of Envisat, ASAR and MERIS
5.13  Envisat/MERIS band characteristics
Preface
Principles of Remote Sensing is the basic textbook on remote sensing for all students enrolled in the educational programmes at ITC. As well as being a basic textbook for the institute's regular MSc and PM courses, Principles of Remote Sensing will be used in various short courses and possibly also by ITC's sister institutes. The first edition is an extensively revised version of an earlier text produced for the 1999-2000 programme. Principles of Remote Sensing and the companion volume, Principles of Geographic Information Systems [10], are published in the ITC Educational Textbook series. We need to go back to the 1960s to find a similar official ITC textbook on subjects related to remote sensing: the ITC Textbooks on Photogrammetry and Photo-interpretation, published in English and French [16, 17].
You may wonder why ITC has now produced its own introductory textbook when there are already many books on the subject available on the market. Principles of Remote Sensing differs in various respects. First of all, it has been developed for the specific ITC student population, taking into account their entry level and knowledge of the English language. The textbook relates to the typical ITC application disciplines and, among other things, provides an introduction to techniques for acquiring sub-surface characteristics. As the textbook is
used at the start of the programmes, it tries to stimulate conceptual and abstract thinking by providing and explaining some fundamental, yet simple, equations (in general, no more than one equation per chapter). Principles of Remote Sensing aims to provide a balanced approach to traditional photogrammetric and remote sensing subjects: three sensors (aerial camera, multispectral scanner and radar) are dealt with in more or less the same detail. Finally, compared to other introductory textbooks, which often focus on technique, Principles of Remote Sensing also introduces processes. In this sense, it provides a frame of reference for the more detailed subjects dealt with later in the programme.
Acknowledgements
This textbook is the result of a process to define and develop material for a core curriculum. This process started in 1998 and was carried out by a working group comprising Rolf de By, Michael Weir, Cees van Westen and myself, chaired by Ineke ten Dam and supported by Erica Weijer. This group put much effort into the definition and realization of the earlier version of the two core textbooks. Ineke also supervised the process leading to this result. My fellow working group members are gratefully acknowledged for their support.
This textbook could not have materialized without the efforts of the (co-)authors of the chapters: Wim Bakker, Ben Gorte, John Horn, Christine Pohl, Colin Reeves, Michael Weir and Tsehaie Woldai. Many other colleagues contributed in one way or another to either the earlier version or this version of Principles of Remote Sensing: Paul Hofstee, Gerrit Huurneman, Yousif Hussin, David Rossiter, Rob Soeters, Ernst Schetselaar, Andrew Skidmore, Dhruba Shrestha and Zoltan Vekerdy.
The design and implementation of the textbook layout, of both the hard-copy and electronic document, is the work of Rolf de By. Using the LaTeX typesetting system, Rolf realized a well-structured and visually attractive document to study. Many of the illustrations in the book have been provided by the authors, supported by Job Duim and Gerard Reinink. Final editing of the illustrations was done by Wim Feringa, who also designed the cover.
Michael Weir has done a tremendous job in checking the complete textbook for English spelling and grammar. We know that our students will profit from this.
The work on this textbook was greatly stimulated through close collaboration
with the editor of Principles of Geographic Information Systems, Rolf de By.
Lucas L. F. Janssen, Enschede, September 2000
A new chapter on thermal remote sensing was added (Chapter 13), providing a more in-depth discussion of thermal concepts first introduced in Chapter 2.

Lastly, a chapter was added on the concepts of imaging spectrometry (Chapter 14). As in the new Chapters 8 and 13, the more quantitative side of remote sensing is highlighted here.
These modifications and additions show how fast-paced developments in remote sensing and Earth observation continue to be; a good reason, in my view, to study geo-information science and be part of this exciting field. Some of the chapters and sections in this book are not part of the ITC core modules, but provide information on more detailed or specific concepts and methods. They are marked by a special icon in the margin, depicting a "dangerous bend" road sign.
Lastly, I would like to thank those colleagues who provided material for the additions to the book. In addition to most of the original authors acknowledged above by Lucas Janssen, I thank the following new authors (in alphabetical order): Ambro Gieske, Karl Grabmaier, Chris Hecker, Gerrit Huurneman, Freek van der Meer, Gabriel Parodi, Frank van Ruitenbeek and Klaus Tempfli.

I am indebted to Rolf de By for his help with the LaTeX implementation, and especially to Wim Bakker for providing a careful review of, and valuable additions to, many of the chapters.
Norman Kerle, Enschede, September 2004
Chapter 1
Introduction to remote sensing
[Figure 1.1: From the real world to a spatial database.]

... remote sensing methods, which are based on the use of image data acquired by a sensor, such as an aerial camera, a scanner or a radar. Taking a remote sensing approach means that information is derived from the image data, which form a (limited) representation of the real world (Figure 1.2). Note, however, that sensing devices are increasingly used in the field that can acquire data in a fashion similar to air- or spaceborne sensors. Thus, the strict division between ground-based and remote sensing methods is blurring.

This textbook, Principles of Remote Sensing, provides an overview and some first concepts of the remote sensing process. First, some definitions of remote sensing will be given. In Section 1.2, some of the aspects and considerations involved in taking a remote sensing approach are discussed.
[Figure 1.2: The remote sensing process: real world → sensor → image data → observation and measurement → spatial database.]
[Figure 1.3: The remote sensing approach: real world → RS sensor → image data → analysis → spatial database.]
[Figure 1.4: Sea surface temperature as determined from NOAA-AVHRR data. Courtesy of NOAA.]
1.3 Structure of this textbook

[Figure 1.8: The structure of this textbook, relating chapters to the remote sensing process (sensor, image data, observations and measurements, spatial databases): 2, Electromagnetic energy; 3, Sensors and platforms; 4, Aerial camera; 5, Multispectral scanner; 6, Active sensors; 7, RS below the ground surface; 8, Radiometric correction; 9, Geometric aspects.]
Summary
Many human activities and interests involve some geographic component. For planning, monitoring and decision making, there is typically a need for georeferenced (geospatial) data.

In this introductory chapter the concept of remote sensing was explained. A remote sensing approach is usually complemented by ground-based methods and the use of numerical models. To make an appropriate choice of remote sensing data, the information requirements of a given application first have to be defined. The chapter gave an overview of how remote sensing can obtain different types of information about the ground surface (under some circumstances even below it), can cover large and less accessible areas, and can be considered a cost-effective tool.
Questions
The following questions can help you to study Chapter 1.
1. To what extent are GIS applied by your organization (company)?
2. Which ground-based and which remote sensing methods are used by your
organization (or company) to collect georeferenced data?
3. Remote sensing data and derived data products are available on the internet. Locate three web-based catalogues or archives that comprise remote
sensing image data.
These are typical exam questions:
1. Explain, or give an example, how ground-based and remote sensing methods may complement each other.
2. List three possible limitations of remote sensing data.
Chapter 2
Electromagnetic energy and remote
sensing
2.1 Introduction
Remote sensing relies on the measurement of electromagnetic (EM) energy. EM energy can take several forms. The most important source of EM energy at the Earth's surface is the Sun, which provides us, for example, with (visible) light, heat (that we can feel) and UV light, which can be harmful to our skin.
[Figure 2.1: A remote sensing sensor measures reflected or emitted energy. An active sensor has its own source of energy.]
Many sensors used in remote sensing measure reflected sunlight. Some sensors, however, detect energy emitted by the Earth itself or provide their own energy (Figure 2.1). A basic understanding of EM energy, its characteristics and its interactions is required to understand the principle of the remote sensor. This knowledge is also needed to interpret remote sensing data correctly. For these reasons, this chapter introduces the basic physics of remote sensing.

In Section 2.2, EM energy, its sources, and the different parts of the electromagnetic spectrum are explained. Between the remote sensor and the Earth's
surface is the atmosphere, which influences the energy that travels from the Earth's surface to the sensor. The main interactions between EM waves and the atmosphere are described in Section 2.3. Section 2.4 introduces the interactions that take place at the Earth's surface.
2.2.1 Waves and photons

[Figure 2.2: An electromagnetic wave, showing the wavelength λ along the distance axis, the magnetic field, and the velocity of light c.]

One characteristic of electromagnetic waves is particularly important for understanding remote sensing: the wavelength, λ, defined as the distance between successive wave crests (Figure 2.2). Wavelength is measured in metres (m), nanometres (1 nm = 10⁻⁹ m) or micrometres (1 µm = 10⁻⁶ m). (For an explanation of units and prefixes refer to Appendix 1.)
The frequency, ν, is the number of cycles of a wave passing a fixed point over a specific period of time. Frequency is normally measured in hertz (Hz), which is equivalent to one cycle per second. Wavelength and frequency are related by

c = ν × λ.     (2.1)

In this equation, c is the speed of light (3 × 10⁸ m/s), λ is the wavelength (m), and ν is the frequency (cycles per second, Hz). The shorter the wavelength, the higher the frequency; conversely, the longer the wavelength, the lower the frequency (Figure 2.3).

[Figure 2.3: Short wavelengths correspond to high frequencies and high energy; long wavelengths to low frequencies and low energy.]

The energy of a photon is given by

Q = h × ν,     (2.2)

where Q is the energy of a photon (J), h is Planck's constant (6.6262 × 10⁻³⁴ J s), and ν is the frequency (Hz). From Equation 2.2 it follows that the longer the wavelength, the lower its energy content. Gamma rays (around 10⁻⁹ m) are the most energetic.
2.2.2 Sources of EM energy

[Figure 2.4: Blackbody radiation curves based on the Stefan-Boltzmann law, with temperatures T in K (curves shown include 673 K and 873 K), plotted against wavelength in µm.]
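The behaviour shown in Figure 2.4 follows two standard radiation laws: total emitted power grows with the fourth power of temperature (the Stefan-Boltzmann law), while the wavelength of peak emission shifts inversely with temperature (Wien's displacement law). The sketch below is added for illustration; the constants are standard physical values, not taken from this chapter:

```python
# Illustrative sketch of the radiation laws behind Figure 2.4.
SIGMA = 5.6704e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
WIEN_B = 2898.0     # Wien's displacement constant [micrometre K]

def total_emittance(temp_k):
    """Total radiant emittance of a blackbody [W/m^2]: M = sigma * T^4."""
    return SIGMA * temp_k ** 4

def peak_wavelength_um(temp_k):
    """Wavelength of maximum emission [micrometres]: lambda_max = b / T."""
    return WIEN_B / temp_k

for T in (673.0, 873.0, 6000.0):   # two curves of Figure 2.4, plus the Sun
    print(f"T = {T:6.0f} K: M = {total_emittance(T):.3e} W/m^2, "
          f"peak at {peak_wavelength_um(T):.2f} um")
```

For the 6000 K case the peak falls near 0.48 µm, in the visible part of the spectrum, consistent with the solar curve discussed later in this chapter.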
2.2.3 Electromagnetic spectrum
All matter with a temperature above absolute zero (0 K) radiates electromagnetic waves of various wavelengths. The total range of wavelengths is commonly referred to as the electromagnetic spectrum (Figure 2.5). It extends from gamma rays to radio waves.

Remote sensing operates in several regions of the electromagnetic spectrum. The optical part of the EM spectrum refers to that part of the EM spectrum in which optical phenomena of reflection and refraction can be used to focus the radiation. The optical range extends from X-rays (0.02 µm) through the visible part of the EM spectrum up to and including far-infrared (1000 µm). The ultraviolet (UV) portion of the spectrum has the shortest wavelengths that are of practical use for remote sensing. This radiation is beyond the violet portion of the visible wavelengths. Some of the Earth's surface materials, in particular rocks and minerals, emit or fluoresce visible light when illuminated with UV radiation. The microwave range covers wavelengths from 1 mm to 1 m.

The visible region of the spectrum (Figure 2.5) is commonly called "light". It occupies a relatively small portion of the EM spectrum. It is important to note that this is the only portion of the spectrum that we can associate with the concept of colour. Blue, green and red are known as the primary colours or wavelengths of the visible spectrum. Section 10.2 gives more information on light and the perception of colour.

The longer wavelengths used for remote sensing are in the thermal infrared and microwave regions. Thermal infrared gives information about surface temperature, which can be related, for example, to the mineral composition of rocks or the condition of vegetation. Microwaves can provide information on surface roughness and properties of the surface such as water content.
[Figure 2.5: The electromagnetic spectrum, from gamma rays and X-rays through ultraviolet (UV), visible (blue 0.4-0.5 µm, green 0.5-0.6 µm, red 0.6-0.7 µm), near-infrared, mid-infrared and thermal infrared to microwaves, television and radio waves.]
2.2.4 Active and passive remote sensing

2.3 Energy interaction in the atmosphere

[Figure 2.6: Energy interactions in the atmosphere: incident energy from the Sun is partly absorbed and scattered by the atmosphere and clouds; the sensor receives direct and scattered radiation (reflection processes) as well as atmospheric and thermal emission (emission processes).]
2.3.1 Absorption and transmission

[Figure 2.7: Atmospheric transmission (in %) as a function of wavelength, showing the absorption bands of water vapour (H2O), carbon dioxide (CO2) and ozone (O3).]
[Figure 2.8: Solar extraterrestrial irradiance, a 6000 K blackbody curve, and solar irradiance at the Earth's surface, with the main absorption bands of O3, O2, H2O and CO2 indicated (wavelength in µm).]
2.3.2 Atmospheric scattering
[Figure 2.9: Rayleigh scattering: blue light is scattered over larger angles than red light.]
In the absence of particles and scattering, the sky would appear black. During the day, the Sun's rays travel the shortest distance through the atmosphere; Rayleigh scattering then causes a clear sky to be observed as blue, because blue is the shortest wavelength the human eye can observe. At sunrise and sunset, however, the Sun's rays travel a longer distance through the Earth's atmosphere before they reach the surface. All the shorter wavelengths are scattered after some distance and only the longer wavelengths reach the Earth's surface. As a result, the sky appears orange or red (Figure 2.10).

In the context of satellite remote sensing, Rayleigh scattering is the most important type of scattering. It causes a distortion of the spectral characteristics of the reflected light when compared to measurements taken on the ground: due to the Rayleigh effect, the shorter wavelengths are overestimated. In colour photos taken from high altitudes it accounts for the blueness of these pictures.
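The strength of Rayleigh scattering is inversely proportional to the fourth power of the wavelength, which is why blue light is scattered so much more strongly than red. A minimal illustrative computation (the λ⁻⁴ proportionality is standard scattering theory and the wavelengths are example values; neither is given in the surviving text):

```python
# Rayleigh scattering strength is proportional to 1 / lambda**4.
# Example wavelengths: blue ~0.45 um, red ~0.70 um (illustrative values).
blue, red = 0.45, 0.70   # micrometres

ratio = (red / blue) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red.")
# -> roughly 5.9x, which is why a clear daytime sky looks blue.
```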
[Figure 2.10: Rayleigh scattering causes us to perceive a blue sky during daytime and a red sky at sunset.]
[Figure 2.11: Effects of clouds in optical remote sensing: a cloud blocks direct penetration (zone of no penetration), casts a shadow zone, and adds its own radiance contribution.]
2.4 Energy interactions with the Earth's surface

2.4.1 Spectral reflectance curves
[Figure 2.13: Reflectance curve of vegetation (0.4 to 2.6 µm), showing chlorophyll absorption in the blue and red, the leaf reflectance curve, and the region in which colour-IR film is sensitive.]
[Figure 2.14: Reflectance curves of five different soils, labelled a to e (reflectance in % against wavelength, 400 to 2400 nm).]
[Figure 2.15: Reflectance curve of water (reflectance in % against wavelength, 400 to 2400 nm).]
Summary
Remote sensing is based on the measurement of electromagnetic (EM) energy. EM energy propagates through space in the form of sine waves characterized by electric (E) and magnetic (M) fields, which are perpendicular to each other. EM energy can be modelled either by waves or by energy-bearing particles called photons. One property of EM waves that is particularly important for understanding remote sensing is the wavelength (λ), defined as the distance between successive wave crests and measured in metres (m), micrometres (µm, 10⁻⁶ m) or nanometres (nm, 10⁻⁹ m). The frequency is the number of cycles of a wave passing a fixed point in a specific period of time and is measured in hertz (Hz). Since the speed of light is constant, wavelength and frequency are inversely related: the shorter the wavelength, the higher the frequency, and vice versa.

All matter with a temperature above absolute zero (0 K) radiates EM energy due to molecular agitation. Matter that is capable of absorbing and re-emitting all EM energy received is known as a blackbody. All matter with a certain temperature radiates electromagnetic waves of various wavelengths, depending on its temperature. The total range of wavelengths is commonly referred to as the electromagnetic spectrum; it extends from gamma rays to radio waves. The amount of energy detected by a remote sensing system is a function of the interactions on the way to the object, the object itself, and the interactions on the way back to the sensor.

The interactions of the Sun's energy with physical materials, both in the atmosphere and at the Earth's surface, cause this energy to be reflected, absorbed, transmitted or scattered. Electromagnetic energy travelling through the atmosphere is partly absorbed by molecules. The most efficient absorbers of solar radiation in the atmosphere are ozone (O3), water vapour (H2O) and carbon dioxide (CO2).
Questions
The following questions can help you to study Chapter 2.
1. What are advantages/disadvantages of aerial RS compared to spaceborne
RS in terms of atmospheric disturbance?
2. How important are laboratory spectra for understanding remote sensing images?
These are typical exam questions:
1. List and describe the two models used to describe electromagnetic energy.
2. How are wavelength and frequency related to each other (give a formula)?
5. What specific energy interactions take place when EM energy from the Sun hits the Earth's surface?
6. In your own words give a definition of an atmospheric window.
7. Indicate True or False: Only the wavelength region outside the main absorption bands of the atmospheric gases can be used for remote sensing.
8. Indicate True or False: The amount of energy detected by a remote sensing
sensor is a function of how energy is partitioned between its source and
the materials with which it interacts on its way to the detector.
Chapter 3
Sensors and platforms
3.1 Introduction
In Chapter 2, the underlying principle of remote sensing was explained. Depending on the surface characteristics, electromagnetic energy from the Sun or from an active sensor is reflected, or energy may be emitted by the Earth itself. This energy is measured and recorded by sensors. The resulting data can be used to derive information about surface characteristics.

The measurements of electromagnetic energy are made by sensors that are attached to a static or moving platform. Different types of sensors have been developed for different applications (Section 3.2). Aircraft and satellites are generally used to carry one or more sensors (Section 3.3). General references with respect to missions and sensors are [18, 21]. ITC's online Database of Satellites and Sensors provides a complete and up-to-date overview.

The sensor-platform combination determines the characteristics of the resulting data. For example, when a particular sensor is operated from a higher altitude, the total area imaged is increased, while the level of detail that can be observed is reduced (Section 3.4). Based on your information needs and on time and budgetary criteria, you can determine which image data are most appropriate (Section 3.5).
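This altitude-versus-detail trade-off can be made concrete with the simple scale relation of an idealized frame sensor, where the ground pixel size equals flying height × detector pitch / focal length. The sketch below is an added illustration with hypothetical sensor parameters, not a formula from this chapter:

```python
# Idealized pinhole relation between flying height and ground detail.
# The sensor parameters below are hypothetical, chosen only for illustration.
PITCH_M = 10e-6   # detector element size [m]
FOCAL_M = 0.150   # focal length [m]

def ground_pixel_size(height_m):
    """Ground sample distance [m] at a given flying height [m]."""
    return height_m * PITCH_M / FOCAL_M

for h in (1000, 10000, 700000):   # low aircraft, high aircraft, satellite orbit
    print(f"height {h:>7} m -> ground pixel ~ {ground_pixel_size(h):8.2f} m")
```

With fixed optics, raising the platform enlarges both the area covered and the ground pixel, which is the trade-off described above.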
3.2 Sensors
A sensor is a device that measures and records electromagnetic energy. Sensors can be divided into two groups.

Passive sensors (Section 3.2.1) depend on an external source of energy, usually the Sun, and sometimes the Earth itself. Current operational passive sensors cover the electromagnetic spectrum in the wavelength range from less than 1 picometre (gamma rays) to more than 1 metre (microwaves). The oldest and most common type of passive sensor is the photographic camera.

Active sensors (Section 3.2.2) have their own source of energy. Measurements by active sensors are more controlled because they do not depend upon varying illumination conditions. Active sensing methods include radar (radio detection and ranging), lidar (light detection and ranging) and sonar (sound navigation ranging), all of which may be used for altimetry as well as imaging.

Figure 3.1 gives an overview of the types of sensors that are introduced in this section. The camera, the multispectral scanner and the imaging radar are explained in more detail in Chapters 4, 5 and 6, respectively. Procedures used for the processing of data acquired with imaging spectrometers and thermal scanners are introduced in Chapter 13. For more information about spaceborne remote sensing you may refer to ITC's Database of Satellites and Sensors.
[Figure 3.1: Overview of sensors, arranged by wavelength from the visible to the microwave domain. Passive sensors: gamma-ray spectrometer; aerial camera and video camera (visible domain); multispectral scanner and imaging spectrometer (optical domain); thermal scanner; passive microwave radiometer. Active sensors: laser scanner; imaging radar; radar altimeter.]
3.2.1 Passive sensors
Gamma-ray spectrometer

The gamma-ray spectrometer measures the amount of gamma rays emitted by the upper soil or rock layers due to radioactive decay. The energy measured in specific wavelength bands provides information on the abundance of (radio-isotopes that relate to) specific minerals; the main application is therefore found in mineral exploration. Gamma rays have a very short wavelength, on the order of picometres (pm). Because of strong atmospheric absorption of these waves, this type of energy can only be measured up to a few hundred metres above the Earth's surface. Example data acquired by this sensor are given in Figure 7.1. Gamma-ray surveys are treated in more detail in Section 7.2.
Aerial camera

The (digital) camera system, with lens and film (or CCD), is mostly operated from aircraft for aerial photography. Low-orbiting satellites and NASA Space Shuttle missions also apply conventional camera techniques. The film types used in the camera enable electromagnetic energy in the range between 400 nm and 900 nm to be recorded. Aerial photographs are used in a wide range of applications. The rigid and regular geometry of aerial photographs, in combination with the possibility of acquiring stereo photography, has enabled the development of photogrammetric procedures for obtaining precise 3D coordinates (Chapter 9). Although aerial photos are used in many applications, principal applications include medium and large scale (topographic) mapping and cadastral mapping. Today, analogue photos are often scanned to be stored in and processed by digital systems. Various examples of aerial photos are shown in Chapter 4. A recent development is the use of digital cameras, which bypass the use of film and directly deliver digital image data (Section 4.8).
Video camera

Video cameras are frequently used to record image data. Most video sensors are only sensitive to the visible part of the spectrum, although a few are able to record the near-infrared part (Figure 3.2). A recent development is the use of thermal infrared video cameras. Until recently, only analogue video cameras were available. Today, digital video cameras are increasingly available, some of which are applied in remote sensing. Mostly, video images serve to provide low-cost image data for qualitative purposes, for example to provide additional visual information about an area captured with another sensor (e.g. a laser scanner or radar). Most image processing and information extraction methods useful for individual images can be applied to video frames.
[Figure 3.2: Analogue false colour video image of De Lopikerwaard, the Netherlands. Courtesy of Syntoptics.]
[Figure 3.3: Landsat-5 Thematic Mapper image of Yemen, 1995. False colour composite of TM bands 4, 5 and 7 shown in red, green and blue, respectively. The image covers an area of 30 km by 17 km. The meaning of the colours, and ways to interpret such imagery, are provided in Chapter 10.]
Multispectral scanner

An instrument is a measuring device for determining the present value of a quantity under observation. A scanner is an instrument that obtains observations in a point-by-point and line-by-line manner. In this way, a scanner fundamentally differs from an aerial camera, which records an entire image in a single exposure.

The multispectral scanner is an instrument that measures the reflected sunlight in the visible and infrared parts of the spectrum. The sensor systematically scans the Earth's surface, measuring the energy reflected by the viewed area. This is done simultaneously for several wavelength bands, hence the name multispectral scanner. A wavelength band, or spectral band, is an interval of the electromagnetic spectrum for which the average reflected energy is measured. Typically, a number of distinct wavelength bands are recorded, because these bands are related to specific characteristics of the Earth's surface.

For example, reflection characteristics in the range of 2 µm to 2.5 µm (for instance, Landsat TM band 7) may give information about the mineral composition of the soil, whereas the combined reflection characteristics of the red and near-infrared bands may tell something about vegetation, such as biomass and health.

The definition of the wavebands of a scanner therefore depends on the applications for which the sensor has been designed. An example of multispectral data for geological applications is given in Figure 3.3. Methods to interpret such imagery are introduced in Chapter 10.
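As an illustration of how such band combinations are used in practice, the sketch below computes the Normalized Difference Vegetation Index (NDVI), a widely used combination of the red and near-infrared bands. NDVI is not defined in this chapter; it is added here as an example, with made-up reflectance values:

```python
import numpy as np

# Normalized Difference Vegetation Index (NDVI), a common red/near-infrared
# band combination; the reflectance values below are made-up examples.
def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red); healthy vegetation gives high values."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)   # small term avoids division by zero

red_band = np.array([[0.05, 0.30], [0.08, 0.25]])   # red reflectance per pixel
nir_band = np.array([[0.50, 0.35], [0.45, 0.28]])   # near-infrared reflectance
print(ndvi(nir_band, red_band))   # vegetated pixels approach 1, bare soil stays low
```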
[Figure 3.4: Total Suspended Matter (TSM) concentration of the North Sea derived from the SeaWiFS sensor onboard the OrbView-2 satellite (two-band MIM, atmospheric correction with Modtran). The image roughly covers an area of 600 km by 500 km. Courtesy of CCZM, Rijkswaterstaat.]
Imaging spectrometer or hyperspectral imager

The principle of the imaging spectrometer is similar to that of the multispectral scanner, except that spectrometers measure many (64 to 256) very narrow (5 nm to 10 nm) spectral bands. This results in an almost continuous reflectance curve per pixel, rather than the limited number of values for the relatively broad spectral bands of the multispectral scanner. The spectral curves depend on the chemical composition and microscopic structure of the measured material. Imaging spectrometer data can therefore be used, for instance, to determine the mineral composition of the Earth's surface, the chlorophyll content of surface water, or the total suspended matter concentration of surface water (Figure 3.4).
Thermal scanner

Thermal scanners measure emitted radiation in the range of 8 µm to 14 µm. Wavelengths in this range are directly related to an object's temperature. For instance, data on cloud, land and sea surface temperature are indispensable for weather forecasting; for this reason, most remote sensing systems designed for meteorology include a thermal scanner. Thermal scanners can also be used to study the effects of drought on agricultural crops (water stress), and to monitor the temperature of cooling water discharged from thermal power plants. Another application is in the detection of underground coal fires (Figure 3.5).
[Figure 3.5: Night-time airborne thermal scanner image of a coal mining area affected by underground coal fires. Darker tones represent colder surfaces; lighter tones represent warmer areas. Most of the warm spots are due to coal fires, except for the large white patch, which is a lake: at that time of night, the temperature of the water is higher than the temperature of the land. Scene is approximately 4 km across.]
Microwave radiometer

Long-wavelength EM energy (1 cm to 100 cm) is emitted by objects on, or just below, the Earth's surface. Every object with a temperature above absolute zero (0 K) emits radiation, called blackbody radiation (Section 2.2.2). Natural materials emit somewhat less radiation than the ideal blackbody, which is expressed by an emissivity smaller than 1. A microwave radiometer records this emitted radiation of objects. The depth from which this emitted energy can be recorded depends on the properties of the specific material, such as its water content. The recorded signal is called the brightness temperature. The physical surface temperature can be calculated from the brightness temperature, but then the emissivity must be known (see Section 13.2.2). With an emissivity of 98% to 99%, water behaves almost like a blackbody, while land features may show varying emissivities. Furthermore, the emissivity of materials may vary with changing conditions; for instance, a wet soil may have a considerably higher emissivity than a dry soil. Because blackbody radiation is weak, the energy must be measured over relatively large areas, and consequently passive microwave radiometers are characterized by a low spatial resolution. Passive microwave radiometer data can be used in mineral exploration, soil mapping, soil moisture estimation (Figure 3.6), and snow and ice detection.
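In the microwave region, the brightness temperature is, to a good approximation, the product of the emissivity and the physical temperature (the Rayleigh-Jeans approximation). The sketch below illustrates this relation; the approximation and the dry-soil emissivity value are added here for illustration and are not given in this chapter:

```python
# Rayleigh-Jeans approximation for the microwave region:
# brightness temperature = emissivity * physical temperature.
def physical_temperature(t_brightness_k, emissivity):
    """Recover the physical temperature [K] from the brightness temperature [K]."""
    return t_brightness_k / emissivity

# Water behaves almost like a blackbody (emissivity ~0.98, as stated above);
# the dry-soil value is a rough illustrative figure.
for surface, eps, tb in (("water", 0.98, 285.0), ("dry soil", 0.90, 270.0)):
    print(f"{surface}: T_b = {tb} K, emissivity = {eps} -> "
          f"T_phys = {physical_temperature(tb, eps):.1f} K")
```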
3.2.2 Active sensors
Laser scanner

A very interesting active sensor system, similar in some respects to radar, is lidar (light detection and ranging). A lidar transmits coherent laser light, at a certain visible or near-infrared wavelength, as a series of pulses (thousands per second) to the surface, from which some of the light reflects. The travel time of the round trip and the returned intensity of the reflected pulses are the measured parameters. Lidar instruments can be operated as profilers and as scanners, on airborne and spaceborne platforms, day and night. Lidar can serve either as a ranging device to determine altitudes and measure speeds, or as a particle analyser for air. Light penetrates certain targets, which makes it possible to use lidar for assessing tree height (biomass) and canopy conditions, or for measuring the depths of shallow waters such as tidal flats.

Laser scanners are typically mounted on aircraft or helicopters and use a laser beam to measure the distance from the sensor to points located on the ground. This distance measurement is then combined with exact information on the sensor's position, obtained from a satellite positioning system and an inertial navigation system (INS), to calculate the terrain elevation. Laser scanning produces detailed, high-resolution Digital Terrain Models (DTMs) for topographic mapping (Figure 3.7). Laser scanning can also be used for the production of detailed 3D models of city buildings. Portable ground-based laser scanners can be used for oblique and transverse measurements.
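The core measurement is simple two-way ranging: the distance to the ground is half the round-trip travel time multiplied by the speed of light, and subtracting this range from the platform altitude gives the terrain elevation for a nadir-looking pulse. A minimal sketch, added for illustration (real systems combine this with GPS/INS data, as described above):

```python
# Two-way laser ranging: range = c * t / 2.
C = 3.0e8   # speed of light [m/s]

def lidar_range(round_trip_s):
    """Sensor-to-ground distance [m] from a pulse's round-trip travel time [s]."""
    return C * round_trip_s / 2.0

def terrain_elevation(sensor_altitude_m, round_trip_s):
    """Elevation [m] of a nadir-looking return, given the sensor altitude."""
    return sensor_altitude_m - lidar_range(round_trip_s)

# A pulse returning after 6.0 microseconds from a platform at 1000 m altitude:
t = 6.0e-6
print(f"range = {lidar_range(t):.1f} m, "
      f"elevation = {terrain_elevation(1000.0, t):.1f} m")
```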
Imaging radar

Radar (radio detection and ranging) instruments operate in the 1 cm to 100 cm wavelength range. Different wavelength bands are related to particular characteristics of the Earth's surface. The radar backscatter (Figure 3.8) is influenced by the emitted signal and the characteristics of the illuminated surface (Chapter 6). Since radar is an active sensor system and the applied wavelengths are able to penetrate clouds, it can acquire images day and night and under all weather conditions, although the images may be somewhat affected by heavy rainfall.

The combination of two stereo radar images of the same area can provide information about terrain heights (radargrammetry). Similarly, SAR interferometry (INSAR) combines two radar images acquired at almost the same locations. These images are acquired either at different moments or at the same moment using two systems on either end of a long boom, and can be used to assess changes in height or vertical deformations with great precision (5 cm or better). Such vertical motions may be caused by oil and gas exploitation (land subsidence), or by crustal deformation related to earthquakes.
Radar altimeter
Radar altimeters are used to measure the topographic profile parallel to the satellite orbit. They provide profiles, i.e. single lines of measurements, rather than
image data. Radar altimeters operate in the 1 cm to 6 cm wavelength range and
are able to determine height with a precision of 2 cm to 5 cm. Radar altimeters
are useful for measuring relatively smooth surfaces such as oceans and for small
scale mapping of continental terrain models. Sample results of radar altimeter
measurements are given in Figure 1.6 and Figure 7.2.
Bathymetry and Side Scan Sonar
Sonar stands for sound navigation ranging. It is a process used to map sea floor
topography or to observe obstacles underwater. It works by emitting a small
burst of sound from a ship. The sound is reflected off the bottom of the body of
water. The time that it takes for the reflected pulse to be received corresponds to
the depth of the water. More advanced systems also record the intensity of the
return signal, thus giving information about the material on the sea floor.
In its simplest form, the sonar looks straight down, and is operated very
much like a radar altimeter. The body of water will be traversed in paths like
a grid, and not every point below the surface will be monitored. The distance
between data points depends on the ships speed, the frequency the measurements, and the distance between the adjacent paths.
One of the most accurate systems for imaging large areas of the ocean floor is the side scan sonar. This is a towed system that is normally moved in a straight line. Somewhat similar to side looking airborne radar (SLAR), side scan sonar transmits a specially shaped acoustic beam perpendicular to the ship's path, out to the left and right side. This beam propagates through the water and across the seabed. The roughness of the ocean floor, and any objects lying upon it, reflect some of the incident sound energy back in the direction of the sonar. The sonar is sensitive enough to receive these reflections, amplify them, and send them to a sonar data processor and display. Images produced by side scan sonar systems are highly accurate and can be used to delineate even very small (< 1 cm) objects.
The shape of the beam is crucial to the formation of the final image. Typically, the acoustic beam of a side scan sonar is very narrow in the horizontal dimension (about 0.1 degree) and much wider (40 to 60 degrees) in the vertical dimension.
Using sonar data, contour maps of the bottom of a body of water can be made. Maps that show the contours under bodies of water, derived from depth soundings, are called bathymetric maps. They are analogous to the topographic maps made to show the contours of terrestrial areas.
3.3 Platforms
A platform is a vehicle, such as a satellite or aircraft, used for a particular activity or purpose, or to carry specific kinds of equipment or instruments.
Sensors used in remote sensing can be carried at heights ranging from just
a few centimeters, using field equipment, up to orbits in space as far away as
36,000 km (geostationary orbits) and beyond. Very often the sensor is mounted on a moving vehicle, such as an aircraft or satellite, which we call the platform. Occasionally, static platforms are used. For example, by using a multispectral sensor mounted on a pole, the changing reflectance characteristics of a specific crop during the day or season can be assessed.
Airborne observations are carried out using aircraft with specific modifications to carry sensors. An aircraft needs a hole in the floor, or a special remote sensing pod, for the aerial camera or scanner. Sometimes Ultra Light Vehicles (ULVs), balloons, helicopters, airships or kites are used for airborne remote sensing. Depending on the platform and sensor, airborne observations are possible at altitudes ranging from less than 100 m up to 40 km.
The navigation of an aircraft is one of the most crucial parts of airborne
remote sensing. The availability of satellite navigation technology has significantly improved the quality of flight execution as well as the positional accuracy
of the processed data. A recent development is the use of Unmanned Aerial
Vehicles (UAVs) for remote sensing.
For spaceborne remote sensing, satellites and space stations are used. Satellites
are launched into space with rockets. Satellites for Earth Observation are typically
positioned in orbits between 150 km and 36,000 km altitude. The choice of the
specific orbit depends on the objectives of the mission, e.g. continuous observation of large areas or detailed observation of smaller areas.
A recent development is the use of relatively small satellites with a low mass of 1 kg to 100 kg (mini-, micro- and nanosatellites), which can be developed and launched at relatively low cost. These satellites are so small that they can even be put into orbit by solid rocket boosters launched from aircraft flying at 1000 km/h at 12 km altitude.
3.3.1 Airborne remote sensing
Airborne remote sensing is carried out using different types of aircraft, depending on the operational requirements and the available budget. The speed of the aircraft can vary between 150 km/h and 750 km/h and must be carefully chosen in relation to the mounted sensor system. The selected altitude influences the scale and the resolution of the recorded images. Apart from the altitude, the aircraft's orientation also affects the geometric characteristics of the remote sensing data acquired. The orientation of the aircraft is influenced by wind conditions and can be corrected for, to some extent, by the pilot. The orientation can be expressed by three rotation angles relative to a reference path, namely roll, pitch and yaw (Figure 3.9). A satellite positioning system and an Inertial Navigation System (INS) can be installed in the aircraft to measure its position and the three rotation angles at regular intervals. Subsequently, these measurements can be used to correct the sensor data for geometric distortions resulting from altitude and orientation errors.
(Figure 3.9: the roll, pitch and yaw rotation angles of an aircraft.)
Today, most aircraft are equipped with standard satellite navigation technology, which yields positional accuracies better than 10 m to 20 m (horizontal) or
20 m to 30 m (horizontal plus vertical). More precise positioning and navigation
(1 m to 5 m) is possible using a technique called differential correction, which involves the use of a second satellite receiver. This second system, called the base
station, is located at a fixed and precisely known position. Even better positional
accuracies (1 cm) can be achieved using more advanced equipment. In this textbook we refer to satellite navigation in general, which comprises the American
GPS system, the Russian Glonass system and the forthcoming European Galileo
system. Refer to the Principles of Geographic Information Systems textbook for an
introduction to GPS.
In aerial photography (Chapter 4) the images are recorded on hard-copy material (film, Section 4.3) or, in the case of a digital camera, recorded digitally. For digital sensors, e.g. a multispectral scanner, the data can be stored on tape or other mass storage devices, or transmitted directly to a receiving station.
Owning, operating and maintaining survey aircraft, as well as employing a professional flight crew, is an expensive undertaking. In the past, survey aircraft were owned mainly by large national survey organizations that required large amounts of photography. There is an increasing trend towards contracting specialized private aerial survey companies. Still, this requires a thorough understanding of the process involved.
3.3.2 Spaceborne remote sensing
Spaceborne remote sensing is carried out using sensors that are mounted on satellites, space vehicles and space stations. The monitoring capabilities of the sensor are to a large extent determined by the parameters of the satellite's orbit. In general, an orbit is a circular path described by the satellite in its revolution about the Earth. Different types of orbits are required to achieve continuous monitoring (meteorology), global mapping (land cover mapping), or selective imaging (urban areas). For remote sensing purposes, the following orbit characteristics are relevant:
Orbital altitude, which is the distance (in km) from the satellite to the surface
of the Earth. Typically, remote sensing satellites orbit either at 150 km to
1000 km (low-earth orbit, or LEO) or at 36,000 km (geostationary orbit or
GEO) distance from the Earth's surface. This altitude influences to a large
extent the area that can be viewed (coverage) and the details that can be
observed (resolution).
Orbital inclination angle, which is the angle (in degrees) between the orbital plane and the equatorial plane. The inclination angle of the orbit determines, together with the field of view of the sensor, the latitudes up to which the Earth can be observed. If the inclination is 60°, then the satellite flies over the Earth between the latitudes 60° north and 60° south. A satellite in a low-earth orbit with an inclination of 60° cannot observe parts of the Earth at latitudes above 60° north or below 60° south, which means it cannot be used for observations of the polar regions of the Earth.
Orbital period, which is the time (in minutes) required to complete one full
orbit. The orbital period and the mean distance to the centre of the Earth are interrelated (Kepler's third law). For instance, if a polar satellite orbits at 806 km mean altitude, then it has an orbital period of 101 minutes and a ground speed of 23,700 km/h, i.e. 6.5 km/s. Compare this figure with the speed of an aircraft of around 400 km/h: the satellite is roughly 60 times faster. The speed of the platform has implications for the type of images that can be acquired (exposure time). (A computational sketch of this altitude-period relation is given after this list.)
Repeat cycle, which is the time (in days) between two successive identical orbits. The revisit time, the time between two subsequent images of the same area, is determined by the repeat cycle together with the pointing capability of the sensor. Pointing capability refers to the ability of the sensor-platform system to look to the side, or fore and aft. The sensors mounted on SPOT, IRS and Ikonos (Section 5.4) have such a capability.
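As mentioned under orbital period, altitude, period and ground speed are interrelated. The sketch below reproduces the 806 km example using Kepler's third law; the Earth's gravitational parameter and mean radius are standard physical values, not figures from this text.

import math

MU = 3.986004e14        # Earth's gravitational parameter in m^3/s^2 (standard value)
R_EARTH = 6_371_000.0   # mean Earth radius in m (standard value)

def orbital_period_s(altitude_m):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / MU)

T = orbital_period_s(806_000.0)              # ~6050 s, i.e. ~101 minutes
ground_speed = 2.0 * math.pi * R_EARTH / T   # speed of the sub-satellite point
print(f"period = {T / 60:.0f} min, ground speed = {ground_speed / 1000:.1f} km/s")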
The following orbit types are most common for remote sensing missions:
Polar orbit. An orbit with an inclination angle between 80° and 100°. An inclination larger than 90° means that the satellite's motion is in westward direction. (Launching a satellite in eastward direction requires less energy, because of the eastward rotation of the Earth.) Such a polar orbit enables observation of the whole globe, also near the poles. The satellite is typically placed in orbit at 600 km to 1000 km altitude.
Sun-synchronous orbit. This is a near-polar orbit chosen in such a way that the satellite always passes overhead at the same local solar time. To achieve this, the inclination angle must be carefully chosen, between 98° and 99°. Most sun-synchronous orbits cross the equator at mid-morning, at around 10:30 hour local solar time. At that moment the Sun angle is low and the
resultant shadows reveal terrain relief. In addition to day-time images, a
sun-synchronous orbit also allows the satellite to record night-time images
(thermal or radar) during the ascending phase of the orbit at the dark side
of the Earth. Examples of polar orbiting, sun-synchronous satellites are
Landsat, SPOT and IRS.
Geostationary orbit. This refers to orbits in which the satellite is placed above the equator (inclination angle: 0°) at an altitude of approximately 36,000 km. At this distance, the orbital period of the satellite is equal to the rotational period of the Earth: exactly one sidereal day. The result is that the satellite is at a fixed position relative to the Earth. Geostationary orbits are used for meteorological and telecommunication satellites.
Figure 3.10: Meteorological observation system comprised of geostationary and polar satellites.
Today's meteorological weather satellite systems use a combination of geostationary satellites and polar orbiters. The geostationary satellites offer a continuous hemispherical view of almost half the Earth (45%), while the polar orbiters offer a higher spatial resolution (Figure 3.10).
Other orbits. For a system of two massive bodies there are five points at which the gravitational pulls of the two bodies are in equilibrium. These points are called the Lagrangian, or libration, points, L1 to L5. At these points, a satellite can be positioned at zero velocity with respect to both bodies. The L1 point of the Sun-Earth system, located between the Earth and the Sun at about 1.5 million kilometres from the Earth, is already in use by a number of solar observation satellites. The Triana Earth observation satellite was developed to be put in this L1 point, but the satellite was never launched. In the future, the Lagrangian points of the Earth-Moon system may also be used for Earth observation.
The data of spaceborne sensors need to be sent to the ground for further analysis and processing. Some older spaceborne systems used film cartridges that fell back to a designated area on Earth. Today, practically all Earth Observation satellites use satellite communication technology to downlink the data. The acquired data are sent directly to a receiving station, or to a (geostationary) communication satellite that transmits the data to receiving stations on the ground. If the satellite is outside the range of a receiving station, another option is to store the data temporarily on a tape recorder in the satellite and transmit them later. One current trend is the development of small receiving units, consisting of a small dish with a PC, for local reception of image data.
(Figure: the structure of multi-band image data: rows and columns of pixels, where a single pixel holds one DN-value per band.)
Summary
This chapter has provided an introduction to the sensors and platforms used for remote sensing observations. Aircraft and satellites are the main platforms used in remote sensing; both types have their own specific characteristics. Two main categories of sensors are distinguished: passive and active. Passive sensors depend on an external source of energy, such as the Sun, while active sensors have their own source of energy. A sensor carries out measurements of reflected or emitted (EM) energy. The energy measured in specific wavelength bands is related to the Earth's surface characteristics. The measurements are stored as pixels in image data. The characteristics of image data are related to the characteristics of the sensor-platform system (spatial, spectral, radiometric and temporal). Depending on the spatio-spectral-temporal phenomena of interest, the most appropriate remote sensing data can be determined. To a large extent, data availability and costs may determine which remote sensing data are used.
Questions
The following questions can help you to study Chapter 3.
1. Think of an application, define the spatio-spectral-temporal characteristics
of interest and determine the type of remote sensing image data required.
2. Which types of sensors are used in your discipline or field-of-interest?
3. Which aspects need to be considered to assess whether the statement 'RS data acquisition is a cost-effective method' is true?
The following are sample exam questions:
1. Explain the sensor-platform concept.
2. Mention two types of passive and two types of active sensor.
3. What is a typical application of a multispectral satellite image, and what is
a typical application of a very high spatial resolution satellite image?
4. Describe two differences between aircraft and satellite remote sensing and
their implications for the data acquired.
5. Which two types of satellite orbits are mainly used for Earth observation?
6. List and describe four characteristics of image data.
Chapter 4
Aerial cameras
4.1 Introduction
Aerial photography has been used since the early 20th century to provide spatial data for a wide range of applications. It is the oldest, yet still the most commonly applied, remote sensing technique. The science and technique of making measurements from photos or image data is called photogrammetry. Nowadays, almost all topographic maps are based on aerial photographs. Aerial photographs also provide the accurate data required for many cadastral surveys and civil engineering projects. Aerial photography is a useful source of information for specialists such as foresters, geologists and urban planners. General references for aerial photography are [23, 24].
The aerial camera, mounted in an aircraft and using a lens to record data onto photographic film, is therefore by far the longest-serving sensor-platform system used in remote sensing. Although usually mounted in aircraft, conventional photographic cameras are also carried by some low orbiting satellites and on the NASA Space Shuttle missions. Also digital cameras (using charge-
coupled devices [CCDs] instead of film as sensor) are used nowadays.
Two broad categories of aerial photography can be distinguished: vertical and oblique photography (Figure 4.1). In most mapping applications, vertical aerial photography is required. Vertical aerial photography is produced with a camera mounted in the floor of an aircraft. The resulting image is rather similar to a map and has a scale that is approximately constant throughout the image area. Usually, vertical aerial photography is acquired in stereo, in which successive photos have a degree of overlap to enable stereo-interpretation and stereo-measurements (see Section 4.7).
Oblique photographs are obtained when the axis of the camera is not vertical. They can also be made using a hand-held camera, shooting through the (open) window of an aircraft. The scale of an oblique photo varies from the foreground to the background. This scale variation complicates the measurement of positions from the image and, for this reason, oblique photographs are rarely used for mapping purposes. Nevertheless, oblique images can be useful for purposes such as viewing the sides of buildings.
This chapter focusses on the camera, films and methods used for vertical aerial photography. First of all, Section 4.2 introduces the aerial camera and its main components. Photography is based on the exposure of a film, followed by processing and printing. The type of film applied largely determines the spectral and radiometric characteristics of the image products (Section 4.3). Section 4.4 discusses the use of CCDs in digital cameras. Section 4.5 focusses on the geometric characteristics of aerial photography. In Section 4.7 some aspects of aerial photography missions are introduced, and in Section 4.8 some recent technological developments are discussed.
4.2.1 Lens cone
Perhaps the most important (and most expensive) single component within the camera is the lens cone. It is interchangeable, and manufacturers produce a range of cones, each with a different focal length. Focal length is the most important property of a lens cone since, together with the flying height, it determines the photo scale (Section 4.5.1). The focal length also determines the angle of view of the camera: the longer the focal length, the narrower the angle of view. Lenses are usually available in the following standard focal lengths, ranging from narrow angle (610 mm) and normal angle (305 mm) to wide angle (210 mm, 152 mm) and super-wide angle (88 mm). The 152 mm lens is the most commonly used.
(Figure: components of the lens cone: the frame in the focal plane, the optical axis, the lens system with diaphragm and shutter, and a coloured + A.V. filter.)
The lens cone is responsible for projecting an optical image onto the film. In an ideal lens, all rays passing through the lens can be thought of as going through one central point; hence, the projection is called a central projection. The accuracy of this projection depends on the quality of the actual lens. Even with high-quality lenses some distortions still take place, although these are imperceptible to the naked eye.
4.2.2 Film magazine and auxiliary data
The aerial camera is fitted with a system to record various items of relevant information onto the side of the negative: mission identifier, date and time, flying
height and the frame number (Figure 4.4).
(Figure 4.4: auxiliary data recorded next to the image: altimeter, watch, message pad, fiducial marks, frame number and spirit level.)
A vacuum plate is used for flattening the film at the instant of exposure. So-called fiducial marks are recorded in all corners and/or on the sides of the film. The fiducial marks are required to determine the optical centre of the photo, which is needed to align photos for stereoviewing. The fiducials are also used to record the precise position of the film in relation to the optical system, which is required in precise photogrammetric processing.
4.2.3 Camera mounting
The camera mounting enables the camera to be levelled by the operator. This is usually done at the start of a mission, once the aircraft is in a stable configuration. Formal specifications for aerial survey photography usually require that the majority of images are maintained within a few degrees (less than 3°) of true vertical. Larger errors cause difficulties in photogrammetric processing (Section 9.3).
The levelling of the camera can also be done automatically and precisely by a stabilized platform, but this is quite expensive.
4.3.1 General sensitivity
4.3.2 Spectral sensitivity
Sensitization techniques are used not only to increase the general sensitivity but also to produce films that are sensitive to longer wavelengths. By adding sensitizing dyes to the basic silver halide emulsion, the energy of longer light wavelengths becomes sufficient to produce latent images. In this way, a monochrome film can be made sensitive to green, red or infrared wavelengths (Section 3.3.2).
A black-and-white (monochrome) type of film has one emulsion layer. Using sensitization techniques, different types of monochrome film are available. Most common are panchromatic and infrared-sensitive films. The sensitivity curves of these films are shown in Figure 4.5.
Figure 4.5: Spectral sensitivity curves of a panchromatic film (a) and a black/white infrared film (b). Note the difference in scaling on the x-axis.
4.3.3 Colour photography
Colour photography uses an emulsion with three sensitive layers to record three wavelength bands corresponding to the three primary colours of the spectrum, i.e. blue, green and red. There are two types of colour photography: true colour and false colour infrared.
In true colour photography, each of the primary colours creates colour particles of the opposite colour, i.e. dyes that remove this colour but allow the other two colours to pass. The result is a colour negative. If we make a colour copy of that negative, we get the original colours in that copy.
The emulsion used for colour infrared film creates yellow dyes for green light, magenta dyes for red light and cyan dyes for infrared (IR) light. Blue light should be kept out of the camera by a filter. IR, red and green thus give the same result as red, green and blue, respectively, do in the normal case. If a copy of this IR-negative is made with a normal colour emulsion, the result is an image which shows blue for green objects, green for red objects and red for IR objects. This is called a false colour infrared image.
4.3.4 Scanning
Classical photogrammetric techniques, as well as visual photo-interpretation, generally employ hard-copy photographic images. These can be the original negatives, positive prints or diapositives. Digital photogrammetric systems, as well as geographic information systems, require digital photographic images. A scanner is used to convert a film or print into digital form. The scanner samples the image with an optical detector and measures the brightness of small areas (pixels). The brightness values are then represented as digital numbers (DN) on a given scale. In the case of a monochrome image, a single measurement is made for each pixel area. In the case of a colour image, separate red, green and blue values are measured. For simple visualization purposes a standard office scanner can be used, but high metric quality scanners are required if the digital photos are to be used in precise photogrammetric procedures.
In the scanning process, the setting of the scanning resolution is the most relevant choice. This is also referred to as the scanning density and is expressed in dots per inch (dpi; 1 inch = 2.54 cm). The dpi setting depends on the detail required for the application and is usually limited by the scanner. Office scanners permit around 600 dpi, which gives a dot size of 42 µm (2.54 cm / 600 dots). Photogrammetric scanners, on the other hand, may produce 3600 dpi (7 µm dot size).
For a monochrome 23 cm × 23 cm negative, 600 dpi scanning results in a file of 9 × 600 = 5,400 rows and the same number of columns. Assuming that 1 byte is used per pixel (i.e. there are 256 grey levels), the resulting file requires 29 Mbyte of disk space. When the scale of the negative is given, the ground pixel size of the resulting image can be calculated. Assuming a photo scale of 1:18,000, the first step is to calculate the size of one dot: 25.4 mm / 600 dots = 0.04 mm per dot. The next step is to relate this to the scale: 0.04 mm × 18,000 = 720 mm in the terrain, i.e. a ground pixel size of approximately 0.7 m.
4.5.1 Scale
The relationship between the photo scale factor, s, flying height, H, and lens focal length, f, is given by

s = H / f.    (4.1)
Hence, the same scale can be achieved with different combinations of focal length and flying height. If the focal length of a lens is decreased while the flying height remains constant, then (also refer to Figure 4.6):
The image scale factor increases and the size of the individual details in the image becomes smaller. In the example shown in Figure 4.6, using a 150 mm and a 300 mm lens at H = 2000 m results in a scale factor of 13,333 and 6,666, respectively.
The ground coverage increases. A 23 cm negative covers a length (and width) of 3066 m and 1533 m using a 150 mm and a 300 mm lens, respectively. This has implications for the number of photos required for the mapping of a certain area, which, in turn, affects the subsequent processing (in terms of labour) of the photos.
The angular field of view increases and the image perspective changes. The total field of view in situations (a) and (b) is 74° and 41°, respectively. When wide-angle photography is used for mapping, the measurement of height information (z dimension) in a stereoscopic model is more accurate than when long focal length lenses are used. The combination of a low flying height with a wide-angle lens can be problematic when there are large terrain height differences or high man-made objects in the scene: some areas may then be hidden from the camera's view.
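As a check on Equation 4.1 and the numbers above, a small sketch; the 23 cm film format is taken from the example.

# Minimal sketch: photo scale factor (Equation 4.1) and ground coverage.
def photo_scale_factor(flying_height_m, focal_length_mm):
    """s = H / f, with both quantities expressed in the same units."""
    return flying_height_m * 1000.0 / focal_length_mm

NEGATIVE_SIZE_M = 0.23                      # 23 cm film format
for f_mm in (150.0, 300.0):
    s = photo_scale_factor(2000.0, f_mm)    # 13,333 and 6,666, respectively
    coverage_m = NEGATIVE_SIZE_M * s        # 3,066 m and 1,533 m, respectively
    print(f"f = {f_mm:.0f} mm: scale factor {s:,.0f}, coverage {coverage_m:,.0f} m")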
(Figure 4.6: at H = 2000 m, situation (a) with a 74° field of view covers 3066 m; situation (b), with f = 300 mm and a 41° field of view, covers 1533 m.)
4.5.2 Spatial resolution
While scale is a generally understood and applied term, the concept of spatial resolution in aerial photography is more difficult to apply. Spatial resolution refers to the ability to distinguish small adjacent objects in an image. The spatial resolution of monochrome aerial photographs ranges from 40 to 800 line pairs per mm. The better the resolution of a recording system, the more easily the structure of objects on the ground can be viewed in the image. The spatial resolution of an aerial photograph depends on:
the image scale factor: spatial resolution decreases as the scale factor increases;
the quality of the optical system: expensive high-quality aerial lenses give much better performance than the inexpensive lenses on amateur cameras;
the grain structure of the photographic film: the larger the grains, the poorer the resolution;
the contrast of the original objects: the higher the target contrast, the better the resolution;
atmospheric scattering effects: these lead to loss of contrast and resolution;
image motion: the relative motion between the camera and the ground causes blurring and loss of resolution.
From the above list it can be concluded that the actual resolution of an aerial photograph depends on a number of factors. The most variable factor is the atmospheric condition, which can change from mission to mission, and even during a mission.
The relief displacement, d, is given by

d = r h / H.    (4.2)
In this equation, r is the radial distance (mm) from the nadir, h (m) is the height
of the terrain above the reference plane, and H (m) is the flying height above the
reference plane (where nadir intersects the terrain). The equation shows that the
amount of relief displacement is zero at nadir (r = 0), greatest at the corners of
the photograph, and is inversely proportional to the flying height.
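A numeric sketch of Equation 4.2; the sample values below are illustrative only and are not taken from the text.

# Minimal sketch: relief displacement d = r * h / H (Equation 4.2).
def relief_displacement_mm(r_mm, h_m, flying_height_m):
    """Displacement in the photo (mm) for a point at radial distance r from nadir."""
    return r_mm * h_m / flying_height_m

# A point 100 mm from nadir and 50 m above the reference plane, flown at 2000 m:
print(relief_displacement_mm(100.0, 50.0, 2000.0))  # 2.5 mm in the photo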
In addition to relief displacement, you can imagine that buildings and other tall objects can also cause displacement (height displacement). This effect is, for example, encountered when dealing with large-scale photos of urban or forest areas (Figure 4.8). In this chapter the subject of height displacement is not further elaborated.
The main effect of relief displacement is that inaccurate or wrong coordinates may be determined when, for example, digitizing from image data. Whether relief displacement should be taken into account in the geometric processing of the image data depends on its impact on the required accuracy of the geometric information derived from the images. Relief displacement can be corrected for if information on the terrain topography is available (in the form of a DTM).
Figure 4.9: Example survey area for aerial photography. Note the forward and sideways overlap of the photographs.
Summary
This chapter provided an overview of aerial photography. First, the characteristics of oblique and vertical aerial photography were distinguished. Vertical aerial photography requires a specially adapted aircraft. The main components of an aerial camera system are the lens and the film. The focal length of the lens, in combination with the flying height, determines the photo scale factor. The film type used determines which wavelength bands are recorded. The most commonly used film types are panchromatic, black-and-white infrared, true-colour and false-colour infrared. Other characteristics of film are the general sensitivity, which is related to the size of the grains, and the spectral sensitivity, which is related to the wavelengths the film is sensitive to. After exposure, the film is developed and printed. The printed photo can be scanned for use in a digital environment.
In digital cameras the CCD is used as the image recording device.
Relief displacement is the distortion of the geometric relationship between the image data and the terrain, caused by relief differences on the ground.
There have been many technological developments to improve mission execution as well as the image quality itself. The most recent of these is the digital camera, which directly yields digital data. The advent of GPS has enabled automatic flight planning and image acquisition.
Questions
The following questions can help you to study Chapter 4.
1. Consider an area of 500 km² that needs aerial photo coverage for topographic mapping at 1:50,000. Which specifications would you give on film, photo scale, overlap, etc.?
2. Go to the Internet and locate three catalogues (archives) of aerial photographs. Compare the descriptions and specifications of the photographs
(in terms of scale, resolution, format, . . . ).
The following are typical exam questions:
1. Calculate the scale factor for an aerial photo taken at 2500 m height by a
camera with a focal length of 88 mm.
2. Consider a (monochrome) black-and-white film. What determines the general sensitivity of this film and why is it important?
Chapter 5
Multispectral scanners
5.1 Introduction
Multispectral scanners measure reflected electromagnetic energy by scanning the Earth's surface. This results in digital image data, of which the elementary unit is a picture element: a pixel. As the name multispectral suggests, the measurements are made for different ranges of the EM spectrum. Multispectral scanners have been used in remote sensing since 1972, when the first Landsat satellite was launched. After the aerial camera, it is the most commonly used sensor. Applications of multispectral scanner data are mainly in the mapping of land cover, vegetation, surface mineralogy and surface water.
Two types of multispectral scanners can be distinguished: the whiskbroom scanner and the pushbroom sensor. The principles of these scanners and their characteristics are explained in Section 5.2 and Section 5.3, respectively. Multispectral scanners are mounted on airborne and spaceborne platforms. Section 5.4 describes the most widely used satellite-based scanners.
5.2.1
Whiskbroom scanners use solid state detectors (i.e. made of semi-conducting material) for measuring the energy transferred by the optical system to the sensor.
This optical system focusses the incoming radiation at the surface of the detector.
Various techniques, using prisms or gratings, are applied to split the incoming
radiation into spectral components that each have their own detector.
The detector transforms the electromagnetic radiation (photons) into electrons. The electrons are input to an electronic device that quantifies the level of
energy into the required units. In digital imaging systems, a discrete value is
used to store the level of energy. These discrete levels are referred to as Digital Number values, or DN-values. The fact that the input is measured in discrete levels is also referred to as quantization. Using the expression introduced
in Chapter 2 (Equation 2.2), one can calculate the amount of energy of a photon corresponding to a specific wavelength, using

Q = h v,    (5.1)

where Q is the energy of a photon (J), h is Planck's constant, 6.6260693 × 10⁻³⁴ J s, and v is the frequency (Hz). The solid state detector measures the amount of energy (J) during a specific time period, which results in a value in J/s, i.e. watt (W).
The range of input radiance, between a minimum and a maximum level, that a detector can handle is called the dynamic range. This range is converted into the range of a specified data format. Typically an 8-bit, 10-bit or 12-bit data format is used. The 8-bit format allows 2⁸ = 256 levels or DN-values. Similarly, the 12-bit format allows 2¹² = 4096 distinct DN-values. The smallest difference in input level that can be distinguished is called the radiometric resolution. Consider a dynamic range of energy between 0.5 W and 3 W: using 100 or 250 DN-values results in a radiometric resolution of 25 mW and 10 mW, respectively.
Figure 5.2: Normalized (maximum is 100%) spectral response curve of a specific sensor. It shows that the setting of this band (band 1 of the NOAA-14 AVHRR sensor) ranges from approximately 570 nm to 710 nm.
5.2.2
At any instant, via the mirror system, the detector of the whiskbroom scanner observes a circular area on the ground. Directly below the platform (at nadir), the diameter, D, of this area depends on the opening angle of a single detector, β, and the flying height, H:

D = β H.    (5.2)
5.3.1
first
previous
next
last
back
exit
zoom
contents
index
about
170
5.3.2
For each single line, pushbroom sensors have a geometry similar to that of aerial photos: a central projection. Because of the central projection, images from pushbroom sensors exhibit fewer geometric distortions than images from whiskbroom scanners. In the case of flat terrain and a limited total field of view (FOV), the scale is the same over the whole line, resulting in equally spaced pixels. The concept of IFOV cannot be applied to pushbroom sensors.
Most pushbroom sensors offer the possibility of off-nadir viewing, in which the scanner is pointed towards areas left or right of the orbit track (across-track), or fore or aft (along-track). This characteristic has a number of advantages. Firstly, it can be used to observe areas that are not at the nadir of the satellite, which reduces the time between successive observations (revisit time). Secondly, it can be used to image an area that is not covered by clouds at that particular moment. And lastly, off-nadir viewing can be used to produce stereo images.
The production of a stereo image pair using across-track stereo viewing requires a second image taken from a different track. When using along-track stereo viewing, the second image can be taken in quick succession after the first image, by the same sensor along the same track. This means that the images are taken at almost the same time and under the same conditions, such as season, weather and plant phenology.
When applying off-nadir viewing, similar to oblique photography, the scale in an image varies and should be corrected for.
As with whiskbroom scanners, an integration over time takes place in pushbroom sensors. Consider a moving platform with a pushbroom sensor: each element of the CCD-array measures the energy related to a small area below the platform. At 10 m spatial resolution and 6.5 km/s ground speed, every 1.5 milliseconds a new line of pixels has to be recorded.
5.4.1 Low-resolution systems
Table 5.1: Meteosat-8 SEVIRI characteristics.
System: Meteosat-8
Orbit: geostationary, 0° longitude
Sensor: SEVIRI (Spinning Enhanced VIS and IR Imager)
Swath width: full Earth disc (FOV = 18°)
Off-nadir viewing: not applicable
Revisit time: 15 minutes
Spectral bands (µm): 0.5–0.9 (PAN); 0.6, 0.8 (VIS); 1.6, 3.9 (IR); 6.2, 7.3 (WV); 8.7, 9.7, 10.8, 12.0, 13.4 (TIR)
Spatial resolution: 1 km (PAN), 3 km (all other bands)
Data archive at: www.eumetsat.de
The spectral bands of the SEVIRI sensor (Table 5.1) were chosen for observing phenomena that are relevant to meteorologists: a panchromatic band (PAN), and mid-infrared bands that give information about the water vapour (WV) present in the atmosphere.
176
Swath width
Off-nadir viewing
Revisit time
Spectral bands (m)
Spatial resolution
Data archive at
Table 5.2:
NOAA-17
AVHRR characteristics.
NOAA-17
812 km, 98.7 inclination, sun-synchronous
AVHRR-3
(Advanced Very High Resolution Radiometer)
2800 km (FOV = 110)
No
214 times per day, depending on latitude
0.580.68 (1), 0.731.00 (2),
1.581.64 (3A day), 3.553.93 (3B night),
10.311.3 (4), 11.512.5 (5)
1 km 1 km (at nadir), 6 km 2 km (at
limb), IFOV=1.4 mrad
www.saa.noaa.gov
As the AVHRR sensor (Table 5.2) has a very wide FOV (110°) and is at a large distance from the Earth, the whiskbroom principle causes a large difference in the size of the ground cell measured within one scan line (Figure 5.4). The standard image data products of AVHRR are therefore resampled to image data with equally sized ground pixels.
5.4.2 Medium-resolution systems
Table 5.3: Landsat-7 ETM+ characteristics.
System: Landsat-7
Orbit: 705 km, 98.2° inclination, sun-synchronous, 10:00 AM crossing, 16-day repeat cycle
Sensor: ETM+ (Enhanced Thematic Mapper)
Swath width: 185 km (FOV = 15°)
Off-nadir viewing: no
Revisit time: 16 days
Spectral bands (µm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 1.55–1.75 (5), 10.4–12.50 (6), 2.08–2.34 (7), 0.50–0.90 (PAN)
Spatial resolution: 15 m (PAN), 30 m (bands 1–5, 7), 60 m (band 6)
Data archives at: earthexplorer.usgs.gov, edcimswww.cr.usgs.gov/imswelcome
Landsat-7
The Landsat programme is the oldest civil Earth Observation programme. It started in 1972 with the Landsat-1 satellite carrying the MSS multispectral sensor. In 1982, the Thematic Mapper (TM) replaced the MSS sensor. Both MSS and TM are whiskbroom scanners. In April 1999, Landsat-7 was launched, carrying the ETM+ scanner (Table 5.3). Today, only Landsat-5 and -7 are operational. On its 20th anniversary (March 1st, 2004), Landsat-5 was still generating valuable data. There are many applications of Landsat TM data in land-cover mapping and many other fields.
Table: ETM+ wavelength bands and example applications.
0.45–0.52 µm (Blue): coastal water mapping (bathymetry & quality); ocean phytoplankton & sediment mapping; atmosphere (pollution & haze detection)
0.52–0.60 µm (Green): chlorophyll reflectance peak; vegetation species mapping; vegetation stress
0.63–0.69 µm (Red): chlorophyll absorption; plant species differentiation; biomass content
0.76–0.90 µm (NIR): vegetation species & stress; biomass content; soil moisture
1.55–1.75 µm (SWIR): vegetation-soil delineation; urban area mapping; snow-cloud differentiation
10.4–12.5 µm (TIR): vegetation stress analysis; soil moisture & evapotranspiration mapping; surface temperature mapping
2.08–2.35 µm (SWIR): geology (mineral and rock type mapping); water-body delineation; vegetation moisture content mapping
0.50–0.90 µm (15-m PAN): medium-scale topographic mapping; image sharpening; snow-cover classification
Table: Terra ASTER characteristics.
System: Terra
Orbit: 705 km, 98.2° inclination, sun-synchronous, 10:30 AM crossing, 16-day repeat cycle
Sensor: ASTER
Swath width: 60 km
Off-nadir viewing: across-track, 8.5° (SWIR and TIR) or 24° (VNIR); along-track, 27.7° backwards (band 3B)
Revisit time: 5 days (VNIR)
Spectral bands (µm): VIS (bands 1–2): 0.56, 0.66; NIR: 0.81 (3N nadir and 3B backward, 27.7°); SWIR (bands 4–9): 1.65, 2.17, 2.21, 2.26, 2.33, 2.40; TIR (bands 10–14): 8.3, 8.65, 9.10, 10.6, 11.3
Spatial resolution: 15 m (VNIR), 30 m (SWIR), 90 m (TIR)
Data archives at: terra.nasa.gov, edcimswww.cr.usgs.gov/imswelcome
5.4.3 High-resolution systems
Table: SPOT-5 characteristics.
System: SPOT-5
Orbit: 822 km, 98.7° inclination, sun-synchronous, 10:30 AM crossing, 26-day repeat cycle
Sensor: 2 HRG (High Resolution Geometric) and HRS (High Resolution Stereoscopic)
Swath width: 60 km
Off-nadir viewing: 31° across-track
Revisit time: 2–3 days (depending on latitude)
Spectral bands (µm): 0.50–0.59 (Green), 0.61–0.68 (Red), 0.78–0.89 (NIR), 1.58–1.75 (SWIR), 0.48–0.70 (PAN)
Spatial resolution: 10 m, 5 m (PAN)
Data archives at: sirius.spotimage.fr, www.vgt.vito.be (free VEGETATION data older than 3 months)
Table: Resourcesat-1 LISS4 characteristics.
System: Resourcesat-1
Orbit: 817 km, 98.8° inclination, sun-synchronous, 10:30 AM crossing, 24-day repeat cycle
Sensor: LISS4
Swath width: 70 km
Off-nadir viewing: 20° across-track
Revisit time: 5–24 days
Spectral bands (µm): 0.56, 0.65, 0.80
Spatial resolution: 6 m
Data archive at: www.spaceimaging.com
Table: Ikonos characteristics.
System: Ikonos
Orbit: 681 km, 98.2° inclination, sun-synchronous, 10:30 AM crossing, 14-day repeat cycle
Sensor: Optical Sensor Assembly (OSA)
Swath width: 11 km
Off-nadir viewing: 50°, omnidirectional
Revisit time: 1–3 days
Spectral bands (µm): 0.45–0.52 (1), 0.52–0.60 (2), 0.63–0.69 (3), 0.76–0.90 (4), 0.45–0.90 (PAN)
Spatial resolution: 1 m (PAN), 4 m (bands 1–4)
Data archive at: www.spaceimaging.com
It is expected that, in the long term, 50% of the aerial photography will be replaced by high-resolution spaceborne imagery.
5.4.4
Due to the limited success and availability of imaging spectrometry (or hyperspectral) systems, the Earth Observing-1 (EO-1) satellite, which is in fact a multi-instrument satellite, is described in the first section below as an example of an imaging spectrometry system. ESA's micro-satellite Proba is discussed in the next section.
Table: EO-1 Hyperion characteristics.
System: EO-1
Orbit: 705 km, 98.7° inclination, sun-synchronous, 10:30 AM crossing, 16-day repeat cycle
Sensor: Hyperion
Swath width: 7.5 km
Off-nadir viewing: no
Revisit time: 16 days
Spectral bands: 220 bands, covering 0.4 µm to 2.5 µm
Spatial resolution: 30 m
Data archive at: eo1.gsfc.nasa.gov
5.4.5
(Figure: overview of the Envisat-1 instruments (GOMOS, MERIS, MIPAS, MWR, LR, RA-2, ASAR, AATSR, DORIS, SCIAMACHY) and the parameters they observe: atmosphere (clouds, humidity, radiative fluxes, temperature, trace gases, aerosols), land (surface temperature, vegetation characteristics, surface elevation), ocean (ocean colour, sea surface temperature, surface topography, turbidity, wave characteristics, marine geoid) and ice (extent, snow cover, topography, temperature).)
Table 5.12: Characteristics of the Envisat-1 satellite and its ASAR and MERIS sensors.
System: Envisat-1
Orbit: 800 km, 98.6° inclination, sun-synchronous, 10:00 AM crossing, 35-day repeat cycle
Sensor: ASAR (Advanced SAR)
Swath width: 56 km to 405 km
Off-nadir viewing: across-track, 17° to 45°
Revisit time: 35 days
Frequency: C-band, 5.331 GHz
Polarization: several modes: HH+VV, HH+HV or VV+VH
Spatial resolution: 30 m or 150 m (depending on mode)
Sensor: MERIS
Swath width: 1150 km
Revisit time: 3 days
Spectral range (µm): 0.39–1.04 (VNIR)
Spectral bandwidth: 1.25 nm to 25 nm (programmable)
Bands: 15 bands (due to limited capacity)
Spatial resolution: 300 m (land), 1200 m (ocean)
Data archive at: envisat.esa.int
SCIAMACHY stands for Scanning Imaging Absorption Spectrometer for Atmospheric Cartography. Additionally, ESA's Artemis data relay satellite system is used for communication to the ground.
The most important sensors for land applications are the Advanced Synthetic Aperture Radar (ASAR), the Medium-Resolution Imaging Spectrometer (MERIS) and the Advanced Along-Track Scanning Radiometer (AATSR).
Table 5.13: Envisat-1 MERIS band characteristics.
Band 1: centre 412.5 nm, width 10 nm. Yellow substance, turbidity.
Band 2: centre 442.5 nm, width 10 nm. Chlorophyll absorption maximum.
Band 3: centre 490 nm, width 10 nm. Chlorophyll and other pigments.
Band 4: centre 510 nm, width 10 nm. Turbidity, suspended sediment, red tides.
Band 5: centre 560 nm, width 10 nm. Chlorophyll reference, suspended sediment.
Band 6: centre 620 nm, width 10 nm. Suspended sediment.
Band 7: centre 665 nm, width 10 nm. Chlorophyll absorption.
Band 8: centre 681.25 nm, width 7.5 nm. Chlorophyll fluorescence.
Band 9: centre 705 nm, width 10 nm. Atmospheric correction, vegetation.
Band 10: centre 753.75 nm, width 7.5 nm. Oxygen absorption reference, vegetation.
Band 11: centre 760 nm, width 2.5 nm. Oxygen absorption band.
Band 12: centre 775 nm, width 15 nm. Aerosols, vegetation.
Band 13: centre 865 nm, width 20 nm. Aerosol corrections over ocean.
Band 14: centre 890 nm, width 10 nm. Water vapour absorption reference.
Band 15: centre 900 nm, width 10 nm. Water vapour absorption, vegetation.
5.4.6 Future developments
There have been trends towards higher spatial resolution (1-metre detail), higher spectral resolution (more than 100 bands), higher temporal resolution (global coverage within 3 days) and higher radiometric resolution (simultaneous observation of dark and bright targets). Although most of these trends were driven by advancing technology, the trend towards higher spatial resolution was initiated by an act of politics, when US president Clinton signed the US Land Remote Sensing Act of 1992. The act initiated a new race to space, this time over which private company would be the first to launch its high-resolution satellite. Nowadays, private companies spend more money on remote sensing than governments do.
In addition to the traditional spacefaring countries, new countries have launched their own remote sensing satellites. Countries such as India, China and Brazil, for instance, have strong ongoing remote sensing programmes. As remote sensing technology matures and becomes available at lower cost, still other players may enter the field.
The following list shows a number of technological developments:
Fast development. Nowadays, the time from the drawing board to the launch of a remote sensing satellite can be as short as one year. This means that satellites can be developed and deployed very fast.
New sensor types. We may see the deployment of sensor types that were previously unused in space. One example may be P-band radar: up to now, no P-band radar has been used in space. P-band has a longer wavelength (30 cm to 100 cm) than the radar systems used before and penetrates deeper into the soil, which means it may offer an unprecedented view of what lies beneath the surface.
Summary
The multispectral scanner is a sensor that collects data in various wavelength bands of the EM spectrum. The scanner can be mounted on an aircraft or on a satellite. There are two types of scanners: the whiskbroom scanner and the pushbroom sensor. They use single solid state detectors and arrays of solid state detectors (CCDs), respectively, for measuring EM energy levels. The resulting image data store the measurements as Digital Numbers. Multispectral scanners provide multi-band data.
In terms of geometric distortions and operational reliability, the pushbroom sensor is superior to the whiskbroom scanner. However, because of the limited spectral range of current CCDs, whiskbroom scanners are still used for spectral ranges above 2.5 µm, such as the mid-infrared and thermal infrared bands.
Operational remote sensing satellites can be distinguished by the resolution and the characteristics of their (main) sensor. A number of sensors and platforms were discussed in the following categories: low resolution, medium resolution, high resolution, imaging spectrometry and, finally, a large multi-instrument system with an imaging radar. In addition, some future developments were discussed. Keywords such as small, high resolution, high quality, low cost, agile, constellation, large coverage and rapid dissemination describe the trend.
Questions
The following questions can help you to study Chapter 5.
1. Compare multispectral scanner data with scanned aerial photographs.
Which similarities and differences can you identify?
The following are typical exam questions:
1. Explain the principle of the whiskbroom scanner.
2. Explain the principle of the pushbroom sensor.
3. What does CCD stand for and what is it used for?
4. Consider a whiskbroom scanner at 5000 m height with an opening angle (β) of 2 mrad. Calculate the diameter of the area observed on the ground.
Chapter 6
Active sensors
6.1 Introduction
Active remote sensing technologies have the potential to provide accurate information about the land surface through imaging by Synthetic Aperture Radar (SAR) from airborne or spaceborne platforms, and through three-dimensional measurement of the surface by interferometric SAR (INSAR or IFSAR) and airborne laser scanning (LIDAR). While these technologies are both active ranging systems utilizing precise GPS and INS systems, they represent fundamentally different sensing processes. Radar systems, introduced in Section 6.2, are based on microwave sensing principles, while laser scanners are optical sensors, typically operating in the near-infrared portion of the electromagnetic spectrum. They are introduced in Section 6.3.
6.2 Radar
6.2.1 What is radar?
So far, you have learned about remote sensing using the visible and infrared parts of the electromagnetic spectrum. Microwave remote sensing uses electromagnetic waves with wavelengths between 1 cm and 1 m (Figure 2.5). These relatively long wavelengths have the advantage that they can penetrate clouds and are independent of atmospheric conditions, such as haze. Although microwave remote sensing is primarily considered an active technique, passive sensors are also used. They operate similarly to thermal sensors, by detecting naturally emitted microwave energy, and are primarily used in meteorology, hydrology and oceanography. Active systems, on the other hand, transmit microwave signals from an antenna to the Earth's surface, where they are backscattered. The part of the electromagnetic energy that is scattered back in the direction of the antenna is detected by the sensor, as illustrated in Figure 6.1. There are several advantages to be gained from the use of active sensors, which have their own energy source:
It is possible to acquire data at any time, including during the night (similar to thermal remote sensing).
Since the waves are created actively, the signal characteristics are fully controlled (e.g. wavelength, polarization, incidence angle) and can be adjusted according to the desired application.
Active sensors are divided into two groups: imaging and non-imaging sensors. Radar sensors belong to the most commonly used active (imaging) microwave sensors. The term radar is an acronym for radio detection and ranging: radio refers to the microwaves used, and ranging refers to the measurement of distance. Radar sensors were originally developed and used by the military. Nowadays, radar
sensors are widely used in civil applications as well, such as environmental monitoring. The group of non-imaging microwave instruments comprises altimeters, which collect distance information (e.g. sea surface height), and scatterometers, which acquire information about object properties (e.g. wind speed).
This section focuses on the principles of imaging radar and its applications. The interpretation of radar imagery is less intuitive than that of imagery obtained from optical remote sensing, because the physical interaction of the waves with the Earth's surface differs. The section explains which interactions take place and how radar images can be interpreted.
6.2.2 Principles of imaging radar

The energy received by the radar antenna is described by the radar equation:

$P_r = \frac{G^2 \lambda^2 P_t \sigma}{(4\pi)^3 R^4}$,   (6.1)

where
P_r = received energy,
G = antenna gain,
λ = wavelength,
P_t = transmitted energy,
σ = radar cross section, which is a function of the object characteristics and the size of the illuminated area, and
R = range from the sensor to the object.
From this equation you can see that there are three main factors that influence
the strength of the backscattered received energy:
- radar system properties, i.e. wavelength, antenna and transmitted power;
- radar imaging geometry, which defines the size of the illuminated area as a function of, for example, beam width, incidence angle and range; and
- characteristics of the interaction of the radar signal with objects, i.e. surface roughness and composition, and terrain topography and orientation.
They are explained in the following sections in more detail.
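As a quick numerical illustration of Equation 6.1, consider the following Python sketch; every parameter value is purely illustrative and does not describe any particular radar system.

# Hedged sketch: evaluating the radar equation (6.1) for illustrative values.
import math

G = 10 ** (35 / 10)   # antenna gain (35 dB), illustrative
lam = 0.056           # wavelength [m], C-band
P_t = 5000.0          # transmitted power [W]
sigma = 1.0           # radar cross section [m^2], object dependent
R = 850e3             # range from sensor to object [m]

P_r = (G**2 * lam**2 * P_t * sigma) / ((4 * math.pi) ** 3 * R**4)
print(f"received power: {P_r:.3e} W")  # note the strong 1/R^4 fall-off

The fourth-power dependence on range is why spaceborne radars need high transmitted power and large antenna gain.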
[Figure 6.2: Sampling of the return signal of one pulse: intensity plotted against time, before and after sampling.]
What exactly does a radar system measure? To interpret radar images correctly, it is important to understand what a radar sensor detects. The physical
properties of a radar wave are the same as those introduced in Section 2.2. Radar
waves, too, are electric and magnetic fields that oscillate in the shape of a sine
wave, oriented perpendicular to each other. The concepts of wavelength and frequency are used as described before. In addition, amplitude, phase and period are
relevant. The amplitude is the peak value of the wave. It relates to the amount of energy contained in the wave. The phase is the fraction of the period that has elapsed relative to the start point of the wave. In other words, the phase rotates 360°, i.e. a full circle, for one full wave. The time required to complete that full wave is the period.
The radar transmitter creates microwave signals, i.e. pulses of microwaves emitted at a fixed rate (the Pulse Repetition Frequency, PRF), that are directed by the antenna into a beam. A pulse travels in this beam through the atmosphere, illuminates a portion of the Earth's surface, is backscattered and passes through the atmosphere again to reach the antenna, where the signal is received and its intensity is determined. The signal travels twice the distance between object and antenna, so, knowing the speed of light, the distance (range) between sensor and object can be derived.
To create an image, the return signal of a single pulse is sampled and these samples are stored in an image line (Figure 6.2). With the movement of the sensor emitting pulses, a two-dimensional image is created (each pulse defines one line). The radar sensor, therefore, measures distances and detects backscattered signal intensities.
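As a minimal sketch of the range computation just described (the echo delay below is an illustrative value, not taken from any mission):

# Range from the two-way travel time of a radar pulse.
c = 299_792_458.0             # speed of light [m/s]
t_echo = 5.67e-3              # time between transmission and reception [s]
slant_range = c * t_echo / 2  # the pulse covers the distance twice
print(f"slant range: {slant_range / 1000:.1f} km")  # about 850 km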
Commonly used imaging radar bands Similarly to optical remote sensing,
radar sensors operate with one or more different bands. For better identification,
a standard has been established that defines various wavelength ranges using
letters to distinguish among the various bands (Figure 6.3). In the description
of different radar missions you will recognize the different wavelengths used if
you see the letters. The European ERS mission and the Canadian Radarsat, for
example, use C-band radar. Just like multispectral bands, different radar bands
provide information about different object characteristics.
[Figure 6.3: Microwave spectrum and band identification by letters; frequencies from 0.3 to 100 GHz correspond to wavelengths from 100 cm down to 0.3 cm.]
Microwave polarizations The polarization of an electromagnetic wave is important in the field of radar remote sensing. Depending on the orientation of the transmitted and received radar wave, polarization will result in different images (Figure 6.4). It is possible to work with horizontally, vertically or cross-polarized radar waves. Using different polarizations and wavelengths, you can collect information that is useful for particular applications, for example, to classify agricultural fields. In radar system descriptions you will come across the following abbreviations:
- HH: horizontal transmission and horizontal reception,
- VV: vertical transmission and vertical reception,
- HV: horizontal transmission and vertical reception, and
- VH: vertical transmission and horizontal reception.
[Figure 6.4: An electromagnetic wave: electric and magnetic fields oscillating perpendicular to each other, with wavelength λ, plotted against distance; the wave propagates at the velocity of light c.]
[Figure 6.5: Radar viewing geometry, showing the flight path and nadir, illumination and incidence angles, the local incidence angle and Earth normal vector, slant range and ground range, near range, far range, swath width, and the azimuth direction.]
6.2.3 Radar geometry and resolution

The platform carrying the radar sensor moves along the orbit in the flight direction (Figure 6.5). The ground track of the orbit/flight path on the Earth's surface is at nadir. The microwave beam illuminates an area, or swath, on the Earth's surface, with an offset from the nadir, i.e. side-looking. The direction along-track is called azimuth; the direction perpendicular to it (across-track) is called range.
Radar viewing geometry
Radar sensors are side-looking instruments. The portion of the image that is closest to the nadir track of the satellite carrying the radar is called near range; the part that is farthest from the nadir is called far range (Figure 6.5). The incidence angle of the system is defined as the angle between the radar beam and the local vertical. Moving from near range to far range, the incidence angle increases. It is important to distinguish between the incidence angle of the sensor and the local incidence angle, which depends on terrain slope and Earth curvature (Figure 6.5) and is defined as the angle between the radar beam and the local surface normal. The radar sensor measures the distance between antenna and object; this line is called slant range. The true horizontal distance along the ground corresponding to each measured point in slant range is called ground range (Figure 6.5).
[Figure 6.6: Slant range resolution: objects separated by more than half the pulse length (PL/2) produce distinct fronts in the return signal.]
Spatial resolution
In radar remote sensing, the images are created from the backscattered portion
of transmitted signals. Without further sophisticated processing, the spatial resolutions in slant range and azimuth direction are defined by pulse length and antenna beam width, respectively. This setup is called Real Aperture Radar (RAR).
Due to the different parameters that determine the spatial resolution in range
and azimuth direction, it is obvious that the spatial resolution in the two directions is different. For radar image processing and interpretation it is useful to
resample the image data to regular pixel spacing in both directions.
Slant range resolution In slant range the spatial resolution is defined as the
distance that two objects on the ground have to be apart to give two different
echoes in the return signal. Two objects can be resolved in range direction if
they are separated by at least half a pulse length. In that case the return signals
will not overlap. The slant range resolution is independent of the range (see
Figure 6.6).
Azimuth resolution The spatial resolution in azimuth direction depends on
the beam width and the range. The radar beam width is proportional to the
wavelength and inversely proportional to the antenna length, i.e. aperture; this
means the longer the antenna, the narrower the beam and the higher the spatial
resolution in azimuth direction.
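The two relations just described can be put into numbers. The following sketch uses illustrative values for a spaceborne C-band real aperture system; none of them describe a real sensor.

# Hedged sketch of RAR spatial resolutions from the relations above.
c = 3.0e8                # speed of light [m/s]
pulse_duration = 0.1e-6  # duration of the transmitted pulse [s]
wavelength = 0.056       # C-band wavelength [m]
antenna_length = 10.0    # physical (real) antenna length [m]
slant_range = 850e3      # range to the object [m]

range_resolution = c * pulse_duration / 2      # half the pulse length
beam_width = wavelength / antenna_length       # beam width [rad]
azimuth_resolution = beam_width * slant_range  # degrades with range

print(range_resolution)    # 15.0 m
print(azimuth_resolution)  # 4760.0 m; this is the motivation for SAR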
Synthetic Aperture Radar (SAR)
To get useful spatial resolutions in radar images the RAR systems have their limitations, since there is a physical limit to the length of the antenna, that can be
carried on an aircraft or satellite. On the other hand, shortening the wavelength
will reduce the penetrating capability of clouds. To improve the spatial resolution a large antenna is synthesized. The synthesization is achieved by taking
advantage of the forward motion of the platform. Using all the backscattered
signals in which a contribution of the same object is present, a very long antenna
can be synthesized. This length is equal to the part of the orbit or the flightpass
in which the object is visible. Most airborne and spaceborne radar systems
use this type of radar. Systems using this approach are called Synthetic Aperture
Radar (SAR).
6.2.4 Data formats

SAR data are recorded in so-called raw format. They can be processed with a SAR processor into a number of derived products, such as intensity images, geocoded images and phase-containing data. The highest possible spatial resolution of the raw data is defined by the radar system characteristics.
Raw data
Raw data contain the backscatter of objects on the ground seen at different points in the sensor orbit. The received backscatter signals are sampled and separated into two components, together forming a complex number. The components contain information about the amplitude and the phase of the detected signal and are stored in different layers. In this format, all backscatter information is still available in the elements of the data layers, and:
- Each line consists of the sampled return signal of one pulse.
- An object is included in many lines (about 1000 for ERS).
- The position of an object in the different lines varies (different range).
- Each object has a unique Doppler history, which is included in the data layers.
SLC data
The raw data are compressed based on the unique Doppler shift and range information for each pixel, which means that the many backscatters of a point are
combined into one. The output of that compression is stored in one pixel which
is still in complex format. Each pixel still contains information of the returned
microwave. The phase and amplitude belonging to that pixel can be computed
from the complex number. If all backscatter information of a point is used in the
compression, then the output data is in Single Look Complex (SLC) format. The
data still have their highest possible spatial resolution.
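As a small illustration of what "complex format" means here, the sketch below derives amplitude, phase and intensity from one hypothetical SLC pixel; the sample value is invented.

# Hedged sketch: amplitude and phase of one complex (SLC) pixel, assuming
# the two stored components are the real and imaginary parts.
import numpy as np

pixel = np.complex64(12.3 - 7.1j)  # illustrative complex sample
amplitude = np.abs(pixel)          # length of the complex vector
phase = np.angle(pixel)            # phase in radians, in (-pi, pi]
intensity = amplitude ** 2         # intensity is commonly the squared amplitude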
Multi-look data
In the case of multi-look processing, the total range of the orbit in which an object can be seen is divided into several parts, each of which provides a "look" at the object. The final image, which is still in complex format, is obtained by averaging these multiple looks. Multi-look processing reduces the spatial resolution, but it also reduces unwanted effects (speckle) by averaging.
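In code, the averaging step could look like the sketch below. Note that the text keeps the multi-look result in complex format; the sketch shows the widely used variant that averages look intensities instead, and assumes the looks are available as co-registered complex NumPy arrays.

# Hedged sketch of multi-look processing as intensity averaging over N looks.
import numpy as np

def multilook(looks):
    """Average the intensities of the looks; the speckle variance drops
    roughly with the number of looks."""
    return np.mean([np.abs(look) ** 2 for look in looks], axis=0)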
Intensity image
To get a visually interpretable image, the SLC or multi-look data need to be processed: the complex format is transformed into an intensity image. The norm (length) of the complex vector gives the intensity of the pixel. The spatial resolution of the intensity image is related to the number of looks used in the compression step.
6.2.5 Distortions in radar images

Due to the side-looking viewing geometry, radar images suffer from serious geometric and radiometric distortions. In radar imagery, you encounter variations in scale (caused by the slant range to ground range conversion), foreshortening, layover and shadows (due to terrain elevation; Figure 6.7). Interference due to the coherence of the signal causes the speckle effect.
Scale distortions
Radar measures ranges to objects in slant range rather than true horizontal distances along the ground. Therefore, the image has different scales moving from
near to far range (Figure 6.5). This means that objects in near range are compressed with respect to objects at far range. For proper interpretation, the image
has to be corrected and transformed into ground range geometry.
[Figure 6.7: Geometric distortions in radar imagery due to terrain elevation: foreshortening (F), layover (L) and shadow (S), shown for a side-looking sensor imaging terrain in slant range versus ground range.]
Terrain-induced distortions
Similarly to optical sensors that can operate in an oblique manner (e.g. SPOT)
radar images are subject to relief displacements. In the case of radar, these distortions can be severe. There are three effects that are typical for radar: foreshortening, layover and shadow (see Figure 6.7).
Foreshortening Radar measures distance in slant range. The slope area facing
the radar is compressed in the image. The amount of shortening depends on the
angle that the slope forms in relation to the incidence angle. The distortion is at
its maximum if the radar beam is almost perpendicular to the slope. Foreshortened areas in the radar image are very bright.
Layover If the radar beam reaches the top of the slope earlier than the bottom, the slope is imaged upside down, i.e. the slope lays over. As you can
understand from the definition of foreshortening, layover is an extreme case of
foreshortening. Layover areas in the image are very bright.
Shadow In the case of slopes that are facing away from the sensor, the radar
beam cannot illuminate the area. Therefore, there is no energy that can be backscattered to the sensor and those regions remain dark in the image.
Radiometric distortions
The above-mentioned geometric distortions also have an influence on the received energy. Since the backscattered energy is collected in slant range, the
received energy coming from a slope facing the sensor is stored in a reduced
area in the image, i.e. it is compressed into fewer image pixels than should be
the case if obtained in ground range geometry. This results in high digital numbers because the energy collected from different objects is combined. Slopes
facing the radar appear (very) bright. Unfortunately this effect cannot be corrected for. This is why especially layover and shadow areas in radar imagery
cannot be used for interpretation. However, they are useful in the sense that
they contribute to a three-dimensional look of the image and therefore help the
understanding of the terrain structure and topography.
A typical property of radar images is the so-called speckle. It appears as
grainy salt and pepper effects in the image (Figure 6.8). Speckle is caused by
the interference of backscattered signals coming from an area which is included
in one pixel. The wave interactions are called interference. Interference causes
the return signals to be extinguished or amplified, resulting in dark and bright
pixels in the image even when the sensor observes a homogenous area. Speckle
degrades the quality of the image and makes the interpretation of radar imagery
difficult.
[Figure 6.8: Speckle in radar imagery, panels (a) and (b).]
Speckle reduction
It is possible to reduce speckle by means of multi-look processing or spatial filtering. If you purchase an ERS SAR scene in Intensity (PRI) format you will receive a 3-look or 4-look image. Another way to reduce speckle is to apply spatial filters to the images. Speckle filters are designed to adapt to local image variations, smoothing values to reduce speckle while enhancing lines and edges to maintain the sharpness of the imagery.
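As an illustration of such an adaptive filter, here is a minimal sketch of a Lee-type speckle filter. The window size and the global noise estimate are simplifications; operational speckle filters are considerably more refined.

# Hedged sketch of adaptive (Lee-type) speckle filtering of an intensity image.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Smooth homogeneous areas towards the local mean while leaving
    pixels near edges and point targets largely unchanged."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = np.mean(var)          # crude global noise estimate
    weight = var / (var + noise_var)  # near 0 in flat areas, near 1 at edges
    return mean + weight * (img - mean)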
6.2.6 Interpretation of radar images

The brightness of features in a radar image depends on the strength of the backscattered signal. In turn, the amount of energy that is backscattered depends on various factors. An understanding of these factors will help you to interpret radar images properly.
Microwave signal and object interactions
For interpreters who are concerned with visual interpretation of radar images,
the degree to which they can interpret an image depends upon whether they
can identify typical/representative tones related to surface characteristics. The
amount of energy that is received at the radar antenna depends on the illuminating signal (radar system parameters such as wavelength, polarization, viewing geometry, etc.) and the characteristics of the illuminated object (roughness,
shape, orientation, dielectric constant, etc.).
Surface roughness is the terrain property that most strongly influences the strength of radar returns. It is determined by textural features comparable in size to the radar wavelength (typically between 5 and 40 cm), such as the leaves and twigs of vegetation, and sand, gravel and cobble particles. A distinction should be made between surface roughness and topographic relief: surface roughness occurs at the level of the radar wavelength (centimetres to decimetres), whereas topographic relief occurs at a quite different level (metres to kilometres). The law of reflection states that the angle of reflection is equal and opposite to the angle of incidence. A smooth surface reflects the energy away from the antenna without returning a signal, and therefore appears black in the image. With an increase in surface roughness, the amount of energy reflected away is reduced, and the amount of signal returned to the antenna increases. This is known as the backscattered component. The greater the amount of energy returned, the brighter the signal appears in the image. Radar imagery is, therefore, a measure of the backscatter component, and is related to object or surface roughness.
Complex dielectric constant Microwave reflectivity is a function of the complex dielectric constant, a measure of the electrical properties of surface materials. The dielectric constant of a medium consists of a part referred to as permittivity and a part referred to as conductivity ([39]). Both properties are strongly dependent on the moisture or liquid water content of a medium. Material with a high dielectric constant has a strongly reflective surface. Therefore, a difference in the intensity of the radar return for two surfaces of equal roughness indicates a difference in their dielectric properties; in the case of soils this could be due to differences in soil moisture content.
Surface orientation Scattering is also related to the orientation of the object relative to the radar antenna. For example, the roof of a building appears bright if it faces the antenna and dark if the incoming signal is reflected away from the antenna. Backscatter thus also depends on the local incidence angle.
Volume scattering is related to multiple scattering processes within a group of objects, such as the vegetation canopy of a wheat field or a forest. The cover may consist entirely of trees, as in a forested area, possibly of different species with variation in leaf form and size, or of grasses and bushes with variations in form, stalk size, leaf size and angle, fruiting, and a variable soil surface. Some of the energy will be backscattered from the vegetated surface, but some, depending on the characteristics of the radar system used and the object material, will penetrate the object and be backscattered from surfaces within the vegetation. Volume scattering therefore depends on the inhomogeneous nature of the object surface and on physical properties of the object, such as leaf size, direction, density and height, and the presence of lower vegetation, together with the characteristics of the radar
used, such as wavelength and the related effective penetration depth ([3]).
Point objects are discrete objects of limited size that give a very strong radar return. Usually the high backscatter is caused by so-called corner reflection. An example is the dihedral corner reflector, a point-object situation resulting from two flat surfaces intersecting at 90° and situated orthogonal to the incident radar beam. Common forms of dihedral configurations are man-made features, such as transmission towers, railroad tracks, or the smooth sides of buildings on a smooth ground surface. Another type of point object is the trihedral corner reflector, formed by the intersection of three mutually perpendicular flat surfaces. Point objects of the corner reflector type are commonly used to identify known fixed points in an area in order to perform precise calibration measurements. Such objects can also occur naturally and are best seen in urban areas, where buildings can act as trihedral or dihedral corner reflectors. These objects give rise to intense bright spots on an image and are typical for urban areas. Point objects are sometimes below the resolution of the radar system, but because they dominate the return from a cell they give a clearly visible point, and may even dominate the surrounding cells.
6.2.7 Applications of radar

There are many useful applications of radar images. Radar data provide information complementary to visible and infrared remote sensing data. In forestry, radar images can be used to obtain information about forest canopy, biomass and different forest types. Radar images also allow the differentiation of land cover types, such as urban areas, agricultural fields and water bodies. In agricultural crop identification, the use of radar images acquired with different polarizations (mainly airborne) is quite effective. For agricultural applications it is crucial to acquire data at a certain point in time (season) to obtain the necessary parameters; this is possible because radar can operate independently of weather or daylight conditions. In geology and geomorphology, the fact that radar provides information about surface texture and roughness plays an important role in lineament detection and geological mapping. Other successful applications of radar include hydrological modelling and soil moisture estimation, based on the sensitivity of the microwaves to the dielectric properties of the observed surface. The interaction of microwaves with ocean surfaces and ice provides useful data for oceanography and ice monitoring. Radar data are also used for oil slick monitoring and environmental protection.
6.2.8 INSAR

Radar data provide a wealth of information that is based not only on the derived intensity image but also on other data properties that characterize the objects. One example is SAR interferometry (INSAR), an advanced processing method that takes advantage of the phase information of the microwaves. INSAR is a technique that enables the extraction of 3D information of the Earth's surface. It is based on the phase differences between corresponding pixels in two SAR images of the same scene acquired from slightly different positions. The different path lengths from these positions to the target on the Earth's surface cause the differences in phase. SAR systems can detect the phase of the return signals very accurately (Figure 6.9).
[Figure 6.9: Phase difference between two SAR acquisitions: φ = φ1 − φ2.]
Data acquisition modes
Radar data for INSAR can be collected in two different modes:
- Single or simultaneous pass interferometry. In this mode, two images are acquired simultaneously from two antennas mounted on the same platform and separated by a distance known as the baseline. This mode is mainly applied with aircraft systems, but the Shuttle Radar Topography Mission (SRTM) was also based on this principle, with receiving antennas located at the two ends of a 60 m mast acting as the baseline.
- Repeat or dual pass interferometry. In this mode, two images of the same area are taken in different passes of the platform. SAR data acquired from satellites such as ERS-1 and ERS-2, JERS-1 and RADARSAT may be used in this mode to produce SAR interferograms, but some aircraft systems are also based on this mode.
[Figure 6.10: INSAR acquisition geometry: antenna positions A1 and A2, baseline orientation α, look angle θ, range r and range difference δr to a terrain point of height z(x,y).]
Concept
The phase information of two radar datasets (in SLC format) of the same region is used to create a DEM of the region (Figure 6.10). The corresponding elements of the two SLC datasets are acquired at two slightly different antenna positions (A1 and A2). The connection between these positions forms the baseline, which has a length (B) and an orientation (α) relative to the horizontal direction. The positions A1 and A2 are extracted from the platform orbits or flight lines. The difference in antenna position results in a difference in range (r and r′) from the target to those positions. This difference in range (δr) causes a phase difference (φ). The phase difference can be computed from the differences between corresponding elements in the SLC datasets, which are stored in the so-called interferogram. Finally, the terrain height is a function of the phase difference φ, the baseline B and some additional orbit parameters, and it is represented in a chosen reference system.
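Written out, a commonly used form of this phase-to-range relation for repeat-pass interferometry is sketched below; the exact factor is a textbook convention stated here as an assumption, not a formula quoted from this section:

$\Delta\varphi = \frac{4\pi}{\lambda}\,\delta r \qquad\Longleftrightarrow\qquad \delta r = \frac{\lambda\,\Delta\varphi}{4\pi}$

The terrain height z(x,y) then follows from δr together with the baseline length B, its orientation α and the orbit data.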
Coherence
The basis of INSAR is phase comparison over many pixels, which means that the phase between scenes must be statistically similar. The coherence is a measure of the phase noise of the interferogram. It is estimated by window-based computation of the magnitude of the complex cross-correlation coefficient of the SAR images: the interferometric coherence is defined as the absolute value of the normalized complex cross-correlation between the two signals. The correlation is always a number between 0 and 1. If corresponding pixels are similar, the correlation is high; if the pixels are not similar, i.e. not correlated to a certain degree, the phase varies significantly and the coherence is low, meaning that the particular image part is de-correlated. Low coherence (e.g. less than 0.1) indicates low phase quality and results in a noisy interferogram that causes problems in DTM generation. The coherence decreases with increasing change in the random component of the backscattered fields between passes (temporal decorrelation): physical changes in the surface roughness structure reduce the signal correlation, as do vegetation, water, shifting sand dunes, farm work (planting of fields), etc. Geometric distortions caused by steep topography and orbit inaccuracies also decorrelate the images.
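A minimal sketch of such a window-based coherence estimate is given below, assuming two co-registered complex SLC arrays s1 and s2; the window size is illustrative.

# Hedged sketch: windowed coherence magnitude of two co-registered SLC images.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, size=5):
    """Magnitude of the normalized complex cross correlation, estimated
    over a local window; values lie between 0 and 1."""
    cross = s1 * np.conj(s2)
    num = (uniform_filter(cross.real, size)
           + 1j * uniform_filter(cross.imag, size))
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, size)
                  * uniform_filter(np.abs(s2) ** 2, size))
    return np.abs(num) / np.maximum(den, 1e-12)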
6.2.9 Differential INSAR
Concept
Three SAR images of the same area are acquired in three different passes to generate two interferograms. The arithmetic difference of the two interferograms is used to produce a differential interferogram. The elements of the differential interferogram, in combination with orbit information and sensor characteristics, are used to compute surface changes.
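In code, the differencing step could look like this sketch, assuming the two interferograms are available as co-registered complex NumPy arrays:

# Hedged sketch: differential interferogram from two complex interferograms.
import numpy as np

def differential_interferogram(ifg_a, ifg_b):
    """Subtract the two interferometric phases; multiplying by the complex
    conjugate keeps the resulting phase wrapped to (-pi, pi]."""
    return np.angle(ifg_a * np.conj(ifg_b))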
6.2.10 Application of (D)INSAR

One of the possibilities of interferometry using data from space is the derivation of global DEMs. Radar's ability to penetrate cloud cover and to acquire imagery regardless of sunlight allows global topographic elevation mapping even in areas, such as the tropics, that were previously inaccessible. Using ERS, SRTM and RADARSAT imagery, DEMs with absolute height errors of less than 10 metres can be produced (assuming three or more GCPs are collected, and optimal circumstances). Horizontal resolutions of 20 metres with ERS data and of 10 metres with RADARSAT Fine Beam mode are possible, and the georeferencing accuracy can be better than 20 metres. In addition, the ability to create DEMs is beneficial for the generation of 3D imagery to assist in the identification of targets for military and environmental purposes. DEMs also provide critical information on geomorphic processes: interferometry is useful in detecting changes caused by alluvial fans, broad flood plain sedimentation patterns, sediment extraction, delta extensions, and the formation and movement of large dune fields.
Change detection is an important field of study and is based on differential interferometry, which allows super-sensitive change detection with measurements accurate to millimetres. It can be used for monitoring landslides and erosion, and to gather information on changes in areas where mining and water extraction have taken place (see Figure 6.11).
SAR can provide high-resolution imagery of earthquake-prone areas, high-resolution topographic data, and a high-resolution map of the co-seismic deformation generated by an earthquake. Of these, the last is probably the most useful, primarily because it is unique: other techniques are capable of generating images of the Earth's surface and topographic data, but no other technique
provides high-spatial-resolution maps of earthquake deformation. Crustal deformation is a direct manifestation of the processes that lead to earthquakes. Consequently, it is one of the most useful physical measurements we can make to improve estimates of earthquake potential. SAR interferometry can provide the requisite information.
Interferograms have also been calculated to help study the activity of volcanoes, through the creation of DEMs and the mapping of land deformation. Researchers have used over 300 images of ERS-1 data to create many interferograms of Mount Etna (Italy). They were able to measure the increase in the size of the volcano (change detection and deformation) caused by the pressure of the magma in its interior. They were also able to follow the shrinking of the volcano once its activity had subsided, as well as the changes in the surrounding topography caused by the lava flows. This technique can be used to monitor awakening volcanoes to help prevent mass destruction, and for local and international relief planning.
Interferometry is useful in coherence-based land use classification. In this
practice, the coherence of a repeat-pass interferogram provides information additional to that contained in the radar backscatter, and can be used as an extra input channel in land use classification. The use of coherence has proven successful for the separation of forests and open fields.
Interferometry is also useful in polar studies, including the measurement of flow velocities, tidal displacements and ice sheet monitoring. Researchers at the University of Alaska at Fairbanks were able to use interferometry to calculate the surface velocity field on the Bagley Icefield (Alaska) before and during the 1993-94 surge of the Bering Glacier, using ERS-1 data.
In terms of ocean dynamics, interferometry can be used to study ocean wave and current dynamics, wave forecasting, ship routing, and the placement and design of coastal and offshore installations.
Many major cities are located in areas undergoing subsidence as a result of the withdrawal of ground water, oil, gas or other minerals. Several metres of subsidence over several decades are not uncommon. Examples of cities with significant problems include Houston, Mexico City, Maracaibo and Kathmandu. High rates of subsidence can have a major impact on flood control, utility distribution and water supply.
Subsidence is also a result of natural processes such as limestone or marble
dissolution that forms karst topography. In western Pennsylvania, an underground coal fire has burned for many years causing localized subsidence and
threatening a much wider region with similar risks. Successive SAR images in
urban areas over periods of several months may be able to detect subsidence
directly. The surface structure of many parts of urban areas remains unchanged
over several years, suggesting that interferometry over several years may be
possible. Subsidence rates of several centimetres per year or more may be occurring in affected cities and should be detectable.
Satellite-based SAR interferometry has two important roles to play in polar studies. First, SAR interferometry can provide complete high-resolution, high-accuracy topographic data. Second, repeat-pass interferometry can be used to measure ice flow and assess other changes. The cover image shows the first direct measurement of ice flow velocity from space without ground control.
6.2.11 Supply market
6.2.12 SAR systems

The SAR systems listed in Table 6.1 (airborne) and Table 6.2 (spaceborne) have produced or still produce SAR data. However, not all of these systems generate data that can be processed into interferograms, because of inaccuracies in orbit or flight path data. This list, created in May 2004, is most probably not complete; for the latest situation refer to ITC's Database of Satellites and Sensors.
Table 6.1: Airborne SAR systems.

Instrument      Band/wavelength   Organization              Owner
Emisar          C, L band         Techn. Univ. of Denmark   Denmark
Pharus          C band            FEL-TNO                   Netherlands
Star-31         X band            Intermap                  Canada
Airsar/Topsar   P, L, C band      Nasa/JPL                  USA
Carabas         3-15 cm           Chalmers University/FOI   Sweden
Geosar          X, P band         JPL and others            USA
WINSAR          4 bands           Metratec                  USA

Table 6.2: Spaceborne SAR systems.

Instrument   Band           Owner    Remarks
Radarsat     C-band         Canada
ERS-1        C-band         ESA      Not operational anymore
ERS-2        C-band         ESA
Envisat      C-band         ESA
JERS-1       L-band         Japan
SRTM         C and X band   NASA
6.2.13 Trends
6.3 Laser scanning

6.3.1 Basic principle

In functional terms, laser scanning can be defined as a system that produces digital surface models. The system comprises an assemblage of various sensors, recording devices and software. The core component is the laser instrument, which measures distance; this is also referred to as laser ranging. When mounted on an aircraft, the laser range finder measures the near-vertical distance to the terrain at very short time intervals. Combining a laser range finder with sensors that measure the position (GPS) and attitude (IMU) of the aircraft makes it possible to determine a model of the terrain surface in terms of a set of (X,Y,Z) coordinates, following the polar measuring principle (Figure 6.12).
[Figure 6.12: The polar measuring principle of airborne laser scanning: the laser range finder, GPS and IMU together yield terrain coordinates (X,Y,Z); panels (a) and (b).]
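The polar measuring principle can be sketched in a few lines of code. This is a strong simplification (no scan-mirror model, lever arms or full attitude rotation matrices), and all names and values are purely illustrative.

# Hedged sketch of the polar measuring principle: platform position (GPS)
# plus laser range along the beam direction (after IMU attitude correction)
# gives a terrain point (X, Y, Z).
import numpy as np

def ground_point(platform_xyz, beam_unit_vector, laser_range):
    """Terrain coordinate = platform position + range * beam direction."""
    return np.asarray(platform_xyz) + laser_range * np.asarray(beam_unit_vector)

# One illustrative shot: platform at 1200 m, beam pointing almost straight down.
p = ground_point([2000.0, 500.0, 1200.0], [0.05, 0.0, -0.9987], 1100.0)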
6.3.2 Laser ranging

[Figure 6.14: Concept of laser ranging [42]: a laser source emits a pulse of power Pe; a photodiode detects the returned power Pr.]
The range R follows from the elapsed time t between emitting a pulse and receiving its reflection, and the speed of light c:

$R = \frac{1}{2}\,c \cdot t$.   (6.2)
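In code, Equation 6.2 is a one-liner; the elapsed time below is illustrative.

# Equation (6.2) in code: laser range from the elapsed pulse travel time.
c = 299_792_458.0  # speed of light [m/s]
t = 2.0e-6         # elapsed time between emission and reception [s]
R = 0.5 * c * t    # about 300 m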
Several laser range finders for airborne applications can record multiple returns from the same pulse. The cheaper single-return sensors register only one return pulse for every emitted pulse. Some of the range finders equipped with a single-return sensor allow the operator to select either first or last return ranging. The difference is especially relevant when flying over terrain with vegetation: many returns from first-return systems come from the top of the tree canopy, while the last returns are more likely to come from the ground. A multiple-return sensor can register the first and the last return, or even several returned pulses per emitted pulse.
[Figure 6.15: Multiple-return laser ranging: one emitted pulse produces first to fourth returns at increasing time and, hence, distance. Adapted from Mosaic Mapping Systems, Inc.]
Along with measuring the range, some laser instruments measure the intensity of the returned signal. The benefit of imaging lasers, however, is still a
matter of discussion in the laser community. The obtained images are monochromatic and are of lower quality than panchromatic images. A separate optical
sensor can produce much richer spectral information.
Critical to getting the right data at the right time are proper system calibration, accurate flight planning and execution (including the GPS logistics), and adequate software (Figure 6.17).
6.3.3 System characteristics

ALS produces a DSM directly comparable with what is obtained by image matching of aerial photographs, image matching being the core process of automatically generating a DSM from stereo photos. Alternatively, microwave radar can be used to generate DSMs and eventually DTMs. The question is: why go for ALS? There are several good reasons for using ALS for terrain modelling:
- A laser range finder measures distance by recording the elapsed time between emitting a signal and receiving the signal reflected from the terrain. The laser range finder is hence an active sensor (emitting the radiation that is to be sensed), so data collection does not depend on reflected sunlight (as opposed to recordings of passive sensors such as aerial cameras); the laser range finder can be flown by day and night.
- Different from indirect distance measuring as done with overlapping photographs, laser ranging does not depend on surface/terrain texture.
- Laser ranging is less weather dependent than using passive optical sensors. A laser cannot penetrate clouds as microwave radar can, but it can be flown at low altitude, thus very often below the cloud ceiling.
- The laser beam is very narrow, so it can see through the tree canopy, unless it is very dense. ALS can also see objects that are much smaller than the footprint of the laser beam, so we can use it for mapping power lines, etc.
6.3.4 Historical development

The first airborne laser ranging experiments were conducted in North America in the 1970s, aimed at bathymetric applications. There are currently half a dozen airborne laser bathymetry systems in operation. These are heavy and very expensive systems, commonly employing lasers at two different wavelengths, near-infrared and green. Near-infrared light has good atmospheric penetration but does not penetrate water or the ground: the near-infrared beam is reflected by the water surface, while the green beam penetrates (clear) water and is reflected from the bottom of the water body. Measuring the time difference between the returns of the co-aligned laser pulses allows the water depth to be determined (shallow water, not deeper than 80 m). For precise topographic surveys (using near-infrared lasers), progress was first required in satellite positioning (GPS), which allowed the development of laser profiling in the late 1980s.
The idea of creating surface models and true 3D models by laser ranging can also be applied to problems where an aircraft does not offer the right perspective, and where ground-based cameras would fail to do a good job (e.g. because the objects or scene to be surveyed have little texture). Ground-based laser scanners, also called terrestrial laser scanners (TLSs), combine the concepts of tacheometry (polar measuring, e.g. by Total Station; explicit distance) and photogrammetry (bundle of rays; implicit distance). TLSs (e.g. Leica Cyrax 2500, Riegl 3D-Imaging Sensor LMS-Z210) do not require a prism/reflector at a target point and can yield a very high density of points in a very short time. Such instruments may have a scanning range of 1-200 m and record 1000 points per second with an accuracy of 2-6 mm.
TLSs are particularly suited for surveying in dangerous or inaccessible environments, and for high precision work. All kinds of civil engineering, architectural, and archaeological surveys are the prime application areas. Figure 6.18
illustrates a typical TLS process chain. The TLS scans an object, usually from different positions, to enable true 3D modelling of the object of interest (e.g. a bridge, a building, a statue). For each viewing position of the TLS we obtain a point cloud, which can be shown as a 2D picture by colour coding the ranges to the object's surface. We can also co-register all point clouds into one common 3D coordinate system and then fit simple geometric shapes (cylinders, spheres, cubes, etc.) to the (X,Y,Z) data. This way we can create CAD models and use computer-aided design software, e.g. for intelligible visualisations. An even more recent application area of ground-based laser scanning is mobile mapping and automatic vehicle navigation.
There is also satellite laser ranging, used to measure the distance from a
ground station to a satellite with high precision. Moreover, NASA operates laser
systems on spaceborne platforms with a particular interest in ice and clouds.
There are also military/spy laser satellites orbiting.
6.3.5 Supply market
The acceptance of ALS data is growing rapidly. Growth rates in the commercial sector in terms of installed instrument base have averaged about 25 percent per year since 1998, with projections for an installed base of 150-200 sensors by 2005. There is also a growing number of value-added resellers and product developers who include laser mapping and laser data analysis as an integral part of their activities. The website www.airbornelasermapping.com tries to maintain a current inventory of ALS organisations; in 2001 there were already 70 commercial ones throughout the world. By 2004 the ALS supply landscape had become diverse. A potential client of ALS products has the option of purchasing a total service for a tailor-made DSM and products derived from it, purchasing a value-added product (e.g. derived from a national DSM), purchasing a part of a national DSM and processing it further in-house, or buying ALS hardware and software and trying to make it all work.
The Netherlands was the first country to establish a DSM for the entire territory using ALS. The product (AHN) became available for any part of the country by 2004, with a density of at least 1 point per 16 m² (1 point per 32 m² in forest areas). Several other states in Europe are currently in the process of creating a countrywide high-density DSM based on ALS, typically with a spacing between points in the order of 10 m.
With the increasing availability of systems and services, researchers are looking into possibilities of refined processing (e.g. further reduction of systematic errors and elimination of blunders) and automated post-processing. The latter refers to attempts to derive a DTM from the DSM, and to classify and delineate terrain features, in particular buildings. Another research topic concerns the derivation of terrain breaklines. Meanwhile, the manufacturers will continue to aim for higher measuring speeds and higher laser power, so that higher density DSMs can be generated.
Summary
In this chapter, the principles of imaging radar, interferometric SAR and laser scanning, and their respective applications, have been introduced. The microwave interactions with the surface have been explained to illustrate how radar images are generated and how they can be interpreted. Radar sensors measure distances and detect backscattered signal intensities. In radar processing, special attention has to be paid to geometric corrections and speckle reduction for improved interpretability. Radar data have many potential applications in the fields of geology, oceanography, hydrology, environmental monitoring, land use and land cover mapping, and change detection. The concept and applications of INSAR were also explained, including how differential INSAR allows the detection of surface deformation.
In the second part, the principle of laser scanning and its historical development have been outlined. The principal product is a digital surface model. Capabilities of airborne laser scanning and operational aspects have been introduced, and current trends reviewed. Typical applications of laser scanning data are forest surveys, surveying of coastal areas and sand deserts, flood plain mapping, power line and pipeline mapping, and 3D city modelling.
Questions
The following questions can help you to study Chapter 6.
1. List three major differences between optical and microwave remote sensing.
2. What type of information can you extract from imaging radar data?
3. What are the limitations of radar images in terms of visual interpretation?
4. What kind of processing is necessary to prepare radar images for interpretation? Which steps are obligatory and which are optional?
5. Search the Internet for successful applications of radar images from ERS1/2, Radarsat and other sensors.
6. What are the components of an airborne laser scanning system, and what
are the operational aspects to consider in planning and executing an ALS
mission?
7. What are the key performance characteristics of ALS, and what are the
major differences with airborne microwave radar?
8. What makes ALS especially suited for the mentioned applications?
Chapter 7
Remote sensing below the ground surface
7.1 Introduction
When it comes to exploration for the resources of the solid earth, methods are needed that can see deep into the Earth to detect, directly or indirectly, resources of minerals, hydrocarbons and groundwater. Similar capabilities may be required at a more local scale for foundation studies or environmental pollution problems. Methods that map the Earth's surface in or near the visible spectrum are a good start, and the interpretation of surface geology (for example from RS imagery) allows inference of what may lie below. However, there are other methods that actually probe more deeply into the ground by making use of the physical or chemical properties of the buried rocks. These properties, or changes in properties from one rock type to another, are detected by carrying out geophysical surveys with dedicated sensors such as gravimeters, magnetometers and seismometers. Interpretation of geophysical surveys, along with all other available data, is the basis for planning further investment in exploration, such as drilling, or using more expensive, dedicated geophysical techniques to probe in more detail into the most promising locations. Applied geophysical techniques form an important part of practical earth science, and the theory and application of the available methods is set out in several introductory textbooks [27, 19, 12, 37].
Figure 7.2: Sea floor topography in the western Indian Ocean, revealed by satellite altimetry of the sea surface (GEOSAT). Note the mid-ocean ridges and the transform faults either side of them. Note also the (largely submarine) chains of volcanic islands between India and Madagascar. The scene is roughly 3800 km across (from [35]).
[Figure 7.3: (a) geophysical image; (b) corresponding geological map.]
Summary
Geophysical methods provide a wide range of possible ways of imaging the subsurface. Some are used routinely, others only for special applications. All are potentially useful to the alert geoscientist.
Gravity and magnetic anomaly mapping has been carried out for over 50 years. While most countries have national programmes, achievements to date vary somewhat from country to country. The data are primarily useful for geological reconnaissance at scales from 1:250,000 to 1:1,000,000. Gamma-ray spectrometry, flown simultaneously with aeromagnetic surveys, has joined the airborne geophysical programmes supporting geological mapping in the past decade. All three methods are therefore used primarily by national geological surveys to support basic geoscience mapping, alongside conventional field and photo-geology, and to set the regional scene for dedicated mineral and oil exploration. It is normal that the results are published at nominal cost for the benefit of all potential users.
Geophysical surveys for mineral exploration are applied on those more limited areas (typically at scales 1:50,000 to 1:10,000) selected as being promising for closer (and more expensive!) examination. Typically this might start with an airborne EM and magnetometer survey that would reveal targets suitable for detailed investigation with yet more expensive methods (such as EM and IP) on the ground. Once accurately located in position (X,Y) and depth, the most promising anomalies can be tested further by drilling.
Groundwater exploration has historically relied on electrical sounding and profiling, but has been supplemented in some cases by EM profiling and sounding and shallow seismic surveys. Regrettably, poor funding often dictates that such surveys are less thorough and systematic than is the case in mineral exploration.
Questions
The following questions can help you to study Chapter 7.
1. Make a list of geophysical maps (and their scales) that you are aware of in
your own country (or that part of it you are familiar with).
2. Trace the geophysical features revealed in Figure 7.3(a) on a transparent
overlay and compare your result with the geological map in Figure 7.3(b).
The following are typical exam questions:
1. Why is it necessary to use geophysical methods to explore the subsurface?
What are the limitations of visual RS methods in this respect?
2. Make a list of the physical properties of rocks that have been used as the
basis of geophysical mapping methods.
3. In the process of systematic exploration for Earth resources, why is it important to use inexpensive methods for the reconnaissance of large areas
before using more expensive methods over much smaller ones?
Chapter 8
Radiometric correction
8.1 Introduction
The previous chapters have examined remote sensing as a means of producing image data for a variety of purposes. The following chapters deal with processing of the image data for rectification, visualization and interpretation. The first step in the processing chain, often referred to as pre-processing, involves radiometric and geometric corrections (Figure 8.1). The radiometric aspects are dealt with in this chapter, the geometric aspects in the following one. Three groups of radiometric corrections are identified:
- cosmetic rectification to compensate for data errors,
- relative atmospheric correction based on ground reflectance properties, and
- absolute atmospheric correction based on atmospheric process information.
The radiance values of reflected polychromatic solar radiation and/or the emitted thermal radiance from a specific target (pixel) at the Earth's surface are the most valuable information a researcher can obtain from an RS scanner. In the absence of an atmosphere, these radiances leaving the ground would reach the orbiting sensor practically unaltered at any wavelength; in other words, what is recorded by the satellite would correspond directly to the radiance leaving the target on Earth in the wavelength range (band) under consideration.
[Figure 8.1: Pre-processing: radiometric corrections (cosmetic corrections and atmospheric corrections) followed by geometric corrections.]
13.4). Since solar reflection and Earth thermal emission occur at different wavelengths, the behaviour of one is not an indication of the other.
- The processes of atmospheric attenuation, i.e. scattering and absorption, are both wavelength dependent, and affect the two sectors of the spectrum differently.
- Because of this, AC techniques are applied at the monochromatic level (individual wavelengths): the attenuation of energy is calculated at every individual wavelength and then integrated over the spectral band of the sensor by mathematical integration.
- Atmospheric components affect different areas of the spectrum in different ways.
8.3.1 Cosmetic corrections

These procedures are not true AC techniques. Their objective is to correct visible errors and noise in the image data. No atmospheric model of any kind is involved in these correction processes; instead, corrections are achieved using specially designed filters and image stretching and enhancement procedures. Nowadays these corrections are typically executed (if required) at the satellite data receiving stations or image pre-processing centres, before the data reach the final user. All applications require this form of correction. True AC methods, if required, follow these cosmetic modifications.
Typical problems requiring cosmetic corrections are:
- periodic line dropouts;
- line striping; and
- random noise or spikes.
These effects can be identified visually and automatically, and are illustrated here on a Landsat Enhanced Thematic Mapper (ETM) image of Enschede (Figure 8.3).
Figure 8.3: Original Landsat ETM image of Enschede and environs (a), and corresponding Digital Numbers (DN) of the indicated subset (b).
Figure 8.4: The image with periodic line dropouts (a) and the corresponding DN-values (b). All erroneous DN-values in these examples are shown in bold.
The first step in the restoration process is to calculate the average DN-value per scan line for the entire scene. The average DN-value for each scan line is then compared with this scene average. Any scan line deviating from the average by more than a designated threshold value is identified as defective. In regions of very diverse land cover, better results can be achieved by considering the histogram for sub-scenes and processing these sub-scenes separately.
The next step is to replace the defective lines. For each pixel in a defective line, an average DN is calculated from the DNs of the corresponding pixels in the adjacent (preceding and succeeding) scan lines.
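The two-step procedure just described can be sketched in a few lines of Python/numpy. The threshold value and the treatment of the image borders are assumptions made for illustration only; the text does not prescribe them.

```python
import numpy as np

def correct_line_dropouts(image, threshold=20.0):
    """Flag scan lines whose mean DN deviates strongly from the scene
    mean, then replace each flagged line by the average of the lines
    directly above and below it (a minimal sketch; the threshold would
    be tuned per scene, or per sub-scene, in practice)."""
    img = image.astype(float)
    scene_mean = img.mean()
    line_means = img.mean(axis=1)                # average DN per scan line
    defective = np.abs(line_means - scene_mean) > threshold

    corrected = img.copy()
    for row in np.flatnonzero(defective):
        above = corrected[row - 1] if row > 0 else corrected[row + 1]
        below = corrected[row + 1] if row < img.shape[0] - 1 else corrected[row - 1]
        corrected[row] = 0.5 * (above + below)   # average of adjacent lines
    return corrected.round().astype(image.dtype)
```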
Figure: The restored image (a) and the corresponding DN-values (b), with the defective lines replaced by averages of the adjacent lines.
Figure: The image with line striping (a) and the corresponding DN-values (b).
Though several procedures can be adopted to correct this effect, the most popular is histogram matching. Separate histograms corresponding to each detector unit are constructed and matched. Taking one response as the standard, the gain (rate of increase of DN) and offset (relative shift of mean) of all other detector units are adjusted accordingly, and new DN-values are computed and assigned.
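A minimal numpy sketch of such a destriping step, assuming the detectors scan successive lines cyclically (as with the 16 detectors per band of the Landsat TM/ETM instrument). The full histogram matching of an operational implementation is reduced here to matching the mean and standard deviation of each detector to those of the whole image.

```python
import numpy as np

def destripe(image, n_detectors=16):
    """Adjust gain and offset of each detector unit so that the
    statistics of its lines match those of the whole image."""
    img = image.astype(float)
    ref_mean, ref_std = img.mean(), img.std()    # reference statistics

    out = img.copy()
    for d in range(n_detectors):
        lines = out[d::n_detectors]              # all lines from detector d
        gain = ref_std / lines.std()             # adjust the spread
        offset = ref_mean - gain * lines.mean()  # adjust the mean
        out[d::n_detectors] = gain * lines + offset
    return np.clip(out.round(), 0, 255).astype(image.dtype)
```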
Figure: The image with random noise (a) and the corresponding DN-values (b), showing isolated spikes with extreme values (0 and 255).
8.3.2 Relative atmospheric correction
8.3.3 Absolute atmospheric correction
These methods require a description of the components of the atmospheric profile. Their output is an image that matches the reflectance of the ground pixels with a maximum estimated error of 10%, provided the atmospheric profile is adequate. Such an image can be used for flux quantification, assessment of parameter evolution, etc., as mentioned above. The advantage of these methods is that ground reflectance can be evaluated under any atmospheric condition, altitude and relative geometry between Sun and satellite. The disadvantage is that the atmospheric profile data required by these methods are rarely available. Given this limitation, a sub-classification of absolute AC methods could be based on the accuracy of the method relative to the effort of obtaining the required data.
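As a minimal sketch of how the output of such a radiative transfer run is applied, the following function inverts the simplified Lambertian relation between at-sensor radiance and surface reflectance. The variable names are illustrative and not taken from any particular RTM interface.

```python
import numpy as np

def radiance_to_reflectance(L_sensor, L_path, E_down, T_up):
    """Convert at-sensor radiance to surface reflectance using
    quantities obtained from a radiative transfer model run:
    the path radiance L_path, the total downwelling irradiance
    at the surface E_down, and the upward transmittance T_up.
    A simplified sketch assuming a Lambertian surface."""
    return np.pi * (L_sensor - L_path) / (T_up * E_down)
```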
Table: Characteristics of the LOWTRAN/MODTRAN and 6S radiative transfer models.

                         LOWTRAN/MODTRAN                              6S
Solution method          Two-stream, including atmospheric            Successive orders of scattering
                         refraction; discrete ordinates also
                         in MODTRAN3
Spectral resolution      20 cm-1 (LOWTRAN); 2 cm-1 (MODTRAN)          —
Clouds                   Eight cloud models; user-specified           —
                         optical properties
Aerosols                 Four optical models                          —
Gas absorption           Principal and trace gases                    —
Atmospheric profiles     Standard and user-specified                  —
Surface characteristics  Lambertian, no built-in models               —
Output                   Radiance                                     —
Geometry                 Input file                                   —
Figure 8.8: Model atmospheric profiles for midlatitude summer (a), midlatitude winter (b), US62 standard (c) and tropical (d) atmospheres. Each panel plots pressure (P), temperature (T) and the H2O and O3 densities against altitude.
Summary
Radiometric corrections can be divided into relatively simple cosmetic rectifications and atmospheric corrections. The cosmetic modifications are useful to reduce or compensate for data errors.
Atmospheric corrections constitute an important step in the pre-processing
of remotely sensed data. Their effect is to re-scale the raw radiance data provided by the sensor to reflectance by correcting for atmospheric influence. They
are important for generating image mosaics and for comparing multitemporal
remote sensing data, but are also a critical prerequisite for the quantitative use of
remote sensing data, for example to calculate surface temperatures (see Chapter 13), or to determine specific surface materials in spectrometer data (Chapter 14).
Following an overview of different techniques to correct data errors, absolute correction methods were explained. Focus was placed on radiative transfer
models. Note that such corrections should be applied with care, and only after
understanding the physical principles behind these corrections.
Questions
The following questions can help to study Chapter 8.
1. Should radiometric corrections be performed before or after geometric corrections, and why?
2. Why is the effect of haze more pronounced in shorter wavelength bands?
3. In a change detection study, if there were images from different years but
from the same season, would it still be necessary to perform atmospheric
corrections? Why or why not?
4. Why is the selection of the appropriate standard atmosphere for a RTM
critical?
The following are typical exam questions:
1. What radiometric errors can be introduced by malfunctioning satellite sensors?
2. What are the differences between line dropouts and line striping?
Chapter 9
Geometric aspects
9.1 Introduction
Chapters 4, 5 and 6 described some of the geometric characteristics of different sensor and platform systems, and explained how these different characteristics affect the geometry of the resulting image products. For example, Chapter 4 outlined how the tilt and roll of an aircraft during a photo mission cause gradual changes in scale across an aerial photograph. These distortions are quite different from those caused by the rotation of the Earth during the scanning of an image with a spaceborne multispectral scanner (Chapter 5), or from the effects of viewing the terrain under an oblique angle with side-looking airborne radar (Chapter 6). After considering these different geometric characteristics and correcting for their effects, we are able to use remotely sensed data to:
1. Derive 2-dimensional (X, Y) and 3-dimensional (X, Y, Z) coordinate information.
One of the most important applications of remote sensing is to use photographs or images as an intermediate product from which maps are made. 2D geometric descriptions of objects (points, lines, areas) can be derived relatively easily from a single image or photo through the process of georeferencing. More complicated orientation procedures have to be applied before 3D geometric descriptions (2.5D terrain relief, 3D objects as volumes) can be derived from overlapping, stereoscopic pairs of images or photos, or whenever the relief displacement is too large to be tolerated.
2. Visualize the image data in a GIS environment.
Sometimes it can be helpful to combine image data with vector data, for
example to study cadastral boundaries in the context of land cover. In such
cases, the raster image data form a backdrop for the vector data. To enable
such integration, the image data must be georeferenced to the coordinate
system of the vector data.
3. Merge different types of image data in a GIS for integrated processing
and analysis.
Instead of using them to make maps, image data may be incorporated directly as raster layers in a GIS. For example, in a monitoring project it
might be necessary to compare land cover maps and classified multispectral Landsat and SPOT data of various dates. All these different data sets
need to be georeferenced to the same geometric grid of a given coordinate
system before they can be processed simultaneously. That means that the
pixels of all raster data sets (rasterized maps and image data) have to be
geocoded. This is a rather complex process in which the various sets of
georeferenced pixels are resampled, i.e. rearranged to a common system
of columns and rows.
Traditionally, the subjects of georeferencing and orientation have been addressed
by photogrammetry. This is the discipline concerned with making spatial measurements on images and with creating accurate products from those measurements,
for example Digital Terrain Models (DTMs). Today, not only aerial photographs
but also various other types of image data are used in the field of digital photogrammetry. Most digital image processing software includes at least simple
routines for georeferencing and geocoding. Deriving 3D measurements from
radar data is the focus of the related disciplines of interferometry and radargrammetry (see Chapter 6).
The remainder of this chapter is divided into two sections. Section 9.2 outlines the procedures used to determine 2D coordinates from photos and images.
As noted above, this is a relatively simple procedure in which a transformation
between two 2D coordinate systems (image and terrain) is determined. However, it is only applicable if the relief displacement is small enough to be neglected. Application of this transformation enables the terrain coordinates of all image points subsequently to be determined. Note that the general concepts of spatial referencing and map projections are introduced in the Principles of Geographic Information Systems textbook.
Section 9.3 discusses the more complex procedures used to deal with the effects of relief differences in the terrain and to determine 3D coordinates from stereo pairs. These procedures include: monoplotting, an approach that corrects for terrain relief during the digitizing of terrain features from aerial photos and results in relatively accurate (X, Y) coordinates; orthoimage production, an approach that corrects image data for terrain relief and stores the image in a specific map projection, so that orthoproducts can be used as a backdrop to other data or to determine directly the (X, Y) geometry of the features of interest; and stereoplotting, which is used to extract 3D information from stereo pairs, i.e. partially overlapping images that can be viewed and measured in three dimensions. Examples of 3D products include large-scale databases used for topographic and cadastral mapping, and DTMs acquired for the design of civil engineering projects.
Figure 9.1: Coordinate system of the image defined by rows and columns (a), and map coordinate system with x- and y-axes (b).

9.2 Two-dimensional approaches
9.2.1 Georeferencing
The simplest way to link an image to a map projection system is to use a geometric
transformation. A transformation is a function that relates the coordinates of two
systems. A transformation relating (x, y) to (i, j) is typically defined by linear
equations, such as x = 3 + 5i and y = 2 + 2.5j.
Using the above transformation, for example, image position (i = 3, j = 4)
relates to map coordinates (x = 18, y = 12). Once such a transformation has been
determined, the map coordinates for each image pixel can be calculated. Images
for which such a transformation has been carried out are said to be georeferenced.
It allows the superimposition of vector data and the storage of the data in map
coordinates when applying on-screen digitizing. Note that the image in the case
of georeferencing remains stored in the original (i, j) raster structure, and that
its geometry is not altered. As we will see later on, transformations can also
be used to change the actual shape of imagery, and thus make it geometrically
equivalent to a true map.
The process of georeferencing involves two steps: selection of the appropriate type of transformation, and determination of the transformation parameters.
The type of transformation depends mainly on the sensor-platform system used.
For aerial photographs (of a flat terrain) usually a so-called perspective transformation is used to correct for the effect of pitch and roll (Section 3.3.1). A more
general type is the polynomial transformation, which enables 1st, 2nd up to nth order
transformations. In many situations a 1st order transformation is adequate. A
1st order transformation relates map coordinates (x, y) with image coordinates
(i, j) as follows:
x = a + bi + cj    (9.1)
y = d + ei + fj    (9.2)
Table 9.1: Measured image (i, j) and map (x, y) coordinates of five ground control points, the transformed coordinates (xc, yc) and the residual errors (dx, dy).

  i     j     x     y      xc       yc      dx      dy
 254    68   958   155   958.552  154.935   0.552  -0.065
 149    22   936   151   934.576  150.401  -1.424  -0.599
  40   132   916   176   917.732  177.087   1.732   1.087
  26   269   923   206   921.835  204.966  -1.165  -1.034
 193   228   954   189   954.146  189.459   0.146   0.459
The least-squares adjustment of the 1st order transformation to these GCPs yields xc = 902.76 + 0.206i + 0.051j and yc = 152.579 − 0.044i + 0.199j.
For example, for the pixel corresponding to GCP 1 (i=254 and j=68) we can
calculate new coordinates xc and yc as 958.552 and 154.935, respectively. These
values deviate slightly from the measured coordinates of the GCPs, because the
least-squares adjustment provides the best overall fit. The errors that remain
after the transformation are called residual errors, and are listed in the table as dx
and dy . Their magnitude is an indicator of the quality of the transformation. The
residual errors can be used to analyse which GCPs have the largest contribution
to the errors. This may indicate, for example, a GCP that has been inaccurately
identified.
The overall accuracy of a transformation is usually expressed by the Root
Mean Square Error (RMS error), which calculates a mean value from the individual residuals. The RMS error in x-direction, mx , is calculated using the following
equation:
mx = √( (1/n) Σi=1..n dxi² )    (9.3)
mtotal = √( mx² + my² ).    (9.4)
For the example data set given in Table 9.1, the residuals were calculated. The respective mx, my and mtotal are 1.159, 0.752 and 1.381. The RMS error is a quantitative method to check the accuracy of the transformation. However, the RMS error does not take into account the spatial distribution of the GCPs. It cannot, therefore, tell us which parts of the image are accurately transformed. Furthermore, the RMS error is only valid for the area that is bounded by the GCPs. In the selection of GCPs, therefore, points should be well distributed and include locations near the edges of the image.
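The complete procedure (estimating the transformation parameters from the GCPs, computing the residuals and evaluating the RMS errors of Equations 9.3 and 9.4) can be reproduced with a short numpy sketch using the data of Table 9.1.

```python
import numpy as np

# GCP measurements from Table 9.1: image (i, j) and map (x, y) coordinates.
i = np.array([254, 149, 40, 26, 193], dtype=float)
j = np.array([68, 22, 132, 269, 228], dtype=float)
x = np.array([958, 936, 916, 923, 954], dtype=float)
y = np.array([155, 151, 176, 206, 189], dtype=float)

# Design matrix for the 1st order transformation (Equations 9.1 and 9.2).
A = np.column_stack([np.ones_like(i), i, j])

# Least-squares estimates of (a, b, c) for x and (d, e, f) for y.
abc, *_ = np.linalg.lstsq(A, x, rcond=None)
defc, *_ = np.linalg.lstsq(A, y, rcond=None)

dx = A @ abc - x                   # residual errors in x
dy = A @ defc - y                  # residual errors in y
mx = np.sqrt(np.mean(dx**2))       # RMS error, Equation 9.3
my = np.sqrt(np.mean(dy**2))
m_total = np.sqrt(mx**2 + my**2)   # Equation 9.4
print(mx, my, m_total)             # approx. 1.159, 0.752, 1.381
```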
Figure: Geocoding of an image: a transformation and grid overlay, followed by resampling, produce the corrected image.
9.2.2 Geocoding
Figure: Transformation types: conformal (4 parameters), affine (6 parameters), projective and polynomial.
Figure: Resampling methods: nearest neighbour, bilinear interpolation and cubic convolution.
Figure 9.6: The effect of nearest neighbour, bilinear and cubic convolution resampling on the original data.
Figure 9.7: Illustration of the collinearity concept: the image point (xi, yi), the lens centre (X0, Y0, Z0) and the terrain point (X, Y, Z) all lie on one line.
9.3.1 Orientation
Orientation results in a formula to calculate image coordinates (x, y or row, column) from terrain coordinates (X, Y , Z).
Figure: Interior geometry of an aerial sensor (film or area CCD): the fiducial marks, principal point and principal distance define the image (row, column) system relative to the projection centre of the lens.
Figure: Two overlapping images acquired from camera positions at times t1 and t2.
Figure: Processing of image coordinates (x, y) into terrain coordinates (X, Y, Z).
9.3.2 Monoplotting
Suppose you need to derive accurate planimetric (X, Y ) positions from an aerial
photograph expressed in a specific map projection. This can be achieved for flat
terrain using a vertical photograph and a georeferencing approach. Recall from
the earlier discussion of relief displacement (Figure 4.7) how elevation differences lead to distortions in the image, preventing the use of such data for direct
measurements. Therefore, if there are significant terrain relief differences, the resulting relief displacement has to be corrected for. For this purpose the method
of monoplotting has been developed.
Monoplotting is based on the reconstruction of the position of the camera at
the moment of image exposure with respect to the GCPs, i.e. the terrain. This
is achieved by identification of a number (at least four) of GCPs for which both
the photo and map coordinates are known. The applied DTM should be stored
9.3.3 Orthoimage production
9.3.4 Stereo restitution
Stereo restitution is closely related to stereo observation (Section 11.2.3). Just as our two eyes allow us to perceive our environment in 3D, two overlapping images can be arranged under a stereoscope or on a computer screen and viewed stereoscopically, that is, in 3D. The arranging required is somewhat analogous to the relative orientation of an image pair as explained above.
The resulting image pair allows us to form a stereo model of the terrain imaged,
which can be used to digitize features or make measurements in 3D. In effect we
arrange the position and angular orientation (attitude) of the sensor to duplicate
its exact configuration during image acquisition at times t1 and t2 . As described
in Section 4.7, the 60% forward overlap guarantees that the entire surveyed area
can be viewed and processed stereoscopically. Stereo pairs can also be derived
from other sensors, such as multispectral scanners and imaging radar.
The measurements made in a stereo model make use of a phenomenon called
parallax (Figure 9.11). Parallax refers to the fact that an object photographed from
different camera locations (e.g. from a moving aircraft) has different relative positions in the two images. In other words, there is an apparent shift of an object
as it is observed from different locations. Figure 9.11 illustrates that points at
two different elevations, regardless of whether it is the top and bottom of a hill
or of a building, experience this relative shift. The measurement of the difference
in position is a basic input for elevation calculations.
A stereo model enables parallax measurement using a special 3D cursor. If
the stereo model is appropriately oriented, the parallax measurements yield (X,
Y , Z) coordinates. Analogue systems use hardcopy images and perform the
computation by mechanical, optical or electrical means. Analytical systems
also use hardcopy images, but do the computation digitally, while in modern
digital systems both the images and the computation are digital. Using digital
instruments we can not only perform spot elevation measurements, but also calculate complete DTMs for the overlapping part of the two images. Recall, however, that reliable elevation values can only be extracted if the orientation steps were carried out accurately, using reliable ground control points.

Figure 9.11: Parallax: the apparent shift of an object observed from two different camera positions; the parallaxes pa and pb of two points are measured in the stereo model.
Summary
This chapter has introduced some general geometric aspects of dealing with image data. A basic principle in dealing with remote sensing sensors is terrain
relief, which can be neglected (2D approaches) or taken into account (3D approaches). In both approaches there is a possibility to keep the image data stored
in their (i, j) system and relate it to other data through coordinate transformations (georeferencing and monoplotting). The other possibility is to change the
image raster into a specific map projection system using resampling techniques
(geocoding and orthoimage production). A true 3D approach is that of stereoplotting, which applies parallax differences as observed in stereo pairs to measure (X, Y , Z) coordinates of terrain and objects.
Questions
The following questions can help you to study Chapter 9.
1. Suppose your organization develops a GIS application for road maintenance. What would be the consequences of using georeferenced versus
geocoded image data as a backdrop?
2. Think of two situations in which image data are applied and in which you
need to take relief displacement into account.
3. For a transformation of a specific image into a specific coordinate system,
an mtotal error of two pixels is given. What additional information do you
need to assess the quality of the transformation?
The following are typical exam questions:
1. Compare an image and map coordinate system (give figure with comment).
2. What is the purpose of acquiring stereo pairs of image data?
3. What are ground control points used for?
4. Calculate the map position (x, y) for image position (10, 20) using the two following equations: x = 10 + 5i − j and y = 5 + 2i + 2j.
5. Explain the purpose of monoplotting. What inputs do you need?
Chapter 10
Image enhancement and
visualisation
10.1 Introduction
Many of the figures in the previous chapters have presented examples of remote
sensing image data. There is a need to visualize image data at most stages of
the remote sensing process. For example, the procedures for georeferencing, explained in Chapter 9, cannot be performed without visual examination to measure the location of ground control points on the image. However, it is in the
process of information extraction that visualization plays the most important
role. This is particularly so in the case of visual interpretation (Chapter 11), but
also during automated classification procedures (Chapter 12).
Because many remote sensing projects make use of multispectral data, this
chapter focuses on the visualization of colour imagery. An understanding of
how we perceive colour is required at two main stages in the remote sensing
process. In the first instance, it is required to produce optimal pictures from
(multispectral) image data on the computer screen or as a (printed) hard-copy.
Subsequently, the theory of colour perception plays an important role in the interpretation of these pictures. To understand how we perceive colour,
Section 10.2 deals with the theory of colour perception and colour definition.
Section 10.3 explains the basic principles you need to understand and interpret
the colours of a displayed image. Section 10.4 introduces some filter operations
for enhancing specific characteristics of the image, while the last section (Section 10.5) introduces the concepts of colour composites and image fusion.
10.2 Perception of colour
Colour perception takes place in the human eye and the associated part of the
brain. Colour perception concerns our ability to identify and distinguish colours,
which, in turn, enables us to identify and distinguish entities in the real world.
It is not completely known how human vision works, or what exactly happens
in the eyes and brain before someone decides that an object is (for example) light
blue. Some theoretical models, supported by experimental results, are however
generally accepted. Colour perception theory is applied whenever colours are
reproduced, for example in colour photography, TV, printing and computer animation.
10.2.1 Tri-stimuli model
Figure: Sensitivity of the human eye to blue, green and red light, as a function of wavelength (400–700 nm).
10.2.2 Colour spaces
The tri-stimuli model of colour perception is generally accepted. It states that there are three degrees of freedom in the description of a colour. Various three-dimensional spaces are used to describe and define colours. For our purpose the following three are sufficient:
1. Red Green Blue (RGB) space, based on the additive principle of colours.
2. Intensity Hue Saturation (IHS) space, which relates most closely to our intuitive perception of colour.
3. Yellow Magenta Cyan (YMC) space, based on the subtractive principle of colours.
In the additive colour scheme, all visible colours can be expressed as combinations of red, green and blue, and can therefore be plotted in a three-dimensional space with R, G and B along the axes. The space is bounded by minimum and maximum values (intensities) for red, green and blue, defining the so-called colour cube.
Figure: The RGB colour cube, with black [0,0,0], white [1,1,1], red [1,0,0], blue [0,0,1], magenta [1,0,1], cyan [0,1,1] and yellow [1,1,0] at its corners, and the achromatic line running from black to white.
Figure 10.4: Correspondence between the RGB and IHS colour systems.
Figure 10.4 illustrates the correspondence between the RGB and the IHS systems. Although the mathematical model of this description is more complicated, the description itself is more natural. For example, a light, pale red is easier to imagine than a lot of red with considerable amounts of green and blue. The result, however, is the same. Since the IHS scheme deals with colour perception, which is somewhat subjective, complete agreement about the definitions does not exist. It is safe to define intensity as the sum of the R, G and B values. On the main
10.3
In this section, various ways of visualizing single and multi-band image data
are introduced. The section starts with an explanation of the concept of an image histogram. The histogram has a crucial role in realizing optimal contrast of
images. An advanced section deals with the application of RGB-IHS transformation to integrate different types of image data.
10.3.1 Histograms
A number of important characteristics of a single-band image, such as a panchromatic satellite image, a scanned monochrome photograph or a single band from a multi-band image, are found in the histogram of that image. The histogram describes the distribution of the pixel values (Digital Numbers, DN) of that image. In the usual case, the DN-values range between 0 and 255. A histogram indicates the number of pixels for each value in this range. In other words, the histogram contains the frequencies of DN-values in an image. Histogram data can be represented either in tabular form or graphically. The tabular representation (Table 10.1) normally shows five columns. From left to right these are:
DN: Digital Numbers, in the range [0. . . 255]
Npix: the number of pixels in the image with this DN (frequency)
Perc: frequency as a percentage of the total number of image pixels
CumNpix: cumulative number of pixels in the image with values less than
or equal to DN
CumPerc: cumulative frequency as a percentage of the total number of
image pixels
Histogram data can be further summarized in some characteristic statistics:
mean, standard deviation, minimum and maximum, as well as the 1% and 99%
values (Table 10.2). Standard deviation is a statistical measure of the spread of
the values around the mean. The 1% value, for example, defines the cut-off value
below which only 1% of all the values are found. 1% and 99% values can be used
to define an optimal stretch for visualization.
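A minimal numpy sketch of how these histogram products and summary statistics can be computed for a single 8-bit band; the function and variable names are illustrative.

```python
import numpy as np

def histogram_summary(band):
    """Histogram, cumulative histogram and the summary statistics of
    Tables 10.1 and 10.2 for a single 8-bit band."""
    dn = band.ravel()
    npix, _ = np.histogram(dn, bins=256, range=(0, 256))  # Npix per DN
    perc = 100.0 * npix / dn.size                         # Perc
    cum_npix = np.cumsum(npix)                            # CumNpix
    cum_perc = 100.0 * cum_npix / dn.size                 # CumPerc
    stats = {
        "mean": dn.mean(),
        "stddev": dn.std(),
        "min": dn.min(),
        "max": dn.max(),
        "1%": np.percentile(dn, 1),                       # 1% cut-off value
        "99%": np.percentile(dn, 99),                     # 99% cut-off value
    }
    return npix, perc, cum_npix, cum_perc, stats
```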
Figure 10.5: Standard histogram and cumulative histogram corresponding with Table 10.1.
Table 10.1: Example histogram in tabular form.

 DN    Npix   Perc   CumNpix   CumPerc
  0       0   0.00         0      0.00
 13       0   0.00         0      0.00
 14       1   0.00         1      0.00
 15       3   0.00         4      0.01
 16       2   0.00         6      0.01
 …
 51      55   0.08       627      0.86
 52      59   0.08       686      0.94
 53      94   0.13       780      1.07
 54     138   0.19       918      1.26
 …
102    1392   1.90     25118     34.36
103    1719   2.35     26837     36.71
104    1162   1.59     27999     38.30
105    1332   1.82     29331     40.12
106    1491   2.04     30822     42.16
107    1685   2.31     32507     44.47
108    1399   1.91     33906     46.38
109    1199   1.64     35105     48.02
110    1488   2.04     36593     50.06
111    1460   2.00     38053     52.06
 …
163     720   0.98     71461     97.76
164     597   0.82     72058     98.57
165     416   0.57     72474     99.14
166     274   0.37     72748     99.52
 …
173       3   0.00     73100    100.00
174       0   0.00     73100    100.00
 …
255       0   0.00     73100    100.00
Table 10.2: Summary statistics for the example histogram given above.

Mean     StdDev   Min   Max   1%-value   99%-value
113.79   27.84    14    173   53         165
10.3.2
The histogram is used to obtain optimum display of single band images. Single band images are normally displayed using a grey scale. Grey shades of the
monitor typically range from black (value 0) to white (value 255). When applying grey shades, the same signal is input to each of the three (RGB) channels of
the computer monitor (Figure 10.6).
Figure 10.6: Multi-band image displayed on a monitor using the monitor's Red, Green and Blue input channels.
Using the original image values to control the monitor values usually results in an image with little contrast, since only a limited number of grey values are used. In the example introduced in the previous section (Table 10.2), only 173 − 14 = 159 out of 255 grey levels would be used. To optimize the range of grey values, a transfer function maps DN-values into grey shades on the monitor (Figure 10.7). The transfer function can be chosen in a number of ways.
Figure 10.7: Transfer functions mapping DN-values to grey shades: no stretch, linear stretch and histogram stretch.
Linear contrast stretch is obtained by finding the DN-values where the cumulative histogram of the image passes 1% and 99%. DNs below the 1% value become black (0), DNs above the 99% value become white (255), and grey levels for the intermediate values are found by linear interpolation. Histogram equalization, or histogram stretch, shapes the transfer function according to the cumulative histogram. As a result, the DNs in the image are distributed as equally as possible over the available grey levels (Figure 10.7).
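Both transfer functions can be sketched in a few lines of numpy; the implementation details (rounding, clipping, an 8-bit integer input band) are assumptions made for illustration.

```python
import numpy as np

def linear_stretch(band):
    """1%-99% linear contrast stretch: DNs at or below the 1% value map
    to black (0), at or above the 99% value to white (255), and the
    intermediate values are interpolated linearly (Figure 10.7)."""
    lo, hi = np.percentile(band, (1, 99))
    out = (band.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def histogram_equalize(band):
    """Histogram equalization: the cumulative histogram itself shapes
    the transfer function, spreading DNs evenly over the grey levels."""
    npix, _ = np.histogram(band, bins=256, range=(0, 256))
    cdf = np.cumsum(npix) / band.size            # cumulative histogram, 0..1
    transfer = np.round(cdf * 255).astype(np.uint8)
    return transfer[band]                        # look-up table per pixel
```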
Figure: Input and output DN-values of a local image operation, illustrating how an output value is computed from a 3 × 3 neighbourhood.
10.4 Filter operations
A further step in producing optimal images for interpretation is the use of filter operations. Filter operations are local image transformations: a new image is calculated in which the value of a pixel depends on the values of its neighbours in the original image. Filter operations are usually carried out on a single band. Filters are used for spatial image enhancement, for example to reduce noise or to sharpen blurred images. Filter operations are also extensively used in various semi-automatic procedures that are outside the scope of this chapter.
To define a filter, a kernel is used. A kernel defines the output pixel value as a linear combination of pixel values in a neighbourhood around the corresponding position in the input image. For a specific kernel, a so-called gain can be calculated as follows:

gain = 1 / Σ ki    (10.1)

where the sum is taken over all kernel coefficients ki. In general, the sum of the kernel coefficients, after multiplication by the gain, should be equal to 1, to result in an image with approximately the same range of grey values as the input. The effect of using a
kernel is illustrated in Figure 10.9, which shows how the output value is calculated in terms of average filtering.
The significance of the gain factor is explained in the next two subsections.
In these examples only small neighbourhoods of 3 3 kernels are considered. In
practice other kernel dimensions may be used.
first
previous
next
last
back
exit
zoom
contents
index
about
366
10.4.1 Noise reduction
Consider the kernel shown in Table 10.3, in which all coefficients equal 1. This means that the values of the nine pixels in the neighbourhood are summed. Subsequently, the result is divided by 9, so that the overall pixel values in the output image are in the same range as in the input image. In this situation the gain is 1/9 = 0.11. The effect of applying this averaging filter is that the image becomes blurred or smoothed. When dealing with speckle in radar imagery, applying this filter reduces the speckle.
Table 10.3: Kernel for average filtering.

1 1 1
1 1 1
1 1 1
In the above kernel, all pixels make an equal contribution to the result. It is also possible to define a weighted average. To emphasize the value of the central pixel, a larger value can be put in the centre of the kernel. As a result, less drastic blurring takes place. In addition, it may be desirable that the horizontal and vertical neighbours influence the result more strongly than the diagonal ones, because the direct neighbours are closer to the central pixel. The resulting kernel, for which the gain is 1/16 = 0.0625, is given in Table 10.4.
Table 10.4: Kernel for weighted average filtering.

1 2 1
2 4 2
1 2 1
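A minimal sketch of applying these smoothing kernels with scipy; the edge-handling mode is an assumption, since the text does not discuss image borders.

```python
import numpy as np
from scipy.ndimage import convolve

# The kernels of Tables 10.3 and 10.4, multiplied by their gains
# (1/9 and 1/16) so that the coefficients sum to 1.
average_kernel = np.ones((3, 3)) / 9.0
weighted_kernel = np.array([[1, 2, 1],
                            [2, 4, 2],
                            [1, 2, 1]]) / 16.0

def smooth(band, kernel):
    """Apply a smoothing filter; 'nearest' replicates edge pixels so
    the output keeps the input size."""
    return convolve(band.astype(float), kernel, mode="nearest")
```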
exit
zoom
contents
index
about
367
10.4.2 Edge enhancement

−1 −1 −1
−1 16 −1
−1 −1 −1

Figure 10.10: Original image (middle), edge enhanced image (left) and smoothed image (right).
10.5 Colour composites
The previous section explained visualization of single band images. When dealing with a multi-band image, any combination of three bands can, in principle,
be used as input to the RGB channels of the monitor. The choice should be made
based on the application of the image data. To increase contrast, the three bands
can be subjected to linear contrast stretch or histogram equalization.
Sometimes a true colour composite, where the RGB channels relate to the red,
green and blue wavelength bands of a scanner, is made. A popular choice is
to link RGB to the near-infrared, red and green bands respectively to yield a
false colour composite (Figure 10.11). The results look similar to prints of colour-infrared photography (CIR). As explained in Chapter 4, the three layers in a
false colour infrared film are sensitive to the NIR, R, and G parts of the spectrum
and made visible as R, G and B respectively in the printed photo. The most
striking characteristic of false colour composites is that vegetation appears in
a red-purple colour. In the visible part of the spectrum, plants reflect mostly
green light (this is why plants appear green), but their infrared reflection is even
higher. Therefore, vegetation in a false colour composite is shown as a combination of some blue but even more red, resulting in a reddish tint of purple.
Depending on the application, band combinations other than true or false colour may be used. Land-use categories can often be distinguished quite well by assigning a combination of Landsat TM bands 5-4-3 or 4-5-3 to RGB. Combinations that display near-infrared as green show vegetation in a green colour and are, therefore, called pseudo-natural colour composites (Figure 10.11).
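A colour composite can be sketched by stacking three stretched bands into the RGB channels. The band variables (e.g. tm4) are hypothetical, and linear_stretch refers to the sketch given earlier in this chapter.

```python
import numpy as np

def colour_composite(red_band, green_band, blue_band, stretch):
    """Stack three bands into an RGB image for display, after applying
    a contrast stretch to each band separately. For a Landsat TM false
    colour composite, pass the NIR, red and green bands (TM 4, 3, 2);
    for the 5-4-3 or 4-5-3 combinations, pass those bands instead."""
    return np.dstack([stretch(red_band),
                      stretch(green_band),
                      stretch(blue_band)])

# hypothetical usage: fcc = colour_composite(tm4, tm3, tm2, linear_stretch)
```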
Figure 10.11: Landsat TM colour composites of Enschede and surroundings. Three different colour composites are shown: natural colour, pseudo-natural colour (bands 3,5,2) and false colour. Other band combinations are possible.
10.5.1 Image fusion
Section 10.3 explained how three bands of a multispectral image dataset can be displayed as a colour composite image using the RGB colour space functions of a computer monitor. Once one becomes familiar with the additive mixing of the red, green and blue primaries, the colours perceived on the computer monitor can be intuitively related to the digital numbers of the three input bands, thereby providing qualitative insight into the spectral properties of an imaged landscape. This colour coding principle can also be exploited to display images acquired by different sensor systems, an enhancement technique known as image fusion. As with any enhancement technique, the aim of image fusion is to optimize image display for visual interpretation. One objective that uniquely applies to image fusion, however, is to enhance the properties of images acquired by multiple sensors. This additional objective allows the interpreter to extract complementary information from a multi-sensor image dataset instantly, by looking at a single image. Various combinations of image characteristics may be exploited, some of which are illustrated in the examples below, such as: (i) the high spatial resolution of a panchromatic image with a multispectral image dataset of lower spatial resolution, (ii) the textural properties of a synthetic aperture radar image with the multispectral properties of an optical dataset, (iii) various terrain properties derived from a digital elevation model with a single band image, (iv) gridded airborne geophysical measurements with a relief-shaded digital elevation model or satellite image data of higher spatial resolution, and (v) image data with multi-temporal resolution for change detection.
The processing technique underlying all image fusion methods is based on applying a mathematical function to the co-registered pixels of the merged image set, yielding a single image display optimized for visual interpretation. Without considering the standard pre-processing steps, such as atmospheric correction …
Figure: Fusion of SPOT XS and SPOT Pan: the three XS bands are displayed as an RGB colour composite and transformed from RGB to HSI; the intensity component is replaced by the contrast-stretched Pan image; and the result is transformed back from HSI to RGB for display as a colour composite.
One commonly used method to fuse co-registered image channels from multiple sources is to skip the third step and directly display the georeferenced image channels as a colour composite image. Although this is the simplest method to fuse multi-sensor image data, it may result in colours that are not at all intuitive to the interpreter. Considering, for example, Figure 10.13(c) below, it is evidently very difficult to appreciate the characteristics of the original images from the various additive colour mixtures in such a colour composite image. The algebraic manipulations of step 3 aim to overcome this problem. The transformations of step 3 have in common that they map the DN variations in the input images in one way or another onto the perceptual attributes of human colour vision. These colour attributes are usually approached by using the IHS colour
R′ = (R/I) · I′,   G′ = (G/I) · I′,   B′ = (B/I) · I′
R′ = aR + bI′,   G′ = aG + bI′,   B′ = aB + bI′,   with a, b > 0 and a + b = 1,
where a and b are scaling factors to balance intensity sharpening from the high
spatial resolution image versus the hue and saturation from the multispectral
band triplet.
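A minimal sketch of this intensity substitution with add-back, using the HSV transform of matplotlib as a stand-in for an IHS transform (the two colour spaces are similar but not identical). The inputs are assumed to be co-registered arrays scaled to the range 0-1.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def ihs_fusion(rgb, pan, a=0.5, b=0.5):
    """Transform the composite to an IHS-like space, replace the
    intensity by a weighted sum of the original intensity and the
    higher-resolution panchromatic image (I' = a*I + b*Pan, a + b = 1),
    and transform back to RGB for display."""
    hsv = rgb_to_hsv(rgb)                    # hue, saturation, value (~I)
    hsv[..., 2] = a * hsv[..., 2] + b * pan  # intensity add-back
    return hsv_to_rgb(hsv)
```

With a = 0, b = 1 this is a pure intensity substitution, as in Figure 10.13(c); with a = b = 0.5 it corresponds to the 50% add-back of Figure 10.13(d).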
Next to colour space transformations and arithmetic combinations, statistical transforms such as principal component and regression analysis have been commonly used in image fusion, since they have the theoretical advantage of being able to combine a large number of images [14]. In practice, however, fused images derived from many images are difficult to interpret.
The fourth and final step is essentially equivalent to the display of colour composite images. Regardless of which fusion method is used, one needs to ensure that the results are visualized as unambiguously as possible using the RGB
Figure 10.13: Fused images generated from bands 7, 3 and 1 of a Landsat 7 ETM subscene and an orthophoto mosaic of the Tabernas area, Southeast Spain. (a) Colour composite of red = band 7, green = band 3 and blue = band 1; (b) orthophoto mosaic resampled to 5 metre pixels; (c) fused image of the ETM bands, with the intensity substituted by the orthophoto mosaic using RGB-IHS colour space transformations; (d) as (c), but with preservation of the intensity of the ETM bands by adding back 50% of the original intensity to the intensity substitute, leading to better preservation of the spectral properties.
Summary
The way we perceive colour is most intuitively described by the Hue component
of the IHS colour space. The colour space used to describe colours on computer
monitors is the RGB space.
When displaying an image on a screen (or on hard copy) many choices need
to be made: the selection of bands, the sequence in which these are linked to the
Red-Green-Blue channels of the monitor, the use of stretching techniques and
the possible use of (spatial) filtering techniques.
The histogram, and the derived cumulative histogram, are the basis for all
stretching methods. Stretching, or contrast enhancement, is realized using transfer functions.
Filter operations are based on the use of a kernel. The weights of the coefficients in the kernel determine the effect of the filter which can be, for example,
to smooth or sharpen the original image.
Questions
The following questions can help you to study Chapter 10.
1. How many possibilities are there to visualize a 4 band image using a computer monitor?
2. You are shown a picture in which grass looks green and houses are red; what is your conclusion? Now you are shown a picture in which grass shows as purple and houses are black; what is your conclusion now?
3. What would be a reason for not using the default application of histogram
equalization for all image data?
4. Can you think of a situation in your own context where you would probably use filters to optimize interpretation of image data?
Chapter 11
Visual image interpretation
11.1 Introduction
In Section 11.2 some theory about image understanding is explained. Visual image interpretation is used to produce spatial information in all of ITC's fields of interest: urban mapping, soil mapping, geomorphological mapping, forest mapping, natural vegetation mapping, cadastral mapping, land use mapping and many others. As visual image interpretation is application specific, it is illustrated by two examples (soil mapping, land cover mapping) in Section 11.3. The last section (11.4) addresses some aspects of quality.
11.2
11.2.1 Human vision
11.2.2 Interpretation elements
When dealing with image data, visualized as pictures, a set of terms is required
to express and define characteristics present in a picture. These characteristics
are called interpretation elements and are used, for example, to define interpretation
keys, which provide guidelines on how to recognize certain objects.
The following seven interpretation elements are distinguished: tone/hue,
texture, shape, size, pattern, site and association.
Tone is defined as the relative brightness in a black-and-white image. Hue refers to the colour on the image as defined in the intensity-hue-saturation (IHS) system. Tonal variations are an important element in image interpretation. The tonal expression of objects in the image is directly related to the amount of light (energy) reflected from the surface. Different types of rock, soil or vegetation most likely have different tones. Variations in moisture conditions are also reflected as tonal differences in the image: increasing moisture content gives darker grey tones. Variations in hue are primarily related to the spectral characteristics of the measured area and also to the bands selected for visualization (see Chapter 10). The advantage of hue over tone is that the human eye is much more sensitive to variations in colour (approximately 10,000 colours) than to tone (approximately 200 grey levels).
Shape or form characterizes many terrain objects visible in the image. Shape
also relates to (relative) height when dealing with stereo-images, which
we discuss in Section 11.2.3. Height differences are important to distinguish between different vegetation types and also in geomorphological
mapping. The shape of objects often helps to determine the character of
the object (built-up areas, roads and railroads, agricultural fields, etc.).
11.2.3 Stereoscopic vision
The impression of depth encountered in the real world can also be realized by
images of the same object that are taken from different positions. Such a pair of
images (photographs or digital images) is separated and observed at the same
time by both eyes. These give images on the retinas of the viewer's eyes, in
which objects at different positions in space are projected on relatively different
positions. We call this stereoscopic vision. Pairs of images that can be viewed
stereoscopically are called stereograms. Stereoscopic vision is explained here because the impression of height and height differences is important in the interpretation of both natural and man-made features from image data. Note that
in Chapter 9 we explained that under specific conditions stereo-models can be
used to derive 3D coordinates.
Under normal conditions the human eye can focus on objects between 150 mm
distance and infinity. In doing so we direct both eyes to the object (point) of interest. This is known as convergence, and is how humans normally see in three
dimensions. To view the stereoscopic model formed by a pair of overlapping
photographs, the two images have to be separated so that the left and right eyes
see only the left and right photographs, respectively. In addition one should not
focus on the photo itself but at infinity. Some experienced persons can perceive stereo by holding the two photos at a suitable distance from their eyes. Most of us, however, need some help, and different methods have been developed.
Pocket and mirror stereoscopes, and also photogrammetric plotters, use a system of lenses and mirrors to feed one image into one eye. Pocket and mirror stereoscopes are mainly applied in mapping applications related to vegetation, forest, soil and geomorphology (Figure 11.3). Photogrammetric plotters are used
in topographic and large scale mapping activities.
Another way of achieving stereovision is to project the two images in two
11.3
11.3.1 Soil mapping
Figure 11.4: Panchromatic photograph to be interpreted.
The interpreter finds and draws master lines dividing major landscapes (mountains, hill land, plateau, valley, . . . ). Each landscape is then divided into relief types (e.g. sharply-dissected plateau), each of which is further divided by lithology (e.g. fine-bedded shales and sandstones), and finally by detailed landform (e.g. scarp slope). The landform consists of a topographic form, a geomorphic position and a geochronological unit, which together determine the environment in which the soil formed. A legend category usually comprises many areas (polygons) with the same photo-interpretation characteristics. Figure 11.4 shows a photograph and Figure 11.5 shows the interpretation units that resulted from its stereo interpretation.
Figure 11.5: Photo-interpretation transparency related to the aerial photo shown in Figure 11.4.
In the next phase, a sample area of the map is visited in the field to study the soil. The sampled area covers between 10 and 20% of the total area and comprises all legend classes introduced in the previous stage. The soils are described in the field, and samples are taken for laboratory analysis, to determine their characteristics (layering, particle-size distribution, density,
Landscape   Relief           Lithology                       Landform                             API Code
Hilland     Dissected ridge  Loess                           Summit                               Hi111
                                                             Shoulder & backslope                 Hi112
                             Colluvium from loess            Scarp                                Hi211
                                                             Toe slope                            Hi212
                             Loess over old river alluvium   Slope                                Hi311
                                                             Bottom                               Hi312
                             Old alluvium                    Slope                                Hi411
            High terrace     Loess over old river alluvium   Tread                                Pl311
                                                             Abandoned channel                    Pl312
            Old floodplain   Old alluvium                    Abandoned floodplain (channelized)   Pl411
11.3.2
(Figure: two panels, (a) and (b), of the same area with interpreted units labelled Buildings, Road, Grass and Trees.)
Part of the CORINE land cover classification scheme:

Level 1                  Level 2                                  Level 3
1. Artificial surfaces   1.1. Urban fabric                        1.1.1. Continuous urban fabric
                                                                  1.1.2. Discontinuous urban fabric
                         1.2. Industrial, commercial and          1.2.3. Port areas
                              transport units                     1.2.4. Airports
                         1.3. Mine, dump and construction sites   1.3.1. Mineral extraction sites
                         1.4. Artificial non-agricultural
                              vegetated areas
2. Agricultural areas    2.1. Arable land                         2.1.1. Non-irrigated arable land
                                                                  2.1.2. Permanently irrigated land
                                                                  2.1.3. Rice fields
                         2.2. Permanent crops                     2.2.1. Vineyards
Table 11.4: CORINE's extended description for class 1.3.1 (Mineral extraction sites). Source: [9].
11.3.3
11.4 Quality aspects
(crisp or ambiguous) and the instructions and methods used. Two examples are given here to give you an intuitive idea. Figure 11.8 gives two interpretation results for the same area. Note that the results differ both in the total number of objects (map units) and in the degree of (line) generalization. Figure 11.9 compares 13 individual interpretation results of a geomorphological interpretation. As in the previous example, large differences are found along the boundaries. In addition, you can also conclude that for some objects (map units) there was no agreement on the thematic attribute.
Summary
Visual image interpretation is one of the methods used to extract information from remote sensing image data. For that purpose, images need to be visualized on a screen or in hard copy. The human vision system is used to interpret the colours and patterns in the picture. Spontaneous recognition and logical inference (reasoning) are distinguished.

Interpretation keys or guidelines are required to instruct the image interpreter. In such guidelines, the (seven) interpretation elements can be used to describe how to recognize certain objects. Guidelines also provide a classification scheme, which defines the thematic classes of interest and their (hierarchical) relationships. Finally, guidelines give rules on the minimum size of objects to be included in the interpretation.

When dealing with a new area or a new application, no guidelines are available. An iterative approach is then required to establish the relationship between features observed in the picture and the real world.

In all interpretation and mapping processes the use of ground observations is essential to (i) acquire knowledge of the local situation, (ii) gather data for areas that cannot be mapped from the images, and (iii) check the result of the interpretation.

The quality of the result of visual image interpretation depends on the experience and skills of the interpreter, the appropriateness of the image data applied, and the quality of the guidelines being used.
Questions
The following questions can help you to study Chapter 11.
1. What is the relationship between image visualization and image interpretation?
2. Describe (to a colleague) how to recognize a road on an aerial photo (make
use of the interpretation elements).
3. Why is it necessary to have a sound conceptual model of how soils form
in the landscape to apply the aerial photo-interpretation method presented
in Section 11.3.1? What are the advantages of this approach in terms of efficiency and thematic accuracy, compared to interpretation element (only)
analysis?
4. Describe a relatively simple method to check the quality (in terms of replicability) of visual image interpretation.
5. Which products in your professional environment are based on visual image interpretation?
Chapter 12
Digital image classification
12.1 Introduction
operates in feature space. Section 12.3 gives an overview of the classification
process, the steps involved and the choices to be made. The result of an image
classification needs to be validated to assess its accuracy (Section 12.4). Finally,
two major problems in image classification are addressed in Section 12.5.
12.2
12.2.1 Image space

(Figure: image space, defined by rows and columns; a single pixel stores one DN-value per band, here for bands 1, 2 and 3.)
12.2.2 Feature space

For one pixel, the values in (for example) two bands can be regarded as the components of a two-dimensional vector, the feature vector. An example of a feature vector is (13, 55), which tells us that DN values of 13 and 55 are stored for band 1 and band 2, respectively. This vector can be plotted in a two-dimensional graph.
(Figure 12.2: a feature vector (v1, v2) plotted in a two-dimensional feature space spanned by bands 1 and 2, and a feature vector (v1, v2, v3) plotted in a three-dimensional feature space spanned by bands 1, 2 and 3.)
Similarly, this approach can be visualized for a three-band situation in a three-dimensional graph. A graph that shows the values of the feature vectors is called a feature space, also known as a feature space plot or scatter plot. Figure 12.2 illustrates how a feature vector (related to one pixel) is plotted in the feature space for two and three bands, respectively. Two-dimensional feature space plots are the most common.
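To make this concrete, the following minimal sketch (Python/NumPy; the DN values are invented for illustration) builds the feature vectors of all pixels from two image bands:

    import numpy as np

    # Two hypothetical 3 x 3 image bands (DN values); a real image is larger.
    band1 = np.array([[13, 60, 61], [14, 58, 62], [12, 59, 60]])
    band2 = np.array([[55, 20, 22], [54, 21, 23], [56, 19, 21]])

    # Stack the bands so that every pixel becomes one feature vector (row).
    features = np.column_stack((band1.ravel(), band2.ravel()))
    print(features[0])   # feature vector of the first pixel: [13 55]
    # Plotting these rows as points (band 1 on x, band 2 on y) gives the
    # two-dimensional feature space (scatter plot) described above.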
Distance in the feature space is expressed as Euclidean distance, and the units are DN (as this is the unit of the axes). In a two-dimensional feature space the distance can be calculated according to Pythagoras' theorem. In the situation of Figure 12.4, the distance between (10, 10) and (40, 30) equals the square root of (40 − 10)² + (30 − 10)². For three or more dimensions, the distance is calculated in a similar way.
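For example (Python/NumPy), using the two points from the text:

    import numpy as np

    p = np.array([10, 10])
    q = np.array([40, 30])

    # Pythagoras' theorem, generalized to n dimensions:
    distance = np.sqrt(np.sum((q - p) ** 2))
    print(distance)   # sqrt(30**2 + 20**2) = sqrt(1300), approximately 36.1 DN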
Figure 12.4: The Euclidean distance between two points in the feature space is calculated using Pythagoras' theorem.
12.2.3 Image classification
The scatterplot shown in Figure 12.3 gives information about the distribution of corresponding pixel values in two bands of an image. Figure 12.5 shows a feature space in which the feature vectors have been plotted for six specific land cover classes (grass, water, trees, etc.). Each cluster of feature vectors (class) occupies its own area in the feature space. Figure 12.5 illustrates the basic assumption for image classification: a specific part of the feature space corresponds to a specific class. Once the classes have been defined in the feature space, each image pixel can be compared to these classes and assigned to the corresponding class.

Classes to be distinguished in an image classification need to have different spectral characteristics. This can, for example, be analysed by comparing spectral reflectance curves (Section 2.4). Figure 12.5 also illustrates the limitation of image classification: if classes do not form distinct clusters in the feature space, image classification can only give results to a certain level of reliability.

The principle of image classification is that a pixel is assigned to a class based on its feature vector, by comparing it to predefined clusters in the feature space. Doing so for all image pixels results in a classified image. The crux of image classification lies in this comparison, which requires a definition of the clusters and a method of comparison. Definition of the clusters is an interactive process, carried out during the training process. Comparison of the individual pixels with the clusters takes place using classifier algorithms. Both are explained in the next section.
Figure 12.5: Feature space (band x against band y, 0-255) showing the respective clusters of six classes (grass, water, trees, houses, bare soil, wheat); note that each class occupies a limited area in the feature space.
12.3
The process of image classification (Figure 12.6) typically involves five steps:
1. Selection and preparation of the image data. Depending on the cover types
to be classified, the most appropriate sensor, the most appropriate date(s)
of acquisition and the most appropriate wavelength bands should be selected (Section 12.3.1).
(Figure 12.6: the classification process: remote sensing data and training data are the input to running the actual classification, which produces the classification data.)
2. Definition of the clusters in the feature space. Here two approaches are possible: supervised classification and unsupervised classification. In a supervised classification, the operator defines the clusters during the training process (Section 12.3.2); in an unsupervised classification a clustering algorithm automatically finds and defines a number of clusters in the feature space (Section 12.3.3).

3. Selection of the classification algorithm. Once the spectral classes have been defined in the feature space, the operator needs to decide how the pixels (based on their DN-values) are assigned to the classes. The assignment can be based on different criteria (Section 12.3.4).

4. Running the actual classification, which assigns a class label to each pixel and yields a classified image.

5. Validation of the result (Section 12.4).
The above points are elaborated in the next sections. Most examples deal with a two-dimensional situation (two bands) for reasons of simplicity and visualization. In principle, however, image classification can be carried out on any n-dimensional data set. Visual image interpretation, on the other hand, is limited to images composed of a maximum of three bands.
12.3.1
Image classification serves a specific goal: converting image data into thematic data. In the application context, one is interested in the thematic characteristics of an area (pixel) rather than in its reflection values. Thematic characteristics such as land cover, land use, soil type or mineral type can be used for further analysis and as input into models. In addition, image classification can also be considered a form of data reduction: the n multispectral bands are reduced to a single-band raster file.

With the particular application in mind, the information classes of interest need to be defined and their spatio-temporal characteristics assessed. Based on these characteristics the appropriate image data can be selected. Selection of an adequate data set concerns the type of sensor, the relevant wavelength bands and the date(s) of acquisition.

The possibilities for the classification of land cover types depend on the date an image was acquired. This not only holds for crops, which have a certain growing cycle, but also for other applications; think, for example, of snow cover or illumination by the sun. In some situations, a multi-temporal data set is required. A non-trivial point is that the required image data should be available at the required moment. Limited image acquisition and cloud cover may force you to make use of a less than optimal data set.

Before starting to work with the acquired data, a selection of the available spectral bands may be made. Reasons for not using all available bands (for example all seven bands of Landsat TM) lie in the problem of band correlation and, sometimes, in limitations of hardware and software. Band correlation occurs when the spectral reflection is similar for two bands. An example is the correlation between the green and red wavelength bands for vegetation: a low reflectance in green correlates with a low reflectance in red. For classification purposes, correlated bands add hardly any extra information.
12.3.2
One of the main steps in image classification is the partitioning of the feature space. In supervised classification this is realized by an operator who defines the spectral characteristics of the classes by identifying sample areas (training areas). Supervised classification requires that the operator be familiar with the area of interest. The operator needs to know where to find the classes of interest in the area covered by the image. This information can be derived from general area knowledge or from dedicated field observations (Section 11.3.1).

A sample of a specific class, comprising a number of training pixels, forms a cluster in the feature space (Figure 12.5). The clusters, as selected by the operator:

- should form a representative data set for a given class; this means that the variability of a class within the image should be taken into account. Also, in an absolute sense, a minimum number of observations per cluster is required. Although it depends on the classifier algorithm to be used, a useful rule of thumb is 30 × n (n = number of bands);

- should not, or only partially, overlap with the other clusters, as otherwise a reliable separation is not possible. Using a specific data set, some classes may have significant spectral overlap, which, in principle, means that these classes cannot be discriminated by image classification. Solutions are to add other spectral bands and/or image data acquired at other moments.
12.3.3
Supervised classification requires knowledge of the area at hand. If this knowledge is not sufficiently available, or the classes of interest are not yet defined, an unsupervised classification can be applied. In an unsupervised classification, clustering algorithms are used to partition the feature space into a number of clusters.

Several methods of unsupervised classification exist, their main purpose being to produce spectral groupings based on certain spectral similarities. In one of the most common approaches, the user has to define the maximum number of clusters in a data set. Based on this, the computer locates arbitrary mean vectors as the centre points of the clusters. Each pixel is then assigned to a cluster by the minimum-distance-to-cluster-centroid decision rule. Once all the pixels have been labelled, the cluster centres are recalculated and the process is repeated until the proper cluster centres are found and the pixels are labelled accordingly. The iteration stops when the cluster centres no longer change. At any iteration, however, clusters with fewer than a specified number of pixels are eliminated. Once the clustering is finished, the closeness or separability of the clusters is analysed by means of inter-cluster distances or divergence measures. Clusters are merged to reduce the number of unnecessary subdivisions in the data set, using a pre-specified threshold value. The user has to define the maximum number of clusters/classes, the minimum distance between two cluster centres, the radius of a cluster, and the minimum number of pixels as a threshold for cluster elimination. The compactness of a cluster around its centre point is analysed by means of a user-defined standard deviation for each spectral band. If a cluster is elongated, it is separated perpendicular to the spectral axis of elongation. The closeness of clusters is analysed by measuring the distance between the cluster centres.
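The procedure described above is, in essence, a k-means-type clustering. The following minimal sketch (Python/NumPy; the function name and defaults are our own, and cluster elimination, splitting and merging are omitted) shows the core iteration:

    import numpy as np

    def simple_clustering(pixels, n_clusters, n_iter=100, seed=0):
        # pixels: (n_pixels, n_bands) array of feature vectors (DN values).
        rng = np.random.default_rng(seed)
        # Arbitrary initial mean vectors: n_clusters randomly chosen pixels.
        centres = pixels[rng.choice(len(pixels), n_clusters,
                                    replace=False)].astype(float)
        for _ in range(n_iter):
            # Label each pixel by the minimum-distance-to-cluster-centroid rule.
            dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recalculate the cluster centres (assumes no cluster becomes empty;
            # a full implementation would eliminate or merge clusters here).
            new_centres = np.array([pixels[labels == k].mean(axis=0)
                                    for k in range(n_clusters)])
            if np.allclose(new_centres, centres):  # centres no longer change
                break
            centres = new_centres
        return labels, centres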
(Figure: two feature space plots, band 1 against band 2, 0-255, illustrating the clustering of the data.)
12.3.4 Classification algorithms
After the training sample sets have been defined, classification of the image can be carried out by applying a classification algorithm. Several classification algorithms exist. The choice of algorithm depends on the purpose of the classification and on the characteristics of the image and the training data. The operator also needs to decide whether a reject or unknown class is allowed. In the following, three classifier algorithms are explained. The box classifier is explained first, because its simplicity helps in understanding the principle; in practice, however, the box classifier is hardly ever used, and the minimum distance to mean and maximum likelihood classifiers are normally applied.
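As an illustration, here is a minimal sketch (Python/NumPy; function name, class means and DN values are invented) of the minimum distance to mean classifier, including an optional threshold distance beyond which pixels are assigned to the unknown class:

    import numpy as np

    def min_distance_classify(pixels, class_means, threshold=None):
        # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
        # Returns a class index per pixel; -1 marks the 'unknown' class.
        dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        if threshold is not None:
            labels[dists.min(axis=1) > threshold] = -1   # reject class
        return labels

    # Class means as derived from training samples (invented values).
    means = np.array([[30.0, 180.0],    # e.g. water
                      [90.0, 120.0],    # e.g. grass
                      [200.0, 60.0]])   # e.g. bare soil
    pixels = np.array([[35, 170], [210, 55], [120, 120]])
    print(min_distance_classify(pixels, means, threshold=60))   # [0 2 1]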
Figure 12.10: Principle of the minimum distance to mean classification in a two-dimensional situation. The class mean vectors are shown, and the decision boundaries are drawn for a situation without threshold distance (upper right) and with threshold distance (lower right); pixels beyond the threshold distance are labelled 'unknown'.
Figure 12.11: Principle of the maximum likelihood classification. The decision boundaries are shown for a situation without threshold distance (upper right) and with threshold distance (lower right); pixels outside the threshold are labelled 'unknown'.
12.4
Image classification results in a raster file in which the individual raster elements are class labelled. As image classification is based on samples of the classes, the actual quality of the result should be checked and quantified afterwards. This is usually done by a sampling approach in which a number of raster elements are selected and the classification result is compared with the true world class. The comparison is done by creating an error matrix, from which different accuracy measures can be calculated. The true world class is preferably derived from field observations. Sometimes, sources of an assumed higher accuracy, such as aerial photos, are used as a reference.

Various sampling schemes have been proposed for selecting the pixels to test. Choices to be made relate to the design of the sampling strategy, the number of samples required, and the area of the samples. Recommended sampling strategies in the context of land cover data are simple random sampling or stratified random sampling. The number of samples may be related to two factors in accuracy assessment: (1) the number of samples that must be taken in order to reject a data set as being inaccurate; or (2) the number of samples required to determine the true accuracy, within some error bounds, for a data set. Sampling theory is used to determine the number of samples required. The number of samples must be traded off against the area covered by a sample unit. A sample unit can be a point, but also an area of some size; it can be a single raster element but may also include the surrounding raster elements. Among other considerations, the optimal sample area size depends on the heterogeneity of the class.
Once the sampling has been carried out and the data collected, an error matrix can be established (Table 12.1). Other terms for this table are confusion matrix or contingency matrix. In the table, four classes (A, B, C, D) are listed.
Table 12.1: Error matrix derived from 163 samples; the rows give the classification result (a-d), the columns the reference data (A-D).

                          A    B    C    D   Total   Error of commission (%)   User accuracy (%)
  a                      35   14   11    1      61                        43                  57
  b                       4   11    3    0      18                        39                  61
  c                      12    9   38    4      63                        40                  60
  d                       2    5   12    2      21                        90                  10
  Total                  53   39   64    7     163
  Error of omission (%)  34   72   41   71
  Producer accuracy (%)  66   28   59   29
A total of 163 samples were collected. From the table you can read, for example, that 53 cases of A were found in the real world (reference), while the classification result yields 61 cases of a; in 35 cases they agree.

The first and most commonly cited measure of mapping accuracy is the overall accuracy, or proportion correctly classified (PCC). Overall accuracy is the number of correctly classified pixels (i.e., the sum of the diagonal cells in the error matrix) divided by the total number of pixels checked. In Table 12.1 the overall accuracy is (35 + 11 + 38 + 2)/163 = 53%. The overall accuracy yields one figure for the result as a whole.

Most other measures derived from the error matrix are calculated per class. The error of omission refers to those sample points that are omitted in the interpretation result. Consider class A, for which 53 samples were taken; 18 of the 53 samples were interpreted as b, c or d. This results in an error of omission of 18/53 = 34%. The error of omission starts from the reference data and therefore relates to the columns in the error matrix. The error of commission starts from the interpretation result and refers to the rows in the error matrix; it refers to incorrectly classified samples. Consider class d: only two of the 21 samples (10%) are correctly labelled. Errors of omission and commission are directly related to producer accuracy and user accuracy, respectively.
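These accuracy measures follow directly from the error matrix. A small sketch (Python/NumPy) that reproduces the figures of Table 12.1:

    import numpy as np

    # Error matrix of Table 12.1 (rows = classification result a..d,
    # columns = reference data A..D).
    m = np.array([[35, 14, 11, 1],
                  [ 4, 11,  3, 0],
                  [12,  9, 38, 4],
                  [ 2,  5, 12, 2]])

    overall = np.trace(m) / m.sum()        # (35+11+38+2)/163 = 0.53
    producer = np.diag(m) / m.sum(axis=0)  # 1 - error of omission (per column)
    user = np.diag(m) / m.sum(axis=1)      # 1 - error of commission (per row)
    print(f"overall accuracy: {overall:.0%}")            # 53%
    print("producer accuracy:", np.round(producer, 2))   # [0.66 0.28 0.59 0.29]
    print("user accuracy:    ", np.round(user, 2))       # [0.57 0.61 0.6  0.1 ]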
12.5
Table 12.2: Spectral classes distinguished during classification can be aggregated to land cover classes; 1-to-n and n-to-1 relationships can exist between land cover and land use classes.
Summary
Digital image classification is a technique to derive thematic classes from image data. The input is multi-band image data; the output is a raster file containing thematic (nominal) classes. In the process of image classification the roles of the operator and of additional (field) data are significant. In a supervised classification, the operator needs to provide the computer with training data and select the appropriate classification algorithm. The training data are defined based on knowledge (derived from field work or from secondary sources) of the area being processed. Based on the similarity between the pixel values (feature vector) and the training classes, a pixel is assigned to one of the classes defined by the training data.

An integral part of image classification is validation of the results. Again, independent data are required. The result of the validation process is an error matrix, from which different measures of error can be calculated.
Questions
The following questions can help you to study Chapter 12.
1. Compare digital image classification with visual image interpretation in
terms of input of the operator/photo-interpreter and in terms of output.
2. What would be typical situations in which to apply digital image classification?
3. Another wording for image classification is partitioning of the feature
space. Explain what is meant by this.
The following are typical exam questions:
1. Name the different steps of the process of image classification.
2. What is the principle of image classification?
Chapter 13
Thermal remote sensing
13.1 Introduction
Thermal remote sensing (TRS) is based on measuring electromagnetic (EM) radiation in the infrared region of the spectrum. Most commonly used are the intervals from 3-5 µm (MIR) and 8-14 µm (TIR), in which the atmosphere is fairly transparent and the signal is only slightly attenuated by atmospheric absorption. Since the source of the radiation is the heat of the imaged surface itself (compare Figure 2.1), the handling and processing of TIR data is considerably different from remote sensing based on reflected sunlight:

- The surface temperature is the main factor that determines the amount of energy that is radiated and measured in the thermal wavelengths. The temperature of an object varies greatly depending on the time of day, season, location, exposure to solar irradiation, etc., and is difficult to predict. In reflectance remote sensing, on the other hand, the incoming radiation from the sun is constant and can be readily calculated, although of course atmospheric correction has to be taken into account.

- In reflectance remote sensing the characteristic property we are interested in is the reflectance of the surface at different wavelengths. In thermal remote sensing, however, one property we are interested in is how well energy is emitted from the surface at different wavelengths.

- Since thermal remote sensing does not depend on incoming sunlight, it can also be performed during the night (for some applications even better than during the day).
In Section 13.2 the basic theory of thermal remote sensing is explained. Section 13.3 introduces the fundamental steps of processing TIR data to extract useful information. Section 13.4 illustrates in a number of examples how thermal remote sensing can be used in different application fields.
13.2
13.2.1

Planck's radiation law describes the spectral radiant emittance of a blackbody as a function of wavelength and temperature:

    M_λ = C1 / ( λ^5 · ( e^(C2/(λT)) − 1 ) )    (13.1)

where M_λ is the spectral radiant emittance (W m⁻² µm⁻¹), λ is the wavelength (µm), T is the temperature (K), and C1 and C2 are physical constants. From Planck's law follows Wien's displacement law:

    λ_max = 2898 / T    (13.2)

where λ_max is the wavelength of the radiation maximum (µm), T is the temperature (K) and 2898 is a physical constant (µm K).

We can use Wien's law to predict the position of the peak of the blackbody curve if we know the temperature of the emitting object. If you were interested in monitoring forest fires that burn at 1000 K, you could immediately turn to bands around 2.9 µm in the SWIR, where the radiation maximum for those fires is expected. For ordinary land surface temperatures around 300 K, wavelengths from 8 to 14 µm are most useful (the TIR range).
Figure 13.1: Illustration of Planck's radiation law for the sun (6000 K) and the average earth surface temperature (300 K); curves for a blackbody at the sun's temperature and a blackbody at the earth's temperature, with the visible band indicated. Note the logarithmic scale of both the x- and y-axis. The dashed lines mark the wavelengths of the emission maxima for the two temperatures (Wien's shift): as predicted by Wien's law, the radiation maximum shifts to longer wavelengths as temperature decreases.

You can now understand why reflectance remote sensing (i.e. based on reflected sunlight) uses short wavelengths in the visible and shortwave infrared, and thermal remote sensing (based on emitted earth radiation) uses the longer wavelengths, around 3-14 µm. Figure 13.1 also shows that the total energy (the integrated area under the curve) is considerably higher for the sun than for the cooler earth surface. This relationship between surface temperature and total radiant emittance is described by the Stefan-Boltzmann law:
    M = σ T⁴    (13.3)

where M is the total radiant emittance (W m⁻²), σ is the Stefan-Boltzmann constant, 5.6697 × 10⁻⁸ W m⁻² K⁻⁴, and T is the temperature (K).
The Stefan-Boltzmann law states that colder targets emit only small amounts of EM radiation, and Wien's displacement law predicts that the peak of the radiation distribution shifts to longer wavelengths as the target gets colder. In Section 2.2.1 we learnt that photons at long wavelengths have less energy than those at short wavelengths. Hence, in TRS we are dealing with a small number of low-energy photons, which makes their detection difficult. As a consequence, we often have to reduce the spatial or spectral resolution when acquiring thermal imagery to guarantee a reasonable signal-to-noise ratio.
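Both laws are straightforward to evaluate numerically. A small sketch (Python; the function names are our own) reproducing the temperatures used in this chapter:

    SIGMA = 5.6697e-8   # Stefan-Boltzmann constant (W m-2 K-4)

    def peak_wavelength_um(T):
        # Wien's displacement law (Equation 13.2): peak wavelength in um.
        return 2898.0 / T

    def total_emittance(T):
        # Stefan-Boltzmann law (Equation 13.3): total emittance in W m-2.
        return SIGMA * T ** 4

    for T in (6000, 1000, 300):
        print(T, "K:", round(peak_wavelength_um(T), 2), "um,",
              round(total_emittance(T), 1), "W m-2")
    # 6000 K -> 0.48 um; 1000 K -> 2.9 um (SWIR); 300 K -> 9.66 um (TIR)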
13.2.2
Blackbody  The three laws described above are, strictly speaking, only valid for an ideal radiator, which we refer to as a blackbody (BB). A BB is a perfect absorber and a perfect radiator at all wavelengths. It can be thought of as a black object that reflects no incoming EM radiation; it is black not only in the visible bands, but at all wavelengths of interest. If an object is a blackbody, it behaves exactly as the theoretical laws predict. True blackbodies do not exist in nature, although some materials (e.g. clean, deep water between 8 and 12 µm) come very close.

Greybody  Materials that absorb and radiate only a certain fraction compared to a blackbody are called greybodies. This fraction is constant for all wavelengths. Hence, a greybody curve is identical in shape to a blackbody curve, but the absolute values are lower, as a greybody does not radiate as perfectly as a blackbody.

Selective radiator  A third group are the selective radiators. They also radiate only a certain fraction of a blackbody, but this fraction changes with wavelength. A selective radiator may radiate perfectly in some wavelengths while acting as a very poor radiator in others. The radiant emittance curve of a selective radiator can therefore look quite different from an ideal blackbody curve.

Emissivity  The fraction of energy that is radiated by a material compared to a true blackbody is referred to as the emissivity (ε). Hence, emissivity is defined as
    ε_λ = M_{λ,T} / M^BB_{λ,T}    (13.4)

For a true blackbody, ε = 1.
(Figure 13.2: emissivity of sandy soil and marble as a function of wavelength, roughly 3-14 µm; emissivity scale 0.25-1.00.)
13.2.3
    T_rad = ε^(1/4) · T_kin    (13.6)

With a single thermal band (e.g. the Landsat 7 ETM+ sensor), ε has to be estimated from other sources. One way is to perform a land cover classification with all available bands and then assign an ε value to each class from an emissivity table (e.g. 0.99 for water, 0.85 for granite).
13.3
13.3.1
For some applications, image enhancement of the thermal data bands is sufficient to achieve the necessary outputs. The enhanced images can be used, for example, to delineate relative differences in surface emissivity or surface temperature.
13.3.2
    T = K2 / ln( K1/L + 1 )    (13.7)

where T is the radiant temperature (K), L is the measured spectral radiance (W m⁻² sr⁻¹ µm⁻¹), and K1 and K2 are sensor-specific calibration constants.
    L_sat = τ ε L_BB + τ (1 − ε) L↓ + L↑    (13.8)

In summary, L_sat is measured by the satellite, but we need L_BB to determine the kinetic surface temperature. Equation 13.8 shows that in practice there are two problems in the processing of thermal images:

- determination of the upwelling (L↑) and downwelling (L↓) atmospheric radiances, together with the atmospheric transmittance (τ) in the thermal range of the EM spectrum;

- determination of the emissivities and the surface temperature.

The first of these problems requires knowledge of the atmospheric parameters at the time of the satellite overpass. Once these are known, software such as MODTRAN4 (see Section 8.3.3) can be applied to produce the required parameters L↑, L↓ and τ.

Because the emissivities are wavelength dependent, Equation 13.8 leads to n equations with n + 1 unknowns, where n is the number of bands in the thermal image. Additional information is therefore required to solve the set of equations. Most methods make use of laboratory-derived information on the shape of the emissivity spectra. This process is called temperature-emissivity separation (TES). The algorithms, however, are rather complex and outside the scope of this chapter. A complete manual on thermal processing, including examples and more details on the mathematics involved, is available from Ambro Gieske in the Department of Water Resources.
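Assuming the atmospheric parameters and the emissivity are known, Equation 13.8 can be inverted for L_BB, after which Equation 13.7 yields the temperature. The sketch below (Python) uses the sea-level example from the questions at the end of this chapter, together with the published calibration constants of the Landsat 5 TM thermal band:

    import math

    # Published calibration constants for the Landsat 5 TM thermal band:
    K1 = 607.76     # W m-2 sr-1 um-1
    K2 = 1260.56    # K

    # Atmospheric parameters (MODTRAN4 output) and surface emissivity:
    tau, L_up, L_down, eps = 0.7, 2.4, 3.7, 0.98
    L_sat = 8.82    # radiance observed at the satellite (W m-2 sr-1 um-1)

    # Invert Equation 13.8 for the blackbody-equivalent surface radiance:
    L_bb = (L_sat - L_up - tau * (1 - eps) * L_down) / (tau * eps)

    # Equation 13.7 converts this radiance to a temperature:
    T = K2 / math.log(K1 / L_bb + 1)
    print(round(L_bb, 2), "W m-2 sr-1 um-1 ->", round(T, 1), "K")
    # roughly 9.28 W m-2 sr-1 um-1, corresponding to a temperature near 300 K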
13.4 Thermal applications

This section provides a number of example applications for which thermal data can be used. In general, the applications of thermal remote sensing can be divided into two groups:

- those where the main interest is the study of the surface composition, by looking at the surface emissivity in one or several wavelengths;

- those where the focus is on the surface temperature, its spatial distribution or its change over time.
13.4.1
As we have seen in Figure 13.2, many rock and soil types show distinct spectra in the thermal infrared. These absorption bands are mainly caused by silicate minerals, such as quartz or feldspars, that make up a large percentage of the world's rocks and soils. By carefully studying thermal emissivity spectra, we can identify the different mineral components the target area is composed of. Figure 13.3 shows a thermal image taken by the MASTER airborne sensor over an area near Cuprite, Nevada. The original colour composite was decorrelation stretched for better contrast. It clearly shows the different rock units in the area. The labels mark a limestone (lst) in tints of light green in the south and a volcanic tuff (t) in cyan in the north; a silica-capped hill (s) shows shades of dark orange near the centre of the image. Several additional units can be distinguished based on this band combination alone.
Figure 13.3: Decorrelation stretched colour composite of a MASTER image; RGB = bands 46, 44, 42; see text for more information on the rock units (t: tuff, s: silica, lst: limestone). Scene is 10 km wide.
13.4.2
Summary
This chapter has provided an introduction to thermal remote sensing (TRS), a passive technique aimed at recording the radiation emitted by the material or surface of interest. It was explained how TRS is mostly applied in the middle and thermal infrared, and how the amount and peak wavelength of the energy emitted are a function of the object's temperature. This explains why reflectance remote sensing (i.e. based on reflected sunlight) uses short wavelengths in the visible and shortwave infrared, while thermal remote sensing uses the longer wavelengths. In addition to the basic physical laws, the concepts of blackbody radiation and emissivity were explained. Incomplete radiation, i.e. a reduced emissivity, was shown to account for radiant temperatures often being lower than the corresponding kinetic temperature.

The subsequent section on the processing of thermal data gave an overview of techniques aimed at the differentiation of surface materials, as well as methods to calculate actual surface temperatures. The last section provided examples of how the surface distribution of different rock types can be mapped, and how thermal anomalies can be assessed. The methods are applicable to many different problems, including coal-fire mapping, sea surface temperature monitoring and weather forecasting, but also the search and rescue of missing persons.
Questions
The following questions can help to study Chapter 13.
1. What is the total radiant energy emitted by an object at a temperature of 300 K?

2. Calculate the peak wavelength of the energy emitted by a volcanic lava flow of about 1200 °C.

3. Is the kinetic temperature higher than the radiant temperature, or the other way around? Explain your answer with an example.

4. For a Landsat 7 ETM+ image, a certain pixel has a radiance of 10.3 W m⁻² sr⁻¹ µm⁻¹. Determine its radiant temperature.

5. For a sea level summer image (Landsat 5), the following atmospheric parameters are determined with MODTRAN4:

   - The atmospheric transmissivity τ is 0.7.
   - The upwelling radiance L↑ is 2.4 W m⁻² sr⁻¹ µm⁻¹.
   - The downwelling radiance L↓ is 3.7 W m⁻² sr⁻¹ µm⁻¹.

   The broad-band surface emissivity is 0.98, and the radiance L_sat observed at the satellite is 8.82 W m⁻² sr⁻¹ µm⁻¹.

   - Calculate the blackbody radiance L_BB.
   - Determine the surface radiant and kinetic temperatures.
Chapter 14
Imaging Spectrometry
14.1 Introduction
Figure 14.1: Concept of imaging spectrometry (modified after De Jong): 224 spectral bands are acquired along track (512 pixels per scene) and across track (614 pixels × 20 m per pixel), so that a complete reflectance spectrum (e.g. of kaolinite) is obtained for every pixel over the 0.4-2.5 µm wavelength range.
(Figure: the same spectrum at the spectral resolutions of Landsat TM, GERIS, HIRIS, AVIRIS and a USGS laboratory spectrometer, over the 0.5-2.5 µm wavelength range.)
14.2
(Figure: reflectance spectra, 0.3-2.3 µm, of (a) hematite, whose Fe absorption features are caused by electronic processes, (b) kaolinite, with OH and Al-OH features, and (c) calcite, with CO3 features caused by vibrational processes.)
14.3
14.4
14.5
Once reflectance-like imaging spectrometer data have been obtained, the logical next step is to use diagnostic absorption features to determine and map variations in surface composition. New analytical processing techniques have been developed to analyse such high-dimensional spectral data sets; these methods are the focus of this section. They can be grouped into two categories: spectral matching approaches and subpixel classification methods.
14.5.1

Spectral matching algorithms aim at quantifying the statistical or physical relationship between measurements at the pixel scale and field or laboratory spectral responses of target materials of interest. A simple spectral matching algorithm is binary encoding, in which an imaged reflectance spectrum is encoded as

    h_i = 0 if x_i ≤ T
    h_i = 1 if x_i > T        (14.1)

where x_i is the brightness value of a pixel in the i-th channel, T is a user-specified threshold (often the average brightness value of the spectrum is used for T), and h_i is the resulting binary code for the pixel in the i-th band.

Binary encoding provides a simple means of analysing data sets for the presence of absorption features, which can be directly related to similar encoding profiles of known materials.
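As a minimal illustration of Equation 14.1 (Python/NumPy; the spectrum values are invented):

    import numpy as np

    spectrum = np.array([23, 25, 31, 40, 18, 12, 26, 33])   # invented DN values
    T = spectrum.mean()            # threshold: average brightness (here 26)
    code = (spectrum > T).astype(int)
    print(code)   # [0 0 1 1 0 0 0 1] -- compared against codes of known materials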
An often-used spectral matching technique in the analysis of imaging spectrometer data sets is the so-called spectral angle mapper (SAM). In this approach, the spectra are treated as vectors in a space with a dimensionality equal to the number of bands, n. SAM calculates the spectral similarity between an unknown reflectance spectrum t (consisting of band values t_i) and a reference (field or laboratory) reflectance spectrum r (consisting of band values r_i), and expresses the similarity of the two in terms of the angle θ between the two spectra, calculated over all bands i. In vector notation:

    θ = cos⁻¹( (t · r) / (‖t‖ ‖r‖) )        (14.2)
or, written out in terms of the band values:

    θ = cos⁻¹( Σ_{i=1..n} t_i r_i / ( √(Σ_{i=1..n} t_i²) · √(Σ_{i=1..n} r_i²) ) )        (14.3)

The outcome of the spectral angle mapping for each pixel is an angular difference, measured in radians, ranging from zero to π/2, which gives a qualitative measure for comparing known and unknown spectra: the smaller the angle, the more similar the two spectra.
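A minimal sketch of the SAM computation of Equations 14.2 and 14.3 (Python/NumPy; the two spectra are invented):

    import numpy as np

    def spectral_angle(t, r):
        # Angle (radians) between pixel spectrum t and reference spectrum r.
        cos_angle = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
        return np.arccos(np.clip(cos_angle, -1.0, 1.0))

    t = np.array([0.20, 0.35, 0.50, 0.30])   # unknown pixel spectrum (invented)
    r = np.array([0.22, 0.33, 0.55, 0.28])   # laboratory reference (invented)
    print(spectral_angle(t, r))   # small angle -> the spectra are similar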
14.5.2 Spectral unmixing
(Figure: the spectral unmixing concept: within the IFOV of one pixel, three materials A, B and C cover fractions of 0.25, 0.25 and 0.50 of the surface; each endmember has a unique spectrum, and the mixed-pixel spectrum is mix = 0.25·A + 0.25·B + 0.5·C.)
    R = Σ_{j=1..n} f_j · Re_j + ε,  with  0 ≤ f_j ≤ 1  and  Σ_{j=1..n} f_j = 1        (14.4)

where R is the observed (mixed) pixel spectrum, Re_j is the spectrum of endmember j, f_j is the fraction of the pixel covered by endmember j, ε is a residual (noise) term and n is the number of endmembers.
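Under the linear mixing assumption of Equation 14.4, the fractions can be estimated from the observed spectrum, for example by least squares. A minimal sketch (Python/NumPy; the endmember spectra are invented, and the constraints on the fractions, which require a constrained solver, are omitted):

    import numpy as np

    # Invented endmember spectra (columns: materials A, B, C; rows: bands).
    E = np.array([[0.10, 0.40, 0.70],
                  [0.20, 0.50, 0.60],
                  [0.30, 0.45, 0.20],
                  [0.25, 0.60, 0.10]])

    # Mixed-pixel spectrum: 0.25*A + 0.25*B + 0.5*C (as in the figure above).
    mixed = E @ np.array([0.25, 0.25, 0.50])

    # Unconstrained least-squares estimate of the fractions f in mixed = E f + e.
    f, *_ = np.linalg.lstsq(E, mixed, rcond=None)
    print(np.round(f, 2))   # [0.25 0.25 0.5]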
14.6

In this last section, a brief outline is given of current applications in various fields relevant to the thematic context of ITC. These include examples in the areas of geologic mapping and resource exploration, vegetation sciences, and hydrology.
14.6.1
14.6.2 Vegetation sciences
14.6.3 Hydrology
In the hydrological sciences, the interaction of electromagnetic radiation with water, and the inherent and apparent optical properties of water, are a central issue. Atmospheric correction and air-water interface corrections are very important in imaging spectrometry of water bodies. The water quality of freshwater aquatic environments, estuarine environments and coastal zones is of importance to national water authorities. Detection and identification of phytoplankton biomass, suspended sediments and other matter, coloured dissolved organic matter, and aquatic vegetation (i.e. macrophytes) are crucial parameters in optical models of water quality. Much emphasis has been put on the mapping and monitoring of the state and the growth or break-down of coral reefs, as these are important in the CO2 cycle. In general, many multi-sensor missions, such as Terra and Envisat, are directed towards integrated approaches for global change studies and global oceanography. Atmosphere models are important in global change studies and aid in the correction of optical data for scattering and absorption due to atmospheric trace gases. In particular, the optical properties and absorption characteristics of ozone, oxygen, water vapour and other trace gases, and the scattering by molecules and aerosols, are important parameters in atmosphere studies. All of these can be, and are, derived from imaging spectrometers.
14.7

In Chapter 5 an overview of spaceborne multispectral and hyperspectral sensors was given. Here we provide a short historical overview of imaging spectrometer systems, with examples of presently operational airborne and spaceborne systems. The first civilian scanning imaging spectrometer was the Scanning Imaging Spectroradiometer (SIS), constructed in the early 1970s for NASA's Johnson Space Center. After that, civilian airborne spectrometer data were collected in 1981 using a one-dimensional profile spectrometer developed by the Geophysical Environmental Research Company, which acquired data in 576 channels covering the 0.4-2.5 µm wavelength range, followed by the Shuttle Multispectral Infrared Radiometer (SMIRR) in 1981. The first imaging device was the Fluorescence Line Imager (FLI, also known as the Programmable Multispectral Imager, PMI), developed by Canada's Department of Fisheries and Oceans in 1981. This was followed by the Airborne Imaging Spectrometer (AIS), developed at the NASA Jet Propulsion Laboratory, which was operational from 1983 onward, acquiring 128 spectral bands in the range of 1.2-2.4 µm. The field of view of 3.7° resulted in 32 pixels across-track. A later version of the instrument, AIS-2, covered the 0.8-2.4 µm region, acquiring images 64 pixels wide.

Since 1987, NASA has been operating the successor of the AIS systems: the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The AVIRIS scanner collects 224 contiguous bands, resulting in a complete reflectance spectrum for each 20 m by 20 m pixel in the 0.4-2.5 µm region, with a sampling interval of <10 nm. The field of view of the AVIRIS scanner is 30 degrees, resulting in a ground swath of 10.5 km from 20 km altitude. AVIRIS uses scanning optics and four spectrometers to image a 614-pixel swath simultaneously in 224 contiguous spectral bands over the 400 nm to 2500 nm wavelength range.
Instrument                                       Manufacturer                   Spectral range (nm)   Bands       Bandwidth (nm)
Airborne Visible/Infrared Imaging                NASA (US)                      400-2500              224 bands   10
  Spectrometer (AVIRIS)
The Airborne Imaging Spectrometer (AISA)         SPECIM (Finland)               430-900               288 bands   1.63-9.8
Compact Airborne Spectrographic Imager (CASI)    ITRES (Canada)                 400-870               288 bands   1.9
Digital Airborne Imaging Spectrometer (DAIS)     Geophysical Environmental      500-12300             79 bands    15-2000
                                                   Research Corporation (US)
Hyperspectral Mapper (HyMAP)                     Integrated Spectronics         400-2500              126 bands   10-20
                                                   (Australia)
Multispectral Infrared and Visible Imaging       SenSyTech, Inc. (US)           430-12700             102 bands   8-500
  Spectrometer (MIVIS)
Reflective Optics System Imaging                 Dornier Satellite Systems      430-850               84 bands
  Spectrometer (ROSIS)                             (Germany)
14.8 Summary
Chapter 14 has given an overview of the concepts and methods of imaging spectrometry. It was explained how, on the one hand, it is similar to multispectral remote sensing in that a number of visible and NIR bands are used to study the characteristics of a given surface. Imaging spectrometers, however, typically acquire image data in a much larger number of narrow and contiguous spectral bands. This makes it possible to extract per-pixel reflectance spectra, which are useful for the detection of the diagnostic absorption features that allow us to determine and map variations in surface composition.

The section on pre-processing showed that for some applications a scene-dependent relative atmospheric correction is sufficient. However, it was also explained when an absolute radiometric calibration, which provides transfer functions to convert DN values to at-sensor radiance, must be applied.

Once the data have been corrected, spectral matching and subpixel classification methods can be used to relate observed spectra to known spectral responses of different materials, or to identify which materials are present within a single pixel. Examples were provided in the areas of geologic mapping and resource exploration, vegetation sciences, and hydrology.
Questions
The following questions can help to study Chapter 14.
1. What are the advantages and disadvantages of hyperspectral remote sensing in comparison with multispectral Landsat-type scanning systems?

2. In which part of the electromagnetic spectrum do absorption bands occur that are diagnostic for different mineral types?

3. Under which conditions can signal mixing be considered a linear process?

4. Assume you are to design a hyperspectral scanner sensitive to radiation in the 400 to 900 nm region. The instrument will carry 288 bands with a spectral resolution (bandwidth) of 1.8 nm. Does this configuration result in spectral overlap between the bands?
Bibliography
[1] Aronoff, S. Geographic Information Systems: A Management Perspective. WDL Publications, Ottawa, 1989.

[2] Berk, A., Bernstein, L., and Robertson, D. MODTRAN: A Moderate Resolution Model for LOWTRAN. Air Force Geophysics Laboratory, Hanscom AFB, MA, US, 1997.

[3] Bijker, W. Radar for Rain Forest: A Monitoring System for Land Cover Change in the Colombian Amazon. PhD thesis, ITC, 1997.

[4] Buiten, H. J., and Clevers, J. G. P. W. Land Observation by Remote Sensing: Theory and Applications, vol. 3 of Current Topics in Remote Sensing. Gordon & Breach, 1993.

[5] Burrough, P. A., and Frank, A. U. Geographic Objects with Indeterminate Boundaries. GISDATA Series. Taylor & Francis, London, 1996.

[6] Burrough, P. A., and McDonnell, R. Principles of Geographical Information Systems. Oxford University Press, Oxford, 1998.
[7] Chavez, P., Sides, S., and Anderson, J. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering & Remote Sensing 57, 3 (1991), 295-303.

[8] Clark, R. N. Spectroscopy of rocks and minerals, and principles of spectroscopy. In Manual of Remote Sensing: Remote Sensing for the Earth Sciences (New York, 1999), A. Rencz, Ed., vol. 3, John Wiley and Sons, pp. 3-58.

[9] European Community. CORINE Land Cover Technical Guide. ECSC-EEC-EAEC, Brussels, Belgium, 1993. EUR 12585 EN.

[10]
. . . thematic data for the geosciences. Canadian Journal of Remote Sensing 20, 3 (1994), 210-221.

[15] Hunt, G. R. Remote sensing in geology. In Electromagnetic Radiation: The Communication Link in Remote Sensing (New York, 1980), B. Siegal and A. Gillespie, Eds., John Wiley and Sons, p. 702.

[16] ITC. ITC Textbook of Photo-interpretation. Four volumes, 1963-1974.

[17] ITC. ITC Textbook of Photogrammetry. Five volumes, 1963-1974.

[18] Jane's. Jane's Space Directory 1997-1998, 13th ed. Alexandria, Jane's Information Group, 1997.

[19] Kearey, P., Brooks, M., and Hill, I. An Introduction to Geophysical Exploration. Blackwell Science, Oxford, UK, 2002.

[20] Kniezys, F., Shettle, E., Abreu, L., Chetwynd, J., Anderson, G., Gallery, W., Selby, J., and Clough, S. User Guide to LOWTRAN 7. Air Force Geophysics Laboratory, Hanscom AFB, MA, US, 1988.

[21] Kramer, H. J. Observation of the Earth and its Environment: Survey of Missions and Sensors, fourth ed. Springer Verlag, Berlin, Germany, 2002.

[22] Laurini, R., and Thompson, D. Fundamentals of Spatial Information Systems, vol. 37 of The APIC Series. Academic Press, London, 1992.

[23] Lillesand, T. M., Kiefer, R. W., and Chipman, J. W. Remote Sensing and Image Interpretation, fifth ed. John Wiley & Sons, New York, NY, 2004.
[24] McCloy, K. R. Resource Management Information Systems. Taylor & Francis, London, UK, 1995.

[25] Middelkoop, H. Uncertainty in a GIS, a test for quantifying interpretation output. ITC Journal 1990, 3 (1990), 225-232.

[26] Molenaar, M. An Introduction to the Theory of Spatial Object Modelling. Research Monographs in GIS Series. Taylor & Francis, London, 1998.

[27] Parasnis, D. Principles of Applied Geophysics. Kluwer Academic Publishing, Dordrecht, The Netherlands, 1996.

[28] Perdigao, V., and Annoni, A. Technical and Methodological Guide for Updating CORINE Land Cover Data Base. EC-JRC, EEA, Brussels, Belgium, 1997. EUR 17288 EN.

[29] Peuquet, D. J., and Marble, D. F., Eds. Introductory Readings in Geographic Information Systems. Taylor & Francis, London, 1990.

[30] Rahman, H., and Dedieu, G. SMAC: A simplified method for the atmospheric correction of satellite measurements in the solar spectrum. International Journal of Remote Sensing 15 (1994), 123-143.

[31] Richter, R. A Spatially-Adaptive Fast Atmospheric Correction Algorithm. ERDAS Imagine ATCOR2 User Manual (Version 1.0), 1996.

[32] Rossiter, D. G. Lecture Notes: Methodology for Soil Resource Inventories, 2nd revised ed. ITC Lecture Notes SOL.27. ITC, Enschede, The Netherlands, 2000.
[33] Sabins, F. F. Remote Sensing: Principles and Interpretation, third ed. Freeman & Co., New York, NY, 1996.

[34] Schetselaar, E. On preserving spectral balance in image fusion and its advantages for geological image interpretation. Photogrammetric Engineering & Remote Sensing 67, 8 (2001), 925-934.

[35] Smith, W., and Sandwell, D. Measured and estimated seafloor topography, version 4.2. Poster RP1, 1997. World Data Center for Marine Geology and Geophysics.

[36] Tanre, D., Deroo, C., Duhaut, P., Herman, M., Morcette, J., Perbos, J., and Deschamps, P. Description of a computer code to simulate the satellite signal in the solar spectrum: the 5S code. International Journal of Remote Sensing 11 (1990), 659-668.

[37] Telford, W., Geldart, L., and Sheriff, R. Applied Geophysics, second ed. Cambridge University Press, Cambridge, UK, 1991.

[38] TopoSys, 2002.

[39] Trevett, J. W. Imaging Radar for Resources Surveys. Chapman and Hall Ltd., London, UK, 1986.

[40]

[41] Vermote, E., Tanre, D., Deuze, J., Herman, M., and Morcette, J.-J. Second simulation of the satellite signal in the solar spectrum, 6S: an overview. IEEE Transactions on Geoscience and Remote Sensing 35, 3 (May 1997), 675-686.

[42] Wehr, A., and Lohr, U. Airborne laser scanning: an introduction and overview. ISPRS Journal of Photogrammetry & Remote Sensing 54 (1999), 68-82.

[43] Worboys, M. F. GIS: A Computing Perspective. Taylor & Francis, London, UK, 1995.

[44] Zinck, A. J. Physiography & Soils. ITC Lecture Notes SOL.41. ITC, Enschede, The Netherlands, 1988.
Glossary
A
Absorption  The process in which electromagnetic energy is converted within an object into other forms of energy (e.g. heat).

Active sensor  Sensor with a built-in source of energy; the sensor both emits and receives energy (e.g. radar and laser).

Additive colours  The additive principle of colours is based on the three primary colours of light: red, green and blue. All three primary colours together produce white. Additive colour mixing is used, for example, on computer screens and televisions.
B
Backscatter  The microwave signal reflected by elements of an illuminated surface in the direction of the radar antenna.

Band
Glossary
C
Charge-coupled device (CCD) Semi-conductor elements usually aligned as a
linear (scanner) or surface array (video, digital camera). CCDs produce image data.
Class
Cluster
Colour
Colour film Also known as true colour film used in (aerial) photography. The
principle of colour film is to add sensitized dyes to the silver halide.
Magenta, yellow and cyan dyes are sensitive to red, green and blue
light respectively.
Colour infrared film Film with specific sensitivity for infrared wavelengths.
Typically used in surveys of vegetation.
Corner reflector Combination of two or more intersecting specular surfaces
that combine to enhance the signal reflected back in the direction of
the radar, e.g. houses in urban areas.
D
Dielectric constant Parameter that describes the electrical properties of a medium. The reflectivity of a surface and the penetration of microwaves into the material are determined by this parameter.
Digital Terrain Model (DTM) Term indicating a digital description of the terrain relief. A DTM can be stored in different ways (contour lines, TIN, raster) and may also contain semantic, relief-related information (breaklines, saddle points).
E
Earth Observation (EO) Term indicating the collection of remote sensing techniques performed from space.
Electromagnetic energy Energy with both electric and magnetic components.
Both the wave model and photon model are used to explain this phenomenon. The measurement of reflected and emitted electromagnetic
energy is an essential aspect in remote sensing.
Electromagnetic spectrum The complete range of all wavelengths, from gamma rays (10⁻¹² m) up to very long radio waves (10¹² m).
Emission
Emissivity The radiant energy of an object compared to the energy of a blackbody of the same temperature, expressed as a ratio.
Error matrix Matrix that compares samples taken from the source to be evaluated with observations that are considered correct (reference). The error matrix allows the calculation of quality parameters such as overall accuracy, error of omission and error of commission.
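These quality parameters follow directly from the matrix. A minimal sketch in Python (the 3-class matrix and variable names are invented for illustration; rows are taken here as classified samples and columns as reference samples, although the opposite convention also occurs):

    import numpy as np

    # Hypothetical 3-class error matrix: rows = classification, columns = reference.
    error_matrix = np.array([[35,  2,  1],
                             [ 4, 28,  3],
                             [ 1,  5, 21]])

    # Overall accuracy: correctly classified samples (diagonal) over all samples.
    overall_accuracy = np.trace(error_matrix) / error_matrix.sum()
    # Error of omission per class: reference samples missed (column-wise).
    omission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=0)
    # Error of commission per class: samples wrongly assigned (row-wise).
    commission = 1 - np.diag(error_matrix) / error_matrix.sum(axis=1)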
F
False colour infrared film see Colour infrared film.
Feature space The mathematical space describing the combinations of observations (DN values in the different bands) of a multispectral or multiband image. A single observation is defined by a feature vector.
Feature space plot A two- or three-dimensional graph in which the observations made in different bands are plotted against each other.
Field of view (FOV) The total swath as observed by a sensor-platform system.
Sometimes referred to as total field of view. It can be expressed as an
angle or by the absolute value of the width of the observation. See
also Instantaneous field of view.
Filter
(1) Physical product, made of glass, used in remote sensing devices to block certain wavelengths, e.g. an ultraviolet filter. (2) Mathematical operator used in image processing to modify the signal, e.g. a smoothing filter.
G
Geo-spatial data Data that includes positions in the geographic space. In this
book, usually abbreviated to spatial data.
Geocoding Process of transforming and resampling image data in such a way that they can be used simultaneously with data that are in a specific map projection. Inputs for a geocoding process are image data and control points; the output is a geocoded image. A specific category of geocoded images comprises orthophotos and orthoimages.
Geographic information Information derived from spatial data, and in the
context of this book, from image data. Information is what is relevant in a certain application context.
Geographic Information System (GIS) A software package that accommodates
the capture, analysis, manipulation and presentation of georeferenced
data. It is a generic tool applicable to many different types of use (GIS
applications).
Georeferencing Process of relating an image to a specific map projection. As a result, vector data stored in this projection can, for example, be superimposed on the image. Inputs for a georeferencing process are image data and coordinates of ground control points; the output is a georeferenced image.
Global Navigation Satellite System (GNSS) Global infrastructure for the provision of positioning and timing information. It consists of the American GPS and the Russian Glonass systems; the European Galileo system is forthcoming.
Ground control points (GCPs) Points that are used to define or validate a geometric transformation process. Strictly, the name states that these points have been measured on the ground. Ground control points should be recognizable both in the image and in the real world.
Ground range Range direction of the side-looking radar image as projected
onto the horizontal reference plane.
Ground truth A term that may include different types of observations and
measurements performed in the field. The name is imprecise because
it suggests that these are 100% accurate and reliable, and this may be
difficult to achieve.
H
Histogram Tabular or graphical representation showing the (absolute and/or relative) frequency of values. In the context of image data it relates to the distribution of the DN values of a set of pixels.
Histogram equalization Process used in the visualization of image data to optimize the overall image contrast. Based on the histogram, all available grey levels or colours are distributed in such a way that all occur with equal frequency in the result.
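The idea can be made concrete with a short sketch. The following Python/NumPy fragment is a minimal illustration, assuming an 8-bit grey-value image stored as an integer array (the function and variable names are ours):

    import numpy as np

    def equalize(image, levels=256):
        # Histogram of the DN values, then the cumulative frequency (0..1).
        hist = np.bincount(image.ravel(), minlength=levels)
        cdf = hist.cumsum() / image.size
        # Look-up table that spreads the grey levels over the full range.
        lut = np.round(cdf * (levels - 1)).astype(np.uint8)
        return lut[image]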
I
Image
Incidence angle Angle between the line of sight from the sensor to an element of an imaged scene and the vertical direction to the scene. One must distinguish between the nominal incidence angle, determined by the geometry of the radar and the Earth's geoidal surface, and the local incidence angle, which takes into account the mean slope of the pixel in the image.
Infrared waves Electromagnetic radiation in the infrared region of the electromagnetic spectrum. Near-infrared (0.7–1.2 µm), middle infrared (1.2–2.5 µm) and thermal infrared (8–14 µm) are distinguished.
Instantaneous field of view (IFOV) The area observed on the ground by a sensor, which can be expressed by an angle or in ground surface units.
Interferometry Computational process that makes use of the interference of
two coherent waves. In the case of imaging radar, two different paths
for imaging cause phase differences from which an interferogram can
be derived. In SAR applications, interferometry is used for constructing a DEM.
Interpretation elements The elements used by the human vision system to interpret a picture or image. Interpretation elements are: tone, texture, shape, size, pattern, site, association and resolution.
L
Latent image When exposed to light, the silver halide crystals within the photographic emulsion undergo a chemical reaction, which results in an invisible latent image. The latent image is transformed into a visible image by the development process, in which the exposed silver halide is converted into silver grains that appear black.
Layover
Look angle The angle of viewing relative to the vertical (nadir) as perceived
from the sensor.
M
Microwaves Electromagnetic radiation in the microwave window, which ranges from 1 to 100 cm.
Mixel
Contraction of "mixed pixel". The term is used in the context of image classification where different spectral classes occur within the area covered by one pixel.
N
Nadir
The point (or line) directly under the platform during acquisition of
image data.
Noise
O
Objects
Objects are real-world features with clearly identifiable geometric characteristics. In a computer environment, objects are modelled using an object-based approach, in contrast to a field-based approach, which is more suited to continuous phenomena.
Orbit
The path of a satellite through space. Types of orbits used for remote
sensing satellites are, for example, (near) polar and geostationary.
P
Panchromatic Indication of one (wide or narrow) spectral band in the visible
and near-infrared part of the electromagnetic spectrum.
Passive sensor Sensor that records energy that is produced by external sources
such as the Sun and the Earth.
Pattern recognition Term for the collection of techniques used to detect and
identify patterns. Patterns can be found in the spatial, spectral and
temporal domains. An example of spectral pattern recognition is image classification; an example of spatial pattern recognition is segmentation.
Photogrammetry The science and techniques of making measurements from
photos or image data. Photogrammetric procedures are required for
accurate measurements from stereo pairs of aerial photos, image data
or radar data.
Photograph Image obtained by using a camera. The camera produces a negative film, which can be printed onto a positive paper product.
Pixel
Pixel value The representation of the energy measured at a point, usually expressed as a Digital Number (DN-) value.
Polarization Orientation of the electric (E) vector in an electromagnetic wave,
frequently horizontal (H) or vertical (V).
Polychromatic Solar radiation comprising a composite of monochromatic wavelengths.
Pulse
Q
Quantization The number of discrete levels used to store the energy as measured by a sensor; e.g. 8-bit quantization allows 256 levels of energy.
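The relation between quantization depth and the number of levels is simply two to the power of the number of bits, as this small, purely illustrative Python fragment shows:

    for bits in (6, 8, 10, 16):
        print(bits, "bits ->", 2 ** bits, "levels")   # e.g. 8 bits -> 256 levels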
R
Radar
Acronym for Radio Detection And Ranging. Radars are active sensors operating at wavelengths between 1 and 100 cm.
Radiance
Radiometric resolution
Range
RAR
Raster
A regularly spaced set of cells with associated (field) values. In contrast to a grid, the associated values represent cell values, not point
values. This means that the value for a cell is assumed to be valid for
all locations within the cell.
Reflectance The ratio of the reflected radiation to the total irradiation. Reflectance depends on the wavelength.
Reflection
Relief displacement Displacement of an imaged object caused by its elevation above the reference surface. Its magnitude (and direction) depends on sensor-platform characteristics and on the elevation of the object.
Remote sensing (RS) Remote sensing comprises the instruments, techniques and methods used to observe the Earth's surface at a distance and to interpret the images or numerical values obtained, in order to acquire meaningful information about particular objects on Earth.
Resampling Process to generate a raster with another orientation and/or a different cell size, and to assign DN values using one of the following methods: nearest neighbour selection, bilinear interpolation or cubic convolution.
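Of the three methods, nearest neighbour selection is the simplest. A minimal Python sketch, assuming only a change of cell size on an axis-aligned grid (the geometric transformation itself is left out, and the function name is ours):

    import numpy as np

    def resample_nearest(raster, factor):
        # Each output cell takes the DN of the nearest input cell.
        rows, cols = raster.shape
        r = (np.arange(int(rows * factor)) / factor).astype(int)
        c = (np.arange(int(cols * factor)) / factor).astype(int)
        return raster[np.ix_(r, c)]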
Resolution Indicates the smallest observable (measurable) difference at which objects can still be distinguished. In the remote sensing context the term is used for spatial, spectral and radiometric resolution.
RMS error Root Mean Square error. A statistical measure of accuracy, similar
to standard deviation, indicating the spread of the measured values
around the true value.
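As a formula, it is the square root of the mean squared deviation. A minimal Python sketch (the function and variable names are ours):

    import numpy as np

    def rmse(measured, true):
        # Spread of the measured values around the true values.
        d = np.asarray(measured) - np.asarray(true)
        return float(np.sqrt(np.mean(d ** 2)))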
Roughness Variation of surface height within an imaged resolution cell. A
surface appears rough to microwave illumination when the height
variations become larger than a fraction of the radar wavelength.
S
SAR
Scale
Scanner
(1) Remote sensing sensor based on the scanning principle, e.g. a multispectral scanner. (2) Office device used to convert analogue products (photos, maps) into digital raster format.
Slant range Image direction as measured along the sequence of line of sight
rays from the radar to each reflecting point in the scene.
Spatial data In the broad sense, spatial data is any data with which position is
associated.
Spatial resolution See Resolution.
Speckle Interference of backscattered waves stored in the cells of a radar image. It causes the return signals to be extinguished or amplified, resulting in random dark and bright pixels in the image.
Spectral resolution See Resolution.
Specular reflection
T
Training stage Part of the image classification process in which pixels representative of a certain class are identified. Training results in a training set that comprises the statistical characteristics (signatures) of the classes of interest.
Transmittance The ratio of the radiation transmitted to the total irradiation.
V
Variable, interval A variable that is measured on a continuous scale, but with
no natural zero. It cannot be used to form ratios.
Variable, nominal A variable that is organized in classes, with no natural order, i.e., cannot be ranked.
Variable, ordinal A variable that is organized in classes with a natural order,
and so it can be ranked.
Variable, ratio A variable that is measured on a continuous scale, and with a
natural zero, so can be used to form ratios.
Viewing angle
W
Wavelength Minimum distance between two events of a recurring feature in a periodic sequence, such as the crests of a wave. Wavelength is expressed as a distance (e.g. in µm or nm).
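Wavelength and frequency are linked through the speed of light (see Appendix A): the wavelength equals c divided by the frequency. A small illustrative Python fragment (the 5.3 GHz C-band radar example is ours):

    C = 2.9979e8  # speed of light in m/s (see Appendix A)

    def wavelength(frequency_hz):
        # lambda = c / f, result in metres
        return C / frequency_hz

    print(wavelength(5.3e9))  # C-band radar at 5.3 GHz -> about 0.057 m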
Index
absorption, 61
absorptivity, 56
active sensor, 60
laser scanning, 255
radar, 213
additive colours, 353
aerial camera, 131
digital, 157
aerial photography
oblique, 129
vertical, 129
aerospace surveying, 33
altimeter, 214
amplitude, 217
aperture, 222
atmospheric window, 63
bathymetry, 104
binary encoding, 484
blackbody, 56, 456
colour
spaces, 351
tone, 389
YMC system, 356
constellation, 205
coordinate system, 320
coverage, 115
spatial, 115
spectral, 115
temporal, 115
dynamic range, 115, 164
end-member, 488
error of commission, 441
error of omission, 441
false colour composite, 368
feature space, 420
fiducial marks, 135, 333
field observations, 407
field of view, 166
angular, 147
instantaneous, 166
film, 137
emulsion, 139
false colour infrared, 140
scanning, 141
speed, 138
true colour, 140
filter
kernel, 365
optical, 133
filter operations, 365
averaging, 366
edge enhancement, 367
filtering, 234
flat-field correction, 481
focal length, 132, 146
frequency, 54, 217
general sensitivity, 137
geocoding, 326
geometric transformation, 321
first order, 322
residual errors, 323
root mean square error, 323
georeferencing, 321
greybody, 456
ground control point (GCP), 322
ground observations, 36
ground truth, 408
ground-based observations, 30
IFOV, 166
IHS transformation, 373
image, 114
data cost, 123
size, 117
stereo, 170
image classification, 391, 416
supervised, 429
unsupervised, 430
image data, 32
imaging spectrometry, 475, 492
imaging spectroscopy, 476
Inertial Measuring Unit, 261
Inertial Navigation System, 261
information extraction, 33
interferometry, 318
differential (DINSAR), 245
interpretation, 236
interpretation elements, 389
irradiance, 73
mapping, 384
mapping unit
minimum, 403
mirror stereoscope, 392
mixed pixel, 488
mixel, 488
monoplotting, 330, 338
multispectral scanner, 162
nadir, 150
night-time image, 112
noise, 143
Normalized Difference Vegetation Index (NDVI), 177
off-nadir viewing, 170
orbit, 110
altitude, 110
GEO, 110
geostationary, 112
inclination angle, 110
LEO, 110
period, 110
polar, 111
repeat cycle, 111
sun-synchronous, 112
types of, 110
orientation, 330
exterior, 334
interior, 334
relative, 334
orthoimage, 330, 340
orthophoto, 340
overall accuracy, 441
overlap, 153
passive sensor, 60
pattern, 390
phase, 217
photogrammetry, 128, 318
photon, 55, 164
pixel, 114
mixed, 444
size, 116
Planck's constant, 453
platform, 106
airborne, 106
aircraft, 108
operational, 121
satellite, 110
Space Shuttle, 88
spaceborne, 106
pocket stereoscope, 392
quality
image classification, 440
photo-interpretation, 409
quantization, 116, 164
radar, 101
azimuth direction, 220
bands, 217
differential interferometry, 245
equation, 215
foreshortening, 229, 231
ground range, 221
ground range resolution, 222
imaging, 215
incidence angle, 221
interferogram, 243
interferometry (INSAR), 240
layover, 229
multi-look, 227
polarisation, 218
range direction, 220
real aperture (RAR), 222
shadow, 229
Single-Look-Complex (SLC), 243
slant range, 221
slant range resolution, 222
sophisticated, 204
synthetic aperture (SAR), 223
radiant temperature, 459
receiving station, 113
red edge, 492
reflectance
bidirectional, 196
reflectance curve, 73
soil, 76
vegetation, 75
water, 77
reflection, 71
diffuse, 72
specular, 72
relief displacement, 150
Remote Sensing, 32
replicability, 409
resampling, 318, 328
bilinear convolution, 328
cubic convolution, 328
nearest neighbour, 328
resolution, 115
radiometric, 115, 138, 164
spatial, 115, 148, 166
spectral, 115, 165, 169
temporal, 116
revisit time, 111, 116
satellite
ALOS, 204, 253
AlSAT-1, 205
Aqua, 184
Artemis, 200, 205
attitude control, 204
Aura, 184
Cartosat-1, 189
Cartosat-2, 189
communication, 113
laser, 204
constellation, 205
Envisat-1, 199, 204
EO-1, 194, 495
EROS-1A, 191
ERS-1, 102, 201
ERS-2, 201
fast development, 203
GRACE, 278
Ikonos, 191
IRS-1C, 188
IRS-1D, 188
Landsat-5, 91
Landsat-7, 180
LEWIS, 495
Meteosat-8, 117, 174
miniaturization, 204
NOAA-17, 176
Oceansat-1, 188
OrbView-2, 93
OrbView-3, 191
Proba, 196
Quickbird, 191
Radarsat-1, 204
Resourcesat-1, 188
RISAT, 189
SPOT-5, 187
Terra, 183
TES, 188
Thai-Paht-2, 205
Triana, 113
Tsinghua-1, 205
satellite navigation, 109, 156
scale factor, 146
scanner, 162
across-track, 163
along-track, 167
whiskbroom, 163
scattering, 66
Mie, 69
non-selective, 70
Rayleigh, 67
scatterometer, 214
scatterplot, 421
selective radiator, 456
sensor
active, 84
aerial camera, 88, 131
AIS, 494
ALI, 194
ASAR, 201
ASTER, 183, 495
AVHRR, 176
AVIRIS, 476, 494
AWiFS, 188
CHRIS, 196
ETM+, 180
gamma-ray spectrometer, 87, 275
GERIS, 476
gravimeter, 277
HIRES, 476
HRC, 196
HRG, 187
HSI, 495
Hyperion, 194, 495
hyperspectral imager, 94
imaging radar, 101
imaging spectrometer, 94
LAC, 194
laser scanner, 99
lidar, 99
LISS4, 188
magnetometer, 279
MERIS, 202
MODIS, 183, 495
MSS, 180
multispectral, 162
multispectral scanner, 92
new types of, 204
OSA, 191
Stefan-Boltzmann law, 56, 455
stereogram, 392
stereomodel, 341
stereoplotting, 341
stereorestitution, 331
stereoscope, 341
stereoscopic vision, 392
subtractive colours, 356
superimposition, 321
temperature
kinetic, 459
radiant, 459
texture, 390
three-dimensional, 318
transfer function, 362
transmission, 61
two-dimensional, 318
validation, 440
wavelength, 53
wavelength band, 92
Appendix A
SI units & prefixes
Quantity        SI unit
Length          metre (m)
Time            second (s)
Temperature     kelvin (K)
Energy          joule (J)
Power           watt (W) (J/s)

Prefix          Multiplier
tera (T)        10¹²
giga (G)        10⁹
mega (M)        10⁶
kilo (k)        10³
centi (c)       10⁻²
milli (m)       10⁻³
micro (µ)       10⁻⁶
nano (n)        10⁻⁹
pico (p)        10⁻¹²

Unit            SI equivalent
centimetre      10⁻² m
millimetre      10⁻³ m
micron          10⁻⁶ m
micrometre      10⁻⁶ m
nanometre       10⁻⁹ m

Parameter           Value
speed of light      2.9979 × 10⁸ m/s
temperature (°C)    (°C + 273.15) K
inch                2.54 cm
foot                30.48 cm
mile                1,609 m
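The conversions in the last table can also be expressed directly in code. A minimal Python sketch using only the values listed above (the function names are ours):

    def celsius_to_kelvin(t):
        return t + 273.15      # kelvin

    def inch_to_cm(x):
        return x * 2.54        # centimetres

    def foot_to_cm(x):
        return x * 30.48       # centimetres

    def mile_to_m(x):
        return x * 1609        # metres

    print(celsius_to_kelvin(20.0))  # 293.15 K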