
Ravi P. Gupta

Remote Sensing Geology

Third Edition

Ravi P. Gupta
Formerly Professor, Earth Resources Technology
Department of Earth Sciences
Indian Institute of Technology Roorkee
Roorkee, India

ISBN 978-3-662-55874-4 ISBN 978-3-662-55876-8 (eBook)


https://doi.org/10.1007/978-3-662-55876-8
Library of Congress Control Number: 2017954274

© Springer-Verlag GmbH Germany 1991, 2003, 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction
on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic
adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not
imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and
regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed
to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty,
express or implied, with respect to the material contained herein or for any errors or omissions that may have been
made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional
affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer-Verlag GmbH Germany
The registered company address is: Heidelberger Platz 3, 14197 Berlin, Germany
To,
M. S. R.
For inspiration and faith
Preface to the Third Edition

The main objective of producing the third edition of “Remote Sensing Geology” is to
incorporate in the book recent advances in this field. The book has been thoroughly revised
and enlarged. Topics such as satellite orbits, atmospheric correction, digital elevation model,
topographic correction, temperature/emissivity separation, object-based image analysis,
ASTER ratio indices for mineralogic identification and GIS-based prospectivity modelling
have been introduced and/or discussed at greater length. Besides, additional information has
been incorporated in nearly all the chapters, including on the latest remote sensing systems
and on remote sensing-based geologic applications. Critical suggestions from colleagues and
esteemed reviewers of the earlier editions have been taken into consideration to the extent
possible.
I am greatly indebted to a number of persons for their suggestions, inputs and contributions
in this edition. My particular thanks are due to the following (in alphabetic order) for kindly
sparing their valuable time, going through parts of the manuscript of their interest and extending
most valuable suggestions and comments:
A.K. Awasthi, Graphic Era University, Dehradun; Robert Corrie, The Queen’s College,
University of Oxford, Oxford; E. Elango, National Remote Sensing Centre, Hyderabad;
Rüdiger Gens, Alaska Satellite Facility, Geophysical Institute, University of Alaska,
Fairbanks; Sudhir K. Govil, Indian Institute of Remote Sensing, Dehradun; Umesh K.
Haritashya, Department of Geology, University of Dayton, Dayton; Douglas King, Dept. of
Geography and Environmental Studies, Carleton University, Ottawa; Yoshiki Ninomiya,
Research Institute of Geology and Geoinformation, Geological Survey of Japan, Tsukuba;
Amin B. Pour, Korea Polar Research Institute, Incheon; Anupma Prakash, Geophysical
Institute, University of Alaska, Fairbanks; Ratan K. Samaddar, State Water Investigation
Directorate, Kolkata; Ashis K. Saha, Dept. of Geography, Delhi University, Delhi; Varinder
Saini, Dept. of Civil Engineering, Indian Institute of Technology, Ropar; Amit K. Sen, Dept.
of Earth Sciences, Indian Institute of Technology, Roorkee; Reet K. Tiwari, Dept. of Civil
Engineering, Indian Institute of Technology, Ropar; and Yasushi Yamaguchi, Graduate
School of Environmental Studies, Nagoya University, Nagoya.
Thanks are particularly due to Sarvesh Kumar Sharma for soft copy preparation of the
manuscript and Narendra Kumar Varshney for the drawing work. Finally, I remain most
indebted to my wife Renu for her positive support and encouragement throughout and bearing
with me during the period of my preoccupation with the book.

Lucknow
July 2017

Ravi P. Gupta

Preface to the Second Edition

The first edition of this book appeared in 1991, and since then there have been many
developments in the field of remote sensing, both in the direction of technology of data
acquisition and in data processing and applications. This has necessitated a new edition of the
book.
The revised edition includes new and updated material on a number of topics: SAR interferometry,
hyperspectral sensing, digital imaging cameras, GPS principle, new optical and
microwave satellite sensors, and some of the emerging techniques in digital image processing
and GIS. Besides, a host of new geological applications of remote sensing are also included.
The book has been thoroughly revised; nevertheless, it retains the original long axis and
style, i.e. discusses the basic remote sensing principles, systems of data acquisition and data
processing, and presents the wide ranging geological applications.
The following individuals reviewed parts of the manuscript, suggested improvements and
furnished missing links: R.P. Agarwal, M.K. Arora, R. Gens, U.K. Haritashya, K. Hiller,
H. Kaufmann, D. King, J. Mathew, F. van der Meer, R.R. Navalgund, S. Nayak, A. Prakash,
S.-K. Rath, A.K. Saha, A.K. Sen and A.N. Singh. I am greatly obliged to them for their
valuable inputs and suggestions in arriving at the final presentation.
I deeply appreciate the infinite patience and endurance of Sarvesh Kumar Sharma in typing
and computer-finishing the manuscript.
Finally, I am indebted to my wife Renu, for her encouragement and support, particularly in
times when no end appeared in sight.

Roorkee
November 2002

Ravi P. Gupta

Preface to the First Edition

There has been phenomenal growth in the field of remote sensing over the last two to three
decades. It has been applied in the fields of geology, mineral exploration, forestry, agriculture,
hydrology, soils, land use, etc.—that is, in all pursuits of sciences dealing with the features,
processes, and phenomena operating at the Earth’s surface. The status of geological remote
sensing has rapidly advanced and the scientific literature is scattered. The aim of the present
book is to systematically discuss the specific requirements of geological remote sensing, to
summarize the techniques of remote sensing data collection and interpretation, and to integrate
the technique into geo-exploration.
The main conceptual features of the book are:

– To combine various aspects of geological remote sensing, ranging from the laboratory
spectra of minerals and rocks to aerial and space-borne remote sensing;
– To integrate photogeology into remote sensing;
– To promote remote sensing as a tool in integrated geo-exploration; and
– To elucidate the wide-spectrum geo-scientific applications of remote sensing, ranging from
meso- to global scale.

The book has been written to satisfy the needs of mainly graduate students and active
research workers interested in applied Earth sciences. It is primarily concept oriented rather
than system or module oriented.
The organization of the book is detailed in Chap. 1 (Table 1.1). The book has three chief
segments: (1) techniques, sensors and interpretation of data in the optical region; (2) tech-
niques, sensors and interpretation of data in the microwave region; and (3) data processing,
integration and applications.
The idea for the book germinated as I prepared a course in remote sensing at the University
of Roorkee for graduate students, during which extensive lecture notes were made. The book
is an outcome of my teaching and research at the University of Roorkee, and partly also at the
University of Munich.
A wide-spectrum book in a field like remote sensing, where advancements are taking place
at such a fast pace, can hardly be exhaustive and up-to-date. Although every effort has been
made to incorporate recent developments, the priority has been on concepts rather than on
compilation of data alone (SPOT data examples could not be included because of copyright
limitations).
Sincere thanks are due to many individuals and organizations who have contributed in
various ways to the book. Particularly, I am grateful to Dr. Rupert Haydn, Managing Director,
Gesellschaft fuer Angewandte Fernerkundung mbH, Munich, Germany, and formerly at the
University of Munich, for supplying numerous illustrations. He kindly provided many images
for the book and offered blanket permission to select illustrations and examples from his wide
and precious collection. Dr. Haydn also spent valuable time reviewing parts of the text, offered
fruitful criticism and is responsible for many improvements.


Dr. Konrad Hiller, DLR, Germany and formerly at the University of Munich, provided
what was needed most—inspiration and warm friendly support. Many stimulating discussions
with him promoted my understanding of the subject matter and led to numerous reforms.
Without Konard’s encouragement, this book may not have seen the light of the day.
I am grateful to a number of people, particularly the following, for going through parts
of the manuscript of their interest, suggesting amendments and furnishing several missing
links: K. Arnason, R. Chander, R.P.S. Chhonkar, G.F. Jaskolla, H. Kaufmann, F. Lehmann,
G. Philip, A.K. Saraf, K.P. Sharma, V.N. Singh, B.B.S. Singhal, R. Sinha, D.C. Srivastava,
U. Terhalle, R.S. Tiwari, L.C. Venkatadhri and P. Volk.
Thanks are also due to Prof. Dr. J. Bodechtel, Institut fuer Allgemeine und Angewandte
Geologie (Institute for General and Applied Geology), University of Munich, for his advice,
suggestions and free access to the facilities at Munich. The Alexander von Humboldt Foun-
dation, Bonn, and the Gesellschaft fuer Angewandte Fernerkundung mbH, Munich
(Dr. R. Haydn) kindly provided financial support for my visits and stay in Germany, during
which parts of the book were written.
A book on remote sensing has to present many pictures and illustrations. A large number
of these were borrowed from colleagues, organizations, instrument manufacturers, commercial
firms and publications. These are acknowledged in the captions.
For the excellent production of the book, the credit goes to Dr. W. Engel, Ms. I. Scherich,
Ms. G. Hess, Ms. Jean von dem Bussche and Ms. Theodora Krammer of Springer-Verlag,
Heidelberg.
Although a number of people have directly and indirectly contributed to the book, I am
alone responsible for the statements made herein. It is possible that some oversimplifications
appear as erroneous statements. Suggestions from readers will be gratefully accepted.
Finally, I am indebted to my wife Renu for not only patiently enduring 4 years of my
preoccupation with the book but also extending positive support and encouragement.
If this book is able to generate interest in readers for this newly emerging technology, I
shall consider my efforts to be amply rewarded.

Roorkee
June 1991

R.P. Gupta
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Definition and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Development of Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Fundamental Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Advantages and Challenges. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 A Typical Remote Sensing Programme . . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Field Data (Ground Truth) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6.1 Timing of Field Data Collection. . . . . . . . . . . . . . . . . . . . . . 7
1.6.2 Sampling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.3 Types of Field Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.4 GPS Survey. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 Scope and Organization of This Book . . . . . . . . . . . . . . . . . . . . . . . . 11
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2 Physical Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 The Nature of EM Radiation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Radiation Principles and Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Radiation Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Blackbody Radiation Principles . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.4 Energy Available for Sensing . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Atmospheric Scattering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.2 Atmospheric Absorption . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.3 Atmospheric Emission . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Energy Interaction Mechanisms on the Ground . . . . . . . . . . . . . . . . . . 18
2.4.1 Reflection Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.2 Transmission Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.3 Absorption Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.4 Earth’s Emission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 Spectra of Minerals and Rocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Basic Arrangements for Laboratory Spectroscopy . . . . . . . . . . . . . . . . 23
3.3 Energy States and Transitions—Basic Concepts. . . . . . . . . . . . . . . . . . 24
3.3.1 Electronic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.2 Vibrational Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4 Spectral Features of Mineralogical Constituents . . . . . . . . . . . . . . . . . . 26
3.4.1 Visible and Near-Infrared (VNIR) Region (0.4–1.0 µm) . . . . . 26
3.4.2 Shortwave-Infrared (SWIR) Region (1–3 µm) . . . . . . . . . . . . 27
3.4.3 Thermal-Infrared (TIR) Region (Approx. 3–25 µm) . . . . . . . . 28


3.5 Spectra of Minerals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.6 Spectra of Rocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.6.1 Solar Reflection Region (VNIR + SWIR) . . . . . . . . . . . . . 30
3.6.2 Thermal-Infrared Region . . . . . . . . . . . . . . . . . . . . . . . . 31
3.7 Laboratory Versus Field Spectra . . . . . . . . . . . . . . . . . . . . . . 32
3.8 Spectral Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.9 Spectra of Other Common Objects . . . . . . . . . . . . . . . . . . . . 32
3.10 Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

4 Photography. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.1 Relative Merits and Limitations . . . . . . . . . . . . . . . . . . . . . . 37
4.1.2 Working Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Films . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.4 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.5 Vertical and Oblique Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.6 Ground Resolution Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.7 Photographic Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.7.1 Aerial Photographic Missions . . . . . . . . . . . . . . . . . . . . . . . 42
4.7.2 Space-Borne Photographic Missions . . . . . . . . . . . . . . . . . . . 42
4.7.3 Product Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

5 Multispectral Imaging Techniques . . . . . . . . . . . . . . . . . . . . . . . 45
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.1 Working Principle of a Digital Sensor . . . . . . . . . . . . . . . 45
5.1.2 Imaging Versus Non-imaging Optical Sensors and Terminology . . . . . . . 47
5.2 Factors Affecting Sensor Performance . . . . . . . . . . . . . . . . . . 47
5.2.1 Sensor Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.3 Non-imaging Radiometers . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.3.1 Working Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.4 Imaging Sensors (Scanning Systems) . . . . . . . . . . . . . . . . . . . 49
5.4.1 What Is an Image? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.4.2 Optical-Mechanical Line Scanner (Whiskbroom Scanner) . . . . 52
5.4.3 CCD Linear Array Scanner (Pushbroom Scanner) . . . . . . . . 53
5.4.4 FPA and TDI Architecture of Spaceborne CCD Linear Arrays . . . . . . . 55
5.4.5 Digital Cameras (Area Arrays) . . . . . . . . . . . . . . . . . . . . 56
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6 Important Spaceborne Missions and Multispectral Sensors . . . . . . . . . . . . . 61


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 Orbital Motion and Earth Orbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2.1 Kepler’s Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2.2 Earth Orbits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Landsat Programme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.4 SPOT Programme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.5 IRS/Resourcesat Programme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.6 Japanese Programmes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.7 CBERS Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

6.8 RESURS-1 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.9 TERRA-ASTER Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.10 High Spatial Resolution Satellite Sensors . . . . . . . . . . . . . . . 80
6.11 Other Programmes (Past) . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.12 Products from Scanner Data . . . . . . . . . . . . . . . . . . . . . . . . 85
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

7 Geometric Aspects of Photographs and Images . . . . . . . . . . . . . . 87
7.1 Geometric Distortions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.1.1 Distortions Related to Sensor System . . . . . . . . . . . . . . . 87
7.1.2 Distortions Related to Sensor-Craft Altitude and Perturbations . . . . . . . 89
7.1.3 Distortions Related to the Earth's Shape and Spin . . . . . . . 91
7.1.4 Relief Displacement . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.2 Stereoscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.2.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
7.2.2 Vertical Exaggeration . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.2.3 Aerial and Spaceborne Configurations for Stereo Coverage . . . . . . . 95
7.2.4 Photography Vis-à-Vis Line-Scanner Imagery for Stereoscopy . . . . . . . 96
7.2.5 Instrumentation for Stereo Viewing . . . . . . . . . . . . . . . . . . . 98
7.3 Photogrammetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.3.1 Measurements on Photographs . . . . . . . . . . . . . . . . . . . . . . . 98
7.3.2 Measurements on Line-Scanner Images . . . . . . . . . . . . . . . . . 100
7.3.3 Aerial Vis-à-Vis Satellite Photogrammetry. . . . . . . . . . . . . . . 100
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

8 Digital Elevation Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8.2 Data Acquisition for Generating DTM . . . . . . . . . . . . . . . . . . . . . . . . 101
8.2.1 Ground Surveys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8.2.2 Digitization of Topographic Contour Maps . . . . . . . . . . . . . . 101
8.2.3 Conventional Aerial Photographic Photogrammetry . . . . . . . . 102
8.2.4 Digital Photogrammetry Utilizing Remote Sensing
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.2.5 UAV-Borne Digital Camera. . . . . . . . . . . . . . . . . . . . . . . . . 103
8.2.6 Satellite SAR Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.2.7 Aerial LIDAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.3 Orthorectification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.4 Derivatives of DEM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
8.5 Geological Applications of DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8.6 Global DEM Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

9 Image Quality and Principles of Interpretation . . . . . . . . . . . . . . . . . . . . . 107


9.1 Image Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
9.1.1 Factors Affecting Image Quality. . . . . . . . . . . . . . . . . . . . . . 107
9.2 Handling of Photographs and Images . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.2.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.2.2 Mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.2.3 Scale Manipulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.2.4 Stereo Viewing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

9.2.5 False Colour Composites (FCCs) . . . . . . . . . . . . . . . . . . . . . 110


9.3 Fundamentals of Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
9.3.1 Elements of Photo-Interpretation . . . . . . . . . . . . . . . . . . . . . 112
9.3.2 Geotechnical Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

10 Atmospheric Corrections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2 Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2.1 Solar Reflection Region . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2.2 Thermal IR Region. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.3 Procedures of Atmospheric Correction . . . . . . . . . . . . . . . . . . . . . . . . 117
10.3.1 Empirical-Statistical Methods. . . . . . . . . . . . . . . . . . . . . . . . 117
10.3.2 Radiative Transfer Modelling Based Methods . . . . . . . . . . . . 119
10.3.3 Hybrid Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

11 Interpretation of Solar Reflection Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
11.2 Energy Budget Considerations for Sensing in the SOR Region . . . . . . . 123
11.2.1 Effect of Attitude of the Sun . . . . . . . . . . . . . . . . . . . . . . . . 123
11.2.2 Effect of Atmospheric-Meteorological Conditions. . . . . . . . . . 125
11.2.3 Effect of Topographic Slope and Aspect . . . . . . . . . . . . . . . . 125
11.2.4 Effect of Sensor Look Angle . . . . . . . . . . . . . . . . . . . . . . . . 126
11.2.5 Effect of Target Reflectance . . . . . . . . . . . . . . . . . . . . . . . . 126
11.3 Acquisition and Processing of Solar Reflection Image Data. . . . . . . . . . 127
11.4 Interpretation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
11.4.1 Interpretation of Panchromatic Black-and-White Products . . . . 127
11.4.2 Interpretation of Multispectral Products . . . . . . . . . . . . . . . . . 130
11.4.3 Interpretation of Colour Products . . . . . . . . . . . . . . . . . . . . . 133
11.5 Computation of Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
11.5.1 Spectral Radiance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
11.5.2 Top of the Atmosphere (TOA) Reflectance . . . . . . . . . . . . . . 134
11.5.3 Target Irradiance in Solar Reflection Region in
an Undulating Terrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
11.5.4 Influence of Topography on Solar Reflection Image Data . . . . 135
11.5.5 Topographic Correction of Solar Reflection Images . . . . . . . . 136
11.6 Active Optical Sensor-Luminex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
11.7 Scope for Geological Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

12 Interpretation of Thermal-IR Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141


12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
12.2 Earth’s Radiant Energy—Basic Considerations . . . . . . . . . . . . . . . . . . 141
12.2.1 Surface (Kinetic) Temperature . . . . . . . . . . . . . . . . . . . . . . . 141
12.2.2 Emissivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.3 Broad-Band Thermal-IR Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
12.3.1 Radiant Temperature and Kinetic Temperature . . . . . . . . . . . . 146
12.3.2 Acquisition of Broad-Band Thermal-IR Data . . . . . . . . . . . . . 146
12.3.3 Processing of Broad-Band TIR Images . . . . . . . . . . . . . . . . . 148
12.3.4 Interpretation of Thermal-IR Imagery . . . . . . . . . . . . . . . . . . 148

12.3.5 Thermal Inertia Mapping . . . . . . . . . . . . . . . . . . . . . . . 150


12.3.6 Scope for Geological Applications—Broad-Band Thermal
Sensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
12.4 Temperature Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
12.4.1 Computation of Spectral Radiance . . . . . . . . . . . . . . . . . . . . 154
12.4.2 Atmospheric Correction of Spectral Radiance Data . . . . . . . . . 154
12.4.3 Conversion of Spectral Radiance to Temperature . . . . . . . . . . 154
12.4.4 Sub-pixel Temperature Estimation . . . . . . . . . . . . . . . . . . . . 157
12.5 Thermal-IR Multispectral Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
12.5.1 Multispectral Sensors in the TIR . . . . . . . . . . . . . . . . . . . . . 157
12.5.2 Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
12.5.3 Temperature/Emissivity Separation (TES) . . . . . . . . . . . . . . . 158
12.5.4 SO2 Atmospheric Absorption. . . . . . . . . . . . . . . . . . . . . . . . 159
12.5.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
12.6 LIDAR Sensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
12.6.1 Working Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
12.6.2 Scope for Geological Applications . . . . . . . . . . . . . . . . . . . . 159
12.7 Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

13 Digital Image Processing of Multispectral Data . . . . . . . . . . . . . . . . . . . . . 163


13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
13.1.1 What Is Digital Imagery? . . . . . . . . . . . . . . . . . . . . . . . . . . 163
13.1.2 Sources of Multispectral Image Data. . . . . . . . . . . . . . . . . . . 163
13.1.3 Storage and Supply of Digital Image Data. . . . . . . . . . . . . . . 163
13.1.4 Image Processing Systems. . . . . . . . . . . . . . . . . . . . . . . . . . 165
13.1.5 Techniques of Digital Image Processing . . . . . . . . . . . . . . . . 167
13.2 Radiometric Image Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
13.2.1 Sensor Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
13.2.2 De-Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
13.2.3 Correction for Periodic and Spike Noise . . . . . . . . . . . . . . . . 169
13.3 Geometric Corrections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
13.3.1 Correction for Panoramic Distortion . . . . . . . . . . . . . . . . . . . 169
13.3.2 Correction for Skewing Due to Earth’s Rotation. . . . . . . . . . . 169
13.3.3 Correction for Aspect Ratio Distortion . . . . . . . . . . . . . . . . . 169
13.4 Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
13.4.1 Definition and Importance . . . . . . . . . . . . . . . . . . . . . . . . . . 170
13.4.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
13.4.3 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
13.5 Image Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
13.6 Image Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
13.6.1 High-Pass Filtering (Edge Enhancement). . . . . . . . . . . . . . . . 175
13.6.2 Image Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
13.6.3 Fourier Filtering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
13.7 Image Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
13.7.1 Addition and Subtraction. . . . . . . . . . . . . . . . . . . . . . . . . . . 179
13.7.2 Principal Component Transformation . . . . . . . . . . . . . . . . . . 180
13.7.3 Other Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
13.7.4 Decorrelation Stretching . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
13.7.5 Ratioing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
13.8 Colour Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
13.8.1 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

13.8.2 Pseudocolour Display . . . . . . . . . . . . . . . . . . . . . . . . . 186


13.8.3 Colour Display of Multiple Images—Guidelines for Image
Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
13.8.4 Colour Models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
13.9 Image Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
13.9.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
13.9.2 Techniques of Image Fusion . . . . . . . . . . . . . . . . . . . . . . . . 188
13.10 2.5-Dimensional Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
13.10.1 Shaded Relief Model (SRM) . . . . . . . . . . . . . . . . . . . . . . . . 190
13.10.2 Synthetic Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
13.10.3 Perspective View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
13.11 Image Segmentation/Slicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
13.11.1 General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
13.11.2 Object Based Image Analysis . . . . . . . . . . . . . . . . . . . . . . . 192
13.12 Digital Image Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
13.12.1 Supervised Classification. . . . . . . . . . . . . . . . . . . . . . . . . . . 193
13.12.2 Unsupervised Classification . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.12.3 Fuzzy Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.12.4 Linear Mixture Modelling (LMM) . . . . . . . . . . . . . . . . . . . . 198
13.12.5 Artificial Neural Network Classification . . . . . . . . . . . . . . . . 198
13.12.6 Classification Accuracy Assessment . . . . . . . . . . . . . . . . . . . 199
13.12.7 Super Resolution Techniques . . . . . . . . . . . . . . . . . . . . . . . . 199
13.12.8 Scope for Geological Applications . . . . . . . . . . . . . . . . . . . . 200
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

14 Imaging Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203


14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.2 Spectral Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.2.1 Processes Leading to Spectral Features . . . . . . . . . . . . . . . . . 203
14.2.2 Continuum and Absorption Depth—Terminology . . . . . . . . . . 203
14.2.3 High-Resolution Spectral Features of Minerals . . . . . . . . . . . . 205
14.2.4 High-Resolution Spectral Features of Stressed
Vegetation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
14.2.5 Mixtures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
14.2.6 Spectral Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.3 Hyperspectral Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.3.1 Working Principle of Imaging Spectrometers . . . . . . . . . . . . . 209
14.3.2 Sensor Specification Parameters . . . . . . . . . . . . . . . . . . . . . . 210
14.3.3 Selected Airborne and Space-Borne Hyperspectral
Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
14.4 Processing of Hyperspectral Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.4.1 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14.4.2 Radiance-to-Reflectance Transformation . . . . . . . . . . . . . . . . 213
14.4.3 Data Analysis for Feature Mapping . . . . . . . . . . . . . . . . . . . 213
14.5 Applications and Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

15 Microwave Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221


15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
15.2 Passive Microwave Sensors and Radiometry . . . . . . . . . . . . . . . . . . . . 221
15.2.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
15.2.2 Measurement and Interpretation . . . . . . . . . . . . . . . . . . . . . . 221

15.3 Active Microwave Sensors—Imaging Radars . . . . . . . . . . . . . 222
15.3.1 What is a Radar? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
15.3.2 Side-Looking Airborne Radar—Basic Configuration . . . . . . 223
15.3.3 Spatial Positioning and Ground Resolution from SLAR/SAR . . . . . . . 226
15.3.4 SAR System Specifications . . . . . . . . . . . . . . . . . . . . . . . . . 227
15.3.5 Imaging Modes of SAR Sensors . . . . . . . . . . . . . . . . . . . . . 228
15.3.6 Selected Space-Borne SAR Sensors . . . . . . . . . . . . . . . . . . . 228
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

16 Interpretation of SAR Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235


16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
16.2 SAR Image Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
16.2.1 Radiometric Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . 235
16.2.2 Geometric Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 238
16.3 SAR Stereoscopy and Radargrammetry . . . . . . . . . . . . . . . . . . . . . . . 239
16.4 Radar Return. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
16.4.1 Radar Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
16.4.2 Radar System Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
16.4.3 Terrain Factors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
16.5 Processing of SAR Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
16.6 SAR Polarimetry and Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . 247
16.7 Field Data (Ground Truth) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
16.7.1 Corner Reflectors (CRs) . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
16.7.2 Scatterometers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
16.8 Interpretation and Scope for Geological Applications . . . . . . . . . . . . . . 248
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251

17 SAR Interferometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253


17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
17.2 Principle of SAR Interferometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
17.3 Configurations of Data Acquisition for InSAR . . . . . . . . . . . . . . . . . . 255
17.4 Baseline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
17.5 Ground Truth and Corner Reflectors . . . . . . . . . . . . . . . . . . . . . . . . . 256
17.6 Methodology of Data Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
17.7 Differential SAR Interferometry (DInSAR) . . . . . . . . . . . . . . . . . . . . . 259
17.8 Factors Affecting SAR Interferometry . . . . . . . . . . . . . . . . . . . . . . . . 260
17.9 InSAR Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
17.10 Pol-InSAR (Polarimetric InSAR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
17.11 Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264

18 Integrating Remote Sensing Data with Other Geodata (GIS Approach) . . . . 267
18.1 Integrated Multidisciplinary Geo-investigations . . . . . . . . . . . 267
18.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
18.1.2 Scope of the Present Discussion . . . . . . . . . . . . . . . . . . 268
18.2 Geographic Information System (GIS)—Basics . . . . . . . . . . . 268
18.2.1 What is GIS? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
18.2.2 GIS Data-Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
18.2.3 Continuous Versus Categorical Data . . . . . . . . . . . . . . . 270
18.2.4 Basic Data Structures in GIS . . . . . . . . . . . . . . . . . . . . 271
18.2.5 Main Segments of GIS . . . . . . . . . . . . . . . . . . . . . . . . 272
18.3 Data Acquisition (Sources of Geodata in a GIS) . . . . . . . . . . 272
18.3.1 Remote Sensing Data . . . . . . . . . . . . . . . . . . . . . . . . . 272
18.3.2 Geophysical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
18.3.3 Gamma Radiation Data . . . . . . . . . . . . . . . . . . . . . . . . 274
18.3.4 Geochemical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
18.3.5 Geological Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
18.3.6 Topographical Data . . . . . . . . . . . . . . . . . . . . . . . . . . 274
18.3.7 Other Thematic Data . . . . . . . . . . . . . . . . . . . . . . . . . 275
18.4 Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
18.5 Data Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
18.6 Data Manipulation and Analysis . . . . . . . . . . . . . . . . . . . . . 279
18.6.1 Image Processing Operations . . . . . . . . . . . . . . . . . . . . 279
18.6.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
18.6.3 GIS Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
18.7 GIS Based Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
18.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

19 Geological Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
19.2 Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
19.2.1 Factors Affecting Pixel Radiometry and Geometry—An Overview . . . . . . . 293
19.2.2 Positional Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . 294
19.2.3 Thematic Accuracy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
19.3 Geomorphology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
19.3.1 Tectonic Landforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
19.3.2 Volcanic Landforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
19.3.3 Fluvial Landforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
19.3.4 Coastal and Deltaic Landforms . . . . . . . . . . . . . . . . . . . . . . 298
19.3.5 Aeolian Landforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
19.3.6 Glacial Landforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
19.4 Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
19.4.1 Bedding and Simple-Dipping Strata . . . . . . . . . . . . . . . . . . . 302
19.4.2 Folds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
19.4.3 Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
19.4.4 Features of Global Tectonics . . . . . . . . . . . . . . . . . . . . . . . . 309
19.4.5 Lineaments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
19.4.6 Circular Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
19.4.7 Intrusives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
19.4.8 Unconformity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
19.5 Stratigraphy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
19.6 Lithology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
19.6.1 Mapping of Broad-Scale Lithologic Units—General . . . . . . . . 324
19.6.2 Sedimentary Rocks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
19.6.3 Igneous Rocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
19.6.4 Metamorphic Rocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
19.7 Identification of Mineral Assemblages from ASTER Ratio Indices. . . . . 332
19.7.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

19.7.2 Approaches for Computing Spectral Ratios . . . . . . . . . . . . . . 333


19.7.3 ASTER Ratio Indices in the VNIR Region . . . . . . . . . . . . . . 333
19.7.4 ASTER Ratio Indices in the SWIR Region . . . . . . . . . . . . . . 334
19.7.5 ASTER Ratio Indices in the Thermal IR Region . . . . . . . . . . 336
19.8 Mineral Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
19.8.1 Remote Sensing in Mineral Exploration . . . . . . . . . . . . . . . . 339
19.8.2 Main Types of Mineral Deposits and Their Surface
Indications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
19.8.3 Stratigraphical–Lithological Guides. . . . . . . . . . . . . . . . . . . . 341
19.8.4 Geomorphological Guides . . . . . . . . . . . . . . . . . . . . . . . . . . 341
19.8.5 Structural Guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
19.8.6 Guides Formed by Rock Alteration . . . . . . . . . . . . . . . . . . . 344
19.8.7 Geobotanical Guides. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
19.8.8 Application Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
19.9 GIS-Based Mineral Prospectivity Modelling . . . . . . . . . . . . . . . . . . . . 352
19.9.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
19.9.2 Approaches in Mineral Prospectivity Modelling . . . . . . . . . . . 353
19.10 Hydrocarbon Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
19.10.1 Surface Geomorphic Anomalies . . . . . . . . . . . . . . . . . . . . . . 355
19.10.2 Lineament-Structural Control on the Distribution of
Hydrocarbon Pools . . . . . . . . . . . . . . . . . . . . . . . . . . ..... 356
19.10.3 Surface Alterations Related to Hydrocarbon Seepage . . . . . . . 356
19.10.4 Hydrocarbon Index (HI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
19.10.5 Thermal Anomalies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
19.10.6 Oceanic Oil Slicks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
19.11 Groundwater Investigations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
19.11.1 Factors Affecting Groundwater Occurrence . . . . . . . . . . . . . . 362
19.11.2 Indicators for Groundwater on Remote Sensing Images. . . . . . 362
19.11.3 Application Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
19.12 Engineering Geological Investigations . . . . . . . . . . . . . . . . . . . . . . . . 371
19.12.1 River Valley Projects—Dams and Reservoirs . . . . . . . . . . . . . 372
19.12.2 Landslides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
19.12.3 Route Location (Highways and Railroads) and Canal,
Pipeline and Tunnel Alignments. . . . . . . . . . . . . . . . . . . . . . 375
19.13 Neotectonism, Seismic Hazard and Damage Assessment. . . . . . . . . . . . 376
19.13.1 Neotectonism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
19.13.2 Local Ground Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 378
19.13.3 Disaster Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
19.14 Volcanic and Geothermal Energy Applications . . . . . . . . . . . . . . . . . . 381
19.14.1 Volcano Mapping and Monitoring . . . . . . . . . . . . . . . . . . . . 382
19.14.2 Geothermal Energy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
19.15 Coal Fires. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
19.16 Snow, Ice and Glaciers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
19.16.1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
19.16.2 Snow/Ice Facies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
19.16.3 Snow Cover Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
19.16.4 Glaciers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
19.16.5 SAR Data Application in Snow-Ice Studies . . . . . . . . . . . . . . 399

19.17 Environmental Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400


19.17.1 Vegetation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
19.17.2 Land Use and Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
19.17.3 Soil Erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
19.17.4 Oil Spills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
19.17.5 Smoke from Oil Well Fires . . . . . . . . . . . . . . . . . . . . . . . . . 405
19.17.6 Atmospheric Pollution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
19.18 Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Brainstorming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
About the Author

Ravi P. Gupta retired as Professor of Earth Resources
Technology, Department of Earth Sciences, Indian Institute of
Technology, Roorkee, India. At Roorkee, he designed and taught
remote sensing courses for nearly four decades and supervised
numerous space application projects and research studies.
Besides, he has been involved in research and academic programs
at the Ludwig-Maximilians University, Munich, Technological
University, Dresden, and Asian Institute of Technology,
Bangkok. He has authored more than 90 research papers in
refereed international journals and six books. He is a recipient of
several fellowships, awards and honours, including from the
Alexander von Humboldt Foundation.

1 Introduction

1.1 Definition and Scope

Remote sensing, in the simplest words, means obtaining information about an object without being in touch with the object itself. It has two facets: the technology of acquiring data through a device which is located at a distance from the object, and analysis of the data for interpreting the physical attributes of the object, both these aspects being intimately linked with each other.

Taking the above definition literally, various techniques of data collection where sensor and object are not in contact with each other could be classed as remote sensing, e.g. looking through a window or reading a wall-poster, as also many standard geophysical exploration techniques (aeromagnetic, electromagnetic induction, etc.), and a host of other methods. Conventionally, however, the term remote sensing has come to indicate that the sensor and the sensed object are located quite remotely apart, the distance between the two being of the order of several kilometres or hundreds of kilometres. In such a situation, the intervening space is filled with air (aerial platform) or, even partly, vacuum (space platform), and only electromagnetic (EM) radiation is able to serve as an efficient link between the sensor and the object.

Practically, therefore, remote sensing has come to imply data acquisition of electromagnetic radiation (commonly in the 0.4 µm to 30 cm wavelength range) from sensors flying on aerial or space platforms, and its interpretation for deciphering ground object characteristics.

1.2 Development of Remote Sensing

Remote sensing has evolved primarily from the techniques of aerial photography and photo interpretation. It is a relatively young scientific discipline and an area of emerging technology that has undergone phenomenal growth during the last nearly five decades. It has dramatically enhanced man's capability for resource exploration, mapping and monitoring of the Earth's environment on local and global scales (Rencz 1999; Lillesand et al. 2015; Thenkabail 2015). Systematic and concise timelines of key developments in platforms and sensors for Earth observations are given by Green and Jackson (2009).

A major landmark in the history of remote sensing was the decision to land man on the moon. As a sequel to this, the space race between the US and the erstwhile USSR began, which led to rapid development of space systems. The US National Aeronautics and Space Administration (NASA) has led the development of many aerial and spaceborne programmes which have provided remote sensing data world-wide. In addition, the European Space Agency (ESA) and the national space agencies of a number of countries, such as Canada, Japan, India, China, Brazil, Russia and S. Korea, have also developed remote sensing systems. All these missions have provided a stimulus to the technology and yielded valuable data and images of the Earth from space.

The first space photography of the Earth was transmitted by Explorer-6 in 1959. This was followed by the Mercury Program (1960), which provided orbital photography (70-mm format colour) from an unmanned automatic camera. The Gemini mission (1965) provided a number of good-quality, stereo, vertical and oblique photographs, which formally demonstrated the potential of remote sensing techniques in Earth resources exploration (Lowman 1969). Later, the experiments in the Apollo program included Earth coverage by stereo vertical photography and multispectral 70-mm format photography. The above series of photographic experiments finally paved the way for unmanned space orbital sensors.

Meanwhile, sensors for Earth observations had been developed for meteorological purposes (TIROS-I, ITOS and NOAA series) and were also in orbit in the early 1960s. The payload of the weather satellite (NOAA) was modified for inclusion in the first Earth Resources Technology Satellite (ERTS-1).

With the launching of ERTS-1 in 1972 (later renamed Landsat-1), began a new era in the history of remote sensing

of the Earth. ERTS-1 carried on-board a four-channel mul- QuickBird, Eros series, Cartosat series, GeoEye, WorldView
tispectral scanning system (MSS) and a tape recorder, which series and Pleiades series.
captured extremely valuable data of world-wide distribution. Another major milestone has been the development of
Because the period from 1970 to the early 1980s includes the hyperspectral imaging sensors from aerial and space plat-
first availability of MSS image data and image bands with forms. These sensors capture images in several hundred
good geometric and radiometric quality, world-wide cover- wavelength bands and the image data have utility in lithologic
age, and low user costs, it marks an important evolutionary identification and possibly quantification of mineral content.
stage in the history of remote sensing. At the same time, The use of Side-Looking Airborne Radar (SLAR) imaging
valuable data were accumulating on the spectral behaviour techniques in the 1960s and early 1970s demonstrated their
of the atmosphere, as well as on spectral signatures of great potential for natural resources mapping and micro-relief
minerals, rock materials, soil and vegetation. Based on this discrimination from aerial platforms. Seasat (1978) was the first
knowledge, a new sensor called the Thematic Mapper (TM), free-flying space sensor which provided radar imagery. Sub-
was developed and launched in 1982 aboard Landsat-4. sequently, a series of shuttle imaging radar experiments (SIR-A,
A further modified version, the Enhanced Thematic Mapper -B, -C) were flown to understand the radar response in varying
(ETM+) instrument was launched in subsequent years on modes, for example, multi-frequency, multi-polarization and
Landsat-7. Because of their good spatial resolution and multi-look configurations. The ESA’s ERS-1/2, Envisat and
appropriately selected spectral channels the TM/ETM+ type Sentinel series, Japanese JERS and ALOS, Canada’s Radarsat,
satellite sensor data have been extensively used in remote India’s RISAT, German Terra-SAR and Italy’s Cosmo-
sensing world-wide since 1982. SKYMED series programmes have provided imaging radar
Concurrently, developments in space transportation sys- data of the Earth from space.1 Besides, during the last some 10–
tems and reusable space shuttles came on the scene. Space 15 years, considerable progress has been made in the field of
shuttles provide orbital platform that allow an on-board SAR polarimetry and tomography that is finding applications in
modular approach for experimental trial of different sensors. forestry and vegetation studies.
The most important of these have been the Metric Camera, Further, interferometric synthetic aperture radar (SAR) data
Large Format Camera, electronic scanner MOMS, and the processing has advanced the use of radar technology in remote
Shuttle Imaging Radar series (SIR-A, -B, -C). sensing, allowing monitoring of ground terrain changes from
Developments in electronic technology led to the design of space with an accuracy in the range of centimetres.
solid-state CCD linear array scanners, the first of these being Side-by-side, during the past few years, developments in
the German space mission MOMS-1, flown on the NASA’s micro-electronics have revolutionized the computerized data
space shuttle. Subsequently, many multispectral sensors uti- analysis scene. Satellite images have been presented for free-
lizing this technology have been placed in orbit on free-flying viewing over the web through services such as Google Earth,
platforms. Examples include the French SPOT series, the and image data from several satellites are available free of
Indian IRS series, the Japanese MOS PRISM and AVINIR, charge world-wide. Image-processing facilities, which were
and the China—Brazil CBERS series. The launch of earlier restricted to selected major research establishments,
Terra-ASTER in 1999 that carried a combination of pushb- have now become widely available. These factors have also
room and optomechanical sensors can be said to be the most been responsible for greater dissemination of remote sensing-
important milestone for geological remote sensing from space image processing knowledge and capability world-wide.
as it carried sensors spectrally well suited for geologic studies. The modern trend in satellite remote sensing is to create
The decade of 1980s saw the emergence of CCD/CMOS constellations of satellites, both of the very high resolution
area arrays that led to digital imaging cameras. It became a optical type, and synthetic aperture radar type, for daily
turning point in the evolution of imaging technology, as repeat/revisit observations.
photography soon became obsolete and outdated. Digital
imaging cameras have now fully replaced photographic films
for aerial remote sensing, as also in all walks of life. The 1.3 Fundamental Principle
unmanned aerial vehicles (UAV) or mini-UAV carrying dig-
ital cameras are fast emerging as new platforms for acquiring The basic principle involved in remote sensing methods is that
images with very high spatial resolution (a few centimetres) in in different wavelength regions of the electromagnetic spec-
a cost competitive manner for smaller project areas. trum, each type of object reflects or emits a certain intensity of
During the last nearly two decades, there has been a fast radiation, which is dependent upon the physical or
technological development leading to large linear CCD
arrays with several thousand detector cells. This technology
forms the heart of very high resolution sensors with
sub-meter spatial resolution from space, such as Ikonos,
1 A list of important past and present remote sensing platforms and sensors is available on several websites such as: www.ceos.org.

Fig. 1.1 Typical spectral reflectance curves for selected common natural objects—water, vegetation, soil and limonite

compositional attributes of the object (Fig. 1.1). Figure 1.2 types of objects (e.g. dry soil, wet soil, vegetation, etc.), and
shows a set of multispectral images in blue, red and map their distribution on the ground.
near-infrared bands of the same area and illustrates that var- The curves showing the intensity of radiation emitted or
ious features may appear differently in different spectral reflected by objects at different wavelengths, called spectral
bands. Thus, using information from one or more wavelength response curves, constitute the basic information required
intervals, it may be possible to differentiate between different for successful planning of a remote sensing mission.
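To illustrate the idea numerically, the short Python sketch below assigns a multiband measurement to the closest reference spectral response curve. It is only a sketch: the band set, the reflectance values and the class names are invented for demonstration and are not taken from any sensor or from this book.

```python
# Illustrative sketch (reflectance values invented): matching a measurement to the
# nearest reference spectral signature in (blue, red, near-IR) reflectance space.
reference_signatures = {
    "water":      (0.06, 0.03, 0.01),
    "vegetation": (0.05, 0.04, 0.45),
    "dry soil":   (0.15, 0.25, 0.30),
}

def classify(pixel):
    """Return the reference class whose signature is closest (Euclidean distance)."""
    def dist(sig):
        return sum((p - s) ** 2 for p, s in zip(pixel, sig)) ** 0.5
    return min(reference_signatures, key=lambda name: dist(reference_signatures[name]))

print(classify((0.07, 0.05, 0.40)))   # prints: vegetation
```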

Fig. 1.2 Multispectral images in a blue, b red and c near-infrared bands of the same area; note differences in spectral characters of various objects
in the three spectral bands (IRS-LISS-III sensor images of a part of the Himalayan foothills and Gangetic plains)

1.4 Advantages and Challenges There are additional specific advantages associated with
individual sensors, namely: the photographic systems are
Major advantages of remote sensing techniques over meth- marked by analogy to the human eye and have high geo-
ods of ground investigations are due to the following: metric fidelity; scanners provide remote sensing data such
that the digital information is directly telemetered from space
1. Synoptic overview: Remote sensing permits the study of to ground; and imaging radars possess the unique advantages
various spatial features in relation to each other, and of all-weather and all-time capability. Such specific advan-
delineation of regional features/trends/phenomena tages are highlighted for various sensor types, at appropriate
(Fig. 1.3). places.
2. Feasibility aspect: As some areas may not be accessible to Remote sensing technologies pose some ongoing
ground survey, the only feasible way to obtain information challenges:
about such areas may be from remote sensing platforms.
3. Time saving: The techniques save time and manpower, as 1. Changing technologies: Keeping up-to-date with sensor
information about a large area is gathered quickly. technology, new hardware, software tools, data handling
4. Multispectral approach: Data are available in many techniques requires constant effort.
spectral bands, providing information well beyond the 2. Data management: Data volumes from past and current
visible part of the EM spectrum. satellites are huge and will only grow in future making
5. Repeat data availability: Satellite remote sensing pro- data management a challenge. Changing data formats
vides repeat coverage of the same target area offering the and complexities, and development in techniques for data
possibility of easy monitoring and change detection. processing, integration, analysis and presentation are
6. Global coverage: Satellite data facilitate quantitative areas of continuing research.
estimation of physical attributes for global mapping and 3. Increasing resolution: The desire to acquire data at
modeling. higher spatial, spectral, temporal and radiometric reso-
7. Permanent reliable archive: The images provide a per- lutions poses new issues in data analysis and interpreta-
manent archive of baseline data and information against tion. Newer images may provide far more detail than
which more recent observations can be compared and optimal for delineation of features much larger than the
contrasted. spatial resolution of the source image.
8. Multidisciplinary applications: The same remote sensing 4. Societal benefits: Taking remote sensing techniques
data can be used by researchers/workers in different from a research environment to operational settings
disciplines, such as geology, forestry, land use, agricul- where derived products can be useful for effective
ture, hydrology etc., and therefore the overall benefit-to- decision making in near-real time remains a constant
cost ratio is higher. challenge.

Fig. 1.3 One of the chief advantages of remote sensing lies in illustration shows the Richat structure in Mauritania. a air-photo
providing a synoptic overview—an altogether different scale of mosaic, and b satellite image (Beregovoi et al. in Kats et al. 1976)
observation, which may give new insights into the problem; this

1.5 A Typical Remote Sensing Programme record the radiation intensities in various spectral channels.
The platforms for remote sensing data acquisition could be
A generalized schematic of energy/data flow in a typical of various types: aerial (balloons, helicopters and aircraft)
remote sensing system is shown in Fig. 1.4. Most remote and space-borne (rockets, manned and unmanned satellites)
sensing programmes utilize the sun’s energy, which is the (Fig. 1.5). Unmanned aerial systems (UAS), particularly
predominant source of energy at the Earth’s surface. In low-altitude platforms, are becoming increasingly popular
addition, some remote sensors also utilize the blackbody for data acquisition over hazardous areas. Terrestrial plat-
radiation emitted by the Earth. Also, active sensors such as forms are used to generate ground truth data. The remotely
radars and lasers illuminate the Earth from artificially gen- sensed data are digitally processed for rectification and
erated energy. The electromagnetic radiation travelling enhancement, and integrated with ‘ground truth’ and other
through the atmosphere is selectively scattered and absor- reference data. The processed products are interpreted for
bed, depending upon the composition of the atmosphere and identification/discrimination of ground objects. Thematic
the wavelength involved. maps may be integrated with other multidisciplinary spatial
Sensors such as photographic cameras (earlier days), data and ground truth data and used for decision making by
scanners or radiometers mounted on suitable platforms scientists and managers.

Fig. 1.4 a−f. Scheme of a typical remote sensing programme. a Sources of radiation and interaction. b Platforms. c Sensors. d Data products.
e Interpretation and analysis. f Output (Modified after Lillesand et al. 2015)

Fig. 1.5 Remote sensing platforms for Earth resources investigations (modified after Barzegar 1983)

1.6 Field Data (Ground Truth) The main purposes of field data collection are the
following:
Ground truth implies reference field data collected to control
and help remote sensing image interpretation. In the early a. To calibrate a remote sensor
days of remote sensing research, ground investigations were b. To help in remote sensing data correction, analysis and
used to verify the results of remote sensing interpretation, interpretation
e.g. the soil type, condition of agricultural crops, distribution c. To validate the thematic maps and quantitative parame-
of diseased trees, water ponds etc. Hence the term ground ters derived from remote sensing.
truth came into vogue. The same term (ground truth) is still
widely applied in remote sensing literature, although some- The parameters/physical properties of interest are different
what erroneously, as the reference data may now be obtained in various parts of the electromagnetic (EM) spectrum, from
from diverse sources, not necessarily involving ground visible, near-IR, thermal-IR to the microwave region (for ter-
investigations. minology, see Sect. 2.2.3). Table 1.1 gives a brief overview.

Table 1.1 Main physical properties for study during field data collection
A. General
Topography, slope and aspect
Atmospheric-meteorological conditions: cloud, wind, rain etc.
Solar illumination: Sun azimuth, elevation
B. Solar reflection region (visible-near infrared)
Spectral reflectance
Sun-object-sensor angle
Bidirectional reflectance distribution function
Surface coatings, leachings, encrustations
Soil: texture, moisture, humus, soil mineralogy
Rock type, structure
Vegetation characteristics, land use/land cover types, distribution
C. Thermal-infrared region
Ground temperature
Emissivity
Soil: texture, moisture, humus, soil mineralogy
Vegetation characteristics, land use/land cover types, and distribution
Rock type, mineralogy, structure
D. Microwave region
Microwave roughness (surface and sub-surface)
Volume scattering and complex dielectric constant
Rainfall pattern/surface moisture

There are four main considerations while planning the (Townshend 1981). The most commonly used method of
ground truth part of a remote sensing project: sampling is purposive sampling. In this method, observa-
tions are made in linear traverses in such frequency and
1. Timing of ground truth data collection intensity as seems appropriate to the field worker. It utilizes
2. Sampling the skills and local knowledge of the field worker. The
3. Types of field data method is time and cost effective. It is well suited to making
4. GPS survey. point observations and interpretations, and interpreting
anomalous features observed on remote sensing images.
However, the drawback is the difficulty in statistically
extrapolating results and deducing quantitative results for the
1.6.1 Timing of Field Data Collection whole study area.
Other methods include probability sampling, random
Ground data can be collected before, during or after the sampling, systematic sampling etc., which are more time and
acquisition of remote sensing data. The field data may cost consuming (for details, see Townshend 1981).
comprise two types of parameters: (a) intrinsic and (b) time
variant. An intrinsic parameter is a time-stable parameter
that could be measured any time, e.g. albedo, spectral 1.6.3 Types of Field Data
emissivity, rock type, structure etc. A time-variant (or
time-critical) parameter varies with time and must be mea- The ground data may be derived from a variety of sources,
sured during the remote sensing overpass, e.g. temperature, such as: (a) dedicated field measurements/surveys; (b) aerial
rain, condition of crop etc. Generally, data on meteorological photographic interpretation; and (c) library records/reports.
conditions are collected for about one week before and These may be considered to be of two main types: (1) the-
during the remote sensing overpass; this is particularly matic maps and (2) spectral data.
important for thermal-IR surveys.
1. Thematic Maps: show distribution of features which may
be of interest for a particular remote sensing project, e.g.
1.6.2 Sampling landforms, drainage, distribution of agricultural crops,
water bodies, lithological boundaries, structure etc. In
Different methods of sampling ground truth may be adopted addition, field data may also involve maps exhibiting
depending upon the time and resources available special features, such as landslides, or suspended silt

distribution in an estuary, or isotherms on a water body spectrum by recording data in more than 1000 narrow
etc. Such thematic maps may be derived from aerial spectral bands. Further, matching spectra to a library of
photographic interpretation, existing records/reports, or previously recorded/stored spectra is also possible in some
generated through dedicated field surveys. instruments.
2. Spectral data: are generally not available in the existing In the solar reflection region, we have a typical bidirec-
reports/records and have almost invariably to be specif- tional arrangement in which illumination is from one
ically collected. The instruments could be field-portable, direction and observation from another (Fig. 1.7). Taking a
or may be mounted on a hydraulic platform or used on an general case, sunlight incident on an object is scattered in
aerial platform. Instrumentation for generating field various directions, depending upon the reflection character-
spectral data is different in various parts of the EM istics of the material. The intensity of light in a particular
spectrum. However, they all have in common two com- viewing direction depends upon the angle of incidence,
ponents: an optical system to collect the radiation, and a angle of view and the reflection characteristics. Therefore,
detector system to convert radiation intensity into elec- the sun—target—sensor goniometric geometry has a pro-
trical signal. Usually, a PC notebook is integrated to found influence on the radiance reaching the sensor. For the
record the data, which provides flexibility in data storage same ground surface, the radiance reaching the sensor may
and display. be different depending upon the angular relations. This
property is given in terms of the bidirectional reflectance
In the solar reflection region (visible, near-IR, SWIR), distribution function (BRDF) of the surface, which mathe-
two types of instrument are used: (a) multichannel matically relates reflectance for all combinations of illumi-
radiometers, and (b) spectroradiometers. Multichannel nation and viewing angles at a particular wavelength (e.g.
radiometers generate spectral data in selected bands of Silva 1978). A goniometer is used for field measurements of
wavelength ranges, which commonly correspond to satellite BRDF (Fig. 1.8). This consists of a semi-circular arch on
sensor bands (such as the Landsat TM/ETM+, IRS-LISS, which a spectroradiometer is mounted. The spectrora-
SPOT-HRVs etc.). Some instruments permit calculation of diometer is moved over the arch from one side to the other to
band ratios and other computed parameters in the field, in view the same ground area from different angles. In this way,
real time. Data from multichannel field radiometers help in reflectance data can be gathered to describe the BRDF for
interpreting the satellite sensor data. the surface.
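Since multichannel field radiometers are often used to compute band ratios in the field in real time, a minimal Python sketch is given below. The channel names and reflectance readings are assumed purely for illustration; the ratios shown (a near-IR/red simple ratio and its normalised-difference form) are common examples, not prescriptions from this book.

```python
# Illustrative sketch (readings invented): band ratios from multichannel
# field-radiometer measurements of ground reflectance.
readings = {"green": 0.09, "red": 0.07, "near_ir": 0.38}

simple_ratio = readings["near_ir"] / readings["red"]
norm_diff = (readings["near_ir"] - readings["red"]) / (readings["near_ir"] + readings["red"])
print(f"NIR/red ratio = {simple_ratio:.2f}, normalised difference = {norm_diff:.2f}")
```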
Spectroradiometers (e.g. Fig. 1.6) are used to generate In the thermal-IR region, field surveys are carried out to
spectral response curves, commonly in the wavelength range measure two ground parameters: (a) ground temperature,
of 0.4–2.5 µm. The system usually acquires a continuous and (b) spectral emissivity. Suitable thermometers (0.1 °C
least count) are commonly used for measuring ground
temperatures. Temperature measurements are made for
several days at a point, or along traverse lines at the same
time, as per the requirement. Measurement of spectral

Fig. 1.6 A typical field spectroradiometer (Courtesy of Geophysical and Environmental Research)
Fig. 1.7 Geometry of bidirectional arrangement in solar reflection region

Fig. 1.9 Satellites orbiting around the Earth forming the Global
Positioning System (courtesy of P.H. Dana)
Fig. 1.8 Goniometer for measuring BRDF; a spectroradiometer is mounted on the semi-circular arch of the goniometer; measurements of reflectance are made for various combinations of zenith and azimuth angles to generate BRDF for the surface
The satellites transmit time-coded radio signals that are recorded by ground-based GPS receivers (Fig. 1.10). At
any given time, at least four and generally five to eight SVs
are visible from any point on the Earth (except in deep
mountain gorges) (Fig. 1.11). The method of satellite
ranging is used to compute distances and locate the position. Four GPS satellite signals are used to compute the position of any point in three dimensions on the Earth's surface. Basically, this works on the principle of measuring the time for a signal to travel between the SV and the GPS receiver. With the speed of light known (3 × 10⁸ m/s), the distance can be calculated.

emissivity in the field is done through a reflection arrangement. Spectral reflectance (Rλ) over narrow spectral ranges is measured in the field and spectral emissivity (ελ) is computed (ελ = 1 − Rλ) on the basis of Kirchhoff's Law. In addition, it is also important to monitor the heat energy budget, for which a host of parameters are measured (see Sect. 12.3.2.3).
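The reflection-based estimate of emissivity can be sketched in a few lines of Python. The wavelengths and reflectance values below are assumed for illustration only; the conversion simply applies the relation ελ = 1 − Rλ stated above.

```python
# Illustrative sketch (reflectance values invented): spectral emissivity estimated
# from field-measured spectral reflectance via the Kirchhoff relation e = 1 - R.
measured_reflectance = {8.5: 0.04, 9.5: 0.12, 10.5: 0.06, 11.5: 0.03}  # wavelength (um) -> R

emissivity = {wl: 1.0 - r for wl, r in measured_reflectance.items()}
for wl in sorted(emissivity):
    print(f"{wl:4.1f} um  emissivity = {emissivity[wl]:.2f}")
```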
In the microwave region (SAR sensors) the main
parameter of interest is the back-scattering coefficient.
Scatterometers which can be operated at variable wave-
lengths and incidence angles are used to gather the requisite
field data (also see Sect. 16.7.2).

1.6.4 GPS Survey

Global Positioning System (GPS) devices are used to exactly


locate the position of field observations. The GPS is a
satellite-based navigation system, originally developed by
the US Department of Defense. It includes a group of
nominally 24 satellites (space vehicles, SV), that orbit the
Earth at an altitude of about 20,200 km, with an orbital
period of 12 h. There are six orbital planes (with four
satellites in each), equally spaced 60° apart along the equator
and inclined at 55° from the equatorial plane. The orbits are
nearly circular, highly stable and precisely known. In all,
there are often more than 24 operational satellites at any
point in time (as new ones are launched to replace the older
satellites) (Fig. 1.9). Fig. 1.10 GPS receiver (courtesy of Garmin Corp.)
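The satellite-ranging principle lends itself to a very small numerical sketch. The satellite identifiers and signal travel times below are invented for illustration; the only physics used is distance = speed of light × travel time.

```python
# Illustrative sketch (travel times invented): converting signal travel time to
# satellite-to-receiver range, the basis of GPS satellite ranging.
C = 3.0e8   # speed of light, m/s

signal_travel_time_s = {"SV05": 0.0742, "SV12": 0.0689, "SV19": 0.0701, "SV23": 0.0755}
for sv, dt in signal_travel_time_s.items():
    distance_km = C * dt / 1000.0
    print(f"{sv}: range = {distance_km:8.1f} km")
```

With four or more such ranges available simultaneously, the three position coordinates and the receiver clock bias can be solved together, which is why a minimum of four satellite signals is required.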

There may be errors due to the clock (errors in clock corrects for the corresponding errors (for fuller details on
synchronization, called clock bias), uncertainties in the GPS, refer to Kaplan and Hegarty 2006; Leick et al. 2015).
satellite orbit, errors due to atmospheric conditions Differential GPS method aims at eliminating the above
(influencing the travel of EM radiation through the atmo- errors and providing refined estimates of differential or rel-
sphere), GPS receiver errors etc. The exact ephemeris (or- ative distances on the ground. It involves one stationary or
bital) data and SV clock corrections are monitored for each base GPS receiver and one or more roving (moving) recei-
SV and also recorded by the ground GPS receiver. This vers. Simultaneous signals from SVs are recorded at the base

Table 1.2 Organization scheme: Remote Sensing Geology (3rd ed.)

Chapter 1
Introduction

Chapter 2
Physical principles

OPTICAL REGION MICROWAVE REGION

Chapter 3 Chapter 15
Spectra of minerals and rocks Microwave sensors

Chapter 4
Photography Chapter 16
Interpretation of SAR imagery

Chapter 5
Multispectral imaging techniques
Chapter 17
SAR interferometry
Chapter 6
Important spaceborne missions and
multispectral sensors

Chapter 7
Geometric aspects of images and photographs

Chapter 8
Digital elevation model

Chapter 9
Image quality and principles of interpretation

Chapter 10
Atmospheric corrections

Chapter 11
Interpretation of solar reflection data

Chapter 12 INTEGRATION AND APPLICATIONS


Interpretation of thermal IR data

Chapter 13 Chapter 18
Digital image processing of multispectral data Integrating remote sensing data with other
geodata (GIS approach)

Chapter 14 Chapter 19
Imaging spectroscopy Geological applications

specifically on geological aspects including sensors, inves-


tigations and applications.
The organization of this book is schematically shown in
Table 1.2. In this chapter, we have introduced the basic
principles involved in remote sensing. Chapter 2 discusses
the physical principles, including the nature of EM radia-
tion and the interaction of radiation with matter. Chapters 3
–14 present various aspects of remote sensing in the optical
region of the EM spectrum, whereas Chaps. 15–17 discuss
radar remote sensing. Chapter 18 deals with the GIS
approach of image-based data integration. Finally,
Chap. 19 gives examples of thematic geological
applications.

Fig. 1.11 Signals from four GPS satellites being received at a field site

References
Barzegar F (1983) Earth resources remote sensing platforms. Pho-
togramm Eng Remote Sens 49:1669
and rover GPS receivers. As positional errors are similar in Green K, Jackson MW (2009) Timeline of key developments in
base and rover GPS receivers, it is possible to obtain highly platforms and sensors for Earth observations. In: Jackson MW
refined differential estimates of distances. (ed) Earth observing platforms and sensors, manual of remote
sensing, vol 1.1, 3rd edn. American Society for Photogrammetry
Accuracies using GPS depend upon the GPS receiver and and Remote Sensing (ASPRS), Bethesda, MD, pp 1–48
data processing, both of which are governed by project costs. Kaplan ED, Hegarty CJ (eds) (2006) Understanding GPS: principles
Some estimates are as follows: and applications, 2nd edn. Artech House Publishers, Boston,
p 703
Kats Y, Ryabukhin AG, Trofimov DM (1976) Space methods in
• Low cost, single receiver: 10–30 m geology. Moscow State University, Moscow, p 248 (in Russian)
• Medium cost, differential receiver: 50 cm–5 m Lowman PD Jr (1969) Geologic orbital photography: experience from
• High cost, differential GPS: 1 mm to 1 cm. the Gemini Program. Photogrammetrica 24:77–106
Leick A, Rapoport L, Tatarnikov D (2015) GPS satellite surveying, 4th
edn. Wiley, p 840
Lillesand TM, Kiefer RW, Chipman JW (2015) Remote sensing and
1.7 Scope and Organization of This Book image interpretation, 7th edn, Wiley
Rencz AN (ed) (1999) Remote sensing for the earth sciences. Manual
Remote sensing techniques have proved to be of immense of remote sensing, vol 3, 3rd edn. American Society for
Photogrammetry and Remote Sensing, Wiley
value in mapping and monitoring various Earth’s surface Silva LF (1978) Radiation and instrumentation in remote sensing. In:
features and resources such as minerals, water, snow, agri- Swain PH, Davis SM (eds) Remote sensing: the quantitative
culture, vegetation etc., and have attained an operational approach. McGraw Hill, New York, pp 21–135
status in many of these disciplines. Details of broad- Thenkabail PS (ed) (2015) Remote sensing handbook (Three volume
set), CRC Press
spectrum applications for these can be found elsewhere Townshend JRG (ed) (1981) Terrain analysis and remote sensing.
(e.g. Thenkabail 2015). Here, in this work, we concentrate George Allen & Unwin, London, pp 38–54
2 Physical Principles

2.1 The Nature of EM Radiation

As discussed in Chap. 1, in remote sensing, the electromagnetic (EM) radiation serves as the communication link between the sensor and the object. Fraser and Curran (1976), Silva (1978) and Suits (1983), provide valuable reviews on the nature of EM radiation and physical principles. The properties of EM radiation can be classified into two main groups: (1) those showing a wave nature and (2) those showing particle characteristics.

Maxwell gave a set of four differential equations, which forms the basis of the electromagnetic wave theory. It considers EM energy as propagating in harmonic sinusoidal wave motion (Fig. 2.1), consisting of inseparable oscillating electric and magnetic fields that are always perpendicular to each other and to the direction of propagation. The wave characteristics of EM radiation are exhibited in space and during interaction with matter on a macroscopic scale. From basic physics, we have

c = νλ    (2.1)

where c is the speed of light, ν is the frequency and λ is the wavelength. All EM radiation travels with the same speed in a particular medium. The speed varies from one medium to another, the variation being caused due to the change in wavelength of the radiation from medium to medium. The speed of EM radiation in a vacuum is 299,793 km s⁻¹ (approx. 3 × 10⁸ m s⁻¹). The frequency (ν), given in hertz (cycles per second), is an inherent property of the radiation that does not change with the medium. The wavelength (λ) is given in µm (10⁻⁶ m) or nm (10⁻⁹ m).

The particle or quantum nature of the EM radiation, first logically explained by Max Planck, postulates that the EM radiation is composed of numerous tiny indivisible discrete packets of energy called photons or quanta. The energy of a photon can be written as:

E = hν = hc/λ    (2.2)

where E is the energy of a photon (Joules), h is a constant, called Planck's constant (6.62 × 10⁻³⁴ J s) and ν is the frequency. This means that the photons of shorter wavelength (or higher frequency) radiation carry greater energy than those of larger wavelength (or lower frequency). EM radiation exhibits quantum characteristics when it interacts with matter on an atomic–molecular scale and these characteristics explain strikingly well the phenomena of blackbody radiation, selective absorption and photoelectric effect.
Fig. 2.1 Electromagnetic wave—the electric and magnetic components are perpendicular to each other and to the direction of wave propagation;
λ = wavelength, c = velocity of light and ν = frequency
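Equations 2.1 and 2.2 are easily tried out numerically. The following minimal Python sketch evaluates them for a visible, a thermal-IR and a microwave wavelength; the chosen wavelengths and the rounded constants are illustrative only.

```python
# Illustrative sketch: frequency and photon energy from c = nu * lambda (Eq. 2.1)
# and E = h * nu (Eq. 2.2); constants rounded for illustration.
h = 6.62e-34      # Planck's constant, J s
c = 3.0e8         # speed of light in vacuum, m/s (approx.)

for wavelength_um in (0.5, 10.0, 30000.0):    # visible, thermal-IR, microwave (3 cm)
    lam = wavelength_um * 1e-6                # wavelength in metres
    nu = c / lam                              # frequency, Hz
    E = h * nu                                # photon energy, J
    print(f"lambda = {wavelength_um:9.1f} um  nu = {nu:.3e} Hz  E = {E:.3e} J")
```

The output confirms the statement above: the shorter the wavelength, the higher the frequency and the larger the energy carried by each photon.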


2.2 Radiation Principles and Sources

2.2.1 Radiation Terminology

Several terms are used while discussing EM radiation.


Radiant energy is given in joules. Radiant flux or power is
the radiant energy per second and is given in watts.
Irradiance implies the amount of radiant energy that is
incident on a horizontal surface of unit area per unit time. It
is called spectral irradiance when considered at a specific
wavelength. Radiance describes the radiation field as
dependent on the angle of view. If we consider the radiation
passing through only a small solid angle of view, then the
irradiance passing through the small solid angle and inci-
dent on the surface is called radiance for the corresponding
solid angle.

2.2.2 Blackbody Radiation Principles

Blackbody radiation was studied in depth in the 19th century and is now a well-known physical principle. All matter at temperatures above absolute zero (0 K or −273.1 °C) emits EM radiation continuously. The intensity and spectral composition of the emitted radiation depend upon the composition and temperature of the body. A blackbody is an ideal body and is defined as one that absorbs all radiation incident on it, without any reflection. It has a continuous spectral emission curve, in contrast to natural bodies that emit only at discrete spectral bands, depending upon the composition of the body.

Temperature has a great influence on the intensity of blackbody emitted radiation (Fig. 2.2). Experimentally, it was found initially that the wavelength at which most of the radiation is emitted depends on the temperature of the blackbody. The relationship, called Wien's Displacement Law, is expressed as

λmax = A/T    (2.3)

where λmax is the wavelength (cm) at which peak of the radiation occurs, A is a constant (=0.29 cm K) and T is the temperature (K) of the object. This relationship is found to be valid for shorter wavelengths, and gives the shift in λmax with temperature of the radiating object. Using this law, we can estimate the temperature of objects by measuring the wavelength at which peak radiation occurs. For example, for the Sun, λmax occurs at 0.48 µm, which gives the temperature of the Sun as 6000 K (approx.); similarly for the Earth, the ambient temperature is 300 K and λmax occurs at 9.7 µm (Fig. 2.2).

Fig. 2.2 Spectral distribution of energy radiated from blackbodies of various temperatures such as that of the Sun, incandescent lamp, fire and Earth. The spectral radiant power wλ is the energy emitted (m⁻² λ⁻¹ s⁻¹). Total energy radiated, W, is given by the area under the respective curves

Another important relationship is the Stefan−Boltzmann Law that gives the total radiation emitted by a blackbody over the entire EM range:

W = ∫₀^∞ wλ dλ = σT⁴  watts m⁻²    (2.4)

where wλ is the spectral radiance, i.e. the energy radiated per unit wavelength per second per unit area of the blackbody, T is the temperature (K) of the blackbody and σ is the Stefan−Boltzmann constant. It implies that the total energy emitted is a function of the fourth power of temperature of the blackbody. This relation applies to all wavelengths of the spectrum shorter than microwaves.

Another empirical law is the Rayleigh−Jeans Law, valid for longer wavelengths (such as microwaves), and is written as:

w ≅ (2πck/λ⁴) T    (2.5)

where k is called Boltzmann's constant. This implies that spectral radiance is directly proportional to temperature.

Max Planck, using his quantum theory, developed a radiation law to inter-relate spectral radiance (wλ in watts) and wavelength (λ in m) of the emitted radiation to the temperature (T in K) of the blackbody:

wλ = (2πhc²/λ⁵) · [1/(e^(hc/λkT) − 1)]    (2.6)

where h is Planck's constant (=6.62 × 10⁻³⁴ J s), c is the speed of light in m s⁻¹ and k is Boltzmann's constant (1.38 × 10⁻²³ J deg⁻¹).

Planck's Law was able to explain all the empirical relations observed earlier. Integrating Planck's radiation equation over the entire EM spectrum, we can derive the Stefan−Boltzmann Law. Wien's Displacement Law is found to be a corollary of Planck's radiation equation when λ is small. The Rayleigh−Jeans Law is also found to be an approximation of Planck's radiation equation when λ is large.

As mentioned earlier, a blackbody is one which absorbs all radiation incident on it, without any reflection. It is observed that the fraction of the radiation absorbed exactly equals the fraction that is emitted by the body. Good absorbers are good emitters of radiation. This was stated by Kirchhoff, in what is now called Kirchhoff's Law

αλ = ελ    (2.7)

where αλ is the spectral absorptivity and ελ is the spectral emissivity. Both αλ and ελ are dimensionless and less than 1 for natural bodies. A blackbody has

αλ = ελ = 1    (2.8)

A blackbody radiates a continuous spectrum. It is an idealization, and since αλ = ελ = 1, radiation is emitted at all possible wavelengths (Fig. 2.2). Real materials do not behave as a blackbody. A natural body radiates at only selected wavelengths as permitted by the atomic (shell) configuration. Therefore, the spectrum of a natural body is discontinuous, as typically happens in the case of gases. However, if the solid consists of a variety of densely packed atoms (e.g. as in the case of the Sun and the Earth as whole bodies), the various wavelengths overlap, and the resulting spectrum has in toto a near-continuous appearance.

The emitting ability of a real material compared to that of the blackbody is referred to as the material's emissivity (ε). It varies with wavelength and geometric configuration of the surface and has a value ranging between 0 and 1:

0 ≤ ελ ≤ 1    (2.9)

A graybody has an emissivity less than 1, but constant at all wavelengths. Natural materials are also not graybodies. To account for non-blackbodiness of the natural materials, the relevant parts of the various equations described above are multiplied by the factor of spectral emissivity, i.e.

(wλ)object = (ελ)object (wλ)blackbody    (2.10)

(W)object = ∫₀^∞ (ελ)object (wλ)blackbody dλ    (2.11)

(W)object = (ελ)object σ T⁴    (2.12)

(wλ)object = (ελ)object (2πhc²/λ⁵) · [1/(e^(hc/λkT) − 1)]    (2.13)

2.2.3 Electromagnetic Spectrum

The electromagnetic spectrum is the ordering of EM radiation according to wavelength, or in other words, frequency or energy. The EM spectrum is most commonly presented between cosmic rays and radiowaves, the intervening parts being gamma rays, X-rays, ultra-violet, visible, infrared and microwave (Fig. 2.3). The EM spectrum from 0.02 µm to

Fig. 2.3 a Electromagnetic spectrum between 10−8 µm and 102 m. b Terminology used in the 0.4 µm–1 mm part of the spectrum in this work,
involving VIS, NIR, SWIR, MIR and FIR

1 m wavelength can be divided into two main parts, the 2.2.4 Energy Available for Sensing
optical range and the microwave range. The optical range
refers to that part of the EM spectrum in which optical Most commonly, in remote sensing, we measure the inten-
phenomena of reflection and refraction can be used to focus sity of naturally available radiation—such sensing is called
the radiation. It extends from X-rays (0.02-µm wavelength) passive sensing, sensors being accordingly called passive
through visible and includes far-infrared (>1 mm wave- sensors. The Sun, due to its high temperature (6000 K), is
length). The microwave range is from 1 mm to 1 m the most dominant source of EM energy. The radiation
wavelength. emitted by the Sun is incident on the Earth and is back
For terrestrial remote sensing purposes, as treated later, scattered. Assuming an average value of diffuse reflectance
the most important spectral regions are 0.4–14 µm (lying in of 10%, the spectral radiance due to solar reflection is as
the optical range) and 2 mm–0.8 m (lying in the microwave shown in Fig. 2.4a. Additionally, the Earth itself emits
range). There is a lack of unanimity among scientists with radiation due to its thermal state (Fig. 2.4a). All these radi-
regard to the nomenclature of some of the parts of the EM ation—the Sun’s radiation reflected by the Earth and those
spectrum. For example, the wavelength at 1.5 µm is con- emitted by the Earth carry information about ground mate-
sidered as near-IR (Fraser and Curran 1976; Hunt 1980), rials and can be used for terrestrial remote sensing. On the
middle-IR (Silva 1978), and short-wave-IR (Goetz et al. other hand, in some cases, the radiation is artificially gen-
1983). The nomenclature followed throughout the present erated (active sensor!), and the back-scattered signal is used
work is shown in Fig. 2.3b. for remote sensing, such as by laser and radar.
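The contrast between the Sun (about 6000 K) and the Earth (about 300 K) as radiation sources can be quantified with the blackbody laws of Sect. 2.2.2. The Python sketch below applies Wien's Displacement Law (Eq. 2.3) and the Stefan−Boltzmann Law (Eq. 2.4); the constants are rounded and the two temperatures are the nominal values used in the text.

```python
# Illustrative sketch: peak emission wavelength (Wien, Eq. 2.3) and total emitted
# power (Stefan-Boltzmann, Eq. 2.4) for nominal Sun and Earth temperatures.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
A_WIEN = 0.29        # Wien's constant, cm K

for name, T in (("Sun", 6000.0), ("Earth", 300.0)):
    lam_max_um = A_WIEN / T * 1e4    # peak wavelength, converted from cm to um
    W_total = SIGMA * T**4           # total emitted power, W per m^2
    print(f"{name:5s} T = {T:6.0f} K  lambda_max = {lam_max_um:5.2f} um  W = {W_total:.3e} W/m^2")
```

The computed peaks (about 0.48 µm and 9.7 µm) reproduce the values quoted earlier and explain why solar-reflection sensing and thermal-IR sensing occupy different parts of the spectrum.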

Fig. 2.4 a Energy available for remote sensing. The solar radiation b Transmission of the radiation through the atmosphere; note the
curve corresponds to the back-scattered radiation from the Earth’s presence of numerous atmospheric absorption bands and atmospheric
surface, assuming the surface to be Lambertian and having an albedo of windows. c Major sensor types used in different parts of the EM
0.1. The Earth’s blackbody radiation curve is for 300 K temperature; spectrum
2.3 Atmospheric Effects 17

2.3 Atmospheric Effects any change in the wavelength of the radiation and are con-
sidered as elastic scattering. Several models have been
The radiation reflected and emitted by the Earth passes proposed to explain the scattering phenomena.
through the atmosphere. In this process, it interacts with There are two basic types of scattering: (a) nonselective
atmospheric constituents such as gases (CO2, H2O vapour, scattering and (b) selective scattering. Nonselective scatter-
O3 etc.), and suspended materials such as aerosols, dust ing occurs when all wavelengths are equally scattered. It is
particles etc. During interaction, it gets partly scattered, caused by dust, cloud and fog, such that the scatterer parti-
absorbed and transmitted. The degree of atmospheric inter- cles are much larger than the wavelengths involved. As all
action depends on the pathlength and wavelength. visible wavelengths are equally scattered, clouds and fog
Pathlength means the distance travelled by the radiation appear white.
through the atmosphere, and depends on the location of the Amongst selective scattering, the most common is
energy source and the altitude of the sensor platform. Rayleigh scattering, also called molecular scattering, which
Sensing in the solar reflection region implies that the radi- occurs due to interaction of the radiation with mainly gas
ation travels through the atmosphere twice—in the first molecules and tiny particles (much smaller than the wave-
instance from the Sun towards the Earth, and then from the length involved). Rayleigh scattering is inversely proportional
Earth towards the sensor, before being sensed. On the other to the fourth power of the wavelength. This implies that
hand, the radiation emitted by the Earth traverses the shorter wavelengths are scattered more than longer wave-
atmosphere only once. Further, pathlength also depends lengths. This type of scattering is most severe in the ultra-
upon the altitude of the platform—whether it is at low aerial violet and blue end of the spectrum and is negligible at
altitude, high aerial altitude or space altitude. wavelengths beyond 1 µm. This is responsible for the blue
Attenuation of the radiation due to atmosphere interaction colour of the sky. If there were no atmosphere, the sky
also depends on the wavelength. Some of the wavelengths would appear just as a dark space.
are transmitted with higher efficiency, whereas others are In the context of remote sensing, Rayleigh scattering is the
more susceptible to atmospheric scattering and absorption. most important type of scattering and causes high path
The transmissivity of the atmosphere at a particular wave- radiance at the blue-end of the spectrum. It leads to haze on
length is a measure of the fraction of the radiance that images and photographs, which results in reduced contrast
emanates from the ground (due to solar reflection or and unsharp pictures. The effect of this type of scattering can
self-emission) and passes through the atmosphere without be reduced by using appropriate filters to eliminate shorter
interacting with it. It varies from 0 to 1. The transmissivity is wavelength radiation.
inversely related to another attribute called the optical Another type of scattering is the large-particle scattering,
thickness of the atmosphere, which describes the efficiency also called Mie scattering, which occurs when the particles
of the atmosphere in blocking the ground EM radiation by are spherical. It is caused by coarse suspended particles of a
absorption or scattering. size larger than the wavelength involved. The main scatterers
Thus the atmosphere acts as scatterer and absorber of the of this type are suspended dust particles and water vapour
radiation emanating from the ground. In addition, the molecules, which are more important in lower altitudes of
atmosphere also acts as a source of EM radiation due to its the atmosphere, close to the Earth’s surface. Mie scattering
thermal state. Therefore, the atmosphere–radiation interac- influences the entire spectral region from near-UV up to and
tions can be grouped into three physical processes: scatter- including the near-IR, and has a greater effect on the larger
ing, absorption and emission. wavelengths than Rayleigh scattering. Mie scattering depends
A remote sensor collects the total radiation reaching the on various factors such as the ratio of the size of scatterer
sensor—that emanating from the ground as well as that due particle to the wavelength incident, the refractive index of
to the atmospheric effects. The part of the signal emanating the object and the angle of incidence.
from the atmosphere is called path radiance, and that As it is influenced by water vapour, the Mie effect is more
coming from the ground is called ground radiance. The path manifest in overcast atmospheric conditions.
radiance tends to mask the ground signal and acts as a
background noise.
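The strong wavelength dependence of Rayleigh scattering (proportional to λ⁻⁴) is easy to verify numerically. In the sketch below the band centre wavelengths are illustrative choices and the result is normalised to the red band, so only relative magnitudes are shown.

```python
# Illustrative sketch: relative strength of Rayleigh (molecular) scattering, which
# varies as lambda**-4, normalised to a red band at 0.65 um.
bands_um = {"blue": 0.45, "green": 0.55, "red": 0.65, "near-IR": 0.85}
ref = 0.65 ** -4

for name, lam in bands_um.items():
    rel = (lam ** -4) / ref
    print(f"{name:7s} {lam:4.2f} um  relative Rayleigh scattering = {rel:4.1f}")
```

Blue light is scattered roughly four times more strongly than red, which is why path radiance and haze are most severe at the blue end of the spectrum.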
2.3.2 Atmospheric Absorption

2.3.1 Atmospheric Scattering The atmospheric gases selectively absorb EM radiation. The
atoms and molecules of the gases possess certain specific
Atmospheric scattering is the result of diffuse multiple energy states (rotational, vibrational and electronic energy
reflections of EM radiation by gas molecules and suspended levels; see Chap. 3). Photon energies of some of the EM
particles in the atmosphere. These interactions do not bring radiation may be just sufficient to cause permissible energy

Table 2.1 Major atmospheric windows (clearer windows shown in boldface)

Name                 Wavelength range      Region
Ultraviolet—visible  0.30–0.75 µm          Optical
Near-IR 0.77–0.91 µm Optical
Short-wave-IR 1.00–1.12 µm Optical
1.19–1.34 µm
1.55–1.75 µm
2.05–2.4 µm
Mid-IR (Thermal-IR) 3.50–4.16 µm Optical
4.50–5.0 µm
8.00–9.2 µm
10.20–12.4 µm
(8–14 µm for aerial sensing)
17.00–22.0 µm
Microwave 2.06–2.22 mm Microwave
7.50–11.5 mm
20.0+ mm

level changes in the gas molecule leading to selective Therefore, for terrestrial sensing, the effect of self-emission
absorption of EM radiation (Fig. 2.4b). The most important by the atmosphere can be significantly reduced by restricting
atmospheric constituents in this regard are H2O vapour remote sensing observations to good atmospheric windows.
(absorption at 0.69, 0.72, 0.76 µm), CO2 (absorption at 1.6,
2.005, 2.055 µm) and O3 (absorption at 0.35, 9.6 µm). The
spectral regions of least absorption are called atmospheric 2.4 Energy Interaction Mechanisms
windows, as they can be used for looking at ground surface on the Ground
phenomena from aerial or space platforms across the atmo-
sphere. Important atmospheric windows available for The EM-energy incident on the earth-surface may be
space-borne sensing are listed in Table 2.1. The visible part reflected, absorbed and or transmitted (Fig. 2.5). Following
of the spectrum is marked by the presence of an excellent the Law of Conservation of Energy, the energy balance can
atmospheric window. Prominent windows occur throughout be written as:
the EM spectrum at intervals. In the thermal-IR region, two
important windows occur at 8.0–9.2 and 10.2–12.4 µm that Eiλ = Erλ + Eaλ + Etλ    (2.14)
are separated by an absorption band due to ozone, present in where Eik is the spectral incident energy, Erk, Eak and Etk
the upper atmosphere. For sensing from aerial platforms, the are the energy components reflected, absorbed and trans-
thermal channel can be used as 8–14 µm. The atmosphere is mitted respectively. The components Erk, Eak and Etk differ
essentially opaque in the region of 22 µm to 1 mm wave-
length. Microwaves of wavelength greater than 20 mm are
propagated through the atmosphere with least attenuation.
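A practical use of Table 2.1 is to check whether a proposed sensor band lies inside an atmospheric window. The Python sketch below encodes the optical-region window limits from the table; the two test bands are invented for illustration.

```python
# Illustrative sketch: testing whether a sensor band (lower, upper, in um) lies
# entirely within one of the optical atmospheric windows listed in Table 2.1.
windows_um = [(0.30, 0.75), (0.77, 0.91), (1.00, 1.12), (1.19, 1.34),
              (1.55, 1.75), (2.05, 2.40), (3.50, 4.16), (4.50, 5.00),
              (8.00, 9.20), (10.20, 12.40), (17.00, 22.00)]

def in_window(band):
    """True if the band falls entirely within some atmospheric window."""
    lo, hi = band
    return any(w_lo <= lo and hi <= w_hi for w_lo, w_hi in windows_um)

print(in_window((1.55, 1.75)))     # True: within the 1.55-1.75 um window
print(in_window((10.40, 12.50)))   # False: upper edge extends past 12.4 um
```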

2.3.3 Atmospheric Emission

The atmosphere also emits EM radiation due to its thermal


state. Owing to its gaseous nature, only discrete bands of
radiation (not forming a continuous spectrum) are emitted by
the atmosphere. The atmospheric emission would tend to
increase the path radiance, which would act as a background
noise, superimposed over the ground signal. However, as
spectral emissivity equals spectral absorptivity, the atmo- Fig. 2.5 Energy interaction mechanism on ground; Eik is the incident
EM energy; Erk, Eak and Etk are the energy components reflected,
spheric windows are marked by low atmospheric emission. absorbed and transmitted respectively
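The energy balance of Eq. 2.14 can be exercised with a few lines of Python. The component values below are arbitrary illustrative numbers; the sketch simply closes the balance and reports the reflected fraction, which is the reflectance discussed in the next section.

```python
# Illustrative sketch (component values invented): energy balance of Eq. 2.14 and
# the reflected fraction of the incident energy.
E_incident = 100.0                  # arbitrary units
E_reflected, E_absorbed = 35.0, 55.0
E_transmitted = E_incident - (E_reflected + E_absorbed)   # balance closure

reflectance = E_reflected / E_incident
print(f"E_transmitted = {E_transmitted:.1f}, reflectance = {reflectance:.2f}")
```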

for different objects at different wavelengths. These inherent


differences build up the avenues for discrimination of objects
by remote sensing measurements.

2.4.1 Reflection Mechanism

The reflection mechanism has relevance to techniques in the


solar reflection region and active microwave sensing, where
sensors record intensity of EM radiation reflected from the
ground. In the reflectance domain, the reflected energy can
be written as:

Erλ = Eiλ − (Eaλ + Etλ)    (2.15)


Therefore, the amount of reflected energy depends upon
the incident energy, and mechanisms of reflection, absorp-
tion and transmission. The reflectance is defined as the Fig. 2.7 Mechanism of scattering in multiple directions from natural
proportion of the incident energy, which is reflected: uneven surfaces

Rλ = Erλ / Eiλ    (2.16)
non-parallel plane surfaces and fine edges and irregularities,
the dimensions of which are of the order of the wavelength
When considered over a broader wavelength range, it is of the incident radiation. This results in multitudes of
also called as albedo. Further, the interactions between EM reflections in numerous directions and diffraction at fine
radiation and ground objects may result in reflection, edges and small irregularities, leading to a sum total of the
polarization and diffraction of the wave, which are governed scattered radiation from the surface (Fig. 2.7). An extreme
by mainly composite physical factors like shape, size, sur- ideal case is the Lambertian surface in which the radiation is
face features and environment. These phenomena occur reflected equally in all directions, irrespective of the angle of
at boundaries, and are best explained by the wave nature incidence (Fig. 2.6b). Most natural bodies are in-between
of light. the two extremes of specular reflection and Lambertian
If the surface of the object is an ideal mirror-like plane, reflection, and show a semi-diffuse reflection pattern. The
specular reflection occurs following the Snell’s Law radiation is scattered in various directions, but is maximum
(Fig. 2.6a). The angle of reflection equals the angle of in a direction, which corresponds to the Snell’s Law
incidence and the incident ray the normal and the reflected (Fig. 2.6c).
ray are in the same plane. Rough surfaces reflect in multi- Further, whether a particular surface behaves as a spec-
tudes of directions, and such reflection is said to be scat- ular or a rough surface depends on the dimension of the
tering or non-specular reflection. It is basically an elastic or wavelength involved and the local relief. For example, a
coherent type of phenomenon in which no change in the level bed composed of coarse sand (grain size, e.g. 1 mm)
wavelength of the radiation occurs. The uneven surfaces can would behave as a rough surface for VNIR wavelengths and
be considered as being composed of numerous small as a smooth surface for microwaves.

Fig. 2.6 Reflection mechanisms. a specular reflection from a plane surface; b Lambertian reflection from a rough surface (diffuse reflection);
c semi-diffused reflection (natural bodies)

Fig. 2.8 Common geometric configurations in reflection sensing. a Solar reflection sensing, b SAR sensing, c LIDAR sensing

The intensity of reflected EM radiation received at the transmitted ray gets further scattered, leading to volume
remote sensor depends, beside other factors, on geometry— scattering in the medium. In nature, both surface and volume
both viewing and illuminating. In practice, a number of scattering happen, side by side, and both processes con-
variations occur: (a) In solar reflection sensing, commonly tribute to the total signal received at the sensor.
the Sun is obliquely illuminating the ground, and the remote As defined, the depth of penetration is considered as that
sensor is viewing the terrain near-vertically from the above depth below the surface at which the magnitude of the power
(Fig. 2.8a). (b) The radar (SAR) imaging involves illumi- of the transmitted wave is equal to 36.8% (1/e) of the power
nation and sensing from an oblique direction (Fig. 2.8b). transmitted, at a point just beneath the surface (Ulaby and
(c) In LIDAR the sensors operate in near-vertical mode and Goetz 1987).
record back-scattered radiation (Fig. 2.8c). It is important The transmission mechanism of EM energy is still not
that the goniometric aspects are properly taken into account fully well understood. It is considered to depend mainly on
while interpreting the remote sensing data. an electrical property of matter, called the complex dielectric
Some special phenomena may occur in specific circum- constant (d). This varies spectrally and is different for dif-
stances during reflection, the most important of which is ferent materials. When the dielectric constant is low, the
polarization. The reflected wave train may become polarized radiation penetrates to a greater depth and the energy travels
or depolarized in a certain direction depending upon the through a larger volume of the material (therefore there is
ground attributes. The potential of utilizing the polarization less surface scattering and greater volume scattering). Con-
effects of waves in remote sensing appears to be quite dis- versely, when the object has a higher d, the energy gets
tinct in the microwaves (see Chap. 16). confined to the top surficial layer with little penetration
A remote sensor measures the total intensity of EM (resulting in dominantly surface scattering). As the complex
radiation received the sensor which depends not only on dielectric constant of materials varies with wavelength, the
reflection mechanism but also on factors influencing depth penetration also varies accordingly. For example,
absorption and transmission processes. water bodies exhibit penetration at visible wavelengths but
mainly surface scattering at microwave frequencies, whereas
the reverse happens for dry rock/soil (Table 2.2).
2.4.2 Transmission Mechanism It is implicit that the transmission characteristics also
influence the amount of energy received at the sensor, for the
When a beam of EM energy is incident on a boundary, for simple reason that transmission characteristics govern sur-
example on the Earth’s surface, part of the energy gets face vis-à-vis volume scattering, as also the component of
scattered from the surface (called surface scattering) and part the energy which is transmitted and does not reach the
may get transmitted into the medium. If the material is remote sensor.
homogeneous, then this wave is simply transmitted. If, on Figure 2.9a, b, c is a set of three images of the same water
the other hand, the material is inhomogeneous, the body illustrating how interaction of the EM radiation may

Table 2.2 Bearing of the spectral complex dielectric constant (dλ) of matter on depth penetration (transmission) of EM radiation

Wavelength range   Water/sea body                                              Dry rock/soil
Visible            Low dλ; transmission of radiation and volume scattering     High dλ; surface scattering
Microwave          High dλ; surface scattering                                 Low dλ; transmission of radiation and volume scattering
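The definition of depth of penetration given above (the depth at which the transmitted power has fallen to 1/e, about 36.8%) implies an exponential decay of power with depth. The Python sketch below assumes an exponential model and an invented penetration depth purely to illustrate the definition; it is not a physical model of any particular material.

```python
# Illustrative sketch (penetration depth invented): power decaying as exp(-z/d)
# below the surface; d is the depth where the power falls to 1/e (about 36.8%).
import math

d = 0.15   # assumed penetration depth, m
for z in (0.0, 0.05, 0.15, 0.30):
    fraction = math.exp(-z / d)
    print(f"z = {z:4.2f} m  transmitted-power fraction = {fraction:.3f}")
```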

differ according to wavelength of the incident radiation particles; therefore, the water body appears in shades of
resulting in different images. Figure 2.9a, b are images in the gray. In the near-IR range, the incident radiation is com-
blue and near-IR ranges of the EM spectrum respectively, pletely absorbed by the water body, and the water body
and Fig. 2.9c is a radar image. Figure 2.9d provides expla- appears very dark on the image. In the case of radar, there is
nations for all the three cases. In the blue part of the spec- neither absorption nor transmission of radiation into the
trum, the radiation is partly scattered from the water surface, water body but the radiation is specularly reflected from the
enters (transmission!) the water body and also exhibits water surface with no radar-return at the antenna resulting in
volume scattering due to interaction with suspended nearly black image tones for the water body.

Fig. 2.9 Interaction mechanism of the EM radiation of different wavelengths resulting in different images. Figure 2.9a–c is a set of three images of the same water body in three wavelength ranges: a = blue-green band, b = near-IR band and c = SAR; Fig. 2.9d schematically explains the corresponding interaction mechanisms. For details see text. (ALOS-AVNIR and PALSAR images of west coast of Florida, USA) (a–c courtesy: A. Prakash)

2.4.3 Absorption Mechanism

Interaction of incident energy with matter on the atomic–molecular scale leads to selective absorption of the EM radiation. An atomic–molecular system is characterized by a set of inherent energy states (i.e. rotational, vibrational and electronic). A different amount of energy is required for transition from one energy level to another for each of these states. An object absorbs radiation of a particular wavelength if the corresponding photon energy is just sufficient to cause a set of permissible transitions in the atomic–molecular energy levels of the object. The wavelengths absorbed are related to many factors, such as dominant cations and anions present, solid solutions, impurities, trace elements, crystal lattice etc. (for further details see Chap. 3).

2.4.4 Earth's Emission

The Earth, owing to its ambient temperature, is a source of blackbody radiation, which constitutes the predominant energy available for terrestrial sensing at wavelengths >3.5 µm (Fig. 2.4a). The emitted radiation depends upon temperature and emissivity of the materials. These aspects are presented in greater detail in Chap. 12.

In the above paragraphs, we have discussed the sources of radiation, atmospheric effects and the mechanism of ground interactions. The sensors used in the various spectral regions are shown in Fig. 2.4c. They include the human eye, radiometer, scanner, radar, lidar and microwave passive sensors.
Further discussion is divided into two main parts: optical range (Chaps. 3–14) and microwave range (Chaps. 15–17) (see Table 1.2).

References

Fraser RS, Curran RJ (1976) Effects of the atmosphere on remote sensing. In: Lintz J Jr, Simonett DS (eds) Remote Sensing of Environment. Addison-Wesley, Reading, pp 34–84
Goetz AFH, Rock BN, Rowan LC (1983) Remote sensing for exploration: an overview. Econ Geol 79:573–590
Hunt GR (1980) Electromagnetic radiation: the communication link in remote sensing. In: Siegal BS, Gillespie AR (eds) Remote Sensing in Geology. Wiley, New York, pp 5–45
Silva LF (1978) Radiation and instrumentation in remote sensing. In: Swain PH, Davis SM (eds) Remote Sensing: The Quantitative Approach. McGraw Hill, New York, pp 21–135
Suits GH (1983) The nature of electromagnetic radiation. In: Colwell RN (ed) Manual of Remote Sensing. Am Soc Photogramm, Falls Church, VA, pp 37–60
Ulaby FT, Goetz AFH (1987) Remote sensing techniques. Encyclopedia of Physical Science and Technology, vol 12. Academic Press, New York, pp 164–196
3 Spectra of Minerals and Rocks

3.1 Introduction

Interactions of the EM radiation with matter at atomic–molecular scale result in selective absorption, emission and reflection, which are best explained by the particle nature of light. The relationship between the intensity of EM radiation and wavelength is called the spectral response curve, or broadly, spectral signature (Fig. 1.1). A single feature or a group of features (pattern) in the curve could be diagnostic in identifying the object. In the context of remote sensing, objects can be marked by the following types of spectral behaviour.

1. Selective absorption. Some of the wavebands are absorbed selectively and the spectral character is marked by a relatively weak reflection; this phenomenon is widely observed in the solar reflection region.
2. Selective reflection. At times, a particular wavelength is strongly reflected, leading to a 'resonance-like' phenomenon; the selective reflection may be so intense that it may lead to separation of a monochromatic beam (called residual rays or Reststrahlen). There occur numerous Reststrahlen bands in the thermal-infrared region.
3. Selective higher or lower emission. Some of the objects may exhibit selective higher or lower emission at certain wavelengths.

Obviously, spectral signatures constitute the basic information needed for designing any remote sensing experiment or interpreting the data. During the last three decades or so, this subject matter has received a good deal of attention. In the following pages, we first give a brief introduction to laboratory spectroscopic methods of spectral data collection and then discuss the atomic–molecular processes which lead to spectral features. After this, spectra of selected ionic constituents, minerals and rocks are summarized and field and laboratory aspects are discussed. Finally, some aspects of the spectral response of other common objects such as vegetation, water, concrete etc. are presented.

3.2 Basic Arrangements for Laboratory Spectroscopy

A number of methods have been developed for laboratory spectral data measurements. These methods differ with regard to ranges of wavelength (visible, infrared, or thermal-IR) and also the physical phenomenon (reflection, absorption or emission) utilized for the investigation. Here, we will discuss only some of the basic concepts, so that the variation in spectral response curves due to differing spectroscopic arrangements is understandable.
Laboratory spectroscopic arrangements can be conceived as being of three types: reflection, emission and absorption (also called transmission) (Fig. 3.1). Precise inter-relationships between emission, reflection and absorption spectra are still not well understood.

1. Reflection arrangement. This has been the most extensively used arrangement in the optical region. The EM radiation from an external source (the Sun or an artificial light source) impinges upon the object-sample and is reflected onto a detector (Fig. 3.1a). It is customary to express the spectral reflectance as a percentage of the reflectance of MgO, in order to provide a calibration.
2. Emission arrangement. The basic phenomenon that all objects above zero Kelvin (K) temperature emit radiation can form the basis for spectroscopy (Fig. 3.1b). In order to measure radiation emitted by the object-sample at room temperatures, devices have to be cooled to very low temperatures (so that the measuring instrument itself does not constitute a source of emission). Another possibility is to heat the sample, in order to measure the emitted radiation; however, there are practical problems of non-uniform heating. Owing to the above difficulties, in general, the emission spectra are computed from reflection or transmission spectral measurements.
3. Transmission arrangement. This is most suited for gases and liquids; fine particulate solids suspended in air or embedded in suitable transparent pellets can also be


Fig. 3.1 Laboratory schemes of spectroscopic study and resulting spectral curves. a reflection, b emission, c absorption/transmission (S = source
of radiation; D = detector)

studied under this arrangement. The sample is placed before a source of radiation; the radiation intensity transmitted through the sample is measured by a detector (Fig. 3.1c). As the physical phenomenon involved is spectral absorption, it is also called an absorption arrangement.

The type of laboratory arrangement employed for spectral studies also depends upon the EM wavelength range under investigation. In the visible-near-infrared (VNIR)–SWIR region, reflection is the most widely used arrangement, although transmission spectra are also reported. An emission arrangement in this region would require the sample to be heated to several thousand degrees, which is impractical. In the thermal-IR range, all three, viz. reflection, emission and transmission arrangements, have been used by different workers.
If an object exhibits selective high reflection at a particular wavelength, the same feature will appear as selective low emission and selective low absorption (or high transmission) at the same spectral band in other arrangements. Hence, the shape and pattern of any spectral curve depend on the spectroscopic arrangement used. In this treatment, all spectra in the VNIR–SWIR range are given for reflection, and that in the TIR range for emission (unless otherwise specified).
A spectral curve appears as a waveform comprising positive and negative peaks and slopes. The negative peaks in all types of spectral curves are commonly called absorption bands, irrespective of whether they are related to reflection, absorption, emission or transmission or may imply high or low spectral absorptivity.

3.3 Energy States and Transitions—Basic Concepts

The energy state of an object is a function of the relative position and state of the constituent particles at a given time. The sum-total energy of an atomic–molecular system can be expressed as the sum of four different types of energy states: translational, rotational, vibrational and electronic. A different amount of energy is required for each of these types of transitions to occur. Therefore, different energy transitions appear in different parts of the EM spectrum (Fig. 3.2). Translational energy, because of its unquantized nature, is not considered here. Rotational energy, which is the kinetic energy of rotation of a molecule as a whole in space, is also not considered here because of the physical property of solid substances. The vibrational energy state is involved with the movement of atoms relative to each other, about a fixed position. Such energy level changes are caused by radiation of the thermal-IR and SWIR regions. The overtones and combinations of vibrational energy level changes are caused by SWIR radiation. The electronic energy state is related to the configuration of electrons surrounding the nucleus or to the bonds; their transitions require an even greater amount of energy, and are caused by photons of the near-IR, visible, UV and X-ray regions. (Photons of the gamma ray are related to nuclear transitions, i.e. radioactivity.)

Fig. 3.2 Types of energy level changes associated with different parts of the EM spectrum
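As a rough numerical aside (not from the book), the photon energy E = hc/λ shows why these transition types map onto different spectral regions: electronic transitions need photons of roughly an electron-volt or more (UV–visible–near-IR), whereas vibrational transitions correspond to only tenths of an eV or less (SWIR and thermal-IR). A minimal Python sketch:

```python
# Photon energy E = h*c/lambda, expressed in electron-volts.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

for label, wavelength_um in [("UV", 0.3), ("visible", 0.55), ("near-IR", 1.0),
                             ("SWIR", 2.2), ("thermal-IR", 10.0)]:
    energy_ev = H * C / (wavelength_um * 1e-6) / EV
    print(f"{label:10s} {wavelength_um:5.2f} um -> {energy_ev:5.2f} eV")
```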

3.3.1 Electronic Processes

Electronic processes (transitions) occur predominantly in the UV–VIS–near-IR region. Several models and phenomena have been conceived to explain the electronic processes which lead to selective absorption.

1. Charge-transfer effect. In some materials, the incident energy may be absorbed, raising the energy level of electrons so that they migrate between adjacent ions, but do not become completely mobile. This is called the charge-transfer effect and may be caused by photons of the UV–visible region. It is typically exhibited by iron oxide, a widespread constituent of rocks. Fe–O absorbs radiation of shorter wavelength energy, such that there is a steep fall-off in reflectance towards blue (Fig. 3.3); it therefore has a red colour in the visible range. Another example of the charge-transfer effect is the uranyl ion (UO2²⁺) in carnotite, which absorbs all energy less than 0.5 µm, resulting in the yellow colour of the mineral.

Fig. 3.3 Spectral reflectance curves of jarosite, hematite and goethite showing sharp fall-off in reflectance in the UV-blue region due to the charge-transfer effect; also note a ferric ion feature at 0.87 µm present in all three (Segal 1983) (spectral curves offset for clarity)

2. Conduction-band absorption effect. In some semi-conductors such as sulphides, nearly all the photons of energy greater than a certain threshold value are absorbed, raising the energy of the electrons to conduction-band level. Thus, the spectra show a sharp edge effect (Fig. 3.4).
3. Electronic transition in transition metals. Electronic processes frequently occur in transition metals, for example:

Ferrous ion: 1.0 µm, 1.8–2.0 µm and 0.55–0.57 µm
Ferric ion: 0.87 µm and 0.35 µm; sub-ordinate bands around 0.5 µm
Manganese: 0.34 µm, 0.37 µm, 0.41 µm, 0.45 µm and 0.55 µm
Copper: 0.8 µm
Nickel: 0.4 µm, 0.74 µm and 1.25 µm
Chromium: 0.35 µm, 0.45 µm and 0.55 µm

The presence of these elements leads to absorption bands at the appropriate wavelengths.

4. Crystal field effect. The energy level for the same ion may be different in different crystal fields. This is called the

Fig. 3.4 Reflection spectra of particulate samples of stibnite, cinnabar, realgar and sulphur, displaying sharp conduction band absorption edge effect (after Hunt 1980)

crystal field effect. In the case of transition elements, such as Ni, Cr, Cu, Co, Mn etc., it is the 3d shell electrons that primarily determine the energy levels. These electrons are not shielded (outer shell orbital). Consequently, their energy levels are influenced by the external field of the crystal, and the electrons may assume new energy values depending upon the crystal fields. In such cases, the new energy levels, the transition between them, and consequently their absorption spectra, are determined primarily by the valence state of the ion (e.g. Fe2+ or Fe3+) and by its co-ordination number, site symmetry and, to a limited extent, the type of ligand formed (e.g. metal–oxygen) and the degree of lattice distortion. Hunt (1980) has given a set of examples of ferrous ions which, when located in different crystal fields, produce absorption peaks at different wavelengths (Fig. 3.5).

Fig. 3.5 Crystal field effect. Reflection spectra showing ferrous-ion bands in selected minerals; the ferrous ion is located in an aluminium octahedral six-fold co-ordinated site in beryl, in a distorted octahedral six-fold co-ordinated site in olivine, in an octahedral eight-fold co-ordinated site in spessartine, and in a tetrahedral site in staurolite (Hunt 1980) (spectra separated for clarity)

In some cases, the influence of the crystal field on spectral features may not be significant, for example in the case of rare earth atoms, where the unfilled shells are the 4f electrons, which are well shielded from outside influence, and show no significant crystal field effect.

3.3.2 Vibrational Processes

Most of the rock-forming minerals (including silicates, oxides, hydroxyls, carbonates, phosphates, sulphates, nitrates etc.) are marked by atomic–molecular vibrational processes occurring in the SWIR and TIR parts of the EM spectrum. These are the result of bending and stretching molecular motions and can be distinguished as fundamentals, overtones and combinations. The fundamental tones occur mainly in the thermal-infrared region (>3.5 µm) and their combinations and overtones in the SWIR region (1–3 µm).

3.4 Spectral Features of Mineralogical Constituents

Hunt and his co-workers have published considerable data on this aspect, which has been summarized in Hunt (1977, 1979, 1980) and Salisbury and Hunt (1974). In addition, Lyon (1962, 1965), Farmer (1974), Kahle et al. (1986) and Chukanov and Chervonnyi (2016) have also contributed significantly to the understanding of these aspects. The following summary is based mainly on the above investigations and reviews. The discussion is divided into three parts: (1) the VNIR region, (2) the SWIR region and (3) the TIR region.

3.4.1 Visible and Near-Infrared (VNIR) Region (0.4–1.0 µm)

Spectral features in this part of the EM spectrum are dominated by electronic processes in transition metals (i.e. Fe,

Mn, Cu, Ni, Cr etc.). Elements such as Si, Al, and various
anion groups such as silicates, oxides, hydroxides, carbon-
ates, phosphates etc., which form the bulk of the Earth’s
surface rocks, lack spectral features in this region. Iron is the
most important constituent having spectral properties in this
region. The ferric-ion in Fe–O, a ubiquitous constituent in
rocks, exhibits strong absorption of UV–blue wavelengths,
due to the charge-transfer effect. It results in a steep fall-off
in reflectance towards blue, and a general rise towards
infrared (Fig. 3.3), with a peak occurring in the 1.3–1.6 µm
region. The absorption features due to iron, manganese,
copper, nickel and chromium have been mentioned above
(Sect. 3.3.1).
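This steep fall-off towards blue is the physical basis of the simple red/blue reflectance ratios widely used to flag iron-oxide-rich (limonitic) surfaces on multispectral images. The sketch below only illustrates the idea; the pixel reflectances and the threshold are assumed values, not taken from this book.

```python
import numpy as np

# Assumed surface reflectances in a blue (~0.48 um) and a red (~0.66 um) band
# for a tiny 2 x 3 pixel scene (illustrative values only).
blue = np.array([[0.10, 0.12, 0.05],
                 [0.09, 0.04, 0.11]])
red  = np.array([[0.12, 0.13, 0.18],
                 [0.11, 0.17, 0.12]])

# Red/blue ratio: high values reflect the strong Fe-O charge-transfer
# absorption in the UV-blue, suggesting possible iron-oxide cover.
ratio = red / blue
print(np.round(ratio, 2))
print("possible iron-oxide pixels:", np.argwhere(ratio > 2.0).tolist())
```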

3.4.2 Shortwave-Infrared (SWIR) Region (1–3 µm)

The SWIR region is important as it is marked by spectral features of hydroxyls and carbonates, which commonly occur in the Earth's crust. The exact location of peaks may shift due to crystal field effects. Further, these absorption bands can be seen on the solar reflection images.
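Such SWIR absorption features are commonly quantified by fitting a local continuum and measuring the relative band depth, which underlies many clay and carbonate indices. The following is a minimal sketch of that idea for the ~2.2 µm Al–OH feature discussed under item 1 below; the reflectance values are hypothetical, not data from this book.

```python
import numpy as np

# Hypothetical reflectances around the ~2.2 um Al-OH absorption feature.
wl   = np.array([2.10, 2.15, 2.20, 2.25, 2.30])   # micrometres
refl = np.array([0.42, 0.38, 0.30, 0.37, 0.41])

# Straight-line continuum between the feature shoulders (2.10 and 2.30 um).
continuum = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])

# Relative band depth; the channel with maximum depth marks the band centre.
band_depth = 1.0 - refl / continuum
centre = wl[np.argmax(band_depth)]
print(f"band centre ~ {centre:.2f} um, relative depth = {band_depth.max():.2f}")
```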

1. Hydroxyl ion. The hydroxyl ion is a widespread constituent occurring in rock-forming minerals such as clays, micas, chlorite etc. It shows a vibrational fundamental absorption band at about 2.74–2.77 µm, and an overtone at 1.44 µm. Both the fundamental (2.77 µm) and overtone (1.44 µm) features interfere with similar features observed in the water molecule. However, when hydroxyl ions occur in combination with aluminium and magnesium, i.e. as Al–OH and Mg–OH, which happens to be very common in clays and hydrated silicates, several sharp absorption features are seen in the 2.1–2.4 µm region (Fig. 3.6).

Fig. 3.6 Reflectance spectra of some common clay minerals

The Al–OH vibrational band typically occurs at 2.2 µm and that of Mg–OH at 2.3 µm. If both Mg–OH and Al–OH combinations are present, then the absorption peak generally occurs at 2.3 µm and a weaker band at 2.2 µm, leading to a doublet, e.g. in kaolinite (Fig. 3.6). In comparison to this, montmorillonite and muscovite typically exhibit only one band at 2.3 µm, due to Mg–OH. Iron may substitute for aluminium or magnesium, and this incremental substitution in clays structurally reduces the intensity of the Al–OH band (2.2 µm) or the Mg–OH band (2.3 µm) (and increases the electronic bands for iron in the 0.4–1.0 µm region).
The absorption phenomenon within the 2.1–2.4 µm range due to Al–OH and Mg–OH leads to sharply decreasing spectral reflectance beyond 1.6 µm, if clays are present. This broad-band absorption feature at 2.1–2.4 µm is used to diagnose clay-rich areas. As the peak of the reflectance occurs at about 1.6 µm (Fig. 3.6), the ratio of broad spectral bands 1.55–1.75 µm/2.2–2.24 µm is a very powerful parameter in identifying clay mineral assemblages (Abrams et al. 1977). Further, it is also useful in identifying hydrothermal alteration zones as many clay minerals, e.g. kaolinite, muscovite, pyrophyllite, alunite, dickite, montmorillonite and diaspore, are associated with such zones.

2. Water molecules. Important absorption bands due to water molecules occur at 1.4 µm and 1.9 µm (Fig. 3.7). Sharp peaks imply that water molecules occur in well-defined sites, and when the peaks are broad it means that they occur in unordered sites.
3. Carbonates. Carbonates occur quite commonly in the Earth's crust, in the form of calcite (CaCO3), magnesite (MgCO3), dolomite [(Ca–Mg)CO3] and siderite (FeCO3). Important carbonate absorption bands in the SWIR occur at 1.9, 2.35 and 2.55 µm, produced due to combinations and overtones (Fig. 3.8). The peak at 1.9 µm may interfere with that due to the water molecule and that at 2.35 µm with a similar feature in clays at around 2.3 µm. However, a combination of 1.9 and 2.35 µm and also an

extra feature at 2.5 µm is considered diagnostic of carbonates. Further, presence of siderite is accompanied by an electronic absorption band due to iron, occurring near 1.1 µm (Fig. 3.8) (Whitney et al. 1983).

Fig. 3.7 Reflectance spectra of selected water-bearing minerals. Note the absorption bands at 1.4 and 1.9 µm (Hunt 1980)

Fig. 3.8 Reflectance spectra of carbonates; note the carbonate absorption band at 2.35 µm (Whitney et al. 1983)

3.4.3 Thermal-Infrared (TIR) Region (Approx. 3–25 µm)

This part of the EM spectrum is characterized by spectral features exhibited by many rock-forming mineral groups, e.g. silicates, carbonates, oxides, phosphates, sulphates, nitrates, nitrites, hydroxyls. The fundamental vibration features of the above anionic groups occur in the thermal-IR region. Physical properties such as particle size and packing can produce changes in emission spectra in terms of relative depth of the absorption, but not in the position of the spectral band.
Typical spectra of representative anionic groups, plotted as percent emission, are shown in Fig. 3.9. The negative peaks (popularly called absorption bands) in these curves indicate low spectral emissivity, which implies low spectral absorptivity or, in other words, high spectral reflectivity (Reststrahlen bands—physically, simply the inverse of the reflection spectra!).
The carbonates show a prominent absorption feature at 7 µm. However, as this is outside the atmospheric window (8–14 µm), it cannot be used for remote sensing; instead, the weak feature around 11.3 µm can possibly be detected. The sulphates display bands near 9 and 16 µm. The phosphates also have fundamental features near 9.25 and 10.3 µm. The features in oxides usually occupy the same range as that of bands in Si–O, i.e. 8 to 12 µm (discussed below). The nitrates have spectral features at 7.2 µm and the nitrites at 8 and 11.8 µm. The hydroxyl ions display fundamental vibration bands at 11 µm, e.g. in the H–O–Al bending mode in the aluminium-bearing clays.
The silicates, which form the most abundant group of minerals in the Earth's crust, display vibrational spectral features in the TIR region due to the presence of the SiO4 tetrahedron. Considering the spectra of common silicates such as quartz, feldspars, muscovite, augite, hornblende and olivine, the following general features can be outlined (Hunt 1980; Christensen 1986) (Fig. 3.10).

Fig. 3.9 Thermal infrared spectra of the major anionic mineral groups (generalized) (data in all figures are reflectance spectra converted to emission spectra using Kirchhoff's law) (Christensen et al. 1986)

(a) In the region 7–9 µm there occurs a maximum, called the Christiansen peak; its location migrates systematically with the composition, being near 7 µm for felsic minerals and near 9 µm for ultramafic minerals.
(b) In the region 8.5–12 µm an intense silicate absorption band occurs, overall centered around 10 µm; therefore, 10 µm is generally designated as the Si–O vibration absorption (Reststrahlen) region; however, its exact position is sensitive to the silicate structure and shifts from nearly 9 µm (framework silicates or felsic minerals) to 11.5 µm (chain silicates or mafic minerals).
(c) The 12–15 µm region is sensitive to silicate and aluminium-silicate structure of tectosilicate type; other silicates having structures of sheet, chain or ortho types do not show absorption features in this region; the absorption patterns in the form of numbers and location of peaks are different for different feldspars, thus permitting possible identification.

From a geological point of view, therefore, the thermal-IR is the most important spectral region for remote sensing aiming at compositional investigations of terrestrial materials.

Fig. 3.10 Emission spectra of selected silicate minerals showing the correlation between band location (vibrational energy) and mineral structure (Christensen et al. 1986)

3.5 Spectra of Minerals

Minerals are naturally formed inorganic substances and consist of combinations of cations and anions. They may be chemically simple or highly complex. Some of the ions may occur as major, minor or trace constituents. The spectrum of a mineral is governed by the in toto effect of the following factors:

– spectra of dominant anions,
– spectra of dominant cations,
– spectra of ions occurring as trace constituents, and
– crystal field effect.

A few examples are given below to clarify the above.

1. Limonite (iron oxide) exhibits a wide absorption band in the UV–blue region, due to the Fe–O charge-transfer effect; in addition, a ferric-ion absorption feature occurs in the near-IR (0.87 µm) region (Fig. 3.3). If the mineral is hydrated, the water molecule bands occur at 1.44 and 1.9 µm.
2. Quartz consists of simple silicate tetrahedra and its absorption bands occur only in the thermal-IR. However, the presence of impurities in quartz leads to absorption bands in the visible region and hence coloration, e.g. iron impurities give brown and green colours to the mineral.
3. In pyroxenes, absorption bands due to ferrous ion occur in the VNIR region and those due to silicate ion in the thermal-IR. The position and intensity of the absorption bands are governed by crystal field effect.
4. In amphiboles, micas and clays, absorption bands due to iron occur in the VNIR region; those due to hydroxyl ion in the SWIR region and those due to silicates in the thermal-IR region.
5. In all carbonate minerals, the carbonate-ion absorption bands occur in the SWIR and thermal-IR regions. Additionally, absorption features due to iron in siderite, and due to manganese in rhodochrosite are seen in the VNIR region.
6. Clays exhibit the characteristic SWIR bands for Al–OH/Mg–OH (bands at 2.2–2.3 µm) and water molecule bands at 1.4 µm. In the thermal-IR region, the Al–OH feature appears at 11 µm. The presence of iron leads to electronic transition bands in the near-IR region.

Therefore, it can be summarily concluded that the spectrum of a mineral is a result of the combination of the spectra of its constituents and crystal field effects.
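Where the combination is (to a first approximation) linearly additive, as discussed for areal mixtures and thermal-IR rock spectra in Sect. 3.6, the composite spectrum can be pictured simply as an abundance-weighted sum of end-member spectra. The sketch below is a generic illustration with synthetic spectra and equal abundances; none of the numbers are from this book.

```python
import numpy as np

# Synthetic end-member emissivity spectra (illustrative Gaussian troughs only).
wl = np.linspace(8.0, 12.0, 9)                                   # micrometres
olivine     = 1.0 - 0.25 * np.exp(-((wl - 10.8) / 0.5) ** 2)
augite      = 1.0 - 0.20 * np.exp(-((wl - 10.2) / 0.5) ** 2)
labradorite = 1.0 - 0.22 * np.exp(-((wl -  9.5) / 0.5) ** 2)

# Additive (areal) mixing: abundance-weighted sum of the end-member spectra.
abundances = np.array([1/3, 1/3, 1/3])
mixture = abundances[0]*olivine + abundances[1]*augite + abundances[2]*labradorite

for w, e in zip(wl, mixture):
    print(f"{w:5.1f} um  mixed emissivity = {e:.3f}")
```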

3.6 Spectra of Rocks

Rocks are aggregates of minerals and are compositionally more complex and variable than minerals. Defining diagnostic spectral curves of rocks is difficult. However, it is possible to broadly describe the spectral characters of rocks, based on the spectral characters of the constituent minerals. The discussion here is divided into two parts: (a) the solar reflection region and (b) the thermal-IR region.

3.6.1 Solar Reflection Region (VNIR + SWIR)

Spectra of rocks depend on the spectra of the constituent minerals and textural properties such as grain size, packing, and mixing etc. Several models, semi-empirical methods and analytical procedures have been proposed to understand the spectra of polymineral mixtures (e.g. Johnson et al. 1983; Clark and Roush 1984; Smith et al. 1985; Adams and Smith 1986; Huguenin and Jones 1986).
Four types of mixtures are distinguished: areal, intimate, coatings and molecular mixtures. These are discussed in greater detail in Sect. 14.2.4, as they are closely related to the concepts of spectral unmixing used in hyperspectral sensing. Briefly it may be mentioned here that areal mixtures exhibit linear additive spectral behaviour, whereas intimate mixtures show a non-linear spectral mixing pattern. Further, surface coatings and encrustations have immense influence on the reflection spectra.

Fig. 3.11 Laboratory reflectance spectra of selected common rocks. a Igneous rocks, b Sedimentary rocks and c Metamorphic rocks (reflectance divisions are 10%) (Salisbury and Hunt 1974)

1. Igneous rocks. The representative laboratory spectra of igneous rocks in the visible, near-infrared and SWIR

regions are shown in Fig. 3.11a. The graphic granites display absorption bands at 1.4, 1.9 and 2.2 µm, corresponding to absorption bands of OH and H2O. Biotite granites and granites have less water, and therefore the OH absorption bands are weaker. Mafic rocks contain iron, pyroxenes, amphiboles and magnetite, and therefore absorption bands corresponding to ferrous and ferric ion appear at 0.7 and 1.0 µm. Ultramafic rocks contain still larger amounts of opaque mineral and Fe2+-bearing minerals, and therefore the ferrous bands become still more prominent, e.g. in pyroxenite. Dunite is almost all olivine and hence there is a single broad absorption band at 1.0 µm.
2. Sedimentary rocks. The laboratory spectral response of important sedimentary rock types in the VNIR-SWIR region is shown in Fig. 3.11b. All sedimentary rocks generally have water absorption bands at 1.4 and 1.9 µm. Clay-shales have an additional absorption feature at 2.1–2.3 µm. Ferrous and ferric ions produce absorption features in the VNIR. Carbonaceous shales are featureless. Pure siliceous sandstone is also featureless. However, sandstones usually have some iron-oxide stains, which produce corresponding spectral features (0.87 µm). Limestones and calcareous rocks are characterized by absorption bands of carbonates (at 1.9 and 2.35 µm, the latter being more intense); the ferrous ion bands at 1.0 µm are more common in dolomites, due to the substitution of Mg2+ by Fe2+.
3. Metamorphic rocks. Typical laboratory spectra of common metamorphic rock types are shown in Fig. 3.11c. The broad absorption due to ferrous ion is prominent in rocks such as tremolite schists. Water and hydroxyl bands (at 1.4 and 1.9 µm) are found in schists, marbles and quartzites. Carbonate bands (at 1.9 and 2.35 µm) mark the marbles.
4. Alteration zones. Alteration zones, which form important guides for mineral exploration, are usually characterized by the abundance of such minerals as kaolinite, montmorillonite, sericite, muscovite, biotite, chlorite, epidote, pyrophyllite, alunite, zeolite, quartz, albite, goethite, hematite, jarosite, metal hydroxyls, calcite and other carbonates, actinolite–tremolite, serpentine, talc etc. These alteration minerals can be broadly organized into five groups:
(a) quartz + feldspar (framework silicates), which exhibit no spectral feature in the VNIR–SWIR range and lead to general increased reflectance;
(b) clays (sheet silicates), marked by absorption bands at 2.1–2.3 µm;
(c) carbonates, which possess spectral features at 1.9 and 2.35 µm;
(d) hydroxyls, water and metal hydroxides, which produce absorption features at 1.4 and 1.9 µm;
(e) iron oxides, which have spectral features in the VNIR region.

The relative amounts of these assemblages may vary, which may result in corresponding variations in spectra.

3.6.2 Thermal-Infrared Region

For the thermal-IR region, the mineral spectra are basically additive in nature (see e.g. Fig. 3.12). Therefore, rock spectra in the TIR region are more readily interpretable in terms of relative mineral abundances.

Fig. 3.12 Emission spectra modelling of mineral mixtures. The emission spectra of three minerals, olivine, augite, and labradorite are shown. When these minerals are physically mixed to form an artificial rock composed of 1/3 of each of these components, the observed emission spectrum is shown as a physical mixture. Further, if the spectra of the individual minerals are combined (weighted by the above relative amounts) a virtually identical spectral curve is obtained. This shows that mineral spectra are additive in the thermal-IR (Christensen et al. 1986)

The thermal-infrared spectra of selected igneous rocks, arranged in decreasing silica content from top to bottom, are shown in Fig. 3.13. It can be seen that the centre of the minimum emission band gradually shifts from about 9 µm in granite to about 11 µm in olivine-peridotite. This is due to the corresponding shift in the Si–O absorption band

(Fig. 3.10) in mineral groups which form the dominant silicates in the above igneous rocks.
Additional diagnostic bands in the TIR region are associated with carbonates, hydroxyls, phosphates, sulphates, nitrites, nitrates and sulphides.

Fig. 3.13 Thermal infrared spectra of common rocks varying from high SiO2 (granite) to low SiO2 (peridotite). Note the systematic shift in the absorption band with varying SiO2-content (Compiled after Christensen 1986)

3.7 Laboratory Versus Field Spectra

Laboratory data are generally free from complexities and interference caused by factors such as weathering, soil cover, water, vegetation, organic matter and man-made features, which affect the in situ spectra (Siegal and Abrams 1976; Siegal and Goetz 1977). The extent to which in situ spectra are identical to laboratory spectra may be quite variable and therefore field spectra need to be interpreted with care. In general, freshly cut surfaces show higher reflectance than weathered surfaces. Scanty dry grass cover, thin soil and poor organic content in the soil tend to increase the similarity between field and laboratory spectra.
In the solar reflection region, the information comes from about the top barely 50 µm surface layer zone. Therefore, such spectra are affected by surface features. In such cases, the correspondence between laboratory and field (geological) reflectance data may be only limited and has to be carefully ascertained.
In the thermal-IR region, the information is related to about the top 10-cm-thick surface zone. Therefore, the spectral features of the bedrock are more readily observable on TIR remote sensing data, even if surficial coatings, encrustation, varnish etc. are present.

3.8 Spectral Libraries

Spectral libraries host a large collection of spectral curves/data of various types of objects such as minerals, rocks, plants, trees, organic substances etc. Examples of such libraries are those of the USGS (http://speclab.cr.usgs.gov) (Kokaly et al. 2017); Johns Hopkins University-JPL (http://speclib.jpl.nasa.gov) (Salisbury et al. 1991); and NASA-JPL (http://asterweb.jpl.nasa.gov) (Baldridge et al. 2009).

3.9 Spectra of Other Common Objects

From the point of view of object discrimination and data interpretation, it is necessary to have an idea of the spectra of other common objects.
The spectra of selected common natural objects in the VNIR–SWIR region are shown in Fig. 3.14. The deep clear water body exhibits low reflectance overall. The turbid shallow water has higher reflectance at the blue end, due to multiple scattering of radiation by suspended silt, and due to bottom reflectance. Fresh snow generally has a high reflectance. Melting snow/ice develops a water film on its surface and therefore has reduced reflectance in the near-IR. Soil reflectance is governed by a number of factors such as the parent rock, type and degree of weathering, moisture content and biomass. Common sandy soil exhibits even-tenor reflectance in the visible region and generally increasing reflectance towards the near-IR, which may be greatly modified by the presence of other ingredients such as iron oxide and water. Concrete and asphalt exhibit medium and low reflectance respectively, which are nearly uniform throughout the VNIR–SWIR region.
A large part of Earth's surface is covered with vegetation and therefore vegetation spectra need greater attention, especially from the geobotanical point of view. A large number of workers have contributed to the understanding of vegetation spectra (for a review, refer Thenkabail et al. 2012). In general, in the visible region, leaf pigments govern the leaf spectrum (Fig. 3.15). The normal chlorophyll-pigmented leaf has a minor but characteristic green reflection peak. In the anthocyanin-pigmented leaf, the green reflection is absent and there is greater reflection in the red wavelength, leading to a red colour. The spectrum of the white leaf (no pigments) has a nearly constant level in the visible region. In the near-IR region, in general, the spectral reflectance depends on the type of foliage and cell structure. Some leaves, such as fir and pine, reflect weakly in the near-IR, whereas grass reflects

Fig. 3.14 Generalized spectra of selected common objects

Fig. 3.15 a Spectral response of leaves with different types of pigmentation (Hoffer and Johannsen in Schanda 1986); b Spectral reflectance
curves for vegetation differing in foliage and cell structure (Goetz et al. 1983)

very strongly (Fig. 3.15b). This region can therefore be used for identifying vegetation types.
The characteristic spectrum of a healthy green leaf is shown in Fig. 3.16 in the VNIR–SWIR region. Leaf pigments absorb most of the light in the visible region. There is a minor peak at 0.55 µm leading to a green colour. The absorption feature at 0.48 µm is due to electronic transition in carotenoid pigments, which work as accessory pigments to the chlorophyll in the photosynthetic process, and the 0.68 µm absorption is due to electronic transition in the chlorophyll molecule centred around the magnesium component of the photoactive site. The region 0.8–1.3 µm shows a general high reflectance and is called the near-IR plateau. The reflectance in this region is governed by leaf tissue and cellular structure. The sharp rise near 0.8 µm, which borders the near-IR plateau, is called the red edge of the chlorophyll absorption band. The near-IR plateau also contains smaller and potentially diagnostic bands, which could be related to cellular structure and water content in the leaf. The ratio of the near-IR to visible reflectance is, in general, an indication of the photosynthetic

Fig. 3.16 A typical reflectance curve of green vegetation in the visible, near-infrared and short-wave-infrared region (after Goetz et al. 1983)

capacity of the canopy, and is used as a type of vegetation index (also see Sect. 19.17.1). The region 1.0–2.5 µm (SWIR) contains prominent water absorption bands at 1.4, 1.9 and 2.45 µm (Fig. 3.16). The reflectance in the SWIR is related to biochemical content in the canopy, such as proteins (nitrogen concentration), lignin and other leaf constituents.
The spectra of vegetation over mineral deposits have drawn considerable attention recently, mainly because of the growing awareness among researchers that vegetation spectra undergo fine modification due to geochemical stresses. These aspects are related to hyperspectral sensing and are discussed in Chap. 14 in more detail.

3.10 Future

Substantial data have now been compiled and are available as spectral libraries on various websites for representative rocks, minerals, and soils etc. Recent research has focused not only on target detection, but also on retrieving quantitative physical parameters such as mineral percentages in rock assemblages, quantifying amount of hydrocarbon in soil etc. Major directions of future development in this field are anticipated to be the following (e.g. Kahle et al. 1986): (1) understanding the effects of coatings and differing particle sizes etc.; (2) understanding the effects of chemical changes including elemental substitution, solid-solution, lattice distortion etc. on spectral characters; (3) determination of 'real-world' spectral properties, incorporating mixing models of mineral abundance from field data; (4) investigation of directional effects on spectra obtained in the field and under natural atmospheric conditions; (5) determination of the exact relationship between emission, reflection and transmission spectra; and (6) effects of spectral mixing.
Future research is, therefore, moving from imaging spectroscopy to imaging spectrometry, where focus is on deriving quantitative information from hyperspectral images.

References

Abrams MJ, Ashley RP, Rowan LC, Goetz AFH, Kahle AB (1977) Mapping of hydrothermal alteration in the Cuprite Mining District, Nevada, using aircraft scanner images for the spectral region 0.46 to 2.36 µm. Geology 5:713–718
Adams JB, Smith MO (1986) Spectral mixture modeling: a new analysis of rock and soil types at the Viking Lander I site. J Geophys Res 91(B8):8098–8112
Baldridge AM, Hook SJ, Grove CI, Rivera G (2009) The ASTER spectral library version 2.0. Remote Sens Environ 113:711–715
Christensen PR (1986) A study of filter selection for the thematic mapper thermal infrared enhancement. Commercial applications and scientific research requirements for thermal infrared observations of terrestrial surfaces, NASA-EOSAT Joint Report, pp 105–114
Chukanov NV, Chervonnyi AD (2016) Infrared Spectroscopy of Minerals and Related Compounds. Springer
Clark RN, Roush TL (1984) Reflectance spectroscopy: quantitative analysis techniques for remote sensing applications. J Geophys Res 89(B7):6329–6340
Farmer VC (ed) (1974) The Infrared Spectra of Minerals. Mineralogical Society Publications, London

Goetz AFH, Rock BN, Rowan LC (1983) Remote sensing for exploration: an overview. Econ Geol 79:573–590
Huguenin RL, Jones JL (1986) Intelligent information extraction from reflectance spectra: absorption band positions. J Geophys Res 91:9585–9598
Hunt GR (1977) Spectral signatures of particulate minerals in the visible and near-infrared. Geophysics 42:501–513
Hunt GR (1979) Near-infrared (1.3–2.4 µm) spectra of alteration minerals: potential for use in remote sensing. Geophysics 44:1974–1986
Hunt GR (1980) Electromagnetic radiation: the communication link in remote sensing. In: Siegal BS, Gillespie AR (eds) Remote Sensing in Geology. Wiley, New York, pp 5–45
Johnson PE, Smith MO, Taylor-George S, Adams JB (1983) A semiempirical method for analysis of the reflectance spectra of binary mineral mixtures. J Geophys Res 88(B4):3557–3561
Kahle AB, Christensen P, Crawford M, Cuddapah P, Malila W, Palluconi F, Podwysocki M, Salisbury J, Vincent R (1986) Geology panel report. Commercial applications and scientific research requirements for TIR observations of terrestrial surfaces, EOSAT-NASA Thermal IR Working Group, Aug 1986, pp 17–34
Kokaly RF et al (2017) USGS Spectral Library Version 7: U.S. Geological Survey Data Series 1035, p 61. https://doi.org/10.3133/ds1035
Lyon RJP (1962) Minerals in the Infrared: A Critical Bibliography. Stanford Research Institute Publications, Palo Alto, CA, p 76
Lyon RJP (1965) Analysis of rocks by spectral infrared emission (8–25 µm). Econ Geol 60:715–736
Salisbury JW, Hunt GR (1974) Remote sensing of rock type in the visible and near infrared. In: Proceedings of 9th International Symposium on Remote Sensing of Environment, Ann Arbor, MI, vol III, pp 1953–1958
Salisbury JW, Walter LS, Vergo N, D'Aria DM (1991) Infrared (2.1–2.5 µm) Spectra of Minerals. Johns Hopkins University Press, Baltimore, pp 1–267
Schanda E (1986) Physical Fundamentals of Remote Sensing. Springer, Berlin Heidelberg, p 187
Segal DB (1983) Use of Landsat multispectral scanner data for the definition of limonitic exposures in heavily vegetated areas. Econ Geol 78:711–722
Siegal BS, Abrams MJ (1976) Geologic mapping using Landsat data. Photogramm Eng Remote Sens 42:325–337
Siegal BS, Goetz AFH (1977) Effect of vegetation on rock and soil type discrimination. Photogramm Eng Remote Sens 43:191–196
Smith MO, Johnson PE, Adams JB (1985) Quantitative determination of mineral types and abundances from reflectance spectra using principal component analysis. J Geophys Res 90(Suppl):C797–C804
Thenkabail PS, Lyon JG, Huete A (eds) (2012) Hyperspectral Remote Sensing of Vegetation. CRC Press, Taylor & Francis
Whitney GG, Abrams MJ, Goetz AFH (1983) Mineral discrimination using a portable ratio-determining radiometer. Econ Geol 78:688–698
4 Photography

4.1 Introduction

Photography has become outdated and obsolete, and the typical film-camera photographic systems are to be found now only in museums. Nevertheless, it is important to mention that during the period spanning about the 1920s to the early 1990s, photographic systems were used for imaging purposes in all walks of life, world-wide, including for remote sensing surveys. Aerial photography was flown extensively for topographic mapping and cartographic applications, so much so that in the early days of satellite remote sensing, it was termed the conventional remote sensing. In the 1990s, the digital imaging technology wave started sweeping the world and it dramatically changed the way imaging was done. The pace and impact of digital imaging technology can be imagined by the fact that the Eastman Kodak Co., which manufactured films, filters and cameras for a wide range of purposes and held the dominant position in the photographic industry during most of the twentieth century till around 1995, was driven to bankruptcy by 2012.
Here we include a short discussion on photography for two reasons: (a) the archival remote sensing data of the period 1920s–1980s that one may happen to use could have been acquired by photographic systems; and (b) the terminology and principles of stereo-photogrammetry developed around aerial photography are still valid and extended to digital cameras. Therefore, a short discussion on photographic technique is considered necessary.
Photography was invented in 1839 by N. Niepce, W.H.F. Talbot and L.J.M. Daguerre, and since then photographic techniques have been used in applied sciences for various applications. Photographic pictures of the Earth were first acquired from balloons and kites, until the aeroplane was invented in 1903. World Wars I and II provided new opportunities and challenges to apply photographic techniques from aerial platforms. Soon afterwards, man started exploring space and observing the Earth from space platforms, using improved photographic techniques. Photographic systems and interpretation have been discussed by Smith and Anson (1968); Colwell (1960, 1976, 1983); Slater (1980, 1983); Curran (1985) and Teng et al. (1997) and in several other publications.

4.1.1 Relative Merits and Limitations

The main advantages of using photographic systems for remote sensing of the Earth have been the following.

1. Analogy to the eye system: The photographic camera works in a manner analogous to the human eye, both having a lens system. Photographic products are, therefore, easy to study and interpret.
2. Geometric fidelity: This is another great advantage associated with photographic systems, as intra-frame distortions do not occur.
3. Stereo capability: The photographic products have been easily used for stereoscopy.

Besides the above, these systems also have had advantages associated with all remote sensing techniques viz. synoptic overview, permanent recording, feasibility aspects, time saving capability, and multidisciplinary applications (see Sect. 1.4).
Limitations or disadvantages in using photographic systems arose from the following.

1. Limited spectral sensitivity: Photographic films were sensitive only in the 0.3−0.9 µm wavelength range and the rest of the longer wavelengths cannot be sensed by photographic techniques.
2. Retrieval of films: The exposed film containing the information has to be retrieved, i.e. a transportation system of some sort (aircraft, parachute, space shuttle


etc.) is necessary to retrieve the film. This aspect of the photographic system is in contrast to that of scanners, where data is telemetered on microwave links to ground receiving stations, making the latter a highly versatile technique. For this reason, the use of photographic techniques for Earth-observation purposes has generally remained confined to aerial platforms and some selected space missions.
3. Deterioration of quality: With age the deterioration in quality of both the film and printed photographs has been a common problem.
4. Deterioration in photographic duplication: The duplication of photographic products for distribution etc. is often accompanied by some loss of information content.

4.1.2 Working Principle

The working principle of an aerial/space camera system is simple. For an object being focused by a convex lens, the well-known relation giving distances is (Fig. 4.1):

1/u + 1/v = 1/f    (4.1)

where u = distance of object from lens centre, v = distance of image from lens centre, and f = focal length of the lens.
If the object is far away (i.e. u ≈ ∞), the image is formed at the focal plane (v = f). Cameras for remote sensing purposes have utilized this principle and are of fixed focus type. They carried a photosensitive film placed at the focal plane and the objects falling in the field-of-view of the lens were imaged on the film.
A photographic system typically consisted of three main components: camera, filters and film.

Fig. 4.1 Working principle of photographic system. a Image formed by a converging lens system. b Configuration for remote sensing photography

4.2 Cameras

Commonly, the cameras used in remote sensing are precision equipment. Their main element is a highly sophisticated lens assembly, which images the ground scene on the film. The cameras are placed on stable mounts, as even very slight shaking would seriously affect the quality of photographs and their resolution. A variety of cameras have been used for photographic remote sensing.
Basic components of a conventional aerial camera comprise the magazine, drive-mechanism, cone and lens (Fig. 4.2).

Fig. 4.2 Basic components of an aerial photographic camera (redrawn after Colwell 1976)
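A quick numerical sketch of these relations may help (all values below are illustrative assumptions, not from the book): for a distant object, Eq. (4.1) gives an image distance of essentially one focal length, and the nominal photo scale S = f/H (used in the next section) then fixes the ground area covered by one film frame.

```python
# Thin-lens relation 1/u + 1/v = 1/f and nominal photo scale S = f/H.
f = 0.152        # focal length in metres (a typical ~150 mm aerial lens)
H = 3000.0       # flying height above ground in metres (assumed)

v = 1.0 / (1.0 / f - 1.0 / H)   # image distance from Eq. (4.1)
scale = f / H                   # photo scale (image distance / ground distance)
frame = 0.23                    # usable image side of a 240 mm film frame, metres (assumed)

print(f"image distance v ~ {v * 1000:.3f} mm (close to f = {f * 1000:.1f} mm)")
print(f"photo scale ~ 1 : {1.0 / scale:,.0f}")
print(f"ground coverage per frame ~ {frame / scale / 1000:.2f} km on a side")
```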

The magazine holds the film (common width 240 mm) and includes a supply spool and a take-up spool. The drive mechanism is a series of mechanical devices for forward motion of the film after exposure and for image motion compensation. The film for exposure is held in a plane perpendicular to the optical axis of the lens system, and after exposure is successively rolled up. The cone is a light-tight component, which holds the lens at a distance 'f' from the film plane. The lens system is a high-quality chromatically corrected assembly to focus the ground objects on the film plane. It also comprises filters, diaphragm and shutter etc. Attached to the camera are a view-finder (to sight the camera), an exposure meter (to measure the light intensity) and an intervalometer (to set the speed of the motor drive and obtain the desired percentage of overlap for stereoscopic purposes). On-board GPS could be employed to locate the precise position of the photo-frame.
In a camera, the angle subtended at the lens centre from one image corner to the diagonally opposite image corner is called the angular field-of-view (FOV) or angle of the lens. In aerial photography, the lens angles ranged from about 50° to 125°, the lens of 90° being the most commonly used type. The focal length of the lens system directly controls the scale of photography as scale S = f/H (Fig. 4.1b). Larger focal length implies smaller angular field-of-view, less areal coverage, and larger scale of photography, other factors remaining the same. In aerial photography, focal lengths of about 150 mm have been commonly used. In space photography, the altitude is very high and, in general, a camera lens of a smaller FOV (about 15°−20°) and large focal length (about 300−450 mm) has been used.
Single-lens frame cameras are distinguished into two types: (a) frame reconnaissance cameras and (b) mapping cameras. Reconnaissance cameras are less expensive than

Fig. 4.3 A typical aerial photograph showing fiducial marks and flight data; the photograph shows a glacially fed river (Tanana river, Alaska)
with high sediment load (courtesy of Geophysical Institute, UAF, Alaska)

mapping cameras, both to buy and to operate, and come in a variety of configurations. A number of frame reconnaissance cameras have been flown on space missions, such as Gemini, Apollo etc., the most worthy of mention being the Earth Terrain Camera (ETC) flown on SKYLAB during 1973−74.
Mapping cameras have been used to obtain high-quality vertical photographs. These are also variously named metric cameras, photogrammetric cameras or cartographic cameras. A distinctive feature of such cameras is very high geometric accuracy, which enables photogrammetric measurements. Reseau marks (consisting of several fine cross-marks across the photographs) are exposed on the film to enable determination of any possible dimensional change in the photographic product. Fiducial marks and reseau marks are exposed simultaneously with the exposure of the ground scene, and extensive flight and camera data are also shown alongside each frame (Fig. 4.3). The main application of mapping cameras is to acquire vertical photogrammetric photography.
Besides the conventional aerial camera, cameras of different types, viz. panoramic, strip and multispectral, were developed and deployed from aerial platforms in the past for specific applications. The erstwhile unique advantages of such cameras for specific applications have now been overshadowed by the satellite multispectral image data, which provide good resolution imagery in a geometrically correct format.

4.3 Films

Two main types of films can be distinguished: (a) black-and-white and (b) colour.
The heart of a film comprises a layer of photo-sensitive silver halide crystals. Black-and-white film portrays only brightness variations across a scene. The sensitivity of the B-&-W film is limited to the visible region (sensitivity 0.3−0.7 µm) or includes part of the near-infrared (sensitivity 0.3−0.9 µm or sometimes up to 1.2 µm). An important characteristic is the film speed, which is a measure of sensitivity to light. Film resolution denotes the spatial resolution capability of a film and is given for high-contrast objects, in terms of line-pairs/mm (e.g. 100 line-pairs/mm or 400 line-pairs/mm).
Colour films have utilized the principle of additive and subtractive colours. It is well known that there are three primary additive colours (blue, green and red), and complementary to these there are three primary subtractive colours (yellow, magenta and cyan) (Appendix A). Mixing any one set of the primary colours (additive set or subtractive set) in different proportions can produce the entire gamut of colours. All colour films utilized this principle. Colour dyes corresponding to the primary subtractive colours were used in the manufacture of colour films.
Three types of colour films have been used: colour negative film, colour positive film and colour infrared film. A colour negative film yields a negative, i.e. objects photographed on the film appear in their complementary colours (viz. a blue object appears in yellow, a green object appears in magenta and a red object appears in cyan). A colour positive film yields directly a colour positive transparency. A colour infrared (CIR) film is photo-sensitive in the visible as well as in the near-infrared part of the EM spectrum. On this film, various objects appear in false colours, viz. green objects appear blue, red objects appear green and NIR-reflecting objects appear red. Therefore, this type of film is also called a false colour film. Further, in view of its specific ability to discriminate green colour from vegetation (which is strongly reflecting in the near-infrared), it is also called a camouflage detection film. The CIR analogy is used to generate false colour composites from multispectral images (see Sect. 9.2.5).
With the advent of high resolution laser black-and-white and colour printers, which are directly linked to digital cameras, the usage of photographic films gradually declined and has now come to a halt as the films are no more manufactured. Readers more interested in erstwhile photographic film technology, e.g. design, exposure, processing, sensitivity, etc. may see other texts (e.g. Mees and James 1966).

4.4 Filters

Filters form a very important component of all camera systems, being used in both film photographic systems as well as digital imaging systems. They permit transmission of selected wavelengths of light by absorbing or reflecting the unwanted radiation. They are applied in order to improve image quality and for spectrozonal photography. On the basis of the physical phenomenon involved, filters can be classified as (a) spectral filters, (b) neutral density filters and (c) polarization filters.

1. Spectral Filters. These filters lead to spectral effects, i.e. transmitting some selected wavelengths and blocking the rest. These are of two types: absorption filters and interference filters.

Absorption filters work on the principle of selective absorption and transmission. They are composed of coloured glass or dyed gelatine. A filter of this type typically absorbs all the wavelengths shorter than a certain threshold value and passes the longer wavelengths (Fig. 4.4a). Therefore, it is also called a 'long-wavelength pass' or 'short-wavelength blocking' filter (short-wavelength pass filters of the absorption type do not exist). A number of long-wavelength pass filters are available. For example, Wratten 400 cuts off all radiation shorter than 0.4 µm wavelength and effectively

Fig. 4.4 Typical transmittance curves for a Absorption filter and b Interference filter

acts as a haze cutter. Wratten 12 (yellow filter, also called minus-blue filter) cuts off all radiation shorter than 0.5 µm, and use of this filter with a black-and-white film or colour infrared film enhances image contrast. Wratten 25A cuts off all radiation shorter than red, and Wratten 89B eliminates all radiation of the visible range and passes only near-IR radiation.
Interference filters work on the principle of interference of light. An interference filter consists of a pack of alternating high and low refractive index layers, such that the required wavelengths pass through, and other shorter and longer wavelengths are blocked either by destructive interference or by reflection. This effectively acts as a band-pass filter, i.e. an optical window, on either side of which the view is blocked (Fig. 4.4b). The band-pass width is susceptible to the angle of incidence of incoming rays. Band-pass filters are used in multispectral cameras and CCD cameras, where information in only a specified wavelength range is required. Examples are: Wratten 47 transmits blue radiation; Wratten 58 transmits only green radiation; Wratten 18A passes radiation in the UV (0.3−0.4 µm) region; similarly, Wratten 39 passes radiation in the ultraviolet−blue region.

2. Neutral Density Filters. Neutral density (including graded neutral density) filters have no spectral character. Their most frequent use is to provide uniform illumination intensity over the entire photograph. An anti-vignetting filter is a typical example. In a lens system, the intensity of light transmitted is greater in the central part and less in the peripheral region, leading to non-uniform illumination. An anti-vignetting filter is darker in the central part and becomes gradually lighter in the peripheral region, and compensates for the above geometric fall-off in light intensity from the centre outwards. It is also called a graded neutral density filter, and lacks spectral character. Sometimes the anti-vignetting effect is incorporated in the other types of filters, in order to reduce the number of filters to be physically handled.
3. Polarization Filters. Polarization filters use the principle of polarization of light. Such a filter permits passage of only those rays that vibrate in a particular plane, and blocks the rest. However, the potential of this type of filter has yet to be adequately demonstrated in remote sensing.

4.5 Vertical and Oblique Photography

The geometric fidelity of photographs is controlled by the orientation of the optic axis of the lens system. If the optic axis is vertical, the scale of the photo remains constant, geometric fidelity is high, and accurate geometric measurements such as heights, slopes and distances (or X, Y, Z co-ordinates of objects in a co-ordinate system) from photographs are possible (see Chap. 8). Vertical stereoscopic photographs from photogrammetric and frame reconnaissance cameras have been common remote sensing data products from aerial platforms. The utility of this technique as a practical tool still remains beyond question.
The photographs are said to be oblique or tilted when the optic axis is inclined. Oblique photography may be done for some specific purpose, for example: (1) to view a region from a distance for logistic or intelligence purposes, (2) to study vertical faces, e.g. escarpments, details of which would not show up in vertical photography, (3) to read vertical snow-stacks in snow surveys, and (4) to cover large areas in only a limited number of flights. Such photographs, however, may have limited photogrammetric applications as the photo-scale varies and there are inherent geometric distortions.

4.6 Ground Resolution Distance

Broadly speaking, resolution is the ability to distinguish between closely spaced objects. In relation to photographic data products, it is used to denote the closest discernible spacing of bright and dark lines of equal width (Fig. 4.5). It is given as line-pairs per mm (e.g. 100 or 300 line-pairs per mm etc.). The photographic resolution depends on several factors.

1. Lens resolution. The characteristics of a camera lens are important, as it forms the heart of the photographic system. The resolving power of the lens depends on several factors, namely wavelength used, f-number of the lens, relative aperture and angular separation from the optic axis (for details, see Slater 1980). The modulation transfer function (MTF) (Appendix B) of lenses used in remote sensing missions is quite high, and constraints put by other factors are usually more stringent in limiting the overall system resolution.
2. Film resolution. This is the inherent resolution of the film, as discussed earlier, given in line-pairs per mm.
3. Object contrast ratio. This is the ratio of the intensity of radiation emanating from two adjacent objects that are being imaged. For objects with a high contrast ratio, the resolution is greater.

In addition to the above, there are several other factors such as navigational stability, image motion, and atmospheric conditions which also affect the photographic image quality and resolution.
Thus, photographic resolution is dependent on several factors. It is usually found to be in the range of 20–100 line-pairs per mm. This gives the ground resolution distance (GRD) as:

\[ \mathrm{GRD} = \frac{H}{f} \cdot \frac{1}{R_s} \qquad (4.2) \]

where Rs = resolution of the system, f = focal length and H = flying height. For example, a photograph at a scale of 1:25,000 (i.e. H/f = 25,000), taken with a system with an overall resolution of 50 line-pairs per mm, would have a GRD of 25,000 × (1/50) mm = 500 mm = 0.5 m.
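As a quick numerical illustration of Eq. (4.2), the short Python sketch below reproduces the 1:25,000 example quoted above. The function name and the particular flying height and focal length (chosen so that H/f = 25,000) are assumptions made only for this illustration.

```python
def ground_resolution_distance(flying_height_m, focal_length_m, system_resolution_lp_mm):
    """Ground resolution distance after Eq. (4.2): GRD = (H/f) * (1/Rs).

    H/f is the photo scale factor; 1/Rs is the smallest resolvable line-pair
    width on the film in mm, which the scale factor projects onto the ground.
    """
    scale_factor = flying_height_m / focal_length_m     # H / f
    grd_mm = scale_factor / system_resolution_lp_mm     # ground distance in mm
    return grd_mm / 1000.0                              # metres

# Example from the text: a 1:25,000 photograph (here H = 3810 m, f = 152.4 mm,
# both assumed values) and a 50 line-pairs/mm system give a GRD of 0.5 m.
print(ground_resolution_distance(3810.0, 0.1524, 50.0))   # -> 0.5
```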
4.7 Photographic Missions

4.7.1 Aerial Photographic Missions

Planning an aerial photographic mission used to involve a number of considerations related to technological feasibility and mission-specific needs, such as (1) type of camera, (2) vertical or oblique photography, (3) type of film−filter combination, (4) scale of photography, including focal lengths available and flying heights permissible, (5) area to be covered, (6) degree of overlap required and (7) time of image acquisition, including local meteorological factors, and diurnal and seasonal considerations. A number of repetitive aerial photographic coverages have been made world-wide by various agencies.
The growing trend in aerial missions is to use digital cameras in dedicated low-altitude missions, for example on UAVs, for resources surveys and repetitive environmental surveys (digital cameras are discussed in Sect. 5.4.5).

4.7.2 Space-Borne Photographic Missions

As far as space photography is concerned, as mentioned


earlier, the difficulties in film retrieval from orbital platforms
have made photography a secondary tool for civilian appli-
cations. Some of the more important space-borne photo-
graphic missions have been the following.
Gemini. These photographs taken from hand-held cam-
eras were collected in the mid 1960s from the Gemini
missions III through XII. Photographs were taken on 70 mm
black and white, colour, and colour infrared films.
Earth Terrain Camera (ETC). This was the most note-
worthy space camera experiment in early 1970s. The ETC
was flown on SKYLAB and was a high-performance frame
reconnaissance camera, which yielded photographs at a scale
of nearly 1:950,000, providing a ground distance resolution
of nearly 30 m.
Fig. 4.5 Resolving power test chart (courtesy of Teledyne Gurley Co.)
Metric Camera and Large-Format Camera (LFC). The space shuttle provided opportunities for two dedicated

Table 4.1 Characteristics of selected spaceborne photographic systems


SN Mission Metric camera (Europe) Large format camera (USA) KVR-1000 (Russia) TK-350 (Russia)
1. Satellite altitude (km) 250 250 220 220
2. Flight vehicle Space shuttle Space shuttle Kosmos Kosmos
3. Scene coverage (km) Variable Variable 34 × 57 175 × 175
4. Spatial resolution (m) 20–30 20 2–3 5–10
5. Film type Panchromatic and CIR film Panchromatic, normal colour and CIR film Panchromatic film Panchromatic film
6. Stereo-overlap 60–80% 60–80% Minimal 60–80%

cartographic experiments, the Metric Camera (Konecny 1984) and the Large-Format Camera (Doyle 1985). These systems provided limited Earth coverage and rekindled the interest of scientists and engineers in stereo-space photography for mapping applications from space (Table 4.1).
Photographs from Handheld Cameras. The Space Transportation System (space shuttle) flights have provided opportunities for Earth photography from space. These photographs were taken from smaller-format cameras (35, 70 and 140 mm), using colour and black-and-white films, providing ground resolution on the order of 30−80 m.
Corona Photographs. The Corona program was run by the US for defence and surveillance purposes from 1959 through 1972. The Corona satellites used 70 mm film with a 24-in. focal length camera. The film was dropped from the satellite and retrieved by various means. The photographs from Corona cameras were declassified in 1995 (McDonald 1995). These photographs, focusing mainly on the Sino-Soviet block, provide spatial resolution of about 2−8 m.
Big Bird. Big Bird, also known as KH-9 or Hexagon, was a series of photographic reconnaissance missions launched by the USA between 1971 and 1986. The mission used a mapping camera with a 9-inch film format. The photographic film aboard the KH-9 was sent back to Earth in recoverable film capsules for processing and interpretation. The ground resolution achieved was up to ~0.6 m. The KH-9 was declassified in 2011.
Photographs from Gemini, ETC, LFC, Corona and Big Bird are archived in the US Geological Survey (USGS) EROS data center and are available for search and download from the USGS Earth Explorer portal (earthexplorer.usgs.gov).
Photographs from the Russian Cameras. The Russian unmanned spacecraft KOSMOS, orbiting the Earth at about 220 km, carried two sophisticated cameras: KVR-1000 and TK-350 (Table 4.1). These cameras operated in conjunction with each other and generated rectified imagery even without ground control. The TK-350 was a 10 m resolution topographic camera with 350 mm focal length and provided overlapping photographs for stereoscopic analysis. The KVR-1000 camera employed a 1000 mm lens, and provided 2 m ground-resolution photographs covering a large area (160 km × 40 km) in a single frame with minimal overlap. Both these systems used panchromatic film. The film has been scanned to produce digital images. Their image-products, called SPIN-2, are now available for general distribution in digital and analog forms (web-sites: www.TerraServer.com; www.spin-2.com).

4.7.3 Product Media

The products of aerial/space photography were conventionally stored and distributed on photographic media (films of various types and/or paper prints). Now, these photographic products can be made available as scanned image data. In this way, photographic products from the old archives can be used, processed and integrated with other remote sensing/ancillary GIS data. Digitization also helps minimize degradation in quality with age and duplication for distribution.
The geometric aspects of photographs are discussed in Chap. 7, and the radiometric quality and interpretation in Chaps. 9 and 11 respectively.

References

Colwell RN (ed) (1960) Manual of photographic interpretation. Am Soc Photogramm, Falls Church, VA
Colwell RN (1976) The visible portion of the spectrum. In: Lintz J Jr, Simonett DS (eds) Remote sensing of environment. Addison-Wesley, Reading, pp 134–154
Colwell RN (ed) (1983) Manual of remote sensing, vols I, II, 2nd edn. Am Soc Photogramm, Falls Church, VA
Curran PJ (1985) Principles of remote sensing. Longman, London
Doyle FJ (1985) The large format camera on shuttle mission 41-G. Photogramm Eng Remote Sens 51:200
Konecny G (1984) The photogrammetric camera experiment on Spacelab I. Bildmessung und Luftbildwesen 52:195–200
McDonald RA (1995) Opening the cold war sky to the public: declassifying satellite reconnaissance imagery. Photogramm Eng Remote Sens 61:385–390

Mees CEK, James TH (1966) The theory of the photographic processes, 3rd edn. Macmillan, New York
Slater PN (1980) Remote sensing: optics and optical systems. Addison-Wesley, Reading, 575 p
Slater PN (1983) Photographic systems for remote sensing. In: Colwell RN (ed) Manual of remote sensing, 2nd edn. Am Soc Photogramm, Falls Church, VA, pp 231–291
Smith JT Jr, Anson A (eds) (1968) Manual of colour aerial photography. Am Soc Photogramm, Falls Church, VA
Teng WL et al. (1997) Fundamentals of photographic interpretation. In: Philipson WR (ed) Manual of photographic interpretation, 2nd edn. Am Soc Photogramm Remote Sens, Bethesda, Md, pp 49–113
Multispectral Imaging Techniques
5

5.1 Introduction

In this chapter we discuss the working principle of non-photographic digital sensors, particularly multispectral imaging techniques, operating in the optical range of the electromagnetic spectrum. Important spaceborne optical sensors are described in the next chapter (Chap. 6). The optical range has been defined as that range in which the optical phenomena of reflection and refraction can be used to focus the radiation. It extends from X-rays (0.02 µm wavelength) through visible and infrared, reaching up to microwaves (<1 mm wavelength) (Fig. 2.3). However, as the useful region for remote sensing of the Earth lies between 0.35 and 14 µm, we largely focus our attention on this specific region. For valuable reviews on non-photographic sensors, see Silva (1978), Slater (1980, 1985), Joseph (1996), and Ryerson et al. (1997).

5.1.1 Working Principle of a Digital Sensor

A non-photographic (digital) sensor consists of two main parts: an optical part and a detector part (Fig. 5.1).

5.1.1.1 Optical Part
This includes radiation-collecting optics and radiation-sorting optics. The optics for radiation collection primarily comprises mirrors, lenses and a telescopic set-up to collect the radiation from the ground and focus it onto radiation-sorting optics (Fig. 5.1). A calibration source is often provided on-board, and a chopper enables radiation from the calibration source to be viewed by the detector at regular intervals. Radiation-sorting optics use optical devices such as gratings, prisms and interferometers to separate radiation of different wavelength ranges. In some cases, such as pushbroom scanners, spectral separation may be carried

Fig. 5.1 Main components of a non-photographic remote sensor




out by appropriate band-pass optical filters covering the lens or detectors. After spectral sorting, the radiation of selected wavelength ranges is directed to detectors.

5.1.1.2 Detector Part
The detector part primarily includes devices which transform optical energy into electrical energy. The heart of the device is a quantum or photo-detector unit. The incident photons interact with the electronic energy levels of the detector material, and electrons or charge carriers are released (photoelectric effect). The response in photo-detectors is very quick, and the intensity of the electrical signal output is proportional to the intensity of photons incident in a specified energy range. Major limitations of photo-detectors are due to the fact that: (i) their response varies quickly with wavelength (Fig. 5.2a), and (ii) photoconductors operating at longer wavelengths have to be operated at very low temperature (195 K, 77 K, 44 K or sometimes 5 K; Fig. 5.2b) to avoid noise, which is done by placing the detector within a double-walled vessel called a Dewar, filled with liquid helium or nitrogen for cryogenic cooling.
In photo-conductive detectors, the absorption of the incident photon is accompanied by the raising of the energy levels of the electrons from valence levels (where they are bound) to conduction level (where they become mobile). Thus, the bulk conductivity of the detector is increased in proportion to the photon flux, and this can be measured. As there is no emission of electrons (solid-state technology), the energy requirements in photo-conductors are lower than for photo-emission devices, and therefore lower-energy radiation (such as photons corresponding to SWIR and TIR) can also be detected by such devices. The development of appropriate photo-conductors has been a field of intensive and priority research. In many cases, dopants (impurities) are used to make alloys so that photons of a certain wavelength range can be detected. Some of the photo-conductors used in VNIR–SWIR–TIR ranges are silicon, lead sulphide, indium antimonide, mercury-cadmium-telluride and gallium arsenide (Fig. 5.2b). Photodiodes use the same material as the photo-conductors, and differ in operation only in the way that noise is reduced. An important evolution of the photodiode array is the charge-coupled device (CCD) which forms the heart of modern remote sensing devices.
The electrical signal from quantum detectors is amplified and quantized, i.e. given one of several possible integer numbers depending upon the intensity. It is recorded on tape (digital recorder), relayed down to the ground receiving station, and may be used for real-time display and/or subsequent processing and applications (Fig. 5.1).
The main advantages of non-photographic devices as compared to the erstwhile photographic ones are listed in Table 5.1. As such, the non-photographic sensors are ideally suited for use on free-flying space platforms.

Fig. 5.2 a Spectral detectivity curves of some selected photo-detectors (2π steradians FOV, 295 K background temperature) (Hughes Aircraft Company, Santa Barbara Research Centre). b Detector materials commonly used in different wavelength ranges in the optical region (operating temperature shown in parentheses above)

Table 5.1 Relative merits of imaging systems over the erstwhile photographic sensors
Advantages
1. Generate digital information which can be telemetered to ground from space
2. Problem of film retrieval associated with photographic sensors is absent
3. Remote sensing in extended wavelength range of 0.3–1 mm possible, in contrast to photographic range of 0.3–0.9 µm
4. Higher spectral resolution
5. Higher radiometric resolution
6. High spatial resolution from modern sensors
7. The information can be stored and is reproducible without loss of content
8. Amenability of data to digital processing for enhancement and classification
9. Flexibility in data handling
10. Repeatability of results

5.1.2 Imaging Versus Non-imaging Optical Sensors and Terminology

At this stage it is important to make a distinction between imaging (scanning) and non-imaging digital sensors. A non-imaging sensor measures the total intensity of EM radiation falling in its field-of-view (FOV) as one piece of information/data and provides a profile-like record of intensity with distance in the direction of line of flight (or with time). It does not sample the scene in the across-track direction within the field-of-view. Therefore, a non-imaging sensor produces only one digital number of radiation intensity for its entire FOV (see Fig. 5.5). On the other hand, there are imaging (scanning) sensors which sample the scene within the FOV in the across-track direction, i.e. they create thousands of smaller instantaneous-fields-of-view (IFOVs) within the FOV (see Fig. 5.7). This provides data on spatial variation of radiation intensity within the FOV, leading to an image.
By definition, the term radiometer means an instrument used for measuring radiation intensity. Generally, it is used to imply a non-imaging sensor (e.g. SMIRR). However, there is no universal acceptance of terminology, and the term radiometer has also been used for imaging sensors (e.g. in MESSR, ASTER). We prefer here to use the term radiometer only for a non-imaging sensor working in profiling mode. It is a passive sensor. Photometer is a term used for a similar device operating at shorter wavelengths (λ < 1 µm). Multi-band radiometer and spectroradiometer are terms applied to radiometers which measure radiation intensity in more than one wavelength band. Another term sometimes seen in the literature is scanning radiometer, e.g. in VISSR (on the SMS-GOES geostationary satellite); we group such sensors under imaging sensors.
In the following pages, first, the various factors affecting sensor performance are considered, after which working principles of non-imaging and imaging instruments are discussed.

5.2 Factors Affecting Sensor Performance

Physical processes governing the energy emitted and reflected from the ground have been discussed in Chap. 2. Consider the case of a remote sensor viewing a uniform object on the ground (Fig. 5.3). The radiant power (Pλ) illuminating the detector is (after Lowe 1976)

\[ P_\lambda = \frac{I_\lambda \, T_{a(\lambda)} \, T_{o(\lambda)} \, A_s \, A_o}{H^2}, \qquad (5.1) \]

where

Iλ   spectral radiance of the source (object) in W cm−2 sr−1 µm−1,

Fig. 5.3 Schematic of geometric relations involved in radiant power reaching the sensor (after Lowe 1976)

Ta(λ)   spectral transmittance of the atmosphere, to take into account the atmospheric losses,
To(λ)   spectral transmittance of the optical system, to take into account losses within the optical system,
As   area of the source under view,
Ao   effective area of the collector optic system collecting the radiation,
H   distance of the sensor from the object.

The noise equivalent spectral power (NEPλ) of any detector is given by

\[ NEP_\lambda = \frac{\sqrt{A_d \, \Delta f}}{D^*_\lambda}, \qquad (5.2) \]

where

Ad   area of the detector,
Δf   electronic bandwidth (being, physically, inversely proportional to observation time),
D*λ   spectral detectivity of the material used (a measure of sensitivity of the material).

Therefore, signal-to-noise ratio (S/N), over a certain wavelength range, can be written as:

\[ \frac{S}{N} = \int_{\lambda_1}^{\lambda_2} \frac{P_\lambda}{NEP_\lambda}\, d\lambda , \qquad (5.3) \]

\[ \quad\;\; = \int_{\lambda_1}^{\lambda_2} \frac{I_\lambda \, T_{a(\lambda)} \, T_{o(\lambda)} \, A_s \, A_o}{H^2} \cdot \frac{D^*_\lambda}{\sqrt{A_d \, \Delta f}} \, d\lambda . \qquad (5.4) \]

Further, if β is the angle of instantaneous field of view (IFOV), then

\[ \frac{A_s}{H^2} = \frac{A_d}{f^2} = \beta^2 , \qquad (5.5) \]

where Ad is detector area and f is focal length of the system. This yields:

\[ \frac{S}{N} = \int_{\lambda_1}^{\lambda_2} \frac{I_\lambda \, T_{a(\lambda)} \, T_{o(\lambda)} \, \beta \, A_o}{f} \cdot \frac{D^*_\lambda}{\sqrt{\Delta f}} \, d\lambda . \qquad (5.6) \]

This is a fundamental equation of great importance in understanding and evaluating performance of a sensor. A good sensor is one which provides a higher S/N ratio. Several factors affect the S/N ratio:

• Iλ (brightness of the ground object) is directly related, therefore conditions of higher scene brightness (reflection or emission) are better suited;
• Ta(λ) (transmittance of the wavelength through the atmosphere) is directly related, therefore the wavelength used should have minimum attenuation through the atmosphere;
• To(λ) (transmittance of the wavelength through the optical system) is directly related; an efficient optic system permits through-put with minimum losses;
• Ao (area of the collecting optics) is directly related, however collecting optics with very large areas are not permitted owing to size and weight constraints;
• β (IFOV) is directly related, however increase in β leads to deterioration of spatial ground resolution and therefore a trade-off between S/N ratio and β is necessary;
• D*λ (spectral detectivity) is directly related; several materials are available for use as detectors in different wavelength ranges with varying spectral detectivity values (Fig. 5.2a) and a detector with higher spectral detectivity should be used;
• f (focal length) is inversely related, i.e. short focal length systems give higher signal-to-noise ratio, but on the other hand they lead to a decreased scale and reduced spatial resolution, and hence again a trade-off is required;
• Δf (electronic bandwidth) is inversely related; as electronic bandwidth is inversely related to dwell time, S/N ratio gets directly related to dwell time;
• S/N ratio is also directly proportional to the width of the wavelength band (λ1 − λ2), but on the other hand increasing the spectral range results in reduced spectral resolution, and therefore again a trade-off has to be made.

In a spectral region like the visible, the spectral radiance (Iλ) is very high (Fig. 2.4a), and hence the sensor S/N ratio is also generally high. In such a situation, both spectral and spatial resolutions can be made much finer, in comparison to a situation where scene brightness is relatively lower, as in the case of the thermal-IR region. Furthermore, several trade-offs are possible between spectral and spatial resolutions. A judicious decision must be based on a very careful analysis of the available technology and understanding of the requirements.
The Modulation Transfer Function (MTF) is a useful parameter for evaluating the performance of digital sensors. It has the same connotation as in photographic sensors. The MTF evaluates how faithfully and finely the spatial variation of radiance in the scene is emulated in the image (see Appendix B). The MTF is used to evaluate the performance of the sensor or its components, such as optics, detector system etc., on individual basis.
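The trade-offs expressed by Eq. (5.6) and the list of factors above can be sketched numerically. In the Python fragment below the integrand is assumed to be roughly constant over the band, so the integral reduces to a simple product with the bandwidth; all input values are arbitrary, dimensionless placeholders, so only the ratio between two sensor configurations is meaningful.

```python
import math

def relative_snr(I, Ta, To, beta, Ao, D_star, f, delta_f, band_width):
    """Relative S/N following Eq. (5.6), assuming the integrand is roughly
    constant over the band so the integral reduces to a product with the
    bandwidth (lambda2 - lambda1)."""
    return (I * Ta * To * beta * Ao * D_star * band_width) / (f * math.sqrt(delta_f))

# Trade-off example: halving the IFOV (finer ground resolution) halves the S/N,
# unless compensated for, e.g. by a larger collecting aperture Ao or a longer
# dwell time (i.e. a smaller electronic bandwidth delta_f).
base  = relative_snr(I=1.0, Ta=0.8, To=0.9, beta=1.0, Ao=1.0, D_star=1.0, f=1.0, delta_f=1.0, band_width=1.0)
finer = relative_snr(I=1.0, Ta=0.8, To=0.9, beta=0.5, Ao=1.0, D_star=1.0, f=1.0, delta_f=1.0, band_width=1.0)
print(finer / base)   # -> 0.5, the S/N penalty of the finer IFOV
```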
5.2.1 Sensor Resolution

It will be appropriate here to become conversant with the various terms related to sensor resolution, viz. spatial resolution, spectral resolution, radiometric resolution and temporal resolution.

1. Spatial resolution of a sensor implies the area on the ground that fills the IFOV of the sensor. It is also called the ground element or ground resolution cell, and corresponds to one pixel on the image. Further, a scanner can be considered as a device that samples the ground for brightness at equal intervals of distance; therefore, the term ground sampled distance (GSD) is also used to indicate sensor resolution.
2. Spectral resolution means the span of wavelength range (λ1 − λ2) over which a spectral channel operates. The band-pass response of a channel is most commonly of a gaussian type (Fig. 5.4). The width of the band pass (spectral bandwidth) is defined as the width in wavelength at the 50% response level of the function. For example, in Fig. 5.4, the 50% response level of the detector is from 0.55 to 0.65 µm, providing a spectral bandwidth of 0.1 µm. It is also called the 'full-width at half maximum' (FWHM) of the band (a small numerical sketch of this definition follows this list).
3. Radiometric resolution of a sensor means the degree of sensitivity of a sensor to radiation intensity variation. Basically, it corresponds to noise equivalent power difference (NEΔPλ), or noise equivalent temperature difference (NEΔT) in the case of thermal sensors. It is the brightness difference between two adjacent ground resolution elements that produces a signal-to-noise ratio of unity at the sensor output. A good scanner has a smaller noise-equivalent power difference. However, the term is at times loosely used to imply the total number of quantization levels used by the sensor (e.g. 8-bit, 10-bit, 12-bit etc.).
4. Temporal resolution refers to the repetitiveness of observation over an area, and is equal to the time interval between successive observations. It depends on orbital parameters and swath-width of the sensor.

Fig. 5.4 Spectral bandwidth (resolution) of a detector. The spectral range of this detector is from 0.55 to 0.65 µm, giving a bandwidth of 0.1 µm
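The FWHM definition in item 2 can be illustrated numerically. The Python sketch below assumes a hypothetical Gaussian band-pass response centred at 0.60 µm, with its width chosen to mimic the Fig. 5.4 example; it does not represent the response of any particular sensor.

```python
import numpy as np

# Hypothetical Gaussian band-pass response centred at 0.60 µm, with sigma chosen
# so that the 50%-response points fall near 0.55 and 0.65 µm (FWHM ~ 0.1 µm).
wavelengths = np.linspace(0.50, 0.70, 2001)        # µm
centre, sigma = 0.60, 0.1 / 2.3548                 # FWHM = 2.3548 * sigma for a Gaussian
response = np.exp(-0.5 * ((wavelengths - centre) / sigma) ** 2)

above_half = wavelengths[response >= 0.5]          # wavelengths with >= 50% response
fwhm = above_half.max() - above_half.min()         # full-width at half maximum
print(round(float(fwhm), 3))                       # -> ~0.1 µm
```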
5.3 Non-imaging Radiometers

As mentioned earlier, a non-imaging sensor measures EM radiation intensity falling in its field-of-view (FOV) and provides a profile-like record of intensity with distance in the direction of line of flight (or time).

5.3.1 Working Principle

The working principle of a radiometer is quite simple (Fig. 5.5). An optic system (usually a refractive lens system) collects and directs radiation onto an optical filter (such as a grating, prism, interferometer or filter wheel etc.), where the radiation is sorted out wavelength-wise. The selected radiation is directed to the detector system, which quantizes the radiation intensity.
In many radiometers, especially in those operating at longer wavelengths (λ > 1 µm), reference brightness sources are kept within the housing for calibration, since the various objects including parts of the radiometer also form a source of radiation at these wavelengths. The detector is made to receive radiation alternately from the ground and the reference sources by a moving chopper.
The ground resolution element of the radiometer is the area corresponding to the IFOV on the ground (Fig. 5.5). It is usually a circle, the diameter D of which is given by D = H · β (where H is the flying height, β is the IFOV). The ground resolution cell is given as π(D/2)². As the remote sensing platform keeps moving ahead, measurements are made at successive locations, which result in profile data.
The design of a radiometer, i.e. the spectral bandwidth, angular field-of-view etc., depends on numerous factors such as the purpose of investigation, scene brightness, detector technology etc., all of which affect the S/N ratio.
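The ground element relations quoted above (D = H · β and cell area π(D/2)²) are easily evaluated, as in the short Python sketch below; the flying height and IFOV values are illustrative assumptions, not taken from the text.

```python
import math

def ground_element(flying_height_m, ifov_rad):
    """Diameter and area of the circular ground resolution element of a
    profiling radiometer: D = H * beta, area = pi * (D/2)**2."""
    d = flying_height_m * ifov_rad
    return d, math.pi * (d / 2.0) ** 2

# Illustrative values only: a 2.5 mrad IFOV flown at 3000 m altitude.
diameter, area = ground_element(3000.0, 2.5e-3)
print(round(diameter, 1), round(area, 1))   # -> 7.5 (m) and ~44.2 (m^2)
```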

5.4 Imaging Sensors (Scanning Systems)

5.4.1 What Is an Image?

Imaging sensors or scanning devices build up a two-dimensional data array of spatial variation in brightness over an area. The entire scene to be imaged is considered as comprising a large number of smaller, equal-sized unit areas, which form ground resolution cells (Fig. 5.6). Starting from one corner, line-by-line and cell-by-cell, the radiation from each unit area is collected and integrated by the sensor to yield a brightness value which is ascribed to that unit area. In this manner, spatial information is converted into a

Fig. 5.5 Design and working principle of a radiometer in profiling mode

Fig. 5.6 The scanning process. a The entire scene is considered to be comprised of smaller, equal-sized unit areas, and radiation from each unit area (Wi,j) is collected by the sensor and integrated to yield a brightness value (DNi,j). b The scanner data has a structure of rows (scan lines) and columns (pixels)

time-dependent signal. This process is called scanning, and the data are called scanner data.
The photo-radiation emanating from a unit area is collected, filtered, and quantized to yield an electrical signal. The signal, depending upon its intensity, is classified into one of the various levels, called quantization levels (since measuring the actual photo-current at each unit area would be too time-consuming and laborious an exercise and, for most of our investigations, it is sufficient to have relative brightness values). In this manner, a scanner provides a stream of

Fig. 5.7 Representation of digital scanner data in optical analogue as an image. a Example of a scanner digital data array; b a typical gray wedge to convert digital numbers into gray tones; c the corresponding black-and-white (gray tone) image; the digital scanner data in a corresponds to the small rectangular box located within the channel in the middle-left of the image; (SPOT image of the Oil Pipeline, Alaska) (c Courtesy: Geophysical Institute, UAF, Alaska)

Table 5.2 Salient differences between a film-photograph and an image


Film-photograph Image
1. Generated by photographic film system 1. Generated from line scanners and digital cameras.
2. Originally produced in analogue form 2. Originally produced in digital form.
3. Does not have any pixels 3. Basic element is a pixel
4. Lacks row and column structure 4. Possesses rows and columns
5. Scan lines absent 5. Scan lines may be observed
6. Zero indicates no data 6. Zero is a value, does not indicate absence of data
7. No numbering at any point 7. Every point has a certain digital number
8. Photography is restricted to photographic range of EM spectrum 8. Image can be generated for any part of the EM spectrum, or any field
9. Once a photograph is acquired, colour is specific and cannot be (inter-)changed 9. Colour has no specific role and can be changed during processing

Digital Numbers (DN's). These data are stored on tape recorders on-board the remote sensing platform and/or relayed down to the ground receiving station using a microwave communication link. On the ground, the scanner data can be rearranged as a two-dimensional array and be presented as an optical analogue by choosing a suitable gray scale (Fig. 5.7). The various brightness values measured over ground unit areas are depicted as gray tones at the corresponding positions on the optical analogue. This is called an image. Therefore, an image is an optical analogue of a two-dimensional data array. The unit area on the ground is variously termed ground resolution/spatial resolution/spot size/ground IFOV. The same unit area on the image is called the picture element or pixel. Obviously, an image consists of a large number of pixels.
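The conversion of measured brightness into quantization levels and DNs described above can be sketched as follows. The linear mapping, the 8-bit depth and the radiance values used here are all illustrative assumptions, not the calibration of any particular scanner.

```python
import numpy as np

def quantize_to_dn(radiance, radiance_min, radiance_max, n_bits=8):
    """Map measured radiance linearly onto integer digital numbers (DNs), i.e.
    onto 2**n_bits quantization levels; values outside the range are clipped."""
    levels = 2 ** n_bits
    scaled = (radiance - radiance_min) / (radiance_max - radiance_min)
    return np.clip(np.round(scaled * (levels - 1)), 0, levels - 1).astype(np.uint8)

# Hypothetical radiances for a few ground resolution cells along one scan line.
line = np.array([12.0, 35.5, 78.2, 140.9, 150.0])
print(quantize_to_dn(line, radiance_min=0.0, radiance_max=150.0))
# -> [ 20  60 133 240 255]
```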
A critical review of 'what's in a pixel', in terms of both a geometrical point of view and a more physical point of view, is given by Cracknell (1998). Table 5.2 gives the salient differences between a digital image and a film-photograph. It may be mentioned here that a photograph can be scanned to produce an image, but not the other way round.
Broadly, three types of imaging sensors can be identified: (1) optical-mechanical line scanner, (2) CCD linear array and (3) digital camera (CCD area array).

5.4.2 Optical-Mechanical Line Scanner (Whiskbroom Scanner)

This has been a widely used scanning technology. The major advantages of the optical-mechanical (OM) line scanners have been that they can easily operate in multispectral mode, at a wide range of wavelengths from 0.3 to 14 µm, and generate digital image data. Pioneering work in this field was carried out at the University of Michigan (Lowe 1969).
The collector optics of an aerial/spaceborne OM scanner includes a plane mirror, which revolves or oscillates along an axis parallel to the nadir line (flight line) (Fig. 5.8). This

Fig. 5.8 Working principle of an opto-mechanical line scanner



permits radiation from different parts of the ground, lying in the across-track direction, to be reflected onto the filter and detector assembly. The radiation is separated according to wavelength by grating, prism etc. and directed onto photo-detectors, where it is converted into an electrical signal. The signal is amplified, integrated and chopped at regular time intervals and quantized. The chopping provides subdivision in the scan line (pixels!). The forward motion of the sensorcraft allows advancing of the scene of view. The signal is recorded, transmitted, stored, or displayed as per the available output mode. In this fashion, across-track scanning is facilitated by the moving mirror, and along-track by the forward motion of the sensorcraft, and the entire scene is converted into a two-dimensional data array. Owing to its similarity in operation, it is also known as a whiskbroom scanner.
The instantaneous field-of-view (IFOV = β) (Fig. 5.8) is the angle of field-of-view at a particular instant. It governs the dimension of the ground resolution cell or the sensor's spatial resolution. The total angle of view through which the radiation is collected as the mirror moves is called the total angular field-of-view or simply angular field-of-view (FOV). The total angular FOV together with the altitude of the sensorcraft controls the swath width (length in across-track direction sensed in one pass or flight). Ground swath can be given as:

\[ \text{Ground swath} = 2H \cdot \tan\!\left(\frac{\mathrm{FOV}}{2}\right). \qquad (5.7) \]
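Equation (5.7) can be applied directly, as in the short Python sketch below; the aircraft and satellite altitudes and FOV values are assumed for illustration only.

```python
import math

def ground_swath(flying_height_m, total_fov_deg):
    """Ground swath from Eq. (5.7): swath = 2 * H * tan(FOV / 2)."""
    return 2.0 * flying_height_m * math.tan(math.radians(total_fov_deg) / 2.0)

# Illustrative numbers only: an airborne OM scanner with a 90 deg total FOV at
# 3000 m altitude sweeps a swath of about 6 km, whereas a spaceborne scanner
# with a 15 deg FOV at 700 km altitude covers a swath of roughly 184 km.
print(round(ground_swath(3000.0, 90.0)))        # -> 6000 (m)
print(round(ground_swath(700000.0, 15.0)))      # -> ~184000 (m)
```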
Although the working principle is similar in both aerial and spaceborne OM scanners, the actual resolution specifications differ in the two cases. For example, in airborne OM scanners, the mirror is rotated by a constant-speed motor and the total angular field-of-view is about 90°–120°. During the rest of the rotation of 270°–240°, the mirror either directs radiation from standard lamps onto the detector for calibration (which is a must at wavelengths >1 µm) or performs no useful function. In spaceborne OM scanners, the total FOV is much smaller (10°–15°) and oscillating mirrors are deployed in the radiation-collecting optics.
The spectral resolution (i.e. bandwidth), radiometric resolution (NEΔPλ) and spatial resolution (IFOV) are all interdependent and critical parameters, and suitable trade-offs have to be judiciously arrived at [see Eq. (5.6)]. The Bendix 24-channel scanner and Daedalus 11-channel scanner, both operating in the VNIR to TIR region, were flown from aerial platforms for many experiments in the 1970s and 1980s. The TIMS aerial scanner was an improved 6-channel OM scanner operating in the TIR region. On the space platforms, the important OM scanners have been MSS and TM/ETM+ on Landsats, and MTI and ASTER (thermal-IR part).

5.4.3 CCD Linear Array Scanner (Pushbroom Scanner)

Charge-Coupled Device (CCD) linear arrays were first conceived and tested for operation in the early 1970s and the tremendous interest in this new technology gave it a big boost (Melen and Buss 1977; Thompson 1979; Beynon and Lamb 1980). We will consider here some of the fundamental concepts relevant to imaging systems. The CCDs utilize semi-conductors as the basic detector material, the most important so far having been silicon. An array is a large number of detector elements, each individual element acting as a photodiode. The entire fabrication is carried out using micro-electronics, so that a chip about 1 cm long carries several thousand detector elements (Fig. 5.9a).
The basic technique of pushbroom line scanning is illustrated in Fig. 5.9b. A linear CCD array, comprising

Fig. 5.9 a Photograph of a linear array with photodiode elements; in the background is an enlarged view of a silicon chip (courtesy of Reticon Co., USA). b Principle of line-imaging by CCD linear array; the array placed at the focal plane of an optic system records radiation, the pattern of signal being analogous to that of scene illumination along the scan line

several thousand detectors, is placed at the focal plane of a register located at the back. The purpose of the shift register
camera lens system. The lens system focuses radiation is to store the charge temporarily and to permit read-out at
emanating from the ground on to the CCD line array. At convenience. To understand how the charge is read out in
each detector element, the radiation is integrated for a short the shift register, we have to go a little deeper into the matter.
period and recorded. As the sensorcraft keeps moving, the In the shift register, there are three electrodes corresponding
data are collected line by line. to each element in the photo-gate (Fig. 5.10), which are
In a simplified manner, the CCD line imager may be interconnected in a cycle of three. The charge is laterally
considered to have three main segments: (a) a photo-gate, (b) a moved from one element to another, by cyclic variation of
transfer gate and (c) a shift register (Fig. 5.10). The photo- voltage, and read out sequentially at one output point (hence
gate comprises a series of radiation-sensing photo-conductor the name ‘charge-coupled device’).
elements or cells. They are separated from each other by a The mechanism of lateral charge transfer is illustrated in
channel-stop diffusion to avoid cross-talk, and all cells are Fig. 5.11. To understand how the charge is moved from one
placed at a high positive potential of equal value. As the potential well to another, consider a set of four closely
photo-gate is exposed to radiation, the incident photons raise spaced electrodes, A, B, C and D such that B is at a high
the energy level of the electrons in the semi-conductor to potential (VH). The other three electrodes are at rest or low
conduction band level, and the electrons become concentrated potential (VL). At the end of the integration period, the
at the positively charged electrodes. The number of electrons charge collected at the photo-gate gets transferred to the
collected at a given detector electrode within a given time B-electrode (with the high positive potential, VH). Now, if
period, called integration period, or dwell time, is proportional the potential of C-electrode is raised to VH level, then the
to the local photon flux. Thus, the pattern of charge collected charge will flow from B to C (Fig. 5.11), provided that the
at the detector array becomes analogous to the pattern of scene inter-electrode spacing is small enough. At the same time, if
illumination along the scan line. the bias (voltage) of B-electrode is reduced to VL, all the
At the end of the integration period, the charge collected charge will be transferred to C. In this manner, the charge
is quickly transferred through the transfer gate to the shift can be transferred from one element to another by cyclic

Fig. 5.10 Main segments of CCD linear imaging device—photo-gate, transfer gate and shift register

Fig. 5.11 Mechanism of lateral charge transfer by cyclic variation of bias (voltage) in the shift register of a CCD line imaging device (for details,
see text)

variation of bias. The read-out is carried out at one output in a CCD linear array scanner, all the cells along a scan line
point (Fig. 5.10). The procedure is repeated until the charge are viewed concurrently; the sensor integrates photon flux at
transfer and read-out are completed in the entire array in the each cell for the entire period during which the IFOV
shift register. advances by one resolution element. This leads to increment
Thus, as the sensorcraft keeps moving, the charge is read in dwell time by a factor of about 1000. The increased S/N
out in the shift register by transfer technique, and at the same ratio can be traded-off with reduction in size of collecting
time the photo-gate keeps collecting video information for optics (size and weight) and superior spectral and/or spatial
the next line. The large number of detectors in the CCD line resolution [c.f. Eq. (5.6)].
array provides scanning in the cross-track direction, and Since mid 1980s, a number of CCD linear imaging sys-
forward motion of the sensorcraft facilitates along-track tems have been successfully flown for remote sensing from
movement. This generates an image. Owing to its similarity, space (e.g. SPOT-HRV-series, IRS-LISS-series; PRISM and
this is also referred to as the pushbroom system. AVINIR sensors etc.). The CCD linear pushbroom scanners
The ‘opening’ at each of the detector elements is the also form the heart of very high resolution space sensors,
IFOV. It is given in terms of centre-to-centre spacing of the such as Ikonos, QuickBird, EROS, Cartosat, GeoEye,
detector cells, and is of the order of a few micrometers. It is a WorldView (see Sect. 6.10).
measure of the along-track aperture. The dimension of the
ground resolution cell is governed by IFOV, flying height,
velocity vector and integration period. The parameters are 5.4.4 FPA and TDI Architecture of Spaceborne
usually designed to generate square-shaped ground resolu- CCD Linear Arrays
tion cells. Distance on the ground corresponding to one
IFOV or cell size is also called the GSD (ground sampled There appear two important limitations in acquiring image
distance). data with high spatial resolution and large swath width from
The CCD-based pushbroom line scanner can easily spaceborne pushbroom linear arrays:
operate in multispectral mode. The spectral separation is
provided by discrete optical filters, which may be mounted • There exists a technological constraint that the number of
on the camera lens or alternatively may be used as coatings detector cells manufactured on individual CCD linear
on the CCD chips. Typically, one detector array is required array chip is limited; this number is often not enough to
for each spectral channel. obtain the required number of pixels in a swath.
The relative advantages and disadvantages of the push- • The second point is with regard to detector size on the
broom scanners over the opto-mechanical scanners are given chip; smaller the detector cell size, better the spatial
in Table 5.3. The main advantages of CCD line scanner arise resolution (i.e. smaller value of GSD); however, a
from two aspects: (a) the solid-state nature and (b) the fact smaller detector cell also implies reduced dwell time, or
that the dwell time of the sensor is appreciably increased. In in other words reduced intensity of the radiation collected
an OM line scanner, the various cells along a scan line are by the detector, which results in a lower S/N ratio of the
viewed successively, one after the other. On the other hand, sensor.

Table 5.3 Relative advantages and disadvantages of pushbroom scanners over OM-type scannersa
Advantages Disadvantages/limitations
1. Light weight 1. Owing to the presence of large number of cells, striping occurs in
2. Low voltage operation CCD line scanners which necessitate radiometric calibration and
3. Long life correction
4. Smaller size and compactness 2. Low temperature operation is necessary to minimize noise
5. No moving parts 3. Number of detector elements fabricated on a single chip is limited,
6. Higher camera stability and geometric accuracy owing to absence with the present technology; use of several staggered chips in
of moving parts precision alignment is necessary
7. Higher geometric accuracy along each scan line
8. Dwell time significantly higher, providing significant increase in
S/N ratio
9. Higher radiometric resolution (NEΔPk)
10. Higher spectral resolution
11. Higher spatial resolution
12. Lower cost
13. Off-nadir viewing configuration facilitates in-orbit stereo viewing
and increased temporal resolution
a
Summarized after Colvocoresses (1979), Thompson (1979), Tracy and Noll (1979), Slater (1980)

The FPA and TDI architecture take care of the above two mounted in parallel to one another in the cross-track direc-
constraints (Petrie and Stoney 2009). tion within the same focal plane (Fig. 5.13a). Physically, this
becomes similar to the arrays of multispectral imaging, but
1. Focal Plane Arrays (FPA) here they all operate in the same spectral band. As the
pushbroom scanner flies past the ground, these multiple
The problem of the limited number of cells on each chip is linear arrays collect repeated exposure of the same line on
obviated by using multiple linear arrays that can provide the the ground, one line after another. Photo-generated electrons
desirable swath width. However, the difficulty is that indi- are transferred from one TDI line to the next, successively,
vidual linear arrays cannot be physically butted together in a till the last TDI line, where the accumulated charge is read
single line, as this will lead to dead elements at junctions. out (Fig. 5.13b). The technique results in higher overall
Therefore, the CCD linear arrays have to be staggered in the dwell time and improved S/N ratio. For example, Ikonos and
optical plane, parallel to each other (hence the name focal QuickBird used 13 TDI lines, Pleiades (panchromatic
plane array), with some overlap (Fig. 5.12a). This results in channel) uses up-to 20 TDI lines, and EROS-B uses up-to 96
sub-strips with marginal overlapping images. These multiple selectable TDI integration lines. The TDI technique can
image data have to be digitally stitched together and merged facilitate imaging in relatively poor illumination condition.
to create a full swath image.
For example, in the case of IRS-1C/1D-LISS- sensor,
individual chip carried 4096 cells, and combining three chips 5.4.5 Digital Cameras (Area Arrays)
in one line yielded 12,000 pixels in a swath. Similarly, in the
case of Pleiades, individual chip carries more than Digital cameras are now commonly used from aerial plat-
6000 cells, and by combining five chips in a line, the sensor forms to provide high-definition images at reduced costs.
generates a swath of 30,000 pixels in panchromatic band. The camera employs a solid-state area array and produces a
Resurs DK-I used 36 chips in a line, each with 1024 cells to digital two-dimensional frame image. Other terms used
generate a swath of 36,000 pixels. synonymously for the same type of technology include:
For multispectral imaging, multiple sets of linear arrays digital electronic imaging, digital snapshot system, staring
are required, typically one set of linear arrays for each arrays, CCD/CMOS area array, CCD/CMOS matrix sensor,
spectral band, placed appropriately in the focal plane and FPA system.
(Fig. 5.12b). Spectral separation is accomplished by using The basic difference between a film-based conventional
optical filters on top of the linear arrays. photographic camera and a digital imaging camera is that, in
the latter, the film is replaced by solid-state electronics such
2. Time Delay Integration (TDI) as CCD (charge coupled device) or CMOS (complementary
metal-oxide semi-conductor) detectors. These detectors, with
The TDI technique improves upon the single-line CCD array millions of photon-sensitive sites, form a two-dimensional
scan system. It takes care of the reduced radiation intensity array, called an area array or matrix, which is placed at the
due to poor illumination and/or smaller detector cell size. focal plane of the camera system (Fig. 5.14). The incident
Basically, this uses multiple linear arrays of detectors light is focused by the optics onto the area array, which

Fig. 5.12 FPA; a schematic of staggered multiple linear arrays such generate a full swath image; b multispectral imaging using linear
that there is a marginal overlap with immediate neighbours; each array arrays; for each spectral band, a separate staggered linear array is used
produces a sub-image; the multiple sub-images are digitally merged to

Fig. 5.13 Principle of time delay integration (TDI) line imaging; linear arrays A1-A5; b The photon-generated electrons are transferred
a Multiple CCD linear arrays (A1-A5) are placed at the focal plane of the from one TDI line to the next, successively, till the last TDI line, where
imaging system oriented perpendicular to the flight path; as the the charge is read out. This effectively increases the dwell time of the
sensorcraft moves ahead, video data from the ground swath unit (G1- sensor
G2-G3-G4) is sequentially and repeatedly collected by the multiple

generates an analogue signal. The signal is then quantized to


yield an image. Therefore, in a digital camera, data storage
takes place directly in digital form (using magnetic, optical
or solid-state media).
Both CCD and CMOS are composed of metal-oxide
semiconductors and measure photon intensity by converting
it into electrical signal. In case of CCD technique, as
described above in Sect. 5.4.3, when the exposure is com-
plete, the accumulated charge at each cell is sequentially
transferred to a common output, which converts the charge
into voltage, buffers it and reads it off-chip (shift register).
On the other hand, in a CMOS, the charge to voltage con-
version takes place at each cell/pixel, i.e. there is set of
dedicated transistors and circuitry to provide read-out
(on-chip) (Fig. 5.15). Therefore, whereas in a CDD, the
entire pixel (cell) is devoted to photon capture, in a CMOS,
each cell carries devices for charge to voltage conversion,
amplification, noise control etc., which increase the degree
of complexity and reduce the area available for photon
capture, as some of the photons may hit the adjacent tran-
sistors instead of the photodiode. The CMOS provides lower
Fig. 5.14 Working principle of a digital imaging camera image quality and lower resolution but advantages of lower

Fig. 5.15 Working principles of CCD and CMOS area arrays (for details, see text) (after Litwiller 2001)

cost, smaller size and lower power consumption. Therefore, digital output, fast processing, higher sensitivity, better
CMOS area arrays have been typically used in lower-end image radiometry, higher geometric fidelity and lower cost
digital imaging applications such as video-conferencing, (King 1995).
biometrics, and reconnaissance surveys etc. The CCDs offer The digital imaging cameras have now fully replaced the
superior image quality at the expense of system size and this photographic-film-camera systems for aerial remote sens-
has been the most suitable technology for precision digital ing. The unmanned aerial vehicles (UAV) or mini-UAV
imaging applications (Litwiller 2001). carrying digital camera are fast emerging as tools for
Table 5.4 gives the relative advantages and disadvantages acquiring dedicated remote sensing images with very high
of digital cameras vis-à-vis film photographic systems. The spatial resolution (a few centimetres) for specific project
main advantages of digital cameras arise from their direct purposes.

Table 5.4 Relative advantages and disadvantages of digital camera vis-à-vis film- photographic systems
Advantages
1. Direct computer-compatible output, which enables direct digital processing of data. The various advantages of digital techniques automatically
follow (see Table 5.1)
2. Another prime advantage is in-flight viewing and analysis of imagery to ensure image quality (exposure, contrast etc.) and target coverage
3. Ability of rapid change detection (following less turn-around time) is of high tactical importance in military reconnaissance and also of
significance in various civilian applications, such as for disaster missions (monitoring floods, storm damage, large fires, volcanic eruptions etc.)
4. The dynamic range of digital cameras is about 16-bit which is much superior to the 6- or 7- bit typical of the films
5. Silicon chips (CCDs) possess much higher sensitivity in the near-IR as compared to standard films
6. The high sensitivity of the CCD sensor, along with its wide dynamic range, permits markedly higher contrast to be recorded in the image
7. Exposure time in digital cameras may be shorter than in photography, thereby reducing the problem of image motion
8. Another important advantage of digital camera sensor arrays is the rigid two-dimensional geometry, which can provide an even higher stability
and reproducibility than a film-based camera. The polyester substrate used in films is not perfectly rigid in its geometry; effects of stretching or
differences in humidity may produce non-linear geometric errors in the range of 5–10 µm, which put a limit on photogrammetric applications
of film products
9. The photogrammetric solutions developed for aerial photography can be applied to solid-state two-dimensional array images
10. Silicon sensor response is linear in all spectral bands, i.e. the digital gray level is directly proportional to the target radiance. This permits
fairly simplified radiometric calibration of digital cameras
11. Permits fairly simplified radiometric
12. Additional advantages are: no film development, no scanning, no chemical waste
Disadvantages
1. A limitation arises due to the relative smaller size of digital camera area arrays compared to large format (23  23 cm) photography. This
implies reduced areal coverage (at the same photo scales) and greater expense for data collection
2. Technical sophistication required for two-dimensional radiometric corrections/calibrations is another constraint

References

Beynon JDE, Lamb DR (eds) (1980) Charge-coupled devices and their applications. McGraw Hill, London
Colvocoresses AP (1979) Multispectral linear array as an alternative to Landsat-D. Photogramm Eng Remote Sens 45:67–69
Cracknell AP (1998) Review article: synergy in remote sensing—what's in a pixel? Int J Remote Sens 19:2025–2074
Joseph G (1996) Imaging sensors for remote sensing. Remote Sens Rev 13:257–342
King DJ (1995) Airborne multispectral digital camera and video sensors: a critical review of system designs and applications. Canadian J Remote Sens 21(3):245–273 (Special Issue on Aerial Optical Remote Sensing). www.carleton.ca/~dking/papers.html
Litwiller D (2001) CCD versus CMOS—facts and fiction. Photonics Spectra, Laurin Publishing Co. Inc., Pittsfield
Lowe DS (1969) Line scan devices and why we use them. In: Proceedings of the 5th international symposium on remote sensing of environment, Ann Arbor, MI, pp 77–101
Lowe DS (1976) Non-photographic optical sensors. In: Lintz J Jr, Simonett DS (eds) Remote sensing of environment. Addison-Wesley, Reading, US, pp 155–193
Melen R, Buss D (eds) (1977) Charge-coupled devices: technology and applications. IEEE Press, New York, p 415
Petrie G, Stoney WE (2009) The current status and future direction of spaceborne remote sensing platforms and imaging systems. In: Jackson MW (ed) Earth observing platforms and sensors, Manual of remote sensing, 3rd edn, vol 1.1. Amer Soc Photogramm Remote Sens (ASPRS), Bethesda, Md, pp 387–447
Ryerson RA, Morain SA, Budge AM (eds) (1997) Earth observing platforms and sensors (CD-ROM). Manual of remote sensing, 3rd edn. Am Soc Photogramm Remote Sens
Silva LF (1978) Radiation and instrumentation in remote sensing. In: Swain PH, Davis SM (eds) Remote sensing: the quantitative approach. McGraw Hill, New York, pp 21–135
Slater PN (1980) Remote sensing—optics and optical systems. Addison-Wesley, Reading, 575 p
Slater PN (1985) Survey of multispectral imaging systems for earth observations. Remote Sens Environ 17:85–102
Thompson LL (1979) Remote sensing using solid state array technology. Photogramm Eng Remote Sens 45:47–55
Tracy RA, Noll RE (1979) User-oriented data processing considerations in linear array applications. Photogramm Eng Remote Sens 45:57–61
6 Important Spaceborne Missions and Multispectral Sensors

6.1 Introduction

The techniques of multispectral remote sensing data collection have been discussed in the last chapter. In this chapter we discuss important spaceborne missions used for multispectral optical remote sensing of the Earth. Not all the satellite missions and sensors launched so far on a global basis are included in this discussion, as a review of this type would be just too long and out of place here. For such an exhaustive review, the reader may refer to the Committee on Earth Observation Satellites (www.ceos.org). We discuss here only those selected missions which have been used primarily for obtaining land surface information, particularly of geological interest, and have provided broadly world-wide data. The emphasis in this chapter is on long-duration operational satellites. The optical satellite sensors used for atmospheric-meteorological-oceanic observations are excluded from this presentation to keep the focus on geology. Radar sensors are discussed in Chaps. 15–17.

Satellites typically have a life span on the order of 10 years. The free-flying satellites such as Landsat, once injected into orbit, continue to provide data on a regular basis world-wide. A number of factors have led to the global ascendancy of satellite remote sensing data (cf. Curran 1985): (1) world-wide data coverage; (2) unrestricted availability of data without any political or security constraints; (3) ready availability of multispectral and multi-temporal data, without the necessity of planning dedicated flights; (4) low cost (many agencies now provide archival data even free of cost); (5) fairly good geometric accuracy; and (6) the digital nature of the data that makes it amenable to digital processing, enhancement and interpretation. Needless to say, these immense and unique advantages have made satellite remote sensing the back-bone of modern geoscientific exploration and environmental monitoring.

6.2 Orbital Motion and Earth Orbits

6.2.1 Kepler's Laws

The German mathematician and astronomer Johannes Kepler, early in the 17th century, set forth the laws of planetary motion around the Sun. Kepler's laws are:

I. Each planet revolves around the Sun in an elliptical orbit with the Sun at one focus of the ellipse.
II. The speed of the planet in its orbit varies in such a way that the radius connecting the planet and the Sun sweeps equal areas in equal time.
III. The squares of the orbital periods of any two planets are in the same ratio as the cubes of their semi-major elliptical axes. Mathematically this can be stated as:

T1² / T2² = D1³ / D2³     (6.1)

where T1 and T2 are the orbital periods of two satellites, and D1 and D2 are the semi-major axes of the two orbits.

The above implies that: (a) the planet in its orbit moves faster near the Sun so that the same area is swept out in a given time interval, as at larger distances the planet moves relatively slowly, and (b) a planet in a higher orbit moves slowly (and therefore higher orbits are more stable) and takes proportionately longer orbital time, such that T²/D³ remains constant. Kepler's laws of orbital motion are valid for all orbital motions, including for satellites around the Earth.
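Equation (6.1) can be used directly to estimate orbital periods by scaling from a known orbit. A minimal sketch in Python, using the Moon as the reference orbit; the Moon's mean distance is taken here as an assumed round value of 384,000 km (Table 6.1 gives the range 357,000–399,000 km), and the Earth radius of 6370 km is the value quoted in Sect. 6.2.2:

    # Kepler's third law in ratio form (Eq. 6.1): T1^2 / T2^2 = D1^3 / D2^3
    EARTH_RADIUS_KM = 6370.0           # value used in Sect. 6.2.2
    MOON_PERIOD_MIN = 27.3 * 24 * 60   # ~27.3 days (Table 6.1), reference orbit
    MOON_DIST_KM = 384_000.0           # assumed mean Earth-Moon distance

    def period_from_altitude(alt_km):
        """Estimate the period of a near-circular Earth orbit by scaling from the Moon."""
        semi_major = EARTH_RADIUS_KM + alt_km
        return MOON_PERIOD_MIN * (semi_major / MOON_DIST_KM) ** 1.5

    # A Landsat-4/-5 type orbit at 705 km altitude:
    print(round(period_from_altitude(705.0), 1))   # ~98 min, close to the 99 min in Table 6.2

The same relation, run the other way, shows why higher orbits (GPS, geostationary) have periods of many hours rather than minutes.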


6.2.2 Earth Orbits

Satellite orbits around the Earth are considered on the basis of three parameters—altitude, inclination and eccentricity:

(a) Altitude is the height of the satellite in orbit above the Earth's surface. The centre-to-centre distance between the Earth and the satellite can be computed by adding the radius of the Earth (6370 km) to the height.
(b) Inclination of the orbital plane is the angle between the orbital plane and the Earth's equatorial plane measured in a clockwise direction.
(c) Eccentricity is the ratio of the distance between the two foci of the ellipse to the length of the major axis. A high value of eccentricity implies a largely elongated orbit and a near-zero value implies a near-circular orbit.

Based on the above parameters, the following types of Earth orbits can be identified which are used for Earth observation applications.

6.2.2.1 Low Earth Orbit (LEO)
Broadly, satellite orbits up to a height of ~2000 km above the Earth's surface are treated as low Earth orbits. These are extensively used for Earth's surface observations. In LEO, the following two types can be distinguished:

(a) Virtual top of the atmosphere low orbit

With the rise in altitude above the Earth's surface, the atmosphere gradually thins out. It is quite impossible to put a clear boundary between the atmosphere and space. Besides, the vertical profile of the atmosphere may also vary with factors such as solar activity, lunar phases, the Earth's latitude etc. Generally, for practical purposes, altitudes of about 125–150 km may be considered as the beginning of space upward, as the air beyond that is very thin. This is corroborated by the observation that at altitudes of about 125 km, space vehicles such as space shuttles are said to make "atmospheric interface" when they re-enter the atmosphere prior to landing. Therefore, orbits at altitudes of >125–150 km may be treated as virtually at the top of the atmosphere.

Historically, numerous experimental satellites have been placed in these virtual top of the atmosphere low altitude orbits, e.g. Sputnik, Explorer, Salyut, Skylab etc. These orbits can be highly elliptical to near-circular, and may be inclined at different angles with respect to the equatorial plane (Fig. 6.1a) (Table 6.1). The space shuttle (1981–2011), flying at an altitude around 350 km with an orbital inclination of approx. 30° in a near-circular orbit with an orbital period of nearly 90 min, can also be considered to have used this type of orbit. Presently, the International Space Station (ISS), originally placed in orbit in 1998 and still orbiting, also uses this orbit. Further, it may be mentioned here that even lower altitude free-flying satellites, say 160 km over the Earth surface, have been deployed for defence reconnaissance purposes and used this type of orbit.

(b) Sun synchronous orbit (SSO)

Most of the satellites for Earth observation and mapping such as Landsat, SPOT, IRS, Terra etc. have been placed in this type of orbit. This is also called a heliosynchronous orbit. These orbits are near-circular and near-polar, meaning thereby that the orbital plane is steeply inclined (97°–99°) with respect to the equator and the satellites orbit the Earth from almost pole to pole (Fig. 6.1b) (Table 6.1). The orbital altitude is generally between 700–950 km above the Earth's surface, and the orbital period is around 97–101 min. The term Sun-synchronous means that the orbit is synchronized with the Sun such that a particular satellite would always cross the equator at a fixed local solar time. For example, for the Terra satellite, it is always 10:30 am whenever and wherever the satellite crosses the equator. In images from a Sun-synchronous orbit, the adjacent areas are seen with almost no change in solar illumination angle or azimuth, permitting good comparison and mosaicking.

Considering the case of solar reflection sensing, typically satellites are placed in a Sun synchronous orbit with equatorial crossing between 9.30 and 10.30 am in descending node, with an orbital period of around 97–101 min. This implies that during one half of the orbit, viz. the north pole to south pole pass (for convenience called the 'descending node', though in space there is no sense of ascend/descend), the satellite views the Earth in the day time, whereas in the ascending node it passes over the night side of the Earth.

As the satellite orbits around the Earth, the Earth below spins on its axis from west to east. Therefore, in the next north to south orbital pass, the satellite comes over a fresh, relatively westerly located part of the Earth. As the satellite takes say 99 min for one orbit, it makes about 15–16 passes over the Earth each day (Fig. 6.2). The orbit altitude, inclination and sensor swath are carefully selected and designed so as to cover the entire globe in a certain number of days called the 'repeat cycle', after which the next cycle of Earth surface observation commences.

The Sun synchronous orbits of the Earth are very well defined, i.e. a unique combination of altitude and inclination makes a Sun synchronous orbit. It is quite narrow in path. Deviation in height or inclination due to factors like atmospheric turbulence, solar flares, and the pull of gravity from the Sun or Moon may lead to drifting of the satellite out of the

Fig. 6.1 Types of Earth orbits. a Top of the atmosphere low orbit; b Sun-synchronous orbit; c GPS orbit; d Molniya orbit; e Geostationary orbit

Sun synchronous orbit. Therefore, regular adjustments are required to maintain a satellite in the Sun synchronous orbit.
The data from Sun synchronous orbits are highly valuable for application scientists as this permits good comparison and mosaicking of adjacent scenes in solar reflection images. Further, images from the same season over several years' interval can be mutually compared without worrying too much about the shadows and illumination.
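The figures quoted above, about 15–16 passes per day and a fixed repeat cycle, follow directly from the orbital period and the Earth's rotation. A minimal sketch, assuming a nominal 99 min period and an equatorial circumference of about 40,075 km (an assumed standard value, not from the text):

    EQUATOR_KM = 40_075.0    # assumed equatorial circumference
    MIN_PER_DAY = 24 * 60

    def daily_coverage(period_min):
        orbits_per_day = MIN_PER_DAY / period_min
        # The Earth rotates under the orbit, so successive ground tracks are
        # displaced westward; at the equator the spacing is one "orbit's worth"
        # of rotation.
        track_spacing_km = EQUATOR_KM / orbits_per_day
        return orbits_per_day, track_spacing_km

    orbits, spacing = daily_coverage(99.0)
    print(round(orbits, 1), round(spacing))   # ~14.5 orbits/day, ~2755 km between successive tracks

Since a single optical swath (e.g. 185 km for Landsat) is far narrower than this per-orbit spacing, the gaps are filled on subsequent days, and the orbit is designed so that the pattern closes exactly after the repeat cycle.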

Table 6.1 Salient features of various Earth orbits


Orbit | Altitude above the Earth's surface (km) | Eccentricity | Orbital speed (km/s) (approx.) | Orbital period (approx.) | Inclination (approx.) | Application
Low Earth orbit (LEO): virtual top of the atmosphere orbit | 150–350 | Highly elliptical to near-circular | 6.5–8.2 (highly variable in case of elliptical orbits) | 85–90 min | 30–45 deg | Experimental
Low Earth orbit (LEO): Sun synchronous orbit | 700–900 | Near-circular | 7.5 | 98–103 min | 98–99 deg | Operational—Earth observations
Medium Earth orbit (MEO): Medium Earth orbit (GPS) | 20,200 | Near-circular | 3.9 | 12 h | 55 deg | GPS
Medium Earth orbit (MEO): Molniya orbit | 500–39,900 | Highly elliptical | 1.5–10.0 (highly variable) | 12 h | 63.4 deg | High latitude communications and observations
High Earth orbit (HEO): Geostationary orbit | 35,788 | Near-circular | 3.1 | 24 h | Equatorial | Communications and regional meteorology
Orbit of the Moon (for comparison) | 357,000–399,000 | Near-circular | 1.0 | 27.3 days | – | –

Fig. 6.2 Typical ground trace of Sun-synchronous orbit for 1 day (only southbound passes shown); the example corresponds to Landsat-1, but all
Sun-synchronous satellites such as IRS, SPOT, JERS, Radarsat etc. have a similar ground trace pattern, differing only in number of passes per day
and repeat cycle

Most of the remote sensing data presented in this treatment are from various satellite sensors in Sun-synchronous orbits.
There is another Sun synchronous orbit, called the dusk/dawn orbit, which is used for radar (synthetic aperture radar, SAR) sensors. The SAR sensors need to artificially illuminate the Earth from space for remote sensing, and are required to operate both in ascending node and descending node, continuously. Therefore they have huge energy requirements. In a dusk/dawn Sun synchronous orbit, the satellite crosses the equator at 6 pm in descending node and at 6 am in ascending node, being under solar illumination throughout the orbital motion. This facilitates continuous generation of power by the solar panels required for SAR sensing.

6.2.2.2 Medium Earth Orbit (MEO)
Medium Earth orbits typically have an orbital period of approx. 12 h. Two types of medium Earth orbits are notable: the semi-synchronous GPS orbit and the Molniya orbit.

(a) Semi-synchronous GPS orbit

The orbit at 20,200 km above the Earth's surface (i.e. 26,570 km from the centre of the Earth) with an inclination of 55° has an orbital period of 12 h. As the Earth spins around its axis in 24 h, a satellite in this orbit crosses over the same two spots on the equator every day, and therefore it is called semi-synchronous. This orbit is near-circular, highly stable, consistent and predictable, and is used for Global Positioning System (GPS) satellites (Fig. 6.1c).

(b) Molniya orbit

The Molniya orbit was invented by Russian scientists mainly for communication purposes in the near-polar regions and high latitudes. A geostationary satellite (see below) is placed in the equatorial plane and therefore does not work well for high latitude and polar regions, for which the Molniya orbit provides a better alternative. It is a highly eccentric orbit, the height of the satellite above the Earth varying between 500 and 39,900 km. Thus it is an elongated ellipse with the Earth located close to one edge (Fig. 6.1d). A satellite in this orbit moves very fast when it is close to the Earth and slows down as it moves away from the Earth. It has an orbital period of 12 h and about two-thirds of it is spent over one hemisphere (Table 6.1). Therefore the Molniya orbit is particularly useful for communication in polar regions and high latitudes.

6.2.2.3 High Earth Orbit
At 35,788 km above the Earth's surface (i.e. 42,000 km from the Earth's centre), a satellite has an orbital period of 24 h (Table 6.1). This configuration is well suited to make geostationary satellites. A geostationary satellite is positioned in the equatorial plane (Fig. 6.1e). As the Earth spins around its axis in 24 h, the satellite orbits around the Earth in the same time period and is therefore constantly viewing the same part of the globe. Such a satellite does not move north or south during the day or night and appears permanently fixed above one point on the equator of the Earth; it is therefore called a geostationary satellite.

Such an orbit is used for communication and regular weather monitoring. Examples are SATCOM, SYMPHONIE, GOES, INSAT etc.
A satellite is called a bus or platform, on which sensors and experimental devices or instruments, called the payload, are mounted. The power for day-to-day working is derived from solar cells. A satellite's expected life span is commonly ~10 years; it uses fuel thrusters to remain aligned in the proper orbit. Collision of satellites is avoided as each satellite is usually assigned a certain 2° arc in the orbital plane.
In the following, important spaceborne missions and multispectral sensors particularly useful for geological remote sensing are described.

6.3 Landsat Programme

Following the successful photographic experiments on the initial manned space missions (Mercury, Gemini and Apollo), NASA developed plans for unmanned orbital imaging sensors for Earth observations. The first in the series, called the Earth Resources Technology Satellite (ERTS), later renamed the Landsat Program, was a phenomenal success. It gave tremendous impetus to remote sensing programmes and technology world-wide. As this was the first of its type, it is described in somewhat more detail here. The Landsat programme was initiated by NASA as an experimental programme, and subsequently it acquired an operational status.

Fig. 6.3 a Design of Landsat-1, -2 and -3. b Design of Landsat-4 and -5. c Design of Landsat-8. d Location and areal coverage of various
receiving stations of Landsat data (all figures after NASA)

Table 6.2 Salient specifications of Landsat series satellites (summarized after NASA)
Spacecraft Launch Orbit altitude (km) Orbit inclination (deg) Orbital period (min) Sensors
Landsat-1 1972 918 99° 103 MSS
Landsat-2 1975 918 99° 103 MSS
Landsat-3 1978 918 99° 103 MSS
Landsat-4 1982 705 98.2° 99 MSS; TM
Landsat-5 1984 705 98.2° 99 MSS; TM
Landsat-6 1993 Failed ETM
Landsat-7 1999 705 98.2° 99 ETM+
Landsat-8 2013 705 98.2° 99 OLI; TIRS

1. Landsat Orbit

Landsat-1 was launched in July 1972 and since then seven more satellites in this series (Landsat-2, -3, -4, -5, -6, -7 and -8) (Fig. 6.3a–c) have been launched (Table 6.2). All these satellites have a near-polar, near-circular, Sun-synchronous orbit. Landsat-1, -2 and -3 had similar appearance and identical orbital parameters, each completing the Earth's coverage every 18 days. Landsat-4, -5, -6 were redesigned for higher stability and placed in a relatively lower orbit to permit higher spatial resolution of the TM sensor, and each of these have provided complete coverage of the Earth every 16 days. Landsat-7 and then Landsat-8 were further redesigned, and have also been placed in the same orbit.
Landsat sensors have recorded data in the descending node (i.e. as the satellite moves from north to south). The data are pre-processed, sometimes stored on magnetic tapes, and relayed down to the Earth, either directly or through TDRS (tracking and data relay satellite—a communication satellite). A number of ground receiving stations have been built to receive the Landsat data around the globe (Fig. 6.3d).
Fig. 6.4 Working principle of Landsat MSS; each oscillation cycle of the mirror generates six scan lines; the data are recorded in four spectral
bands by a total of 24 detectors (after NASA)

The satellite sensor data are reformatted and indexed in terms of path-and-row numbers for easy referencing and distribution world-wide. Mostly now, Landsat data are available free of charge.

2. MSS Sensor

It was the Multispectral Scanning System (MSS) which made the Landsat Program a tremendous success and gave remote sensing a huge impetus. In the 1970s–1980s, it gathered a large amount of remote sensing data world-wide. In some areas it may constitute the only available remote sensing data in archives for the 1970s, for example as may be required for comparative studies and/or environmental investigations etc.; therefore, some discussion on this early sensor is considered desirable.
The MSS used a plane mirror, oscillating along an axis parallel to the flight direction, to scan the ground in an across-track direction (Fig. 6.4). On Landsat-1, -2 and -3, which orbited at an altitude of 918 km, mirror oscillation through ±2.88° provided an FOV of 11.56°, which corresponds to a ground swath of 185 km. The active scan was during the west-to-east phase of the mirror oscillation only. Owing to the ground track velocity (6.47 km/s) and a mirror oscillation rate of 13.6 Hz, the MSS needed to generate six scan lines (or use six detectors adjacent to one another in a

Table 6.3 Salient specifications of Landsat sensors (summarized after NASA)


(a) MSS (OM scanner)—Landsat-1, -2, -3, -4, -5
Band no. Spectral range (µm) Name Ground resolution No. of scan lines per mirror sweep Quantization Image swath (km)
MSS-1 0.5–0.6 Green 79 × 79 m (samples spaced at 56 m interval) 6 6-bit 185
MSS-2 0.6–0.7 Red -do- 6 6-bit -do-
MSS-3 0.7–0.8 Near-IR -do- 6 6-bit -do-
MSS-4 0.8–1.1 Near-IR -do- 6 6-bit -do-
(b) TM (OM scanner)—Landsat-4, -5
TM-1 0.45–0.52 Blue–Green 30 16 8-bit 185
TM-2 0.52–0.60 Green 30 16 8-bit -do-
TM-3 0.63–0.69 Red 30 16 8-bit -do-
TM-4 0.76–0.90 Near-IR 30 16 8-bit -do-
TM-5 1.55–1.75 SWIR-I 30 16 8-bit -do-
TM-7 2.08–2.35 SWIR-II 30 16 8-bit -do-
TM-6 10.4–12.5 Thermal IR 120 4 8-bit -do-
(c) ETM+ (OM scanner)—Landsat-7
The bands are similar to Landsat-TM as above with two changes:
(i) Thermal-IR band (Band-6) has an improved ground resolution of 60 m (in place of 120 m)
(ii) There is an additional ‘eighth panchromatic’ band with 15 m ground resolution
(d) OLI-TIRS (Pushbroom scanner)—Landsat-8
Operational Land Imager (OLI): about 7000 detector cells per spectral band; quantization 12-bit, resampled to 16-bit; image swath 185 km
B1 0.433–0.453 Coastal/Aerosol 30 m
B2 0.450–0.515 Blue 30 m
B3 0.525–0.600 Green 30 m
B4 0.630–0.680 Red 30 m
B5 0.845–0.885 Near-IR 30 m
B6 1.560–1.660 SWIR-1 30 m
B7 2.100–2.300 SWIR-2 30 m
B8 0.500–0.680 Pan 15 m
B9 1.360–1.390 Cirrus 30 m
Thermal Infrared Sensor (TIRS): about 2000 detector cells per spectral band; quantization 12-bit, resampled to 16-bit; image swath 185 km
B10 10.30–11.30 LWIR-1 100 m
B11 11.50–12.50 (failed)
Note Initially, in Landsat-1, -2, and -3, the MSS bands -1, -2, -3, and -4 were numbered as MSS-4, -5, -6 and -7 respectively

row) per spectral band to provide an along-track ground resolution of 79 m.
The MSS recorded data in four spectral channels (Table 6.3). The MSS channels 1, 2 and 3 used photomultiplier tubes and the MSS channel 4 used silicon photodiodes. For radiometric calibration, MSS used on-board calibration lamps, which were scanned by the mirror during each oscillation cycle.
The MSS ground resolution cell was 79 × 79 m; however, sampling of the signal was carried out in an overlapping fashion, such that the sampling interval corresponded to only 56 m on the ground in the scan direction. For all Landsat sensor data, the swath width is 185 km and the scene is nominally 185 × 185 km in dimension.
For use on Landsat-4 and -5, which orbited at a slightly lower altitude of 705 km, minor changes in the MSS instrumentation were incorporated; for example, the FOV was increased to 14.93° to achieve a ground resolution of 82 × 82 m.
Performance characteristics of the Landsat sensors have been described by Markham and Barker (1983, 1986) and Chander et al. (2009).

3. TM Sensor

The Thematic Mapper (TM) sensor was a refined OM multispectral scanner used on Landsat-4 and -5 missions. It operated in seven wavelength bands, lying in the range of 0.45–12.5 µm (Table 6.3) (Fig. 6.5). The main differences of TM in comparison to the MSS sensor were as follows.

1. The TM sensor used seven wavebands lying in the VIS, NIR, SWIR and TIR regions, in comparison to the four wavebands of MSS lying in the VIS and NIR.
2. In the case of TM, the spectral limits of the wavebands were based on the knowledge of spectral characteristics of common natural objects and transmission characteristics of the atmosphere. This is in contrast to the spectral limits of the MSS sensor, which were quite arbitrarily chosen, as it was an initial experimental programme and the information on spectral characteristics of objects and transmission characteristics of the atmosphere was limited.
3. MSS used active scan only during the west-to-east phase of the oscillation mirror; in TM, both back and forth oscillation phases of the mirror could be used for active scanning.
4. In the case of MSS, 6 scan lines were generated for each of the four spectral channels. In TM, 16 scan lines were generated in each sweep of the mirror for TM bands 1–5 and 7, and 4 scan lines for TM band 6.
5. The TM sensor had relatively higher resolution specifications—spatial, spectral and radiometric; the quantization level was 8-bit (256 levels) in all bands.

Fig. 6.5 Working principle of Landsat TM; the sensor records data in seven bands; a total of 16 scan lines are concurrently generated for bands
1–5 and 7 and four lines for band 6 (after NASA)

Fig. 6.6 Comparative spectral distribution of Landsat sensors—MSS, TM, ETM+ and OLI-TIRS (after NASA)

4. ETM+ Sensor

After the failure of Landsat-6 (carrying the Thematic Mapper, 1993), Landsat-7 was launched in April 1999, carrying the Enhanced Thematic Mapper Plus (ETM+) sensor. Its main aim was to provide continuity of Landsat-4 and -5 TM-type data. Therefore it had a similar orbit and repeat cycle as Landsat-4/-5. ETM+ had eight spectral bands in all. Bands 1–5 and band 7 operated in blue, green, red, near-IR, SWIR-I and SWIR-II, exactly the same way as in Landsat TM with 30 m ground resolution. The thermal-IR band (band 6) had an improved ground resolution of 60 m (in comparison to the 120 m of TM). Further, there was an additional panchromatic eighth band with a spatial resolution of 15 m (Table 6.3).
The data from the TM/ETM+ were relayed either directly or over the TDRS (Tracking and Data Relay Satellites) to the Earth receiving stations, where they are pre-processed and made available to users. On Landsat-4/-5, there were no on-board tape recorders, in contrast to the earlier MSS. However, Landsat-7 again carried an on-board solid-state recorder for temporary storage of remote sensing data to allow acquisition over areas outside the reach of ground receiving stations.
The TM/ETM+ sensors have provided high-quality data, which have been highly valuable in numerous types of applications for Earth resources investigations, particularly for geological mapping and exploration (for applications, see Chap. 19). The archive data are now available even free of cost on a global basis from USGS/Landsat websites.

5. OLI/TIRS Sensors

With the aim of providing continuity of Landsat-type data on a global basis, Landsat-8 (formerly called the Landsat Data Continuity Mission, LDCM) was launched in 2013. It has the same orbit as Landsat-4, -5, -7. However, the imaging technology has been changed from the earlier opto-mechanical line scanner to a pushbroom line scanner for both the solar reflection and thermal-IR sensing. Landsat-8 has two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI uses an array of ~7000 detectors per spectral band and TIRS uses ~2000 detectors per spectral band. The salient specifications of the OLI/TIRS sensors are given in Table 6.3.
In comparison to the earlier ETM+ sensor, the main additional features are as follows:

(a) There is a new coastal/aerosol band (0.433–0.453 µm) (B1); it is useful for shallow water and ocean colour investigations, and for tracking atmospheric fine particles like dust and smoke.
(b) There is a new SWIR cirrus band (1.36–1.39 µm) (B9) to detect high thin cirrus clouds that may contaminate other shorter visible band channels.
(c) TIRS was designed to have two thermal bands (B10 and B11) instead of the earlier one band; however, B11 failed.

Figure 6.6 shows the comparative spectral distribution of the various Landsat sensors—MSS, TM, ETM+, and OLI-TIRS.

6.4 SPOT Programme

The French satellite system SPOT (Système Probatoire de l'Observation de la Terre, which literally means Experimental System for Observation of the Earth) has been the

first free-flying European Earth resources satellite programme, and has been operated under the French Centre National d'Etudes Spatiales (CNES) and SPOT-IMAGE Inc. SPOT-1 (Fig. 6.7a) was the first one in this series and was placed in orbit in 1986, and SPOT-2 was launched in 1990. SPOT-3 (1993) was a similar one but failed. This was followed by SPOT-4 (1998), SPOT-5 (2002), SPOT-6 (2012) and SPOT-7 (2014). The satellites of the SPOT series up to SPOT-5 were placed in the same orbit: near-polar, Sun-synchronous, 832 km high (Table 6.4). The sensors flown under the SPOT programme have gradually evolved over the years.

1. SPOT-1/-2—HRV

SPOT-1 and -2 carried a set of two identical sensors called HRV (High Resolution Visible) sensors, each of which was a CCD linear array pushbroom scanner. The HRVs could acquire data in several interesting configurations (Table 6.5):

(a) In nadir-looking panchromatic (PAN) mode, each HRV, using an array of 6000 CCD detectors, provided data with 10-m ground resolution, in a swath width of 60 km; the combined swath of the two HRVs was 117 km (Fig. 6.7b).

Fig. 6.7 a SPOT platform. b Schematic of nadir-viewing HRV/HRVIR/HRG sensor. c World-wide ground receiving stations of SPOT data
(all figures after CNES/SPOT image)

Table 6.4 Salient specifications of SPOT satellites (summarized after CNES/SPOT image)
Spacecraft Launch Orbit altitude (km) Orbit inclination (deg) Orbital period (min) Sensors
SPOT-1 1986 832 98.7° 101.4 HRV
SPOT-2 1990 832 98.7° 101.4 HRV
SPOT-3 1993 Failed HRV
SPOT-4 1998 832 98.7° 101.4 HRVIR
SPOT-5 2002 832 98.7° 101.4 HRS, HRG
SPOT-6 2012 694 98.2° 99 Pan; Multi
SPOT-7 2014 694 98.2° 99 Pan; Multi

(b) In multispectral (XS) mode, the HRVs acquired data in three spectral bands (green, red and near-IR) with a ground resolution of 20 m and a swath of 60 km.
(c) The two HRVs could be directed to view off-nadir up to ±27 deg across-track; this was employed to view regions of interest not vertically below the satellite, for a higher repeat cycle and for stereoscopy (Fig. 6.8a).

2. SPOT-4—HRVIR

The sensor on SPOT-4 (1998–2013) was called HRVIR (High Resolution Visible and Infra-Red). It was again a CCD pushbroom line scanner. The major changes in the SPOT-4—HRVIR were as follows (Table 6.5).

(a) The higher-resolution 10-m panchromatic band was changed to a red band (0.61–0.68 µm) and was called monospectral.
(b) In the multispectral mode, the HRVIR acquired data in four bands: green (0.50–0.59 µm), red (0.61–0.68 µm), NIR (0.79–0.89 µm) and an additional SWIR (1.58–1.75 µm), each band having 20 m spatial resolution.

Besides, the HRVIR could also be steered up to ±27 deg for off-track viewing, as in the case of the earlier HRV.

Fig. 6.8 Principle of stereo generation using inclined viewing: a across-track and b in-orbit (along-track)

Table 6.5 Salient specifications of SPOT sensors (summarized after CNES/SPOT image)
Band no. Spectral Name Ground resolution (m) Quantization (bit) Image
range (µm) swath (km)
(a) HRV (Pushbroom scanner)
B1 0.50–0.59 Green 20 8 60
B2 0.61–0.68 Red 20 8 60
B3 0.78–0.89 Near-IR 20 8 60
Pan 0.51–0.73 Panchromatic 10 8 60
(b) HRVIR (Pushbroom scanner)
B1 0.50–0.59 Green 20 8 60
B2 0.61–0.68 Red 20 8 60
B3 0.78–0.89 Near-IR 20 8 60
B4 1.58–1.75 SWIR 20 8 60
Mono 0.61–0.68 Red 10 8 60
(c) HRG (Pushbroom scanner)
B1 0.50–0.59 Green 10 8 60
B2 0.61–0.68 Red 10 8 60
B3 0.78–0.89 Near-IR 10 8 60
B4 1.58–1.75 SWIR 20 8 60
Pan 0.48–0.71 Panchromatic 5 8 60
(d) HRS (Pushbroom scanner)
Band: Panchromatic (in-orbit stereo) | Spectral range: 0.48–0.71 µm | Inclination: ±20 deg | Time between two images: 90 s (simultaneous) | Vertical accuracy: 15 m | Ground resolution: 10 m | Quantization: 8 bit | Image swath: 120 km
(e) SPOT6/7a—PAN & MULTI
Band no. Spectral Name Ground resolution (m) Quantization (bit) Image
range (µm) swath (km)
Pan 0.45–0.75 Panchromatic 1.5 12 60
Multi 0.45–0.52 Blue 6 12 60
0.53–0.59 Green 6 12 60
0.62–0.69 Red 6 12 60
0.76–0.89 NIR 6 12 60
Note: SPOT-6 and -7 make a constellation together with Pleiades-1A and -1B
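For a pushbroom sensor, the swath is simply the detector count multiplied by the ground sampling distance, which is how the HRV/HRG figures in Table 6.5 fit together with the 6000-element arrays mentioned in the text. A minimal sketch; the detector counts other than the 6000 quoted for the HRV panchromatic mode are implied values, not stated in the text:

    def swath_km(n_detectors, gsd_m):
        """Swath width of a pushbroom line scanner."""
        return n_detectors * gsd_m / 1000.0

    print(swath_km(6000, 10))    # HRV panchromatic: 60 km (6000 detectors quoted in the text)
    print(swath_km(3000, 20))    # HRV multispectral (XS) mode: 60 km (implied detector count)
    print(swath_km(12000, 5))    # HRG panchromatic: 60 km (implied detector count)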

3. SPOT-5—HRG

The sensor on SPOT-5 (2002–2015), called High Resolution Geometric (HRG), was a multispectral mapper. Two identical HRG instruments were placed side-by-side, each with a swath width of 60 km, the two combined providing a swath of 117 km (same as the earlier SPOT-1 HRV sensor). The major features of the SPOT-5—HRG were as follows:

(a) The multispectral bands were spectrally the same as in HRVIR, i.e. green, red, NIR, and SWIR. The spatial resolution in the green, red and NIR bands was 10 m and that in SWIR was 20 m.
(b) Besides, there was one broad panchromatic band (0.49–0.69 µm) with a spatial resolution of 5 m. Data from the HRG sensor could be processed to produce simulated imagery of 2.5 m resolution.
(c) Further, the HRGs also possessed off-track steering capability (±27 deg), as in the earlier SPOT-HRVIR, for more frequent observations.

4. SPOT-5—HRS

The HRS (High Resolution Stereoscope) was a high resolution stereo instrument. It used the in-orbit stereo imaging principle, i.e. two radiometers inclined fore and aft at ±20 deg, imaging the same ground scene at a time interval of

about 90 s (Fig. 6.8b), as the satellite made an over-pass above the ground. Image data were collected in a panchromatic band (0.49–0.69 µm) with a spatial resolution of 10 m in a swath width of 120 km. It is estimated that these stereo pairs provided altitude accuracy of around 15 m.
Besides, SPOT-4 and -5 also carried a large swath coarse resolution sensor called the "Vegetation Instrument" (VI), possessing a high repeat cycle, for mainly vegetation mapping.

5. SPOT-6/7—Pan, Multi

SPOT-6 (2012) and SPOT-7 (2014) possess the same architecture as the Pleiades satellites and have been placed in the same orbit as Pleiades-1A and -1B. The two satellites are phased 180° apart from each other. Together with the two Pleiades, they make a constellation of 2-by-2 satellites, 90° apart from the adjacent one. SPOT-6 and -7 acquire image data in panchromatic mode (1.5 m resolution) and multispectral mode (blue, green, red and near-IR) (6 m resolution). The swath width is 60 km.
The SPOT data have been widely used for various Earth resources investigations. Figure 6.7c shows the global distribution of receiving stations of SPOT data.

6.5 IRS/Resourcesat Programme

Under the Indian Remote Sensing Satellite (IRS) programme, the Indian Space Research Organization (ISRO) has launched a number of satellites. The first to be launched was IRS-1A (1988) (Fig. 6.9a), followed by IRS-1B (1991), both from the Soviet cosmodrome at Baikonur. IRS-1A and -1B were exactly similar in orbit and sensors. They were placed in an orbit very similar to that of the early Landsats (Sun-synchronous orbit, 904 km altitude, equatorial crossing at 10.00 h, descending node). The second series of satellites were IRS-1C (1995) and -1D (1997), and these two were similar in orbit and sensors. ISRO commenced launching of the IRS-Resourcesat series of satellites from Sriharikota, India, with Resourcesat-1 (2003), Resourcesat-2 (2011) and Resourcesat-2A (2016) (Table 6.6).

1. LISS-I and LISS-II Sensors

The Linear Imaging Self Scanning (LISS-I and LISS-II) sensors, carried on-board IRS-1A and -1B, were CCD linear array pushbroom scanners. Both LISS-I and -II provided data in four spectral bands: blue, green, red, and near-IR

Fig. 6.9 a Schematic of IRS satellite. b Location and areal coverage of various receiving stations of IRS data (all figures after ISRO)

Table 6.6 Salient specifications of IRS/Resourcesat satellites (summarized after ISRO)


Spacecraft Launch Orbit altitude (km) Orbit inclination (deg) Orbital period (min) Sensors
IRS-1A 1988 904 99.1° 103 LISS-I; LISS-II
IRS-1B 1991 904 99.1° 103 LISS-I; LISS-II
IRS-1C 1995 817 98.7° 101 PAN; LISS-III
IRS-1D 1997 817 98.7° 101 PAN; LISS-III
Resourcesat-1 2003 817 98.7° 101 LISS-III; LISS-IV
Resourcesat-2 2011 817 98.7° 101 LISS-III; LISS-IV
Resourcesat-2A 2016 817 98.7° 101 LISS-III; LISS-IV

(Table 6.7). Light-emitting diodes (LED) provided on-board calibration of the CCD detector arrays. The LISS-I used 152-mm focal length optics and provided data with 72.5 × 72.5 m resolution in a swath width of 148 km, quite comparable to Landsat MSS. The LISS-II used a set of two cameras, A and B, each with 304 mm focal length optics, and provided data with 36.25 × 36.25 m ground resolution, quite comparable to Landsat TM sensor data (Table 6.7).

2. LISS-III Sensor

LISS-III is an improved version of the earlier LISS-II. It is a CCD pushbroom linear scanner. LISS-III was first carried on-board the IRS-1C and -1D platforms and was also continued on the subsequent Resourcesat-1, -2, -2A.

Table 6.7 Salient specifications of IRS/Resourcesat sensors (summarized after ISRO)


Band no. Spectral range (µm) Name Ground resolution (m) Quantization level Image swath (km)
(a) LISS-I sensor
B1 0.45–0.52 Blue 72.5 7 148
B2 0.52–0.59 Green 72.5 7 148
B3 0.62–0.68 Red 72.5 7 148
B4 0.77–0.86 Near-IR 72.5 7 148
(b) LISS-II sensor
B1 0.45–0.52 Blue 36.25 7 74
B2 0.52–0.59 Green 36.25 7 74
B3 0.62–0.68 Red 36.25 7 74
B4 0.77–0.86 Near-IR 36.25 7 74
(c) IRS-PAN sensor
PAN 0.5–0.75 Panchromatic 5.8 6 70
(d) IRS-LISS-III sensor
B2 0.52–0.59 Green 23.5 7/10a 140
B3 0.62–0.68 Red 23.5 7/10a 140
B4 0.77–0.86 Near-IR 23.5 7/10a 140
B5 1.55–1.70 SWIR 70.5/23.5a 7/10a 148
(e) IRS-LISS-IV sensor
B2 0.52–0.59 Green 5.8 10 23/70b
B3 0.62–0.68 Red 5.8 10 23/70b
B4 0.77–0.86 Near-IR 5.8 10 23/70b
a SWIR band resolution increased to 23.5 m and quantization to 10-bit in Resourcesat
b Image swath is 23 km in Resourcesat-1 and 70 km in Resourcesat-2/2A
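The LISS-I and LISS-II resolutions quoted above (72.5 m with 152 mm optics and 36.25 m with 304 mm optics, from about 904 km altitude) illustrate the basic pushbroom relation GSD ≈ H · d / f, where d is the detector element size. A minimal sketch; the ~12 µm detector pitch is inferred from these numbers and is not stated in the text:

    DETECTOR_PITCH_M = 12.2e-6   # inferred value, not from the text

    def ground_resolution_m(altitude_km, focal_length_mm):
        """Ground sampling distance of a pushbroom camera: GSD = H * d / f."""
        return altitude_km * 1000.0 * DETECTOR_PITCH_M / (focal_length_mm / 1000.0)

    print(round(ground_resolution_m(904, 152), 1))   # ~72.6 m (LISS-I)
    print(round(ground_resolution_m(904, 304), 1))   # ~36.3 m (LISS-II)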

This sensor included green, red and near-IR bands with a resolution of 23.5 m and one SWIR band, initially with 70.5 m resolution and subsequently increased to 23.5 m resolution, the swath width being about 140 km (Table 6.7).

3. IRS-PAN Sensor

One broad-band (0.50–0.75 µm) panchromatic sensor (PAN) with a spatial resolution of 5.8 m and a swath width of 70 km was inducted on the IRS-1C and -1D. It could be steered across-track (±26 deg) for a more frequent repeat cycle and stereo capability (Fig. 6.8a).

4. LISS-IV Sensor

The LISS-IV sensor carried on the Resourcesat-1, -2, -2A platforms is a further improvement of the earlier LISS-III sensor. It is a CCD linear imaging sensor and provides multispectral image data in three spectral bands (green, red and near-IR) with a ground resolution of 5.8 m. The ground swath was initially 23 km, increased to 70 km in later missions (Table 6.7). The sensor can be steered across-track (±26 deg) for a more frequent repeat cycle and stereo capability.
Besides, relatively coarser spatial resolution large-FOV sensors (e.g. WiFS and AWiFS) with a quick repeat cycle have also been employed on the IRS/Resourcesat satellites, mainly for vegetation monitoring.
Subsequently, satellites in the Cartosat series have been launched by ISRO to facilitate high spatial resolution cartographic mapping (see Sect. 6.10).
Data from IRS satellites have been received by a large number of ground stations world-wide (Fig. 6.9b). Several geologic application examples of LISS- and PAN-sensor data are given later (Chap. 19).

6.6 Japanese Programmes

The Japanese satellites JERS-1 and ALOS-1 carried an optical sensor and a radar imaging sensor, the two together on the same platform.

1. JERS (FUYO-1)—OPS

Japan's first Earth Resources Satellite, JERS-1 (also called Fuyo-1), was launched into a Sun-synchronous orbit in 1992 (Table 6.8). It carried an optical sensor system called OPS, which basically was a CCD linear imaging sensor. It consisted of two sub-systems: VNIR and SWIR. Spectrally, bands 1–3 were similar to Landsat TM bands 2–4, operating in green, red and near-IR. OPS bands 3 and 4 provided stereoscopic capabilities (with B/H = 0.3). Bands 5–8

Table 6.8 Salient specifications of JERS and ALOS satellites and optical sensors (summarized after NASDA)

(a) Satellites
Spacecraft Launch Orbit altitude Orbit inclination Orbital period Sensors
(km) (deg) (min)
JERS (Fuyo-1) 1992 568 97.7 98 OPS
ALOS 2006 692 98.2 99 PRISM, AVINIR-2

(b) Sensors
JERS-1 (Fuyo-1)—OPS: Wavelength bands (µm): 1. 0.52–0.60 (Green); 2. 0.63–0.69 (Red); 3. 0.76–0.86 (NIR); 4. 0.76–0.86 (NIR, forward viewing for stereo imaging); 5. 1.60–1.71; 6. 2.01–2.12; 7. 2.13–2.25. Ground resolution: 18.3 × 24.2 m
ALOS—AVNIR-2: Wavelength bands (µm): 1. 0.42–0.50 (Blue); 2. 0.52–0.60 (Green); 3. 0.61–0.69 (Red); 4. 0.76–0.89 (NIR). Ground resolution: 10 m
ALOS—PRISM: Panchromatic, 0.52–0.77 µm. Ground resolution: 2.5 m; stereo (across-track inclined line of sight)

covered critical regions in the SWIR, useful for mineralogical discrimination. Its salient specifications are summarized in Table 6.8.

2. ALOS—PRISM and AVNIR-2

The Japanese Advanced Land Observation Satellite-1 (ALOS-1) was a follow-up programme to JERS (Fuyo)-1. It was launched in January 2006 and operated till 2011 (Table 6.8). It had two optical sensors: (a) PRISM—a panchromatic sensor for stereo mapping; and (b) AVNIR-2—an Advanced Visible and Near Infrared Radiometer-2. The salient specifications of PRISM and AVNIR-2 are given in Table 6.8. (Besides the above, ALOS-1 carried a radar imaging sensor, PALSAR, described in Chap. 15; ALOS-2, launched in 2014, carried only a SAR sensor.)

6.7 CBERS Series

The series of China-Brazil Earth resources satellites (CBERS), also known as the Ziyuan series, commenced with the launch of CBERS-1 in 1999. This was followed by CBERS-2, -2B, -3 and -4 (Table 6.9). All these satellites have been placed in the same orbit, 778 km high, SSO, with orbit inclination of 98.5 deg, orbital period of 100 min, and repeat cycle of 26 days. The satellites have carried different sensors (Table 6.10).

1. MR-CCD sensor was a Medium Resolution CCD pushbroom scanner that operated in five spectral ranges (panchromatic, blue, green, red and near-IR) and provided images with 20 m ground resolution and 120 km swath width.
2. IRMSS was an Infrared Multispectral Scanner Sensor that provided image data in four spectral bands (panchromatic, SWIR-I, SWIR-II, and thermal-IR), initially with spatial resolution of 80/120 m and swath of 120 km. Later, in CBERS-4, the resolution was increased to 40/80 m (Table 6.10).
3. HRC, a High Resolution Panchromatic Camera, was carried on CBERS-2B. It provided image data with a ground resolution of 2.7 m and swath of 27 km.
4. PANMUX: CBERS-4 (launch 2014) has carried the IRMSS, and two additional sensors, PANMUX and MUXCAM, both being pushbroom scanners. PANMUX (Panchromatic and Multispectral Camera) is a panchromatic and multispectral imaging system that generates image data in a panchromatic band (5 m resolution) and three multispectral bands (green, red, NIR) (10 m resolution), with a swath width of 60 km.
5. MUXCAM is a relatively coarser resolution multispectral sensor that provides image data in four spectral bands (blue, green, red, near-IR) with spatial resolution of 20 m and swath of 120 km (Table 6.10).

6.8 RESURS-1 Series

Russia has launched a series of remote sensing satellites called the RESURS series. RESURS-01-3 was launched in November 1994 and RESURS-01-4 in July 1998. Both these satellites operated in polar Sun-synchronous orbits with a mean altitude of 678 km (RESURS-01-3) and 835 km (RESURS-01-4). The remote sensing systems consisted of CCD linear arrays. There were two sensors: MSU-E (high-resolution) and MSU-SK (medium-resolution). The MSU-E sensor provided data in three spectral bands (green, red and near-IR) with pixel dimensions of about 35 × 45 m. The MSU-SK sensor has four spectral bands (green, red, near-IR and thermal-IR) with pixel dimensions of about 140 × 185 m in the VNIR bands, and about 600 × 800 m in the TIR band.
RESURS-DK-1 is an improved Russian civilian satellite launched in June 2006 into an elliptical low Earth orbit. It carried one panchromatic and three multispectral (green, red, visible near-IR) bands. The spatial resolution in panchromatic mode is about 1–0.8 m and that of the multispectral sensor is 2–3 m.
Table 6.9 Salient specifications of CBERS series satellites (summarized after INPE,CAST)
Spacecraft Launch Orbit altitude (km) Orbit inclination (deg) Orbital period (min) Sensors
CBERS-1 1999 778 98.5 100 MR-CCD; IRMSS
CBERS-2 2003 778 98.5 100 MR-CCD; IRMSS
CBERS-2B 2007 778 98.5 100 MR-CCD; IRMSS; HRC
CBERS-3 2013 Failed
CBERS-4 2014 778 98.5 100 IR-MSS; PANMUX; MUXCAM

Table 6.10 Salient specifications of CBERS sensors (summarized after INPE, CAST)
Spectral range (µm) Name Ground resolution (m) Swath (km)
(a) MR-CCD
0.51–0.73 Panchromatic 20 120
0.45–0.52 Blue 20 120
0.52–0.59 Green 20 120
0.63–0.69 Red 20 120
(b) IR-MSS
0.50–0.9 Panchromatic 80/40a 120
1.55–1.75 SWIR-I 80/40a 120
2.08–2.35 SWIR-II 80/40a 120
10.4–12.5 TIR 120/80a 120
(c) HRC
0.52–0.80 µm Panchromatic 2.7 27
(d) PANMUX
0.51–0.85 Panchromatic 5 60
0.52–0.59 Green 10 60
0.63–0.69 Red 10 60
0.77–0.89 NIR 10 60
(e) MUXCAM
0.45–0.52 Blue 20 120
0.52–0.59 Green 20 120
0.63–0.69 Red 20 120
0.77–0.89 NIR 20 120
a IR-MSS resolution is increased for CBERS-4

It was followed by RESURS-P-1 (2013), RESURS-P-2 (2014) and RESURS-P-3 (2016), all three missions being identical. These satellites have been placed in an orbit of 475 km altitude, SSO, 97.3° inclination, 93.9 min orbital period. They carry identical payloads—a high resolution sensor, a multispectral sensor and a hyperspectral sensor. The heart is a 4 m focal length camera that provides high resolution imagery with 1 m spatial resolution in a swath of 38 km. The multispectral sensor provides images in 5 bands (B, G, R, NIR-I, NIR-II) with a resolution of 4 m. The hyperspectral sensor has 96 bands and generates images with 30 m spatial resolution. Besides, KOSMOS-2506, launched in 2015, provides very high resolution panchromatic band images with 33 cm spatial resolution.

6.9 TERRA-ASTER Sensor

ASTER, the 'Advanced Spaceborne Thermal Emission and Reflection' radiometer, was launched by NASA as a part of the first 'Earth Observation Satellite' (EOS-AM-1) programme in 1999 and has been collecting data ever since (Table 6.11). ASTER is one of the five imaging sensors carried on-board the EOS-AM-1, which aims at comprehensively understanding global changes, especially climatic changes. ASTER carries moderate resolution imaging sensors and aims at contributing to the understanding of local and regional phenomena on the Earth's surface and in the atmosphere (Yamaguchi et al. 1998).
The EOS-AM-1 (TERRA) (Fig. 6.10) has been placed in a near-polar, near-circular orbit, with nominal altitude 705 km, orbit inclination 98.2°, Sun-synchronous, descending node, equatorial crossing 10.30 h, and a repeat cycle of 16 days.

Fig. 6.10 Terra satellite platform with ASTER sensor (after NASA)

ASTER has incorporated several improvements in order to exceed the performance of earlier optical sensors such as Landsat-TM, SPOT-HRV, JERS-OPS, and IRS-LISS/PAN. ASTER has a total of 14 spectral channels spread over the range of 0.53–11.65 µm (VNIR—SWIR—TIR) (Table 6.11) and an increased B/H ratio of 0.6 (as compared to 0.3 of earlier sensors) for better stereo imaging.
In order to provide wide spectral coverage, the ASTER instrument has three radiometer subsystems: (a) a visible and near-infrared (VNIR) radiometer subsystem, (b) a short-wave infrared (SWIR) radiometer subsystem, and (c) a thermal-infrared (TIR) radiometer subsystem. Table 6.11 gives the salient specifications of the ASTER radiometers.

1. VNIR radiometer

This is a pushbroom (CCD line) scanner and acquires data in the VNIR range (0.52–0.86 µm). Its main features are as follows:

(i) For multispectral imaging, there are three bands—green, red and near-IR—in nadir-looking mode, with 15 m ground resolution and swath width of 60 km.
(ii) The instrument has cross-track pointing capability (±24°) which leads to a 232 km swath.
(iii) There is a back (or aft)-looking CCD linear array in the near-IR, called 3B. This enables generation of image pairs (bands 3N and 3B) for stereoscopic viewing with a B/H ratio of 0.6. The image data are automatically corrected for positional deviation due to the Earth's rotation around its axis.
(iv) The stereo image pair is generated in the same orbit, in real time (time lag of barely 55 s).

2. SWIR radiometer

This is also a pushbroom (CCD line) scanner, with 30 m spatial resolution. It acquires multispectral image data in the SWIR range (1.60–2.43 µm) in six bands. These data are highly useful for the study of minerals, rocks, volcanoes, snow, and vegetation. The SWIR scanner has an off-nadir cross-track pointing capability of ±8.55°, by rotation of the pointing mirror.

3. TIR radiometer

This is an opto-mechanical line scanner and collects multispectral image data in five bands in the thermal-IR range (8–12 µm), with a spatial resolution of 90 m and swath of 60 km.
ASTER can acquire data in various modes, making combinations of the various radiometers in day and night. Out of all the sensors currently in orbit, the ASTER sensor has generated the most interesting type of remote sensing data for geological studies.
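The usefulness of the increased B/H ratio can be seen from the standard stereo parallax relation dh ≈ dp / (B/H), where dp is the parallax difference measured as a ground distance. A minimal sketch, assuming image matching to about half a pixel (an assumption for illustration, not a figure from the text):

    def height_precision_m(pixel_m, b_over_h, matching_px=0.5):
        """Approximate height precision of a stereo pair: dh = dp / (B/H)."""
        return matching_px * pixel_m / b_over_h

    # ASTER along-track stereo: 15 m pixels, B/H = 0.6
    print(round(height_precision_m(15.0, 0.6), 1))   # ~12.5 m
    # The same pixel size with the earlier B/H of 0.3 would be twice as coarse
    print(round(height_precision_m(15.0, 0.3), 1))   # ~25.0 m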

Table 6.11 Salient specifications of Terra-ASTER sensor (summarized after NASA)


Subsystem Band no. Spectral range (µm) Radiometric resolution Absolute accuracy Scanner type Detector Spatial resolution Quantization
VNIR 1 0.52–0.60 NEΔρ ≤ 0.5% ≤ ±4% Pushbroom 5000 cells Si-CCD 15 m 8-bit
VNIR 2 0.63–0.69
VNIR 3N 0.78–0.86
VNIR 3B* 0.78–0.86
SWIR 4 1.600–1.700 NEΔρ ≤ 0.5% ≤ ±4% Pushbroom 2048 cells PtSi-CCD 30 m 8-bit
SWIR 5 2.145–2.185 NEΔρ ≤ 1.3%
SWIR 6 2.185–2.225 NEΔρ ≤ 1.3%
SWIR 7 2.235–2.285 NEΔρ ≤ 1.3%
SWIR 8 2.295–2.365 NEΔρ ≤ 1.0%
SWIR 9 2.360–2.430 NEΔρ ≤ 1.3%
TIR 10 8.125–8.475 ≤ 3 K (200–240 K) OM Hg-Cd-Te 90 m 12-bit
TIR 11 8.475–8.825 ≤ 2 K (240–270 K)
TIR 12 8.925–9.275 NEΔT ≤ 0.3 K ≤ 1 K (270–340 K)
TIR 13 10.25–10.95 ≤ 2 K (340–370 K)
TIR 14 10.95–11.65
Swath width: 60 km
Coverage in cross-track direction by pointing function: 232 km
Base to height (B/H) ratio of stereo capability: 0.6 (along-track)
Cross-track pointing: VNIR ±24°, SWIR ±8.55°, TIR ±8.55°
3B* Band is backward-looking for stereo

6.10 High Spatial Resolution Satellite Sensors

During the last decade, a general trend in the evolution of spaceborne remote sensing systems has been the development of high spatial resolution panchromatic band sensors with stereoscopic capability, and with deployment in constellations. This may be accompanied with or without multispectral bands (green, red, and infrared) of good spatial resolution to allow generation of pan-sharpened CIR composites. The main applications of high spatial resolution satellite images are cartographic, cadastral and urban mapping, including disaster management and rapid change detection. Military intelligence satellites (such as IGS of Japan and Helios of France) are not included in the discussion here.
A comprehensive review of high resolution satellites is given by Petrie and Stoney (2009). Table 6.12 provides a list of selected high resolution satellites. All these satellites have been placed in Sun-synchronous, near-polar, near-circular orbits. The sensors are typically high-performance CCD pushbroom line scanners and provide spatial resolution of the order of 0.4–1 m in the panchromatic band and about 1–4 m in the multispectral bands. Their swath width is generally small, typically in the range of 10–15 km. However, nearly all of these sensors possess across-track and/or along-track steering capability of the line-of-sight. Therefore, the nadir-looking repeat cycle is no more as relevant as the revisit capability with a steerable across-track line-of-sight. For example, the nadir-looking repeat cycle may be 42–48 days, but the revisit interval can be reduced to 2–3 days by using a tiltable line-of-sight. Further, the use of a constellation of several satellites in conjunction can now provide daily access to almost any point on the Earth's surface.
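Pan-sharpened colour-infrared (CIR) composites of the kind mentioned above are produced by merging the low-resolution multispectral bands with the high-resolution panchromatic band. A minimal sketch of one common approach, the Brovey transform, is given below; it assumes the multispectral bands have already been resampled to the panchromatic pixel grid, and it is an illustration rather than the procedure of any particular mission.

    import numpy as np

    def brovey_pansharpen(bands, pan, eps=1e-6):
        """Brovey transform: scale each band by the ratio of pan to the band sum.

        bands -- dict of 2-D arrays (e.g. 'green', 'red', 'nir') already
                 resampled to the panchromatic grid
        pan   -- 2-D panchromatic array of the same shape
        """
        total = sum(bands.values()) + eps
        return {name: band * pan / total for name, band in bands.items()}

    # Tiny synthetic example (2 x 2 pixels)
    ms = {
        "green": np.array([[10., 12.], [11., 13.]]),
        "red":   np.array([[20., 22.], [21., 23.]]),
        "nir":   np.array([[40., 44.], [42., 46.]]),
    }
    pan = np.array([[80., 90.], [85., 95.]])
    sharpened = brovey_pansharpen(ms, pan)
    # A CIR composite would then display nir / red / green as R / G / B
    print(sharpened["nir"])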

Fig. 6.11 Selected high and vey high resolution satellites: a Ikonos; b QuickBird; c Eros; d Kompsat; e Cartosat; f GeoEye; g Formosat;
h WorlView-4; i Pleiades

1. Ikonos was the first commercial high-resolution satellite, launched in 1999 (Fig. 6.11a) (Table 6.12) by Space Imaging/Eosat Inc. It used CCD pushbroom line scanning technology. Ikonos had two sensors: (a) a panchromatic sensor (0.45–0.90 µm) with 1 m spatial resolution and (b) a multispectral sensor with 4 m spatial resolution, working in four spectral bands (blue, green, red and near-IR), data quantization being 11-bit. An interesting feature was that both along-track and across-track stereoscopic capability existed by ±45° tilt, leading to a high B/H ratio (0.6) and a VE of about 4. Further, the revisit interval was about 1.5–3 days. Individual scenes are about 11 × 11 km.
2. QuickBird-2: DigitalGlobe (earlier called EarthWatch) Inc. developed early plans to deploy its own satellites for commercially utilizing spaceborne remote sensing. Its first satellite, EarlyBird, launched in 1997, unfortunately lost contact with the Earth receiving station soon after launch. QuickBird-2 (Fig. 6.11b), the next mission from DigitalGlobe Inc., was successfully launched in 2001. The sensor provided ground resolution of 61 cm in the panchromatic band and 2.5 m in four multispectral bands (B, G, R and NIR). The images are approx. 16.5 × 16.5 km. The image data can be acquired with ±25° inclination, both along-track and across-track.
3. Eros series: Image Sat International (ISI) proposed a constellation of high-resolution, low-cost, agile, lightweight, low-orbit remote sensing satellites with frequent re-visit. It has been named the Eros series. Eros-A (Fig. 6.11c) was launched in 2000 and orbited at an altitude of 480 km, carrying a pushbroom scanner to produce images of 1.9 m ground resolution and 14 km swath width. Eros-B was launched in 2006. It uses a CCD/TDI (charge-coupled device/time delay integration) focal plane. This allows imaging even under poor lighting conditions. It has a standard panchromatic resolution of 0.7 m and 7.2 km swath width. Further, the EROS satellites can turn up to 45° in any direction for in-orbit stereo imaging.
4. Kompsat series (Korean Multipurpose Satellites) (Arirang) belongs to South Korea. Kompsat-1 (Arirang-1), launched in 1999, had a spatial resolution of 6 m. Kompsat-2 (Arirang-2) (Fig. 6.11d) was a lightweight satellite (launched 2006) with four multispectral bands (blue, green, red, near-IR) possessing ground resolution of 4 m and one panchromatic band with ground resolution of 1 m. Kompsat-3A was launched in 2015 and provides images with 0.55 m panchromatic band resolution and 2.2 m multispectral band resolution.
5. Orbview satellites were launched by Orbimage (later acquired by GeoEye). Orbview-3 orbited around the Earth during 2003–2007 and acquired 1 m panchromatic and 4 m multispectral imagery in four bands (blue, green, red and near-IR) in an 8-km ground swath. It could be steered up to ±50° off-nadir to provide stereo and a revisit capability of <3 days.
6. Cartosat-1 was launched in 2005 as a follow-up to the IRS series by ISRO, with the aim to generate high spatial resolution stereo image data (Table 6.12). It carried two panchromatic cameras, one looking aft (−5°) and the other fore (+26°), to provide in-orbit stereo pairs. The resolution was 2.5 m for both the cameras and the height resolution was <5 m. Subsequently, Cartosat-2 was launched in 2007 but suffered problems after launch. Cartosat-2A (2008) (Fig. 6.11e) and -2B (2010) were improved identical versions and provided image data with 0.65 m panchromatic band resolution, and both along-track and across-track steering capability. Cartosat-2C (2016) and -2D (2017) are further improved identical versions and generate image data with resolutions of 0.6 m in the panchromatic band and 2.0 m in the multispectral bands. Cartosat-3, planned for 2018, is proposed to generate image data with a spatial resolution of 0.25 m.
7. GeoEye-1 (Fig. 6.11f) was launched in 2008 (Table 6.12). At the nadir point, the sensor provided a resolution of 0.41 m in the panchromatic band and 1.65 m in four multispectral (blue, green, red and near-IR) bands, with a nominal swath width of 15.2 km. The sensor was capable of imaging in any direction, both along-track and across-track.
8. Formosat-2 (Fig. 6.11g) was launched by the Republic of China (Taiwan) in 2004. It carried a high resolution pushbroom scanner for Earth observation that provided data in a panchromatic band with 2 m resolution, and in four multispectral bands (blue, green, red, near-IR) with 8 m resolution, with a swath of 24 km. It had daily revisit capability with tiltable viewing angles, both along-track and across-track, up to ±45°.
9. RapidEye is a constellation of 5 exactly similar satellites in the same orbit (Sun-synchronous, 630 km high; equatorial crossing time of 11.00 am). The satellites employ pushbroom CCD linear arrays for scanning in 5 spectral bands (blue 0.44–0.51 µm; green 0.52–0.59 µm; red 0.63–0.69 µm; red-edge 0.69–0.73 µm and near-IR 0.76–0.88 µm). It is the first space-borne remote sensor to utilize the red-edge band, which is helpful in monitoring vegetation health, improving species separation, and measuring protein and nitrogen content in biomass. The nominal ground resolution is 5 m and the swath width is 77 km. The 5 satellites together provide a revisit time of 1 day.
10. WorldView-1 satellite (this series also belongs to DigitalGlobe Inc.) was launched in 2007 (Table 6.12). It provided ground resolution of 0.5 m at nadir and the revisit time is 1.7 days. WorldView-2 (2009) provided high resolution images with a ground resolution of 0.46 m for the panchromatic and 1.85 m for the multispectral bands, with a swath width of 17.7 km and 11-bit quantization. WV-2 carried eight (8) multispectral bands (blue, green, red, NIR-1, coastal, red edge, yellow and NIR-2). The swath width is nominally 16.4 km at nadir. The sensor can be steered off-nadir ±40° nominally, higher viewing angles being selectively available.

Table 6.12 Salient specifications of selected high spatial resolution satellites

Spacecraft | Launch | Orbit altitude (km); inclination (deg); period (min) | Revisit cycle (days) | Steering capability of line-of-sight | Pan resolution (m) | Multispectral resolution (m) | Special features/comments
IKONOS | 1999 | 681; 98.1°; 98 | 1.5–3 | Both along-track and across-track ±45° | 1 | 4 |
QuickBird-2 | 2001 | 450; 98°; 93 | 3.5 | Both along-track and across-track ±30° | 0.61 | 2.5 |
OrbView-3 | 2003 | 470; 97°; 93 | <3 | Off-nadir up to ±50° | 1 | 4 |
EROS-A | 2000 | 525; 97.6°; 95 | 2 | ±45° in any direction | 2 | – |
EROS-B | 2006 | 500; 97.4°; 94.8 | 2 | ±45° in any direction | 0.7 | – |
Kompsat-2 | 2006 | 685; 98°; 98 | 5 | Off-nadir up to ±30° | 1 | 4 |
Kompsat-3 | 2012 | 685; 98°; 98 | – | Off-nadir up to ±30° | 0.5 | 2.0 |
RapidEye | 2008 | 630; 97.8°; 97 | 1 | Constellation of 5 exactly similar satellites | – | 5 | Five (5) multispectral bands
GeoEye-1 | 2008 | 681; 98.1°; 98 | <3 | Any direction, along-track and across-track | 0.41 | 1.65 |
Cartosat-1 | 2005 | 618; 97.87°; 97 | 5 | In-orbit stereo: aft (−5°) and fore (+26°) | 2.5 | – | Height resolution <5 m
Cartosat-2A/Cartosat-2B | 2008/2010 | 630; 97.9°; 97 | 4 | ±45° along-track and ±26° across-track | <1 | – |
Cartosat-2C/Cartosat-2D/Cartosat-2E | 2016/2017/2017 | 505; 97.5°; 95 | – | ±45° along-track and ±26° across-track | 0.65 | 2.0 |
WorldView-1 | 2007 | 496; 97.2°; 95 | 1.7 | Along-track and across-track ±30° | 0.5 | Nil |
WorldView-2 | 2009 | 770; 97.2°; 100 | 1.1 | ±40° off-nadir nominally | 0.46 | 1.85 | Eight (8) multispectral bands
WorldView-3 | 2014 | 610; 97.9°; 96.9 | 3 | ±60° off-nadir | 0.31 | 1.24 |
WorldView-4 | 2016 | 610; 97.9°; 96.9 | 3 | ±60° off-nadir | 0.31 | 1.24 | Four (4) multispectral bands
PLEIADES-1/PLEIADES-2 | 2011/2012 | 694; 98.1°; 98 | Daily revisit in constellation | Both along-track and across-track | 0.5 | 2.0 | Imaging anywhere in 800 km wide strip

(Fig. 6.11h) are further improved versions and provide image data with 0.31 m resolution in panchromatic band and 1.24 m resolution in four multispectral bands (B, G, R, and NIR).
11. Pleiades-1, -2: Pleiades-1 (2011) (Fig. 6.11i) and Pleiades-2 (2012) have been designed and launched by Astrium/CNES for high resolution mapping from space. Both Pleiades-1 and -2 are exactly similar, being placed in the same orbit, 180° apart, and form a constellation with daily revisit (Table 6.12). The sensor uses CCD pushbroom technology with time delay integration (TDI) for improved performance. It provides panchromatic images with 0.5 m resolution and multispectral (blue, green, red, near-IR) images with 2 m resolution. The image data can also be processed to produce pan-sharpened products with 0.5 m resolution. The imaging swath is 20 km at nadir. The line of sight can be steered obliquely both along-track and across-track to facilitate imaging in any area within an 800 km wide strip and also stereo imaging.

6.11 Other Programmes (Past)

A number of other space-borne sensors have been developed and deployed for remote sensing observations (see e.g. Ryerson et al. 1997). However, only a selected few of these will be mentioned here considering that they represent a distinct scientific advancement.

1. SKYLAB was a manned space station and was launched by NASA during 1973–74 into an oblique orbit with 430 km altitude. It became news in 1979 when it re-entered the Earth's atmosphere and disintegrated, scattering itself near the Australian continent. One of the experiments carried by SKYLAB involved a 13-channel opto-mechanical line scanner. It used conical scanning to allow uniform atmospheric path length (Fig. 6.12). However, the data from conical scan lines was difficult to manipulate and rearrange for generating the image, and hence could not be much used by scientists.

Fig. 6.12 Example of imagery from the SKYLAB scanner; note the conical scan lines (after NASA)

2. Heat Capacity Mapping Mission (HCMM) was another interesting mission from NASA. It was dedicated to thermal mapping of the Earth's surface from orbit (Short and Stuart 1982). The spacecraft HCMM was launched in April 1978 and provided data until September 1980. It circled the Earth at 620 km altitude in a near-polar orbit. The sensor, named Heat Capacity Mapping Radiometer (HCMR), consisted of a visible channel (0.55–1.1 µm; 500 m resolution) and a thermal channel (10.5–12.5 µm; NE ΔT = 0.4 K at 280 K, 600 m resolution). In order to provide repetitive coverage, the radiometer had a very wide FOV (scan angle of ±60°), resulting in a swath width of 720 km. The ground resolution varied from 0.6 km at the centre of the image to about 1 km at the edge. The satellite had no on-board tape recorder and so did not provide world-wide data, the HCMR data having been limited to North America, Europe and Australia. The repetitive coverage was used for detecting thermal differences in rock/soil properties day and night (see Chap. 12).
3. MOS: The Japanese MOS-1 (Marine Observation Satellite) was launched in early 1987, in an orbit very similar to that of the Landsat. An important sensor on MOS-1 was the MESSR (Multispectral Electronic Self-Scanning Radiometer). This sensor used a pushbroom scanner to yield video data in four wavelength bands (VNIR) with a ground resolution of 50 × 50 m.
4. Modular Optoelectronic Multispectral Scanner (MOMS-1) was a German (DLR) sensor and the first ever pushbroom-type scanner used for remote sensing from space (1983). It operated in two spectral bands (Band 1: 0.57–0.62 µm; and Band 2: 0.82–0.92 µm). The sensor provided a spatial resolution of 20 × 20 m covering selected parts of the Earth in a swath width of 140 km from the shuttle (Hiller 1984; Bodechtel et al. 1985; Bodechtel and Zilger 1996).

MOMS-2 was an upgraded version, emphasizing upon the in-orbit stereo capability and high-resolution panchromatic band sensor. It was flown on the ESA-Spacelab Mission (altitude 300 km) in 1993. In fact, this was the first system to acquire in-orbit stereo images of the Earth from space. It generated images with a ground resolution of 4.5 m in the panchromatic band and 13.5 m in the multispectral bands.

Fig. 6.13 Schematic of multispectral and multilook imaging used in the MOMS-02 experiment; the sensor consisted of a high-resolution nadir-looking PAN camera, two nadir-looking multispectral cameras for imaging in blue, green, red and near-infrared bands, and a pair of inclined (fore and aft) PAN cameras for stereo (after DLR)

Subsequently, the sensor MOMS-2 was refurbished to fly on the Russian space-station MIR, and the sensor was called MOMS-02P (1996–98). Due to its longer space life, MOMS-02P covered several parts of the Earth providing interesting and useful image data. MOMS-02P employed CCD linear array detectors. There were three main sensor components: (a) a multispectral camera with four bands (channels B, G, R, and NIR respectively) in nadir-looking mode; (b) a high-resolution panchromatic nadir-looking camera; and (c) a set of two identical panchromatic cameras inclined (+21.4° and −21.4°) in fore and aft directions for in-orbit stereo (Fig. 6.13).

5. MTI: The Multispectral Thermal Imager (MTI) was launched in March 2000 in a near-polar, Sun-synchronous orbit of 555 km altitude, at 97° inclination to the equator. It was an advanced multispectral thermal imaging sensor and carried 15 spectral bands ranging from visible to thermal-IR. The sensor had a highly accurate radiometry. Estimated resolution is 5 m in the three visible bands and 20 m in the remaining 12 bands lying in the NIR, SWIR and TIR region. However, only very limited data are available from MTI.
6. MISR: The Multi-angle Imaging Spectro-Radiometer (MISR) was designed to study change in sun-lit reflection characteristics at different viewing angles and was launched on-board NASA's Terra spacecraft (1999). For sensing, it used CCD linear imaging (pushbroom) technology and operated in four spectral bands: blue, green, red and near-infrared, and generated images with a nominal ground resolution of 275 m. The sensor used 9 fixed viewing angles: 0.0° (nadir), ±26.1, ±45.6, ±60.0 and ±70.5° (both fore and aft of nadir). For each viewing angle, a discrete camera was used. MISR could acquire multiple observations of the same site from a wide variety of zenith angles in multispectral bands in a matter

Fig. 6.14 Schematic of data flow in a non-photographic remote sensing mission



Fig. 6.15 Example of system of path and row numbers for global indexing and referencing

of a few minutes. Its main applications so far have been in atmospheric aerosol studies, characterization of land surface properties, and structure of vegetation canopies and properties of snow and ice fields.

In addition to the above, a number of other spacecraft have been launched for Earth observations, such as the TIROS-NOAA series, Nimbus series etc. in polar orbits, and the GOES series, Meteosats, Himawari, INSATs etc. in geostationary orbits. These satellites are primarily for meteorological-oceanographical purposes. They have large IFOV/FOV, substantial geometric distortions and coarse spatial resolution, and therefore their data are of limited utility for geological applications, particularly as better-quality data are available from other sensors.

6.12 Products from Scanner Data

The satellite remote sensing data are pre-processed on board, and relayed down to the Earth receiving stations (Fig. 6.14). On the ground, the data are preprocessed, partly rectified and formatted. They may be fed to a monitor for real-time display or could be readied for distribution. Commonly all satellite-sensor remote sensing data are reformatted and organized in scenes, which are indexed in terms of paths and rows (Fig. 6.15).

A number of factors affect the geometric and radiometric quality of the image data which are discussed in Chaps. 7 and 9. The principles of interpretation are presented in Chap. 9, digital image processing in Chap. 13 and applications in Chap. 19.

References

Bodechtel J, Zilger J (1996) MOMS—history, concepts, goals. In: Proceedings of the MOMS-02 Symposium, Cologne, Germany. European Association of Remote Sensing Laboratories (EARSeL), Paris, 5–7 July 1995, pp 12–25
Bodechtel J, Haydn R, Zilger J, Meissner D, Seige P, Winkenbach H (1985) MOMS-01: missions and results. In: Schanpf A (ed) Monitoring earth's oceans, land and atmosphere from space. The American Institute of Aeronautics and Astronautics, New York, pp 524–535
Chander G, Markham BL, Helder DL (2009) Summary of current radiometric coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Rem Sens Environ 113:893–903
Curran PJ (1985) Principles of remote sensing. Longman, London
Hiller K (1984) MOMS-01 experimental missions on space shuttle flights STS-7 June '83, STS-11 Feb. '84—data catalogue. DFVLR, Oberpfaffenhofen

Markham BL, Barker JL (1983) Spectral characterization of the Landsat-4 MSS sensors. Photogramm Eng Remote Sens 49(6):811–833
Markham BL, Barker JL (1986) Landsat MSS and TM post-calibration dynamic ranges, exoatmospheric reflectances and at-satellite temperatures. EOSAT Tech Notes 1:3–8
Petrie G, Stoney WE (2009) The current status and future direction of spaceborne remote sensing platforms and imaging systems. In: Jackson MW (ed) Earth observing platforms and sensors, manual of remote sensing, vol 1.1, 3rd edn. American Society for Photogrammetry and Remote Sensing (ASPRS), Bethesda, MD, pp 387–447
Ryerson RA, Morain SA, Budge AM (eds) (1997) Earth observing platforms and sensors (CD-ROM). Manual of remote sensing, 3rd edn. American Society for Photogrammetry and Remote Sensing
Short NM, Stuart LM Jr (1982) The heat capacity mapping mission (HCMM) anthology. NASA SP-465, US Govt Printing Office, Washington, DC, p 264
Yamaguchi Y, Kahle AB, Tsu H, Kawakami T, Pniel M (1998) Overview of advanced spaceborne thermal emission and reflection radiometer (ASTER). IEEE Trans Geosci Rem Sens 36(4):1062–1071
7 Geometric Aspects of Photographs and Images

It is often pertinent to know not only what the object is, but distortion is uniform over one full frame and varies from
also where it is; therefore, a universal task of remote sensing frame to frame. This typically occurs in products of photo-
scientists is to deliver maps displaying spatial distribution of graphic systems and digital cameras, where the entire scene is
objects. Geometrically distorted image data provide incor- covered simultaneously; each individual frame has uniform
rect spatial information. Geometric accuracy requirements in distortion parameters, although the distortion may vary from
some applications may be quite high, so much so that the frame to frame. In the intraframe type, the distortion varies
entire purpose of the investigation may be defeated if the within a frame. Intraframe distortion can be further subclas-
remote sensing data are geometrically incorrect beyond a sified into two types: (a) interline, i.e. distortion is uniform
certain level. In brief, geometric fidelity of remote sensing over one scan line but varies from line to line; and (b) intra-
data is of paramount importance for producing scaled maps line, in which distortion varies within the line, from pixel to
and for higher application purposes. pixel. In a linear array CCD device, one line is imaged con-
currently and therefore the distortion is uniform in each line,
although it may differ from line to line. In an opto-mechanical
7.1 Geometric Distortions line scanner, scanning is carried out pixel by pixel, and
therefore geometric distortion may vary within one line, from
The geometric distortions occurring in remote sensing data one pixel to another. However, as the speed of OM scanning
products can be considered in many ways. For planning and is very high, within-line distortions can usually be ignored.
developing rectification procedures, it is important to know The headings under which geometric effects are described
whether a certain geometric distortion occurs regularly or in this chapter are guided by genetic considerations, and can
irregularly. On the basis of regularity and randomness in be classified into four broad groups as follows (Table 7.1):
occurrence, the various geometric distortions can be grouped
into two categories: systematic and non-systematic. System- 1. Distortions related to sensor system factors.
atic distortions result from planned mechanism and regular 2. Distortions related to sensor-craft altitude and
relative motions during data acquisition. Their effects are perturbations.
predictable and therefore easy to rectify. Many of the sys- 3. Distortions arising due to the Earth’s curvature and
tematic distortions are removed during preprocessing of the rotation below.
raw data. Non-systematic distortions arise due to uncontrolled 4. Effects of relief displacement.
variations and perturbations. They are unpredictable and
require more sophisticated processing (e.g. rubber sheet The resulting geometry of a photograph/image is gov-
stretching) for removal, and are generally ignored at initial erned by a complex interplay between various types of
stages during routine investigations. Digital processing for distortions occurring concurrently.
geometric rectification is discussed in Chap. 10.
There is another way to classify geometric distortions. If
several sets of photo-graphs/images are available, two dif- 7.1.1 Distortions Related to Sensor System
ferent levels of distortions can be distinguished depending
upon whether the variations occur frame-to-frame, or within a 7.1.1.1 Instrument Error
frame, line-to-line. These are called interframe type and Sensor instruments may not function uniformly or perfectly,
intraframe type respectively (Fig. 7.1). In the interframe type, especially if a moving mechanics is involved in data collection,


Fig. 7.1 Interframe and intraframe distortions. a Nominal ground, b interframe distortion, c intraframe distortion

Table 7.1 Factors affecting geometry of images and photographs

Factors | Whether systematic (S)/non-systematic (NS)
1. Sensor system factors |
– Instrument error | NS, S
– Panoramic distortion | NS, S
– Over-sampling or scanning | S
– Scan time shift | S
2. Sensor craft attitude and perturbations |
– Variation in velocity and altitude of the sensor-craft | NS
– Pitch, roll and yaw distortions due to platform instability | NS
3. Earth's shape and spin |
– Skewing due to the earth's rotation | S
– Effects of the earth's curvature | NS
4. Relief displacement |
– Local terrain relief | NS
– Sensor look angle | S

and this may affect the image geometry. For example, in OM oblique viewing mode, i.e. with the optical axis inclined in
line scanners, the mirror motion may have nonlinear charac- the fore, aft and/or across-track directions. Such image data
teristics, especially in those scanning devices which use an also carry panoramic distortion.
oscillating mirror (e.g. Landsat MSS, TM, ETM+). Other
possible instrument errors could occur due to distortion in 7.1.1.3 Aspect Ratio
optical system or alignment, non-uniform sampling rate, etc. The ratio of linear geometric scales along the two rectan-
The instrument errors may be systematic or non-systematic. gular arms of an image is called aspect ratio, and a distortion
arising out of this not being unity is called aspect distortion.
7.1.1.2 Panoramic Distortion Aspect distortion may be associated with scanner image
Panoramic distortion results from non-verticality of the optical data, such as those from OM and CCD line scanners. A line
axis in the optical imaging device. Typical examples include scanner will produce a geometrically correct image if the
products from OM line scanners and panoramic cameras. As velocity and height of the sensorcraft, i.e. V/H factor, is
the sensor sweeps over the area, across the flight line, data are commensurate with the rate of scan cycle, IFOV, and rate of
collected with line-of-sight inclined at varying angles to the sampling in the scan direction. Any variation in this rela-
vertical (Fig. 7.2). As a result, the ground element size varies tionship leads to aspect distortion, which could be in the
as a function of angle of inclination of the optical axis such that form of oversampling/undersampling or overscanning/
the scale at margins is squeezed. This results in scale distor- underscanning (Fig. 7.3). This could be systematic (e.g. as
tion. A correction for this distortion is necessary (see in the case of Landsat MSS where oversampling was
Sect. 13.3.1), particularly in aerial surveys, where inclination incorporated in the design) or non-systematic (viz. generated
is typically about 50° (total angular field-of-view, FOV, about due to uncontrolled V/H variations).
100°); however, in satellite surveys, as the inclination is very
small (total FOV about 10–12°), the error is often ignored. 7.1.1.4 Scan-Time Shift
More recently, a number of space-borne CCD linear array In OM line scanners, the scanner sweeps across the path
sensors have been launched which acquire video data in collecting video data from one end of the swath to the other.

however, the new aerial scanners have a very high speed of


scanning reducing this type of distortion substantially. The
space-borne OM scanners use oscillating mirrors in view of
the fact FOV is quite small and suitable corrective measures
are taken for parallel alignment of scan lines (see e.g. Col-
well 1983; Gupta 2003; Lillesand et al. 2007).

7.1.1.5 Degradation Due to Sampling


and Quantization
To form a digital image, a continuous scene is broken into
discrete units, and an average digital number (brightness
value) is assigned to each discrete unit area. This artificial
segmentation, i.e. sampling and quantization, is a type of
degradation inherent in all data where analog-to-digital
(A/D) conversion is involved (Fig. 7.4).

7.1.2 Distortions Related to Sensor-Craft


Altitude and Perturbations

7.1.2.1 Pitch, Roll and Yaw Distortions Due


to Sensor-Platform Instability
One of the most important parameters governing the geo-
metric quality of remote sensing images is the orientation of
Fig. 7.2 Panoramic distortion resulting from non-verticality of optic the optical axis. When the optic axis is vertical, image data
axis causing compression of scale at margins. a Scan mechanism,
has high geometric fidelity. Many of the sensors are
b ground plan, c image. Note the decrease in ground element size with
increase in inclination of optic axis designed to operate in this mode. However, sensor platform
instability may lead to angular distortions. Any angular
During this short interval of time, the sensor-craft keeps distortion can be resolved into three components: pitch, roll
moving ahead. This relative motion results in scan lines and yaw (Fig. 7.5).
being inclined with respect to the nadir line, and is called the Yaw is the rotation of the sensor-craft about the vertical.
scan-time shift. The earlier generation of aerial OM line This rotation leads to a skewed image such that the area
scanners had prominent distortions due to this factor; covered is changed; however, no shear or scale deformation

Fig. 7.3 Aspect ratio distortion. a Nominal ground. b Oversampling along-track direction and e corresponding aspect distortion—the image
along scan line direction and c corresponding aspect distortion—the is elongated in the along-track direction
image is elongated along the scan line direction. d Overscanning in

occurs. Roll is the rotation of the sensor-craft about the axis


of flight or velocity vector; it leads to scale changes. Pitch is
the rotation along the across-track axis; it also leads to scale
changes. Figure 7.6 shows schematically the above types of
distortions in photographs and scanner images. The angular
distortions may occur in combination.
The aerial platforms are subject to greater turbulence due
to wind etc., and their data may carry larger amounts of
pitch, roll and yaw errors as compared to those from orbital
platforms, which are more stable and steady. In most aerial
scanners, the effect of roll is rectified by an electronic device,
which adjusts the start of line scan in the imagery. The
effects of yaw and pitch are not usually rectified, but the
effect of pitch is eliminated by over-scanning, and yaw leads
only to skewed images, which can be later accounted for by
Fig. 7.4 Degradation due to sampling and quantization; aerial scanner appropriate alignment. The errors due to sensor platform
imagery (red band) of part of the Mahi river area, India (courtesy of
Space Applications Centre, ISRO)
instability are typically non-systematic (Table 7.1).

Fig. 7.5 Pitch, roll and yaw


distortions—terminology

Fig. 7.6 Schematic of pitch, roll and yaw distortions. a Is the nominal ground; b, c, d show the pitch, roll and yaw distortions in photographs
(interframe type); e, f, g show the same in scanner image data (intraframe type)

7.1.2.2 Variations in Velocity and Altitude way, there is no omission/repetition of ground area. The
of the Sensorcraft length of the along-track arm of the ground element is
A remote sensor is designed to operate at a certain altitude controlled by the satellite velocity, altitude and integra-
and velocity combination. Variations in these parameters tion time. The length of the across-track arm of the
produce geometric distortion in the images or over-/ ground element is governed by the altitude and IFOV.
under-coverage. They are typically non-systematic. The The V/H ratio is thus very critical and any variation in
geometric distortions depend on the type of sensor. Exam- V/H ratio produces aspect distortion (Fig. 7.8). A rela-
ples include the following tive increase in H increases the across-track arm of the
pixel, and a relative decrease in H produces a shortening
(a) In the case of photographic systems and digital cameras, of the across-track arm. A change in velocity affects the
an increase in altitude leads to larger areal coverage and along-track arm of the ground element; higher velocity
decreased photographic/image scale. On the other hand, leads to longer along-track arm, and vice versa. Varia-
an increase in platform velocity leads to decreased area tions in the V/H factor thus produce aspect distortions
of overlap in stereo coverage, i.e. under-coverage (Fig. 7.8).
(Fig. 7.7).
(b) In OM line scanners, the governing factor is the V/H
ratio. Increase in the V/H ratio leads to under-scanning
(skipping of some areas) and decrease in the V/H ratio 7.1.3 Distortions Related to the Earth’s Shape
leads to over-scanning (repetition of some areas). and Spin
(c) In CCD line scanners, the forward motion of the push-
broom is accompanied by integration of radiation over a If an image or photo covers extensive areas on the Earth’s
particular time (dwell time), after which the process of surface, the Earth’s curvature affects geometric scale and
accumulating radiation for the next line starts. In this exerts a type of panoramic effect (Fig. 7.9). This phenomenon

Fig. 7.7 Schematic variation in velocity and altitude (V/H factor) causing over- and under-coverage in photographic systems

Fig. 7.8 Schematic variation in V/H factor causing aspect and scale distortion in CCD line scanner images

is commonly observed in scenes acquired from high altitudes. pictured simultaneously, the distortion is absent. Figure 7.10
The Earth’s curvature varies with latitude, and therefore the shows a typical situation with the Earth rotating around its axis
effects of the Earth’s curvature are more pronounced at higher from west to east and the satellite making an orbital pass
latitudes. Although the shape of the Earth is well known, for around the Earth. As the satellite sensor completes one scan
remote sensing image construction purposes it is considered as operation and positions itself for the next scan operation, the
a non-systematic type of distortion. Earth below rotates around its axis from west to east over a
certain distance. This brings relatively west-located parts of
7.1.3.1 Skewing Due to the Earth’s Rotation the scene before the sensor system every time a fresh scan
This geometric distortion typically occurs in scanner (OM and cycle starts, thus causing image skew. The skew is maximum
linear CCD) images obtained from high altitudes and in polar orbits and zero in the equatorial one. The method for
space-borne sensors. The effect of the Earth’s rotation is quite skew correction is given in Sect. 13.3.2.
negligible in low-altitude (aerial) scanner surveys. In pho- Geometric quality of photographs and images used for
tographs and digital camera products, as the entire scene is interpretation is also influenced by the susceptibility of

Fig. 7.9 Effects of the earth’s


curvature on scale. Ground
distance AB = CD, but image
distance A′ B′ < C′ D′

Fig. 7.10 Earth’s rotation and


image skewing. a Relative motion
of a satellite in polar orbit around
the Earth and the Earth’s rotation
around its axis. b Schematic
depiction of geneis of image
skew. (i) Nominal ground scene;
a number of scan lines and a
straight road segment are shown.
(ii) Effect due to the Earth’s
rotation; note the disrupted road
in the uncorrected image data.
(iii) To relocate the image data in
proper geometry, the scan lines
have to be shifted successively
westwards, leading to image skew

photographic material to expansion. For this reason, sometimes regularly placed reseau marks are printed on the image for precision requirements.

7.1.4 Relief Displacement

Displacement in the position of the image of a ground object due to topographic variation (relief) is called relief displacement. It is a common phenomenon on all remote sensing data products, particularly those of high-relief terrain. The magnitude of relief displacement is given as

Relief displacement ≈ (r · h)/H    (7.1)

where r is the distance of the object from the principal point, h is the object height and H is the flying height. Therefore, relief displacement is dependent upon local terrain relief and look angle at the satellite (which in turn depends upon sensor-craft altitude and distance of the ground feature from the nadir point).

The pattern of relief displacement depends upon the perspective geometry of the sensor. Two main types can be distinguished: (1) sensors with central perspective geometry; and (2) imaging systems with line-scanning devices.

1. Relief displacement on central perspective geometry products: This includes the digital cameras and the erstwhile photographic-film camera systems. Here, the key optical-geometrical feature is that for each frame (photograph or image), all rays must pass through the lens centre, which is (ideally) stationary. Figure 7.11 shows an example where objects of varied heights are photographed in a stereo pair. Although the top and base of a tower are at the same plan location, they appear at different positions on the photographs. This is due to relief displacement. It should be noted that: (a) the image displacement occurs in such a way that higher points on the ground are displaced radially away from the principal point; (b) the amount of image shift is related to the relief; and (c) the relief displacement decreases with increasing flying height, for which reason space photographs show less relief displacement than aerial photographs.
2. Relief displacement on line-scanner images: On line-scanning images, relief displacement occurs perpendicular to the nadir line, outward and away from the nadir point. Consider a tall object AB being imaged by a linear array scanning device (Fig. 7.12). The base B of the object is imaged at B′ and the top A at A′. Although the points A and B occupy the same position in plan (map), they appear at different positions on the image, the top being displaced outwards and away from the nadir line in the scan direction. Figure 7.13 (IRS-1D PAN image from the Himalayas) gives an example of relief displacement.

Fig. 7.11 Relief displacement seen on a photographic stereo pair (courtesy of Aerofilms, London)

Fig. 7.12 Schematic relief displacement on line-scanning images. The relief displacement occurs outward and away from the nadir line, in the scan direction

Figure 7.14 shows the typical geometrical configuration for optical sensors (such as Landsat-TM, SPOT-HRV, IRS-LISS/PAN and ASTER). The relief displacement is governed by the off-nadir angle of look at the satellite, relative terrain height, and satellite altitude.
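As a simple numerical illustration, Eq. (7.1) can be evaluated with a short script. The Python sketch below is only indicative; the function name and the example values (a 100 m high object imaged 60 mm from the principal point on a photograph taken from 3000 m above the datum) are assumptions made for the example, not taken from the text.

```python
def relief_displacement(r, h, H):
    """Relief displacement (Eq. 7.1) on a vertical, central-perspective
    photograph/image: d ~ r * h / H, where
    r = radial distance of the image point from the principal point,
    h = height of the object above the datum,
    H = flying height above the datum (same units as h).
    The result is in the same units as r (e.g. mm on the photograph)."""
    return r * h / H

# Example: 100 m high ridge, imaged 60 mm from the principal point,
# flying height 3000 m above the datum
d = relief_displacement(r=60.0, h=100.0, H=3000.0)
print(f"relief displacement = {d:.1f} mm on the photograph")  # 2.0 mm
```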

Fig. 7.13 Relief displacement in


CCD line-scanning image. The
scene (dated 28 November 1998)
is an IRS-1D PAN image of a part
of the Himalayas. The area has a
high relief; the main river is the
Bhagirathi (Ganges) river; in the
NE corner is the Maneri dam and
reservoir. Note the relief
displacement of hill-tops and
resulting partial shadowing of the
Ganges valley at several places

Fig. 7.14 Schematic showing


terrain-induced displacement
error for Landsat-TM,
SPOT-HRV and
IRS-LISS/PAN-type image
geometry

Figure 7.15 gives a nomograph of relief displacement for 7.2 Stereoscopy


Landsat TM. It is observed that in mountainous terrains such
as the Himalayas and the Alps, where relative relief of about 7.2.1 Principle
2000 m could occur, Landsat TM data may possess relief
displacements of the order of nearly 200 m (6–7 pixels!). The aim of stereoscopic viewing is to provide
Similarly, Fig. 7.16 shows a nomograph for SPOT-HRV three-dimensional perception (also now called 2.5 D, see
(off-nadir viewing), where it is seen that the SPOT-HRV Sect. 13.10). The principle of stereoscopy is well known
data in such areas (2000 m relative relief) may possess relief (Fig. 7.17). If objects located at different distances are
displacements of nearly 750 m (equivalent to distance of 75 viewed from a set of two viewing centres, the viewing
pixels!). Therefore, for precision planimetric work, as also centres subtend different perspective angles (also called
for geocoding and image registration, it is necessary to take parallactic angles) at the objects (Fig. 7.17a). The angle
relief displacements into account, particularly in mountain- depends on the distance or depth; the larger the distance, the
ous terrain. smaller the angle. In other words, an idea of relative

photographs are interchanged (i.e. the left photo is placed on


the right and the right photo on the left), the mental model
shows inverted relief—the ridges appearing as valleys and
vice-versa called pseudoscopy.

7.2.2 Vertical Exaggeration

In a stereoscopic mental model, almost invariably there is a


geometric distortion, as the horizontal and vertical scales no
longer match. The distortion is called vertical exaggeration
(VE), as very often the vertical scale is greater than the
horizontal scale. The distortion is primarily caused by the
fact that the B/H ratio during photography does not match
with the corresponding be/h ratio during stereo-viewing. The vertical exaggeration can be written as

VE = (B/H) × (h/be)    (7.2)

where B = air base, H = flying height, be = eye base, and h = depth at which the stereo model is perceived.

Fig. 7.15 Nomograph showing displacement in Landsat TM images induced by terrain height at various look angles (after Almer et al. 1996)
As a result of vertical exaggeration, the relief appears
enhanced, and slopes and dips appear steeper in the mental
model. Common values of VE for aerial photography range
between 3 and 5. The VE is at times helpful in investiga-
tions, e.g. in a flat terrain, as minor differences in relief
would get enhanced. However, it may be a problem in a
highly rugged terrain. Rarely, a phenomenon called ‘nega-
tive VE’ occurs, in which the horizontal scale is larger than
the vertical, and the relief becomes depressed (for more
details, see e.g. Moffitt and Mikhail 1980; Wolf 1983).
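By way of illustration, Eq. (7.2) may be evaluated numerically. The Python sketch below is only indicative; the assumed eye base (about 0.065 m) and perceived model depth (about 0.45 m) are typical values adopted here for the example, not figures from the text.

```python
def vertical_exaggeration(B, H, h_perceived, eye_base=0.065):
    """Vertical exaggeration (Eq. 7.2): VE = (B/H) * (h/be), where
    B = air base, H = flying height (same units as B),
    h_perceived = depth at which the stereo model is perceived (m),
    eye_base = inter-ocular distance be (m), ~0.065 m is typical."""
    return (B / H) * (h_perceived / eye_base)

# Typical aerial photography: B/H ~ 0.5 gives VE of roughly 3-4
print(vertical_exaggeration(B=1800.0, H=3600.0, h_perceived=0.45))   # ~3.5
# A satellite side-lap stereo with a very small B/H (~0.1) gives VE < 1,
# i.e. the relief appears subdued
print(vertical_exaggeration(B=90e3, H=900e3, h_perceived=0.45))      # ~0.7
```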

Fig. 7.16 Nomograph showing displacement in SPOT-HRV (off-nadir viewing) images induced by terrain height at various look angles (after Almer et al. 1996)

7.2.3 Aerial and Spaceborne Configurations for Stereo Coverage
Remote sensing products from all types of platforms, i.e.
distances is conveyed by perspective angles. This implies aerial, space and terrestrial, can be used for stereoscopic
that for relative depth perception, it is necessary that the viewing, provided the two images or photographs form a
same set of objects be viewed from two perspective centres. geometrically mutually compatible pair. Some arrangements
Now, if an area is photographed from two stations, a set for stereo coverage are schematically shown in Fig. 7.18.
of two photographs can be used for stereoscopic viewing. The most common and typical arrangement is vertical pho-
The left photograph is viewed by the left eye and the right tography (Fig. 7.18a) from an aerial or space platform with
photograph by the right eye (Fig. 7.17b), commonly using ca. 70% overlap for stereo viewing. The convergent type of
stereoscopic instruments. When the two, left and right photography, in which one camera is forward-inclined and
images, fuse or merge into each other, a three-dimensional another aft-inclined, to provide coverages of the same scene
mental model is perceived, called the stereoscopic model. from two different perspective directions, is another con-
This technique is called stereoscopy. The process virtually figuration (Fig. 7.18b).
transposes the eyes in such a manner that the eye base The coverages from some satellite sensors (e.g. Landsat)
(distance between the two eyes) is transposed to airbase are such that successive paths have a small area of overlap,
(horizontal distance between two successive photographic and these can also be used for stereo viewing (Fig. 7.18c).
stations). If per chance the relative positions of the However, a specific problem occurring in space data for

Fig. 7.17 Principle of stereoscopy. a Relationship between object B = airbase; H = flying height; L1 and L2 are lens centre positions; P1
distance and parallactic angle. Two objects P and Q situated at and P2 are principal points; N1 and N2 are nadir points; a1b1 and a2b2
distances DP and DQ are viewed by the two eyes L and R and subtend are images of A and B on the two photographs respectively (Wolf
parallactic angles uP and uQ respectively. As DP < DQ, uP > uQ. 1983)
b Photographs from two exposure stations sighting the same building.

stereo studies is the low B/H ratio owing to very high alti- across-track stereos. These satellites possess the capability of
tudes. Typically, in aerial stereo photography the B/H ratio is large tilt angles of ±45°, yielding stereos with a B/H ratio of
around 0.4–0.6, which gives a VE of about 3–4. In contrast, 0.6 and VE of about 4.
in space sensors such as Landsat MSS and TM, the B/H ratio In this treatment, the following examples of stereo pairs
is around only 0.17–0.03, giving a VE of about only 1–0.2 are included:
(negative vertical exaggeration where relief gets depressed in
the stereo vision!). • aerial photographic stereo pairs: Figs. 19.21, 19.52,
During the past decade, a lot of attention has been 19.53 and 19.59
focused on improving the VE of stereo sensors from space. • aerial digital camera stereo pair: Fig. 19.16
In order to improve the B/H factor in space image data, • Metric Camera photograph stereo pair: Fig. 11.5
oblique-viewing spaceborne sensors are being used. For • ASTER-SRTM derived stereo pair: Fig. 19.46
example, • MOMS scanner stereo pair: Fig. 19.3
SPOT-HRV and IRS-PAN use tiltable sensors such that • ERS-SAR image stereo pair: Fig. 19.32
the line of sight can be tilted across-track to generate stereo • SRTM image stereo pair: Fig. 16.7
coverage (Fig. 7.18d). Furthermore, configurations have • synthetic stereo pair: Figs. 13.37, 18.16 and 18.17
been designed for dedicated spaceborne stereo programmes
called ‘in-flight’ stereo capability sensors (e.g. JERS-OPS,
stereo MOMS-02 and ASTER). Such systems include mul-
tiple CCD linear arrays placed at the focal plane of the 7.2.4 Photography Vis-à-Vis Line-Scanner
optical system, to acquire multiple-look coverage, using Imagery for Stereoscopy
either single-lens optics (Fig. 7.18e) or multiple-lens optics
(Fig. 7.18f). These configurations generate along-track The mutual geometric compatibility of the
in-orbit stereos. However, most of the earlier spaceborne photographs/images forming a stereo pair is important for
systems had the limitation of low B/H ratio (approx. 0.3), stereoscopy. In general, distortions due to platform insta-
due to smaller tilt angles of around 20°. bility (roll, pitch and yaw), altitude—velocity variations,
In the last decade, numerous high resolution satellites panoramic viewing, Earth curvature and oblateness affect the
have been launched (e.g. IKONOS, QuickBird-2, GeoEye, stereo compatibility. Besides, there are some basic differ-
Eros, Cartosat, Pleiades etc. see Table 6.8) with the special ences in the characteristics of digital camera
capability to tilt line of sight in either along-track or images/photographs and line-scanner images relevant to
across-track direction, to generate both along- and stereo viewing.

Fig. 7.18 Configurations for stereoscopic coverage. a Vertical pho- across track line of sight. e Stereoscopic coverage from multilook linear
tography. b Convergent photography. c Stereoscopic coverage from arrays using single optics and f using multiple optics
Landsat MSS type images. d Stereoscopic coverage using a tiltable

1. Products from digital cameras and photographic systems nadir line, and thus they lack the central perspective
possess central perspective geometry, as each frame is configuration. Therefore, a conventional stereogram is
acquired with the lens centre at one position. The lacking in stereos generated from line-scanner images.
line-scanner images, on the other hand, are generated 2. In digital camera images and photographs, as the entire
line by line, as the scanner keeps moving along the scene is imaged simultaneously, there may be only

Fig. 7.19 Stereoscopic instruments. a Lens stereoscope (courtesy of Carl Zeiss, Oberkochen). b Mirror stereoscope and a parallax bar (courtesy
of Wild, Heerbrugg). c Portable mirror stereoscope (courtesy of Wild, Heerbrugg). d Interpretoscope (courtesy of Wild, Heerbrugg)

interframe distortions; however, in scanner images larger field-of-view of the mirror stereoscope. The scanning
intraframe distortions may creep in, as the images are mirror stereoscope permits stereo viewing of the entire stereo
generated line after line. This may complicate the model at variable magnifications. The interpretoscope
geometry and hence the stereo compatibility, especially (Fig. 7.19d) is a refined scanning stereoscope providing
for photogrammetric applications. variable and high magnifications.

7.2.5 Instrumentation for Stereo Viewing


7.3 Photogrammetry
Optical instruments used for stereo viewing differ in com-
plexity and sophistication. The common viewing instru- Photogrammetry is defined as the science and technique of
ments are the lens stereoscope, mirror stereoscope, scanning making precise measurements on photographs. As such, the
stereoscope and interpretoscope. term photogrammetry is now extended to measurements on
The lens stereoscope is a very simple instrument all remote sensing data products, whether photographs or
(Fig. 7.19a) consisting of a pair of lenses mounted on a scanner images. The basic purpose of photogrammetry is to
common stand. The distance between the two lenses can be determine distances, angles and heights from photo (or
adjusted to suit the eye base of the viewer. Its main advantages image) measurements, produce geometrically correct maps,
are its light weight, low cost and portability. However, its and extract geometric information. Owing to its strategic and
limitations are the small field-of-view and a distorted view in intelligence importance, photogrammetry has found numer-
peripheral areas. The mirror stereoscope (Fig. 7.19b) is ous applications during the past some decades.
technically more improved than the lens stereoscope and uses
a set of prisms and mirrors. Its main advantage is its large
field-of-view, enabling viewing of the entire stereo model at 7.3.1 Measurements on Photographs
the same time. However, magnification is generally low, and
binoculars are used to improve it, which in turn reduce the 7.3.1.1 Geometric Elements of Vertical
field-of-view. The portable mirror stereoscope (Fig. 7.19c) is Photographs and Terminology
a relatively new development, coupling the advantages of the In the earlier times, vertical photographs from aerial plat-
light weight and portability of the lens stereoscope and the forms have been extensively used for photogrammetric

applications. A vertical photograph is one in which the optical axis is vertical (the optical axis of a thin lens is defined as the line joining the centres of curvature of the spherical surfaces of the lens). The lens centre acts as the perspective centre through which all rays must pass. Thus the position of points on the image plane can be found by drawing rectilinear rays emanating from ground points and passing through the lens centre (e.g. ground points A and B are imaged at A′ and B′ respectively in Fig. 7.20).

The term principal point (P) denotes the geometric centre of the photograph. The principal point can be located with the help of marks appearing on edges or corners, called fiducial marks. If the optical axis is inclined, it is called an oblique photograph. In high oblique photographs, the horizon is visible, and in low oblique photographs, the horizon is not visible. The point on the ground vertically below the lens centre is called the nadir point (N) and the line joining successive nadir points is termed the nadir line; it gives the ground track of the flight path. In a remote sensing camera, the negative plate is placed at a distance f (focal length) above the lens centre. For all practical projection purposes, an equivalent positive plane can be imagined at a distance f below the perspective centre (Fig. 7.20).

Scale: the scale of a map is defined as the ratio of the distance on the map to the corresponding distance on the ground. Consider points A and B lying on a flat ground covered in a vertical photograph and imaged at A′ and B′ (Fig. 7.20). The scale of the photograph can be given as

Scale = A′P/AN = B′P/BN = f/H    (7.3)

The scale of a vertical photograph thus depends only on camera focal length and flying height above the ground. It does not depend on the angular location with respect to the principal point in a flat terrain. However, if the terrain has a variable elevation, the scale accordingly varies: the scale is larger for elevated areas and smaller for depressed areas. In areas of relief, it is often convenient to have an average scale, obtained by using the average height of the sensor-craft in Eq. (7.3).

The scale of a photograph can also be computed by comparing distances on the photograph to corresponding distances on a map of known scale. In this approach, some control points are first identified and the photo scale is computed as

Photo scale = (photo distance/map distance) × map scale    (7.4)

In the above procedure, care should be taken to select control points having nearly the same elevation, otherwise relief displacement would also affect the computed scale.

The scale of oblique photographs varies depending on the angle of inclination of the optic axis. Such photographs are first rectified, and only then can they be used for photogrammetric applications (for further details, the reader may refer to standard works in photogrammetry, e.g. Moffitt and Mikhail 1980; Slama 1981; Wolf 1983).

7.3.1.2 Measuring Distances and Areas (Two-Dimensional Photogrammetry)
The simplest device for measuring distances on photographs
is an interpreter’s scale. It has both black and white markings
for use on areas of light and dark photographic tones
respectively. The distance is computed as

Ground distance = photo distance × scale factor    (7.5)


Areas on photographs can be measured in several ways,
e.g. by using an overlay grid, a planimeter or a digitizing
table. In addition to these, there are other methods such as
using optical analog systems, equidensity contour films etc.,
which depend on density slicing and finding the area
occupied by a certain range of gray tone.
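The scale and distance relations of Eqs. (7.3)–(7.5) lend themselves to simple computation. The Python sketch below is purely illustrative; the camera focal length (152 mm), flying height (3040 m) and measured distances are assumed values chosen for the example.

```python
def photo_scale_from_camera(f, H):
    """Scale of a vertical photograph (Eq. 7.3): scale = f / H.
    f and H in the same units; returned as a ratio (0.00005 = 1:20,000)."""
    return f / H

def photo_scale_from_map(photo_dist, map_dist, map_scale):
    """Photo scale from control distances (Eq. 7.4):
    photo scale = (photo distance / map distance) * map scale."""
    return (photo_dist / map_dist) * map_scale

def ground_distance(photo_dist, scale):
    """Ground distance (Eq. 7.5): photo distance times the scale factor,
    i.e. photo distance divided by the scale ratio."""
    return photo_dist / scale

# A 152 mm camera flown 3040 m above flat ground gives a 1:20,000 photo
s = photo_scale_from_camera(f=0.152, H=3040.0)          # 0.00005 = 1:20,000
# A 4.2 cm distance measured on the photo then corresponds to:
print(ground_distance(photo_dist=0.042, scale=s), "m")  # 840.0 m
# Cross-check of the scale against the same distance on a 1:50,000 map
print(photo_scale_from_map(photo_dist=0.042, map_dist=0.0168,
                           map_scale=1/50000))          # 0.00005 = 1:20,000
```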

7.3.1.3 Relief Displacements and Three-Dimensional Photogrammetry
Objects of different heights photographed in a stereo pair are
shown in Fig. 7.11. The relative shift in the image position of a
point is called parallax. Its component in the X direction, i.e.
parallel to the flight line, can be used to deduce elevation
differences, and standard procedures for this are described in
several works (e.g. Wolf 1983) (also see Chap. 8).
Fig. 7.20 Geometric elements of a vertical photograph and terminology

Thus, all the X, Y and Z coordinates of objects can be measured from a photo-stereogram. The data can be applied

to compute dip, thickness of beds, displacement along faults, satellite orbits around the Earth, its position can be predicted
elevation differences, and to prepare maps of various types and accurately located; and (3) to cover a particular terrain,
(see Miller and Miller 1961; Ray 1965; Ricci 1982; Pandey fewer control points are needed in satellite photogrammetry
1987). For example, from the Metric Camera stereo pair of than in aerial photogrammetry. Satellite data—both photo-
an area in Iran, Bodechtel et al. (1985) generated dip data for graphic and scanner images—are fairly well suited for
structural analysis (see Fig. 19.26). two-dimensional (planimetric) photogrammetric applica-
tions. The scale of a satellite image is in general more
uniform in comparison to that of an aerial photograph, for
7.3.2 Measurements on Line-Scanner Images the simple reason that in satellite images relief displacements
are smaller, the platform is more stable, and angular
Although line-scanner images have been available for quite distortions are only minimal. For 3D measurements, digital
some time, their photogrammetric applications have been photogrammetry is used.
somewhat constrained, owing to the following three main
reasons.
References
1. Lack of central perspective geometry, which implies that
a true conventional stereogram is lacking Almer A, Raggarn J, Strobl D (1996) High precision geocoding of
2. Presence of intraframe distortions of systematic and remote sensing data of high relief terrain. In: Buchroithner MF
non-systematic types, which affect their geometric (ed) Proceedings of the international symposium on high mountain
fidelity remote sensing cartography, held at Schladming, Austria, Dresden
University of Technology, Dresden, 26–28 Sept 1990, pp 56–65
3. Photogrammetric methodologies developed during the Bodechte1 J, Kley M, Münzer U (1985) Tectonic analysis of typical
past some 80 years have focused on applications of fold structures in the Zagros Mountains, Iran, by the application of
camera photographic products which can be directly quantitative photogrammetric methods on Metric Camera data. In:
extended to digital camera products. The CCD line scan- Proceeding DFVLR-ESA Workshop Oberpfaffenhofen, ESA
SP-209, pp 193–197
ner stereo systems came onto the scene only in the mid Colwell RN (ed) (1983) Manual of remote sensing, vols. I, II, 2nd ed.
1980s. For handling line scanner images, the techniques of Am Soc Photo-gram, Falls Church, VA
digital photogrammetry have been developed and are now Gupta RP (2003) Remote sensing geology, 2nd ed. Springer, Berlin,
used for generating orthophotographs and digital eleva- 655 p
Lillesand TM, Kiefer RW, Chipman JW (2007) Remote sensing and
tion models (see Sect. 8.2.4). These products can be used image interpretation, 6th ed. Wiley, London
for all photogrammetric measurements, including, dis- Miller VC, Miller CF (1961) Photogeology. McGraw-Hill, New York
tances, areas, elevations, and 3-D photogrammetry. Moffitt FH, Mikhail EM (1980) Photogrammetry, 3rd edn. Harper &
Row, New York
Pandey SN (1987) Principles and applications of photogeology. Eastern
Wiley, New Delhi, p 366
7.3.3 Aerial Vis-à-Vis Satellite Photogrammetry Ray RG (1965) Aerial photographs in geologic interpretation and
mapping. USGS Prof Paper 373
Concepts and methods of photogrammetry developed Ricci M (1982) Dip determination in photogeology. Photogram Eng
around aerial vertical photography have been suitably Remote Sens 48:407–414
Slama CC (ed) (1981) Manual of photogrammetry, 4th edn. Am Soc
adapted and extended to satellite image data. Satellite pho- Photogram, Falls Church, VA
togrammetry is of interest for the following main reasons: Wolf PR (1983) Elements of Photogrammetry, 2nd edn. McGraw-Hill,
(1) satellite platforms are more stable in attitude; (2) as the New York
8 Digital Elevation Model

8.1 Introduction 8.2 Data Acquisition for Generating DTM

Cartographers have tried to represent the world in different It must be mentioned at the outset that the accuracy of input
ways of which paintings can be considered as the oldest data is of paramount importance and governs the accuracy of
form of maps to represent the Earth’s surface. Ancient maps the final DTM. Terrain data (X, Y Z coordinates of points)
employed semi-symbolic and semi-pictorial representation are required for generating DTM and these can be acquired
of the terrain surface and were of low metric quality. As the in different ways and different sources:
Earth’s surface has relief, the necessity of incorporating
height information into the maps was soon realized. Modern 1. Ground surveys
maps use well established mathematical basis and symbols 2. Digitization of topographic contour maps
for representing topographic and non-topographic data and 3. Conventional aerial photographic photogrammetry
possess high geometric fidelity. A common way of putting 4. Digital photogrammetry utilizing remote sensing image
height information on maps is by depicting topographic data
contours, which are imaginary lines of equal elevation. 5. UAV-borne digital camera
A DEM is the next generation digital method for depicting 6. Satellite SAR
the terrain. 7. Aerial LIDAR
A Digital Elevation Model (DEM) is an ordered array of
numbers that represents the spatial distribution of elevations
above an arbitrary datum. In principle, a DEM describes
elevations of various points in a given area in digital format. 8.2.1 Ground Surveys
The term DEM usually applies to land surface topography;
however, it is inherently a general term and can also be used Dedicated detailed topographic ground surveys can be taken
to depict spatial pattern of any surface such as groundwater for smaller target areas such as part of a city/landscape,
level or top/bottom of an aquifer, or any surface etc. reservoir sites, or dumping grounds etc., wherever specifi-
There are two other terms frequently used in the litera- cally required. It involves use of instruments like DGPS, GPS,
ture: digital terrain model (DTM) and digital surface model total stations and laser ranger to collect X, Y, Z, coordinate
(DSM). The term DTM is applied to DEM of the Earth point data of a part terrain in a gridded pattern. These points
terrain strictly, i.e. the bare ground; on the other hand, DSM can then be used to interpolate the terrain elevation and
includes objects on the ground such as buildings and trees generate the DEM. Such surveys may not be suitable for a
(Fig. 8.1). In the following discussion, the terms DTM and rugged terrain where there is much elevation variation. This
DEM are used interchangeably, as focus is solely on the technique has limitations of high cost and time and is labour
ground terrain. intensive. For smaller targets, dedicated UAV digital camera
For creating DEM, different types of data structures coverage can be more conveniently used.
have been used by different workers, viz. the line model,
triangulated irregular network (TIN) model and grid
models (see e.g. Meijerink et al. 1994; Li et al. 2005).
8.2.2 Digitization of Topographic Contour Maps
Here, we will confine the discussion to square grid net-
This has been the most common method of generating DEM.
works that are compatible with remote sensing data (raster
It involves digitization of topographic map with contour and
GIS).


Fig. 8.1 Terminology—digital terrain model and digital surface model

point elevation data and applying interpolation to generate an elevation raster data set. However, generally this type of DEM often has accuracy problems, as errors creep into the data set at various stages, viz. first the topographic map generation itself, then digitization, analogue/digital transformation, rasterization, interpolation etc. However, for old records and archive information, this may still be the only option in many cases.

8.2.3 Conventional Aerial Photographic Photogrammetry

A major leap in 3-D representation of the terrain came with the development of aerial stereo photogrammetry, i.e. the technique of making measurements on aerial stereo photographs. The subject of stereoscopy has been introduced in Sect. 7.2. Here we will discuss the basic concept of elevation measurement from stereo aerial photographs (for details refer to ASP 1980; Wolf 1983).

Parallax and height relationship
Figure 8.2 shows a schematic diagram where two successive vertical photographs (with lens centre positions L1 and L2) have imaged the common ground area with a tower A–B. H is the flying height above the lower point (B) and f is the focal length of the camera. The two corresponding equivalent positive planes are EPP1 and EPP2. The top A and base B of the tower are imaged at A1, B1 in EPP1 and at A2, B2 in EPP2.

By way of construction, draw lines parallel to L2-A2 and L2-B2 from L1 to intersect EPP1 at A2′ and B2′. The shift in image position of A is A1-A2′ (called parallax of A, pA) and that of B is B1-B2′ (called parallax of B, pB). The important characteristic is that parallax is related to elevation, i.e. the shift in the image position is related to height: the higher the elevation, the greater the parallax.

From similarity of triangles L1L2A and L1A1A2′, we have:

pA/f = B/(H − dh)    (8.1)

Similarly, from similarity of triangles L1L2B and L1B1B2′, we have:

pB/f = B/H    (8.2)

Rearranging the above equations, we have:

dh = H − (B · f)/pA    (8.3)

This implies that if H (flying height above the lower point), B (air base) and f (focal length) are known, elevation can be computed from parallax pA.

Further, simplification of Eq. 8.2 gives:

B · f = H · pB    (8.4)

From Eqs. 8.3 and 8.4:

dh = (H · dpAB)/pA    (8.5)

This implies that the elevation difference dh between two points can be obtained from the parallax difference (dpAB), if the flying height above the lower point (H) and the parallax of the higher point (pA) are known.

The above equations form the foundation of deriving elevation from stereo coverage. The technique has been applied extensively world-wide using earlier aerial photography and now satellite imaging.
Fig. 8.2 Schematic diagram to deduce the relationship between the height of objects and their image parallax from an aerial stereo photographic pair
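As a simple numerical illustration of Eqs. 8.3–8.5, the short Python sketch below computes the elevation difference between two points from their parallax difference; all the input values (flying height, focal length, air base and measured parallaxes) are hypothetical.

def elevation_difference(H, p_a, p_b):
    """Elevation difference dh between a higher point A and a lower point B (Eq. 8.5).

    H   : flying height above the lower point (m)
    p_a : parallax of the higher point A (e.g. in mm)
    p_b : parallax of the lower point B (same units as p_a)
    """
    dp_ab = p_a - p_b            # parallax difference dpAB
    return H * dp_ab / p_a       # Eq. 8.5

# Hypothetical example: H = 3000 m, f = 152 mm camera, air base B = 900 m
H, f, B = 3000.0, 152.0, 900.0
p_b = B * f / H                  # parallax of the lower point, Eq. 8.4 (= 45.6 mm)
p_a = 47.0                       # measured parallax of the higher point (mm)
dh = elevation_difference(H, p_a, p_b)
print(round(dh, 1), "m")         # ~89.4 m elevation difference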
Various types of sophisticated equipment, such as analytical stereo plotters, were developed to handle stereo aerial photography and generate high-quality data. As a general rule, height measurements in photogrammetric instruments could be made with an accuracy of about a 10,000th of the flying height with 60% overlap and a 152 mm camera, implying an accuracy of ~0.3 m with a flying height of 3000 m.

The change in position of the image of a point from one photograph to the next, parallel to the flight line, is called x-parallax and carries information on the elevation of the point. On the other hand, the change in position of a point from one photograph to the next, perpendicular to the flight line, is called y-parallax and is a noise or error, caused by platform instability, that needs to be removed or minimized.

8.2.4 Digital Photogrammetry Utilizing Remote Sensing Image Data

With developments in computer and digital imaging technology, the technique of digital photogrammetry has rapidly evolved during the last 3–4 decades (ASPRS 1996). It has now replaced the earlier form of conventional photogrammetry that was applied to aerial photographs, though the basic concept of deducing elevation from image parallax remains the same. By digital photogrammetry, digital stereo images of the optical wavelength range are processed to obtain elevation data in the form of DEMs, digital maps or orthophotos. The input digital images may be obtained from CCD line scanners (e.g. from SPOT, IRS, ASTER, Ikonos, WorldView, Cartosat, QuickBird etc.) or by scanning archive photographs.

Orientation and triangulation: These are the basic operations of photogrammetry for precisely defining all image points with geometric coherence. The purpose of orientation is to recover the geometric relationship between the object on the ground and the image. The process of triangulation or block triangulation establishes a mathematical relationship between the images, the camera or sensor model, and the ground. Some of the basic modules used to derive 3D coordinate information of objects from imagery include interior orientation, exterior orientation, and block triangulation (bundle block adjustment). Interior orientation defines the internal geometry of a camera (or sensor) as it existed at the time of image acquisition. Exterior orientation defines the position and angular orientation of the camera that acquired the image. After the interior and exterior orientations are defined, the next step is to establish the relationship between the image coordinate and the ground coordinate system, for which ground control points (GCPs) are used. The final step is triangulation, which generates ground coordinates of each and every image point. These steps finally yield a geometrically coherent and correct digital file containing the X, Y, Z coordinates of all the image points.

Stereo matching: After achieving geometric coherence of the image data as above, stereo matching in successive image pairs is carried out to determine the parallax shift of each pixel, which in turn is used to estimate the elevation of the pixel. For this, first the overlapping areas in the images are determined to establish a coarse match, followed by a more accurate point matching. There are several approaches to the problem:

(a) Feature-based matching: This involves matching of high-contrast features (lines, edges etc.) for correspondence analysis between the two images forming a pair.
(b) Area or template matching: This involves matching a part of the first image with a moving window in the second, using a cross-correlation function or minimum intensity differences to find the best match; from this the parallax shift is deduced (a simple sketch is given after this list).

It may be briefly mentioned here that the automatic process of stereo matching may be hindered, to some extent, by several factors such as differences in scene illumination, scene dynamics (vegetation, precipitation, snow etc.), atmosphere (cloud) and resampling of data.
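As an illustration of the area/template matching approach (b) above, the following Python sketch (pure NumPy; the stereo image arrays, window sizes and search range are hypothetical) finds the horizontal (x-parallax) shift of a small template from the left image within the right image by maximizing the normalized cross-correlation.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_parallax(left, right, row, col, half=7, max_shift=20):
    """Best x-shift (pixels) of the template around (row, col) of the left image
    when searched along the same row of the right image."""
    tpl = left[row - half:row + half + 1, col - half:col + half + 1]
    best_shift, best_score = 0, -1.0
    for s in range(-max_shift, max_shift + 1):
        c = col + s
        win = right[row - half:row + half + 1, c - half:c + half + 1]
        if win.shape != tpl.shape:
            continue                      # search window fell outside the image
        score = ncc(tpl, win)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift, best_score

# Hypothetical usage with two co-registered stereo images as float arrays:
# shift, score = match_parallax(left_img, right_img, row=512, col=640)
# The per-pixel shifts (parallaxes) are then converted to elevations (Eq. 8.5).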
8.2.5 UAV-Borne Digital Camera

Unmanned aerial vehicles (UAV) or mini-UAVs (mUAV) are fast emerging as new platforms for acquiring digital images with very high spatial resolution (a few centimetres) in a cost-competitive manner, as needed for smaller project areas, such as local erosion, sedimentation, settlement planning etc. (Nex and Remondino 2014). These digital camera images can also be handled by the same general methods of digital photogrammetry for making DEMs, as outlined above.

8.2.6 Satellite SAR Data

In general, ordinary SAR image data, though it carries information on slope, aspect and height of slopes, remains a marginal technique for elevation estimation, owing to the persistent radiometric ambiguity between terrain reflectivity, radar backscattering cross-section and incidence angle (Toutin and Gray 2000). Radar stereoscopy also has distinct limitations—the same-side repeat coverage has a limitation of small base distance, and the opposite-side repeat coverage suffers from radiometric disparity (see Sect. 16.3).

The principle of SAR interferometry (InSAR) has been discussed in detail in Chap. 17, and the InSAR technique has shown enormous potential to deliver high-resolution DEMs.
The data from the SRTM mission using InSAR has been used to generate DEMs world-wide (Rabus et al. 2003; Jarvis et al. 2008). A slightly coarser resolution DEM (resolution of 3 arc-sec, approx. 90 m spatial resolution) is available free of charge from NASA/USGS on a global basis. This data is reported to have a vertical error of less than 16 m. Besides, a higher resolution DEM from the SRTM (resolution of 1 arc-sec, 30 m spatial resolution) (https://lta.cr.usgs.gov/SRTM1Arc) is also available for a large part of the globe.

8.2.7 Aerial LIDAR

LIDAR provides a cost-effective and efficient method of obtaining dense and precise elevation data. It is a day-and-night active sensor and has been operated from both aerial and satellite platforms. It emits pulses of laser beams. The incident laser beam interacts with the ground surface and is scattered, and the backscattered signal is recorded by the sensor. The location and height of the instrument are precisely known by tracking the platform. The two-way travel time is used to calculate the distance or elevation of the ground object, and the intensity of the received signal can give an idea of the object attributes. The vertical accuracy of LIDAR is considered to be high. However, it has a rather lower planimetric accuracy than the vertical accuracy.

LIDAR can penetrate canopy. Even in a dense forest, the LIDAR pulses can travel to the ground and map the ground surface under the trees. Therefore, it can provide data on the height of trees, as it can sense the tree canopy as well as the ground. The high-density point elevation data, called a point cloud, is then used to generate the elevation grid or the DEM.

The best example of satellite altimetry is ICESat (Ice, Cloud, and land Elevation Satellite), which was a part of NASA's Earth Observing System. The satellite mission was designed for generating elevation data for ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation.

8.3 Orthorectification

An orthoimage is an image in which displacement occurring due to topographic variation, the photographic system, platform instability, or the earth's curvature etc. has been rectified. It results in planimetrically correct maps or images on which measurements of distances, areas and locations of features can be accurately made. Orthoimages or orthophotographs are generated from remote sensing image data. The procedure of generating digital orthoimages is well established. It uses digital stereo images which are draped on a DEM, and each pixel is orthogonally projected onto a flat plane, which yields the orthorectification.

8.4 Derivatives of DEM

A DEM can be used to generate several derivatives which throw valuable light on the terrain topography. A selected list is given in Table 8.1. Apart from these DEM derivatives, a DEM can be used for several other applications, such as drainage analysis, watershed analysis, determination of volume change between two surfaces, finding natural sinks in the surface, visibility analysis and contour generation.

Table 8.1 Selected DEM derivatives

Shaded relief
  Description: Provides the illumination condition of an area based on a user-specified position of the sun; the areas that are sun-lit and the areas in shadow are shaded.
  Use: Visualization of terrain, diurnal simulation of shaded area, snow melt simulation, etc.

Slope
  Description: Provides the steepness or the degree of inclination of a surface on a pixel basis.
  Use: Hydrology, geo-morphometric analysis, infrastructure planning, runoff simulation, etc.

Aspect
  Description: Depiction of the orientation of the slope. It is measured in a clockwise direction from 0 to 360 degrees, where 0 is north-facing, 90 is east-facing, 180 is south-facing, and 270 is west-facing.
  Use: Hydrology, geo-morphometric analysis, infrastructure planning, vegetation analysis, radiation budget analysis, etc.

Curvature
  Description: Curvature is the second-order derivative of a surface, or the slope of the slope; it provides the concavity or convexity of the surface. It is of two types: plane curvature and profile curvature. Plane curvature is perpendicular to the direction of the maximum slope, while profile curvature is parallel to the direction of maximum slope.
  Use: Identification of areas of rapid change in slope or aspect, e.g. sedimentary strata of variable erosion rate and surface drainages; improved visualization of geologic features; geo-morphometric analysis; hydrology, etc.

Viewshed
  Description: Provides the visibility to or from a particular location.
  Use: Commonly used in infrastructure development and many geological applications; also used in landing site selection for robotic probes during planetary missions.
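As a minimal illustration of how the first derivatives of Table 8.1 are obtained from a gridded DEM, the Python sketch below (NumPy only) derives slope, aspect and a shaded-relief image with simple finite differences. It assumes a hypothetical north-up square-grid DEM array; operational GIS packages use refined versions of the same idea.

import numpy as np

def dem_derivatives(dem, cell=30.0, sun_az=315.0, sun_elev=45.0):
    """Slope (deg), aspect (deg clockwise from north) and shaded relief (0-1)
    for a north-up DEM grid (rows increase southward) with square cells (m)."""
    dz_drow, dz_dcol = np.gradient(dem, cell)
    dz_dx = dz_dcol                         # gradient toward east
    dz_dy = -dz_drow                        # gradient toward north
    slope_r = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect_r = np.arctan2(-dz_dx, -dz_dy) % (2 * np.pi)   # downslope azimuth

    az, el = np.radians(sun_az), np.radians(sun_elev)
    shade = (np.sin(el) * np.cos(slope_r) +
             np.cos(el) * np.sin(slope_r) * np.cos(az - aspect_r))
    return np.degrees(slope_r), np.degrees(aspect_r), np.clip(shade, 0.0, 1.0)

# Hypothetical usage with a DEM read into a 2-D array of elevations (m), 30 m cells:
# slope, aspect, hs = dem_derivatives(dem, cell=30.0)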
Fig. 8.3 DEM and the corresponding shaded relief image of the Chota-Shigri glacier, Himalayas (courtesy: Reet Kamal Tiwari)

8.5 Geological Applications of DEM

There are numerous geological applications of DEM, such as the following:

• DEM, like topographic maps, serves as the primary database on which all other data can be co-registered to generate a GIS.
• DEM is used to visualize the terrain topography (Fig. 8.3).
• It can be used to visualize the distribution of various geological features (rock types, structures such as faults, shear zones, igneous intrusives, etc.) and relate the same with topography (hills, valleys, scarps, etc.).
• To visualize the distribution of various geochemical and geophysical anomalies in relation to topography and geological features.
• DEM can also be used to generate lineaments (e.g. see Fig. 19.71).

8.6 Global DEM Data Sources

As outlined above, DEM can be generated from various data sources, depending upon the scale and details required and data availability. However, in general, generation of DEM is a time-consuming and laborious process. Therefore, in many regional and global investigations, where the level of detail required is not high, the freely available global DEMs are frequently used. Some of the best examples of such DEMs are GTOPO30, GMTED2010, ASTER GDEM, SRTM, ICESat Global Land Surface Altimetry Data (GLA14) and ALOS World 3D (AW3D30). The resolution of these DEMs varies from 30 to 1 arc-sec, and their time span corresponds to the years 1996–2015. Apart from these freely available global elevation data, some high-precision commercial data may also be available, e.g. SPOT-DEM and World-DEM.

References

ASP-American Society of Photogrammetry (1980) Manual of photogrammetry, 4th edn. ASP, Falls Church, VA
ASPRS-American Society of Photogrammetry and Remote Sensing (1996) Digital photogrammetry: an addendum to the manual of photogrammetry. ASPRS, Bethesda, MD
Jarvis A, Reuter HI, Nelson A, Guevara E (2008) Hole-filled SRTM for the globe, version 4, available from the CGIAR-CSI SRTM 90 m database (http://srtm.csi.cgiar.org). Accessed 23rd January 2017
Li Z, Zhu Q, Gold C (2005) Digital terrain modelling: principles and methodology. CRC Press, Boca Raton, p 318
Meijerink AMJ, Brouwer HAN, Mannaerts CM, Valenzuela CR (1994) Introduction to the use of geographic information systems for practical hydrology. ITC Publ No. 23, Enschede
Nex F, Remondino F (2014) UAV for 3D mapping applications: a review. Appl Geomat 6:1–15. doi:10.1007/s12518-013-0120-x
Rabus B, Eineder M, Roth A, Bamler R (2003) The shuttle radar topography mission—a new class of digital elevation models acquired by spaceborne radar. ISPRS J Photogram Remote Sens 57:241–262
Wolf PR (1983) Elements of photogrammetry, 2nd edn. McGraw-Hill, New York
9 Image Quality and Principles of Interpretation

9.1 Image Quality

Image quality is a major factor governing the amount of information extractable from a remote sensing product. It is therefore necessary to first have an idea of the various factors affecting image quality before proceeding to interpretation and applications. Basically, there are two aspects to image quality—radiometric and geometric. Both collectively govern the amount of extractable information. Whereas the geometric aspects were discussed in Chap. 6, we shall focus our attention here on the radiometric aspects of image quality.

During visual interpretation, objects on an image or photograph are discerned from one another primarily by relative differences in brightness (tone) or colour. It is a common experience that a bright object can be easily marked if located against a dark background. However, the same bright object may be difficult to locate against a bright background (Fig. 9.1). Brightness contrast ratio, i.e. the ratio of brightness of any two objects occurring side by side on an image or photograph, is an important factor in deciding to what degree any two features can be differentiated from each other by visual inspection (Fig. 9.2). Sometimes the term contrast ratio is used to denote the ratio between the maximum and minimum brightness in any one scene. For this, however, we prefer the term dynamic range. In general, a remote sensing product displaying good dynamic range is said to be of good image quality, and an image with low dynamic range is termed flat or washed out. Digital processing techniques are available to enhance image contrast ratio and dynamic range in images (Sect. 13.5). Here, we shall discuss some of the more basic factors which govern the radiometric quality of images.

9.1.1 Factors Affecting Image Quality

The radiometric quality of photographs and images depends upon three main groups of factors: (1) ground properties, (2) environmental factors, and (3) sensor system factors (Table 9.1).

9.1.1.1 Ground Properties
Lateral variation in the relevant ground properties (namely spectral reflectivity, thermal properties such as thermal inertia, emissivity etc., as the case may be) across the scene influences the radiometric image quality. Detecting such variations in ground across the scene is the crux of the problem and fashions the scope of remote sensing applications.

9.1.1.2 Environmental Factors
1. Solar illumination and time of survey. Remote sensing data should be acquired at a time when energy conditions are stable and optimum for detecting differences in ground properties. In the solar reflection region, energy conditions depend on the azimuth and angle of the Sun's elevation and the time of survey. A noontime survey is generally preferred for obtaining images with uniform illumination and a minimum of shadow. In specific cases, however, a low-Sun-angle survey may be required, e.g. for enhancing structural-morphological features. Similarly, the thermal-IR survey is usually carried out in the pre-dawn hours, when ground temperatures are quite stable.
2. Path radiance. Path radiance works as a background signal and tends to reduce image contrast ratio (Fig. 9.3). In the solar reflection region, scattering is the major source of path radiance, and its effect can be minimized by cutting off shorter wavelengths during photography/imaging. In the thermal-IR region, the major cause of path radiance is atmospheric emission; its effect can be minimized by confining sensing to atmospheric windows. Furthermore, atmospheric–meteorological models such as MODTRAN may be used to estimate the magnitude of atmospheric emissions. Digital techniques are also available for reducing the effects of path radiance and improving the image quality (see Chap. 10).

Fig. 9.1 Role of image contrast ratio in visual discrimination. This Landsat MSS image covers a part of the Sahara desert. In the background are very light-toned sand dunes, and the presence of equally light-toned clouds is not readily detected on the image. The presence of dark shadows with cirrus-type structure and the absence of a dune pattern at some places in the image are the only indirect evidence of the existence of the clouds, such as at A (courtesy of R. Haydn)

Fig. 9.2 Gray wedge illustrating the importance of image contrast ratio for visual discrimination. The boundary between the horizontal and vertical bars is sharp when the contrast ratio is high, and becomes less distinct as the contrast ratio decreases. Note further that the horizontal bar of the same gray tone appears to the eye darker in the brighter background and brighter in the darker background

Table 9.1 Factors affecting radiometric quality of images and photographs
A. Ground/terrain properties: lateral variations in relevant ground properties such as albedo, thermal inertia, anomalous heat source etc., including effects of topography and slope aspect
B. Environmental factors: 1. solar illumination and time of survey; 2. path radiance; 3. meteorological factors
C. Sensor system factors: 1. effects of optical imaging, image detection and recording systems; 2. shading and vignetting; 3. image motion; 4. striping
(Adapted after Moik 1980; Sabins 1987)

3. Meteorological factors. Meteorological factors such as rain, wind, cloud cover etc. may significantly alter the ground properties and influence response in the optical region. For example, rain increases soil moisture, which alters ground albedo and thermal inertia. Clouds cast shadows and therefore alter the energy budget in the solar reflection and the thermal-IR region, and also lead to restricted or poor coverage. Wind accelerates cooling—a factor which affects thermal response, and may also cause aerial platform instability. Therefore, meteorological factors may affect the image quality in three ways: (a) by changing the local ground properties, (b) by altering the energy budget and (c) by leading to poor ground coverage and platform instability.

9.1.1.3 Sensor System Factors
1. Effect of optical imaging systems. The optical imaging components, namely lenses, mirrors, prisms etc., are not absolutely perfect but real, and therefore minor diffraction, aberrations etc. are present. However, their effects are quite negligible.
2. Shading and vignetting. As the view angle increases, the radiation intensity at the sensor surface decreases, theoretically as a function of cos4 or cos3 of the view angle; this variation is often termed 'shading' or 'cos4 fall-off' (Slater 1980). Vignetting is the absorption and blocking of more radiation by the lens wall, thus decreasing the radiation reaching the sensor. Normally, both shading and vignetting, in combination, produce significantly decreased image brightness towards the corners of images. To counter this effect, commonly an anti-vignetting filter is mounted on the optical lens system. Additionally, if the illumination fall-off characteristics of the lens system are known, digital techniques can be used to process and rectify the images. Yet another widely used technique is 'dodging', carried out during the printing stage (Fig. 9.4).
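As a simple sketch of the 'cos4 fall-off' mentioned above, the following Python fragment builds the theoretical cos4 shading surface for a frame camera and divides it out to flatten the brightness towards the image corners. The image array and camera geometry are hypothetical, and the additional lens-specific vignetting term is ignored.

import numpy as np

def cos4_correction(image, focal_len_px):
    """Divide out the theoretical cos^4 illumination fall-off.

    image        : 2-D array of brightness values
    focal_len_px : focal length expressed in pixel units
    """
    rows, cols = image.shape
    r = np.arange(rows) - (rows - 1) / 2.0
    c = np.arange(cols) - (cols - 1) / 2.0
    cc, rr = np.meshgrid(c, r)
    radius = np.hypot(rr, cc)                  # off-axis distance (pixels)
    theta = np.arctan(radius / focal_len_px)   # off-axis (view) angle
    falloff = np.cos(theta) ** 4               # relative illumination
    return image / falloff

# Hypothetical usage:
# corrected = cos4_correction(raw_image.astype(float), focal_len_px=4000.0)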
Fig. 9.3 Effect of path radiance on image quality. a Landsat MSS1 (green band) image and b Landsat MSS4 (near-IR band) of the same area in Venezuela; note that the path radiance in the MSS1 band image has led to obscuring of many details and a poor image contrast ratio (courtesy of R. Haydn)

Fig. 9.4 a Vignetting effect; the image is lighter in the central part and darker in the peripheral region. b Vignetting effect having been rectified by 'dodging' at the printing stage (courtesy of GAF mbH, Munich)

3. Image motion. The relative movement of the sensor platform with respect to the ground being imaged during the period of exposure or sensing leads to image motion. This results in the formation of streaks on the image. It is typically a problem in photographic systems, where exposure durations are relatively long (about 1/30–1/100 s), and forward motion compensation (FMC) devices have to be used for better results. In digital cameras, although the exposure duration is relatively short (about 1/100–1/500 s), FMC devices may still be necessary in order to achieve high spatial resolution.
4. Striping. When a series of detector elements is used for imaging a scene (e.g. in the case of Landsat TM or in CCD linear or area arrays), the radiometric response of all the detector elements may not be identical. This non-identical response causes striping, and can lead to a serious degradation in image quality (de-striping is discussed in Sect. 13.2.2).

It should also be borne in mind that the radiometric quality of images and photographs is subject to the efficiency of the duplicating system, i.e. the characteristics of the duplicating material and the photographic process (exposure, development and printing). Invariably there is a certain loss of information at each photographic regeneration stage, and the amount of extractable information from the daughter products usually decreases successively at each stage.

9.2 Handling of Photographs and Images

9.2.1 Indexing

Indexing provides the following inputs, which are helpful during interpretation: (1) location of the area; (2) orientation, i.e. north; (3) scale; (4) geometric distortion in the images (approximate); and (5) regional setting. The index map is useful, for example, in planning reconnaissance study, stereoscopic or detailed study, and for collecting ancillary information.

Indexing is the first step carried out during the study of photographs and images. It aims at identifying the whereabouts of the area portrayed on the photographs and images. Generally, a small-scale topographical map (smaller than the
photograph by a scale factor of 5–10) is used as a base map for indexing. Using control points, the area covered by each photograph or image is demarcated on the small-scale index map. Each image or photograph is assigned a certain index number, which can be used as an identification number. These days, web-based data indexing is possible.

9.2.2 Mosaic

A mosaic is a set of photographs (or images) arranged to facilitate a bird's-eye view of the entire area (Figs. 9.5 and 9.6). In a mosaic, adjacent photographs or images are arranged side by side so that there is no overlap, the adjacent boundaries match, and the features continue laterally. Often, photographs are cut and pasted together to generate a mosaic. Satellite images, such as those from Landsat, IRS etc., in general possess good geometric fidelity and are suitable for making reasonably scaled mosaics. These mosaics may exhibit sharp radiometric breaks at frame boundaries, particularly if the images pertain to different seasons or times (Fig. 9.5). The break in radiometric continuity can be removed by digital processing so that the radiometric levels of various objects in the two sets of image data match each other (Zobrist et al. 1983), and a larger homogeneous scene is generated (Fig. 9.6).

9.2.3 Scale Manipulation

The scale of photographs or images can be suitably altered by a variety of projection equipment, such as a photographic enlarger and a rectifier. A photographic enlarger is the simplest device to enlarge transparent images or photographs. A rectifier is a more refined piece of equipment and is used for geometric rectification of angular (tilt and tip) distortions.

9.2.4 Stereo Viewing

The photographs and images are studied stereoscopically for 2.5-D (earlier called 3-D) perception and interpretation (see Sect. 7.2).

9.2.5 False Colour Composites (FCCs)

The multispectral techniques of photography and imaging (scanning) provide black-and-white products of the same ground scene in a number of spectral bands. For interpretation purposes, these can be studied one-by-one or collectively. For combining multispectral data products in visual/analog mode, the additive or subtractive theory of colours is used (Appendix A).

As mentioned earlier, there are three primary additive colours (blue, green and red), and correspondingly there are three primary subtractive colours (yellow, magenta and cyan) (Appendix A). Colour films utilize the primary subtractive colour route. On the other hand, the primary additive colour model is used in the image display route for combining black-and-white multispectral images to generate FCCs.

At a time, only three spectral band images can be input for concurrent display in colour mode, one band in one primary additive colour. The basic principle is as follows: image data of different spectral bands are co-projected, one spectral band in one colour (blue or green or red) (Fig. 9.7). The optical combination produces a colour composite which carries information from all of the three input images.

Fig. 9.5 Simple mosaic


generated from Landsat MSS2
(red band) images. Note that the
mosaic generated from images
acquired in different seasons
exhibits a break in photo-tones
Fig. 9.6 A digitally processed mosaic generated from ALOS-PRISM image data of part of the Delong Mountains, Alaska; the radiometric levels in the two sets of image data have been processed to match each other to create a homogeneous mosaic; the two arrows indicate the line of stitching for generating the mosaic (courtesy: Anupma Prakash)

The collective information is rendered in terms of colour—in varying hue, saturation and brightness across the scene. This is called a false colour composite (FCC). A quite commonly applied colour coding scheme is the one analogous to colour infrared (CIR) film, viz. the green band image coded in blue colour, the red band image in green colour and the infrared band image in red colour; this is called a 'standard FCC' or 'CIR composite'.

FCCs can be generated by combining image data of different types, sources, temporal coverages, processing levels etc. FCCs are very widely used in digital processing and image interpretation, particularly when geologic identification and interpretation is sought by trial and error.

Image handling and processing techniques have undergone a dramatic change during the past three decades. At the beginning of the Landsat era, in the 1970s–80s, an additive colour viewer was a simple instrument used extensively for generating FCCs. Now, with the proliferation of PC-based processing devices, it has become obsolete. Miniaturization, along with mass production of low-cost computer systems, has changed the scenario in favour of digital processing systems.

Fig. 9.7 Working principle of generating an FCC. Images of three spectral bands are co-projected, one image in one colour (blue or green or red); the optical combination produces a composite carrying information from all of the three input images in colour mode
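A standard FCC of the kind described above is produced digitally by simply assigning three co-registered band images to the red, green and blue display channels. A minimal Python sketch (NumPy only; the band arrays and the percentile stretch are hypothetical choices) for a 'standard FCC'/CIR composite is given below.

import numpy as np

def stretch(band):
    """Linear 2-98 percentile stretch of one band to the 0-1 display range."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def standard_fcc(green, red, nir):
    """Standard FCC / CIR composite: NIR -> red, red -> green, green -> blue."""
    return np.dstack([stretch(nir), stretch(red), stretch(green)])

# Hypothetical usage with three co-registered band arrays:
# fcc = standard_fcc(band_green, band_red, band_nir)
# import matplotlib.pyplot as plt; plt.imshow(fcc); plt.show()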
9.3 Fundamentals of Interpretation

Photo-interpretation is the art and science of examining photographs to identify the objects portrayed on them and evaluate their significance. Principles of photo-interpretation were initially developed for aerial photographic work. They are now extended to all remote sensing visual data products, including processed and unprocessed images. The concepts of photo-interpretation have been well elucidated in various works on photo-interpretation and photogeology (e.g. Colwell 1960; Miller and Miller 1961; Ray 1965; Mekel 1978; Von Bandat 1983; Teng et al. 1997).

9.3.1 Elements of Photo-Interpretation

A photograph or image is studied in terms of the following parameters: (1) tone or colour, (2) texture, (3) shape, (4) size, (5) shadow, (6) site or association and (7) pattern.

1. Tone or colour. Tone is a measure of the relative brightness of an object in shades of gray. The term is used for each distinguishable shade from black to white, such as dark gray, medium gray, light gray etc. Tone is an important parameter of photo-interpretation and can be linked to various physical ground attributes, e.g. reflectivity in the solar reflection region, or radiant temperature in the TIR region. Colour products are obtained from colour photography or by colour coding of multispectral image data. The use of colour space dramatically increases the interpretability of data, as much subtler distinctions can be made. Appropriate terms are used to describe the colour, such as deep red, pink, light blue etc.
2. Texture. Texture signifies the tonal arrangement and changes in a photographic image. It is defined as the 'composite appearance presented by an aggregate of unit features too small to be individually distinct'. It is a product of their individual colour, tone, size, spacing, arrangements and shadow effects (Smith 1943). Texture is dependent on scale; the same group of objects could have different textures on different scales. Texture is more important on larger-scale photographs. Various terms can be used to describe texture, e.g. fine, medium, coarse, speckled, granular, mottled, banded, linear, blocky, woolly, criss-cross, rippled, smooth, even etc. Some examples are given in Fig. 9.8.
3. Shape. Shape refers to the outline of an object. Many geomorphological shapes are diagnostic, such as alluvial fans, sand dunes, ox-bow lakes, volcanic cones etc. Further, structures such as dolerite dyke ridges and bedding strike ridges can also be identified by morphological shape.
4. Size. The size of a feature is also a significant parameter in photo-interpretation. An idea of the size of an object can be obtained only after the scale of the image or photograph is known. Size, when considered in conjunction with shape and association, is a very useful parameter.

Fig. 9.8 Some typical photo textures (after Pandey 1987). Coarse texture: developed due to clustering of trees with rounded crowns in a large-scale photograph. Medium texture: developed due to bushy vegetation and scattered trees in a medium-scale photograph. Fine texture: developed due to bushy vegetation and scattered trees in a medium-scale photograph. Smooth texture: developed due to alluvial deposits in a desertic region in a medium-scale photograph. Rough texture: developed due to irregularly vegetated rough glacial terrain in a small-scale photograph. Rippled texture: developed due to water waves over a shallow water surface in a small-scale photograph. Mottled texture: developed due to a pitted outwash plain in a fluvio-glacial region in a medium-scale photograph. Speckled or granular texture: developed due to scattered vegetation over poorly bedded, loosely cemented sandstone in a small-scale photograph. Criss-cross texture: developed due to closely spaced intersecting joints in basic igneous rocks in a small-scale photograph
5. Shadow. The shadow cast by objects is at times quite informative, especially in the case of man-made objects. It gets more pronounced on low-Sun-angle images. This parameter should be considered together with shape and size, and the direction of illumination.
6. Site or association. Certain features are preferentially associated with each other, and this mutual association of objects is one of the most important guides in photo-interpretation; for example, extrusive rocks are associated with volcanic landforms such as cones, calderas, dykes, lava flows etc.; similarly, alluvial landforms include fans, meandering channels, ox-bow lakes, point bars etc.
7. Pattern. Pattern refers to the spatial arrangement of objects. It is an important parameter in photo-interpretation, due to the fact that a particular pattern may have genetic significance and could be diagnostic. Patterns can be formed by different types of objects, such as rock outcrops, drainage, streets, fields, soil types etc. A specific term can be used to describe the spatial arrangement of each of these, e.g. linear, radial, rectangular, annular, concentric, parallel, en-echelon, checkerboard; other suitable terms may also be used as needed. It is important to note that pattern depends upon scale. Units which may be visible individually on a larger scale may coalesce or merge into each other on a smaller scale, and thus a group of objects forming a pattern on a larger scale may have to be described under the term texture on a smaller-scale photograph.

The above elements of photo-interpretation are used to make observations on photographs and images. However, interpretations in terms of various physical attributes and phenomena have to be based on sound knowledge of the relevant scientific discipline. This becomes even more evident when we take into account the convergence of evidence approach. Convergence of evidence implies integrating all the evidence and interpretations gathered from different photo recognition elements, i.e. considering where all the evidence collectively leads. The approach of convergence of evidence is very important for accurate geological interpretation, and for this a sound knowledge of geology is necessary.

9.3.2 Geotechnical Elements

The elements of photo-interpretation are applied to study features on the Earth's surface such as landforms, drainage, vegetation, land use and soil. From the study and analysis of these surface features, which are referred to as geotechnical elements, significant information on lithology, structure, mineral occurrences and subsurface geology may be derived. In some photo-interpretation studies, any one of these geotechnical elements could itself form the objective of study.
1. Landform. The shape, pattern and association of some landform features can be helpful in identifying geological features. For example, sand dunes have a peculiar, typical pattern and shape, and are produced by wind action. Alluvial landforms such as ox-bow lakes, natural levees etc. are quite characteristic and typically produced by fluvial processes. Similarly, many marine landforms are distinctive in shape and pattern. Erosional landforms resulting in linear ridge-and-valley topography due to differential weathering are characteristic of alternating competent and incompetent horizons. Therefore, a systematic study of landforms is generally a pre-requisite in nearly all geological photo-interpretation studies.
2. Drainage. Drainage is one of the most important geotechnical elements for geological photo-interpretation. The study of drainage on photographs includes three aspects: (a) drainage texture, (b) valley shape and (c) drainage pattern.

The study of drainage texture comprises drainage density (ratio of the total stream length within a basin to the area of the basin) and drainage frequency (number of streams in a basin divided by the area of the basin). Drainage texture is primarily influenced by three factors: climate, relief and character of the bedrock or soil (i.e. porosity and permeability). Drainage density can be described as fine, medium or coarse. Drainage is said to be internal when few drainage lines are seen on the surface and drainage appears to be mostly sub-surface, e.g. commonly in limestones and gravels. External drainage refers to cases in which the drainage network is seen to be well developed on the surface. Low drainage density (coarse-textured drainage) implies porous and permeable rocks, such as gravels, sands and limestones. High drainage density (fine-textured drainage) implies impermeable lithology, such as clays, shales etc. Figure 9.9a illustrates coarse- and fine-textured drainage.

The shape of the valley may also vary and can give a good idea of the bedrock or soil. Figure 9.9b shows some typical types of valley cross-sections.

Fig. 9.9 a Drainage textures: i coarse and ii fine. b Typical valley cross-sections: i V-shaped, ii U-shaped and iii gently rounded
Short gullies with V-shaped cross-sections often develop in sands and gravels, whereas U-shaped gullies develop in silty soils. Long gullies with gently rounded cross-sections generally indicate clayey soils.

Drainage pattern is the spatial arrangement of streams. Drainage patterns are characteristic of soil, rock type or structure. Several authors have described drainage characters and classified them on the basis of genetic and geometric considerations (Zernitz 1932; Parvis 1950; Miller and Miller 1961; Howard 1967). Commonly, six drainage patterns have been considered as the basic types whose gross characteristics can be readily distinguished from one another. They are: dendritic, rectangular, parallel, trellis, radial and annular. A number of modified basic patterns have also been described (see Fig. 16.6). Their utility in geological interpretation is summarized in Table 16.1.
3. Soil. The operation and interaction of natural agencies of weathering and erosion on the bedrock produce soil. The physical nature of soil therefore depends on the bedrock material and the agencies of weathering.

Soils are classified as residual, transported or organic, depending upon their origin. On the basis of composition and physical characteristics, soils can be designated as clayey, loamy, silty, sandy, gravelly and combinations thereof. Broadly, they are called fine-textured, medium-textured or coarse-textured. Soils have characteristic hydrological properties, namely soil permeability and porosity, which govern the surface run-off vis-à-vis subsurface infiltration. Soils can be grouped as poorly drained, moderately drained, well drained and excessively drained. The coarse-textured soils, owing to their larger grain size, are invariably better drained than the fine-textured soils, in which infiltration of water is inhibited. These properties underlie the response of soils on photographs and images.
4. Vegetation. Vegetation in an area is controlled by climate, altitude, microclimate (local conditions), geological/soil factors and hydrological characteristics. The occurrence of plant associations in different climate and altitude conditions is well known (see e.g. Von Bandat 1983). Commonly, alignment or banding of vegetation is observed on remote sensing photographs, which may be related to lithological differences or structural features. The height, foliage, density, crown, vigour and plant associations depend on the soil-hydrogeological conditions present. Therefore, the tone of vegetation could be related to the bedrock, which may lead to broad vegetation bandings parallel to the lithology (see e.g. Fig. 19.27). In some cases, structural tectonic features such as faults, fractures and shear zones produce water seepage zones, along which vegetation may become aligned (Figs. 19.28 and 19.35). This alignment may be picked up on the remote sensing data, especially in semi-arid to arid areas with generally scant plant cover. Plants can thus reveal both structural and lithological features in a terrain. Recently, much work on geobotanical exploration using remote sensing data has been undertaken (Sect. 19.8.7).

References

Colwell RN (ed) (1960) Manual of photographic interpretation. Am Soc Photogramm, Falls Church, VA
Howard AD (1967) Drainage analysis in geological interpretation: a summation. Am Assoc Petrol Geol Bull 51:2246–2259
Mekel JFM (1978) ITC textbook of photo-interpretation, Chap VIII: The use of aerial photographs and other images in geological mapping. ITC, Enschede
Miller VC, Miller CF (1961) Photogeology. McGraw-Hill, New York
Moik JG (1980) Digital processing of remotely sensed images. NASA SP-431, US Govt Printing Office, Washington, DC
Pandey SN (1987) Principles and applications of photogeology. Eastern Wiley, New Delhi, p 366
Parvis M (1950) Drainage pattern significance in airphoto identification of soils and bed-rocks. Photogram Eng 16:387–409
Ray RG (1965) Aerial photographs in geologic interpretation and mapping. USGS Prof Paper 373
Sabins FF Jr (1987) Remote sensing principles and interpretation, 2nd edn. Freeman, San Francisco, 449 pp
Slater PN (1980) Remote sensing optics and optical systems. Addison Wesley, Reading, 575 p
Smith HTU (1943) Aerial photographs and their application. Appleton-Century, New York, p 372
Teng WL et al (1997) Fundamentals of photographic interpretation. In: Philipson WR (editor-in-chief) Manual of photographic interpretation, 2nd edn. Am Soc Photogram Remote Sens, Bethesda, MD, pp 49–113
Von Bandat HF (1983) Aerogeology. Gulf Publishing, Houston, Texas
Zernitz ER (1932) Drainage patterns and their significance. J Geol 40:498–521
Zobrist AL, Bryant NA, McLeod RG (1983) Technology for large digital mosaics of Landsat data. Photogram Eng Remote Sens 49:1325–1335
10 Atmospheric Corrections

10.1 Introduction

In this chapter we discuss the atmospheric corrections required in the optical part (0.3–14 µm) of the EM spectrum, comprising the VIS–NIR–SWIR–TIR wavelengths. As the EM radiation passes through the atmosphere, it undergoes modification in intensity due to atmospheric interaction, viz. selective scattering, absorption and emission (discussed in Sect. 2.3). The near-UV and visible wavelengths are significantly influenced by atmospheric scattering; the near-IR and SWIR wavelengths are practically free of atmospheric scattering effects but have some selective absorption bands; for the longer thermal-IR wavelengths, the atmosphere is dominantly absorptive at selected wavelengths. The image digital number (DN) obtained from remote sensing only depicts the relative brightness value of a target pixel.

The main objective of atmospheric corrections is to retrieve realistic surface reflectance or emittance values of a target from remotely sensed image data, by accounting for and removing the atmospheric influences on image radiometry. Converting the remotely sensed measurement into a realistic physical parameter such as surface reflectance or emissivity or temperature is useful in many respects, such as the following:

• The remote sensing measurements can be directly compared with ground-based measurements, providing greater validity to remote sensing technology and sanctity to its data.
• It simplifies satellite data inter-comparisons, such as for quantitative change detection evaluation.
• The various modelling studies these days need quantitative inputs from remotely sensed images, for which atmospheric corrections are quintessential.

In view of the varying interaction of the atmosphere with EM radiation (scattering or absorption or emission), the techniques and procedures of atmospheric correction have to be different, as per the above effects and requirements. Broadly speaking, atmospheric corrections can be carried out by empirical-statistical methods, by using complex radiative transfer models, or by hybrid methods; these are briefly discussed in this chapter.

10.2 Atmospheric Effects

10.2.1 Solar Reflection Region

In the solar reflection region, the path radiance caused by atmospheric scattering, particularly Rayleigh scattering, is the main concern. Selective absorption occurs due to ozone (0.35 µm), H2O-vapour (0.69, 0.72 and 0.76 µm) and CO2 (1.6, 2.005 and 2.055 µm), and the effects of atmospheric absorption are minimized by confining remote sensing to atmospheric windows. For a typical case of solar reflection sensing, the paths the radiation follows before reaching the sensor are shown in Fig. 10.1. These include the following:

Downwelling solar radiance. This refers to the direct incoming solar radiance that undergoes minimal attenuation before reaching the target that is under the field of view of the sensor (path 1). The radiance reaching the target is a function of the solar viewing angle and atmospheric transmittance, the radiance reaching the target being much higher in dry clear-sky conditions when atmospheric transmittance is close to 1.

Diffuse sky radiance. This is the component of the total solar radiance that is scattered in the atmosphere and largely enters the sensor's field of view without ever reaching the target (path 2). Some scattered and attenuated energy can, however, be directed to the target in the form of downwelling atmospheric radiance (path 3). Diffuse sky radiance is more pronounced in the shorter wavelength range (UV–blue), where Rayleigh scattering is dominant, and may be quite negligible in the near-IR–SWIR region.

Reflectance from adjacent areas. Some direct solar radiance and diffuse sky radiance reaches the areas adjacent to
the target and gets reflected into the sensor's field of view (path 4). A small amount of the radiance reflected off from the adjacent areas can also fall on the target area (path 5). Both path 4 and path 5 contribute to what is known as the adjacency effect. The radiance from adjacent areas leads to a decrease in the effective spatial resolution of the sensor, and also to reduced image contrast.

Fig. 10.1 Components of radiance reaching the sensor in the solar reflection region; 1 solar radiance reflected from the ground; 2 diffuse sky radiance scattered from the atmosphere; 3 diffuse sky radiance reflected from the target; 4 solar radiance reflected from the adjacent areas entering the sensor's IFOV; 5 a small amount of radiance reflected off from the adjacent areas falling on the target and getting reflected into the IFOV. Here, 1 + 3 + 5 constitute the ground radiance; and 2 + 4 constitute the path radiance

In any case, the diffuse sky radiance (path 2) and the radiance reflected from adjacent areas (path 4) that reach the sensor's field of view together comprise the path radiance (LP). Path radiance causes a reduction in image contrast due to the masking effect, as a result of which dark objects appear less dark and bright objects appear less bright on the image. All other radiance reaching the sensor (combination of paths 1, 3 and 5) constitutes the target radiance (LT). Path radiance and target radiance together constitute the total radiance reaching the sensor (LS). In a simple form, this can be expressed as:

LS = LT + LP    (10.1)

Expressing the same to show the relationship between the radiance received at the sensor and the target reflectance, we get:

LS = LD · ρ · τa + LP    (10.2)

where
LS = the total radiance received at the sensor,
LD = total downwelling radiance,
ρ = target reflectance,
τa = atmospheric transmittance,
LP = path radiance.
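Rearranging Eq. 10.2 gives the target reflectance as ρ = (LS − LP) / (LD · τa). The short Python sketch below applies this inversion to an at-sensor radiance image once the path radiance, downwelling radiance and transmittance for the band have been estimated; all the numerical values shown are hypothetical.

import numpy as np

def radiance_to_reflectance(L_s, L_p, L_d, tau_a):
    """Invert Eq. 10.2: reflectance = (Ls - Lp) / (Ld * tau_a).

    L_s   : at-sensor radiance image (2-D array)
    L_p   : path radiance for the band (scalar or array)
    L_d   : total downwelling radiance for the band
    tau_a : atmospheric transmittance (0-1)
    """
    rho = (L_s - L_p) / (L_d * tau_a)
    return np.clip(rho, 0.0, 1.0)

# Hypothetical single-band example (radiances in W m-2 sr-1 um-1):
# rho = radiance_to_reflectance(L_s=band_radiance, L_p=8.0, L_d=520.0, tau_a=0.85)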

Table 10.1 Overview of atmospheric correction procedures

1. Dark object subtraction — A simple and fast technique for correcting for the scattering effect. It assumes that clear open deep water bodies and very deep shadow zones ought to have zero reflectance in the IR-SWIR bands; therefore, the minimum DN value in all other shorter wavelength bands over such pixels is attributed to atmospheric path radiance and is subtracted from all other pixel values in the respective spectral bands.

2. Empirical line calibration — It requires field measurements of reflectance spectra for at least one bright target and one dark target; the at-sensor radiance of the corresponding bright and dark target pixels is derived from the remote sensing data; for each spectral band, a linear calibration relationship between the field and the image data is obtained; the intercept is attributed to atmospheric path radiance, and is subtracted from the whole scene.

3. Multiple image regression — It is used to normalize multi-temporal images by using pseudo-invariant features (PIFs); one image is selected to serve as the base image; the reflectances of corresponding PIFs on all multi-temporal images are successively plotted against the base image to develop regression equations; the transformed images have approximately the same radiometry scale as the base image.

4. Flat field calibration — It is generally used in hyperspectral remote sensing; a mean spectrum of a flat field (an area whose reflectance does not change with wavelength) is established to serve as a reference spectrum; reflectance spectra of all other pixels are derived by dividing the spectra of each pixel by the reference spectrum.

5. Internal average relative reflectance — Also used commonly in hyperspectral sensing, it is a modification of the flat field calibration method; the average spectrum of the entire image (internal average reflectance) is used as a reference spectrum; reflectance spectra of all other pixels are derived by dividing the spectra of each pixel by this averaged reference spectrum.

6. Radiative transfer modelling (RTM) — Radiative Transfer Models (RTMs) or codes provide absolute calibration of remote sensing image data by modelling the atmospheric conditions; various models such as LOWTRAN, 5S, SMAC, 6S and MODTRAN have been developed; besides, several commercial models such as ATCOR, FLAASH and HATCH are available.

7. Hybrid methods — Use combinations of RTM and empirical approaches for complementary advantages.
Thus, the basic task in all atmospheric correction approaches in the solar reflection region is to estimate and remove the additive component of path radiance, so as to reduce the at-sensor radiance value to the ground radiance value. In general, corrections may involve establishing empirical-statistical relationships between ground observations and sensor measurements, or may be done by sophisticated modelling that requires detailed knowledge of the atmosphere. Different levels of atmospheric correction can lead to different approximations of scene reflectance. Table 10.1 provides an overview of different atmospheric correction approaches.

10.2.2 Thermal IR Region

In the thermal-IR region, the atmosphere is mainly absorptive, with features being caused by H2O-vapour, O3 and CO2 (see Fig. 12.1). The absorption is nearly complete in some parts of the spectrum, e.g. between 5 and 7 µm. For terrestrial remote sensing, the thermal-IR atmospheric window (8–14 µm) is used.

Figure 10.2 shows a schematic of the radiation components in the thermal-IR region. The total signal Ls comprises three main radiation components, Lg, Lp and Lr:

Lg is the blackbody radiation emitted by the Earth reaching the sensor, the Earth's surface temperature being T and emissivity ε; if the atmospheric transmittance is τ, then Lg = τ · ε · LBB(T).

Lp is the thermal path radiance due to emitted and scattered radiance of the different layers of the atmosphere.

Lr is the reflected atmospheric radiation due to the downwelling atmospheric flux D; with the surface emissivity being ε (= absorptivity), the reflected component is equal to τ · (1 − ε) · (D/π). In the thermal-IR spectral range 8–12 µm, for most natural surfaces the emissivity ranges between 0.95 and 0.99; therefore, the reflected component (Lr) in this spectral range is very small and can be neglected.

Therefore, the total signal Ls can be considered as equal to (Lg + Lp). Hence, the ground (blackbody) radiance equals:

LBB(T) = (Ls − Lp) / (τ · ε)    (10.3)

In the thermal-IR region, the aerosol influence is strongly reduced. The path radiance (Lp) and transmittance (τ) are governed by H2O vapour and visibility, which can be determined from the co-registered concurrent solar reflection image data. Thus, with the surface emissivity (ε) being known or assumed, the temperature (T) can be computed.

Fig. 10.2 Components of radiance reaching the sensor in the thermal-IR region. The three main components are: Lg the blackbody radiation emitted by the Earth, with surface temperature T and emissivity ε; Lp thermal path radiance due to emitted and scattered radiance of the atmosphere; Lr reflected atmospheric radiation due to downwelling atmospheric flux D (Lr being negligible in the 8–12 µm atmospheric window)
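Following Eq. 10.3, the ground (blackbody) radiance can be recovered from the at-sensor thermal signal once Lp, τ and ε are known; a temperature can then be obtained by inverting the Planck function. The Python sketch below is a simplified single-band illustration: the use of one effective wavelength per band and all the numerical values are assumptions for illustration, not taken from this chapter.

import numpy as np

H_PLANCK = 6.626e-34   # J s
C_LIGHT  = 2.998e8     # m s-1
K_BOLTZ  = 1.381e-23   # J K-1

def ground_radiance(L_s, L_p, tau, emissivity):
    """Eq. 10.3: blackbody radiance of the ground, L_BB(T) = (Ls - Lp) / (tau * e)."""
    return (L_s - L_p) / (tau * emissivity)

def temperature_from_radiance(L_bb, wavelength_um):
    """Invert the Planck function at one effective wavelength.

    L_bb          : spectral radiance in W m-2 sr-1 um-1
    wavelength_um : effective band wavelength in micrometres
    """
    lam = wavelength_um * 1e-6          # to metres
    L = L_bb * 1e6                      # to W m-2 sr-1 m-1
    c1 = 2.0 * H_PLANCK * C_LIGHT ** 2
    c2 = H_PLANCK * C_LIGHT / K_BOLTZ
    return c2 / (lam * np.log(c1 / (lam ** 5 * L) + 1.0))   # kelvin

# Hypothetical single-pixel example at 11 um:
# L_bb = ground_radiance(L_s=9.2, L_p=0.6, tau=0.88, emissivity=0.97)
# T = temperature_from_radiance(L_bb, wavelength_um=11.0)   # ~303 K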
10.3 Procedures of Atmospheric Correction

10.3.1 Empirical-Statistical Methods

Several empirical-statistical methods have been developed over the years for atmospheric correction in the solar reflection region.
1. Dark-object subtraction method. This is a simple method that is fast to implement and therefore quite popular. The method is based on the assumption that the IR bands are essentially free of atmospheric effects and that, therefore, clear open deep water bodies and very deep shadow zones (which constitute the 'dark objects') will have no reflectance in the IR-SWIR bands (completely dark pixels; DN = 0) (e.g. Gordon 1978). The minimum DN value in all other shorter wavelength bands over such pixels is attributed to atmospheric effects (haze; path radiance) and is subtracted from all other pixel values in the respective spectral bands. The DN value to subtract from each band can either be the band minimum (also referred to as the histogram method; Fig. 10.3a), or an average based upon a user-defined region of interest, or a specific value. Alternatively, the user can compute TOA reflectance for each spectral band, and then subtract the TOA reflectance of the 'dark object' for the corresponding bands. Table 19.5 presents an example of the 'dark-object subtraction' method.

A variation of this is the scatterogram method (Fig. 10.3b) (Crippen 1987). DN values of a visible band (blue/green/red) are plotted against the near-IR band. The line of best fit will intercept the short-wavelength axis at a DN approximating the haze component. This DN value can then be subtracted from all the pixels to remove the haze component. Chavez (1988, 1996) suggested improvements to the 'dark-object subtraction' method by incorporating a relative scattering model. It is based on the fact that Rayleigh scattering is inversely proportional to the n-th power of the wavelength, the value of n varying with the atmospheric turbidity condition.
2. Empirical line calibration. The empirical line calibra-
tion (ELC) method attempts to calibrate the at-sensor radiance
with field measurements. It requires field-measurements of
reflectance spectra for at least one bright target and one dark
target (Farrand et al. 1994; Smith and Milton 1999). The
at-sensor radiance (Lƛ) of the corresponding bright and dark
target pixels is derived from the remote sensing data. For each
spectral band, the image-based spectral radiance (Lk) is
plotted against field-based spectral reflectance (Rk) Fig. 10.4 Empirical line calibration method

Fig. 10.3 Correction for atmospheric scattering by ‘dark-object’ subtraction method using a histogram, and b scatterogram
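As a concrete illustration of these two first-order corrections, the following minimal Python/NumPy sketch (not part of the original text; the array layout, function names and the use of the band minimum as the haze estimate are illustrative assumptions) applies dark-object subtraction band by band and fits the ELC gain and offset of Eq. (10.4) from paired target measurements:

    import numpy as np

    def dark_object_subtraction(image):
        # image: array of shape (bands, rows, cols) holding raw DN values
        corrected = image.astype(np.float32)
        for b in range(corrected.shape[0]):
            haze = corrected[b].min()            # DN of the 'dark object' in this band
            corrected[b] = np.clip(corrected[b] - haze, 0.0, None)
        return corrected

    def empirical_line_fit(target_radiance, target_reflectance):
        # Least-squares fit of reflectance = m * radiance - offset (Eq. 10.4)
        # from at least one bright and one dark calibration target (one band).
        m, c = np.polyfit(target_radiance, target_reflectance, 1)
        return m, -c                             # gain m and offset (path-radiance term)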

Key to the success of the ELC method is that the field reflectance spectra of the targets are measured synchronously with, or as close as practically possible to, the satellite overpass or the airborne data acquisition time. Though natural dark targets (e.g. deep water bodies) and bright targets (e.g. dry barren sandy areas) can be selected for field measurement, it is often difficult to find ideal dark and bright targets within the same scene in the area of interest. It is therefore common practice to set up dark (black) and bright (white) calibration panels at the field sites. To serve as good calibration targets, the panels need to be larger than the sensor's field of view to ensure that the at-sensor radiance is solely from a uniform target.

3. Multiple image regression method. This method is used to normalize multi-temporal images so that the corrected images appear as if obtained under the same atmospheric conditions and with the same sensor as the reference image (Hadjimitsis 2009). The method is based on selecting pseudo-invariant features (PIFs), whose spectral characteristics change very little over time (Schott et al. 1988; Schroeder et al. 2006). Ideally, PIFs must be large, smooth and flat (approximating a Lambertian reflector) and at the same elevation as the rest of the landcover in the image. Artificial structures, such as flat rooftops, and concrete and asphalt surfaces are good PIFs. Deep water bodies and flat bare soil exposures can also serve as good natural PIFs. The method works on the assumption that differences in PIF reflectance values on different-date images are due to varying atmospheric conditions.

Practically, one image from the multitemporal data set is selected to serve as a base image and PIFs are isolated on this image. Then, reflectance values of all the PIFs on the other-date images are successively plotted against the base image. The resulting regression equations are used to normalize the images. The additive component in the regression equation corrects the path radiance among dates, and the multiplicative term corrects for detector calibration, sun angle, earth–sun distance, atmospheric attenuation, and phase angle between dates. The transformed images have approximately the same radiometric scale as the base image and are easier to compare for change detection.

4. Flat field calibration method. The flat field calibration is generally used in hyperspectral remote sensing. A mean spectrum of a flat field (an area in the image whose reflectance does not change with wavelength) is established either from the image or from field measurement and serves as a reference spectrum. Reflectance spectra of all other pixels are derived by dividing the spectra of each pixel by the reference spectrum (Roberts et al. 1986).

5. Internal average relative reflectance method. When there is insufficient information about the study area to establish a flat field spectrum, a modification of the flat field calibration method can be used (Kruse 1988). The average spectrum of the entire image (internal average reflectance) is computed and used as a reference spectrum. Again, reflectance spectra of all other pixels are derived by dividing the spectra of each pixel by this averaged reference spectrum. Where surface cover with strong absorption features is present, correction by the IARR method could produce artifacts which may lead to erroneous interpretation.

10.3.2 Radiative Transfer Modelling Based Methods

Atmospheric correction based on Radiative Transfer Models (RTMs) or codes provides absolute calibration of remote sensing image data, in contrast to the relative calibration given by the empirical-statistical methods. Absolute calibration of optical data using any of the several existing RTMs converts scene-specific DN values to scaled surface reflectance values that can be compared to similarly scaled reflectance values at any other site (Gao et al. 1993).

Figure 10.5 shows a typical atmospheric transmittance curve. Major absorption bands due to H2O vapour are centred at 0.94, 1.14, 1.38 and 1.88 µm, O2 bands at 0.76 µm and CO2 bands near 2.01 and 2.08 µm. In addition, other gases such as ozone, carbon monoxide, nitrous oxide and methane produce absorption features in the 0.4–2.5 µm range. Atmospheric models include corrections for atmospheric effects, viz. absorption and scattering, as well as for illumination condition.

Fig. 10.5 Typical atmospheric transmittance curve; major absorption bands occur due to H2O vapour (centred at 0.94, 1.14, 1.38 and 1.88 µm), O2 (0.76 µm) and CO2 (near 2.01 and 2.08 µm) (Berk et al. 1989)
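The flat field and IARR normalizations amount to a simple band-wise division by a reference spectrum; a minimal sketch (illustrative only; the cube layout and the index arrays for the flat-field area are assumptions) could look like this:

    import numpy as np

    def flat_field(cube, rows, cols):
        # cube: hyperspectral data of shape (bands, rows, cols);
        # rows, cols: pixel indices of the user-selected 'flat field' area
        ref = cube[:, rows, cols].mean(axis=1)        # mean spectrum of the flat field
        return cube / ref[:, None, None]

    def iarr(cube):
        # Internal average relative reflectance: divide every pixel spectrum
        # by the scene-average spectrum
        ref = cube.reshape(cube.shape[0], -1).mean(axis=1)
        return cube / ref[:, None, None]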

Many radiative transfer codes have been developed. The more popular ones are: Simulation of the Satellite Signal in the Solar Spectrum (5S) (Tanre et al. 1990); Simplified Method for Atmospheric Corrections (SMAC), which uses 5S as the reference model (Rahman and Dedieu 1994); Second Simulation of the Satellite Signal in the Solar Spectrum (6S) (Vermote et al. 1997); and MODerate resolution atmospheric TRANsmission (MODTRAN) versions 5 and 6, developed and maintained by Spectral Sciences Incorporated and the US Air Force Research Laboratory (Berk et al. 2008, 2014). These models may be used to carry out atmospheric corrections in the entire optical range of the EM spectrum (UV-VIS through TIR) (e.g. Richter and Schläpfer 2016).

All these models take a similar general approach. They model the atmosphere as a layered medium whose conditions change from the ground upwards. The composition and vertical temperature–pressure profiles of the standard layered atmosphere control its radiative effects.

The gaseous composition of the atmosphere is dominated by 78% nitrogen (negligible radiative influence) and 21% oxygen (concentration varies with pressure), which are straightforward to model. The complexity in radiative modelling of the atmosphere comes primarily from the presence of water vapour and ozone, which are radiatively complex gases whose concentration varies spatially and with altitude. Atmospheric corrections based on a vertical atmospheric profile (radiosonde data) require the following four main input parameters as a function of height: temperature (K), pressure (mb), water vapour density (g m−3), and ozone density (g m−3). An additional input required, which varies locally, is the percent aerosol component (dust, soot etc.) and the aerosol optical depth in the visible region. Such atmospheric profile data ('atmospheric truth'!) need to be acquired at the time of image acquisition and are used to model the absorption and scattering characteristics of the atmosphere at that time and place.

Atmospheric profiles are available only for selected sites globally, where routine radiosonde measurements are recorded by meteorological stations. Atmospheric properties for other sites are difficult to acquire even when planned, and for most historic satellite data they are not available. In the absence of atmospheric profile data, the specific inputs required by RTMs can be substituted by generalized or standardized atmospheric inputs. Several general/standard atmospheric models (e.g. tropical, mid-latitude summer, mid-latitude winter, subarctic summer, subarctic winter) have been established to represent common atmospheric conditions for different regions of the world. Similarly, several standard aerosol models (e.g. continental, maritime, urban) can be used in place of specific aerosol concentration values. Local atmospheric visibility (in kilometres) at the time of image acquisition can be used to approximate the aerosol concentration and aerosol optical depth in the visible region. These atmospheric characteristics are then used to invert the remote sensing radiance to scaled surface reflectance.

The atmospheric water vapour can also be calculated from sensor data in the 0.94- and 1.14-µm bands. The estimated water vapour content and the solar and observational geometry data can then be used to simulate the transmission spectra of the mixed gases.

For absolute atmospheric correction and realistic estimation of ground reflectance values, knowledge of the image illumination geometry, the sensor spectral profile and, if possible, ground reflectance data is also essential. The illumination geometry and sensor spectral profile can be obtained from the image header information and the standard published values for the sensors provided by the satellite mission team. Common inputs in all models under this category include latitude and longitude of the scene, date and exact time of scene acquisition, image acquisition altitude, and mean elevation of the scene. The models also require the spectral band width of each input band and radiometrically calibrated image radiance data.

Currently, several open source and commercial software packages, such as ATREM, HATCH, FLAASH, ACORN, ATCOR etc., are available to carry out atmospheric corrections on remote sensing images. These packages are based on one of the many RTMs mentioned earlier, and allow the user either to directly input the radiosonde profile data, or to model the atmosphere based on the general inputs described above. Ben-Dor et al. (2004) provide a comparative assessment of the strengths and weaknesses of some of the techniques.

10.3.3 Hybrid Methods

Hybrid approaches use combinations of RTM and empirical approaches (Gao et al. 2009). The hybrid approach has the distinct advantage that the RTM provides good atmospheric correction for higher elevations, while the in situ measurements associated with some empirical methods help to minimize the residual errors associated with RTMs.

Figure 10.6 presents an example of path radiance correction of a blue band image (pre- and post-atmospheric-correction image set). The importance of path radiance correction in ratioing is discussed in Sect. 13.7.5. Atmospheric correction of thermal-IR data is further discussed in Sects. 12.4.2 and 12.5.2.
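Whichever RTM or package is used, the final inversion step from at-sensor radiance to scaled surface reflectance takes broadly the same form. The sketch below is a simplified illustration only, assuming a Lambertian surface and neglecting adjacency and spherical-albedo terms; all per-band quantities (path radiance, ground-to-sensor transmittance, global ground irradiance) are assumed to come from the RTM run and are not values from this book:

    import numpy as np

    def scaled_surface_reflectance(L_sensor, L_path, tau_view, E_ground):
        # Simplified single-layer inversion: rho = pi * (L - Lp) / (tau_v * E_g),
        # where E_g is the global (direct + diffuse) irradiance at the ground
        # and tau_v the ground-to-sensor transmittance.
        return np.pi * (L_sensor - L_path) / (tau_view * E_ground)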

Fig. 10.6 Path radiance correction by dark-object subtraction. a Landsat-7 ETM+ B1 (blue band) image of a part of Delhi (without path radiance correction); the prominent river is the Yamuna river flowing in the area; b the same image after path radiance correction by the dark-object subtraction method. Note that both (a) and (b) images are quite dark, (b) being still darker than (a) (because of subtraction of a component of radiance from each pixel). To peep inside the images, these have to be contrast-stretched (shown at c and d respectively). The image c shows the presence of an overall general brightness (a sort of blanket cover) attributed to atmospheric path radiance in the image, which vanishes in d after path radiance correction. Clearly, the DN values in the (b) image would correspond to the ground attributes rather than those in the (a) image. Also note that in the western part of the subscene there appears some haziness due to widely scattered thin cirrus clouds, which remains unrectified by DOS (a–d courtesy of Ashis K. Saha)

References

Aspinall RJ, Marcus WA, Boardman JW (2002) Considerations in collecting, processing, and analysing high spatial resolution hyperspectral data for environmental investigations. J Geograph Syst 4:15–29
Ben-Dor E, Kindel B, Goetz AFH (2004) Quality assessment of several methods to recover surface reflectance using synthetic imaging spectroscopy data. Remote Sens Environ 90:389–404
Berk A, Bernstein LS, Robertson DC (1989) MODTRAN: a moderate resolution model for LOWTRAN7. Tech Rep GL-TR-89-0122, Geophysics Laboratory, Bedford, Mass
Berk A, Anderson G, Acharya P, Shettle E (2008) MODTRAN 5.2.0.0 user's manual. Air Force Geophysics Laboratory, Hanscom AFB, MA, US
Berk A, Conforti P, Kennett R, Perkins T, Hawes F, van den Bosch J (2014) MODTRAN6: a major upgrade of the MODTRAN radiative transfer code. Proceedings SPIE 9088, Algorithms and technologies for multispectral, hyperspectral, and ultraspectral imagery XX, 90880H (13 June 2014). doi:10.1117/12.2050433
Chavez PS Jr (1988) An improved dark object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sens Environ 24:459–479
Chavez PS Jr (1996) Image-based atmospheric corrections revisited and improved. Photogramm Eng Remote Sens 62:1025–1036
Crippen RE (1987) The regression intersection method of adjusting image data for band ratioing. Int J Remote Sens 9:767–776
Farrand WH, Singer RB, Merényi E (1994) Retrieval of apparent surface reflectance from AVIRIS data—a comparison of empirical line, radiative-transfer and spectral mixture methods. Remote Sens Environ 47(3):311–321
Gao BC, Heidebrecht KB, Goetz AFH (1993) Derivation of scaled surface reflectances from AVIRIS data. Remote Sens Environ 44:165–178
Gao BC, Montes MJ, Davis OC, Goetz AFH (2009) Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens Environ 113:S17–S24
Gordon HR (1978) Removal of atmospheric effects from the satellite imagery of the oceans. Appl Opt 17:1631–1636
Hadjimitsis DG (2009) Aerosol optical thickness (AOT) retrieval over land using satellite image-based algorithm. Air Qual Atmos Health 2:89–97
Kruse FA (1988) Use of airborne imaging spectrometer data to map minerals associated with hydrothermally altered rocks in the northern Grapevine Mountains, Nevada and California. Remote Sens Environ 24:31–51
Rahman H, Dedieu G (1994) SMAC: a simplified method for the atmospheric correction of satellite measurements in the solar spectrum. Int J Remote Sens 15:123–143
Richter R, Schläpfer D (2016) Atmospheric/topographic correction for airborne imagery. ATCOR-4 user guide, version 7.1.0, Nov 2016
Roberts DA, Yamaguchi Y, Lyon R (1986) Comparison of various techniques for calibration of AIS data. In: Vane G, Goetz AFH (eds) Proceedings of the 2nd airborne imaging spectrometer data analysis workshop, JPL Publication 86-35, pp 21–30, Jet Propulsion Lab, Pasadena, CA

Schott JR, Salvaggio C, Volchok WJ (1988) Radiometric scene normalization using pseudoinvariant features. Remote Sens Environ 26(1):1–14
Schroeder TA, Cohen WB, Song C, Canty MJ, Yang Z (2006) Radiometric correction of multi-temporal Landsat data for characterization of early successional forest patterns in western Oregon. Remote Sens Environ 103:16–26
Smith GM, Milton EJ (1999) The use of the empirical line method to calibrate remotely sensed data to reflectance. Int J Remote Sens 20(13):2653–2662
Tanre D et al (1990) Description of a computer code to simulate the satellite signal in the solar spectrum: the 5S code. Int J Remote Sens 11:659–668
Vermote EF, Tanre D, Deuze JL, Herman M, Morcette JJ (1997) Second simulation of the satellite signal in the solar spectrum, 6S: an overview. IEEE Trans Geosci Remote Sens 35(3):675–686
11 Interpretation of Solar Reflection Data

11.1 Introduction

As stated earlier (Chap. 2), the EM spectral region extending from 0.3 µm to approximately 3 µm is the solar reflection (SOR) region in terrestrial remote sensing. The Sun is the only natural source of energy in this spectral range, and the solar radiation scattered by the Earth's surface is studied for ground object discrimination and mapping.

This spectral region has been the most intensively studied region for remote sensing of the Earth, for the following reasons.

1. Remote sensing has evolved primarily from the method of aerial photo interpretation, the use of which is confined to the visible and near-infrared parts of the solar reflection region. Even the earliest scanners and photo-detectors operated only in this part of the EM spectrum. Therefore, it is logical that the solar reflection range should have become the best investigated part of the EM spectrum.
2. The intensity of radiation available for sensing is highest in this region (Fig. 2.2); there is also a good atmospheric window (Fig. 2.4), permitting acquisition of good quality aerial and space-borne remote sensing data.
3. The region includes the visible spectrum, in which the response of objects has been easy to interpret in terms of directly observed objects and physical phenomena.

The interpretability and application potential of remote sensing data in the SOR region depend on image quality, which in turn is governed by a number of factors, grouped broadly under two categories: (1) sensor characteristics (discussed in Chaps. 4, 5 and 6) and (2) energy budget considerations (see Sect. 11.2).

The geometric quality of images and photographs was discussed in Chap. 7, and radiometric aspects in Chap. 9. Here, we first briefly review the parameters which govern the reflected radiance reaching the sensor, and then discuss the methodology for interpreting data in the solar reflection region. The emphasis in this chapter is on solar reflection multispectral remote sensing data (hyperspectral remote sensing is discussed in Chap. 14). Numerous application examples of solar reflection remote sensing data are given in Chap. 19.

11.2 Energy Budget Considerations for Sensing in the SOR Region

Figure 11.1 shows a schematic of energy flow in the solar reflection region. The Sun radiates EM energy, which illuminates the Earth's surface. As the radiation passes through the atmosphere, it gets modified due to atmospheric interactions (scattering and absorption). The radiation reflected from the Earth's surface again passes through the atmosphere, and again interacts with it, before being collected by an aerial or space-borne remote sensor.

The total radiance received at the sensor is dependent upon five main groups of factors: (1) attitude of the Sun, (2) atmospheric-meteorological conditions, (3) topographic slope and aspect, (4) sensor look angle and (5) ground target characteristics (Table 11.1).

11.2.1 Effect of Attitude of the Sun

The Sun is the only natural source of energy for sensing in this spectral range. The magnitude of the solar incident radiation reaching the Earth's surface is thus an extremely important factor and depends on the attitude of the Sun.

The Earth revolves around the Sun in a near-circular elliptical orbit. It is closest to the Sun in early January and farthest away in early July, although the variation in the Earth–Sun distance is found to have little impact on the intensity of solar radiation reaching the Earth (Nelson 1985).

The angle of incoming solar radiation is one of the most important factors in reflected solar energy. The inclination of the Sun's rays is a function of latitude, yearly season or day of the year, and local time of day.


Fig. 11.1 Scheme of energy flow in the solar reflection region

Table 11.1 Factors governing solar reflected energy reaching the sensor

1. Sun attitude. Secondary variables: time of the day; yearly season, day of the year; latitude; Earth–Sun distance. Comments: vary with time and day, but constant within a scene.
2. Atmospheric-meteorological factors. Secondary variables: composition of the atmosphere; H2O-vapour, CO2, O3 concentrations etc. leading to absorption; particulate and aerosol concentration leading to scattering and path radiance; relative humidity; cloud cover and rain. Comments: may vary within a scene, from place to place.
3. Topography and slope aspect. Secondary variables: landscape slope direction; landscape position; goniometric aspects. Comments: vary from place to place within a scene, depending on the Sun–local topography relation.
4. Sensor look angle. Secondary variables: sensor–target view angle. Comments: for space-borne systems, nearly constant within a scene; for aerial sensors, varies within the scene.
5. Target reflectance. Secondary variables: albedo of the objects; surface coating; surface texture affecting Lambertian vis-à-vis specular reflection pattern. Comments: to decipher this attribute and the relevant differences holds the clue in remote sensing.

The Sun's angle and direction can noticeably change the appearance of features on a scene (Fig. 11.2). Consequently, the season (or day) of the year is important; summer and winter images may bring out different features. The reflected radiance from the ground may vary at different times of the day.

A low Sun-angle setting enhances structural features in a direction perpendicular to the direction of the sunrays, owing to shadows. On the other hand, a very low Sun elevation may lead to an unduly reduced signal and extensive shadows. Therefore, image data sets with proper solar illumination conditions must be carefully chosen for optimum results.

Fixed Sun-angle transformation: The effect of differences in the Sun's angle of inclination/illumination in different scenes can be reduced by

Fig. 11.2 Landsat MSS4 (infrared) images of a part of the Himalayas. a Winter and b summer images show distinct differences in manifestation
of landform, drainage and geological/structural features due to change in the solar illumination condition

digitally transforming the image data to a fixed Sun angle. This is a first-order correction and may be necessary to allow comparison of spectral responses in different scenes or areas. For this, the azimuth direction is ignored; each DN value is multiplied by a factor derived from the Sun elevation angle, as follows:

DNnew = (m × DN + b) × cos z / cos θ    (11.1)

where DNnew is the new digital value, z is the fixed solar zenith angle, θ is the solar angle corresponding to the data to be rectified, DN is the digital number to be transformed, and m and b are linear constants. This computation provides DN values corrected for variation in Sun angle, and may be carried out during the pre-processing stage.

11.2.2 Effect of Atmospheric-Meteorological Conditions

Solar radiation, before being sensed by a remote sensor, has to travel twice through the atmosphere: first while incoming from the Sun, and then, after being back-scattered from the Earth's surface. In this process, it becomes modified through interaction with the atmosphere by scattering and absorption.

The atmospheric-meteorological conditions prevailing at the time of observation play an important role. Haze, aerosols and suspended particles in the atmosphere cause scattering and path radiance. Further, cloud cover blocks the solar radiation and casts shadows on the ground, limiting the effective ground area that can be sensed and possibly increasing the atmospheric path radiance to some extent. These factors need to be considered at the time of acquiring and interpreting image data. Digital processing methods for path radiance correction are discussed in Chap. 10 (also see Fig. 10.6).

11.2.3 Effect of Topographic Slope and Aspect

Uneven ground topography leads to a varying local angle of incidence. Topographic slope (direction and magnitude) vis-à-vis Sun angle is one of the most important single factors affecting the intensity of incident illumination and back-scattered radiation (Stohr and West 1985).

Most natural terrestrial surfaces behave as semi-diffuse reflectors, lying between an ideal specular reflector and a Lambertian reflector. The radiation is scattered in various directions, the intensity of back-scattered radiation being maximum in the direction corresponding to specular reflection (Fig. 11.3).

Topographic effects on reflected radiance data can be so severe, especially in areas of high relief, that correlation and extrapolation of photo-units on the basis of simple tonal signatures may be quite impossible. Topographic normalization can be helpful in this respect (see Sect. 11.5.5).

Fig. 11.3 Effect of topography on intensity of back-scattered radiation. Due to topographic orientation, the ground at A appears brighter than that at B, the images being acquired under a similar Sun angle and sensor configuration

11.2.4 Effect of Sensor Look Angle

Most remote sensors operate with the optical axis nominally vertical (nadir-looking). However, in some cases, such as for stereoscopic applications, oblique-looking optical systems are employed. As the look angle of the sensor (i.e. the angle made by the sensor's optical axis with the vertical) changes, the Sun-target-sensor goniometry also changes; this in turn changes the intensity of back-scattered radiation reaching the sensor.

Figure 11.4a shows a set of MOMS-02P images, which were acquired from the fore- and aft-looking cameras of the in-orbit stereo system. The images exhibit different radiometry, although the two were acquired barely about 15 s apart. The difference in radiometry is related to the Sun-target-sensor goniometry for the two images (Fig. 11.4b). With the Sun illuminating from one side, one image is more strongly illuminated (and exhibits greater contrast) than the other.

11.2.5 Effect of Target Reflectance

Solar radiation incident on the ground surface may be absorbed or back-scattered, as permitted by the spectral characters of the materials, discussed in detail in Chap. 3. The ratio of the intensity of reflected to incident energy is called albedo. Albedo values for common natural objects are listed in Table 11.2.

The depth of penetration of the radiation in the SOR region is of the order of barely 50 µm. Therefore, surface properties, such as surficial coatings, moss, algae, clays, weathered products, oxidation and leaching, are highly important in the solar reflection region.

Fig. 11.4 a Fore and aft camera images from MOMS-02P acquired barely about 15 s apart. b Schematic explanation of the difference in radiometry of the two images in a (a courtesy of DLR, Oberpfaffenhofen)

Table 11.2 Albedo of various surfaces (integral over the visible spectrum), in percent of reflected light intensity (after Schanda 1986)

General albedo of the Earth: total spectrum ~35; visible spectrum ~39
Clouds (stratus): <200 m thick 5–65; 200–1000 m thick 30–85
Snow, fresh fallen: 75–90
Snow, old: 45–70
Sand, "white": 35–40 (increasing towards red)
Soil, light (deserts): 25–30 (increasing towards red)
Soil, dark (arable): 5–15 (increasing towards red)
Grass fields: 5–30 (peaked at green)
Crops, green: 5–15 (peaked at green)
Forest: 5–10 (peaked at green)
Limestone: ~36
Granite: ~31
Volcanic lava (Etna): ~16
Water, by Sun's elevation: 90° 2; 60° 2.2; 30° 6; 20° 13.4; 10° 35.8; 5° ~60; <3° >90
Urban reflectance: ~6–20

The degree of homogeneity and the density of objects are other relevant variables. Surface texture influences the scattering characteristic of the target, i.e. the extent to which a surface will behave either as a specular surface or as a Lambertian surface.

With the knowledge that the above parameters influence the observed reflected radiance, the aim of remote sensing investigations is to distinguish between the various types of ground surfaces.

11.3 Acquisition and Processing of Solar Reflection Image Data

All types of sensors, including photographic cameras, opto-mechanical line scanners, CCD line scanners and digital cameras, have been employed to acquire data in the solar reflection region. These sensors have been described in Chaps. 4, 5 and 6.

The various sensors have been deployed from a variety of platforms. In the past, conventional black-and-white or colour photography from aerial platforms was the most common mode of acquiring data in the solar reflection region. Now, this task has been taken over by panchromatic and multispectral digital cameras. From space platforms, multispectral sensors (e.g. Landsat MSS and TM, SPOT-HRV and IRS-LISS, Terra-ASTER) have been extensively used. Lately, very high resolution image data from space platforms (e.g. IKONOS, QuickBird, GeoEye, Cartosat, WorldView etc.) have been finding applications.

Ground measurements aim to provide a reference base, including calibration and validation, necessary for reliable interpretations; these aspects have been briefly discussed in Sect. 1.6.

Processing of solar reflection image data may be aimed at carrying out the following:

• geometric and radiometric corrections,
• computation of images of some physical attributes, such as spectral radiance or reflectance,
• image enhancement, transformation, fusion and classification etc., or
• integration of remote sensing image data with other geodata sets.

11.4 Interpretation

11.4.1 Interpretation of Panchromatic Black-and-White Products

The technique of interpretation of panchromatic aerial photographs has been described in numerous standard publications (e.g. Miller and Miller 1961; Mekel 1978; Von Bandat 1983; Avery and Berlin 1985; Pandey 1987). Panchromatic image products have also been available from various space-borne sensors, though they may differ from each other in terms of spatial resolution (Table 11.3). However, as far as radiometric aspects are concerned, data products from all the panchromatic band sensors are quite alike; therefore, interpretation of products from these sensors must follow the same line of logic.

The methodology of interpretation utilizes elements of photo-recognition and geotechnical elements. Further, stereo viewing, which enables appraisal of relief, slope and 3-D morphology, is a special advantage which may be available in some cases (Fig. 11.5).

On broad-band panchromatic black-and-white photographs and images, snow appears bright white due to its high albedo. A deep and clear water body appears dark, as the solar radiation is either specularly reflected or penetrates the

Table 11.3 Comparison of spatial resolutions of selected spaceborne remote sensors (all data in m; columns: Panchromatic, Blue, Green, Red, Near-IR, SWIR-I, SWIR-II, Thermal-IR)

LANDSAT MSS: –, –, 79 × 79, 79 × 79, 79 × 79, –, –, –
LANDSAT TM: –, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 120 × 120
LANDSAT ETM+: 15 × 15, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 60 × 60
LANDSAT OLI/TIRS: 15 × 15, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 30 × 30, 100 × 100
TERRA ASTER: –, –, 15 × 15, 15 × 15, 15 × 15, 30 × 30, 30 × 30, 90 × 90
SPOT HRV-Pan: 10 × 10, –, 20 × 20, 20 × 20, 20 × 20, –, –, –
SPOT HRVIR-HR: –, –, –, 10 × 10, –, –, –, –
SPOT HRVIR-multi: –, –, 20 × 20, 20 × 20, 20 × 20, 20 × 20, –, –
SPOT HRG: 2.5 × 5, –, 10 × 10, 10 × 10, 10 × 10, 20 × 20, –, –
SPOT-HRS: 10 × 10, –, –, –, –, –, –, –
IRS LISS-1: –, 72 × 72, 72 × 72, 72 × 72, 72 × 72, –, –, –
IRS LISS-2: –, 36 × 36, 36 × 36, 36 × 36, 36 × 36, –, –, –
IRS LISS-3: –, –, 23 × 23, 23 × 23, 23 × 23, 70 × 70, –, –
IRS Pan: 5.8 × 5.8, –, –, –, –, –, –, –
Resourcesat LISS-3: –, –, 23 × 23, 23 × 23, 23 × 23, 23 × 23, –, –
Resourcesat LISS-4: –, –, 5.8 × 5.8, 5.8 × 5.8, 5.8 × 5.8, –, –, –
JERS-OPS: –, –, 18 × 24, 18 × 24, 18 × 24, 18 × 24, 18 × 24, –
DAICHI (ALOS) PRISM: 2.5 × 2.5, –, –, –, –, –, –, –
DAICHI (ALOS) AVNIR: –, 10 × 10, 10 × 10, 10 × 10, 10 × 10, –, –, –
CBERS: 20 × 20, 20 × 20, 20 × 20, 20 × 20, 20 × 20, –, –, –
CBERS IRMSS: 40 × 40, –, –, –, –, 40 × 40, 40 × 40, 80 × 80
CBERS-HRC: 2.5 × 2.5, –, –, –, –, –, –, –
CBERS PANMUX: 5 × 5, –, 10 × 10, 10 × 10, 10 × 10, –, –, –
CBERS MUXCAM: –, 20 × 20, 20 × 20, 20 × 20, 20 × 20, –, –, –
RapidEye: –, 5 × 5, 5 × 5, 5 × 5, 5 × 5, –, –, –
FORMOSAT: 2 × 2, 8 × 8, 8 × 8, 8 × 8, 8 × 8, –, –, –
KOMPSAT-2: 1 × 1, 4 × 4, 4 × 4, 4 × 4, 4 × 4, –, –, –
KOMPSAT-3: 0.5 × 0.5, 2 × 2, 2 × 2, 2 × 2, 2 × 2, –, –, –
EROS-A: 1.9 × 1.9, –, –, –, –, –, –, –
EROS-B: 0.7 × 0.7, –, –, –, –, –, –, –
IKONOS: 1 × 1, 4 × 4, 4 × 4, 4 × 4, 4 × 4, –, –, –
QuickBird-2: 0.6 × 0.6, 2.4 × 2.4, 2.4 × 2.4, 2.4 × 2.4, 2.4 × 2.4, –, –, –
CARTOSAT-1: 2.5 × 2.5, –, –, –, –, –, –, –
CARTOSAT-2: 0.6 × 0.6, 2 × 2, 2 × 2, 2 × 2, 2 × 2, –, –, –
GEOEYE-1: 0.4 × 0.4, –, 1.65 × 1.65, 1.65 × 1.65, 1.65 × 1.65, –, –, –
OrbView: 1 × 1, 4 × 4, 4 × 4, 4 × 4, 4 × 4, –, –, –
WorldView-1: 0.55 × 0.55, –, –, –, –, –, –, –
WorldView-2: 0.46 × 0.46, 1.85 × 1.85, 1.85 × 1.85, 1.85 × 1.85, 1.85 × 1.85, –, –, –
WorldView-3/-4: 0.31 × 0.31, 1.24 × 1.24, 1.24 × 1.24, 1.24 × 1.24, 1.24 × 1.24, –, –, –
SPOT-6/-7 Pan: 1.5 × 1.5, –, 6 × 6, 6 × 6, 6 × 6, –, –, –
Pleiades-1, -2: 0.5 × 0.5, –, 2 × 2, 2 × 2, 2 × 2, –, –, –

Fig. 11.5 a, b Metric camera stereo photo pair, French Alps; note the low Sun angle (Sun elevation 15°, azimuth 145°) (processed by DLR,
Oberpfaffenhofen)

Fig. 11.6 Multispectral a blue band, b green band, c red band and d near-IR band Landsat TM images of a part of Jharia coal field, India. Note
that atmospheric scattering and path radiance is highest in the blue band image and decreases successively in longer wavelength band images

water body to a limited depth and is absorbed, with little or no volume scattering. On the other hand, a turbid and shallow water body appears in shades of light gray to gray, due to volume scattering and bottom reflection. Occasionally the sensor, Sun and water surface may be in such a geometric configuration that specular reflection is received at the sensor; this is called sun glint. Vegetation appears dark gray to light gray, the actual tone at the site being a function of the density and type of vegetation. Various photo parameters, such as shape and size of crown, density of foliage, stage of development, time of year, shadow etc., can be used for identification of tree species, crop estimation and vegetation damage assessment. Soils appear in various shades of gray, which may be related to the type of soil and its origin.

On broad-band VNIR photographs and images, coarse-textured porous sandy soils of alluvial fans, natural levees and aeolian landforms are very light to almost white in tone; on the other hand, fine-textured clayey soils of backswamps, flood plains and lakes are medium to dark gray. Local variation in tone may occur due to moisture content, organic matter, relief or grain size of the soil. In general, soils in arid climates are lighter-toned, because of the lack of vegetation and surface moisture, than in humid climates. Calcareous soils generally give a medium tone on VNIR data and show

a pitted appearance or mottling due to variation in moisture content. Alkaline soils often show light tones, due to the presence of salts and scarce vegetation. Rock surfaces appear in shades of gray, the tone of the surface being dependent on the type of rock, its degree of weathering, soil cover, moisture and vegetation cover. Discrimination between rock groups could be based on converging evidence derived from a number of parameters, including landform, soil, vegetation, drainage and structure. A number of cultural features like cities, townships and settlements, roads and railway tracks can often be recognized on the photographs and images, as these are marked by contrasting shapes, outlines and patterns.

11.4.2 Interpretation of Multispectral Products

The spectral distribution of several sensors can be considered to be quite comparable (except for some SWIR bands in JERS-OPS and ASTER), although there are differences in terms of spatial resolution (Table 11.3). Interpretation of image data from all these sensors would follow a common line of argument, and for this reason they have been grouped together.

Spectral characteristics of the objects and the sensor wavelengths govern the response of objects in the different channels of the sensor. The multispectral data can therefore help discriminate and identify different types of objects, depending upon their spectral attributes (Tables 11.4 and 11.5).

Fresh snow appears bright white both in the VIS and NIR range (Fig. 11.7a). Melting snow and ice appear bright white in the visible range but darker in the NIR. In the SWIR, all snow and ice appear very dark due to strong absorption (Fig. 11.7b). Water exhibits different types of responses, depending upon its silt content and the depth of the water body (Fig. 11.8). Clear deep water bodies are dark in the VIS, NIR and SWIR ranges. Silted and shallow water bodies strongly reflect the shorter wavelengths and therefore appear light-toned on blue, green and even red bands, the brightness gradually decreasing towards longer wavelengths. In the NIR and SWIR, water bodies, whether shallow or deep, silted or clear, appear black.

Table 11.4 Salient response characteristics of the multispectral bands

Blue: very strong absorption by vegetation and Fe–O; good water penetration; high scattering by suspended particles (Fig. 11.6a)
Green: some vegetation reflectance; good water penetration; scattering by atmospheric particles (Fig. 11.6b)
Red: very strong absorption by vegetation; some water penetration; scattering by atmospheric particles (Fig. 11.6c)
Near-IR: high reflectance by vegetation and limonite; total absorption by water (Fig. 11.6d)
SWIR-I: generally higher reflectance; insensitive to moisture contained in vegetation or to hydroxyl-bearing minerals; absorption by water and snow (Fig. 11.7b)
SWIR-II: high absorption by hydroxyl-bearing minerals, carbonates, hydrous minerals, vegetation leaves, water and snow (Fig. 11.9b)

Table 11.5 Response of common objects on multispectral bands (tones listed in the order Blue, Green, Red, Near-IR, SWIR-I, SWIR-II; the last entry is the colour on a standard FCC)

Forest, deciduous: dark gray, dark, very dark, light, light, dark gray; deep red, bright
Forest, defoliated: light gray, light gray, med. gray to light gray, darker tone, light, light; grayish to brownish red
Cropland: gray, gray, med. gray, light gray, light gray, light; pinkish red
Water, clear and deep: dark, dark, black, black, black, black; black
Water, silty and shallow: light, light, gray, black, black, black; bluish
Soil, fallow fields: light, light, light gray, light gray, light, darker; pale yellowish
Soil, moist ground: light gray, light gray, gray, very dark to black, dark gray, dark; cyanish-light grayish
Snow, fresh: white, white, white, white, very dark, very dark; white
Snow, melting/ice: white, white, white, very dark, very dark, very dark; cyanish white
Urban/industrial areas: light gray, light, gray, darker, gray, gray; bluish gray, mottled
Rocky terrain (bare): lighter, lighter, gray, gray, gray to dark colour

Fig. 11.7 a Near-IR band and b SWIR band images (ASTER) of a part of the Himalayas. Note that both cloud and snow appear bright white due to high reflectance on the near-IR band image; on the other hand, on the SWIR band image, cloud appears bright white but snow appears dark due to strong absorption

Fig. 11.8 Images in a green, b red and c near-IR bands of a part of the Gangetic plains, exhibiting differences in spectral response of various
objects (IRS-LISS III sensor). W = water body; s = sand; v = vegetation (crop); f = fallow fields

Forests in general appear medium dark in the visible and bright in the NIR. In the SWIR-I they appear bright, and in the SWIR-II again dark. Coniferous forests reflect less strongly in the NIR than deciduous forests. Defoliated forests appear brighter in the visible and darker in the NIR, owing to the absence of leaves. The response over defoliated forests also depends on the type of soil/bedrock. Cropland has generally medium density of leaves and vegetation and

Fig. 11.9 a Near-IR and b SWIR-II band images (Landsat TM data of part of Khetri copper belt, India); arrow indicates the area of hydroxyl-bearing alteration zone in b

Fig. 11.10 Coding of multispectral images of a green, b red and c near-IR bands in blue (B), green (G) and red (R) colours respectively, to generate a standard FCC or CIR composite d. Various objects A, B, C, D and E have different spectral characteristics and appear in correspondingly different colours (for details see text and Fig. 11.11)

therefore appears medium gray in the visible channels (blue, green, red) and light in the NIR (Fig. 11.8) and SWIR. The response of cropland in the VIS, NIR and SWIR ranges is a mixture of that of vegetation and soil. Cropland may also be marked by a characteristic field pattern, which may be observed at suitable scales. Soils and fallow fields (dry) are light in the visible and medium gray in the NIR and SWIR bands. Moist ground is medium gray in the visible but very dark in the NIR and SWIR. Rocky terrain (bare) is usually brighter in the visible than in the NIR and SWIR, and is characterized by peculiar landform and structure. Limonite (iron oxide) exhibits strong absorption towards the UV-blue and is therefore very dark in the blue-green bands, and light-toned in the red, NIR and SWIR bands. Clays and alteration zones, on the other hand, are light-toned in the visible, NIR and SWIR-I ranges, but are very dark in the SWIR-II range, due to strong absorption by the hydroxyl group of anions (Fig. 11.9).

The multispectral data from space-borne sensors have opened up vast opportunities for mapping and monitoring of surface features and for geological exploration. These aspects are discussed in detail in Chap. 19.

11.4.3 Interpretation of Colour Products

11.4.3.1 Standard FCCs and CIR Film
The false colour composite (FCC) technique is extensively used for combining multispectral images through colour coding (see Sect. 9.2.5). In addition, colour infrared photographs have also been acquired earlier from aerial platforms and selected space missions. Interpretation of colour infrared film and standard FCCs follows a common line of argument, since the two are generated through a similar scheme of falsification of colours (i.e. coding spectral bands into colours):

• response in blue wavelength: cut off
• response in green wavelength: shown in blue colour
• response in red wavelength: shown in green colour
• response in NIR wavelength: shown in red colour.

The responses of common objects such as forests, cropland, water bodies, snow, bare ground etc. on standard FCC/CIR film are also given in Table 11.5.

Before proceeding to interpretation, it is important to recall the methodology of generating an FCC (Fig. 9.7). As an example here, a set of multispectral images of green, red and near-IR bands are projected in blue, green and red colours, respectively (Fig. 11.10a, b, c). (This task is performed by selecting the blue, green and red planes for displaying the green band, red band and near-IR spectral images respectively, on a colour monitor. As the colour display planes are usually selected in the order red, green and blue, the term RGB is commonly used.) Figure 11.10d shows the standard FCC so generated for the area. Note the following features.

A: Bare sandy soil/fallow fields appear very light to light gray, due to near-equal reflectance in all the bands.
B: Vegetation appears deep red, due to high reflectance in the near-IR band, which is coded in red colour.
C: The deeper water body is deep blue, due to some reflectance in the green band (coded in blue) and absorption in the other bands.
D: The silted water body appears light cyanish due to reflectance in the green and red bands (coded in blue and green respectively) and absorption in the NIR band (coded in red).
E: The township is light cyanish due to higher reflectance in the green and red bands (coded in blue and green respectively) and low reflectance in the NIR band due to the absence of vegetation (coded in red). It is also marked by a characteristic reticulate texture.

Fig. 11.11 RGB colour ternary diagram showing plots of colour features A, B, C, D and E in Fig. 11.10
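The colour-coding scheme of Fig. 11.10 can be reproduced digitally in a few lines. The sketch below is illustrative only (the band arrays and the percentile stretch are assumptions); it assembles a standard FCC by sending the near-IR, red and green bands to the red, green and blue display planes respectively:

    import numpy as np

    def standard_fcc(green, red, nir):
        def stretch(band):
            lo, hi = np.percentile(band, (2, 98))     # simple linear contrast stretch
            return np.clip((band - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
        # display planes in R, G, B order: R <- near-IR, G <- red, B <- green
        return np.dstack([stretch(nir), stretch(red), stretch(green)])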

The positions of the different colours corresponding to A, B, C, D and E have been located in the RGB ternary diagram (Fig. 11.11), which yields the relative contributions of the three end members at the features. The information is indicative of the spectral characteristics of the objects, from which they can be differentiated and identified.

11.4.3.2 Other Colour Displays
The approach using an RGB ternary diagram, as presented above, is a practical means of interpreting all types of colour displays, including various types of (non-standard) FCCs and colour composites, e.g. those generated from ratio images, thermal images, and other spatial data sets.

11.5 Computation of Reflectance

Reflectance is a measure of the percentage of light reflected from a given target. It is defined as the ratio of the upward flux reflected from the surface to the total incoming flux impinging onto the surface. In remote sensing, it forms one of the most important physical attributes of materials. Reflectance properties of materials are highly variable and angle-dependent, and are ideally given by the BRDF (bi-directional reflectance distribution function), which conceptually describes reflectance from a surface for all possible angles and directions of incidence combined with all possible angles and directions of exitance (observation). The BRDF is usually unknown and hardly determinable, and therefore directional reflectance is generally used.

Quantitative estimation of reflectance may be required for various purposes, such as comparison of realistic reflectance characteristics across a scene, or as input to various modelling exercises, e.g. estimation of thermal inertia, physical atmospheric-meteorological studies, global heat balance etc.

Reflectance values can be computed for broad-band sensor data or for multispectral/hyperspectral sensor data, the computational procedure being the same. Before commencing with reflectance estimation, it is first necessary to carry out the pertinent basic corrections, including geometric rectification, registration, radiometric correction and atmospheric correction. If these basic corrections are not implemented on the remote sensing data first, one may end up with erroneous computed reflectance values.

11.5.1 Spectral Radiance

Conversion of satellite sensor DN values to spectral radiance is invariably the first step in all such exercises. After this, the spectral radiance can be converted to reflectance in the case of solar reflection sensing, as discussed below (or to temperature, in the case of thermal sensing, see Sect. 12.4.1).

The DN values can be converted to spectral radiance by using the formulation (Markham and Barker 1986; Chander et al. 2009):

Lλ = Lmin(λ) + [(Lmax(λ) − Lmin(λ)) / (Qcalmax − Qcalmin)] × (Qcal − Qcalmin)    (11.2)

where Lλ is the spectral radiance received by the sensor for the pixel in question, Lmin(λ) is the minimum spectral radiance detected by the sensor, Lmax(λ) is the maximum spectral radiance detected by the sensor, Qcalmin is the minimum grey level, Qcalmax is the maximum grey level, and Qcal is the grey level of the analysed pixel.

In a simple way, this can also be written as:

Lλ = Grescale × Qcal + Brescale    (11.3)

where Grescale is the gain (multiplicative) rescaling factor, Brescale is the bias (additive) rescaling factor, and Qcal is the pixel value in question. The rescaling factors may be obtained from the metadata file, from the satellite operating agency, or from publications (e.g. for sensors of the Landsat series, the rescaling data are comprehensively provided in Chander et al. 2009). (Note: spectral radiance is given in units of W/m2/sr/µm or mW/cm2/sr/µm, the two being inter-related by a factor of 10.)

11.5.2 Top of the Atmosphere (TOA) Reflectance

Top of the atmosphere (TOA) reflectance (also known as exoatmospheric reflectance or planetary reflectance) is an important related parameter frequently used in remote sensing. It is a useful quantitative parameter that can be relatively easily calculated while ignoring the atmospheric effects. It is computed by assuming that the solar radiation is incident on a Lambertian ground target and is derived from spectral radiance data by accounting for the strength of the incoming solar radiation and the general angle of incidence of radiation at the time of overpass, using the following relation (Chander et al. 2009):

ρλ = (π × Lλ × d²) / (ESUNλ × sin α)    (11.4)

where ρλ is the planetary TOA reflectance (unitless), Lλ is the spectral radiance at the sensor's aperture (W/m2/sr/µm), d is the Earth–Sun distance (astronomical units), ESUNλ is the mean exoatmospheric solar irradiance (W/m2/µm), and α is the solar elevation angle (90° minus the solar zenith angle).
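Equations (11.2) to (11.4) translate directly into code. The sketch below is illustrative only; the function and variable names are assumptions, and the gain/bias, ESUN and Earth–Sun distance values must be taken from the sensor metadata or from published tables (e.g. Chander et al. 2009):

    import numpy as np

    def dn_to_radiance(qcal, lmin, lmax, qcal_min, qcal_max):
        # Eq. (11.2): at-sensor spectral radiance from the calibrated DN (Qcal)
        return lmin + (lmax - lmin) / (qcal_max - qcal_min) * (qcal - qcal_min)

    def toa_reflectance(radiance, esun, d_au, sun_elev_deg):
        # Eq. (11.4): unitless planetary (TOA) reflectance
        return np.pi * radiance * d_au ** 2 / (esun * np.sin(np.deg2rad(sun_elev_deg)))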

TOA reflectance is the ratio of the spectral radiance measured at the sensor to the solar irradiance incident at the top of the atmosphere, and is expressed as a decimal fraction between 0 and 1. It provides an image with values understandable in physical terms (Robinove 1982). There are two main advantages to this normalization process: (1) the effect of different solar zenith angles due to the time difference between data acquisitions is removed, and (2) it compensates for the different values of the exoatmospheric solar irradiance arising from spectral band differences. TOA reflectance values are therefore good for inter-comparison of data from different times and different sensors.

11.5.3 Target Irradiance in the Solar Reflection Region in an Undulating Terrain

Radiance reaching the sensor depends upon the target reflectance and the solar irradiance on the target. Reflectance (ρ) is defined as the ratio of the upward flux reflected from the surface to the total incoming flux impinging onto the surface. The effective magnitude of the solar irradiance impinging on a sloping surface is highly dependent on the orientation of the surface in relation to the Sun. The total solar irradiance on a sloping ground surface comprises three components: (a) direct solar irradiance, (b) diffuse irradiance and (c) terrain irradiance (Fig. 11.12) (e.g. Duguay and LeDrew 1992; Gratton et al. 1993; Sandmeier and Itten 1997).

The direct solar irradiance makes up the largest amount; however, in shadows it is absent, and at such places only the diffuse and terrain irradiances may be present (Fig. 11.12). The diffuse irradiance is largely due to skylight and is a function of the portion of the sky hemisphere not obstructed by topography (sky view factor). The terrain irradiance accounts for the irradiance arising from the neighbouring terrain, i.e. illumination of the target by (cross-) reflections from the adjacent terrain. This is generally quite small in magnitude but may be relatively important in the case of deep snow-covered valleys.

11.5.4 Influence of Topography on Solar Reflection Image Data

Effects of surface topography are particularly marked on remote sensing images in the solar reflection region. The magnitude of the topographic effect on the image may vary depending upon the sensor characteristics, its orientation etc., and the time of data acquisition. For example, pronounced effects of topography are typically exhibited on images of mountainous terrain in the solar reflection region acquired under oblique solar illumination conditions (see Fig. 11.13a).

Local topography may influence the solar reflected radiance as recorded by a remote sensor in two ways: (a) it leads to differential solar illumination (i.e. some slopes receive more sunlight than others), and (b) the goniometric geometry of Sun-target-sensor is largely influenced by the target surface

Fig. 11.12 Schematic showing the total solar irradiance on a sloping surface being composed of three components: direct solar irradiance, diffuse (skylight) irradiance and terrain (cross-reflection) irradiance

Fig. 11.13 Topographic correction of solar reflection image. a Original ASTER NIR band image (Himalayan region); note the deep shadow zones indicated by the arrows (dark areas); b image corrected by the cosine correction method; note the overcorrection in shadow regions; c image corrected by the modified cosine correction method; d image corrected by the C-correction method (for details see text) (courtesy of T. Rajpurohit)

orientation (i.e. topographic slope), and this, along with the BRDF (bidirectional reflectance distribution function), controls the magnitude of reflected radiance reaching the sensor. These factors result in different reflected intensity values from different slope surfaces, even though the surface reflectance may be the same. Therefore, for comparing multi-temporal image data sets, the raw digital numbers (DN values) cannot be used directly, since they include effects arising from topography as well as from atmospheric interferences. For this purpose, various corrections, including for topography, need to be implemented for any meaningful comparative analysis and data integration in an undulating terrain.

11.5.5 Topographic Correction of Solar Reflection Images

1. Computation of local illumination geometry

The first step in topographic correction of solar reflection data is invariably the computation of the solar illumination geometry (IL). This requires a DEM of the same resolution as the image, so that the local incidence angle (i) can be computed for each facet/pixel. The local incidence angle can be defined as the angle between the Sun's direct rays and the normal to the topographic surface. It can be computed from the input parameters slope of the surface (θp), aspect of the surface (φo), solar zenith angle (θz) and solar azimuth angle (φa) (Kawata et al. 1995; Riano et al. 2003). The cosine of (i) gives the illumination (IL):

IL = cos(i) = cos θp cos θz + sin θp sin θz cos(φa − φo)    (11.5)

The value of IL is computed for each pixel in the image and can vary from −1 to +1.

Utilizing the above IL parameter as an input, several methods have been proposed for topographic normalization of reflectance data. Reviews of these methods are provided by several workers (Riano et al. 2003; Gupta et al. 2007; Richter et al. 2009, among others).

Table 11.6 An overview of important methods of topographic correction of reflectance data

1. Cosine correction (Lambertian): ρH = ρT (cos θz / cos i)
2. Modified cosine correction: ρH = ρT + ρT [(cos im − cos i) / cos im]
3. Minnaert correction: ρH = ρT (cos θz / cos i)^k
4. Statistical-empirical correction: ρH = ρT − (m cos i + b) + ρ̄T
5. C-correction: ρH = ρT [(cos θz + C) / (cos i + C)]

Symbols: ρH = reflectance of a horizontal surface; ρT = reflectance of the inclined surface; θz = solar zenith angle; i = local incidence angle; im = mean incidence angle of the image; k = Minnaert constant for a particular band; ρ̄T = mean uncorrected reflectance value for the study area; m = slope of the regression line; b = y-intercept of the regression line; C = b/m

Utilizing the above IL parameter as an input, several methods have been proposed for topographic normalization of reflectance data. Reviews of these methods are provided by several workers (Riano et al. 2003; Gupta et al. 2007; Richter et al. 2009 among others). It may be mentioned here that most of the studies so far for topographic normalization have been carried out on forested slopes, which can be assumed to behave as near-homogeneous diffuse (Lambertian) reflectors. An overview of important topographic procedures is provided in Table 11.6.

2. Cosine correction for Lambertian surfaces

For ideally Lambertian surfaces, a simple method for topographic correction is the cosine correction proposed by Teillet et al. (1982). It is fundamentally a trigonometric method in which azimuth directions of Sun and topographic slope are ignored as a first-order approximation. The same concept has also been applied to normalize illumination differences arising out of different Sun positions in multi-temporal data sets in a flat terrain. In this method the corrected reflectance of a horizontal surface is computed as:

ρH = ρT (cos θz / cos i)    (11.6)

where ρH = corrected reflectance of a horizontal surface, ρT = reflectance of the inclined surface, θz = solar-zenith angle and i = local incidence angle.
A major drawback of this method is that it assumes the presence of only the direct part of solar irradiance. Therefore, at higher local incidence angles, i.e., in weakly illuminated regions, where the amount of diffuse irradiation is relatively more significant, the cosine correction has a disproportionate brightening effect and an over-correction occurs. This is illustrated in Fig. 11.13b.

3. Modified cosine correction (Civco method)

In order to take care of the over-correction occurring in the cosine method, an improved version was proposed by Civco (1989) that takes into consideration the mean value of illumination in the image. The formulation is:

ρH = ρT + ρT [(cos im − cos i) / cos im]    (11.7)

where im = mean incidence angle in the image. Figure 11.13c gives an example.

4. Minnaert correction

For semi-Lambertian surfaces, the method initially proposed by Minnaert (1941) to assess the roughness of the moon's surface can also be applied for topographic normalization and determining BRDF. The method utilizes an empirical coefficient (Minnaert constant, k), estimated statistically for each image. The formulation of the method is as follows:

ρH = ρT (cos θz / cos i)^k    (11.8)

where k = Minnaert constant for a particular band and is related to the surface roughness describing the type of scattering. For a perfectly Lambertian reflector, k = 1; for semi-diffuse reflection, k is between 0 and 1. The value of k can be determined by linearizing the above equation.
Although the method is useful in reducing topographic effects, a major problem is that it is scene-dependent, and the Minnaert coefficient has to be determined uniquely for each set of Sun-sensor geometry and for each spectral band.

5. Statistical-Empirical method

Another method is the Statistical-Empirical method proposed by Teillet et al. (1982). It is a purely statistical approach based on a significant correlation between a dependent and one or several independent variables, such as reflectance in each band and cos(i).

ρH = ρT − cos(i)·m − b + ρ̄T    (11.9)

where ρ̄T = mean uncorrected reflectance value for the study area, m = slope of the regression line and b = y-intercept of the regression line.

6. C-correction method

A variation of the above Statistical-Empirical approach has also been proposed by Teillet et al. (1982) and is named the C-correction. Here, first, ρT is related with cos(i) as:

ρT = cos(i)·m + b    (11.10)

This corresponds to the regression line in the statistical-empirical approach and gives b and m, from which a new parameter C (= b/m, i.e. quotient of b and m) is computed. Then the corrected reflectance is computed as:

ρH = ρT [(cos θz + C) / (cos i + C)]    (11.11)

Figure 11.13d gives an example of image correction by this method. According to Teillet et al. (1982) the parameter C might emulate the effect of path radiance on the slope-aspect correction, but the physical analogies are not exact.
In addition to the above, several modifications have also been proposed, such as 'smooth C-correction' (Riano et al. 2003), 'gamma method' (Shepherd and Dymond 2003; Richter et al. 2009), 'SCS+C geometry' method (Soenen et al. 2005), and 'slope matching technique' (Nichol et al. 2006). However, in general, C-correction can be considered as the most suitable statistical method for first-order normalization for topographic effects, as this is easy to implement and provides reasonably acceptable results.
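The C-correction workflow outlined above can be sketched in a few lines (Python/NumPy; same assumed inputs as in the previous sketch, names illustrative):

```python
import numpy as np

def c_correction(rho_t, cos_i, cos_theta_z):
    """C-correction after Eqs. (11.10)-(11.11).

    A band-wise regression rho_T = m*cos(i) + b gives C = b/m, which is then
    used to moderate the cosine correction:
        rho_H = rho_T * (cos(theta_z) + C) / (cos(i) + C)
    """
    m, b = np.polyfit(cos_i.ravel(), rho_t.ravel(), 1)   # Eq. (11.10)
    c = b / m
    return rho_t * (cos_theta_z + c) / (cos_i + c)       # Eq. (11.11)
```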
It may be mentioned here that band ratioing can also be used for minimizing/suppressing topographic effects on solar reflection image data. The ratioing method is quite simple, does not need any additional input data for implementation and is valid for all incidence angles. However, a major general disadvantage of the method is that the radiometric resolution of the ratio image tends to decrease, as the ratio image is noisier than the original images.

7. Use of SRM for topographic normalization

A shaded relief model generated from topographic data can also be applied for topographic normalization of remote sensing data and enhancement of spectral characters. This requires a digital elevation model (DEM) from which an SRM image is generated with a constant albedo. The SRM should correspond to the same illumination angle and direction as the image to be rectified. This SRM image gives brightness variation across the scene, as dependent solely upon topography, irrespective of the spectral/albedo variation. Ratioing of the satellite sensor image by the SRM image would yield an image showing the spectral radiance of the ground. However, as the ratio image is generally rather noisy, another simple approach would be to use a subtraction of the SRM image from the satellite data image, as follows (Sundaram 1998):

DNnew = m (SATDN − SRMDN) + a    (11.12)

where m and a are the scaling constants (gain and bias). This generates a spectral/albedo image where topographic effects get largely subdued.

11.6 Active Optical Sensor-Luminex

The luminex method is based on the detection of photoluminescence (Robbins and Seigel 1982). It is an active sensor like lidar and radar. The method is based on the phenomenon that certain minerals, when struck by UV radiation, exhibit luminescence. The sensor employs UV laser beams fired from airborne platforms, which cause active photoluminescence in certain key minerals such as scheelite, powellite, hydrozincite, autunite etc. These minerals are either themselves of interest or act as pathfinder minerals to certain deposits, e.g. tungsten and skarn deposits, molybdenum, tin, zinc, base metals, gold, uranium etc. The fired UV laser beam excites photoluminescence in minerals in its target area or foot-print. The light emanating from the target area is viewed by a telescope, spectrally separated and analysed to determine the presence of the mineral of interest.

11.7 Scope for Geological Applications

The scope and potential of solar reflection data in geological applications are so high that remote sensing has become almost an operational tool. Numerous examples are provided in Chap. 16.
In comparison to other data types, SOR data have some advantages: (1) the thermal-infrared data have relatively coarser spatial resolution and inferior spectral radiometric quality, and (2) radar data have relative disadvantages of angular looks and distortions. The chief limitation of the solar reflection data for geological application arises from the fact that the response in the SOR region is governed by barely the top 50 µm layer on the ground. In some cases, this may predominantly comprise lichen, moss, soil, vegetation, oxidation film and surficial coatings, which are invariably ignored by field geologists. However, systematic and detailed study of the geotechnical elements, together with the principle of convergence of evidence, can help unravel geological features in the region.

1. Landforms. Physiographic features, drainage and relief are very well recorded in solar reflection data, due to high spatial resolution and contrasting albedos of the ground materials. Stereo pairs in particular serve as an excellent medium for landform studies. Further, landform features can at times be better delineated on multispectral images when subtle differences in moisture, vegetation etc. play a diagnostic role.
2. Structure. Structural features such as folds, faults, lineaments etc. can often be well detected on panchromatic black-and-white and multispectral data products in the SOR region, so much so that these have acquired the status of an essential technique in relatively less-explored areas. Vegetation alignments and variation in surface moisture mark many of the structural features, and these features are amenable to detection on multispectral images.
3. Lithology. Different rock types exhibit differences in landform, drainage, soil, vegetation etc. The cumulative effect of these may permit discrimination of different rock types. In addition, multispectral data and particularly high-spectral resolution remote sensing data have demonstrated the capability for mineralogic/lithologic identification (see this chapter).
4. Mineral exploration. Owing to its general utility in mineral exploration, remote sensing is considered as an efficient forerunner in all exploration programmes. In addition, remote sensing data are of proven utility for identification of limonite and hydroxyl minerals, which form significant guides to hydrothermal mineral deposits. Further, various minerals and mineral groups can be identified using reflection data in the SWIR (see Sect. 19.7).

Additionally, data in the solar reflection region have been extensively applied for hydrocarbon exploration, groundwater investigations, engineering geology, geo-environmental surveys and a host of other applications (discussed in Chap. 19).

References

Avery TE, Berlin GL (1985) Interpretation of aerial photographs, 4th edn. Burgess, Minneapolis, Minn
Chander G, Markham BL, Helder DL (2009) Summary of current radiometric coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Rem Sens Environ 113:893–903
Civco DL (1989) Topographic normalization of Landsat Thematic Mapper digital imagery. Photogram Eng Remote Sens 55(9):1303–1309
Duguay CR, LeDrew EF (1992) Estimating surface reflectance and albedo from Landsat-5 Thematic Mapper over rugged terrain. Photogram Eng Remote Sens 58:551–558
Gratton DJ, Howarth PJ, Marceau DJ (1993) Using Landsat-5 Thematic Mapper and digital elevation data to determine the net radiation field of a mountain glacier. Rem Sens Environ 43:315–331
Gupta RP, Ghosh A, Haritashya UK (2007) Empirical relationship between near-IR reflectance of melting seasonal snow and environmental temperature in a Himalayan basin. Rem Sens Environ 107:402–413
Kawata Y, Ueno S, Ohtani A (1995) The surface albedo retrieval of mountainous forest area from satellite MSS data. Appl Math Comput 69:41–59
Markham BL, Barker JL (1986) Landsat MSS and TM post-calibration dynamic ranges, exoatmospheric reflectances and at-satellite temperatures. EOSAT Tech Notes 1:3–8
Mekel JFM (1978) ITC textbook of photo-interpretation. Chapter 8. The use of aerial photographs and other images in geological mapping. ITC, Enschede
Miller VC, Miller CF (1961) Photogeology. McGraw-Hill, New York
Minnaert M (1941) The reciprocity principle in lunar photometry. J Astrophysics 93:403–410
Nelson R (1985) Reducing Landsat MSS scene variability. Photogram Eng Remote Sens 51:583–593
Nichol J, Hang LK, Sing WM (2006) Empirical correction of low sun angle images in steeply sloping terrain: a slope matching technique. Int J Rem Sens 27(3–4):629–635
Pandey SN (1987) Principles and applications of photogeology. Eastern Wiley, New Delhi, p 366
Riano D, Chuvieco JS, Aguado I (2003) Assessment of different topographic corrections in Landsat-TM data for mapping vegetation types. IEEE Trans Geosci Remote Sens 41(5):1056–1061
Richter R, Kellenberger T, Kaufmann H (2009) Comparison of topographic correction methods. Remote Sensing 1:184–196. doi:10.3390/rs1030184
Robbins J, Seigel HO (1982) The luminex method—a new active remote sensing method for exploration for mineral deposits. In: Proceedings of the international symposium remote sensing environment, 2nd thematic conference remote sensing exploration geology, Fort Worth, Texas, pp 203–204
Robinove CJ (1982) Computation of physical values from Landsat digital data. Photogram Eng Remote Sens 48:781–784
Sandmeier S, Itten KI (1997) A physical-based model to correct atmospheric and illumination effects in optical satellite data of rugged terrain. IEEE Trans Geosci Rem Sens 35(3):708–717
Schanda E (1986) Physical fundamentals of remote sensing. Springer, Berlin, p 187
Shepherd JD, Dymond JR (2003) Correcting satellite imagery for the variance of reflectance and illumination with topography. Int J Rem Sens 24:3503–3514
Soenen SA, Peddle DR, Coburn CA (2005) SCS+C: a modified sun-canopy-sensor topographic correction in forested terrain. IEEE Trans Geosci Rem Sens 43(9):2148–2159
Stohr CJ, West TR (1985) Terrain and look angle effects upon multispectral scanner response. Photogram Eng Remote Sens 51:229–235
Sundaram RM (1998) Integrated GIS studies for delineation of earthquake-induced hazard zones in parts of Garhwali Himalaya. Ph.D. Thesis (unpublished), University of Roorkee, Roorkee
Teillet PM, Guindon B, Goodenough DG (1982) On the slope-aspect correction of multispectral scanner data. Canad J Remote Sens 8(2):84–106
Von Bandat HF (1983) Aerogeology. Gulf Publ, Houston, Texas
12 Interpretation of Thermal-IR Data
12.1 Introduction

The EM wavelength region of 3–35 µm is popularly called the thermal-infrared region in terrestrial remote sensing. This is because of the fact that, in this wavelength region, radiation emitted by the Earth due to its thermal state is far more intense than solar reflected radiation (Fig. 12.1), and therefore any sensor operating in this region would primarily detect the thermal radiative properties of ground materials.
Out of the 3–35 µm wavelength region, the greatest interest has been in the 8–14 µm range, owing to the following three main reasons.

1. At ambient terrestrial temperatures, the peak of the Earth's blackbody radiation occurs at around 9.7 µm (Fig. 12.1), which indicates the highest energy available for sensing in this region.
2. An excellent atmospheric window lies between 8 and 14 µm, and poorer windows exist at 3–5 and 17–25 µm. Interpretation of the data in the 3–5 µm region is rather complicated, due to overlap with solar reflection radiation in day imagery, and the 17–25 µm region is still not well investigated. This leaves 8–14 µm as the preferred window for terrestrial remote sensing.
3. Prominent and diagnostic narrow spectral features (high-reflectivity or reststrahlen bands) occur, due to bending and stretching molecular vibrations in minerals in this region (see Chap. 3). These bands vary with composition and structure of minerals and can therefore be usefully applied to give information on mineral composition of rocks.

In view of the above, the 8–14 µm region has been of great interest for geological remote sensing, and the technique has made tremendous strides during the past nearly three decades (Kahle 1980; Kahle et al. 1980; Quattrochi and Luvall 2004; Quattrochi et al. 2009).
Remote sensing in the TIR region has generally been of a passive type, i.e. sensors collect data on the naturally emitted radiation. Active techniques, deploying monochromatic wavelength laser beams (also called laser radar or LIDAR), have also been developed for some research investigations.

12.2 Earth's Radiant Energy—Basic Considerations

The atomic and molecular units within a body having a temperature above absolute zero (0 K or −273.1 °C) are in agitated form, owing to which they interact, collide and radiate EM energy. How much energy an object on the ground radiates is a function of two parameters: surface temperature and emissivity. These parameters may vary spatially and temporally.

12.2.1 Surface (Kinetic) Temperature

The surface temperature of the ground is called kinetic temperature. It is dependent on two main groups of factors: heat energy budget and thermal properties of materials. A detailed review of these parameters is given by Kahle (1980).

12.2.1.1 Heat Energy Budget
Heat energy transfer takes place from higher temperature to lower temperature, by radiation, convection or conduction. Changes in net thermal energy lead to variations in kinetic surface temperature. The following factors influence the heat energy budget.

1. Solar heating. The most important source of heat energy to the Earth's surface is the Sun. The solar radiation falling on the Earth's surface is partly absorbed and partly back-scattered. The absorbed radiation leads to a rise in the level of heat energy, and therefore surface temperature. It has been found that the thermal effect of the diurnal (day and night) cycle usually exists up to a


almost constant value from nearly midnight to just before


sunrise.
The amount of solar energy incident on the Earth’s sur-
face depends on several parameters, such as solar elevation,
cloud cover, atmospheric conditions, topographical attitude
and slope aspect of the surface. The solar elevation (which
depends on latitude and time of day and month) is a sys-
tematic variable and relatively easy to quantify (see e.g.
Price 1977). Variables such as cloud cover and atmospheric
conditions are accounted for by monitoring meteorological
conditions. Topographical factors such as relief and slope
aspect lead to unequal illumination, and terrain models may
be used to account for such variations (Gillespie and Kahle
1977).
Since solar heating is the most important source of heat
energy to the Earth’s surface, a very important factor
Fig. 12.1 a Atmospheric windows in the thermal-IR region; note the
ozone absorption band at 9.6 µm. b Energy available for sensing; influencing heat budget is the solar albedo (A), i.e. the
beyond 3–4 µm, blackbody radiation emitted by the Earth is the percent of solar radiation back-scattered, the co-albedo
dominant radiation with a peak at around 9.7 µm (1 − A) being absorbed and directly responsible for the rise
in surface temperature. Materials having higher albedo
depth of nearly 1-m, the most important being the top generally exhibit lower temperatures, and those having
10-cm zone. During the daytime, heat is transmitted from lower albedo values exhibit higher temperatures (Fig. 12.3a)
surface to depth, and at night the reverse happens. (Watson 1973, 1975). The solar albedo for different objects
can be estimated from data in the VNIR region, which can
In a generalized way, as the Sun rises in the morning, help compute the fraction of solar energy absorbed by the
the Earth’s surface temperature also starts rising ground surface.
(Fig. 12.2a, b). At noon, the Sun is at its zenith, after which
it starts descending; however, the surface temperature keeps 2. Long wave upwelling and downwelling radiation. The
rising and reaches a maximum in the early afternoon longwave upwelling ra-diation corresponds to the radia-
(around 14.00 h). After this, the surface temperature starts tion emitted by the Earth’s surface. This compo-nent
declining as the surface cools off, first at a rapid rate and removes heat from the Earth’s surface and depends on
later at a gentle rate. The surface temperature assumes an prevailing ground temperature and emissivity. The
longwave downwelling radiation is the energy emitted by
the atmosphere that reaches the ground and depends on
the gases present in the atmosphere and prevailing
atmospheric temperature. Empirical relations are used to
estimate these components of heat energy.
3. Heat transfer at the Earth—atmosphere interface. Heat
transfer by conduction and convection takes place at the
Earth—atmosphere interface. Further, the processes of
evaporation and dew formation involve latent heat and
affect net heat transfer on the ground. These heat trans-
fers depend on the thermal state of the ground, the
atmosphere and meteorological conditions. Empirical
relations incorporating meteorological data can be used
to estimate the amount of such heat transfers. Therefore,
monitoring of meteorological conditions is important for
modelling and interpreting thermal data.
4. Active thermal sources. Active geothermal sources such
Fig. 12.2 Bearing of solar heating cycle on the Earth’s surface as volcanoes, fumaroles, geysers, etc. and man-made
temperature. a Solar heating cycle. b Variation in surface temperature sources such as fire and thermal effluents etc., if any,
(after Watson 1973) introduce additional factors in the heat energy budget.

Fig. 12.3 Diurnal temperature curves for varying values of a albedo and b thermal inertia (Watson 1975)

Heat balance. The various components of the heat energy general, rock has a low value of K, water has a higher
fluxes give an esti-mate of the net heat flux conducted into value of K, and that of steel is still higher (Table 12.1).
the ground. The heat balance equation can be written as 2. Specific heat (c) is a measure of the amount of heat
(after Kahle 1977). required to raise the temperature of 1 g of substance
through 1 °C. Physically, a higher value of specific heat
Es + Er + Em + Ei + Ea + Eg = 0    (12.1) implies that more heat is required to raise the temperature
of the material. Its units are cal/g/°C. Relatively speak-
where
ing, water has much higher specific heat than rocks, and
Es net solar radiation flux absorbed by the ground steel significantly lower (Table 12.1).
Er net longwave radiation 3. Density (q). Mass per unit volume (g/cm3) is another
Em sensible heat flux between the atmosphere and the physical property which comes into play in determining
ground the distribution of temperature pattern. It is included in
Ei latent heat flux between the atmosphere and the other parameters such as heat capacity, thermal inertia
ground and thermal diffusivity.
Ea heat flux due to active sources 4. Heat capacity (C = q.c) is the amount of heat required to
Eg net heat flux conducted into the ground, which raise the temperature of a unit volume of substance by
governs the rise in ground temperature 1 °C. Its units are cal/cm3/°C.
5. Thermal diffusivity [k = K/(q.c)] is a measure of the rate
at which heat is transferred within the substance. It
12.2.1.2 Thermal Properties of Materials governs the rate at which heat is conducted from surface
Thermal properties of ground materials shape the pattern of to depth in the daytime and from depth to surface in the
distribution of the net heat energy conducted into the night-time. Its units are cm2/s. Water possesses high
ground, and therefore govern ground temperatures. The specific heat and therefore minor changes in moisture
material properties are influenced by mineral composition, content have significant effects on thermal diffusivity of
grain size, porosity and water saturation. Important thermal soils.
properties are briefly described below (typical values listed 6. Thermal inertia (P = (Kqc)1/2) is a measure of the
in Table 12.1). resistance offered by a substance in undergoing temper-
ature changes. Its units are cal/cm2/s1/2/°C. The thermal
1. Thermal conductivity (K) is a measure of the rate (Q/t) at inertia increases if thermal conductivity (K), density (q)
which heat (Q) is conducted by a medium through unit or specific heat (c) increase. This is quite logical, for, if K
area of cross-section, under unit thermal gradient. It is increases, then more heat is conducted to depth and rise
expressed as cal/cm/s/°C. Thermal conductivity is in surface temperature will be relatively less. Similarly, if
dependent on porosity and the fluid filling the pores. In q increases, then more material in gcm−3 is available to

Table 12.1 Typical values of thermal properties of selected materials (most data from Janza 1975)
Geological materials | K, thermal conductivity (cal/cm/s/°C) | ρ, density (g/cm³) | c, specific heat (cal/g/°C) | k, thermal diffusivity (cm²/s) | P, thermal inertia (cal/cm²/s^1/2/°C)
Igneous rocks
Basalt 0.0050 2.8 0.20 0.009 0.053
Gabbro 0.0060 3.0 0.17 0.012 0.055
Peridotite 0.0110 3.2 0.20 0.017 0.084
Granite 0.0075 2.6 0.16 0.016 0.052
Rhyolite 0.0055 2.5 0.16 0.014 0.047
Syenite 0.007 2.2 0.23 0.009 0.047
Pumice, loose 0.0006 1.0 0.16 0.004 0.009
Sedimentary rocks
Sandy soil 0.0014 1.8 0.24 0.003 0.024
Sandstone, quartz 0.0120 2.5 0.19 0.013 0.054
Clay soil 0.0030 1.7 0.35 0.005 0.042
Shale 0.0042 2.3 0.17 0.008 0.034
Dolomite 0.0120 2.6 0.18 0.026 0.075
Limestone 0.0048 2.5 0.17 0.010 0.045
Metamorphic rocks
Marble 0.0055 2.7 0.21 0.010 0.056
Quartzite 0.0120 2.7 0.17 0.026 0.074
Slate 0.0050 2.8 0.17 0.011 0.049
Other materials
Water 0.0013 1.0 1.01 0.001 0.037
Steel 0.030 7.8 0.20 – 0.168
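The derived thermal quantities defined above follow directly from K, ρ and c. A small illustrative computation (Python; function name hypothetical), using the units of Table 12.1, is sketched below; the example values reproduce the basalt entry of the table.

```python
def thermal_properties(K, rho, c):
    """Derived thermal properties from conductivity K (cal/cm/s/degC),
    density rho (g/cm3) and specific heat c (cal/g/degC), in the units of Table 12.1."""
    heat_capacity = rho * c              # C = rho.c       (cal/cm3/degC)
    diffusivity = K / (rho * c)          # k = K/(rho.c)   (cm2/s)
    inertia = (K * rho * c) ** 0.5       # P = (K.rho.c)^1/2  (cal/cm2/s^1/2/degC)
    return heat_capacity, diffusivity, inertia

# Basalt (K = 0.0050, rho = 2.8, c = 0.20):
# thermal_properties(0.0050, 2.8, 0.20) -> (0.56, ~0.009, ~0.053),
# i.e. k ~ 0.009 cm2/s and P ~ 0.053, matching the tabulated values.
```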

Table 12.2 Physical factors affecting thermal (radiant temperature) data (after Ellyett and Pratt 1975)
Variable Physical properties Ground and atmospheric factors
1. Emissivity a. Composition Type of rock, soil, vegetation etc.
b. Surface geometry Surface configuration of ground objects
2. Kinetic a. Physical/thermal properties – Rock, soil (composition)
temperature of materials – Grain size and porosity
– Moisture content
b. Heat budget – Solar heating Season
factors Latitude
Cloud cover
Solar elevation
Time of day
Topography and aspect
Albedo (and co-albedo)
Atmospheric absorption
– Longwave radiation and heat transfer at the Ground temperature
Earth-atmosphere interface Emissivity
Atmospheric temperature
Wind speed
Humidity
Sky temperature
Cloud cover
Rain
Topographic elevation
– Active thermal sources Fumaroles, geysers, fire, thermal
effluents, volcano, etc.

to show that if heat balance is constant, then for the same


value of thermal inertia P, the variation in K, q and c can affect
only the depth temperature profile, an not the surface tem-
perature, which remains the same (Fig. 12.5). Therefore,
thermal inertia becomes an intrinsic property of materials in
the context of thermal remote sensing. Remote sensing for
estimating thermal inertia is discussed in Sect. 12.3.5.

12.2.2 Emissivity

Emissivity is a property of materials which controls the


radiant energy flux. Emissivity (e) for a blackbody is
unity and for most natural materials is less than 1,
Fig. 12.4 Relationship between thermal inertia and density of rocks
(the various values pertain to the data in Table 12.1) ranging generally between 0.7 and 0.95. If a natural body
and a blackbody possess the same surface temperature,
then the natural body will emit less radiation than the
be heated, and accordingly more heat is required to raise blackbody.
the temperature (Fig. 12.4). Similarly, if c is higher, then Emissivity (e) depends on two main factors: composition
the material needs relatively more heat for the same rise and surface geometry. It is intimately related to reflectance
in its temperature. Figure 12.3b shows surface tempera- or colour (spectral property). Dark materials absorb more
ture variations in a diurnal cycle for objects having dif- and therefore emit more energy than light coloured materi-
ferent thermal inertia values, other factors remaining als. Spectral absorptivity is equal to spectral emissivity
constant (Watson 1975). The temperature variations are (Kirchoff’s Law). It has been shown that the percentage of
found to be greater for objects having lower thermal silica, which is an important constituent in the Earth’s crust,
inertia values and smaller for those having higher thermal is inversely related to emissivity (in the 8–14 µm region)
inertia values. (Fig. 12.6). Therefore, the presence of silica, which has low
emissivity, significantly affects the bulk emissivity of an
12.2.1.3 Importance of Thermal Inertia assemblage. Moreover, smooth surfaces have lower emis-
in Remote Sensing sivity than rough surfaces. In the case of broad-band thermal
Thermal-IR sensing deals with the measurement of surface measurements, lateral emissivity variations are generally
temperatures. It is shown that the variation in surface tem- ignored. On the other hand, in multispectral thermal sensing
perature of a periodically heated homogeneous half-space is attention is primarily focused on detecting lateral variations
dependent on a single thermal property called thermal inertia in spectral emissivity across an area, which in turn sheds
(Carlsaw and Jaegar 1959). Kahle (1980) made computations light on rock composition.

Fig. 12.5 Temperature profiles with depth at different time instances. B is the same, although the temperature profiles with depth are different
Materials A and B have the same value of thermal inertia but differ in in the two cases (after Kahle 1980)
density and thermal conductivity. The surface temperature for A and

Fig. 12.6 Relationship between emissivity (8–14 µm region) and silica percentage in rocks (after Reeves 1968)

Fig. 12.7 Relationship between energy radiated by a blackbody in the 8–14 µm region (i.e. blackbody radiant emittance) and surface kinetic temperature

12.3 Broad-Band Thermal-IR Sensing

In the case of sensors from aerial platforms, the entire 8–14 µm region is used for broad-band thermal-IR sensing, for the simple reason that it provides a high signal-to-noise ratio. On the other hand, as an ozone absorption band occurs at 9.6 µm, the bandwidth of broad-band thermal-IR sensors from space platforms is usually restricted to 10.4–12.6 µm.
Thermal-IR wavelengths lie beyond the photographic range and the thermal radiation is absorbed by the glass of conventional optical cameras. Thermal remote sensing data are collected by radiometers and scanners. The working principles of imaging instruments, including their operation, calibration and generation of imagery from scanner data, have been discussed in Chap. 5. The thermal-IR image data can be displayed in real-time or recorded as per the requirements. These data can be combined with other spectral remote sensing data and/or ancillary geo-data for integrated interpretation.

12.3.1 Radiant Temperature and Kinetic Temperature

In thermal-IR sensing, radiation emitted by the ground objects is measured. Radiant temperature (TR) is defined as the equivalent temperature of a blackbody which would give the same amount of radiation as obtained from a real body. Radiant temperature depends on the ground temperature, also called kinetic temperature (TK), and emissivity (ε), and corresponds to the temperature obtained in a remote sensing measurement.
In the case of a non-blackbody, the total amount of radiation (W) emitted is given by the Stefan–Boltzmann Law as

W = ε · σ · TK⁴    (12.2)
  = σ · TR⁴    (12.3)

where σ is the Stefan–Boltzmann constant, so that

ε · TK⁴ = TR⁴    (12.4)

This gives,

TR = ε^(1/4) · TK    (12.5)

Radiant temperature for a natural body will thus be less than that for a blackbody at the same temperature. This also implies that temperatures measured by remote sensing methods are less than the prevalent surface kinetic temperatures by a factor of ε^(1/4).
The relation between temperature and radiant emittance for the temperature range 270–350 K in the spectral band 8–14 µm is illustrated in Fig. 12.7. It can be broadly approximated by a linear function, although for precise measurements a better match is a curve with a T⁴-relation (Scarpace et al. 1975; Dancak 1979).

12.3.2 Acquisition of Broad-Band Thermal-IR Data

12.3.2.1 Aerial Broad-Band TIR Data Acquisition
For aerial broad-band thermal sensing, the wavelength range 8–14 µm is commonly used as a single band. A two-level calibration is provided on-board. Temperature data obtained in this way are still apparent data, the various possible errors being due to instrument functioning, calibration, atmospheric interference and unknown emissivity. In general,

absolute temperature data are only seldom needed, and rel- The maximum contrast in surface temperatures is available
atively calibrated temperature data, as commonly obtained, at around 14 h (early afternoon). However, noon—early
may be sufficient for most geological remote sensing afternoon is usually windy and may increase the instability
applications. problems of the aerial platform. Further, rapid temperature
Flight lines can be laid as single airstrips or in a mosaic changes occur in the noon hours with time. The pre-dawn
pattern. Selecting the day of survey is always a critical time is generally considered as best for most thermal-IR
decision in aerial thermal surveys, the predominant consid- surveys because (a) the effects of topography and differential
erations being as follows. solar heating can be avoided and (b) objects maintain steady
temperatures from midnight to pre-dawn and therefore
(a) The meteorological conditions should be optimum; variations occurring due to logistic reasons can be
monitoring of atmospheric parameters may give a clue minimized.
to the meteorological conditions. For high spatial resolution thermal surveys of small target
(b) Ground conditions should permit maximum geological areas, it is convenient these days to deploy UAV based-FLIR
discrimination. The geological discrimination is largely cameras. Aerial thermal-IR imagery may have many geo-
affected by soil moisture; hence, soil moisture should metric distortions, e.g. pano-ramic distortion, distortions due
preferably be different in different types of soils, and to platform instability etc. These were discussed in Chap. 7
the soils be neither too wet nor too dry on the day of and methods to rectify the same are given in Chap. 13.
flight. An empirical norm is to delay the flight after
rains by up to a day, which should maximize soil 12.3.2.2 Orbital Broad-Band TIR Data Acquisition
moisture variation in the ground. The use of broad-band thermal-IR channels from orbital
platforms commenced with meteorological missions (e.g.
Ground objects exhibit a systematic variation in radiant TIROS, NOAA etc.), having a typical spatial resolution of
temperature in the diurnal cycle (Fig. 12.8). Therefore, a set 1–5 km. These data had only limited geological applications.
of day and night thermal IR data is commonly used in A thermal-IR channel has been included in some missions for
thermal-IR investigations. It is seen that objects with dif- land resources applications, e.g. Landsat-4, -5 TM and
fering thermal inertia values have similar temperatures in Landsat-7 ETM+, and Landsat-8 TIRS (Table 12.3).
late afternoon and early morning (Figs. 12.3b and 12.8); this For sensors on free-flying platforms, there are no navi-
renders these hours unsuitable for thermal discrimination. gational or planning considerations for acquiring data; these
aspects are taken care of while designing the sensor (e.g.
total field of view, swath width, etc.) and selecting the orbital
parameters.
The geometric characters of orbital imagery are more
systematic, regular and uniform, as the platform is at a
higher elevation and therefore more stable. The thermal-IR
sensing utilized OM scanners in HCMM, Landsat TM and
ETM + sensors, whereas in Landsat-8/TIRS pushbroom
technology has been utilized. The various types of geometric
and radiometric characteristics and distortions occurring in
spaceborne scanner images and their rectification have been
discussed elsewhere (see Chaps. 7, 9, and 13). Atmospheric
Fig. 12.8 Typical diurnal radiant temperature curves (idealized) for correction of thermal-IR data is discussed in Chap. 10.
selected materials (after Sabins 1987)

Table 12.3 Salient characteristics of selected spaceborne thermal-IR sensors


S. no. Spacecraft Sensor/band no. Spectral bandwidth Type of scanner Ground Quantization Swath
(µm) resolution (m) width (km)
1 Landsat-4,-5 TM/B6 10.4–11.7 OM 120 8-bit 185
2 Landsat-7 ETM+/B6 10.3–12.3 OM 60 8-bit 185
3 Landsat-8 TIRS/B10 10.6–11.9 Pushbroom 100 12-bit 185
(resampled to 16-bit)
4 EOS-1 ASTER 5 multispectral bands OM 90 12-bit 60

12.3.2.3 Ground Measurements difficulties may lead to mismatching. However, water bod-
The following ground parameters are usually monitored ies, if present, often serve as good control points, as they are
during a thermal-IR survey: (1) ground surface temperature quite distinct on the TIR images, as also on other data sets.
at selected locations (temperature profile to a depth of about Image processing of thermal-IR images in particular is
1-m may also be useful), (2) wind speed, (3) air temperature, addressed by Schott (1989).
(4) sky temperature (radiative), (5) cloud cover, (6) humid-
ity, (7) rainfall over the last few weeks or a month, (8) soil
moisture at surface at selected locations, (9) groundwater 12.3.4 Interpretation of Thermal-IR Imagery
level, (10) vegetation type and density, (12) type of soil or
rock and (12) albedo. Determination of thermal properties As is obvious from the foregoing discussion, the radiant
such as emissivity and thermal inertia in the field (e.g. Marsh temperature (TR) is de-pendent on two main factors: emis-
et al. 1982; Vlcek 1982) is useful during data interpretation. sivity and kinetic temperature of the surface. The various
It is advisable to monitor the first six variables at least for a physical—environmental factors to be considered during
few days prior to the actual survey (Ellyett and Pratt 1975; interpretation of radiant temperature image are summarized
Bonn 1977). in Table 12.2.
The purpose of ground investigations could be to pro- A dedicated thermal remote sensing experiment typically
vide a reference base for interpretation or to verify some uses a set of two passes: one pre-dawn (night) and one day
anomalous signatures. Where the area to be flown is large, (noon) pass. Qualitative image interpretation utilizes the
a significant time may elapse between start and finish usual elements of photo-interpretation (see Sect. 9.3.1). Here
points, and in such situations, monitoring of meteorologi- we discuss some typical features commonly seen on radiant
cal data could help in reducing the data to a common base temperature im-ages.
for comparative interpretation. Further, field checking of
some anomalies may be necessary to control the 1. Topography. Topographical features are enhanced on
interpretation. daytime thermal images due to differential heating and
shadowing (Fig. 12.9b). The hill slopes facing the Sun
receive more solar energy than those sloping away from
12.3.3 Processing of Broad-Band TIR Images it, and some of the hill slopes may lie in shadows, owing
to rugged topography. These effects lead to local differ-
The most commonly used TIR image is a simple radiant ences in thermal energy budgets and consequent differ-
temperature image (TR) that may belong to the night-time or ences in surface temperatures. However, on night-time
the daytime pass. For some investigations, temperature dif- images, the topography becomes subdued (Fig. 12.9c).
ference [ΔT = (TD − TN)] image, and ‘apparent thermal Elevation is another variable influencing ground tem-
inertia’ im-age (a computed parameter) may be better used. peratures. The temperature elapses with elevation, the
Additionally, thermal image data can also be used in any common environmental lapse rate being 6.5 °C per
desired combination in multisensor integrated studies. 1000 m. This effect may be more manifest on satellite
For image processing and data integration, it is necessary to thermal images covering large mountainous areas;
register thermal images over other images. Some practical therefore, it is always advisable to use topographical data
problems occur in registering thermal images, particularly conjunctively when interpreting the thermal-IR data.
because the temperature of the ground surface is highly vari- 2. Wind and cloud cover. Wind trails can be seen on
able and depends on time and meteorological atmospheric thermal-IR images of good spatial resolution. Wind
parameters (which are dynamic factors!), and also on such causes dissipation of surface heat. Objects such as
factors as albedo, thermal and emissive properties, slope, shrubs, boulders etc. act as barriers to wind and lead to
topography, moisture and vegetation. Therefore, locating formation of shadow trails with relatively higher surface
stable ground control points (GCPs) is a difficult exercise, as temperature. Wind effects can thus be seen as alternating
even fixed features on the ground (e.g. topography) may be bright and dark parallel-curved lines. Cloud cover leads
displayed differently on different day and night thermal images. to differential heating and shadowing and hence a patchy
In this context, handling daytime thermal-IR data is rel- bright (warm) and dark (cool) appearance on the image.
atively easy as the accompanying VNIR images are usually Scattered non-uniform precipitation leads to unsystem-
available; registering night-time thermal-IR data over other atic moisture levels in soils and also results in a mottled
images requires much more careful effort. Another difficulty appearance on the image.
in registering thermal images is the resolution aspect, as 3. Land surface (rocks and soils). The land surface gets
thermal imagery generally has a coarser spatial resolution as heated during the day and cools at night, thus showing
compared to VNIR imagery, and lacks fine details. These temperature variation in a diurnal cycle (Fig. 12.8).

Fig. 12.9 A set of HCMM images of the Atlas Mts. a Visible band image; note the higher spatial resolution and many geological features on the image; also seen are scanty clouds in the southern part. b Day thermal-IR image; topographical effects are enhanced due to solar heating; water body in the NE, clouds in the S and hilly area (NW) appear cooler (darker). c Night thermal-IR image; note the difference in geometry as compared to b; topographical effects are subdued; the NW area, being higher, is cooler; some geological details are seen on the night IR image but not on the day IR image. d Night TIR image after registration in geometric conformity to b. e Temperature-difference image (ΔT = dayTIR − nightTIR). f ATI image [(1 − A)/ΔT]; water has higher ATI and therefore appears bright; day clouds and snow are also bright; the wet sand in the SE appears light grey; many geological details are observed (a–f courtesy of R. Haydn)

4. Standing water. Relative to land, standing water evaporation takes places and becomes stronger with
appears brighter (relatively warmer) on the night-time in-creasing temperature. Due to evaporation, energy is
image and darker (cooler) on the daytime IR image transported from the water to the air and the water
(Figs. 12.8 and 12.10). Although thermal inertia values appears cooler. At night, as cooling of the surface
of rocks and water are nearly equal, the unique thermal water pro-ceeds, convection brings warmer water from
pattern of water is related to convection, circulation and depth to the surface, decreasing the net drop in tem-
evaporative cooling over water bodies. In the daytime, perature of surface water, and the water appears
as the temperature of the surface water starts to rise, warmer.

Fig. 12.10 a Daytime aerial


photograph and b night-time TIR
(8–14 µm region) aerial scanner
imagery of the Oster Lakes,
Bavaria, Germany. In the night,
water body is warm and
vegetation cooler (courtesy of
DLR Oberpfaffenhofen)

5. Damp terrain. Moisture content present in soils directly affects the thermal inertia values, and therefore influences the thermal pattern in a day-and-night cycle. A damp terrain has a very different thermal response to either standing water or land. The moisture present in materials leads to evaporative cooling and therefore the radiant temperature of damp terrain is generally quite low, and the thermal contrast in the day and night images of damp terrain is also less.
6. Metallic objects. These have low emissivity (as they have high reflectivity = low absorptivity = low emissivity), and therefore exhibit low radiant temperatures and always appear dark (cool) on TIR images, day and night (Fig. 12.8).
7. Vegetation. In general, vegetation is warmer on the night-time and cooler on the daytime image, in comparison to the adjoining non-vegetated land (Figs. 12.8 and 12.10). The lower daytime temperature is related to the transpiration process in the plants and the higher night-time temperature to the higher moisture content in leaves. Dry vegetation lying on the ground insulates the ground from the atmosphere and causes warmer night-time and cooler daytime responses. Extensive and thick vegetation cover acts as a barrier to geological mapping on TIR channels.

12.3.5 Thermal Inertia Mapping

Thermal inertia (P) is an intrinsic property of materials and is the single most important thermal property of materials, governing the diurnal variation in temperature of objects on the Earth's surface (Sect. 12.2.1.3). Materials with lower thermal inertia exhibit a greater range of temperature change in the diurnal cycle. Therefore, thermal inertia mapping can help discriminate various types of soils, rocks etc. on the Earth's surface. Thermal inertia values computed from remote sensing methods are only approximations of the actual thermal inertia (TI) values and are therefore called the Apparent Thermal Inertia (ATI) values.

12.3.5.1 Method for Computing ATI
As defined earlier, thermal inertia is a measure of the resistance offered by an object to change in temperature. Conceptually, if the same amount of heat energy (Q) is given to the same quantity of materials with differing thermal inertia (TI), then the change in temperature (ΔT) will be different for different materials, such that:

TI ∝ (1/ΔT)    (12.6)

Watson (1973, 1975, 1982b), a pioneer worker in the field of thermal-IR sensing, developed a simple thermal model assuming that the Sun causes periodic heating of the Earth's surface and that the ground losses of heat are only by radiative transfer. The solar energy incident on the Earth's surface is partly reflected and partly absorbed. If the solar flux S is incident on a surface of albedo A, then

S = A·S + (1 − A)·S    (12.7)
(solar incident flux = reflected energy + absorbed energy)

where (1 − A), called co-albedo, is responsible for the amount of heat absorbed [(1 − A)·S], which causes a rise in surface temperature. However, the co-albedo varies spatially within a scene. In order to obtain an estimate of ATI, it is necessary to normalize for the variation in input heat energy [(1 − A)·S]. Therefore,

ATI ∝ 1/[ΔT/((1 − A)·S)] ∝ S·(1 − A)/ΔT = N·(1 − A)/ΔT    (12.8)

where N is a scaling constant, as solar flux can be considered to be uniform in a scene.

Fig. 12.11 Schematic for ATI calculation (based on Watson's simple periodic heating model)

In order to compute an ATI image, three input images are required: broad-band panchromatic (VNIR), daytime TIR and night-time TIR images (Fig. 12.11). Pixel by pixel, albedo can be obtained from a VNIR image, and diurnal temperature variation (ΔT) can be computed from daytime

(TD) and night-time (TN) coverages. In this way, apparent thermal inertia can be computed [Eq. (12.8)].
A simplified formulation to calculate ATI from global remote sensing data (HCMM) is as follows (Short and Stuart 1982):

ATI = N · C · (1 − A) / ΔT    (12.9)

where N = a scaling factor (=1000), C = a constant to normalize for solar flux variations with latitude and solar declination, A = apparent solar albedo, and ΔT = diurnal temperature difference (TD − TN).
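Following Eq. (12.9), an ATI image can be generated pixel by pixel from co-registered albedo and day/night temperature grids. A minimal sketch (Python/NumPy; array and parameter names are illustrative, not from the text) is given below:

```python
import numpy as np

def apparent_thermal_inertia(albedo, t_day, t_night, n=1000.0, c=1.0):
    """Apparent Thermal Inertia image after Eq. (12.9): ATI = N * C * (1 - A) / dT.

    albedo  : per-pixel solar albedo A (0-1), e.g. estimated from a VNIR band
    t_day   : daytime radiant temperature image (TD)
    t_night : night-time radiant temperature image (TN), co-registered with t_day
    n, c    : scaling factor and latitude/solar-declination normalisation constant
    """
    delta_t = (t_day - t_night).astype(float)          # diurnal temperature difference
    delta_t = np.where(delta_t == 0, np.nan, delta_t)  # guard against division by zero
    return n * c * (1.0 - albedo) / delta_t
```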
12.3.5.2 ATI Image Interpretation
Topography and solar illumination are very important factors in thermal-IR sensing. For example, poorly illuminated slopes may be computed to have erroneously high ATI! Therefore, topography/slope effects need proper correction and consideration during TIR data processing and interpretation.
Soil and alluvium commonly appear dark on the ATI image as they possess low ATI, due to lower density and thermal conductivity values. Forests are generally light-toned, as they have high ATI due to low albedo (A) and relatively small ΔT, owing to moisture content and evapo-transpiration. Agricultural areas are depicted in varying shades of gray. Water and snow both possess very light tones on the ATI image (Fig. 12.9f), but for different reasons. Water has a light tone (high ATI) because the albedo is very low, and snow has it because, although the albedo is high, the ΔT is very small. Rock materials are expressed in various shades of gray depending on their albedo and density. Day clouds are white (Fig. 12.9f) and night clouds appear black.
An ATI image and its corresponding ΔT image have, in general, an inverse mutual relation (Fig. 12.9e, f), owing to the fact that albedo values of many objects are similar, and ΔT appears in the denominator for calculating ATI. Therefore, higher values of ΔT would generally correspond to lower values of ATI and vice versa, although on detailed examination many exceptions may be found.

12.3.6 Scope for Geological Applications—Broad-Band Thermal Sensing

12.3.6.1 Geomorphology
Space-acquired night-time thermal images provide a good impression of regional landforms, owing to the fact that differences in night-time temperatures are related to elevation, soil moisture and vegetation. On the other hand, daytime thermal images give more information about topography and relief.

12.3.6.2 Structural Mapping
Thermal-IR images can be extremely useful in delineating structural features. Structural features such as folds, faults etc. may be manifested due to spatial differences in thermal characteristics of rocks. Bedding and foliation planes appear as sub-parallel linear features due to thermal contrasts of compositional layering. There are examples where structures have first been detected on the TIR image, not identified at all on the aerial photographs, and only subsequently located in the field (Sabins 1969). Faults and lineaments may be associated with springs, or they may promote movement of groundwater to shallow depth; this would lead to evaporative cooling along a line or zone, producing a linear feature (Fig. 12.12). At times, daytime thermal images, on which topographical effects are enhanced, may also be useful in locating structural features.

12.3.6.3 Lithological Mapping
The spatial resolution of sensors is an important aspect in lithological mapping. The present-day space-borne TIR scanners provide rather coarse spatial resolution data.

Fig. 12.12 a TIR image showing numerous structural features such as bedding and lineaments; b the corresponding aerial photograph, which is nearly featureless; c interpretation map based on the TIR image (Stilfonstein area, Transvaal, South Africa) (a, b from Warwick et al. 1979)

However, the viability of the technique for lithological dis- As many materials have overlapping ranges of ATI val-
crimination has been shown by higher-spatial-resolution ues, the ATI value may not be diagnostic for identification;
aerial scanner data. Both radiant temperature images and however, it can be a useful data to complement the VNIR
ATI images (particularly the latter) can be used for litho- data for improved lithological discrimination.
logical discrimination (see e.g. Kahle et al. 1976; Short and It was mentioned earlier that the diurnal heat wave pen-
Stuart 1982: Watson 1982b; Abrams et al. 1984). etrates to a depth of nearly 1-m below the ground surface.
The material in the top 10-cm zone is the most important
(a) Relevance of ATI in hard-rock areas. Discrimination on region in thermal-IR remote sensing (Watson 1975; Byrne
the basis of ATI appears to be easiest in the case of sedi- and Davis 1980). The top encrustation—oxidation soil—
mentary rocks—orthoquartzites, dolomites, sandstones, lichen layer, which is usually less than 1-mm thick, may
limestones and shales have successively lower ATI have a negligible effect on the bulk thermal properties. This
(Fig. 12.13a). This may primarily be due to the wide range is in contrast to the case of solar reflection sensing, where the
of porosity and bulk density in the sedimentary rocks. For top few- microns-thick layer exerts a predominant influence
example, in an area of sedimentary sequence comprising over the spectral response. Therefore, thermal-IR image data
shale, siltstone and sandstone, on the pre-dawn thermal are likely to yield better information on bedrock lithology
image, sandstone appears relatively cool (high ATI), than the solar reflection data.
siltstone warmer, and shale still warmer (low ATI). Fur-
ther, due to higher conductivity and specific heat, dolo- 12.3.6.4 Hydrogeological Studies
mites generally have higher ATI than limestones. Thermal-IR data are highly influenced by surface moisture,
Therefore, limestone and dolomite, which are hard to and this characteristic can be applied for hydrogeological
distinguish in the VNIR region, may be separated from investigations in a number of ways. For example, for (a) soil
each other by subtle temperature differences. moisture estimation, (b) exploration of shallow aquifers or
water-bearing fractures, (c) detection of seepage of water
Distinction among igneous rocks on the basis of ATI from irrigation canals, and (d) landslide studies, where
values is more difficult, as compared to the sedimentary mapping of moisture zones is important. Figure 12.14 gives
rocks, although smaller differences in ATI values of igneous an interesting example of the detection of drainage pattern
rocks such as rhyolites, andesites, basalts and granites have on the TIR, whereas the same features are not observed
been reported (Fig. 12.13b). clearly either on the VIS or near-IR im-ages.

(b) Relevance of ATI in weathered areas. After weathering, 12.3.6.5 Penetration of Smoke
different rocks would yield soil covers differing in bulk For fighting forest fires, the essential information required is
density, porosity and water saturation, which will lead the exact location of the fire front. The smoke plume ema-
to differences in ATI values. In this way, ATI images nating from the fire may engulf the fire front completely. As
can help discriminate between weathered products visible/near-IR radiation is scattered by tiny suspended
derived from different parent rocks. particles of smoke plume, VNIR images would not be able

Fig. 12.13 ATI values for


selected a sedimentary and
b igneous rocks (plots vertically
separated for clarity; compiled
after Miller and Watson 1977;
Short and Stuart 1982; Watson
1982a); note that the rocks with
higher density and higher silica
content possess higher ATI

Fig. 12.14 Aerial multispectral scanner images of a semi-arid region in Central India: a visible (red); b near-IR; and c daytime thermal-IR. In this area, the channel-ways are partly covered with vegetation and partly wet ground; on the TIR image, the drainage pattern is distinct, because wet ground and vegetation have near similar responses on the day TIR; on the other hand, as vegetation and wet ground have mutually contrasting responses on both VIS and near-IR channels, the drainage-ways fail to manifest clearly (courtesy of Space Applications Centre, Ahmedabad, ISRO)

Fig. 12.15 Penetration of smoke by thermal-IR radiation. a Visible band image; note that the fire front of the forest fire is engulfed by the smoke
plume and is not observable. b Corresponding thermal-IR image showing the fire front (courtesy of NASA Ames Research Centre, in Sabins 1997)

to penetrate the smoke plume to locate the fire front. On the other hand, the thermal-IR radiation, possessing a longer wavelength, can penetrate the fire smoke and reveal the location of the fire front (Fig. 12.15).

12.3.6.6 Monitoring of Volcanoes, Geothermal Fields and Coal Fires
This group of applications deals with high-temperature features. Thermal-IR sensing has specific applications in the above problems of geohazards, which are discussed in Sect. 19.14.1 with examples.

12.4 Temperature Estimation

Remote sensing estimation of temperature has to be based on the intensity of radiation emitted by the target (heat source). Reviews on temperature estimation from remote sensing data can be found in Kahle (1980), Rothery et al. (1988) and Quattrochi et al. (2009). Planck's radiation equation can be used to convert measured spectral radiance to temperature values:

Lλ = [2πhc² / (λ⁵ π)] · [1 / (e^(hc/(λkT)) − 1)] · ελ    (12.10)

where λ is the wavelength in metres, Lλ is the spectral radiance, h is Planck's constant = 6.62 × 10⁻³⁴ J s, k is Boltzmann's constant = 1.38 × 10⁻²³ J K⁻¹, T is temperature in K, c is the speed of light = 3 × 10⁸ m s⁻¹ and ελ is the spectral emissivity. The above can be re-written as:

T = C2 / {λ · ln[(ελ · C1) / (π · λ⁵ · Lλ) + 1]}    (12.11)

where C1 = 2πhc² = 3.742 × 10⁻¹⁶ W m² and C2 = hc/k = 0.0144 m K are constants. The above implies that the temperature (T) of a body can be estimated from spectral radiance (Lλ) at a wavelength (λ) with the spectral emissivity (ελ) known.
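A direct numerical use of Eq. (12.11) might look as follows (Python; a sketch assuming SI units, i.e. spectral radiance in W m⁻² sr⁻¹ m⁻¹ and wavelength in metres; function name hypothetical):

```python
import math

C1 = 3.742e-16   # 2*pi*h*c**2  (W m^2)
C2 = 1.44e-2     # h*c/k        (m K)

def planck_temperature(radiance, wavelength_m, emissivity=1.0):
    """Invert Eq. (12.11): temperature (K) from spectral radiance.

    radiance     : spectral radiance L_lambda in W m-2 sr-1 m-1
    wavelength_m : wavelength in metres (e.g. 11.0e-6 for an 11-um TIR band)
    emissivity   : spectral emissivity (1.0 returns the brightness temperature)
    """
    arg = (emissivity * C1) / (math.pi * wavelength_m ** 5 * radiance) + 1.0
    return C2 / (wavelength_m * math.log(arg))
```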

12.4.1 Computation of Spectral Radiance

The first step while handling digital remote sensing data is to convert DN values into spectral radiance. The general approach has been discussed in Sect. 11.5.1 and the same is also followed for thermal data. The DN values can be converted to spectral radiance by using the general formulation (Markham and Barker 1986; Chander et al. 2009):

$$L_\lambda = L_{\min(\lambda)} + \frac{L_{\max(\lambda)} - L_{\min(\lambda)}}{Q_{cal\,max} - Q_{cal\,min}}\,(Q_{cal} - Q_{cal\,min}) \qquad (12.12)$$

where L_λ is the spectral radiance received by the sensor for the pixel in question, L_min(λ) is the minimum spectral radiance detected by the sensor, L_max(λ) is the maximum spectral radiance detected by the sensor, Q_calmin is the minimum grey level, Q_calmax is the maximum grey level, and Q_cal is the grey level of the analysed pixel.

In a simple way, this can also be written as:

$$L_\lambda = G_{rescale}\cdot Q_{cal} + B_{rescale}$$

where G_rescale is the gain (multiplicative) rescaling factor, B_rescale is the bias (additive) rescaling factor, and Q_cal is the pixel value in question.

12.4.2 Atmospheric Correction of Spectral Radiance Data

The values of spectral radiance at the satellite sensor calculated as above may carry three main radiance components: (a) thermal emission from the ground, (b) a component of atmospheric contribution, and (c) a component of reflection from the ground. Relative amounts of these components vary with wavelength, conditions and time of survey. It is important to carry out corrections to arrive at realistic estimates of the ground temperatures. The principle of atmospheric contribution and correction of thermal-IR data has been discussed in Chap. 10. Appropriate models as incorporated in various modules (e.g. Atcor, Thermal Atm Correction etc.) can be implemented to rectify for atmospheric effects.

12.4.3 Conversion of Spectral Radiance to Temperature

Once the spectral radiance (L_λ) for the pixel is known, it can be substituted in Eq. (12.11) to compute the brightness temperature. This general methodology can be followed for all wavelengths—both SWIR and TIR.

For thermal-IR bands, however, it is also possible to use a simplified equation for computing brightness temperature as follows (Chander et al. 2009):

$$T = \frac{K_2}{\ln\!\left(\dfrac{K_1}{L_\lambda}+1\right)} \qquad (12.14)$$

where K1 and K2 are constants. Table 12.4 gives values of K1 and K2 for Landsat TM, ETM+ and TIRS sensors.

In the final step, the radiant temperature, T_R, can be converted to surface kinetic temperature, T_K, using the following relationship:

$$T_R = \varepsilon_\lambda^{1/4}\,T_K \qquad (12.15)$$

where ε_λ is the spectral emissivity.

Along with other factors such as moisture, emissivity varies with surface material composition, texture, temperature, and the wavelength at which it is measured. In the thermal infrared region, most natural materials are found to possess an emissivity ranging from 0.7 to 0.98. A useful approach may be to classify the area into different surface cover types based on texture and spectral characteristics, and then assign published emissivity values to the different surface cover classes (e.g. Yang et al. 2014; Rozenstein et al. 2014; Wang et al. 2015). Generally, an emissivity value of around 0.90–0.96 may be assumed for most natural surface materials.

The distribution of spectral bands of Landsat TM together with their sensitivity limits is presented in Fig. 12.16a. Similar data for Landsat-8 (SWIR bands 6 and 7 and TIR band 10) are shown in Fig. 12.16b. Table 12.5 provides an overview of the highest measurable brightness temperature from Landsat sensors—TM, ETM+ and OLI/TIRS—in the TIR and SWIR bands.
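The whole chain of Eqs. (12.12), (12.14) and (12.15) can be expressed compactly in code. The sketch below uses illustrative Landsat-5 TM band 6 rescaling values and an assumed emissivity of 0.95; for real data the rescaling factors should be taken from the scene metadata and K1, K2 from Table 12.4.

```python
import math

# Illustrative Landsat-5 TM band 6 values (take actual ones from the scene metadata)
LMIN, LMAX = 0.1238, 1.56        # mW/cm^2/sr/um (Table 12.5)
QCALMIN, QCALMAX = 1, 255        # grey-level range of the 8-bit data
K1, K2 = 60.776, 1260.56         # Table 12.4: K1 in mW/cm^2/sr/um, K2 in K

def dn_to_radiance(qcal):
    """Eq. (12.12): linear rescaling of the pixel grey level to spectral radiance."""
    return LMIN + (LMAX - LMIN) / (QCALMAX - QCALMIN) * (qcal - QCALMIN)

def radiance_to_brightness_temp(L):
    """Eq. (12.14): at-sensor brightness temperature in kelvin."""
    return K2 / math.log(K1 / L + 1.0)

def kinetic_temperature(T_radiant, emissivity):
    """Eq. (12.15): T_R = eps^(1/4) * T_K, solved for the kinetic temperature."""
    return T_radiant / emissivity ** 0.25

dn = 130                                   # hypothetical pixel value
L = dn_to_radiance(dn)
T_b = radiance_to_brightness_temp(L)
T_k = kinetic_temperature(T_b, 0.95)       # assumed surface emissivity
print(L, T_b, T_k)                         # ~0.85 mW/cm^2/sr/um, ~294 K, ~298 K
```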

Table 12.4 Thermal calibration constants for Landsat TM, ETM+ and TIRS sensors

S. no.  Sensor               K1 (mW/cm²/sr/µm)   K2 (K)
1.      Landsat 4 TM         67.162              1284.30
2.      Landsat 5 TM         60.776              1260.56
3.      Landsat 7 ETM+       66.609              1282.71
4.      Landsat 8 TIRS-B10   77.488              1321.08

Compiled after Chander et al. (2009); metadata file of Landsat-8

Fig. 12.16 a Wavelength dependence of thermal radiance of a blackbody (ε = 1.0) plotted for a range of temperatures (0–1200 °C); shaded boxes indicate the operational range of radiance for Landsat TM bands 1 to 7 (Markham and Barker 1986). b Similar plot for the operational range of radiance for Landsat-8 SWIR bands 6 and 7, and TIR band 10 (plot generated from the Landsat-8 OLI/TIRS metadata file)

Table 12.5 Input radiance values and measurable highest brightness temperatures for Landsat thermal-IR and SWIR bands (data compiled after Chander et al. 2009; Barsi et al. 2003; metadata files)

Spectral range  Sensor            Band no.  Wavelength range (µm)  Lmin (mW/cm²/sr/µm)  Lmax (mW/cm²/sr/µm)  Measurable highest brightness temperature (°C)
Thermal IR      TM                B6        10.4–12.5              0.1238               1.56                 68
                ETM+              B6        10.4–12.5              0.00                 1.704                77
                TIRS              B10       10.6–11.2              0.01                 2.20                 87
                                  B11       11.5–12.5              Band not working satisfactorily since 2014
SWIR            TM                B5        1.55–1.75              0.037                2.719                420
                                  B7        2.08–2.35              0.015                1.438                290
                ETM+ (low gain)   B5        1.55–1.75              −0.10                4.757                500
                                  B7        2.08–2.35              −0.035               1.654                310
                ETM+ (high gain)  B5        1.55–1.75              −0.10                3.106                460
                                  B7        2.08–2.35              −0.035               1.080                250
                OLI               B6        1.56–1.66              −0.059               9.114                580
                                  B7        2.10–2.30              −0.021               2.961                340

Fig. 12.17 Distribution of ASTER spectral bands together with their temperature sensitivity limits for normal mode (gain = 1) and emissivity = 1 (plot generated from data after Urai et al. 1999)

Figure 12.17 gives the spectral distribution and sensitivity of the ASTER bands, and Table 12.6 gives the measurable temperature ranges of ASTER in normal mode. ASTER can also operate in other gain modes (see Appendix C).
Broadly, the thermal-IR bands are sensitive from sub-zero to about 65–90 °C; therefore, data from these bands can be used for studying thermal phenomena on the Earth's surface in the normal range (0–60 °C). The SWIR bands possess the capability to measure temperatures in the range of about 120–550 °C; therefore, these bands can be used for studying high-temperature surface phenomena such as volcanic vents, volcanic lava flows and surface fires.
The temperature obtained as outlined above represents the overall temperature of the pixel, and is called the pixel-integrated temperature. This can approximate the true surface temperature only in the case of surfaces where temperatures are uniform over large areas. Examples of estimation of surface temperature using TIR and SWIR data have been given by several workers (e.g. Rothery et al. 1988; Oppenheimer et al. 1993; Becker and Li 1995; Prata 1995; Prakash and Gupta 1999; Urai 2000; Blackett 2017; Schroeder et al. 2016; also see Sects. 19.14 and 19.15).

Table 12.6 Measurable temperature ranges for ASTER (normal mode); minimum input radiance is taken as 2% of the maximum input radiance (data after Urai et al. 1999; for more details, see Appendix C)

Subsystem  Band no.  Lmin (mW/cm²/sr/µm)   Lmax (mW/cm²/sr/µm)   Brightness temperature range (°C)
VNIR       1         0.85                  42.7                  N/A
           2         0.71                  35.8                  N/A
           3         0.44                  21.8                  705–973
SWIR       4         0.11                  5.50                  273–449
           5         0.035                 1.76                  149–288
           6         0.032                 1.58                  140–277
           7         0.030                 1.51                  132–267
           8         0.021                 1.055                 115–242
           9         0.0161                0.804                 101–222
TIR        10–14     Radiance of 200 K     Radiance of 370 K     (−)73–97

Fig. 12.18 Schematic representation of sub-pixel proportion of hot/cool areas

12.4.4 Sub-pixel Temperature Estimation

In many cases, pixels consist of thermally mixed objects. For example, in the case of an active lava flow or surface fire, the source pixel is likely to be made up of two thermally distinct surface components (Fig. 12.18): (1) a hot component, such as molten lava or fire, with temperature T_h, occupying a portion 'p' of the pixel, and (2) a cool component (background area) with temperature T_c, which occupies the remaining portion of the pixel (1 − p). If the same thermally radiant pixel is concurrently sensed in two channels of the sensor, then we have two simultaneous equations:

$$L_i = p\,L_i(T_h) + (1-p)\,L_i(T_c) \qquad (12.16)$$

$$L_j = p\,L_j(T_h) + (1-p)\,L_j(T_c) \qquad (12.17)$$

where L_i and L_j are the at-satellite spectral radiances in channels i and j, p is the portion of the pixel occupied by the hot source, L_i(T_h) and L_j(T_h) are the spectral radiances for the hot source in channels i and j respectively, and L_i(T_c) and L_j(T_c) are the spectral radiances for the cool source in channels i and j. Using the dual-band method (Matson and Dozier 1981), the temperature and size of sub-pixel heat sources can be calculated if any one of the three variables T_h, T_c or p is known (as this leaves two unknowns and two equations). Commonly, T_c can be reasonably estimated as the temperature of the background area; then, using data from two bands (e.g. SWIR bands TM7/ETM+7 and TM5/ETM+5), the method can be applied to estimate the sub-pixel temperature and size of hot areas (e.g. volcanoes and surface fires, see Sects. 19.14 and 19.15).
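The dual-band solution can be scripted directly. The sketch below (Python/NumPy/SciPy) assumes the background temperature T_c is known and solves Eqs. (12.16)–(12.17) for T_h and p using blackbody Planck radiances at two illustrative band-centre wavelengths; the band centres, the neglect of emissivity and the search interval are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import brentq

C1 = 3.742e-16   # 2*pi*h*c**2 [W m^2]
C2 = 1.439e-2    # h*c/k [m K]

def planck_radiance(T, wl):
    """Blackbody spectral radiance (Eq. 12.10 with emissivity 1), SI units."""
    return C1 / (np.pi * wl**5 * (np.exp(C2 / (wl * T)) - 1.0))

def dual_band(L_i, L_j, wl_i, wl_j, T_c):
    """Solve Eqs. (12.16)-(12.17) for the hot-source temperature T_h and fraction p,
    given the pixel radiances in two bands and a known background temperature T_c."""
    def mismatch(T_h):
        # fraction p implied by band i for a trial T_h ...
        p = (L_i - planck_radiance(T_c, wl_i)) / (planck_radiance(T_h, wl_i) - planck_radiance(T_c, wl_i))
        # ... must also reproduce the radiance observed in band j
        L_j_model = p * planck_radiance(T_h, wl_j) + (1 - p) * planck_radiance(T_c, wl_j)
        return L_j_model - L_j
    T_h = brentq(mismatch, T_c + 1.0, 1800.0)      # search interval in kelvin (assumed)
    p = (L_i - planck_radiance(T_c, wl_i)) / (planck_radiance(T_h, wl_i) - planck_radiance(T_c, wl_i))
    return T_h, p

# Synthetic test: a 600 K hot spot covering 2% of a 300 K pixel,
# observed at ~1.65 um and ~2.22 um (TM5/TM7-like band centres)
wl5, wl7 = 1.65e-6, 2.22e-6
L5 = 0.02 * planck_radiance(600.0, wl5) + 0.98 * planck_radiance(300.0, wl5)
L7 = 0.02 * planck_radiance(600.0, wl7) + 0.98 * planck_radiance(300.0, wl7)
print(dual_band(L5, L7, wl5, wl7, T_c=300.0))      # ~ (600.0, 0.02)
```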

12.5 Thermal-IR Multispectral Sensing

Thermal-IR multispectral sensing utilizes differences in spectral emissivity to discriminate between various minerals/mineral groups. The spectra of minerals and rocks in the thermal-IR region have been described earlier (Chap. 3). To briefly recapitulate the important features:

• The silicates, the most abundant group of minerals in crustal rocks, show prominent spectral bands in the TIR region. The Si–O stretching phenomenon in silicates dominates the 8–12 µm region. The wavelength of the Si–O absorption band decreases from 12 to 9 µm in a gradual systematic succession for minerals with chain, sheet and framework structures (see Sect. 3.4.3). This provides a distinct possibility for identifying minerals with these crystal structures.
• Hydroxides, carbonates, sulphates, phosphates and oxides are other important mineral groups frequently occurring in sedimentary and metamorphic rocks, and these also exhibit prominent spectral features in the TIR (especially the 8–14 µm) region.
• The emission spectra are additive in nature, and therefore the spectra of rocks depend on the relative amounts of the various minerals present in the rock and the spectra of the individual minerals.
• The emission spectra are not significantly affected by surficial coatings (less than 1 mm thick) on mineral/rock surfaces.

Multispectral thermal-IR sensing exploits the presence of subtle spectral emissivity differences in minerals. The technique has to use a multispectral sensor in order to pick up relative differences in emissivity in narrow spectral bands. Lyon and Patterson (1966) were the first to investigate the above idea of spectral emissivity differences in minerals and rocks for geological field investigations. Using field data from a spectrometer mounted on a mobile van, they found that the reststrahlen (low-emissivity) spectral band centre shifts to longer wavelengths and decreases in intensity with decreasing silica content (or corresponding increase in mafic minerals) in rocks (Fig. 12.19).

12.5.1 Multispectral Sensors in the TIR

Initially, Kahle and Rowan (1980) used six-channel TIR data out of the Bendix 24-channel aerial scanner data acquired over Utah, and their study showed highly promising results for geologic discrimination. This led to the development of a dedicated aerial six-channel NASA Daedalus Thermal-Infrared Multispectral Scanner (TIMS). The TIMS operated in six channels between 8 and 12 µm and provided extremely interesting results (Kahle and Goetz 1983). As far as space-borne imaging is concerned, ASTER is the only

space-borne multispectral sensor operating in the TIR (see Table 6.7).
The distribution of TIMS and ASTER channels in relation to the spectra of common rocks is shown in Fig. 12.19. It is obvious that the multispectral thermal-IR data from TIMS and ASTER have capabilities for mineralogical/lithological discrimination and identification.

Fig. 12.19 Spectral normal emissivities of common terrestrial rocks. Spectra are arranged in decreasing SiO2 content, from acidic to basic types. Numbers refer to wavelengths of prominent minima. The positions of TIMS and ASTER channels (vertical bars) indicate that multispectral thermal sensing has tremendous potential in lithological mapping (spectral curves from Lyon and Patterson 1966)

12.5.2 Data Correction

The multispectral thermal-IR data require a special approach for data processing. The first step is calibration of the data. During scanning, the ASTER sensor collects data from standard sources for calibration. The DN values are converted to at-sensor radiance that needs to be corrected for atmospheric effects. Commonly this is accomplished by monitoring the atmospheric parameters and using a suitable model for atmospheric transmission and emission (e.g. MODTRAN model, see Chap. 10). This correction results in ground spectral radiance images. After the above, the data are processed to derive spectral emissivity and surface kinetic temperature values.

12.5.3 Temperature/Emissivity Separation (TES)

The thermal-IR radiance is a function of both the surface temperature and the emissivity. The emissivity relates to the composition of the surface and may be used for surface constituent mapping. The multispectral thermal-IR sensor ASTER provides data in five spectral bands; however, the problem is that there are six unknowns (one surface temperature and five spectral emissivity values corresponding to the five spectral bands), and this makes temperature/emissivity separation (TES) a difficult problem (e.g. Hook et al. 1992; Kealy and Hook 1993; Gillespie et al. 1998). Three different data processing techniques are used for temperature/emissivity separation: the emissivity reference channel method, the emissivity normalization method and the alpha residual method. Conceptually, these methods are as follows:

(a) Emissivity reference channel method is a rather straightforward technique. It assumes that all the pixels in one band have the same constant emissivity. Using this constant emissivity value for a particular band (which is assumed), a temperature image is calculated; now, using the temperature values, emissivity values are calculated in all the other bands using the Planck function.
(b) Emissivity normalization method The first step in this method is to calculate the radiant temperature for every pixel in all the channels using a certain fixed emissivity value; for a particular pixel, the highest of the set of temperatures in the various channels is considered as the near-correct temperature of the pixel. The spectral emissivity in the chosen channel is assigned a value (say 0.96), which represents a reasonable emissivity value for exposed rock surfaces. This set of values (highest temperature and spectral emissivity value of 0.96) is used to compute the surface (kinetic) temperature of the pixel. With the kinetic temperature so determined, the spectral emissivity in each channel is computed using the corresponding radiant temperatures. In this way, spectral emissivity images are derived for the various channels (a sketch of this method is given below).
(c) Alpha residual method produces spectra that approximate the shape of emissivity spectra from thermal infrared radiance data. The method uses Wien's approximation of the Planck function for simplification and enables separation of temperature and emissivity.
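A minimal sketch of the emissivity normalization idea (method (b) above) for a multiband TIR radiance cube is given below; the fixed emissivity of 0.96, the band-centre wavelengths and the array layout are assumptions for illustration, not the operational ASTER product algorithm.

```python
import numpy as np

C1 = 3.742e-16   # 2*pi*h*c**2 [W m^2]
C2 = 1.439e-2    # h*c/k [m K]

def bt_from_radiance(L, wl, eps):
    """Brightness temperature (Eq. 12.11) for radiance L [W m^-3 sr^-1] at wavelength wl [m]."""
    return C2 / (wl * np.log(eps * C1 / (np.pi * wl**5 * L) + 1.0))

def radiance_from_temp(T, wl):
    """Blackbody spectral radiance for temperature T [K] at wavelength wl [m]."""
    return C1 / (np.pi * wl**5 * (np.exp(C2 / (wl * T)) - 1.0))

def emissivity_normalization(cube, wavelengths, eps_fixed=0.96):
    """cube: (bands, rows, cols) ground-leaving TIR radiance; returns (kinetic T, emissivity cube)."""
    # 1. radiant temperature per band, assuming the same fixed emissivity everywhere
    T_bands = np.stack([bt_from_radiance(cube[b], wavelengths[b], eps_fixed)
                        for b in range(cube.shape[0])])
    # 2. the highest of the band temperatures is taken as the (kinetic) pixel temperature
    T_kin = T_bands.max(axis=0)
    # 3. with T_kin fixed, the emissivity in each band follows from the measured radiance
    eps = np.stack([cube[b] / radiance_from_temp(T_kin, wavelengths[b])
                    for b in range(cube.shape[0])])
    return T_kin, eps
```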

The various spectral emissivity images, and also the radiant temperature images in general, are very highly correlated. The contrast ratio in spectral emissivity is quite low, commonly less than 0.15, as compared to the contrast ratio of 0.5 or higher found in VNIR data. Hence, the data need further processing for meaningful representation. A commonly used technique for enhancing correlated thermal-IR data is decorrelation stretch (Sect. 13.7.4). The decorrelated images can be colour-coded and displayed as a composite.

12.5.4 SO2 Atmospheric Absorption

Sulfur dioxide (SO2) may be an important pollutant of the atmosphere at places, as it may be emitted by volcanoes and also released from industrial activity or waste. It leads to precipitation of sulfuric acid and has long-term detrimental effects on vegetation, man-made structures and human health. SO2 has absorption features at 7.3 and 8.6 µm, which means that thermal-IR sensing can be used to quantify its abundance in the atmosphere (Watson et al. 2004). The 7.3 µm absorption feature lies in the atmospheric opaque region; however, the 8.6 µm feature lies within the atmospheric window. ASTER band 11 encompasses 8.6 µm. Therefore, the selective absorption of the ground radiance due to SO2 in the atmosphere can be sensed by ASTER band 11 (for examples, see Figs. 19.125 and 19.159).

12.5.5 Applications

The multispectral thermal-IR data have been used for a number of geologic studies, to mention just a few: for distinguishing between pahoehoe and aa lava flows in the Hawaii islands (Lockwood and Lipman 1987), for quantitative estimation of granitoid composition in the Sierra Nevada (Sabine et al. 1994), for lithologic analysis of an alkaline rock complex (Watson et al. 1996), and for mapping playa evaporite minerals and associated sediments (Crowley and Hook 1996). Several ASTER ratio indices have been developed for specific mineral identification (such as quartz, silica, carbonates, mafic, ultramafic) (see Sect. 19.7.5). Some examples of multispectral thermal-IR image data as applied to lithologic mapping are presented in Chap. 19.

12.6 LIDAR Sensing

12.6.1 Working Principle

LIDAR (laser radar) sensing is an active technique that uses an artificial laser beam to illuminate the ground. The laser rays, transmitted from the system on board the remote sensing platform, impinge on the ground and are back-scattered (Fig. 12.20). The intensity of the back-scattered beam is sensed by the detector, amplified and recorded. The data can be used to discriminate ground objects. The technique uses differences in reflectivity in very fine, nearly monochromatic spectral bands. A number of laser wavelengths ranging from the visible to the thermal-infrared part of the EM spectrum are available.

Fig. 12.20 Schematic arrangement of an active laser remote sensing system

12.6.2 Scope for Geological Applications

LIDAR sensing in the thermal-IR spectral region uses the presence of reststrahlen (high-reflectivity) bands in geologic materials. With changes in the lattice structure and mineral composition, the high-reflectivity Si–O bands shift in position and intensity from 9 to 12 µm. Additionally, many other minerals, such as carbonates (at 12.3 µm), sulphates (at 10.2 and 9 µm), phosphates (at 9.25 µm) and aluminium-bearing clays (at 11 µm), also exhibit spectral features in the thermal-IR spectral region.

Kahle et al. (1984) used a set of two laser wavelengths, at 9.23 µm and 10.27 µm, for a flight in Death Valley, California. Ground-track photography was used to locate the flight path and to compare it with ancillary data such as the geological map and other images. The lithological units covered included shale, limestone, quartzites and volcanics (rhyolitic tuffs and basalts).

The laser data were recorded on a strip chart and later digitized. A ratio plot of the intensity of the two lasers (9.23 µm/10.27 µm) was found to be very informative. It could discriminate between various rock types. To compare the reflectivity data with thermal emission data of the same area, Kahle et al. (1984) registered the laser data over the TIMS image data, and found that the emissivity and reflectivity values are inversely related (Fig. 12.21). Cudahy et al. (1994) reported similar results using lasers for geological mapping in Australia.

Fig. 12.21 Upper part of the figure shows the TIMS emissivity ratio for bands 3/5 (8.9–9.3 µm waveband/10.2–10.9 µm waveband) for various surface materials in a profile in Death Valley, California. The lower part shows the laser reflectivity ratio for wavelengths 9.23 µm/10.27 µm for the same profile for comparison. The two ratios have an inverse relation. 1 saline playa and lake sediments; 2 argillaceous sedimentary rocks; 3 fan gravels, mixed (Blackwater, Tucki and Trail sources); 4 fan gravels of Blackwater and Trail canyon; 5 fan gravels of Tucki wash; 6 carbonate rocks and fans (predominantly dolomites); 7 basaltic lava; 8 breccia rocks and fans of mixed composition (after Kahle et al. 1984)
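As an illustration of the kind of two-wavelength processing used by Kahle et al. (1984), the snippet below forms a simple reflectivity ratio image from two co-registered laser-return intensity arrays; the function name and the masking threshold are assumptions made for the example.

```python
import numpy as np

def reflectivity_ratio(intensity_923, intensity_1027, min_valid=1e-6):
    """Ratio of co-registered laser return intensities (9.23 um / 10.27 um).

    Pixels with a near-zero denominator are masked; the contrast in the ratio
    image reflects the differing reststrahlen behaviour of the lithologies."""
    i923 = np.asarray(intensity_923, dtype=float)
    i1027 = np.asarray(intensity_1027, dtype=float)
    ratio = np.full(i923.shape, np.nan)
    valid = i1027 > min_valid
    ratio[valid] = i923[valid] / i1027[valid]
    return ratio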

12.7 Future

Future developments in thermal-IR sensing are expected in many directions, for example:

• Further understanding of the emission spectra of natural surfaces and mixtures
• Improved methods to incorporate atmospheric-environmental effects
• Improvements in data processing for emissivity image computation
• Extending the use of the 3–5 µm region and the 17–25 µm region
• Developing higher spatial and spectral resolution multispectral thermal-IR scanners on orbital platforms.

References

Abrams MJ, Kahle AB, Palluconi FD, Schieldge JP (1984) Geologic mapping using thermal images. Remote Sens Environ 16:13–33
Barsi JA, Schott JR, Palluconi FD, Helder DL, Hook SJ, Markham BL, Chander G, O'Donnell EM (2003) Landsat TM and ETM+ thermal band calibration. Can J Remote Sens 29(2):141–153
Becker F, Li ZL (1995) Surface temperature and emissivity at various scales: definition, measurement, and related problems. Remote Sens Rev 12:225–253
Blackett M (2017) An overview of infrared remote sensing of volcanic activity. J Imaging 3:13. https://doi.org/10.3390/jimaging3020013
Bonn FJ (1977) Ground truth measurements for thermal infrared remote sensing. Photogramm Eng Remote Sens 43:1001–1007
Byrne GF, Davis JR (1980) Thermal inertia, thermal admittance and the effect of layers. Remote Sens Environ 9:295–300
Carslaw HS, Jaeger JC (1959) Conduction of heat in solids, 2nd edn. Oxford University Press, New York
Chander G, Markham BL, Helder DL (2009) Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Remote Sens Environ 113:893–903
Crowley JK, Hook SJ (1996) Mapping playa evaporite minerals and associated sediments in Death Valley, California, with multispectral thermal images. J Geophys Res 99B:643–660
Cudahy TJ, Connor PM, Hausknecht P, Hook SJ, Huntington JF, Kahle AB, Phillips RN, Whitbourn LB (1994) Airborne CO2 laser spectrometer and TIMS TIR data for mineral mapping in Australia. In: Proceedings 7th Australian Remote Sensing Conference, Melbourne, Victoria, Australia, March 1–4, pp 918–924
Dancak C (1979) Temperature calibration of test infrared scanner. Photogramm Eng Remote Sens 45:749–751
Ellyett CD, Pratt DA (1975) A review of the potential applications of remote sensing techniques to hydrogeological studies in Australia. Australian Water Resources Council Technical Paper No 13, 147 pp
Gillespie AR, Kahle AB (1977) Construction and interpretation of a digital thermal inertia image. Photogramm Eng Remote Sens 43:983–1000
Gillespie AR, Rokugawa S, Matsunaga T, Cothern S, Hook S, Kahle AB (1998) A temperature emissivity separation algorithm for advanced spaceborne thermal emission and reflection radiometer (ASTER) images. IEEE Trans Geosci Remote Sens 36(4):1113–1126
Hook SJ, Gabell AR, Green AA, Kealy PS (1992) A comparison of techniques for extracting emissivity information from thermal infrared data for geologic studies. Remote Sens Environ 42:123–135
Janza FJ (1975) Interaction mechanisms. In: Reeves RG (ed) Manual of remote sensing, 1st edn. Am Soc Photogramm, Falls Church, VA, pp 75–179
Kahle AB (1977) A simple thermal model of the Earth's surface for geologic mapping by remote sensing. J Geophys Res 82:1673–1690

Kahle AB (1980) Surface thermal properties. In: Siegal BS, Gillespie AR (eds) Remote sensing in geology. Wiley, New York, pp 257–273
Kahle AB, Goetz AFH (1983) Mineralogic information from a new airborne thermal infrared multispectral scanner. Science 222:24–27
Kahle AB, Rowan LC (1980) Evaluation of multispectral middle infrared aircraft images for lithologic mapping in the East Tintic Mountains, Utah. Geology 8:234–239
Kahle AB, Gillespie AR, Goetz AFH (1976) Thermal inertia imaging: a new geologic mapping tool. Geophys Res Lett 3:26–28
Kahle AB, Madura DP, Soha JM (1980) Middle infrared multispectral aircraft scanner data: analysis for geological applications. Appl Opt 19:2279–2290
Kahle AB, Shumate MS, Nash DB (1984) Active airborne infrared laser system for identification of surface rock and minerals. Geophys Res Lett 11:1149–1152
Kealy PS, Hook SJ (1993) Separating temperature and emissivity in thermal infrared multispectral scanner data: implications for recovering land surface temperatures. IEEE Trans Geosci Remote Sens 31(6):1155–1164
Lockwood JP, Lipman PW (1987) Holocene eruptive history of Mauna Loa volcano. In: Decker RW, Wright TL, Stauffer PH (eds) Volcanism in Hawaii, vol 1. USGS Prof Pap 1350, US Geol Surv, Washington, DC, pp 509–535
Lyon RJP, Patterson JW (1966) Infrared spectral signatures: a field geological tool. In: Proceedings 4th International Symposium on Remote Sensing of Environment, Ann Arbor, MI, pp 215–220
Markham BL, Barker JL (1986) Landsat MSS and TM post-calibration dynamic ranges, exoatmospheric reflectances and at-satellite temperatures. EOSAT Technical Notes 1:3–8
Marsh SE, Schieldge JP, Kahle AB (1982) An instrument for measuring thermal inertia in the field. Photogramm Eng Remote Sens 48:605–607
Matson M, Dozier J (1981) Identification of subresolution high temperature sources using a thermal infrared sensor. Photogramm Eng Remote Sens 47(9):1311–1318
Miller SH, Watson K (1977) Evaluation of algorithms for geological thermal inertia mapping. In: Proceedings 11th International Symposium on Remote Sensing of Environment, Ann Arbor, MI, pt 2, pp 1147–1160
Oppenheimer C, Francis PW, Rothery DA, Carlton RW, Glaze LS (1993) Infrared image analysis of volcanic thermal features: Lascar Volcano, Chile, 1984–1992. J Geophys Res 98:4269–4286
Prakash A, Gupta RP (1999) Surface fires in the Jharia coalfield, India: their distribution and estimation of area and temperature from TM data. Int J Remote Sens 20(10):1935–1946
Prata AJ (1995) Thermal remote sensing of land surface temperature from satellites: current status and future prospects. Remote Sens Rev 12:175–224
Price JC (1977) Thermal inertia mapping: a new view of the Earth. J Geophys Res 81:2582–2590
Quattrochi DA, Luvall JC (eds) (2004) Thermal remote sensing in land surface processes. CRC Press, Boca Raton, FL, 440 pp
Quattrochi DA et al (2009) Thermal remote sensing: theory, sensors and applications. In: Jackson MW (ed) Earth observing platforms and sensors, manual of remote sensing, 3rd edn, vol 1.1. Amer Soc Photogramm Remote Sens (ASPRS), Bethesda, MD, pp 107–187
Reeves RG (1968) Introduction to electromagnetic remote sensing with emphasis on applications to geology and hydrology. Am Geol Inst, Washington, DC
Rothery DA, Francis PW, Wood CA (1988) Volcano monitoring using short wavelength IR data from satellites. J Geophys Res 93(B7):7993–8008
Rozenstein O, Qin Z, Derimian Y, Karnieli A (2014) Derivation of land surface temperature for Landsat-8 TIRS using a split window algorithm. Sensors 14:5768–5780
Sabine C, Realmuto VJ, Taranik JV (1994) Quantitative estimation of granitoid composition from thermal infrared multispectral scanner (TIMS) data, Desolation Wilderness, northern Sierra Nevada, California. J Geophys Res 99(B3):4261–4271
Sabins FF Jr (1969) Thermal infrared imagery and its application to structural mapping in southern California. Geol Soc Am Bull 80:397–404
Sabins FF Jr (1987) Remote sensing principles and interpretation, 2nd edn. Freeman, San Francisco, 449 pp
Sabins FF Jr (1997) Remote sensing: principles and interpretation, 3rd edn. Freeman & Co, New York
Scarpace FL, Madding RP, Green T III (1975) Scanning thermal plumes. Photogramm Eng 41:1223–1231
Schott JR (1989) Image processing of thermal infrared images. Photogramm Eng Remote Sens 55(9):1311–1321
Schroeder W, Oliva P, Giglio L, Quayle B, Lorenz E, Morelli F (2016) Active fire detection using Landsat-8/OLI data. Remote Sens Environ 185:210–220
Short NM, Stuart LM Jr (1982) The heat capacity mapping mission (HCMM) anthology. NASA SP-465, US Govt Printing Office, Washington, DC, 264 pp
Urai M (2000) Volcano monitoring with Landsat TM short-wave infrared bands: the 1990–1994 eruption of Unzen Volcano, Japan. Int J Remote Sens 21(5):861–872
Urai M, Fukui K, Yamaguchi Y, Pieri DC (1999) Volcano observation potential and global volcano monitoring plan with ASTER. Bull Volcanol Soc Japan 44(3):131–141 (in Japanese)
Vlcek J (1982) A field method for determination of emissivity with imaging radiometers. Photogramm Eng Remote Sens 48:609–614
Wang et al (2015) 3D geological modeling for prediction of subsurface Mo targets in the Luanchuan district, China. Ore Geol Rev 71:592–610
Warwick D, Hartopp PG, Viljoen RP (1979) Application of the thermal infrared line scanning technique to engineering geological mapping in South Africa. Q J Eng Geol 12:159–179
Watson K (1973) Periodic heating of a layer over a semi-infinite solid. J Geophys Res 78:5904–5910
Watson K (1975) Geologic applications of thermal infrared images. Proc IEEE 63(1):128–137
Watson K (1982a) Regional thermal inertia mapping from an experimental satellite. Geophysics 47:1681–1687
Watson K (1982b) Regional thermal inertia mapping from an experimental satellite. Geophysics 47:1681–1687
Watson K, Rowan LC, Bowers TL, Anton-Pacheco C, Gumiel P, Miller SH (1996) Lithologic analysis from multispectral thermal infrared data of the alkalic rock complex at Iron Hill, Colorado. Geophysics 61:706–721
Watson IM et al (2004) Thermal infrared remote sensing of volcanic emissions using the moderate resolution imaging spectroradiometer. J Volcanol Geotherm Res 135:75–89
Yang L et al (2014) Land surface temperature retrieval for arid regions based on Landsat-8 TIRS data: a case study in Shihezi, Northwest China. J Arid Land 6(6):704–716
13 Digital Image Processing of Multispectral Data

13.1 Introduction

13.1.1 What Is Digital Imagery?

In the most general terms, a digital image is an array of numbers depicting the spatial distribution of a certain field or parameter. It is a digital representation in the form of rows and columns, where each number in the array represents the relative value of the parameter at that point/over the unit area (Fig. 13.1). The parameter could be reflectivity of EM radiation, emissivity, temperature, or a parameter such as topographical elevation, geomagnetic field or even any other computed parameter. In this chapter, we deal with remote sensing multispectral images.

In a digital image, each point/unit area in the image is represented by an integer digital number (DN). The lowest intensity is assigned DN zero and the highest intensity the highest DN, the various intermediate intensities receiving appropriate intermediate DNs. Thus, the intensities over a scene are converted into an array of numbers, where each number represents the relative value of the field over a unit area, which is called the picture element or pixel.

The range of DNs used in a digital image depends upon the number of bits per data value, the most common being the 8-bit type:

Bit number       Scale   DN-range
7-bit or 2^7     128     0–127
8-bit or 2^8     256     0–255
9-bit or 2^9     512     0–511
10-bit or 2^10   1024    0–1023
11-bit or 2^11   2048    0–2047
12-bit or 2^12   4096    0–4095

Advantages: There are several advantages of remote sensing data handling in digital mode (as compared to photographic mode), such as: (a) better quality of data, as the entire DN-range (generally 256 levels) can be used; (b) no loss of information during reproduction and distribution; (c) greater ease of data storage, distribution and a longer shelf life; (d) amenability to digital image processing; and (e) repeatability of results.

13.1.2 Sources of Multispectral Image Data

The various types of scanners (Chap. 5) used on aerial and space-borne platforms constitute the basic sources and provide the bulk of remote sensing multispectral digital imagery. In some cases, photographic products can also be scanned to generate digital image data. This could be particularly useful in cases where some of the input data may be available only as photographic products, such as studies requiring use of old archival photographic records. However, as scanning of film involves an additional step at the data input stage, and every additional step means some loss of information, it is to be used only when utmost necessary.

13.1.3 Storage and Supply of Digital Image Data

A data disk is composed of tiny storage sites called bits. Each unit of information is called a byte. The alphanumeric characters are stored in various models such as ASCII or EBCDIC. The video data are commonly stored in binary mode, i.e. there are two alternatives for each bit: to remain unfilled or to get filled, also called no/yes, or 0/1.

Remote sensing data, as received from satellites by the data-receiving agency, are generally first partially corrected, pre-processed and reformatted before being supplied to users. A variety of media have been used for data distribution. In the 1970s–80s, computer-compatible tapes (CCTs) were initially used for this purpose; subsequently, CDs, optical disks and exabyte tapes were used. Nowadays, almost all data supply is done through the internet, i.e. File Transfer Protocol (FTP). The data are stored on HDD, flash card or solid-state drive (SSD).


Fig. 13.1 Structure of a digital image. a ASTER Band3 image showing a part of the Himalayas and the Indo-Gangetic Plains; note the prominent dark canal in the boxed area. b DN output of the area marked in (a). c Topographic map of the corresponding area (Survey of India). d Field photograph of the canal

A remote sensing data disk usually contains several files, such as header, ancillary, video data and trailer files. Each file may consist of several records. Header, ancillary and/or trailer files contain such information as location, date, type and condition of sensor, altitude, Sun angle and other data for calibration etc. The video data files contain image data, which may be arranged in a variety of ways, called formats. The two important formats used are band-sequential (BSQ) and band-interleaved by line (BIL) (Fig. 13.2). In the BSQ format, all the data for a single band covering the entire scene are written as one file, each line forming a record; in this way, there are as many video data files as there are spectral bands. In the BIL format, the video data for the various bands are written line-wise (i.e. line 1 band 1, line 1 band 2, line 1 band 3, and so on). BIL is quite handy when multispectral data of a smaller sub-scene are to be picked out, and BSQ is particularly convenient when some of the spectral bands are to be skipped and only some are to be used.

On the basis of the degree of rectification carried out, remote sensing image data are broadly categorised as one of three types: (1) raw data, in which little correction has been carried out, (2) pre-processed data, in which some initial or basic corrections have been carried out, and (3) precision-processed data, in which rectification to a high degree has been accomplished and the data correspond to a certain standard map projection. Generally, raw data disks are not supplied to users, as the data contained therein are not ready for interpretation. The pre-processed type is the most commonly available product. Precision processing requires extra effort and has to be especially requested.

13.1.4 Image Processing Systems

The revolution in the field of computers during the last decade has made digital image processing widely accessible. The power of the workstations of yesterday is matched by the desktop PCs (personal computers) of today, and the change in scenario is a continuous on-going process. Therefore, it is best to talk in terms of only general system requirements, rather than the capacity of computing systems.

An image processing system comprises hardware (physical computer components and peripherals) and software (computer programs). Hardware comprises the central processing unit (CPU), a hard disk to store data, and random access memory (RAM) to store data and programs when the computer is working. Linked are a monitor, a keyboard and a mouse. Peripherals may include tape drives, audio and video systems, scanners, printers, photo-writers, CD-writers etc. (Fig. 13.3).
Fig. 13.2 Types of data formats: band sequential and band interleaved by line
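To make the two layouts concrete, the small NumPy sketch below shows how the same three-band, two-line scene would be ordered on disk in BSQ and BIL, and how a single band is recovered from each; the tiny array sizes are illustrative only.

```python
import numpy as np

bands, lines, pixels = 3, 2, 4
# a toy multispectral scene: cube[b, l, p] is the DN of band b, line l, pixel p
cube = np.arange(bands * lines * pixels, dtype=np.uint8).reshape(bands, lines, pixels)

# BSQ: all lines of band 1, then all lines of band 2, ... (band, line, pixel order)
bsq_stream = cube.reshape(-1)

# BIL: line 1 of band 1, line 1 of band 2, ..., then line 2 of each band
bil_stream = cube.transpose(1, 0, 2).reshape(-1)

# Recovering band 2 (index 1) from each stream:
band2_from_bsq = bsq_stream.reshape(bands, lines, pixels)[1]
band2_from_bil = bil_stream.reshape(lines, bands, pixels)[:, 1, :]
assert np.array_equal(band2_from_bsq, band2_from_bil)
```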

Fig. 13.3 Typical configuration of a remote sensing image data processing system (after Curran 1985)

Software provides the logical instructions that drive a computer. There are two basic kinds of software, called operating system software and applications software. Operating system software provides a basic platform for working and allows the running of applications software. Applications software performs applications of particular interest to the user. A list of the more widely used remote sensing software is provided in Appendix D.

A crucial component of the digital image processing system is the colour display device that presents the results of image processing. The processor generates a signal driving the monitor. The display system has also evolved over the years. In the early days, the display screen used to contain a phosphor layer; when high-energy electrons from the signal strike/excite the phosphor, visible light is emitted, the intensity of the emitted light being proportional to the current of the electron beam. Colour on the screen was displayed by using triads of phosphors: red, green and blue (the three primary additive colours), each one being independently excited by an electron beam. These have been called the R, G, B electron guns. Nowadays, more sophisticated devices in the form of liquid-crystal display (LCD) and light-emitting diode (LED) displays are used. An LCD is a flat-panel display that uses the light-modulating properties of liquid crystals to display the information, whereas an LED display is a flat-panel display that uses an array of light-emitting diodes as pixels for the display of image processing results.

Look-up tables (LUTs) provide data manipulation and display interactively, with a lot of flexibility. An image is displayed on the monitor through a set of three LUTs, one for each of the three R, G, B guns. A black-and-white image display would have identical LUTs for the R, G, B guns. An LUT is used to maintain a specified selective relationship between the DN values in the input image and the grey levels displayed on the monitor (output) (Fig. 13.4). Digital image processing changes the LUTs, bringing about changes in the display accordingly, although the image data on the hard disk do not change, unless specifically saved. This allows flexibility in interactive working, by trial and error.

Fig. 13.4 Working concept for look-up table (LUT) of 8-bit data
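The LUT mechanism amounts to a 256-entry mapping applied to every displayed pixel. A minimal NumPy sketch, assuming 8-bit data and a simple linear contrast-stretch LUT between two chosen cut-off DNs, is given below; the cut-off values are illustrative.

```python
import numpy as np

def linear_stretch_lut(low=30, high=200):
    """Build a 256-entry look-up table: DNs below `low` map to 0, DNs above `high`
    map to 255, and values in between are stretched linearly."""
    dn = np.arange(256, dtype=float)
    lut = np.clip((dn - low) / (high - low), 0.0, 1.0) * 255.0
    return lut.astype(np.uint8)

def apply_lut(image, lut):
    """Display transform: the 8-bit image indexes the LUT; the stored data stay unchanged."""
    return lut[image]

# identical LUTs on the R, G, B guns give a black-and-white (grey-scale) display
lut = linear_stretch_lut()
displayed = apply_lut(np.array([[10, 100, 250]], dtype=np.uint8), lut)
```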

13.1.5 Techniques of Digital Image Processing

Digital image processing can be carried out for various purposes. Basic background and general techniques of digital image processing of remote sensing data have been discussed by many (Moik 1980; Hord 1982; Jensen 2005; Richards and Jia 2006; Pratt 2007; Schowengerdt 2007; Mather 2010; Gonzales and Woods 2008; Russ 2011). Geological aspects of digital remote sensing analysis have been discussed in particular by Condit and Chavez (1979), Gillespie (1980), Harris et al. (1999), Vincent (1997), Gupta (2003), Drury (2004), Sabins (2007) and Prost (2013).

Based on the objective and technique, digital image processing can be divided into various types. In this treatment, the discussion is organized under the following sub-headings:

1. Radiometric image correction: correction of the recorded digital image in respect of radiometric distortions.
2. Geometric image correction: deals with digital processing for systematic geometric distortions.
3. Image registration: superimposition of images taken by different sensors from different platforms and/or at different times, over one another, or onto a standard map projection.
4. Image enhancement: aims at contrast manipulation to enhance certain features of interest in the image.
5. Image filtering: extraction (and enhancement) of spatial-scale information from an image.
6. Image transformation: deals with processing of multiple band images to generate a computed (transformed) image.
7. Colour enhancement: use of colour space in single and multiple images for feature enhancement.
8. Image fusion: aims at combining two or more images by using a certain algorithm, to form a new image.
9. 2.5-dimensional visualization: deals with the display of image-raster data as a surface from different perspectives.
10. Image segmentation: subdivides and describes the image by textural parameters.
11. Classification: classification of pixels of the scene into various thematic groups, based on spectral response characteristics.

Point versus local operation. During digital processing of an image, old pixel values are modified and an image with new DN values is created, according to a certain scheme. In this respect, the various processing tasks can be classified into two types: point operations and local operations. In point operations, the DN value is modified at a particular pixel, irrespective of the surrounding DN values. In contrast to this, local operations modify the pixel values taking into consideration the adjoining DN values as well. Examples of these types of operations are presented at several places within this volume.

13.2 Radiometric Image Correction

The purpose of radiometric image correction, also sometimes called image restoration, is to rectify the recorded image data for various radiometric distortions. Atmospheric correction, discussed in Chap. 10, can also be treated as radiometric image correction. Further, correction for solar illumination variation (Sect. 11.2.1) and that for topographic effects (Sect. 11.5) can also be considered as parts of digital image processing for radiometric image correction.

13.2.1 Sensor Calibration

Calibration of a remote sensor is necessary to generate absolute data on physical properties such as reflectance, temperature, emissivity etc. This type of data on physical properties is needed, e.g. while dealing with physical models where the physical data may be required as input into the models. Aspects of sensor calibration and absolute data computation from satellite sensor data have been dealt with by a few workers (e.g. Hill 1991; Thome et al. 1993; Chander et al. 2009).

The radiometric response of a remote sensor is subject to drift with time, as the sensor may degrade/decay or its detector characteristics may change over a period of time. The task of sensor calibration is handled by the remote sensing agency operating the system, and calibration data are provided by the agency from time to time.

13.2.2 De-Striping

Striping is a common feature seen on images, and arises due to non-identical detector response. When a series of detector elements is used for imaging a scene, the radiometric response of all the detector elements may not be identical. The dissimilarity in response may occur due to non-identical detector characteristics, disturbance with time or rise in temperature, or even failure of an element in an array. This leads to the appearance of stripes, called striping. Defective resampling algorithms may also lead to striping. The phenomenon is quite commonly observed, e.g. in images from Landsat MSS, TM, IRS-LISS, SPOT-HRV etc.

The various Earth resources satellites are placed in near-polar orbits. The OM line scanners scan the ground in an across-track direction; therefore striping in such products appears in an

E–W direction or 'horizontal' on the image (Fig. 13.5a). On the other hand, the CCD-pushbroom scanners scan the ground in an along-track direction; therefore striping in their case appears oriented N–S or 'vertical' on the image (Fig. 13.5b).

Removal of stripes, called de-striping, is sometimes carried out during pre-processing. However, even the pre-processed data, as available, are sometimes striped and users have to rectify the data themselves. Further, the data must be destriped before any image enhancement is carried out, otherwise striping is likely to get enhanced, leading to erroneous results. For this reason, de-striping is discussed in more detail here. Several procedures, often in combination, are used for de-striping.

1. Method of look-up tables (LUTs). The radiometric responses of all the detector elements at different brightness levels are recorded. From this data set, relationships between DN value and intensity of radiation for each detector element are generated, providing LUTs that are used for radiometric normalization.
2. On-board calibration method. A major limitation of the LUT method is that relative changes occurring in the response characteristics of the detector elements after launch with time, or due to temperature fluctuations etc., are likely to remain unrectified. Therefore, in long-duration space missions, on-board calibration is provided. The stability of the on-board calibration lamps is monitored from the ground. Data from different detector units can be brought to a common reference level (bias) and scale (gain).
3. Histogram matching. Some striping is commonly found, even after implementing a look-up table or on-board calibration method. This may be removed by statistical histogram matching. Separate histograms, one corresponding to each detector unit, are constructed, and they are matched with each other. Taking one response as the standard, the gain and bias for all other detector units are suitably adjusted and new DN values computed and assigned. This yields a destriped image where all DN values conform to a common reference level and scale (a simple sketch of this approach is given after Fig. 13.5).
4. Interpolation. Failure of a detector unit in an array of detectors may also lead to missing lines and striping. This can be rectified cosmetically in two ways: (a) by interpolating using data of adjacent scan lines in the same spectral band, or (b) by interpolating data of the same scan line in adjacent spectral bands, using the concept of spectral slopes.
5. Fourier transform. In a CCD line scanner image there are several thousand columns, each corresponding to a detector element. De-striping and noise removal of such image data are better accomplished using the Fourier transform technique. Figure 13.5c, d show an example of the same image before and after de-striping by Fourier filtering.

Fig. 13.5 a Horizontal scan lines and stripes observed in an opto-mechanical line scanner image (Landsat MSS); b vertical scan lines and stripes in a pushbroom scanner image (IRS-1C LISS-III); c and d are the respective images after de-striping by Fourier filtering
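A minimal sketch of the histogram-matching idea behind method 3 above is given below, assuming a pushbroom image in which each column corresponds to one detector element; matching only the mean and standard deviation of each column to scene-wide reference values is a simplification of full histogram matching.

```python
import numpy as np

def destripe_by_moment_matching(image):
    """Adjust gain and bias of each detector (column) so that every column
    has the scene-wide mean and standard deviation (simplified histogram matching)."""
    img = image.astype(float)
    ref_mean, ref_std = img.mean(), img.std()
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    col_std[col_std == 0] = 1.0          # guard against constant columns
    gain = ref_std / col_std             # multiplicative rescaling per detector
    bias = ref_mean - gain * col_mean    # additive rescaling per detector
    return gain * img + bias
```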

13.2.3 Correction for Periodic and Spike Noise

Periodic noise occurs at regular intervals and may arise due to interference from adjoining instruments. It is removed by various signal-filtering techniques. Spike noise may arise owing to bit errors during transmission of data or due to a temporary disturbance. It is detected by mutually comparing neighbouring pixel values. If adjoining pixel values differ by more than a certain threshold margin, this is designated as spike noise and the pixel value is replaced by an interpolated DN value. Occasionally, a complete scan line may have to be rectified in this manner (see interpolation above).

Some of the above operations serve to make the image look cleaner and better, and may also be considered cosmetic operations.

13.3 Geometric Corrections

Geometric distortions are broadly divided into two groups: systematic and non-systematic (see Table 7.1). We discuss the correction for systematic distortions in this section. The non-systematic distortions are removed by the general technique of 'rubber-sheet stretching', as discussed under registration (Sect. 13.4).

13.3.1 Correction for Panoramic Distortion

Panoramic distortion, arising due to non-verticality of the optical axis, results in squeezing at the image margins. A correction is necessary such that the horizontal distance is given by X = H tan θ, where H is the flying height/altitude, X is the horizontal distance and θ is the angle of rotation of the optical axis (see Fig. 7.2).

This correction is especially necessary in aerial scanner data, as θ is typically about 50° (total angular field-of-view 100°). The correction is sometimes applied during pre-processing; the angle of rotation (θ) is related to time, and therefore X can be related to a time-base to produce a geometrically rectified image. In space-borne missions the angle θ is very small (e.g. in Landsat TM, θ = 7.3°) due to the very high altitudes involved, and the error can often be ignored.

13.3.2 Correction for Skewing Due to Earth's Rotation

This type of distortion is caused by the rotation of the Earth underneath, relative to the line-imaging sensor as it scans the ground from above (see Fig. 7.12). In aerial scanner data, the effect of this type of skew is negligible. In satellite data, the skew is maximum for sensors orbiting in polar orbits. In a generalized way, the image skew is given by

$$\tan\varphi = \frac{2\pi R\cos\theta}{24\times 60\times 60}\cdot\sin\alpha\cdot\frac{1}{V} \qquad (13.1)$$

where φ = image skew, θ = scene centre latitude, R = radius of the Earth (= 6367.5 km), V = velocity of the ground track of the satellite (in km/s) and α = inclination of the orbit plane from the equator. Once the amount of skew is computed, the scan lines are physically shifted at regular intervals, so that terrain features appear in proper geometric positions in relation to each other. Often, this type of rectification is made during pre-processing.

13.3.3 Correction for Aspect Ratio Distortion

When the linear scales along the two rectangular arms of an image are not equal, it leads to aspect ratio distortion. This can arise for many reasons, e.g. over-sampling/under-sampling, or variations in the V/H ratio of the sensor-craft. Aspect ratio distortion occurring due to the sampling pattern can be of systematic type (e.g. in Landsat MSS, due to over-sampling, the scale along the x-axis was longer than along the y-axis by a factor of 1.38, see Fig. 13.6). The aspect ratio distortion due to V/H variation in the spacecraft is of an unsystematic type and is rectified by the general technique of 'rubber-sheet stretching'.
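Equation (13.1) is straightforward to evaluate; the sketch below computes the skew angle for an illustrative near-polar orbit (the ground-track velocity and inclination used are assumptions, not values for any specific mission).

```python
import math

def image_skew_deg(latitude_deg, inclination_deg, ground_velocity_km_s, R_km=6367.5):
    """Image skew angle (Eq. 13.1) in degrees.

    latitude_deg         : scene centre latitude (theta)
    inclination_deg      : orbit inclination from the equator (alpha)
    ground_velocity_km_s : velocity of the satellite ground track (V)
    """
    earth_surface_speed = 2 * math.pi * R_km * math.cos(math.radians(latitude_deg)) / (24 * 60 * 60)
    tan_phi = earth_surface_speed * math.sin(math.radians(inclination_deg)) / ground_velocity_km_s
    return math.degrees(math.atan(tan_phi))

# e.g. an equatorial scene, ~98 deg inclination, ~6.5 km/s ground-track velocity
print(image_skew_deg(0.0, 98.0, 6.5))   # about 4 degrees
```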

Fig. 13.6 a Landsat MSS image with aspect ratio and skew distortions. b The corresponding rectified image

13.4 Registration

13.4.1 Definition and Importance

Registration is the process of superimposing images, maps or data sets over one another with geometric precision or congruence, i.e. the data derived from the same ground element in different sensor coverages are exactly superimposed over each other (Fig. 13.7). The need for image registration is obvious. In many situations, we have multi-sensor, multi-temporal, multi-platform or even multi-disciplinary data. For integrated study, e.g. for general pattern recognition or change detection, the various image data sets must be registered over each other.

Fig. 13.7 Concept of image registration; the image data at each unit cell are exactly in superposition and geometric congruence

Registration of an image is carried out to a certain base. If the base is an image, it is called relative registration, and if the base is a certain standard cartographic projection, it is called absolute registration. The basic technique in both relative and absolute registration is the same. Absolute or map registration is widely used in GIS applications (Chap. 18). Here we confine the discussion to relative registration.

Multispectral images taken from the same sensor and platform position are easy to register. The problem arises when the images to be registered are from different sensors, platforms, altitudes or look directions. In such cases, distortions, variations in scale, geometry, parallax, shadow, platform instability etc. lead to mismatch if the various images are simply laid over one another. Therefore, there is a need for a method of digital image registration whereby multiple images can be superimposed over each other with geometric precision.

13.4.2 Principle

Digital image registration basically uses the technique of co-ordinate transformation. A set of ground control points (GCPs) in the two images is identified and their co-ordinates define the transformation parameters. Typically, a set of polynomial or projection equations is used to link the two co-ordinate systems. An example of a simple affine projection is as follows:

$$X' = a_0 + a_1x + a_2y + a_3xy \qquad (13.2)$$

$$Y' = b_0 + b_1x + b_2y + b_3xy \qquad (13.3)$$

where X′ and Y′ are the co-ordinates in the new system and x, y those in the old system. There occur eight unknown constants (a0, a1, a2, a3, b0, b1, b2, b3), which can be computed by using four control points (as each control point gives two equations, one for X′ and one for Y′). However, only four control points may not be sufficient for a large image. Usually, a net of quadrilaterals is drawn, employing several control points over the entire scene, and a set of transformation equations for each quadrilateral is computed.
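With more than four GCPs, the eight constants of Eqs. (13.2)–(13.3) are usually obtained by least squares. A small NumPy sketch, assuming the GCP coordinates are supplied as arrays, is given below.

```python
import numpy as np

def fit_transformation(x, y, X_new, Y_new):
    """Least-squares estimate of the coefficients of Eqs. (13.2)-(13.3).

    x, y          : GCP coordinates in the old (slave) system
    X_new, Y_new  : corresponding coordinates in the new (master) system
    Returns (a, b), each an array [c0, c1, c2, c3]."""
    A = np.column_stack([np.ones_like(x), x, y, x * y])
    a, *_ = np.linalg.lstsq(A, X_new, rcond=None)
    b, *_ = np.linalg.lstsq(A, Y_new, rcond=None)
    return a, b

def apply_transformation(a, b, x, y):
    """Transform old-system coordinates into the new system."""
    return (a[0] + a[1] * x + a[2] * y + a[3] * x * y,
            b[0] + b[1] * x + b[2] * y + b[3] * x * y)
```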

These transformation equations are used for transposing pixels lying within the quadrilateral. This is also popularly called 'rubber-sheet stretching'.

The type of simple coordinate transformation outlined above may often meet the requirements for Landsat-type satellite data. Higher-order corrections would give better results, but require more elaborate transformations and computer time.

13.4.3 Procedure

The basic concept in image registration is shown in Fig. 13.8. The various steps involved are as follows.

Fig. 13.8 In registration, the basic task is to create a new image that has the geometry of the master image but carries the field (or parameter) of the slave image

1. Selection of base. When only remote sensing images are to be mutually registered, it is often convenient to use any one of the images as the base or master image, and perform relative registration. Usually, the image with higher spatial resolution and better geometric fidelity is selected as the base. (On the other hand, when data from multi-disciplinary investigations, e.g. remote sensing, geophysical, geochemical, geological, topographic etc., are being collectively used, it is often better to use a standard cartographic projection as the base, see Sect. 18.4.) For further discussion, let us call the base image the master image, and the image to be registered the slave image. The basic task is to create a new image which has the geometry of the master image and the field (or parameter) of the slave image.
2. Selection of control points. A number of control points or landmarks are identified which can be uniquely located on the two images. These points are also called ground control points (GCPs). Workers in digital imaging also refer to them as templates. The control points should be well distributed, stable, unique and prominent. Some of the features commonly used as control points are, for example, road intersections, river bends, canal turns or other similarly prominent and stable features.
3. Matching of control points or templates. The next step is to match the selected control points on the two images. This is most commonly done visually, i.e. by computer-human interaction: the digital images are projected on the screen and the control points are visually matched. Further, the process of matching control points is basically iterative, and some other match points can always be used to check the earlier matching.

Another possibility is to match the templates or tie points digitally (only by computer); this is termed template matching. Template matching is a very critical operation, as it controls the accuracy of the entire registration operation. Commonly, once a template in one of the digital images has been defined, its approximate position in the other digital image can be estimated, which gives the search area. The most commonly used statistical measure for template matching is the correlation coefficient. The template is moved through the search area and at each position the correlation coefficient between the arrays of DNs in the two images is computed. The position giving the highest correlation coefficient is taken as the match position. When the template is being moved through the search area, only translational errors in the two digital images are taken care of. Differences in orientation due to rotation, skew, scale etc. hamper template matching. Other factors affecting template matching are real changes in the scene, or changes in solar illumination and shadows etc. Therefore, if two digital images differ widely, digital template matching may be a futile exercise.

coefficient is taken as the match position. When the template


is being moved through the search area, only translational
errors in the two digital images are taken care of. Differences
in orientation due to rotation, skew, scale etc. hamper tem-
plate matching. Other factors affecting template matching are
real changes in scene, or changes in solar illumination and
shadows etc. Therefore, if two digital images differ widely,
the digital template matching may be a futile exercise.

4. Computing the projection equation. Once the matching of control points/templates has been satisfactorily done, the next step is to calculate parameters for projections (polynomial or affine) from the coordinate data sets.
5. Interpolation and filling of data. Registration is accomplished by selecting a cell in the grid of the base image, finding its corresponding location in the slave image with the help of the affine projections, and then computing the DN value at that point in the slave image by interpolation and resampling methods. This gives the DN value distribution of the slave image field, in the geometry of the master image. The process therefore involves finding new DN values at grid points in the geometry of the base image (and not transferring old DN values to new locations!). If the entire image is registered, pixel by pixel, in the above manner, it takes a prohibitive amount of computer time. Therefore, in practice, a few selected points are registered in the above manner and the space in between is filled by re-sampling/interpolation.

Fig. 13.10 Schematic representation of procedures of data resampling and interpolation. Open circles = input image matrix (locations of known DNs); filled circles = output image matrix (locations where DN values are desired). a Nearest-neighbour approach; b bilinear interpolation (after Moik 1980)

The commonly used methods of data interpolation and resampling are as follows (Figs. 13.9 and 13.10).
Nearest-neighbour method. Here the point in the new grid simply acquires the DN value of that point in the older grid which lies closest to it (Fig. 13.10a). The DN value is not interpolated, but some of the old pixels are bodily shifted, leaving some pixels out. This technique is fast and quite commonly used. However, owing to the sudden shifting of pixels, it may lead to the appearance of breaks or artificial lines on the images, as the local geometry may become displaced by half a pixel.
Bilinear interpolation. This is a two-dimensional extension of linear interpolation. It uses the surrounding four pixel values from the older (slave) grid. The procedure is first to interpolate along one direction, to obtain two intermediary DN values, and then to interpolate in the second direction to obtain the final DN value (Fig. 13.10b). It takes more computer time, but produces a smoother picture with better geometric accuracy.
Bicubic interpolation. This uses a polynomial surface to fit the DN distribution locally, and from this fitted surface DN values at the new grid points are computed. It gives a better visual appearance, but is quite expensive in terms of computer time. The image is also quite smoothed, and local phenomena such as high spatial frequency variations may get subdued.
In addition, kriging could be another interpolation method for resampling during registration.
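As an illustration of the interpolation step, the short sketch below (Python/NumPy; the function name and the small test array are only illustrative) computes the DN at a fractional location in the slave grid by the nearest-neighbour and the bilinear methods described above.

import numpy as np

def resample(image, row, col, method="bilinear"):
    """Return the DN at fractional (row, col) of the slave image by nearest-neighbour
    or bilinear interpolation, as used when filling the registered grid."""
    if method == "nearest":
        return image[int(round(row)), int(round(col))]
    r0, c0 = int(np.floor(row)), int(np.floor(col))
    dr, dc = row - r0, col - c0
    # interpolate first along one direction, then along the other
    top = (1 - dc) * image[r0, c0] + dc * image[r0, c0 + 1]
    bottom = (1 - dc) * image[r0 + 1, c0] + dc * image[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bottom

dn = np.array([[10., 20.], [30., 40.]])
print(resample(dn, 0.25, 0.75))             # bilinear value -> 22.5
print(resample(dn, 0.25, 0.75, "nearest"))  # nearest-neighbour value -> 20.0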
An example of image-to-image registration is given in
Fig. 12.9. It is obvious that low-resolution and low-contrast
images are relatively more difficult to register than sharp and
high-contrast images. The accuracy of registration may thus
depend upon the sharpness of features, overall image con-
trast and spatial resolution of component images.

Fig. 13.9 Data resampling methods for registration (after Lillesand and Kiefer 1987). The value at z (the required pixel location) is given as that at p by the nearest-neighbour approach. In the bilinear interpolation approach, the value at z is computed by interpolating between the p and q cells. In the cubic convolution approach, interpolation is made considering the values at p, q and r

13.5 Image Enhancement

The purpose of enhancement is to render the images more interpretable, i.e. features should become better discernible. This at times occurs at the expense of some other features

which may become relatively subdued. It must be empha-


sized that digital images should be corrected for radiometric
distortions prior to image enhancement, otherwise the
distortions/noise may also get enhanced.
As a preparatory first step for image enhancement, the
statistical data distribution is examined. A histogram describes
data distribution in a single image and a scatterogram provides
an idea of relative data distribution in two or more images.
Techniques developed in the field of classical signal
processing over many years (e.g. Pratt 2007; Rosenfeld and
Kak 1982) are advantageously adopted and applied to remote sensing images. In this section, we discuss the contrast-enhancement techniques.
Contrast enhancement deals with rescaling of gray levels so that features of interest are better shown on the image. In general, it so happens that the number of actually recorded intensity levels in a scene is rather low and the full dynamic range of the digital image (e.g. 256 levels in 8-bit data) is not utilized. As a result, the image typically has a low contrast (Fig. 13.11a). This characteristic is indicated by the corresponding histogram, in which the DN frequency is found to be crowded in the lower DN range, with hardly any pixel occurrence in the higher DN range (Fig. 13.12). Details on such an image are scarcely visible. A rescaling of gray levels would improve the image contrast and allow better visibility of features of interest. Contrast enhancement is a typical point operation, in which the adjacent pixel values have no role to play.

Fig. 13.12 Histogram of the raw image in Fig. 13.11a; note the crowding of DNs in the lower range, which is responsible for the low contrast in Fig. 13.11a
To perform the contrast-stretching operation, the first step
is to plot a histogram of the DN values in the image. This
itself may convey valuable information about the scene.
A sharp peak would indicate no contrast in the scene, and a
broad distribution would imply objects of high contrast.
A multimodal distribution would clearly indicate objects of
more than one type. This information could also be utilized
for deciding on the type of contrast manipulation. As a
common practice, gray levels with a frequency of less than
2% of the population, on either flank of the histogram, are
truncated and only the intervening DN-range is stretched.
This allows greater contrast in the entire image, although
some of the gray levels on either extreme become squeezed
or saturated. In any particular case, however, the actual
cut-off limit should be based on the requirements.
Some of the common methods for contrast manipulation
are as follows.

1. Linear contrast stretching. This is the most frequently


applied transform in which the old gray range is linearly
expanded to occupy the full range of gray levels in the
new scale (Fig. 13.13a). Figure 13.11b gives an example
of a linearly stretched image, the raw image being shown
in Fig. 13.11a.
2. Multiple linear stretch (piece-wise linear stretch). Dif-
ferent segments of the old gray range can also be stret-
ched differently, each segment in itself being a linear
stretch (Fig. 13.13b). This is a useful transform when
some selected stretches of DN-ranges are to be enhanced
and other intervening ranges are to be squeezed.
3. Logarithmic, power or functional stretch. The image data can be stretched using a logarithmic function, power function or any other function (Fig. 13.13c). The logarithmic stretch is useful for enhancing features lying in the darker parts of the original image, the result being an overall relatively brighter image. In contrast, the exponential function stretches preferentially features in the brighter parts of the original gray range, rendering the image generally darker.
4. Gaussian stretching. In this scheme of gray-scale manipulation, the new gray scale is computed by fitting the original histogram into a normal distribution curve. It renders greater contrast preferentially in the tails of the old histogram, which means that the cut-off limits in the old histogram become highly critical.
5. Histogram equalization stretching. This is also called ramp stretching, cumulative distribution function stretching or uniform distribution stretching. The new image has a uniform density of pixels along the DN-axis, i.e. each DN value becomes equally frequent (Fig. 13.13d). As pixels in the middle range happen to be most frequent, the middle range is substantially expanded and made to occupy a larger range of DN values in the new gray scale. The overall contrast in the image is thus significantly increased (Fig. 13.11c). This is a good transform when an individual image is to be displayed in black-and-white, but is not particularly suited for making colour composite displays.
6. Density slicing is also a type of gray-scale modification. The old gray scale is subdivided into a number of ranges, and each range is assigned one particular level in the new gray scale (Fig. 13.13e). The steps can be of equal or unequal width. This is analogous to drawing a contour map for the parameter DN, and the contour interval could be equal or unequal. A density-sliced image is often colour coded for better feature discrimination (see pseudocolour enhancement, Sect. 13.8.2).

Fig. 13.11 Contrast stretching. a Raw image; b linear contrast-stretched image; c image with histogram equalization stretch; note the higher contrast in (c) than in (b). IRS-1C-LISS-III near-IR band image covering a part of the Gangetic basin

Fig. 13.13 Transforms for contrast stretching. a Linear contrast stretch. b Multiple linear stretch. c Logarithmic stretch. d Histogram equalization stretch. e Density slicing

Thus, there are numerous possibilities to manipulate the image contrast to suit the needs of an investigation. Further, any type of simple or processed image can be subjected to contrast enhancement.
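The two most frequently used of these stretches can be written very compactly. The sketch below (Python/NumPy; the input array band, the 2% truncation limits and the random test data are assumptions for illustration) applies a truncated linear stretch and a histogram-equalization stretch to an 8-bit band.

import numpy as np

def linear_stretch(band, lower_pct=2, upper_pct=98):
    """Truncate the histogram tails and linearly rescale the remaining DN range to 0-255."""
    lo, hi = np.percentile(band, (lower_pct, upper_pct))
    out = (band.astype(float) - lo) / (hi - lo)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

def histogram_equalize(band):
    """Map DNs through the cumulative distribution function so that all gray
    levels become (approximately) equally frequent."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    return (cdf[band] * 255).astype(np.uint8)

rng = np.random.default_rng(1)
band = rng.integers(20, 90, (100, 100)).astype(np.uint8)   # low-contrast raw band
print(band.min(), band.max(), linear_stretch(band).max(), histogram_equalize(band).max())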
13.6 Image Filtering

Image filtering is carried out to extract spatial-scale information from an image. As it is a technique to enhance spatial information in the image, it can also be grouped under 'enhancement'. All filtering operations are typically local

operations in which the DN values at the neighbouring pixels also play a role.
Image filtering operations are of two basic types: (1) spatial domain filtering, and (2) frequency domain (Fourier) filtering. Spatial domain filtering is carried out using windows, boxes or kernels (see below) and has dominated the remote sensing image processing scenario until now, for two main reasons: (a) it is simple and easy to implement and requires lower computational capabilities, and (b) it could meet the requirements of data processing for the Landsat-type data that have been used world-wide. Spatial domain filtering will be discussed first and Fourier filtering later.
If we plot a profile of DN values from one end of an image to another, we find that the profile consists of a complex combination of sine waveforms. It can be broadly split into two: (1) high-frequency variations and (2) low-frequency variations (Fig. 13.14). Here, frequency connotes the rate of variation in the DNs. High-frequency variations correspond to local changes, i.e. from pixel to pixel, and low-frequency variations imply regional changes, from one part of the image to another. Correspondingly, there are two types of filtering techniques: high-pass filtering and low-pass filtering, to enhance one type of information over the other. Both of these types of filters (high-pass and low-pass) can be implemented through spatial-domain as well as frequency-domain filtering.
Spatial-domain filtering is carried out by using kernels, also called boxes or filter weight matrices. A kernel consists of an array of coefficients. Figure 13.15 shows some often-used kernels. In order to compute the new DN, it is imagined that the kernel is superimposed over the old image-data array. The original DN values are weighted by the overlying coefficients in the kernel and the resulting total DN value is ascribed to the central pixel in the new image. The kernel is successively moved over all the pixels, in rows and columns, and the array of new DNs is computed.
The most common kernel size is 3 × 3, with 5 × 5 or 7 × 7 being relatively less common. Kernel size can also be arbitrarily chosen, as per requirements in a certain case. There are no predefined kernels that would provide the best results in all cases. At times, the selection is based on trial and error. Odd-numbered kernel sizes are generally used so that the central pixel is evenly weighted on either side. Anisotropic kernels (e.g. 3 × 5) produce directional effects, and therefore can be used to enhance linear features in a certain preferred direction.
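The moving-kernel operation amounts to a weighted sum computed at every pixel. A minimal sketch is given below (Python/NumPy; the two example kernels are generic illustrations and are not reproductions of Fig. 13.15, and the test image is synthetic).

import numpy as np

def apply_kernel(image, kernel):
    """Move a small kernel over the image; at each position the underlying DNs are
    weighted by the kernel coefficients and summed to give the new central-pixel DN."""
    kh, kw = kernel.shape
    pr, pc = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((pr, pr), (pc, pc)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

smoothing = np.ones((3, 3)) / 9.0                                          # low-pass (mean) kernel
edge = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)        # a high-pass kernel

img = np.arange(36, dtype=float).reshape(6, 6)
print(apply_kernel(img, smoothing).shape, apply_kernel(img, edge).shape)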

13.6.1 High-Pass Filtering (Edge Enhancement)

In a remote sensing image, the information vital for dis-


criminating adjacent objects from one another is contained in
the edges, which correspond to the high-frequency variations. The edges are influenced by terrain properties, vegetation, illumination condition etc. Thus, edge enhancement is basically a sharpening process whereby borders of objects are enhanced.

Fig. 13.14 High-frequency and low-frequency components in an image. a A typical profile of DN values consists of a complex combination of sine waveforms, which can be split into b high-frequency and c low-frequency components

Fig. 13.15 Some typical kernels (3 × 3 matrices) for filtering. a X-edge gradient. b Y-edge gradient. c Isotropic Laplacian. d 'Image with isotropic Laplacian'. e Diagonal-edge gradient. f Image smoothing. g Image-smoothing 'mean image'
There are several ways of designing filter-weight matrices
and enhancing edges in digital images (Davis 1975; Shaw
1979; Peli and Malah 1982; Jensen 2005). Some of the more
important methods are as follows.

1. Gradient image. The gradient means the first derivative


or the first difference. A gradient image is obtained by
finding the change in the DN value at successive pixels,
in a particular direction. The procedure is to subtract the
DN value at one pixel from the DN value at the next
pixel. Several variations exist. Enhancement parallel to
X, called X-edge enhancement, can be obtained by pro-
ceeding along the Y-axis, i.e. taking differences of suc-
cessive pixels along the Y-axis (DNnew = df/dy).
Similarly, Y-edge enhancement is obtained by proceed-
ing along the X-axis, (i.e. DNnew = df/dx). Fig-
ure 13.15a, b are the kernels used for enhancing edges by
the gradient method in the X and Y directions respec-
tively. In general, the first difference in the direction
parallel to scanning has a smaller variation than that in
the direction perpendicular to it, owing to striping.

The gradient image can be added back to the original


image to provide a gradient-edge-enhanced image. The
gradient method is often used in detecting boundaries, called line detection, as discussed later in image segmentation (Sect. 13.11), but not so frequently for edge enhancement, for which the Laplacian (given below) is generally considered to be better suited.

2. Laplacian image. The Laplacian enhancement is given by the second derivative, i.e. the rate of change of the gradient. Basically, it involves computing differences with respect to the two neighbouring pixels on either side, in a row or column. The isotropic Laplacian (kernel in Fig. 13.15c) can be written as

$\mathrm{DN_L} = \dfrac{d^2 f}{dx^2} + \dfrac{d^2 f}{dy^2}$   (13.4)

DN_L has a high value wherever the rate of change of DN in the old image is high. Figure 13.16a, b give examples of an image and its Laplacian edge image. Invariably, an edge image is more noisy; therefore, it is added to the original image to give an edge-enhanced image (Fig. 13.16c). For this, a useful kernel is that shown in Fig. 13.15d.
3. Low-frequency image subtraction. This is another particularly powerful method of edge enhancement. It is obvious from Fig. 13.14 that high-frequency information in an image can be obtained by subtracting a low-frequency image from the original image.
4. Diagonal edge image. Diagonal edges can be enhanced by computing differences across diagonal pixels in an image (see e.g. kernel in Fig. 13.15e). The image can be generated for left-look (SE looking) or right-look (SW looking) directions (Fig. 13.17a, b). In this way, boundaries running NW–SE or NE–SW can be enhanced.

Fig. 13.16 Edge enhancement. a Original image; b isotropic Laplacian image; c edge-enhanced image (produced by the 'image with isotropic Laplacian' filter, Fig. 13.15d). The image is an IRS-LISS-II NIR band image covering a part of Rajasthan, India. The nearly NE–SW trending rocks occurring on the east are the Vindhyan Super Group, which are separated from the folded rocks of the Aravalli Super Group, occurring on the west, by the Great Boundary Fault of Rajasthan

Some basic concepts in edge enhancement (i.e. image sharpening) have been outlined above. The subject matter encompasses a vast field where numerous other non-linear filters (such as Sobel, Robert's, Kirsch etc.) have been

formulated and applied with varying success (for more details, refer to e.g. Russ 2011). Specific mention may also be made here of a subtractive box filter formulated by Thomas et al. (1981) for enhancing circular features on the Landsat MSS data.

Fig. 13.17 Examples of directional edge enhancement. a Diagonal edges with look direction towards SE. b The same with look direction towards SW

Scope for Geological Applications

Edge enhancement brings out variations in DN values across neighbouring pixels, or high-frequency spatial changes; it is also called high-pass filtering or textural enhancement. The main applications of high-pass filtering in geological investigations are as follows.

(a) To obtain a sharper image showing more details for better interpretation in terms of local topography/landforms, lithology, vegetation, soil, moisture etc.
(b) To enhance linear systems (edges), to facilitate interpretation of fractures, joints etc.; linear edge enhancement in specific directions can also be obtained by suitable anisotropic filters (although interpretation of such data warrants extra care).
(c) To reduce the effects of gross differences in illumination; on an edge-enhanced image, local variations become important so that details within a uniformly illuminated zone are better deciphered.

A limitation of the high-pass-filtered image is that only local variations with respect to the adjoining pixels become important and absolute DN values may have little significance; e.g. uniform large stretches of dark-toned and light-toned objects could show up in similar tones. The common practice is, therefore, to add/subtract the Laplacian to/from the original DN values, so that both the original DN values and the local variations are displayed together.

13.6.2 Image Smoothing

The main aim of image smoothing is to enhance low-frequency spatial information. In effect, it is just the reverse of edge enhancement. Typical kernels used for image smoothing are shown in Fig. 13.15f, g. Examples of image smoothing are given in Fig. 13.18a, b. As image smoothing suppresses local variations, it is particularly useful when the aim is to study regional distribution over larger geological domains.

Fig. 13.18 Image smoothing. a The result from a 3 × 3 kernel is less smooth than b, that corresponding to a 5 × 5 kernel

13.6.3 Fourier Filtering

The mathematical technique of Fourier analysis, which uses the frequency domain, is also applicable to remote sensing data. Fourier analysis operates on one image at a time. Figure 13.19 presents the various steps involved in Fourier filtering of an image.
Fourier analysis separates an image into its component spatial frequencies. Referring back to Fig. 13.14, the DN variation plotted along each row and column provides a complex curve with numerous 'peaks' and 'valleys'. The Fourier analysis splits this complex curve into a series of waveforms of various frequencies, amplitudes and phases. From the data on component frequencies, a Fourier spectrum can be generated, which is a two-dimensional scatter

plot of the frequencies. Further, if the Fourier spectrum of an image is known, it is possible to regenerate the image through the inverse Fourier transform.

Fig. 13.19 Basic procedure in the frequency-domain filtering of a digital image

Various frequencies corresponding to the original image can be identified in the Fourier spectrum. For example, low frequencies appear at the centre and successively higher frequencies are plotted outwards in the Fourier spectrum (Fig. 13.20).

Fig. 13.20 Location of low-, mid- and high-frequency components in the amplitude spectrum

The Fourier spectrum can be processed through filtering in the frequency domain. Practically, this involves designing a suitable filter (matrix) and multiplying the earlier (spectrum) matrix, element by element, by the new (filter) matrix. By applying the inverse Fourier transform, the filtered Fourier spectrum is used to generate the filtered image. Figure 13.21 gives an example where a low-frequency blocking filter has been used to derive an edge-enhanced image.

Fig. 13.21 Fourier filtering. a Amplitude spectrum with low-frequency blocking filter; b edge-enhanced image from inverse transform of (a) (a, b courtesy Reet K. Tiwari)

As mentioned earlier, the use of Fourier analysis in remote sensing multispectral data processing by resources scientists has been rather limited, because of considerations of computer sophistication and computer time etc. (and therefore the technique of spatial-domain filtering has dominated the image processing scenario till now). The Fourier transform provides higher flexibility in data processing; for example, an image can be filtered by blocking the low-frequency or the high-frequency components. Further, modern remote sensors carry several thousand detector elements (such as in CCD line arrays and area arrays). For this type of data, it is extremely difficult to carry out noise removal (e.g. de-striping) with methods such as histogram matching etc., developed for OM line scanners. The Fourier transform is a powerful means to rectify this type of image data (see Figs. 13.5 and 14.9).
With the availability of superior computing facilities (hardware/software), and also the growing need to handle large data volumes from CCD line and area arrays, it is expected that the Fourier transform will find greater applications in remote sensing digital image processing.
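The frequency-domain procedure of Fig. 13.19 can be sketched with a fast Fourier transform. In the illustration below (Python/NumPy; the cut-off radius, the synthetic image and the function name are arbitrary choices for the example), the low frequencies are blocked and the inverse transform returns an edge-enhanced image.

import numpy as np

def fourier_high_pass(image, cutoff=10):
    """Block the low-frequency centre of the shifted Fourier spectrum and
    inverse-transform to obtain an edge-enhanced (high-pass) image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    r, c = np.ogrid[:rows, :cols]
    dist = np.sqrt((r - rows / 2) ** 2 + (c - cols / 2) ** 2)
    spectrum[dist < cutoff] = 0                      # low-frequency blocking filter
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum))
    return np.real(filtered)

rng = np.random.default_rng(2)
image = rng.integers(0, 256, (128, 128)).astype(float)
edges = fourier_high_pass(image, cutoff=8)
print(edges.shape, round(edges.mean(), 2))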

13.7 Image Transformation

The enhancement techniques discussed so far basically operate on single images. They are derived from the field of classical signal processing. A peculiar aspect of remote sensing is that it provides data in multispectral bands, which can be collectively processed to deduce information not readily seen on a single image. Moreover, images from different sensors, different platforms, or acquired at different times can also be superimposed over each other with geometric congruence. Image transformation deals with processing of multiple-band images to generate a new computed (transform) image.
In order to conceptually understand what is happening when the multispectral images are combined, it is necessary to become conversant with the concept of feature space and related terminology. Two sets of image data can be statistically represented through a two-dimensional plot called feature space—the feature axes here being the two spectral channels. The location of any pixel in this space is controlled by the DN values of the pixel in the two channels (features). This is also called a scatterogram or scatter-diagram. A purely visual inspection of such a diagram may itself be quite informative about the mutual relation of the two features. If the two features are highly correlated, a line approximates the scatterogram (Fig. 13.22a). Two non-correlated features produce a feature space plot in which points are scattered isotropically (Fig. 13.22b).
In general, the present Landsat TM, ETM+, OLI, SPOT-HRV and IRS-LISS spectral channels provide data which are well correlated. This fact is expressed by the pronounced elongated shape of the data points in a scatterogram (Fig. 13.22c). If some objects are spectrally unique, then they form clusters. The clusters can at times be separated from other data points. A spectral plot showing clusters of points is called a cluster diagram. The separability of clusters governs the discriminability of objects in the image. The meaning and utility of cluster diagrams is mentioned also in the section on classification. The concepts of a feature or cluster diagram can now be easily adapted to understand the various multiple-image enhancement techniques, using simplified conceptual projection diagrams.

13.7.1 Addition and Subtraction

The simplest method to combine multispectral digital images is by addition and subtraction. In a most generalized way, the linear combination can be written as:

$\mathrm{DN_{new}} = p \cdot \mathrm{DN_A} \pm q \cdot \mathrm{DN_B} + K$   (13.5)

where p and q are constants suitably selected to give weights to the A and B input images, and K is a constant which helps in re-scaling the gray-scale.

Fig. 13.22 Concept of scatterograms. a Extremely correlated feature axes. b Non-correlated feature axes. c A typical scatterogram using two Landsat TM/ETM+/OLI channels; the pronounced elongated shape of the scatterogram indicates that the two channels are well correlated

If we form a simple addition of two spectral bands (DNnew = DNA + DNB), the DN value distribution in the new image would conceptually be given by projection of points on the line passing through the origin and having a slope of 45° (Fig. 13.23a). The resulting image is characterized by a much larger dynamic range and higher contrast than any of the originals, due to the good correlation of the two input images. However, it may not necessarily imply better separability of clusters; on the other hand, there could be a situation in which the spectral differences become reduced between some categories on the addition image.
If we calculate the difference between the two bands (DNnew = DNA − DNB), the DN value distribution in the new image appears as if the data points were projected on the axis perpendicular to the simple addition axis (Fig. 13.23b). The image is characterized by a lower dynamic range (which can be taken care of by subsequent contrast manipulation). However, all those objects which do
not possess a correlation pattern in conformity with the
majority of data points show up strongly on such an image.
Therefore, the technique has value for change detection in a
multi-temporal image data set.
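A weighted difference of two co-registered bands or dates, as in Eq. (13.5), takes only a line or two of code. The sketch below (Python/NumPy; the weights p and q, the rescaling constant K and the synthetic two-date data are arbitrary assumptions) highlights pixels that depart from the background, as used in change detection.

import numpy as np

def difference_image(band_a, band_b, p=1.0, q=1.0, k=128):
    """DN_new = p*DN_A - q*DN_B + K, clipped back to the 8-bit range; 'offbeat'
    pixels and temporal changes stand out against the mid-gray background."""
    diff = p * band_a.astype(float) - q * band_b.astype(float) + k
    return np.clip(diff, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
date1 = rng.integers(60, 100, (50, 50))
date2 = date1.copy()
date2[20:30, 20:30] += 60          # a patch that changed between the two dates
change = difference_image(date2, date1)
print(round(float(change.mean()), 1), change[25, 25])   # background near 128, changed patch near 188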
By introducing weighting factors on features (input
images), a variety of linear combinations can be generated.
The resulting new images may appear quite different in
dynamic range and contrast distribution in differently
weighted addition/subtraction cases—this may even confuse
subsequent interpretations; therefore, for meaningful data
handling and manipulation it is essential to formulate
strategies beforehand. Careful stretching of spectral bands
combined with properly weighted subtraction/addition
(particularly the subtraction) may produce quite informa-
tive images. At times, the weighting factors may be derived
from statistical approaches such as principal components.

Scope for Geological Applications

Figure 13.24 presents examples of simple addition and


subtraction image processing. The following general points
can be made:

• An addition image generated from a set of correlated


images has a much larger dynamic range and higher
contrast than any of the input images. The objects which
have positive correlation in the two input bands may
become still better separable.
• A subtraction image made from a similar data set has a
smaller dynamic range and lower contrast. It is useful in
picking up data points/objects which do not spectrally conform to the background, or are 'offbeat'. The technique is also used for change detection from temporal-repetitive images.

Fig. 13.23 Transformation of data during addition and subtraction of images. a Simple addition of well-correlated multispectral images results in distribution of DN values as if the points were projected onto the line passing through the origin, having a slope of 45°. The new image would have a much larger dynamic range and therefore greater overall contrast; however, it does not necessarily mean better separability. For example, on the new image clusters A and D become more separated from each other, but clusters B and C would largely overlap. b Subtraction of well-correlated multispectral data would result in projection of points on the line perpendicular to the simple addition axis, passing through the origin and having a slope of 45°. The dynamic range of the new image is relatively small; the clusters A and D which were distinct on the addition image appear overlapping and are now inseparable on the subtraction image; however, clusters B and C become separable from each other on the new image. Note: the scale along the projection axis is nonlinear and only the relative position of points/clusters is relevant here

Fig. 13.24 a Landsat MSS5 and b MSS6 images of a part of Saudi Arabia used as input for making simple addition and subtraction images; c and d are the resulting addition and subtraction images respectively. Note the higher contrast in (c) in comparison to (d). Also note the manifestation of the spectrally 'off-beat' (egg-shaped) feature in (d) (a–d courtesy of R. Haydn)

13.7.2 Principal Component Transformation

Principal component transformation (PCT), also called principal component analysis or Karhunen–Loeve analysis, is a very powerful technique for the analysis of correlated multidimensional data (for statistical principles, refer to any standard text, e.g. Davis 1986). Its application in digital remote
sensing has been discussed by many workers (e.g. Byrne et al. 1980; Haralick and Fu 1983). The n-channel multispectral data can be considered as n-dimensional data. As often happens, many of the channels are correlated, either negatively or positively. The PCT builds up a new set of axes orthogonal to each other, i.e. non-correlated. Most of the variance in the input image data can be represented in terms of the new principal component (PC) axes, fewer in number.
To understand the PC transform conceptually, we again refer back to our concept of the scatterogram. It was stated earlier that nearly unlimited possibilities exist to combine the two spectral channels in the form of weighted linear combinations. The PC transform provides statistically the optimum weighting factors for linear combinations, so that the new feature axes are orthogonal to each other (Fig. 13.25a). Corresponding to each principal component axis (PCA), one image, called the PC axis image, can be generated, which can be contrast stretched or used as a component in other image combinations/enhancements etc.
The first PCA (PC1) has the largest spread of data, and when dealing with Landsat-type data it is generally found to be a weighted average of all the channels. As all the channels are represented in this image, it more or less becomes an intensity (or albedo, in the solar reflection region) picture with little spectral information (e.g. Fig. 13.26a). The other PCAs are generally differences of various channels (e.g. the second PCA is the difference between the visible and infrared channels, and so on). These can be considered as spectral or colour information (Fig. 13.26b).
A variation of PCT is canonical analysis. In this, new feature axes are computed in such a way that they enable maximum discrimination between some selected groups of objects, and not in the entire scene (Fig. 13.25b), i.e. the statistics are computed not from the entire scene but only from some selected object types (training areas); the transformation to be carried out is based on these statistics. Thus, this provides maximum discrimination between the selected groups of objects.

Scope for Geological Applications

The PCT increases overall separability and reduces dimensionality, and is therefore useful in classification. A limitation of PCT is that the statistics of a PC image are highly scene dependent (e.g. influenced by climatic season) and may not be extrapolated to other scenes. Further, geologic interpretation of PC images also requires great care, as the surface information dominates the variation.
As indicated earlier, commonly the first principal component (PC1) image is a general albedo (intensity) image,

whereas other PC images carry spectral information. The


eigen-vector matrix gives the information on how the vari-
ous input spectral bands are contributing to each PCA.
A simplified approach for interpreting PC images, explain-
ing the effect of negative/positive PC eigenvectors in com-
bination with strong reflection/absorption spectral behaviour
at different pixels on the DN values of PC images, has been
given by Gupta et al. (2013).
The method of PCA has been applied for mineral
exploration and mapping alteration zones (e.g. Crosta and
Moore 1989; Loughlin 1991; Ruiz-Armenta and
Prol-Ledesma 1998; Tangestani and Moore 2002; Pour and
Hashim 2011). The following case study from Central
Mexico provides an interesting example.
In order to enhance the spectral response of hydrothermal
alteration minerals in Central Mexico, the Landsat TM data
were processed using PCT by Ruiz-Armenta and Prol-
Ledesma (1998). As expected, PC1 was found to be a general albedo (intensity) image carrying 87% of the scene
variance. Other PCAs carried successively less scene vari-
ance, and had negative/positive contributions from the six
(solar reflection) TM channels. Noteworthy is that PC4 was
found to have a high positive contribution (82%) from TM7
and a high negative contribution (−32%) from TM5. As
hydroxyl minerals exhibit high absorption in TM7 and high
reflectance in TM5, an inverse image of PC4 would repre-
sent hydroxyls as bright pixels (Fig. 13.27a). Similarly, PC5 had a high positive contribution (71%) from TM1
and a high negative contribution (−69%) from TM3. As iron
oxide has strong absorption in TM1 and high reflectance in
TM3, an inverse image of PC5 would show iron oxide as
bright pixels (Fig. 13.27b). In this way, careful interpretation
of the PC eigen-vector data can allow deduction of thematic
Fig. 13.25 Concepts of a principal component analysis axes, and b
information from PC images.
canonical analysis axes

Fig. 13.26 a PC1 and b PC2 images from a Landsat MSS subscene of Saudi Arabia. The PC1 image is a high contrast 'general-albedo' image; PC2 carries spectral information; note the egg-shaped geological feature enhanced on the PC2 image (a, b courtesy of R. Haydn)

Fig. 13.27 a Inverse PC4 and b inverse PC5 images showing distribution of hydroxyls and iron oxides respectively (bright pixels in both cases), in an area in Central Mexico (for details, see text) (a, b Ruiz-Armenta and Prol-Ledesma 1998)

13.7.3 Other Transformations

A number of other transformations are used for enhancing remote sensing data. Amongst them, mention may be made here of the Minimum Noise Fraction (MNF) transformation, which is similar to PCA. It is used to determine the inherent dimensionality of image data, segregate noise in the data, and reduce the computational requirements for subsequent processing. MNF is used as a preparatory transformation to condense most of the essential components into a few spectral bands and to order those bands from the most interesting to the least interesting (Boardman et al. 1995; Wahi et al. 2013). Minimum Noise Fraction analysis identifies the locations of spectral signature anomalies. This process is of interest to exploration scientists because spectral anomalies are often indicative of alterations (Pour and Hashim 2011).
Besides, the Tasseled Cap Transformation is frequently used in vegetation studies and is briefly discussed in Sect. 19.17.1.

13.7.4 Decorrelation Stretching

Another modification of PCT is called decorrelation stretch, a useful technique for processing highly correlated multidimensional image data (Kahle et al. 1980; Campbell 1996). It involves the following steps: (1) a principal component transformation, followed by (2) a contrast equalization by Gaussian stretch, so that histograms of all principal components approximate a Gaussian distribution of a specified variance, resulting in a 3-D composite histogram of a spherically symmetric 3-D Gaussian type, and next (3) a co-ordinate transformation that is the inverse of the principal component rotation, to project the data back into the original space. The inverting operation has the advantage of restoring basic spectral relationships. The decorrelation-stretched images can be used as components for making colour composites. An example of a decorrelation-stretched image colour composite is presented in Fig. 19.125.

13.7.5 Ratioing

Ratioing is an extremely useful procedure for enhancing features in multispectral images. It is frequently used to reduce the variable effects of solar illumination and topography and to enhance spectral information in the images (Crane 1971; Justice et al. 1981). The new digital image is constructed by computing the ratio of DN values in two or more input images, pixel by pixel. The general concept can be formulated as

$\mathrm{DN_{new}} = m\left(\dfrac{\mathrm{DN_A} - K_1}{\mathrm{DN_B} - K_2}\right) + n$   (13.6)

where DN_A and DN_B are the DN values in the A and B input images, K1 and K2 are factors to take care of path radiance in the two input images, and m and n are scaling factors for the gray-scale.
In a situation when DN_B is much greater than DN_A, the use of the equation in the above form would squeeze the range too much, and in such cases a better alternative is to use the logarithm of the ratios (Moik 1980) or the arctan function of the ratios (Hord 1982). Images for complex ratio parameters including additions, subtractions, multiplications and double ratios etc. can be generated in a similar manner.
The resulting ratio image can again be contrast stretched or used as a component for colour displays etc.
To discuss what happens during ratioing and how it helps in feature enhancement, let us refer to the scatterogram concept. In Fig. 13.28, we have a two-channel plot with several straight lines of constant slopes. Each straight line corresponds to a particular ratio between the A and B channels.

Fig. 13.28 The concept of ratioing. The figure shows a two-channel plot with lines of equal slopes; all points lying on a line have the same ratio and therefore acquire the same gray tone in the new ratio image, irrespective of the absolute albedo values

Fig. 13.29 The figure shows spectral curves of two objects A and B, λ1 and λ2 being the two sensor channels. The objects A and B have overlapping spectral responses in both λ1 and λ2 channels; therefore, no single channel is able to give unique results. However, if the ratio of the two channels is taken, the spectral slopes would be given by the A1–A2 and B1–B2 lines, which make discrimination between the objects A and B possible

Pixels with different DNs but the same ratio value in the two channel images would lie on the same line; therefore they would have the same gray tone on the ratio image. Thus, the various gray tones on the ratio image are controlled by the slopes of the straight lines on which the ratio points lie (and not by the distance from the origin!). In this way, a ratio image gives no information on the absolute or original albedo values, but depends on the relative reflectance in the two channels. Therefore, basically, a ratio image enhances features when there are differences in spectral slopes (Fig. 13.29). In this respect, a ratio image can also be considered as a representation of spectral/colour information.
In actual practice, the ratio values of images in the solar reflection region are influenced by path radiance to a high degree, including the skylight component. If we take the ratio values with the shorter wavelength in the numerator and proceed from poorly illuminated to well-illuminated topographical slopes of the same surface material, the ratio values are found to decrease, i.e. in poorly illuminated areas the ratio values are higher in comparison to those in well-illuminated areas (cf. Kowalik et al. 1983). This is because of two reasons:

(a) path radiance is higher towards shorter wavelengths;
(b) the fraction of the total signal arising from the path radiance is greater in poorly illuminated areas than in well-illuminated areas (Fig. 13.30).

Fig. 13.30 Undulating terrain composed of a homogeneous rock (e.g. granite) illuminated from a direction and viewed from two sensor positions; ground segment N reflects more sunlight towards the sensor than ground segment M. The atmospheric contribution (Aλ) is the same and is additive in both cases. The relative fraction of the signal due to path radiance is more at M than at N
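In code, the ratio of Eq. (13.6) with a simple correction for path radiance can be sketched as below (Python/NumPy; here K1 and K2 are crudely taken as the scene minima in the manner of a dark-object subtraction, and the scaling factors m and n, band names and test data are arbitrary assumptions).

import numpy as np

def ratio_image(band_a, band_b, m=100.0, n=0.0):
    """(DN_A - K1) / (DN_B - K2), with K1, K2 estimated as dark-object (path radiance)
    values; the result is rescaled by m and n for display."""
    k1 = float(band_a.min())            # crude dark-object estimates of path radiance
    k2 = float(band_b.min())
    num = band_a.astype(float) - k1
    den = band_b.astype(float) - k2
    ratio = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return m * ratio + n

rng = np.random.default_rng(5)
tm1 = rng.integers(40, 200, (100, 100))
tm3 = rng.integers(30, 180, (100, 100))
print(ratio_image(tm1, tm3).round(1).max())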

Fig. 13.31 Examples of ratio images (TM1/3). a Ratio image without rectification for path radiance; the topographical effects are still seen on the ratio image. b Ratio image when path radiance is properly rectified; the resulting image is free from topographical effects. (Landsat TM sub-scene covering a part of the Khetri copper belt, India.)

Ratio images derived from such data carry strong topographic effects; therefore, it is necessary to rectify for the path radiance component in the input images before constructing a ratio image. Figure 13.31 presents an example.

Scope for Geological Applications

The most important advantage of ratioing is that it provides an image which may be quite independent of illumination conditions. The pixels acquire the same DN value if the ground material has the same spectral ratio, quite irrespective of whether the ground happens to lie in a well- or poorly illuminated zone. Therefore, a properly made ratio image significantly reduces topographic effects (Fig. 13.31).
The Landsat TM/ETM+/OLI and ASTER data have been available on a global basis and have suitable spatial, spectral and radiometric resolutions. Some of the more useful ratio combinations and their applications are given in Chap. 19. Such processed images have found practical applications in mapping vegetation, limonite, clays and specific minerals (from ASTER data), and in delineation of hydrothermal alteration zones.

13.8 Colour Enhancement

13.8.1 Advantages

Colour viewing is a highly effective mode of presentation of multispectral images (Buchanan 1979; Haydn et al. 1982; Haydn 1985; Gillespie et al. 1986, 1987). It leads to feature enhancement owing to the following three main reasons.

1. Sensitivity of the human eye. Whereas the human eye can distinguish at the most only about 20–25 gray tones, its sensitivity to colour variation is very high and it can distinguish more than a million colours. Therefore, subtle changes in the image can be more readily distinguished by human interaction if colour is used.
2. Number of variables available. Black-and-white images carry information in terms of only one variable, i.e. tone or brightness (gray level). In comparison to this, a colour space consists of three variables—hue, saturation and brightness.
3. Possibility of collective multi-image display. In remote sensing, we often deal with multiple images, viz. images in different spectral bands, multi-sensor images, multi-temporal images, various computed images, etc. For a collective interpretation, the colour space offers a powerful medium. It stems from colour theory (Wyszecki and Stiles 1967) that three input images can be viewed in colour space concurrently.

The display of wide-spectrum remote sensing data from gamma ray to microwave in the visible space is possible only through falsification of colours. Therefore, the colours seen on a false-colour image may not have any relation to the actual colours of the objects on the ground.

13.8.2 Pseudocolour Display

The pseudocolour technique of colour enhancement is applied to a single image, i.e. one image at a time. In this, the gray tones are coded in colours, according to a suitable arbitrary scheme. Frequently, a density-sliced image (Sect. 13.5) is processed so that each gray-tone slice is displayed in a particular colour, to permit better discrimination of features (also called density-slicing colour coding). Figure 13.32 presents an example where different temperatures over a lava flow surface are exhibited in different colours.
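Such a density-sliced pseudocolour display can be produced with a simple look-up table. The sketch below (Python/NumPy; the slice boundaries, the colour choices and the synthetic thermal band are arbitrary and only illustrate the principle) assigns one RGB colour to each gray-level range of a single band.

import numpy as np

def pseudocolour(band, boundaries, colours):
    """Density-slice the band at the given DN boundaries and code each slice
    in one colour; returns an (rows, cols, 3) RGB image."""
    slices = np.digitize(band, boundaries)          # slice index 0 .. len(boundaries)
    lut = np.array(colours, dtype=np.uint8)         # one RGB triplet per slice
    return lut[slices]

rng = np.random.default_rng(6)
thermal = rng.integers(0, 256, (80, 80))
colours = [(0, 0, 0), (0, 0, 255), (255, 0, 0), (255, 255, 0), (255, 255, 255)]  # black-blue-red-yellow-white
rgb = pseudocolour(thermal, boundaries=[60, 120, 180, 230], colours=colours)
print(rgb.shape, rgb.dtype)   # (80, 80, 3) uint8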

Fig. 13.32 Pseudocolour coding. A sequence of night-time thermal images (ASTER band 14) shows lava flows entering the sea, Hawaii Island. Colour coding from black (coldest) through blue, red, yellow, white (hottest). The first five images show a time sequence of a single eruptive phase; the last image shows flows from a later eruptive phase [courtesy of NASA/GSFC/MITI/ERSDAC/JAROS and US/Japan ASTER Science Team (see colour Plate I)]

13.8.3 Colour Display of Multiple Images—Guidelines for Image Selection

As mentioned earlier, one of the chief advantages of colour display is that multiple (three) images can be displayed and interpreted concurrently. Often, there are many images to choose from, as any simple or processed image can be used as a component in a colour display. Screening and selection of bands may initially be done on the basis of dynamic range, atmospheric effects, knowledge of spectra of ground materials and objects of interest, followed by trial and error

to some extent. However, as the number of available images


increases the method of trial and error becomes more con-
fusing and the selection more critical.
The information content of the colour images depends
largely on the quality of the input images. In general,
meaningful and good-looking colour displays are obtained
if: (a) each of the three component images is suitably con-
trast stretched and frequency histograms of all three input
images are similar, (b) excessive contrast is avoided, as this
may lead to saturation at one or both ends, and (c) the mean
DN value lies at the centre of the gray range in each of the
stretched images.

13.8.4 Colour Models

A number of colour models have been used for enhancing


digital images (see Appendix A). We discuss here two basic
methods: (1) using the RGB model and (2) using the IHS
model.

13.8.4.1 Using the RGB Model


The RGB model is based on the well-known principle that
red, green and blue are the three primary additive colours.
Display monitors utilize this principle for colour image
display.
The RGB model is capable of defining a significant
subset of the human perceptible colour range. Its field can be
treated as a subset of the more basic and larger chromaticity
diagram (Fig. 13.33). The positions of R, G and B end
members in the chromaticity diagram will define the gamut
of colours that can be generated by using a particular triplet.
In RGB coding, a set of three input images is projected
concurrently, one image in one primary colour, i.e. one
image in red, the second in green and the third in blue. In
this way, each image is coded in a colour (that is commonly
a false colour). Variation in DN values in the three input
images collectively lead to variation in output colours on the colour display.
In principle, any image can be coded in any colour, although it is more conventional to code a relatively shorter wavelength image in blue, and a longer wavelength image in red. Various types of transformed (e.g. ratio, PCA, etc.) and TIR or SAR images can also be used as input images.
Whatever the scheme, the R, G and B points in the chromaticity diagram define a triangle. Therefore, a simple way to understand relations between various colours and interpret the colour composite is through the RGB colour ternary diagram. Several examples are given (see e.g. Figs. 11.11, 19.68b and 19.81b).

Fig. 13.33 a CIE chromaticity diagram with colour field generated by B, G and R primaries. b Schematic position of various colours in the BGR (or RGB) diagram

13.8.4.2 Using the IHS Model

In this model, colour is described in terms of three parameters: intensity (I), hue (H) and saturation (S). Hue is the dominant wavelength, saturation is the relative purity of hue, and intensity is the brightness (for details, refer to Appendix A).
Conceptually, the IHS model can be represented with a cylinder, where hue (H) is represented by the polar angle, saturation (S) by the radius and intensity (I) by the vertical distance on the cylinder axis (Fig. 13.34). However, the

number of perceptible colours decreases with decrease in intensity, and the contributions of hue and saturation become insignificant at zero intensity. For these reasons, a cone could be a better representation. The mutual relationships between RGB and IHS transforms are discussed by many (e.g. Buchanan and Pendgrass 1980; Haydn et al. 1982; Gillespie et al. 1986; Harris et al. 1990).

Fig. 13.34 Principle of colour coding in the IHS scheme. The first image is coded in intensity, the second in hue, and the third in saturation

The coding of remote sensing images using this system is called the IHS transform. Each image in the triplet is coded in one of the three colour parameters:

• First image: 0–255 DN values = intensity variation at pixels (i.e. dark to bright),
• Second image: 0–255 DN values = hue variation (blue–green–red–blue),
• Third image: 0–255 DN values = saturation variation at pixels (i.e. pure saturated colour to gray).

The scheme is inherently highly flexible. However, minor changes in the hue axis may lead to drastic changes in the colour of the pixels (for example, suddenly from blue to green!) on the display screen; therefore, decoding of the colours has to be done very carefully.
When images of differing resolutions are being colour-displayed in IHS, it is useful to have the high-spatial-resolution image as the intensity image, as this becomes the base image and all other images can be easily registered to this image. Further, hue is the polar angle of the conical/cylindrical colour space; in principle, one can use the entire range blue–green–red–magenta–blue for coding; however, as this renders the same hue (blue) to the lowest and highest DNs, it is customary to restrict the hue range to blue–green–red for coding. An example of IHS coding is given in Fig. 19.134.

13.9 Image Fusion

13.9.1 Introduction

Image data can be acquired from a variety of aerial/spaceborne sensors with differing spatial, temporal and spectral resolutions. Image fusion is the technique of integrating these images and other data to obtain more and better information about an object of study than would be possible to derive from single-sensor data alone. It can be defined as 'a process of merging data from multiple sources to achieve refined information'. Thus, it is a tool to combine multi-source imagery and data using advanced processing techniques (for a review on image fusion, refer to Pohl and van Genderen 1998). Fused data products provide increased confidence and reduced ambiguity in image interpretation and applications.
Image fusion techniques can be applied to digital images in order to:

• sharpen images
• improve geometric corrections
• enhance certain features not visible in either of the single data sets alone
• complement data sets for improved classification
• detect changes using multi-temporal data
• substitute missing information in one image with signals from another sensor (e.g. clouds, shadows etc.)
• replace defective data.

13.9.2 Techniques of Image Fusion

The techniques used for image fusion can be categorized into three types, depending upon the stage at which the image data sets are fused:

(a) pixel-based fusion
(b) feature-based fusion
(c) decision-based fusion

The first step in all fusion methods is pre-processing of the data to ensure that radiometric errors are minimized and the images are co-registered. The subsequent steps may differ according to the level of fusion. After image fusion, the resulting image may be further enhanced by image processing.

Fusing data at pixel level requires co-registered images at sub-pixel accuracy. The advantage of pixel fusion is that the input images contain the most original information; therefore, the measured physical parameters are retained and merged. In feature-level fusion, the data are already processed to extract features of interest to be fused. Corresponding features are identified by their characteristics, such as extent, shape and neighbourhood. In decision-level fusion, the features are classified and fused according to decision rules to resolve differences and provide a more reliable result to the user. For geological applications, most commonly the pixel-based fusion techniques are applied to remote sensing and ancillary image data, which will be discussed in the following paragraphs.
The pixel-based fusion techniques can be grouped into three categories:

(1) statistical and numerical methods
(2) colour transformations
(3) wavelet transform method.

13.9.2.1 Statistical and Numerical Methods

A number of methods can be applied to statistically and numerically process the co-registered multiple-image data sets. For example, the following techniques generate image data with new information:

• principal component transformation
• decorrelation stretch
• addition/subtraction
• ratioing/multiplication.

These can also be treated under the general category of image fusion, as new computed images carry information from multiple input images.
Image fusion is frequently used to sharpen images. The problem is that often we have multiple images in different spectral bands, with different spatial resolutions, e.g. a panchromatic band image with high spatial resolution and several other coarser spatial resolution spectral bands. It would be a good idea to generate a new image that exhibits the spectral character of a particular band but carries spatial information from the high spatial resolution image.
A simple technique to sharpen the image from a multi-resolution image data set using addition is as follows:

• first generate an 'edge' image from the high-resolution panchromatic image,
• add this edge image to a lower-resolution image (Tauch and Kähler 1988).

The new image will exhibit the radiometric character of the lower resolution image but the sharpened edges from the high spatial resolution image.
Brovey transform. This technique is of special interest in image sharpening and deserves specific mention. It maintains the radiometric integrity of the data while increasing the spatial resolution. The technique normalizes a band for intensity, and then the normalized band is multiplied by a high resolution data set. Thus, the transformed band is defined as follows:

$\mathrm{DN_{b1,fused}} = \dfrac{\mathrm{DN_{b1}}}{(\mathrm{DN_{b1}} + \mathrm{DN_{b2}} + \cdots + \mathrm{DN_{bi}} + \cdots + \mathrm{DN_{bn}})} \times \mathrm{DN_{Highres}}$   (13.7)

where DN_b1,fused is the digital number of the resulting fused image in band 1. The advantage of this technique is that it optically maintains the spectral information of the band whilst sharpening the scene. In this way, a set of three bands can be sharpened and then displayed in RGB.
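The intensity-normalized sharpening of Eq. (13.7) is compact to write. The sketch below (Python/NumPy; band names, shapes and the random test data are hypothetical, and the multispectral bands are assumed to be already resampled to the panchromatic grid) normalizes each of three bands by the band sum and multiplies by the high-resolution band.

import numpy as np

def brovey_sharpen(bands, pan):
    """bands: (3, rows, cols) multispectral bands resampled to the panchromatic grid;
    pan: (rows, cols) high-resolution band. Each band is normalized by the band sum
    and multiplied by the high-resolution data, as in Eq. (13.7)."""
    total = bands.sum(axis=0).astype(float)
    total[total == 0] = 1.0                     # avoid division by zero
    return bands.astype(float) / total * pan

rng = np.random.default_rng(7)
ms = rng.integers(10, 200, (3, 100, 100))       # three co-registered spectral bands
pan = rng.integers(10, 255, (100, 100)).astype(float)
fused = brovey_sharpen(ms, pan)
print(fused.shape)                              # (3, 100, 100), ready for RGB display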
13.9.2.2 Colour Transformation (RGB–IHS)

Image fusion by colour transformation takes advantage of the possibility of presenting multiple-image data in different colours. The RGB system allows the assigning of three different images to the three primary additive colours—red, green and blue, for collective viewing.
A useful transform for multiband image data is the IHS (also called HSI) transform, which separates spatial (I) and spectral (H and S) information from a standard RGB image. The IHS technique can be applied for image fusion either in a direct manner or in a substitutional manner. The first refers to the transformation of three image channels assigned to I, H and S directly. In the latter, an RGB composite is first resolved into IHS colour space; then, by substitution, one of these three components is replaced by another co-registered image (Fig. 13.35). An inverse transformation returns the data to the RGB colour space, producing the fused data set. The IHS transformation has become quite a standard procedure in image analysis. The method provides colour enhancement of correlated data, feature enhancement, and improvement of spatial resolution (Gillespie et al. 1986; Daily 1983; Harris et al. 1990; Welch and Ehlers 1987; Carper et al. 1990).
13.9.2.3 Wavelet Transform Method

Given a set of images with different spatial resolutions and spectral content, this method allows improved spatial resolution of the coarser resolution image, which can be increased up to the level of the highest spatial resolution image available.

Fig. 13.35 Steps in image fusion and sharpening using RGB–IHS–RGB colour transformation

Fig. 13.36 Main steps in the wavelet transform method for generating a synthesized image with the spectral content of the low-resolution image and the spatial resolution of the high-resolution image

The concept makes use of multi-resolution analysis (MRA) using the wavelet transform (WT). WT is used to compute successive approximations of an image with coarser and coarser spatial resolutions, and to represent the difference of information existing between two successive approximations. Wavelet coefficients allow isolation of finer structures. In practice, wavelet coefficients from both the low-resolution and the high-resolution images are computed separately, and the data are fed into a model. From this, new wavelet coefficients are computed for generating a synthesized image, which corresponds to the spectral character of the low-resolution image but the spatial resolution of the high-resolution image (Fig. 13.36) (for details, see, e.g. Zhou et al. 1998).
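As a minimal illustration of the scheme in Fig. 13.36, the sketch below performs a one-level substitution fusion with PyWavelets: the panchromatic image is decomposed, its approximation coefficients are replaced by the (statistics-matched) low-resolution band, and the inverse transform is taken. This is only a simplified stand-in for the multi-level model described by Zhou et al. (1998); the Haar wavelet, the single decomposition level and the matching scheme are arbitrary illustrative choices.

```python
import numpy as np
import pywt

def wavelet_fusion(pan, band_lowres):
    """One-level wavelet substitution fusion (illustrative).
    pan         : high-resolution panchromatic array (H, W), H and W even
    band_lowres : co-registered spectral band at half the resolution (H/2, W/2)
    Returns a band with the spectral character of band_lowres and the
    spatial detail of pan."""
    cA, (cH, cV, cD) = pywt.dwt2(pan, 'haar')       # decompose the pan image
    # scale the low-resolution band to the value range of the approximation
    b = (band_lowres - band_lowres.mean()) / (band_lowres.std() + 1e-12)
    cA_new = b * cA.std() + cA.mean()
    # the detail coefficients (cH, cV, cD) of the pan image carry the fine structure
    return pywt.idwt2((cA_new, (cH, cV, cD)), 'haar')
```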
13.10 2.5-Dimensional Visualization

The term 2.5-dimensional visualization is used to imply that the display is more than 2-D but not fully 3-D. Raster images are typically two-dimensional displays in which the position of a pixel is given in terms of X, Y co-ordinates. In remote sensing image processing, the DN values can be considered as Z data, i.e. at each X, Y location, a certain Z value is associated. This means that the raster image data can be portrayed as a surface. However, in a true 3-D, it should be possible to provide more than one Z value at each X, Y location, to enable modelling of features in 3-D (e.g. an overturned fold or an orebody below the gossan). As remote sensing image processing generates a surface of varying elevation across the scene, the term 2.5-D is used, since the display is more than 2-D but not fully 3-D.
There are different ways of visualizing 2.5-D raster data:

(1) shaded relief model,
(2) synthetic stereo, and
(3) perspective view.

13.10.1 Shaded Relief Model (SRM)
A shaded relief model is an image of an area that is obtained by artificially illuminating the terrain from a certain direction. To generate a shaded relief model we need a digital terrain model (DTM), from which slope and aspect values at each pixel are calculated. Then, using an artificial light source (e.g. ‘artificial sun’), and assuming a suitable scattering model, scattered radiance is calculated at each pixel, to yield an SRM. Changing the illumination direction can generate an infinite number of SRMs of an area (for an example of an SRM, see Fig. 8.3).
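The SRM computation described above can be expressed in a few lines. The sketch below is a minimal, illustrative implementation assuming a square-gridded DEM and a simple Lambertian-type illumination model; the sun azimuth and elevation, the cell size and the function name are arbitrary choices, not prescriptions from the text.

```python
import numpy as np

def shaded_relief(dem, cellsize=30.0, sun_azimuth=315.0, sun_elevation=45.0):
    """Compute a simple shaded relief model (SRM) from a DEM array.
    Slope and aspect are derived from elevation gradients; the scattered
    radiance is the cosine of the angle between surface normal and sun."""
    az = np.radians(sun_azimuth)
    alt = np.radians(sun_elevation)
    dz_dy, dz_dx = np.gradient(dem, cellsize)       # elevation gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))       # slope angle per pixel
    aspect = np.arctan2(-dz_dx, dz_dy)              # aspect angle per pixel
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)                # scattered radiance image
```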
13.10.2 Synthetic Stereo

Another practical possibility for combining and enhancing multi-image data is by developing a synthetic stereo (Pichel et al. 1973; Sawchuka 1978; Zobrist et al. 1979; Haydn et al. 1982). The technique utilizes the simple geometrical relationship described by the standard parallax formula in aerial photogrammetry. The only difference is that now the height–parallax relationship is used to calculate parallax values, which are artificially introduced in the image, to correspond to the ‘height’ data.
Consider two images A and B to be merged by synthetic stereo; one of these images is to be used as the base image and the other as the ‘height’ image. Generally, the high-resolution image is taken as the base image on which data from the lower-resolution image is superimposed as height data. The two images are first registered over each other so that the ‘height’ values from the second image can be assigned to corresponding pixels in the base image. The geometric relationship for deriving the parallax shift is well known. Using DN values in the ‘heighting’ image, parallax values are calculated for each pixel and assigned to the base-image pixels. The pixels in the base image are shifted by an amount corresponding to the parallax value. The processing is done pixel by pixel, line by line, to generate a set of new images. The resulting stereo pair can be analysed with an ordinary stereoscope. For example, Fig. 13.37 shows a synthetic stereo in which the base image is TM4 and the parallax is due to the image TM6 (thermal-IR channel). Temperature variations appear like an artificial topography, where areas with higher temperatures appear elevated.

13.10.3 Perspective View

Perspective views are based on integration of remote sensing and DTM data. First, the remote sensing image is registered and draped over the topographic map. Then, the 3-D model can be viewed from any angle to provide a perspective view. Such models help in terrain conceptualization. Figure 13.38 shows a perspective view generated from MOMS-02P. The aft and fore cameras were used to generate the stereo model from which the DTM was derived; the high-resolution panchromatic channel was then draped over this DTM.
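A minimal sketch of the pixel-shifting idea of Sect. 13.10.2 is given below, assuming that the parallax is simply taken proportional to the DN of the co-registered ‘height’ image; the scaling constant (max_parallax), the nearest-pixel shifting and the function name are illustrative simplifications, and gaps created by the shifts are left unfilled.

```python
import numpy as np

def synthetic_stereo(base, height, max_parallax=5.0):
    """Generate one member of a synthetic stereo pair by shifting base-image
    pixels along the scan line by a parallax proportional to the DN of the
    co-registered 'height' image (the unshifted base is the other member)."""
    rows, cols = base.shape
    shifted = np.zeros_like(base)
    h = height.astype(float)
    # parallax (in pixels) scaled from the height-image DN range
    parallax = (h - h.min()) / (np.ptp(h) + 1e-12) * max_parallax
    for r in range(rows):                          # line by line
        for c in range(cols):                      # pixel by pixel
            cc = int(round(c + parallax[r, c]))    # shift by the parallax value
            if 0 <= cc < cols:
                shifted[r, cc] = base[r, c]
    return shifted
```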

13.11 Image Segmentation/Slicing

13.11.1 General

The purpose of image segmentation or image slicing is to subdivide the image into zones that are homogeneous with respect to image properties. From a geological point of view, image segmentation could be of interest for lithological mapping. Several variables such as texture, which is a complicated parameter to quantify, may be involved in the task of image segmentation, and the technique is under continuous development using various statistical approaches. The reader may refer to other texts for details (e.g. Hsu 1978; Haralick 1979; Farag 1992; Pal and Pal 1993; Evans et al. 2002; Hu et al. 2005).
The segmentation of an image into sub-areas involves
two considerations: (a) the sub-area should have spectral
similarity, and (b) the presence of break or change charac-
teristics between two sub-areas should be shown.
Most algorithms presently used employ one of two approaches: (1) use a local gradient operation to detect segment/region boundaries, or (2) employ aggregation of neighbouring, similar pixels into larger regions (region growing).

Fig. 13.37 Synthetic stereo formed by using TM4 as the base image and TM6 (thermal channel) as the height information (Courtesy of H. Kaufmann)

Fig. 13.38 Perspective view generated from MOMS-02P of an area in Korea (courtesy of DLR, Oberpfaffenhofen)

Thresholding is the process of subdividing an image by putting boundaries within any gray scale, and is a highly critical operation. Using only gray level as the parameter, an image may be subdivided into segments by density slicing, which may be based on a certain statistical criterion, such as a multi-modal distribution in the DN histogram. The image can also be subdivided by gradient edges that are produced by sudden changes in DN values of successive pixels in the image.
On the other hand, the region growing approach involves clustering of pixels based on spectral characteristics but operates in the spatial domain. Successive pixels are compared, both in rows and columns. Two parameters are specified by the user: a DN difference threshold and a DN standard deviation threshold. If the pixels have spectral differences within the threshold, they are aggregated. Aggregation/segregation of pixels is performed based on the above spectral similarity/dissimilarity, and spatial clusters are created (see e.g. Benz et al. 2004; Hu et al. 2005). The region growing approach is now used in ‘object based image analysis’ (OBIA) discussed below (for details, see Blaschke et al. 2008).
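A minimal single-band region-growing sketch based on the two user-specified parameters mentioned above (a DN difference threshold and a DN standard-deviation threshold) is given below; operational OBIA software uses far more elaborate multi-band and shape-aware homogeneity criteria, so this is only an illustration of the aggregation principle, with hypothetical parameter values.

```python
import numpy as np
from collections import deque

def region_growing(img, diff_thresh=10.0, std_thresh=15.0):
    """Aggregate 4-connected neighbouring pixels into regions while the DN
    difference to the region mean and the region standard deviation stay
    within the user-specified thresholds. Returns a label image."""
    rows, cols = img.shape
    labels = np.zeros((rows, cols), dtype=int)
    current = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0, c0]:
                continue
            current += 1                     # start a new region (seed pixel)
            members = [float(img[r0, c0])]
            labels[r0, c0] = current
            queue = deque([(r0, c0)])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and not labels[rr, cc]:
                        value = float(img[rr, cc])
                        mean = sum(members) / len(members)
                        # aggregate only if spectrally similar to the region
                        if (abs(value - mean) <= diff_thresh and
                                np.std(members + [value]) <= std_thresh):
                            labels[rr, cc] = current
                            members.append(value)
                            queue.append((rr, cc))
    return labels
```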
13.11.2 Object Based Image Analysis

Fig. 13.39 Relationship between objects under consideration and spatial resolution: a low resolution: pixels significantly larger than objects, sub-pixel techniques needed. b Medium resolution: pixel and object sizes are of the same order, pixel-by-pixel techniques are appropriate. c High resolution: pixels are significantly smaller than objects, regionalization of pixels into groups of pixels and finally objects is needed (after Blaschke 2010)

The need of OBIA becomes obvious if we consider the relationship between the dimensions of objects/classes to be
mapped vis-à-vis the spatial resolution of various sensors (Fig. 13.39). The delineation of land cover classes from medium or low resolution satellite sensor data (such as Landsat TM/ETM+/OLI, ASTER, IRS-LISS etc.) depends upon the spectral information captured within the pixel, and for this purpose the per-pixel and sub-pixel classification techniques may be sufficient (Fig. 13.39a, b) (see Sect. 13.12 for classification).
However, in the case of high-spatial resolution data such as that provided by IKONOS, QuickBird, Geo-Eye, WorldView, Cartosat etc., spatial information attributes such as pixel proximity, image texture, contextual information and geometric characteristics of classes may also be captured or contained in the remote sensing data (Fig. 13.39c) (Benz et al. 2004; Radoux and Defourny 2007). These meaningful attributes imply that the class to be classified occurs as a group of pixels rather than a single pixel. OBIA uses this concept of working on groups of pixels as objects rather than on individual pixels. The ‘region growing’ approach of segmentation has been outlined above. The resulting spatially homogeneous areas are treated as individual classes, and are then classified in their entirety as single spectral samples rather than pixel by pixel.

13.12 Digital Image Classification

The purpose of digital image classification is to produce thematic maps in which each pixel of the image is assigned, on the basis of its spectral response, to a particular theme. The methods of image classification are largely based on the principles of pattern recognition. A pattern may be defined as a meaningful regularity in the data; thus, the identification of pattern in the data is the job of the classification. In the context of remote sensing, the process of classification involves conversion of satellite- or aircraft-derived spectral data (DN values) into different classes or themes of interest (e.g. water, forest, soil, rock etc.). The output from a classification is a classified image, usually called a thematic map. Detailed texts on digital image classification can be found elsewhere (e.g. Schowengerdt 2007; Arora et al. 2011).
With the unprecedented increase in the availability of multispectral digital data, the use of sophisticated computer-based classification techniques has become essential. Moreover, the digital thematic maps prepared from the classification can be used as direct input to a GIS.
There are two basic approaches to digital image classification: supervised and unsupervised. In a supervised classification, the analyst first locates representative samples of different ground-cover types (i.e. classes or themes) of interest in the image. These samples are known as training areas. The training areas are selected based on the analyst’s familiarity with the area and knowledge of the actual ground-cover types. Thus, in a way, the analyst supervises the process of image classification, hence the name. The digital numbers of the pixels in all spectral bands representing these training areas are then used to generate some statistical parameters (also known as training-data statistics or signatures), depending upon the classification algorithm used. Finally, the decision rule developed on the basis of the training-data statistics allocates each pixel of the image to one of the classes.
In unsupervised classification (also called cluster analysis), practically the reverse happens. The classes are first grouped spectrally on the basis of the digital numbers of the pixels. The analyst then confirms the identification of the groups to produce meaningful classes. A range of clustering algorithms may be used to produce these natural groups or clusters in the data. The analyst may, however, have to supply these algorithms with some minimum information, such as the number of clusters, the statistical parameters to be used etc. Thus, unsupervised classification is also not completely free from the analyst’s intervention; however, it does not depend upon a priori knowledge of the terrain.
A hybrid approach involving both supervised and unsupervised methods may generally be the best. For example, in the first stage, an unsupervised algorithm may be used to form clusters, which can provide some information about the spectrally separable classes in the area. This could assist in generating training-data statistics. Then, in the second stage, using a suitable supervised classification algorithm, the pixels are allocated to different classes.
A large number of supervised and unsupervised statistical classification algorithms have been developed; a few of the commonly used algorithms are described here.

13.12.1 Supervised Classification

A supervised classification involves three distinct stages: training, allocation and testing. A schematic representation of a supervised classification process is presented in Fig. 13.40.

13.12.1.1 Construction of Measurement Space
The physical world contains objects that are observed by a remote sensor. Data from measurements constitute the measurement space (i.e. each observed point represents a digital number in each of the n spectral channels in an n-dimensional measurement space). The sensor data may be corrected or processed (e.g. by enhancements, or multi-image operations etc.) before any classification is attempted. Although pre-processing may yield a refined measurement space in which it may be easier to recognize patterns, the spectral nature of the data gets changed; therefore, pre-processing operations ought to be performed very carefully, and only when really necessary.

Fig. 13.40 Flow diagram of data in supervised classification approach

13.12.1.2 Training
The training of a classifier is one of the most critical operations (Hixson et al. 1980). Training data are areas of known identity that are demarcated on the digital image interactively. The training data should be sufficiently representative of the classes in the area. As collection of training data is a costly affair, the size of the training data set ought to be kept small, but at the same time it should be large enough to accurately characterize the classes. The training data sets should therefore meet the following general requirements:

(a) training areas should be homogeneous (i.e. should contain pure pixels);
(b) a sufficiently large number of pixels should be available for the training data set;
(c) the training data set of each class should exhibit a normal distribution;
(d) training areas should be widely dispersed over the entire scene.

Typically, the sample size for the training data is related to the number of variables (viz. spectral channels/bands). Accordingly, the training sample size varies from a minimum of 10b per class to 100b per class, where b is the number of bands (Swain 1978). However, it is not a question of ‘the bigger the better’.
Ideally, the training data should be collected at or near the time of the satellite or aircraft over-pass, in order to accurately characterize the classes on the image. This is, however, not always possible in practice, due to financial, time and resource constraints. Nonetheless, the time of training data collection is crucial to some studies, such as land-cover-change detection and crop classification.
In order to judge the quality of the training data, the following preliminary statistical study may be carried out.

(a) The histogram of the training data for each class may be examined. A normally distributed unimodal curve represents good-quality training samples for that class.
(b) A matrix showing statistical separability (normalized distance between class means) may be computed for each spectral band to check whether or not any two classes are mutually distinguishable in any one or more spectral bands. The classes that have poor statistical

separability in all the bands should be merged into each other.
(c) In order to obtain a better physical idea about the training data set, cross-plots or scatterograms between various spectral bands may also be plotted, to graphically depict the mutual relations of responses in different spectral bands for all of the classes.

After having decided about the quality, number and placement of training areas, the stage is now set for the next step, i.e. feature selection.

13.12.1.3 Feature Selection
Feature selection deals with the distillation process to decipher the most useful bands contained in the measurement space. It is performed to discard the redundant information, in order to increase the efficiency of classification.
As remote sensing data are collected in a large number of bands, some of the spectral bands exhibit high correlation. In such a situation, utilizing data from all of the spectral bands would not be advantageous for the classification process. The main purpose of feature selection is to reduce the dimensionality of the measurement space, minimizing redundancy of data, while at the same time retaining sufficient information. Feature selection may be performed by utilizing various separability indices such as divergence, transformed divergence, Bhattacharyya distance, J–M distance, or from principal component analysis etc. (Swain 1978).
Ratios and indices of spectral measurements can also be used as feature axes. For example, the vegetation index (NDVI), snow index (NDSI), or various spectral ratios to map Fe–O, hydroxyl minerals etc. (Table 13.2) can also form feature axes for image classification. In addition, principal component analysis and canonical analysis, which have already been mentioned, can also form feature axes in image classification.

13.12.1.4 Allocation
Once a suitable feature space has been created, the remote sensing image can be classified. The aim of the allocation stage is to allocate a class to each pixel of the image using a statistical classification decision rule. In other words, the objective is to divide the feature space into disjoint sub-spaces by putting decision boundaries so that any given pixel or pattern can be assigned to any one of the classes. The strategy of a statistical classifier may be based on:

(1) geometry
(2) probability
(3) a discriminant function.

1. Pattern classifiers based on geometry. This type of classifier uses a geometric distance as a measure for sub-dividing the feature space. Two such typical classifiers are the minimum distance to means (or centroid) classifier and the parallelepiped classifier.

The minimum distance to means classifier is the simplest one. The mean vector (as obtained from training-data statistics) representing the mean of each class in the selected bands is used here to allocate the unknown pixel to a class. The spectral distance between the DN values of the unknown pixel and the mean vectors of the various classes is successively computed. (There are a number of ways to compute the distance, e.g. Euclidean distance, Mahalanobis distance.) The pixel is assigned to the class to which it has the minimum distance. Figure 13.41 shows a two-dimensional representation of the concept of this classifier.
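The minimum distance to means rule can be sketched in a few lines, assuming Euclidean distance and class mean vectors already derived from training-data statistics; array shapes and names below are illustrative.

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Allocate each pixel to the class whose mean vector is spectrally nearest.
    image       : array of shape (rows, cols, bands)
    class_means : array of shape (n_classes, bands) from training statistics
    Returns an array (rows, cols) of class indices."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    # Euclidean distance of every pixel to every class mean: (n_pixels, n_classes)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(rows, cols)
```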

Fig. 13.41 Comparison of some supervised classification schemes. a The three known class categories are A, B and C; unknown pixels X and Y are to be classified. b The categorization of the points X and Y as A, B or C, if different classification criteria are used

One major limitation of this classifier is that it does not take into account the variance of the different classes. Thus, using the above rule, an unknown pixel may be classified in a particular class, whereas it may go to another class if the scatter (or variance) of the class in a certain band is also considered.
The parallelepiped classifier is a type of classifier that may incorporate variance also. In this classifier, a range is usually determined from the difference between the lower and upper limits of the training-data statistics of a particular class. These limits may be the minimum and maximum DN values of the class in a certain band, or may be computed as µ ± σ (where µ is the mean of a class and σ is the standard deviation of the class in a band). Once the lower and upper limits have been defined for the classes under consideration, the classifier works on each DN value of the unknown pixel. A class secures the pixel if it falls within its range. For example, in a two-dimensional representation (Fig. 13.41), the classes possess rectangular fields and an unknown pixel lying within a field is classified accordingly. Parallelepiped classifiers are in general very fast and fairly effective. However, their reliability declines sharply if classes have large variances, which may lead to high correlation along the measurement axes. In addition, if a pixel does not fall within a particular range it may remain unclassified.

2. Probability-based classifier or maximum likelihood classifier (MLC). This type of classifier is based on the principle that a given pixel may be assigned to the class to which it has the maximum probability of belonging. A common strategy used is called the Bayes optimal or Bayesian strategy, which minimizes the error of misclassification over the entire set of data classified; it is another way of stating the maximum likelihood principle. The method is based on a PDF (probability density function) obtained from a Gaussian probability distribution. The probabilities of a particular pixel belonging to each class are calculated, and the class with the highest probability secures the pixel (Fig. 13.41).

In practice there may be several classes, each having a different inherent probability of occurrence in the area, and this aspect may also be duly taken into account by considering the a priori probability of occurrence of the various classes. A priori probabilities may be generated from other data, such as by incorporating the effects of terrain characteristics (Strahler 1980) or by including data from a non-parametric process (Maselli et al. 1992).
Further, a threshold is often also applied: if even the highest PDF is below a certain threshold limit (generally a value of two standard deviations), the pixel may be classified as unknown, lest the overall accuracy of the classification output deteriorate. The MLC technique uses a fair amount of computer time, but has been the most popularly used method for classifying remote sensing data as it gives a minimum of errors.
Figure 13.42 shows an example of land-use/land-cover classification using IRS-LISS-II data in the Ganges valley, Himalayas.

Fig. 13.42 Land use/land cover classification of remote sensing data (IRS-LISS-II) (using MLC) of a part of the Ganges valley, Himalayas

3. Classifier based on a discriminant function. Such classifiers associate a certain function with each class and assign the unknown pixel to that class for which the value of the function is maximum. The determination of the optimum discriminating function or decision boundary is basically the training or learning in such a case. The technique is powerful if the classes can be linearly separated along any feature axis. The transformation

involves the building up of a feature axis along which the groups are separated most and inflated least. In a comparative analysis between discriminant analysis and MLC, it was found that the discriminant analysis algorithm offered distinct advantages in accuracy, computer time, cost and flexibility over the MLC (Tom and Miller 1984).

Overall, the maximum likelihood classifier (MLC) has been universally accepted as an effective method of classification. However, as with other statistical classifiers, problems exist with this classifier also. First is its normal distribution assumption: sometimes the spectral properties of the classes are far from the assumed distribution (e.g. in complex and heterogeneous environments). Second, this classifier is based on the principle of one pixel–one class allocation (Wang 1990), thereby forcing each pixel to be allocated to only one class, although it may correspond to two or more classes (viz. mixed pixels). The maximum likelihood classifier shows marked limitations under these circumstances.
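A minimal sketch of Gaussian maximum likelihood allocation is given below, assuming class means and covariance matrices already estimated from training data, equal a priori probabilities and no rejection threshold; a priori weighting and thresholding, as discussed above, could be added by modifying the log-likelihood and applying a cut-off. Names and shapes are illustrative.

```python
import numpy as np

def maximum_likelihood_classify(image, means, covariances):
    """Gaussian maximum likelihood classification (equal priors, no threshold).
    image       : (rows, cols, bands) array
    means       : (n_classes, bands) class mean vectors
    covariances : (n_classes, bands, bands) class covariance matrices
    Returns an array (rows, cols) of class indices."""
    rows, cols, bands = image.shape
    x = image.reshape(-1, bands).astype(float)
    n_classes = means.shape[0]
    log_like = np.empty((x.shape[0], n_classes))
    for k in range(n_classes):
        diff = x - means[k]
        inv = np.linalg.inv(covariances[k])
        _, logdet = np.linalg.slogdet(covariances[k])
        # squared Mahalanobis distance of every pixel to class k
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        log_like[:, k] = -0.5 * (bands * np.log(2 * np.pi) + logdet + maha)
    return log_like.argmax(axis=1).reshape(rows, cols)
```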
13.12.2 Unsupervised Classification

This classification method is applied in cases where a priori knowledge of the types of classes is not available and classes have to be determined empirically. It involves development of sets of unlabelled clusters, in which pixels are more similar to each other within a cluster, and rather different from the ones outside the cluster. Because of this characteristic of cluster building, it is also termed cluster analysis. In practice, the spectral distance between any two pixels is used to measure their similarity or dissimilarity. The simplest way to understand it is as follows: start with one pixel and assign to it the name cluster I; then take the next pixel and compute its spectral distance to cluster I; if this distance is less than a certain threshold (say d), then group it in cluster I, otherwise put it in cluster II; then take the next pixel and classify it as a member of cluster I or II, or alternatively cluster III, according to its mathematical location. In this way, all the subsequent pixels may be classified.
Unsupervised classification is highly sensitive to the value of the threshold distance d, which has to be very carefully chosen. The value of d controls the number of clusters that develop; it may happen that a particular d yields clusters having no information value. The technique is very expensive in terms of computer time, as each subsequent pixel has to be compared with all the earlier clusters; therefore, as d is reduced, the number of clusters increases and the computer time increases manifold.

13.12.3 Fuzzy Classification

The classifiers mentioned so far seek to designate mutually exclusive classes with well-defined boundaries, and therefore are based on the ‘one pixel, one class’ phenomenon. These are termed crisp classifications (Fig. 13.43).
In practice, however, such an assumption may not be met by many pixels, as they may actually record reflectance from a number of different classes within a pixel (mixed pixels!) (Chikara 1984; Key et al. 1989; Wang 1990; Fisher and Pathirana 1990; Foody and Arora 1996). Therefore, other approaches are required to classify such pixels, which may use the method of allocating multiple-class membership to each pixel. Such classifications are termed fuzzy classifications. The concept is based on fuzzy set theory, in which pixels do not belong to only one class but instead are given membership values for each class being constructed. These membership values range between 0 and 1, and sum to unity for each pixel.

Fig. 13.43 The concept of crisp versus fuzzy classifiers; in a crisp classifier, a pixel is classified as belonging to the dominant category; in a fuzzy classifier, a pixel is assigned fractions of thematic categories

A pixel with a membership value of 1 signifies a high degree of similarity to that class, while a value near 0 implies no similarity to that class.
Among the various fuzzy classifications, the fuzzy c-means (FCM) algorithm is the most widely used (Bezdek et al. 1984). FCM is a clustering algorithm that randomly assigns pixels to classes and then, in an iterative process, moves pixels to other classes so as to minimize the generalized least-squares error. In addition, the well-known MLC can also be used as a fuzzy classifier, the reason being that the probabilities obtained from MLC have been found to be related to the proportions of classes within a pixel on the ground.
In a fuzzy classification, there will not be a single classified image but a number of fraction images or proportion images, equal to the number of classes considered. For example, if there are five classes, there will be five fraction images.

13.12.4 Linear Mixture Modelling (LMM)

LMM is also a strategy to classify sub-pixel components. It is based on a simple linear equation, derived on the assumption that the DN value of a pixel is a linear sum of the DN values of the component classes, weighted by their corresponding areas (Settle and Drake 1993). Based on this, class proportions can be obtained, which sum to one. However, if the spectral responses of the component classes do not exhibit linear mixing, the results may be misleading. These aspects are discussed in greater detail in Chap. 14.
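The linear mixing assumption can be illustrated with a small unmixing sketch: class proportions are estimated by least squares, with the sum-to-one constraint imposed as a heavily weighted extra equation. The endmember spectra, the weight value and the absence of a non-negativity constraint are illustrative simplifications, not the full procedure of Settle and Drake (1993).

```python
import numpy as np

def linear_unmix(pixel, endmembers, weight=1e3):
    """Estimate class (endmember) proportions for one pixel under the linear
    mixing assumption, with a sum-to-one constraint added as a heavily
    weighted extra equation (non-negativity is not enforced here).
    pixel      : (bands,) DN/reflectance vector
    endmembers : (bands, n_classes) matrix of pure-class spectra
    Returns a (n_classes,) vector of proportion estimates."""
    bands, n_classes = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n_classes))])
    b = np.concatenate([pixel.astype(float), [weight]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```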
13.12.5 Artificial Neural Network Classification

Another classifier is the artificial neural network (ANN) classifier, which is regarded as having enormous potential for classifying remotely sensed data. The major appeal of using neural networks as a classification technique is due to their probability-distribution-free assumption and their capability to handle data from different sources in addition to remote sensing data from different sensors.
A neural network comprises a relatively large number of simple processing units (nodes) that work in parallel to classify input data into output classes (Hepner et al. 1990; Foody 1992, 1995; Schalkoff 1992). The processing units are generally organized into layers. Each unit in a layer is connected to every unit in the next layer. This is known as a feed-forward multi-layer network.
The architecture of a typical multi-layer neural network is shown in Fig. 13.44. As can be seen, it consists of an input layer, a hidden layer and an output layer. The input layer merely receives the data. Unlike the input layer, both hidden and output layers actively process the data. The output layer, as its name suggests, produces the neural network’s results. Introducing hidden layer(s) between the input and output layers increases the network’s ability to model complex functions. The units in the neural network are connected with each other, and these connections carry weights. The determination of the appropriate weights of the connections is known as learning or training. Learning algorithms may be categorized as supervised and unsupervised, as in a conventional classification. Thus, the magnitudes of the weights are determined by an iterative training procedure in which the network repeatedly tries to learn the correct output for each training sample. The procedure involves modifying the weights between units until the network is able to characterize the training data. Once the network is trained, the adjusted weights are used to classify the unknown data set (i.e. the unknown pixels of the image).
In recent years, the neural network has been viewed as the most challenging classification technique after the MLC. An important feature of neural network classification is that higher classification accuracies may be achieved with neural networks than with conventional classifiers, even when a small training data set is used.

Fig. 13.44 Architecture of an artificial neural network (ANN)

Furthermore, neural networks have been found to be consistently effective in situations where data from sources other than remote sensing are considered (Arora and Mathur 2001). Aspects of using ancillary data for classification are discussed in Chap. 18.

13.12.6 Classification Accuracy Assessment

A classification is seldom perfect. Inaccuracies may arise due to imperfect training or a poor strategy of classification. Therefore, it is useful to estimate errors in order to have an idea of the confidence attached to a particular classification.
One of the most common ways of representing classification accuracy is with the help of an error matrix (Congalton 1991). An error matrix is a cross-tabulation of the classes on the classified remotely sensed image and the ground data. It is represented by a c × c matrix (where c is the number of classes) (e.g. Fig. 13.45). The columns of the matrix generally define the ground data while the rows define the classified remotely sensed data (albeit the two are interchangeable). The error matrix has been referred to in the literature by different names, such as confusion matrix, contingency table, misclassification matrix, etc.
To generate an error matrix, a set of testing data is used, usually collected from the ground data in the same way as the earlier used training data. The testing area consists of pixels of known classes, which were not used in training the classifier but were held back for the purpose of classifier evaluation at a subsequent stage. The resulting output from the classifier is compared with the ground truth by building up an error or confusion matrix. The feedback from the classifier evaluation could also be used to modify the strategy of classification or to improve feature selection.
From the error matrix, errors of two types can be computed: omission and commission (Fig. 13.45). Omission errors relate to the producer’s accuracy, which is computed by dividing the number of correctly classified pixels of a category by the total number of pixels of that category present in the reference data (i.e. the column total). Commission errors relate to the user’s accuracy, which is computed by dividing the number of correctly classified pixels of a category by the total number of pixels of that category present in the classified output (i.e. the row total). In concept, the producer’s accuracy gives an estimate of how well a particular thematic unit has been picked up in the classification, as compared to the unit actually present on the ground (reference data); therefore, the correctly classified pixel count is divided by the pixel count of that unit in the reference data. The user’s accuracy gives an estimate of how well a particular thematic unit has been picked up in the classification, as compared to the total number of pixels classified in that category; therefore, the correctly classified pixel count is divided by the pixel count of that unit appearing in the classified image. Several factors affect the classification accuracy, mainly including the reference or ground truth data collection, classification scheme, sampling scheme and analytical technique. Sources of confusion in error matrix generation are given by Congalton and Green (1993).
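The accuracy measures described above can be computed directly from an error matrix, as in the short sketch below (rows = classified image, columns = reference data, following the convention stated above); the example matrix values are invented purely for illustration.

```python
import numpy as np

def accuracy_from_error_matrix(m):
    """Producer's, user's and overall accuracy from a c x c error matrix
    (rows = classified image, columns = reference/ground data)."""
    m = np.asarray(m, dtype=float)
    correct = np.diag(m)                     # correctly classified pixels
    producers = correct / m.sum(axis=0)      # per class, column totals
    users = correct / m.sum(axis=1)          # per class, row totals
    overall = correct.sum() / m.sum()
    return producers, users, overall

# example: a hypothetical 3-class error matrix
matrix = [[50,  3,  2],
          [ 4, 45,  6],
          [ 1,  2, 40]]
prod, user, overall = accuracy_from_error_matrix(matrix)
```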
13.12.7 Super Resolution Techniques

An imaging device inherently breaks up a continuous scene into discrete units, and assigns an average brightness value to each discrete unit. This artificial segmentation, i.e. sampling and quantization, is a type of degradation common to all imaging sensors, whereby each pixel is treated as a homogeneous unit and details within the pixel are virtually lost.
Image fusion techniques commonly used for image sharpening have been discussed in Sect. 13.9. These techniques bring the resolution of the set of images up to the level of the highest-resolution input image, not any further. Super-resolution reconstruction (SRR) is an advanced image processing technique that attempts to create high-resolution images (of higher resolution than the available sensor data) from a set of low-resolution images, whereby details within the pixel can be retrieved (Park et al. 2003; Benz et al. 2004; Zhang et al. 2014).
Over the last nearly five decades of remote sensing, there has been a continuous drive to improve the spatial resolution of imaging sensors. One general approach has been to regularly improve the hardware (sensor fabrication), i.e. decreasing the physical sizes of the charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) sensor cells. As a result, the high-resolution satellite imagery now available from IKONOS, QuickBird, Geo-Eye, WorldView, Cartosat etc. provides rich, detailed information on the Earth’s surface and allows high-definition visual interpretation. Such image data are extensively used these days for urban planning, land use/land cover management, disaster assessment, defence intelligence, and so on. However, there is a technical limit to sensor cell size reduction, as beyond a certain limit it could degrade the signal-to-noise ratio and harm the image quality. Looking into the future, the SRR technique is one option to further improve the output image resolution.
Besides, there could be a requirement in some cases to generate high-resolution images from a set of existing

relatively low-resolution image data that may be noisy, blurred and/or down-sampled. The multiple input images may have been acquired at different times (multi-temporal) or from different angles. Super-resolution image processing using various adaptive and weighted filters can generate new images with improved resolution.
There are also classification approaches involving super resolution. The sub-pixel or soft classification (discussed earlier in Sect. 13.12.3) gives the class membership grades within the pixel for the various classes present; it gives the proportion of each land cover class within the pixel, but lacks information on the spatial location of these sub-pixel class units. Super-resolution mapping attempts to resolve this ambiguity. The technique recovers information not only on the type of sub-pixel units, but also on their spatial location within the pixel. This is variously called sub-pixel mapping, super-resolution mapping or sharpening. Thus, the super-resolution map is a classified map that is at a finer scale (smaller pixel size) than the pixel size of any of the input images (for more details, see e.g. Foody 1998; Verhoeye and Wulf 2002; Tatem et al. 2002; Mertens et al. 2004).
Evidently, the super-resolution techniques, both image reconstruction and mapping, are applied mostly to the extraction of fine, detailed surface information from remote sensing data, with applications mainly in urban town-planning, strategic surveys, defence intelligence, etc.

13.12.8 Scope for Geological Applications

The digital classification approach to multispectral remote sensing data has found rather limited application in geological–lithological mapping. Most of the remote sensing data available world-wide (e.g. Landsat TM/ETM+/OLI, Terra-ASTER, SPOT-HRV, IRS-LISS) pertain principally to the solar reflection region, which brings information from the top few-microns-thick surficial cover over the Earth’s surface. The thermal-infrared data are related to the few-centimetres-thick top surface layer and, although better in this respect, are scarcely available with adequate spatial resolution. In geological mapping, the geologist is interested in defining bedrock lithologies, irrespective of the type of surface cover, such as vegetation, scree, alluvium, cultivation etc., which may serve only as a guide at most. This could be a major reason why classification maps are reported to have a rather low correspondence with field geological maps (cf. Siegal and Abrams 1976).
It is in this context that enhancement has been a comparatively more rewarding approach for geological interpretation and applications. In practice, the geologist can avail himself of an enhanced remote sensing product and interpret the image data suitably, considering the various surface features such as topography, drainage, geomorphology, soil, land-cover, vegetation, alluvium, scree etc., for lithological discrimination and mapping (Fig. 13.45).

Fig. 13.45 A typical error matrix
References

Davis JC (1986) Statistics and data analysis in geology, 3rd edn. Wiley,
New York, p 646
Drury SA (2004) Image interpretation in geology, 3rd edn. Blackwell
Arora MK, Mathur S (2001) Multi-source image classification using Sciences, Malden, p 304
neural network in a rugged terrain. Geocarto Int 16(3):37–44 Evans C, Jones R, Svalbe I, Berman M (2002) Segmenting multispec-
Arora MK, Shukla A, Gupta RP (2011) Digital information extraction tral Landsat TM images into field units. IEEE Trans Geosci Remote
techniques for snow cover mapping from remote sensing data. In: Sens 40(5):1054–1064
Singh VP, Singh P, Haritashya UK (eds) Encyclopedia of snow, ice Farag AA (1992) Edge-based image segmentation. Remote Sens Rev
and glacier. Springer, Dordrecht, pp 213–232 6:95–122
Benz UC, Hofmann P, Willhauck G, Lingenfelder I, Heynen M (2004) Fisher PF, Pathirana S (1990) The evaluation of fuzzy membership of
Multi-resolution, object-oriented fuzzy analysis of remote sensing land cover classes in the suburb an zone. Remote Sens Environ
data for GIS-ready information. ISPRS J Photogram Remote Sens 34:121–132
58:239–258 Foody GM (1992) A fuzzy sets approach to representation ofvegetation
Bezdek JC, Ehrlich R, Full W (1984) FCM: The fuzzy c-means continua from remotely sensed data: an example from Lowland
clustering algorithm. Comp Geosci 10:191–203 heath. Photogram Eng Remote Sens 58:221–225
Blaschke T (2010) Object based image analysis for remote sensing, Foody GM (1995) Land cover classification by an artificial neural
ISPRS J Photogram Remote Sens 65(1):2–16. ISSN 0924-2716. network with ancillary information. Int J Geog Inform Sys
http://dx.doi.org/10.1016/j.isprsjprs.2009.06.004 9:527–542
Boardman JW, Kruse FA, Green RO (1995) Mapping target signatures Foody GM (1998) Sharpening fuzzy classification output to refine the
via partial unmixing of AVIRIS data. In: Proceedings of Fifth JPL representation of sub-pixel land cover distribution. Int J Remote
airborne earth science workshop, summaries, Pasadena, California, Sens 19(13):2593–2599
vol 1. JPL Publication 95–1, pp 23–26, 23–26 Jan 1995 Foody GM, Arora MK (1996) Incorporating mixed pixels in the
Buchanan MD (1979) Effective utilization of colour in multidimen- training, allocation and testing stages of supervised classifications.
sional data presentation. Proc Soc Photo Opt Instrument Eng Pattern Recog Lett 17:1389–1398
199:9–19 Gillespie AR (1980) Digital techniques of image enhancement. In:
Buchanan MD, Pendgrass R (1980) Digital image processing: can Seigal BS, Gillespie AR (eds) Remote sensing in geology. Wiley,
intensity hue and saturation replace red, green and blue? New York, pp 139–226
Electro-Opt Syst Design 12(3):29–36 Gillespie AR, Kahle AB, Walker RE (1986) Color enhancement of
Byrne GF, Crapper PF, Mayo KK (1980) Monitoring land-cover highly correlated images: I-decorrelation and HIS contrast stretches,
change by principal component analysis of multitemporal Landsat Remote Sens Environ 20:209–235
data. Remote Sens Environ 10:175–189 Gillespie AR, Kahle AB, Walker RE (1987) Colour enhancement of
Campbell NA (1996) The decorrelation stretch transform. Int J Remote highly correlated images: II-channel ratio and chromaticity trans-
Sens 17:1939–1949 formation techniques. Remote Sens Environ 22:343–365
Carper WJ, LilIesand TM, Kiefer RW (1990) The use of Gonzales RC, Woods RE (2008) Digital image processing, 3rd edn.
intensity-hue-saturation transformations for merging SPOT Addison Wesley, Reading
panchromatic and multispectral image data. Photogramm Eng Gupta RP (2003) Remote sensing geology, 2nd edn. Springer, Berlin,
Remote Sens 56(4):459–467 655 p
Chander G, Markham BL, Helder DL (2009) Summary of current Gupta RP, Tiwari RK, Saini V, Srivastava N (2013) A simplified
radiometric coefficients for Landsat MSS, TM, ETM+, and EO-1 approach for interpreting principal component images. Adv Remote
OLI sensors. Rem Sens Environ 113:893–903 Sens 2:111–119
Chikara RS (1984) Effect of mixed pixels on crop proportion Haralick RM (1979) Statistical and structural approaches to texture.
estimation. Remote Sens Environ 14:207–218 Proc IEEE 67:786–804
Condit CD, Chavez PS (1979) Basic concepts of computerised digital Haralick RM, Fu K (1983) Pattern recognition and classification. In:
image processing for geologists. US Geol Surv Bull No. 1462, US Colwell RN (ed) Manual of remote sensing. American Society of
Govt Printing Office, Washington DC, 16p Photogrammetry and Remote Sensing, Falls Church, VA, pp 793–
Congalton RG (1991) A review of assessing the accuracy of 805
classifications of remotely sensed data. Remote Sens Environ Harris JR, Murray R, Hirose T (1990) IHS transform for the integration
37:35–46 of radar imagery and other remotely sensed data. Photogramm Eng
Congalton RG, Green K (1993) A practical look at the sources of Remote Sens 56:1631–1641
confusion in error matrix generation. Photogram Eng Remote Sens Harris JR, David WV, Andrew NR (1999) Integration and visualization
59:641–644 of geoscience data. In: Remote sensing for the earth sciences,
Crane RB (1971) Preprocessing techniques to reduce atmospheric and manual of remote sensing, 3rd edn, vol 3. Am Society for
sensor variability in multispectral scanner data. In: Proceedings of Photogrammetry and Remote Sensing, pp 307–354
7th international symposium on remote sensing of environment, vol Haydn R (1985) A concept for the processing and display of Thematic
II. Ann Arbor, MI, pp 1345–1355 Mapper data. In: Proceedings of symposium on Landsat-4 science
Crosta AP, Moore JM (1989) Enhancement of Landsat Themetic characterization early results. NASA publ 2355 Greenbelt, MD,
Mapper imagery for residual soil mapping in SW Minas Gerais pp 217–237
State, Brazil: a prospecting case history in Greenstone Belt Terrain. Haydn R, Dalke GW, Henkel J, Bare JE (1982) Application ofthe IHS
In: Proceedings of 7th thematic conference on remote sensing for colour transform to the processing of multi sensor data and image
exploration geology, Calgary, pp 1173–1187, 2–6 October 1989 enhancement. Proc Int Symp Remote Sens of Arid and Semi-Arid
Curran PJ (1985) Principles of remote sensing. Longman, London Lands, Cairo, pp 599–616
Daily MI (1983) Hue-saturation-intensity split-spectrum processing of Hepner GF, Logan T, Ritter N, Bryant N (1990) Artificial neural
Seasat radar imagery. Photogramm Eng Remote Sens 49:349–355 network classification using a minimal training set: comparison to
Davis LS (1975) A survey of edge detection techniques. Computer conventional supervised classification. Photogramm Eng Remote
Graphics Image Proc-essing 4:248–270 Sens 56:469–473

Hill J (1991) A quantitative approach to remote sensing: sensor Ruiz-Armenta JR, Prol-Ledesma RM (1998) Techniques for enhancing
calibration and comparison. In: Belward AS, Valenzuela CR (eds), the spectral reponse of hydrothermal alteration minerals in
pp 97–110 Thematic Mapper images of Central Mexico. Int J Remote Sens
Hord RM (1982) Digital image processing of remotely sensed data. 19(10):1981–2000
Academic Press, New York, p 256 Russ JC (2011) The image processing handbook, 6th edn. CRC Press,
Hsu S (1978) Texture-tone analysis for automated landuse mapping. Boca Raton
Photogramm Eng Remote Sens 44:1393–1404 Sabins FF Jr (2007) Remote sensing: principles and interpretation, 4th
Hu X, Tao CV, Prenzel B (2005) Automatic segmentation of edn. Waveland Press, Long Grove, p 512
high-resolution satellite imagery by integrating texture, intensity Sawchuka AA (1978) Artificial stereo. App Optics 17:3869–3873
and colour features. Photogramm Eng Remote Sens 71(12): Schalkoff RJ (1992) Pattern recognition: statistical. Wiley, New York
1399–1406 Schowengerdt RA (2007) Remote sensing: models and methods for
Jensen JR (2005) Introductory digital image processing, 3rd edn. image processing, 3rd edn. Academic Press, San Diego
Prentice Hall, Englewood Cliffs, 379 p Settle JJ, Drake NA (1993) Linear mixing and the estimation of ground
Justice C, Wharton SW, Holben BN (1981) Application of digital cover proportions. Int J Remote Sens 14:1159–1177
terrain data to quantify and reduce the topographic effect on Shaw GB (1979) Local and regional edge detectors: some compar-
Landsat data. Int J Remote Sens 2:213–230 isons. Comput Graph Image Process 9:135–149
Kahle AB (1980) Surface thermal properties. In: Siegal BS, Gille- Siegal BS, Abrams MJ (1976) Geologic mapping using Landsat data.
spie AR (eds) Remote sensing in geology. Wiley, New York, Photogram Eng Remote Sens 42:325–337
pp 257–273 Strahler AH (1980) The use of prior probabilities in maximum
Key JR, Maslanik JA, Barry RG (1989) Cloud classification from likelihood classification of remotely sensed data. Remote Sens
satellite data using a fuzzy set algorithm: a polar example. Int J Environ 10:135–163
Remote Sens 10:1823–1842 Swain PH (1978) Fundmentals of pattern recognition in remote
Kowalik WS, Lyon RJP, Switzer P (1983) The effects of additive sensing. In: Swain PH, Davis SM (eds) Remote sensing: the
radiance terms on ratios of Landsat data. Photogram Eng Remote quantitative approach. Mcgraw Hill, New York, pp 136–187
Sens 49:659–669 Tangestani MH, Moore F (2002) Porphyry copper alteration mapping
Lillesand TM, Kiefer RW (1987) Remote sensing and image at the Meiduk Area. Iran, Int J Rem Sens 23(22):4815–4825
interpretation, 2nd edn, Wiley, New York, 721 pp Tatem AJ, Hugh G, Atkinson PM, Nixon MS (2002) Superresolution
Loughlin WP (1991) Principal component analysis for alteration land cover pattern prediction using a Hopfield neural network.
mapping. Photogram Eng Remote Sens 57(9):1163–1169 Remote Sens Environ 79:1–14
Maselli F, Conese G, Petkov L, Resti R (1992) Inclusion of prior Tauch R, Kähler M (1988) Improving the quality of satellite images
probabilities derived from a nonparametric proeess into the maps by various processing techniques. In: International Archives
maximum likelihood classifier. Photogramm Eng Remote Sens of Photogrammetry Remote Sensing. Proceedings of XVI ISPRS
58:201–207 Congress, Tokyo, Japan, pp IV238–IV247
Mather PM (2010) Computer proeessing of remotely sensed images, an Thomas IL, Howorth R, Eggers A, Fowler ADW (1981) Textural
introduction, 2nd edn. Wiley, Chicester enhancement of a circular geological feature. Photogram Eng
Mertens KC, Verbeke LPC, Westra T, De Wulf RR (2004) Subpixel Remote Sens 47:89–91
mapping and sub-pixel sharpening using neural network predicted Thome KJ, Gellman DI, Parada RJ, Biggar SF, Slater PN, Moran MS
wavelet coefficients. Remote Sens Environ 91:225–236 (1993) Absolute radiometric calibration of Thematic Mapper. SPIE
Moik JG (1980) Digital processing of remotely sensed images. NASA Proc 600:2–8
SP-431, US Govt Printing Office, Washington, DC Tom Ch, Miller LD (1984) An automated land-use mapping compar-
Pal NR, Pal SK (1993) A review of image segmentation techniques. ison of the Bayesian maximum likelihood and linear discriminant
Pattern Recogn 26(9):1277–1294 analysis algorithms. Photogram Eng Remote Sens 50:193–207
Park SC, Park MK, Kang MG (2003) Super-resolution image Verhoeye J, Wulf RD (2002) Land cover mapping at sub-pixel scales
reconstruction: a technical overview. IEEE Signal Process Mag using linear optimization techniques. Remote Sens Environ
20(3):21–36 79:96–104
Peli T, Malah D (1982) A study of edge detection algorithms. Comput Vincent RK (1997) Fundamentals of geological and environmental
Graph Image Process 20:1–21 remote sensing. Prentice Hall, Englewood Cliffs
Pichel W, Bristor CL, Brower R (1973) Artificial stereo: a technique Wahi M, Taj-Eddine K, Laftouhi N (2013) ASTER VNIR & SWIR band
for combining multichannel satellite image data. Bull Am Meteorol enhancement for lithological mapping—a case study of the Azegour
Soc 54:688–690 area (Western High Atlas, Morocco). J Environ Earth Sci 3(12):33–44
Pohl C, van Genderen JL (1998) Multisensor image fusion in remote Wang F (1990) Fuzzy supervised classification of remote sensing
sensing: concepts, methods and application. Int J Remote Sens images. IEEE Trans Geo-sci Remote Sens 28:194–201
19:823–854 Welch R, Ehlers M (1987) Merging multiresolution SPOT HRV and
Pour AB, Hashim M (2011) Spectral transformation of ASTER data Landsat TM data. Photogram Eng Remote Sens 53(3):301–303
and the discrimination of hydrothermal alteration minerals in a Wyszecki G, Stiles WS (1967) Color science. Wiley, New York
semi-arid region, SE Iran. Int J Phys Sci 6(8):2037–2059 Zhang H, Yang Z, Zhang L, Shen H (2014) Super-resolution
Pratt WK (2007) Digital image processing, 4th edn. Wiley, New York reconstruction for multi-angle remote sensing images considering
Prost GL (2013) Remote sensing for geoscientists, 3rd edn. CRC Press, resolution differences. Remote Sens 6:637–657. doi:10.3390/
702 p rs6010637
Radoux J, Defourny P (2007) A quantitative assessment of boundaries Zhou Z, Civco DL, Silander JA (1998) A wavelet transform method to
in automated forest stand delineation using very high resolution merge Landsat TM and SPOT panchromatic data. Int J Remote
imagery. Remote Sens Environ 110(4):468–475 Sens 19:743–757
Richards JA, Jia X (2006) Remote sensing digital image analysis, 4th Zobrist AL, Blackwell RJ, Stromberg WD (1979) Integration of
edn, Springer, Heidelberg, 363 p Landsat, Seasat and other geo-data sources. In: Proceedings of 13th
Rosenfeld A, Kak AC (1982) Digital picture processing, 2nd edn. international symposium on remote sensing environment, Ann
Academic Press, Orlando Arbor, MI, pp 271–279
14 Imaging Spectroscopy

14.1 Introduction

Imaging spectroscopy, also called hyperspectral sensing, is a relatively new field in remote sensing that has grown rapidly during the last three decades or so. It can be defined as “acquisition of images in hundreds of contiguous, registered, spectral bands such that for each pixel a radiance spectrum can be derived” (Goetz et al. 1985). The term hyperspectral sensing implies a very large number of narrow spectral channels, e.g. 64 to about 200 channels at 10–20 nm intervals, in comparison to multispectral sensing, in which there are typically 4–10 spectral channels at approximately 100–200 nm intervals. The term imaging spectroscopy additionally implies that the bands are near-contiguous, to allow generation of a spectral radiance curve for each pixel (Fig. 14.1). In the rest of the treatment here, the two terms, imaging spectroscopy and hyperspectral sensing, are used synonymously. Review information on imaging spectrometry and its geological applications can be found in Clark et al. (1990a), Clark (1999), Mustard and Sunshine (1999), Goetz (2009), Jensen and Yang (2009), Van der Meer et al. (2012) and Ramakrishnan and Bharti (2015).
Advantages and limitations. The most important advantage of the imaging spectroscopy technique comes to the fore as we consider a relative limitation of the multispectral technique. Figure 14.2a shows spectra of two common minerals, kaolinite and hematite, along with the band passes of the Landsat TM/ETM+, this sensor having been possibly the most extensively used multispectral sensor till date. Figure 14.2b shows the spectral responses as would be recorded by the Landsat TM/ETM+. The coarse band passes of the Landsat TM/ETM+ allow recording of only broad variations in intensity values, and not the fine spectral features, which remain undetected. Hyperspectral remote sensing overcomes this limitation, as it allows generation of an almost continuous spectrum at each pixel by using hundreds of narrow contiguous spectral channels.
Imaging spectrometry possesses the capability to identify and map the distribution of specific minerals. The image spectral data, after adequate rectification and calibration, are compared to field/laboratory/library spectra of minerals in order to generate mineral maps. These maps may show details such as mineral combinations, their relative abundances and even mineral compositions in a solid-solution series.
Further, if a material is not commonly occurring but has diagnostic absorption features, its presence can be detected and its distribution mapped. The main limitations of the technique are the high sophistication required in data acquisition and processing, and its high sensitivity to minor changes in surface material characteristics.
Most of the aerial and space-borne hyperspectral sensors operate in the solar reflection region (0.4–2.5 µm), and their data processing and interpretation form the subject matter of this chapter. Some sensors also carry 5–6 bands in the thermal-IR (e.g. TIMS and ASTER), and these have been discussed in Chap. 12.

14.2 Spectral Considerations

14.2.1 Processes Leading to Spectral Features

The basic information sought by the imaging spectroscopic technique lies in the spectral features of objects. The various basic atomic–molecular processes [viz. electronic processes (crystal field effects, charge transfer effect, electronic transition, conduction bands) and vibrational processes (fundamentals, overtones and combinations of molecular motions)] governing spectra of minerals were discussed in Chap. 3. A large number of spectra showing broad spectral features were also presented there.

14.2.2 Continuum and Absorption Depth—Terminology

© Springer-Verlag GmbH Germany 2018 203


R.P. Gupta, Remote Sensing Geology, https://doi.org/10.1007/978-3-662-55876-8_14
Fig. 14.1 Concept of imaging spectrometry

Fig. 14.2 a Laboratory spectra of kaolinite and hematite; b the same, as would be recorded by Landsat TM

In a spectrum, an absorption feature can be considered to be composed of two components: the continuum and the individual feature. The background level is called the continuum, on which a particular absorption feature is superimposed. It is rather difficult to define the continuum. Qualitatively, it may be considered as 'the upward limit of the general reflectance curve for a material' (Mustard and Sunshine 1999). Practically, for a particular absorption feature, the continuum may be taken as the broad upward limit of the reflectance curve that would be obtained if the specific absorption feature were not present. Both absorption and scattering processes influence the continuum. Further, the continuum itself in a particular case could be a wing of a larger absorption feature. For example, Fig. 14.3 shows the spectrum of goethite; the feature at 0.88 µm is the absorption band, for which the continuum is formed by a broader and stronger absorption feature in the UV.

Fig. 14.3 Spectrum of goethite used to illustrate the terminology—continuum, absorption band and depth of absorption

Depth of an absorption band is defined relative to the continuum:

D = (R_c − R_b)/R_c = 1 − R_b/R_c        (14.1)

where R_b is the reflectance at the band bottom and R_c is the reflectance at the continuum, both being measured at the same wavelength (Clark and Roush 1984). The band depth of an absorption feature can be related to the amount (degree of abundance) of the absorber, as well as to the grain size of the mineral.

The shape and slope of the continuum influence the appearance of the absorption feature, in terms of its position and depth (Fig. 14.4). If the continuum is sloping and an absorption feature is superimposed over it, then the local minimum may appear shifted. If the continuum is steeply sloping and a rather shallow and broad absorption feature is superimposed over it, then the resulting effect may appear as a change in slope with no local minimum. Therefore, it is necessary to remove the continuum, and the best way of doing this is by division, which normalizes the reflectance values and allows comparison of spectra for interpretation.

Fig. 14.4 Effect of sloping continuum on apparent position and slope of absorption feature
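The continuum removal and band-depth computation described above can be illustrated with a short numerical sketch. The code below (Python; an illustrative simplification, not a published algorithm) assumes a straight-line continuum drawn between two shoulder wavelengths chosen by the analyst, whereas operational packages usually fit the convex hull of the full spectrum; the synthetic spectrum and shoulder positions are invented for the example.

import numpy as np

def band_depth(wavelength, reflectance, left_shoulder, right_shoulder):
    """Remove a linear continuum between two shoulder wavelengths and
    return the band-minimum wavelength, the depth D = 1 - Rb/Rc (Eq. 14.1)
    and the continuum-removed (normalized) segment."""
    wl = np.asarray(wavelength, dtype=float)
    refl = np.asarray(reflectance, dtype=float)

    # indices of the chosen continuum end points (shoulders)
    i1 = int(np.argmin(np.abs(wl - left_shoulder)))
    i2 = int(np.argmin(np.abs(wl - right_shoulder)))

    # straight-line continuum between the shoulders
    continuum = np.interp(wl, [wl[i1], wl[i2]], [refl[i1], refl[i2]])

    # division by the continuum normalizes the spectrum (Sect. 14.2.2)
    segment = slice(i1, i2 + 1)
    normalized = refl[segment] / continuum[segment]

    # band minimum position and depth
    i_min = int(np.argmin(normalized))
    depth = 1.0 - normalized[i_min]
    return wl[segment][i_min], depth, normalized

# example with a synthetic absorption feature near 2.21 um
wl = np.linspace(2.0, 2.4, 81)
refl = 0.6 - 0.15 * np.exp(-((wl - 2.21) / 0.02) ** 2)
position, depth, _ = band_depth(wl, refl, 2.10, 2.32)
print(position, depth)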
14.2.3 High-Resolution Spectral Features of Minerals

It has been observed that changes in the chemical composition of minerals are characterized in terms of subtle changes in the spectral absorption bands of minerals, i.e. subtle spectral features mark fine variations in the chemistry of minerals. Laboratory data show that these spectral features have in general a width of approx. 10–40 nm. Hence, spectral sampling at a 10 nm interval is generally considered suitable for hyperspectral sensors for obtaining information on the chemistry of minerals. Some examples of hyperspectral laboratory data are given below.

Figure 14.5 shows high-resolution laboratory spectra of some common clay minerals. It is evident that various minerals like kaolinite, montmorillonite, alunite, muscovite, pyrophyllite etc. have individually distinct and diagnostic spectral signatures. Even within a group of minerals, there could be subtle differences in spectra which discriminate one mineral from the other. For example, in the kaolinite group of minerals, kaolinite, halloysite and dickite all have a common absorption band at 2.21 µm, but slightly differing spectra (Fig. 14.6).

Carbonates show diagnostic vibrational absorption bands in the SWIR at 2.30–2.35 µm (Fig. 3.8). The band position may vary with composition. Figure 14.7 shows the subtle shift in absorption band from calcite to dolomite. In the serpentine group, chrysotile, antigorite and lizardite are the three isochemical end-members. Figure 14.8 shows that the OH-related absorption band can help distinguish the three minerals.

The high-resolution spectra are also sensitive to solid solution. Figure 14.9 shows spectra of tremolite and actinolite. The Mg3OH absorption band in tremolite is sharply defined; in actinolite, due to the presence of Fe, the absorption band shifts and two additional smaller bands appear. The reflectance spectra can help estimate the ratio Fe/(Fe + Mg) (Clark et al. 1990b).

Elemental substitution is a common process occurring in minerals. For example, it has been observed that substitution by aluminium in the muscovite series leads to fine, gradual, continuous changes in the spectral behaviour of muscovite (Fig. 14.10).

14.2.4 High-Resolution Spectral Features of Stressed Vegetation

Hyperspectral features of vegetation are discussed in detail by Thenkabail et al. (2012). It is well known that there occur significant changes in the spectra of plants, particularly in the red-edge region (around 0.70 µm) (Fig. 3.16), as the leaf transforms from an actively photosynthetic state to total senescence. However, even in the case of actively photosynthetic plants, the slope and position of the red edge show subtle variations that are related to geochemical stresses. Some metals such as selenium, copper and zinc act as micronutrients and are required by most plants and animals in very small quantity, while others such as mercury, arsenic and lead are toxic. Typically, plants growing over soils containing heavy metals (metal sulphides) exhibit a shift of the red edge towards the blue end (shorter wavelengths), by about 7–10 nm (Fig. 14.11; Chang and Collins 1983). This is called the 'blue-shift of the red edge'. The degree of blue shift is proportional to the amount of heavy metals in soils and vegetation.
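The red-edge position that underlies this blue shift can be estimated directly from hyperspectral reflectance data. One simple, commonly used approach is to take the wavelength of maximum first derivative of reflectance in the red-edge window; the sketch below uses this approach with synthetic canopy spectra (all values are invented for illustration and are not measurements from the study cited above).

import numpy as np

def red_edge_position(wavelength_nm, reflectance):
    """Wavelength (nm) of maximum slope of the reflectance curve
    within the red-edge window (about 680-750 nm)."""
    wl = np.asarray(wavelength_nm, dtype=float)
    refl = np.asarray(reflectance, dtype=float)
    window = (wl >= 680.0) & (wl <= 750.0)
    slope = np.gradient(refl[window], wl[window])   # first derivative
    return wl[window][np.argmax(slope)]

# synthetic sigmoid-shaped red edges: a background and a stressed canopy,
# the latter with its inflection shifted about 8 nm towards the blue
wl = np.arange(650.0, 801.0, 1.0)
background = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 715.0) / 8.0))
stressed   = 0.05 + 0.40 / (1.0 + np.exp(-(wl - 707.0) / 8.0))
blue_shift = red_edge_position(wl, background) - red_edge_position(wl, stressed)
print(blue_shift)   # ~8 nm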
Fig. 14.5 High-resolution laboratory spectra of some common clay minerals in the SWIR region; each mineral exhibits diagnostic spectral features (after Goetz and Rowan 1981)

Fig. 14.6 Laboratory spectra of the kaolinite group of minerals (kaolinite, halloysite and dickite); all these have a common absorption band at 2.21 µm but each possesses a slightly different spectrum (after Clark 1999) (reproduced by permission, copyright © 1999, John Wiley & Sons Inc.)

Fig. 14.7 Shift in absorption band from calcite to dolomite (continuum-removed spectra) (after Clark 1999) (reproduced by permission, copyright © 1999, John Wiley & Sons Inc.)

Fig. 14.8 High-resolution laboratory spectra of the serpentine group of minerals (chrysotile, antigorite and lizardite) (after Clark et al. 1990a)

Fig. 14.9 High-resolution laboratory spectra of tremolite and actinolite (after Clark et al. 1990a)

Fig. 14.10 Effect of substitution of aluminium in muscovite (after Swayze in Clark 1999)

Fig. 14.11 Blue shift of the red edge of the chlorophyll band in coniferous trees growing over a copper sulphide rich soil zone (simplified after Chang and Collins 1983)

Additionally, the presence of heavy metals leads to a reduced absorption as compared to healthy vegetation in the visible (at 0.48 and 0.68 µm) and in the SWIR region. Hyperspectral sensors permit detection of such subtle spectral changes, and are therefore of interest in mineral exploration.

14.2.5 Mixtures

The field-of-view of a sensor may be filled with a variety of objects of varying surface extents. The spatial resolution of the sensor vis-à-vis the surface dimension of the objects governs whether a pixel represents a pure pixel or a mixture of objects (Fig. 14.12). If the pixel covers a uniform ground area at the sensor resolution, it represents a pure pixel (Fig. 14.12a). In the case of mixtures, three types of physical mixtures are identified in the context of remote sensing spectroscopy.

1. Areal or linear mixture. This occurs when the pixel comprises two or more objects occurring in patches that are large relative to the resolution of the sensor (Fig. 14.12b). Thus, the constituent materials are optically separated from each other, so that there is no multiple scattering between component materials. Spectral mixing occurs at the sensor, and the signal from the mixture is a linear weighted average of the end constituents (Fig. 14.13); a short numerical sketch of such linear mixing is given after this list.
2. Intimate or non-linear mixture. When different materials are in intimate contact with each other (Fig. 14.12c) (e.g. mineral grains in a rock), multiple scattering takes place on the contact surfaces. The resulting signal from the intimate mixture is a highly non-linear combination of the spectra of the end-members (Fig. 14.13), for analysing which the use of the radiative transfer equation is necessary.

Intimate mixtures of light and dark objects (e.g. olivine and magnetite) deserve special mention. In such a mixture, photons reflected from the light objects have a high probability of striking and being captured by the dark objects. Thus, addition of even small amounts of an opaque or dark mineral to a reflecting mineral drastically reduces the albedo of the mixture (Fig. 14.14).

3. Coatings. Coating of one mineral over another is treated as a special type of physical mixture. In such a case, reflectance properties of the mineral forming the coating usually dominate or even mask those of the coated mineral.
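As a simple illustration of the areal (linear) mixing case described in item 1 above, the composite spectrum of a pixel can be modelled as the area-weighted average of the end-member spectra. The sketch below uses invented two-end-member reflectance values purely for illustration; it does not apply to the intimate (non-linear) case, which requires a radiative-transfer treatment.

import numpy as np

# hypothetical end-member reflectance spectra on a common wavelength grid
kaolinite = np.array([0.55, 0.60, 0.35, 0.58])   # illustrative values only
hematite  = np.array([0.20, 0.30, 0.32, 0.28])   # illustrative values only

# areal fractions of the two materials within the pixel (must sum to 1)
f_kaolinite, f_hematite = 0.7, 0.3

# linear (areal) mixing: the pixel spectrum is the weighted average
pixel_spectrum = f_kaolinite * kaolinite + f_hematite * hematite
print(pixel_spectrum)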
Fig. 14.12 Linear and non-linear spectral mixing. a A single homogeneous object forming a pure pixel. b Areal mixing (linear spectral combination). c Intimate mixing (non-linear spectral combination) (after Campbell 1996)

Fig. 14.13 Spectrum of an intimate mixture appearing as a highly non-linear combination of the two end-members (after Clark 1999) (reproduced by permission, copyright © 1999, John Wiley & Sons Inc.)

Fig. 14.14 Comparison of modelled (based on non-linear mixing model) and measured reflectance spectra of olivine–magnetite mixture series (Johnson et al. 1983)

Effect of grain size. Grain size influences the amount of light scattered and transmitted by a grain. If the material is fine grained, a larger area is available for surface scattering. On the other hand, if the material is coarse grained, then a larger internal transmission path is provided to the radiation. Therefore, in the visible–near-IR region, in general, reflectance decreases as the grain size increases.

Effect of viewing geometry. Viewing geometry, including the angle of incidence and angle of view, influences the radiation intensity received at the sensor; however, it does not have a spectral character, which means that its influence over the entire wavelength range is similar. Therefore, viewing geometry is not considered to influence absorption band position, shape and depth.
14.2.6 Spectral Libraries

Spectral libraries are used to hold spectral data/information for world-wide dissemination. A large amount of data on spectra of various types of objects (minerals, rocks, plants, trees, organic substances etc.) has been generated in various laboratories and stored in libraries. Examples of such libraries are those of the USGS (http://speclab.cr.usgs.gov) (Kokaly et al. 2017), Johns Hopkins University–JPL (http://speclib.jpl.nasa.gov) (Salisbury et al. 1991) and NASA-JPL (http://asterweb.jpl.nasa.gov) (Baldridge et al. 2009).

14.3 Hyperspectral Sensors

Imaging spectrometers, alias hyperspectral sensors, are used to generate digital images in a large number of contiguous spectral bands. Commonly, these sensors utilize 12- or 16-bit radiometric quantization levels.

14.3.1 Working Principle of Imaging Spectrometers

Depending upon the mechanism of working, two basic types of imaging spectrometers are distinguished: (1) the whiskbroom imaging spectrometer and (2) the pushbroom imaging spectrometer.

1. Whiskbroom imaging spectrometer

This is basically an opto-mechanical device, but it produces image data in numerous (about 200) contiguous spectral channels. The working principle is illustrated in Fig. 14.15. A moving plane mirror directs radiation from different parts of the ground on to the radiation-sorting optics. The radiation-sorting optics comprises a dispersion device. The radiation, dispersed and separated wavelength-wise, is focused on the photo-detector. The heart of the device is the photo-detector, which consists of a CCD line array, aligned such that radiation of different wavelength ranges falls on different elements of the CCD line array. Thus, for each ground IFOV, radiation is dispersed and its intensity is recorded in as many spectral channels as there are detector elements in the line array. The rest of the principle is the same as in a typical OM line scanner, i.e. imaging is carried out pixel by pixel, and line by line, as the sensor-craft keeps advancing. This leads to the generation of images in numerous contiguous spectral channels.

For high spatial resolution, the whiskbroom mechanism in imaging spectrometers is suited only to an airborne sensor which flies slowly enough that the readout time of the detector array is only a fraction of the integration time. An example of this type of sensor is the Airborne Visible/Infra-Red Imaging Spectrometer (AVIRIS) (Vane et al. 1993) (Table 14.1), which has provided terrestrial hyperspectral sensing data over a number of sites.

Fig. 14.15 Working principle of whiskbroom imaging spectrometer

2. Pushbroom imaging spectrometer

This type of sensor uses a two-dimensional CCD area array of detectors (instead of a CCD linear array!), located at the focal plane of the spectrometer. The working principle is shown in Fig. 14.16. The radiation is collected by the objective, passes through a slit, and is collimated onto a dispersing element.
210 14 Imaging Spectroscopy

Table 14.1 Selected airborne and spaceborne hyperspectral imaging sensors


SN Name Full name Country/ Availability No. of Spectral Band Ground
Manufacturer channels range (lm) width resolution (m)
(nm)
(a) Airborne
1 AVIRIS Airborne visible/infrared imaging US 1987 224 0.4–2.45 9.4 20
spectrometer
2 CASI Compact airborne spectrographic Canada 1990 288 0.4–0.9 2.5 1.2
imager
3 DAIS-7915 Digital airborne imaging Germany 79 0.4–12.0 variable 5–20
spectrometer
4 HYDICE Hyperspectral Digital imagery US 1994 206 0.4–2.5 7.6-14.9 1–4
Collection experiment
5. SFSI SWIR full spectrographic imager Canada 122 1.2–2.4 10 0.2
6. HyMAP Hyperspectral mapper Australia 100–200 0.45–2.5 10-20 2–10
(PROBE1)
(b) Space-borne
1. Hyperion Hyperion US 2000 220 0.45–2.5 10 30
Hyperspectral imager
2 MODIS Moderate Resolution US 1999 36 0.4–14.4 Variable 250–1000
Imaging spectrometer

radiation is separated according to wavelengths and focused width of the individual spectral channel in the sensor and is
onto an area array of detectors, for example n  m detector generally given in terms of nanometers. Hyperspectral sen-
elements. In such a case, the imaging device measures radiation sors have narrower bandwidths than multispectral sensors, in
intensity in n number of spectral channels, over m across-track order to be able to detect fine absorption features in the spectra
pixels. Thus, there is a dedicated column of n detector elements of objects. It is observed that bandwidths greater than 20–
for each ground cell in the swath. The across-track width of the 40 nm are unable to resolve characteristic absorption features
area array (m elements) is commensurate with the slit opening of minerals. Each spectral channel senses radiation of a cer-
which determines the swath. The rest of the pushbroom scan- tain wavelength band, the band-pass response most com-
ning mechanism remains the same as in the CCD linear scan- monly being gaussian type. The width of the band pass is
ners, i.e. the radiation intensity is integrated for a certain period usually defined as the width in wavelength at 50% response
(dwell time) by a frontal photo-gate, after which the charge is level of the function, and is called the ‘full width at half
read out. An example of a pushbroom imaging spectrometer is maximum’ (FWHM) (Fig. 14.17).
the Airborne Imaging Spectrometer (AIS) (Vane et al. 1983). Signal-to-noise (S/N) ratio for a spectrometer has the
Pushbroom imaging spectrometers obviate the requirement of same connotation as for multispectral scanners [Chap. 5,
OM-type scanning, and are ideally suited for use as space-borne Eq. (5.6)]. It depends upon detector sensitivity, spectral
imaging spectrometers. bandwidth, scene brightness, and transmittance through
The above paragraphs have described the general work- optics and the atmosphere. Improvements in the design of
ing principle of imaging spectrometers, the actual design and imaging spectrometers have led to manifold increase in the
instrumentation being highly sophisticated and complex and signal-to-noise ratio during the last about two decades.
undergoing continuous improvement.

14.3.3 Selected Airborne and Space-Borne


14.3.2 Sensor Specification Parameters Hyperspectral Sensors

The characteristics of an imaging spectrometer are given in In 1983, the Airborne Imaging Spectrometer (AIS) was the
terms of three main parameters: (1) spectral range, (2) spectral first hyperspectral sensor to acquire image data in 128
bandwidth and (3) signal-to-noise ratio. Spectral range is the contiguous spectral bands with 9.3 nm spectral bandwidth. It
entire range of EM radiation over which the sensor operates, was a pushbroom type sensor, operated in a limited spectral
e.g. visible, NIR, SWIR etc. Spectral bandwidth means the range (1.2–2.4 um), had a swath of 32 pixels of 11.5 m
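The gaussian band-pass and FWHM concepts above can be illustrated numerically: a channel with a given centre wavelength and FWHM weights the incoming high-resolution spectrum by its gaussian response and integrates it. The sketch below is a generic illustration with made-up channel parameters and a synthetic spectrum; it also shows, in miniature, why a broad (200 nm) band cannot resolve a narrow absorption feature while a 10 nm band can (compare Fig. 14.2).

import numpy as np

def gaussian_srf(wl, center, fwhm):
    """Gaussian spectral response function with a given FWHM (50% response width)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def band_response(wl, reflectance, center, fwhm):
    """Band-averaged reflectance seen by one sensor channel."""
    srf = gaussian_srf(wl, center, fwhm)
    return np.trapz(srf * reflectance, wl) / np.trapz(srf, wl)

# high-resolution (1 nm) spectrum with a narrow absorption feature at 2200 nm
wl = np.arange(2000.0, 2401.0, 1.0)
refl = 0.6 - 0.2 * np.exp(-0.5 * ((wl - 2200.0) / 10.0) ** 2)

# a 10 nm hyperspectral channel records the feature; a 200 nm channel smooths it out
print(band_response(wl, refl, 2200.0, 10.0))
print(band_response(wl, refl, 2200.0, 200.0))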
14.3.3 Selected Airborne and Space-Borne Hyperspectral Sensors

In 1983, the Airborne Imaging Spectrometer (AIS) was the first hyperspectral sensor to acquire image data in 128 contiguous spectral bands with 9.3 nm spectral bandwidth. It was a pushbroom type sensor, operated in a limited spectral range (1.2–2.4 µm), and had a swath of 32 pixels of 11.5 m pixel size (Vane et al. 1983). Soon, it was succeeded by a more advanced instrument, the Advanced Visible/Infrared Imaging Spectrometer (AVIRIS), which became operational in 1986/87 (Vane and Goetz 1988; Vane et al. 1993). This was a whiskbroom type of imaging spectrometer and has been flown over a number of test sites, world-wide. The AVIRIS provided images in 224 contiguous channels in the spectral range of 0.4–2.45 µm, with a spectral sampling of 9.4 nm and a spatial resolution of 20 m, over a swath of 11 km (550 pixels). Since then, a number of private organizations have recognized the immense potential of hyperspectral sensing. Among the more important commercial airborne imaging spectrometers have been CASI, GERIS, HYDICE, HyMAP and SFSI. A list of selected aerial hyperspectral sensors, including their salient specifications, is given in Table 14.1.

Following the success of AVIRIS, two important space-borne hyperspectral imaging sensors have been developed and launched, viz. Hyperion and MODIS. Hyperion operates in the spectral range 0.4–2.5 µm and provides data in 220 spectral bands with a spectral interval of 10 nm. Hyperion data has been used in a number of geologic studies, the world over. MODIS, the Moderate Resolution Imaging Spectrometer, is designed primarily for biological and physical processes on a regional scale. It operates in the spectral range of 0.4–14.5 µm, has 36 bands with ground resolution ranging from 250 m to 1 km, and a swath width of approximately 2330 km.

Besides, portable field spectrometers are used to collect data on spectral reflectance/emission characteristics of minerals and rocks in the field to serve as ground truth.

14.4 Processing of Hyperspectral Data

Processing of hyperspectral remote sensing data needs a different strategy as compared to multispectral data. In the case of hyperspectral data, there are hundreds of channels and the data may be of 12- or 16-bit type; therefore, high computational facilities are required.

The entire sequence of hyperspectral data processing can be grouped into the following three broad stages (van der Meer and de Jong 2001; Jensen and Yang 2009; Ramakrishnan and Bharti 2015) (Fig. 14.18):

1. Pre-processing—which aims at converting the raw radiance data into spectrally and spatially rectified at-sensor radiance data.
2. Radiance-to-reflectance transformation—during which the influence of external factors (atmosphere, solar irradiance and topography) is removed, and the data are converted to reflectance data.
3. Data analysis for feature mapping—which involves spectral-curve matching with reference data, and aims at analysing the image data and generating mineral distribution maps.

14.4.1 Pre-processing

Pre-processing of hyperspectral image data is carried out to convert raw radiance into at-sensor radiance.
Fig. 14.18 Overview of scheme of hyperspectral data processing

This part is generally conducted by the agency operating the sensor, and the data released to users are commonly the at-sensor radiance data. However, at times some of the corrections have to be carried out by the users; therefore, it is pertinent to know the types and methods of the various corrections involved. The pre-processing includes spectral calibration, spatial rectification and noise adjustment.

Spectral calibration. The chain of spectral calibration comprises geometric spectral calibration, spectrometric calibration and radiometric calibration (Green 1992). Geometric spectral calibration is necessary as the spectral response is not homogeneous when measured over the area covered by a pixel. The point-spread function describes the spatial variation of the measured signal. Similarly, a spectral channel measures radiance in a wavelength band that stretches from a few nanometres lower to a few nanometres higher than the central wavelength of the channel (Fig. 14.17). The spectral response function is the curve describing the (gaussian) decline of radiance levels around the central wavelength for each channel, and is used for spectrometric calibration. Radiometric calibration is necessary to convert the measured digital number into a meaningful spectral radiance value. For this, a radiometric response function is used, which defines the relation between the signal (DN value) and the incoming radiance, and is experimentally determined. With this, the raw radiance image data are converted into spectral radiance image data.

Geometric calibration and geocoding. Spatial distortions occur in the image data due to aircraft instability and ruggedness of the terrain. In modern times, the use of on-board DGPS allows accurate location of the aircraft at image acquisition time. In addition, on-board gyros (on aerial platforms) are used to record angular distortions (roll, pitch and yaw). During spatial rectification of image data, the original observation geometry is reconstructed for each pixel, based on aircraft altitude, flight line, aircraft navigational information and surface topography (Meyer 1994). This generates geocoded image data.

Signal-to-noise ratio and noise adjustment. Signal is the quantity measured by the sensor, whereas noise is the random variability in the signal. The signal-to-noise (S/N) ratio is important in evaluating the performance of the sensor and may vary at different band passes for the same sensor. A higher S/N ratio leads to better results. The S/N ratio can be increased by improving the design of the sensor and by reducing noise and retaining the signal during data processing.
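The radiometric calibration step mentioned above (DN to spectral radiance via a radiometric response function) is, in its simplest linear form, a per-band gain and offset. The sketch below is a generic illustration with hypothetical calibration coefficients; actual sensors supply their own coefficients and may use higher-order response functions.

import numpy as np

# hypothetical per-band calibration coefficients (radiance = gain * DN + offset)
gain   = np.array([0.012, 0.010, 0.008])   # radiance units per DN (illustrative)
offset = np.array([-0.5, -0.4, -0.3])      # radiance units (illustrative)

# raw image cube of digital numbers: (lines, samples, bands), 12-bit data
dn = np.random.randint(0, 4096, size=(100, 100, 3))

# convert raw DN to at-sensor spectral radiance, band by band (broadcasting)
radiance = gain * dn + offset
print(radiance.shape)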
For noise reduction in the hyperspectral image data set, Green et al. (1988) developed a powerful method called the minimum noise fraction (MNF) transform (also see Sect. 13.7.3). It is a modification of principal component analysis (PCA). In this, first the noise covariance matrix is computed and then the reflectance data are rotated and scaled so that the noise has unit variance in all the bands. The transformed data set is analysed with PCA. In this way, noise gets equally distributed among the bands; hence the method provides good discrimination of spectral features.

De-striping. The hyperspectral image data often contain striping, which has to be removed (Fig. 14.19a). Dickerhof et al. (1999) applied a combination of MNF and Fourier analysis for destriping (the sequential steps being: calculation of noise statistics, followed by MNF, forward FFT analysis, filtering, retransformation from the frequency to the spatial domain, low-pass filtering and finally reverse MNF transform). The resulting image data have increased SNR and are de-striped (Fig. 14.19c).

Fig. 14.19 Destriping. a Original hyperspectral sensor image showing strong striping; b striping (noise) component; c de-striped image (after Dickerhof et al. 1999)

14.4.2 Radiance-to-Reflectance Transformation

The hyperspectral image data carry the influence of a number of external factors, which may mask the fine spectral features of ground objects. These factors are: effects of the solar irradiance curve, atmosphere and topography. The effect of the solar irradiance curve arises from the fact that solar radiation intensity peaks at 0.48 µm and the radiation intensity drops off towards longer wavelengths (Fig. 2.2); therefore, the effect of solar irradiance is not uniform throughout. Atmospheric effects arise due to the fact that hyperspectral image data are collected over a wide wavelength range, which includes atmospheric windows as well as parts of the EM spectrum affected by atmospheric absorption and scattering. Topographic effects, in hyperspectral sensing, are similar in nature to those in multispectral sensing. All these factors are external to the target reflectance, and influence the spectral and spatial variations in at-sensor radiance. Therefore, these must be adequately normalized in order to compute ground reflectance values.

A variety of techniques have been developed to remove the above effects of external factors. A comparative review is provided by Rast et al. (1991). The various techniques can be categorized into two groups: (a) those using radiative transfer codes (atmospheric models) and (b) those using parameters derived from the scene (including flat-field correction, internal average relative reflectance correction and empirical line methods). The atmospheric correction procedures have been discussed in Chap. 10.

14.4.3 Data Analysis for Feature Mapping

After rectification as above, the hyperspectral sensor data take the form of reflectance image data in numerous contiguous bands. The large amount of image data has to be processed for a positive discrimination and meaningful interpretation. The general approach involves the following steps: characterization of the absorption features, comparison to ground truth (spectral libraries), and finally analysis. A number of techniques have evolved for handling hyperspectral data and feature extraction (e.g. Mustard and Sunshine 1999; Van der Meer et al. 2012). We consider here the basic concepts involved in some of the common techniques, viz.: (1) absorption band characterization, (2) matching the complete shape of a spectral feature, (3) spectral angle mapping, and (4) spectral unmixing.
1. Absorption band characterization (position, strength and shape). This method is based on quantifying the absorption band characteristics. The spectral curve of a mineral species carries absorption bands with characteristic position, shape and strength. Reflection occurs at wavelengths that do not exhibit absorption. The continuous function is referred to as the continuum or hull (Fig. 14.3) and gives in general terms the upward limit of the reflectance of a material. It is considered that the continuum is influenced by non-selective absorption and scattering processes. As grain size influences diffuse scattering, the continuum is also affected by grain size, in addition to mineral chemistry.

The method of absorption band characterization has been developed by Kruse et al. (1988), and integrated into analytical packages (Kruse et al. 1993). It involves the following main steps.

In order to isolate absorption features from the background signal, first the continuum is defined; this can be done by defining high points in the spectrum (through slope and magnitude criteria) and drawing straight-line segments between the defined high points. This gives a model continuum.

To remove the continuum from the reflectance data, the data set is divided by the model continuum at each channel. This normalizes the reflectance data and also provides a first-order correction for illumination effects, thus leaving only the absorption features to remain.

The normalized reflectance curve is used to characterize the absorption feature in terms of various parameters—wavelength position of minimum, strength of absorption (depth) and shape (Fig. 14.20). The wavelength of the band minimum is detected through the use of slope and magnitude (in a manner analogous to band maximum). The relative depth of band absorption 'D' is given in Eq. (14.1). Band shape is given in terms of FWHM and asymmetry. FWHM (full width at half maximum) is the spectral width of the absorption band measured at a place where the depth of absorption is half of its peak (i.e. D/2). Asymmetry indicates whether the absorption band is skewed to the left or to the right of the band minimum. Besides, the slope of the continuum can also be taken as a parameter.

In this way, the information contained in the hyperspectral sensing data can be represented in terms of absorption band characteristics. This can be compared with similar absorption band characteristics of laboratory/field spectra. An expert system for analysing hyperspectral data, which integrates band parameterization with spectral library data, has been developed by Kruse et al. (1993).

The method of absorption band characterization offers a rapid tool for processing the huge amount of data obtained in imaging spectrometry. The main advantage of the method is that the spectral information from a given band is condensed into four variables. However, there are limitations, arising mainly from sensitivity to noise and the presence of sub-pixel mixtures, and also due to ambiguity cropping up in the case of materials with broad absorption features. The technique is better suited to sharp, narrow absorption features.

Fig. 14.20 Absorption band characteristics—position and depth. a Original spectral curve; b the same after continuum removal. R = reflectance; NR = normalized reflectance

2. Matching complete shape of spectral feature. The method of matching the complete shape of a spectral feature, also called spectral feature fitting (Fig. 14.21), is more rigorous than the method of absorption band characterization. It is based on matching the complete shape of image spectra to library spectra, within a certain wavelength range; the algorithm computes the degree of similarity between the two sets of spectra (Clark et al. 1990a).

To begin with, there must be a priori knowledge of the specific minerals or objects expected in the area. A certain spectral range is selected to define the spectral feature/absorption band. The continuum is removed separately from the image data and the library data (Fig. 14.22). The continuum-removed library spectrum is superimposed over the pixel spectrum, using a simple linear gain and offset adjustment. A least-squares fit is calculated between the image (unknown pixel) and each of the reference members, separately. The root mean square (RMS) error of this fit yields an overall goodness-of-fit measure. At the same time, the band depth can be related to the abundance of the mineral. Therefore, statistical spectral similarity maps showing distribution and also relative abundance of minerals, based on spectral data, can be generated.

Fig. 14.21 Spectral feature fitting. a Reference and pixel spectral curves. b Continuum removed spectra are fitted to each other
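The gain-and-offset least-squares fit with an RMS goodness-of-fit measure described under spectral feature fitting can be written compactly. The sketch below is an illustration only, not any particular published implementation; it fits a continuum-removed reference spectrum to a continuum-removed pixel spectrum and reports the RMS error, a small value indicating a close spectral match.

import numpy as np

def feature_fit(pixel_cr, reference_cr):
    """Least-squares fit of a continuum-removed reference spectrum to a
    continuum-removed pixel spectrum using a linear gain and offset.
    Returns (gain, offset, rms_error)."""
    A = np.column_stack([reference_cr, np.ones_like(reference_cr)])
    (gain, offset), *_ = np.linalg.lstsq(A, pixel_cr, rcond=None)
    residual = pixel_cr - (gain * reference_cr + offset)
    rms = float(np.sqrt(np.mean(residual ** 2)))
    return gain, offset, rms

# toy continuum-removed spectra (values near 1.0 outside the absorption feature)
reference = np.array([1.00, 0.97, 0.85, 0.70, 0.86, 0.98, 1.00])
pixel     = np.array([1.00, 0.98, 0.91, 0.82, 0.92, 0.99, 1.00])
print(feature_fit(pixel, reference))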
Fig. 14.22 The concept of spectral angle between reference spectrum and field (pixel) spectrum

Fig. 14.23 Concept of spectral unmixing with scatterplot of two bands; the end members A, B and C are defined by the scatterplot

3. Spectral angle mapping. This is another method of generating a spectral similarity map between image data and library data (Kruse et al. 1993). First, the image and library data are processed as in the case of the earlier method of 'matching complete shape of spectral feature'. Then, instead of a distance (RMS error), in this method the similarity is expressed in terms of the average angle between the two spectra (Fig. 14.23). The spectra are treated as vectors in space, the dimensionality being equal to the number of bands used. Often the full spectral range is used. The angular difference between the pixel spectrum and the laboratory spectrum may range between zero and π/2. The spectra whose vectors are separated by small angles are considered mutually more similar, and in this way a spectral similarity map can be generated. A limitation of the method is that vector lengths are ignored, which amounts to de-emphasizing albedo differences.
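The spectral angle itself is simply the arc-cosine of the normalized dot product of the two spectra treated as vectors. A minimal sketch (not tied to any particular software package) is given below; applying it pixel by pixel against a library spectrum yields a spectral-angle similarity image.

import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians, 0 to pi/2 for non-negative spectra)
    between a pixel spectrum and a reference spectrum."""
    p = np.asarray(pixel, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos_angle = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# a brighter version of the same material gives a near-zero angle,
# illustrating that albedo (vector length) differences are ignored
reference = np.array([0.30, 0.25, 0.10, 0.28, 0.32])
pixel     = 1.8 * reference
print(spectral_angle(pixel, reference))   # ~0.0 rad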
4. Spectral unmixing. In most cases, a pixel is composed of mixed objects; in other words, there are many spectrally diverse objects present within a pixel. The collective response of all the end members present in different proportions is recorded at the remote sensor. It is important to decipher these constituents and their relative proportions in a pixel. This is also called sub-pixel classification. In the reflection domain, the spectrum of a mixture may be explained by linear or non-linear mixing of the spectra of end members (Ichoku and Karnieli 1996). In simple cases, linear mixing models are used. For non-linear mixing models, the radiative transfer equation is used to understand the influencing factors and effects associated with multiple scattering.

Mixture modelling or spectral unmixing uses these concepts and also the premise that in a particular area only a few minerals/materials are commonly occurring, such that their spectra are quite constant; varying proportions of the end members (common materials) are the cause of spectral variability in the scene.

In a simplified form, spectral unmixing considers that the mixed-pixel reflection spectrum is a linear combination of the spectra of the end members, weighted by their respective areal coverage in the particular pixel. The spectra of the end members are known (either from a spectral library or through the application of statistical procedures on the scene data) and the observed mixed-pixel spectrum is known; the abundance/relative proportion values can be obtained by solving the equations.

Theoretically, the permissible number of end members is governed by the dimensionality of the data set, i.e. the number of spectral channels. The least-squares approximation can be used to obtain the solution if the number of end members is equal to the number of spectral channels minus one (or less). However, as hyperspectral data have a high channel-to-channel correlation, it is practically possible to use only a limited number (three to seven) of end members in the analysis.

The analysis starts by identifying the spectrally pure end members in the scene. Ideally, the data cloud should be bound by the end members. They may be selected on the basis of ground truth, a spectral library, or from statistical procedures applied on the hyperspectral image data. For example, data processing can help to locate the vertices of the intersections defining the end members (Fig. 14.24).

The observed pixel spectrum is considered to be a mixture of the various end member spectra. Pixel-by-pixel analysis is carried out to calculate the amount (fraction) of each end member component in the observed pixel. This results in fraction images, which show the abundance and distribution of end member components in the scene. These images provide information on the composition of the surface.

A popular technique has been the pixel purity index (PPI). PPI derives a statistic for each pixel in a hyperspectral scene based on its proximity to a vertex in an n-dimensional feature space. It assumes that pixels that are closer to these vertices are more likely to represent pure materials.

In addition, techniques of matched filtering (Farrand and Harsanyi 1997) and cross-correlation spectral matching (van der Meer and Bakker 1997) have been developed for hyperspectral image data analysis.
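The simplified linear unmixing described in item 4 amounts to solving, for each pixel, a small least-squares problem in which the columns of the end-member matrix are the known end-member spectra. The sketch below is a bare-bones, unconstrained illustration with invented spectra; practical implementations usually add non-negativity and sum-to-one constraints on the fractions.

import numpy as np

def unmix(pixel_spectrum, endmember_spectra):
    """Unconstrained linear unmixing: solve pixel ~ E @ fractions in the
    least-squares sense, where each column of E is one end-member spectrum."""
    E = np.column_stack(endmember_spectra)          # shape: (bands, endmembers)
    fractions, *_ = np.linalg.lstsq(E, pixel_spectrum, rcond=None)
    return fractions

# illustrative end-member spectra and a synthetic mixed pixel
em_a = np.array([0.55, 0.60, 0.35, 0.58])
em_b = np.array([0.20, 0.30, 0.32, 0.28])
mixed = 0.6 * em_a + 0.4 * em_b

print(unmix(mixed, [em_a, em_b]))   # ~[0.6, 0.4]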
14.5 Applications and Future

As mentioned earlier, the present development of hyperspectral sensors has been limited to the solar reflection region, i.e. the VIS–NIR–SWIR spectral range. In this spectral range, the mineralogic constituents that can be detected by hyperspectral sensing are: transition metals (iron, manganese, copper, nickel, chromium), ions of carbonate and hydroxyl, and the water molecule. Therefore, the most frequently studied systems using hyperspectral sensors have been hydrothermal systems, world-wide, as these include spectrally active mineral groups such as hydroxyl-bearing minerals (hydrothermal clays, sulfates), ammonium-bearing minerals, phyllosilicates, iron oxides, and carbonates. Table 14.2 gives a list of characteristic SWIR-active mineral assemblages that could constitute guides for hyperspectral remote sensing exploration in different metallogenic environments. A few selected examples of geologic applications are listed in Table 14.3. Figure 14.24 shows mineral maps for alteration minerals in Rodalquilar, Spain.

Fig. 14.24 Mineral maps for selected alteration minerals using HyMAP of an area in Rodalquilar, Spain (from top to bottom): jarosite, kaolinite, illite, alunite, chlorite (Van der Meer et al. 2012)

Table 14.2 Alteration minerals indicative of metallogenic environments—a hyperspectral sensing perspective

Metallogenic environment | Alteration zone name | Characteristic SWIR active mineral assemblages
Massive sulphide | Tourmaline | Muscovite and tourmaline
 | Carbonate | Siderite, ankerite, calcite
 | Sericite | Chlorite and sericite
 | Albite | Chlorite, biotite, muscovite
Mesothermal | Carbonate | Calcite, ankerite, dolomite, muscovite
 | Chlorite | Chlorite, muscovite and actinolite
 | Biotite | Chlorite and biotite
Low- and high-sulphide epithermal | Propylitic | Epidote, chlorite, sericite, calcite, illite, smectite, montmorillonite
 | Argillic | Kaolinite, dickite, alunite, diaspore, pyrophyllite, jarosite
 | Adularia | Sericite, illite-smectite, kaolinite, chalcedony, opal, montmorillonite, calcite, dolomite
Igneous intrusion related | Potassic | Phlogopite, actinolite, sericite, chlorite, epidote, muscovite, anhydrite
 | Sodic | Actinolite, diopside, chlorite, epidote, scapolite
 | Phyllic | Muscovite-illite, chlorite, anhydrite
 | Argillic | Pyrophyllite, sericite, diaspore, alunite, topaz, tourmaline, kaolinite, montmorillonite, calcite
 | Greisen | Topaz, muscovite, tourmaline
 | Skarn | Clinopyroxene, wollastonite, actinolite-tremolite, vesuvianite, epidote, serpentinite-talc, calcite, chlorite, illite-smectite, nontronite
Supergene sulphide | Oxidized and leached zones | Clay minerals, limonite, goethite, hematite, jarosite
 | Enriched zone | Chalcocite, covellite, chrysocolla, native copper and copper oxide, carbonate and sulphate minerals
After Thompson and Thompson (1996)

Table 14.3 Selected examples of mineral mapping and exploration using hyperspectral sensing

SN | Mineral/rock/theme | Data used | Authors
1 | Detection of ammonium alteration (buddingtonite) | AIS | Goetz and Srivastava (1985)
2 | Identification of alteration minerals (alunite, kaolinite, buddingtonite, hematite and zeolites) and generation of mineral distribution maps | GERIS | Kruse et al. (1990)
3 | Hot spring mineral deposits mapping | AVIRIS | Kruse (1997)
4 | Identification of kaolinite, alunite and buddingtonite and generation of mineral distribution maps | AVIRIS | van der Meer and Bakker (1998)
5 | Mapping surface mineralogy of mine tailings | Probe-1 | Shang et al. (2009)
6 | Hydrothermal epigenetic alterations (greisen + albite) associated with Sn-W mineralization | HyMap | Hoefen et al. (2011)
7 | White mica mineral abundance and distribution mapping for the exploration of volcanogenic massive sulphide (VMS) deposit | HyMap | Van Ruitenbeek et al. (2012)
8 | Hydrothermal alteration zones (phyllic, argillic, propylitic) associated with porphyry type copper deposit | Hyperion | Bierwirth et al. (2002), Bishop et al. (2011)
9 | Residual secondary enrichment (bauxite) deposits | Hyperion | Kusuma et al. (2012)

Future trends in hyperspectral remote sensing are likely to include the following (Goetz 2009):

1. Combined and cascaded strategy utilizing airborne and spaceborne multispectral and hyperspectral data and more precise measurements,
2. Improvements in computational and data processing methodology,
3. A high resolution hyperspectral sensor in space, and
4. Extending hyperspectral sensing to the thermal infrared region.

References

Baldridge AM, Hook SJ, Grove CI, Rivera G (2009) The ASTER spectral library version 2.0. Remote Sens Environ 113:711–715
Bierwirth P, Huston D, Blewett R (2002) Hyperspectral mapping of mineral assemblages associated with gold mineralization in the Central Pilbara, Western Australia. Econ Geol 97:819–826
Bishop CA, Liu JG, Mason PJ (2011) Hyperspectral remote sensing for mineral exploration in Pulang, Yunnan Province, China. Int J Remote Sens 32(9):2409–2426
Campbell NA (1996) The decorrelation stretch transform. Int J Remote Sens 17:1939–1949
Chang SH, Collins W (1983) Confirmation of the airborne biogeophysical mineral exploration technique using laboratory methods. Econ Geol 78:723–736
Clark RN (1999) Spectroscopy of rocks and minerals, and principles of spectroscopy. In: Rencz AN (ed) Remote sensing for the Earth sciences, manual of remote sensing, vol 3, 3rd edn. Am Soc Photogramm Remote Sens. Wiley, London, pp 3–58
Clark RN, Roush TL (1984) Reflectance spectroscopy: quantitative analysis techniques for remote sensing applications. J Geophys Res 89(B7):6329–6340
Clark RN, King TVV, Klejwa M, Swayze G, Vergo N (1990a) High spectral resolution reflectance spectroscopy of minerals. J Geophys Res 95:12653–12680
Clark RN, Gallagher AJ, Swayze GA (1990b) Material absorption band depth mapping of imaging spectrometer data using complete band shape least-squares fit with library reference spectra. In: Proceedings 2nd airborne visible/infrared imaging spectrometer (AVIRIS) workshop, JPL Publ 90-54, Jet Propulsion Laboratory, California Inst Tech, Pasadena, CA, pp 176–186
Dickerhof C et al (1999) Mineral identification and lithological mapping on the Island of Naxos (Greece) using DAIS 7915 hyperspectral data. EARSeL Adv Remote Sens 1(1):255–273
Farrand WH, Harsanyi JC (1997) Mapping the distribution of mine tailings in the Coeur d'Alene River valley, Idaho, through the use of a constrained energy minimization technique. Remote Sens Environ 59:64–76
Goetz AFH (2009) Three decades of hyperspectral remote sensing of the Earth: a personal view. Remote Sens Environ 113:S5–S16
Goetz AFH, Rowan LC (1981) Geologic remote sensing. Science 211:781–791
Goetz AFH, Srivastava V (1985) Mineralogic mapping in the Cuprite mining district, Nevada. In: Proceedings of the airborne imaging spectrometer data analysis workshop. JPL Publ 85-41, Jet Propulsion Laboratory, Pasadena, CA, pp 22–31
Goetz AFH, Vane G, Solomon J, Rock BN (1985) Imaging spectrometry for Earth remote sensing. Science 228:1147–1153
Green AA, Berman M, Switzer P, Graig MD (1988) A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans Geosci Remote Sens 26:65–74
Green RO (1992) Determination of the in-flight spectral and radiometric characteristics of the airborne visible/infrared imaging spectrometer (AVIRIS). In: Toselli F, Bodechtel J (eds) Imaging spectrometry: fundamentals and prospective applications. Kluwer, Dordrecht, pp 103–123
Hoefen TM, Knepper DH Jr, Giles SA (2011) Analysis of imaging spectrometer data for the Daykundi area of interest. In: Peters SG et al (eds) Summaries of important areas for mineral investment and production opportunities of nonfuel minerals in Afghanistan. US Geological Survey, Reston, Virginia, pp 314–339
Ichoku C, Karnieli A (1996) A review of mixture modeling techniques for sub-pixel land cover estimation. Remote Sens Rev 13:161–186
Jensen RR, Yang C (2009) Hyperspectral remote sensing—sensors and applications. In: Jackson MW (ed) Earth observing platforms and sensors, manual of remote sensing, 3rd edn, vol 1.1. Amer Soc Photogram Remote Sens (ASPRS), Bethesda, Md., pp 205–224
Johnson PE, Smith MO, Taylor-George S, Adams JB (1983) A semiempirical method for analysis of the reflectance spectra of binary mineral mixtures. J Geophys Res 88(B4):3557–3561
Kokaly RF et al (2017) USGS spectral library version 7: U.S. Geological Survey data series 1035, 61 p. https://doi.org/10.3133/ds1035
Kruse FA (1997) Characterization of active hot-springs environments using multispectral and hyperspectral remote sensing. In: Proceedings 12th international conference and workshops on applied geologic remote sensing, vol I, Env Res Inst Michigan, Ann Arbor, Mich, pp 214–221
Kruse FA, Calvin WM, Seznec O (1988) Automated extraction of absorption features from airborne visible/infrared imaging spectrometer (AVIRIS) and Geophysical Environmental Research imaging spectrometer (GERIS) data. In: Proc AVIRIS performance evaluation workshop, JPL Publ 88-38, Jet Propulsion Laboratory, California Inst Tech, Pasadena, CA, pp 62–75
Kruse FA, Kierein-Young KS, Boardman JW (1990) Mineral mapping at Cuprite, Nevada with a 63-channel imaging spectrometer. Photogram Eng Remote Sens 56(1):83–92
Kruse FA, Letkoff AB, Boardman JW, Heidebrecht KB, Shapiro AT, Barloon PJ, Goetz AFH (1993) The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens Environ 44:145–163
Kusuma KN, Ramakrishnan D, Pandalai HS (2012) Spectral pathways for effective delineation of high-grade bauxites: a case study from the Savitri River Basin, Maharashtra, India, using EO-1 Hyperion data. Int J Remote Sens 33(22):7273–7290
Meyer P (1994) A parametric approach for the geocoding of airborne visible/infrared imaging spectrometer (AVIRIS) data in rugged terrain. Remote Sens Environ 49:118–130
Mustard JF, Sunshine JM (1999) Spectral analysis for Earth science: investigations using remote sensing data. In: Rencz AN (ed) Remote sensing for the earth sciences, manual of remote sensing, vol 3, 3rd edn. Am Soc Photogramm Remote Sens. Wiley, New York, pp 251–306
Ramakrishnan D, Bharti R (2015) Hyperspectral remote sensing and geological applications. Curr Sci 108(5):879–891
Rast M, Hook SJ, Elvidge CD, Alley RE (1991) An evaluation of techniques for the extraction of mineral absorption features from high spectral resolution remote sensing data. Photogram Eng Remote Sens 57:1303–1309
Salisbury JW, Walter LS, Vergo N, D'Aria DM (1991) Infrared (2.1–2.5 µm) spectra of minerals. Johns Hopkins University Press, Baltimore, pp 1–267
Shang JL, Morris B, Howarth P, Levesque J, Staenz K, Neville B (2009) Mapping mine tailing surface mineralogy using hyperspectral remote sensing. Canad J Remote Sens 35:S126–S141
Thenkabail PS, Lyon JG, Huete A (eds) (2012) Hyperspectral remote sensing of vegetation. CRC Press/Taylor & Francis, Florida
Thompson AJB, Thompson JFH (1996) Atlas of alteration: a field and petrographic guide to hydrothermal alteration minerals. Geological Association of Canada, Mineral Deposits Division, p 119
Van der Meer FD et al (2012) Multi- and hyperspectral geologic remote sensing: a review. Int J Appl Earth Obs Geoinf 14:112–128
van der Meer F, Bakker W (1997) CCSM: cross correlogram spectral matching. Int J Remote Sens 18:1197–1201
van der Meer F, Bakker W (1998) Validated surface mineralogy from high-spectral resolution remote sensing: a review and a novel approach applied to gold exploration using AVIRIS data. Terra Nova 10:112–119
van der Meer FD, de Jong SM (2001) Imaging spectrometry: basic analytical techniques. In: Imaging spectrometry: basic principles and prospective applications. Springer, Dordrecht
Van Ruitenbeek FJA, Cudahy TJ, Van der Meer FD, Hale M (2012) Characterization of the hydrothermal systems associated with Archean VMS-mineralization at Panorama, Western Australia, using hyperspectral, geochemical and geothermometric data. Ore Geol Rev 45:33–46
Vane G, Goetz AFH (1988) Terrestrial imaging spectroscopy. Remote Sens Environ 24:129
Vane G, Goetz AFH, Wellman JB (1983) Airborne imaging spectrometer: a new tool for remote sensing. In: Proceedings of the IEEE international geoscience remote sensing symposium (IGARSS), FA-4:6.1–6.5
Vane G, Green RO, Chrien TG, Enmark HT, Hansen EG, Porter (1993) The airborne visible infrared imaging spectrometer (AVIRIS). Remote Sens Environ 44:127–143
15 Microwave Sensors
15.1 Introduction

The EM spectrum range 1 mm–1.0 m is designated as microwave. In the context of terrestrial remote sensing, this spectral region is marked by an excellent atmospheric window, i.e. the radiation traverses the atmosphere with minimal absorption and attenuation (Fig. 15.1). Therefore, this spectral range has aroused much interest for remote sensing applications.

The techniques and sensors for Earth observation in the microwave region can be divided into two broad types: passive, i.e. measuring the naturally available radiation, and active, i.e. illuminating the ground scene by an artificial source of energy and measuring the back-scattered radiation.

15.2 Passive Microwave Sensors and Radiometry

15.2.1 Principle

The passive microwave sensors, called radiometers, measure the intensity of naturally available radiation, the techniques having been adapted from the field of radio-astronomy. Commonly, passive microwave sensors operate in the wavelength range of 1 mm–30 cm for Earth surface observations.

The blackbody radiation emitted by the Earth peaks at about 9.6 µm (Fig. 15.1) and falls off with increasing wavelength. At microwave wavelengths, although quite weak, it is still the most dominant source of naturally occurring radiation. Radiation emanating from some depth below the top ground surface may also be transmitted through the overlying cover and may form a part of the signal received by the microwave sensor. Additionally, other radiation components in this region may include the reflected solar energy, the reflected skylight and atmospheric emission, all of which possess very weak intensity. Thus, as far as energy sources are concerned, the blackbody radiation emitted by the Earth forms the overwhelmingly dominant component.

It was discussed in Chap. 2 that at longer wavelengths, such as in the microwave region, Planck's radiation equation is simplified to the Rayleigh–Jeans law, which gives [for abbreviations see Eq. (2.5)]:

W_λ ≅ (2πck / λ⁴) T        (15.1)

This means that the intensity of blackbody radiation in the microwave region is directly proportional to the temperature of the radiating object. It also implies that temperature can be directly estimated from the radiation intensity. The temperature thus estimated in the real world is called the brightness temperature.
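Because Eq. (15.1) is linear in T, a measured spectral radiant exitance in the microwave region can be inverted directly for a brightness temperature. The sketch below simply rearranges Eq. (15.1); the wavelength and temperature values are illustrative only, and for a real (non-blackbody) surface the brightness temperature equals the emissivity times the physical (kinetic) temperature.

import numpy as np

c  = 2.998e8      # speed of light (m/s)
kB = 1.381e-23    # Boltzmann constant (J/K)

def brightness_temperature(W_lambda, wavelength):
    """Invert the Rayleigh-Jeans approximation, Eq. (15.1):
    W_lambda ~ (2*pi*c*k / lambda**4) * T  ->  T = W_lambda * lambda**4 / (2*pi*c*k)."""
    return W_lambda * wavelength**4 / (2.0 * np.pi * c * kB)

# forward-and-back check at 3 cm wavelength for a 300 K blackbody
wavelength = 0.03                                        # m
W = (2.0 * np.pi * c * kB / wavelength**4) * 300.0       # spectral radiant exitance
print(brightness_temperature(W, wavelength))             # ~300.0 K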
Fig. 15.1 Blackbody radiation emitted by the Earth and the atmospheric windows in the microwave region (1 mm–1.0 m)

Fig. 15.2 Sources of passive microwave radiation and schematic for detection and recording

15.2.2 Measurement and Interpretation

The study of microwave radiometer data is termed radiometry. In the microwave region, radiation is collected by the sensor antenna (instead of lenses and mirrors, as is done in the optical region). Owing to the weak intensity of the signal, the sensor employs a large antenna beamwidth, which leads to a lower spatial resolution. The signal received by the antenna from the ground is compared to that obtained from on-board calibration sources, alternately by a switching device (Fig. 15.2). The spatial variation in ground radiation intensity is interpreted in terms of ground temperature differences. Such sensors are most commonly operated in profiling mode and are popularly termed microwave radiometers. Sometimes they are operated in scanning mode to produce images, being then called scanning radiometers.

The nature of passive microwave response is yet to be fully understood. It is related to a number of ground factors, such as ground temperature and emissivity, and electrical properties such as dielectric constant etc. The spectral emissivity depends on composition and surface roughness. If we consider the EM radiation to be incident on a surface, then a part of the radiation is reflected (R) and the remaining part is absorbed (A) (assuming R + A = 1, and transmission to be negligible due to the high dielectric constant). In such a case, absorptivity (A) equals (1 − R). It is known that absorptivity equals emissivity, and emissivity governs the spectral emitted radiance. As the spectral dielectric property governs the interaction of radiation with materials, it becomes a very important parameter in passive microwave radiometry.

Water and vegetation have dielectric constants that are very different from those of other ground objects. Thus, there is a distinct applicability of passive microwave radiometry in hydrological sciences and vegetation-related disciplines. In geology, the scope for passive microwave sensing lies in detecting seepage zones, springs, faults, soil moisture variations, and vegetation banding etc. (see e.g. O'Leary et al. 1983).

Besides, passive microwave remote sensing also appears to hold promise for shallow subsurface exploration, as the radiation from buried objects is transmitted through the overlying cover and recorded by the sensor. However, further work is needed to understand the response of ground features in order to better interpret passive microwave sensing data. Useful reviews on passive microwave remote sensing have been given by Schmugge (1980) and Ulaby et al. (1981, 1982, 1986).

15.3 Active Microwave Sensors—Imaging Radars

15.3.1 What is a Radar?

15.3.1.1 Definition and Development
Radar is an acronym for Radio Detection And Ranging. In the present context, however, the term radar is used to encompass all active microwave sensors applied for detecting the physical attributes of remotely located objects. Basically, radar operates on the principle that artificially generated microwaves transmitted in a particular direction collide with objects and are scattered. The back-scattered energy is received, amplified and analysed to determine the location, electrical properties and surface configuration of the objects.

Radar technology made tremendous strides during World Wars I and II, when this technique was used to detect ships and locate military targets. In the early 1950s, the configuration of side-looking airborne radar (SLAR) came into existence, which allowed acquisition of continuous-strip image data without actually flying over an area. Further, the invention of synthetic aperture radar (SAR) was also made about the same time, which significantly improved the resolution capability of imaging radar sensors.

In the 1960–1970s the airborne imaging radar saw rapid development and deployment for civilian purposes, e.g. the Radar Mapping in Panama (RAMP) Project and the Radam Project in Brazil (MacDonald 1969a; De Azevedo 1971).
15.3 Active Microwave Sensors—Imaging Radars 223

Radar Mapping in Panama (RAMP) Project and the Radam commands to other components of the radar system. Finally,
Project in Brazil (MacDonald 1969a; De Azevedo 1971). data processing and storage provide data output that could be
These and other similar airborne projects world-wide used for processing and interpretation. The above are the
showed the tremendous potential in mapping large heavily basic functional components of a simple radar system, the
vegetated and permanently cloud-covered virgin tracts on instrumentation being highly sophisticated in practice.
the Earth in a very short period by imaging radar, due to the
unique penetration capabilities of radar wavelengths.
The above paved the way for space-borne imaging SAR 15.3.2 Side-Looking Airborne Radar—Basic
missions (e.g. Seasat, ERS-1/2, JERS, Radarsat-1/2, ALOS, Configuration
Envisat, SRTM, RISAT, TerraSAR-X and many others). At
present, there are about 10–15 high resolution imaging radar 15.3.2.1 Working Principle
sensors in the space and many more are due to be launched For imaging purposes, the radar is mounted on an aircraft
in the near future. such that the antenna (semi-cylindrical in shape, commonly
As mentioned above, the imaging radar throws artificially about 5–10 m in length) is attached to the fuselage, aligned
generated electromagnetic waves to illuminate the terrain parallel to the aircraft axis. Commonly, the antenna looks
and is therefore an active system. This is in contrast to other perpendicular to the flight line, obliquely down upon the
passive remote sensing methods that work on only naturally Earth on one side. Sometimes two antennas are used, one on
available energy, such as the sensors in the VNIR, SWIR each side of the aircraft, to scan the terrain on both sides of
and TIR region and passive microwave methods. The radar the ground track in a single flight.
sensors use EMR wavelengths ranging from a few mil- In one scanning episode, the radar transmits a short pulse
limetres to less than a metre that pass through the atmo- of coherent EM energy, illuminating narrow strips of the
sphere unhindered without any atmospheric interaction ground on one side, perpendicular to the flight direction
(Fig. 15.1). These factors make radar an all-time and (Fig. 15.4a). The radar receiver records echoes (back-scatter)
all-weather capability, independent of solar illumination in order of arrival, which is related to ground distances
variables, which is responsible for the increased interest in (because of constancy of the speed of light). Thus, echoes
this technology. from the objects situated closer to the ground track are
received earlier, and those from farther away in the range
15.3.1.2 Basic Components of a Radar System direction, successively later (Fig. 15.4b). In this manner,
A typical radar system carries a pulse-generating device or target signals are converted into time-amplitude signals.
transmitter, which is linked to an antenna (Fig. 15.3). The After the last echo is received, determined by the swath
generated signal is radiated by the antenna in a given look width, a new pulse of energy is emitted by the radar. As the
direction, and scattered by the object. The back-scattered sensor-craft moves along its trajectory (azimuth direction),
signal is sensed by an antenna. The systems which illuminate the radar beam likewise sweeps perpendicular to the flight
and observe objects from approximately the same location in path, and the entire terrain is scanned. The time-amplitude
space are called monostatic and those which illuminate from signal gathered at the radar receiver can be put on a monitor
one location but observe from a substantially different or data tape. The radar data results in an image (Fig. 15.5).
location are called bistatic. A bistatic system naturally
requires two antennas to operate. However, more frequently, 15.3.2.2 Imaging Radar Terminology
the same antenna is switched to transmitting and receiving A jargon has evolved in the context of imaging radar sensing
circuits alternately by a duplexer. A control unit serves as the (Fig. 15.6) and it is necessary to become conversant with the
radar brain and distributes timings signals and necessary terminology before proceeding further.

Fig. 15.3 Basic structure of a simple radar system

Fig. 15.4 Working principle of side-looking airborne radar (SLAR). a Terrain illuminated in one burst of energy. b Conversion of target signals into time-amplitude signals

Fig. 15.5 Correspondence of radar image to ground scene; the radar data acquired over an area are processed to yield an image; the ground characteristic parameter (back-scattering coefficient σ) at the scene location (i, j) corresponds to the digital number (DN) at (i, j) in the image (after JPL 1986)

Range direction is the horizontal direction in which the aircraft-mounted antenna looks. It is generally perpendicular to the flight direction, except in the case of squint-mode operation. Objects on the ground lying closest to the flight path (nadir line) are called near-range objects; they imply the shortest travel time. Far-range implies far away from the aircraft in the range direction (largest travel time). Range resolution is the linear ground distance resolution in the range direction. Slant range denotes the direct linear distance between the antenna and the ground. Ground range pertains to the map distance in the range direction. This distance is also called range distance. The total width of the ground imaged from one end to the other, i.e. from far range to near range, is called swath width. Azimuth direction is the horizontal direction of aircraft/spacecraft flight. The azimuth and range directions are generally mutually perpendicular. Azimuth resolution means the linear ground distance resolution in the azimuth direction. It varies according to the type of radar system (being coarser for real-aperture radar and finer for synthetic-aperture radar; see Sect. 15.3.3.2). Pulse rectangle corresponds to the unit area on the ground sensed by the radar (Fig. 15.7). It is also referred to as spot size. For real-aperture systems, it is the product of azimuth resolution (beam width) and range resolution (ground distance corresponding to pulse length).
Angular relations between the antenna, incident ray and ground object (including surface topography) are vital factors influencing radar return. Depression angle is the acute angle that the transmitted ray (i.e. the line joining the antenna with the ground object) makes with the horizontal, as measured at the antenna in the vertical plane. It varies at different points in the swath and is smallest at far range and largest at near range. Look angle is the angle made by the transmitted ray with the vertical, as measured at the antenna in the vertical plane; it is complementary to the depression angle.

Fig. 15.6 Basic geometry of side-looking airborne radar and terminology

Fig. 15.7 Pulse length, beam width and pulse rectangle in a real aperture imaging radar

Incidence angle is that angle which the incident ray makes with the vertical on the ground, where the ground is assumed as essentially flat (or, in other words, the vertical is
drawn from the centre of the Earth). In the case of aerial
sensing, the effect of the Earth’s curvature can be ignored,
and the incidence angle equals the look angle, and is com-
plementary to the depression angle. However, in spaceborne
sensors, the effect of the Earth’s curvature is significant; the
result is that the incidence angle is always greater than the
look angle (by approximately 3–5°) (Fig. 15.8). As radar
return is governed by incidence angle, the incidence angle is
generally used as a preferred specification to represent the
imaging radar viewing geometry. Another term, local inci-
dence angle, is sometimes used for the angle made by the
incident ray with the normal to the surface at a particular
point, when local topography is uneven.
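This look angle to incidence angle relationship can be illustrated numerically. The sketch below uses the standard spherical-Earth relation sin(incidence) = ((R_e + h)/R_e) · sin(look), which follows from the law of sines and is not derived in the text; the orbital altitude and look angles are illustrative values only.

import math

def incidence_from_look(look_angle_deg, orbit_altitude_km, earth_radius_km=6371.0):
    """Incidence angle on flat ground for a spaceborne sensor (spherical-Earth geometry)."""
    ratio = (earth_radius_km + orbit_altitude_km) / earth_radius_km
    sin_inc = ratio * math.sin(math.radians(look_angle_deg))
    return math.degrees(math.asin(sin_inc))

# Illustrative case: a ~700 km orbit and a few typical look angles
for look in (20.0, 30.0, 40.0):
    inc = incidence_from_look(look, orbit_altitude_km=700.0)
    print(f"look {look:4.1f} deg -> incidence {inc:4.1f} deg (difference {inc - look:3.1f} deg)")

For aerial sensing the altitude is negligible compared with the Earth's radius, so the same relation returns an incidence angle essentially equal to the look angle, consistent with the discussion above.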
Fig. 15.8 In spaceborne sensors, the incidence angle (on a flat ground) is always greater than the look angle at the antenna, owing to the curvature of the Earth's surface

In most cases, azimuth direction and range direction are mutually perpendicular. However, to produce images with different look directions (e.g. to produce squint-look images),

the range/look direction may be altered by suitable maneuvers at the sensor for imaging in squint-look mode. The angle between the azimuth direction and the range direction, measured in the horizontal plane, is called the azimuth angle. Complementary to this is the squint angle (the angle between the across-track direction and the squint-mode look direction). As is obvious, in most cases the squint angle is zero.

15.3.3 Spatial Positioning and Ground Resolution from SLAR/SAR

Basically, the position of objects is estimated from distances along two orthogonal directions—viz. the range direction and the azimuth direction.

15.3.3.1 Range Location and Resolution
A swath strip is illuminated by a single burst of EM energy and the time interval between the transmitted and received signal is used to give the position of the object in the range direction. If R_s is the slant range distance from the antenna to the object and t is the time interval between transmitted and received pulses, then

R_s = c t / 2        (15.2)

where c is the speed of light. The ground range distance, R, can be computed from this as

R = R_s / cos θ = c t / (2 cos θ)        (15.3)

where θ is the depression angle (Fig. 15.6). The earlier SLAR images used slant range, which is actually the inclined distance, as the reference distance. This resulted in scale distortion, because of the 'cos θ' factor. All modern radar systems now use the computed horizontal ground range as the reference distance.
Range resolution is the ability of the radar to discriminate two targets situated behind each other in the range direction. A limitation on this is imposed by the pulse length or pulse duration. Two objects will be resolved if the received pulse from the first object ends before that from the second object starts. Thus pulses are made short, and nowadays coded pulses are used to improve range resolution. If the pulse duration is τ, then the ground range resolution (R_r) is given as the range distance corresponding to the pulse length:

R_r = τ c / (2 cos θ)        (15.4)

Therefore, range resolution is also dependent on the depression angle—for a smaller depression angle it is finer, and for larger depression angles it is coarser. This implies that objects situated the same distance apart may be resolved at far range, but sometimes may not be so at near range.
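As a numerical illustration of Eqs. (15.2) and (15.4), the short sketch below converts a two-way echo delay to slant range and a pulse duration to ground range resolution. The delay, pulse duration and depression angle are hypothetical values, not the specifications of any particular sensor.

import math

C = 3.0e8  # speed of light (m/s)

def slant_range(delay_s):
    """Eq. (15.2): slant range from the two-way echo delay."""
    return C * delay_s / 2.0

def ground_range_resolution(pulse_duration_s, depression_angle_deg):
    """Eq. (15.4): ground range resolution from pulse duration and depression angle."""
    return (pulse_duration_s * C) / (2.0 * math.cos(math.radians(depression_angle_deg)))

print(slant_range(50e-6))                      # a 50 microsecond delay gives 7500 m slant range
print(ground_range_resolution(0.1e-6, 40.0))   # a 0.1 microsecond pulse gives ~19.6 m at 40 deg depression

Consistent with the text, the same pulse yields a finer ground range resolution at a smaller depression angle (far range) and a coarser one at a larger depression angle (near range).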

15.3.3.2 Azimuth Location and Resolution
Broadly, azimuth location is determined from the position of the sensorcraft in the azimuth direction. Azimuth resolution is a very important factor in evaluating the performance of an imaging radar system. Depending upon how azimuth resolution is obtained, two types of systems can be distinguished: real-aperture radar (RAR) and synthetic-aperture radar (SAR).

1. Real-aperture radar. Real-aperture radars (RARs) were technically simpler and were used in the initial stages (and are now obsolete; however, we briefly discuss them to facilitate understanding of the principle). They had coarser azimuth resolution, which varied systematically from near range to far range. When an angular beam of energy is radiated from the antenna, it fans out in the range direction. Azimuth resolution (R_a) is simply the arc length or horizontal beam width (β, see Fig. 15.7) at a particular place in the range direction, i.e.

R_a = B_h × R_s        (15.5)

where B_h is the angular beam width at the antenna and R_s is the slant range distance. The angular beam width B_h can be approximated as

B_h = λ / D        (15.6)

where λ is the wavelength used and D is the antenna length. Therefore

R_a = λ R_s / D        (15.7)

This means that azimuth resolution becomes coarser with increasing slant range. It can be improved by using shorter wavelengths, a smaller slant range and a larger antenna length. Shorter wavelengths have limitations, as they are more attenuated. Reduction in slant range improves azimuth resolution but would adversely affect range resolution (see above). Further, as mentioned later, a longer slant range, i.e. a small depression angle, is useful for other purposes, e.g. for enhancing features in areas of low relief and reducing various image distortions etc. (Sect. 13.8). Antenna length can be increased only within a certain physical limit. Thus, there are constraints to improving the azimuth resolution of RAR systems. It is generally of the order of 15–60 m, and varies with slant range (see the numerical sketch after this list). The unit cell (pulse length by beam width) is called the pulse rectangle.
2. Synthetic-aperture radar. Synthetic-aperture radar (SAR) systems are more sophisticated radars and use advanced processing algorithms to yield finer azimuth resolution. Present-day aerial and space-borne systems use SAR technology. The basic limitation of the RAR was the coarse azimuth resolution with increasing range distance. However, the fanning out of the radar beam width has another implication: objects situated farther away are observed by the radar for a longer duration, i.e. in a greater number of sweeps. In the case of synthetic-aperture imaging radar, all the observations of objects are integrated, and successive antenna positions are treated as if they were individual elements of one long antenna array (Fig. 15.9). The Doppler principle is used to process the data and synthesize an antenna of a much larger length. The synthesized beam is narrow and has a constant width or resolution, irrespective of the range distance. SAR processing has been a major technological advancement. The present-day space-borne SAR systems provide ground resolution of about 1 m (see Table 15.2).
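A minimal numerical sketch of Eqs. (15.6) and (15.7) shows why real-aperture azimuth resolution becomes unusable at spaceborne slant ranges, and hence why aperture synthesis is needed. The wavelength, antenna length and slant ranges below are illustrative values only.

def rar_azimuth_resolution(wavelength_m, antenna_length_m, slant_range_m):
    """Eqs. (15.6) and (15.7): real-aperture azimuth resolution."""
    beam_width = wavelength_m / antenna_length_m   # angular beam width, in radians
    return beam_width * slant_range_m

wavelength = 0.235   # L-band, 23.5 cm
antenna = 10.0       # a 10 m long antenna
for rs in (10e3, 100e3, 800e3):                    # aerial to spaceborne slant ranges
    print(rs, rar_azimuth_resolution(wavelength, antenna, rs))
# 10 km -> 235 m; 100 km -> 2350 m; 800 km -> 18,800 m, i.e. a real-aperture system
# is impractical from orbit, which is why SAR processing (synthesizing a much longer
# antenna, whose azimuth resolution no longer degrades with range) is used instead.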
It may be mentioned here that the term SLAR was initially used for airborne imaging radar of the RAR type. Most of the space-borne imaging radars have used SAR technology, with the exception of Almaz, which was a RAR sensor. Now, the real-aperture radars have become obsolete, and the airborne SAR systems are losing their main advantage as the spatial resolution from space-borne SAR has reached the order of 1 m. Therefore, the term imaging SAR, or simply SAR, is now generally used for space-borne synthetic aperture imaging radar sensors.
A compilation of airborne SAR systems can be found in Reigber et al. (2013). Here, we confine discussion to space-borne SAR sensors.

Fig. 15.9 Working principle of a synthetic aperture radar (Craib 1972). a Doppler frequency shift due to relative motion of target through the radar beam. b Resolution of synthetic aperture radar; although the physical antenna length is D, the synthetically lengthened antenna is L

15.3.4 SAR System Specifications

A number of parameters are used to specify a SAR sensor, the important ones being the following:

1. Radar wavelength. Radar imagery is acquired at a fixed wavelength and this wavelength is one of the most important radar parameters. The ranges of wavelength used in radar remote sensing are listed in Table 15.1 (a short wavelength to frequency conversion sketch follows the table). They were coded alphabetically during the early classified stages of radar development, and the same nomenclature is followed to date, for harmony and convenience.

Table 15.1 Imaging radar bands

Radar band | Wavelength (in cm) | Frequency (in GHz, i.e. 10^9 cycles/s)
Ka | 0.8–1.1 (0.86)^a | 40.0–26.5
Ks | 1.1–1.7 | 26.5–18.0
Ku | 1.7–2.4 | 18.0–12.5
X | 2.4–3.8 (3.1) | 12.5–8.0
C | 3.8–7.5 (5.7) | 8.0–4.0
S | 7.5–15.0 (15) | 4.0–2.0
L | 15.0–30.0 (23.5) | 2.0–1.0
P | 30.0–100.0 (50) | 1.0–0.3
^a Parentheses: commonly used radar wavelength
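The wavelength and frequency columns of Table 15.1 are linked by c = λf. The small helper below, which uses the commonly used wavelengths from the table, is merely a convenience for converting between the two; it introduces no new information.

def wavelength_cm_to_ghz(wavelength_cm):
    """c = lambda * f; with lambda in cm, f (GHz) = 30 / lambda."""
    return 30.0 / wavelength_cm

# Commonly used wavelengths from Table 15.1
for band, wl in (("X", 3.1), ("C", 5.7), ("L", 23.5)):
    print(band, wl, "cm ->", round(wavelength_cm_to_ghz(wl), 2), "GHz")
# X: ~9.68 GHz, C: ~5.26 GHz, L: ~1.28 GHz, consistent with the frequency ranges listed.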

2. Beam polarization. The plane of vibration of the electrical field vector defines the plane of polarization of the EM energy wave. In radar sensing, the transmitted wave train is always polarized in a particular direction (horizontal, H, or vertical, V). On interaction with the ground surface, it becomes partly depolarized. The radar antenna records back-scattered radiation only of a certain polarization, e.g. the like-polarized return, viz. HH (implying horizontal transmit, horizontal receive) or VV, or the cross-polarized return, viz. HV or VH. The HH configuration yields the strongest radar return and has therefore been most widely used. A SAR can also be operated with multiple receiving antennas—concurrently for the like-polarized and cross-polarized returns. In one radar image, the beam polarization is constant. A number of SAR sensors have used selectable dual polarization (e.g. HH + HV, or VV + VH). However, in many modern SAR sensors, all four polarizations are concurrently available (HH + HV + VV + VH); such a SAR is said to possess quad polarization and is used for polarimetric studies.
3. Look angle and swath width. Often several geometric parameters such as swath width, look angle and other sensor-linked characteristics are used as specifications. These parameters have been defined earlier.
4. Resolution. Resolution, in both azimuth and range direction, forms another important specification.

15.3.5 Imaging Modes of SAR Sensors

Advances in SAR technology have revolutionized the SAR imaging capability during the last nearly two decades. This has resulted in various imaging modes that enable SAR sensing in a wide range of depression angles, swath and spatial resolution by controlling the antenna radiation pattern. The main modes of operation are as follows (Fig. 15.10):

1. Stripmap (Standard Beam) operation: The most fundamental mode is the Stripmap or Standard Beam operation, where the pattern is fixed to one swath, thus imaging a single continuous strip.
2. ScanSAR mode: The ScanSAR mode enables SAR data acquisition in a wider swath. The antenna elevation pattern can be suitably manoeuvred to different elevation angles. After appropriate processing, this yields a wide-swath SAR image; however, the azimuth resolution is degraded when compared to the Stripmap mode.
3. Spotlight (Fine Resolution Beam) mode: This mode facilitates high resolution SAR image data acquisition. Here the antenna pattern is steered in azimuth towards a fixed point to illuminate a given region for a longer duration. The long illumination time results in an increased synthetic aperture length and consequently in a higher resolution. However, this mode may be used to image only individual patches along the radar flight path.

Fig. 15.10 Typical imaging modes of modern spaceborne SAR sensors

There could be further variations within the above three broad categories, e.g. ultrafine resolution beam mode, extra-wide swath mode, extended mode etc., as per the sensor design.
Figure 15.11 shows an example comparison of satellite SAR image resolution of yesteryears (1990s) vis-à-vis the current generation of SAR sensors (2007 onwards).

Fig. 15.11 Image pair showing the comparison of resolution of SAR sensors of yesteryears and the present day; a C-band SAR image from a space-borne sensor of the 1990s (resolution around 20 m) and b X-band image of a present-day spaceborne SAR sensor (resolution 1 m); the ground scene covered is the pyramids of Giza, Egypt (a, b, Moreira et al. 2013)

15.3.6 Selected Space-Borne SAR Sensors

1. SEASAT

Space-borne radar remote sensing began in 1978 with the launch of Seasat (Fig. 15.12a). The Seasat, basically an

oceanographic survey sensor, carried a synthetic aperture radar (SAR) and a scatterometer, in addition to other sensors. The SAR operated in the L-band (23.5 cm) (Table 15.2). Owing to a large depression angle (about 70°), the Seasat images had high geometric distortion and therefore could not be much used for geological applications.

2. Shuttle Imaging Radar (SIR) Series

NASA's Shuttle Imaging Radar (SIR) series of experiments (SIR-A, -B and -C) followed the Seasat experiment. The SIR-A was flown aboard Space Transportation System (STS)-2 in 1981 and used depression angles of 40° ± 3°.

Fig. 15.12 Selected SAR sensor satellites: a Seasat; b ERS; c Radarsat; d Envisat; e TerraSAR-X; f RISAT; g Sentinel; h Cosmo-SkyMed (images after respective space agencies)

Table 15.2 Overview of spaceborne SAR sensors^a

S. No. | Mission/sensor | Operation | Frequency band (polarization) | Comments | Agency/Country
1 | Seasat | 1978 (100 days) | L (HH) | First civilian SAR sensor in space | NASA/JPL, USA
2 | ERS-1/2 | 1991–2000 / 1995–2011 | C (VV) | First European Remote Sensing satellites | ESA, Europe
3 | JERS | 1992–1998 | L (HH) | First Japanese SAR satellite | JAXA, Japan
4 | SIR-C/X-SAR | 1994 (limited period) | L & C (quad), X (VV) | Shuttle imaging radar; multi-frequency and multi-polarization | NASA/JPL, USA; DLR, Germany; ASI, Italy
5 | Radarsat-1 | 1995–2013 | C (HH) | First Canadian SAR satellite; multiple imaging modes | CSA, Canada
6 | SRTM | Feb 2000 | C (HH + VV), X (VV) | Shuttle Radar Topographic Mission; first spaceborne dedicated InSAR | NASA/JPL, USA; DLR, Germany; ASI, Italy
7 | ENVISAT/ASAR | 2002–2012 | C (dual) | Follow-up of ERS; swath width up to 400 km | ESA, Europe
8 | ALOS-1/PALSAR (DAICHI) | 2006–2011 | L (quad) | Multiple imaging and polarization modes | JAXA, Japan
9 | Radarsat-2 | 2007–continues | C (quad) | Resolution up to 1 m; swath 500 km | CSA, Canada
10 | TerraSAR-X/TanDEM-X | 2007/2010–continues | X (quad) | First bistatic radar in space; resolution up to 1 m; global topography | DLR/Astrium, Germany
11 | COSMO-SkyMed-1/4 | 2007/2010–continues | X (dual) | Constellation of four SAR satellites; resolution up to 1 m | ASI/MiD, Italy
12 | RISAT-1 | 2012–continues | C (quad) | First Indian SAR satellite; multiple imaging and polarization modes; resolution up to 2 m | ISRO, India
13 | HJ-1C | 2012–continues | S (VV) | Proposed constellation of multiple satellites | CAST, China
14 | Kompsat-5 (Arirang-5) | 2013–continues | X (dual) | Korean multipurpose satellite; resolution up to 1 m | KARI, Korea
15 | ALOS-2/PALSAR (DAICHI) | 2014–continues | L (quad) | Multiple imaging and polarization modes; resolution up to 1 m | JAXA, Japan
16 | Sentinel-1a/1b | 2014, 2016 | C (dual) | Constellation of four satellites proposed; swath up to 400 km | ESA, Europe
17 | Radarsat Constellation-1/2/3 | Launch proposed in 2017 | C (quad) | Constellation of three satellites proposed; swath up to 500 km | CSA, Canada
18 | NISAR (NASA-ISRO SAR) | Proposed 2020 | S (quad), L (quad) | Dual-frequency polarimetric SAR; multiple imaging modes; resolution up to 2 m; swath 240 km | NASA, USA; ISRO, India

^a Source: http://database.eohandbook.com/

Due to relatively moderate depression angles, layover is largely absent on these images, shadows are also not so prominent, and therefore this has yielded quite interesting image data. SIR-B, flown on STS-17 in 1984, had only limited success due to system malfunctioning.
SIR-C/X-SAR was an important programme, a joint venture between the USA (NASA/JPL), the German Aerospace Research Establishment (DLR) and the Italian Space Agency (ASI). This mission had the capability to acquire SAR data concurrently at three wavelengths—L-band (23.5 cm), C-band (5.6 cm) and X-band (3.1 cm)—and in various polarization modes and at variable incidence angles. The experiment was flown twice aboard the space shuttle 'Endeavour' in 1994. A wide choice of polarization, resolution, swath width and wavelength allowed several data collection modes from this experiment.

3. Almaz-1

Almaz-1 was a USSR/Russian satellite carrying an SAR sensor, which operated during 1991–92. Its S-band wavelength (10 cm) and selectable incident angles made it different from other SAR missions. However, due to its low orbit (300 km), the on-board fuel depleted rather quickly, and this terminated the programme before the planned life.

4. JERS-1

The Japanese Earth Resources Satellite-1 (JERS-1) was launched in 1992 (Table 15.2). It carried a SAR sensor with L-band (23.5 cm wavelength) and HH polarization. The system was quite similar to the earlier Seasat except for the larger incidence angle of 39° in comparison to the 23° of Seasat. The JERS provided images with reduced geometric distortions related to topographic relief, a better resolution (18 m × 18 m, both range and azimuth) and a swath width of 75 km. With a tape recorder on board, the sensor gathered image data of several parts of the globe.

5. ERS-1/ERS-2

The European Space Agency (ESA) launched two European Remote Sensing (ERS) satellites, ERS-1 and -2, into the same orbit in 1991 and 1995 respectively (Fig. 15.12b; Table 15.2). Their payloads were similar and included a synthetic aperture imaging radar (C-band, 5.7 cm wavelength) and a radar altimeter, besides others. Both ERS-1/-2 were launched into a sun-synchronous polar orbit in the same orbital plane, at an altitude of 782–785 km. This allowed a tandem mission, with ERS-2 passing the same point on the ground 1 day later than ERS-1. The resolution was 30 m, swath width 100 km, and repeat cycle 35 days. The two satellites acquired valuable data sets extending over two decades. A major application of ERS data has been in SAR interferometry (discussed in detail in Chap. 17).

6. RADARSAT-1/RADARSAT-2

Radarsat-1 was Canada's first commercial Earth observation satellite (Fig. 15.12c), launched in 1995 into a sun-synchronous dawn-dusk orbit (6 p.m. ascending node and 6 a.m. descending node) above the Earth, with an altitude of 798 km and orbit-plane inclination of 98.6°. It carried a C-band SAR sensor (5.6 cm wavelength), H–H polarization. Radarsat-2 was a follow-up programme launched in 2007 with C-band SAR and multiple (HH, VV, HV, VH) polarizations (Table 15.2). In standard mode, the Radarsat SAR sensor provided a nominal resolution of 30 m × 30 m. Through ground command, it provided data in a variety of beam selections—in terms of resolution, incidence angle and swath width (standard beam, ScanSAR, and fine resolution beam), providing a range of image swath widths (varying from 45 to 500 km) and spatial resolutions (from 1 to 160 m).

7. SRTM

SRTM was a unique mission exclusively dedicated to SAR interferometry. A two-frequency single-pass interferometric SAR was configured by simultaneously operating two sets of radar antennas separated by a baseline length of 60 m for C-band and 12 m for X-band. The 60 m long, deployable, stiff-boom structure perpendicular to the velocity direction of the space shuttle separated the two antennas (see Fig. 17.4 for the configuration of SRTM).
The look angles of the SRTM were fixed (C-band: 30–60°, X-band: 50–55°, off-nadir), both SARs being left-looking. The C-band swath width was 225 km and for X-band it was 50 km. The C-band covered any point on the equator twice (one ascending and one descending pass), whereas the X-band coverage was smaller in swath. It has provided global coverage (except areas more than 60° north and south) and has been the first continuous data set of this kind ever acquired.
The main goal of SRTM was to generate very accurate and high-spatial-resolution DEMs, in general with about 30 m × 30 m spatial resolution (with C-band), and in special cases 10 m × 10 m spatial resolution (with X-band). The vertical height resolutions are generally better than 10 m for C-band and 6 m for X-band.

8. ALOS-1/ALOS-2

The Advanced Land Observing Satellite (ALOS-1), also called Daichi, was a Japanese satellite launched into a 628 km high sun-synchronous orbit in 2006 and delivered data till 2011. ALOS had three sensors: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2), and the Phased Array type L-band Synthetic Aperture Radar (PALSAR). The PALSAR enabled day-and-night, all-weather land observation. It provided data in high resolution mode, standard mode, as well as ScanSAR mode, and also polarimetry data. The ALOS-2 is a follow-up mission launched in 2014. While the standard Beam mode has a resolution of 10 m, the Spotlight mode could reach a resolution of 1–3 m and the wide swath ScanSAR mode has a resolution of 100 m. The image swath width accordingly varies—Spotlight mode: 25 km, Stripmap mode: 50–70 km, ScanSAR mode: 350–490 km, and Polarimetry: 30–50 km.

9. ENVISAT-1

Envisat ("Environmental Satellite") was ESA's successor to ERS and was similar to ERS in terms of orbit and radar

wavelength (Fig. 15.12d; Table 15.2). It was launched in 2002 and delivered data till 2012. Envisat operated in a sun-synchronous, polar orbit, with an inclination of 98.5°, altitude of 777 km, orbital period of about 101 min and repeat cycle of 35 days. Envisat carried an instrument called the Advanced Synthetic Aperture Radar (ASAR), which was a C-band imaging radar (5.7 cm wavelength). It provided image data in standard Stripmap mode (30 m resolution) and wide ScanSAR mode (150 m resolution), with swaths of 100 km and 400 km respectively. The SAR had multiple polarisation modes (VV, HH, VV/HH, HV/HH, or VH/VV).

10. TerraSAR-X and TanDEM-X

TerraSAR-X is a public-private-partnership program between the German Aerospace Center (DLR) and EADS-Astrium (Fig. 15.12e; Table 15.2). It was launched in 2007 into a sun-synchronous dusk-dawn orbit at 514 km altitude. TanDEM-X is a second, very similar spacecraft launched in 2010 and placed in exactly the same orbit. TerraSAR-X and TanDEM-X fly in a close formation, located only a few hundred metres apart from each other, and record data synchronously. This unique twin satellite constellation allows the generation of a homogeneous global digital elevation model (DEM). The two satellites use X-band SAR (3.1 cm wavelength), have the capability to operate in Spotlight mode, Stripmap mode and ScanSAR mode, and provide data in quad polarisation modes (HH + HV + VH + VV).

11. RISAT-1

RISAT-1 (Radar Imaging Satellite-1) is an Indian remote sensing satellite, built and operated by the Indian Space Research Organisation (ISRO) (Fig. 15.12f; Table 15.2). It is placed in a 536 km nominal altitude, near-circular, sun-synchronous dawn-dusk equatorial crossing orbit, with an inclination of 97°. The launch of RISAT-1 came in 2012, a few years after that of RISAT-2 (2009), which carried an Israeli-built X-band radar for defence purposes. RISAT-1 uses a C-band Synthetic Aperture Radar (5.6 cm wavelength). The basic imaging modes of RISAT-1 include: two ScanSAR modes with 25 and 50 m resolution, and 115 and 223 km swath; and two fine resolution Stripmap modes with 3 and 9 m resolution and 25 km swath. Further, a high-resolution spotlight mode is provided as an experimental mode with 2 m resolution and 10 km swath.

12. KOMPSAT-5

Kompsat-5, also called Arirang-5, is a Korean Space Agency programme and was launched in 2013 into a 550 km high orbit with dusk-dawn equatorial crossing and a repeat cycle of 28 days. It uses an X-band SAR (3.1 cm wavelength) called "COSI". The sensor provides image data in a high resolution mode of 1 m resolution (Spotlight mode), standard mode of 3 m resolution (Stripmap mode) and wide swath mode of 20 m resolution (ScanSAR mode). All four polarisation modes (HH, HV, VH and VV) are provided.

Fig. 15.13 Time distribution of selected spaceborne SAR missions for the period 1990–2020 (source http://database.eohandbook.com/)

13. Copernicus Sentinel-1a, -1b, -1c, -1d

As a continuity of the ERS/Envisat programme, the European Space Agency has developed the Copernicus programme that includes a constellation of four satellites, called Sentinel-1a, -1b, -1c and -1d (Fig. 15.12g; Table 15.2). These are identical satellites in the same orbit. Sentinel-1a was launched in 2014 and Sentinel-1b in 2016. The two satellites carry a C-band Synthetic Aperture Radar (5.6 cm wavelength) and provide images in multiple (HH, VV, HH + HV, VV + VH) polarisation modes. The SAR can operate in Strip mode (9 m resolution, 80 km swath) and wide swath mode (50 m resolution and 400 km swath). Sentinel-1c and -1d are proposed for launch around 2019.

14. COSMO-SkyMed

The COnstellation of small Satellites for Mediterranean basin Observation (COSMO-SkyMed) is an Italian Space Agency programme that commenced in 2007 (Fig. 15.12h; Table 15.2). It has a set of four exactly similar satellites, placed in sun-synchronous polar orbits with a 97.9° inclination at a nominal altitude of 619 km, an orbital period of 97.2 min, a repeat cycle of 16 days, and orbiting with dusk-dawn equatorial crossing. The four satellites are phased in the same orbital plane, with COSMO-SkyMed 1, 2 and 4 at 90° to each other and COSMO-SkyMed 3 at 67.5° from COSMO-SkyMed 2. This results in varied intervals between the satellites along the same ground track. The sensor is an X-band (3.1 cm wavelength) imaging SAR, called SAR-2000. It can operate in Stripmap mode (resolution 3–15 m), Spotlight mode (resolution 1 m) and ScanSAR mode (resolution 30–100 m), and generate image data in a variety of polarisation modes (VV, HH, HV, VH, HH/HV + VV/VH).
Figure 15.13 shows the time distribution of selected SAR satellites for the period 1990–2020.

References

Craib KB (1972) Synthetic aperture SLAR systems and their application for regional resources analysis. Conf Earth Resources Observation and Information Analysis System in Remote Sensing of Earth Resources. Space Inst, Univ Tennessee, Tullahoma, pp 152–178
De Azevedo LHA (1971) Radar in the Amazon project Radam. Proc 7th Int Symp Remote Sensing of Environ, Ann Arbor, MI, pp 2303–2306
MacDonald HC (1969) Geologic evaluation of radar imagery from Darien Province, Panama. Mod Geol 1:1–63
Moreira A, Prats-Iraola P, Younis M, Krieger G, Hajnsek I, Papathanassiou KP (2013) A tutorial on synthetic aperture radar. IEEE Geosci Remote Sens Magazine 1–43. doi: 10.1109/MGRS.2013.2248301
O'Leary DW, Johnson GR, England AW (1983) Fracture detection by airborne microwave radiometry in parts of the Mississippi embayments, Missouri and Tennessee. Remote Sens Environ 13:509–523
Reigber A et al (2013) Very-high-resolution airborne synthetic aperture radar imaging: signal processing and applications. Proc IEEE 101(3):759–783
Schmugge T (1980) Techniques and applications of microwave radiometry. In: Siegel BS, Gillespie AR (eds) Remote sensing in geology. Wiley, New York, pp 337–361
Ulaby FT, Moore RK, Fung AK (1981) Microwave remote sensing—active and passive, vol I: Microwave remote sensing fundamentals and radiometry. Addison-Wesley, Reading
Ulaby FT, Moore RK, Fung AK (1982) Microwave remote sensing—active and passive, vol II: Radar remote sensing and surface scattering and emission theory. Addison-Wesley, Reading
Ulaby FT, Moore RK, Fung AK (1986) Microwave remote sensing—active and passive, vol III: From theory to applications. Artech House, Dedham, Mass
16 Interpretation of SAR Imagery

16.1 Introduction

The technique of SAR imaging and the various aerial and space-borne SAR sensors have been discussed in the preceding chapter. Briefly, the radar mounted on the base of the sensor platform looks sideways down on the Earth, transverse to the flight direction. It emits pulses of microwave energy, which illuminate long narrow strips on the ground. The back-scattered signal, sensed by the antenna, is recorded in order of arrival time. This yields a radar image.
The interpretation and geological applications of SAR data have been discussed by several authors (e.g. MacDonald 1969a, b, 1980; MacDonald and Waite 1973; Elachi 1980; Ford et al. 1983; Trevett 1986; Buchroithner and Granica 1997; Ford 1998; Woodhouse 2006). The interaction of microwave EM energy with matter is governed by the wave nature of light. Both geometrical characteristics (shape, roughness and surface orientation) and electrical properties (complex dielectric constant) are important. Broadly, the radar return is sensitive to decameter-scale changes in surface slope and centimeter-scale changes in surface roughness. In addition, the dielectric properties of the ground material (surface moisture and mineralogical composition) also influence the radar return.
The radar response opens up new avenues for discriminating and mapping Earth materials, as the radar signal provides a 'new look' at the ground. In this chapter we first discuss some SAR image characteristics, in order to become conversant with the terminology. This is followed by a discussion on factors affecting radar return, and then interpretation and geological application aspects of SAR sensing.

16.2 SAR Image Characteristics

16.2.1 Radiometric Characteristics

The backscatter at the radar, called radar return, is received by the antenna, amplified and recorded (Fig. 15.3). On an SAR image, the intensity of radar return is shown in shades of grey, such that areas of higher backscatter appear correspondingly brighter. The most common types of responses on an SAR image are the following (Figs. 16.1 and 16.2).

1. Diffused scattering. Most of the area on an SAR image is dominated by diffused scattering caused by rough ground surfaces and vegetation (leaves, twigs and branches). These objects are also called diffuse scatterers and produce intermediate radar return.
2. Clutter is the term used for an intermediate, rapidly varying noise-type of response, seen typically over the sea surface and vegetation; this may be a hindrance to locating surface phenomena.
3. Hard targets. Some very strong responses are often seen on a radar image and are said to form hard targets, such as metallic objects and corner reflectors. Metallic objects produce high radar returns due to their high dielectric constant (discussed later). Bridges, automobiles, power lines, railway tracks and all metallic objects are generally easily identified on radar images (Figs. 16.1 and 16.2). Some hard targets may contain several scattering centres, e.g. in a ship or in an industrial complex, and the resulting image may exhibit a cluster of hard targets.
4. Corner reflector effect is produced when the object has a rectangular shape, such as a vertical wall joining with the ground, or walls/roofs/ground at mutual right angles. This leads to echoes and high radar return (Fig. 16.3). Corner reflectors are formed typically by buildings, ships, sharply rising hills etc. Behind the corner reflector there often lies a shadow zone. As corner reflectors (CRs) appear prominently on SAR images, they may serve as GCPs and therefore have practical use in locating tie points for SAR image registration. For this purpose, natural or artificial CRs (whose coordinates are very well defined, also see Sect. 17.5) may be used.
5. Specular reflection. This effect is produced by smooth surfaces, such as a quiet water body, playa lake, tidal flat etc. In the case of specular reflection, the SAR beam is


Fig. 16.1 a Radar image (L-band, SIR-B image of a region in Ecuador) showing some typical features. b Radar response profile drawn along P–Q on the image. The features seen are: a forested hills (diffused scattering); b city (Guayaquil, which forms the chief port and largest city in Ecuador) (corner reflection); c river (Guayas river; specular reflection); d ship (corner reflection); e agricultural fields (rice fields; near-specular reflection); f grazing ground (diffused scattering) (image courtesy of C. Elachi and J.P. Ford, Jet Propulsion Laboratory)

Fig. 16.2 a SIR-A image of a


part of central India. b radar
response profile along X–Y.
Several typical features are seen
such as: a plateau land with sparse vegetation (low-intensity
diffused scattering); b river
(Mahanadi river; specular
reflection); c railway track
(metallic object); d vegetated
channel (diffused scattering); e, f
settlement and city (corner
reflection); g power transmission
line (metallic object; note the
corner reflection effect due to
transmission line towers, which
appear as small dots); h forested
hills (diffused scattering and
corner reflection) (image courtesy
of C. Elachi, Jet Propulsion
Laboratory)

Fig. 16.3 Corner reflection. a Mechanism of reflection of SAR waves straight back to the antenna (after Curran 1985); b aerial X-band SAR high-resolution imagery of Detroit, showing strong corner reflection effect; individual parking lot is clearly visible (b courtesy of MacDonald Dettwiler and Assoc. Ltd.)

reflected in a small angular zone given by Snell's Law, and there occurs little or no return at the antenna, resulting in dark (black) tone (Fig. 16.1). A special case could be that in a certain area there may be a sub-pixel planar surface oriented nearly perpendicular to the incident beam (near-zero local incidence angle); due to quasi-specular reflection, this would generate very high radar return for the pixel. Other special cases of specular reflection are 'sea echo' and 'no show' in the surrounding sea clutter. Sea echo is a peculiar high signal recorded on the sea surface, brought about by reflection from sea waves. Further, oil films on the sea surface have a dampening effect on the waves and may produce 'no show' in the surrounding sea clutter.

Fig. 16.4 a Layover,


foreshortening and radar
shadows. b Seasat SAR image
(18 August 1978) of parts of
Iceland showing strong layover
and foreshortening effects
(processed by DLR; courtesy of
K. Arnason) (AZ Azimuth
direction; L Look direction)

6. Radar shadow. Imaging radar is a system which illuminates the ground from one side. In this configuration, some areas may not receive any radar pulse if they are located behind obstacles such as hills, buildings etc. This results in radar shadows. The extent of the shadow zone depends on the height of the sensor-craft, look angle, and elevation of the corner reflector. In radar sensing, the shadow zone is typically black, in contrast to the case of VNIR images, where some skylight may still faintly illuminate the shadow zone. Radar shadows are highly dependent on illumination geometry and ground relief. Higher look angles result in a greater amount of shadow (Fig. 16.4a). At times, radar shadows are helpful in detecting subtle topographical features. In a highly rugged terrain, on the other hand, radar shadows may be so extensive that they may render the image quite unsuitable for interpretation and application.
7. Speckle. Speckle is a type of noise that accompanies data processing for higher azimuth resolution in SAR systems. It occurs due to the coherency of the radar signal and the presence of many tiny elemental scatterers with a random distribution within a resolution cell. It produces a type of fine texture on the SAR images, somewhat similar to that observed when a scene is illuminated with a fine laser beam. Processing for higher azimuth resolution leads to greater image speckle noise. The speckle noise can be reduced by applying an averaging or smoothing filter, or by multi-look processing, which in a way reduces the image resolution (a minimal numerical illustration follows).
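Multi-look averaging of this kind can be sketched in a few lines of code. The example below is only an illustration under simple assumptions (a hypothetical single-look intensity image with exponentially distributed speckle, averaged over 4 × 1 pixel blocks); it does not reproduce the processing chain of any particular SAR processor.

import numpy as np

def multilook(intensity, looks_az=4, looks_rg=1):
    """Average a SAR intensity image over looks_az x looks_rg blocks (multi-looking)."""
    rows = (intensity.shape[0] // looks_az) * looks_az
    cols = (intensity.shape[1] // looks_rg) * looks_rg
    img = intensity[:rows, :cols]
    img = img.reshape(rows // looks_az, looks_az, cols // looks_rg, looks_rg)
    return img.mean(axis=(1, 3))   # speckle variability drops as more looks are averaged

# Hypothetical single-look intensity image with multiplicative (exponential) speckle
rng = np.random.default_rng(0)
single_look = rng.exponential(scale=1.0, size=(1024, 1024))
four_look = multilook(single_look, looks_az=4, looks_rg=1)
print(single_look.std(), four_look.std())   # the multi-looked image shows reduced variability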

16.2.2 Geometric Characteristics

The geometry of radar imagery is fundamentally different from that of both types of optical sensor data (photographs and scanner imagery), owing to the basic difference that the location of objects on SAR images is based on distances rather than on angles. We mention here a few important types of geometric distortions which affect interpretation and application of SAR imagery (for details, see Leberl 1998).

1. Image displacement due to relief. The SAR system basically works on slant-range distances. If the ground is flat, the image carries a regular non-linear geometric distortion, which is not difficult to rectify. However, if the ground has uneven topography, the images of objects get displaced, exhibiting foreshortening and layover (Fig. 16.4).
Foreshortening is a universal phenomenon on all SAR images of undulating terrains. This distortion arises due to the fact that elevated points are displaced towards the sensor-craft (owing to the decreased slant-range distance from the sensor-craft), leading to relative shortening of all facets sloping towards the sensor-craft (Fig. 16.4a); this is called foreshortening. Complementarily, the hill facets sloping away are relatively extended.
Layover occurs in special situations when the slant-range distance to the top of a feature (e.g. a hilltop) is less than that to the base. In such a case, the top will be imaged earlier on the SAR image, and the base of the same topographic feature later. This is called layover (Fig. 16.4). This occurs when the terrain is high and rugged, topographical features rise sharply, and/or the look angle at the antenna is low. Layover renders physiographical conceptualization and interpretation difficult and geometric mapping altogether quite impossible.

Fig. 16.5 Scale distortion due to slant-range display. a Schematic showing that the ground distances D1 and D2, although equal, are represented on the image as S1 and S2 respectively, such that S1 < S2, which results in relative compression on the near-range side (after Sabins 1987). b Slant-range display image; note the compression of features on the near range (NR) side, in comparison to that on the far range (FR) side, resulting in curving of streets and rhombohedral fields (courtesy of Environmental Research Institute of Michigan)

2. Geometric distortion due to slant-range display. Some of the SAR systems use slant range as the reference distance for image display. Although a set of objects may be equally spaced on the ground, the radar time intervals (related to slant range between the antenna and the objects) for these objects are unequal. The use of slant range as the base distance results in a scale distortion such that a relative compression of imagery at near range (or expansion of imagery at far range; Fig. 16.5) occurs. To counter this type of distortion, ground range (which

can be computed from look angle) is used as the reference distance.
3. Look direction effects. It is clear that radar imagery is strongly direction dependent (e.g. Eppes and Rouse 1974). Owing to radar shadows, which get oriented parallel to the azimuth direction, features parallel to the azimuth direction are relatively enhanced and those parallel to the look direction are relatively suppressed. On an SAR image, the position, orientation and extent of look-direction effects (shadows, layover and foreshortening) depend upon illumination geometry—i.e. relief in the area, altitude of the sensor-craft, look angle and look direction. The same area covered from differently oriented SAR flights appears different (see Fig. 16.22 later). Due consideration must therefore be given to look direction effects while interpreting and mapping from a radar image.
4. Look angle effect. The geometry of an SAR image is also dependent upon look angle (see Fig. 16.21 later). In the case of airborne SAR, the geometry changes rapidly across the swath due to look angle variation. On the other hand, in space-borne radar imaging, the effect of look angle variation across the swath on image geometry may be minimal.
5. Platform instability. Additionally, some distortions may arise in SAR images due to sensor-platform instability. Space-craft are relatively more stable platforms and the images acquired from space possess better geometric fidelity than do those from aerial SAR systems.

Geometric Rectification

Raw SAR images possess geometric distortions and characteristics so peculiar in nature that they render geographic location/recognition, and also superimposition and conjunctive interpretation with other data sets (such as photographs, scanner images and other standard maps), quite difficult. Therefore, it is necessary to carry out digital processing of SAR image data for geometric rectification before any interpretation work can be taken up. This involves registration of SAR image data to a base image by using tie points (for image registration, see Sect. 10.4). In a rugged terrain, image rectification using a digital elevation model (DEM) provides a better method (Naraghi et al. 1983). Ortho-rectified radar image products may also be provided by various remote sensing agencies.

SAR Image Mosaic

SAR images are often used for large-scale regional surveys. For synoptic viewing, SAR images can be put together strip by strip to form a mosaic. However, as mentioned earlier, radar images are highly direction dependent. Therefore, for mosaicking, it is advisable to have the entire area covered with the same look direction. If the area is imaged from opposite look directions, then shadows in adjacent strips become oppositely oriented, which makes conceptualization of the terrain difficult; hill ranges in one strip may look like valleys in another, owing to oppositely cast shadows. If, on the other hand, a mosaic is generated from strips flown with the same look direction, the problem is that the far range of one strip falls adjacent to the near range of the next strip. This sudden change in look angle creates an artifact that has no relation to the terrain features. In view of the above, radar mosaics have found rather limited application.

16.3 SAR Stereoscopy and Radargrammetry

SAR Stereoscopy

Ideally, for stereo viewing, the two images in a stereo pair must be similar in thematic content and radiometry, and differ only in the geometry caused by the viewing perspective. However, any two SAR images of the same area, acquired from two different stations, would possess differences in geometry as well as in radiometry, caused by the variation in look angle and direction. The perception of relief in the mental model is related to the base/height (B/H) ratio, which governs vertical exaggeration (VE; see Sect. 7.2.2). In SAR sensing, if the area is covered from two adjacent parallel flights with the same look direction, the B/H ratio is small (Fig. 16.6a), giving a small value of VE. If, on the other hand, SAR images with opposite look directions are used (which certainly would increase the B/H ratio and therefore VE) (Fig. 16.6b), the images carry shadows in opposite directions;

Fig. 16.6 Flight arrangements for stereo coverage by SAR with a parallel look direction and b opposite look directions

Fig. 16.7 Stereo image pair from SRTM. The region covered is NW of Bhuj, India. The elevation model is derived from the SRTM, over which a Landsat-7 ETM+ image is draped to provide the land-cover information (printed in black-and-white from the colour image) (courtesy of NASA/JPL/NIMA)

the resulting sharp differences in radiometry render image fusion difficult for stereoscopy. Therefore, stereo viewing of SAR images has limitations.
Figure 16.7 presents an example of stereo radar images obtained from the SRTM mission (February 2000).

Radargrammetry

The technique of measurements from stereo radar images is termed radargrammetry or stereo radargrammetry. As raw/simple radar images have poor geometric accuracy, they were used in earlier times only for reconnaissance surveys. Now, with the availability of the global positioning system (GPS), differential GPS is used to locate the position of the imaging sensor at any instant of time. Using these data, stereo SAR images can be digitally combined to produce digital elevation models (see Sect. 8.2.6).

16.4 Radar Return

16.4.1 Radar Equation

The backscattered signal received at the SAR antenna is called radar return. The radar equation describes the dependence of radar return on various parameters, and is given as follows (see e.g. Lewis and Henderson 1998):

P_r = (P_t · G² · λ²) / ((4π)³ · R_s⁴) · σ        (16.1)

where
P_r = power received (or radar return)
P_t = power transmitted
G = antenna gain
λ = wavelength used
R_s = slant range distance between antenna and target
σ = effective back-scatter of the target, also called radar cross-section. It depends upon local incidence angle, surface roughness, complex dielectric constant, wavelength used and polarization. Local incidence angle, in turn, depends upon depression angle and terrain orientation. Surface roughness is also a function of wavelength and local relief. For all practical purposes, this is the single most important parameter influencing radar return in an image.

Thus the radar return is governed by a complex interplay of factors, which can be grouped into two main categories: radar system factors and terrain factors (Table 16.1).
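Equation (16.1) can be evaluated directly. The short sketch below uses purely hypothetical system values (peak power, gain and radar cross-section) to illustrate the fourth-power fall-off of the received power with slant range.

import math

def radar_return(p_t, gain, wavelength_m, slant_range_m, sigma):
    """Eq. (16.1): received power for given system parameters and radar cross-section."""
    return (p_t * gain**2 * wavelength_m**2 * sigma) / ((4.0 * math.pi)**3 * slant_range_m**4)

# Hypothetical values: 1 kW peak power, antenna gain of 1000, C-band (5.7 cm), sigma = 1 m^2
for rs in (10e3, 100e3, 800e3):
    print(rs, radar_return(1e3, 1e3, 0.057, rs, 1.0))
# Increasing the slant range by a factor of 10 reduces the return by a factor of 10^4,
# which is why the slant-range correction mentioned below is applied during pre-processing.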

Table 16.1 Factors affecting radar return

Factors | Primary variable | Secondary variable
Radar system factors | 1. Power transmitted | –
Radar system factors | 2. Antenna gain | –
Radar system factors | 3. Radar wavelength | –
Radar system factors | 4. Beam polarization | –
Radar system factors | 5. Look angle | –
Radar system factors | 6. Aspect angle | Look-direction and orientation of the general trend of the ground terrain
Radar system factors | 7. Slant range | Flying altitude, look angle and ground relief
Terrain factors | 1. Local angle of incidence | Local topography (surface slope—amount and direction), look direction, look angle, flying altitude
Terrain factors | 2. Surface roughness | Character of the ground material, radar wavelength, incidence angle
Terrain factors | 3. Complex dielectric constant | Type of rock/soil, vegetation, moisture content
Terrain factors | 4. Complex volume scattering coefficient | Object inhomogeneities, variation in dielectric property, radar wavelength, beam polarization

16.4.2 Radar System Factors

1. Power transmitted. The magnitude of power transmitted (P_t) directly affects radar return. Generally, radar flying at a higher altitude has to transmit a greater amount of power than one flying at a lower altitude; however, during a particular investigation, P_t can be taken as constant.

2. Antenna gain. Antenna gain (G) is a measure of current losses within the antenna material and is constant for a particular operation.
3. Radar wavelength. The radar return is directly related to the wavelength at which it operates. A particular SAR imagery is acquired at a fixed wavelength and therefore this factor is constant. The various radar wavelengths used are listed in Table 15.1. The magnitude of the wavelength has to be taken into account while interpreting the imagery, as it primarily governs whether the surface would behave as a rough or a smooth surface (see Fig. 16.13 later). Further, smaller wavelengths are more prone to interaction with surficial features such as leaves, twigs and top soil, and carry greater information about surface roughness. On the other hand, larger wavelengths penetrate deeper, their signal carrying more information about the ground below the top cover of vegetation and soil.
4. Beam polarization. The plane of vibration of the electrical field vector defines the plane of polarization of the EM energy wave. The back-scattered beam largely retains its polarization. However, a part of the beam may become depolarized after reflection, i.e. it may become polarized in some other plane. It is observed that depolarization depends on several factors, such as surface roughness, vegetation, leaf size, orientation etc. Based on earlier studies (Dellwig and Moore 1966; Ulaby et al. 1982; Evans et al. 1986; Zebker et al. 1987), it can be generalized that: (a) volume scattering promotes depolarization whereas direct surface scattering does not, (b) a higher degree of small-scale surface roughness, e.g. inhomogeneous soil or grass cover, leads to greater depolarization, and (c) the orientation of features such as leaves etc. also appears to play a role in depolarization (also see Sect. 16.6).
Part of the depolarized wave-train can also be uniquely picked up by an antenna. In all, four polarization combinations can be used:

• Horizontal transmit–Horizontal receive (H–H)
• Horizontal transmit–Vertical receive (H–V)
• Vertical transmit–Vertical receive (V–V)
• Vertical transmit–Horizontal receive (V–H).

Figure 16.8 gives an example of like- and cross-polarized SAR images.

Fig. 16.8 Multi-polarization (a HH, b VV, and c HV) SAR images of an agricultural field; the image set shows differences in ground response as
a function of SAR polarization (courtesy: Canada Center of Remote Sensing, Natural Resources Canada)

5. Look angle. Look angle may vary from 0° to 90°. Commonly, in aerial surveys, the range of look angle is 60°–85°. In space-borne SAR, as the altitude is high (about 300–800 km), lower look angles are generally used (see Table 15.2). The look angle influences radar return as it governs the angle of incidence. A lower look angle carries greater information about the features and their slope, but carries geometric distortion. As mentioned later, look angle is more important in lithologic discrimination.
6. Aspect angle. Look direction compared with the general terrain orientation is considered as the aspect angle, and influences radar return (MacDonald 1969b; Eppes and Rouse 1974). The same ground terrain may appear quite different in terms of radar return when imaged from different directions (see Fig. 16.22 later). Aspect angle has to be duly considered while interpreting SAR images and comparing a radar image with other images.
7. Slant-range. The slant-range distance depends on the altitude of the sensor-craft and the look angle. For aerial surveys, the typical slant-range distance is on the order of 5–20 km. For space-borne surveys, it may be on the order of 250–800 km. The radar return intensity decreases as the fourth power of slant range. Correction for variation in slant range is generally applied as a radiometric correction during pre-processing of the data.

16.4.3 Terrain Factors

1. Local angle of incidence. Local angle of incidence (the angle between the direction of the incident beam and the local normal to the surface) is one of the most important factors in radar return. A small incident angle leads to higher back-scatter (Fig. 16.9a). As the incident angle increases, the amount of back-scattered energy at the radar antenna decreases (Fig. 16.9b, c). When the incident angle is 90°, the incident beam 'grazes' the plane and there is no radar return. Consequently, in general, if the slope is towards the sensor-craft, the radar return is high, and if it is sloping away the radar return is low.

On a very rough surface, e.g. a densely vegetated surface, the effect of surface roughness may be so intense that variation in topographical slope may not be of any consequence, and at incidence angles >30° the back-scattering coefficient may be almost constant.

2. Surface roughness. Roughness of the surface on which the radar beam is incident is very important. It may be considered as the statistical variation of the random component in surface height relative to a certain reference surface. The reference surface could be a mean surface or could itself have a larger-wavelength periodic wave, such as the pattern of a row-tilled field (Fig. 16.10).

If the object surface is nearly smooth, the energy is totally reflected in a small angular region, and if the surface is rough, the energy is scattered in various directions (Fig. 16.11a, b). Therefore, for rough surfaces, the intensity of the radar return is nearly the same, quite irrespective of the look angle (Fig. 16.11d). On the other hand, for smooth surfaces, the radar return is high at low look angles and falls sharply with increasing look angle (Fig. 16.11c).

Fig. 16.9 Variation in incidence angle caused by local topography, the orientation of the incoming radar beam being held constant. a Low incidence angle; b moderate incidence angle; c high incidence angle. This variation results in corresponding differences in radar return

Fig. 16.10 Surface roughness, as conceived due to height variations. Random height variations superimposed on a periodic surface and that over a flat mean surface (after JPL 1986)

Fig. 16.11 Radar reflection mechanism on a smooth and b rough surfaces; radar return for c smooth and d rough surfaces, as a function of look angle

All gradations occur between the two extremes of perfectly smooth and completely rough surfaces.
Ground surfaces of the same roughness may behave as
rough or smooth, depending upon the radar wavelength and
look angle, and accordingly influence radar return. There-
fore, the same object may appear dark or bright on different
SAR images. For example, a field could be rough (bright) for
an X-band radar, but quite smooth (dark) for a P-band radar
(Fig. 16.12).
A number of models have been proposed to explain the
radar scattering in terms of ground surface roughness, and
have been reviewed by Ulaby et al. (1982). Criteria to
quantify smoothness or roughness of a surface have been
developed by different workers. One such criterion is the
Rayleigh criterion, which classifies a surface as rough if the root mean square of surface roughness (hrms) has the following relation:

hrms > λ / (8 cos θ)   (16.2)

where λ is the wavelength and θ is the incidence angle. Peak and Oliver (1971) modified the above Rayleigh criterion and proposed the following norms. The surface is

smooth: if hrms < λ / (25 cos θ)   (16.3)

intermediate: if λ / (25 cos θ) < hrms < λ / (4.4 cos θ)   (16.4)

rough: if hrms > λ / (4.4 cos θ)   (16.5)

Fig. 16.12 Dependence of radar return on ground surface roughness in relation to radar wavelength. The figure shows multi-wavelength (a X-band, 3-cm wavelength, HH; and b P-band, 72-cm wavelength, HH) aerial SAR images. Agricultural fields (crops) appear as diffuse reflector on the X-band and as specular reflector on the P-band (courtesy of Aerosensing Radarsysteme GmbH)

For various sensor configurations in space missions, the limits for rough/intermediate and intermediate/smooth are

schematically presented in Fig. 16.13, based on the above criteria of Peak and Oliver (op. cit.). It is obvious that the same surface may behave as rough or intermediate or smooth, depending upon the radar wavelength and incidence angle. This opens up new avenues for mapping terrain features by using multiple-band SARs.

3. Complex dielectric constant (d). As mentioned earlier, both geometrical characteristics and electrical properties govern the interaction of EM energy with matter. The most important electrical property of matter relevant in transmission and back-scattering of EM radiation is its spectral complex dielectric property (see Table 2.2). Objects with lower dielectric constant reflect less energy than those with higher dielectric constant, other factors remaining the same. The reason for this is that a lower dielectric constant permits greater depth penetration, and as the energy travels through a larger volume of material, surface reflection becomes less. On the other hand, when objects possess higher dielectric constant, the energy gets confined to the top surface layers and back-scattering is higher.

At radar wavelengths, it is found that most dry rocks and soils have a complex dielectric constant of the order of barely 3–8, whereas water has the value of 80. Any increase in moisture content of soils/rocks results in a corresponding increase in dielectric constant of the mixture (e.g. Ulaby et al. 1983; Wang et al. 1982). For this reason, wet areas have a higher back-scattering coefficient and appear brighter on radar images (Fig. 16.14a, b).

Fig. 16.13 Rough/intermediate and intermediate/smooth criteria limits for selected spaceborne SAR configurations. The rough/intermediate limit corresponds to hrms = λ/(4.4 cos θ) and the intermediate/smooth to hrms = λ/(25 cos θ). The surface relief is also plotted in cm; λ = wavelength of the imaging radar; θ = incidence angle
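The Peak and Oliver limits of Eqs. (16.3)–(16.5), which Fig. 16.13 plots for selected spaceborne SARs, translate into a few lines of code. The following minimal Python sketch (the function name and the sample values are illustrative assumptions, not taken from the text) classifies a surface as smooth, intermediate or rough for a given wavelength and incidence angle:

import math

def roughness_class(h_rms_cm, wavelength_cm, incidence_deg):
    """Classify surface roughness after Peak and Oliver (1971), Eqs. (16.3)-(16.5)."""
    cos_t = math.cos(math.radians(incidence_deg))
    smooth_limit = wavelength_cm / (25.0 * cos_t)   # intermediate/smooth boundary
    rough_limit = wavelength_cm / (4.4 * cos_t)     # rough/intermediate boundary
    if h_rms_cm < smooth_limit:
        return "smooth"
    if h_rms_cm > rough_limit:
        return "rough"
    return "intermediate"

# The same field (h_rms = 2 cm, incidence 35 deg) behaves differently at different bands:
print(roughness_class(2.0, 3.0, 35.0))    # X-band (3 cm)  -> rough
print(roughness_class(2.0, 72.0, 35.0))   # P-band (72 cm) -> smooth

With these sample numbers, the same surface comes out rough at X-band but smooth at P-band, in line with the behaviour illustrated in Fig. 16.12.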

Fig. 16.14 a Increase in radar back-scattering coefficient with rise in soil moisture (redrawn after Bernard et al. 1984). b ERS-1 image showing
high back-scatter (middle right of the image) due to a local storm (b Copyright © ESA; courtesy of Canada Centre of Remote Sensing)

Fig. 16.15 a Surface scattering—an ideal case when the energy is incident on a homogeneous medium; the energy is partly scattered on the surface and partly transmitted; the transmitted ray simply travels further without any subsequent scattering, owing to the homogeneity of the material. b Surface scattering of microwave energy over water body; owing to high dielectric constant, nearly all the energy is scattered from the top surface. c Volume scattering, the incident energy is partly transmitted, and is further scattered owing to the inhomogeneity of the medium. d Volume scattering from vegetation; the energy suffers multiple reflections from crown, twigs, leaves, stem and the ground

4. Complex volume-scattering coefficient. The concepts of surface and volume scattering were discussed earlier in Sect. 2.4. When a beam of EM energy is incident on a surface, a part of the energy becomes reflected/scattered on the surface, called surface scattering, and a part is transmitted into the medium. If the material is homogeneous, the wave is simply transmitted (Fig. 16.15a). If, on the other hand, the material is inhomogeneous (layering, variation in textural and compositional characters, moisture etc.), the transmitted energy is further scattered (volume scattering) (Fig. 16.15c). A part of the volume-scattered energy may reach the sensor, carrying information about the subsurface/under-cover conditions. In nature, both surface and volume scattering occur concurrently, differing only in relative magnitude in different cases. The complex volume-scattering

coefficient is a function of numerous variables such as wavelength, polarization of the incident beam, dielectric properties of the medium and its physical configuration.

Microwave energy interactions over water bodies are marked by predominantly surface scattering, owing to a very high complex dielectric constant (Fig. 16.15b), whereas vegetation cover is characterized by volume scattering (Fig. 16.15d). The multiple reflections from twigs, branches, leaves etc. resulting from volume scattering not only affect the intensity of the back-scattered signal but also cause depolarization. Volume scattering thus leads to a higher response on cross-polarized images. These peculiar radar responses may permit discrimination of vegetation types and densities.

16.5 Processing of SAR Image Data

Simple SAR images are of limited value for interpretation. In areas of moderate to high relief, radiometric correction for topographic effects, to account for local change in incidence angle, is necessary before attempting thematic mapping from SAR data. Usually a transformation is carried out involving a cosine function of the local incidence angle at each pixel. A prerequisite for this correction, therefore, is the availability of a DEM, over which the SAR image can be registered. The effect of variable local incidence angle can also be reduced by ratioing different radar-band images acquired at the same time (Ranson and Sun 1994).

Speckle is a noise which affects the interpretability of SAR image data. Speckle reduction (through image filtering) may be carried out before attempting interpretation and digital classification of radar image data. More advanced and dedicated radar image digital analysis aspects include texture extraction and image segmentation (see e.g. Ranson and Sun 1994; Woodhouse 2006; Moreira et al. 2013).

The radar image data can be processed and combined with other data sets in several ways using digital image processing techniques.

1. Radar image data can be split into two components using a digital filter; the low-frequency component, possibly related to lithological contrast, and the high-frequency component, related to vegetation and sloping targets. These two components can be coded in different colours, or in an IHS scheme, which may lead to a better image interpretability (Daily 1983).
2. Radar data can be combined with other data, for example from the VNIR and TIR regions, or geodata such as geophysical data etc., in order to enhance geoscientific information (e.g. Croft et al. 1993).
3. Another interesting approach is to combine multi-channel and/or multi-polarization radar images in different colours. As different radar images with different bands and polarizations are sensitive to ground surface features of different dimensions, they collectively bring out greater geological–geomorphological detail. Figure 16.16 presents an example of the coding of multiple SAR images in RGB.
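As a rough illustration of the cosine-type topographic correction described at the beginning of Sect. 16.5, the short Python sketch below normalizes the backscatter of each pixel by the local incidence angle taken from a co-registered DEM. It is a minimal sketch only; the array names, the reference angle and the simple cosine law are assumptions, and operational processors use more refined correction models.

import numpy as np

def cosine_correction(sigma0, local_incidence_deg, reference_incidence_deg=35.0):
    """Scale backscatter so slopes facing or away from the sensor are normalized."""
    theta_local = np.radians(local_incidence_deg)    # per-pixel angle from a registered DEM
    theta_ref = np.radians(reference_incidence_deg)  # flat-terrain reference angle
    return sigma0 * (np.cos(theta_ref) / np.cos(theta_local))

# sigma0 and local_incidence_deg are 2-D arrays of identical shape,
# e.g. a calibrated SAR backscatter scene and angles derived from a co-registered DEM.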

Fig. 16.16 Radar image showing landscape in the Hoei Range of north-central Thailand; the plateau terrain has been dissected by fluvial erosion; note the fine morphological details brought out by combining multiple SAR images (SIR-C image; colour coding: R L-band HH, G L-band HV, B C-band HV) (courtesy of NASA/JPL)
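A colour composite of the kind shown in Fig. 16.16 can be generated by assigning co-registered SAR channels to the red, green and blue guns. The following Python sketch is illustrative only; the channel names follow the colour coding of Fig. 16.16, while the percentile stretch and function names are assumptions:

import numpy as np

def stretch(band, low=2, high=98):
    """Linear percentile stretch of one SAR amplitude channel to the 0-1 range."""
    lo, hi = np.percentile(band, (low, high))
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

def rgb_composite(l_hh, l_hv, c_hv):
    """Code three co-registered channels as R, G, B (cf. Fig. 16.16: R L-HH, G L-HV, B C-HV)."""
    return np.dstack([stretch(l_hh), stretch(l_hv), stretch(c_hv)])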

16.6 SAR Polarimetry and Tomography

SAR polarimetry deals with the study of the polarization pattern of radar backscatter. It has been mentioned earlier that scattering brings about a change in the electric field polarization of the EM wave. The change in polarization depends upon such factors as scattering mechanism, diversity of scatterers, orientation and shape of scattering elements, and multiple bouncing.

During the last 10–15 years, considerable progress has been made in the field of SAR polarimetry and tomography that can lead to information on volume scatterers and holographic tomography (Reigber and Moreira 2000; Papathanassiou and Cloude 2001; Fornaro et al. 2012; Moreira et al. 2013). The basic principle is that it is possible to define the scattering matrix by using multi-polarization SAR systems. The SAR signal is transmitted in two orthogonal polarizations on a pulse-to-pulse basis, and the received signal is measured in all four polarizations (HH + HV + VV + VH)—all collectively, i.e. quad polarization. This measurement leads to modelling of scatterers. Thus, SAR polarimetry has become a powerful technique for the derivation of qualitative and quantitative physical information on man-made and natural scatterers, with applications in forestry, land, snow and ice, ocean and urban architecture.

SAR tomography is another technological advancement that has been achieved during the last decade (Fornaro et al. 2012). It allows retrieval of the whole vertical distribution of the scatterers by using multiple passes of the SAR sensor over the same area but at different angles and positions. This is known as SAR tomography and has found specific applications in forestry and vegetation studies. The tomograms, in such cases, can be used, for example, for the estimation of structural parameters of trees and vegetation, delineation of topography underneath the foliage, and detection of objects hidden beneath the foliage.

16.7 Field Data (Ground Truth)

In a SAR image interpretation study, the field data collected is of two types:

1. Corner reflector
2. Scatterometer.

16.7.1 Corner Reflectors (CRs)

Corner reflectors are metallic tetrahedral bodies that are planted at well-known fixed positions in the field, during an aerial/space-borne SAR survey. As CRs are uniquely picked up on the SAR image data and their geographic position is well defined (to within centimetre accuracy), their data are used for geometric rectification of the SAR image data (see Sect. 17.5).

16.7.2 Scatterometers

Scatterometers are used to collect field data on back-scattering coefficients of various types of natural surfaces. They can be operated at various wavelengths and look angles. Usually the field scatterometer data are presented as curves/profiles. For example, Fig. 16.17a shows a simple scatterometer data plot at a fixed wavelength and varying look angle; Fig. 16.17b shows a data plot at a fixed look angle and variable wavelength. The scatterometer data can help in understanding the behaviour of various natural surfaces at different wavelengths and look angles, which can be useful for SAR image interpretation and also designing improved SAR sensors.

Fig. 16.17 a Field scatterometer data plot at a fixed wavelength (L-band 23.5 cm) and variable look angle; note that pahoehoe lava and forest are quite similar to each other except at look angles of around 10°. b Field scatterometer data plot at a fixed look angle (45°) but variable wavelength; note that the basalt flow and coniferous forest are quite similar except at a wavelength of 2.25 cm (a, b simplified after Ford 1998)
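Where quad-polarization data are available, the measurement at each pixel can be arranged as the 2 × 2 complex scattering matrix mentioned in Sect. 16.6. The sketch below (variable and function names are assumptions) assembles the matrix and computes the total backscattered power, often termed the span, as a first summary quantity:

import numpy as np

def scattering_matrix(s_hh, s_hv, s_vh, s_vv):
    """Assemble the 2 x 2 complex scattering matrix [S] for one pixel."""
    return np.array([[s_hh, s_hv],
                     [s_vh, s_vv]], dtype=complex)

def span(S):
    """Total power = |S_HH|^2 + |S_HV|^2 + |S_VH|^2 + |S_VV|^2."""
    return float(np.sum(np.abs(S) ** 2))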

16.8 Interpretation and Scope for Geological Applications

For image interpretation, the well-known elements of photo interpretation are applied to SAR images. These elements are tone, texture, pattern, shape, size, shadow, and association and convergence of evidence. Special advantages of SAR sensing for geological applications stem from the fact that the SAR return is influenced by the following:

(a) surface geometry
– decametre-scale changes in surface slope
– centimetre-scale changes in surface roughness
(b) complex dielectric properties
(c) illumination geometry.

Therefore, SAR return provides information on:

• topography
• orientation of features in space
• surface roughness of the ground
• vegetation
• soil moisture
• drainage
• ground inhomogeneity
• metallic objects etc.

1. Geomorphology. SAR images are of value in regional geomorphological studies, owing to the fact that minor details are suppressed on SAR images. However, due to the fact that SAR images provide ‘oblique views’, cartographic application is possible only from ortho-rectified radar images.
(a) Relief. General terrain relief/ruggedness is a very important type of information that can be obtained from SAR images. The trend of physiographical features, i.e. hills and valleys, local relief, and look angle govern the manifestation of shadows and relief impression. In most cases, physiographical features run parallel to the structural trend or grain. If azimuth lines are aligned parallel to the physiographical trend, then alignment of hills (corner reflectors) and valleys (shadows) takes place perpendicular to the look direction, and this enhances manifestation of topographical features on radar images. Further, it should be noted that in areas of high relief, substantial areas are likely to remain unilluminated at high look angles, and strong geometric distortions would occur at low look angles. On the other hand, in regions of low relief, subtle relief impression on radar images may be revealed by high look angle radar imaging.
(b) Slope. For detailed interpretation of radar images, it is necessary to consider slopes and individual topographical facets. Based on the look angle, shadows and back-scatter at individual topographical facets, it may be possible to distinguish various slopes in terms of gradient (steep or gentle) and direction of slope (forward sloping or backward sloping).
(c) Drainage. Study of the drainage network is often the first step in radar image interpretation. Types of drainage patterns and their relation to geology are well known (Table 16.1). Drainage channels are generally black to dark on radar images, as a water surface leads to specular reflection and there is no radar return at the antenna (Fig. 16.1). However, in semi-arid climates valley bases are often vegetated and appear bright on radar images (Fig. 16.2).
2. Structure. Radar images are found to be very useful for structural studies. Planar features such as bedding and foliation planes are often well manifested, in terms of variation in topography, relief, shadows, vegetation and soil. Extension of geological contacts can give the strike trend, and regional structures such as fold closures can be mapped (Fig. 16.18; also see Fig. 19.23). Further, in a sedimentary terrain, morphological characteristics such as dip slope, slope-asymmetry, flatirons and V-shaped patterns of outcrop develop, and can be observed on radar images. The manifestation and pattern of these features on SAR images is related to illumination geometry, as illustrated in Fig. 16.19.
In an area of low relief, higher look angle leads to shadows which may enhance outcrop pattern and structure. On the other hand, if relief is high, then low-look angle SAR data suffer from greater geometric distortions and high-look angle SAR data would have large shadows.

Fig. 16.18 Aerial SAR image of a structurally deformed terrain (Appalachian Mountains). X-band, swath width about 50 km (courtesy of Intera Technologies Ltd.)

Fig. 16.19 Influence of look angle and direction on representation of a dip slope on SAR images. In a a dip slope and four different SAR imaging positions I, II, III and IV are shown. b I–IV show the resulting images (modified after Koopmans 1983)

Lineaments are extremely well manifested on SAR images, generally much better than on the corresponding VNIR images and photographs (Fig. 16.20). In addition, several instances have been reported where structural features such as faults have been detected, extended or suspected, based on the SAR image data (see e.g. Berlin et al. 1980; Sabins 1983).
It has been mentioned that look direction has a tremendous effect on the response and manifestation of features on SAR images; Figs. 16.21 and 16.22 illustrate the role of look angle and direction respectively. Due consideration must be given to this fact while interpreting the relative dominance of SAR lineaments. This phenomenon can also be used to locate, detect, or confirm suspected lineaments through specifically oriented flight lines. On the other hand, look direction is not relevant for studying features lacking directional character, such as natural forests and floods.
3. Lithology. On radar images, unique and direct lithologic identification is not possible. Interpretation for lithology must be based on indirect criteria such as surface roughness, vegetation, soil moisture, drainage, relief, geomorphological features (sinkholes, flow structures etc.) and special features, contacts, etc. The radar image texture is also an important parameter in radar geology. Further, investigations have revealed good correlations between radar return and surface roughness of different types of lithologies (see e.g. Schaber et al. 1976). Based on scatterometer data, Blom et al. (1987) inferred that for separation of various lava flows, shorter wavelengths and smaller incidence angles are best; on the other hand, for sedimentary rocks, longer wavelengths and somewhat larger incidence angles are preferred. Figure 19.32 shows an example of a SEASAT image of Iceland, where lava flows of different ages can be discriminated owing to surface roughness differences; the older lava surfaces are smoother due to weathering and infilling with time than the younger lava surfaces.
Further, it has generally been found that SAR images with low look angle possess higher sensitivity to slope changes and surface roughness, which is helpful in lithologic discrimination (see e.g. Ford 1998).
4. Soil moisture. In many investigations, soil moisture is a critical parameter. It has been mentioned that microwave response is governed by the complex dielectric constant (d) of objects and that water has a very high d, as compared to dry soil and rock. When moisture content in soil or rock is increased, a regular increase in d of the mixture takes place (Fig. 16.14). This can be used to map soil moisture variation on the ground (Ulaby et al. 1983).
5. Depth penetration. Yet another important aspect in the context of geological application is the depth penetration. Depth penetration increases with longer wavelengths as shorter wavelengths undergo a sort of ‘skinning effect’. A number of conditions are necessary for depth

penetration by SAR, and all of them must be fulfilled simultaneously (Blom et al. 1984; Ford 1998):
(a) The surface to be penetrated must be radar smooth.
(b) The cover to be penetrated must be extremely dry
(moisture <1%), homogeneous, fine grained, and not
too thick (up to two metres of penetration have been
documented and up to six metres may be possible).
(c) The subsurface layer should be rough enough to
produce backscatter to form an image.
Such conditions exist in aeolian sand sheets in
desertic conditions. Further, from a single radar
image, it is not possible to decipher whether features
observed on a particular SAR image are surface or
subsurface. Therefore, complementary VNIR/SWIR/
TIR image data are necessary for unambiguous
interpretation.
6. Sea-bottom features. Lastly, some signatures of bottom
features on radar images of the sea have also been
reported in both shallow and deep water bodies (De Loor
1981; Kasischke et al. 1983; see Fig. 16.23). As water
has a high dielectric constant at microwave frequencies,
the microwaves cannot penetrate the water surface; it is
considered that it must be the expression of the water
surface ‘morphology’ (Bragg scattering of small gravity
waves, currents, influencing wave pattern etc.) which
‘reflects’ the bottom topography on radar images.

Future:

The future in imaging SAR seems to be in various directions, such as the following (Moreira et al. 2013):

• Multi-channel polarimetry,
• Multi-frequency imaging,
• Further improving the range and azimuth resolutions,
• Ultra-wide swath imaging, and
• SAR tomography.

Fig. 16.20 The structural details on radar imagery are better picked up due to microrelief than on the corresponding VNIR images and photographs. a X-band radar imagery of an area in Bahia, Brazil and b aerial photograph of the same area (a, b Courtesy of A.J. Pedreira)

Fig. 16.21 Effect of look angle on radar imagery. The three SAR images (a–c) of a test site in Italy have been acquired with a similar look direction but varying look angle: a 70°, b 45°, c 20°. Note the differences in manifestation of structural features on the three images (a–c Courtesy of Institute of Digital Image Processing and Graphics, Graz)

Fig. 16.22 Effect of look direction on SAR imagery. The three radar images (a, b, c) of a test site in Italy have been acquired from different look directions, indicated by arrows on the images. In each image, note that the linear features aligned parallel to the look direction are relatively suppressed and those aligned perpendicular to the look direction are enhanced (a–c Courtesy of Institute of Digital Image Processing and Graphics, Graz)

Fig. 16.23 Seasat image showing ocean features in the Gulf Stream, Western Atlantic Ocean (courtesy of MacDonald Dettwiler and Assoc. Ltd.)

References

Berlin GL, Schaber GG, Horstman KC (1980) Possible fault detection in Cottonball Basin, California: an application of radar remote sensing. Remote Sens Environ 10:33–42
Bernard R, Taconet O, Vidal-Madjar D, Thony JL, Vauclin M, Chapoton A, Wattrelot F, Lebrun A (1984) Comparison of three in-situ surface soil moisture measurements and application to C-band scatterometer calibration. IEEE Trans Geosci Remote Sens GE-22(4):388–394
Blom RG, Crippen RE, Elachi C (1984) Detection of subsurface features in SEASAT radar images of Means Valley, Mojave Desert, California. Geology 12:346–349
Blom RG, Schenck LR, Alley RE (1987) What are the best radar wavelengths, incidence angles, and polarization for discrimination among lava flows and sedimentary rocks? A statistical approach. IEEE Trans Geosci Remote Sens GE-25(2):208–212
Buchroithner MF, Granica K (1997) Applications of imaging radar in hydro-geological disaster management—a review. Remote Sens Rev 16:1–134
Croft FC, Faust NL, Holcomb DW (1993) Merging of radar and VIS/IR imagery. In: Proceedings of 9th thematic conference on geologic remote sensing, Pasadena, CA, 8–11 February, pp 379–381
Curran PJ (1985) Principles of remote sensing. Longman, London
Daily MI (1983) Hue-saturation-intensity split-spectrum processing of Seasat radar imagery. Photogramm Eng Remote Sens 49:349–355
De Loor GP (1981) The observation of tidal parameters, currents and bathymetry with SLAR imagery of the sea. IEEE J Oceanic Eng 6:124–129
Dellwig LF, Moore RK (1966) The geological value of simultaneously produced like- and cross-polarized radar imagery. J Geophys Res 71:3597–3601
Elachi C (1980) Spaceborne imaging radar: geologic and oceanographic applications. Science 209(4461):1073–1082
Eppes TA, Rouse JW Jr (1974) Viewing-angle effects in radar images. Photogramm Eng 40:169–173
Evans DL, Farr TG, Ford JP, Thompson TW, Werner CL (1986) Multipolarization radar images for geologic mapping and vegetation discrimination. IEEE Trans Geosci Remote Sens 24:246–257
Ford JP (1998) Radar geology. In: Henderson FM, Lewis AJ (eds) Principles and applications of imaging radar. Manual of remote sensing, 3rd edn, vol 2. Wiley, New York, pp 511–565
Ford JP, Cimino JB, Elachi C (1983) Space shuttle Columbia views the world with imaging radar: the SIR-A experiment. Jet Propulsion Lab Publication No 82–95, Pasadena, CA, 179 p
Fornaro G, Pauciullo A, Reale D, Zhu X, Bamler R (2012) SAR tomography: an advanced tool for 4D spaceborne radar scanning with application to imaging and monitoring of cities and single buildings. IEEE Geosci Remote Sens Newslett 10–18
Kasischke ES, Schuchman AR, Lyzenga RD, Meadows AG (1983) Detection of bottom features on Seasat synthetic aperture radar imagery. Photogramm Eng Remote Sens 49:1341–1353
Koopmans BN (1983) Spaceborne imaging radars: present and future. ITC J 3:223–231
Leberl FW (1998) Radargrammetry. In: Henderson FM, Lewis AJ (eds) Principles and applications of imaging radar. Manual of remote sensing, 3rd edn, vol 2. Wiley, New York, pp 183–269

Lewis AJ, Henderson FM (1998) Radar fundamentals: the geoscience perspective. In: Henderson FM, Lewis AJ (eds) Principles and applications of imaging radar. Manual of remote sensing, 3rd edn, vol 2. Wiley, New York, pp 131–181
MacDonald HC (1969a) Geologic evaluation of radar imagery from Darien Province, Panama. Mod Geol 1:1–63
MacDonald HC (1969b) The influence of radar look direction on the detection of selected geologic features. In: Proceedings of 6th international symposium on remote sensing of environment, Ann Arbor, MI, pp 637–650
MacDonald HC (1980) Techniques and applications of imaging radars. In: Siegal BS, Gillespie AR (eds) Remote sensing in geology. Wiley, New York, pp 297–336
MacDonald HC, Waite WP (1973) Imaging radars provide terrain texture and roughness parameters in semi-arid environments. Mod Geol 4:145–158
Moreira A, Prats-Iraola P, Younis M, Krieger G, Hajnsek I, Papathanassiou KP (2013) A tutorial on synthetic aperture radar. IEEE Geosci Remote Sens Mag, pp 1–43. doi:10.1109/MGRS.2013.2248301
Naraghi M, Stromberg W, Daily M (1983) Geometric rectification of radar imagery using digital elevation models. Photogramm Eng Remote Sens 49:195–199
Papathanassiou KP, Cloude SR (2001) Single-baseline polarimetric SAR interferometry. IEEE Trans Geosci Remote Sens 39(11):2352–2363
Peak WH, Oliver TC (1971) The response of terrestrial surfaces at microwave frequencies. Ohio State Univ ElectroScience Lab 2440-7, Tech Rep AFAL-TR-70-301, p 255
Ranson KJ, Sun G (1994) Northern forest classification using temporal multifrequency and multipolarization SAR images. Remote Sens Environ 47(2):142–153
Reigber A, Moreira A (2000) First demonstration of airborne SAR tomography using multibaseline L-band data. IEEE Trans Geosci Remote Sens 38(5):2142–2152
Sabins FF Jr (1983) Geologic interpretation of space shuttle radar images of Indonesia. Am Assoc Petrol Geol Bull 67:2076–2099
Sabins FF Jr (1987) Remote sensing principles and interpretation, 2nd edn. Freeman, San Francisco, 449 pp
Schaber GG, Berlin GL, Brown WE Jr (1976) Variations in surface roughness within Death Valley, California: geologic evaluation of 25-cm wavelength radar images. Geol Soc Am Bull 87:29–41
Trevett JW (1986) Imaging radar for resources surveys. Chapman and Hall, London, p 313
Ulaby FT, Moore RK, Fung AK (1982) Microwave remote sensing—active and passive, vol II: Radar remote sensing and surface scattering and emission theory. Addison-Wesley, Reading, USA
Ulaby FT, Brisco B, Dobson MC (1983) Improved spatial mapping of rainfall events with spaceborne SAR imagery. IEEE Trans Geosci Remote Sens GE-21:118–121
Wang JR, Schmugge TJ, Gould WI, Glazar WS, Fuchs JE, McMurtrey JE (1982) A multi-frequency radiometric measurement of soil moisture content over bare and vegetated fields. Geophys Res Lett 19:416–419
Woodhouse IH (2006) Introduction to microwave remote sensing. CRC Press, USA, 400 p
Zebker HA, van Zyl JJ, Held DN (1987) Imaging polarimetry from wave synthesis. J Geophys Res 92(B1):683–701
17 SAR Interferometry

17.1 Introduction

Synthetic Aperture Radar (SAR) interferometry, also called Interferometric SAR (InSAR, also IFSAR or ISAR), is a relatively new technique for producing digital elevation models (DEMs). It has made rapid strides during the last two-three decades. The technique provides a higher order of accuracy than the conventional stereo radargrammetry, and has reached an operational status for generating precise elevation data.
The InSAR method combines complex images recorded by SAR antennas at different locations and/or at different times to generate interferograms. This permits determination of differences in the 3-D location of objects, i.e. generation of DEMs. The DEMs have wide application for geoscientific studies, e.g. for topographic mapping, geomorphological studies, detecting surface movements, earthquake and volcanic hazard studies, and several other applications.
Historical development. The use of SAR interferometry (InSAR) can be traced back to the 1960s, when the US military applied it for mapping Darien Province from aerial radar image data. Graham (1974) was the first to describe the InSAR technique for terrestrial topographic mapping. Subsequently, Zebker and Goldstein (1986) applied the technique to aerial SAR data, and Gabriel and Goldstein (1988) to space-borne SIR-B data. Gabriel et al. (1989) introduced the technique of differential SAR interferometry (DInSAR) using data from three different Seasat passes. Until 1991, the research in InSAR was constrained due to limited availability of suitable SAR data pairs. In 1991, ERS-1 commenced delivering interferometric data sets, and was followed by ERS-2 in 1995. ERS-1 and ERS-2 were operated in tandem mode to provide SAR data sets of the same ground scene with one-day interval that was highly suitable for interferometry. This generated world-wide interest and provided a fillip to research into this technique, bringing it to an operational stage. Subsequently, a number of satellites have been launched providing SAR data suitable for interferometry (e.g. Radarsat, JERS, ALOS, Envisat, Risat, Sentinel, Cosmo-Skymed, TerraSAR-X).

17.2 Principle of SAR Interferometry

The radar signal is acquired as a complex signal comprising real (Re) and imaginary (Im) components (which is not so in optical data). These values contain information about the amplitude (I) and phase (φ), which can be extracted from the real and imaginary components using the following relations:

φ = arctan (Im / Re)   (17.1)

I = √(Im² + Re²)   (17.2)

Thus, a unique feature of SAR data is that the phase of the complex signal is measurable directly, in contrast to the case of optical data which has only intensity values. SAR images in general utilize amplitude information (Chap. 16), whereas phase information is used in SAR interferometry.
The amount of phase equals the two-way travel path divided by the wavelength, and therefore carries information on the two-way travel path to sub-wavelength accuracy (Fig. 17.1). The phase of a single SAR image may not be of any particular utility. However, phases of two SAR images of the same ground scene, acquired from slightly differing angles, possess a phase difference. This forms the basic strength of the SAR interferometric technique—that the difference in slant ranges from two antenna positions can be measured with fractional wavelength accuracy. The phase difference is related to slant-range difference and can be processed to derive height information, i.e. generate a DEM.
The theoretical aspects of the technique are well understood and reviewed by several workers (see e.g. Gens and van Genderen 1996; Madsen and Zebker 1998; Franceschetti and Lanari 1999). If we consider two SAR images taken from two slightly different viewing angles (Fig. 17.2), assuming no change in backscatter, the phase difference (φd) for the same surface element in the two SAR images is proportional to the travel-path difference (Δr = r1 − r2).
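Equations (17.1) and (17.2) map directly onto standard complex-number operations; the following minimal Python sketch (the array name is an assumption) recovers phase and amplitude images from a single-look complex (SLC) array:

import numpy as np

def phase_and_amplitude(slc):
    """slc is a 2-D complex array (single-look complex SAR product)."""
    phase = np.angle(slc)      # phi = arctan(Im/Re), Eq. (17.1), in radians (-pi to +pi)
    amplitude = np.abs(slc)    # I = sqrt(Im^2 + Re^2), Eq. (17.2)
    return phase, amplitude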


φd = (4π/λ) (r1 − r2)   (17.3)

where λ is the radar wavelength. Figure 17.2 shows an idealized case, such that two SAR images are acquired from two antennas, S1 and S2, on parallel flight paths with flying height H. The baseline is B, and has a tilt angle α from the horizontal. Slant ranges from S1 and S2 to the same ground element are r1 and r2. With this geometry, the height (h) of the point above the datum plane can be represented as

h = H − r1 cos θ   (17.4)

where θ is the look angle. This can be rewritten as

h = H − r1 [cos α √(1 − sin²(θ − α)) − sin α · sin(θ − α)]   (17.5)

This means that h can be computed if we know H, α and (θ − α). H and α are directly known from the orbital flight data; the third parameter (θ − α) can be derived from the range difference (Δr) and baseline length (B) [as (θ − α) = sin⁻¹(Δr/B); Fig. 17.2b]. Thus, this method offers a means to obtain elevation data of various points on the Earth’s surface—by measuring the travel-path differences.

Fig. 17.1 The concept of phase and phase difference. Consider an object A being viewed from two SAR antenna positions S1 and S2. The emitted wave trains have the same phase at the antennas; however, the back-reflected wave trains differ in phase. Also, there is an ambiguity of many cycles

Fig. 17.2 a Geometry of SAR interferometry. b Simplified diagram showing angular relations (assuming orbital tracks mutually parallel and oriented perpendicular to the plane of the paper)

The travel-path difference [2(r1 − r2) = 2Δr] is usually much greater than the wavelength λ. For example, in microwave sensing from satellites, the travel-path difference is a few hundred kilometres and λ is in the range of centimetres. Therefore, the measured phase shows an ambiguity of many cycles (Fig. 17.1). However, for adjacent pixels, the relative difference in the two-way travel-path is smaller than

λ and the phase difference is usually not ambiguous. Further, a simple relation between phase difference (φd) and relative terrain elevation ‘h’ can be derived [see Eq. (17.9)].

17.3 Configurations of Data Acquisition for InSAR

SAR interferometry can be considered to be of two main types: (a) single-pass type, in which two or more antennas simultaneously image the same ground scene, and (b) repeat-pass type, where separate passes over the same target area are used to form an interferogram. Single-pass interferometry can be in across-track mode or in along-track mode.
Across-track interferometry uses two SAR antennas mounted on the same platform for simultaneous data acquisition, the two antennas being separated from each other by a fixed distance in a direction perpendicular to the flight path (Fig. 17.3a). The method has a certain spatial baseline and ideally zero temporal baseline. The across-track baseline allows for the measurement of target elevation, i.e. terrain topography.
A number of aerial surveys have been carried out using this configuration. The Shuttle Radar Topography Mission (SRTM) (flown February 2000) also falls in this category (Fig. 17.4). A problem with airborne systems is that errors induced by aircraft roll cannot be distinguished from the effect of topographic slope. In view of the fact that satellite track is more stable than aerial flight path, this problem is less critical in satellite sensing.

Fig. 17.3 Configurations of data acquisition for InSAR. a Across-track interferometry; b along-track interferometry and c repeat-track interferometry

Fig. 17.4 The Shuttle Radar Topography Mission (SRTM) (after NASA)

Along-track interferometry employs two antennas on the same platform, such that the two antennas are separated from each other by a fixed distance in the flight direction (Fig. 17.3b). This configuration has also been employed from aerial SAR systems and is suited for mapping moving objects, e.g. water currents, vehicles or boats etc., and has importance for surveillance (Goldstein et al. 1989). The phase difference between the two corresponding signals is caused by movement of an object in the scene. The velocity of the moving object can be computed from the relation

φd = (4π/λ) (v/V) Bx   (17.6)

where
φd phase difference
λ wavelength of SAR

v velocity of the moving object
V velocity of the sensor platform
Bx baseline component in the direction of motion of the sensorcraft (x direction).

In aerial sensors, with two antennas mounted on the same platform, the method allows measurement of the velocity component in the direction of line of sight, typically the velocity sensitivity being on the order of a centimetre per second.
Repeat-pass interferometry involves acquisition of SAR data with a single antenna covering the same area multiple times, each time with a slightly different viewing geometry (Fig. 17.3c). This method requires accurate information on the flight path (platform locations) and is therefore better suited for satellite systems than for airborne systems. Satellite systems repeat their orbit at regular intervals of time; therefore, the method is also called repeat-track interferometry (RTI) or multiple-pass interferometry.

17.4 Baseline

Baseline is the most critical parameter in SAR interferometry. Broadly, baseline is a measure of the distance between the two SAR antenna locations. It can be given in terms of length and orientation angle (with respect to horizontal) of the line joining the two SAR antenna positions in space (Fig. 17.2b). Baseline can be described or resolved in terms of horizontal and vertical components. More commonly, it is given in terms of components resolved parallel and perpendicular to the slant range, which are called parallel-to-range baseline (Br) and normal baseline (Bn) components respectively. The normal baseline component (Bn) controls measurement of target elevation and is the key element in InSAR for topographic modeling.
For interferometry, it is extremely important to have a SAR image pair with a suitable normal baseline (Bn) component. This factor alone determines whether a specific SAR image pair can be used for a particular measurement-application or not.
The sensitivity of InSAR to height measurements can be improved by increasing the normal baseline Bn. Consider a fixed scatterer on the ground like a building being viewed from two SAR positions or angles. It is obvious that the relative phase between the radar responses from the scatterer will be different at the two SAR antenna positions. The difference will be small for short baselines Bn, but with increasing baseline length Bn, the phase contributions from the scatterer will become more and more different between the two SAR images. Thus, the sensitivity of InSAR to height measurements can be improved by increasing Bn. However, at the same time, the correlation between the two complex SAR images decreases systematically with increasing baseline length. The baseline length for which the two SAR images become completely decorrelated is known as the critical normal baseline distance.
For DEM generation, it is very important to estimate the baseline precisely. An error in baseline estimation would result in erroneous estimates of ground heights and the two cannot be distinguished from each other. In areas of flat topography, the baseline can also be estimated from the local fringes in the interferogram. In general, satellite orbital data is used for estimating the baseline.
As baseline is a key parameter in SAR interferometry, the configuration of SAR data acquisition is also identified in terms of baseline nomenclature, viz. spatial baseline, temporal baseline and mixed baseline.
A spatial baseline is the one in which the target area is imaged from two different SAR antennas a certain distance apart, simultaneously (i.e. zero temporal baseline). This is the same as the across-track configuration (Figs. 17.3a and 17.4). Temporal baseline means that the SAR image pair is acquired from the same spatial position but at different times. A close approximation is the ‘along-track’ configuration (Fig. 17.3b). By suitably varying the temporal baseline between the SAR data acquisitions, velocities ranging from several meters per second to as low as a few millimeters per year can be accurately measured. Satellite repeat-pass data sets often have mixed baselines—both spatial and temporal (Fig. 17.3c).
When the temporal baseline is large (i.e. data sets are acquired at substantial time intervals), the change and growth of vegetation etc. lead to differences in distribution of scatterers in the two images. This leads to loss of coherence, called temporal decorrelation, and renders the image pair unstable for interferometric processing. For example, in the case of ocean-surface monitoring, temporal baselines larger than a few seconds are undesirable and often lead to noisy interferograms.
The various spaceborne SAR sensors have been discussed in Chap. 15.

17.5 Ground Truth and Corner Reflectors

For validation of remote sensing results, it is necessary to have ground-truth data. In the case of SAR interferometry, the main task is to provide information on 3-D location of objects, which can be applied to various geoscientific themes. For this purpose, corner reflectors (CRs) are installed at various sites in the field, concurrently with aerial/satellite overpass. Usually, trihedral corner reflectors, made out of three triangular metallic plates, are used (Fig. 17.5). CRs serve as reference points in the SAR image and are also required for the calibration of the SAR system.
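For the baseline geometry of Sect. 17.4 and Fig. 17.2b, the baseline can be resolved into its parallel-to-range (Br) and normal (Bn) components; a minimal Python sketch, with assumed function and variable names, is:

import math

def baseline_components(B, alpha_deg, look_angle_deg):
    """Resolve baseline B (tilt alpha from horizontal) along and across the slant range."""
    delta = math.radians(look_angle_deg - alpha_deg)
    B_parallel = B * math.sin(delta)   # parallel-to-range component (Br)
    B_normal = B * math.cos(delta)     # normal baseline component (Bn), controls height sensitivity
    return B_parallel, B_normal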

Fig. 17.5 Corner reflector (courtesy of K.S. Rao and Y.S. Rao)

Information on accurate location of the CRs is obtained through a differential GPS system and this helps in geometric calibration of the SAR data and validation of the DEM results.

17.6 Methodology of Data Processing

The phase information required for interferometric processing is contained in single-look complex (SLC) and raw data products, and is not available in other types of SAR products such as multiple-look products. Therefore, for InSAR processing purposes, SLC (alternatively raw) SAR data products are required.
Important steps in the SAR interferometry procedure are the following (Fig. 17.6):

(1) Selection of data sets
(2) Co-registration of the images
(3) Generation of interferogram
(4) Phase unwrapping
(5) Height calculation/DEM generation
(6) Geocoding.

1. Selection of data sets

The following guidelines are used for selecting a suitable pair of SAR images for interferometry.
Wavelength of the two SAR images should be exactly the same, otherwise they will not form a pair for interferometric processing. For example, ERS-1/-2 can form a pair (for both, λ = 5.7 cm) but ERS-1 (λ = 5.7 cm) and JERS-1 (λ = 24 cm) cannot form an interferometric pair.
Spatial baseline component (i.e. the normal baseline distance) should be within the limits for a particular application, as each type of application requires a certain normal baseline distance. If the baseline is longer, then it will result in spatial decorrelation between phases of the two scenes.
Temporal baseline, i.e. the time difference between the acquisition of two SAR images, should be suitable for interferometry. For DEM generation, zero temporal baseline is preferred. A longer time interval between two scenes is likely to be accompanied by changes in backscatter character of the ground features (e.g. vegetation etc.), which results in loss of coherence in the image pair (temporal decorrelation). For this reason, data from satellite overpasses in tandem mode, closely following each other or separated by a few hours or a day (e.g. ERS-1/-2, TerraSAR-X/TanDEM-X, Sentinel constellation, Cosmo-Skymed constellation) are preferred for InSAR DEM generation. For applications of velocity mapping etc., suitable temporal baselines should be used to allow detection of features of interest.

2. Co-registration of SAR images

The two SAR images have to be co-registered very accurately and this operation forms the most critical and time-consuming step. The quality of the final interferogram is governed by the accuracy of co-registration. The concept and general procedure of co-registration are the same as discussed in Sect. 13.4, viz. calculating the field of the slave image in the geometry of the master image. Usually, first a

Fig. 17.6 Schematic of data flow in SAR interferometric processing

coarse co-registration is carried out using data from satellite orbits, or using tie points selected in both images. After this, fine co-registration is implemented, for which various statistical methods specific to SAR images have been developed, such as: maximum value of coherence coefficient, cross-correlation of pixel amplitude, and minimization of average fluctuation of phase difference (see e.g. Franceschetti and Lanari 1999). Usually, the accuracies of co-registration obtained are in the range of 1/20 of a pixel. Any incorrect alignment, even on a sub-pixel level, causes reduction in coherence and may have a substantial adverse influence on the interferogram, as scatterers within the corresponding pixels are not the same in the two images.

3. Generation of an interferogram

The interferogram is generated by multiplying the complex SAR values of the slave image with the complex conjugate of the corresponding master image. In essence, this means subtraction of one phase value from the other, at each pixel in the co-registered images, and averaging the amplitude values. The resulting difference image possesses phase values in the range of −π to +π, which appear as fringes (Fig. 17.7).
At this stage, it is necessary to remove the effect of flat topography from the fringe image, this step being called ‘flattening’ (Fig. 17.8). The idea is that some regular fringes would have appeared even if the Earth’s surface (screen) was essentially flat; therefore, it is necessary to eliminate the ‘flat-topography fringes’, in order to obtain fringes related to topographic variation only. Flattening is followed by a smoothing filter to remove the noise.

Fig. 17.7 Interferometric fringes of Mt. Vesuvius (Italy), generated from ERS data pair (Rocca et al. 1997)

Fig. 17.8 Fringe images generated from ERS-SAR data of the Panvel area near Mumbai. a Before flattening and b after flattening (courtesy of K.S. Rao and Y.S. Rao)

4. Phase unwrapping

Phase unwrapping is a very important aspect of interferometry and leads to determination of the absolute phase from the measured phase. As the height of a terrain feature increases, the phase also correspondingly increases. However, as the phase value is a periodic function of 2π, it gets wrapped up after reaching 2π. Therefore, the measured phase (φM, also called principal phase) values range between −π to +π, irrespective of the elevation, and cannot be used directly to estimate terrain elevation. In order to obtain correct elevation data, the measured phase values (φM) must be unwrapped. The phase unwrapping may be considered as adding an integer number of cycles to each pixel to obtain the absolute phase. Figure 17.9 shows the concept of measured phase, wrapped phase and unwrapped phase, in relation to terrain height.
The wrapped and unwrapped phases are related as

Unwrapped (absolute) phase = measured phase (φM) + 2nπ   (17.7)

where n is an integer number. The correct value of n can be determined by a phase unwrapping process in the spatial domain.
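Steps 3 and 4 above can be sketched in a few lines of Python. The example below is conceptual only: it forms the wrapped interferometric phase by complex multiplication and then unwraps a single range line with numpy's one-dimensional unwrap, standing in for the two-dimensional path-following or least-squares algorithms discussed in the text.

import numpy as np

def form_interferogram(master_slc, slave_slc):
    """Multiply the slave image by the complex conjugate of the master image."""
    interferogram = slave_slc * np.conj(master_slc)
    wrapped_phase = np.angle(interferogram)   # values in (-pi, +pi], seen as fringes
    return wrapped_phase

def unwrap_profile(wrapped_phase_row):
    """Unwrap one range line: adds multiples of 2*pi wherever jumps exceed pi (cf. Eq. 17.7)."""
    return np.unwrap(wrapped_phase_row)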

Fig. 17.9 The concept of phase wrapping. Consider elevation difference (Δh) between A and B. It can be represented in terms of several full cycles of the SAR wavelength (which get wrapped-up) plus an incomplete cycle forming the measured phase component. Absolute phase consists of measured phase plus wrapped-up component

Assuming that the data are adequately sampled, the phase difference between adjacent pixels may be less than 0.5 cycles.
The various phase-unwrapping algorithms are classified into two major categories: (a) path-following algorithms and (b) least-square algorithms. The path-following algorithms use pixel-by-pixel operation to unwrap a phase and bring the phase differences in adjacent pixels to within the range +π to −π (within ±0.5 cycles). The least-square algorithms minimize a global measure of the differences between the gradients of the wrapped input phase and the unwrapped solution (Ghiglia and Pritt 1998).

5. DEM generation

Height values at various points in the terrain (DEM) are to be derived from the phase values in the interferogram. The interferometric phase image is a representation of the relative terrain elevation with respect to the slant-range direction. This coordinate system has to be transformed to the horizontal plane (ground-range and azimuth range axes) in order to generate a standard DEM. The correspondence between slant-range and ground-range is quite irregular, as the SAR image carries effects of foreshortening and layover. Further, very accurate estimation of baseline is essential for accurate DEM generation.

6. Geocoding of the DEM

The InSAR DEM generated as above is in a co-ordinate system related to the SAR geometric configuration. Conversion is required to present the data/DEM in the universal cartographic grid, called geocoding. It requires computation of the absolute position of a pixel in the specific/standard Cartesian reference system. The transformation is usually carried out using a reference DEM. In case a reference DEM is not available, geocoding transformation can also be achieved by accurate estimation of the baseline and imaging geometry.

17.7 Differential SAR Interferometry (DInSAR)

Fig. 17.10 Principle of differential SAR interferometry

Differential SAR interferometry is a powerful technique used to detect relative changes, of the order of a few centimetres or even less, occurring in the vertical direction on the Earth’s surface (Gabriel et al. 1989). The technique utilizes three or more SAR images. The basic concept is illustrated in Fig. 17.10. The three satellite passes are processed to yield two interferograms, for example one from passes 1 and 2, and another from passes 1 and 3. Then, the two interferograms are differenced to generate a differential interferogram. This depicts surface changes and movements that have taken place in the intervening period, e.g. those resulting from earthquakes, ground subsidence, volcanic activity, landslides etc. If no surface change has occurred in the intervening period, the differential interferogram would show near-zero values throughout. The relation between differential phase change (Δφd) and surface elevation change (Δh) is given as

 
Δφd = φ2 − (B2/B1) φ1 = (4π/λ) Δh   (17.8)

where φ1 and φ2 are the phase values in the two interferogram images, and B1 and B2 are the respective parallel baseline components.
Experimental proof of the sensitivity of DInSAR was obtained through field experiments by the ESA in which a large number of corner reflectors were installed in the field and some of them were (secretly) moved in the middle of the experiment by 1 cm, which could be detected by repetitive DInSARs (Coulson 1993).
At times, three SAR passes with good temporal coherence and suitable baselines may be difficult to obtain for DInSAR processing. In such a situation, an existing good-quality DEM may be used as one of the data sets. For local small-scale changes such as landslides, the DEM may not be used as the third data layer. However, for changes occurring over large areas, such as those resulting from a major earthquake, a high-quality DEM may be used as the third data layer for DInSAR. For example, Massonnet et al. (1993) used two SAR passes and an existing DEM to generate a differential interferogram for the Landers earthquake of 1992.
Several approaches have evolved for processing an InSAR image pair. The most common approach is called the permanent scatterer (PS) technique (Ferretti et al. 2000, 2001). A PS appears in the image as a pixel showing stable amplitude behaviour in time and usually corresponds to point-like targets such as buildings or rocks (conceptually, it is analogous to a GCP in surveys).
Basically, the DInSAR technique can detect the component of motion in the line of sight. The full amount and direction of motion can be estimated by combining data from ascending and descending satellite passes.

17.8 Factors Affecting SAR Interferometry

A number of factors affect interferometric SAR data processing. An in-depth study of quality assessment of SAR interferometric data has been carried out by Gens (1998). Some of the more important factors are the following.

1. Baseline determination. In generating a DEM it is very important to know the baseline and viewing geometry correctly. The exact orbit/position of the satellite is influenced by numerous factors, including the Earth’s gravitational field, the Moon, the Sun, other planets etc. Usually, data on orbit-state vectors are taken to provide information on position and velocity of the satellite, and to compute the baseline.
2. Atmosphere. The atmospheric interaction with the SAR wavelength leads to refraction (causing mis-registration) and artifacts in the phase difference. This effect is more pronounced if the atmosphere is highly heterogeneous, i.e. spatially varying. The interferograms produced from two-pass interferometry exhibit a greater influence of atmospheric heterogeneity than do those from three-pass interferometry.
3. Temporal decorrelation. Physical changes in the character of the terrain surface (i.e. vegetation etc.) occurring between two SAR data acquisitions lead to decrease in phase coherence. This causes temporal decorrelation of the data sets and is undesirable for InSAR processing.
4. Baseline decorrelation. The length of the spatial baseline controls viewing geometry. Longer baselines imply greater relative change in viewing geometry, which results in decreased coherence and greater decorrelation. This factor is particularly relevant in considering the applicability of repeat-pass satellite data for various InSAR applications.

17.9 InSAR Applications

SAR interferometry has applications to a number of geoscientific themes and problems, e.g. generating a DEM, which forms a basic item of information for many tasks, besides study of earthquake effects, monitoring of volcanoes, glacier ice movement, landslides, land subsidence etc. (see e.g. Massonnet and Feigl 1998; Rosen et al. 1999). The resolution requirements for different applications are different (Fig. 17.11). As most of these requirements can be met from SAR interferometry, the technique has aroused much interest world-wide.

1. Digital elevation model (DEM)

High-resolution topographic data constitute a useful input for a number of geoscientific terrestrial ecosystem applications for the very simple reason that topography influences nearly all surface processes—solar radiation, rainfall, water runoff, groundwater, microclimate, vegetation distribution, soil development, to name just a few. The foremost application of SAR interferometry lies in the generation of DEM. The various steps involved in generating a DEM, including generating a phase-difference image, phase unwrapping, converting it into relative height difference and geocoding, have been discussed earlier.
The relation between relative change of height of the terrain (h) and phase difference (φd) can be written as

h = [λ r1 sin θ / (4π Bn)] φd   (17.9)

where
r1 the range distance
θ look angle
Bn component of normal baseline
φd phase difference
wavelength leads to refraction (causing mis-registration) h look angle
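As a quick plausibility check of Eq. (17.9), the minimal sketch below evaluates the terrain height corresponding to one full phase cycle (φd = 2π), i.e. the height of one topographic fringe. The numerical values (ERS-1-like wavelength, slant range, look angle and a 100-m normal baseline) are illustrative assumptions, not parameters taken from the text.

```python
import math

# Illustrative ERS-1-like imaging geometry (assumed values)
wavelength = 0.0566              # radar wavelength, m (C-band)
slant_range = 850e3              # range distance r1, m
look_angle = math.radians(23.0)  # look angle theta
normal_baseline = 100.0          # normal baseline component Bn, m

def height_from_phase(phase_difference):
    """Eq. (17.9): terrain height corresponding to a given phase difference."""
    return (wavelength * slant_range * math.sin(look_angle)
            / (4.0 * math.pi * normal_baseline)) * phase_difference

# Height represented by one full fringe (phase difference of 2*pi)
height_per_fringe = height_from_phase(2.0 * math.pi)
print(f"One topographic fringe corresponds to about {height_per_fringe:.0f} m of relief")
# -> roughly 94 m for the assumed geometry; shorter baselines give coarser
#    height sensitivity, longer baselines finer (until baseline decorrelation sets in)
```

The sketch also makes the role of the baseline explicit: halving Bn doubles the height represented by one fringe.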

2. Earthquakes

Massonnet et al. (1993) demonstrated for the first time the applicability of differential InSAR for detecting minute co-seismic surface changes, with an accuracy of up to a centimetre or better. Thereafter, several other applications of DInSAR materialized, e.g. for land surface changes associated with landslides, volcanoes, glaciers, subsidence etc.

The region of Southern California was struck by an earthquake, called the Landers earthquake (Mw = 7.3), on 28 June 1992. It led to rupturing over a complex fault system spread over an 85-km length. Massonnet et al. (1993) used ERS-1 SAR passes of 24 April 1992 and 7 August 1992 (as pre- and post-earthquake data sets) and an existing good-quality DEM, and by DInSAR processing obtained an image that showed co-seismic ground deformation. This result provided a big boost and increased confidence in the technique.

Following this, several studies of this type have been carried out world-wide. Most of the reported results have agreed well with conventional geodetic and seismological observations. For example, a part of Turkey was struck by a devastating earthquake (Mw = 7.4) on 17 August 1999. The epicentre was located on the eastern coast of the Marmara Sea near Izmit. The seismic activity (hypocentre 17 km deep) took place on the North Anatolian fault system, with movements of a pure right-lateral strike-slip type. The SAR image data were collected by ERS-2 on 13 Aug. 1999 and 17 Sept. 1999. Figure 17.12 shows the co-seismic fringe pattern due to the earthquake (interferogram). Each of the colour contours of the interferogram represents 28 mm of motion towards the satellite. Several slip planes can easily be identified.

Fig. 17.12 Interferogram for the Izmit earthquake, Turkey, 17 August 1999 (generated from 13 August 1999 and 17 September 1999 ERS-2 image pair, with normal baseline of 65 m). The event occurred along the North Anatolian Fault with right-lateral displacement; several slip planes can easily be identified. Each colour cycle of the interferogram represents 28 mm of motion towards the satellite (courtesy: photojournal-JPL)

In another example, a large earthquake (Mw 7.8) occurred in New Zealand on November 13, 2016. The ALOS-2/PALSAR image data were used for measuring the co-seismic three-dimensional deformation for this earthquake (Fig. 17.13). Several slip planes and discontinuity surfaces are detected. The results revealed that highly complex movements on several faults occurred and the maximum uplift reached ~10 m.

Fig. 17.13 Three-dimensional deformation field associated with the New Zealand earthquake (13 November 2016) generated from ALOS-2/PALSAR image data; several slip planes and discontinuity surfaces can be seen; the results revealed that very complex movements on several faults occurred and the maximum uplift reached ~10 m (courtesy: Geospatial Information Authority of Japan)
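The 28 mm per fringe quoted above is simply half the radar wavelength, because the radar path is two-way. A minimal sketch of how fringe counts are converted to line-of-sight (LOS) displacement is given below; the C-band wavelength and the fringe count are assumed values for illustration.

```python
# Convert counted interferogram fringes to line-of-sight (LOS) displacement.
# Each complete fringe (2*pi of differential phase) corresponds to lambda/2
# of LOS displacement, owing to the two-way radar path.

wavelength_m = 0.0566                  # assumed C-band wavelength (ERS-like), m
los_per_fringe = wavelength_m / 2.0    # ~0.028 m = 28 mm per fringe

n_fringes = 15                         # hypothetical number of fringes counted
los_displacement = n_fringes * los_per_fringe
print(f"{n_fringes} fringes correspond to about {los_displacement * 100:.1f} cm of LOS displacement")
```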

3. Land subsidence

Land subsidence monitoring is another important area of application of DInSAR, in which the mining industry and environmentalists are particularly interested. For example, land subsidence related to groundwater abstraction was detected by Galloway et al. (1998) using DInSAR; subsidence studies in coal mining areas have been carried out by Dong et al. (2013) in the Huainan coal field, China, and Chatterjee et al. (2016) in the Jharia coal field, India, among others.

4. Landslides

Differential SAR interferometry has been used to study some of the major landslides. For example, the Saint-Etienne-de-Tinée landslide in southern France was studied by Rocca et al. (1997) using ERS-1 interferometric data (Fig. 17.14). The study indicated a movement of about 1 cm per day and this result was in good agreement with the field data. In another case, there occurred a major landslide in China (Zhouqu landslide) in the year 2010 that killed >1700 people. Time-series InSAR analysis of ALOS/PALSAR image data revealed slow deformation of slopes in the range of 30–70 mm yr−1 regularly over a 3-year period, prior to the major landslide (Sun et al. 2015). Chaussard et al. (2015) also applied the InSAR technique on TerraSAR-X data to map slow-moving landslides in parts of California.

Fig. 17.14 ERS-1 SAR interferogram for a landslide in France generated with an 8-day interval. Landslide velocity of 1 cm per day was estimated (Rocca et al. 1997)

5. Ice and glacier studies

Glacier and ice cover a significant part of the Earth and influence global climate and water resources. The InSAR technique can help estimate glacier/ice flow velocity and changes in topography (Goldstein et al. 1993). Figure 17.15 shows an interferogram example from Bagley Ice Field, Alaska, where the phase fringes exhibit influences of both topography and motion.

Fig. 17.15 Interferogram of Bagley Ice Field, Alaska; phase fringes show both topography and motion influences (D.R. Fatland, http://www.asf.alaska.edu/step/insar/absracts/fatland.html)

As glacier topography is spatially variable and the object is in motion, it poses a difficult problem to estimate both variables through InSAR. The strategy requires having InSAR data baselines with widely different sensitivities to

displacements and topography, so that when one is being estimated the data set is insensitive to the other variable. Goldstein et al. (1993) first used ERS-1 SAR repeat-pass data for interferometry to monitor ice stream velocity in Antarctica. As the satellite orbit is almost exactly repeated near the poles, they had SAR coverages with a 4-m spatial baseline. As such small baselines become quite insensitive to height, they could use the SAR pair for estimating ice flow velocity. Their estimate (390 m per year) corresponded closely with the field data.

Joughin et al. (1998) estimated both topography and ice sheet motion for Ryder glacier, Greenland. If the ice-flow velocity is constant over a certain time interval when the SAR over-passes are being made, double-differencing pairs of interferograms can be used to cancel the effect of object motion. Joughin et al. (1998) used this technique to generate ERS-1 SAR DEMs of ice sheet topography with an error of ±2 m.
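The double-differencing idea can be illustrated with a toy numerical sketch: if the ice moves at a constant velocity, the motion contribution is identical in two interferograms spanning equal time intervals, whereas the topographic contribution scales with the normal baseline; differencing therefore isolates topography, and rescaling by the baseline ratio isolates motion. The simplified phase model and numbers below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Toy phase model for two repeat-pass interferograms spanning equal time
# intervals over the same terrain (constant ice-flow velocity assumed):
#   phase_i = k_topo * Bn_i * h + phase_motion
rng = np.random.default_rng(0)
h = rng.uniform(0, 500, size=1000)   # synthetic terrain heights (m)
phase_motion = 3.2                   # motion phase, identical in both pairs (rad)
k_topo = 0.001                       # topographic phase per metre of height per metre of baseline
Bn1, Bn2 = 40.0, 120.0               # normal baselines of the two interferograms (m)

phase1 = k_topo * Bn1 * h + phase_motion
phase2 = k_topo * Bn2 * h + phase_motion

# Double difference cancels the (common) motion term -> topography only
topo_only = phase2 - phase1                    # = k_topo * (Bn2 - Bn1) * h
# Scaling one interferogram by the baseline ratio and subtracting cancels topography
motion_only = phase2 - (Bn2 / Bn1) * phase1    # = phase_motion * (1 - Bn2/Bn1)

print(np.allclose(topo_only, k_topo * (Bn2 - Bn1) * h))           # True
print(np.allclose(motion_only, phase_motion * (1 - Bn2 / Bn1)))   # True
```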

6. Surface manifestation of subglacial geothermal activity

An interesting example of surface manifestation, in terms of topographic change, of subglacial geothermal activity is given by Jonsson et al. (1998). Iceland is marked by ice-glacier covered areas and also geothermal activity in some places. In such regions, ice cauldrons are created by melting in subglacial geothermal areas. The melt-water accumulates in a reservoir for a couple of years, until it drains in a jokulhlaup (a sudden release of water). In such events, the ice surface over the depression drops down by several tens of metres. However, as ice flows from the adjacent areas into the depression, the surface topography rises again, until the next jokulhlaup occurs. In such areas, the monitoring of changes in surface topography is very important for disaster mitigation and advance warning. Jonsson et al. (1998) used ERS-1/-2 SAR data over an area in the Vatnajokull ice cap, Iceland, and based on DInSAR processing, they inferred uplift rates of 2–18 cm per day (Fig. 17.16).

Fig. 17.16 Differential interferogram generated from ERS-1/-2 data (March 27–28, 1996, ascending orbit) over cauldrons in the Vatnajokull ice cap. Fringes indicate uplift in ice surface during an interval of one day (data copyright ESA, processing by DLR, Jonsson et al. 1998)

7. Volcano monitoring

The monitoring of volcanic hazards is another important area of application of SAR interferometry, in which it is likely to emerge as an early warning tool. Also relevant in this context is the all-time, all-weather capability of SAR, which is essential for a warning system.

Volcanoes have been one of the most frequently used test sites for InSAR validation (e.g. Evans et al. 1992; Massonnet et al. 1995; Briole et al. 1997; Puglisi and Coltelli 1998). Around an active volcano, two broad categories of surface changes can be identified: (a) topographic changes, which are rather coarse changes due to accretion of pyroclastic cones, collapse of craters, lava flow and emplacement of domes etc., and (b) ground deformation and shape changes, which are finer changes associated with magma movement within the volcano, fault movements, lava flow cooling etc. It is necessary to distinguish between the two.

The application of InSAR to topographic monitoring is likely to be hindered by the steep topography in volcanic terrains (layover and foreshortening effects). Further, topographic application, including monitoring of lava flows and domes for the purpose of quantifying the volume of emitted lava while the eruption is actively flowing, is entirely ruled out through the InSAR–DEM route, owing to the simple fact that continuous changes on the volcanic surface lead to loss of coherence of the interferometric pair. On the other hand, the loss-of-coherence parameter could be used to classify areas where lava flows or domes are moving. Therefore, coherence maps have value for change detection in an otherwise stable background. Further, assuming a certain thickness of the lava flow, volumetric estimates of the lava outpour could possibly be obtained from coherence maps in a volcanic terrain.

The possibility of monitoring ground deformation through InSAR looks more promising and has the potential of becoming a tool for disaster early warning. Concurrently with the emplacement of dykes and a magma reservoir at shallow depth, which may be precursors to volcanic eruption, some ground deformation around volcanoes takes place. These fine surface deformations and movements can be detected through InSAR. For example, the ERS-InSAR technique has been used for monitoring small surface changes due to volcanic inflation (Massonnet et al. 1995) and flank deformation due to lava emplacement (Briole et al. 1997).
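A minimal sketch of the coherence-based mapping idea described above follows: pixels whose interferometric coherence drops below a threshold are flagged as "active" (e.g. moving lava), and a rough volume estimate results from the flagged area multiplied by an assumed flow thickness. The threshold, pixel size and thickness are illustrative assumptions only.

```python
import numpy as np

def map_active_surface(coherence, threshold=0.3, pixel_area_m2=400.0, assumed_thickness_m=3.0):
    """Flag low-coherence pixels as 'active' and return the mask, area and a crude volume estimate."""
    active_mask = coherence < threshold        # loss of coherence -> changing surface
    active_area = active_mask.sum() * pixel_area_m2
    volume_estimate = active_area * assumed_thickness_m
    return active_mask, active_area, volume_estimate

# Synthetic coherence image: stable background (~0.8) with a low-coherence patch
coherence = np.full((200, 200), 0.8)
coherence[80:120, 50:150] = 0.15               # e.g. an advancing lava flow
mask, area, volume = map_active_surface(coherence)
print(f"Active area: {area / 1e6:.2f} km^2, rough volume: {volume / 1e6:.2f} million m^3")
```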

DInSAR also holds potential as an early warning tool for disaster management in volcanic regions (see Sect. 19.14.1.5). Differential SAR interferometry has the potential for monitoring the dynamics of a volcano, particularly on a mid- to long-term basis. This can provide information about the precursors of the eruption and assist in disaster mitigation. A high accuracy would be required in order to make InSAR operational for this application, which essentially means eliminating atmospheric effects and also linking/inverting mono-dimensional along-line-of-sight DInSAR observations to 3-D displacements.

17.10 Pol-InSAR (Polarimetric InSAR)

Pol-InSAR (polarimetric SAR interferometry) is a new, advancing technique combining polarimetric SAR image data with interferometry (Cloude and Papathanassiou 1998; Papathanassiou and Cloude 2001; Cloude 2009). It is based on the coherent combination of single- or multi-baseline interferograms possessing different polarizations. The technique has the potential to characterize the vertical structure and distribution of scattering processes in volume scatterers.

In forests, the SAR volume scattering process is influenced by tree height, structure and biomass. There, Pol-InSAR has shown clear applicability (Kugler et al. 2009), though its use in quantitative agricultural vegetation studies is still awaited. The SAR response over snow and ice is dominated by the volume scattering process, where the seasonal melting process governs the near-surface vertical distribution of scatterers. There, the features of interest include different ice layers with varying size and shape of ice crystals, snow/ice density, lenses and pipes of ice, and trapped gas bubbles. In such cases also, Pol-InSAR has shown distinct application potential (Sharma et al. 2007; Oveisgharan and Zebker 2007).

17.11 Future

The future of SAR interferometry looks extremely bright. The technique has already passed experimental tests and entered the operational stage. It has many important applications—generation of high-resolution DEMs, investigations of geohazards such as co-seismic effects of earthquakes, monitoring of volcanoes, landslides, land subsidence and glacier movements.

Many countries have launched SAR satellites, e.g. Radarsat, JERS, ALOS, Envisat, RISAT, and in-tandem constellations such as TerraSAR-X/TanDEM-X, the Sentinel constellation and the COSMO-SkyMed constellation, to generate SAR data optimally suited for InSAR processing. Additionally, there are proposals to manage InSAR processing on-board, and to down-communicate InSAR fringe patterns in real-time. All this speaks volumes for the present and future of SAR interferometry.

References

Briole P, Massonnet D, Delacourt C (1997) Post-eruptive deformation associated with the 1986–87 and 1989 lava flows of Etna, detected by radar interferometry. Geophys Res Lett 24:37–40
Chatterjee RS, Singh KB, Thapa S, Kumar D (2016) The present status of subsiding land vulnerable to roof collapse in the Jharia Coalfield, India, as obtained from shorter temporal baseline C-band DInSAR by smaller spatial subset unwrapped phase profiling. Int J Remote Sens 37(1):176–190
Chaussard E, Bürgmann R, Cohen-Waeber J, Delbridge B (2015) Landslide monitoring with InSAR. http://dels.nas.edu/resources/static-assets/besr/miscellaneous/LandslidesFeb2015/Chaussard2015.pdf. Accessed on 27 April 2017
Cloude SR (2009) Polarisation: applications in remote sensing. Oxford University Press, New York
Cloude SR, Papathanassiou KP (1998) Polarimetric SAR interferometry. IEEE Trans Geosci Remote Sens 36(5):1551–1565
Coulson S (1993) SAR interferometry with ERS-1. ESA publ, Earth Obs Quart 40:20–23
Dong S, Yin H, Yao S, Zhang F (2013) Detecting surface subsidence in coal mining area based on DInSAR technique. J Earth Sci 24(3):449–456
Evans DL, Farr TG, Zebker HA, Mouginis-Mark PJ (1992) Radar interferometry studies of the earth's topography. EOS Trans Am Geophys Union 73(533):557–558
Ferretti A, Prati C, Rocca F (2000) Nonlinear subsidence rate estimation using permanent scatterers in differential SAR interferometry. IEEE Trans Geosci Remote Sens 38(5):2202–2212
Ferretti A, Prati C, Rocca F (2001) Permanent scatterers in SAR interferometry. IEEE Trans Geosci Remote Sens 39(1):8–30
Franceschetti G, Lanari R (1999) Synthetic aperture radar processing. CRC Press, Boca Raton, Florida, 307 p
Gabriel AK, Goldstein RM (1988) Crossed orbit interferometry: theory and experimental results from SIR-B. Int J Remote Sens 9:857–872
Gabriel AK, Goldstein RM, Zebker HA (1989) Mapping small elevation changes over large areas: differential radar interferometry. J Geophys Res 94(B7):9183–9191
Galloway DL et al. (1998) Detection of aquifer system compaction and land subsidence using interferometric synthetic aperture radar, Antelope Valley, Mojave Desert, California. Water Resour Res 34(10):2573–2585
Gens R (1998) Quality assessment of SAR interferometric data. ITC Publication No 61, 141 pp
Gens R, van Genderen JL (1996) SAR interferometry: issues, techniques, applications. Int J Remote Sens 17:1803–1835
Ghiglia DC, Pritt MD (1998) Two-dimensional phase unwrapping: theory, algorithms and software. Wiley Interscience, USA, 493 p
Goldstein RM, Barnett TP, Zebker HA (1989) Remote sensing of ocean currents. Science 246:1282–1285
Goldstein RM, Engelhardt H, Kamb B, Frolich RM (1993) Satellite radar interferometry for monitoring ice sheet motion: application to an Antarctic ice stream. Science 262:1525–1530
Graham LC (1974) Synthetic interferometer radar for topographic mapping. Proc IEEE 62:763–768
Jonsson S, Adam N, Bjornsson H (1998) Effects of geothermal activity observed by satellite radar interferometry. Geophys Res Lett 25(7):1059–1062

Joughin et al. (1998) Interferometric estimation of three-dimensional ice-flow using ascending and descending passes. IEEE Trans Geosci Remote Sens 36(1):25–35
Kugler F, Lee SK, Papathanassiou KP (2009) Estimation of forest vertical structure parameter by means of multi-baseline Pol-InSAR. In: Proceedings of IEEE international geoscience and remote sensing symposium (IGARSS), Cape Town, South Africa
Madsen SN, Zebker HA (1998) Imaging radar interferometry. In: Henderson FM, Lewis AJ (eds) Principles and applications of imaging radar, Manual of remote sensing, 3rd edn, vol 2. Wiley, New York, pp 359–380
Massonnet D, Feigl K (1998) Radar interferometry and its applications to changes in the earth's surface. Rev Geophys 36(4):441–500
Massonnet D, Rossi M, Carmona C, Adragna F, Peltzer G, Feigl K, Rabaute T (1993) The displacement field of the Landers earthquake mapped by radar interferometry. Nature 364:138–142
Massonnet D, Briole P, Arnaud A (1995) Deflation of Mount Etna monitored by spaceborne radar interferometry. Nature 375:567–570
Oveisgharan S, Zebker HA (2007) Estimating snow accumulation from InSAR correlation observations. IEEE Trans Geosci Remote Sens 45(1):10–20
Papathanassiou KP, Cloude SR (2001) Single-baseline polarimetric SAR interferometry. IEEE Trans Geosci Remote Sens 39(11):2352–2363
Puglisi G, Coltelli M (1998) SAR interferometry applications on active volcanoes: state of the art and perspectives for volcano monitoring. In: Workshop on synthetic aperture radar, 25–26 February 1998, Florence, Italy
Rocca F, Prati C, Ferretti A (1997) An overview of ERS-SAR interferometry. In: 3rd ERS symposium, space at the service of our environment, Florence, 17–21 March 1997, ESA SP-414, vol I, pp xxvii–xxxvi. http://florence97.erssymposium.org
Rosen PA, Hensley S, Joughin IR, Li F, Madsen SN, Rodriguez E, Goldstein RM (1999) Synthetic aperture radar interferometry. Proc IEEE XX(Y):1–110
Sharma JJ, Hajnsek I, Papathanassiou KP (2007) Vertical profile reconstruction with Pol-InSAR data of a subpolar glacier. In: Proceedings of IEEE international geoscience and remote sensing symposium (IGARSS), Barcelona, Spain, 2007
Sun Q, Zhang L, Ding X, Hu J, Liang H (2015) Investigation of slow-moving landslides from ALOS/PALSAR images with TCPInSAR: a case study of Oso, USA. Remote Sens 7:72–88
Zebker HA, Goldstein RM (1986) Topographic mapping from interferometric synthetic aperture radar observations. J Geophys Res 91(B5):4993–4999
Zebker HA, Werner CL, Rosen PA, Hensley S (1994) Mapping the world's topography with radar interferometry. Proc IEEE 82(12):1774–1786
18 Integrating Remote Sensing Data with Other Geodata (GIS Approach)

18.1 Integrated Multidisciplinary Geo-investigations

18.1.1 Introduction

The purpose of integrated multidisciplinary investigations is to study a system or phenomenon using several approaches and as many attributes as possible/required, in order to obtain a more comprehensive and clearer picture. The main advantage of such an approach is that ambiguities, which may arise from the use of only one type of data, can often be resolved by combining several data sets.

Undoubtedly, the growth in computing and data-processing capabilities, coupled with advances in geographic information system (GIS) technology and its integration with geostatistics, has played a very important role in developing the integrated geo-exploration approach. As such, the integrated GIS approach need not necessarily include remote sensing data; for example, Campbell et al. (1982) demonstrated the successful identification of porphyry-molybdenum deposits using pre-drilling exploration data and an artificial intelligence program. More recently, gravity data from the GRACE satellite coupled with field hydrogeologic data have demonstrated basin-level large-scale depletion of groundwater in parts of the Gangetic Plains (Rodell et al. 2009) and the Middle East (Voss et al. 2013). However, remote sensing data are almost invariably used these days as basic input data in geo-investigations, forming a very important data source in GIS. In the context of remote sensing data processing, multidisciplinary geodata are often referred to as collateral or ancillary data.

18.1.1.1 Advantages
Chief advantages of combining several data sets are threefold:

1. Using multidisciplinary data, the number of attributes or channels of information is increased, and this should correspondingly enhance the capability of discrimination and/or identification.
2. It is possible to induct suitable statistical procedures and models for quantitative data interpretation.
3. Interpretation of all the data sets collectively should result in a more coherent and reliable analysis.

18.1.1.2 Limitations
We encounter several limitations/difficulties while trying to integrate multigeodata sets.

1. Combining data sets of different types involving categorical and continuous attributes (see Sect. 18.2.3) is a work of special nature, requiring unique statistical treatment. Often the analytical classification process has to be split into two or more hierarchical stages.
2. Many of the collateral data are derived from maps, which have to be digitized, requiring specific instrumentation facilities.
3. A different base map may have been used for each of the multidata sets; this means that the data must be resampled and geometrically projected to a common base, using control points etc.
4. The spatial resolution and geometry of each of the multidata sets may be different. Further, there may be subjectivity errors, where on one map the boundary between any two units may appear at one place, but in another data set the same level boundary may be at another place. The problem is acute in situations describing categorical attributes occurring with gradational contacts, e.g. in cases of gradational rock types, soil types, vegetation types etc. In such cases, the placement of boundaries could be arbitrary and would affect the mutual geometric compatibility of different data sets.
5. The reliability of some data sets may be questionable; this may adversely influence the reliability of the entire processing, and therefore due care has to be exercised in


selecting the data sets, keeping the possible noise factor in mind.
6. Excessive information/data may also be a problem in handling and processing, and therefore optimization has to be duly considered.

Nevertheless, the approach of combining multidata sets is gaining wide application simply because it improves interpretation. Figure 18.1 shows the general working concept in a multidisciplinary data analysis. Data of different thematic types and derived from different sources, viz. remote sensing, geophysical, geochemical, soil, land use, lithology etc., are aggregated into a digital data bank or geodatabase, processed and suitably applied.

Fig. 18.1 Concept of an integrated multidisciplinary geo-investigation

18.1.2 Scope of the Present Discussion

The techniques developed around multiple-image processing and data handling are directly applicable to assembling and working with multigeodata sets. In a multidisciplinary geo-investigation, the sources of data could be satellite or aerial photographs, field surveys, geochemical laboratory analyses, geophysical data etc., and may be available in the form of maps, profiles, point data, tables and lists etc. If GIS methodology is not used, then integrating such a variety of data sets would involve elaborate manual exercises for extracting relevant information. On the other hand, utilizing GIS techniques, the requirements of multiple-data integration can easily be fulfilled by the available hardware and software tools. In this chapter, we deal with the handling of multigeodata sets, as developed around mainly image (raster data) processing techniques.

18.2 Geographic Information System (GIS)—Basics

18.2.1 What is GIS?

The geographical information system, also called 'geobased information system' (GIS), is a relatively new technology. It is a very powerful tool for processing, analysing and integrating spatial data sets (see e.g. Aronoff 1989; Star and Estes 1990; Maguire et al. 1991; Bonham-Carter 1994; Longley et al. 1999; Skidmore 2002; Konecny 2003; Heywood et al. 2006; Chang 2008). It can be considered as a higher-order computer-based system that permits storage, manipulation, display and output of spatial information. This technology has developed so rapidly during the last three to four decades that it is treated as an essential tool for numerous applications, an almost indispensable tool for handling spatial information for Earth resources exploration, development and management.

GIS technology is aptly suited to integrate data in multidisciplinary geo-investigations for the following main reasons.

1. Concurrent handling of locational and attribute data. Invariably, we are required to deal with geodata

comprising both locational (where it is) and attribute (what it is) characters. Such a capability is available only in GIS packages, not in other types of packages (Fig. 18.2).
2. Variety of data. Investigations often comprise diverse forms and types of data, such as: (a) topographic contour maps, (b) landform maps, (c) lithological maps, (d) structural maps, (e) geophysical and geochemical profile data/maps, (f) tables of various observations and data sets, and (g) point data, for example GPS locations etc. (Fig. 18.3). GIS packages offer methods for integrating the above variety of data sets.

Fig. 18.2 Schematic representation of GIS working. The GIS maintains a link between the map feature and the corresponding tabular information

Fig. 18.3 Source data of various types and formats required to be input into a digital data bank

3. Flexibility of operations and concurrent display. Modern GIS are endowed with numerous functions for computing, searching for, processing and classifying data, which allow analysis of spatial information in a highly flexible manner with concurrent display.

In addition, as a GIS is computer based, there are the advantages of speedy and efficient processing of large volumes of data, with repeatability of results.

Components of a GIS

A GIS is made up of hardware and software. The hardware comprises a basic computer system (viz. CPU, storage devices, keyboard and monitor), a scanner (rotating drum and/or flat bed type) for inputting spatial data, a colour monitor for on-screen digitization and displaying spatial data in image mode, a plotter for production of maps, and a printer for printing tables, data, raster maps etc. The software component of a GIS enables data input, storage, transformation, processing and output. There are a number of GIS software packages available on the market having a wide range of functions for GIS-based analysis and modelling. Some of the more widely used GIS packages are (in alphabetical order): ArcGIS, IDRISI, ILWIS, GEOMEDIA, GRASS, MAPINFO, QUANTUM GIS and SPANS.

18.2.2 GIS Data-Base

A data-base is a collection of information about things and their relationship to each other. In GIS, the data base is created to collate and maintain information. The geographical or spatial information has two fundamental components (Fig. 18.4), as follows:

(1) location (position) of the feature (where it is), e.g. location of a mine, city or power plant;
(2) attribute character of the feature (what it is), e.g. lithology type, topographic elevation, or landform type etc.

Data pertaining to the above two aspects are explicitly or specifically recorded in a GIS and the link between the two is regularly maintained.

1. Location data. The location (or spatial position) is given in terms of a set of latitude/longitude, or relative coordinates. From a geometrical point of view, all features on a map can be resolved into points, lines (segments) or arcs, and polygons. The location of a smelter plant, power plant or dam is a typical example of point data. Lineaments, including surface traces of faults, joints, shear zones, bedding planes (on plans) and roads, canals etc., are typical examples of linear data. Maps showing topographical contours, geophysical or geochemical anomaly contours or lithological distribution are examples of polygon data. All features, whether points, lines or polygons, can be described in terms of a pair of coordinates: points as a pair of x-y coordinates; lines as a set of interconnected points in a certain direction; and polygons as an area enclosed by a set of lines.
2. Attribute data. Attribute data are the information pertaining to what the feature is, i.e. whether the point indicated is a city or a power plant or a mine, or that the information at the specified location pertains to lithology etc.

In GIS, the thematic information is stored in data layers, frequently called coverages or maps. A coverage consists of a set of logically related geographic features and their attributes. For each map layer (coverage), there is one attribute table providing a description of various items on the map. The attribute data are stored as tables in a data file. The various data are managed and maintained in a DBMS (data base management system).

Fig. 18.4 The two fundamental components in a GIS: a map and b attribute table
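To make the location-attribute linkage concrete, the sketch below stores a few map features as geometry records keyed to an attribute table, in the spirit of Figs. 18.2 and 18.4. The feature names, coordinates and attribute fields are hypothetical.

```python
# Minimal illustration of the two GIS components: location data and attribute data.
# Geometries (points, lines, polygons) are keyed by feature id to an attribute table.

geometry = {
    1: {"type": "point",   "coords": [(77.90, 29.87)]},                               # a mine
    2: {"type": "line",    "coords": [(77.80, 29.80), (77.95, 29.95)]},               # a fault trace
    3: {"type": "polygon", "coords": [(77.70, 29.70), (77.85, 29.70),
                                      (77.85, 29.85), (77.70, 29.85)]},               # a granite body
}

attributes = {                      # the attribute table ("what it is")
    1: {"class": "mine",    "commodity": "Cu"},
    2: {"class": "fault",   "sense": "strike-slip"},
    3: {"class": "granite", "age": "Proterozoic"},
}

# The GIS maintains the link between the two via the common feature id
for fid, geom in geometry.items():
    print(fid, geom["type"], attributes[fid]["class"])
```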

Table 18.1 Types of attributes and measurement scales in GIS

Type of attribute/property | Type of scale | Remark | Example
Categorical | Nominal | Mutually exclusive categories of equal status | A, B, C, D or quartzite, schist, marble etc.
Categorical | Ordinal | Hierarchy of states in which the intervening lengths are not equal | Drainage density: low, medium, high
Continuous | Interval | Possess lengths of equal increment but no absolute zero | A linear contrast-stretched image
Continuous | Ratio | Possess lengths of equal increment and also a true zero | Temperature in kelvin; size of an area in m²

(After Davis 1986)

measure the above two different types of attributes. The categorical attributes are measured at nominal and ordinal scales. A nominal scale classifies observations into mutually exclusive categories of equal status, e.g. quartzite, schist, marble etc. An ordinal scale uses a hierarchy of states, e.g. drainage density—low, medium and high. On such a scale, although the categories can be encoded in numerals (e.g. 1 = low, 2 = medium, 3 = high), the intervening lengths, i.e. increments, are usually non-equal. The continuous attributes are measured at interval and ratio scales. An interval scale possesses lengths of equal increments but an arbitrary zero, e.g. a linear contrast-stretched image where zero has been arbitrarily set, but intervals between successive gray levels are equal. A ratio scale possesses lengths of equal increments and also a true zero, e.g. temperature in kelvin or the size of an area in m².

18.2.4 Basic Data Structures in GIS

Two basic types of data structures exist in GIS: (a) vector and (b) raster (Fig. 18.5). In vector format, the features are defined by their positions with respect to a coordinate system (Fig. 18.5b). Every feature on a map has a unique position, whether it is a point, a line or a polygon. A point (e.g. the location of a mine) is represented by a single position, whereas a line (e.g. a road) is a set of points. A polygon (e.g. an area underlain by granites) is bounded by a closed loop, joined by a set of line segments. In vector format, topology (i.e. mutual relations between various spatial elements) is specifically defined. The data in vector format are geometrically more precise and compact. The method of cartographic manual digitization, which is very widely used, employs vector mode. However, the vector data are not amenable to digital image processing, and they are also relatively tedious for performing certain GIS operations such as overlay, neighbourhood etc.

A raster has a cellular organization (Fig. 18.5c). Remote sensing data are the most typical example. The location of a feature is represented in terms of row and column positions of the cells occupied by the data. Limitations in raster structure arise from degradation in information due to cell size. On the other hand, the raster structure is simple, easy to handle and suitable for performing image processing as well as GIS operations.

Fig. 18.5 Basic data structures in GIS. a A map, and the same in b vector format and c raster format (G granite; R road; Q quarry; V vegetation)
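A toy comparison of the two structures (in the spirit of Fig. 18.5): the same three features stored once as vector geometries and once as a raster grid of class codes. The coordinates, grid size and class codes are illustrative assumptions.

```python
import numpy as np

# Vector representation: explicit coordinates per feature
vector_layer = [
    {"id": "Q", "type": "point",   "coords": [(4.0, 6.0)]},                                     # quarry
    {"id": "R", "type": "line",    "coords": [(0.0, 2.0), (5.0, 3.0), (9.0, 2.5)]},             # road
    {"id": "G", "type": "polygon", "coords": [(1.0, 4.0), (6.0, 4.0), (6.0, 9.0), (1.0, 9.0)]}, # granite
]

# Raster representation: a grid of cells, each carrying one class code
# 0 = background, 1 = granite (G), 2 = road (R), 3 = quarry (Q)
raster_layer = np.zeros((10, 10), dtype=np.uint8)
raster_layer[4:9, 1:6] = 1     # granite polygon rasterized onto cells
raster_layer[2, :] = 2         # road approximated along one row of cells
raster_layer[6, 4] = 3         # quarry occupies a single cell

# The raster is coarser (cell-size dependent) but directly usable in image processing
print(raster_layer)
```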

In this presentation, the examples of geo-data processing utilize mainly a raster data structure.

18.2.5 Main Segments of GIS

Broadly, a GIS comprises five main segments or stages (Fig. 18.6): (1) data acquisition; (2) data pre-processing; (3) data management; (4) data manipulation and analysis, and (5) data output. These are discussed in the following sections.

Fig. 18.6 Main segments of a GIS

18.3 Data Acquisition (Sources of Geodata in a GIS)

18.3.1 Remote Sensing Data

Remote sensing has become one of the most important input data sources in GIS. Remote sensing data acquisition in various spectral ranges (UV, VIS, NIR, SWIR, TIR and microwaves) has been discussed in detail in earlier chapters. Platforms for remote sensing include space-borne, aerial and ground-based types (Table 18.2).

Table 18.2 Acquisition of various types of geodata from different platforms

Geodata type | Ground-based platform | Aerial platform | Space-borne platform
1. Remote sensing spectral data (UV, VIS, NIR, SWIR, TIR and microwaves) | ✓ | ✓ | ✓
2. Geophysical
   a. Magnetic data | ✓ | ✓ | ✓
   b. Gravity data | ✓ | P | ✓
   c. Electromagnetic | ✓ | ✓ | –
   d. VLF induction | ✓ | ✓ | –
   e. Electrical data | ✓ | – | –
   f. Seismic/SONAR | ✓ | – | –
3. Gamma ray | ✓ | ✓ | –
4. Geochemical data | ✓ | ✓ | –
5. Geological
   a. Structural data | ✓ | ✓ | ✓
   b. Lithological data | ✓ | ✓ | ✓
6. Topographical | ✓ | ✓ | ✓
7. Other thematic data (vegetation/forestry, land use, soil, hydrological, meteorological) | ✓ | ✓ | ✓

✓ Yes; P Possible (under research and development)

18.3.2 Geophysical Data

Geophysical methods rely on measurements of physical properties of geological materials to discriminate between different types of objects. Details of geophysical methods can be found in standard texts (e.g. Parasnis 1996). Wherever necessary, the geophysical data can be inducted into the GIS.

The magnetic methods measure anomalies in local geomagnetic fields in order to infer intensity of magnetization in rocks. The intensity of magnetization depends on magnetic susceptibility, which is used as a physical attribute in geo-exploration. Various types of magnetometers are used to measure the magnetic field, and the magnetic anomalies are expressed in gamma or nano-Tesla. The survey takes the form of profiles in a base grid. The data at different stations along the profile are displayed as point data or as profiles. If the profiles are quite closely spaced, the data can be interpolated to generate contour maps. The magnetometers were initially deployed as ground-based instruments. After World War II, improvements led to the development of airborne magnetic methods, which have become a widely used tool for regional exploration, especially for structure, geological mapping, basement topography, mineral exploration etc.

MAGSAT (Magnetic Field Satellite) initiated the use of magnetic methods for the study of the Earth from space (Fischetti 1981; Fig. 18.7a). MAGSAT orbited the Earth for about 8 months during 1979–80. It carried a scalar and a three-axis vector magnetometer, which possessed an accuracy of ±3 gamma in total field, and ±6 gamma in each component in vector measurements. The MAGSAT data have been used to develop models of the main field for the 1980 Epoch and maps of crustal anomalies.

Gravity methods utilize measurements of the gradient of the Earth's gravitational potential, i.e. the force of gravity (g). Gravity anomalies yield information about the variation in density of the material and, hence, the type of material at depth. The anomalies are measured by instruments called gravimeters and are expressed in milliGal. Gravity surveys, like the magnetic surveys, are carried out along profiles, which are referred to a base grid. Gravity surveys have been essentially ground based. The airborne gravity-gradiometer is under research and development.

GRACE (Gravity Recovery And Climate Experiment) is a joint program of NASA and the German Aerospace Center (DLR). Its aim is to map tiny variations in the Earth's gravitational field from space. It was launched in 2002, and far exceeded its design life span of five years. A follow-up mission is proposed for launch in 2017.

GRACE uses a set of two low-altitude (500 km high) satellites nick-named "Tom" and "Jerry". They are placed in the same orbit (89° inclined from the equator, near-circular near-polar orbit), one satellite 220 km ahead of the other (Fig. 18.7b). The principle is simple. As the pair of satellites passes over zones of higher gravity, the lead satellite is affected first and gets pulled, and the distance between the two satellites increases. The lead satellite then crosses the anomaly and slows down. Subsequently, as the trailing satellite comes over the area of higher gravity anomaly, it accelerates and the distance between the two satellites decreases. Satellite-to-satellite tracking is done with extraordinary precision (1-µm accuracy). The experiment yields data on minute variations in the Earth's gravitational pull.

Fig. 18.7 a MAGSAT (1979–80); b the twin GRACE satellites for gravity measurements (2002–2016) (a, b Courtesy of NASA)

The GRACE mission has provided valuable insight on large-scale features and processes on/near the Earth's surface, for example:

• It is observed that significant gradual thinning of polar ice-sheets has occurred between 2003–2013 (e.g. Harig and Simons 2015), such that the melting would imply about 0.9 mm per year of sea-water rise on a global basis.

• It is found that large groundwater aquifers are stressed and are being gradually depleted year after year, for example in the Gangetic Plains (Rodell et al. 2009) and the Middle East (Voss et al. 2013).

As in the case of magnetic data, gravity data are usually presented as point data, profiles or as contoured maps.

Electromagnetic induction methods provide data on differences in electrical conductivities for differentiating between ground materials. The method was initially developed as a ground-based technique, but its most spectacular success has been from airborne platforms. The VLF (very low frequency) method is a type of EM induction method that utilizes EM waves from radio stations. The VLF-EM signal carries subsurface information, and the method has been used from ground-based and airborne platforms.

There are also other geophysical exploration methods, for example electrical methods, which measure electrical properties such as resistivity, self-potential etc., and seismic methods, which measure elastic wave propagation properties. These are essentially ground-based techniques, and their data are shown as depth-profile data. A modification of the seismic technique involving audible frequencies is SONAR, used in ocean bathymetric surveys.

The types of platforms that are used for geophysical data acquisition are summarized in Table 18.2.

18.3.3 Gamma Radiation Data

The natural radioactivity of surface materials also constitutes an important attribute. Of the three natural radioactivity emissions (alpha, beta and gamma rays), only γ-rays can be used for remote sensing. Although the γ-radiation is basically EM radiation possessing very high energy, it is discussed here separately from remote sensing for the following two reasons:

(1) the technique is restricted to ground-based and very-low-altitude aerial survey (about 100–150 m above the terrain), owing to the rapid attenuation of the γ-ray intensity with altitude;
(2) γ-ray surveys are carried out in non-imaging profiling mode, similar to many geophysical methods, such as magnetic, EM induction etc., in contrast to the imaging mode normally used in remote sensing.

The main sources of γ-radiation in the crustal rocks are potassium (40K), uranium (238U) and thorium (232Th). The intensity of γ-radiation at different ground locations is measured using instruments such as the Geiger-Mueller counter and scintillation counter, which give total counts. Further, the γ-radiation associated with the different sources, i.e. 40K, 238U and 232Th, possesses different energy levels (i.e. frequency). Multichannel γ-ray spectrometers collecting radiation data of different levels can be used to provide relative concentrations of these constituents. The data can be expressed as single or composite (ratio) parameters and presented as point data, profiles or maps.

The γ-ray technique has potential for application in snow-pack studies, soil moisture measurements, environmental surveillance, geological mapping, and mineral exploration, especially of radioactive minerals and hydrothermal deposits (e.g. Bristow 1979; Duval 1983).

18.3.4 Geochemical Data

Geochemical data are often conjunctively used with geological and geophysical data. The data may consist of the distribution of major elements, minor elements, ionic complexes or the relative distribution of some constituents. Geochemical methods have been, by and large, ground based. The technique involves laying of base lines and a grid, and then sampling of rocks/soil/water, followed by their chemical analysis. Further, it has been found that the air contains aerosols, particulate matter and vapour, which may be representative of the underlying terrain. Air sampling at heights of 60–150 m has been reported for exploration of certain deposits, e.g. mercury-bearing deposits (Barringer 1976). Further, a higher concentration of iodine vapours in the atmosphere has been linked to oil fields and petroleum source rocks. However, the air-sampling technique for geochemical exploration has found only limited use, as its application depends upon the pattern of dispersion and meteorological processes. Carranza (2008) provides an excellent treatment of geochemical data processing in GIS.

18.3.5 Geological Data

For integrated geo-investigations, geological data form an important input. The data may comprise lithological or structural descriptions at points or in the form of maps.

18.3.6 Topographical Data

For most investigations, topographical data constitute primary information. Often, topographical maps form the base on which all other geodata are co-registered, in order to develop a digital data bank for the GIS. Data on elevation, slope and aspect (i.e. slope direction) also form vital collateral information for interpretation of other types of data.
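Slope and aspect grids of the kind mentioned above are routinely derived from a DEM by differencing neighbouring cells. A minimal sketch follows; the cell size, the synthetic elevation surface and the convention that rows increase northwards are assumptions made here for illustration.

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Derive slope (degrees) and aspect (degrees clockwise from north) from a DEM grid.
    Assumes rows increase northwards and columns increase eastwards."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)      # elevation change per metre (rows, cols)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect = azimuth of steepest descent, i.e. direction of (-dz_dx, -dz_dy)
    aspect = (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360.0) % 360.0
    return slope, aspect

# Synthetic DEM: a gentle surface dipping towards the east
x = np.arange(100) * 30.0
dem = np.tile(500.0 - 0.05 * x, (100, 1))
slope, aspect = slope_aspect(dem)
print(f"{slope.mean():.1f} deg mean slope, {aspect.mean():.0f} deg mean aspect (east-facing)")
```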

18.3.7 Other Thematic Data

Attribute data from any discipline can, if necessary, also be used as input in GIS. For example, thematic maps giving the following information may be used:

1. Vegetation
2. Forest
3. Land use
4. Soil
5. Hydrology
6. Meteorology etc.

As such, there is no limit to the geodata sets which could be incorporated in GIS and used in integrated geo-investigations. Only the researcher can define the most useful data sets for a particular investigation.

18.4 Pre-processing

Sources and acquisition of collateral geodata have been discussed in Sect. 18.3. Most of the collateral data to be combined with remote sensing data are usually available in the form of maps showing distribution of either the continuous or the categorical type of attribute. Pre-processing is almost invariably required to convert the collateral data sets into a form suitable for storage in the GIS data bank (GIS data base), so that the data are amenable to integrated analysis. It can be a simple to a fairly complex exercise. The main pre-processing operations in the raster-based GIS are: (a) data input, (b) interpolation, (c) black-and-white image display, and (d) registration (Fig. 18.8).

Fig. 18.8 Main steps in generating raster-based GIS

1. Data input. This means the encoding of data into a computer-readable form. The maps, tables etc. obtained from various sources must be digitized for inclusion in GIS. It involves two types of methods, usually in combination: (a) digitization of maps for entering locational data and (b) keyboard entry for computerizing the attribute data.

Keyboard entry involves manually entering the data at a computer terminal. It is mainly used for entering the attribute data, called tables in GIS. Feature labels, to identify points, lines and polygons, are also entered through the keyboard.

On-screen digitization: Most commonly these days, first the map is scanned and displayed on the monitor for on-screen digitization (Fig. 18.9), which is then an interactive process. Several types of scanners are available on the market. A common type is a flat bed scanner, which uses a linear array of CCD cells to scan the map spread out on the table (Fig. 18.10a). Another one is a large-format scanner that can be used to scan large A0-size maps (Fig. 18.10b); as the map is fed into the scanner, a CCD linear array scans the map. The scanned output is displayed on a monitor. A cursor/mouse is used to locate points on the monitor and co-ordinates of various points on the map are successively



read and stored in the PC. By manual interaction, relative attributes, viz. identification of different features and values etc., are assigned.

Fig. 18.9 On-screen digitization; the figure shows marking of lineaments, as an example (courtesy of Rohan Kumar)

2. Interpolation. At this stage, some selected cells in the output grid, defining lines/boundaries, have been filled up, the rest of the grid cells being vacant. These vacant grid cells need to be filled up with values in order to generate the full image. The method of filling up vacant cells depends upon the type of data—whether continuous or categorical.

In the case of the continuous type of attributes, the raster cells through which the contour lines pass are assigned appropriate relative values (e.g. 400, 500 ppm, etc. or 50, 60 milliGal etc.). Filling up of the vacant raster cells is done by the process of interpolation, which basically deals with predicting unknown values at a point from known values in the vicinity. It is a type of neighbourhood operation. Interpolation programmes may employ a wide range of statistical methods for computation. In this way, each grid cell acquires a digital value which represents the intensity/magnitude of the field/parameter at that point (Fig. 18.11).

In the case of categorical attributes, the intervening grid cells are filled up by the 'principle of extension'. The lines serve as boundaries and the zones between the boundaries are given identification marks or values. In each zone, a unit cell is assigned a particular value and all the adjoining cells, up to the boundary on either side, acquire the same value (Fig. 18.12).

3. Image display. The spatial data matrix can be presented in the form of black-and-white images by selecting a suitable gray scale. For a continuous type of data, normally the lowest value is given the darkest tone (0 DN) and the highest value the brightest tone (255 DN), the intermediate values getting appropriate tones. This is similar to the method of generating remote sensing images. Figure 18.13a–c shows the stages in generating an image from typical profile data of a continuous type of attribute.

The categorical type of data can also be displayed as a black-and-white image by choosing an appropriate gray scale, i.e. each categorical unit is displayed in one tone (a particular DN value) (Fig. 18.14). In such geo-images, the different DN values or gray tones may not necessarily have any relative significance. In some cases, however, the DN values may be linked to a parameter of interest, such as age of the rocks, sand/shale ratio etc.

In general, such geo-images of categorical attributes may have rather specific or limited utility, for instance in examining the entire area in binary mode, e.g. areas of sandstone/no sandstone, or areas of marble/no marble etc. (Fabbri 1984). This approach could be useful in stratification of data sets aimed at understanding and integrating other parameters.
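The two raster pre-processing steps just described (filling vacant cells by interpolation, and rescaling the result to a 0-255 gray scale for display) can be sketched as follows. Inverse-distance weighting is used purely as an illustrative choice of interpolator, and the sample points and values are hypothetical.

```python
import numpy as np

def idw_grid(points, values, shape, power=2.0):
    """Fill a grid by inverse-distance-weighted interpolation from scattered known points."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.zeros(shape, dtype=float)
    weights_sum = np.zeros(shape, dtype=float)
    for (r, c), v in zip(points, values):
        dist = np.hypot(rows - r, cols - c)
        dist[dist == 0] = 1e-6                 # avoid division by zero at sample cells
        w = 1.0 / dist**power
        grid += w * v
        weights_sum += w
    return grid / weights_sum

def to_gray(grid):
    """Linear rescale of a continuous field to 0-255 DN for black-and-white display."""
    g = (grid - grid.min()) / (grid.max() - grid.min())
    return (g * 255).astype(np.uint8)

# Hypothetical anomaly observations at (row, col) positions, values in, say, nT
points = [(5, 5), (5, 45), (45, 25), (25, 10)]
values = [430.0, 520.0, 480.0, 455.0]
field = idw_grid(points, values, shape=(50, 50))
image = to_gray(field)            # darkest tone = lowest value, brightest = highest
print(image.min(), image.max())   # 0 255
```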

Fig. 18.10 Types of commonly used scanners. a Flat bed scanner (source www.canon.com); b A0-scanner (source www.colortac.com)

Fig. 18.11 Schematic of constructing digital imagery for continuous type of data. a Input map of continuous type data; b polygon format
encoding; c grid-cell format encoding and interpolation

Fig. 18.12 Schematic of constructing digital imagery for categorical type of data. a Input map; b polygon format encoding; c grid-cell format
encoding

Fig. 18.13 a Aeromagnetic (total field) profile data (Lethlakeng area, SE Botswana; courtesy of Geological Survey Department, Lobatze). b Contour map prepared from the interpolated profile data (contour interval: 10 nTesla). c Image generated from b. Note that the bright areas on the image correspond to higher anomaly values and the darker parts to the lower anomalies. (a–c Courtesy of GAF mbH, Munich)

Fig. 18.14 Representing a geological map in image mode; various units are represented in shades of gray

4. Registration. The next step, before integrated interpretation of image data sets can be attempted, is to geometrically superimpose the multidata sets, i.e. registration. The registration process can be broadly classified into two types: image-to-image (discussed in Sect. 13.4) and image-to-map registration (discussed here). Cartographic maps are prepared using a specific standard projection such as the Lambert Conformal Conical projection, Transverse Mercator projection etc. Image-to-map registration implies that all the image data are transferred to the standard map projection. When registering remote sensing image data onto cartographic maps, a number of factors must be appreciated (Catlow et al. 1984).

(a) The map data are invariably subjected to generalization by the cartographer, involving simplification of shape and combining of features.
(b) The thematic boundary drawn on maps, in attempting to represent the spatial distribution of continuous variables, may be artificial, often involving considerable interpolation.
(c) Temporal variations may have occurred during the time interval of two data-set acquisitions.

Basically, the registration procedure involves the following: selection of a base projection; selection of control points; and performing geometric transformation.

Selection of a base projection. The choice of the base projection is generally guided by two considerations: relative spatial resolution levels of the various multi-data sets, and the number of data sets available on a common projection. It is often worthwhile to use the image with higher spatial resolution as the base, so that minimal loss in overall resolution takes place. In addition, if several data sets possess a certain common projection, e.g. a standard cartographic projection, it could be worthwhile to use that particular projection as the base, so that the job of registration is simplified and the accompanying loss in information due to resampling is minimized.

Selection of control points. The black-and-white images, original maps etc. are studied to locate prominent and easily defined points, such as sharp bends in a river, prominent topographical features, or railroad intersections etc. These serve as control points for registration. Field observations and GPS data are now frequently used to help locate control points.

Performing geometric transformation. Using the control points identified above, geometric transformation is carried out; the image data are resampled and interpolated to yield co-registered images. The procedure is the same as described in Sect. 13.4.

Once the various data sets acquired from different sources have been co-registered to a common geographic base, this forms a geodatabase or digital data bank of the GIS (see Fig. 18.8).
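The geometric transformation step is commonly a low-order polynomial (often affine) mapping estimated from the control points by least squares; a minimal sketch is given below. The control-point coordinates are hypothetical.

```python
import numpy as np

# Estimate a first-order (affine) image-to-map transformation from ground control points:
#   map_x = a0 + a1*col + a2*row,   map_y = b0 + b1*col + b2*row
image_pts = np.array([(10, 12), (480, 25), (30, 500), (470, 490)], dtype=float)   # (col, row)
map_pts = np.array([(300100.0, 770200.0), (314200.0, 769800.0),
                    (300700.0, 755900.0), (313900.0, 755400.0)])                  # (easting, northing)

A = np.column_stack([np.ones(len(image_pts)), image_pts[:, 0], image_pts[:, 1]])
coef_x, *_ = np.linalg.lstsq(A, map_pts[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, map_pts[:, 1], rcond=None)

def image_to_map(col, row):
    """Apply the fitted affine transformation to an image pixel position."""
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

print(image_to_map(250, 250))   # map coordinates of an interior pixel
# Residuals at the control points indicate the registration accuracy (RMS error)
```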
18.5 Data Management

The GIS software packages are built around Data Base Management Systems (DBMS), which comprise a set of programs to manipulate and maintain data in a data base. The DBMS in GIS use relational data models, and hence are also called Relational DBMS. They provide advantages in operations such as controlled ordering and organization, sharing of data, data integrity, flexible user-preferred views, search functions etc. The DBMS acts as a controller to provide interaction between the data base and the application program.

18.6 Data Manipulation and Analysis

Taking into consideration the pattern of most commercially available software packages, the discussion here is divided into three parts: (a) image processing operations, (b) classification, and (c) GIS analysis. However, it must be stated that any boundary between these raster-based operations is purely artificial.

18.6.1 Image Processing Operations

Once the multiple geo-data are co-registered in a raster GIS, wide possibilities exist for data processing, enhancement and analysis. The whole range of digital image processing modules (Chap. 10) is available for data processing; a simple raster (map algebra) operation of this kind is sketched below.
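As an illustration of such raster processing, the sketch below computes a difference layer between two co-registered grids, anticipating the groundwater example under item 1 below (depth to water table as topographic elevation minus water-table elevation). The grids are synthetic.

```python
import numpy as np

# Two co-registered raster layers (same grid and georeferencing assumed)
topo_elevation = 400.0 + 5.0 * np.random.default_rng(1).random((100, 100))    # ground surface (m)
water_table_elevation = topo_elevation - np.linspace(2.0, 12.0, 100)          # synthetic water table (m)

# Map algebra: a cell-by-cell difference gives a new thematic layer
depth_to_water_table = topo_elevation - water_table_elevation

print(f"Depth to water table ranges from {depth_to_water_table.min():.1f} "
      f"to {depth_to_water_table.max():.1f} m")
# The result can be rescaled to 0-255 DN and displayed like any other geo-image
```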

1. Black-and-white images. A black-and-white image is typically a single-parameter image, i.e. one image displays variation in a single parameter across the scene, in shades of gray. The various digital processing techniques for single and multiple images (viz. contrast enhancement, filtering, transformation etc.) can be applied to enhance information and detect local and regional features. For example, Fig. 18.15a is a groundwater-table image; Fig. 18.15b is produced by gradient filtering of Fig. 18.15a and shows groundwater-table gradients across the area. Figure 18.15c is a difference image generated using the topographic elevation image minus the water-table-level image; it depicts depth to water table from the surface. A number of examples of groundwater data image processing are discussed by Singhal and Gupta (2010).

Fig. 18.15 a Image mode representation of groundwater-table data (Maner basin, S. India); brighter tones imply higher elevation of groundwater-table. b Groundwater-table gradient image; brighter tones imply steeper water-table gradients. c Depth to water-table image generated as topographic elevation minus water-table elevation; brighter tones imply greater depth of water level from the ground surface. (a–c Courtesy of N.K. Srivastava)

2. Colour coding. The generation of colour composites of co-registered multidata sets is a commonly used technique for collective interpretation. Any one of the colour coding schemes (RGB or IHS) can be applied. The resulting colour composites can be suitably interpreted for thematic mapping.

3. Synthetic stereo. The technique of 2.5D visualization involving synthetic stereo, described earlier (Sect. 13.10), can also be applied for feature enhancement and visual display of co-registered multidata sets. Frequently, the remote sensing image is used as the base, over which the collateral data image is incorporated as parallax (e.g. Fig. 18.16).

4. Colour-coded synthetic stereo. A still more interesting and informative approach for feature enhancement can be through integration of the colour coding and synthetic stereo techniques. In this way, four-dimensional data can be pictorially presented by using three variables in the colour space and one as parallax (Volk et al. 1986; Harding and Forrest 1989). Figure 18.17 shows an interesting example, and demonstrates the potential of pictorially integrating multidisciplinary data for visual interpretation.

The image shows part of the Mayasa Concession, Spain, known for its rich and extensive polymetallic-pyrite belt (Strauss et al. 1981; Ortega 1986). Geologically, at Almaden, Hg mineralization is found to occur in the vicinity of the Almaden syncline and mostly at the contact of basic rocks with a certain (Criadero) quartzite. The area shown in Fig. 18.17 constitutes the strike extension of the known Hg deposits at Almaden. The remote sensing, geochemical and geophysical data have been combined to form a synthetic colour stereo. The colour coding has been carried out in the IHS scheme (I = TM4, H = Hg values, and S = constant), gravimetric data are represented as parallax, and Pb values are depicted as white contours. It is interpreted that some areas are marked by the association of quartzite band, Palaeozoic rocks, high gravity anomaly corresponding to basic rock, significant Pb values, and high Hg values, and
18.6 Data Manipulation and Analysis 281

Fig. 18.16 Synthetic stereo pair (Tharsis Mining Complex, SW Spain). The TM4 has been used as the base image and aeromagnetic (total field)
values correspond to the parallax. The maximum parallax range = 60 nT (Volk 1986)

therefore can constitute potential target areas. Thus, in this may vary laterally and the purpose of stratification is to
manner, multidisciplinary geo-exploration data can be subdivide the study area into smaller homogeneous
combined and coherently interpreted for in order to define sub-areas. The main advantages of stratification are
targets for further exploration. two-fold; upon subdivision, the units become easier to
handle, and homogeneous units with low variance are
identified, which increases the accuracy of classifica-
18.6.2 Classification tion. For example, in a quantitative study utilizing
co-registered Landsat, Seasat and SIR-A (digitized) data
As far as classification is concerned, the basic aim of sets in a part of Northern Algeria, Rebillard and Evans
incorporating collateral data with remote sensing data is to (1983) used geological maps as a first step to stratify and
improve discrimination/classification accuracy. A number of locate areas of homogeneous characteristics. They
methods have been developed to incorporate additional identified different lithological units (e.g. clay, white
spatial data, such as data on soil type, digital elevation sand, Pliocene outcrop etc.) and subsequently used a
models, data from aerial photographs, and geophysical data linear discriminant analysis program in a supervised
etc. (e.g. Franklin 1994; Foody 1995; Gong 1996). Con- approach for classification.
junctive use can be made at any one of the following three 2. Integration at the classification stage. Remote sensing
stages (Hutchinson 1982): (1) pre-classification stage for data is of the continuous type, whereas collateral data
stratification, (2) classification stage for classifier operations, can be of the continuous or the categorical type. Com-
and (3) post-classification stage for further sorting. bining continuous and categorical attributes is a partic-
ular problem and calls for special statistical approaches.
1. Pre-classification stratification. At the pre-classification Thus, if:
stage, the collateral data can be used for identification of (a) collateral data are of continuous type, they can be easily
homogeneous areas or strata, i.e. stratification. In a incorporated as additional channels or attributes in a
natural environment, the physical attributes of objects multidimensional classification (see e.g. Peddle 1993).
282 18 Integrating Remote Sensing Data with Other Geodata …

Fig. 18.17 a, b is the colour synthetic stereo pair demonstrating The quartzite ridge appears as a dark-coloured band on the image. It is
integration of remote sensing, geochemical and geophysical explo- displaced by a transverse.oblique fault F–F, along which a basic
ration data in the Mayasa Concession, Spain. The colour coding intrusive rock (high gravity anomaly) has been emplaced. The region
scheme uses IHS, where intensity (I) = TM4, hue (H) = Hg values in A is marked by the quartzite band, the Palaeozoic rocks in the north,
ppm (so that low values correspond to the blue end and high high gravity anomaly due to basic intrusives, significant Pb values and
values correspond to the red end of the colour wheel), and saturation high Hg values, so that this constitutes a suitable area for further
(S) = constant. The gravimetric data is represented as parallax, so that exploration. At B (not in stereo) and C, there occur an unknown gravity
higher positive gravity anomaly is shown as higher positive relief. The high and a low respectively. At D, the hues (green, yellow and red)
Pb values in ppm are indicated by the white contours. c Interpretation indicate the presence of high Hg values, which can be related to Hg
map of above. From the image stereo pair, a strong association of the dispersion along the streams (courtesy of Minas de Almaden y
quartzite ridge with high Hg and Pb values is evident near Almaden. Arrayanes Sa, Madrid)
18.6 Data Manipulation and Analysis 283

The statistical procedures are the same as used in other quick, easily implemented and efficient. Moreover,
classifications, except that pre-standardization of data is several types of ancillary data can be incorporated in
a requisite to enable comparison and mutual compati- framing decision rules for sorting (see e.g. Joria and
bility of multidisciplinary data (see e.g. Batchelor Jorgenson 1996).
1974).
(b) collateral data are of categorical type, some possibilities The drawback in using ancillary information in classifi-
of combining them with remote sensing data have been cation is the difficulty in dealing with data of different for-
reviewed by Strahler et al. (1980). One use of collateral mats, measurement units and scales. All the data from
data can be in computing a priori probabilities which different sources may first have to be brought onto the same
are used in a maximum-likelihood classifier. Strahler platform before these methods are applied. Nevertheless,
et al. (1978, 1980) applied this technique in a forest once the ancillary information has been incorporated effec-
cover classification study. They linked topographical tively, the accuracy of classification may be increased.
data to the occurrence of forest cover types and from In addition, the conjunctive use of ancillary data may be
this estimated a priori probabilities at different eleva- more useful in knowledge-based classifiers. However, these
tions and aspects (i.e. geographic orientation with ref- techniques are based on more complex heuristic rules which
erence to north) and found that applying these a priori sometimes are difficult to compose and understand, and also
probabilities to remote sensing data could improve are very subjective and data dependent.
classification accuracy by as much as 27%.
1. Post-classification sorting. At the post-classification
stage, the collateral data can be used to sort out classes 18.6.3 GIS Analysis
which are spectrally similar, but amenable to discrim-
ination on the basis of collateral data. Hutchinson GIS analysis functions are unique as they can concurrently
(1982) gave an example of this approach using Landsat handle spatial as well as non-spatial (attribute) data. In this
multispectral data and ancillary data from a study in a treatment, five broad types of GIS analysis functions are
desert area in California. As some object classes were discussed: retrieval, measurement, overlay, neighbourhood
spectrally similar on the Landsat data, he started by and connectivity (Table 18.4).
initially grouping such classes (e.g. basalt, desert var-
nish surface, alluvial fans and shadows) together, and 1. GIS Query/Retrieval functions. As mentioned earlier, in
later used topographical data to sort out and further GIS, a data layer or a coverage consists of a set of log-
separate the problem classes from each other ically related geographic features and their attributes. The
(Table 18.3), finally observing that this approach is information stored can simply be queried or retrieved by

Table 18.3 A simple example of post-classification sorting


Initial class assignment based on Landsat Discrimination rule based on topographical Final class assignment
MSS data
Active sand dunes Slope < 1% Dry lake
Otherwise Active sand dunes
Stabilized sand Slope < 1% Dry lake
1% < slope < 15% Stabilized sand
Otherwise Active sand dunes
Dissected cobbly alluvial fans Slope < 3% Low gradient undissected basalt, alluvial
fans
3% < slope < 15% Dissected cobbly alluvial fans
Otherwise Mountain scrub
Shadow and basalt Slope aspect N or NW Shadow
Slope < 3% Highly dissected alluvial fans
3% < slope < 8% Slightly dissected basalt alluvial fans
8% < slope < 15% Dissected basalt alluvial fans
Otherwise Basalt mountains
Hutchinson 1982
284 18 Integrating Remote Sensing Data with Other Geodata …

Table 18.4 Important GIS functions (modified after Aronoff 1989; Star and Estes 1990)
1 Maintenance and analysis of the spatial data – Format transformations
– Geometric transformations
– Editing functions etc
2 Maintenance and analysis of the attribute data – Attribute editing functions
– Attribute query functions
3 Integrated analysis of spatial and attribute – Retrieval/classification/measurement – Retrieval/Query
data – Classification
– Measurement
– Overlay operations Arithmetic/Boolean
– Neighbouring operations – Search
– Topographic
functions
– Thiessen polygons
– Interpolation
– Contour generation
– Connectivity functions – Contiguity
– Proximity
– Network
– Spread
– Seek
– Perspective view
4 Output formatting – Map annotation, text, labels, graphic symbols, patterns
etc

selective search on spatial and attribute data. The output of each value in one data layer by a value at the corre-
display will show selectively retrieved data in their sponding location in the second data layer. The logical
proper geographic locations. The criteria for query or operations involve determining those areas of interest
selective retrieval could be based on attributes, or Boo- where a particular condition is fulfilled (or not fulfilled).
lean logical conditions (see later) or classification. These are generally performed using Boolean operators:
A typical example could be: select pixels of limonite/ AND, OR, XOR, NOT (Fig. 18.18).
gossan, or those having a certain content of Au or Cu,
and a certain aeromagnetic response level, etc. As the Overlaying can be done in both raster and vector struc-
selective search can be operated on both spatial and tures. However, in vector format, overlaying is rather tedious
attribute data, this becomes a powerful function in han- as it leads to creation of new polygons of various shapes,
dling and processing data in the GIS environment. sizes and attributes which become difficult to handle. In
2. Measurement functions. Some measurement functions contrast, the overlaying operation in raster structure proceeds
are commonly included in GIS software, such as those to
measure distances between points, lengths of lines,
perimeters and areas of polygons or number of
points/cells falling in a polygon etc. Sample applications
could be: find the number of cells of a particular class, or
the area of an exploration block, or measure the distance
between a mine and the smelter plant.
3. Overlay functions. This forms possibly the most impor-
tant function in a GIS, particularly for geological studies.
Often, we have to deal with several data sets of the same
area; overlay functions perform integration in a desired
manner. There are two fundamental types of overlay
operations: arithmetic and logical. Arithmetic operations
include addition, subtraction, division and multiplication Fig. 18.18 Concept of Boolean conditions
18.6 Data Manipulation and Analysis 285

Fig. 18.19 Classification using Boolean logic. The groundwaters have been classified with respect to suitability for drinking purpose per WHO
norms using chemical quality of groundwater (TDS, Cl, Na + K, and Ca + Mg) and applying Boolean logic (courtesy of N.K. Srivastava)

cell by cell, and therefore is relatively simple. Figure 18.19 Interpolation is required to be carried out during regis-
gives an example of classification of groundwater applying tration and generation of a co-registered image data bank,
Boolean logic. Index overlay is discussed under modeling. and also in DEM generation.

4. Neighbourhood functions. These functions are required Besides, the various ‘local operations’ in the field of
to examine the characteristics of an area surrounding a digital image processing can also be considered as neigh-
specific location. They are useful in determining local bourhood operations, e.g., high-pass filtering, image
variability and adjoining information. Neighbourhood smoothing, etc.
functions commonly include search, topography and
interpolation. 5. Connectivity functions. These functions operate on the
a. Search function is a frequently used neighbourhood inter-connections of three basic elements (i.e. points,
function. A suitable search area (e.g. a rectangle, square lines and polygons). They operate by creating a new data
or circle) can be defined. Sample application could be: layer and accumulating data values in the new layer
search pixels of limestone within 10-km distance of a derived from data values over the area being traversed.
cement plant. Connectivity functions are grouped into contiguity,
b. Topographic functions. A raster data set can be repre- proximity, network, spread, stream and perspective-view
sented in terms of a digital elevation model (DEM). The functions.
term topography here refers to the characteristics of such a. Contiguity. Areas possessing unbroken adjacency are
a DEM surface. The topographic functions include slope, classed as contiguous. What constitutes broken/unbroken
curvature and aspect, which are typical neighbourhood adjacency in a particular case can be prescribed
functions. Slope is defined as the rate of change of ele- depending upon the problem under investigation. Sample
vation; curvature is the second derivative (i.e. rate of application could be: check for contiguity of a polluting
change of slope); aspect refers to the direction that a tank from the adjoining water body.
slope faces. b. Proximity function. Proximity is a measure of the dis-
c. An interpolation module is commonly provided in GIS tance between two features. The notion of distance could
packages. This is a typical neighbourhood operation and be a simple length, or a computed parameter such as
involves predicting unknown values at given locations travel time, noise level etc. Using the proximity function,
using the known values in the neighbourhood. a buffer zone is created around a feature. A buffer zone is
286 18 Integrating Remote Sensing Data with Other Geodata …

data, each polygon may be assigned a name as an attri-


bute. In raster structure, each cell is assigned a new
numerical value for class identification.

Sources of Error in GIS

Errors of several types may creep in at different stages in


GIS and affect the data quality. The two basic categories of
errors are: (a) inherent and (b) operational. Inherent error is
that which comes from the source data. Operational error is
introduced during the working of the GIS.
It is not possible to avoid errors completely; however,
they can be managed to be kept within permissible limits.
Fig. 18.20 Example of buffer zones along thrusts; arrow indicates the Therefore, an understanding of the types and sources of
closest (< 500 m) zone errors is necessary for better job management.

defined as an area of a specified width, drawn around the


map location (Fig. 18.20). Buffering can be done around 18.7 GIS Based Modelling
points, lines or polygons.
c. Network function. A set of interconnected linear features A model is a simplified representation of a situation, phe-
forming a pattern is called a network. This function is nomenon or system that enables us to understand it more
commonly used in analysis where resources are to be clearly. The modeling process of may take place in GIS or
transported from one location to another. It can be alternatively may be linked to other computer programs, e.g.
applied in environmental studies and pollutant dispersion for statistical operations. This discussion here is confined to
investigations. raster models, keeping in view the emphasis here on remote
d. A spread function helps evaluate characteristics of an sensing data. It may be mentioned that with the integration
area around a particular entity. It is endowed with char- of various advanced geostatistical techniques (multivariate
acteristics of both network and proximity functions. In mathematical models) in GIS, like discriminant analysis,
this, a running total of the computed parameter is kept, as logistic regression, artificial neural networks, fuzzy logic,
the area is traversed by a moving window. This is a very Bayesian network classifiers and artificial intelligence, a
powerful function, particularly for environmental impact significant stride has been made in GIS based modelling in
assessment and pollution studies. geosciences. Here, only some of the basic modeling con-
e. Stream function. The job of the stream function (also cepts are presented with geological applications (for more
known as the seek function) is to perform a directed details on GIS modeling, see e.g. Skidmore 2002).
search outward in an incremental manner, starting from a
specified point and using a decision rule. The outcome of 1. Index models
a stream function is the delineation of paths from the start
point until the function halts. Applications could be, for An index model calculates index value for each unit area
example, to trace the path of water flow or the path of a (pixel) and generates a map showing rank distribution based
rolling boulder. on the index values. It is typically an overlay operation and
f. Perspective view. Raster image data can be displayed as a is most readily carried out in raster format. Figure 18.21
2.5-D surface, where the height corresponds to the value provides a schematic of the above methodology. For inte-
at that pixel (see Sect. 13.10). Various enhancements grating different input data layers (factors), the weighted-
such as shaded relief modelling and perspective view can linear combination method is the simplest and widely
be applied on this type of data. This generates views applied. The relative importance of each factor or criterion is
valuable for understanding a pattern, as the human mind estimated against other factors; this becomes the weight
can easily perceive shapes and forms. Further, an addi- assigned to each input data layer. Data within each layer,
tional raster data set can be superimposed over the per- whether continuous type or categorical type, need to be
spective view model by draping. standardized and brought into a range of 0.0 to 1.0. For
g. Classification is a procedure of subdividing a population continuous (interval, ratio) type data, commonly a linear
into classes and assigning each class a name. In vector transformation of the following type is used:
18.7 GIS Based Modelling 287

Fig. 18.21 Schematic showing methodology of raster index modeling

xi  xmin In logistic regression, the dependent variable is binary,


Si ¼ ; ð18:1Þ
xmax  xmin i.e. 0/1 (presence/absence). It is used when the dependent
variable is categorical (e.g. presence or absence of landslide)
where Si is the standardized value corresponding to the
and the independent variables are categorical and/or con-
original value Xi, Xmin is the lowest value and Xmax is the
tinuous (numeric). An example of landslide susceptibility
highest value in the data layer. This brings all pixel values in
assessment using logistic regression is given by Kundu et al.
the range of 0.0–1.0. For categorical type (ordinal, nominal)
(2013).
data, a relative ranking procedure based on expertise and
knowledge is used and all the values in each data layer are
3. Process models
again brought into 0.0–1.0 range. Finally, the standardised
pixel values are multiplied by the weight of the corre-
A process model utilizes a known mathematical relationship
sponding layer and aggregated pixel wise to generate an
between a set of physical variables and a certain environ-
index value raster layer.
mental phenomenon, and is used to predict the dependent
Index modeling has found numerous applications in
environmental phenomenon. These models are also known
geosciences, e.g. in landslide hazard zonation, groundwater
as conceptual models, physically based models and process
pollution hazard assessment etc.
driven models. Process models are frequently dynamic and
the relationships could be empirically derived. Further, the
2. Regression models
dependent components of the process model may themselves
be derived separately, and then integrated in the process
A regression model develops a relationship between a
model. These models possess raster structure.
dependent variable and a set of independent variables in the
A typical example of the process model is the application
form of an equation, which can be used for prediction or
of RUSLE (Revised Universal Soil Loss Equation) in GIS
estimation. It is based on best-fit function. Two types of
for predicting average sol loss in a watershed. RUSLE states
regression models exist: linear and logistic.
that average soil loss is related to various factors, namely,
A linear regression model takes the general form of:
rainfall-runoff erosivity factor, soil erodibility factor, slope
y ¼ a þ b1 x1 þ b2 x2 þ . . .. . .. . .. . .. . . þ bn xn ð18:2Þ length factor, slope steepness factor, crop management fac-
tor and the support practice factor. All these input parameters
where y is the dependent variable and xi is the independent differ spatially and are computed for each cell (pixel), to
variable and a, bi are the regression coefficients. All variables finally compute the average soil loss in the watershed.
in the equation are numeric variables. Common transfor- Numerous studies have been carried out using this concept
mation include square, square root and logarithmic. (e.g. Kothyari and Jain 1997; Erdogan et al. 2007).
The purpose of regression model is to interpolate and or
predict values of y (dependent variable) from values of xi 4. Probabilistic models
(independent variable). Spatial regression models have been
used for many purposes, e.g. to estimate snow water Probabilistic models are based on Baye’s theory of proba-
equivalent, estimate precipitation in mountainous terrain etc. bility. A probability is a confidence level or level of rea-
Several examples of GIS-based regression analysis of geo- sonableness about the conclusion. It involves estimation of
chemical data are given by Carranza (2008). posterior probability of occurrence of a phenomenon or
288 18 Integrating Remote Sensing Data with Other Geodata …

incidence in the given set of conditions. The maximum Campbell AN, Hollister VF, Dutta RV, Hart PE (1982) Recognition of
likelihood classifier, extensively used in remote sensing, is a a hidden mineral deposit by an artificial intelligence program.
Science 217(4563):927–928
typical example of probabilistic model. During the last Carranza EJM (2008) Geochemical anomaly and mineral prospectivity
decade, development of prospectivity maps in mineral mapping in GIS. Handbook of Exploration and Environmental
exploration has been possibly the most important new Geochemistry vol. 11. Elsevier, Amsterdam, 351 p
research theme based on ‘Weight of Evidence’ (WofE) Catlow DR, Parsall RJ, Wyutt BK (1984) The integrated use of digital
cartographic data and remotely sensed imagery. In Proceedings of
modelling utilizing probabilistic approach (see Sect. 19.9). integrated approaches in remote sensing, Guildford, UK
ESA-SP-214, pp 41–66
Chang K (2008) Introduction to geographical information systems.
18.8 Applications McGraw Hill, 450 pp
Davis JC (1986) Statistics and Data analysis in geology, 3rd edn.
Wiley, New York, p 646
GIS methodology has found applications in almost all Duval JS (1983) Composite color images of aerial gamma-ray
branches of natural resources investigations—mineral spectrometric data. Geophysics 48:722–735
exploration, hydrocarbon exploration, groundwater, forestry, Erdogan EH, Erpul G, Bayramin I (2007) Use of USLE/GIS
methodology for predicting soil loss in a semiarid agricultural
hydrology, soil erosion, environmental studies, urban plan- watershed. Environ Monit Assess 131:153–161
ning, various natural hazards, seismicity evaluation etc. For Fabbri AG (1984) Image processing of geological data. Van Nostrand
example, Bonhan-Carter et al. (1988) integrated geological Reinhold, New York 244 p
data sets in GIS for gold exploration. Goosens (1991) used Foody GM (1995) Land cover classification by an artificial neural
network with ancillary information. Int J Geog Inform Sys
Landsat TM, aeromagnetic and airborne radiometric data to 9:527–542
map granitic intrusions and associated skarns. Miranda et al. Franklin SE (1994) Discrimination of subalpine forest species and
(1994) integrated SIR-B and aeromagnetic data for recon- canopy density using digital CASI, SPOT PLA and Landsat TM
naissance mapping. Rowan and Bowers (1995) integrated data. Photogram Eng Remote Sens 60:1233–1241
Gong P (1996) Integrated analysis of spatial data for multiple sources:
Landsat TM data with SAR data and a data base of known using evidential reasoning and artificial neural network techniques
mines and prospects in a GIS study for mineral exploration. for geological mapping. Photogramm Eng Remote Sens
Brainard et al. (1996) used GIS for assessing risk in trans- 62:513–523
porting hazardous waste. With the interfacing of more Goosens MA (1991) Integration of remote sensing data and ground data
as an aid to exploration for granite related mineralization,
advanced geostatistical techniques, such as regression and Salamance province, W-Spain. Proceedings of 8th International
WofE modeling, GIS is being increasingly used for various Conference on Geologic Remote Sensing, Vol I. Environmental
geoscientific applications, such in groundwater, petroleum Research Institute of Michigan, Ann Arbor, Mich, pp 393–406
exploration, generation of mineral prospectivity maps etc. Harding AE, Forrest MD (1989) Analysis of multiple geological data
sets from English Lake District. IEEE Trans Geosci Remote Sens
(see Sect. 19.9). There is virtually no end to the range of 27:732–739
possible applications of GIS. Harig C, Simons FJ (2015) Accelerated West Antarctic ice mass loss
continues to outpace East Antarctic gains. Earth Planet Sci Lett
415:134–141
Heywood I, Cornelius S, Carver T (2006) An introduction to
References geographical information systems, 3rd edn. Pearson Education
Ltd, UK, p 426
Aronoff S (1989) Geographic information systems: a management Hutchinson CF (1982) Techniques for combining Landsat and ancillary
perception. WDL Publ, Ottawa, p 294 data for digital classifieation improvement. Photogramm Eng
Barringer AR (1976) Airborne geophysical and miscellaneous systems. Remote Sens 48:123–130
In: Lintz J Jr, Simonett DS (eds) Remote sensing of environment, Joria PE, Jorgenson JC (1996) Comparison of three methods for
Addison-Wesley, Reading, pp 291–321 mapping Tundra with Landsat digital data. Photogram Eng Remote
Batchelor GB (1974) Practical approach to pattern classification. Sens 62:163–169
Plenum, London Konecny G (2003) Geoinformation. Taylor and Francis, London, New
Bonham-Carter GF (1994) Geographic information systems for York, p 248
geoscientists: modeling with GIS. Pergamon Press, Ontario, Kothyari UC, Jain SK (1997) Sediment yield estimation using GIS.
Canada, p 398 Hydrol Sci J 42(6):833–843
Bonham-Carter GF, Agterberg FP, Wright DF (1988) Integration of Kundu S, Saha AK, Sharma DC, Pant CC (2013) Remote sensing and
geological datasets for gold exploration in Nova Scotia. Pho- GIS based landslide susceptibility assessment using binary logistic
togramm Eng Remote Sens 54:1585–1592 regression model: a case study in the Ganeshganga watershed.
Brainard J, Lovett A, Parfitt J (1996) Assessing hazardous waste Himalayas. J Indian Soc Remote Sens 41(3):697–709
transport risks using a GIS. Int J Geog Inform Sys 10:831–849 Longley PA, Goodchild MF, Maguire DJ, Rhind DW (eds) (1999)
Bristow Q (1979) Gamma ray spectrometric methods in uranium Geographical information systems. Wiley, NewYork
exploration airbome instrumentation. In: Hood PJ (ed) Geophysics Maguire DJ, Goodchild MF, Rhind DW (eds) (1991) Geographic
and geochemistry in the search for metallic areas. Geological information systems—principles and applications. Longman, Har-
Survey of Canada Economic Geology Report 31:135–146 low, Essex
References 289

Miranda FP, McCafferty AE, Taranik JV (1994) Reconnaissance Skidmore A (ed) (2002) Environmental modelling with GIS and remote
geologic mapping of a portion of the rain-forest-covered Guiana sensing. Taylor and Francis, London, p 251
Shield, northwestern Brazil, using SIR-B and digital aeromagnetic Star J, Estes J (1990) Geographic information systems: an introduction.
data. Geophysics 59:733–743 Prentice Hall, Englewood Cliffs, New Jersey
Ortega GE (1986) Intrduction to the geology and metallogeny of the Strahler AH, Logan TL, Bryant NA (1978) Improving forest cover
Almaden area, Castro-Iberian zone, Spain. In Proceedings of 2nd classification accuracy from Landsat by incorporating topographic
European workshop on remote sensing in mineral exploration, EEC, information. Proceedings of 12th Symposium on Remote Sensing of
Brussels Environment, vol II. Ann Arbor, MI, pp 927–942
Parasnis DS (1996) Principles of applied geophysics. Springer 456 p Strahler AH, Estes JE, Maynard PF, Mertz FC, Stow DA (1980)
Peddle DR (1993) An empirical comparison of evidential reasoning, Incorporating collateral data in Landsat classification and modelling
linear discriminant analysis, and maximum likelihood algorithms procedures. In Proceedings of 14th Symposium, Remote Sensing of
for land cover classification. Can J Remote Sens 19:31–44 Environment, vol II. Ann Arbor, Michigan, pp 1009–1026
Rebillard P, Evans P (1983) Analysis of coregistered Landsat, Seasat Strauss GK, Roger G, Lecolle M, Lopera E (1981) Geochemical and
and SIR-A images of varied terrain types. Geophys Res Lett 10 geological study ofthe volcano-sedimentary sulfide orebody of La
(4):277–280 Zarza-Huelva, Spain. Econ Geol 76:1975–2000
Rodell M, Velicogna I, Famiglietti JS (2009) Satellite-based estimates Volk P, Haydn R, Bodechtel J (1986) Integration of remote sensing and
of groundwater depletion in India. Nature 460:999–1002 other geodata for ore exploration—a SW Iberian case study. In
Rowan LC, Bowers TL (1995) Analysis of linear features mapped in Proceedings of International Symposium on Remote Sensing
landsat thematic mapper and side-Iooking radar images of the Environment, 5th Thematic Conf, Remote Sensing for Exploration
Reno, Nevada-California 1°  2° quadrangle: implications of Geology, Reno, Nevada
mineral resource studies. Photogram Eng Remote Sens Voss KA et al (2013) Groundwater depletion in the Middle-East with
61:749–759 GRACE with implications for transboundary water management in
Singhal BBS, Gupta RP (2010) Applied hydrogeology of fractured the Tigris-Euphrates-Western Iran region. Water Resour Res
rocks, 2nd edn. Springer, Dordrecht 49:904–914
Geological Applications
19

19.1 Introduction parameters from observations on elements of photo inter-


pretation and geotechnical elements.
Multispectral remote sensing data have shown tremendous It should be appreciated that even when soil and vege-
potential for applications in various branches of geology—in tation cover is heavy, remote sensing data have their value in
geomorphology, structure, lithological mapping, mineral and providing information on subsurface geology, at least to
oil exploration, stratigraphic delineation, geotechnical, some extent. The type of bedrock and structure control the
ground water and geo-environmental studies etc. (e.g. Vin- type of soil, soil moisture and vegetation, which can give
cent 1997; Gupta 2003; Drury 2004; Sabins 2007; Prost indirect information on the geology of the area.
2013). The purpose of this chapter is to review briefly the Remote sensing investigations should not be considered
parameters involved in various thematic applications and as an alternative to field investigations. On the other hand,
present illustrative examples using mainly satellite data. remote sensing data interpretation must be supported by field
Rock attributes (i.e. structure, lithology, rock defects etc.) data—including field observations, sampling, analysis and
and physical processes (i.e. climatic setting, weathering and even subsurface exploration—for reliable inferences.
erosion agencies) operating in a region over a period of time Remote sensing investigations may be said to have a
govern the nature and appearance of landscape, i.e. topog- two-fold purpose: (a) to allow viewing of the ground features
raphy, drainage, soil and vegetation (Fig. 19.1). These in in a different perspective, on a different scale, or in a dif-
turn, influence photo-characters. The main task in remote ferent spectral region, and (b) to reduce the amount of field
sensing image interpretation is to decipher geological work involved in covering the entire study area.

Fig. 19.1 Conceptual diagram showing the bearing of rock attributes and surface processes on geotechnical elements and elements of photo
interpretation

© Springer-Verlag GmbH Germany 2018 291


R.P. Gupta, Remote Sensing Geology, https://doi.org/10.1007/978-3-662-55876-8_19
292 19 Geological Applications

Fig. 19.2 Different stages in a typical remote sensing programme comprise: defining the problem, understanding the resolution requirements,
selecting data sets, and finally processing and application

A remote sensing application programme typically passes topography, landslides, poor vegetation, higher surface
through several stages, which include defining the problem, runoff, drainage characteristics, and types of soils and
assessing resolution requirements, selecting data sets, and rocks. Therefore, it is of utmost importance to physically
finally data processing, interpretation and application conceive and define the problem.
(Fig. 19.2). 2. Resolution requirements. Once the problem has been
defined, the next step is to estimate the resolution
1. Defining the problem. The first and the most important requirements, i.e. what spatial, spectral, radiometric and
task in any application assignment is to define the prob- temporal resolution of the remote sensing data will be
lem, i.e. to identify the various physical features, pro- sufficient to detect and/or identify the above physical
cesses, and phenomena involved, so as to understand the parameters of interest. Different types of tasks have dif-
possibilities of manifestation of the phenomena on remote ferent resolution requirements. For example, in an
sensing images. In a nutshell, for a particular geological investigation to map landslides in a hilly terrain,
application, we should be able to outline precisely what broad-band panchromatic VNIR remote sensing data
we should look for on the remote sensing data. For with high spatial resolution will be necessary; on the
example, in a study on geological structure, vital clues are other hand, for delineating mineral or rock types, data
given by tectonic landforms and various types of trends with essentially high spectral resolution will be required.
and alignments. Similarly, in a problem on soil erosion, Similarly, temporal resolution (repetivity) requirements
the important physical parameters will be: areas of steep depend upon the dynamics of the features of interest.
19.1 Introduction 293

3. Selection of data. Depending upon the resolution 19.2 Accuracy Assessment


requirements, and sensor types and characteristics avail-
able, the remote sensing data sets are to be selected for a Accuracy assessment has two aspects—radiometric accuracy
particular application task. Often, care is required to (which governs the thematic accuracy) and geometric
ensure that the atmospheric-meteorological conditions accuracy (i.e. positional accuracy), both being closely
existing at the time of remote sensing coverage are opti- interdependent and interlinked with each-other (Congalton
mum, with regard to cloud cover, dust, haze, rain and solar 2005; Congalton and Green 1993, 2008).
illumination. Together with remote sensing data, related
ground truth and ancillary information, e.g. geological
structural data, topographical, soil, vegetation maps etc. 19.2.1 Factors Affecting Pixel Radiometry
and other ground information, are gathered as required. and Geometry—An Overview
4. Data processing, interpretation and application. After
selection, the data are processed, transformed, rectified, Before embarking on image interpretation and its applica-
enhanced, superimposed over other data sets and inter- tion, it is pertinent to have an overview of factors that affect
preted for features of interest. The interpretations are pixel radiometry and geometry. A discussion “what is in a
controlled by ground and ancillary information. Finally, pixel” has been presented by Cracknell (1998). Pixel is the
the results of data interpretation are transferred to appli- basic element of image and is considered to be a
cation groups. square-shaped unit area on the image that has a corre-
sponding exactly similar square-shaped ground IFOV (see
In the following pages, first we consider the accuracy Fig. 5.6). This would imply that all radiation from the
aspects and then discuss the applications of remote sensing ground IFOV is received and integrated to yield a certain
image data in various sub-disciplines of geology themati- brightness value at the pixel and no radiation from outside
cally, giving a few examples. the ground IFOV is coming into the pixel. However, this is

Table 19.1 Factors affecting pixel radiometry and geometry—an overview


S. No. Type Description and implication
1. Ground characteristics The ground resolution cell may not be homogeneous but may consist of surfaces of different spectral
(mixed pixel) characteristics; the DN value at the pixel would correspond to the total intensity of radiation
contributed by all the objects as a mixture
2. Atmospheric scattering As the EM radiation passes through the atmosphere, scattering and absorption affect the radiation
and absorption intensity; this effect may be spatially variant
3. Non-uniform detector Non uniform detector response affects pixel radiometry and leads to striping on the image
response
4. Point spread function Diffraction in the optical system leads to radiation from each infinitesimal point on the ground being
spread over a certain area on the sensor’s image plane; the net effect could be that radiation from
objects lying just outside the nominal IFOV could also affect the pixel radiometry; this may be
particularly important for adjacently located very bright ground features (e.g. surface fires in thermal
sensing)
5. Adjacency effect Adjacency effect is caused by atmospheric scattering; some radiation arising from outside the ground
IFOV may be scattered by the atmosphere to get directed into the sensor aperture resulting in
increased path radiance
6. Re-sampling/registration Geometric rectification and registration of image data are routinely required for multi-scene
interpretations and GIS applications; due to the fact that the input and output grids are differently
oriented/aligned/structured, image data are resampled using different algorithms; this may lead to
shifting of pixels or generation of new pixel values
8. Topography and relief Topography and relief lead to shift in geometric position of the image pixel; this effect may be
spatially variant
9. Sensor characteristics, Sensor characteristics including viewing angle also influence image geometry and radiometry; the
viewing angle effect is also spatially variant
10. Platform instability Platform instability results in geometric distortions
11. Digital image processing Finally, digital image processing including stretching, filtering, enhancement and transformation etc.
lead to changes in pixel values of the image
294 19 Geological Applications

too simplistic and idealized and not practically true. An agencies and the rock attributes. It depends upon three main
overview of some of the more important factors affecting factors: (a) climatic setting, including its variation in the
pixel radiometry and geometry is given in Table 19.1. past, (b) underlying bedrock (rock type and structure) and
(c) the time span involved (Fig. 19.1). One of the widest
applications of remote sensing data has been in the field of
19.2.2 Positional Accuracy Thematic Accuracy
geomorphology, due to three reasons.
Remote sensing data products (aerial photographs and
Commonly maps are generated from remote sensing image
satellite images) give direct information on the landscape—
data. Positional accuracy implies how closely the map gen-
the surface features of the Earth, and therefore geomorpho-
erated from the imagery fits the ground, i.e., accuracy of the
logical investigations are most easy to carry out based on
location of a point on the image with reference to its physical
such data.
location on the ground. Some of the factors that may influ-
Landform features can be better studied on a regional
ence positional accuracy are (discussed in Chap. 7):
scale using synoptic coverage provided by remote sensing
data, rather than in the field.
– Topography and relief
Stereoscopic ability permits evaluation of slopes, relief
– Sensor characteristics, viewing angle
and forms; vertical exaggeration in stereo viewing brings out
– Platform instability (pitch, roll and yaw).
morphological details.
Usually it is considered that positional accuracy of half a
1. Spatial resolution. Geomorphology involves the study of
pixel is acceptable for images obtained from optical sensors
a number of parameters, namely: extent and gradient of
such as Landsat TM/ETM+/OLI, ASTER, SPOT-HRV,
the slopes; their variations, shape, size, pattern; whether
IRS-LISS and other high resolution sensors such as Ikonos,
the slopes are barren or covered with soil or vegetation;
Pleiades, Cartosat, GeoEye, WorldView etc. It is imperative
type of surface material; whether the slope is stable or
to have good stable and unique control points and field DGPS
unstable; and mutual relations of the slopes. Whereas local
survey for assessing positional location. Positional accuracy
landforms are best studied on large-scale to medium-scale
is given in terms of RMSE (root mean square error). It is
stereo photographs, the regional setting of landforms
computed as sum of the square of the difference between the
extending over several kilometres and their mutual rela-
positions of the point on one data layer as compared to the
tionships can be better evaluated on coarser-resolution
position of the same point on another data layer.
space images. The study of landforms on satellite images
is also sometimes referred to as mega-geomorphology.
19.2.3 Thematic Accuracy 2. Spectral resolution. Data in the VNIR broad-band range
have been used extensively for geomorphological
Thematic accuracy refers to the accuracy of the mapped the- investigations, as they provide higher spatial resolution
matic layer (e.g. land cover type or lithologic unit type) at a and are able to bring out differences in topography,
particular time compared to what was actually present on the vegetation, soil, moisture and drainage. The application
ground at that time. Land cover/landuse may have temporal potential of thermal-IR data appears to be limited due to
variation whereas lithology is an intrinsic character. It is coarser spatial resolution. Radar may be advantageously
important that the reference data have high accuracy—both used for gathering data on micro-relief, including surface
thematic and positional, else there would be inherent dis- roughness, vegetation, soil moisture and drainage.
crepancies. Radiometric content of the pixels would govern 3. Temporal resolution. Although land surface features are
their thematic classification. Reference data or ground truth stable, their manifestation and detection on remote
can be collected by photointerpretation, aerial reconnaissance sensing images are predominantly influenced by tempo-
survey, or field checks etc. Accuracy assessment is done by ral surface parameters, namely, soil moisture, vegetation,
generating an error matrix that shows reference data (columns) land cover and drainage (dry/wet channels). Therefore, it
and remote sensing classified data (rows) (see Fig. 13.45). It is important that the remote sensing data are acquired at a
enables comparison of the two maps quantitatively. time which provides adequate discrimination between
features of interest.

19.3 Geomorphology In many cases, landforms are characteristic of a particular


type of terrain, e.g. (1) sink holes and collapse structures
Geomorphology deals with the study of landforms, includ- mark limestone terrain, (2) landslides, soil creep, rockfalls
ing their description and genesis. Landform is the end pro- etc. indicate mass wasting and unstable slopes, (3) terraces,
duct resulting from interactions of the natural surface natural levees, fans, point bars and oxbow lakes indicate a
19.3 Geomorphology 295

fluvial environment, (4) moraines, drumlins and broad


U-shaped valleys point towards a glacial environment,
(5) sand dunes and loess mark aeolian terrain, (6) deltas, spit
bars, lagoons and beaches etc. indicate marine processes,
(7) volcanic cones, calderas and volcanic flows characterize
an igneous environment, and so on. The various landforms
are described in detail in standard works on geomorphology
(e.g. Bloom 1997; Thornbury 1978). As landforms are
directly observed on remote sensing data products, it is
important that the image interpreter must have a sound
knowledge of geomorphological principles and processes.
An outstanding presentation on mega-geomorphology has
been made by Short and Blair (1986).
The following describes the salient features of various
landforms on remote sensing images, the description having
been organized using genetic classification. Many, or rather
most, landforms in nature are a result of multiple processes,
and therefore the categorization made here may appear
arbitrary in places.
Fig. 19.3 Strato-volcanoes south of Jakarta, Indonesia. The look
direction is from the top. The large circular caldera (bottom right) is
19.3.1 Tectonic Landforms more than 1 km wide. Radial drainage is well developed (courtesy of
Radarsat Inc.)

Tectonic landforms may be defined as structural landforms


of regional extent. W.M. Davis in 1899 considered that high-density dendritic and rectangular drainage patterns
structure, processes and time constitute the three most sig- (Fig. 19.5). Besides, orbital repetitive remote sensing may
nificant factors shaping the morphology of a land. Of the capture the process of active volcanism (see Figs. 19.120,
three, structure, i.e. the deformation pattern, has the most and 19.125).
profound control and this idea led to the concept of mor-
photectonics. In almost all cases, the structure of the rock has
an intrinsic influence on landforms due to selective differ- 19.3.3 Fluvial Landforms
ential erosion and denudation along structurally weaker
zones. Everett et al. (1986) provide numerous examples. Running water is one of the most prominent agents of
Many examples given here (e.g. Figs. 19.27, 19.30, 19.31, landform sculpturing, whose effects are almost everywhere
and 19.32) could also be considered as landforms of tectonic to be seen. Huge quantities of sediments or rock material are
origin. removed, transported from one place to another and dumped
by rivers, thus modifying the land surface configuration
(Baker 1986). The fluvial landscape comprises valleys,
19.3.2 Volcanic Landforms channel ways and drainage networks.
The drainage pattern is the spatial arrangement of
Volcanic landforms are primarily constructional, and result streams and is, in general, characteristic of the terrain. Dif-
from extrusion of magma along either vent centres or ferent drainage networks possess geometric regularity of
fractures on the Earth’s surface. Central-type neo-volcanic different types, which reveal the character of the geological
eruptions are confined to plate boundaries, most being terrain, and also help in understanding the fluvial system.
concentrated on the convergent margins around the Pacific Howard (1967) has summarized the geological significance
Ocean. They result in landforms such as conical mountains of various drainage patterns (Table 19.2; Fig. 19.6). Six
(Fig. 19.3). Fissure-type eruptions create sheets of flows types of drainage patterns have been considered as basic, i.e.
forming plateaus (Fig. 19.4). Basaltic weathered surfaces with gross characteristics readily distinguishable from other
are frequently marked by black-cotton soil and basic patterns, namely dendritic, rectangular, parallel,
296 19 Geological Applications

Fig. 19.4 MOMS-02P panchromatic stereo pair showing volcanic plateau landform in Somalia (courtesy of DLR, Germany)

spacing along the length. These rivers have relatively narrow


deep channels and stable banks and this pattern is most
widely developed in flood plains. A distributory pattern
consists of several branching channels, originating from the
same source. It indicates the spreading of water and sedi-
ments across the depositional basin and develops over
alluvial fans and deltas.
An anastomotic pattern comprises multiple intercon-
necting channels, separated by relatively stable areas of
flood plains. A deranged pattern is a disorderly pattern of
haphazard and erratic short streams and ponds found in
swampy areas and glacial moraines and outwash plains.
A braided pattern is controlled by the load carried by the
Fig. 19.5 Landform formed by flood basalt—Deccan Plateau, India.
stream and is marked by a shallow channel separated by
The topography is flat-topped plateau, generally dark-toned in the islands and channel bars. A barbed pattern is one in which
VNIR range, due to black-cotton soil. Vegetation is sparse and the the tributaries join the main stream in bends pointing up
terrain is marked by high-density dendritic drainage, monotonously stream. It indicates strong structural and tectonic control
extending over large tracts (Landsat MSS4 infrared image.)
and uplift. Several other types of drainage patterns related
to the geometry have also been described in the literature.
trellis, radial and annular. A number of modified basic A palimpsest pattern is one which includes traces of an
patterns have also been described. older pattern, which forms the background for the devel-
In addition to the above, several special drainage/channel opment of the present pattern. A drainage anomaly is a
patterns have also been identified. Some examples are given local deviation from the regional drainage pattern, e.g.
in Fig. 19.6. Each of these drainage patterns indicates a rectilinearity, local appearance/disappearance of meanders,
specific geological characteristic of the terrain. Meandering anomalous ponds, marshes, fills, turns, gradients, piracy,
is the most common channel pattern. Such rivers take on a rapids etc., and implies certain local geological
sinuous shape, developing alternating bends with irregular phenomena.
19.3 Geomorphology 297

Table 19.2 Common drainage patterns and their geological significance (see Fig. 19.6)
Type Description Geological significance
Dendritic Irregular branching of streams, haphazardly, resembling a tree Homogeneous materials and crystalline rocks; horizontal
beds; gentle regional slope
Subdendritic Slightly elongated pattern Minor structural control
Pinnate High drainage density pattern; feather-like Fine-grained materials such as loess
Rectangular Streams having right-angled bands Jointed/faulted rocks, e.g. sandstones, quartzites etc.
Angulate Streams joining at acute angles Joints/fractures at acute angles to each other
Parallel Channels running nearly parallel to each other Steep slopes; also in a areas of parallel elongate
Trellis Main streams running parallel and minor tributaries joining Dipping or folded sedimentary or low-grade
the main streams nearly at right angles meta-sedimentary rocks; areas of parallel fractures
Radial Streams originating from a central point of region Volcanoes, domes, igneous intrusions; residual erosion
features
Centripetal Streams converging to a central point Depression, crater or basin, sink holes
Annular Ring-like pattern Structural domes

Fig. 19.6 Important types of drainage patterns: a dendritic; b rectangular and angulate; c parallel; d trellis; e annular and sub-radial; f meandering;
g distributary; h anastomotic; i deranged; j braided; k barbed; l rectilinear
298 19 Geological Applications

Fig. 19.7 A vast alluvial fan has formed between the Kunlun and
Altun mountain ranges, China. As the river flows from SE to NW, apex
of the fan is located in the SE and distal part in the NW (ASTER image
printed black-and-white from colour; courtesy NASA/METI/AIST/
Japan Space Systems, and U.S./Japan ASTER Science Team)

The landforms associated with fluvial erosion are gorges,


canyons, V-shaped valleys, steep hill slopes, waterfalls,
pediments etc. As the stream emerges out of the mountain,
the flow velocity gets reduced; this leads to formation of
alluvial fans that may be of small to mega dimensions
(Fig. 19.7). Typical other depositional landforms include
alluvial plains, flood plains, natural levees, river terraces,
meander scars, channel fills, point bars, back swamps and
deltas. Depending upon the dimensions involved, the land-
forms can be identified on aerial and satellite remote sensing
data. An important application of repetitive remote sensing
data is the study of dynamic features, such as changes in
planform and migration of rivers (Figs. 19.8 and 19.9), and Fig. 19.8 a Landsat MSS4 image of a part of the Middle Ganges
delineation of palaeochannels and palaeohydrology basin. b Interpretation map showing various fluvial landforms.
c Successive southward migration of the Ganges River in several
(see Sect. 19.11.3). stages as interpreted from Landsat images (Philip et al. 1989)

19.3.4 Coastal and Deltaic Landforms

The oceans cover a major part of the Earth and surround the continents. A coastline is the boundary between land and ocean. In a general sense, the coast refers to a zone of indefinite width on both sides of the coastline. Coastal landforms are those which are influenced and controlled by proximity to the sea. Several types of coastal erosional landforms, such as cliffs, terraces, benches, shelves, caves, islands etc., and depositional landforms, such as beaches, spits, bars, tidal flats and deltas, can be identified on aerial photographs and satellite images, depending upon the dimensions involved and the scale provided by the sensor. Selected examples are given by Bloom (1986) and Coleman et al. (1986).

Rivers transport huge quantities of sediments from the land to the seashore, and a variety of landforms may

Fig. 19.9 Channel planform pattern of Ganges and Burhigandak rivers, and changes through the years 1935–84. Patterns based on a topographic
maps (1935), b aerial photographs (1966), and c, d Landsat images (1983) and (1984) (after Philip et al. 1989)

Fig. 19.10 SIR-B image covering part of the Ganges flood plains, Bangladesh. The mottled grey and black areas on the right are cultivated fields connected by extensive irrigation and drainage channels. The uniform grey areas on the left correspond to the flood plain, susceptible to recurring floods and major reworking (SIR-B image courtesy of JPL)

develop, commonly grouped under the term delta (Figs. 19.10, 19.11 and 19.12). Sometimes gigantic deltas are formed by great rivers. Deltas include distributary channels, estuaries, bars, tidal flats, swamps, marshes, etc. and are often marked by an anastomotic drainage pattern. Many of the coastal landforms are composite products, having evolved as a result of multiple processes, due to eustatic-level changes. The shoreline of emergence is an exposed portion of the sea floor, subjected to sub-aerial agencies. Similarly, the shoreline of submergence is simply a drowned portion of a subaerial landscape, now subjected to submarine agencies. For example, Fig. 19.30 could also be considered as a coastal landform of composite type.

Fig. 19.11 The Ganges delta, India-Bangladesh; note the high sediment load and distributary drainage pattern (MERIS image, printed black-and-white)

Fig. 19.12 Tidal flats in northern Germany (X-band aerial SAR image). Note the high-density dendritic drainage and variation in surface moisture over the tidal flats (courtesy of Aerosensing Radarsysteme GmbH)

19.3.5 Aeolian Landforms

Deserts, where aeolian activity predominates, cover a significant part of the land surface. These are generally remote, inhospitable areas. The distribution of deserts in the world is not restricted by elevation, latitude or longitude. Erosion, transportation and deposition create the landforms in deserts, chiefly by wind action.
The aeolian terrain is marked by scanty or no vegetation and little surface moisture. Therefore, on VNIR photographs and images, the area has very light photo tones. Active dunes have no vegetation and stabilized dunes may have scanty grass cover. The various landforms can be distinguished on the basis of shape, topography and pattern (see e.g. Walker 1986).
Aeolian erosional processes lead to the formation of a variety of landforms such as yardangs (sculptured landforms streamlined by wind), blow-outs (deflation basins), desert pavement (stony desert) and desert varnish (dark shiny surficial stains). The transportation action removes loose sand and silt particles to distant places. Dust storms are aeolian turbidity currents. Loess deposits are homogeneous, non-stratified and unconsolidated wind-blown silt. They are susceptible to gullying and may develop pinnate and dendritic drainage patterns. Dry loess slopes are able to stand erect and form steep topography. Aeolian deposition leads to sand sheets, various types of dunes such as crescent dunes, linear dunes, star dunes, parabolic dunes, and complex dunes and ripples (Figs. 19.13 and 19.14). Other landforms in deserts could be due to fluvial activity, such as fans, dry river channels and lakes. Desert lakes (playas) are generally salty, shallow and temporary, and constitute sources of mineral wealth such as salts formed by evaporation.
There is little vegetation in desertic terrains, common types being xerophytes (drought or salt-resisting plants), succulents (which store water in their system) and phreatophytes (with long roots that reach the water table). The plants that are present hold soil, inhibit deflation, check surface velocity and provide sites for deposition. Rainfall may seldom occur in deserts, but when it does, it may be violent and lead to widespread fluvial lacustrine processes (Fig. 19.15). Remote sensing data can help monitor changes in deserts, their landforms, movement etc., and to locate oases and buried channels (see Sect. 19.11.3).

Fig. 19.13 Longitudinal sand dunes of the famous Rub' al Khali, which forms the world's largest continuous sand desert, Saudi Arabia (ASTER image, printed black-and-white from colour composite; courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and US/Japan ASTER Science Team)

Fig. 19.14 Star sand dunes in part of Saudi Arabia. In the south-east corner are circular irrigation fields due to sprinkler irrigation (MOMS-02P image, printed black-and-white from colour composite; courtesy of DLR, Oberpfaffenhofen)

Fig. 19.15 Part of the Stuart desert, Australia, where a sudden storm has led to widespread flash floods and the formation of numerous lakes (Landsat MSS4 image) (courtesy of R. Haydn)

19.3.6 Glacial Landforms

Glaciers are stream-like features of ice and snow, which move down slopes under the action of gravity. Glaciers occur at high altitudes and latitudes, and about 10% of the Earth's land surface is covered with glacial ice. The areal extent of glaciers is difficult to measure by field methods, and remote sensing images provide information of much practical utility in this regard (see Sect. 19.16).
Typical erosional landforms of glacial origin are broad U-shaped valleys, hanging valleys, fjords, cirques and glacial troughs. Figure 19.16 shows aligned lakes due to glacial scouring (and possibly bedrock structure). The huge moving masses of ice and snow erode and pick up vast quantities of fragmental material and transport these varying distances before deposition. The glacial deposit is typically heterogeneous, consisting of huge blocks to fine silt or rock flour, and is called till. The depositional landforms include moraines, drumlins, till, glacial drift etc. Below the snow line (line of perpetual snow), the ice melts and gives rise to streams. In this region, up to a certain distance downstream, the landforms have both fluvial and glacial characteristics, and they are called fluvio-glacial. Typical fluvio-glacial landforms include outwash plains, eskers, fans and deltas, and glacial lacustrine features (Fig. 19.17).
Broadly, glacial landforms produce gently rolling or hummocky topography with a deranged or kettle-hole drainage pattern. Images exhibit a mottled pattern due to varying soil moisture and the presence of a large number of ponds and lakes.

Fig. 19.16 Stereo image pair of the near-IR spectral band showing aligned lakes due to glacial scouring (and possibly bedrock structure) in the Canadian shield of eastern Ontario, Canada (aerial digital camera images acquired using a Kodak DCS 460 CIR camera, with 3060 × 2036 × 12-bit format; the pixel spacing is 60 cm) (courtesy of Doug King)

Fig. 19.17 Seasat-SAR image of part of Iceland showing prominent glacial features such as broad valleys and flowing ice; many of the fluvio-glacial features, such as outwash plains, streams with suspended material and dammed lakes, are also seen (processed by DLR; image courtesy of K. Arnason)

Special landforms associated with specific geological rock types, such as karsts in limestones, intrusives in igneous rocks, etc. are described with their respective lithologic types.

19.4 Structure

1. Scope. Remote sensing techniques have found extensive application in structural studies to supplement and integrate structural field data, the aerial and space-acquired data providing a completely new dimension in terms of synoptic view. It was in this perspective that aerial photography galvanized regional structural analysis in the late 1940s–1950s, and the same phenomenon was repeated by satellite data in the early 1970s–80s. The very high resolution satellites with pixel resolution of 0.4–0.5 m now offer a completely new tool for detailed mapping as well as synoptic analysis, not earlier available.
2. Basis and purpose. The basis for deriving structural information from remote sensing data emanates from the concept of morphotectonics—that rocks acted upon by erosional processes result in landforms which are related to both internal characteristics (rock attributes, i.e. lithology and structure) and external parameters (type and intensity of erosional processes) (Fig. 19.1). This implies that: (a) structures have significant influence on landform development, and (b) erosional landforms carry imprints of structural features of rocks.
Most commonly, structural-geological studies commence by deciphering planar discontinuities in the rocks, with a view to understanding their characteristics, disposition and spatial relations. A planar discontinuity is marked by contrasting physicochemical conditions in rocks, in terms of mineral composition, chemical weathering property, mechanical strength and erodibility. The chances of identification of discontinuities are related to the resolution of the sensor (i.e. scale of observation) and the dimension of the discontinuity. Discontinuities can be of various types, e.g. bedding, foliation, faults, shear zones, joints etc. Bedding is the primary discontinuity and other structures constitute secondary discontinuities.
3. Manifestation of discontinuities. Discontinuities are expressed as differences in topography, slope, relief, tone and colour of the ground, soil and vegetation, and combinations of these. The discontinuities may be observed on simple photographs and images, or remote sensing data may be processed specifically to enhance certain directional trends. Further, manifestation of discontinuities on remote sensing images may be direction dependent, i.e. a remote sensor may be configured so as to enhance certain discontinuities in comparison to others.
4. Vertical versus horizontal discontinuities. In general, remote sensing techniques such as aerial photography, and the most widely used satellite sensors such as Landsat TM/ETM+/OLI, ASTER, IRS-LISS, SPOT-HRV, Ikonos, GeoEye, WorldView etc., provide plan-like information, the data being collected while the sensor views the Earth vertically from above. On these images and photographs, vertical and steeply dipping planar discontinuities are very prominently displayed. On the other hand, gently dipping or sub-horizontal structural discontinuities are comparatively suppressed and are difficult to delineate, especially in rugged mountainous terrains. Such discontinuities have a strongly curving or irregular pattern, and can be better identified on the basis of accompanying field/ancillary information.

19.4.1 Bedding and Simple-Dipping Strata

Bedding is the primary discontinuity in sedimentary rocks, and is due to compositional layering. The alternating sedimentary layers may differ in physicochemical properties, and this leads to the appearance of regular and often prominent linear features, marked by contrasting topography, tone, texture, vegetation etc., on images and photographs (Figs. 19.18 and 19.19). Linear features due to bedding are long, even-spaced and regular, in contrast to those produced by foliation or joints.

Fig. 19.18 Manifestation of bedding as prominent and regular linear features marked by contrasting topography, tone, texture and vegetation. Straight parallel outcrops suggest near-vertical orientation of the bedding (SIR-A image, courtesy of JPL)

Fig. 19.19 Bedding characterized by regular consistent banding. Straight outcrops indicate near-vertical orientation of the bedding. The rocks present are thinly bedded limestone, sandstone, and siltstone; Ugab river section, Namibia (ASTER image, printed black and white from colour composite; courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

Fig. 19.20 Sub-horizontal beds resulting in an outcrop pattern of concentric loops and ellipses. The area, forming a part of Tanezrouft, Algeria, is arid, completely barren of vegetation, quite flat and is mainly a gravel desert. The Palaeozoic sedimentary rocks, which form the bedrock, are mildly deformed. Wind erosion and deflation have carved out the peculiar repetitive outcrop pattern (Landsat MSS4 image, courtesy of R. Haydn)

Orientation of bedding is important, for the main aim in structural interpretation is to delineate the attitude of beds (i.e. strike and dip) and deduce their structural relations. The beds may be sub-horizontal, inclined or vertical and may form segments of larger fold structures.

1. Flat-lying beds are recognized by a number of features, such as banding extending along topographical contours or a closed-loop pattern (Fig. 19.20), dendritic drainage on horizontal beds, and mesa landforms. As the beds are sub-horizontal, erosional repetition is quite common.
2. Inclined beds commonly form elongated ridges and valleys due to differential weathering (Fig. 19.21). The outcrop pattern is often determined by the structure and relief in the area. Common landforms are: hogbacks, cuestas, dip slopes, strike valleys and sometimes trellis drainage. Strike direction of beds is given by the trend of ridges, vegetation bands, tonal bands and linear features corresponding to lithological layering. The rule of 'V's can often be successfully applied for the determination of dip direction (Fig. 19.21).
3. Vertical beds are identified by straight contacts, which run parallel to the strike of the beds, irrespective of surface topography (Figs. 19.18 and 19.19); trellis drainage may also be common.

19.4.2 Folds

A fold can be delineated by tracing the bedding/marker horizon along the swinging strike, and the recognition of dips of beds. Broad, open, longitudinal folds are easy to locate on satellite images. On the other hand, tight,

Fig. 19.21 Inclined/steeply dipping beds. The aerial photographic stereo pair shows competent (sandstone) and incompetent (shale) beds, which are intercalated. The sandstones form strike hogback ridges with dip slopes on one side and steep talus slopes on the other; the shales form strike valleys. The sandstone outcrop, as it meets the river, makes a 'V', the notch pointing towards the down-dip direction (stereo pair courtesy of Aerofilms, London)

overturned, isoclinal folds are relatively difficult to identify


on satellite images, owing to the small areal extent of
hinge areas (which provide the only clues of their pres-
ence); therefore, such folds need to be studied on appro-
priately larger scale satellite images and aerial photographs.
Some interesting examples of fold structures are given
below.

1. Richat structure, Mauritania. The Richat structure is a


classic example of the potential of remote sensing data in
structural mapping (Fig. 19.22). This structure came into
the limelight through the Gemini-4 photographs,
although it was found subsequently that some French
investigators had known about it even earlier. Within the
Great Sahara desert, the Richat structure lies in a remote
part of Mauritania and is located on a plateau about
200 m above the adjacent desert sands. The adjoining
terrain consists of sub-horizontal sedimentary rocks of
Ordovician age, resembling an extensive mesa.

Fig. 19.22 The Richat structure located in Mauritania. It consists of concentric ridges (quartzite) and valleys (shales), and is about 40 × 30 km in dimension. Its origin has been a long-standing geological riddle (see text) (ASTER image, printed black and white from colour composite; courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

The Richat structure consists of a series of concentric quartzite ridges, separated by concentric valleys underlain by shales. The rounded outcrop pattern has a gigantic 'bull's eye' shape of about 40 × 30 km. Annular and radial
eye’ shape of about 40  30 km. Annular and radial
drainage patterns are present, although poorly developed
owing to the arid conditions in the area. The origin of the
structure has been a standing geological enigma, various 2. Structures in parts of the Tange-Khoor Protected Area,
views being: (a) an impact origin (now largely dis- Iran. The Tange-Khoor Protected Area (near Lamerd,
carded), (b) intrusion of magma at depth, possibly as a Iran) comprises of typical well-bedded sedimentary rocks
plug, (c) diapiric intrusion of shales and (d) possibly just that are deformed into broad open longitudinal doubly-
a symmetrical uplift (circular anticline) that has been laid plunging anticlinal and synclinal folds. The Radarsat-2
bare by erosion. image distinctly brings out the fine morphological-

structural features including bedding and fold closures (Fig. 19.23).
3. Fold structures in parts of the Aravalli hills, India. Some typical large-wavelength open folds are developed in the area around Alwar, Aravalli hills, Rajasthan, India (Fig. 19.24). The area lies in the north-western part of the Precambrian Indian shield and the rocks comprise predominantly quartzites, phyllites and schists, belonging to the Delhi Super Group. The quartzites form strike ridges and the associated phyllites and schists form slopes and valleys.
Numerous structural features—folds and faults—are seen in Fig. 19.24a. The general strike of the rocks is NNE–SSW and the folds exhibit closures towards north and south. A regular variation in wavelength of the folds is conspicuous, the folds being broad and open on the east, and gradually becoming tighter towards the west (Fig. 19.24b)—a fact reported by Heron (1922) after extensive field investigations and readily shown on the synoptic views provided by the satellite data.
An enlarged sub-scene of the above (Fig. 19.25a) and the 'edges' on this sub-scene (Fig. 19.25b) show the fascinating fold pattern and many transverse structures. The interpretation map (Fig. 19.25c) from the Landsat images and the field structural map of the area (Fig. 19.25d, redrawn after Gangopadhyay 1967) are also compared. The example demonstrates the utility of satellite imagery in delineating structural features of such dimensions.

Fig. 19.23 Series of folded sedimentary rocks in the Tange Khoor Protected Area, near Lamerd, Iran; note the general absence of vegetation that renders it easy to demarcate fine lithological layering and fold closures in the longitudinal hill ranges; the flat intervening valleys have sparse agricultural fields and are marked by dark tones due to near-specular reflection; Radarsat-2 image, H-H polarization, extended high mode image. © MacDonald, Dettwiler and Associates Ltd. 2008

4. Photogrammetric measurements for structural analysis—Zagros mountains, Iran. The principles of deriving structural orientation data from stereo photography have been discussed elsewhere (Chap. 7). Bodechtel et al. (1985) made quantitative measurements on Metric Camera stereo photographs for structural evaluation in

Fig. 19.24 a Regional fold pattern in the rocks of Delhi Super Group near Alwar, Rajasthan; a number of faults are also observed. b Interpretation map of the above; note the regional variation in wavelength of folds across the area as shown in the synoptic small-scale image (Landsat MSS4 image)

Fig. 19.25 a Sub-scene of Fig. 19.24a (window marked). b Shows 'edges' of the subscene in (a). c Structural interpretation from the images shown in (a) and (b). d Structural map of the corresponding area based on field investigations (after Gangopadhyay 1967). For further discussion, see text

parts of the Zagros Mountains, Iran. Geologically, this terrain consists of Mesozoic–Tertiary sedimentary sequences, deformed into long-wavelength doubly plunging folds (Fig. 19.26). The characteristic sedimentary layering is very well depicted, with dip slopes and fold closures. The main structure in the example area is a doubly plunging anticline, identified as Kuh-e-Gashu (Fig. 19.26a, b). The opposite dips on both the flanks are indicated by V-shaped structures and flat irons, formed by the outer resistant lithological horizon. The fold axial trace runs almost E–W and the axis plunges eastwards on the east and westwards on the west. An oval-shaped salt dome has intruded the anticlinal core and locally disturbed the fold axis. Further to the north lies a doubly plunging syncline, which is succeeded by another doubly plunging anticlinal structure, identified as Kuh-e-Guniz. Its long eastward closure with prominent outcrop pattern is distinct on the photograph. The axial zones of folds are marked by minor faults and fractures (lineaments).

Fig. 19.26 a Metric Camera photograph of part of the Zagros mountains, (Iran) covering the Kuh-e-Gashu and Kuh-e-Guniz anticlines.
b Structural map based on photogrammetric investigations of the Metric Camera stereo photographs (a, b Bodechtel et al. 1985)

Structural data for statistical evaluation was derived by


Bodechtel et al. (1985) by applying photogrammetric
methods on the Metric Camera stereo photographs. Orien-
tation of the bedding plane was measured using the
three-point method (i.e. the X, Y, Z co-ordinates of three
points located on the same bedding plane), and these mea-
surements provided strike and dip data of beds (Fig. 19.26
b). On projection, this gave the statistical orientation of the
bedding planes, their intersections being b or fold axis. The
computed data were found to be in close correspondence
with field data.
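To make the computation behind the three-point method concrete, a minimal illustrative sketch is given below. It simply fits a plane through three (X, Y, Z) points measured on one bedding surface and converts the plane normal into strike and dip; the function name and the sample coordinates are hypothetical and are not taken from Bodechtel et al. (1985).

import numpy as np

def strike_dip_from_points(p1, p2, p3):
    # Strike and dip (degrees) of the plane through three non-collinear points;
    # X = easting, Y = northing, Z = elevation, all in the same units.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)            # normal to the bedding plane
    if n[2] < 0:                              # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    strike = (dip_direction - 90.0) % 360.0   # right-hand-rule convention
    return strike, dip

# Example with three hypothetical points measured on the same bedding plane
print(strike_dip_from_points((100, 200, 50), (400, 220, 55), (250, 600, 10)))

Repeating such measurements over many bedding surfaces yields the population of bedding-plane orientations that can then be treated statistically, as described above.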

19.4.3 Faults
Fig. 19.27 Yamuna fault; the Siwalik Group of rocks bordering the Himalayas on the south are well bedded, comprise mainly sandstones and shales, and are severed and displaced by the Yamuna fault. Large-scale drag effects, with a left-lateral sense of displacement, are distinct (Landsat MSS red band image)

One of the greatest advantages of remote sensing data from aerial and space platforms lies in delineating vertical to high-angle faults or suspected faults. These are indicated on the images and photographs by one or more of the following criteria: (1) displacement of beds or key horizons, (2) truncation of beds, (3) drag effects, (4) presence of scarps, (5) triangular facets, (6) alignment of topography including saddles, knobs etc., (7) off-setting of streams, (8) alignment of ponds or closed depressions, (9) spring alignment, (10) alignment of vegetation, (11) straight segments of streams, (12) waterfalls across stream courses, (13) knick points or local steepening of stream gradient, and (14) disruption of valley channels.
On the other hand, low-angle faults are rather difficult to interpret, since satellite images provide planar views from above. Such faults have strongly curving or irregular outcrop and can be inferred on the basis of discordance between rock groups, e.g. with respect to attitude of beds, degree of deformation, degree of metamorphism etc.
Some examples of faults observed on multispectral remote sensing images are described below.

1. Faulting marked by dislocation and drag effects—the Yamuna fault, Sub-Himalayas, India. The river Yamuna emerges from the Himalayas along a fault, called the Yamuna fault or the Paonta fault (Fig. 19.27). Geologically, the terrain comprises the Siwalik Group of rocks, containing well-bedded clastics (sandstones, siltstones, shales and conglomerates) of Miocene–Pleistocene age. These deposits represent the molasse sequence of the Himalayas and border the Himalayas to the south all along their E–W strike length. The Siwalik Group is severed by the Yamuna fault, being truncated and displaced (strike slip > 10 km). Large-scale drag effects of the order of a few kilometres are seen. Prior to the Landsat studies, the direction of displacement along this fault was considered to be right-lateral on the basis of field and aerial photographic interpretation (Rao et al. 1974). However, the Landsat image, with its synoptic view, clearly indicates the left-lateral direction of displacement along this fault (Gupta 1977b; Sharma 1977).
2. Faults marked by vegetation alignment, Chittaurgarh, Rajasthan, India. In semi-arid terrain, strong vegetation alignment could occur along the fault traces. Figure 19.28 presents an example of displacement of sedimentary layers of sandstones and shales (Vindhyan Super Group) by two parallel faults such that the fault traces are marked by preferential growth of vegetation.
3. Multiple parallel large-scale vertical faults, Cuddapah basin, India. Figure 19.29 shows an interesting example of large-scale parallel multiple faults cutting across and displacing the sedimentary rocks. The faults are near-vertical and extend for up to about 8–10 km in strike length. Close to the southern ends of the faults occur vegetation alignments evidently due to groundwater movement.
4. Fault with vertical displacement—the Nagar-Pakar fault, Indo-Pak border. A prominent E–W trending fault occurs at the northern boundary of the Rann of Kutch, and is called the Nagar-Pakar fault (Fig. 19.30). This fault runs for a few hundred kilometres' strike length, and limits the Kutch rift to the north (Biswas 1974). On the Landsat image, the fault forms an exceedingly conspicuous straight boundary where any evidence of lateral movement is lacking. The southern block is full of playas and salinas, and indicates frequent invasion by coastal waters, i.e. it has subsided. The northern block is covered with extensive desert dunes. A river (the Luni), entering this region from the east, takes a turn and is confined to the southern block, although in its natural course it appears that it ought to have been flowing on the northern block, again indicating that the northern block has relatively gone up.

Fig. 19.28 A set of two parallel vertical faults displacing the sedimentary layers of sandstones and shales (Vindhyan Super Group, near Chittaurgarh, India); the area has a semi-arid climate; the fault traces are marked by preferential growth of vegetation all along the strike, evidently due to groundwater seepage; sandstone and shale layers also exhibit vegetation banding (IKONOS image)

Fig. 19.29 Large-scale parallel multiple faults displacing the sedimentary layers of the Cuddapah basin, India; the faults can be traced for a strike length of about 8–10 km; note the preferred vegetation alignment along the southern parts of the fault zones indicating groundwater seepage (IKONOS image printed black and white)

Fig. 19.30 a Occurrence of a nearly E–W-trending, long (~200 km), prominent Nagar-Pakar fault. b Interpretation map. The block on the south of the fault is frequently invaded by coastal waters, playas and salinas and forms the downthrow side. The northern block is covered with dunes. The terrain is dry, bare of vegetation and quite uninhabited. The Precambrian inlier at Nagar Pakar (Pakistan) forms highlands. Also note the artificial Indo-Pak boundary running through the dunes (Landsat MSS infrared image)

Several other examples of faults are given elsewhere (see Figs. 19.32, 19.33, 19.34, 19.110, 19.111 and 19.112).

19.4.4 Features of Global Tectonics

Divergent plate boundaries are frequently located under the oceans as mid-oceanic ridges. Much interest is attached to

these boundaries in the search for a proper understanding and validation of the existing concepts on plate tectonics. Deep-sea drilling programmes on some of the mid-oceanic ridges are a testimony to the scientific importance of these natural features. The Afar depression and the rift zone in Iceland provide examples where mid-oceanic ridges can be observed on land.

Afar Depression (Triple Junction)

The Afar Depression is a plate tectonic triple junction where the spreading ridges that are forming the Red Sea and the Gulf of Aden emerge on land and meet the East African Rift. Figure 19.31 shows an image of the Afar triple junction. It is clearly shown as a graben. Within the triangle, the Earth's crust is slowly rifting apart at a rate of 1–2 cm per year along each of the three rift zones forming the "legs" of the triple junction. As a result, earthquakes frequently occur, accompanied by the formation of deep fissures in the terrain, hundreds of metres long, and sinking of the valley floors. Volcanic eruptions are also common. The floor of the Afar depression is composed of basaltic lava. It has been inferred that the Afar Depression (triple junction) is also the site of a mantle plume, a great uprising of mantle that melts to yield basalt as it approaches the surface.

Fig. 19.31 The Afar Depression; it is a plate tectonic triple junction where the spreading ridges that are forming the Red Sea and the Gulf of Aden emerge on land and meet the East African Rift; the image clearly shows the tectonic triple junction and the presence of a graben (courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

Fig. 19.32 Stereo pair acquired by Seasat-SAR from two adjacent and parallel orbits. It shows the neovolcanic rift zone in SW Iceland; the rift zone is characterized by numerous parallel faults, palagonite ridges, and Holocene lava flows. On the right image, note the disturbed water surface of Lake Thingvallavatn, which has led to a strong backscatter. A corresponding geological interpretation map of the area is also shown (SAR images processed by DLR; interpretation map courtesy of K. Arnason)

Neovolcanic rift zone—Iceland

As mentioned above, the Atlantic mid-oceanic ridge is exposed in Iceland. A part of this neotectonic–neovolcanic rift zone has been covered by Seasat-SAR from two different but parallel orbits, located only 20 km apart. Compared to the sensor's altitude of about 800 km, the set of two views provides a moderate stereo effect (Fig. 19.32a). The images

show that the region is marked by some distinct structural (11) alignment of oil and gas fields; (12) occurrence of
and morphological features. Figure 19.32b is a simplified geysers, fumaroles and springs along a line; (13) linear
geological map of the corresponding area. features seen on gravity, magnetic and other geophysical
Due to the rugged topography of the terrain and the low data; (14) vegetation alignments; (15) soil tonal changes
look angle of the SAR, the effects of layover and fore- etc.; and (16) natural limits of distribution of certain fea-
shortening are distinct, particularly in the stereo view. The tures of the Earth’s surface.
hill slopes facing the radar beam appear bright and short- Hobbs in 1904 first used the term lineament to define a
ened, and those sloping away are dark and elongated. The “significant line of landscape which reveals the hidden
scene is viewed at an angle of about 20° to the strike of the architecture of rock basement”. O’Leary et al. (1976)
neovolcanic zone, marked by the general direction of faults reviewed the usage of this term and defined lineament
and palagonite ridges. The palagonites seem to have piled up essentially in a geomorphological sense as “a mappable
during subglacial fissure eruptions in the Upper Pleistocene. simple or composite linear feature of a surface whose parts
The typical rugged broken relief of the palagonite renders it are aligned in a rectilinear or slightly curvilinear relation-
easily mappable on the radar imagery. ship and which differs distinctly from the pattern of the
Many of the significant faults can also be traced, espe- adjacent features and presumably reflects a sub-surface
cially those with a considerable downthrow on the NW (i.e. phenomenon”. Hence, this category includes all structural
towards the illuminating radar pulse), which are distinct on alignments, topographical alignments, natural vegetation
the radar images. On the other hand, faults with the down- linears and lithological boundaries etc., which are very
thrown block away from the sensor are subdued. likely to be the surface expression of buried structures.
The Holocence (not glacially eroded) lava flows, which This definition seems to be the most practical in the context
have come mainly from the prominent shield volcano in the of remote sensing image interpretation.
upper middle part of the scene, show up as a medium gray 2. Scale and manifestation. The manifestation of a linea-
toned unit of generally uniform fine-grained texture. The ment is dependent on the scale of observation and
effect of differing surface roughness on the radar response is dimensions involved. Lineaments of a certain dimension
clearly demonstrated in Lake Thingvallavatn. On the and character may be more clear on a particular scale, for
right-hand image, the stronger wind has ruffled the water which reason tectonic features of the size of hundreds of
surface, which results in stronger backscatter, making it kilometres need to be studied on smaller-scale images.
difficult to distinguish the lake from the adjacent lava flows. Lineaments occur as straight, curvilinear, parallel or
en-echelon features (Figs. 19.33, 19.34 and 19.35).
Generally, lineaments are related to fracture systems,
19.4.5 Lineaments discontinuity planes, fault planes and shear zones in
rocks. The term also includes fracture traces described
from aerial photographic interpretation. Dykes and veins
may also appear as lineaments. The pattern of a linea-
1. Definition and terminology. The term lineament has been ment is important on the image; lineaments with
extensively used recently, and often with differing shades straighter alignments indicate steeply dipping surfaces;
of meaning. The photo-linears, i.e. linear alignments of by implication, they are likely to extend deeper below the
features on photographs and images, are one of the most ground surface.
obvious features on high-altitude aerial and space images, At times, the relative sense of movement along the
and therefore the use of the term lineament has proliferated fault/lineament may be apparent on the image; for
in remote sensing geology literature in recent years. example, displacement across some of the lineament
This term has also been applied to imply alignment of features is clearly seen in Figs. 19.33 and 19.34.
different geological features, such as (1) shear zones/faults; On a certain photo or image, both major and minor lin-
(2) rift valleys; (3) truncation of outcrops; (4) fold axial eaments are invariably observed. Major lineaments may
traces; (5) joints and fracture traces; (6) alignment of fis- correspond to important shear zones, faults, fractures and
sures, pipes, dykes and plutons; (7) linear trends due to major tectonic structures or boundaries. On the other
lithological layering; (8) lines of significant sedimentary hand, minor lineaments may correspond to relatively
facies change; (9) alignment of streams and valleys; minor faults, or joints, fractures, bedding traces etc.
(10) topographic alignments—subsidences and ridges; These may be expressed as soil–tonal changes,

Fig. 19.33 Mapping of various


types of lineaments on IRS-1C
LISS-III image of Cuddapah
region, India (courtesy of D.
P. Rao, A. Bhattacharya and P.R.
Reddy)

vegetation alignments, springs, gaps in ridges, aligned surface sags and depressions, and impart the textural character in a larger image scene.
The ground element corresponding to a lineament would depend on the scale of the remote sensing data. On regional scales (say 1:250,000), lineament features may be more than ca. 5 km in length, representing long valleys and complex fractured zones. On larger scales (e.g. aerial photographs or Ikonos images, say 1:5000 scale), shorter, local drainage features and individual fracture traces may appear as lineament features.
As lineaments are surface traces of fractures, faults, shear zones etc., surface features such as topography, drainage, vegetation, soil moisture, springs etc. may become aligned along the lineaments. Vertical or steeply dipping lineaments are likely to extend to a greater depth below the ground, resulting in deeper localized weathering and more thickness of regolith (Figs. 19.35 and 19.36).
3. Mapping of lineaments. Mapping of lineaments can be done on all types of remote sensing images: stereo panchromatic photographs, multispectral and thermal-IR images and SAR images. The panchromatic, NIR, SWIR and thermal-IR images contain near-surface information, whereas SAR images may provide limited depth penetration (of the order of a few metres at best) in arid conditions. Manifestation of lineaments is related to ground conditions and sensor spectral band.
Lineaments can be mapped on simple data products, as well as on processed/enhanced images. Further, different types of digital techniques, aimed at enhancing linear features, can also be applied in the form of isotropic and anisotropic filters (a simple illustrative sketch is given after this list). Anisotropic filters enhance linear features in certain preferred directions; however, the related artifacts are a common problem and render the interpretation of such enhanced products quite difficult.

Fig. 19.34 a Example of large-scale tensional and shear lineaments (fractures) extending for several kilometers in a part of the Precambrian Cuddapah basin, India; IKONOS image printed black and white; b Interpretation map of the above image; fractures marked T are wide open tensional fractures that are vegetated, implying groundwater seepage; S are shear fractures exhibiting lateral relative displacement at places

Fig. 19.36 Schematic representation of a lineament; surface manifes-


tation occurs in terms of alignment of topography, drainage, vegetation
etc.; the lineament zone possesses a greater depth of weathering

4. Visual versus digital interpretation. On an image, lin-


eaments can be easily identified by visual interpretation
using tone, colour, texture, pattern, association etc., i.e.
the elements of photo interpretation. Alternatively,
automatic digital techniques of edge detection can also be
Fig. 19.35 IRS-LISS-IV CIR composite showing several major intersecting lineament zones (shear zones) extending for long distances (strike length up to 20–25 km) and marked by weathered zone and vegetation, up to one km wide, in the hard granitic terrain of Karnataka, India (image courtesy of Arvind Kumar)

applied for lineament detection. Edge detection techniques, with numerous possible variations, lead to many artifacts or non-meaningful linears, which may crop up due to illumination, topography, shadows etc. Therefore,

visual interpretation technique is generally preferred and extensively applied.
A new computer-based automatic lineament identification method, namely the Segment Tracing Algorithm (STA), was proposed by Koike et al. (1995). Its principle is to detect a line of pixels as a vector element by examining local variance of the gray level in the digital image, and to connect retained line elements along their expected directions. It may be relatively more useful in shaded areas.
Visual interpretation of lineaments involves some degree of subjectivity, for which reason the results may differ from person to person (Siegal 1977; Burns and Brown 1978; Wise 1982; Moore and Waltz 1983).
5. Statistical analysis. Lineaments mapped on a remote sensing image possess spatial variation in trend, frequency and length. For statistical analysis, lineaments are often grouped in ranges of angles (commonly 10° intervals) in plan. The lineament data can be processed manually, which is quite tedious, or alternatively the lineament map could be digitized. The lineament map could be mounted on a digitizing tablet, linked to a computer, and the end points of each lineament could be successively read using a hand-held movable pen or cursor, to digitize each lineament. Once the data of all the lineaments were collected, they could be reformatted and processed to provide the statistical information.
There are a number of ways to assess the statistical distribution of lineaments in an area. One is by considering the number of lineaments per unit area; the second, by measuring the total length of lineaments per unit area; and the third, by counting the number of lineament intersections per unit area. The method of lineament intersections per unit area (density) is generally faster and more convenient (Fig. 19.37). The intersections of two (or more) lineaments are plotted as points. The number of points falling within a specified grid area is counted. The data are contoured to give the lineament intersection density contour map. This gives zones of different degrees of fracturing in the area. Further, the lineament-intersection density maps can be rasterized and converted into a lineament-intersection density image. In the same manner, it is possible to create lineament-length images and lineament-number images. Such processed data products provide statistical information on the distribution of fractured zones (an illustrative sketch of such computations is given after this list).
6. Discriminating between different genetic types of lineaments. In rocks, fractures or joints originate in different ways. There are two main types of fractures distinguished: (a) shear fractures, which originate due to shear failure in rocks, and (b) dilational fractures, which are of tensile (extensional) origin. A third, 'hybrid' type exhibits features of both shear and dilational origin (Price and Cosgrove 1990). For some applications, it is important to distinguish between the genetic types of fractures/lineaments; for example, the dilational type is more open and more productive for groundwater. The genetic distinction between the various lineaments can sometimes be made on remote sensing data, on the basis of two considerations: (a) relative movement along an individual discontinuity, and (b) orientation and statistical distribution of lineaments. It is always desirable that both these types of evidence are mutually supportive. Sometimes, relative movements along lineaments can be seen to indicate faults and shear fractures (see Figs. 19.33 and 19.34); in contrast, lineaments related to dilational fractures do not show any relative displacement. Further, in some areas statistical analysis of lineament trends could distinguish between fractures of shear origin and those of dilational origin, in a generalized way. For example, Fig. 19.38a presents a lineament interpretation map of a Munich–Milan section in the eastern Alps. A large number of lineaments are seen; their statistical trends are given by the rose diagram in Fig. 19.38b. Based on mutual angular relationships, it can be inferred that S1 and S2 sets are shear fractures and the T set is of dilational type (Gupta 1977a).
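Two of the digital procedures referred to in points 3 and 5 above—directional (anisotropic) filtering and lineament-intersection statistics—are easy to prototype. The following is a minimal illustrative sketch using generic Prewitt-type kernels and simple segment geometry; it is not tied to any specific software package or to the STA of Koike et al. (1995), and all function names are hypothetical. Lineaments are assumed to be available as digitized segments, each given by its two end points in map coordinates.

import numpy as np
from itertools import combinations
from scipy import ndimage

# Directional (anisotropic) filtering, cf. point 3 above. Each 3 x 3 kernel
# responds most strongly to linear features trending in the named compass
# direction (a north-up image with x = east, y = north is assumed).
KERNELS = {
    "N-S":   np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),
    "E-W":   np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),
    "NW-SE": np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]], float),
    "NE-SW": np.array([[1, 1, 0], [1, 0, -1], [0, -1, -1]], float),
}

def directional_edges(band, direction="NE-SW"):
    # Anisotropic edge enhancement of one image band (2-D numpy array).
    return np.abs(ndimage.convolve(band.astype(float), KERNELS[direction]))

# Lineament statistics, cf. point 5 above.
def azimuth_histogram(segments, bin_deg=10):
    # Group lineament azimuths (0-180 degrees) into bins, e.g. for a rose diagram.
    az = [np.degrees(np.arctan2(x2 - x1, y2 - y1)) % 180.0
          for (x1, y1), (x2, y2) in segments]
    return np.histogram(az, bins=np.arange(0, 180 + bin_deg, bin_deg))

def _intersection(s1, s2):
    # Intersection point of two line segments, or None if they do not cross.
    (x1, y1), (x2, y2) = s1
    (x3, y3), (x4, y4) = s2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None                                    # parallel segments
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return x1 + t * (x2 - x1), y1 + t * (y2 - y1)
    return None

def intersection_density(segments, extent, cell):
    # Lineament-intersection counts per grid cell; extent = (xmin, ymin, xmax, ymax).
    xmin, ymin, xmax, ymax = extent
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    grid = np.zeros((ny, nx))
    for s1, s2 in combinations(segments, 2):
        p = _intersection(s1, s2)
        if p is not None:
            j = min(int((p[0] - xmin) // cell), nx - 1)
            i = min(int((p[1] - ymin) // cell), ny - 1)
            grid[i, j] += 1
    return grid

A rose diagram can then be drawn from the azimuth histogram, and the intersection-count grid can be contoured or displayed as an image to give a lineament-intersection density map of the kind described above.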

Fig. 19.37 a Lineament interpretation map of the granitic terrain shown in Fig. 19.92; b lineament intersection point diagram; c contour diagram
from (b)

Fig. 19.38 a Lineament map of the Eastern Alps as interpreted from Landsat MSS images (Gupta 1977a); b Rose diagram of lineaments in Fig. 19.38a. Clearly three major directions statistically stand out: (1) N45−225° (S1); (2) N15−195° (T); and (3) N345−165° (S2) (Gupta 1977a)

Applications

Lineament studies have found applications in various fields of Earth sciences—neotectonics, earthquake hazard, global tectonic studies, analysis of structural deformation patterns etc. Further, lineaments are zones of deformation and fracturing, which implies that they are zones of higher secondary porosity. As these zones become significant channel-ways for migration of fluids, lineaments constitute important guides for exploration of mineral deposits, petroleum prospects, groundwater etc. A few examples to illustrate the types and scope of lineament studies are given below (also see other relevant sections).

1. Lineament corresponding to the subduction zone—the Indus suture line, Himalayas. The Himalayas constitute

Fig. 19.39 The NW–


SE-trending prominent lineament
is the Indus suture zone (ISZ),
which forms the subduction zone
in the India–Eurasia plate
collision and constitutes a major
tectonic boundary in the
Himalayas. It extends for more
than a hundred kilometers as a
straight line in the rugged terrain
of the Himalayas. The Indus river
follows this zone for a
considerable length. The
dark-toned bodies are ophiolitic
intrusives along the suture line
(Landsat MSS infrared image)

one of the youngest and most active mountain systems on the Earth, formed as a result of the collision of two crustal plates—the Eurasian plate on the north and the Indian plate on the south. The collision zone is named the Indus suture zone, after the Indus river, which follows the inter-plate boundary for quite a distance. Figure 19.39 provides a regional perspective of the tectonic feature, which occurs as a major NW–SE-trending lineament zone that can be traced as a straight line for more than 100 km. The dark toned area (ISZ) in the image pertains to the ultramafic ophiolite suite of rocks emplaced along the subduction zone.

Fig. 19.40 a Presence of a major N–S-trending lineament zone (indicated by arrow) in the Shillong plateau, India (SH Shillong) (Landsat MSS infrared image); the circular feature in the north is the Sung valley carbonatite intrusive, bare of vegetation; b lineament-tectonic interpretation map of the area (a, b Gupta and Sen 1988)

2. Lineaments associated with tectonomagmatic activity—Um-Ngot lineament, Shillong Plateau, India. Deep-seated tectono-magmatic activities may leave their footprints on the terrain and due to their large dimensions (a few tens to hundreds of kilometres), such features are best identified on satellite remote sensing images. An interesting example is provided by the Um-Ngot lineament zone, Shillong Plateau, which was first identified on Landsat images (Gupta and Sen 1988). The Shillong Plateau consists mostly of an Archean gneissic complex and Proterozoic meta-sedimentaries and is marked by granitic intrusives and a deformation pattern characteristic of schistose–gneissic rocks. A unique and striking feature is the occurrence of a major lineament zone, called the Um-Ngot lineament (Fig. 19.40). It is about 50 km long and 5–10 km wide, cuts across the general strike of the rocks, and is evidently post-Precambrian. The Precambrian trends are bent or truncated against this tectonic feature. The lineament zone contains the Sung Valley alkaline-ultramafic-carbonatite complex (Fig. 19.40b), described from the Shillong plateau and considered to be of Cretaceous age (Chattopadhyay and Hashimi 1984). The alkaline ultramafic complex is marked by relatively poor vegetation (Fig. 19.40a) in a terrain of generally fairly good vegetation. The vegetation anomaly is evidently due to the presence of toxic elements in the ultramafic suite of rocks. A few other circular to semi-circular features are also seen in the lineament zone. It has been inferred that the N–S-trending lineament zone and the associated carbonatite suite of rocks formed as a result of up-arching of the mantle plume, and seem genetically related to the Ninetyeast Ridge in the Indian Ocean (Gupta and Sen 1988).
Several other examples of such lineaments have been described in the literature. The neovolcanic rift zone in Iceland (see Fig. 19.32) could also be considered as an example of this type.
3. Delineation of buried lineament structures—Hoggar mountains, Algeria. The southern part of Algeria is dominated by arid–hyperarid climatic conditions. Figure 19.41a, b show a Metric Camera photograph (panchromatic) and an SIR-A image, respectively, of a part in the southern Hoggar Mountains, Algerian desert. The terrain is typically arid with dry channels, bare slopes and sands. Morphologically, the area depicted in the photograph is characterized by hills trending NE–SW and a flat plain covered with sands, located between the two dominant ridges. The panchromatic photograph shows the structural trend of the formations (NE–SW) and the presence of several lineaments in the hilly terrain; however, little information is obtained in the sand-covered flat area. The corresponding SIR-A image shows that the radar return is influenced, collectively, by topographical relief, overlying sand cover, sub-surface topography of the bedrock and possibly variation in soil moisture. Due to the depth-penetration ability of the

Fig. 19.41 Comparison of a


metric camera photograph
(panchromatic) and SIR-A image
of the southern Hogger mountains
in Algeria. a The metric camera
photograph shows structural
details in the mountainous areas;
however, no information is
provided on the flat plain covered
with dry sand. b The
corresponding SIR-A image; note
the presence of numerous linear
structures buried under the sand
cover, brought out on the radar
image (European Space Agency)

active microwave sensors in arid–hyperarid conditions, numerous buried structures can be delineated on the radar imagery.
4. Lineament corresponding to the inferred fault separating zones of differing metamorphism, Banas lineament, Aravalli range, Peninsular India. Significance of some of the lineaments may not be clear at a first glance; our lack of proper understanding is responsible in part for the scepticism associated with lineament type of work. Although lineament interpretations are somewhat subjective, not all lineaments may be spurious, and only detailed ground data of different types and temporal coverages may reveal the true significance of lineaments. Further, it may be recalled that it is relatively easier to identify steeply dipping faults marked by strike-slip displacement than many other geologic discontinuities which may be low-dipping and/or may represent facies or lithologic differences. An interesting example of this type is furnished below.
The Aravalli mountain ranges constitute the north-western part of the Indian peninsular shield, and extend for about 600 km with a general strike of NNE–SSW. The rocks have undergone polyphase metamorphism and deformation (Heron 1953), which has led to a complex tectonic deformation pattern. Figure 19.42a shows occurrence of a peculiar and

Fig. 19.42 a Note the presence


of a major transverse-to-oblique
nearly ENE–WSW trending
lineament (marked by arrow) in
the image (IKONOS) of part of
the Aravalli hills, India. The area
comprises metamorphic rocks
generally striking NNE–SSE.
b Geological map of the area
prepared independently by
Sharma (1988) where a fault is
inferred on the basis of sharply
differing metamorphic grades.
A comparison indicates that the
lineament most probably
corresponds to the inferred fault
that may have a dip-slip of
approx. 5 km (see text)

conspicuous nearly E–W-trending lineament that runs transverse-oblique to the general strike of the rocks and extends for a strike length of about 100 km (Bharktya and Gupta 1983). The Banas River follows this lineament for quite some distance in the north-east. The lineament is prominently marked on the image, but there is hardly any evidence of strike-slip movement. Figure 19.42b shows the geological map of the area based on a study of metamorphism (Sharma 1988). On comparison, it is observed that the lineament identified on the Landsat image corresponds to the inferred fault separating regions of sharply differing metamorphism. On the two sides of the inferred fault, the broad metamorphic characters of the rocks are as follows (Sharma 1988):

Greenschist facies side:
• Greenschist facies, followed by thermal metamorphism in places
• Temperature = 450 ± 50 °C
• Pressure = 3−4 kbar
• Deformation D1, D2, D3

Amphibolite facies side:
• Upper amphibolite facies; local partial melting
• Temperature = 700−900 °C
• Pressure = 7−8 kbar
• Deformation D1, D2, D3

It is obvious that the observed lineament (inferred fault) separates regions of vastly different levels and types of metamorphism. Assuming a high geothermal gradient of about 50−60 °C per km, the difference in depth comes to

Fig. 19.43 a Landsat ETM


image of a part of the Tian-Shan
mountain range; b interpretation
map of the image showing the
offset of streams along the
northwest segment of the fault;
note that the main streams a–a′,
b–b′, c–c′ have been
systematically dextrally offset,
and the amounts of displacement
are 8000, 3430 and 1700 m along
the fault respectively; this has
been interpreted in terms of age of
the stream—the older the stream
channel, the larger the offset (a,
b Fu et al. 2010)

about 5−6 km. It appears that the lineament is the mani-


festation of a major structural tectonic discontinuity, with
dominantly dip-slip movement, developed at depth during
the dying phase of tectonic deformation and metamorphism.
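As a rough back-of-the-envelope check of this estimate (using the temperature ranges listed above; this simple arithmetic is not part of the original analysis): ΔT ≈ (700−900 °C) − (450 ± 50 °C) ≈ 300 °C, so Δz ≈ ΔT / (50−60 °C per km) ≈ 5−6 km.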

5. Displacement of active geomorphic features along a


lineament-fault zone in Tian Shan: The Pamir–Tian Shan
convergence zone is a unique example of ongoing India–
Eurasia collision and mountain building activity. Fu et al.
(2010) carried out a detailed analysis of satellite imagery
combined with extensive field geologic and geomorphic
observations, and mapped late Cenozoic folds and faults
in the region. Their investigations revealed presence of
numerous large faults and folds affecting the Quaternary-
Cenozoic deposits. Figure 19.43 shows that some of the
faults truncate and displace active drainage systems with
a systematic offset (dextral in this case), indicating their
neotectonic genesis.

Fig. 19.45 The perfect circular-shaped intrusion, about 10 km in diameter with a topographic ridge up to 600 m high, is the Kondyor Massif, Eastern Siberia, Russia; it is an alkaline-ultrabasic massif and is full of rare minerals and metals including coarse crystals of Pt–Fe alloy coated with gold; the river flowing out of it forms placer mineral deposits (perspective view created by draping image data over ASTER-derived DEM; courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

19.4.6 Circular Features

Circular features often preferentially catch the geologist's eye on remote sensing images. This is due to the special importance attached to circular features in geology. A special subtractive box filtering, succeeded by a histogram equalization stretch, has been suggested by Thomas et al. (1981) for enhancing circular features on remote sensing data (a schematic sketch of this general approach is given after the list below).
Circular–quasicircular features may be associated with: (1) intrusives; (2) structural domes and basins; (3) volcanoes; (4) tensional ring fracturing; and (5) meteorites. Their salient characteristics are as follows:

1. Intrusives: Circular to near-circular features on satellite images may appear due to igneous intrusions (Figs. 19.44 and 19.45). These may exhibit a 'shoving-aside' pattern of the host rock and also truncate the latter (also see Figs. 19.26, 19.40, and 19.56).
2. Structural domes and basins formed by cross-folding can be recognized from the structural trend of bedding (e.g. Figs. 19.23, 19.24, 19.25, and 19.26).
3. Volcanoes have prominent morphological expressions and frequently occur in clusters (see e.g. Figs. 19.3, 19.32, and 19.58).
4. Ring-like fractures due to tensional forces may also lead to circular features on images; their genetic type must be identified from the tectonic setting of the area.
5. Meteoritic impact craters are marked by random distribution, variability in size and form, but regularity in morphology, and lack any type of tectonic association (Figs. 19.46 and 19.47). Field evidence of ultra-high pressure or shock metamorphism, shatter cones and impact melt may be associated. Remote sensing data have shown that there are many more circular structures possibly caused by meteoritic impacts on the Earth than was earlier believed.
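The general idea of a subtractive box filter followed by a histogram-equalization stretch can be sketched as follows. This is only a schematic illustration of the principle (a high-pass residual obtained by subtracting a box average, followed by a contrast stretch), not a reproduction of the specific procedure of Thomas et al. (1981); the function names are hypothetical.

import numpy as np
from scipy import ndimage

def subtractive_box_filter(band, box_size=15):
    # Residual image: original minus its box (moving-window) average,
    # which suppresses regional background and emphasises local patterns.
    smoothed = ndimage.uniform_filter(band.astype(float), size=box_size)
    return band.astype(float) - smoothed

def equalization_stretch(band, n_bins=256):
    # Simple histogram-equalization stretch of the residual image to 0-255.
    flat = band.ravel()
    hist, edges = np.histogram(flat, bins=n_bins)
    cdf = hist.cumsum() / flat.size
    return np.interp(flat, edges[:-1], cdf * 255.0).reshape(band.shape)

# enhanced = equalization_stretch(subtractive_box_filter(image_band))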
Fig. 19.44 The large dome-shaped granitic intrusion is the Brandberg Massif located in the Namibia desert; it forms the dominant geographic feature, covering an area of 650 km² and rising to a height of >2500 m above the surrounding desert (perspective view created by draping image data over ASTER-derived DEM; courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

19.4.7 Intrusives

Igneous intrusions may occur in a variety of forms and dimensions, such as batholiths, laccoliths, lopoliths, sills,

Fig. 19.46 Stereo view of the likely meteorite impact structure, called Iturralde structure, in Bolivia; the circular structure is 8 km wide, lies in heavily vegetated soft sediments and pampas of Bolivia (ASTER image draped over SRTM-DEM and recalculated for stereo viewing; Source http://photojournal.jpl.nasa.gov/)

Fig. 19.47 Impact craters in


Sahara desert, northern Chad; the
concentric ring structure left of
centre is the Aorounga impact
crater, about 17 km in diameter; a
possible second crater, similar in
size to the main structure, appears
as a circular trough in the centre
of the image. The dark streaks are
deposits of wind-blown sand
(SIR-C/X-SAR image; courtesy
of NASA/JPL)

dykes, plugs etc. These are identified on the basis of their 19.4.8 Unconformity
shape, form, relations with host rocks and dimensions.
Examples are given in Figs. 19.40, 19.44, 19.45, 19.56, and Identification of unconformity on remote sensing data is
19.57. Salt domes are generally massive, lack bedding and based mostly on indirect evidence, such as truncation of
truncate existing structural features. lithological units and structural features or differences in
degree of deformation. The expression of an angular unconformity (Fig. 19.48) may be quite similar to that of a low-angle fault.

Fig. 19.48 Angular unconformity, Chile Altiplano. On the right side of the image are Cretaceous sediments dipping at an angle of about 50° eastward. On the left side occur flat-lying volcanic pyroclastic deposits (ASTER image, printed black-and-white from colour composite) (courtesy of NASA/GSFC/MITI/ERSDAC/JAROS, and US/Japan ASTER Science Team)

19.5 Stratigraphy

Stratigraphy deals with the deduction of the age relations and geological history of rock units in an area, i.e. broadly, the history of formation of rocks. Studies on stratigraphy attempt to derive information on the following (Lang 1999):

1. Sequence: i.e. characterizing vertical changes in lithology, documented in stratigraphic columns.
2. Correlation: determination of correspondence of strata from different locations and/or ages.
3. Facies: identifying rocks possessing similar depositional/environmental/age characters.
4. Geometry: i.e. 3-D form of strata, as determined through cross-sections, fence diagrams etc.

Mapping is the basic source of stratigraphic information; therefore this type of data is best available in sections in the field and on large-scale maps. However, in some cases, remote sensing data, owing to its special advantages of synoptic view and multispectral approach, can provide new and valuable input by revealing the nature of contacts, trends and outcrop patterns on a different scale.

Further, stereo photo analysis and DEM-based methods allow perception of elevation and relief; therefore, these methods can also be usefully exploited in deriving stratigraphic data.

The ability to define stratigraphic details (e.g. Member/Formation/Group/Super-Group) is based on the scale of observation and mapping. Therefore, prime factors of concern are the resolution and scale of remote sensing images and photographs. The image-pixel size directly governs the details that can be mapped on the image, implying the map scale that can be obtained. Images with coarser pixels can be utilized only for regional maps, whereas image data with smaller pixel size can allow mapping on large scales (Fig. 19.49).

Fig. 19.49 Nomograph relating spatial resolution (pixel size) to the maximum image scale that is potentially useful for photo-stratigraphic interpretation (Lang 1999)

Fig. 19.50 Example showing stratigraphic discrimination using SAR data. a Seasat-SAR image of a part of Iceland. b Geological map of the corresponding area (for details, see text) (a Arnason 1988; b Jakobsson 1979)

In some cases, relative ages of rocks may be indicated by an attribute more readily observed on multispectral data, such as radar surface roughness. Figure 19.50a is a Seasat-SAR image showing a group of Holocene basaltic lava flows in the neovolcanic zone of south-central Iceland. The entire region is very dry, and almost unvegetated, as all

precipitation infiltrates directly into the porous ground; only sparse vegetation grows over one of the lava flows. On the VNIR imagery (Landsat MSS, TM or aerial photographs), these lava flows are impossible to discern from each other, and from the adjoining alluvium (ash fields), due to their quite similar VNIR spectral reflectances. On the other hand, because of the different surface roughnesses, the various lava flows and ash fields can easily be separated from each other on the radar image (Fig. 19.50b). The original surface roughness of the lava flows falls into the category 'very rough' with regard to the wavelength of Seasat-SAR (height of surface irregularities ≫ 10 cm); hence the recent lava flows have a high radar backscatter. However, with time, due to infilling of aeolian sand, the lava surface gradually becomes smoother, resulting in a gradually darker tone for these lava flows with age on the radar image. Owing to the constancy of other factors governing the backscatter (humidity, vegetation etc.) in this area, it is possible to determine the relative age of the lava flows from the radar signal. Lava flow Nos. 486, 366, 363 and 068 (between 100 and 4000 years old) and 056, 058 and 062 are increasingly older, and these areas also become increasingly dark-toned in this succession on the radar image. The regularity in the relation age-versus-backscatter is somewhat disturbed where a glacier river crosses the lava No. 068 (Fig. 19.50a).

19.6 Lithology

There are two main approaches to lithological/mineralogical mapping from remote sensing data: (a) mapping of broad-scale lithologic units, and (b) identification of mineral assemblages (including quantification of specific minerals).

1. Mapping of broad-scale lithologic units is based primarily on the principles and techniques of photo

interpretation. This method employs the norms developed mainly for panchromatic aerial stereo photographs. Various multispectral satellite data, processed images, and radar images can also be used in the same way at appropriate scales.
2. Identification of mineral assemblages. This utilizes the existence of characteristic absorption bands to help recognize specific mineral assemblages, e.g. carbonates, different silicates, clays, Fe-O minerals, etc. Multispectral and hyperspectral sensor data are required for this type of application. In some cases, detailed analysis of absorption characteristics (absorption band depth etc.) of data derived from hyperspectral sensors can help quantitative estimation of specific minerals.

As such, there is no sharp boundary between data inputs for the above two approaches. For example, aerial photographic and Landsat TM data have long been used to map broad lithologic units. Further, TM-type data is also used to identify specific minerals such as Fe-O, clays etc. TIMS and ASTER multispectral data and other hyperspectral data (e.g. HYPERION) have been used to recognize the existence of specific mineral groups. Further, the analysis of ASTER thermal-IR data has enabled quantification of mineral contents in certain cases.

As hyperspectral sensor data are of a specific nature, their processing, interpretation and applications are included separately in Chap. 14. Here we deal with mainly panchromatic and multispectral data applications for lithologic-mineralogic interpretations (although this segmentation is just for convenience). First we discuss mapping of broad-scale lithologic units, and then identification of mineral assemblages in the next section (Sect. 19.7).

19.6.1 Mapping of Broad-Scale Lithologic Units—General

Mapping of broad-scale lithologic units can be carried out on panchromatic and multispectral satellite sensor data and radar data. Broad lithological information is deduced from a number of parameters observed on remote sensing images, viz. (1) general geologic setting, (2) weathering and landform, (3) drainage, (4) structural features, (5) soil, (6) vegetation and (7) spectral characteristics. The above parameters are also interdependent, and interpretation is generally based on multiple converging evidence; however, even a single parameter could be diagnostic in a certain case.

Over the same lithology, the spectral response may be quite variable, being a function of several factors: state of weathering, moisture content, soil and vegetation. Therefore, spectral enhancement followed by visual interpretation is generally preferred for geological-lithological discrimination purposes, for this type of approach.

Weathering pattern and products may carry a lot of information about the bedrock and need to be carefully studied. For example, Macias (1995) studied weathered products on TM data to distinguish between mafic and ultramafic rocks in an area in Australia. Rencz et al. (1996) could identify kimberlite plugs in Canada due to their negative relief (depressions) leading to accumulation of water, which resulted in 'cooler' signatures on the thermal-IR images.

Differences in lithological units may be obvious on simple black-and-white remote sensing photographs and images and/or on false-colour composites. Alternatively, image manipulation, such as ratioing, hybrid compositing, principal component transformation etc. may also be used to facilitate distinction between different lithologies.

On the following pages we discuss some important image characteristics of the main types of lithological units with examples. For practical applications, sensor characteristics and resolution parameters vis-à-vis ground dimensions also have to be duly considered.

Fig. 19.51 Typical regular sedimentary layering (X-band SAR image of an area in Brazil) (courtesy of Aerosensing Radarsysteme GmbH)

19.6.2 Sedimentary Rocks

Sedimentary rocks are characterized by compositional layering (Fig. 19.51). The layers of different mineral assemblages possess differing physical attributes, and this results in the appearance of regular and often prominent linear features on images. Banding on remote sensing images may arise due to the following: (1) different compositional bands possessing different spectral characteristics; (2) differences in susceptibility to erosion and weathering for different

bands, resulting in differential erosion between hard and soft scant soil cover over pure sandstone; in the case of
rocks, i.e. competent layers stand out in relief over the impure sandstone, thicker soil cover may develop. Vege-
incompetent layers; (3) differing moisture content, depend- tation—sandstone generally supports good vegetation
ing upon mineral composition; and (4) lithological layering due to good porosity and soil cover; in arid–semiarid
associated with vegetation banding. regions, it may support bushes and tree growth, whereas
Frequently, all the above features collectively lead to the adjoining clay shales may contain only grass; pure
banding on photographs and images. The resulting linear sandstone is barren of vegetation. Spectral characteris-
features are long, even-spaced, and few in number (in tics—in VIS-NIR-SWIR images, the bare slopes of pure
comparison to those produced by foliation in metamorphic sandstone are generally light-toned; in the TIR range, it
rocks), and constitute rather continuous ridges and valleys. often appears cooler (dark-toned) due to low emissivity
This type of banding is the most diagnostic feature of sed- and higher topographical location; low spectral emissiv-
imentary terrains (see Figs. 19.18, and 19.19). ity at 9 µm; the overall spectral response over the rock
may be highly variable, depending upon the presence of
1. Sandstone. Weathering—generally resistant in both other minerals (e.g. limonite, carbonate, clays), weath-
humid and dry regions, excepting when poorly cemented ering, soil cover, vegetation and orientation. Similarities
or contains soluble cement. Landform—tends to form —sandstone may appear similar to limestone in arid
hills, ridges, scarps and topographically prominent fea- terrains, especially when massive, on VNIR data; how-
tures; inclined strata form cuestas, hogbacks and cliffs; ever, limestones have a broad absorption band in the
sub-horizontal beds form mesas (Fig. 19.52). Drainage SWIR (2.35 µm); quartzites are similar photo units, but
—low to medium density due to good porosity and are located in metamorphic settings.
permeability and steeper slopes; partly internal drainage; 2. Shale. Weathering—generally incompetent; easily ero-
frequently rectangular and angulate drainage patterns due ded. Landform—tends to form low grounds and valleys;
to jointing; sub-parallel pattern in inclined strata; in humid climates, it may form gently rounded hills;
sub-dendritic in homogeneous sub-horizontal beds; trellis erosion is more intense in arid and semi-arid regions
pattern in intercalated sequence. Bedding—often shown (Fig. 19.52). Drainage—chiefly external drainage due to
by compositional layering as fine lineations on the its impervious nature; high drainage density, generally
photo-graphs/images; massive pure sandstones may lack well developed, fine-textured, uniform drainage; most
manifestation of bedding. Jointing—invariably very commonly dendritic to sub-dendritic drainage owing to
prominent; several sets may be developed; coarse- homogeneity; gullies tend to be long and gently sloping
grained sandstone and conglomerate show more widely with gentle V-shaped cross-sections; in loose unin-
spaced and less regular jointing. Soil cover—variable; durated clayey sediments, the gullies have steeper

Fig. 19.52 Stereo aerial photos covering sandstone and shale that exhibit characteristic landform and drainage (courtesy of A. White)

cross-sections and form badland topography. Bedding—


rarely seen. Jointing—rarely exhibits prominent jointing.
Soil cover—often thick cover; moisture-rich in humid
areas due to its impervious nature and associated lower
elevations; dry in arid–semiarid regions. Vegetation—in
semiarid areas, clay shale may have poor vegetation due
to its impervious nature and low water content; in arid
areas, shale may be nearly barren of vegetation; in humid
areas, vegetation bandings may mark the lithology; shale
and clay grounds are often used for agricultural purposes.
Spectral characteristics—in VIS-NIR-SWIR images, dry
bare shales appear light-toned, except in the 2.1–2.4 µm
region, where they are dark due to the absorption band;
wet shales are dark-toned in solar reflection images; in Fig. 19.54 a, b SAR image of karst terrain in Indonesia characterized
by numerous pits and depressions (after Sabins 1983)
TIR images, shale generally appears light-toned (war-
mer), although the tone may be highly variable due to
other variables and surficial cover. Similarities—shale streams may be observed (sinking creeks!) (see
appears similar to other soft rocks such as schists/ Fig. 19.104b); in arid regions, the carbonates form ridges
phyllites but differs in regional setting. and hills. Drainage—marked by low drainage density in
3. Limestone and dolomite. Weathering—highly susceptible both arid and humid terrains; internal drainage high in
to dissolution by water; dolomite is harder than limestone humid areas; in arid regions, low drainage density due to
and also less susceptible to solution activity; the carbon- poor availability of water; valleys tend to be U-shaped.
ates are resistant rocks in arid regions. Landforms—in Bedding—often only weakly shown owing to the chemi-
humid areas, karst topography, subsidence and collapse cal origin of the carbonate rocks; intercalations of shales
structures, sink holes, trenches, caverns, subsurface may enhance manifestation of bedding. Jointing—gener-
channels, etc. due to action by surface and subsurface ally well developed; the joints provide sites for water
water (Figs. 19.53, and 19.54); these features are often action and control the shape and outline of solution
elongated in the direction of prominent joint, bedding or structures. Soil cover—light-coloured calcareous soil
shear planes; sudden or gradual disappearance of surface often develops on carbonates; as a result of the removal of

Fig. 19.53 Stereo aerial photographs showing the limestone terrain marked by depressions, break in topography, dry valleys and mottled surface
due to variation in surface moisture (courtesy Aerofilms Ltd.)

Fig. 19.55 Discrimination between limestone and dolomite. a Aerial image (pre-dawn); discrimination between these rocks is possible on
photograph of an area in Oklahoma, USA; rock types consist of the TIR image; dolomite has higher thermal inertia than limestone and
limestone (L), dolomite (D), and granite (G); these rocks appear similar granite, and therefore is brighter (warmer) than the other two rocks;
in tone in VNIR and exhibit limited textural differences. b Thermal-IR granites appear dark (cool) (K. Watson in Drury 2004)

carbonates in solution, the insoluble constituents such as isotropic and homogeneous, characteristics which can be
limonite and clays are left as a red-coloured residue, called easily observed on the remote sensing images. The intrusive
terra-rosa. Vegetation—variable, depending upon weath- igneous rocks may occur in different shapes and dimensions,
ering and climate; in humid climates, vegetation may be e.g. batholiths, laccoliths, dykes, sills etc., and this may also
quite dense; vegetation may be sparse in karst landforms; help in identification (Figs. 19.56, and 19.57). Extrusive
carbonate terrain is also suitable for cultivation. Spectral igneous rocks can be delineated with the help of the asso-
characteristics—in the VIS-NIR-SWIR images, bare ciated volcanic landforms, lavaflows, cones, craters, vol-
slopes of limestones generally appear light toned (except canic necks, dykes etc. The flows have rough surface
at longer SWIR wavelengths, where they may be dark due topography and discordant contacts with the bedrock. Lava
to the carbonate absorption band at 2.35 µm); very fre- flows may be interbedded with non-volcanic sediments, and
quently, mottling due to variation in moisture content; a number of flows collectively may impart the impression of
terra-rosa appears as dark bands along joints or as patches, rough sub-horizontal bedding. Older extrusive rocks have
and may exhibit Fe–O absorption bands in the blue–UV thicker soil and vegetation cover due to prolonged weath-
region; SAR images may exhibit surface roughness, ering, which renders their identification more difficult;
topographical variations (collapse and subsidence etc.); younger extrusive rocks are relatively easy to demarcate.
limestones may be distinguished from dolomite on the TIR
images due to differences in thermal inertia (Fig. 19.55). 1. Granites. Weathering—weathering characteristics of
Similarities—carbonates may appear similar to sandstones granites differ greatly; in humid warm climates, granites
in arid regions but the latter exhibit compositional band- are more prone to weathering than in cold dry climates.
ing; further, carbonates possess an absorption band in the Landform—granites occur as bodies of gigantic dimen-
SWIR (at 2.35 µm) and lack one at 9 µm. sions formed either as intrusive bodies (Fig. 19.56) or as
products of migmatization; in warm humid climates, they
typically exhibit smooth rounded shapes due to spher-
oidal weathering; topography is generally low lying;
19.6.3 Igneous Rocks sometimes woolsack weathering is shown, in which
isolated outcrops protrude through thick weathered
Igneous rocks are characterized by the absence of bedding or mantle; huge boulders may be observed in valleys; in arid
foliation. Intrusive igneous rocks are generally massive, and semiarid regions, steep, sharp, jagged forms develop;

susceptible to weathering and alteration, especially under


humid conditions; they commonly yield lateritic residual
deposits and montmorillonitic soil. Landform—these
bodies have smaller dimensions than the acidic intrusives
and occur as laccoliths, lopoliths, plugs etc.; they com-
monly form undulating rolling topography in humid
warm climates; in arid–semiarid regions, sharply rising
rough and jagged forms are common; domal structures or
circular depressions may indicate intrusion. Drainage—
commonly coarse dendritic drainage patterns; rectangular
and angulate patterns in jointed rocks; locally, radial and
concentric patterns associated with domal upwarps; fine
dendritic drainage over weathered soil (montmorillonitic)
cover. Bedding—absent; layers formed by primary dif-
ferentiation may resemble bedding. Jointing—often well
developed; the joints vary in orientation with the form of
the igneous body; altered masses may not exhibit a joint
pattern. Soil cover—in warm humid climates, soil is well
developed; the soil cover is thicker over ultrabasic rocks.
Fig. 19.56 The Air Plateau forming the southern extension of the Vegetation—scanty over basic rocks; ultrabasic rocks
Hoggar Mountains, North Africa, is a Precambrian massif lying on the may even be barren of vegetation owing to the toxic
border of the Sahara desert sands. Numerous large alkaline granitic effects of certain metals; plant selectivity may occur, i.e.
intrusions (late Palaeozoic or Mesozoic age) are seen. Several extensive
faults with lateral movements are also conspicuous. The area contains preferential growth of certain plants over a specific
deposits of uranium, wolfram, copper etc. (courtesy of R. Haydn) lithological unit. Spectral characteristics—in the VNIR
range, basic–ultrabasic rocks are rather dark-toned;
alteration products may comprise clays which possess
sharp and steep forms may also develop due to rapid absorption bands in the SWIR region (2.1−2.4 µm);
erosion in the tropics. Drainage due to rapid erosion in silicate absorption bands occur in the TIR region (10
the tropics, generally low to medium density dendritic −11.5 µm). Similarities—as discussed in the case of
drainage; sometimes sickle-shaped drainage develops; granites.
rectangular and angulate drainage patterns may develop 3. Dolerite. Dolerites are intrusive basic rocks of hypa-
when well jointed. Bedding—absence of compositional byssal type, occurring as dykes and sills. Therefore,
layering. Jointing—three–four sets of joints often seen; except for the mode of occurrence, most of the characters
the granites may exhibit sheeting (sub-horizontal exten- of dolerite are similar to those of gabbro. Dolerite sills
sive joints). Soil cover—may be quite thick in warm are difficult to identify on remote sensing data, as these
humid climates. Vegetation—variable from poorly to bodies can be mistaken for layering in the host rocks. On
thickly vegetated; the weathered slopes are suitable for the other hand, dykes can be easily demarcated by their
cultivation. Spectral characteristics—on VIS-NIR-SWIR discordant relationship with respect to the host rocks, and
images, granite is light- to medium-toned; on TIR images a relatively thin sheet-like shape (Fig. 19.57); in relief,
granites appear cooler (darker); low-emissivity bands due dykes commonly stand out above the country rocks,
to quartz and feldspar occur at ≅9 µm; surface moisture, appearing as walls, but may also occasionally form
soil and vegetation may significantly influence spectral trenches.
response over granites. Similarities—all large-sized 4. Acidic extrusive rocks (rhyolite, pumice and obsidian).
intrusive bodies may appear quite similar; basic and Weathering—highly susceptible to weathering, which
ultrabasic rocks are darker in the VIS region, smaller in means obliteration of characteristic landforms and fea-
size, have relatively poor vegetation cover, and display tures in older flows. Landforms—acidic lava is viscous
low emissivity bands at relatively longer wavelengths (10 and therefore restricted in extent, oblate in outline, has a
−11.5 µm) in the TIR region. rough topography and an irregular hummocky surface;
2. Basic and ultrabasic intrusives. Weathering—basic and volcanic landforms are associated. Drainage—absent or
ultrabasic rocks (gabbro and peridotite) are highly very coarse due to high porosity in newer flows; older

of fissure-eruption type are more flat near the centre,


develop mesa landform (Figs. 19.4, and 19.5), become
serrated along the periphery, and are associated with
dykes. Drainage—may be almost absent in the initial
stages due to high permeability; fine dendritic drainage
develops over weathered plateau basalts due to low
permeability; radial and annular drainage may be seen
associated with cones. Bedding—absent; successive
flows interbedded with non-volcanic sediments may
resemble coarse bedding shown on scarp faces. Jointing
—typical columnar jointing often present. Soil cover—
dark-coloured montmorillonitic soils develop as a result
of weathering. Vegetation—sparse on younger flows;
weathered areas may support vegetation and cultivation.
Spectral characteristics—dark in the VIS range, espe-
cially at the blue end; medium tones in the NIR-SWIR
ranges; mottled appearance due to variation in moisture;
absorption bands of mafic minerals in the TIR range;
soils are very dark in the VIS-NIR range and exhibit clay
bands in the SWIR range. Similarities—the basalts
appear somewhat similar to acidic extrusives and may be
differentiated on the basis of landform, weathering and
spectral properties.

Figure 19.58a shows an AIRSAR image of Kilauea vol-


cano, Hawaii. Basaltic lava flows of varying ages which
possess differences in surface characteristics can be distin-
guished. Figure 19.58b is another image showing various
volcanic features and lava flows in a part of the Andes.

Fig. 19.57 Dykes causing damming of surface streams; b interpreta-


tion map (a Landsat TM FCC of a part of Central India, printed 19.6.4 Metamorphic Rocks
black-and-white)

Metamorphic rocks are marked by foliation and some


lavas display higher drainage density and dendritic stratification. The foliation is manifested as photo-lineations,
drainage; radial drainage is associated with volcanic which are short and numerous, parallel to one another.
structures. Bedding—absent. Jointing—may be faintly Rocks of regional metamorphism are generally deformed
developed in some lava flows. Soil cover—may be thick and fractured (Fig. 19.59). In semi-arid conditions, the
in older flows. Vegetation—sparse in the case of younger fractures control drainage and vegetation (Fig. 19.60). The
flows; older flows frequently support good vegetation. metamorphic derivatives of intrusive rocks, i.e.
Spectral characteristics—generally light-toned on meta-intrusives, may appear quite similar to the intrusive
VIS-NIR-SWIR images; weathering, soil cover and rocks on remote sensing images.
vegetation may have significant influence; ‘clay bands’ in
the SWIR may be present in the case of altered flows. 1. Quartzite. Weathering—highly resistant both in humid
5. Basalt. Weathering—highly susceptible to weathering; and dry climates. Landform—forms hills, ridges, scarps
original landforms are difficult to identify after weather- and topographically prominent features. Drainage—low
ing. Landforms—basaltic lava has low viscosity and is to medium density because of the steep slopes, even
commonly of a ropy type; presence of flow structures; though permeability is very low; rectangular and angu-
oblate outline with flat and rough topography; basalts of late drainage patterns frequent; trellis pattern in interca-
central-eruption type exhibit gently sloping cones; those lated sequences. Foliation—quartzites are most

VIS-NIR-SWIR images; low emissivity bands due to


quartz/feldspar are prominent at *9 µm in the TIR
region; limonite, clays, and vegetation may significantly
influence spectral response. Similarities—may some-
times resemble marble in arid regions; however, marble
is a highly deformed rock and has absorption bands in the
SWIR region; may also resemble sandstone, but sand-
stone is located in sedimentary settings.
2. Marble. Weathering—in arid and semi-arid regions,
marble is quite resistant to weathering; in warm humid
climates, it may be susceptible to solution activity.
Landform—forms ridges and hills in arid climates; may
display karst features in humid climates, often shows
smooth rounded surfaces. Drainage—generally coarse
density. Foliation—the intercalated bands of amphibo-
lites and schistose rocks may indicate the trend of folia-
tion; marbles may show a highly deformed, folded
pattern due to plastic deformation. Jointing—often well
jointed. Soil cover—thin soil cover may be present.
Vegetation—variable depending upon composition,
weathering and soil cover. Spectral characteristics—in
general, marbles are light-toned on VIS-NIR-SWIR
images; variation in moisture content leads to mottling;
exhibit absorption band in the SWIR region (2.35 µm).
Similarities—as discussed in quartzites and limestones.
3. Schist and phyllite. Weathering—generally incompetent
rock. Landform—constitutes valleys and lower hill
slopes; develops rounded forms in humid climates and
relatively steeper slopes in arid climates (Fig. 19.54).
Drainage—dendritic drainage often well developed; high
drainage density; occasionally drainage may be controlled
by foliation. Foliation—strongly developed but may be
Fig. 19.58 Volcanic flows; a AIRSAR image showing Kilauea masked under soil cover. Jointing—generally well jointed
volcano, Hawaii; the main caldera is seen on the right side of the and fractured. Soil cover—often thick. Vegetation—schist
image; lava flows of different ages exhibit differences in backscatter due
and phyllite support fairly good vegetation in humid cli-
to variation in surface roughness (multiband image with colour coding:
R=P-band HV; G=L-band HV, B=C-band HV) (courtesy of mates, whereas in arid climates the vegetation may be
NASA/JPL) b ASTER FCC image of part of the Andes. The scene is sparse; suitable for cultivation. Spectral characteristics—
dominated by the Pampa Luxsar lava complex (mostly upper right of generally moderate to light tones; iron-rich minerals such
the scene). Lava flows are distributed around remnants of large
as biotite, chlorite and amphiboles may produce medium
dissected cones. On the middle-left edge are the Olca and Paruma
stratovolcanoes appearing in blue due to lack of vegetation (courtesy of tones; prominent absorption bands occur in the SWIR
NASA/GSFC/MITI/ERSDAC/JAROS, and US/Japan ASTER Science region (2.1−2.4 µm). Similarities—schists and phyllites
Team) are similar to gneisses, which are somewhat less foliated
and coarsely layered; weathering, landform and spectral
commonly massive and lack foliation. Jointing—very characteristics of schists may resemble those of shales,
prominently developed, often three to four sets. Soil which occur in sedimentary settings.
cover—massive quartzites have scant soil cover; impure 4. Slate. Slates are similar to schists and phyllites in photo
quartzites weather to yield good soil cover. Vegetation— characteristics, except that they are very dark in the
massive pure quartzites are barren of vegetation, whereas visible region and strongly foliated and cleaved.
weathered impure quartzites support good vegetation. 5. Gneiss. Weathering—possess greater resistance to
Spectral characteristics—generally light-toned on weathering than schists and phyllites, and less resistance

Fig. 19.59 Aerial photo stereo


pair of a typical metamorphic
terrain; note the development of
folds, faults and fractures;
c interpretation map (photos
courtesy of A. White)

than quartzites. Landform—generally low-lying undu- developed. Soil cover—highly variable. Vegetation—
lating terrains; rounded smooth surfaces. Drainage— generally good vegetation cover. Spectral characteristics
high density; sub-parallel; sub-dendritic, rectangular —highly variable, depending upon mineral composition,
drainage patterns. Foliation—gneisses typically show as the rock comprises layers of mainly quartz + feldspar
foliation; they are commonly interbedded with other and mica + amphiboles + chlorite; spectral banding (dark
lithological layers such as mica schists, amphibolites and and light bands), is generally prominent. Similarities—at
quartzites; this leads to the development of a banded times gneisses may appear similar to acidic intrusive
pattern on photographs and images. Jointing—well rocks, and can be distinguished from the latter on the

basis of foliation, banded pattern and metamorphic setting; they may also appear similar to schists and phyllites, from which they can be distinguished on the basis of coarser foliation and faint tonal/lithological layering.

Fig. 19.60 Typical fracture-controlled drainage in a metamorphic terrain under semi-arid conditions; common drainage patterns are rectangular, angular and trellis type; the terrain is generally bare of vegetation, vegetation being also preferentially aligned along drainage channels, i.e. fractures (IKONOS image of a part of Andhra Pradesh, India)

19.7 Identification of Mineral Assemblages from ASTER Ratio Indices

19.7.1 Introduction

Identification of mineral assemblages is fundamentally based on the spectral characteristics of minerals and rocks as described in Chap. 3. Figure 19.61 gives an overview of the broad mineral-absorption bands. Iron absorption bands occur in the VNIR; absorption bands positioned at different wavelengths characterizing different types of hydroxyl-bearing silicate minerals and carbonates occur in the SWIR; and low-emissivity (reststrahlen) bands characterizing different silicate rocks from acidic to ultrabasic and carbonates occur in the thermal infrared region.

Fig. 19.61 Absorption bands in the optical region (visible to thermal IR) that enable remote sensing mapping of mineral assemblages and rocks. Bands in the VIS-NIR-SWIR correspond to low reflectance, and those in the TIR to low emittance

It would be appropriate to mention TIMS here first. In the 1980s-90s, TIMS (the Thermal Infrared Multispectral Scanner) was an important sensor, being an experimental airborne forerunner to ASTER. It carried spectral bands in the TIR exactly similar to those now in ASTER. TIMS was flown at numerous experimental sites to demonstrate the capability of TIR multispectral data for lithological-mineralogical identification. A host of examples have been published: viz., for mapping alluvial fans of different compositions (Gillespie et al. 1984), identifying a variety of basaltic flows (Lockwood and Lipman 1987), understanding variation in an alkali rock complex (Watson et al. 1996), discriminating carbonatites and alkaline igneous rocks (Rowan et al. 1993), and mapping rocks ranging from leucogranites to anorthosites (Sabine et al. 1994), to mention a few.

We will focus the remaining discussion in this section on the ASTER sensor data, as this has been the most suitably designed spaceborne sensor for lithologic-mineralogic studies till date, and data from this sensor has been available on a global basis, free of charge. The characteristics of the ASTER sensor have been described in Sect. 6.9.

While considering applications of ASTER, it is worthwhile to also mention the limitations and problems. One important limitation is that ASTER has a relatively coarse pixel size (30 m in SWIR and 90 m in TIR) that often leads to mixed pixels; secondly, it is not a hyperspectral sensor; and thirdly, pixel response may sometimes be affected by cross-talk (a type of noise). Further, almost invariably, ratio images are required to be generated, which are generally noisier.

For computing spectral ratios, one may use either raw ASTER data, i.e. simple DN values in the spectral bands, or alternatively processed ASTER data, viz. computed values such as radiance at sensor, or top-of-the-atmosphere radiance, or atmospherically corrected surface reflectance, or radiant

temperature or surface emissivity (Ninomiya and Fu 2001). Use of corrected and physically computed values would lead to minimizing the interfering effects of other factors, such as atmosphere and topography, in the images.

19.7.2 Approaches for Computing Spectral Ratios

Three different approaches have been followed for computing spectral ratios:

1. Linear spectral slope
2. Relative (absorption) band depth (RBD)
3. Multiple spectral slopes

1. Linear Spectral Slope: This is the age-old, simplest approach. Figure 19.62a shows a spectral response curve along with two spectral bands (A and B) of a sensor. The DN values corresponding to the two bands are DN_A and DN_B respectively. Assuming a simple linear gradient, the ratio can be computed as:

Spectral ratio = DN_B / DN_A    (19.1)

2. Relative (absorption) band depth: This approach uses the relative depression in the spectral curve caused due to absorption as a measure of the spectral ratio. Figure 19.62b shows the spectrum of a mineral with a depression due to absorption along with shoulders on either side. The sensor bands may be considered at A, B, and C. The DN values corresponding to the sensor bands are DN_A, DN_B and DN_C respectively. As per the figure, the relative depth of absorption at B can be considered as RBD_B = h1/h2. Computation of the spectral ratio is a simple three-point formulation: the numerator is the sum of the bands located closest to the shoulders and the denominator is the band nearest to the absorption feature minimum (Crowley et al. 1989):

Spectral ratio in terms of RBD_B = (DN_A + DN_C) / (2 × DN_B)    (19.2)

3. Multiple spectral slopes: In this approach spectral slopes for an absorption band are computed from both the shoulders and then multiplied (Ninomiya and Fu 2002; Ninomiya et al. 2005), i.e. this uses the product of linear spectral slopes from either shoulder to the absorption band. Figure 19.62c shows an example. The absorption feature minimum is closest to the spectral band B and the shoulders are located close to the spectral bands A and C. The spectral slopes computed from either side are DN_A/DN_B and DN_C/DN_B. The spectral ratio can be given as:

Spectral ratio = (DN_A / DN_B) × (DN_C / DN_B)    (19.3)

Application examples of all the three approaches are presented in the following pages.
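For readers who wish to experiment with these formulations, the three ratio types can be written out directly on band arrays. The following is a minimal Python/NumPy sketch, in which dn_a, dn_b and dn_c stand for co-registered arrays of DN (or reflectance) values for the shoulder bands A, C and the absorption band B; the small epsilon guarding against division by zero is an implementation detail, not part of Eqs. 19.1-19.3.

import numpy as np

EPS = 1e-6  # guards against division by zero in masked or no-data pixels

def linear_slope_ratio(dn_a, dn_b):
    # Eq. 19.1: simple two-band ratio across a linear spectral slope
    return dn_b / (dn_a + EPS)

def relative_band_depth(dn_a, dn_b, dn_c):
    # Eq. 19.2: sum of the shoulder bands over twice the absorption band
    return (dn_a + dn_c) / (2.0 * dn_b + EPS)

def multiple_slope_ratio(dn_a, dn_b, dn_c):
    # Eq. 19.3: product of the slopes from either shoulder to the absorption band
    return (dn_a / (dn_b + EPS)) * (dn_c / (dn_b + EPS))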
Fig. 19.62 Approaches for computing spectral ratios: a simple linear spectral slope, b relative (absorption) band depth (RBD), and c multiple spectral slopes

19.7.3 ASTER Ratio Indices in the VNIR Region

1. Ferric Iron Index. As mentioned in Chap. 3, iron oxide is the only constituent of significance in terms of surface occurrence and abundance that has spectral features in the VIS-NIR. It is quite a common constituent of mineral

oxidation and alteration zones and occurs on the surface ubiquitously. The presence of iron oxide (ferric ion in hematite, goethite, jarosite) leads to strong absorption in the UV-blue region, affecting the slope of the reflectance curve in this region. Figure 19.63 shows the ferric absorption band together with the ASTER spectral bands. Based on this, the ferric iron index can be formulated as (Rowan and Mars 2003):

Ferric Iron Index = B2 / B1    (19.4)

Higher values of this ratio imply a greater amount of ferric iron. Conceptually, this ratio is similar to the MSS2/MSS1 band ratio used in MSS for mapping iron oxide/limonite (e.g. Rowan et al. 1974), or TM3/TM2 used for the TM/ETM sensor, or, in a generalized way, the (red band)/(green band) ratio for data from other sensors.

2. Ferrous iron index. This is rather scarcely used, as far as ASTER data is concerned. Ferrous iron exhibits an absorption feature at around 1.0 µm, where ASTER does not have any band-pass. However, some effect of this absorption feature is observed in ASTER band 3 by Rowan et al. (2005), who suggest that B5/B3 and B1/B2 could give some idea of ferrous iron.

3. Gossan index: utilizes the generally higher reflectance in B4 and a lower reflectance in B2. As discussed elsewhere, vegetation may interfere with limonite (gossan) in terms of spectral response. The gossan index is given as (Velosky et al. 2003):

Gossan Index = B4 / B2    (19.5)

Fig. 19.63 Spectral reflectance curves of jarosite, hematite and goethite together with ASTER bands 1, 2 and 3; spectral curves offset for clarity; vertical lines indicate the ASTER band centre positions
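A minimal sketch of the VNIR indices (Eqs. 19.4 and 19.5, together with the ferrous-iron ratios suggested by Rowan et al. 2005) in Python/NumPy follows; b1-b5 are assumed to be co-registered ASTER band arrays, ideally surface reflectance, and the variable names are illustrative only.

import numpy as np

def vnir_indices(b1, b2, b3, b4, b5, eps=1e-6):
    """Pixel-wise ASTER VNIR ratio indices."""
    ferric_iron = b2 / (b1 + eps)   # Eq. 19.4: higher values imply more ferric iron
    gossan = b4 / (b2 + eps)        # Eq. 19.5
    ferrous_a = b5 / (b3 + eps)     # B5/B3 (Rowan et al. 2005)
    ferrous_b = b1 / (b2 + eps)     # B1/B2 (Rowan et al. 2005)
    return ferric_iron, gossan, ferrous_a, ferrous_b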

19.7.4 ASTER Ratio Indices in the SWIR Region

As mentioned in the beginning, absorption bands positioned at different wavelengths, characterizing different types of hydroxyl-bearing silicate minerals and carbonates, occur in the SWIR. These minerals frequently occur in hydrothermal alteration zones associated with base-metal sulphide deposits; therefore, a lot of remote sensing research has been focussed on the sites of hydrothermal alteration zones, particularly of porphyry copper deposits, which form large deposits.

Typically, a porphyry copper deposit possesses a zoned structure. There is a quartz-potassium feldspar bearing innermost zone, which is surrounded by a phyllic zone, an argillic zone and a propylitic zone. Each of these zones possesses a unique mineralogical assemblage which exhibits characteristic absorption features in the SWIR. The ASTER ratio indices for identifying these mineral groups are discussed below and are summarized in Table 19.3.

Table 19.3 Important ASTER ratio indices for identification of mineral assemblages

VNIR
1. Ferric iron index: B2/B1 (Rowan and Mars 2003); corresponds to TM3/TM2 (or alternatively TM2/TM1)
2. Ferrous iron index: (B5/B3) + (B1/B2) (Rowan et al. 2005)
3. Gossan index: B4/B2 (Velosky et al. 2003)

SWIR
4. Generalized hydroxyl-alteration: (B4 × 3)/(B5 + B6 + B7); corresponds to the widely used TM5/TM7
5. Propylitic index (chlorite-epidote-calcite-actinolite): (B7 + B9)/(B8 × 2) (Rowan et al. 2005)
6. Phyllic index (sericite-muscovite-illite-smectite): (B5 + B7)/(B6 × 2) (Rowan and Mars 2003); (B4 + B7)/(B6 × 2) (Rowan et al. 2005); (B4 × B7)/(B6 × B6) (Ninomiya 2003a)
7. Argillic index (pyrophyllite-kaolinite-alunite): (B4 + B6)/(B5 × 2) (Mars and Rowan 2006); (B4 × B7)/(B5 × B5) (Ninomiya 2003a)
8. Calcite index: (B7 + B9)/(B8 × 2) (Rowan and Mars 2003); (B6 × B9)/(B8 × B8) (Ninomiya 2003b); caution: B8 is also affected by propylitic alteration silicates (chlorite-epidote-hornblende)

Thermal IR
9. Quartz index (QI): (B11 × B11)/(B10 × B12) (Ninomiya et al. 2005); very low values of QI may be associated with gypsum; [B11/(B10 + B12)] × (B13/B12) (Rockwell and Hofstra 2008); this ratio is equivalent to a mixture of QI and SI
10. Silica index (SI): B13/B12 (Ninomiya and Fu 2002); note that this SI (B13/B12) is the inverse of MI; B14/B12 (Rowan et al. 2006); caution: as B14 is affected by carbonate absorption, the index using B14 is valid only for carbonate-free terrain
11. Carbonate index (CI): B13/B14 (Ninomiya and Fu 2002)
12. Mafic index (MI): B12/B13 (Ninomiya et al. 2005); MIcorrected = MIc = MI/CI³ (Ninomiya et al. 2005); Mafic Index and Ultramafic Index (below) are slightly overlapping
13. Ultramafic index (UMI): (B12 + B14)/(B13 × 2) (Rowan et al. 2005); caution: as B14 is affected by carbonate absorption, the index using B14 is valid only for carbonate-free terrain

1. Generalized hydroxyl alteration. The generalized hydroxyl alteration can be identified by using the aggregation of ASTER B5+B6+B7, and the ratio can be formulated as:

Generalized hydroxyl alteration = (B4 × 3) / (B5 + B6 + B7)    (19.6)

2. Propylitic group. The propylitic alteration zone possesses the mineral assemblage chlorite-epidote-actinolite (amphibole)-calcite, and this mineral assemblage is also popularly referred to as the 'chlorite group' in this context. Figure 19.64 shows that all these minerals exhibit absorption features at 2.35 µm, which coincides with ASTER band 8. Therefore, the index for this absorption band is given as (Rowan et al. 2005):

Propylitic Index = (B7 + B9) / (B8 × 2)    (19.7)

It is noteworthy that the absorption feature in silicates here is due to Fe, Mg-OH; therefore, other Fe, Mg-OH minerals such as biotite, phlogopite etc. that occur in mafic-ultramafic rocks also exhibit the absorption feature at 2.35 µm. Hence, this index has significance in identifying the mafic-ultramafic suite of rocks.

3. Phyllic group. The phyllic alteration zone has a mineral assemblage of muscovite-sericite-illite-smectite, also popularly called the 'illite group' here. All these minerals possess an intense Al-OH absorption band at 2.20 µm, which coincides with ASTER band 6 (Fig. 19.65). Simple ratios (B4/B6), (B5/B6) and (B7/B6) define the 2.20 µm absorption band. The phyllic index has been given as (Rowan and Mars 2003; Rowan et al. 2005; Ninomiya 2003a):

Phyllic Index = (B5 + B7) / (B6 × 2)    (19.8)

= (B4 + B7) / (B6 × 2)    (19.9)

= (B4 × B7) / (B6 × B6)    (19.10)

Muscovite-sericite are common minerals in the granitic (acidic) suite of rocks and therefore the phyllic index is important in identifying acidic rocks.

4. Argillic group. Alteration minerals included in this group comprise pyrophyllite, kaolinite and alunite (alunite is a sulphate but is still included in this group due to its close association in alteration zones of sulphides); this group is popularly called the 'pyrophyllite group'. All these minerals exhibit strong absorption in ASTER band 5 (2.17 µm) (Fig. 19.65). Therefore, the argillic index is given as (Mars and Rowan 2006; Ninomiya 2003a):

Argillic Index = (B4 + B6) / (B5 × 2)    (19.11)

= (B4 × B7) / (B5 × B5)    (19.12)

It may be mentioned here that minerals such as kaolinite and alunite possess absorption features both at 2.17 and 2.20 µm; therefore they show up in both phyllic and argillic index images.

Further, an alunite index has also been given separately as (Ninomiya 2003b):

Alunite Index = (B7 × B7) / (B5 × B8)    (19.13)

5. Calcite. Calcite has a strong absorption feature at 2.35 µm (ASTER band 8) (Fig. 19.64), which leads to the formulation of the calcite index as (Rowan and Mars 2003; Ninomiya 2003b):

Calcite Index = (B7 + B9) / (B8 × 2)    (19.14)

= (B6 × B9) / (B8 × B8)    (19.15)

It is obvious that propylitic silicate minerals with the (Fe, Mg)-OH absorption feature at B8 would interfere with this calcite index.

Fig. 19.64 Spectral reflectance curves of propylitic minerals—chlorite, epidote, actinolite and calcite together with ASTER bands 1-9; vertical lines indicate the band centre positions; spectral data obtained from USGS spectral library

Fig. 19.65 a, b Spectral reflectance curves of selected phyllic and argillic minerals: pyrophyllite, calcite, muscovite, gypsum, kaolinite, montmorillonite and alunite; vertical lines correspond to the ASTER band centre positions (mineral spectra after Goetz and Rowan 1981)

A few image examples of the SWIR indices are given in Sect. 19.8.6 (Figs. 19.79, 19.80, and 19.81) dealing with mineral exploration.
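As a worked illustration of the SWIR indices above (one variant of each of Eqs. 19.6-19.14), the ratios can be computed band-wise on co-registered ASTER SWIR arrays. This is a minimal Python/NumPy sketch under stated assumptions: b4-b9 hold crosstalk-corrected reflectance (or DN) values for ASTER bands 4-9, and the dictionary keys are illustrative labels.

import numpy as np

def swir_alteration_indices(b4, b5, b6, b7, b8, b9, eps=1e-6):
    """ASTER SWIR ratio indices for alteration mapping; higher values flag
    stronger absorption by the corresponding mineral group."""
    return {
        "hydroxyl":   (b4 * 3.0) / (b5 + b6 + b7 + eps),   # Eq. 19.6
        "propylitic": (b7 + b9) / (2.0 * b8 + eps),        # Eq. 19.7
        "phyllic":    (b5 + b7) / (2.0 * b6 + eps),        # Eq. 19.8
        "phyllic_n":  (b4 * b7) / (b6 * b6 + eps),         # Eq. 19.10 (Ninomiya 2003a)
        "argillic":   (b4 + b6) / (2.0 * b5 + eps),        # Eq. 19.11
        "alunite":    (b7 * b7) / (b5 * b8 + eps),         # Eq. 19.13
        "calcite":    (b7 + b9) / (2.0 * b8 + eps),        # Eq. 19.14 (same form as Eq. 19.7)
    }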

19.7.5 ASTER Ratio Indices in the Thermal IR Region

In the thermal-IR region, the various important rock-forming minerals (viz. silicates and carbonates) exhibit the characteristic Reststrahlen (= low-emissivity) bands (Sect. 3.6.2). The position of the low-emissivity band systematically shifts from about 9 µm in granites to about 11 µm in peridotites. This makes it possible to identify mineralogical assemblages/rocks using multispectral TIR data.

The relationship between weight percent silica and the spectral emissivity minimum has been of particular interest to geoscientists (Sabine et al. 1994; Ninomiya 1995). Figure 19.66 shows that this is a generalized linear relationship (Hook et al. 2005). It is implicit that the emissivity minimum data can give an estimate of bulk surface mineralogic composition.

Fig. 19.66 Linear correlation of the wavelength of emissivity minimum with weight percent silica of igneous rocks; minimum emissivity values derived by fitting a Gaussian function; spectra and weight percent silica data belong to the ASTER spectral library; simplified after Hook et al. (2005)

Fig. 19.67 Emissivity spectra of a carbonate rocks, b quartzose rocks, c granite, d diorite, e gabbro, and f peridotite with ASTER bands 10-14; vertical lines indicate the band centre positions; modified after Ninomiya et al. (2005)

Figure 19.67 shows the spectral curves of important rocks in the TIR together with ASTER bands. Based on these, several rock indices have been conceived and formulated:

1. Quartz Index (QI). Quartz here refers to the quartz mineral, viz. free silica in igneous rocks (not total silica!). The spectral curve for quartzite, which is composed predominantly of quartz, shows a small peak at B11 and absorption features at the adjacent B10 and B12 (Fig. 19.67). This unique character of quartz is used for QI. QI is defined as (Ninomiya et al. 2005; Rockwell and Hofstra 2008):

Quartz Index (QI) = (B11 × B11) / (B10 × B12)    (19.16)

= [B11 / (B10 + B12)] × (B13 / B12)    (19.17)

Apparently, the QI of Rockwell and Hofstra (Eq. 19.17) represents a mixture of QI and SI (see below).

In general, therefore, the value of QI will be high for quartz-rich pixels. However, as feldspars have a spectral property opposite to quartz in B10 through B12, QI is good for estimating quartz in rocks that are poor in feldspars. Igneous rocks such as granites, which are rich in both quartz and K-feldspar, would have low QI values. A very high value of QI would imply rocks low in feldspars. Hence, QI also functions as an indicator of quartz-rich sedimentary and metamorphic rocks (pure sandstones, orthoquartzites, quartzites). It may be mentioned here that very low values of QI would be indicative of sulphates, typically gypsum (Ninomiya and Fu 2016) (for the spectral curve of sulphates, see Fig. 3.9).

2. Silica Index (SI). This parameter pertains to the total silica content in the rock, whether in the form of free silica (quartz) or as contained in silicates (e.g. in feldspars, micas, amphiboles, pyroxenes, olivine etc.). If we examine the spectra of different rocks, there is a distinct response pattern from B12 to B14 for acidic to ultramafic rocks (Fig. 19.67), and this pattern could be related to the total silica content in rocks, as (Ninomiya and Fu 2002; Rowan et al. 2006):

Silica Index (SI) = B13 / B12    (19.18)

= B14 / B12    (19.19)

SI values are higher for silica-rich pixels. The first SI (B13/B12) is just the inverse of the Mafic Index described below and may also be sensitive to carbonates, which possess a strong absorption band in B14. The second SI (B14/B12) is valid only for carbonate-free terrain due to, again, the strong absorption by carbonates in B14.
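A minimal Python/NumPy sketch of QI and SI (Eqs. 19.16 and 19.18) follows; b10-b13 are assumed to be co-registered ASTER TIR band arrays (radiance or, preferably, emissivity), and the function and variable names are illustrative only.

import numpy as np

def quartz_index(b10, b11, b12, eps=1e-6):
    # Eq. 19.16 (Ninomiya et al. 2005): high for quartz-rich, feldspar-poor pixels
    return (b11 * b11) / (b10 * b12 + eps)

def silica_index(b12, b13, eps=1e-6):
    # Eq. 19.18 (Ninomiya and Fu 2002): higher for silica-rich pixels; inverse of MI
    return b13 / (b12 + eps)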

Fig. 19.68 a Colour ratio composite generated from QI, CI and MI indices derived from ASTER data coded as RGB in a part of west-central Tibet. The image shows no topographic effects and has a diverse range of rock types; note the occurrence and distribution of rocks such as: quartz-rich, carbonate-rich, felsic silicates, mafic silicates and ultramafic silicates (ophiolite mélange) (image courtesy Robert K. Corrie, The Queen's College, University of Oxford; annotation modified after Corrie et al. 2010); b colour ternary diagram representing rock compositions associated with different colours on the CRC; c explanation of colours in terms of the composition:

Label  Colour                Interpretation
A      Pink-orange           High QI, low CI, low MI; therefore this may represent quartz-rich, feldspar-poor, sedimentary rocks (sandstones)
B      Green                 High CI, low to intermediate QI, low to intermediate MI; these are dominantly carbonate-rich rocks (limestone-dolomite)
C      Gray-dark gray        Low QI (implying the presence of both quartz and feldspar, as quartz and feldspars have mutually opposite response in the bands used for QI), low CI, low MI; therefore these rocks are felsic silicate rocks (granites)
D      Bluish dark gray      Low QI, low to intermediate CI, intermediate to high MI; therefore these rocks are mafic silicates (basalts)
E      Bright blue-purplish  Low QI, low CI, very high MI; therefore these rocks are ultramafic (ophiolites)
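A colour ratio composite of the kind shown in Fig. 19.68a can be assembled by computing QI, CI and MI per pixel (the latter two are defined in the following pages) and coding them as red, green and blue after a contrast stretch. The sketch below is only illustrative: b10-b14 are assumed co-registered ASTER TIR band arrays, and the 2-98 percentile stretch is an arbitrary display choice, not part of the published method.

import numpy as np

def stretch(x, lo=2, hi=98):
    """Linear contrast stretch between the given percentiles, output scaled 0-1."""
    p_lo, p_hi = np.percentile(x, (lo, hi))
    return np.clip((x - p_lo) / (p_hi - p_lo + 1e-6), 0.0, 1.0)

def qi_ci_mi_composite(b10, b11, b12, b13, b14, eps=1e-6):
    """Code the Quartz, Carbonate and Mafic indices (Eqs. 19.16, 19.20, 19.21) as RGB."""
    qi = (b11 * b11) / (b10 * b12 + eps)
    ci = b13 / (b14 + eps)
    mi = b12 / (b13 + eps)
    rgb = np.dstack([stretch(qi), stretch(ci), stretch(mi)])
    return rgb  # e.g. display with matplotlib: plt.imshow(rgb)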

3. Carbonate Index (CI). Carbonates (calcite and dolomite) these alteration minerals may not make primary mineral
exhibit a distinct absorption at B14; this makes up for CI constituents (Rowan et al. 2005). Ninomiya et al. (2005)
which is defined as (Ninomiya and Fu 2002): mapped carbonate rocks, quartzite/quartzose rocks, and
different mafic-ultramafic rocks using ASTER based CI, QI,
B13 and MI parameters in parts of China and Australia. Further,
Carbonate IndexðCIÞ ¼ ð19:20Þ Ninomiya and Fu (2016) carried out regional lithological
B14
investigation using ASTER-TIR indices for mapping dif-
The value of CI will be high for carbonate-rich pixels. ferent types of mafic-ultramafic rocks, granitic rocks,
quartzose rocks, carbonate rocks and sulphate (gypsum)
4. Mafic Index (MI). MI (also earlier called Basic Degree layers along with suture zones in the yet poorly explored
Index, BDI) is given as (Ninomiya et al. 2005): vast Tibetan plateau and the surrounding areas.
The various ratio indices could be displayed as simple
B12 black-and-white images or combined in a color composite
Mafic Index ðMIÞ ¼ ð19:21Þ
B13 for integrated interpretation. Figure 19.68a presents a colour
ratio composite image of a part of Tibetan plateau showing
The value of MI will increase with increasing mafic
diverse rocks such as quartz-rich, granites, carbonates, mafic
character of the rock. The MI is inversely correlated with
basalts and ophiolites—identified separately and dis-
total silica content in igneous rocks and discriminates mafic
tinctly. The colour ternary diagram in Fig. 19.68b and the
versus felsic rocks. Further, vegetation is likely to interfere
associated Fig. 19.68c explain the interpretation of
with MI. Additionally, because of the proximity of strong
colours observed on the colour ratio composite. Examples of
and broad carbonate absorption band at B14, the MI
alteration mapping are given in Figs. 19.80, 19.81.
(B12/B13) has a tendency to be sensitive to carbonates. In
order to take care of this, MI has been redefined as given
below by Ninomiya et al. (2005) who observe that this
yields a robust index for mapping mafic rocks: 19.8 Mineral Exploration

MI 19.8.1 Remote Sensing in Mineral Exploration


MIcorrected ¼ MIc ¼ ð19:22Þ
CI3
Application of ASTER data for identification of mineral assemblages has been done by a number of researchers; however, most of the studies have been for mapping hydrothermal alteration zones associated with mineral deposits. In general, VNIR + SWIR (reflectance) data and thermal IR (emittance) data can be used in a complementary fashion for compositional studies.

Using ASTER data in Mountain Pass, California, Rowan and Mars (2003) mapped quartzose rocks, carbonate rocks, granitic rocks, granodioritic rocks, more mafic rocks (comprising mafic gneiss + amphibolites) and skarn deposits. It may also be possible that in some cases a certain alteration mineral or mineral assemblage, viz. limonite and/or kaolinite, may characterize a particular lithology although …

… ratio composite image of a part of the Tibetan plateau showing diverse rocks such as quartz-rich rocks, granites, carbonates, mafic basalts and ophiolites—identified separately and distinctly. The colour ternary diagram in Fig. 19.68b and the associated Fig. 19.68c explain the interpretation of colours observed on the colour ratio composite. Examples of alteration mapping are given in Figs. 19.80 and 19.81.

19.8 Mineral Exploration

19.8.1 Remote Sensing in Mineral Exploration

Remote sensing techniques play a very significant role in locating mineral deposits and effectively reducing the costs of prospecting and mineral development. Although the various useful elements and minerals occur in a vast variety of genetic associations, the commercial deposits of minerals are limited in genetic types and modes of occurrence. This forms the basis of concept-based prospecting. With the models of commercial ore occurrences known, remote sensing techniques can help to rapidly delineate metallogenic provinces/belts/sites and mineral guides over a larger terrain. This can isolate potential areas from non-interesting areas for further exploration in a cost-effective manner.

Most commonly, an exploration programme is marked by four stages: (1) prospecting stage, (2) regional exploration stage, (3) detailed exploration stage and (4) mine exploration stage. Remote sensing is useful in all the stages, though it is most useful in the first, i.e. the prospecting stage, and becomes relatively less important in the subsequent stages.

The prospecting stage includes reconnaissance and preliminary investigations. During this, the aim is to define 'targets'. In this endeavour, small-scale (~1:100,000) satellite images, supplemented with limited larger-scale (~1:50 k to 1:25 k) multispectral image data and airborne geophysical surveys, constitute the most powerful data inputs for defining targets. Remote sensing serves as the fore-runner.

During the regional exploration stage, extensive surface geological mapping (commonly at ~1:50 k–1:25 k scale) is
carried out, and a few selected sites are surveyed using geophysical and geochemical techniques. During this stage, detailed analysis of remote sensing data provides a valuable input for completing surface geological mapping. Further, establishing a database in a GIS approach (see Sect. 19.9), integrated data processing and enhancement of indicators can help in the exploration programme.

During the detailed exploration stage, investigations are carried out at much larger scales (~1:10,000 to 1:5000). High spatial and spectral resolution remote sensing data could be useful at this stage.

After this, the mine exploration stage starts, which aims to define the mineral deposit at depth, and finally this leads to development, mining and exploitation programmes. High spatial and spectral resolution remote sensing can be used for monitoring open-cast mining (such as coal, iron ore, bauxite etc.) and the surface effects of subsurface mining.

It should be appreciated that there is one important limitation of remote sensing data in mineral exploration—the depth aspect. Most mineral deposits occur at a certain depth and are not localized on the Earth's surface. Remote sensing data have a depth penetration of approximately a few micrometres in the VNIR region, to a few centimetres in the TIR and some metres (in hyper-arid regions) in the microwave region. Therefore, in most cases, a remote sensing geoscientist has to rely on indirect clues, such as general geological setting, alteration zones, associated rocks, regional and local structure, lineaments, oxidation products, morphology, drainage, vegetation anomaly etc., as only rarely is it possible to directly pinpoint the occurrence and mineralogy of a deposit based solely on remote sensing data.

19.8.2 Main Types of Mineral Deposits and Their Surface Indications

The success of remote sensing techniques in locating workable mineral deposits would depend upon basic knowledge about the deposit—its genetic type, association, mode of formation and occurrence. Therefore, a brief review of the major types of mineral deposits is relevant here (Stanton 1972; Smirnov 1976; Guilbert and Park 1986) (see Table 19.4). These deposit types are frequently marked by certain surface indicators, some of which could be observed

Table 19.4 Main types of mineral deposits and their surface indications observable on remote sensing data

Genetic type and form/mode | Minerals (examples) | Salient surface indications
Magmatic – segregation and differentiation | Chromite, magnetite, ilmenite, platinum group | Intrusive bodies, concordant/discordant relations, drainage and landform
Magmatic – pipes | Diamond | –
Magmatic – pegmatites | Gemstones, rare earths | –
Sedimentary/volcano-sedimentary/metamorphic – bedded/layered | Banded iron formations, phosphorites, manganese, volcanogenic massive sulphide deposits, coal | Bedded/layered terrain, stratigraphic (age) aspects, structural controls etc.
Hydrothermal and related – greisen | Tin deposits | Alteration zone, mineralogy
Hydrothermal and related – skarn | Minerals of wolfram, molybdenum, lead–zinc, tin, copper | Host rock, lithology, intrusive contact, calc-silicate minerals, alteration
Hydrothermal and related – porphyry | Copper–molybdenum–gold deposits | Batholithic intrusives, alteration, structural and lithological controls, gossan
Hydrothermal and related – veins and lenses | Base metals, gold | Structural controls, alteration zone, gossan
Placer – mechanically concentrated by fluvial, aeolian and marine action | Diamond, monazite, gold, platinum etc. | Suitable landforms of deposition
Supergene enrichment – chemical leaching and deposition | Base metal sulphides | Gossan, alteration zone, oxidation and leaching
Residual enrichment – lateritization | Bauxite, laterite, manganese minerals | Landform and drainage

Table 19.5 Typical DN values of Landsat TM bands over altered and unaltered rocks and vegetation (Khetri copper belt, India)

 | TM1 | TM2 | TM3 | TM4 | TM5 | TM7
1 Raw data
– Unaltered rocks | 62 | 32 | 44 | 52 | 72 | 51
– Altered rocks | 56 | 25 | 36 | 46 | 60 | 22
– Vegetation | 53 | 22 | 21 | 72 | 43 | 18
2 Path radiance | 41 | 13 | 10 | 09 | – | –
3 Corrected data
– Unaltered rocks | 21 | 19 | 34 | 43 | 72 | 51
– Altered rocks | 15 | 12 | 26 | 37 | 60 | 22
– Vegetation | 12 | 9 | 11 | 63 | 43 | 18

on remote sensing images, and the same are also mentioned in Tables 19.4 and 19.5.

A number of principal geological criteria or guides have been distinguished for mineral prospecting (Mckinstry 1948; Kreiter 1968; Peters 1978). Of these, those that can be observed on remote sensing data are: (1) stratigraphical–lithological, (2) geomorphological, (3) structural, (4) rock alteration and (5) geobotanical. In addition to these, geochemical and geophysical anomalies and other ancillary data can also provide valuable inputs during prospecting and could be integrated in GIS (GIS-based modelling is discussed in the next section).

19.8.3 Stratigraphical–Lithological Guides

Stratigraphical (age) criteria refer to the geological setting and the stratigraphical position of the geological unit (e.g. beds or intrusives). As some types of mineral deposits are confined to certain age groups/lithologic horizons (e.g. deposits of coal, iron, manganese, phosphorites etc.), these criteria serve as useful guides during exploration. An idea of the type of terrain, namely igneous, sedimentary or metamorphic, and the stratigraphical position of major geological units can be obtained from remote sensing data on a suitable scale. Thus, attention can be focused on sub-areas that may possess greater prospect.

Some mineral deposits are preferentially confined to a particular lithology, which then forms a useful lithological guide. The deposits may be syngenetic (forming an original part of the rock mass) or epigenetic (introduced into the rock). Syngenetic sedimentary deposits are typically regular and extensive, e.g. banded iron formations, bauxites, coal and phosphorites. Syngenetic igneous deposits are relatively less regular and occur in differentiated intrusives, e.g. chromite, magnetite, etc. The lithologically bound epigenetic deposits are formed due to the strong preference of the migrating mineralizing fluids for particular host rocks, e.g. carbonates, volcanic flows or metapelites. Remote sensing data of adequate spatial and spectral resolution can help locate the occurrence of lithological guides. Several examples of this type of application are known in the literature. For instance, using Landsat MSS data and supervised classification, Halbouty (1976) located the likely extension of known strata-bound copper deposits in the Tertiary Totra sandstones of Bolivia into the adjoining territory of Peru.

19.8.4 Geomorphological Guides

Geomorphological guides are particularly important in prospecting for mineral deposits that are products of sustained weathering and erosion. Potential sites of deposits originating from residual and supergene enrichment can be located by geomorphological criteria, e.g. hills, ridges, plateaus and valleys, in areas of sustained weathering and leaching. All these deposits are sought in the Quaternary cover. In all such cases, patterns of relief, drainage and slopes are vital, and information on all these aspects can be obtained from remote sensing data and DEMs. Similarly, placer deposits (e.g. diamonds, gold, monazite etc.) are formed as a result of mechanical concentration by fluvial, aeolian, eluvial and marine processes. Suitable sites for their deposition and occurrence can be better located on remote sensing data, e.g. in the case of fluvial placers, by delineating buried channels, abandoned meander scars and scrolls.

Figure 19.69 presents a SIR-C image of Namibian diamond deposits in South Africa, which form an area of

Fig. 19.69 SIR-C radar image of the Namibia diamond field, S. Africa. The diamond deposits, derived from kimberlite pipes, occur as placers along the palaeochannels of the Orange river. The area is covered with thick layers of sand and gravel under which the palaeochannels lie buried. Some of the ox-bow lakes can be seen on the radar image (printed black-and-white from false-colour composite) (courtesy of NASA/JPL)

active mining for diamonds. The diamonds occur as placer deposits along the palaeochannels of the Orange river, having been derived from the famous Kimberley pipes. Exploration and mining are focused on the palaeochannels, which lie buried under the thick layers of sand and gravel. As SAR has the capability to penetrate sand sheets in desertic areas, this type of remote sensing data is of much interest in such cases.

19.8.5 Structural Guides

The structural controls of ore formation, which eventually become structural guides during exploration, can be of varying dimensions and scales. The structure can govern: (a) the distribution of metallogenic provinces within orogenic belts or platforms, (b) the distribution of ore-bearing regions and fields within the metallogenic provinces and (c) the localization of ore deposits in a particular ore field (Kreiter 1968). Remote sensing data on suitable scales can throw valuable light on the relationship of global, mega and minor structural features with mineral deposits.

Deduction of information regarding the localization of mineral deposits by certain types of geological structural belts, shear zones, faults, fractures, contacts, folds, joints or intersections of specific structural features is important in planning exploration strategy (Cox and Singer 1986). Guild (1972) reviewed the relationship of global tectonics and metallogeny. Using satellite data, Offield et al. (1977) discovered a significant E–W-trending ore-controlling linear feature in South America.

Epigenetic mineral deposits are formed by deposition of mineral-bearing solutions in voids/fracture spaces and replacement of the host rocks, and therefore commonly exhibit a strong structural control. The Khetri copper belt (Rajasthan, India) furnishes an interesting example. Here

Fig. 19.70 a Landsat MSS4 (near-IR) image of the Khetri Copper belt, India. b Structural interpretation map of the above image. Distribution of
major sulphide occurrences is also shown. A regional control on ore localization is obvious

the rocks belong to the Precambrian Alwar and Ajabgarh Groups and possess a regional strike of NNE–SSW. The polymetallic sulphide mineralization in the area is found to be localized close to the contact between the Alwar and Ajabgarh Groups (Roy Chowdhary and Das Gupta 1965; Bharktya and Gupta 1981). This provides a structural–stratigraphical control of sulphide mineralization in the area. On the satellite image, the contact is manifested as a regional lineament striking nearly NNE–SSW to NE–SW (Fig. 19.70).

Fig. 19.71 Relationship between distribution of epithermal gold deposits and density of faults/fractures. a Map of faults/fractures in the Aroroy district, Philippines (compiled mostly from literature and interpretations of shaded-relief images of DEM illuminated from different directions). b Fault/fracture density measured as the ratio of the number of pixels representing faults/fractures in a sample catchment basin to the number of pixels per sample catchment basin. Triangles represent locations of epithermal Au deposit occurrences (a and b Carranza 2008, reproduced with permission from Elsevier)
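The density measure described in the caption of Fig. 19.71b, i.e. the fraction of fault/fracture pixels within each catchment basin, can be sketched in a few lines; this is an illustrative assumption of how such a computation might be coded, not the procedure of Carranza (2008), and the rasters are hypothetical inputs:

```python
import numpy as np

def fracture_density(fault_mask, basin_labels):
    """Ratio of fault/fracture pixels to total pixels for every catchment basin.

    fault_mask   : boolean array, True where a pixel lies on a mapped fault/fracture
    basin_labels : integer array of the same shape; each catchment basin has a unique id
    returns      : dict {basin_id: density}
    """
    densities = {}
    for basin_id in np.unique(basin_labels):
        in_basin = basin_labels == basin_id
        densities[int(basin_id)] = float(fault_mask[in_basin].mean())  # fraction of fault pixels
    return densities

# toy example: two basins, a few fault pixels
faults = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0]], dtype=bool)
basins = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [1, 1, 2, 2]])
print(fracture_density(faults, basins))   # {1: 0.25, 2: 0.125}
```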

The epithermal gold deposits in the Aroroy district, Philippines, provide another interesting example, where it was found that fracture intersections control the distribution of gold deposits in this area (Fig. 19.71; Carranza 2008).

In yet another study, Hutsinpiller (1988) correlated lineament intersection density to alteration and observed that lineament intersection density was nearly twice as high in altered zones as in unaltered zones.

Numerous studies seeking the influence of fractures and lineaments on mineralization have been carried out with varying success. Difficulties occur in integrating lineament maps with mineral exploration models for the following reasons (Rowan and Bowers 1995): (a) some of the features mapped as lineaments may not be of structural–geologic nature, and (b) it may not be possible to distinguish between post-mineralization and pre-mineralization structures.

19.8.6 Guides Formed by Rock Alteration

Alteration zones often accompany mineral deposits and constitute one of the most important guides for mineral exploration. These are of particular significance in the case of hydrothermal sulphide deposits, which include metals such as copper, lead, zinc, cobalt, molybdenum, gold, silver etc. The alteration zones usually have an abundance of minerals such as sericite, muscovite, goethite, hematite, jarosite, kaolinite, montmorillonite, biotite, chlorite, epidote, pyrophyllite, alunite, quartz, albite, metal hydroxides, calcite and other carbonates, actinolite–tremolite, serpentine and talc. The alteration may affect the host rock as well as the deposit. Often, the alteration zones constitute ringed or zoned targets (Fig. 19.72), i.e. the ore-body may be surrounded by a halo of altered rock so that there is a variation in the spatial distribution of the type and amount of minerals.

Fig. 19.72 Ringed and zoned targets formed by alteration haloes—an idealized schematic illustration showing spectral detection of zoned targets

19.8.6.1 Spectral Characteristics of Rock Alterations
The alteration minerals can be broadly categorized into four groups: (1) iron oxides, (2) hydroxyl-bearing minerals, (3) carbonates, and (4) quartz-feldspars (framework silicates).

1. Iron oxides:

Iron oxides are a common constituent of alteration zones associated with hydrothermal sulphide deposits. The presence of iron oxides (limonite) leads to strong absorption in the UV–blue region, affecting the slope of the reflectance curve in the UV–VIS–NIR region (see Fig. 3.3). Therefore, the ratio of TM2/TM1 (green/blue), TM3/TM1 (red/blue), ASTER B2/B1 (red/green) or ASTER B4/B2 (NIR/red) yields high values for iron oxide bearing pixels. This spectral characteristic has been extensively applied in one form or another for iron oxide mapping (Rowan et al. 1974, 1977; Rowan and Mars 2003; Velosky et al. 2003; see Table 19.3). An example of limonite (gossan) mapping from the ASTER B4/B2 ratio in the Sar-Cheshmeh copper mining district is given later (see Fig. 19.78).

Limonite comprises three minerals occurring in varying proportions: jarosite (which is pale yellow), goethite (yellow–orange) and hematite (red–orange). A closer look at the spectral curves of jarosite, goethite and hematite shows that hematite has a substantially lower reflectance in the green–red bands than jarosite or goethite (Fig. 19.73a), its reflectance in the red band being quite close to that of vegetation. Therefore, on a TM3/TM2 or TM3/TM1 ratio, although jarosite and goethite would appear as clear bright pixels, hematite is likely to get mixed up with vegetation. This suggests that in areas where hematite is the dominant iron oxide and vegetation is also present, these ratios would not yield unambiguous results. Figure 19.73b presents spectral curves of goethite, jarosite, hematite and vegetation together with the spectral band centre positions of ASTER B1–B4. It is obvious that the ambiguity between limonite and vegetation would persist in ASTER band ratios (viz. B2/B1 or B4/B2).

In order to resolve this ambiguity, Fraser (1991) adopted the Directed Principal Component Technique (DPCT) for discriminating ferric oxide (hematite and goethite) from
vegetation. The DPCT is based on applying PCA (principal component analysis) to two input band images, which are selectively chosen for a certain purpose (Fraser and Green 1987; Chavez and Kwarteng 1989). In this way, both the correlated and the un-correlated information are enhanced on the resulting DPCT-I and DPCT-II images respectively. Ratio images are better used as input bands for DPCT, as the effects of illumination geometry and per-pixel brightness are already removed in ratio images (Fraser and Green 1987). Fraser (1991) used TM3/TM1 and TM4/TM1 as the two input bands for DPCT to discriminate between ferric oxide and vegetation (Fig. 19.74). The scatter plot appears distributed along two axes—hematite–goethite form one axis and vegetation the other. Therefore, such DPCT images can provide discrimination between ferric-oxide and vegetation pixels. If the scene variance is dominated by vegetation, then DPCT-I will contain vegetation and DPCT-II ferric oxide; on the other hand, if the variance is dominated by ferric oxide, then DPCT-I will contain ferric oxide and DPCT-II vegetation.

Fig. 19.73 a Spectral reflectance curves of jarosite, goethite and hematite together with spectral bands TM1, TM2 and TM3; note the much lower reflectance of hematite than of jarosite and goethite in TM2–TM3; b spectral reflectance curves of jarosite, goethite, hematite and vegetation; vertical lines represent the band centre positions of ASTER B1–B4

Fig. 19.74 Scatter plot of TM3/TM1 versus TM4/TM1 with fields of goethite, hematite and vegetation (data from the Newman area, Western Australia; redrawn after Fraser 1991)

Ruiz-Armenta and Prol-Ledesma (1998) report a similar strategy for mapping goethite, hematite and vegetation, but using TM3/TM1 and TM4/TM3 as the two input ratio bands for DPCT in an area in Central Mexico.

The ferric ion absorption features are present both in true gossan (limonite formed as a result of decomposition of sulphides) and in pseudo-gossan (laterite formed as a result of weathering of ferromagnesian minerals). Distinction between the two, although critical for exploration, is not possible on the basis of the spectral character of iron oxide alone. Thus, a limitation of the TM/ASTER band-ratio method is that it may produce false alarms, and therefore the method must be controlled through field data. On the other hand, if there is too high a dependence on this criterion alone, then some of the deposits where Fe–O minerals are weakly developed (e.g. in a reducing environment) may get overlooked.
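A minimal sketch of the DPCT idea described above, i.e. PCA applied to two selectively chosen ratio images, might look as follows; the TM3/TM1 and TM4/TM1 inputs follow Fraser (1991), while the implementation details and array names are assumptions for illustration:

```python
import numpy as np

def directed_pc(ratio_a, ratio_b):
    """Directed principal components of two co-registered ratio images.

    ratio_a, ratio_b : 2-D float arrays (e.g. TM3/TM1 and TM4/TM1).
    Returns the DPCT-I and DPCT-II images (same shape as the inputs).
    """
    x = np.stack([ratio_a.ravel(), ratio_b.ravel()])       # 2 x N data matrix
    x_centred = x - x.mean(axis=1, keepdims=True)            # remove the band means
    cov = np.cov(x_centred)                                   # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                         # PC-1 = largest variance
    pcs = eigvecs[:, order].T @ x_centred                     # project pixels onto the two PCs
    return pcs[0].reshape(ratio_a.shape), pcs[1].reshape(ratio_a.shape)

# usage sketch: ferric-oxide vs vegetation separation on synthetic ratio inputs
tm1, tm3, tm4 = (np.random.rand(100, 100) + 0.1 for _ in range(3))
dpct1, dpct2 = directed_pc(tm3 / tm1, tm4 / tm1)
```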

2. Hydroxyl-bearing minerals:

Hydrothermal alteration zones are marked by an abundance of clays, silicates and hydroxides that contain Al–OH and Mg–OH. In this context, special mention may be made of porphyry copper deposits, which have been investigated by remote sensing scientists with great interest all over the world because of their immense potential. Porphyry copper deposits presently provide a large amount of the world's Cu and Mo, most of the world's Re, and Au, Ag, Pd, Te, Se, Bi, Zn and Pb to varying extents.

Porphyry copper deposits are huge disseminated deposits formed by hydrothermal activity. The residual copper-bearing hydrothermal solutions derived from the cooling intrusive magma, together with meteoric solutions, lead to extensive changes in the mineralogy and chemistry of the host rock and the intrusion, and bring about mineralization. Typically, a porphyry copper deposit is a zoned structure, possessing a core of quartz + K-bearing minerals (potassic zone) surrounded by multiple zones of hydrothermal mineral assemblages. From the centre outwards, there is a broad phyllic zone, a narrower argillic zone and an outermost propylitic zone that forms the cap. For exploration purposes, it is important to differentiate between the three zones to be able to specifically target the phyllic zone, which is an indicator of high economic potential for copper mineralization within the central shell of mineralization.

(a) Generalized hydroxyl alteration: The general abundance of clays, silicates and hydroxides that contain Al–OH and Mg–OH implies that absorption bands in the 2.1–2.4-µm range (= TM7) become prominent (see Fig. 3.6). Most minerals and Earth materials reach their peak reflectance at about 1.6 µm (= TM5). Thus, a ratio of 1.6/(2.1–2.4) µm (= TM5/TM7) would yield very high values for altered zones comprising dominantly hydroxyl-bearing minerals. This characteristic has been used in numerous research investigations (Abrams et al. 1977, 1983; Podwysocki et al. 1983; Abrams and Brown 1985; Kaufmann 1988).
As an example, Table 19.5 gives typical values of altered rocks, unaltered rocks and vegetation in Landsat-TM bands in a part of the Khetri copper belt, India. Figure 19.75 presents the corresponding TM spectral curves. Note the steep spectral slope between bands TM5 and TM7 for altered rocks. In this sub-scene, the ratio of TM7/TM5 for unaltered rocks is found to be 0.7 and that for altered rocks 0.37. An image example of generalized hydroxyl alteration is given later (see Fig. 19.79).
(b) Argillic alteration: This comprises pyrophyllite, kaolinite and alunite, all of these minerals exhibiting an absorption feature at wavelength ~2.17 µm (ASTER B5).
(c) Phyllic alteration: This comprises illite, muscovite, sericite and smectite; these minerals are marked by an absorption feature at wavelength ~2.2 µm coinciding with ASTER B6.
(d) Propylitic alteration: This group comprises chlorite–epidote–calcite–actinolite, all of these minerals displaying an absorption feature at wavelength ~2.33 µm that coincides with ASTER B8.

Fig. 19.75 Data plots of Landsat TM DN values for altered rocks, unaltered rocks and vegetation (Khetri copper belt, India)

Fig. 19.76 Summary of the absorption features for propylitic, phyllic and argillic alteration zones together with typical spectral curves of Al–OH- and Mg–OH-bearing minerals
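Using the path-radiance-corrected DN values of Table 19.5, the discrimination offered by the TM5/TM7 ratio can be checked with a few lines of arithmetic; the threshold used below is purely illustrative and would in practice be site-specific:

```python
import numpy as np

# path-radiance-corrected DNs from Table 19.5
unaltered = {"TM5": 72, "TM7": 51}
altered = {"TM5": 60, "TM7": 22}
print(round(unaltered["TM7"] / unaltered["TM5"], 2))   # 0.71 -> unaltered rocks
print(round(altered["TM7"] / altered["TM5"], 2))       # 0.37 -> altered rocks

def hydroxyl_ratio(tm5, tm7):
    """TM5/TM7 image: high values flag hydroxyl-bearing (altered) pixels."""
    return tm5.astype(float) / np.where(tm7 == 0, np.nan, tm7)

tm5 = np.array([[72.0, 60.0], [70.0, 61.0]])
tm7 = np.array([[51.0, 22.0], [50.0, 23.0]])
altered_mask = hydroxyl_ratio(tm5, tm7) > 2.0   # illustrative threshold only
```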

Figure 19.76 presents a summary of the spectral distribution of absorption features for propylitic, phyllic and argillic alterations. Spectral ratios for identifying phyllic, argillic and propylitic alterations are listed in Table 19.3. All ratio images can be displayed individually in black-and-white or combined with other geo-image data in colour composites.

3. Carbonates:

The presence of carbonates leads to a generally increased reflectance in the VNIR region; however, they have absorption features in the SWIR (2.35 µm, ASTER B8) and TIR (11.5 µm, ASTER B14), which can be used for identifying these minerals (see Table 19.3).

4. Tectosilicates:

Quartz and feldspars do not have absorption features in the solar reflection region, and therefore their presence also leads to a generally increased reflectance in the VNIR–SWIR region. These minerals have absorption features in the thermal infrared; therefore, multispectral TIR data can be applied for assessing silica content (see Table 19.3) and differentiating between different types of silicates.

In the case of hyperspectral sensing, it may be possible to identify specific minerals which may act as alteration guides for mineral exploration. Table 19.5 gives a list of metallogenic environments and their respective alteration zones and associated characteristic SWIR-active mineral assemblages that have the potential of being used for exploration using hyperspectral remote sensing.

19.8.7 Geobotanical Guides

The study of vegetation as related to geology is called geobotany. In geobotanical remote sensing, we discuss the spectral behaviour of vegetation in order to decipher geological information. Important reviews and contributions in the field of geobotanical remote sensing have been made by many, including Brooks (1972, 1980), Lyon et al. (1982), Mouat (1982), Collins et al. (1983), Horler et al. (1983), Milton et al. (1983), Rock et al. (1988), Nkoane et al. (2005), and Dunn (2007).

In geological studies, vegetation has generally been considered as noise or a hindrance, masking the geological information. Most of the world's mineral production is obtained from low to moderately vegetated land surfaces. However, a major part of the land surface on the Earth is moderately to heavily vegetated, and the largest mineral deposits yet undiscovered probably now remain in areas of vegetation cover. Therefore, botanical guides for mineral exploration, which can easily be observed from remote sensing platforms, are gaining importance.

Fig. 19.77 The concept of geobotanical guides. a Vegetation banding—the density of vegetation is related to lithology. b Vegetation anomaly as related to the presence of toxic metals; the growth of trees is stunted by toxic metals in soil derived from the bedrock

The spectral behaviour of vegetation is responsive to differences in soil–lithology characteristics (Fig. 19.77). Different lithologic types such as shales, sandstones, limestones, metavolcanics, and acidic to ultrabasic igneous rocks may support vegetation that may differ in type, density and foliage.

Metals present in soil may lead to vegetation stress (Horler et al. 1980). For the purposes of mineral exploration, the geobotanical changes in vegetation are of three types: structural, taxonomic and spectral (Mouat 1982).

1. Structural changes in vegetation mean changes in morphology, i.e. vegetation density, mutation of leaves, flowers and fruits, and phenological changes (changes in the timing or seasonality of physiological events). For example:

(a) Chlorosis, or the loss of chlorophyll pigments in green vegetation and the consequent yellowing of leaves, is a common response to the toxic effects of metal in the soil.
(b) Abnormality of form may result from radioactive minerals or boron in the soil, and these may even lead to changes in the colour of flowers.
(c) Vegetation may exhibit dwarfism or stunted growth due to changes in geochemical conditions; rarely, even gigantism occurs when bitumen or boron is present.
(d) The density of total plant biomass may change, depending upon the soil characteristics; for example, vegetation growing on serpentine-bearing soil has been described as sparse, dwarfed and xerophytic, whereas in adjoining areas the same species may be mesophytic; similarly, vegetation density differences have been noted over copper porphyry by Elvidge (1982) and over hydrothermally altered andesite by Billings (1950).
(e) The timing of physiological events such as flowering, fruiting and senescence may shift and occur at a premature or delayed time due to vegetation stress. On plants growing over metallic sulphide deposits, it was found that the buds opened later and the leaves were smaller than in those growing on soils with background levels. Further, senescence set in relatively earlier in such stressed plants than in other normal plants. This phenomenon provides 'autumn and spring windows' for detection of soil conditions and bedrock chemistry. It implies that the growing season for vegetation on metal-rich soil is shorter than that for vegetation growing on soils with background concentrations of metals. This phenomenon can be utilized to map mineral deposits with remotely sensed imagery (Labovitz et al. 1985).

2. Taxonomic differences in vegetation mean the presence or absence of some plant species (indicator plants) or changes in community structure, i.e. relative abundance. Some plants act as 'universal indicators' and are found only on mineralized soils. For example, the calamine violet (Viola calaminaria) grows only on soils with anomalous zinc content; the copper flower (Becium homblei) grows over copper deposits; and Astragalus pattersoni and A. preussi are used to indicate selenium mineralization, usually associated with uranium. There could additionally be 'local indicators', i.e. plants which grow preferentially over mineralized grounds in one region, but may grow over non-mineralized grounds in other regions. For example, the Mexican poppy (Eschscholzia mexicana) indicates copper mineralization in Arizona but not necessarily in other areas.

3. Spectral differences refer to changes in spectral characteristics. It has been reported that vegetation (conifers) growing over mineralized areas exhibits a subtle change in spectral characteristics due to vegetation stress, such that the position of the red edge shifts by about 7–10 nm.

The morphological and taxonomic differences in plant associations can be picked up by good-spatial-resolution remote sensing data, e.g. broad-band panchromatic photography and images from low-altitude sensors. In order to detect phenological differences, remote sensing data need to be collected at frequent temporal intervals (high temporal resolution) so that the phenomenon may be detected when it is active (Lyon et al. 1982). Hyperspectral remote sensing techniques can be used to detect spectral differences induced by vegetation stress.

19.8.8 Application Examples

The most important and widely applied image processing and enhancement technique in mineral exploration has been ratioing of the spectral band images and colour coding of the ratio images for visual interpretation and integration with other image data. Rowan et al. (1974) were the first to show that a colour composite of the ratio images MSSgreen/MSSred, MSSred/MSSnir-I and MSSnir-I/MSSnir-II provides a powerful means for discriminating hydrothermally altered areas from regional rock and soil units. This approach of ratio colour composite generation has been successfully adapted for Landsat TM/ETM/OLI and ASTER data sets and used in a number of research investigations.
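A minimal sketch of such a ratio colour composite, using the MSS band-ratio combination of Rowan et al. (1974) cited above, is given below; the percentile stretch and the assignment of the three ratios to R, G and B are assumptions for illustration:

```python
import numpy as np

def stretch_0_255(img):
    """Linear 2-98 percentile stretch to 8-bit for display."""
    lo, hi = np.nanpercentile(img, (2, 98))
    return np.clip((img - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

def ratio_colour_composite(green, red, nir1, nir2):
    """RGB composite of the green/red, red/NIR-1 and NIR-1/NIR-2 ratio images."""
    r = stretch_0_255(green / red)
    g = stretch_0_255(red / nir1)
    b = stretch_0_255(nir1 / nir2)
    return np.dstack([r, g, b])          # H x W x 3 array ready for display

# usage sketch with synthetic reflectance bands
bands = [np.random.rand(200, 200) + 0.05 for _ in range(4)]
fcc = ratio_colour_composite(*bands)
```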

Fig. 19.78 Limonite zones (bright pixels) on an ASTER B4/B2 ratio image of the Meiduk-Sara copper deposit, Iran; the terrain is practically devoid of vegetation except along drainage courses; note that vegetation also appears as bright pixels (image courtesy A.B. Pour)

In the following paragraphs, some image examples of the application of remote sensing data in mineral exploration are presented.

1. Gossan mapping, Sar-Cheshmeh copper mining district, Iran: Several well-known porphyry copper deposits occur in the Central Iranian Volcanic Belt, including the Sar-Cheshmeh deposit, which is one of the largest porphyry copper deposits of the world. The deposit occurs in a belt of largely Eocene volcanic host rock and forms the most important volcano-plutonic complex, with tremendous economic potential for copper mineralization, that formed by subduction of the Arabian plate beneath central Iran during the Himalayan–Alpine orogeny. Hydrothermal alteration and sulphide mineralization are understood to have occurred broadly synchronously with the granitoid intrusive of Oligo-Miocene age. Subsequent oxidation, weathering and leaching have produced widespread gossan cappings over the deposits. The terrain is semi-arid with sparse vegetation. Using the ASTER B4/B2 ratio, Pour and Hashim (2011) identified limonite (gossan) occurrences in the area (Fig. 19.78).

2. Generalized hydroxyl alteration mapping, Khetri copper belt, India: The Khetri copper belt is an about 90-km-long belt occurring in Precambrian rocks of Rajasthan, India. It is home to numerous known copper-multi-metal sulphide deposit occurrences, with active mining in progress at several sites. The hydrothermal solutions causing mineralization also led to widespread wall-rock alterations manifested in the form of secondary minerals such as chlorite, muscovite, biotite, sericite, quartz, calcite, scapolite, anthophyllite, cummingtonite-grunerite, talc, etc. A ratio image of Landsat-TM B7/B5 shows the generalized hydroxyl alteration haloes as dark gray ellipsoidal patches (Fig. 19.79).

Fig. 19.79 TM7/TM5 ratio image of a part of the Khetri copper belt, India; dark patches with diffused boundaries (arrows) are hydroxyl-mineral-bearing alteration zones

3. Mineral alteration mapping, Escondida porphyry deposit, Chile: The Escondida (or La Escondida) deposit lies in the Atacama desert, Andes mountains, Chile. It is a porphyry-type deposit and produces copper–gold–silver. The mineralization is related to a period of faulting, folding and igneous activity, which accompanied intrusion of the Andean batholith during Late Mesozoic–Early Tertiary times. Widespread alteration of both hypogene and supergene type has taken place. The hydrothermal alteration mineral zones include propylitic, phyllic and potassic zones. A high-grade supergene cap overlies the primary sulphide ore.
The Escondida deposit is being mined by the open-pit method and commenced production in 1990. Figure 19.80a is an ASTER image from SWIR bands 4, 6 and 8 coded in R, G and B respectively. It depicts the mine and shows lithologic–mineralogic variation at the surface. ASTER B4 operates in the spectral range 1.6–1.7 µm, which is a general high-reflectance band, and is coded in red. ASTER B6 (2.225–2.245 µm) is absorbed by the phyllic (Al–OH) minerals, whereas ASTER B8 (2.295–2.365 µm) corresponds to the

Fig. 19.80 a, b. a ASTER image showing the open-pit Escondida mine, Chile. ASTER SWIR bands 4, 6 and 8 are coded in RGB respectively. The Al–OH-bearing minerals appear in shades of blue–purple, Mg–OH-bearing minerals appear in shades of green–yellow and Al–Mg–OH-bearing minerals appear in shades of red (for colour interpretation, refer to Fig. 19.80b) (Courtesy NASA/GSFC/MITI/ERSDAC/JAROS, and US/Japan ASTER Science Team); b Colour ternary diagram corresponding to the ASTER FCC in Fig. 19.80a

absorption by propylitic minerals (Mg–OH minerals and carbonates) (see Fig. 19.76). Interpretation of colours in this FCC is facilitated by the colour ternary diagram (Fig. 19.80b). In the present colour coding scheme, phyllic alterations (Al–OH-bearing) appear in shades of blue–purple; propylitic alterations (Mg–OH-bearing) appear in shades of green–yellow; and phyllic + propylitic alterations (Al–Mg–OH-bearing) appear in shades of red.

4. Mineral alteration mapping, Sar-Cheshmeh copper deposit, Iran: As mentioned above, the Central Iranian Volcanic Belt contains several important porphyry copper deposits, such as Sar-Cheshmeh, Meiduk, Seridune etc. These porphyry copper deposits are related to hydrothermal activity broadly synchronous with the granitoid intrusive, which in essence led to sulphide mineralization together with widespread alteration. The early hydrothermal alteration was predominantly potassic and propylitic and was followed by phyllic, silicic and argillic alterations.

Using ASTER band ratio images, Pour and Hashim (2011) showed the presence of different mineral assemblages of hydroxyl alterations (propylitic, phyllic and argillic and their mixtures) in the Sar-Cheshmeh and adjoining area by generating a colour ratio composite (Fig. 19.81a) as follows:

• RBD6 = (B5 + B7)/(B6 × 2) coded in red;
• RBD5 = (B4 + B6)/(B5 × 2) coded in green;
• RBD8 = (B7 + B9)/(B8 × 2) coded in blue.

It is obvious from the previous discussion that the RBD6 image has high values for phyllic minerals, RBD5 for argillic minerals, and RBD8 for propylitic minerals (see the sketch following Fig. 19.81). Figure 19.81b, c provide a logical explanation of the colours in terms of mineral assemblages.

5. Uranium exploration, Lisbon Valley, Utah: Uranium is found to occur in a variety of geological settings. It is highly mobile and soluble in an oxidizing environment and is precipitated as soon as the uranium-rich solutions enter a reducing environment. However, the reverse happens with iron, which is precipitated in an oxidizing environment as ferric (Fe3+) iron and becomes mobile in a reducing environment as ferrous (Fe2+) iron. Therefore, uranium mineralization may be marked by bleaching (decoloration) due to the removal of iron from the adjacent rock formations—as happens in the Lisbon Valley, Utah, which was investigated as the NASA–Geosat joint test study by Conel and Alley (1985).

In the Lisbon Valley, uranium mineralization occurs as strata-bound pod-like deposits within a sedimentary sequence of sandstones, mudstones and conglomerates. The mineralization is restricted to a lithological member called the Moss Back Member, which forms a part of the Triassic Chinle Formation. The Chinle Formation is scantily exposed in plan and is generally covered by the overlying red Wingate sandstone. A very close spatial association between the uranium mineralization in the Moss Back Member and the bleached sections of the overlying Wingate Formation has been known in the area. This is attributed to the removal of iron from the overlying Wingate Formation under the reducing conditions which must have accompanied the deposition of uranium. From a remote sensing exploration perspective it is thus easier to detect the

Colour | Interpretation
Red–pink | RBD6 has high values for phyllic minerals and is coded in red; therefore phyllic minerals appear in shades of red
Yellow | A mixture of phyllic and argillic minerals
Green | RBD5 has high values for argillic minerals and is coded in green; therefore argillic minerals appear in shades of green
Cyan | A mixture of argillic–propylitic minerals
Blue | RBD8 has high values for propylitic minerals and is coded in blue; therefore propylitic minerals appear in shades of blue
Purple | Propylitic + some phyllic mineral content
Gray | No hydroxyl absorption features of the above type; pixels reflecting almost equally in R, G and B

Fig. 19.81 a Colour ratio composite generated from ASTER SWIR bands to map different types of hydroxyl alterations in Sar-Cheshmeh and adjoining areas, Iran (Pour and Hashim 2011). The colour coding scheme is: RBD6 = R, RBD5 = G, and RBD8 = B; b shows the corresponding colour ternary diagram; c explanation of the mineral assemblages associated with different colours; note the identification of a few new prospects
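The colour ratio composite of Fig. 19.81a can be sketched computationally as follows; the RBD formulas and the RGB assignment are those listed above, while the contrast stretch and array names are illustrative assumptions:

```python
import numpy as np

def stretch(img):
    """Scale an index image to 0-1 using a 2-98 percentile stretch."""
    lo, hi = np.nanpercentile(img, (2, 98))
    return np.clip((img - lo) / (hi - lo), 0, 1)

def rbd_composite(b4, b5, b6, b7, b8, b9):
    """ASTER SWIR relative band depth (RBD) colour composite.

    RBD6 -> red (phyllic), RBD5 -> green (argillic), RBD8 -> blue (propylitic).
    Inputs are co-registered ASTER SWIR reflectance bands 4-9 as float arrays.
    """
    rbd6 = (b5 + b7) / (2.0 * b6)
    rbd5 = (b4 + b6) / (2.0 * b5)
    rbd8 = (b7 + b9) / (2.0 * b8)
    return np.dstack([stretch(rbd6), stretch(rbd5), stretch(rbd8)])

# usage sketch with synthetic reflectance arrays
b4, b5, b6, b7, b8, b9 = (np.random.rand(50, 50) + 0.1 for _ in range(6))
rgb = rbd_composite(b4, b5, b6, b7, b8, b9)
```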

Fig. 19.82 a Schematic showing the alteration–bleaching zone in the overlying Wingate sandstone associated with the uranium mineralization at depth. b False-colour image of the Lisbon valley, generated from canonically transformed aerial scanner data. Red–olive green colour = bleached parts of the overlying Wingate Formation; shades of grey and beige = unbleached Wingate Formation (b Conel and Alley 1985)

bleached Wingate sandstones, whose exposures are more extensive, than to directly map the uranium-bearing formations, whose exposure in plan is very restricted (Fig. 19.82).

It is further implied that the local presence of reducing conditions, which may be associated with other mineral deposits, including hydrocarbon seepage etc., can also be detected on remote sensing data (see Sect. 19.10.3).

During the last four decades or so, remote sensing technology has come a long way in mineral exploration. Since 1999, ASTER data have been extensively used for mineral exploration, particularly as the data have good spatial and spectral resolution suited for detecting hydrothermal alteration zones, offer global coverage, and are available free of charge.

A large number of data processing algorithms have been applied to ASTER data to extract spectral information for exploration. These can be grouped into four types (Pour and Hashim 2012), viz.: (a) methods using band ratios, indices and logical operations; (b) those based on principal component analysis and minimum noise fraction; (c) shape-fitting algorithms such as spectral angle mapper (SAM), matched filtering (MF) and mixture-tuned matched filtering (MTMF); and (d) partial unmixing such as linear spectral unmixing (LSU) and constrained energy minimization (CEM). These methods have been discussed in Chaps. 13 and 14.

ASTER data have found applications for exploration of porphyry copper deposits (e.g. Rowan et al. 2003, 2006; Pour and Hashim 2011); massive sulphide mineralization in Precambrian terrain (Velosky et al. 2003); epithermal gold deposits (Crosta et al. 2003; Gabr et al. 2010); iron ore deposits (Rajendran et al. 2011); chromite deposits (Rajendran et al. 2012); and several other geologic applications.

19.9 GIS-Based Mineral Prospectivity Modelling

19.9.1 Introduction

GIS-based mineral prospectivity modelling is a relatively new research development with enormous growth potential and application possibilities. It forms the cutting edge of research in mineral exploration (Porwal and Kreuzer 2010; Porwal and Carranza 2015). The aim of modelling is to deduce and display spatial relationships between known geological variables and the target mineral deposit employing GIS tools. Generation of mineral prospectivity maps requires skills in GIS and statistics, and a thorough understanding of mineral deposit processes, besides a clear description of the geology of the target area.

During the last four decades or so, the following three developments could be considered as major milestones in the growth of this research field:

(i) Possibly the first was the development of an expert system known as 'Prospector' (Duda et al. 1978) for mineral exploration. It used a fuzzy inference system together with Bayesian probability and was used to evaluate prospectivity in poorly explored/unexplored regions, using inputs from geological data. Later, the technique was also extended to the raster data model (Katz 1991).
(ii) The next was the development of weights-of-evidence (WofE) modelling, a statistical method, as applied to mineral exploration (e.g. Agterberg and Bonham-Carter 1990; Agterberg et al. 1990).
(iii) Then, the proliferation of raster GIS packages during the late 1980s–90s could be considered as another
important technological development in this direction. These days, the practice is to combine WofE modelling with raster GIS to generate mineral prospectivity maps.

It may be mentioned here that different types of mathematical–statistical models have evolved for integrating data for the generation of predictor maps. These include probabilistic, regression, artificial intelligence, fuzzy models etc. Of all these, the probability-based WofE modelling is presently the preferred one.

Weights-of-evidence (WofE) is a quantitative method that uses available data sets to develop a log-linear form of the Bayesian probability model. In GIS-based mineral prospectivity mapping, it can be used to statistically estimate the relative importance of the individual layers of evidence used for training, and thus help minimize the subjective bias in quantifying spatial associations between different layers of evidence for the specific mineral deposit (Bonham-Carter 1994). The technique allows the user to calculate weights for different classes in every evidential layer, and then to produce a probability map of the occurrence of the type of deposit. WofE statistical computation has found applications in almost all types of earth-resource problems—mineral exploration (see below), groundwater (Nampak et al. 2014; Park et al. 2014), petroleum exploration (Amiri et al. 2015) etc.

19.9.2 Approaches in Mineral Prospectivity Modelling

There are two main approaches for making GIS-based mineral prospectivity maps:

1. Mineral deposit modelling, and
2. Mineral system modelling.

19.9.2.1 Mineral Deposit Modelling
This is based on the traditional approach of mineral exploration. Geological attributes in terms of various features such as structural, chemical, mineralogical and stratigraphic characteristics form the empirical indicators or guides for prospectivity. Using these indicators or footprints, the model attempts to target new deposits in both poorly explored (brown-field) and unexplored (green-field) regions. For example, it may typically use evidence layers depicting such features as favourable lithologies (e.g. carbonates or felsic intrusives, as the case may be), fault and lineament maps, lineament intersection density plots, tensile open fractures, hydroxyl mineral alterations, limonite distribution, silica/carbonate abundance, magnetic anomalies etc. Integration of data layers is carried out in GIS using various statistical methods. Several examples of this type of prospectivity map generation have been published (e.g. Porwal et al. 2010; Lindsay et al. 2014; Partington 2010; González-Álvarez et al. 2010; McCuaig et al. 2010; de Palomera et al. 2015).

19.9.2.2 Mineral System Modelling
The mineral system approach considers that mineral deposits are parts of much larger systems of energy and mass transfer that occurred inside the Earth in a certain time and space in the past. It is a more basic or holistic approach. A mineral system has to include source, pathways and traps, as also energy considerations for mass transfer and post-formation preservation. Thus, the attention shifts from specific deposits to a larger-scale mineral genetic system extending across large spatial extents. This approach allows delineation and mapping of multiple mineral deposit types in a single mineral system.

Generation of prospectivity maps as per the mineral system modelling approach would involve the following considerations (Porwal and Carranza 2015; Kreuzer et al. 2015):

• Source: Identifying all geological processes required for extraction of the necessary ore components
• Transport: Delineation of pathways for preferred melt or fluid flow
• Trap: Identifying geological processes required for physical trapping of fluids
• Deposition: Defining chemical selectors for precipitation of metals from fluids or melts
• Preservation: Processes required to preserve the accumulated metals through time.

The mineral system approach is essentially a probabilistic concept for exploration. GIS is ideally suited to creating spatial predictor maps for the various mappable criteria, also called proxies or indicators. The following summary, based on Porwal et al. (2015), provides an interesting example of how various data, including surrogate/proxy data, can be used as input variables in mineral exploration to generate mineral prospectivity maps.

In Western Australia, there occur surficial deposits of uranium in paleochannels. The first step in exploration here is mapping of uranium-rich granites (which contribute as the source of uranium) from detailed geological and geochemical field data. Leachability of uranium from the parent granite rock is governed by the fluid–rock ratio, granite geochemistry and Eh conditions (oxidizing environment). The information on the fluid/rock ratio is provided by fracture density, and that on weathering can be deduced from remote sensing, structural and topographic data. Eh conditions can be evaluated from mineral assemblages (field data). Leaching and transport of uranium must involve shallow

Fig. 19.83 Weights-of-evidence (WofE) based Cu–Au prospectivity map for porphyry-type deposits in New South Wales, Australia, using the mineral system modelling approach. Prospective areas are classified from relatively low (blue colours) to high (red colours) prospectivity. The remaining area is below the prior probability and considered unprospective for porphyry Cu–Au systems. a WofE model of the area; b and c provide zoom-in views (Kreuzer et al. 2015, reproduced with permission from Elsevier)

ground-water, and for this purpose, data on water table characteristics can be used. Sandy paleochannels, surface drainage density, topographic slopes and data on hydraulic gradient are used to provide the required hydrogeologic information. Paleochannels constitute the main transportation pathways and can be mapped from remote sensing data. Besides, calcrete deposits in the paleochannels form the physical traps for depositing water-borne uranium, and these can be mapped from regolith data and remote sensing data. Finally, uranium precipitation is brought about by a change in pH towards neutral and by an evaporative environment. For this purpose, indicators such as calcrete, gypsum, playa lake environments and palaeo-climatic evaporation data can be used.

In another example, in New South Wales, Australia, there occurs a porphyry-type Cu–Au mineralized belt of Ordovician–Silurian age. Kreuzer et al. (2015) applied the mineral system approach for generating a mineral prospectivity map in this area using WofE modelling together with input predictor maps of the following types:

Source: Occurrence of and proximity to felsic and intermediate intrusive and volcanic rocks of Ordovician–Silurian age.
Transport process: Sedimentary sequences deposited in extensional basins of the age of Cu–Au mineralization; intrusive porphyries of Ordovician–Silurian age; arc-parallel and arc-transverse faults and fault intersections that are most likely genetically associated with Cu–Au mineralization.
Trap processes: Proximity to calcareous rocks; competence contrast among lithologies; fault bends and fault intersections.
Deposition process: Mineralized stockwork-like or sheeted quartz vein arrays; potassic alteration assemblages; relative enrichment of rock/soil/stream sediment in pathfinder elements; anomalous Au in stream sediment and anomalous Cu–Ag in rock chips; areas of low magnetic values (which may indicate magnetite destruction associated with Cu–Au deposition).

Figure 19.83 shows the prospectivity map generated by Kreuzer et al. (2015). They observed that although this region has been under exploration for more than a century, much of the potentially prospective ground still remains unexplored. This clearly shows the importance of a GIS-based strategy for mineral exploration.

Several examples of GIS-based mineral prospectivity map generation are given in the special issue of Ore Geology Reviews (Porwal and Carranza 2015). The technique of mineral prospectivity mapping has now been extended to 3-D modelling (e.g. Xiao et al. 2015; Wang et al. 2015). "Predict" (Apel 2006) is a 3-D grid model with WofE capability.

19.10 Hydrocarbon Exploration

Oil and gas pools occur at great depth, of the order of a few km from the surface, and are localized in geological features called traps. The strategy for hydrocarbon exploration relies heavily on the delineation of suitable traps in a generally oil-bearing terrain. The traps could be of structural or of stratigraphic type. Structural traps consist typically of structural features such as folds, faults and unconformities. Stratigraphic traps are commonly generated by facies variation during the sedimentation process.

Remote sensing techniques aim primarily at identifying suitable targets for further exploration by geophysical and drilling methods, and have been usefully applied to hydrocarbon exploration for quite some time (e.g. Berger 1994). These methods attempt to detect anomalies/evidences that may be related to the occurrence of hydrocarbons at depth—for example, surface geomorphic anomalies, lineament-structural control on the distribution of hydrocarbon pools, surface alterations related to hydrocarbon seepage, thermal anomalies and oceanic oil slicks.

19.10.1 Surface Geomorphic Anomalies

From a remote sensing point of view, surface geomorphic anomalies have a special significance in hydrocarbon exploration. As oil and gas occur at great depth, it is through such features that targets for exploration can be identified in the first instance more readily. Halbouty (1980) related the significance of surface geomorphic anomalies for 15 giant oil and gas fields.

Surface geomorphic anomalies may exhibit two characteristics: (a) morphostructural, and (b) tonal, sometimes both concurrently, in which case they are likely to possess a higher chance of success. The anomalies appear as generally circular to oblong features that are discernible on synoptic satellite sensor images. They may differ from the adjoining (background) area in terms of topography, tone, vegetation, soil, soil moisture, surface roughness etc., with boundaries often hazy/blurred. Some of the morphostructural anomalies may be marked by drainage patterns, reflecting adjustments to shallow-buried subsurface structures.

The drainage anomaly at Banskandi, Assam, offers a nice example. Numerous oil and gas pools occur in the Tertiary sequence of Assam and adjoining areas, north-east India. Therefore, this region has been extensively surveyed for hydrocarbon exploration. On remote sensing images of the area, a number of circular features and anomalies could be delineated, many of which were subsequently proved by geophysical surveys and drilling (Agarwal and Misra 1994).

Fig. 19.84 a Drainage pattern marking the Banskandi anomaly, about 20 km in diameter (printed black-and-white from Landsat MSS standard FCC); b interpretation map showing the anomaly in relation to other field-mapped geological features in the area. Based on the Landsat anomaly, further investigations were carried out; the structure is yielding hydrocarbon gas (a, b courtesy of R.P. Agarwal)

One such feature is the Banskandi anomaly (Fig. 19.84), which was first observed on the Landsat MSS image.

In the Banskandi area there are no geologic exposures at the surface, the area being covered with soil and vegetation. The drainage (Barak River) exhibits a striking circular anomaly (Fig. 19.84a) that is evidently controlled by subsurface structure. Field data show the presence of nearly N–S running longitudinal synclines on either side of the anomaly, and an anticline is exposed to the south of the anomaly area (Fig. 19.84b). Based on the above indications, it could be inferred that the Banskandi drainage anomaly represents a shallow-buried anticlinal–domal structure, which could be a potential site for hydrocarbon-bearing structures in this region. Subsequent seismic surveys confirmed the structure (a subsurface high), and drilling yielded hydrocarbon gas in the area.

19.10.2 Lineament-Structural Control on the Distribution of Hydrocarbon Pools

Lineament-structural analysis based on aerial photographic and satellite image data is a widely utilized technique in petroleum exploration. Lineaments can be considered as fracture zones in the Earth's crust that may control the migration and accumulation of hydrocarbons. Studies at numerous sites have revealed that regional fractures/discontinuities are essentially created due to large-scale tectonics in the basement and/or deep subsurface rocks, and these discontinuities may get rejuvenated in subsequent geologic times. Often, a considerable relationship is found to exist between surface lineaments and deep-seated structures. Therefore, the lineaments and fractures observed at the surface can be projected (with care) into the subsurface to infer possible subsurface geological structure and to articulate possible conduits for fluid migration and accumulation. This may provide interesting clues for hydrocarbon exploration. As an example, Fig. 19.85 shows the relationship between major lineaments and the distribution of oil and gas pools in a part of Colorado.

Fig. 19.85 Landsat lineaments and distribution of oil and gas fields in a part of Colorado, USA (after Saunders et al. in Halbouty 1976)

19.10.3 Surface Alterations Related to Hydrocarbon Seepage; Hydrocarbon Index

Oil and gas pools occur in natural rock reservoirs that are not absolutely tight but are leaky to some extent, due to fractures, joints and discontinuities in the reservoir rock. Thus, some amount of hydrocarbons almost invariably escapes from the reservoirs and reaches the surface to appear as seeps. Usually the light hydrocarbons (methane, ethane, propane, butane and pentane) seep vertically upwards in the form of floating colloidal gas bubbles and continuous gas-phase flow (Brown
2000) from the petroleum reservoir at depth along the per- given in Fig. 19.82 in the context of uranium
meable network (cracks, joints, faults in the overlying rocks) exploration)
to the surface. Oil seeps that can be detected by naked eye • Relatively greater abundance of kaolinite due to its for-
are referred to ‘macro-seeps’, and those that can be detected mation from both feldspars and illite
only by special techniques/evidences are termed as • Increase in secondary carbonate content
‘micro-seeps’. The vertical migration of oil and gas along • Occurrence of minerals such as pyrite, sulphides, ele-
fractures is also referred to as the ‘chimney effect’. mental sulfur, uranium, siderite etc.
Spectral/tonal anomaly, mineral alterations and vegeta-
tion changes associated with hydrocarbon seeps above oil ASTER multispectral image data can be used to detect
and gas pools have been reported by several workers (Vizy many of the above mineralogical alterations induced by
1974; Deutsch and Estes 1980; Schumacher 1996; Saunders hydrocarbon seepage. A ratio of ASTER B2⁄B1 is sensitive
et al. 1999; Van der Meer et al. 2002; Abrams 2005; Fu et al. to detecting ferric iron oxide-bearing rocks, which can be
2007; Khan and Jacobson 2008; Petrovic et al. 2008; Shi used to detect the bleached and unbleached red beds. The
et al. 2012). ratio of ASTER B4⁄B8 is useful in identifying carbonate
Hydrocarbons escaping from the underground reservoirs minerals, and the ratio ((B4 * B7)/(B5 * B5)) is capable of
cause oxidation-reduction reactions along the vertical identifying kaolinite-rich areas (see Table 19.3). Thus,
migration paths and result in mineralogical anomalies in ASTER data can be used to map subtle spectral variations
soils/rocks on the surface and changes in vegetation pattern that have a direct relationship with hydrocarbon-induced
on the surface. Anomalous surface mineralogy and vegeta- mineralogical alterations.
tion can indicate zones experiencing oil seepage. The For example, in the Tian-Shan foreland basin, where
advantage of remote sensing is that it offers a rapid cost- hydrocarbon pools are known to occur, Shi et al. (2012)
effective tool of conducting reconnaissance for hydrocarbon- suggested several new prospects based on the remote sens-
induced alterations. ing evidences of mineralogical alterations occurring above
In the context of remote sensing, surface alterations/ regional anticlinal-domal structures.
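The ratio indices above are simple per-pixel arithmetic on co-registered bands. The following minimal sketch (an illustration added here, not part of the original study) shows how the three ASTER ratio images could be generated with NumPy, assuming the relevant bands have already been read in as reflectance arrays; the function and variable names, the division guard and the percentile threshold are illustrative assumptions rather than a published workflow.

```python
import numpy as np

def ratio(numerator, denominator, eps=1e-6):
    """Band ratio with a small guard against division by zero."""
    return numerator / (denominator + eps)

def aster_alteration_indices(b1, b2, b4, b5, b7, b8):
    """Compute the three ASTER ratio indices discussed in the text.

    Inputs are 2-D reflectance arrays of identical shape
    (ASTER bands 1, 2, 4, 5, 7, 8).  Returns a dict of ratio images:
      ferric iron  ~ B2/B1   (bleached vs. unbleached red beds)
      carbonate    ~ B4/B8
      kaolinite    ~ (B4*B7)/(B5*B5)
    """
    return {
        "ferric_iron": ratio(b2, b1),
        "carbonate": ratio(b4, b8),
        "kaolinite": ratio(b4 * b7, b5 * b5),
    }

def anomaly_mask(index_img, percentile=95.0):
    """Flag the highest-ratio pixels as candidate alteration anomalies."""
    threshold = np.nanpercentile(index_img, percentile)
    return index_img >= threshold

if __name__ == "__main__":
    # Synthetic 100 x 100 reflectance arrays stand in for real ASTER bands.
    rng = np.random.default_rng(0)
    bands = {k: rng.uniform(0.05, 0.6, (100, 100)) for k in "b1 b2 b4 b5 b7 b8".split()}
    indices = aster_alteration_indices(**bands)
    kaolinite_mask = anomaly_mask(indices["kaolinite"])
    print("candidate kaolinite-anomaly pixels:", int(kaolinite_mask.sum()))
```

In practice the anomaly threshold would be set scene by scene, and the ratio images would normally be checked against known alteration exposures before being used as evidence layers.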
19.10.3.2 Vegetation Changes
Hydrocarbons present in the soil stimulate the activity of hydrocarbon-oxidising bacteria, which decreases the oxygen content of the soil and increases its contents of carbon dioxide and organic acids. Thus, hydrocarbon micro-seepage leads to the development of a reducing environment in the soil. The chemical-mineralogical changes in the soil affect pH and Eh, which in turn affect the plant nutrients that control the density, health and vigour of vegetation. This in turn affects the spectral response of vegetation and could lead to vegetation stress.

Geobotanical indicators for mineral exploration have been discussed in Sect. 19.8.7, and these are conceptually essentially the same for hydrocarbon exploration. There could be structural changes in vegetation (chlorosis, dwarfism, stunted growth, change in density of plant biomass etc.), taxonomic differences (presence or absence of some plant species), and spectral differences/anomalies (change in spectral response due to vegetation stress). Spectral indices such as the normalized difference vegetation index (NDVI), soil adjusted vegetation index (SAVI), leaf area index (LAI), and several other broadband greenness indices that exploit changes in spectral response of vegetation could also be used in cases of vegetation stressed by hydrocarbon seepage (a minimal computational sketch follows the figure caption below).

As described below, a geobotanical anomaly was observed at the Patrick Draw site, Wyoming, by Lang et al. (1985) under the NASA-Geosat joint investigation. Patrick Draw is an oil-producing site from a stratigraphic trap. The area has a semi-arid climate, low relief, and vegetation consisting primarily of indigenous sage and grass. Oil is produced from a sandstone lens, sandwiched between shales, which constitute the stratigraphic trap.

Figure 19.86a is a false-colour image of the Patrick Draw test site generated from the aerial TM simulator scanner. The FCC was generated from a principal component transformation (PC1 = red, PC2 = green, and PC3 = blue). A number of spectral units could be distinguished on the false-colour image, which could be linked to the type of residual soils developed in the area. A unique feature, circular in outline and lemon–green in appearance, was located on the false-colour image. This area was considered anomalous because of its size, shape, location and spectral characteristic. Examination of aerial photographs of the area (Fig. 19.86b) also could not provide any explanation for this unique feature. Ground observations (Fig. 19.86c, d) indicated that this area is marked by stunted sage bushes that are smaller, less dense and less vigorous than the sage in the adjoining background areas.

Fig. 19.86 a False-colour image of the Patrick Draw test site based on principal component transformation of aerial TM simulator data; note the peculiar lemon–green near-circular patch on the colour image, which is interpreted to be due to a geobotanical anomaly. b Aerial photograph of the area; the area of anomalous spectral property in (a) is also outlined. c, d Ground-based photographs of stunted and healthy sage bushes at Patrick Draw (a–d Lang et al. 1985)
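As a companion to the geobotanical discussion above, the short sketch below (an added illustration, not code from Lang et al. or any cited study) shows how the broadband greenness indices mentioned there, NDVI and SAVI, could be computed from red and near-IR reflectance arrays so that anomalously low-vigour patches such as the Patrick Draw feature could be screened; the soil-adjustment factor L = 0.5 and the two-standard-deviation anomaly rule are conventional but arbitrary choices.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def savi(nir, red, soil_factor=0.5, eps=1e-6):
    """Soil Adjusted Vegetation Index with the usual L = 0.5 factor."""
    return (1.0 + soil_factor) * (nir - red) / (nir + red + soil_factor + eps)

def low_vigour_mask(index_img, n_std=2.0):
    """Flag pixels whose greenness is anomalously low for the scene."""
    mean, std = np.nanmean(index_img), np.nanstd(index_img)
    return index_img < (mean - n_std * std)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    red = rng.uniform(0.05, 0.2, (200, 200))   # synthetic red reflectance
    nir = rng.uniform(0.2, 0.5, (200, 200))    # synthetic near-IR reflectance
    stress = low_vigour_mask(savi(nir, red))
    print("pixels flagged as potentially stressed vegetation:", int(stress.sum()))
```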

It was found that the anomalous zone corresponds to the area vertically above the known gas pool; it was therefore inferred that seeping hydrocarbons from the underlying reservoir have led to vegetation stress and the geobotanical anomaly.

19.10.4 Hydrocarbon Index (HI)

The presence of hydrocarbons in soils/rocks can be detected by hyperspectral remote sensing. Cloutis (1989) showed that hydrocarbon-bearing materials exhibit characteristic absorption bands at 1.73 and 2.31 µm in the SWIR. As these absorption bands are unique to hydrocarbons, their presence can be detected unambiguously. Hoerig et al. (2001) used HyMap aerial hyperspectral data in these spectral bands to detect hydrocarbon seepage in the field.

A closer look reveals that the 1.73 µm band is located rather close to a water absorption band; therefore, the 2.31 µm band may be better suited for hydrocarbon detection purposes. For Hyperion satellite data, a Hydrocarbon Index has been formulated (NASA in Andreoli et al. 2007) as follows (these spectral band numbers are specific to Hyperion):

HI = RBD_{B116} = \frac{R_{B115} + R_{B117}}{R_{B116} \times 2}    (19.24)

where R_{Bi} is the reflectance in band i. Figure 19.87 schematically shows the computation of HI. The RBD is derived using the same concept as shown in Fig. 19.62b. Hydrocarbons are indicated if HI > 1. The depth of absorption is related to the relative quantity of hydrocarbon; the deeper the minimum in the curve, the greater is the hydrocarbon content. A computational sketch of Eq. 19.24 is given after the figure caption below.

Fig. 19.87 Schematic showing the concept of the 'Hydrocarbon Index'. Hydrocarbon absorption occurs at 2.31 µm (Hyperion B116). Relative depth of band absorption (RBD) is calculated using spectral reflectance at B115, B116 and B117 (after NASA in Andreoli et al. 2007)
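The Hydrocarbon Index of Eq. 19.24 is a three-band relative absorption-band-depth measure and is straightforward to apply per pixel. The sketch below is only an illustration of that arithmetic (it is not code from NASA or Andreoli et al. 2007); the reflectance array names are hypothetical and the synthetic test data simply simulate a dip at the B116 position.

```python
import numpy as np

def hydrocarbon_index(r_b115, r_b116, r_b117, eps=1e-6):
    """Hydrocarbon Index (Eq. 19.24): HI = (R_B115 + R_B117) / (2 * R_B116).

    R_B116 samples the 2.31 um hydrocarbon absorption; B115 and B117 are
    the flanking 'shoulder' bands.  HI > 1 indicates an absorption dip,
    and deeper dips (larger HI) suggest higher hydrocarbon content.
    """
    return (r_b115 + r_b117) / (2.0 * r_b116 + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    shoulder = rng.uniform(0.30, 0.35, (50, 50))            # synthetic shoulder reflectance
    r115 = shoulder
    r117 = shoulder + rng.normal(0.0, 0.005, (50, 50))
    r116 = shoulder - rng.uniform(0.0, 0.05, (50, 50))       # simulated 2.31 um dip
    hi = hydrocarbon_index(r115, r116, r117)
    print("pixels with HI > 1:", int((hi > 1.0).sum()))
```

Because HI compares the shoulder reflectances with the band-centre reflectance, values close to 1 indicate no absorption, while progressively larger values indicate a deeper 2.31 µm dip and hence, potentially, more hydrocarbon.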
19.10.5 Thermal Anomalies

Exploration field geoscientists have reported the existence of negative thermal anomalies over oil-pools and have suggested that thermal profiling can be a useful, cost-effective tool for oil exploration (Fons 1999, 2000). The cause of this thermal anomaly is considered to be the thermo-physical property of oil at depth (Fons op. cit.). Oil has characteristically lower thermal conductivity, higher specific heat and lower thermal diffusivity than the reservoir rocks. It is well known that there is a general geothermal energy flux from depth upwards to the Earth's surface. The presence of oil in a reservoir would change the bulk effective thermal conductivity, thus causing spatial variations in heat transfer and resulting in a low thermal anomaly on the ground surface above oil-bearing horizons (Fig. 19.88).

Modeling of thermal anomalies over petroliferous basins has revealed that the temperature-depth relationship depends mainly on the thermal conductivity of the medium (Chakraborty et al. 2010). Observed field data on temperature vs. depth obtained from wells show the presence of an inflexion point, i.e. a change in gradient at the level above which hydrocarbon pools are known to occur; the thermal gradient slightly decreases above the oil-bearing strata (Chakraborty et al., op. cit.) (Fig. 19.89).

This calls for thermal surveys at on-shore exploration sites. However, the task of field temperature surveys over extensive areas and long profiles is beset with numerous hurdles, such as: (i) logistic difficulties in collecting concurrent field temperature data over extensive areas; (ii) movement of field equipment for temperature measurements at distant places leads to non-concurrent field temperature observations, and hence natural diurnal temperature variation, as also possible atmospheric-meteorological variations, needs to be considered/normalized; (iii) topography, land use/land cover, importantly vegetation, and local surface moisture may vary from place to place and also influence the local surface temperature, and thus data reduction to a common base for lateral comparison becomes a stupendous task.

In this context, satellite remote sensing appears to be a powerful, viable tool, as temperature data over millions of pixels are collected in a few minutes' time, and solar illumination and atmospheric-meteorological conditions can be considered to be largely uniform over the scene for all practical purposes. Further, multispectral optical data can be used to deal with the problems of vegetation and surface moisture to a reasonable practical extent.

Gupta et al. (2009) carried out a systematic study in the Cambay basin, India, for possible detection of thermal anomalies from ASTER data. Cambay is a marginal intra-cratonic basin comprising a Tertiary sedimentary sand/shale sequence with more than 90 oil-gas fields,

Fig. 19.88 Schematic to show heat flux on the surface of the earth. Q_S is the solar heat flux; Q_E is the heat flux due to the internal crustal heating process, which gets reduced to Q_E′ (Q_E′ < Q_E) in the hydrocarbon-bearing zone. The diverse land use/land cover, surface soil moisture, topography, vegetation, agriculture etc. tend to subdue/mask the effect of variation in cumulative heat fluxes on the earth's surface (after Chakraborty et al. 2010)

Fig. 19.89 Observed depth-temperature profile in a petroliferous well; note the presence of an inflexion point and the decrease in thermal gradient above the oil-bearing strata at ~1900–2000 m depth (simplified after Chakraborty et al. 2010)

the whole sequence being covered with extensive monotonous alluvium (Biswas 1987).

An analysis of the remote sensing derived temperature distribution at three known major producing oil-fields revealed an interesting pattern: the temperature is lowest in the central pixel and gradually increases from the center outwards, in all the cases, approaching the scene average of 306.3 K (Table 19.6).

Scene-based dedicated image processing enabled masking out and exclusion of pixels that carry possible effects of artifacts (urban areas, roads, etc.), water bodies (lakes, canals, and wet fields) and ground vegetation (forests, agricultural fields, parks, other vegetation). This led to extraction of a temperature image showing anomalous 'cooler' pixels, which has been draped over the CIR composite (Fig. 19.90). It is observed that there is a larger concentration of the anomalous cooler pixels lying in clusters within the known oil-fields. Based on the above empirical relationship, a possible new prospect could be outlined for further investigations (Gupta et al. 2009).

Thus, field data already exist on the presence of negative thermal anomalies over oil-fields, and processed satellite

Table 19.6 Surface temperatures derived from ASTER data over known oil-fields in Cambay basin, India (after Gupta et al. 2009)

S. No.  Oil-field name  Location              Surface temperature (K)
                                              a       b       c       d
1       Sanand          23°03′ N, 72°30′ E    299.9   303.8   305.4   306.1
2       Kalol           23°16′ N, 72°30′ E    301.3   303.2   304.6   305.2
3       Nawagam         22°55′ N, 72°35′ E    300.6   303.7   305.2   305.5

Note: The temperatures are derived in four spatial windows: a gives the temperature of the central pixel; b–d give average temperatures of the 3 × 3, 7 × 7 and 11 × 11 matrices, respectively; note that the temperature is lowest in the central pixel and gradually increases from the center outwards in all the cases; the scene average temperature is 306.3 K.
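The window statistics of Table 19.6 (central pixel versus 3 × 3, 7 × 7 and 11 × 11 means) are easy to reproduce once a land-surface-temperature image is in hand. The sketch below is a hypothetical illustration of that computation only; the LST array, window sizes and pixel coordinates are assumptions and do not reproduce the actual Cambay processing chain.

```python
import numpy as np

def window_mean(lst, row, col, size):
    """Mean of a size x size window centred on (row, col), clipped at edges."""
    half = size // 2
    r0, r1 = max(row - half, 0), min(row + half + 1, lst.shape[0])
    c0, c1 = max(col - half, 0), min(col + half + 1, lst.shape[1])
    return float(np.nanmean(lst[r0:r1, c0:c1]))

def oilfield_profile(lst, row, col, sizes=(3, 7, 11)):
    """Central-pixel temperature plus window means, as in Table 19.6 (a-d)."""
    profile = {"central": float(lst[row, col])}
    for s in sizes:
        profile[f"{s}x{s}"] = window_mean(lst, row, col, s)
    return profile

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    lst = 306.3 + rng.normal(0.0, 0.4, (300, 300))   # synthetic scene background (K)
    lst[148:153, 148:153] -= 4.0                      # a synthetic cool anomaly
    print(oilfield_profile(lst, 150, 150))
```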

Fig. 19.90 Processed ASTER image (CIR composite) of a part of the Cambay basin, western India; yellow are the thermally anomalous (cooler) pixels draped over the CIR composite; green outlines (1–7) are known oil-fields; windows a–d show clustering of cooler pixels in oil-field regions; e is considered a possible prospect (after Gupta et al. 2009)

sensor data in this case also exhibits preferential clustering of anomalous cooler pixels over known oil-fields. Apparently, there lies a good potential in satellite sensor data that ought to be fully investigated and exploited in exploration tasks. This approach could at least minimize wildcat exploration costs and losses.

19.10.6 Oceanic Oil Slicks

Another important application of remote sensing is in detecting oil slicks that occur as thin films on the ocean/sea water surface. An oil film on the ocean surface has a dampening effect on the waves, which reduces the

back-scatter (darker tone); at times, the dampening effect may be so pronounced that it may lead to a 'no-show' in the surrounding sea clutter. Oil slicks originate from both natural and man-induced sources. Natural oil slicks in oceans are of interest for submarine hydrocarbon exploration. Many hydrocarbon reservoirs in offshore basins form slicks, such as those in the Santa Barbara Basin, several basins in Indonesia, etc. Persistent or recurrent oil slicks can point towards the presence of undersea oil seeps. Figure 19.91 shows an ERS-SAR image of a suspected natural oil slick. Man-induced oil slicks in the ocean are formed due to pollution from oil tankers, drilling rigs, municipal waste etc. Such oil slicks pose environmental hazards to marine fauna and flora and need to be detected and monitored for environmental surveillance. Marine oil spills due to man-induced activities are much more common (93%) than natural seepages (7%) and are discussed in detail in Sect. 19.17.4, dealing with environmental applications. A simple sketch of dark-spot screening on SAR backscatter images follows the figure caption below.

Fig. 19.91 ERS-SAR image showing oil-film signature on the sea surface, interpreted to be due to natural oil leakage from the sea bottom (flight direction and look direction shown) (Copyright © 1994, European Space Agency and Tromso Satellite Station)
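Because an oil film suppresses the short capillary waves that produce most of the radar return, slicks appear as dark spots against the brighter sea clutter, as in Fig. 19.91. The sketch below shows only the simplest possible screening of such dark spots (operational slick detection relies on adaptive thresholding plus shape and contextual tests); the backscatter array, offset and minimum-size rule are assumptions for illustration.

```python
import numpy as np

def dark_spot_mask(sigma0_db, clutter_offset_db=3.0, min_pixels=25):
    """Flag dark regions lying well below the scene sea-clutter level.

    sigma0_db : 2-D array of calibrated backscatter (dB).
    A pixel is 'dark' if it lies more than clutter_offset_db below the
    scene median; speckle-sized detections are rejected by a crude count.
    """
    threshold = np.nanmedian(sigma0_db) - clutter_offset_db
    dark = sigma0_db < threshold
    return dark if dark.sum() >= min_pixels else np.zeros_like(dark)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    sea = rng.normal(-8.0, 1.0, (400, 400))   # synthetic sea clutter (dB)
    sea[180:220, 100:260] -= 6.0              # a synthetic slick-like patch
    mask = dark_spot_mask(sea)
    print("dark-spot pixels flagged:", int(mask.sum()))
```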

19.11 Groundwater Investigations

Water is the basic necessity for life. Areas having a good supply of water have always been prosperous. Groundwater constitutes an important source of water supply for various purposes, such as domestic needs, local supplies for industries, agriculture etc. It is generally cool, fresh, hygienic, potable and widely available, and its availability does not generally vary with season as greatly as that of surface water. Groundwater is thus commonly preferred for various applications.

Groundwater constitutes an important component of the hydrological cycle. Water-bearing horizons are called aquifers. Good aquifers have good porosity and good permeability. On the other hand, rocks which do not possess porosity and permeability are unable to hold and yield water. Therefore, aquifers may be considered as extensive subsurface reservoirs of water. The principal sources of groundwater recharge are precipitation and stream flow (influent seepage), and those of discharge include effluent seepage into streams and lakes, springs, evaporation and pumping.

Groundwater aquifers are of two types: unconfined and confined. In unconfined, also called water-table aquifers, there exists a natural water table, stable under atmospheric conditions; in confined aquifers, the water is contained under pressure greater than the atmospheric pressure, due to overlying and underlying relatively impermeable strata. Confined aquifers receive water from a distant area, where the aquifer may be exposed or the overlying confining layer is non-existent.

19.11.1 Factors Affecting Groundwater Occurrence

One of the most important requirements for groundwater occurrence and flow is that the lithological horizon be porous and permeable, so that it may store and permit easy movement of water. The pores, called voids, are the open spaces in the rock in which water may accumulate, and these voids are of fundamental importance. The porosity could be primary (i.e. developed concurrently with the rock's formation) or secondary (i.e. generated subsequently in the rock by fracturing). The original pores and interstices or intergranular voids in sedimentary rocks constitute primary porosity; on the other hand, joints, fractures, shear zones, solution openings etc. constitute secondary porosity. Rocks in which primary porosity is dominant are called soft rocks, and those possessing predominantly secondary porosity, hard rocks. Unconsolidated sediments also have predominantly primary porosity, but are considered slightly differently when discussing groundwater hydrology. Rock porosity may range from 0 to 50%. Most unconsolidated materials such as gravels and sands, and rocks such as sandstones and conglomerates, possess good primary porosity and are important bearers of groundwater. Secondary porosity may be present in fractured metamorphic and igneous rocks, and in soluble rocks such as limestones. Rocks such as shales, clays, schists etc. may possess varying porosity but in general poor permeability, and serve only as

impermeable or confining layers. [For details on ground- 19.11.3 Application Examples


water hydrology, refer to Todd and Mays (2005); Singhal
and Gupta (2010)]. Most commonly, the purpose of groundwater investigation is
the targeting of water resources for local supply. Remote
sensing techniques can provide vital information data, which
19.11.2 Indicators for Groundwater on Remote can be supplemented and verified by other field techniques
Sensing Images (geophysical, drilling etc.). In practice, the integrated sys-
tematic approach of ‘regional scale satellite images—large
As mentioned elsewhere, remote sensing data provide sur- scale satellite images/photographs—geophysical—drilling’
face information, whereas groundwater occurs at depth, has been highly successful for groundwater exploration and
maybe a few metres or several tens of metres deep. The widely applied. Basically, it reduces time, risk and expen-
depth penetration of EM radiation is barely of the order of diture in groundwater development. Such studies for
fractions of a millimetre in the visible region, to barely a few groundwater exploration and potential mapping have been
metres in the microwave region (in hyper-arid conditions), at carried out world-wide (e.g. Solomon and Quiel 2006; Al
best. Therefore, remote sensing data are unable to provide Saud 2010; Elewa and Qaddah 2011; Shekhar and Pandey
any direct information on groundwater in most cases. 2015; Bhuiyan 2015). Rao (2006) proposed a groundwater
However, the surface morphological–hydrological–geologi- potential index for use in a crystalline terrain using remote
cal regime, which primarily governs the subsurface water sensing data. More recently, weight of evidence modeling
conditions, can be well studied and mapped on remote approaches have also been employed for remote sensing—
sensing data products. Therefore, remote sensing acts as a GIS based groundwater data processing (e.g. Nampak et al.
very efficient tool for regional and local groundwater 2014; Park et al. 2014).
exploration, particularly as a forerunner in a cost-effective
manner (see e.g. Waters et al. 1990; Krishnamurthy et al. 19.11.3.1 Image Data Selection
1996; Singhal and Gupta 2010). Selection of remote sensing data for groundwater applica-
In the context of groundwater exploration, the various tions has to be done with great care as detection of features
surface features or indicators can be grouped into two cate- of interest is related to spatial and spectral resolution of the
gories: (1) first-order or direct indicators, and (2) second-order sensor as well as seasonal conditions of data acquisition. For
or indirect indicators. The first-order indicators are directly example, small-scale image data are good for evaluating the
related to the groundwater regime (viz. recharge zones, dis- regional setting of landforms, whereas large-scale pho-
charge zones, soil moisture and vegetation). The second-order tographs are required for locating actual borehole sites.
indicators are those hydrogeological parameters which Similarly, an understanding of the spectral response of
regionally indicate the groundwater regime, e.g. rock/soil objects is crucial for selecting remote sensing data and
types, structures, including rock fractures, landforms, drai- interpretation. Further, temporal conditions (rainfall, soil
nage characteristics etc. (see Table 19.7). moisture, vegetation etc.) greatly affect the manifestation of

Table 19.7 Important indicators of groundwater on remote sensing data (after Ellyett and Pratt 1975; Singhal and Gupta 2010)
(A) First-order or direct indicators
(1) Features associated with recharge zones: rivers, canals, lakes, ponds etc.
(2) Features associated with discharge zones: springs etc.
(3) Soil moisture
(4) Vegetation (anomalous)
(B) Second-order or indirect indicators
(1) Topographic features and general surface gradient
(2) Landforms
(3) Depth of weathering and regolith
(4) Lithology—hard rock and soft rock areas
(5) Geological structure
(6) Lineaments, joints and fractures
(7) Faults and shear zones
(8) Soil types
(9) Soil moisture
(10) Vegetation
(11) Drainage characteristics
(12) Special geological features, such as karst, alluvial fans, dykes and reefs, unconformities, buried channels, salt encrustations etc., which
may have unique bearing on groundwater occurrence and movement
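Indicator layers of the kind listed in Table 19.7 are normally combined in a raster GIS, either by weighted index overlay or by statistical schemes such as the weights-of-evidence modelling cited above (e.g. Nampak et al. 2014; Park et al. 2014). The sketch below shows the bare arithmetic of the weights-of-evidence contrast for a single binary evidence layer against known well locations; the layer names, the toy data and the omission of conditional-independence testing are all simplifications, not a reproduction of any cited study.

```python
import numpy as np

def weights_of_evidence(evidence, deposits, eps=1e-9):
    """W+, W- and contrast C for one binary evidence layer.

    evidence : boolean array, True where the indicator (e.g. lineament
               buffer, valley-fill landform) is present.
    deposits : boolean array, True at training points (e.g. productive wells).
    """
    b_d = np.logical_and(evidence, deposits).sum()      # evidence present, well present
    b_nd = np.logical_and(evidence, ~deposits).sum()     # evidence present, no well
    nb_d = np.logical_and(~evidence, deposits).sum()
    nb_nd = np.logical_and(~evidence, ~deposits).sum()
    d, nd = deposits.sum(), (~deposits).sum()
    w_plus = np.log((b_d / (d + eps) + eps) / (b_nd / (nd + eps) + eps))
    w_minus = np.log((nb_d / (d + eps) + eps) / (nb_nd / (nd + eps) + eps))
    return w_plus, w_minus, w_plus - w_minus

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    lineament_buffer = rng.random((200, 200)) < 0.2      # toy evidence layer
    wells = rng.random((200, 200)) < 0.01                # toy training wells
    wells |= np.logical_and(lineament_buffer, rng.random((200, 200)) < 0.02)
    w_plus, w_minus, contrast = weights_of_evidence(lineament_buffer, wells)
    print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {contrast:.2f}")
```

A positive contrast indicates spatial association between the indicator and productive wells; in a full study each thematic layer would be weighted in this way and the weights summed per pixel into a posterior favourability map.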

Fig. 19.92 A set of


a post-monsoon and
b pre-monsoon IRS-LISS-II
red-band images of a part of the
Bundelkhand granites in Central
India. Note that the various
landforms (buried pediments,
valley fills etc.) and lineaments
are better deciphered on the
pre-monsoon (summer) image

features. Figure 19.92 gives an example of the same area they form good reservoirs of groundwater, the groundwater
(granitic terrain in Central India) imaged by the same sensor flow regime being governed by the regional hydrological
(IRS-LISS-II) in two different seasons: post-monsoon and setting and the morphological–geological evolution of the
pre-monsoon. The post-monsoon image exhibits a wide- area.
spread thin vegetation cover, and therefore distribution of
various landforms and lineaments is not clear on this image; 1. Groundwater seepage patterns in alluvial terrain—the
it is the summer (pre-monsoon) image on which various northern Indo-Gangetic plain.
landforms such as buried pediments, valley fills and linea-
ments are clearly brought out. For groundwater studies, it is extremely important to map
areas of influent (recharge) and effluent (discharge) ground-
19.11.3.2 Unconsolidated Sediments/‘Soft Rock’ water seepage. The near–IR, thermal-IR and SAR images are
Terrain highly sensitive to surface moisture and can provide inputs
Unconsolidated sediments are characterized by the presence for mapping seepage pattern. Figure 19.93 is a plot of NIR
of high primary porosity and permeability. Such materials reflectance (Landsat MSS4.) against depth of water-table.
cover extensive areas as deposits of fluvial, aeolian, glacial The groundwater discharge zones have a shallow water-table
or marine origin, or as weathered surficial cover. In general, and lower NIR reflectance than the recharge zones.

water infiltrates, moves down the gradient, and emerges as


springs about 10−15 km further south. The effluent
groundwater seepage and gradual building up of streams are
clearly shown on the NIR-band image (Fig. 19.94).
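The inverse relation between NIR reflectance and depth to the water table shown in Fig. 19.93 can support a first-pass separation of discharge (effluent) and recharge (influent) areas. The sketch below is only a schematic of that idea; the DN thresholds and zone labels are invented for illustration and would have to be calibrated against field water-level data before any real use.

```python
import numpy as np

def seepage_zones(nir_dn, discharge_max_dn=30, recharge_min_dn=45):
    """Label pixels as discharge (1), transitional (2) or recharge (3) zones
    from NIR digital numbers, following the Fig. 19.93-style relation that
    shallow water tables (discharge areas) depress NIR reflectance."""
    zones = np.full(nir_dn.shape, 2, dtype=np.uint8)    # transitional by default
    zones[nir_dn <= discharge_max_dn] = 1               # likely discharge
    zones[nir_dn >= recharge_min_dn] = 3                # likely recharge
    return zones

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    nir = rng.integers(10, 70, (100, 100))               # synthetic NIR DN values
    z = seepage_zones(nir)
    print("discharge / transitional / recharge pixel counts:",
          [int((z == k).sum()) for k in (1, 2, 3)])
```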

2. Neotectonic faults disrupting shallow alluvial aquifers.

Study of aquifer geometry is essential for proper develop-


ment and utilization of groundwater resources. In many
areas there are multiple aquifers buried under alluvial cover,
like the Indo-Gangetic Plains of India, Po-Lombardy Plains
of Italy, Nile Plains of Egypt and Hwang-Ho Plains of
China. The aquifers may possess significant variation in
lateral continuity and extension due to disruption by neo-
tectonic faults and it is important to understand the 3-D
Fig. 19.93 Relationship between NIR reflectance (MSS4 DN values) geometry of the aquifer system.
and depth of water table; the distribution of groundwater recharge and
discharge areas is indicated (adapted after Bobba et al. 1992)
In this remote sensing data can be useful by indicating the
presence of vertical faults/lineaments etc.
Samadder et al. (2007) carried out a systematic study for
Multispectral image data depicting spatial characteristics, understanding shallow aquifer geometry by integrating
pattern and shape of surface water bodies can indicate whether well-log and remote sensing data in a part of Gangetic
the streams are recharging the groundwater (influent ground- Plains. The study included mapping of neotectonic linea-
water seepage) or groundwater is recharging the streams ments from remote sensing data, and lithological character-
(effluent groundwater seepage). The northern alluvial region istics and determination of aquifer depth from well-log data.
of the Indo-Gangetic plains provides an interesting example of The well-log data indicated that there occur several alter-
groundwater seepage pattern, both influent and effluent. nating sand (aquifer) and clay (aquitard) strata. As seen
The vast Indo-Gangetic plains, composed of unconsoli- widely on the surface, the lithologic units are just unde-
dated fluvial sediments, are bordered on the north by the formed and continue laterally horizontally for several kms,
sub-Himalayan (Siwalik) hill ranges. The general topo- such that it could be assumed that the litho-units possess a
graphic slope and groundwater flow is from north to south. gradient of <2°–3°. With this assumption, lateral lithologic
The foothill zones of sub-Himalayan ranges and the northern correlation was carried out. A possibility of disruption/fault
part of the Indo-Gangetic plains possess highly coarse- was considered if the correlated lithologs implied a gradient
grained deposits, boulders, gravels and sands, locally called of more than 5°. It was found that some interpreted faults
the bhabhar zone. The groundwater seepage in this zone is matched with the lineaments identified from remote sensing
influent, the surface soil moisture is generally very low, and data. In case faults were not manifested on the image, they
the area has light tones on NIR image (Fig. 19.94). The were then interpreted as buried faults. Figure 19.95a, b gives

Fig. 19.94 Groundwater seepage pattern in the northern alluvial Gangetic plains. a Landsat MSS4 (infrared) image of part of the Gangetic plains
and the sub-Himalayas. b Interpretation map showing influent and effluent seepage patterns (Singhal and Gupta 2010)

an example. In this way, integration of remote sensing data Quaternary cover. A typical example is furnished by the
with well-log data resulted in deducing sub-surface geo- ‘lost’ Saraswati River, which is said to have been a mighty
logical correlation and aquifer geometry. river in Vedic or pre-Vedic times, and used to flow in the
western part of the present Indo-Gangetic plains, between
3. Buried river channel—the ‘lost’ Saraswati River. the Yamuna and Sutluj Rivers (e.g. Valdiya 2017;
Fig. 19.96a). The palaeochannels can be identified in most
Buried river channels constitute one of the most important cases in this area on the basis of higher moisture content in
targets in a groundwater exploration programme in soils, textural characteristics on images and vegetation

Fig. 19.95 a shows the image of a part of Gangetic plains (NIR band) lineament observed on the remote sensing data; on the other hand, the
with lineaments; also shown are the bore well locations and the section fault (F2–F2) located between well Nos. 48 and 102 is not manifested
alignment A–A′. b gives the interpreted geological section; the fault on the surface and therefore has to be treated as a buried fault
(F1–F1) between well Nos. 102 and 108 can be matched with the (Samadder et al. 2007)

Fig. 19.96 a The northwestern Indian subcontinent with its present Yash Pal et al. 1980). b The Landsat MSS2 (red band) image showing
river system along with major paleochannels, as deciphered from the Saraswati river paleochannel, 6–8 km wide, marked by vegetation
landsat imagery (international boundaries not shown; simplified after

patterns on former river beds. The old bed of the Saraswati is


marked as a 6−8-km-wide zone (Fig. 19.96b). Using
Landsat MSS images, Yash Pal et al. (1980) delineated the
course of the ‘lost’ Saraswati River for a distance of about
400 km. They also inferred that sudden westward diversion
of the Sutluj River, a former major tributary, led to the
drying up of the once mighty Saraswati River.

4. Buried river channels—Eastern Sahara and Libya.

In arid and hyper-arid regions, buried channels are of added


importance, owing to conditions of acute water scarcity. The
SIR-A experiment demonstrated, for the first time, the
applicability of SAR data for the delineation of buried
channels in hyper-arid regions (McCauley et al. 1982; Elachi
et al. 1984). The Eastern Sahara is blanketed by surface
deposits of wind-blown sands, which are floored mostly by
the Cretaceous Nubia Formation (sandstone and shales) and
Precambrian granites. The area has witnessed several cli-
matic cycles during Quaternary times. It is generally inferred
that aridity settled in this area during the early Pleistocene,
although many small playas and streams continued to exist
even much later—up to Recent times—and that by about
5000 years ago, hyper-aridity had set in the Eastern Sahara
Desert, bringing to a halt the fluvial processes in the region
(McCauley et al. 1982).
The Landsat MSS images show that the terrain is covered
with sand sheets and dunes and is barren and quite featureless
(Fig. 19.97a). The corresponding SIR-A image reveals the
presence of subsurface buried channels—segments of
defunct river systems (Fig. 19.97b). Some of the larger val-
leys are interpreted to be relicts of a much earlier Tertiary
river system, and thus could represent palimpsest drainage,
related to several previous episodes of fluvial activity.
The buried river channels appear as areas of darker tone
on the SAR image and this warrants some further explana-
tion. The thin sand sheets possess no micro-roughness on the
top and lack scatterers, being homogeneous over large areas. Fig. 19.97 a Landsat image of part of the Libyan desert showing
extensive windblown deposits; b corresponding SIR-A image showing
In this situation, the micro-relief of the substrate layer may subsurface channels; c relative degree of back-scatter across the area
influence the radar return, if the sand cover is fully pene- and d schematic cross-section (a, b Courtesy of J.P. Ford, Jet
trated by the incident radar energy. In areas where buried Propulsion Laboratory, Pasadena)
channels are present, the sand cover, which is fine grained
and homogeneous, is thicker due to the erosion of the bed- (paleochannel). Prior to the SIR-C images, only the western
rock in comparison to the adjacent areas. This results in branch of the Wadi Kufra was known to exist. The SIR-C
weaker return from areas of buried channels (where the data revealed for the first time the existence of a broader
bedrock is deeper); in contrast, there is relatively higher (5-km-wide, 100-km-long) eastern branch of the Wadi Kufra.
radar return from the adjoining areas (where the bedrock is
shallower) (Fig. 19.97c, d). 5. Paleo River Delta—the Thar Desert, Rajasthan.
Another interesting example is furnished by SIR-C SAR
image from Kufra oasis, Libya (Fig. 19.98). The area is now The Thar Desert is a large, arid region in the northwestern
hyper-arid such that the valleys and dry ‘wadis’ or channels part of the Indian subcontinent and is the world’s ninth
are mostly buried under windblown sand. The SIR-C image largest subtropical desert. The age of the Thar Desert is still
reveals the system of an old stream valley, now inactive debated but is generally considered as about 4000 years. It is

the interpretation map showing several paleochannels pos-


sessing distributary pattern characteristic of a river delta. This
implies that the sea on the south/southwest extended close to
here in the past. This has obvious bearing on paleo-hydrology
of the area, in addition to exploration potential of hydrocar-
bons in the paleo-deltaic region.

6. Land subsidence associated with over-abstraction of


groundwater—Las Vegas

Over abstraction of groundwater is a common cause of


surface land subsidence. Geodetic techniques (spirit level-
ing) have been conventionally used for measuring land
subsidence. Recently, InSAR technique has also been used
Fig. 19.98 SIR-C image showing the palaeochannel of Wadi Kufra, for monitoring land surface deformation and aquifer com-
Libya. Prior to the SIR-C image, only the west branch of the paction. Figure 19.100 shows InSAR derived land subsi-
palaeodrainage, known as Wadi Kufra, was recognized; the broader dence data and its comparison with conventional levelling
east branch (5 km wide, about 100 km long) was known only after
data in the Las Vegas valley, Nevada (Galloway and Hoff-
these data from the SIR-C (printed black-and-white from FCC)
(courtesy of JPL/NASA, Pasadena) man 2007).

7. GRACE derived estimates of groundwater depletion—


believed that perhaps around 2000–1500 BCE, the Ghaggar/ India.
Saraswati River ceased to be a major river due to neotec-
tonics that led to significant modifications in drainage GRACE provides data on gravitational attraction exerted by
courses affecting stream-flows in this part of area (Bakliwal the Earth on the orbiting satellite. As the gravitational
and Grover 1988; Valdiya 2017). Numerous palaeochannels attraction depends on the mass, repetitive coverage bring
still bear evidence to this idea (e.g. Fig. 19.96). out information on changes in mass occurring on the Earth
Figure 19.99a shows an interesting example of presence of below the satellite trajectory. Rodell et al. (2009) used
paleo river delta brought to light by RISAT SAR image (after repetitive GRACE data of the period Aug 2002–Oct 2008
Rajawat 2014). The area is a part of the Thar desert, Rajas- of the NW Gangetic Plains (Rajasthan, Punjab and Har-
than, and is covered largely with sand dunes. Figure 19.99b is yana) to deduce basin-level changes in ground

Fig. 19.99 a RISAT SAR image showing the presence of river delta lying buried under the wind-blown desert sands of Thar desert, Rajasthan;
b interpretation map showing paleo-channels with distributary drainage pattern characteristic of river delta (a after Rajawat 2014)

Fig. 19.100 a InSAR derived land subsidence, Las Vegas valley, Nevada, April 1992 to December 1997. b Subsidence rates compared to historic
leveling lines at 1 and 10 for given periods (months per year) (Fig. a, b, after Galloway and Hoffman 2007)

water occurring in the region. They observed that ground-
water is being depleted at an average rate of 4.0 ± ‘Hard-rock’ terrains are marked by groundwater character-
1.0 cm year−1 in the entire NW part of the Gangetic basin istics quite different from those of ‘soft-rock’ terrains. As the
(Fig. 19.101). Similar studies have also been done in the dominant porosity in such areas is of secondary type, the
Middle East (Voss et al. 2013). pattern of groundwater distribution is characterized by non–
uniform flow. The most common exploration strategy in
such areas is to delineate zones of higher secondary porosity
(e.g. fracture systems, faults, shear zones, joints etc.),
weathered horizons, vegetation, drainage anomalies and
suitable landforms.

1. Fracture traces and lineaments in hard-rock areas.

Lineaments are surface traces of shear zones, fractures, faults


etc., along which topography, drainage, vegetation, soil
moisture, springs etc. may get aligned, and the zone is likely
to possess a greater thickness of regolith (see Fig. 19.35a, b).
The significance and manifestation of lineaments is dis-
cussed in detail elsewhere (see Sect. 19.4.5). In some cases,
lineaments may be better manifested on thermal-IR and or
SAR images (see Figs. 12.12 and 19.41).
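Fracture-trace and lineament maps of this kind are commonly converted into proximity (buffer) grids that serve as evidence layers for well-site targeting, and similar buffer layers reappear later in this chapter in hazard zonation. The sketch below builds such a proximity grid by brute force simply to keep the example dependency-free; the rasterised lineament mask, cell size and buffer distance are assumptions, and a real workflow would use a GIS buffer tool or a fast distance transform.

```python
import numpy as np

def lineament_proximity(lineament_mask, cell_size=30.0):
    """Distance (in map units) from every cell to the nearest lineament cell.

    Brute-force version for illustration only; suitable for small grids.
    """
    rows, cols = np.indices(lineament_mask.shape)
    cells = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    targets = np.argwhere(lineament_mask).astype(float)
    d = np.sqrt(((cells[:, None, :] - targets[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).reshape(lineament_mask.shape) * cell_size

if __name__ == "__main__":
    mask = np.zeros((60, 60), dtype=bool)
    mask[np.arange(60), np.arange(60)] = True      # a synthetic NE-SW lineament
    proximity = lineament_proximity(mask)
    favourable = proximity <= 150.0                 # e.g. within a 150 m buffer
    print("cells inside the 150 m lineament buffer:", int(favourable.sum()))
```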
The technique of mapping fracture traces and local lin-
eaments from aerial photographs, which has now been
Fig. 19.101 GRACE derived estimate of change in terrestrial water extended to satellite imagery for locating zones of higher
storage in the NW Gangetic Plains, India (Rodell et al. 2009) permeability in hard-rock terrain, was developed by Parizek

(1976; Lattman and Parizek 1964). It has been shown by which has a temperature corresponding to that of the land, is
Parizek (1976) that wells located on fracture traces (linea- cooler than the sea water, the temperature difference being
ments) yield about 10−1000 times more water than the wells generally of the order of 3–5 °C. After eliminating thermal
in similar rocks and topographical conditions, but located patterns due to surface drainage, the remaining sites of
away from fracture traces. Further, wells are found to be thermal plumes may be attributed only to subsurface dis-
more consistent in yield when located on lineaments than charge. The NIR band image can be interpreted for linea-
under other conditions. ment–geological structure, which can help delineate
Whereas lineaments are of significance in groundwater fractures apparently controlling freshwater discharge into the
studies, not all lineaments are of the same type and it is sea (Fig. 19.102b, c).
important to differentiate between different types of linea-
ments to understand their hydrogeologic significance (Tam 3. Weathered zones and alluvial fills.
et al. 2004; Chandra et al. 2006; Sander 2007; Singhal and
Gupta 2010; Acharya et al. 2012; Bhuiyan 2015). The ten- For groundwater exploration, identification of suitable geo-
sile lineaments are generally open and have higher hydraulic morphic units is important as features like weathered zones,
conductivity than the shear lineaments which are tight, buried pediments and alluvial fills may form potential sites
closed and may have little hydrogeologic potential. The for groundwater targeting in hard-rock terrain. Aerial pho-
distinction between tensile and shear lineaments is described tographs have conventionally been used for this purpose.
in Sect. 19.4.5. Now, with the advent of good-spatial-resolution space sen-
Bhuiyan (2015) attempted a correlation of lineament sors, satellite data are also profitably applied for such stud-
types and hydrogeological characteristics of adjacent vadose ies. Figure 19.103 shows an example.
zone in a part of the Aravalli mountain range, using statis-
tical measures on a large number of data sets. He found 4. Karst features.
incidence of comparatively higher water-level fluctuations
near areas of curvilinear drainage lines, antiformal fold axes Karst features deserve special mention in the context of
and lateral-slip faults, as compared to areas near straight groundwater. Karst terrains are characterized by circulation
drainage lines, synformal fold axes, and faults with vertical of subsurface water. As the limestone rock gets dissolved by
slip. This further highlights the need of understanding the the water, formation of pits, cavities, depressions, sink holes
nature of lineaments for such applications. and caves takes place. The surface of such a karst terrain is
highly uneven marked by uneven topography and moisture
2. Freshwater springs in coastal areas. (see Fig. 19.54). Further, there could be instances where one
could see sudden appearing and disappearing streams
Coastal areas are often faced with unique and sometimes (Fig. 19.104a, b), indicating subsurface circulation of water.
severe problems of freshwater supply, especially in
hard-rock terrains. In many cases, surface streams may be 5. Artificial groundwater recharge.
few or intermittent or with insufficient discharge, and it may
be necessary to tap groundwater resources to fulfill the water Hard rock areas often face water scarcity problems and
needs of the area. artificial groundwater recharge is an important issue in such
In hard-rock terrain, the groundwater is contained in areas to augment water resources. Remote sensing-GIS
fractures and zones of secondary porosity. The groundwater, based strategy is best suited for selecting sites of ground-
which is freshwater, moves down the gradient and eventu- 2009, 2016). The important parameters that need to be con-
ally becomes lost in the sea in the form of submarine springs. 2009, 2016). The important parameters that need to con-
Sea water has a relatively higher density owing to higher sidered are commonly: lithology, land cover/land use, lin-
total dissolved solids. Due to the density contrast, the eaments, drainage, and slope. The porous permeable zones
freshwater rises and spreads over the sea surface, forming a like sandy-gravelly beds and highly fractured horizons
plume, as the mixing process of the freshwater with the located at lower topographic elevations constitute better
seawater goes on concurrently. potential sites for artificial recharge of groundwater than the
Figure 19.102 presents an example. The thermal data non-porous and low-permeability horizons like massive
provide thermal anomalies (Fig. 19.102a). During the night granites or shales. Thematic data layers pertaining to these
the land is cooler than the sea, and therefore freshwater, parameters can be readily prepared by remote sensing

Fig. 19.102 Detection of submarine coastal springs at a test site in (lighter) sea. b NIR-band aerial photograph of the same area acquired at
Italy using remote sensing data. a Thermal-IR aerial scanner (pre-dawn) noon. c Interpretation map. A number of fractures controlling freshwater
showing discharge of cooler (darker) groundwater into the warmer discharge into the sea can be delineated (a–c Courtesy of J. Bodechtel)

techniques, and data integration can be conveniently effected


in raster GIS. Methodology of GIS based raster data inte-
gration is discussed in Chap. 18.

19.12 Engineering Geological Investigations

Remote sensing techniques are now routinely used in engi-


neering geological/geotechnical investigations. A great value
of remote sensing data in such cases lies in their synoptic
view, which can be highly useful in predicting likely engi-
neering geological problems and hazards, and suggesting
alternative possibilities and solutions (Belcher 1960; Rib
1975). Moreover, repetitive satellite coverage provides vital
data on geoenvironmental changes occurring with time.
Usually, different stages of engineering investigations
Fig. 19.103 Landform map showing inselbergs/pediments, buried require data on different scales and the present-day remote
pediments and valley fills in a granitic terrain. The corresponding sensing techniques can readily supply inputs on the various
IRS-LISS-II image is shown in Fig. 19.19

Fig. 19.104 a Effluent groundwater seepage in the karst terrain near structures can be readily observed; the image shows very scant
Immendingen, Germany; the Danube River flows from SE to NW; note vegetation, and whatever is present is aligned along the drainage
the dry channel in the upstream region (SE) and water-bearing channel channels controlled by fractures; influent groundwater drainage can be
in the downstream (NW); b Karst in the Nullarbor Plain, Australia; it is clearly interpreted by the sudden disappearance of surface water;
a flat almost tree-less arid or semi-arid terrain and the world’s largest possible dolines are present at the intersection of fractures (a, b Source
single piece of limestone; due to the absence of vegetation cover, karst GoogleEarth)

scales required. The geological features to be studied depend zones, cavities etc.; (13) availability of suitable construction
on the type of engineering project—the commonly required material; (14) sites for ancillary structures; (15) accessibility
parameters being landform, topography, drainage, lithology, of the site by road; (16) resources likely to be submerged in
structure, orientation of discontinuities, soil, surface mois- the reservoir, such as strategic routes, mineral resources, etc.;
ture and weathering properties. and (17) seismic status of the area. Finally, large-scale
photographs and images can help in planning the actual
location and design of dam structures, such as the alignment
19.12.1 River Valley Projects—Dams of the dam axis and the location of spillway, diversion
and Reservoirs tunnels, powerhouse, channels etc. Multispectral images can
also help identify specific geotechnical problems.
Remote sensing techniques provide a wealth of data, crucial Figure 19.105 shows the reservoir Lake Nasser that
for planning, construction and maintenance of river valley resulted from construction of the Aswan High Dam, Egypt.
projects. The data are of special utility with respect to the The dam was completed in 1970, is about 3.8 km long and
study of the following parameters during site selection: 111 m high built on the Nile River. The reservoir Lake
(1) topography of various potential sites; (2) shape of the Nasser is about 550 km long. Figure 19.106 shows the
valley (chord to height ratio); (3) size of the catchment and Three Gorges Dam, built over the Yangtze River, China, and
likely river discharge; (4) amount of storage capacity likely is the largest hydropower project in the world. It is named
to be generated; (5) erosion hazard in the catchment and silt “Three Gorges” as the river passes through three picturesque
yield; (6) nature of valley slopes and areas of potential gorges in this part of its stretch. The dam was completed in
landslides; (7) type of bedrock; (8) depth to bedrock or around 2006, stretches more than two kilometers across the
thickness of overburden; (9) structure and orientation of river, is 200 m high, and has created a reservoir that extends
bedding planes/foliation; (10) presence of faults, shear for 600 km.
zones, and joints etc.; (11) silting sites in the reservoir; Many benefits are associated with large dams—flood
(12) water-tightness of the reservoir and presence of seepage control, increase in agricultural and irrigated land area,

around 2006, stretches more than two kilometers across the


river, is 200 m high, and has created a reservoir that extends
for 600 km.
Many benefits are associated with large dams—flood
control, increase in agricultural and irrigated land area,
hydropower generation, regulated water supply to urban
areas, drought management, increase in fishing industry,
shipping transportation, etc. Major problems commonly
include: relocation and submergence of settlements and
historical-cultural sites in the reservoir, silting hazard in the
reservoir, water logging, and decreased water supply in the
downstream region.
Large river valley projects often have environmental
Fig. 19.105 Landsat image showing the area west of Lake Nasser, implications associated with them, which need to be moni-
Aswan High Dam, Southern Egypt; several new lakes have been
created from Nasser’s excess water (courtesy of NASA) tored. For example, in the case of Aswan Dam (Fig. 19.105),
several new lakes have been created from the excess water of
Lake Nasser. This has brought water to this part of Sahara
for the first time in about 6000 years. As also seen from the
image, the terrain comprises mainly longitudinal dunes,
which would have low permeability. Therefore, the new
lakes are likely to lead to problems of water logging and
associated hazards. Repetitive remote sensing data can
enable long-term monitoring of such environmental aspects.

19.12.2 Landslides

Landslides are downward and outward movement of


slope-forming material due to gravity and are particularly
important in projects related to highways, railroads and dam
reservoirs in mountainous terrains. Landslides are best
studied on scale of about 1:10,000−1:25,000 that provide
spatial resolution of about 5–10 m on the ground. The high
spatial resolution satellite sensor data are now quite routinely
Fig. 19.106 Landsat image showing the Three Gorges Dam con-
structed over the Yangtze River, China utilized for such investigations.
While studying remote sensing data for landslides, the
most useful strategy is to identify situations and phenomena
which lead to slope instability, e.g. (1) presence of weak and unconsolidated rock material, (2) bedding and joint planes dipping towards the valley, (3) presence of fault planes and shear zones etc., (4) undercutting by streams and steepening of slopes, (5) seepage of water and water saturation in the rock material, and (6) increase in overburden by human activity such as movement of heavy machinery, construction etc. As remote sensing data provide a regional view, areas likely to be affected by landslide can be easily delineated for further detailed field investigations (Wieczorek 1984).

Landslides are marked by a number of photo-characteristics on panchromatic images and photographs (Rib and Liang 1978), viz. sharp lines of break in the topography, hummocky topography on the down-slope side, abrupt changes in tone and vegetation, and drainage anomalies such as a lack of proper drainage on the slided

Fig. 19.108 A swarm of landslides and debris flow tracks near


Uttarkashi, Himalayas (IRS-1D PAN image; bar on lower left corner =
Fig. 19.107 IRS-1D PAN image showing the occurrence of landslide
400 m); the light-toned scars-edges in the higher-elevated areas with
in the Gola river valley (Himalayas). Note that the Gola river, flowing
fan apex-downward are the source areas, and the thin linear light-toned
from east to west in this section, is blocked by the landslide debris
features are the debris flow track (Gupta and Saha 2000)
originating from the southern slope of the valley. The various typical
photo-characteristics of landslides (sharp lines of break, abrupt change
in tone, vegetation, lack of drainage on the debris) are well depicted
(image courtesy of A.N. Singh)
the expected frequency of mass movements in the area is
available. Landslide hazard zonation is a process of ranking
different parts of an area according to the degrees of actual or
debris. Figure 19.107 presents an example of a landslide potential hazard from landslides.
occurring in the Gola river (Kumaon, Himalayas). The evaluation of landslide hazard is a complex task as
During the last decade, InSAR techniques have shown the occurrence of a landslide is dependent on many factors.
distinct promise for satellite based landslide investigations. With the advent of remote sensing and GIS technology, it
For example, time series InSAR analysis of ALOS/PALSAR has become possible to efficiently collect, manipulate and
image data revealed deformation of slopes in the range of integrate a variety of spatial data, such as lithological map,
30–70 mm year−1 regularly over a 3-year period, prior to the structural data (lineaments, faults etc.), land use land cover,
major landslide that occurred in 2010 (Zhouqu landslide, surface conditions, and slope characteristics of an area,
China) killing >1700 people (Sun et al. 2015a). Slow- which can be used for landslide hazard zonation (Gupta and
moving landslides, also called creeps, can also be which can be used for landslide hazard zonation (Gupta and
monitored by InSAR techniques (Sun et al. 2015b). eral statistical data processing techniques such as ANN,
Another related feature that needs attention in this context fuzzy, combined neural-fuzzy (Kanungo et al. 2006) and
is the debris flow track. Debris flows derive their source analytical hierarchy process (Kumar and Anbalagan 2016)
material from landslides, and move in surges, not in a con- have also been applied for remote sensing—GIS based
tinuous manner. They may remain almost just inactive and landslide hazard zonation.
dry for most part of the year but may carry a sizeable amount The example of GIS-based data integration methodology
of debris during rainy season, sufficient to block the trans- presented here is a rather simple straightforward one and
portation network. It is therefore necessary to identify and pertains to a study in a part of the Bhagirathi Valley,
map debris flow tracks during planning of developmental Himalayas (Saha et al. 2002) (Fig. 19.109). It utilized dif-
activities, particularly for highways, in mountainous terrains ferent types of data, including topographic maps, DEM,
(Gupta and Saha 2000; Fig. 19.108). lithological and structural maps, remote sensing multispec-
Landslides can also be triggered by earthquakes; for tral and PAN sensor data, and field observations. Processing
example in the case of Gorkha earthquake (Nepal 2015), a of multi-geodata sets was carried out in a raster GIS envi-
total of >4000 co-seismic and post-seismic landslides were ronment to generate the following data layers:
mapped (Kargel et al. 2016).
Landslide hazard zonation. Landslides cause widespread • buffer map of thrust faults
damage the world over, every year. Mitigation of disasters • buffer map of photo lineament
caused by landslides is possible only when knowledge about • lithology map

• land-use/land-cover map
• buffer map of drainage
• slope angle map
• relative relief map
• landslide distribution map (training area).

Fig. 19.109 Scheme of data integration in GIS for landslide hazard zonation (Saha et al. 2002)

As landslides are caused by a collective interaction of the above factors, the relative importance of these factors was estimated. A simple approach that involved putting all data on an ordinal scale and then implementing a weighting–rating system for integration was adopted (Fig. 19.109). The Landslide Hazard Index (LHI) frequency was used to delineate various landslide hazard zones, namely, very low, low, moderate, high and very high, which were validated from field data (Saha et al. 2002).
etc.); (4) slope stability (orientation of structural features vis-
à-vis slopes); (5) volume of earthworks involved; (6) surface
19.12.3 Route Location (Highways drainage characters and water channel crossings;
and Railroads) and Canal, Pipeline (7) groundwater conditions; (8) availability of suitable con-
and Tunnel Alignments struction material; (9) amount and type of clearing required;
and (10) property-value compensation. This can give an idea
Highways, railroads, canals, pipelines and tunnels are all of the total length of the structure and the economics
linear engineering structures, i.e. a single engineering involved.

For projects such as highways, railroads, canals and pipelines, surface geotechnical investigations, which could be mainly field and remote sensing based, are largely sufficient. However, tunnels, as they are located several hundred meters below the surface, need a different strategy for investigations. They need to have subsurface data obtained through drilling, trenching etc. Further, it is necessary that the geotechnical features or conditions of the rock mass (shear zones, faults, joints, bedding, rock types, groundwater conditions etc.) as observed in the field, interpreted from satellite images and/or deduced from drilling, are integrated and projected to the tunnel axis for their proper appraisal. This has paved the way for 2D and 3D GIS. GIS based management of geological and geotechnical information dedicated to tunnelling was initiated by Kim et al. (2005) and Zheng et al. (2010). Thum and De Paoli (2015) developed a GIS based approach to perform geological survey and to automate a part of the geological/geomechanical mapping during tunnel excavation. This enabled the calculation of volumes of wedges, assessment of the adequacy of rock support etc., and enhanced the capability of anticipating forthcoming geological problems during tunneling.

19.13 Neotectonism, Seismic Hazard and Damage Assessment

Earthquakes cause great misery and extensive damage every year. The technology of earthquake prediction, to enable the sounding of warning alarms beforehand to save people and resources, is still in its infancy. However, the earthquake risk is not the same all over the globe, and therefore seismic risk analysis is carried out in order to design structures (such as atomic power plants, dams, bridges, buildings etc.) in a cost-effective manner. Seismic risk analysis deals with estimating the likelihood of seismic hazard and damage in a particular region. It is based mainly on two types of input data: (1) neotectonism, i.e. spatial and temporal distribution of historical earthquakes, and observation of movements along faults, and (2) local ground conditions, because the degree of damage is linked to the local ground and foundation conditions. Remote sensing can provide valuable inputs to both these aspects. Further, high-resolution remote sensing is becoming a powerful tool in damage assessment. Therefore, the discussion here is divided into three parts: (1) neotectonism, (2) local ground conditions, and (3) damage assessment.

19.13.1 Neotectonism

Earthquakes are caused by rupturing and movement accompanied by release of accumulated strain in parts of the Earth's crust. Most earthquakes are caused by reactivation of existing faults, as they provide the easiest channels of release of strain—the natural lines of least resistance. Remote sensing can help in locating such active and neotectonic fault zones, and this information could be well utilized by earthquake engineers while designing structures.

Neotectonic or active faults are considered to be those along which movements have occurred in the Holocene (past 11,000 years). Seismologists distinguish between neotectonic and active faults, calling neotectonic those which have been active in geologically Recent times, and active those which exhibit present-day activity. However, no such distinction is made in this discussion here. Evidence for neotectonic movements may comprise one or more of the following: (1) structural disruption and displacement in rock units of age less than 11,000 years, (2) indirect evidence based on geomorphological, stratigraphic or pedological criteria, and (3) historical record of earthquakes.

1. Structural disruption and displacement in the rock units of Holocene age. This forms a direct indication of neotectonic activity. Commonly, high spatial resolution remote sensing data coupled with ground data are useful in locating such displacement zones, e.g. in Holocene terraces, alluvium etc. A prerequisite in this case is knowledge of the age of the materials in which the displacement is mapped. For example, in the Cottonball Basin, Death Valley, California, Berlin et al. (1980), using 3-cm-wavelength radar images, deciphered two neotectonic faults in the evaporite deposits that are less than 2000 years old. The delineation was made possible as the disturbed zone is represented by a somewhat more irregular surface than is found in immediately adjacent areas. Figure 19.110 is an image of the Aravalli hills, Rajasthan. The rocks are strongly deformed Precambrian metamorphics and possess a general strike of NNE–SSW. The Landsat image shows the presence of an extensive lineament (L-L in Fig. 19.110) in the Recent sediments, on the western flank of the Aravalli hill range. It is marked by numerous headless valleys, off-setting of streams and abrupt changes in gradients of streams (alignment of knick points), indicating a Recent fault. The aerial photographic interpretation of Sen and Sen (1983) is in conformity with the above Landsat image interpretation. This fault extends for about 300 km in strike length, parallel to the Aravalli range, and can be called the western Aravalli fault. It is inferred that the fault has a strike-slip displacement with a left-lateral sense of movement, and a vertical component of movement with the eastern block relatively upthrown. Figure 19.111 shows the Kunlun fault, which bounds Tibet on the north. This is a gigantic strike-slip fault running for a strike length of about 1500 km. The geological data

Fig. 19.110 The Landsat image shows a prominent lineament L−L extending for >100 km along strike. The lineament is marked by morphological features such as headless valleys, off-set streams and alignment of knick points, indicating it to be a neotectonic fault. It is inferred to have a left-lateral strike-slip component and a vertical component of movement with the eastern block upthrown [Landsat MSS2 (red-band) image of part of the Aravalli hill ranges, India; N = Nimaj; D = Deogarh]

show that the Indian plate is moving northwards, which is believed to have resulted in left-lateral motion along the Kunlun fault, uniformly for the last 40,000 years at a rate of 1.1 cm/year, giving a cumulative offset of more than 400 m. On the image, the Recent activity of the fault is clearly manifested in terms of sedimentary rocks being brought in contact with alluvial fans and displacement of young streams.

Fig. 19.111 The Kunlun fault, one of the gigantic strike-slip faults that bound Tibet on the north. In the image, two splays of the fault, both running E–W, are distinctly shown; the northern fault (A-A) brings sedimentary rocks of the mountains against alluvial fans on the south; the southern fault (B-B) cuts through the alluvium; off-sets of young streams with left-lateral displacement are observed (courtesy of NASA/GSFC/MITI/ERSDAC/JAROS, and US/Japan ASTER Science Team)

The examples shown in Figs. 19.30 and 19.42 depicting large faults in the Holocene sediments could also be grouped under this category.

2. Indirect evidence based on geomorphological features. Mapping of present-day morphological features can provide important, though indirect, clues for delineating neotectonism. Characteristic patterns such as bending and off-setting of streams, ridges, sag-ponds, springs, scarps, hanging and headless valleys, river capture etc., and their alignments in certain directions indicate Recent movements. These features may be relatively difficult to decipher in the field, and more readily observed on remote sensing images, due to their advantage of plan-like synoptic overview.

The Insubric–Tonale Line (Fig. 19.112a) provides an example. It is a major tectonic feature in the Alps that runs for a distance of more than 100 km in nearly straight E–W direction, disregarding all geological–structural boundaries. On the Landsat image, the Insubric–Tonale Line appears as a well-defined zone, marked by drag effects, indicating a right-lateral sense of displacement. Based on field data, Gansser (1968) and Laubscher (1971) also inferred displacement of similar type along this zone. Figure 19.112b shows the Insubric–Tonale Line together with other neotectonic lineaments deciphered on the basis of Landsat image interpretation; some of these lineaments possess left-lateral and some right-lateral displacement. However, the sense of movement along these lineaments as interpreted from the Landsat data is in conformity with the orientation of the

Fig. 19.112 a The Insubric–Tonale Line, Eastern Alps; the present-day geomorphological features on either side of the geotectonic boundary are aligned with drag effects indicating a right-lateral sense of displacement. b Neotectonic lineaments in a section of the eastern Alps interpreted from Landsat images; these lineament features, with their sense of movement, are in conformity with the orientation of the present-day stress field (shown as P1) deduced from fault-plane solutions and in-situ stress measurements (a, b Gupta 1977a, b)

present-day stress field as deduced from in-situ stress measurements and fault-plane solution studies in Central Europe (Gupta 1977a).

Aseismic creep exhibited along the Hayward fault, California, is another interesting example of neotectonic movement. Figure 19.113 is an interferogram generated from the pair of C-band ERS-SAR data sets acquired in June 1992 and September 1997. A gradual displacement of 2−3 cm, with a right-lateral sense of movement, occurred during the 63-month interval between the acquisition of the two SAR images. The fault movement is aseismic because the movement occurred without being accompanied by an earthquake.

3. Historical record of earthquakes. The data record on past (historical) earthquakes is another type of evidence of seismicity. It can be carefully interpreted in conjunction with data on the structural–tectonic setting in order to derive useful information (Allen 1975). The technique of lineament mapping and analysis was discussed earlier (Sect. 19.4.5). The neotectonic potential of lineaments
can be assessed by co-relating historical earthquake data
with lineaments. Figure 19.114a shows the distribution
of earthquakes (magnitude > 6.0) in the region of the San
Andreas fault, California and Fig. 19.114b shows the
SRTM-derived perspective view of the San Andreas
fault.
Micro-earthquake (MEQ) data can also be utilized in a
similar manner for understanding the neotectonic poten-
tial of lineaments. Figure 19.115a is a Landsat MSS
image showing the presence of an important lineament in
Shillong plateau (India). Micro-earthquake epicenters
appear to preferentially cluster along the lineament,
which points towards the neotectonic activity along this
lineament (Fig. 19.115b).
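A minimal sketch of such a correlation is given below. The coordinates, the poly-line lineament trace and the 5-km search distance are hypothetical illustrations, not values from the Shillong plateau or San Andreas studies cited above; a real analysis would use a projected epicentre catalogue and a digitized lineament map.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Perpendicular (map-unit) distance from point p to segment a-b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def epicentres_near_lineament(epicentres, lineament, max_dist_m=5000.0):
    """Return (epicentre, distance) pairs lying within max_dist_m of a
    lineament given as a poly-line of (x, y) vertices.
    Coordinates are assumed to be in a projected (metric) system."""
    hits = []
    for p in epicentres:
        d = min(point_to_segment_distance(p, lineament[i], lineament[i + 1])
                for i in range(len(lineament) - 1))
        if d <= max_dist_m:
            hits.append((p, d))
    return hits

# A markedly higher share of epicentres inside such a corridor than in the
# surrounding region supports neotectonic activity along the lineament;
# the search distance is an analyst's choice, not a fixed rule.
```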

19.13.2 Local Ground Conditions

Damage resulting from an earthquake varies spatially. Close


to the epicentre, the point directly above the initiation of
rupture, disaster is far more severe, and farther away, it
generally decreases due to reduced intensity of vibration. Post-earthquake surveys rely on field observations of damage to different types of buildings and structures. Within the

Fig. 19.113 Aseismic creep along the Hayward fault, California. Based on SAR interferogram generated from images acquired in June 1992 and September 1997, aseismic creep of 2–3 cm with right-lateral sense of movement has been inferred (Source: photojournal.jpl.nasa.gov)

Fig. 19.114 a Relationship of earthquake (magnitude > 6.0, 1912– from the SRTM (February 2000). The view looks south-east; the fault is
1974) and Quaternary faulting, southern California (simplified after the distinct linear feature to the right of the mountains (courtesy of
Allen 1975). b Perspective view of the San Andreas fault generated NASA/JPL/NIMA) (source photojournal.jpl.nasa.gov)

Fig. 19.115 a Landsat MSS image (infrared) of a part of Shillong the region; in the north is the Brahmaputra river. b Micro-seismicity
plateau (India). Note the prominent lineament running between map showing alignment of MEQs along the lineament (mapped as
Dalgoma (DA) and Durgapur (DU), for a distance of more than Dudhnai fault) (b after Kayal 1987)
60 km. Cross marks indicate the rocks of carbonatite-type reported in

same zone of vibration or shock intensity, the damage may Bihar), where the intensity of disaster was most severe.
vary locally, being a function of both the type of structure However, subsequent detailed seismological analysis (See-
and ground conditions. Some of the ground materials ber et al. 1981) has shown that the 1934 earthquake epi-
forming foundations are more susceptible to damage than centre was probably located in Nepal, about >100 km away
others. Remote sensing can aid in delineating different types from the main damage zone. The widespread and severe
of foundation materials, such as soil types etc., which may damage in Bihar was a result of liquefaction of soil in the
have a different proneness to earthquake damage. alluvial plains, and the striking feature is that the slump belt
Liquefaction during the north Bihar earthquake (1934). —the zone of liquefaction—is located far from the epicentral
In the north Bihar (India) earthquake of 1934, extensive estimates (Chander 1989).
damage occurred in the northern plains of Bihar Liquefaction is a peculiar problem in soils and occurs due
(Fig. 19.116a). Based on the initial analysis, it was postu- to vibrations in saturated, loose alluvial material. It is more
lated that the epicentre was located near Madhubani (in severe in fine sands and silts than in other materials (Prakash

Fig. 19.116 a Disaster map of the north Bihar earthquake, 1934; epicentre. b Landsat TM image of part of the above area. The dark
isoseismals on Mercalli scale are redrawn after GSI (1939); much zone on the image is a wet clayey zone, north of which lie fine
damage occurred due to soil liquefaction in the slump belt; epicentral sands, a lithology more susceptible to liquefaction; note that the
estimates of the earthquake after GSI (Roy 1939) (R) and Seeber boundary passing north of Darbhanga (D), seen on the image,
et al. (1981) (SA) are indicated; note that the slump belt is located matches closely with the southern limit of the slump belt in (a) (a,
quite a distance from the recent estimates of the earthquake b Gupta et al. 1998)

1981). Figure 19.116a shows the zone of soil liquefaction Liquefaction during the Bhuj (Kutch) earthquake (2001).
mapped soon after the earthquake of 1934 (GSI 1939). A severe earthquake struck western parts of India on 26
Figure 19.116b is the Landsat TM image (25 May 1986) of January 2001. It caused extensive damage in the area around
part of the area. On the image, a gradational boundary can be Bhuj (Kutch), where the epicentre was located. The earth-
marked separating alluvial (fine) sands on the north from a quake was also accompanied by substantial discharge of
wet clayey zone (dark tone, abundant backswamps etc.) on water from subsurface to surface, due to soil liquefaction.
the south, and this boundary has a close correspondence with Figure 19.117 obtained from IRS-WiFS sensor gives a time
the limit of the liquefaction zone of the 1934 earthquake series of the phenomenon. Figure 19.117a is a pre-earthquake
(Gupta et al. 1998). The above is in conformity with the image; Fig. 19.117b–d were acquired sequentially after the
ideas that fine sands are susceptible to soil liquefaction earthquake, and show the emergence of some water on the
during vibrations, whereas clayey zones are not. surface and its gradual drying up (Mohanty et al. 2001).

Fig. 19.117 Soil liquefaction during the Bhuj earthquake (26 January surface. c Image of 29 January 2001 shows substantial spread of water
2001); the images are from IRS-WiFS, NIR-band. a Image of 23 (arrows). d Image of 4 February 2001, showing that most of the water
January 2001, before the earthquake. b Image of 26 January 2001, channels have dried up (a–d Mohanty et al. 2001)
about 100 min after the earthquake, shows some water surges on the

19.13.3 Disaster Assessment Figure 19.119 shows an example of pre- and


post-earthquake images of the Gorkha earthquake that hit
Disaster following an earthquake gets spread across a region. Nepal on 25th April 2015.
For rescue, relief, and reconstruction purposes, the man-
agement authorities require information about the area,
amount, and type of damage particularly to habitats and 19.14 Volcanic and Geothermal Energy
buildings. Remote sensing techniques play an important role Applications
in this respect because of their fast response, non-contact,
low cost and synoptic view capabilities. Areas of volcanic and geothermal energy are characterized
Remote sensing application to earthquake induced dam- by higher ground temperatures, which can be detected on
age assessment to buildings is reviewed by Dong and Shan thermal-IR bands from aerial and space-borne sensors. In
(2013). There are two basic approaches: (a) those that utilize usual practice, the thermal-IR data are collected at pre-dawn
multi-temporal strategy, i.e. evaluation of changes between hours in order to eliminate the direct effect of heating due to
pre- and post-event images, and (b) those that interpret solar illumination, and minimize that of topography. How-
post-event data (mono-temporal strategy) only. The data ever, daytime thermal-IR data can be well utilized for
used in both cases has included optical, LiDAR and SAR observing volcanic and geothermal energy areas (Watson
images. Whereas optical data has the advantage of easy 1975). The effect of solar heating can be considered to be
interpretability, SAR images have advantage of all-time uniform across a region of flat topography. In the forenoon
all-weather capability. It is generally considered that a spa- (09.00−10.00 h) and late afternoon (16.00 h), when ther-
tial resolution of about 1–0.5 m is adequate for damage mal crossing pertaining to solar heating occurs, differential
assessment purposes. Figure 19.118 shows the damage effect due to solar heating or ground physical properties is
occurring during the Bhuj earthquake (26 Jan 2001). minimal (see Fig. 12.3b); these hours become suitable for

Fig. 19.118 Damage during the Bhuj earthquake (26 January 2001); some buildings have collapsed and some appear to have altered
the IKONOS Pan (1-m resolution image acquired on 2 Feb 2001 shows rooflines (courtesy Spaceimaging.com)
extensive damage to individual buildings caused by the earthquake;

Fig. 19.119 a, b Damage assessment during the Gorkha earthquake Kathmandu; note the conspicuous damage to the Tower and the
(25 April 2015), Nepal; the figure shows pre-earthquake (25 Oct 2014) adjoining areas (courtesy DigitalGlobe)
and post-earthquake (27 April 2015) images of the central part of

picking up geothermal anomalies. Therefore, thermal-IR dioxide into the atmosphere (Fig. 19.120). Monitoring of
remote sensing surveys can be carried out at 09.30 and volcanoes is important in order to understand their activity
16.00 h to map volcanic and geothermal energy areas. and behaviour and also possibly predict eruptions and rela-
ted hazards. Satellite remote sensing offers a means of reg-
ularly monitoring the world’s sub-aerial volcanoes,
19.14.1 Volcano Mapping and Monitoring generating data on even inaccessible or dangerous areas. In
the Central Andes, for example, using Landsat TM multi-
19.14.1.1 Introduction spectral data, Francis and De Silva (1989) mapped a number
Volcanic eruptions are natural hazards that destroy human of features characteristic of active volcanoes, such as the
property and lives and also affect the Earth’s environment by well-preserved summit crater, pristine lava flow texture and
emitting large quantities of carbon dioxide and sulfur morphology, flank lava flows with low albedo, evidence of
post-glacial activity, and higher radiant temperatures (from
SWIR bands). This led them to identify presence of more
than 60 major potentially active volcanoes in the region,
whereas only 16 had previously been catalogued.
A convenient criterion for regarding a volcano as ‘active’
or ‘potentially active’ is that it should exhibit evidence of
having erupted during the last 10,000 years. In the absence of
isotope data, morphological criteria have to be used. A vol-
cano may be taken as potentially active if it possesses such
features as an on-summit crater with pristine morphology or
flank lava with pristine morphology (Francis and De Silva
1989). Surface expression of hot magmatic features associ-
ated with volcanism, particularly at the pre-eruption stage, is
usually of relatively small spatial extent. This implies that the
use of thermal-IR imagery with a high spatial resolution
would be most appropriate to monitor volcanic activity. Our
ability to monitor volcanic activity using satellite remote
sensing is thus constrained by the sensor resolution—in terms
of spatial, spectral and temporal aspects.

Fig. 19.120 Chaitén volcano, Chile, in eruption during May 2008,


19.14.1.2 Lava Flow Mapping
releasing plumes of steam and volcanic ash (Black-and-white printed
from ASTER colour image; courtesy NASA/METI/AIST/Japan Space The Lascar volcano, Chile, has been one of the most active
Systems, and U.S./Japan ASTER Science Team) volcanoes recently, and has been a site of many investigations

Fig. 19.121 JERS-1-OPS


sensor composite of channels 831
(RGB) of the Lascar volcano,
Chile (images approx. 10 km2).
a Pre-eruption; b 1-day after the
eruption, showing the new
pyroclastic flows (printed
black-and-white from FCC)
(Denniss et al. 1998)

(e.g. Oppenheimer et al. 1993; Denniss et al. 1998). The


various products of eruption, viz. volcanic lava, ash, pumice,
plume etc. can be mapped on remote sensing data. Fig-
ure 19.121 shows the extent of pyroclastic flows during the
April 1993 eruption at Lascar. Figure 19.121a corresponds to
pre-eruption where older lava flows can be seen; Fig. 19.121b,
acquired one day after the eruption shows the new pyroclastic
flows emplaced on the N, NW and SE flanks of the crater.
Figure 19.122 shows a Landsat-8 thermal-IR image of Fig. 19.123 ASTER-SWIR band images of the lava flow at Mount
the Holuhraun flow, Iceland. The image was acquired during Etna, Italy (29 July 2007); a in ASTER band 4; b in ASTER band 9; the
a night-time overpass (24 Oct 2014). The lava field is more wider area in (b) demonstrates the detection of radiance from relatively
than 85 km2 in area and 1.4 km3 in volume, one of the cooler surfaces by B9, on the periphery of the lava flow, than by B4,
which detects the emissions only from the very hot lava flow itself
largest in Iceland. (Blackett 2016)
If a hot volcanic lava flow is sensed by a multispectral
sensor with the same pixel size in different bands (e.g.
ASTER having 6 bands in SWIR, all with 30 m spatial wavelength image (Fig. 19.123a, b; Blackett 2017). This is
resolution), then the event appears possessing a wider area due to the fact that longer wavelength band detects radiance
on the longer wavelength image than on the shorter from relatively cooler surfaces e.g. on the periphery of the
lava flow, whereas the shorter wavelength image detects
emissions only from the relatively hot surface in the central
zone of the lava flow.

19.14.1.3 Temperature Estimation


The volcanic vent is found to have a temperature of gener-
ally around 1000 °C, and emissivity would be in the range
0.6–0.8 (Rothery et al. 1988; Oppenheimer et al. 1993).
Features with such high temperatures emit radiation also in
the SWIR region (1.0−3.0 µm), as indicated by the Planck’s
law. Therefore, although, the SWIR region is generally
regarded as suitable for studying reflectance properties of
vegetation, soils and rocks, it can also be used for studying
Fig. 19.122 Night-time (local-time: 22.07) image of Holuhraun lava
field, Iceland, acquired by Landsat-8 on 24 Oct 2014; the volcanic high-temperature features.
eruption started on 29 August 2014 and ended on 27 February 2015; Although the vent may have a temperature of around
the lava field is more than 85 km2 in area and 1.4 km3 in volume, one 1000 °C, it need not occupy the whole of the pixel. For this
of the largest in Iceland; the contours are based on radar images from reason, the temperature integrated over the entire pixel,
a surveillance flight of the coast guard aircraft (courtesy Volcanological
and Natural Catastrophe Group at the Institute for Earth Sciences of would be less than the vent temperature, unless the vent fully
University of Iceland) covers the pixel. The pixel-integrated temperatures for

Fig. 19.124 a Anomaly on


Landsat TM7 image (16 March
1985) within a crater of the
Lascar volcano, Chile;
b corresponding TM7-based
pixel-integrated temperatures (a,
b Francis and Rothery 1987)

Landsat TM/ASTER resolution are found to be around 200−400 °C. The thermal-IR band Landsat TM6 gets saturated at 68 °C and the ASTER-TIR bands have sensitivity up to 97 °C; therefore, the Landsat TM/ASTER thermal-IR bands are not ideally suited for studying high-temperature objects at 200−400 °C. On the other hand, the SWIR bands, viz. Landsat TM7, TM5, and ASTER-SWIR, can be used for such investigations.

The procedure of temperature estimation was discussed earlier (Sect. 12.4). It involves the following main steps: (1) determination of emitted radiation for each pixel, including subtraction of radiation from other sources, such as solar reflected radiation etc.; (2) conversion of corrected DN values into emitted radiance; and finally (3) conversion of emitted spectral radiance into radiant temperature (pixel-integrated temperature) values. Figure 19.124a shows a Landsat TM7 image exhibiting a thermal anomaly within a crater on Lascar, with computed pixel-integrated temperatures in Fig. 19.124b.

An active lava flow will consist of hot, incandescent, molten material in cracks or open channels, surrounded by a chilled crust. Therefore, thermally, the source pixel will be made up of two distinct surface components: (1) a hot molten component (occupying fraction 'p' of the pixel), and (2) a cool crust component which will occupy the remaining (1 − p) part of the pixel (see Fig. 12.19). Using the dual-band method (Matson and Dozier 1981), the temperature and size of these two sub-pixel heat sources can be calculated (see Sect. 12.4.4). Rothery et al. (1988), Glaze et al. (1989) and Oppenheimer (1991) adapted this technique to estimate sub-pixel temperatures at several volcanoes.

Thus, satellite remote sensing using SWIR bands can generate data for understanding the cooling of lava flows, information that would hardly be available at erupting volcanoes by any other technique.
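A minimal sketch of the two-component (dual-band) idea described above is given below. It is an illustrative reading of the Matson and Dozier (1981) mixture model, not the exact procedure of Sect. 12.4: emissivity and atmospheric effects are ignored, the band wavelengths and crust temperature are analyst-supplied assumptions, and the input radiances are taken as already converted from DN with the sensor calibration.

```python
import numpy as np

# Planck spectral radiance B(wl, T) in W m^-2 sr^-1 um^-1
# (emissivity ~1 and no atmospheric correction in this sketch)
C1 = 1.19104e8    # 2*h*c^2, W um^4 m^-2 sr^-1
C2 = 1.43877e4    # h*c/k, um K

def planck(wl_um, T):
    return C1 / (wl_um ** 5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def dual_band_subpixel(L1, L2, wl1, wl2, T_crust):
    """Estimate the sub-pixel hot fraction p and hot temperature T_hot (K).

    Assumes each pixel radiance is a two-component mixture,
        L_i = p * B(wl_i, T_hot) + (1 - p) * B(wl_i, T_crust),
    fixes p from the first band for a candidate T_hot, and keeps the
    candidate that best reproduces the second band.

    L1, L2   : corrected spectral radiances in the two SWIR bands
    wl1, wl2 : band-centre wavelengths in micrometres (e.g. ~1.65, ~2.2)
    T_crust  : assumed temperature of the cool crust component (K)
    """
    best = None
    for T_hot in np.arange(T_crust + 50.0, 1500.0, 1.0):
        p = (L1 - planck(wl1, T_crust)) / (planck(wl1, T_hot) - planck(wl1, T_crust))
        if not (0.0 < p <= 1.0):
            continue
        L2_pred = p * planck(wl2, T_hot) + (1.0 - p) * planck(wl2, T_crust)
        err = abs(L2_pred - L2)
        if best is None or err < best[0]:
            best = (err, p, T_hot)
    if best is None:
        return None, None
    return best[1], best[2]
```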
19.14.1.4 Plume Observations
Plume columns as high as tens of kilometres may accompany large volcanic explosions. Due to such heights, adiabatic cooling takes place. Therefore, the plumes are marked by a negative thermal anomaly and a higher reflectance in the VNIR. Jayaweera et al. (1976) were some of the first to observe a direct eruption of a volcano on remote sensing (NOAA) data and measure the height of the plume from such data as shadow length and sun elevation. The dimensions of a plume at a particular instant can be calculated using simple trigonometric relations and data on plume height. Such data are useful in aviation management and damage assessment.

The Eyjafjallajokull Volcano, Iceland, erupted in 2010 emitting a huge quantity of smoke and gas. This eruption was specifically covered by ASTER at every possible overpass, day and night. All data were processed to either brightness temperature in Celsius, VNIR reflectance, or plume composition. Composition of the plume was derived from a spectral deconvolution approach using laboratory TIR spectra. Figure 19.125a is a day-time plume composition image derived from the thermal-IR data, and Fig. 19.125b is an example of a night-time temperature distribution image derived from the thermal-IR data. The entire set of data allows for a more precise determination of thermal output, monitoring of potentially new temperature anomalies, and determination of the products in the plume (source: http://ivis.eps.pitt.edu/data/iceland/).

19.14.1.5 Global Monitoring and Early-Warning of Volcanic Eruption
Global monitoring of volcanic activity is an issue of utmost concern and priority. Significant advances have been made in the field of global monitoring and early-warning of volcanic eruptions using satellite observations. Broadly, there are two main approaches to the problem: (a) detecting

Fig. 19.125 ASTER images of Eyjafjallajokull Volcano, Iceland, b night-time temperature distribution image derived from the
acquired during the eruption in 2010; a day-time plume composition thermal-IR data (12 May 2010) (Source http://ivis.eps.pitt.edu/data/
image derived from the thermal-IR data (3 May 2010), note the iceland/) (accessed on 6th Nov 2016)
distribution of silicate ash, SO2 and water vapour at that instant;

thermal anomalies, and (b) detecting surface deformation Using ASTER-TIR image data, Pieri and Abrams (2005)
(InSAR application). detected pre-eruption thermal anomaly of Chikurachki
volcano, Russia (Fig. 19.126). This observation and
(a) Methods utilizing thermal anomalies ASTER sensor capability was further corroborated by
Carter et al. (2008). Carter and Ramsay (2010) analysed
Pre-eruption indications of a volcanic activity may be in the
form of surface temperature anomalies—that are small in
spatial extent, and have a limited time window. This means
that ideally the satellite-based pre-eruption warning system
ought to have a high spatial resolution and a high repetivity.
However, both these requirements are difficult to fulfill
concurrently. Presently, there are satellite sensors with large
swath width providing high repetivity but with low spatial
resolution; for example: AVHRR and MODIS provide
twice-daily coverage but with a coarse spatial resolution of
*1 km. On the other hand there are satellites sensors with
higher spatial resolution (Landsat TM, ETM+, OLI,
ASTER.) that have good spatial resolution (60–90 m in the
TIR), but a low repetivity (repeat cycle of *16 days).
Nevertheless, in spite of the above limitations, significant
advances have been made in thermal remote sensing in the
context of volcano monitoring.
ASTER-TIR sensor is sensitive to temperatures that range Fig. 19.126 a, b ASTER image (14 Feb 2003) of Chikurachki
from −73 to 97 °C, and has a 1–2 °C detection threshold volcano summit, Russia; the image preceded the eruption by about two
with a ±3 K radiometric accuracy (Yamaguchi et al. 1998); months; white pixels correspond to temperatures of *266 K pixel
this makes it ideal to observe low temperature as well as integrated temperature, whereas the darkest pixels correspond to
*250 K pixel averaged temperature; this indicates that high spatial
slightly hotter thermal features resulting from magmatic resolution thermal IR data has the potential to detect subtle thermal
activity. Thus, ASTER has a unique utility in watching anomalies (on the order of 3–5 K) with typical dimensions of about
world’s volcanoes (Pieri and Abrams 2004). 100 m, that may precede a volcanic eruption (Pieri and Abrams 2005)

Fig. 19.127 False colour


composite generated from
multi-temporal ASTER-TIR
temperature images of Shiveluch
volcano, Russia; the FCC shows
images acquired on 19 May 2001
(red), 11 May 2004 (green), and
29 March 2005 (blue), and
highlights the extent (area) and
magnitude (intensity of the
colour) of the warm deposits
which were generated by the
explosive eruptions on 19 May
2001, 9 May 2004, and 28
February 2005 (Carter and
Ramsay 2010)

ASTER-TIR time series data of Shiveluch volcano (Kamchatka, Russia) pertaining to the period 2000–2009, during which period six explosive eruptions occurred at Shiveluch. Figure 19.127 presents a multi-temporal FCC of the volcano showing hot flows that were outpoured during three different episodes of volcanic activity. Carter and Ramsay (2010) deduced the pixel-integrated temperature of the hottest pixels at the volcano summit in this span of time. They found that the temperature of the hottest pixel at the volcano summit gradually rose till eruption occurred, and then the temperature decreased—this happening in a cyclic manner.

MODVOLC. For pre-eruption warning, the basic strategy using satellite remote sensing is to detect a hotspot at the volcano summit that could be attributed to fresh magma; this would indicate that the volcano could erupt in the near future. For this purpose, SWIR and TIR band data are used in conjunction. For AVHRR, the method used the brightness temperature difference between the SWIR (3.8 µm) and TIR (10.8 µm) bands; wherever the brightness temperature difference exceeded a given threshold of 10 K, the surface was interpreted as possessing a subpixel hotspot that contributed to the SWIR band (Harris et al. 1995). This became the foundation of the OKMOK algorithm that was applied to Aleutian Islands volcanoes (Dean et al. 1998; Dehn et al. 2000).

Presently, MODVOLC is the most pervasive and extensively used volcanic detection thermal algorithm. This also uses the same principle that a hotter (lava flow or magmatic) surface will produce higher spectral radiance at shorter wavelengths than at longer wavelengths. MODVOLC uses MODIS data, which has a rather coarse spatial resolution (500 m in SWIR and 1 km in TIR) but a high repeat cycle of twice daily (with two MODIS satellites in orbit). MODVOLC computes the normalized temperature index (NTI) as (Flynn et al. 2002; Wright et al. 2004):

NTI = (R22 − R32) / (R22 + R32)    (19.25)

where R22 and R32 refer to the spectral radiance values in MODIS B22 (3.9 µm) and B32 (12.00 µm), respectively.
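A minimal sketch of Eq. (19.25) applied to co-registered MODIS grids is given below. The array names are placeholders, and the −0.8 night-time alert level attributed to MODVOLC (Wright et al. 2004) should be treated as indicative rather than definitive.

```python
import numpy as np

def normalized_temperature_index(b22, b32):
    """Normalized Temperature Index of Eq. (19.25) for co-registered
    MODIS band 22 (3.9 um) and band 32 (12.0 um) grids."""
    b22 = np.asarray(b22, dtype=float)
    b32 = np.asarray(b32, dtype=float)
    return (b22 - b32) / (b22 + b32)

def hotspot_mask(b22, b32, threshold=-0.8):
    """Flag pixels whose NTI exceeds an alert threshold.

    The default of -0.8 is the night-time alert level reported for
    MODVOLC (Wright et al. 2004); treat it as indicative only."""
    return normalized_temperature_index(b22, b32) > threshold
```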
(b) Methods utilizing detection of surface deformation (InSAR technique)

The principle of SAR interferometry has been discussed in Sect. 17.2. The technology has made rapid strides during the last two decades and has been applied to all problems of earth sciences where subtle surface changes are involved.

As is well known, emplacement of dykes and evolution of a magma reservoir at shallow depth are precursors to volcanic eruption, and concurrently with that some ground deformation, including volcanic inflation/deflation or flank deformation around volcanoes, may occur, which could be detected by InSAR. With the above background, numerous investigations have been carried out for monitoring volcanoes using SAR interferometry, the world over (e.g. Massonnet et al. 1995; Briole et al. 1997; Lu et al. 2010; Lu and Dzurisin 2010; Qu et al. 2015). This has the potential of developing into a fore-warning tool.

Figure 19.128 presents an example of Kilauea volcano, Hawaii, the world's most active volcano. There is a prominent rift running eastward from the main summit, along which lava eruption and tectonic movements also occur. On

Fig. 19.128 SAR-interferometry image of Kilauea volcano, Hawaii. where each colour cycle represents 1.5 cm of surface motion. The
The image has been generated from SAR overpasses of 11 Feb 2011 and circular pattern of concentric fringes towards the left represent deflation
7 March 2011(Italian Space Agency—ASI constellation of COSMO- of the magma source beneath the Kelauea caldera. The complex pattern
SkyMed radar satellites). On 5 March 2011 (two days before the second towards the right represents the deformation caused by the volcanic
overpass), a large fissure eruption began on the east rift zone of Hawaii’s dyke intrusion and subsequent fissure eruption taking place along the
Kilauea volcano. Surface displacements are seen as contours or fringes east rift zone (courtesy ASI/NASA/JPL-Caltech)

5 March 2011, a large fissure eruption began on the east rift was observed to be maximum (10 cm) at the centre, having
zone of Hawaii’s Kilauea volcano. The interferometric risen at an average rate of 2.5 cm per year. Presumably, this
image (Fig. 19.128) here depicts the relative deformation of could be a result of the intrusion of a small volume of
Earth’s surface at Kilauea. Deflation of the magma source magma below the ground surface. However, subsequent
beneath the Kilauea caldera and deformation caused by the investigations during 2005 showed the uplift to have slowed
volcanic dyke intrusion and subsequent fissure eruption down, and therefore the rate of magma intrusion also
along the east rift zone are well documented (source: http:// apparently declined (source: http://volcano.si.edu/volcano.
www.nasa.gov/topics/earth/features/kilauea_2012.html). cfm?vn=322070).
The Three Sisters volcano region in Oregon, USA, makes As such, the duration, and final culmination of subsurface
another interesting case study. It drew much attention when activity and their intensity and episodes are quite impossible
initially field surveys indicated that a phase of uplift had to forecast, and only continued monitoring can help safe-
started in 1997 in this area. Using satellite SAR data of 1996 guard against possible disasters.
and 2000, the USGS detected uplift of the ground surface Finally, efforts have been made to deduce inter-
over an area of 15–20 km diameter (Fig. 19.129). The uplift relationship between earthquakes and volcanism on global

Fig. 19.129 Differential InSAR


from satellite SAR data (passes in
1996, 2000) of the Three Sisters
volcano region, Oregon. A broad
uplift of the ground surface over
an area of about 15–20-km
diameter, with maximum uplift of
about 10 cm at its centre, is
detected (courtesy of USGS;
interferogram by C. Wicks; http://
vulcan.wLusgs.gov/volcanoes/
sisters/)

scale. Donne et al. (2010) computed heat flux inventory for Krafla area, Iceland. An interesting example of applica-
volcanism from satellite data and inter-related this with tion of Landsat TM data in mapping volcanic and geother-
earthquake activity. With data of the period 2000–2007, they mal areas is available from the Krafla area, Iceland (courtesy
found that earthquake incidence frequently leads to subse- of K. Arnason). In this region the tectono-volcanic activity is
quent increase in heat flux at volcano with-in 1–21 days. manifested through volcanic systems, i.e. a central volcano
Whether a volcano responds or not, depends on several and an associate fissure swarm passing through it. The
factors viz., the earthquake magnitude, distance to the epi- activity takes place episodically rather than continuously,
center, and orientation of the earthquake focal mechanism in with a period of 100−150 years, each episode of activity
respect to the volcano. lasting about 5–20 years. The most recent activity started in
1975 in the Krafla neovolcanic system and continued for
almost a decade. The activity occurred on the 80-km-long
19.14.2 Geothermal Energy N–S-striking fissure swarm, which witnessed an E–W rifting
of *5−7 m. It was accompanied by earthquakes, vertical
The satellite-acquired thermal-IR data have shown limited ground movements, changes in geothermal activity and
utility in the mapping of geothermal areas, mainly due to volcanic eruptions. Although the main magmatic activity
constraints of ground spatial resolution. Although geother- remained subsurface, several (nine) phases of short lava
mal areas, as such, have a large areal extent (several tens of eruptions occurred during this time interval. The magma was
km2), the top surface is by and large relatively cool. Rock highly fluid and spread laterally to solidify in a 36 km2 lava
material is a poor conductor of heat. The geothermal flux or field, no more than 7 m thick on average. A very large
heat is transported from depth to the surface along narrow eruption took place during 4–18 September 1984, barely
fissures and faults, mainly by convection. It is the detection about 3 weeks before a Landsat TM pass acquired a set of
of these narrow warmer zones on the surface that can lead to data on 3 October 1984.
the identification of geothermal areas, and for this adequate Figure 19.130a shows the extent of the lava fields from
ground spatial resolution in the thermal-IR band is required. older (1975–1981) and the later (September 1984) volcanic

Fig. 19.130 a Map of the areal


extent of Krafla lava field from
the older (1975−81) and the
younger (September 1984)
eruptions; b shows the Landsat
TM6 image (3 October 1984);
note that the hot lava from the
September 84 eruption is very
conspicuous (bright) whereas the
older flow has cooled down and is
not discernible on the thermal IR
image; a number of geothermal
features are also observed (a,
b Courtesy of K. Arnason)

eruptions. The TM6 image (Fig. 19.130b) shows the hot spatial resolution is in surveying geothermal resources.
lava of the September 1984 eruption (bright signature). The Although geothermal areas usually extend over tens of
older parts of the lava field, which are 3-year older, have square kilometres, the spatial extent of hot sites on the sur-
already cooled down and are not discernible on the image. face, i.e. exposure, is limited, and mostly bound by certain
Further, some other thermal anomalies are also observed on tectonic structures. Mapping of these structures, along which
the Landsat TM6 of the area, where detectability is largely heat is effectively transported to the surface, is important,
influenced by ground size, and topographic and environ- and this is possible only through sensors of adequate spatial
mental conditions. resolution in the TIR.
Heimaey Island, Iceland. Figure 19.131 is an 8–14 µm A 200–500 m broad zone of the lava along the coast
thermal-IR aerial scanner image of Heimaey, an island appears to have already cooled down completely. This is due
10 km off the south coast of Iceland. The image shows part to cold seawater penetrating the rock through cracks result-
of the local village and a lava flow originating from a vol- ing from extension and shrinking of the cooling lava mass.
cano (located just outside the image) in an eruption in 1973. Also, the eastern-most part of the lava does not show any
The aerial scanner image was acquired in 1985. The eruption increased thermal emission. This is due to the cooling
became famous because of the great efforts to save the vil- activities in 1973 mentioned above. This cold area correlates
lage from the flowing magma, as about 6 million tons of sea well with the area which was sprayed with enormous
water was pumped on part of the flowing magma threatening quantities of seawater during the eruption.
the village and its fishing harbour. These activities acceler- Hot Springs. A hot spring is a spring produced by the
ated the solidification and cooling of the lava and are emergence of geothermally heated groundwater that rises
believed to have saved a considerable part of the village. from the Earth’s crust. Hot springs exist in many locations
Now the village is served by a central heating system which all over the crust of the Earth and differ in temperature and
is operated by pumping water through the porous cracked water discharge from place to place. The high temperature at
lava mass, where it evaporates, and the vapour is then col- depth in some cases, e.g. near magma, may cause water to be
lected in heat exchangers at shallow depth. heated enough that it boils or becomes superheated, and then
The image shows strikingly the surface temperature dis- it may erupt in a jet above the surface of the Earth with
tribution on the lava surface in 1985, 12 years after the steam, called a geyser. In many areas, hot springs possess
eruption. Although the natural pattern of the temperature societal significance, medicinal value and may be also local
distribution has been somewhat disturbed by man-made source of energy; therefore, it is important to locate and map
features (roads, pipelines etc.), some important characteris- hot springs.
tics are obvious. The surface of the lava shows distinct linear Typically, hot springs have small areal extent; whereas
or curvilinear structures of high thermal emission. Between the out-flowing thermally anomalous water channel may
these warm structures, the lava surface is cooler. As massive possess a length of 100–200 m, the width of the water
rock is a very poor thermal conductor, the heat reaches the channel is typically small, barely a couple of meters. This
surface mainly by convection through open cracks and fis- makes their identification on thermal IR satellite images
sures. This example shows clearly how important high

Fig. 19.131 Thermal-IR (8–


14-lm) aerial scanner image
(survey 1985) of Heimaey,
Iceland, showing the temperature
distribution on the lava surface
12-years after the eruption that
occurred in 1973; on the west, a
village is visible with reticulate
road network (Björnsson and
Arnason 1988)

fractures, vents etc. to reach the coal. The fires burn out a
precious energy resource, hinder mining operations and pose
a danger to man and machinery, besides leading to environ-
mental pollution and problems of land subsidence. There is,
therefore, a need to monitor the distribution and advance of
fires in coal fields. Various field methods, such as the delin-
eation of smoke-emitting fractures, thermal logging in bore
holes etc., have been adopted with varying degrees of success
at different places. However, the study of coal fires is a dif-
ficult problem as fire areas are often inaccessible; therefore,
remote sensing techniques could provide valuable inputs.
The first documented study of coal fires using thermal-IR
remote sensing is that of Slavecki (1964) in Pennsylvania
(USA). Ellyett and Fleming (1974) reported a thermal-IR
Fig. 19.132 Aerial thermal imagery of a part of Pilgrim hot springs, aerial survey for investigating coal fires in the Burning
Alaska, acquired with a broad-band thermal FLIR camera; the image Mountain, Australia. Since then, a number of remote sensing
depicts the location of hot springs, seeps, thermal pools, and thermal studies have been carried out world-wide (e.g. Huang et al.
streams draining the geothermal area; heated ground is manifested as
1991; Bhattacharya and Reddy 1994; Mansor et al. 1994;
areas of anomalous snow-melt in the winter-time (28 April 2011) image
(Haselwimmer et al. 2013) Prakash et al. 1995a; Prakash and Gupta 1999; Kuenzer et al.
2008; Martha et al. 2010; Huo et al. 2015).
To deal with problems of coal fires, information is often
quite impossible, with the present spatial resolution of sought on a number of aspects, such as: occurrence, distri-
thermal sensors being in the range of 60–90 m. bution and areal extent of fires, whether the fire is surface or
For a study of the Pilgrim Hot Spring, Alaska, Hasel- subsurface or both, depth of coal fire, temperature of ground
wimmer et al. (2013) acquired *1 m spatial resolution or surface fire, propagation of fire, subsidence etc. Remote
airborne thermal imagery using a broadband (7.5–13 lm) sensing can provide useful inputs on all the above aspects.
FLIR (Forward Looking Infrared) camera. The image data
enabled mapping of various geothermal features including 1. Fires in the Jharia Coal Field, India
hot springs and pools, thermally anomalous ground and ice
free-areas (Haselwimmer and Prakash 2012). Further, the The Jharia coal field (JCF), India, is a fairly large coal field
winter-time TIR data could also detect areas of heated of high-quality coking coal. It covers area of about 450 km2,
ground that appeared as zones of anomalous snow-melt where about 70 major fires are reported to be actively
located close to the stream, a direct indicator of heating from burning (Sinha 1986). Coal fires, both surface and subsur-
the underlying shallow geothermal aquifer (Fig. 19.132). face, are distributed across the entire Jharia coal field
ASTER and WV-2 data also showed the presence of (Figs. 19.133a, b). Detailed investigations have been carried
anomalous vegetation pattern (early green healthy vegeta- out to evaluate the utility of remote sensing technology for
tion) growing over areas of heated ground vis-a-vis the rest the study of coal fires and related problems in the JCF (e.g.
of the area, time window of such an anomaly being short. Prakash et al. 1995a, b, 1997; Saraf et al. 1995; Gupta and
This opens up possibilities of indirect detection of hot Prakash 1998; Prakash and Gupta 1998, 1999; Chatterjee
springs through high spatial and temporal resolution VNIR 2006; Chatterjee et al. 2007, 2016).
satellite data.
Subsurface fires

19.15 Coal Fires Surface temperature of the ground above subsurface coal
fires is usually marked by a mild thermal anomaly of about
Coal fires are a widespread problem in coal mining areas the 4–8 °C, due to the low thermal conductivity of rocks such as
world over, e.g. in Australia, China, India, South Africa, sandstone, shale, coal etc. As Landsat TM6 gets saturated at
USA, Venezuela and various other countries. These coal fires 68 °C, and ASTER TIR at 97 °C, these sensors are
exist as coal seam fires, underground mine fires, coal refuse well-suited for sensing thermal anomalies over subsurface
fires and coal stack fires. The main cause of such fires is fires.
spontaneous combustion of coal occurring whenever it is Figure 19.134 shows an IHS-processed Landsat image of
exposed to oxygen in the air, which may pass through cracks, a part of the JCF. On the lower-left corner is the inset of a

Fig. 19.133 Field photographs


showing a surface fire and
b subsurface fire in the JCF
(a Prakash and Gupta 1999;
b courtesy of S. Sengupta)

observations are corroborated by field measurements


(Fig. 19.135).
Du et al. (2015) developed a self-adaptive threshold
based method for subsurface coal fire detection. This is
based on segmentation and thresholding of image data,
which they applied on ASTER TIR data of Wuda coalfield,
China, to auto-detect spatial distribution of thermal
features.
Estimating the depth of subsurface fires. Subsurface fires
in the JCF occur at varying depth, ranging from just a few
metres up to tens of metres. Estimating the depth of a fire is
important not only for combating fire but also for various
applications, e.g. for hazard assessment, rehabilitation plans
etc. Depth modelling of buried hot features (such as sub-
Fig. 19.134 Processed Landsat TM data of JCF (IHS-processed, I = surface fire) from remote sensing data is still in its infancy,
TM4, H = TM6, S = constant, such that red corresponds to highest as it is quite a difficult problem requiring repetitive TIR data
DNs); the sickle-shaped field above is the Jharia coal field; the relief and estimates of realistic values of various physical param-
shown in the background is from TM4; black linears and patches are eters (Prakash et al. 1995b).
coal bands, quarries and dumps; blue is the background (threshold)
surface temperature, anomalous pixels have green, yellow and red In simple cases, however, a geometric method can be
colours with increasing temperature; on the lower left is an inset of the employed for depth estimation, collectively using informa-
psuedocolour TM6 band; the Damodar river appears in the south tion on geological-structural setting and the position of
(Prakash et al. 1995a) anomalous thermal pixels. The location of a subsurface fire
can be determined from thermal anomalies, and VNIR
Landsat TM6 band sub-scene in psuedocolour, where pixels images can provide information on the location of the out-
related to higher ground temperatures (subsurface fires) are crop (Fig. 19.136a, b). With the field information on orien-
discriminated from non-fire areas, using density slicing. It is tation of the strata, the depth of a subsurface fire can be
obvious that the thermal anomalies related to various land computed using simple planar geometry (Fig. 19.136c).
surface features can be much better located on the IHS image Results obtained by the above method are reported to be in
than on the psuedocolour. Ground surface temperatures reasonable agreement with the field data in the JCF (Saraf
corresponding to thermal anomalies as derived from TM6 et al. 1995). However, the method may have limitations in
data are found to be in the range of 25.6–31.6 °C, the areas of multiple coal seams, particularly if information on
background temperatures being <24 °C. These temperature the specific coal seam with the fire is lacking.

Fig. 19.135 A typical profile of surface temperatures (field measurements) above subsurface fire area; the background temperatures are <24 °C;
the anomalous ground temperatures reach up-to 28 °C in the profile (Gupta and Prakash 1998)

Fig. 19.136 a Landsat TM4 and b Landsat TM6 digital data outputs of the same area showing location of a coal seam and thermal anomaly
respectively; c principle of computing depth of fire from location on thermal anomaly and outcrop (Saraf et al. 1995)
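The planar geometry of Fig. 19.136c can be sketched as follows. This is a simplified illustration assuming a uniformly dipping seam, flat topography and a fire lying within the seam directly below the surface thermal anomaly; the function name and the numbers are hypothetical, not JCF field values.

```python
import math

def fire_depth(horizontal_offset_m, dip_deg):
    """Depth (m) of a subsurface coal fire below flat ground.

    horizontal_offset_m : map distance from the seam outcrop (located on
                          the VNIR image) to the centre of the thermal
                          anomaly, measured along the dip direction
    dip_deg             : dip of the coal seam (field measurement)

    Assumes the fire lies within the uniformly dipping seam vertically
    below the anomaly, so depth = offset * tan(dip).
    """
    return horizontal_offset_m * math.tan(math.radians(dip_deg))

# e.g. an anomaly 60 m down-dip of the outcrop on a seam dipping 20 deg
# gives roughly 60 * tan(20 deg) = about 22 m of cover.
```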

Surface fires

Surface fires in coal fields are features of local high surface


temperature but generally small areal extent. As Landsat
(TM/ETM+/OLI) SWIR bands have the capability to mea-
sure temperatures in the range approx. 150−500 °C range
(see Fig. 12.16), and ASTER-SWIR that in 101–449 °C
range (see Fig. 12.17), these sensors can be used for
studying surface fires.
Figure 19.137 shows a Landsat FCC TM753 (coded in
RGB). The higher DN values in Landsat TM7 and TM5
enable identification of surface fires. TM7 has sensitivity in
the temperature range of 160–277 °C and that TM5 in the
range of 267–420 °C. Pixels with high temperatures
(>267 °C) are radiant in both TM7 and TM5, and so appear
yellow. Pixels with of temperatures <267 °C are radiant in
only TM7, and hence appear red. In many places, a sort of
‘zoning effect’ is seen where red pixels (of relatively lower
temperature) enclose or border the yellow (highest-
temperature) pixels. Fig. 19.137 FCC of Landsat TM753 (RGB); windows I, J, K, L, M
and N depict areas of surface fires; yellow pixels correspond to areas of
The procedure of temperature estimation from TM5 and highest temperatures being radiant in both TM5 and TM7; red areas
TM7 has been described in Sect. 12.4. In the JCF, the (radiant only in TM7) are relatively lower temperatures (see enlarged
pixel-integrated temperatures (based on TM7 and TM5) windows K and L) (Prakash and Gupta 1999)

have been found to range between 217 and 410 °C (Prakash and Gupta 1999).
Further, in many cases, fires do not occupy the whole of the pixel, i.e. only a part of the pixel is filled with surface fire. The pixel-integrated temperatures are therefore less than the actual surface temperatures of fires. In suitable data conditions it is possible to compute sub-pixel area and temperature using the dual-band method developed by Matson and Dozier (1981). The sub-pixel temperatures are found to be in the range of 342−731 °C and sub-pixel areas in the range between 0.2 of a pixel (= 180 m²) and 0.003 of a pixel (= 27 m²) (Prakash and Gupta op. cit.).
The above demonstrates the utility of Landsat SWIR data for delineation and mapping of areas affected by surface as well as subsurface fires in coal fields.
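As an illustration of the dual-band idea, the following minimal Python sketch (an assumed implementation, not the authors' code) inverts the pixel-integrated radiances of two SWIR bands for sub-pixel fire temperature and fractional area. It assumes a known background temperature, nominal band-centre wavelengths of 1.65 and 2.22 µm, and made-up radiance values in the example call.

```python
import numpy as np
from scipy.optimize import brentq

# Planck spectral radiance (W m-2 sr-1 um-1) for wavelength in micrometres
C1 = 1.19104e8   # W um^4 m^-2 sr^-1
C2 = 1.43879e4   # um K

def planck(wl_um: float, temp_k: float) -> float:
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * temp_k)) - 1.0))

def dual_band_fire(L_b1, L_b2, wl1_um, wl2_um, t_background_k):
    """Solve for sub-pixel fire temperature and fractional area from the
    pixel-integrated radiances of two SWIR bands (dual-band approach).

    For a trial fire temperature T, the fire fraction p follows from band 1:
        L_b1 = p*B1(T) + (1-p)*B1(Tb)  =>  p = (L_b1 - B1(Tb)) / (B1(T) - B1(Tb))
    and T is found where the same p also reproduces the band-2 radiance.
    """
    B1b = planck(wl1_um, t_background_k)
    B2b = planck(wl2_um, t_background_k)

    def residual(t_fire):
        p = (L_b1 - B1b) / (planck(wl1_um, t_fire) - B1b)
        return p * planck(wl2_um, t_fire) + (1.0 - p) * B2b - L_b2

    t_fire = brentq(residual, t_background_k + 50.0, 1500.0)
    p = (L_b1 - B1b) / (planck(wl1_um, t_fire) - B1b)
    return t_fire, p

# Illustrative call with made-up radiances for ~1.65 um and ~2.22 um bands and a
# 300 K background; real use needs calibrated, atmospherically corrected radiances.
print(dual_band_fire(L_b1=2.0, L_b2=4.0, wl1_um=1.65, wl2_um=2.22,
                     t_background_k=300.0))
```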
2. Coal Fires in Xinjiang, China

One of the largest deposits of coal in the world occurs in north China, stretching over a region of about 5000 km E–W along strike, and 750 km N–S. Coal fires occur in almost all the fields—in scattered or clustered forms. Several workers (e.g. Huang et al. 1991) have reported coal-fire studies in China using remote sensing. Figure 19.138 (Zhang 1998) shows the airborne thermal-IR image draped over the DEM. The coal fires detected from the thermal-IR scanner appear red, and are distributed generally along the NE–SW strike of the coal seam.

Fig. 19.138 Coal fires in the Xinjiang coal field, China; airborne thermal-IR image data co-registered and draped over the DEM; the coal fires appear aligned and distributed generally along the NE–SW strike of the coal seam (Zhang 1998)

3. Coal Fires in Wuda, China

This is another large coal field located in north China where major fires are known to occur. To study coal fire dynamics in the coal field, Huo et al. (2015) derived land surface temperatures (LST) from remote sensing (Landsat TM/ETM+) data sets and identified thermal anomalies related to coal fires. Based on the results from the long-time (years: 1999, 2000, 2001, 2002, 2003, 2004, 2006) series of data sets, they deduced the gradual spreading directions of coal fires in successive years, and then predicted the likely coal fire development/dynamics for the entire coal field in future (Fig. 19.139).

Fig. 19.139 Coal fires in Wuda coal field, China; the figure shows spreading directions predicted based on the coal fires that were extracted from Landsat TM/ETM+ thermal-IR long-time series data (1999–2006) (Huo et al. 2015)

4. Coal Mine Subsidence

Underground mining areas often face problems of land subsidence, which is caused by volume loss due to mining, underground water, unfilled stopes or their gradual compaction, and subsurface fires, as in the case of coal fields. Generally, subsidence occurs after mining has ceased in an area; however, sometimes it occurs even when a mine is still in operation, in which case it may lead to loss of lives, settlements, resources, infrastructure etc. Therefore, it is one of the worst environmental hazards.
In the earlier days, conventional surveying techniques were used to generate subsidence data and maps. However, with the advent of Synthetic Aperture Radar Interferometry (InSAR) techniques, small-scale surface deformations and elevation changes can be mapped by SAR data. Subsidence studies in coal mining areas using the DInSAR technique have been carried out by Yue et al. (2011) in the Fengfeng coal mine area, China, Engelbrecht et al. (2011) in South Africa, Dong et al. (2013) in the Huainan coal field, China, and Chatterjee et al. (2015, 2016) in the Jharia coal field, India, among others.
Using Radarsat-2 C-band InSAR data pairs of 2012, Chatterjee et al. (2016) identified recently subsiding areas in the JCF. It is well known that dynamic land cover changes result in temporal decorrelation problems for DInSAR processing in mining areas. They innovatively used smaller temporal baseline data pairs and adopted InSAR coherence-guided incremental filtering with smaller moving windows to highlight the deformation fringes over temporal decorrelation noise. The identified deformation fringes were validated with ground precision levelling data. This resulted in detection of several new previously unreported subsidence areas (Fig. 19.140).
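For orientation, the standard relation between unwrapped differential phase and line-of-sight displacement used in such DInSAR studies can be sketched as follows. This is an illustrative snippet only: the wavelength value assumes a C-band sensor such as Radarsat-2, and sign conventions vary between processors.

```python
import numpy as np

def los_displacement(delta_phase_rad, wavelength_m=0.0555):
    """Line-of-sight surface displacement from unwrapped differential
    interferometric phase: d = -(lambda / (4*pi)) * delta_phi.

    wavelength_m defaults to an approximate C-band value (~5.55 cm);
    positive values here indicate motion towards the sensor.
    """
    return -(wavelength_m / (4.0 * np.pi)) * np.asarray(delta_phase_rad)

# One full fringe (2*pi of differential phase) corresponds to about lambda/2
print(abs(los_displacement(2 * np.pi)))   # ~0.0277 m for C-band
```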

Fig. 19.140 Radarsat-2 C-band differential interferogram of the JCF showing DInSAR fringes at multiple locations corresponding to the subsiding areas during 2012; solid black line is the outline of JCF (Chatterjee et al. 2016)

5. Environmental Effects of Coal Fires

High temperature areas related to coal mine fires have a negative effect on vegetation and lead to a reduction in the potential of soil to support plant growth. Therefore, it is important to understand the relationship between vegetation index and ground temperature, both of which can be evaluated from remote sensing data on a spatio-temporal basis. For the study of vegetation from remote sensing data, various vegetation indices can be used (see Sect. 19.17.1). Of these, NDVI has been most extensively used in general. However, as NDVI has been found to be sensitive to soil brightness, the soil adjusted vegetation index (SAVI) may be better used when vegetative cover is rather low (Huete 1988), as happens in many mining areas.
The inter-relationship between SAVI and temperature was studied by Saini et al. (2016) in a part of the Jharia coal field using data from Landsat OLI/TIRS for the year 2013. The SAVI image was generated using the standard algorithm from TOA reflectance data (see Sect. 19.17.1); the brightness temperature image was derived from the thermal-IR data. Figure 19.141 shows the SAVI and temperature images. Profiles drawn along a selected alignment clearly bring out the inverse relationship. It is observed that SAVI values are low where the temperature is high and vice-versa (Fig. 19.141a3), implying a negative correlation between temperature and SAVI.

Fig. 19.141 Inter-relationship between SAVI and temperature in a part of the Jharia coal field where numerous coal fires are known to exist; a1–a3 are generated from Landsat TM (15 Jan 1991) and b1–b3 from Landsat OLI (13 Dec 2013); a1, b1 are SAVI images, a2, b2 are temperature images, a3, b3 are the profiles; note the inverse relationship between SAVI and temperature; also note the change in spatial distribution of SAVI and temperature over a period of ~20 years (1991–2013) (Saini et al. 2016)
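A brightness-temperature image of the kind used in such studies is commonly obtained from the thermal-band radiance with the inverse Planck relation. The following minimal sketch is not from the original text; the K1/K2 values shown are those commonly quoted for Landsat-5 TM band 6 and should in practice be taken from the scene metadata.

```python
import numpy as np

def brightness_temperature(radiance, k1, k2):
    """Convert thermal-IR band TOA spectral radiance (W m-2 sr-1 um-1) to
    at-sensor brightness temperature (kelvin): T = K2 / ln(K1 / L + 1).

    k1, k2 are band-specific thermal calibration constants from the image
    metadata (they differ between TM6, ETM+ band 6 and the TIRS bands).
    """
    radiance = np.asarray(radiance, dtype=float)
    return k2 / np.log(k1 / radiance + 1.0)

# Illustrative radiances only; constants quoted for Landsat-5 TM band 6
L = np.array([7.5, 8.0, 9.2])
print(brightness_temperature(L, k1=607.76, k2=1260.56))
```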

19.16 Snow, Ice and Glaciers

19.16.1 Introduction

Snow is one of the most sensitive and vital natural resources and covers a considerable part of the Earth's surface. On global and continental scales, snow, due to its highly reflective nature and large surface coverage, has a great impact on climate variations, surface radiation balance and energy exchange (Wang and Li 2003). Further, snow cover estimates constitute the primary input parameter for hydrological modelling (Rango and Martinec 1981), glacier mass balance studies (Haeberli et al. 1998) and snow hazard prediction modelling (Quincey et al. 2005). The subject matter of remote sensing of snow, ice and glaciers is so vast that it can fill volumes. This section is a selective brief review designed to introduce the subject and touch upon some of the salient remote sensing applications.

19.16.2 Snow/Ice Facies

Snow has several facies. During winter months at high altitudes and high latitudes, the snow falling on the earth's surface is dry snow, essentially due to the fact that the temperatures of the troposphere and of the earth's surface are both below freezing point. Dry snow is crystalline, granular and flaky, with dendritic or highly branched crystals possessing very low relative density (~0.1–0.5). It comprises solid crystals with no melt or moisture film on crystal surfaces and has a minimal amount of water content and a large amount of air pockets. Dry snow has a very high reflectance, reflecting more than 90% of the incident solar radiation. Because of lack of cohesion, it is susceptible to avalanches.
With the onset of the melting season, as the environmental temperature rises and crosses freezing point (0 °C), snow starts to melt and almost invariably at this time undergoes repeated cycles of partial melting (day) and refreezing (night). This leads to the development of recrystallized small ice crystals with increased density, forming the typical melting seasonal snow, also called névé. Névé is a young granular snow facies that has been partially melted and refrozen. Névé that survives a full season of ablation is called firn, another snow facies, which after metamorphism eventually becomes glacial ice.
In the optical region, snow, melting snow and ice are characterized by a high reflectivity in the visible wavelengths (0.4–0.7 µm). In the near-infrared (0.7–1.0 µm) region, whereas snow has a high reflectivity, melting seasonal snow and ice have a reduced medium to low reflectivity (Dozier and Painter 2004; Hall et al. 1992). The SWIR region (1.0–3.0 µm) is marked by a general low reflectivity and the presence of characteristic snow/ice absorption bands occurring at ~1.5 and ~2.0 µm (Fig. 19.142). The spectral properties of snow/ice can vary with bulk properties of constituent ice grains, metamorphism, age factor, particulate impurities and liquid water content. The absorption of radiation by snow and ice is intense at wavelengths longer than 2.5 µm (Warren and Wiscombe 1980).

Fig. 19.142 Spectral curves of dry snow, wet (melting) snow and glacial ice

Sun is the main source of heat energy leading to the melting of snow. The dry versus melting snow line/zone is influenced by a complex interplay of factors, such as sun direction and angle vis-à-vis local topographic slope and aspect, time of the day, latitude–longitude, besides various atmospheric-meteorological factors. Thus, the dry versus melting snow line/zone could vary in altitude/location within a basin, and is gradational and fuzzy. It is a transient feature that changes position with time within a basin, exhibiting a systematic gradual movement toward higher elevation with time as the snowmelt season advances. As such, satellite remote sensing data appear most optimally suited to collect repetitive data for mapping of the dry versus melting snow line/zone (Fig. 19.143).

Fig. 19.143 Relative distribution of dry snow (white) and melting seasonal snow (cyan-blue) in the Gangotri basin, Himalaya; dry snow occurs at higher altitudes and is fringed by the melting seasonal snow on relatively lower altitudes; tongue-shaped features extending down the valleys represent glacier ice; perspective view generated by draping IRS-satellite standard FCC over the digital elevation model (Gupta et al. 2005)

Although melting (wet) snow and glacial ice appear similar on remote sensing satellite images, the two can be differentiated from each other on the basis of shape, site, and repetition of occurrence on different temporal images. The ice body occurs only in a glacial valley, is lingulate, has a sharp boundary and a permanent position, i.e., it is repeatedly shown at the same place in various images, whereas the wet snow occurs as a fringe adjoining the dry snow all over on hill slopes and valleys, and this fringe changes position with time in the melting season, i.e., appears at different heights/places on images of different dates (Gupta et al. 2005; Gupta 2011).
Anomalous local deviations in the snow cover pattern, viz., late occurrence in the accumulation season and advance presence in the melting season, may indicate a locally higher geothermal gradient, e.g., due to hot springs (see Fig. 19.132).

19.16.3 Snow Cover Mapping

In the context of snow studies, the most important parameter is evidently the snow cover area. Traditionally, snow cover area was mapped using ground surveys and aerial photographic coverage. Now, with the advent of high resolution satellite sensors, remote sensing has effectively and efficiently taken over the task of providing snow cover estimates, as it is aptly suited to make repetitive observations and generate the data in a cost-effective manner on a routine basis globally (Kargel et al. 2005).
Spectral characteristics of dry snow, melting snow and ice have been mentioned above, which facilitate their identification. For snow cover mapping from optical sensor data in a mountainous terrain, two main impediments commonly encountered are mountain shadows and cloud cover. Initially, simple spectral band ratios (e.g. NIR/SWIR; Hall et al. 1987) were used to deal with such problems. Presently, the most extensively used spectral index for snow cover mapping is the Normalized Difference Snow Index (NDSI, Dozier 1989) (Table 19.8).
A spectral index characterizes the basic spectral differences in the classes to be separated and also assists in diminishing the radiometric effects of differential solar illumination and topography. The NDSI is defined as the difference of reflectance observed in a visible band (usually the green band, 0.52–0.59 µm) and the SWIR band (1.55–1.70 µm) divided by the sum of the two band reflectances. The basis of NDSI is the high reflectance of snow/ice in the visible region and a very low reflectance in the SWIR region, thereby providing high values of NDSI for snow/ice covered areas. Data from almost all satellite sensors (TM, ETM+, OLI, MODIS, LISS etc.) operating in these wavelength ranges have been used for NDSI computation.
A typical NDSI histogram is bimodal in character, showing the frequency distribution of snow/no-snow surfaces (Fig. 19.144a). The selection of the threshold value in the NDSI distribution to distinguish between snow/non-snow is a critical task as a slight change can lead to overestimation or underestimation of the areal extent of snow cover. The threshold value is usually found to be 0.4 but may be different for different satellite sensors and for different seasons (Dozier 1989; Hall et al. 1995; Wang and Li 2003). The snow cover area is determined by summing up the number of pixels having NDSI value higher than or equal to the defined threshold value. An NDSI image shows the spatial distribution of snow/ice (Fig. 19.144b), and can be converted into a binary image showing snow/non-snow areal distribution (Fig. 19.144c).
There are two main advantages of using NDSI: (a) discrimination between snow and cloud, and (b) snow mapping under mountain shadows. Both these advantages are related to the spectral characteristics of snow, which are utilized in NDSI computation (Fig. 19.145). Overall, the spectral index NDSI provides a robust and fast statistical method for snow cover area estimation. However, selection of an appropriate threshold value is a critical task, as even a slight variation in threshold value may lead to over- or underestimation.
On lines similar to NDSI, Xiao et al. (2001) proposed the Normalized Difference Snow and Ice Index (NDSII-1)

Table 19.8 Spectral indices for the study of snow/ice

| Name of the index | Authors | Formulation | Utility |
| Normalized difference snow index (NDSI) | Dozier (1989); Hall et al. (1995) | NDSI = (Green − SWIR) / (Green + SWIR) | For differentiating (snow + ice) from non-(snow + ice) |
| Normalized difference snow and ice index (NDSII-1) | Xiao et al. (2001) | NDSII-1 = (Red − SWIR) / (Red + SWIR) | For differentiating (snow + ice) from non-(snow + ice) using SPOT data |
| S3 index | Shimamura et al. (2006) | S3 = NIR (Red − SWIR) / [(NIR + Red)(NIR + SWIR)] | For mapping snow and ice cover under dense forest |
| Normalized difference glacier index (NDGI) | Keshri et al. (2009) | NDGI = (Green − Red) / (Green + Red) | For differentiating (snow + ice) versus ice-mixed-debris (IMD) |
| Normalized difference snow and ice index (NDSII-2) | Keshri et al. (2009) | NDSII-2 = (Green − NIR) / (Green + NIR) | For differentiating snow versus ice |
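A minimal sketch of the NDSI thresholding workflow described above is given below. It is illustrative only: the array values are made up, and 0.4 is used as the commonly quoted default threshold.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectance."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir + 1e-10)   # small term avoids /0

def snow_mask(green, swir, threshold=0.4):
    """Binary snow/no-snow map: pixels with NDSI >= threshold are snow."""
    return ndsi(green, swir) >= threshold

# Tiny illustration with reflectance values (snow: bright green, dark SWIR)
green = np.array([[0.80, 0.25], [0.75, 0.10]])
swir  = np.array([[0.10, 0.20], [0.08, 0.09]])
print(ndsi(green, swir).round(2))
print(snow_mask(green, swir))
```

The snow cover area then follows by counting the True pixels and multiplying by the pixel area.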

Fig. 19.144 a NDSI histogram showing frequency distribution of snow/no-snow surfaces; b image corresponding to the above NDSI; c binary
image corresponding to the NDSI showing snow (white) and non-snow (dark gray) areal distribution (Gupta et al. 2005)

Fig. 19.145 a NDSI image and b green band image; the image pair shows the capability of NDSI image vis-à-vis Green band image in
discriminating snow (Sn) versus cloud (Cl), and for snow mapping under mountain shadow (Sh) (a, b Gupta et al. 2005)

(Table 19.8) for mapping snow/ice cover utilizing the SPOT-4 VEGETATION (VGT) sensor. Shimamura et al. (2006) observed that another index, called the S3 index (Table 19.8), is advantageous for mapping snow cover under dense forest.
Besides, techniques of digital image classification (both per pixel and sub-pixel), described in Sect. 13.12, can also be followed for classification and mapping of snow/ice surfaces (see Arora et al. 2011).

19.16.4 Glaciers

Glaciers are rivers of ice in continuous movement. They are formed by compaction and re-crystallization of snow over hundreds of years, leading to the development of ice that moves gradually down the slope due to gravity. The movement may be very slow, as little as a few millimetres per day, to considerably faster rates, as in the case of surging glaciers. Glaciers are broadly classified as continental glaciers (occurring as ice sheets, e.g. in Antarctica and Greenland) and Alpine glaciers, also called valley glaciers (that occur in mountain valleys, e.g. in the Alps and Himalayas etc.). A valley glacier typically originates from a bowl-shaped snow-ice reservoir called a cirque and is often joined by a number of tributary glaciers in its course before ending at the snout.
A glacier is considered to have two broad zones: (a) an accumulation zone located at a higher elevation and (b) an ablation zone located at a lower elevation, the two being separated from each other by an imaginary line called the equilibrium line. Mass is transferred by glacier flow from the accumulation zone to the ablation zone. Health of a glacier

refers to the mass balance of the glacier, i.e. increase or decrease in size over a period of time. Positive mass balance during a period means relatively greater accumulation due to lower temperature and greater snowfall, and negative mass balance means relatively more melting due to higher temperature and less snowfall during that period.
Glaciers are one of the most sensitive indicators of climate as they grow and shrink in response to the changing air temperature. They are also our storehouse of fresh water as most perennial streams originate from glaciers the world over. Therefore, proper inventorying, mapping, and observing changes in glaciers are of great importance to mankind. As field based methods have inherent limitations in generating repetitive data over large and difficult glacial terrains, high resolution satellite remote sensing is now routinely used to provide inputs in this task.

1. Glacier Retreat

Glaciers all over the world have been experiencing recession at varying rates (Haeberli et al. 1999; Oerlemans 2005; Paul et al. 2007; IPCC 2007). In view of this, studies on glaciers using various field and satellite remote sensing data have been carried out by a large number of workers the world over, particularly during the last two decades. There exists a large volume of literature on this aspect. For example, combining remote sensing and various other data sets, Bolch et al. (2012) generated a big picture of the state of Himalaya–Karakorum glaciers, observing that most Himalaya–Karakorum glaciers have lost mass since the mid-19th century. Bhattacharya et al. (2016) used high-resolution satellite data from 1965 to 2015 (including Corona, Hexagon, ASTER, Landsat TM, ETM+ and OLI data sets) for glacier inventory, snout monitoring and ice-velocity calculations of the Gangotri glacier, Himalaya (Fig. 19.146).

Fig. 19.146 Retreat of the Gangotri glacier during the period 1965–2015 based on high resolution multiple satellite data sets (Corona, Hexagon, ASTER, Landsat TM, ETM+ and OLI) (Bhattacharya et al. 2016)

2. Debris Cover over Glaciers

The term debris is taken here to include all rock materials lying on the glacier or adjacent to the glacier. In the VNIR region, debris has a spectral response similar to the surrounding moraines/rocks, and therefore it poses an impediment in mapping glaciers from remote sensing data. For defining the glacier boundary, it is important to discriminate and map supraglacial debris (debris lying over the glacier) from that lying outside the glacier (periglacial debris). It has been suggested that remote sensing thermal-IR data can be useful in segregating supraglacial debris from the rest (Taschner and Ranzi 2002; Shukla et al. 2010).
Broadly, a glacier terrain can be considered to be typically composed of four main surface cover types: snow, ice, ice mixed debris (IMD) and debris. Figure 19.147 shows the typical spectral reflectance curves of these surface types. Whereas (snow + ice + IMD) as one unit can be segregated from no-snow areas using NDSI, it is still necessary to further identify snow, ice and IMD and map these separately. Keshri et al. (2009) proposed two spectral indices [Normalized Difference Glacier Index (NDGI) and Normalized Difference Snow and Ice Index (NDSII-2)] to facilitate discrimination of snow versus ice versus IMD (Table 19.8).

Fig. 19.147 Spectral reflectance plots of snow, ice, ice-mixed debris (IMD), and debris as derived from ASTER data (after Keshri et al. 2009)

3. Glacier Velocity

Several methods exist for estimating glacier surface ice velocity from satellite data, such as SAR interferometry, SAR image data intensity tracking and optical image data feature tracking. Application of SAR/InSAR techniques for glacier velocity mapping has limitations as visibility of the target glacier may be affected in mountainous terrain due to oblique viewing in SAR sensing. Besides, there may be an additional requirement of accurate Digital Elevation Models (DEMs) for geometric rectification of SAR image data.

Fig. 19.148 Velocity map of ice in southeastern Alaska near Malaspina and Hubbard glaciers generated from Landsat-OLI data (Source http://earthobservatory.nasa.gov/IOTD/view.php?id=89261) (accessed on 19 Dec 2016)

Optical image correlation is a valuable technique used to deduce deformation or displacement of a moving object from optical remote sensing images. The principle involved in this technique is that two images acquired at different times are correlated to find out the shift in position of any moving object, which is then treated as displacement in this time interval. Different methods for correlating images to derive velocity have been developed and applied in glaciology, like normalized cross-correlation, phase correlation, orientation correlation etc. (Heid and Kääb 2012).
Using Landsat-8 OLI PAN data, Fahnestock et al. (2016) created time series maps of ice flow for the period June 2013 to June 2015 for glaciers in Antarctica and Greenland using feature tracking that yielded ~1 m precision. In a similar study utilizing Landsat-8 OLI PAN data, ice velocity maps were generated for a part of south-eastern Alaska near Malaspina and Hubbard glaciers (Fig. 19.148) (source: http://earthobservatory.nasa.gov/IOTD/view.php?id=89261).
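The normalized cross-correlation matching mentioned above can be sketched as follows. This is a simplified, whole-pixel illustration and not an operational feature-tracking implementation; the function names and the synthetic test are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_offset(img1, img2, row, col, half=8, search=5):
    """Shift (drow, dcol) of the template centred at (row, col) in img1 that
    best matches img2 within +/- `search` pixels (whole-pixel precision)."""
    tpl = img1[row - half:row + half + 1, col - half:col + half + 1]
    best, best_shift = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = img2[row + dr - half:row + dr + half + 1,
                       col + dc - half:col + dc + half + 1]
            score = ncc(tpl, win)
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best

# Synthetic test: image2 is image1 shifted by (2, 3) pixels
rng = np.random.default_rng(0)
img1 = rng.random((100, 100))
img2 = np.roll(np.roll(img1, 2, axis=0), 3, axis=1)
shift, score = track_offset(img1, img2, row=50, col=50)
print(shift, round(score, 3))       # expected: (2, 3), ~1.0
# displacement (m) = shift * pixel size; velocity = displacement / time interval
```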
4. Glacier Mass Balance

For the purpose of mass balance studies, optical satellite remote sensing data has also been employed, including for glacier characteristics such as area, thickness and volume (for reviews, see Bamber and Rivera 2007; Racoviteanu et al. 2008).

19.16.5 SAR Data Application in Snow-Ice Studies

The main problem with data from visible and near-infrared sensors is their weather dependency (e.g., influence of the atmosphere on these data, especially the presence of cloud cover). Active microwave sensors provide a possibility to observe the Earth in all-weather, all-time conditions from space. However, SAR data suffers from several limitations such as difficulties in interpretation of the recorded backscatter, complex image processing requirements, necessity of an accurate DEM for geometric rectification and limitation of visibility due to oblique viewing.
One important application of SAR is in dry snow penetration (Hall 1998; Rees 2006). In high altitude and high latitude conditions, snowfall occurs as dry snow during the winter months. This may cover up the locally present ice-water bodies or moraine dam lakes, which the VNIR image data then cannot detect (Fig. 19.149a). Under suitable conditions of snow density and snow thickness, the ice-lakes under snow cover can be delineated by SAR as SAR radiation can penetrate through dry snow (Fig. 19.149b). Figure 19.150 depicts a schematic of the SAR radar return in the two cases. After snow penetration, the SAR signal incident on the plane ice-lake surface is almost specularly reflected, leading to dark gray tones on the SAR image.

On the other hand, the adjacent bedrock/moraine scatters the signal in various directions (diffuse scattering), leading to medium gray tones on the image.

Fig. 19.149 Synchronous AWiFS and RISAT images showing SAR signal penetration through dry snow; a AWiFS-VNIR image dated 25 February 2013 showing only snow cover on top; b RISAT-SAR image dated 24 February 2013 showing the presence of moraine dam lake buried under snow (a, b Singh et al. 2015)

Fig. 19.150 Schematic showing penetration of dry snow by SAR incident signal followed by specular reflection from the planar ice-lake surface and diffused scattering from the uneven bedrock

Besides, it may be summarily mentioned here that SAR data is useful to map snow cover and identify different types of snow required for glacier mass balance estimation. Further, it has utility in quantitative estimation of snowpack characteristics like snow wetness, snow grain size, snow density, and snow water equivalence needed for hydrologic applications (for details, see Venkataraman and Singh 2011).

19.17 Environmental Applications

Environmental geoscience is a highly interdisciplinary field with offshoots extending into almost all scientific disciplines. The application potential of remote sensing techniques for environmental surveillance stems from their unique advantages: a multispectral approach, synoptic overview and repetitive coverage, i.e. the possibility of examining objects in different EM spectral ranges, in the perspective of the regional setting, and repetitively at time intervals. Broadly, the various geo-environmental problems can be related to changes or degradation of land, water, air or vegetation resources. Several aspects for study are possible, for example the following.

1. Land-use changes associated with open-cast strip mining.
2. After-effects of underground mining, such as subsidence etc.
3. Evolution of dumping grounds.
4. Spread and dispersion of smoke plumes from industries and power plants.
5. Discharge of thermal plumes from power plants and industries in rivers and lakes.
6. Deforestation and erosion in river catchments and sediment load studies.
7. General warming of the environment in industrial areas.
8. Discharge from nuclear power plants and associated environmental hazards.
9. Degradation in quality of vegetation, due to atmospheric pollution in industrial areas and metropolitan cities.

In addition to the above, investigations into many other specific problems are possible. Changes in spectral characteristics in the geo-environmental setting in space lead to possibilities for detecting corresponding changes in the environmental setting, through repetitive remote sensing observations. Some examples of applications are given below.

19.17.1 Vegetation

Spectral characteristics of vegetation have been discussed in Sect. 3.8. Vegetation stress leads to changes in the spectral characteristics of vegetation. Healthy vegetation normally reflects strongly in the near-infrared region, whereas dying or diseased vegetation has a decreased reflectance in the near-infrared region. Further, for stressed vegetation, the red-band absorption feature is shorter and shallower (Singhroy and Kruse 1991).

1. NDVI

A widely used parameter for estimating the vigour and density of vegetation is the Normalized Difference Vegetation Index (NDVI) (Table 19.9). It is expressed as:

NDVI = (NIR − RED) / (NIR + RED)    (19.26)

where NIR and RED are reflectance values in the near-IR and red bands respectively (after atmospheric correction). NDVI values range between −1 and +1, a value of −1 indicating water body and +1 indicating dense green forest.

Table 19.9 Selected spectral indices of vegetation density

| Name of the index | Authors | Formulation | Utility |
| Normalized difference vegetation index (NDVI) | Rouse et al. (1973) | NDVI = (NIR − Red) / (NIR + Red) | For estimating vegetation density and its health and vigour |
| Soil adjusted vegetation index (SAVI) | Huete (1988) | SAVI = [(NIR − Red) / (NIR + Red + L)] × (1 + L) | For estimating vegetation density in sparsely vegetated areas by correcting for soil brightness |
| Stabilized vegetation index (StVI) | Ninomiya (2003a, b, 2004) | StVI = (NIR / Red) − (Green / Red) | For estimating vegetation density from atmospherically uncorrected radiance at-sensor data |
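For concreteness, the three indices of Table 19.9 can be computed from band arrays as in the following sketch. It is illustrative only: the reflectance values are made up, and NDVI/SAVI assume atmospherically corrected reflectance as noted in the text.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-10)

def savi(nir, red, L=0.5):
    # L = 0.5 is the usual default for mixed soil/vegetation cover
    return ((nir - red) / (nir + red + L)) * (1.0 + L)

def stvi(green, red, nir):
    # ratio-based index intended for radiance at-sensor data
    return nir / red - green / red

# Illustrative values for a vegetated pixel and a bare-soil pixel
green = np.array([0.08, 0.12])
red   = np.array([0.06, 0.18])
nir   = np.array([0.45, 0.25])
print(ndvi(nir, red).round(2))   # high for vegetation, low for soil
print(savi(nir, red).round(2))
print(stvi(green, red, nir).round(2))
```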

Fig. 19.151 NDVI images (density sliced) of a part of Jharia coal field in a time series; a Landsat MSS, 7 Nov 1971; b Landsat TM, 1 Nov 1992; c Landsat OLI, 13 Dec 2013; note the distinct increase in non-vegetated area and decrease in moderate to dense vegetated area over a time of ca. 40 years (Saini et al. 2015)

NDVI is used for a variety of applications—in forestry, agriculture, crop estimation, drought management and change detection. Figure 19.151 shows an example of an NDVI image series.

2. SAVI

NDVI is sensitive to soil brightness; therefore, in areas where vegetation cover is sparse, a correction is needed. SAVI was formulated by Huete (1988) to estimate vegetation density by correcting for the influence of soil brightness. SAVI is given as (Table 19.9):

SAVI = [(NIR − Red) / (NIR + Red + L)] × (1 + L)    (19.27)

where Red and NIR are the spectral reflectance of the pixel in the red band and the near infrared band respectively, and L is the soil brightness correction factor. The range of SAVI varies from −1 to +1, where −1 corresponds to water and barren surfaces and +1 to very dense forests. The value of L varies with the amount of vegetation cover and soil brightness, varying according to the cover of green vegetation. In very high vegetation regions, L = 0; and in areas with no green vegetation, L = 1. A value of L = 0.5 is generally the default value used for areas with both soil and vegetation. An example of a SAVI image is given in Fig. 19.141.

3. Stabilized Vegetation Index (StVI)

NDVI as well as SAVI, as given above, ought to be computed from atmospherically corrected reflectance data. In general, the most commonly used remote sensing data is radiance at-sensor data (e.g. ASTER Level 1-B product), which is not atmospherically corrected. If NDVI is computed from such data, it is unstable and can yield erroneous results. For the purpose of detecting vegetation pixels with stability and consistency from atmospherically uncorrected remote sensing data, Ninomiya (2003a, b, 2004) defined a stabilized vegetation index (StVI) as (Table 19.9):

StVI = (NIR / Red) − (Green / Red)    (19.28)

This vegetation index is considered to work well for radiance at-sensor data without atmospheric correction in arid-semiarid regions.

4. Tasseled Cap Transformation

The concept of the Tasseled Cap Transformation was given by Kauth and Thomas (1976). TCT orthogonally transforms the remote sensing multispectral data into three new axes described as: brightness, greenness, and yellowness + wetness. The first, brightness, is a measure of soil; the

second, greenness, is a measure of vegetation, and the third, yellowness + wetness, gives the interrelationship of soil canopy moisture. TCT has been used in a number of studies dealing with vegetation-plant growth (e.g. Amine and Hadria 2012).

19.17.2 Land Use and Mining

Assets obtained from the land surface can be grouped under land resources. Land use deals with the use of the land surface, such as the land area involved in forests, agricultural crops, grazing ground, water reservoirs, waste disposal sites, habitation etc. The degradation of land may result from indiscriminate land use, mining, subsidence and dumping of waste material. Repetitive coverage from remote sensing systems is useful for surface activity monitoring (Fig. 19.152).

Fig. 19.152 Sequential NIR band images showing increase in open cast mining, Jharia coal field, India; a Landsat MSS image of 1975; b IRS-LISS-II image of 1990; c Landsat TM image of 1994 (Prakash and Gupta 1998)

Open-cast is a common method of mining when the deposits are flat-dipping, the overburden is thin, and the surface topography does not have much relief. This method involves removal of overburden (stripping) by open excavations on the surface, in order to reach and extract the mineral resource at depth. The mining is usually carried out in benches, and as the earlier sections are mined, fresh areas are successively stripped. This is accompanied by several environmental problems, such as the following:

1. Degradation of land use due to dumping of mine spoil.
2. Erosion of bare or thinly vegetated spoil dump slopes.
3. Local changes in morphology, landscape and drainage.
4. Discharge of highly mineralized waters from mine sumps, and contamination of surface and subsurface waters.

High spatial resolution satellite data at repetitive intervals can help observe, map and monitor the mine development and associated environment in a cost-effective manner (Fig. 19.153).

19.17.3 Soil Erosion

Soil erosion leads to removal of the top fertile humus-bearing soil cover, and this problem is quite acute in places. For planning and management, soil scientists require information on a number of parameters, such as topography, slopes, type of soil, land cover and drainage. Remote sensing can provide data on a number of these aspects.
Figure 19.154 shows a typical landscape with a soil erosion problem. The area is covered with a thick blanket of loess. The drainage on the loess is typically high-density dendritic, due to the homogeneous character of the soil and its impervious nature. Fine dissection of the loess cover is shown up as barren gullies of varying dimensions, indicating widespread soil erosion.
Coastal areas may be subjected to intensive erosion. Remote sensing can play a very important role in mapping and management of coastal areas. An example is shown in Fig. 19.155 where an inward shift in the high tide line between 1989–91 and 2004–06 is shown (after Rajawat et al. 2015).

Fig. 19.153 A high spatial resolution (0.5 m) image of iron ore open cast mine, Marcona, Peru, acquired by GeoEye satellite (courtesy Satimagingcorp)

Fig. 19.154 Soil erosion in a loess-covered region (Rajasthan, India); Landsat MSS2 (red band) image; the dark areas are vegetated; the light-toned areas in dendritic pattern are barren of vegetation and correspond to the areas of soil erosion by gully action

Fig. 19.155 Coastal erosion resulting in inward shift in shore line between 1989–91 and 2004–06 in the Katchall Island, Nicobar; IRS-LISS-IV image forms the base on which high tide lines of 1989–91 and 2004–06 are overlain (Rajawat et al. 2015)

19.17.4 Oil Spills

Pollution of marine waters due to oil spills is a common phenomenon worldwide. The various sources of aquatic oil pollution are the following: (a) marine accidents involving collision of oil tankers etc., (b) disposal of oil bearing wastes into the sea, (c) offshore oil drilling operations, and (d) natural submarine seepage. Oil spills lead to a number of environmental effects, such as toxic effects on marine fauna and flora, tainting of seafood, and possible damage to the food chain as a long-term biological effect. The livelihood of many coastal people, especially those depending upon fishing and tourism, can be impacted by oil spills. Oil spilled on agricultural land can degrade soil fertility and pollute water resources. Thus, detection of oil

spills is important for environmental management, as early detection of oil spills can enable timely protection measures. Remote sensing can be used for detecting and monitoring oil spills (Sabins 1997; Jha et al. 2008; Fingas and Brown 2014).

Types of Oil Spills

A typical oil spill in water takes the shape of a plume, spreading in the direction of wind and water currents (Fig. 19.156). Concentration, thickness and colour of the oil spill may vary across the plume, and different terms are used for these variations. Mousse is the thickest type of oil spill; it forms thick streaks and bands and comprises a brown emulsion of oil, water and air. Slick is a brown- or black-coloured oil layer, relatively thick but less thick than mousse. Sheen/rainbow are the terms used to describe a thin, silvery oil layer, commonly exhibiting iridescent multicolour bands, with no brown/black colour.

Fig. 19.156 A typical oil spill—shape and terminology

Spectral Characteristics of Oil Spills

1. UV Images. UV imaging is a highly sensitive and effective remote sensing method for detecting oil on a water surface and is capable of detecting very thin oil films. The incident UV radiation from the sun (or an artificial source) stimulates fluorescence on the oil surface, but not on the water surface. Therefore, at wavelengths in the range of 0.30 to 0.45 µm (UV–blue region), the oil surface exhibits a brighter signature than the water, due to fluorescence (Table 19.10).
Two types of UV systems are in use—passive and active. The passive UV systems record energy stimulated by the sun and therefore can be operated only in the daytime. UV sensors (Vizy 1974) have been used for such applications. The active UV systems (such as the Airborne Laser Fluorosensor, ALF) record energy stimulated by laser and can acquire images both day and night. Active sensors record fluorescence as a spectrum rather than an image, which can be compared with a reference library of spectra for identifying unknown oil spills (Quinn et al. 1994).
As UV energy is strongly scattered by the atmosphere, a fully clear atmosphere and a lower-altitude (<1000 m) survey are required. On UV images, floating patches of foam and seaweed also have bright UV signatures, but these can be distinguished from oil on visible-band image data.
2. Visible, Near-IR and SWIR Images. In the blue spectral band, a thin film of oil on the water surface has a higher reflectance than water, which may lead to a brighter signature of oil films than water in visible band images and photographs. However, it is not always possible to distinguish between oil film and water on visible and near-IR images.
In the SWIR region, characteristic hydrocarbon absorption bands occur at 1.73 and 2.31 µm, out of which the 2.31 µm band can be better used applying hyperspectral remote sensing techniques such as Hyperion data (see Sect. 19.10.4).
3. Thermal-IR Images. The presence of an oil film on a water surface leads to a lower emissivity. Although the kinetic temperature for oil film/clear water is the same, the difference in emissivity produces a difference in radiant temperature of the order of 1.5 °C (Sabins 1997).

Table 19.10 Remote sensing of oil spills (after Sabins 1997)

| Spectral region | Oil signature | Oil property detected | Imaging requirements | False signature |
| UV (0.3–0.4 µm) | Bright | Fluorescence stimulated by sun/laser | Passive: day; active: all-time; clear atmosphere | Foam/seaweed |
| Visible and reflected NIR (0.4–3.0 µm) | Generally brighter than water; however, discrimination not always possible | Reflection and absorption of sunlight | Day; clear atmosphere | Wind slicks, discoloured water |
| Thermal IR (8–14 µm) | Mousse: bright; slick: dark | Radiant temperature controlled by emissivity | Day and night; good weather | Cool currents |
| SAR (3–30 cm) | Dark | Dampening of capillary waves | All-time, all-weather | Wave patterns |
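A very simple form of the dark-spot screening often applied to SAR backscatter for slick detection (see the SAR row of Table 19.10) can be sketched as follows. This is an assumed illustration, not an operational detector; the window size, offset and synthetic scene are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dark_spot_mask(backscatter_db, window=101, offset_db=3.0):
    """Flag pixels markedly darker than their local surroundings.

    A pixel is marked as a potential slick when its backscatter (in dB) is
    more than `offset_db` below the local mean computed in a window x window
    neighbourhood. Real detectors add shape and contrast tests to separate
    slicks from low-backscatter look-alikes (calm water, internal waves etc.).
    """
    local_mean = uniform_filter(backscatter_db.astype(float), size=window)
    return backscatter_db < (local_mean - offset_db)

# Synthetic scene: -8 dB sea clutter with a -16 dB "slick" patch of 30 x 30 pixels
sea = np.full((200, 200), -8.0) + np.random.default_rng(1).normal(0, 0.5, (200, 200))
sea[85:115, 85:115] -= 8.0
mask = dark_spot_mask(sea)
print(mask.sum(), "pixels flagged")   # roughly the 30 x 30 patch
```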

As most thermal IR detectors have a temperature sensitivity of the order of 0.1 °C, this difference in radiant temperature between oil and water can readily be measured. On a thermal-IR image, an oil slick appears cooler due to lower emissivity (darker) than the surrounding water (brighter signature). However, thicker slicks (mousse) may behave as a blackbody, absorb greater amounts of radiation, and as a result they may show as warmer streaks. Thin sheens may not be detected in the TIR.
The thermal-IR region can provide useful data day and night, although rain and fog may impose constraints on data acquisition. Further, on a TIR image, oil slicks may be confused with cool water currents; this ambiguity can be resolved by using UV and SWIR images.
4. SAR Images. Due to the surface tension of an oil film, small-scale surface waves in the sea are dampened, reducing/eliminating the roughness of the seawater surface. This leads to near-specular reflection of SAR waves, resulting in low backscatter (dark signature), surrounded by the sea clutter (stronger backscatter, brighter signature) from rough clean sea water. Space-borne radar sensors such as Seasat and ERS-1/-2, owing to their low look angles, are particularly sensitive to differences in water surface roughness. Figure 19.157 is a SIR-C radar image showing an oil slick in the offshore drilling field (Bombay High) about 150 km west of Bombay.
On a SAR image, dark streaks may also be caused by smooth water channels (e.g. related to internal waves and shallow bathymetric features), and not only by oil. This ambiguity can be resolved by comparing radar images with image data in other wavelength bands.
5. Passive Microwave Sensing. Passive microwave radiometers (MWR) can also be used for oil spill detection and oil thickness measurements over ocean bodies. Oil emits stronger microwave radiation than water and appears brighter than the water (which is dark in the background). This sensor can work well in adverse weather conditions, both day and night. However, important disadvantages of using the MWR are the low spatial resolution and the high costs of the required aerial surveys.

Fig. 19.157 SIR-C image showing oil slick in the 'Bombay High' offshore drilling field, about 150 km west of Mumbai, India. Oil slicks appear dark and drilling platforms appear as bright white spots. Also note the internal waves (left centre) and ocean swell (blue areas adjacent to internal waves) (R = L-band VV, G = average of L-band VV and C-band VV, B = C-band VV; Evans et al. 1997)

19.17.5 Smoke from Oil Well Fires

Oil well fires are quite common in oil fields, due to blow-outs. In addition, fires may also be set to exhaust an excess of gases present in the reservoir. In the last decade, there were also cases of oil wells catching fire due to the activities of war, as in the case of the Iraq–Kuwait war. All such oil well fires lead to the formation of thick dark smoke which spreads out from the source.
As observation frequency is of paramount importance in monitoring the dynamics of a smoke plume, meteorological satellites (e.g. NOAA-AVHRR) and the MODIS sensor, with an adequate coverage cycle of about 12 h, have proven to be very useful for this purpose. In order to extract the signature of oil fire smoke from AVHRR images, pre-processing and textural analysis are necessary (e.g. Khazenie and Richardson 1993).
On satellite images, in the visible spectrum oil smoke appears black, due to the smoke's high absorptive character and high optical depth. It is quite prominently observed over land-masses (Fig. 19.158). Oil smoke may not be detected over water bodies, due to the fact that oil smoke and the sea surface both appear dark in the visible wavelengths.

Fig. 19.158 Landsat TM image showing widespread dark smoke from oil well fires in Kuwait (image dated 1 July 1991, printed black-and-white from colour FCC) (courtesy EOSAT Inc.)

Oil well fire smoke has climatic–environmental implications as it hinders solar insolation, affects the thermal budget

of the Earth and leaves toxic effects of gaseous emissions. Limaye et al. (1991), using Meteosat data, reported that smoke from Kuwaiti oil fires could be traced to about 2000 km east of Kuwait. In the case of large fires it is necessary to have accurate estimates of the spread of the smoke plume in order to be able to take aviation/civilian/environmental precautions.

19.17.6 Atmospheric Pollution

Air pollution is caused by the discharge of industrial waste and exhaust into the atmosphere. It affects visibility, and causes acid rain and respiratory problems. Finally, when the particulates from the atmosphere settle on the ground, the deposition of dust on the canopy affects photosynthesis and retards the growth of plants.
The dispersion of pollution in the atmosphere depends on the intensity of the industrial discharge, the type of discharge, and the atmospheric conditions. Satellite sensor data are proving to be very effective in mapping dispersion patterns and the regional spread of atmospheric pollutants. The trend, plume size and its dispersion vary with the prevailing wind direction, and the extent of the area likely to be affected due to the dispersion of smoke can be mapped. The fact that some plumes may be carried for very long distances was, in fact, first revealed only by remote sensing data. With sensors such as ASTER-TIR, the composition of the gases present in the plumes can also be estimated to some extent. Figure 19.159 presents an example of a plume emanating from a fire burning at an industrial sulfur plant in Mosul, Iraq. ASTER-TIR bands highlight the presence of SO2 in purple (also see Fig. 19.125 for composition detection of volcanic gaseous emissions).

Fig. 19.159 Smoke emitted by a fire burning at an industrial sulfur plant in Mosul, Iraq; the ASTER-TIR bands highlight the presence of SO2 in purple (courtesy NASA/METI/AIST/Japan Space Systems, and U.S./Japan ASTER Science Team)

Finally, it can be summarized that the advancements in remote sensing technology have added tremendously to man's ability to monitor the environment. This should help to evaluate the finite natural resources available to humanity for their optimal utilization, to maintain the natural environment, and to preserve the precarious ecological balance on the Earth.

19.18 Future

Remote sensors have come to stay, the technology having made tremendous strides during the last 3–4 decades. It has become an integral part of our day-to-day life, such as by way of navigational GPS, high resolution spatial information, agricultural forecasting, disaster management and environmental surveillance. In the initial stages, remote sensing programs were funded and organized solely by government agencies. Over the years, in view of wide societal applications and direct economic potential, joint public-private enterprises have become more common.
The following technological trends are expected to continue into the future (Hartley 2003; Khorram et al. 2016):

• Miniaturization and integration of electronics
• High performance onboard computing such as heterogeneous parallel computing, cloud computing and quantum and biological computing
• Progress in large apertures and larger antennas for higher resolution
• Increased power for active systems
• Compact optics
• Increase in storage technology
• Frequency flexibility, i.e. tunable sensing systems for dedicated applications
• Development of small and nano-satellites
• Development of UAV-based data acquisition systems
• Advances in techniques for processing "Big Data".

Overall, the technological developments should improve the spatial resolution of satellites in LEO (low earth orbit) and facilitate inclusion of multispectral-hyperspectral sensors and SAR-LIDAR active sensors from geostationary orbits (GEO). Some of these developments are already in place. For example, GeoEye-1 provides data of 41 cm pixel size, WorldView-3 and -4 that of 31 cm pixel size, and the Indian Cartosat-3 is planned to collect imagery at 25 cm spatial resolution.

Further, there has been the rise of “Big Data” due to Abrams MJ, Brown D, Lepley L, Sadowski R (1983) Remote sensing for
satellite remote sensing. A large number of satellites have porphyry copper deposits in south Arizona. Econ Geol 78:591–604
Acharya T, Nag SK, Basumallik S (2012) Hydraulic significance of
been pouring out valuable voluminous data on global basis fracture correlated lineaments in Precambrian rocks in Purulia
—Landsat, SPOT, MODIS, VIIRS, IRS, ERS, JERS, district, West Bengal. J Geol Soc Ind 80:723–730
Hyperion, MAGSAT, GRACE etc., besides the numerous Agarwal RP, Misra VN (1994) Application of remote sensing in
airborne sensors and field data. Innovative computational petroleum exploration case studies from Northeastern region
ofIndia. Ind J Petrol Geol 3(2):45–68
techniques are required to handle “Big Data”. A generalized Agterberg FP, Bonham-Carter GF (1990) Deriving weights-of-evidence
peep into the future is presented by Khorram et al. (2016). from geosciences contourmaps for prediction of discrete events. In:
As far as geological remote sensing specifically is con- Proceedings 22nd APCOM symposium, Berlin, Germany, vol 2,
cerned, the following may be noted: pp 381–395
Agterberg FP, Bonham-Carter GF, Wright DF (1990) Statistical pattern
integration for mineral exploration. In: Gaal G, Merriam DF
• Spectral features of minerals and mineral groups are (eds) Computer applications in resource estimation prediction and
located largely in the thermal infrared (3–14 µm) region, assessment for metals and petroleum. Pergamon Press, Oxford,
where the present-day sensors from LEO provide a pp 1–21
Al Saud M (2010) Mapping potential areas for groundwater storage in
coarse resolution of *60–100 m. Breakthrough in sen- Wadi Aurnah Basin, western Arabian Peninsula, using remote
sor technology (optical systems and detector technology, sensing and geographic information system techniques. Hydrogeol J
both) is needed to achieve spatial resolution of *5 m. 18:1481–1495
• UAV (unmanned aerial vehicle) is likely to be more Allen CR (1975) Geological criteria for evaluating seismicity. Geol Soc
Am Bull 86:1041–1057
frequently used for very high resolution (a few cen- Amine RM, Hadria FI (2012) Integration of NDVI indices from the
timeters) image data that may be required for repetitive tasseled cap transformation for change detection in satellite images.
surveys of small stretches, e.g. for monitoring mining, Int J Comput Sci 9(2):1694–0814
environmental aspects, small springs, local thermal Amiri MA, Karimi M, Sarab AA (2015) Hydrocarbon resources
potential mapping using evidential belief functions and frequency
anomalies and landform changes. ratio approaches, southeastern Saskatchewan, Canada. Can J Earth
• Geological features of interest such as lithological con- Sci 52(3):182–195
tacts and structures often lie buried under soil, vegeta- Andreoli G, Bulgarelli B, Hosgood B, Tarchi D (2007) Hyperspectral
tion, scree, debris, alluvium and cultivation. On the other analysis of oil and oil-impacted soils for remote sensing purposes.
European Commission, Joint Research Centre, Ispra, Italy, 34 pp
hand, satellite remote sensing data brings information Apel M (2006) Predict—a Bayesian resource potential assessment
from the top few microns zone of the Earth’s surface plug-in for Gocad. Available at: http://www.geo.tu-freiberg.de/
(solar reflection region), or some centimeters thick top *apelm/predict.htm. Accessed on 15 Oct 2015
surface layer (thermal IR region). In this context that Arnason K (1988) Geowissenschaftliche Ferner kundung mit Satelit-
tendaten in Island—Möglichkeiten und Grenzen. Doctoralthesis,
dedicated image enhancement and interpretation skills Ludwig-Maximilians University, Munich
would continue to be needed for geological applications. Arora MK, Shukla A, Gupta RP (2011) Digital information extraction
• Mineral deposits and groundwater occur several hundred techniques for snow cover mapping from remote sensing data. In:
meters deep below the surface and hydrocarbons even at Singh VP, Singh P, Haritashya UK (eds) Encyclopedia of snow, ice
and glacier. Springer, Dordrecht, pp 213–232
kilometers depth. For resources exploration, future Baker VR (1986) Fluvial landforms. In: Short NM, Blair RW Jr
technological developments are likely to revolve around (eds) Geomorphology from space, NASA SP-486, US Govt
genetic model based prospectivity mapping for thematic Printing Office, Washington DC, pp 255–316
applications using “Big Data” in the overall framework Bakliwal PC, Grover AK (1988) Signature and migration of Saraswati
River in Thar desert, Western India. . Rec Geol Surv India
of integrated remote sensing—GIS. 116(3–8):77–86
Bamber JL, Rivera A (2007) A review of remote sensing methods for
glacier mass balance determination. Glob Planet Change 59:138–148
Belcher DJ (1960) Photointerpretation in engineering. In: Colwell RN
References (ed) Manual of photographic interpretation. Am Soc Photogramm,
Falls Church, VA, pp 403–456
Berger Z (1994) Satellite hydrocarbon exploration. Springer, Berlin,
Abrams M (2005) Significance of hydrocarbon seepage relative 319 pp
to petroleum generation and entrapment. Mar Pet Geol Berlin GL, Schaber GG, Horstman KC (1980) Possible fault detection
22:457–477 in Cottonball Basin, California: an application of radar remote
Abrams MJ, Brown D (1985) Silver Bell, Arizona, porphyry copper sensing. Remote Sens Environ 10:33–42
test site. The Joint NASA/Geosat Test Case Study, Section 4, Am Bharktya DK, Gupta RP (1981) Regional tectonics and sulphide ore
Assoc Petrol Geol, Tulsa, Oklahoma localisation in Delhi-Aravalli belt, Rajasthan, India—use of Landsat
Abrams MJ, Ashley RP, Rowan LC, Goetz AFH, Kahle AB (1977) imagery. Advances in Space Research vol l, Pergamon, London,
Mapping of hydrothermal alteration in the Cuprite Mining District, pp 299–302
Nevada, using aircraft scanner images for the spectral region 0.46 to Bharktya DK, Gupta RP (1983) Lineament structures in the Precam-
2.36µm. Geology 5:713–718 brians of Rajasthan as deciphered from Landsat images. Recent

Researches in Geology, vol 10. Structure and tectonics of precam- Carter A, Ramsey M (2010) Long-term volcanic activity at Shiveluch
brian rocks. Hindustan Publishing, New Delhi, pp 186–197 volcano: nine years of ASTER spaceborne thermal infrared
Bhattacharya A, Reddy S (1994) Underground and surface coal mine observations. Remote Sens 2:2571–2583. doi:10.3390/rs2112571
fire detection in India’s Jharia Coal Field using airborne thermal Carter AJ, Girina O, Ramsey MS, Demyanchuk YV (2008) ASTER
infrared data. Asian-Pacific Remote Sens J 7(1):59–73 and field observations of the 24 December 2006 eruption of
Bhattacharya A, Bolch T, Mukherjee K, Pieczonka T, Kropac J, Bezymianny volcano, Russia. Remote Sens Environ
Buchroithner M (2016) Overall recession and mass budget of 112:2569–2577
Gangotri Glacier, Garhwal Himalayas, from 1965 to 2015 using Chakraborty R, Gupta RP, Awasthi AK (2010) Model thermal
remote sensing data. J Glaciol 62(236):1115–1133 anomalies over petroliferous basins. Oil Gas J 108:72–75
Bhuiyan C (2015) Hydrological characterisation of geological linea- Chander R (1989) Southem limits of major earthquake ruptures along
ments: a case study from the Aravalli terrain, India. Hydrogeol J the Himalaya between longitudes 75° and 90° E. Tectonophysics
23:673–686 170:115–123
Billings WP (1950) Vegetation and plant growth as affected by Chandra S, Rao VA, Krishnamurthy NS, Dutta S, Ahmed S (2006)
chemically altered rocks in the Western Great Basin. Ecology Integrated studies for characterization of lineaments used to locate
30:62–74 groundwater potential zones in a hard rock region of Karnataka,
Biswas SK (1974) Landscape of Kutch: a morpho-tectonic analysis. India. Hydrogeol J 14:1042–1051
Indian J Earth Sci 1:177–198 Chatterjee RS (2006) Coal fire mapping from satellite thermal IR data
Biswas SK (1987) Regional tectonic framework, structure and evolu- —a case example in Jharia Coalfield, Jharkhand, India. ISPRS J
tion of the western marginal basins of India. Tectonophysics Photogramm Remote Sens 60:113–128
135:307–327 Chatterjee RS, Wahiduzzaman Md, Shah A, Raju EVR, Lakhera RC,
Björnsson S, Arnason K (1988) Strengths and shortcomings in ATM Dadhwal VK (2007) Dynamics of coal fire in Jharia coalfield,
technology as applied to volcanic and geothermal areas in Iceland. Jharkhand, India during the 1990s as observed from space. Curr Sci
Proceedings of the 4th International Conference, Spectral Signatures 92:62–68
of Objects in Remote Sensing, Aussois, France, ESA-SP287, Chatterjee RS, Thapa S, Singh KB, Varunakumar G, Raju EVR (2015)
pp 189–191 Detecting, mapping and monitoring of land subsidence in Jharia
Blackett M (2016) Progress in the infrared remote sensing of volcanic coalfield, Jharkhand, India by spaceborne differential interferomet-
activity. http://dx.doi.org/10.20944/preprints201610.0011.v1. Accessed ric SAR, GPS and precision levelling techniques. J Earth Syst Sci
on 6 Nov 2016 124(6):1359–1376
Blackett M (2017) An overview of infrared remote sensing of volcanic Chatterjee RS, Singh KB, Thapa S, Kumar D (2016) The present status
activity. J Imaging 3:13. doi:10.3390/jimaging3020013 of subsiding land vulnerable to roof collapse in the Jharia Coalfield,
Bloom AL (1986) Coastal landforms. In: Short NM, Blair RW Jr India, as obtained from shorter temporal baseline C-band DInSAR
(eds) Geomorphology from Space. NASA SP-486 US Govt Printing by smaller spatial subset unwrapped phase profiling. Int J Remote
Office, Washington DC, pp 353–406 Sens 37(1):176–190
Bloom AL (1997) Geomorphology: a systematic analysis of late Chattopadhyay N, Hashimi S (1984) The Sung valley alkaline-
Cenozoic landforms, 3rd edn. Prentice Hall, Upper Saddle River ultramafic-carbonatite complex, East Kasi and Jaintia Hills Districts,
Bobba AG, Bukata RP, Jerome JH (1992) Digitally processed satellite Meghalaya. Rec Geol Surv India 113(IV):24–33
data as a tool in detecting potential groundwater flow systems. Chavez PS Jr, Kwarteng AY (1989) Extracting spectral contrast in
J Hydrol 131:25–62 Landsat thematic mapper image data using selective principal
Bodechtel J, Kley M, Münzer U (1985) Tectonic analysis of typical component analysis. Photogramm Eng Remote Sens 55:339–348
fold structures in the Zagros Mountains, Iran, by the application of Cloutis E (1989) Spectral reflectance properties of hydrocarbons:
quantitative photogrammetric methods on Metric Camera data. In: remote-sensing implications. Science 245:165–168
Proceedings DFVLR-ESA workshop oberpfaffenhofen, ESA Coleman IM, Roberts HH, Huh OK (1986) Deltaic landforms. In:
SP-209, pp 193–197 Short NM, Blair RW Jr (eds) Geomorphology from space,
Bolch T et al (2012) The state and fate of Himalayan glaciers. Science NASA-SP-486, US Govt Printing Office, Washington, DC,
336:310–314 pp 317–352
Bonham-Carter GF (1994) Geographic information systems for Collins W, Chang SH, Raines G, Channey F, Ashley R (1983)
geoscientists. Pergamon, Oxford Airborne biogeochemical mapping of hidden mineral deposits. Econ
Briole P, Massonnet D, Delacourt C (1997) Post-eruptive deformation Geol 78:737–749
associated with the 1986–87 and 1989 lave flows of Etna, detected Conel JE, Alley RE (1985) Lisbon Valley, Utah, uranium test case
by radar interferometry. Geophys Res Lett 24:37–40 report. The Joint NASA-Geosat Test Case Study, Section 8, Am
Brooks RR (1972) Geobotany and biogeochemistry in mineral Assoc Petrol Geol, Tulsa, Oklahoma
exploration. Harper and Row, New York, 290 pp Congalton RG (2005) Thematic and positional accuracy assessment of
Brooks RR (1980) Indicator plants for mineral prospecting—a critique. digital remotely sensed data. In: Proceedings of the 7th annual forest
J Geochem Explor 12:67–78 inventory and analysis symposium, 3–6 Oct 2005. US Department
Brown A (2000) Evaluation of possible gas microseepage mechanisms. of Agriculture, Portland, USA
Am Assoc Petrol Geol Bull 84:1775–1789 Congalton RG, Green K (1993) A practical look at the sources of
Burns KL, Brown GH (1978) The human perception of geological confusion in error matrix generation. Photogram Eng Remote Sens
lineaments and other discrete features in remote sensing imagery: 59:641–644
signal strength, noise levels and quality. Remote Sens Environ Congalton RG, Green K (2008) Assessing the accuracy of remotely
7:163–167 sensed data: principles and practices. CRC Press, New York
Carranza EJM (2008) Geochemical anomaly and mineral prospectivity Corrie RK, Ninomiya Y, Aitchison JC (2010) Applying advanced
mapping in GIS. Handbook of exploration and environmental spaceborne thermal emission and reflection radiometer (Aster)
geochemistry vol 11. Elsevier, Amsterdam, 351 pp spectral indices for geological mapping and mineral identification
References 409

on the Tibetan plateau. Int Archives Photogramm Remote Sens Thematic conference remote sensing for exploration geology. Fort
Spatial Inf Sci XXXVIII(8):464–469, Kyoto, Japan Worth, Texas, pp 661–667
Cox D, Singer DA (eds) (1986) Mineral deposit models. USGS Engelbrecht J, Inggs MR, Makusha G (2011) Detection and monitoring
Bull1693, U S Geol Surv, Washington D C of surface subsidence associated with mining activities in the
Cracknell AP (1998) Review article: synergy in remote sensing-what’s Witbank coalfields, South Africa, using differential radar interfer-
in a pixel? Int J Remote Sens 19:2025–2074 ometry. S Afr J Geol 114:77–94
Crosta AP, Filho CRS, Azevedo F, Brodie C (2003) Targeting key Evans DL, Plant JJ, Stofan ER (1997) Overview of the Spacebome
alteration minerals in epithermal deposits in Patagonia, Argentina, Imaging Radar-C/X band Synthetic Aperture Radar (SIR-C/X-SAR)
using ASTER imagery and principal component analysis. Int J missions. Remote Sens Environ 59:135–140
Remote Sens 24:4233–4240 Everett IR, Morisawa M, Short NM (1986) Tectonic landform. In:
Crowley JK, Brickey WD, Rowan LC (1989) Airborne imaging Short NM, Blair RW Jl (eds) Geomorphology from space, NASA
spectrometer data of the Ruby Mountains, Montana: mineral SP-486, US Govt Printing Office, Washingtor DC, pp 27–184
discrimination using relative absorption band-depth images. Remote Fahnestock M et al (2016) Rapid large-area mapping of ice flow using
Sens Environ 29:121–134 Landsat 8. Remote Sens Environ 185:84–94
De Palomera AP, van Ruitenbeek FJA, Carranza EJM (2015) Fingas M, Brown C (2014) Review of oil spill remote sensing. Mar
Prospectivity for epithermal gold–silver deposits in the Deseado Pollut Bull 83:9–23
Massif, Argentina. Ore Geol Rev 71:484–501 Flynn LP, Wright R, Garbeil H, Harris AJL, Pilger E (2002) A global
Dean K, Servilla M, Roach A, Foster B, Engle K (1998) Satellite thermal alert system using MODIS: initial results from 2000–2001.
monitoring of remote volcanoes improves study efforts in Alaska. Adv Environ Monit Model 1:37–69
Eos Trans Am Geophys Union 79:413–423 Fons L (1999) Temperature method can help locate oil, gas deposits.
Dehn J, Dean K, Engle K (2000) Thermal monitoring of north pacific Oil Gas J 97:59–64
volcanoes from space. Geology 28:755–758 Fons L (2000) Temperature anomaly mapping identifies subsurface
Denniss AM, Harris AJL, Rothery DA, Francis PW, Carlton RW hydrocarbons. World Oil, Sept 2000. http://findarticles.com/p/
(1998) Satellite observation of the April 1993 eruption of Lascar articles/mi_m3159/is_9_221/ai_65487026
volcano. Int J Remote Sens 19(5):801–821 Fookes PG, Sweeney M, Manby CND, Martin RP (1985) Geological
Deutsch M, Estes JE (1980) Landsat detection of oil from natural seeps. and geotechnical engineering aspects of low-cost roads in moun-
Photogramm Eng Remote Sens 46:1313–1322 tainous terrain. Eng Geol 21:1–152
Dong L, Shan J (2013) A comprehensive review of earthquake-induced Francis PW, De Silva SL (1989) Application of the landsat thematic
building damage detection with remote sensing techniques. ISPRS J mapper to the identification of potentially active volcanoes in the
Photogramm Remote Sens 84:85–99 Central Andes. Remote Sens Environ 28:245–255
Dong S, Yin H, Yao S, Zhang F (2013) Detecting surface subsidence in Francis PW, Rothery DA (1987) Using the Landsat Thematic Mapper
coal mining area based on DInSAR technique. J Earth Sci 24 to detect and monitor active volcanoes: an example from Lascar
(3):449–456 volcano, northern Chile. Geology 15:614–617
Donne DD, Harris AJL, Ripepe M, Wright R (2010) Earthquake-induced Fraser SJ (1991) Discrimination and identification of ferric oxides using
thermal anomalies at active volcanoes. Geology 38(9):771–774 satellite thematic mapper data: a Newman case study. Int J Remote
Dozier J (1989) Spectral signature of alpine snow-cover from the Sens 12(3):635–641
Landsat Thematic Mapper. Remote Sens Environ 28:9–22 Fraser SJ, Green AA (1987) A software defoliant for geological
Dozier J, Painter TH (2004) Multispectral and hyperspectral remote analysis of band ratios. Int J Remote Sens 8:525–532
sensing of alpine snow properties. Annu Rev Earth Planet Sci Fu B, Zheng G, Ninomiya Y, Wang C, Sun G (2007) Mapping
32:465–494 hydrocarbon-induced mineralogical alteration in the northern Tian
Drury SA (2004) Image Interpretation in Geology, 3rd edn. Blackwell Shan using ASTER multispectral data. Terra Nova 19:225–231
Sciences, Malden MA, 304 p Fu B, Ninomiya Y, Guo J (2010) Slip partitioning in the northeast
Du X et al (2015) Self-adaptive gradient-based thresholding method for Pamir-Tian Shan convergence zone. Tectonophysics 483:344–364
coal fire detection based on ASTER data—part 2, validation and Gabr S, Ghulam A, Kusky T (2010) Detecting areas of high-potential
sensitivity analysis. Remote Sens 7:2602–2626 gold mineralization using ASTER data. Ore Geol Rev 38:59–69
Duda RO, Hart PE, Nilsson NJ, Sutherland GL (1978) Semantic Galloway DL, Hoffman J (2007) The application of satellite differential
network representations in rule-based interference systems. In: SAR interferometry derived ground displacements in hydrogeology.
Waterman DA, Hayes-Roth F (eds) Pattern-directed inference Hydrogeol J 15:133–154
systems. Academic Press, New York, pp 203–221 Gangopadhyay PK (1967) Structural framework of Alwar region with
Dunn CE (2007) Biogeochemistry in mineral exploration. In: Hale M special reference to the occurrence of some rock types. In:
(ed) Handbook of mineral exploration and environmental geochem- Proceedings symposium Upper Mantle Project, Nat Geophys Res
istry, vol 9. Elsevier, Amsterdam, 480 pp Inst, Hyderabad, pp 420–429
Elachi C, Roth LE, Schaber GG (1984) Spacebome radar subsurface Gansser A (1968) The insubric Line—a major geotectonic problem.
imaging in hyperarie regions. IEEE Trans GE-22:382–387 Schweiz Mineral Petrogr Mitt 48:123–143
Elewa HH, Qaddah AA (2011) Groundwater potentiality mapping in Gillespie AR, Kahle AB, Palluconi FD (1984) Mapping alluvial fans in
the Sinai Peninsula, Egypt, using remote sensing and Death Valley, California using multichannel thermal infrared
GIS-watershed-based modeling. Hydrogeol J 19:613–628 images. Geophys Res Lett 11:1153–1156
Ellyett CD, Fleming AW (1974) Thermal infrared imagery of the Glaze LS, Francis PW, DA SelfS Rothery (1989) The 16 September
Buming Mountain coal fire. Remote Sens Environ 3(1):79–86 1986 eruption of Lascar volcano, north Chile: satellite investiga-
Ellyett CD, Pratt DA (1975) A review of the potential applications of tions. Bull Volcanol 51:146–160
remote sensing techniques to hydrogeological studies in Australia. Goetz AFH, Rowan LC (1981) Geologic remote sensing. Science
Australian Water Resources Council Technical Paper No 13, 147 pp 211:781–791
Elvidge CD (1982) Affect of vegetation on airborne thematic maper González-Álvarez I, Porwal A, Beresford SW, McCuaig TC,
imagery of the Kalamazoo porphyry copper deposit, Arizona. In: Maier WD (2010) Hydrothermal Ni prospectivity analysis of
International symposium remote sensing environment, 2nd Tasmania, Australia. Ore Geol Rev 38:168–183
410 19 Geological Applications

Guilbert JM, Park CF Jr (1986) The Geology of ore deposits. Freeman, Haselwimmer C, Prakash A (2012) Thermal infrared remote sensing of
New York, 985p geothermal systems (Chap. 22). In: Kuenzer C, Dech S (eds) Ther-
Guild PW (1972) Metallogeny and the new global tectonics. In: mal infrared remote sensing: sensors, methods, applications, remote
Proceedings of the 24th international geological congress Sect 4, sensing and digital image processing 17, Springer, Dordrecht,
Mineral Deposits, pp 17–24 pp 453–473
Gupta RP (1977a) Delineation of active faulting and some tectonic Haselwimmer C, Prakash A, Holdmann G (2013) Quantifying the heat
interpretations in Munich-Milan section of eastem Alps-use of flux and outflow rate of hot springs using airborne thermal imagery:
Landsat imagery. Tectonophysics 38:297–315 case study from Pilgrim Hot Springs, Alaska. Remote Sens Environ
Gupta RP (1977b) Neue geologische Strukuren in Himalaja entdekt. 136:37–46
Umschau 77:329–330 Heid T, Kääb A (2012) Evaluation of existing image matching methods
Gupta RP (2003) Remote sensing geology, 2nd edn. Springer, Berlin, for deriving glacier surface displacements globally from optical
655p satellite imagery. Remote Sens Environ 118:339–355
Gupta RP (2011) Dry and wet snow line/zone. In: Singh VP, Singh P, Heron AM (1922) Geology of the western Jaipur. Rec Geol Surv Ind
Haritashya UK (eds) Encyclopedia of snow, ice and glacier. LIV: 345–397
Springer, Dordrecht, pp 240–241 Heron AM (1953) The geology of central Rajputana. Mem Geol Surv
Gupta RP, Joshi BC (1990) Landslide hazard zoning using the GIS Ind 79:1–389
approach—a case study from the Ramganga Catchment, Himalayas. Hoerig B, Kuehn F, Oschuetz F, Lehmann F (2001) HyMap
Eng Geol 28:119–131 hyperspectral remote sensing to detect hydrocarbons. Int.
Gupta RP, Prakash A (1998) Reflectance aureoles associated with J. Remote Sens 22(8):1413–1422
thermal anomalies due to subsurface mine fires in the Jharia Hook SJ, Dmochowski JE, Howard KA, Rowan LC, Karlstrom KE,
coalfield, India. Int J Remote Sensing 19(14):2619–2622 Stock JM (2005) Mapping variations in weight percent silica
Gupta RP, Saha AK (2000) Mapping debris flows in the Himalayas. measured from multispectral thermal infrared imagery—examples
GIS@Development IV(12):26–27. http://www.gis-development.net from the Hiller mountains, Nevada, USA, and Tres Virgenes-La
Gupta RP, Sen AK (1988) Imprints of the Ninety-East Ridge in the Reforma Baja California Sur, Mexico. Remote Sens Environ
Shillong Plateau, Indian Shield. Tectonophysics 154:335–341 95:273–289
Gupta RP, Saraf AK, Chander R (1998) Discrimination of areas Horler DNH, Barber J, Barringer AR (1980) Effects of heavy metals on
susceptible to earthquake induced liquefaction from Landsat data. the absorbance and reflectance spectra of plants. Int J Remote Sens
Int J Remote Sens 19(4):569–572 1:121–136
Gupta RP, Haritashya UK, Singh P (2005) Mapping dry/wet snow Horler DNH, Dockray M, Barber J, Barringer AR (1983) Red edge
cover in the Indian Himalayas using IRS multispectral imagery. measurements for remote sensing plant chlorophyll content. In:
Remote Sensing Environ 97(4):258–269 Proceedings symposium remote sensing mineral exploration com-
Gupta RP, Chakraborty R, Awasthi AK (2009) Satellite data can cost munication on space research, Ottawa
effectively show oil field thermal anomalies. Oil Gas J 107:34–36 Howard AD (1967) Drainage analysis in geologie al interpretation: a
Haeberli W, Hoelzle M, Suter S (1998) In to the second century of summation. Am Assoc Petrol Geol Bull 51:2246–2259
worldwide glacier monitoring: prospects and strategies. A contribu- Huang Y, Huang H, Chen W, Li Y (1991) Remote sensing approaches
tion to the International Hydrological Programme (IHP) and the for underground coal fire detection. In: Proceedings of the
Global Environment Monitoring System (GEMS), UNESCO Stud- international conference on reducing of geological hazards, Beijing,
ies and Reports in Hydrology, vol 56, 228 p pp 634–641
Haeberli WR, Frauenfelder R, Hoelzle M, Maisch M (1999) On rates Huete AR (1988) A soil-adjusted vegetation index (SAVI). Remote
and acceleration trends of global glacier mass changes. Geogr Ann Sens Environ 25(3):295–309
Ser A Phys Geogr 81:585–595 Huo et al (2015) A study of coal fire propagation with remotely sensed
Halbouty MT (1976) Application of Landsat imagery to petroleum and thermal infrared data. Remote Sens 7:3088–3113
mineral exploration. Am Assoc petrol Geol Bull 60:745–793 Hutsinpiller A (1988) Discrimination of hydrothermal alteration
Halbouty MT (1980) Geologic significance of Landsat data of 15 giant mineral assemblages at Virginia City, Nevada, using the airborne
oil and gas fields. Am Assoc Petrol Geol Bull 64(1):8–36 imaging spectrometer. Remote Sens Environ 24:53–66
Hall DK (1998) Remote sensing of snow and ice using imaging radar. IPCC (2007) Climate change: the physical science basis. In: Solomon S
In: Henderson FM, Lewis LA, Ryerson RA (eds) Principles and et al (eds) Contribution of working group I to the fourth assessment
applications of imaging radars, vol. 2, 3rd edn. Wiley, New York, report of the intergovernmental panel on climate change, Cambridge
pp. 677–703 University Press, Cambridge, UK, p 996
Hall DK, Ormsby JP, Bindschadler RA, Siddalingaiah H (1987) Jakobsson SP (1979) Petrology of recent basalts of the eastern volcanie
Characterization of snow and ice zones on glaciers using Landsat zone, Iceland. Aeta Naturalia Islandiea, No 26, Icelandic Museum
Thematic Mapper data. Ann Glaciol 9:104–108 of Natural History, Reykjavik
Hall DK, Foster JL, Chang ATC (1992) Reflectance of snow as Jayaweera K, Seifert R, Wendler G (1976) Satellite observations of the
measured in situ and from space in sub-arctic areas in Canada and eruption of Tolbackhik Volcano. Trans Am Geophys Union
Alaska. IEEE Trans Geosci Remote Sens 30(3):634–637 57:196–200
Hall DK, Riggs GA, Salomonson VV (1995) Development of methods Jha MN, Levy J, Gao Y (2008) Advances in remote sensing for oil spill
for mapping global snow cover using moderate resolution imaging disaster management: state-of-the-art sensors technology for oil spill
spectroradiometer data. Remote Sens Environ 54:127–140 surveillance. Sensors 8:236–255
Harris AJL, Vaughan RA, Rothery DA (1995) Volcano detection and Kanungo DP, Arora MK, Sarkar S, Gupta RP (2006) A comparative
monitoring using AVHRR data: the Krafla eruption, 1984. Int J study of conventional, ANN black box, fuzzy and combined neural
Remote Sens 16:1001–1020 and fuzzy weighting procedures for landslide susceptibility zonation
Hartley J (2003) Earth remote sensing technologies in the twentyfirst in Darjeeling Himalayas. Eng Geol 85:347–366
century. In: Proceedings of international geoscience and remote Kargel JS et al (2005) Multispectral imaging contributions to global
sensing symposium (IGARSS-2003), July 21–25, Toulouse, France, land ice measurements from space. Remote Sens Environ 99(1–
vol 1, pp 627–629 2):187–219
References 411

Kargel JS et al (2016) Geomorphic and geologic controls of geohazards Laubseher HP (1971) The large-scale kinematics of the western Alps
induced by Nepal’s 2015 Gorkha earthquake. Science 351:aac8353. and the western Apennines and its palinspastic implications. Am J
doi:10.1126/science.aac8353 Sci 271:193–226
Katz SS (1991) Emulating the ‘Prospector’ expert system with a raster Limaye SS, Suomi VE, Velden C, Tripoli G (1991) Satellite observation
GIS. Comput Geosci 17:1033–1050 of smoke from oil fires in Kuwait. Science 252:1536–1539
Kaufmann HJ (1988) Mineral exploration along the Aqaba-Levant Lindsay MD, Betts PG, Ailleres L (2014) Data fusion and porphyry
structure by use of TM data-concepts, processing and results. Int J copper prospectivity models, southeastern Arizona. Ore Geol Rev
Remote Sens 9(10–11):1639–1658 61:120–140
Kauth RJ, Thomas GS (1976) The tasseled cap—a graphic description Lockwood JP, Lipman PW (1987) Holocene eruptive history of Mauna
of the spectral-temporal development of agricultural crops as seen Loa volcano. In: Decker RW, Write TL, Stauffer PH (eds) Volcan-
by Landsat. In: Proceedings symposium machine processing of ism in Hawaii. Voll, USGS Prof Pap 1350, U S Geological Survey,
remotely sensed data, Purdue University, West Lafayette, Indiana, Washington DC, pp 509–535
pp 4B-41-50 Lu Z, Dzurisin D (2010) Ground surface deformation patterns, magma
Kayal JR (1987) Microseismicity and source mechanism study: supply, and magma storage at Okmok volcano, Alaska, inferred
Shillong Plateau, northeast India. Bull Seism Soc Am from InSAR analysis: II. Co-eruptive deflation, July–August 2008.
77(1):184–194 J Geophys Res 115:B00B02. doi:10.1029/2009JB006970
Keshri A, Shukla A, Gupta RP (2009) ASTER ratio indices for Lu Z, Dzurisin D, Biggs J, Wicks C Jr, McNutt S (2010) Ground
supraglacial terrain mapping. Int J Remote Sens 30(2):519–524 surface deformation patterns, magma supply, and magma storage at
Khan S, Jacobson S (2008) Remote sensing and geochemistry for Okmok volcano, Alaska, inferred from InSAR analysis: I.
detecting hydrocarbon microseepages. Geol Soc Am Bull Inter-eruptive deformation, 1997–2008. J Geophys Res 115:
120:96–105 B00B03. doi:10.1029/2009JB006969
Khazenie N, Richardson KA (1993) Detection of oil fire smoke over Lyon RJP, Elvidge C, Lyon JG (1982) Practical requirements for
water in the Persian Gulf region. Photogramm Eng Remote Sens operational use of geobotany and biogeochemistry in mineral
59:1271–1276 exploration. In: Proceedings international symposium remote sens-
Khorram S, van der Wiele C, Koch FH, Nelson SAC, Potts MD (2016) ing environment, 2nd thematic conference remote sensing explo-
Principles of applied remote sensing. Springer, 306p ration geology, Fort Worth, Texas, pp 85–91
Kim CY, Hong SW, Kim KY, Baek SH, Bae GJ, Han BH, Jue KS (2005) Macias LF (1995) Remote sensing of mafic-ultramafie rocks: examples
GIS-based application and intelligent management of geotechnical from Australian Precambrian terranes. J Aust Geol Geophys
information and construction data in tunnelling. In: Erdem Y, Solak T 16:163–171
(eds) Underground space use—analysis of the past and lessons for the Mansor SB, Cracknell AP, Shilin BV, Gornyi VI (1994) Monitoring of
future, Proc int world tunnel congress and the 31st ITA General underground coal fires using thermal infrared data. Int J Remote
Assembly, Istanbul, Turkey, Taylor & Francis pp 197–204 Sens 15(8):1675–1685
Koike K, Nagano S, Ohmi M (1995) Lineament analysis of satellite Mars JC, Rowan LC (2006) Regional mapping of phyllic- and
images using a segment tracing algorithm (STA). Comput Geosci argillic-altered rocks in the Zagros magmatic arc, Iran, using
21:1091–1104 Advanced Spaceborne Thermal Emission and Reflection Radiometer
Kreiter VM (1968) Geological prospecting and exploration. Mir, (ASTER) data and logical operator algorithms. Geosphere 2
Moscow, 361 p (3):161–186
Kreuzer OP et al (2015) Comparing prospectivity modelling results and Martha TR, Guha A, Kumar KV, Kamaraju MVV, Raju EVR (2010)
past exploration data: a case study of porphyry Cu–Au mineral Recent coal-fire and land-use status of Jharia Coalfield, India from
systems in the Macquarie Arc, Lachlan Fold Belt, New South satellite data. Int J Remote Sens 31:3243–3262
Wales. Ore Geol Rev 71:516–544 Massonnet D, Briole P, Arnaud A (1995) Deflation of Mount Etna
Krishnamurthy J, Venkatesa Kumar N, Jayaraman V, Manivel M monitored by spaceborne radar interferometry. Nature
(1996) An approach to demarcate groundwater potential zones 375:567–570
through remote sensing and a geographic information system. Int J Matson M, Dozier J (1981) Identifieation of subresolution high
Remote Sens 17:1867–1884 temperature sources using a thermal infrared sensor. Photogramm
Kuenzer C, Hecker C, Zhang J, Wessling S, Wagner W (2008) The Eng Remote Sens 47(9):1311–1318
potential of multidiurnal MODIS thermal band data for coal fire McCauley JF, Schaber GC, Breed CS, Grolier MJ, Haynes CV,
detection. Int J Remote Sens 29:923–944 Issawi B, Elachi C, BIom R (1982) Subsurface valleys and
Kumar R, Anbalagan R (2016) Landslide susceptibility mapping using geoarchaeology of the eastern Sahara revealed by shuttle radar.
analytical hierarchy process (AHP) in Tehri reservoir rim region, Science 218:1004–1019
Uttarakhand. J Geol Soc Ind 87:271–286 McCuaig TC, Beresford S, Hronsky JMA (2010) Translating the
Labovitz ML, Masuoka EJ, Bell R, Nelson RF, Latsen EA, Hooker LK, mineral systems approach into an effective exploration targeting
Troensegaard KW (1985) Experimental evidence for spring and system. Ore Geol Rev 38:128–138
autumn windows for the detection for geobotanical anomalies Mckinstry HE (1948) Mining geology. Prentice Hall, Englewood Cliffs,
through the remote sensing of overlying vegetation. Int J Remote NJ, 680 p
Sens 6:195–216 Milton NM, Collins W, Chang SH, Schmidt RG (1983) Remote
Lang HR (1999) Stratigraphy. In: Rencz AN (ed) Remote sensing for detection of metal anomalies on Pilot Mountain, Randolph County,
the earth sciences, manual of remote sensing, vol 3, 3rd edn, Am North Carolina. Econ Geol 78:605–617
Soc Photogram Remote Sens. Wiley, New York, pp 357–374 Mohanty KK, Maiti K, Nayak S (2001) Monitoring water surges.
Lang HR, Alderman WH, Sabins FF (1985) Patrick draw, wyoming, GIS@Development 5(3):32–33, http://www.gis-development.net
petroleum test case report. The Joint NASA/Geosat Test Case Moore GK, Waltz FA (1983) Objective procedures for lineament
Project, Sect 11, Am Assoc Petrol Geol Tulsa, Oklahoma enhancement and extraction. Photogram Eng Remote Sens
Lattman LH, Parizek RR (1964) Relationship between fracture traces 49:641–647
and the occurrence of groundwater in carbonate rocks. J Hydrol Mouat DA (1982) The response of vegetation to geochemical
2:73–91 conditions. In: Proceedings international symposium remote sensing
412 19 Geological Applications

environment, 2nd thematic conference remote sensing exploration Partington GA (2010) Developing models using GIS to assess
geology, Fort Worth, Texas, pp 75–84 geological and economic risk: an example from VMS copper gold
Nagarajan R, Mukherjee A, Roy A, Khire MV (1998) Temporal remote mineral exploration in Oman. Ore Geol Rev 38:197–207
sensing data and GIS application in landslide hazard zonation of Paul F, Kääb A, Haeberli W (2007) Recent glacier changes in the Alps
part of Western Ghat, India. Int J Remote Sens 19(4):573–585 observed by satellite: consequences for future monitoring strategies.
Nampak H, Pradhan B, Manap M (2014) Application of GIS based data Global Planet Change 56:111–122
driven evidential belief function model to predict groundwater Peters WC (1978) Exploration mining and geology. Wiley, New York,
potential zonation. J Hydrol 513:283–300 644p
Ninomiya Y (1995) Quantitative estimation of SiO2 content in igneous Petrovic A, Khan SD, Chafetz HS (2008) Remote detection and
rocks using thermal infrared spectra with a neural network geochemical studies for finding hydrocarbon-induced alterations in
approach. IEEE Trans Geosci Remote Sens 33:684–691 Lisbon Valley, Utah. Mar Petrol Geol 25:696–705
Ninomiya Y (2003a) Rock type mapping with indices defined for Philip G, Gupta RP, Bhattacharya A (1989) Channel migration studies
multispectral thermal infrared ASTER data: case studies. Proc SPIE in the middle Ganga basin, lndia, using remote sensing data. Int J
4886:123–132 Remote Sens 10:1141–1149
Ninomiya Y (2003b) A stabilized vegetation index and several Pieri DC, Abrams MJ (2004) ASTER watches the world’s volcanoes: a
mineralogic indices defined for ASTER VNIR and SWIR data. new paradigm for volcanological observations from orbit. J Vol-
In: Proceedings international geoscience and remote sensing canol Geotherm Res 135:13–28
symposium (IGARSS-2003) (IEEE) 1294172. doi:10.1109/ Pieri D, Abrams M (2005) ASTER observations of thermal anomalies
IGARSS.2003.1294172 preceding the April 2003 eruption of Chikurachki volcano, Kurile
Ninomiya Y (2004) Lithologic mapping with multispectral ASTER Islands, Russia. Remote Sens Environ 99(1–2):84–89
TIR and SWIR data. Proc SPIE 5234:180–190 Podwysocki MH, Segal DB, Abrams MJ (1983) Use of multispectral
Ninomiya Y, Fu B (2001) Spectral indices for lithologic mapping with scanner images for assessment of hydrothermal alteration in the
ASTER thermal infrared data applying to a part of Beishan Marysvale, Utah mining area. Econ Geol 78:675–687
mountains, Gansu, China. In: IEEE—IGARSS remote sensing and Porwal A, Carranza EJM (2015) Introduction to the special issue:
geoscience symposium, 9–13 July 2001, vol 7, pp 2988–2990 GIS-based mineral potential modelling and geological data analyses
Ninomiya Y, Fu B (2002) Mapping quartz, carbonate minerals and for mineral exploration. Ore Geol Rev 71: 477–483
mafic–ultramafic rocks using remotely sensed multispectral thermal Porwal AK, Kreuzer OP (2010) Introduction to the special issue:
infrared ASTER data. Proc SPIE 4710:191–202 mineral prospectivity analysis and quantitative resource estimation.
Ninomiya Y, Fu B, Cudahy TJ (2005) Detecting lithology with Ore Geol Rev 38:121–127
advanced spaceborne thermal emission and reflection radiometer Porwal AK, González-Álvarez I, Markwitz V, McCuaig TC,
(ASTER) multispectral thermal infrared “radiance-at-sensor” data. Mamuse A (2010) Weights of evidence and logistic regression
Remote Sens Environ 99:127–139 modeling of magmatic nickel sulfide prospectivity in the Yilgarn
Ninomiya Y, Fu B (2016) Regional lithological mapping using Craton, Western Australia. Ore Geol Rev 38:184–196
ASTER-TIR data: case study for the Tibetan Plateau and the Porwal A et al (2015) Fuzzy inference systems for prospectivity
surrounding area. Geosciences, 6, 39. doi:10.3390/ modeling of mineral systems and a case-study for prospectivity
geosciences6030039 mapping of surficial Uranium in Yeelirrie Area, Western Australia.
Nkoane BBM, Sawula GM, Wibetoe G, Lund W (2005) Identification Ore Geol Rev 71:839–852
of Cu and Ni indicator plants from mineralized locations in Pour AB, Hashim M (2011) Spectral transformation of ASTER data
Botswana. J Geochem Explor 86(3):130–142 and the discrimination of hydrothermal alteration minerals in a
Oerlemans J (2005) Extracting climate signals from 169 glacier records. semi-arid region, SE Iran. Int J Phys Sci 6(8):2037–2059
Science 308:675–677 Pour AB, Hashim M (2012) The application of ASTER remote sensing
Offield TW, Abbott EA, Gillespie AR, Loguercio SO (1977) Structure data to porphyry copper and epithermal gold deposits. Ore Geol Rev
mapping on enhanced Landsat images of southern Brazil: tectonic 44:1–9
control of mineralization and speculation on metallogency. Geo- Prakash S (1981) Soil dynamics. McGraw-Hill, New York, pp 274–339
physics 42:482–500 Prakash A, Gupta RP (1998) Land-use mapping and change detection
O’Leary DW, Friedman JD, Pohn HA (1976) Lineament, linear and in coal mining area—a case study in the Jharia Coalfield, India. Int J
lineation: some proposed new standards for old terms. Geol Soc Am Remote Sens 19(3):391–410
Bull 87:1463–1469 Prakash A, Gupta RP (1999) Surface fires in the Jharia coalfield, India
Oppenheimer C (1991) Lava flow cooling estimated from Landsat —their distribution and estimation of area and temperature from TM
Thematic Mapper infrared data: the Lonquimay eruption (Chile, data. Int J Remote Sens 20(10):1935–1946
1989). J Geophys Res 96:21865–21878 Prakash A, Saraf AK, Gupta RP, Dutta M, Sundaram RM (1995a)
Oppenheimer C, Francis PW, Rothery DA, Carlton RW, Glaze LS Surface thermal anomalies associated with underground fires in
(1993) Infrared image analysis of volcanic thermal features: Jharia coal mines, India. Int J Remote Sens 16(12):2105–2109
Volcano Lascar, Chile, 1984–1992. J Geophys Res 98:4269–4286 Prakash A, Sastry RGS, Gupta RP, Saraf AK (1995b) Estimating the
Pal Yash, Sahai B, Sood RK, Agrawal DP (1980) Remote sensing of depth of buried hot features from thermal IR remote sensing data: a
the ‘lost’ Saraswati river. Proc Ind Acad Sci (Earth Planet Sci) conceptual approach. Int J Remote Sens 16(13):2503–2510
89(3):317–331 Prakash A, Gupta RP, Saraf AK (1997) A landsat TM based
Parizek RR (1976) Lineaments and groundwater. In: McMurthy GT, comparative study of surface and subsurface fires in the Jharia
Petersen GW (eds) Interdisciplinary application and interpretations coalfield, India. Int J Remote Sens 18(11):2463–2469
of EREP data within the Susquehanna River Basin. Pennsylvania Price NJ, Cosgrove J (1990) Analysis of geological structures.
State University, pp 4-59–4-86 Cambridge University Press, Cambridge, 502p
Park I, Kim Y, Lee S (2014) Groundwater productivity potential Prost GL (2013) Remote sensing for geoscientists, 3rd edn, CRC Press,
mapping using evidential belief function. Groundwater 52:201–207 New York, 702 p
References 413

Qu F, Lu Z, Poland M, Freymueller J, Zhang Q, Jung HS (2015) Rowan LC, Bowers TL (1995) Analysis of linear features mapped in
Post-eruptive inflation of Okmok Volcano, Alaska, from InSAR, Landsat thematic mapper and side-Iooking radar images of the
2008–2014. Remote Sens 7:16778–16794. doi:10.3390/ Reno, Nevada-California 1°  2° quadrangle: implications of
rs71215839 mineral resource studies. Photogram Eng Remote Sens
Quincey DC et al (2005) Optical remote sensing techniques in 61:749–759
high-mountain environments: application to glacial hazards. Prog Rowan LC, Mars JC (2003) Lithologic mapping in the Mountain Pass,
Phys Geogr 29(4):475–505 California area using advanced spaceborne thermal emission and
Quinn MF et al (1994) Measurement and analysis procedures for reflection radiometer (ASTER) data. Remote Sens Environ
remote identification of oil spills using a laser fluoro sensor. Int J 84:350–366
Remote Sens 15:2637–2658 Rowan LC, Wetlaufer PH, Goetz AFH, Billingsley FC, Stewart JH
Racoviteanu AE, Williams MW, Barry RG (2008) Optical remote (1974) Discrimination of rock types and detection of hydrothemally
sensing of glacier characteristics: a review with focus on the alerted areas in south-central Nevada by use of computer-enhanced
Himalaya. Sensors 8:3355–3383 ERTS images. USGS Prof Pap 883:35p
Rajawat AS (2014) SAR Applications in geosciences/geo-archaeology, Rowan LC, Goetz AFH, Ashley RP (1977) discrimination of
NISAR Science Workshop, 17–18 Nov 2014. Space Applications hydrothermally altered and unaltered rocks in visible and
Centre, ISRO, Ahmedabad, India. www.sac.gov.in/nisar/NISAR% near-infrared multispectral images. Geophysics 42:522–535
20Science%20Workshop_Presentations/BR-GT1.pdf. Accessed on Rowan LC, Watson K, Crowley JK, Anton-Pancheco C, Gumiel P,
18 Oct 2016 Kingston MJ, Miller SH, Bowers TL (1993) Mapping lithologies in
Singh SK, Rajawat, AS, Rathore BP, Bahuguna IM, Chakraborty M the Iron Hill, Colorado, carbonatite alkalic igneous rock complex
(2015) Detection of glacier lakes buried under snow by RISAT-1 using thermal infrared multispectral scanner and airborne
SAR in the Himalayan terrain. Curr Sci 109(9):1728–1732 visible-infrared imaging spectrometer data. In: Proceedings 9th
Rajawat et al (2015) Assessment of coastal erosion along the Indian thematic conference on geology remote sensing, vol I, Env Res Inst
coast on 1: 25,000 scale using satellite data of 1989–1991 and Michigan, Ann Arbor, Mich, pp 195–197
2004–2006 time frames. Curr Sci 109(2):347–353 Rowan LC, Hook SJ, Abrams MJ, Mars JC (2003) Mapping
Rajendran S, Thirunavukkarasu A, Balamurugan G, Shankar K (2011) hydrothermally altered rocks at Cuprite, Nevada, using the
Discrimination of iron ore deposits of granulite terrain of southern advanced spaceborne thermal emission and reflection radiometer
peninsular India using ASTER data. J Asian Earth Sci 41:99–106 (ASTER), a new satellite-imaging system. Econ Geol 98(5):
Rajendran S et al (2012) ASTER detection of chromite bearing 1019–1027
mineralized zones in Semail Ophiolite Massifs of the northern Rowan LC, Mars JC, Simpson CJ (2005) Lithologic mapping of the
Oman mountain: exploration strategy. Ore Geol Rev 44:121–135 Mordor N.T., Australia ultramafic complex by using the advanced
Rango A, Martinec J (1981) Accuracy of snowmelt runoff simulation. spaceborne thermal emission and reflection radiometer (ASTER).
Nord Hydrol 12:265–274 Remote Sens Environ 99:105–126
Rao NS (2006) Groundwater potential index in a crystalline terrain Rowan LC, Schmidt RG, Mars JC (2006) Distribution of hydrother-
using remote sensing data. Environ Geol 50(7):1067–1076 mally altered rocks in the Reko Diq, Pakistan, mineralized area
Rao YSN, Rahman AA, Rao DP (1974) On the structure of the Siwalik based on spectral analysis of ASTER data. Remote Sens Environ
range between the rivers Yamuna and Ganga. Himalayan Geol 104:74–87
4:137–150 Roy SC (1939) Seismometric study. Mem Geol Surv India 73:49–75
Rees GW (2006) Remote sensing of snow and ice. CRC Press, Boca Roy Chowdhary MK, Das Gupta SP (1965) Ore localization in Khetri
Raton, p 284 copper belt. Econ Geol 60:69–88
Rencz AN, Bowie C, Ward B (1996) Application of thermal imagery Ruiz-Armenta JR, Prol-Ledesma RM (1998) Techniques for enhancing
from Landsat data to identify kimberlites, Lac de Gras area, District the spectral response of hydrothermal alteration minerals in
of Mackenzie, N.W.T.: Searching for diamonds in Canada. In: Le thematic mapper images of Central Mexico. Int J Remote Sens
Chaimant AN, Richardson DG, Di Labio RNW, Richardson KA 19(10):1981–2000
(eds). Geological Survey of Canada, Open File 3228, pp 255–257 Sabine C, Realmuto VJ, Taranik JV (1994) Quantitative estimation of
Rib HT (1975) Engineering: regional inventories, corridor surveys and granitoid composition from thermal infrared multispectral scanner
site investigations. In: Reeves RG (ed) Manual of remote Sensing, (TIMS) data, Desolation Wilderness, northern Sierra Nevada,
Am Soc Photogramm, Falls Church, VA, pt 2, pp 1881–1945 Califomia. J Geophys Res 99(B3):4261–4271
Rib HT, Liang TA (1978) Recognition and identification. In: Sabins FF Jr (1983) Geologic interpretation of space shuttle radar
Schuster RL, Krizek RV (eds) Landslides analysis and control. images of Indonesia. Am Assoc Petrol Geol Bull 67:2076–2099
Trans Res Board Nat Res Council USA Spec Rep 176:34–80 Sabins FF Jr (1997) Remote sensing-principles and interpretation, 3rd
Rock BN, Hoshizaki T, Jr Miller (1988) Comparison of in-situ and edn. Freeman & Co, NY
airborne spectral measurements of the blue-shift associated with Sabins FF Jr (2007) Remote sensing: principles and interpretation, 4th
forest decline. Remote Sens Environ 24:109–127 edn. Waveland Press, Long Grove, 512 p
Rockwell BW, Hofstra AH (2008) Identification of quartz and Saha AK, Gupta RP, Arora MK (2002) GIS-based landslide hazard
carbonate minerals across northern Nevada using ASTER thermal zonation in the Bhagirathi (Ganga) Valley, Himalayas. Int J Remote
infrared emissivity data, implications for geologic mapping and Sens 23(2):357–369
mineral resource investigations in well-studied and frontier areas. Saini V, Gupta RP, Arora MK (2015) Spatio-temporal pattern of
Geosphere 4(1):218–246 eco-environmental parameters in Jharia coalfield, India. In: Michel
Rodell M, Velicogna I, Famiglietti JS (2009) Satellite-based estimates U et al (eds) Proceedings SPIE 96441H, Earth resources and
of groundwater depletion in India. Nature 460:999–1002 environmental remote sensing/GIS applications VI, 21–24 Sept
Rothery DA, Francis PW, Wood CA (1988) Volcano monitoring using 2015, Toulouse, France. doi:10.1117/12.2196645
short wavelength IR data from satellites. J Geophys Res 93 Saini V, Arora MK, Gupta RP (2016) Relationship between surface
(B7):7993–8008 temperature and SAVI using Landsat data in a coal mining area in
Rouse JW, Haas RH, Schell JA, Deering DW (1973) Monitoring India. In: Khanbilvardi R, Ganju A, Rajawat AS, Chen JM
vegetation systems in the Great Plains with ERTS. Proceedings of (eds) Land surface and cryosphere remote sensing III, Proceed-
the Third Earth Resources Technology Satellite-1 Symposium, ings SPIE Asia Pacific Remote Sensing, 4–7 April 2016, New
NASA SP-351, Greenbelt, pp 309–317 Delhi. doi:10.1117/12.2228094
414 19 Geological Applications

Samadder RK, Kumar S, Gupta RP (2007) Conjunctive use of well-log Sun Q, Zhang L, Ding X, Hu J, Liang H (2015a) Investigation of
and remote sensing data for interpreting shallow aquifer geometry slow-moving landslides from ALOS/PALSAR images with
in Ganga Plains. J Geol Soc India 69:925–932 TCPInSAR: a case study of Oso, USA. Remote Sens 7:72–88
Sander P (2007) Lineaments in groundwater exploration: a review of Sun Q, Zhang L, Ding XL, Hu J, Li ZW, Zhu JJ (2015b) Slope
applications and limitations. Hydrogeol J 15:71–74 deformation prior to Zhouqu, China landslide from InSAR time
Saraf AK, Choudhury PR (1998) Integrated remote sensing and GIS series analysis. Remote Sens Environ 156:45–57
for groundwater exploration and identification of artificial recharge Tam VT, De Smedt F, Batelaan O, Dassargues A (2004) Study on the
sites. Int J Remote Sens 19:1825–1841 relationship between lineaments and borehole specific capacity in a
Saraf AK, Prakash A, Sengupta S, Gupta RP (1995) Landsat-TM data fractured and karstified limestone area in Vietnam. Hydrogeol J
for estimating ground temperature and depth of subsurface coal fire 12:662–673
in the Jharia coalfield, India. Int J Remote Sens 16(12):2111–2124 Taschner S, Ranzi R (2002) Comparing opportunities of Landsat-TM and
Saunders DF, Burson KR, Thompson CK (1999) Model for hydrocar- ASTER data for monitoring a debris covered glacier in the Italian
bon microseepage and related near-surface alterations. Am Assoc Alps within the GLIMS project. In: Proceedings international
Petrol Geol Bull 83:170–185 geoscience remote sensing symposium (IGARSS 2002) (IEEE), 24–
Schumacher D (1996) Hydrocarbon-induced alteration of soils and 28 June 2002, vol 2. Toronto, Canada, Piscataway, NJ, pp 1044–1046
sediments. In: Schumacher D, Abrams MA (eds) Hydrocarbon Thomas IL, Howorth R, Eggers A, Fowler ADW (1981) Textural
migration and its near surface expression. Am Assoc Petrol Geol enhancement of a circular geological feature. Photogramm Eng
Mem 66:71–89 Remote Sens 47:89–91
Seeber L, Annbruster JG, Quitmeyer RC (1981) Seismicity and Thornbury WD (1978) Principles of geomorphology, 2nd edn. Wiley,
continental subduction in the Himalayan arc. Inter Union Com- New York
mission on Geodynamics, Working Group 6:215–242 Thum L, De Paoli R (2015) 2D and 3D GIS-based geological and
Sen D, Sen S (1983) Post-Neogene tectonism along the Aravalli range, geomechanical survey during tunnel excavation. Eng Geol
Rajasthan, India. Tectonophysics 93:75–98 192:19–25
Sharma RP (1977) The role of ERTS-l multispectral imagery in the Todd DK, Mays LW (2005) Groundwater hydrology, 3rd edn.
elucidation of tectonic framework and economic potentials of Wiley, NJ
Kumaun and Simla Himalaya. Himal Geol 7:77–99 Valdiya KS (2017) Prehistoric River Saraswati. Springer, Western India
Sharma RS (1988) Patterns of metamorphism in the Precambrian rocks Van der Meer F, Van Dijk P, Van der Werff H, Yang H (2002) Remote
of the Aravalli mountain belt. Mem Geol Soc Ind 7:33–75 sensing and petroleum seepage: a review and case study. Terra
Shekhar S, Pandey AC (2015) Delineation of groundwater potential Nova 14:1–17
zone in hard rock terrain of India using remote sensing, geograph- van Westen CJ (1994) GIS in landslide hazard zonation: a review, with
ical information system (GIS) and analytic hierarchy process examples from the Andes of Columbia. In: Price M, Heywood I
(AHP) techniques. Geocarto Int 30(4):402–421 (eds) Mountain environments and geographic information system.
Shi P, Fu B, Ninomiya Y, Sun J, Li Y (2012) Multispectral remote Taylor & Francis, Basingstoke, pp 135–165
sensing mapping for hydrocarbon seepage-induced lithologic Velosky JC, Stern RJ, Johnson PR (2003) Geological control of
anomalies in the Kuqa foreland basin, south Tian Shan. J Asian massive sulfide mineralization in the Neoproterozoic Wadi Bidah
Earth Sci 46:70–77 shear zone, southwestern Saudi Arabia, inferences from orbital
Shimamura Y, Izumi T, Matsumaya H (2006) Evaluation of a useful remote sensing and field studies. Precambr Res 123(2–4):235–247
method to identify snow-covered areas under vegetation—compar- Venkataraman G, Singh G (2011) Radar application in snow, ice and
isons among a newly proposed snow index, normalized difference glaciers. In: Singh VP, Singh P, Haritashya UK (eds) Encyclopedia
snow index and visible reflectance. Int J Rem Sens 27(21): of snow, ice and glaciers. Dordrecht, Springer, pp 883–903
4867–4884 Vincent RK (1997) Fundamentals of Geological and Environmental
Short NM, Blair RW Jr (eds) (1986) Geomorphology from space. Remote Sensing. Prentice Hall
NASA SP-486 US Govt Printing Office, Washington, DC Vizy KN (1974) Detecting and monitoring oil slicks with aerial photos.
Shukla A, Arora MK, Gupta RP (2010) Synergistic approach for Photogramm Eng 40:697–708
mapping debris-covered glaciers using optical-thermal remote Voss KA et al (2013) Groundwater depletion in the Middle-East with
sensing data with inputs from geomorphometric parameters. GRACE with implications for transboundary water management in
Remote Sens Environ 114:1378–1387 the Tigris-Euphrates-Western Iran region. Water Resour Res
Siegal BS (1977) Significance of operator variation and the angle of 49:904–914
illumination in lineament analysis on synoptic images. Mod Geol Walker AS (1986) Eolian landforms. In: Short NM, Blair RW Jr
6:75–85 (eds) Geomorphology from space, NASA SP-486, US Govt
Singhal BBS, Gupta RP (2010) Applied hydrogeology of fractured Printing Office, Washington, DC, pp 447–520
rocks, 2nd edn. Springer, Dordrecht, 408p Wang J, Li W (2003) Comparison of methods of snow cover mapping
Singhroy VH, Kruse FA (1991) Detection of metal stress in boreal by analyzing the solar spectrum of satellite remote sensing data in
forest species using the 0.67 µm chlorophyll absorption band. In: China. Int J Remote Sens 24(21):4129–4136
Proceedings 8th thematic conference geology remote sensing, vol I. Wang et al (2015) 3D geological modeling for prediction of subsurface
Env Res Inst Michigan, Ann Arbor, Mich, pp 361–372 Mo targets in the Luanchuan district, China. Ore Geol Rev
Sinha PR (1986) Mine fires in Indian coalfields. Energy 11 71:592–610
(11/12):1147–1154 Warren SG, Wiscombe W (1980) A model for the spectral albedo of
Slavecki RJ (1964) Detection and location of subsurface coal fire. In: snow, II, snow containing atmospheric aerosols. J Atmos Sci
Proceedings of the 3rd symposium, remote sensing environment, 37:2734–2745
University of Michigan, Ann Arbor, MI, 14–16 Oct 1964, Waters P, Greenbaum D, Smart PL, Osmaston H (1990) Applications
pp 537–547 of remote sensing to groundwater hydrology. Remote Sens Rev 4
Smirnov V (1976) Geology of mineral deposits. Mir, Moscow (2):223–264
Solomon S, Quiel F (2006) Groundwater study using remote sensing Watson K (1975) Geologic applications of thermal infrared images.
and geographic information systems (GIS) in the central highlands Proc IEEE 63(1):128–137
of Eritrea. Hydrogeol J 14:1029–1041 Watson K, Rowan LC, Bowers TL, Anton-Pacheco C, Gumiel P,
Stanton RL (1972) Ore petrology. McGraw Hill, New York, 713 p Miller SH (1996) Lithologic analysis from multispectral thermal
References 415

infrared data of the alkalic rock complex at Tron Hill, Colorado. Yamaguchi Y, Kahle AB, Tsu H, Kawakami T, Pniel M (1998)
Geophysics 61:706–721 Overview of advanced spaceborne thermal emission and reflection
Wieczorek GF (1984) Preparing a detailed landslide-inventory map for radiometer (ASTER). IEEE Trans Geosci Remote Sens 36
hazard evaluation and reduction. Bull Assoc Eng Geol 21(3): (4):1062–1071
337–342 Yeh HF, Lee CH, Hsu KC, Chang PH (2009) GIS for the assessment of
Wise DU (1982) Linesmanship and practice of linear geo-art. Geol Soc the groundwater recharge potential zone. Environ Geol 58:185–195
Am Bull 93:886–888 Yeh HF, Cheng YS, LinHI Lee CH (2016) Mapping groundwater
Wright R, Flynn L, Garbeil H, Harris A, Pilger E (2004) MODVOLC: recharge potential zone using a GIS approach in Hualian River,
Near-real-time thermal monitoring of global volcanism. J Volcano Taiwan. Sustain Environ Res 26(1):33–43
Geother Res 135:29–49 Yue H, Liu G, Guo H, Li X, Kang Z, Wang R, Zhong X (2011) Coal
Xiao X, Shen Z, Qin X (2001) Assessing the potential of mining induced land subsidence monitoring using multiband
VEGETATION sensor data for mapping snow and ice cover: a spaceborne differential interferometric synthetic aperture radar data.
normalized difference snow and ice index. Int J Remote Sens 22 J Appl Remote Sens 5–1:53518–53529. doi:10.1117/1.3571038
(13):2479–2487 Zhang XM (1998) Coal Fires in North China-detection, monitoring and
Xiao K, Li N, Porwal A, Holden E, Bagas L, Lu Y (2015) GIS-based 3D prediction using remote sensing data. ITC Publ No 58, l33p
prospectivity mapping: a case study of Jiama copper-polymetallic Zheng K, Zhou F, Liu P, Kan P (2010) Study on 3D geological model
deposit in Tibet, China. Ore Geol Rev 71:611–632 of highway tunnels modeling method. J Geogr Inf Syst 2:6–10
Appendices

Appendix A: What Is Colour?

In simple words, colour is the visual effect produced by EM radiation incident on the retina of the human eye. The average human eye is sensitive to radiation from approximately 0.4–0.7 µm, which is called the visible region; in exceptional individuals, visibility may extend to slightly shorter and longer wavelengths. The effect of incident radiation, depending upon the collective wavelengths, leads to colour vision.

The visual process can be classified into two types: achromatic and chromatic. The achromatic process is one in which only the cumulative brightness or intensity variation is portrayed. This is a one-dimensional variation. Black-and-white pictures are typical examples. On the other hand, in chromatic processes, the relative intensities of different wavelengths are of interest (and not merely the cumulative intensity of radiation). Colour vision falls in this category.

Colour is described in terms of three parameters: intensity, hue and saturation. Intensity is the brightness or scene luminance, i.e. luminous intensity. Hue is the dominant wavelength present. In a particular visual process, several wavelengths may be present at the same time, and the most dominant of these is the hue. Saturation relates to the proportion of the dominant wavelength vis-a-vis other wavelengths present, i.e. it describes the percentage of the dominant wavelength as contained in a mixture with white light. It is a measure of the relative purity of hue. A pure hue containing no white component is a colour with a single wavelength; it is said to be saturated. A colour field thus has a three-dimensional variation, in terms of hue, saturation and intensity.

The human eye can distinguish only about 20–30 gray tones (achromatic vision), whereas it can distinguish more than a million colours (chromatic vision). Due to this high-order difference, scientists prefer colour pictures to facilitate better discrimination and identification of features of interest.

An interesting aspect is that colours are amenable to mixing, i.e. addition/subtraction. Varying visual effects (i.e. colours) can be produced by mixing various colours in different proportions. For example, white light is a visual sensation produced by mixing different wavelengths, e.g. the well-known VIBGYOR. In addition, white light can also be produced by combining blue + green + red, blue + yellow, red + cyan, or green + magenta. It is not possible to uniquely and definitively identify the actual colour input components in a particular instance of mixture, since the same visual effect can be generated by several alternative combinations. For example, yellow can be produced by green plus red, or by white minus blue.

Colour perception is highly subjective, and a standard colour classification scheme, acceptable for all purposes, is difficult to evolve. Several schemes and models of colour description have been formulated (see e.g. Harris et al. 1999). We discuss here three colour models which are basic to understanding the nature of colour and its applications in digital image processing:

1. CIE colour system
2. RGB colour model
3. IHS colour model.

1. CIE colour system. In 1931, the Commission Internationale de l'Eclairage (International Commission on Illumination) adopted a systematic method for colour designation, called the CIE method. The CIE started by specifying a set of X, Y, Z artificial primaries. These primaries can be obtained by mathematical transformations of the real spectrum colours, but as such do not represent the spectrum colours. The relative amounts of X, Y, Z primaries required to match the spectral colour of a particular wavelength are referred to as tristimulus coefficients (x′, y′, z′) at that wavelength. The relative amounts (tristimulus coefficients) can be converted into fractional amounts x, y, z, called the trichromatic coefficients, such that x + y + z = 1.

Thus, any colour can be represented in terms of X, Y, Z primaries by specifying any two of the three x, y, z coefficients. This facilitates a two-dimensional representation of the colour field. The graphic representation in terms of the chromaticity coordinates x and y is known as the chromaticity diagram (Fig. A.1).
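The conversion from tristimulus amounts to trichromatic coefficients is a simple normalization, which the following minimal Python sketch illustrates (the function name and the numerical values are assumptions for illustration, not taken from the text):

def trichromatic(X, Y, Z):
    # Normalize CIE tristimulus amounts to trichromatic coefficients (x, y, z)
    total = X + Y + Z
    if total == 0:
        raise ValueError("tristimulus amounts must not all be zero")
    return X / total, Y / total, Z / total   # by construction x + y + z = 1

# Illustrative (hypothetical) tristimulus amounts for some colour:
x, y, z = trichromatic(X=0.35, Y=0.45, Z=0.20)
print(x, y, round(x + y + z, 6))   # the (x, y) pair locates the colour in the chromaticity diagram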
In the CIE chromaticity diagram, spectral colours appear on the boundary of the curve, giving the hue between 0.38 and 0.7 µm. White appears in the center. The line joining blue/violet with red is called the purple line. The triangular colour field given by white–blue–red–white contains the non-spectral colours (Fig. A.1).

For any colour, the CIE chromaticity diagram can be used to determine hue and saturation. For a colour P1 (in Fig. A.1), the line joining white to P1 is extended to intersect the curve at H1; H1 is the corresponding hue. Saturation is given by the distance away from the white point, i.e. proximity to the curve {= d/(l + d)} in Fig. A.1. Saturated colours appear on the boundary of the curve. Unsaturated colours appear on the lines connecting points of pure hue (boundary) with the white point (W).

As mentioned earlier, colours are amenable to mixing. The chromaticity diagram can be used to define the range of colours that would be generated by mixing any set of colours. Even unsaturated colours can be used as end members for mixing purposes.

Fig. A.1 The CIE chromaticity diagram

Blue (B), green (G) and red (R) are called the primary additive colours, since B, G, R, when added together in equal proportion, produce white light (Fig. A.2a). This colour combination permits generation of a fairly large gamut of colours, although the entire colour field may not be generated by combining any three primary colours.

Fig. A.2 (a) The colour field generated by B, G, R primaries. (b) Schematic positions of colour in B, G, R (or RGB) ternary colour diagram

The B, G, R colour coding is used in superimposing multispectral image data sets (viz. generating FCCs). Figure A.2b gives the schematic positions of colours in such a ternary colour field. The following points may be noted (considering mixing in equal proportions; a short illustrative sketch follows these relations):

B + G + R → W (B, G, R are called the primary additive colours)

Further:

(W − B) → Y
(W − G) → M
(W − R) → C

Yellow (Y), magenta (M) and cyan (C) are called the primary subtractive colours, as they are produced by subtracting the three primary additive colours, one by one, from white, respectively.

(B + Y) → W
(G + M) → W
(R + C) → W

B and Y, G and M, R and C are called the mutually complementary colours.

It is easy to follow that:

(B + G) → C
(G + R) → Y
(R + B) → M

Further, when the three primary subtractive colours are added together, the result is black:

Y + M + C → (W − B) + (W − G) + (W − R) → (W − W) → black
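These relations can be checked with simple vector arithmetic if the primaries are treated as unit components of an RGB triplet. The Python sketch below is only an illustration of the relations above (the array values and the band names in the final comment are assumptions, not part of the text); the last comment also indicates the sense in which three co-registered bands are coded in B, G, R to form an FCC.

import numpy as np

# Unit primaries in (R, G, B) component order
R = np.array([1.0, 0.0, 0.0])
G = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
W = B + G + R                          # additive mixing of the three primaries gives white

Y = W - B                              # yellow  = white minus blue
M = W - G                              # magenta = white minus green
C = W - R                              # cyan    = white minus red

print(np.allclose(B + Y, W))           # True: B and Y are mutually complementary
print(np.allclose(B + G, C))           # True: blue plus green gives cyan
print(np.allclose(W - B - G - R, 0))   # True: subtracting all three primaries from white leaves black

# FCC idea: three co-registered, 0-1 scaled bands (hypothetical arrays band_r, band_g, band_b)
# displayed through the R, G and B guns respectively:
# fcc = np.dstack([band_r, band_g, band_b])   # shape (rows, cols, 3), ready for display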


2. RGB colour model. The RGB colour model can be visualized as a three-dimensional Cartesian co-ordinate system, the three axes being R, G and B starting from black (Fig. A.3). The diagonal of the cube from black (0, 0, 0) to white (1, 1, 1) is achromatic, the various gray shades lying on this line. The three primary subtractive colours (cyan, magenta, yellow) appear on the corners diagonally opposite the corresponding primary additive colours. The RGB model is used in computer hardware and software systems for colour displays.

Fig. A.3 RGB colour model as a cube utilizing three-dimensional Cartesian coordinate system

3. IHS colour model. This is based on the Munsell colour system or wheel model (Fig. A.4). It uses three parameters: intensity (I), hue (H) and saturation (S). The colour space is conceived as a cylinder where hue is represented by the polar angle, saturation by the radius, and intensity by the vertical distance on the cylinder axis (Fig. A.5a).

contributions of hue and saturation become insignificant


at an intensity of zero, a cone could be a better repre-
sentation of the IHS colour model (Fig. A.5b).
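The additive B, G, R coding mentioned above is, in digital practice, how a false colour composite (FCC) is generated: three co-registered band images are simply assigned to the red, green and blue display channels. The minimal sketch below (Python/NumPy) illustrates the idea; the random band arrays and the simple percentile stretch are assumptions made here purely for illustration.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Rescale a band to 0-1 between its low and high percentiles (simple contrast stretch)."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical co-registered bands (e.g. NIR, red, green) as 2-D arrays
nir = np.random.rand(100, 100)
red = np.random.rand(100, 100)
green = np.random.rand(100, 100)

# Standard FCC convention: NIR -> R channel, red -> G channel, green -> B channel
fcc = np.dstack([linear_stretch(nir),
                 linear_stretch(red),
                 linear_stretch(green)])   # shape (rows, cols, 3), ready for display
```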

Transformations between RGB and IHS colour models


have been described by many workers (e.g. Buchanan and
Pendgrass 1980; Haydn et al. 1982; Gillespie et al. 1986;
Edwards and Davis 1994; Harris et al. 1999). If blue is
chosen as the reference point for the IHS co-ordinate
system, the following equations relate RGB to IHS values
in a cylindrical co-ordinate system (Edwards and Davis
1994):

I = DNR + DNG + DNB    (A.1)

H = tan⁻¹ [√3 (DNG − DNR) / (2 DNB − DNG − DNR)]    (A.2)

S = [(DNB − 1/3)² + (DNG − 1/3)² + (DNR − 1/3)²]^(1/2)    (A.3)

Conversely, the following equations relate a pixel's IHS values to RGB DNs:

R = 1/3 − S cos(H)/√6 − S sin(H)/√2    (A.4)

G = 1/3 − S cos(H)/√6 + S sin(H)/√2    (A.5)

B = 1/3 + √6 S cos(H)/3    (A.6)
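The above transformation can be sketched in code as follows (Python/NumPy). Two illustrative assumptions are made here that are not stated explicitly in Eqs. (A.1)–(A.6): the DN values are normalised by the intensity so that R + G + B = 1 before hue and saturation are computed, and the two-argument arctangent is used so that the hue angle falls in the correct quadrant. Under these assumptions the pair of functions below forms an exact round trip.

```python
import numpy as np

def rgb_to_ihs(dn_r, dn_g, dn_b):
    """Eqs. (A.1)-(A.3): intensity, hue, saturation from RGB DNs (blue as reference)."""
    i = dn_r + dn_g + dn_b                      # Eq. (A.1)
    r, g, b = dn_r / i, dn_g / i, dn_b / i      # normalise so that r + g + b = 1 (assumption)
    h = np.arctan2(np.sqrt(3.0) * (g - r),      # Eq. (A.2), two-argument arctangent form
                   2.0 * b - g - r)
    s = np.sqrt((b - 1/3)**2 + (g - 1/3)**2 + (r - 1/3)**2)   # Eq. (A.3)
    return i, h, s

def ihs_to_rgb(i, h, s):
    """Eqs. (A.4)-(A.6): back to normalised RGB; multiply by I to recover the DNs."""
    r = 1/3 - s * np.cos(h) / np.sqrt(6.0) - s * np.sin(h) / np.sqrt(2.0)   # Eq. (A.4)
    g = 1/3 - s * np.cos(h) / np.sqrt(6.0) + s * np.sin(h) / np.sqrt(2.0)   # Eq. (A.5)
    b = 1/3 + np.sqrt(6.0) * s * np.cos(h) / 3.0                            # Eq. (A.6)
    return r * i, g * i, b * i

# Round-trip check on an arbitrary pixel
print(ihs_to_rgb(*rgb_to_ihs(120.0, 80.0, 40.0)))   # ~ (120.0, 80.0, 40.0)
```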
Appendix B: Modulation Transfer Function (MTF)

The Modulation Transfer Function (MTF) is an important concept in the evaluation of the performance of remote sensors. It can be applied to any system, such as a scanner system or photography, or to its component parts, such as a lens, film, line scanner etc. The MTF is a measure of the faithfulness with which a sensor portrays an object in image form.

Basically, an object scene can be considered to consist of several unit areas, and the spatial variation across the scene may be imagined as temporal variation, if successive pixels are considered one after another. This gives a sine wave in which the frequency is de facto spatial and not temporal. The frequency is given here as cycles/mm. If the variation in ground features occurs faster and more closely, the frequency is higher, and if the variation occurs slowly, the frequency of the sine wave is lower. The modulation M of a wave of a certain wavelength and frequency is, by definition (Fig. B.1):

M = (Pmax − Pmin) / (Pmax + Pmin)    (A.7)

Fig. B.1 Concept of the Modulation Transfer Function (MTF); a a sinusoidal wave corresponding to the intensity variation on the ground; b spatial frequency (cycles/mm) versus system modulation

How faithfully this modulation is perceived and recorded by the sensor is very important for remote sensing. The ratio of the image modulation (Mi) to the object modulation (Mo) is called the modulation transfer function (MTF). Therefore,

MTF = Mi / Mo    (A.8)

The MTF is frequency dependent; it will have different values for different frequencies. For lower frequencies, i.e. if the variation in ground features is not rapid, the MTF will be nearly unity, meaning that the object modulation is matched by the image modulation. For higher frequencies, i.e. spatially closer variation, the MTF will have lower values, implying relatively poor portrayal of variations in the image as compared to the object. Thus, a frequency-dependent evaluation of the MTF for a sensor system or its parts is carried out to evaluate its performance (for more details, see e.g. Slater 1980).
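The frequency dependence of the MTF can be illustrated numerically. In the sketch below, a sinusoidal ground-brightness pattern is blurred with a Gaussian point spread function; the ratio of image to object modulation (Eq. A.8) is close to 1 at a low spatial frequency and falls off at a higher frequency. The Gaussian PSF and its width are arbitrary illustrative assumptions, not a model of any particular sensor.

```python
import numpy as np

def modulation(p):
    """Eq. (A.7): M = (Pmax - Pmin) / (Pmax + Pmin)."""
    return (p.max() - p.min()) / (p.max() + p.min())

def mtf_at(cycles_per_mm, sigma_mm=0.02, dx_mm=0.001, length_mm=10.0):
    """Blur a sinusoidal object pattern with a Gaussian PSF and return Mi/Mo (Eq. A.8)."""
    x = np.arange(0.0, length_mm, dx_mm)
    obj = 1.0 + 0.5 * np.sin(2.0 * np.pi * cycles_per_mm * x)   # object brightness (always > 0)
    kx = np.arange(-5.0 * sigma_mm, 5.0 * sigma_mm + dx_mm, dx_mm)
    psf = np.exp(-0.5 * (kx / sigma_mm) ** 2)
    psf /= psf.sum()                                            # normalised Gaussian PSF
    img = np.convolve(obj, psf, mode="same")                    # blurred "image"
    core = slice(len(kx), -len(kx))                             # discard convolution edge effects
    return modulation(img[core]) / modulation(obj[core])

print(mtf_at(1.0))    # low spatial frequency: MTF close to 1
print(mtf_at(10.0))   # higher spatial frequency: markedly lower MTF
```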

Appendix C

Table C.1 Measurable temperature ranges for ASTER^a

Column headings: Subsystem; Band no.; Gain mode; Gain factor; Minimum input radiance, Lmin (mW/cm²/sr/µm); Maximum input radiance, Lmax (mW/cm²/sr/µm); Brightness temperature range (°C)
VNIR 1 L1 0.75 1.14 56.9 N/A
N 1 0.85 42.7
H 2.5 0.34 17.1
2 L1 0.75 0.95 47.7 N/A
N 1 0.71 35.8
H 2 0.36 17.9
3 L1 0.75 0.58 29.1 721–999
N 1 0.44 21.8 705–973
H 2 0.22 10.9 669–915
SWIR 4 L2 0.75 0.15 7.33 283–467
L1 0.75 0.15 7.33 283–467
N 1 0.11 5.50 273–449
H 2 0.05 2.75 250–410
5 L2 0.167 0.211 10.54 203–387
L1 0.75 0.055 2.35 156–301
N 1 0.035 1.76 149–288
H 2 0.018 0.88 131–257
6 L2 0.157 0.201 10.06 195–378
L1 0.75 0.042 2.11 148–290
N 1 0.032 1.58 140–277
H 2 0.016 0.79 123–246
7 L2 0.171 0.177 8.83 184–362
L1 0.75 0.040 2.01 140–280
N 1 0.030 1.51 132–267
H 2 0.015 0.755 115–237
8 L2 0.162 0.130 6.51 165–333
L1 0.75 0.028 1.407 122–254
N 1 0.021 1.055 115–242
H 2 0.011 0.528 99–213
9 L2 0.116 0.1386 6.93 160–329
L1 0.75 0.0214 1.072 108–234
N 1 0.0161 0.804 101–222
H 2 0.0080 0.402 86–195
TIR 10–14 N 1 Radiance of 200 K blackbody Radiance of 370 K blackbody −73 to 97
Note: The minimum input radiance is taken as 2% of the maximum input radiance
^a Data after Urai et al. (1999)
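As an illustration of how spectral radiance relates to the brightness temperatures listed in Table C.1, the sketch below inverts the Planck function for a single TIR band. The band-centre wavelength (taken here as roughly 10.6 µm, in the region of ASTER band 13) and the simple unit conversion from mW/cm²/sr/µm are assumptions for illustration; the actual ASTER processing uses band-specific spectral response functions.

```python
import numpy as np

# Physical constants (SI)
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def brightness_temperature(radiance_mw_cm2_sr_um, wavelength_um=10.6):
    """Invert the Planck function for one band: spectral radiance -> brightness temperature (K).

    Radiance is taken in mW/cm^2/sr/um as in Table C.1; wavelength_um (~10.6 um, near
    ASTER band 13) is an assumed band-centre value used for illustration only.
    """
    lam = wavelength_um * 1e-6                          # wavelength in metres
    L = radiance_mw_cm2_sr_um * 1e7                     # convert to W m^-2 sr^-1 m^-1
    return (H * C / (lam * K)) / np.log(1.0 + 2.0 * H * C**2 / (lam**5 * L))

print(brightness_temperature(0.975))   # ~300 K for this assumed band centre
```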

Appendix D: Remote Sensing Software

1. Erdas Imagine (http://www.hexagongeospatial.com)
2. ENVI + IDL (http://www.harrisgeospatial.com/)
3. IDRISI TerrSet (https://clarklabs.org/)
4. TNT mips (http://www.microimages.com/)
5. PCI Geomatica (http://www.pcigeomatics.com/)

Open Source:

1. QGIS with Semi-Automatic Classification Plugin (http://www.qgis.org/en/site/) (list of plug-ins: https://plugins.qgis.org/plugins/tags/raster/)
2. SAGA GIS: System for Automated Geoscientific Analyses (http://www.saga-gis.org/)
3. Opticks (http://grass.osgeo.org/)
4. PolSARPro (https://earth.esa.int/web/polsarpro)
5. ORFEO: Optical and Radar Federated Earth Observation (http://orfeo-toolbox.org/)
6. OSSIM: Open Source Software Image Map (http://trac.osgeo.org/ossim/)
7. InterImage (http://www.lvc.ele.puc-rio.br/projects/interimage/)
8. E-foto (http://www.efoto.eng.uerj.br/en)
9. ILWIS: Integrated Land and Water Information System (http://www.ilwis.org/)
10. gvSIG (http://www.gvsig.org/)

References

Buchanan MD, Pendgrass R (1980) Digital image processing: can intensity hue and saturation replace red, green and blue? Electro Opt Syst Des 12(3):29–36
Edwards K, Davis PA (1994) The use of intensity-hue-saturation transformation for producing color shaded-relief images. Photogramm Eng Remote Sens 60:1369–1374
Gillespie AR, Kahle AB, Walker RE (1986) Color enhancement of highly correlated images: I. Decorrelation and HSI contrast stretches. Remote Sens Environ 20:209–235
Harris JR, David WV, Andrew NR (1999) Integration and visualization of geoscience data. In: Remote sensing for the earth sciences, manual of remote sensing, vol 3, 3rd edn. American Society for Photogrammetry and Remote Sensing, pp 307–354
Haydn R, Dalke GW, Henkel J, Bare JE (1982) Application of the IHS colour transform to the processing of multi sensor data and image enhancement. In: Proceedings of the international symposium on remote sensing of arid and semi-arid lands, Cairo, pp 599–616
Slater PN (1980) Remote sensing: optics and optical systems. Addison Wesley, Reading, p 575
Urai M, Fukui K, Yamaguchi Y, Pieri DC (1999) Volcano observation potential and global volcano monitoring plan with ASTER. Volcanol Soc Jpn 44(3):131–141 (in Japanese)
Brainstorming

Section A: Questions

(A short answer is given immediately after each question; for a more detailed answer including the logic, see Section B below.)

1. What colour does a minus-blue optical filter have? (Blue/Magenta/Yellow/Cyan)
Ans. Yellow
2. What colour does a haze-cutter optical filter have? (Blue/Yellow/Black/Colourless)
Ans. Colourless
3. A rose with blue petals and green leaves is imaged in black-and-white by a camera with a yellow filter. How would the rose appear? (a) Petals black, leaves light toned; (b) petals very light and leaves dark gray; (c) petals and leaves both equally medium gray; (d) petals medium gray, leaves dark gray
Ans. (a)
4. During stereo viewing, where does the geometric distortion occur? (a) In the terrain; (b) in the photographs; (c) in the perceptor's mind
Ans. (c)
5. For geological mapping of a mountainous terrain, two remote sensing image data sets are available: Set I, acquired from a space-borne sensor (sensor altitude 500 km, focal length 5 m); Set II, acquired from an aerial platform (sensor altitude 12 km, focal length 12 cm). Other factors remaining the same, which one of the two sets should be preferred? Why?
Ans. Set I
6. From an aeroplane, when one peeps through the window, one sees tiny ground objects with little relief. However, if photographs are acquired from the same altitude and stereo aerial photographs are viewed, accentuated relief is seen. Why so?
Ans. The B/H ratio plays the role
7. A geostationary satellite is one that does not move in space (True/False)
Ans. False
8. A satellite at an altitude of 900 km has to have a higher orbital velocity than one at 700 km altitude from the Earth's surface, for the simple reason that it has to cover a longer orbital path (True/False)
Ans. False
9. The Earth as a satellite of the Sun revolves around the Sun and also spins around its own axis; however, an Earth satellite has to revolve around the Earth and not spin around its own axis (True/False)
Ans. True
10. What is the difference between GRD and GSD?
Ans. GRD is used during interpretation whereas GSD is a sensor design parameter.
11. For an aerial opto-mechanical line scanner, other things remaining the same, an increase in flying height by, say, 2 km has to be accompanied by a corresponding change in aircraft speed as: (a) increase in velocity; (b) decrease in velocity; (c) no change in aircraft speed.
Ans. (a) increase in velocity
12. The image skew is typically associated with (a) whiskbroom line scanners only, (b) all line imaging devices, (c) all space-borne imaging devices
Ans. (b) all line imaging devices
13. Other sensor factors remaining the same, the effect of the Earth's curvature is relatively less manifest on (a) higher altitude sensor data, (b) lower altitude sensor data, (c) uniformly manifest on all image data
Ans. (b) lower altitude sensor data
14. Satellite sensor images often show striping due to non-equal detector response. If a CCD pushbroom sensor is used for capturing the image, how would the striping appear and why (assuming a near-polar Sun-synchronous orbit)? (Answer in about 20 words)
Ans. Vertical (N–S) striping on the image.
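Relating to Question 8 above: for a circular orbit the velocity follows v = √(GM/(Re + h)), so a higher orbit is slower, not faster. A minimal numerical check is sketched below; the standard approximate values of GM and the Earth radius are assumptions used only for illustration.

```python
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2 (approximate)
R_EARTH = 6371e3     # mean Earth radius, m (approximate)

def orbital_velocity(altitude_km):
    """Circular-orbit velocity for a satellite at the given altitude."""
    return math.sqrt(GM / (R_EARTH + altitude_km * 1e3))

print(orbital_velocity(700))   # ~7.5 km/s
print(orbital_velocity(900))   # ~7.4 km/s (slower, despite the longer orbital path)
```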


15. When both panchromatic and multispectral sensors are used from the same satellite platform, it is invariably observed that the panchromatic sensor has a better resolution than the multispectral sensor. Why?
Ans. Signal-to-noise ratio is a function of both IFOV and spectral range, such that the two can be traded off against each other.
16. What type of striping could possibly be observed in an image acquired by a digital camera? (a) Horizontal striping, (b) vertical striping, (c) both horizontal and vertical striping possible, (d) no striping possible
Ans. (d) no striping possible
17. Aerial sensors using X-rays could become the next important tool in remote sensing of the Earth (True/False)
Ans. False
18. The near-IR reflectance of plants is governed by which one of the following: (a) chlorophyll; (b) carotenoid; (c) anthocyanin; (d) cell structure
Ans. (d) cell structure
19. Surface mineral coatings, oxidation, hydration etc. have a greater influence on spectral response in the solar reflection region than in the thermal-IR region (True/False)
Ans. True
20. Generally speaking, sensors operating in the visible–NIR region have a higher spatial resolution than those in the thermal-IR; therefore, visible–NIR sensors possess a greater potential for lithologic discrimination/identification than thermal-IR sensors (True/False)
Ans. False
21. An image shows several features, all with rather low DN values, which need to be differentiated from each other. Which of the following image enhancement techniques should be used? (a) Exponential stretch, (b) logarithmic stretch, (c) gaussian stretch, (d) histogram equalization stretch
Ans. (b) logarithmic stretch
22. If we need to discern subtle variations in the brightness of very bright objects in an image, which of the following image enhancement techniques should be used? (a) Exponential stretch, (b) logarithmic stretch, (c) gaussian stretch, (d) histogram equalization stretch
Ans. (a) exponential stretch
23. In visible band images, deep clear water bodies ought to appear very dark (almost zero DN values) owing to strong absorption of radiation by the ground object (deep clear water); however, many times these features are not dark on the image. Why so? (Answer in about 20 words)
Ans. Owing to atmospheric path radiance
24. Edge enhancement leads to sharpening of pixels in the image and is typically a point operation/local operation.
Ans. Local operation
25. Ratioing for the study of spectral slopes is typically a point operation/local operation.
Ans. Point operation
26. If NDVI is computed from radiance data without atmospheric path radiance correction, then it leads to: (a) underestimation; (b) overestimation; (c) path radiance has no effect on NDVI
Ans. (a) underestimation
27. For estimating ferric iron content during hydrothermal exploration, if radiance data without atmospheric path radiance correction are used, then it leads to: (a) underestimation; (b) overestimation; (c) path radiance has no effect on the ferric iron index
Ans. (a) underestimation
28. For hydroxyl alteration mapping, if radiance data without atmospheric path radiance correction are used, then it leads to: (a) underestimation; (b) overestimation; (c) path radiance has no effect on hydroxyl alteration mapping
Ans. (c) path radiance has no effect on hydroxyl alteration mapping
29. In a basaltic terrain, there are two exposures: (A) a rough and broken surface; and (B) a smooth surface. If the ground temperatures at the two surfaces are the same, would the radiant temperature be different at the two places? If so, which will be higher, A or B? Why?
Ans. Higher at A
30. If a lava flow is imaged by a multispectral thermal scanner, say about one week after its eruption, how would the surface area of the lava flow appear on different thermal band images? (a) The surface area of the hot lava flow will be larger on the longer wavelength TIR image; (b) the surface area of the hot lava flow will be larger on the shorter wavelength TIR image; (c) the surface area will be equal on all TIR band images.
Ans. (a) The surface area of the hot lava flow will be larger on the longer wavelength band image
31. On a noon-time thermal IR remote sensing image, a hot aluminium roof top would appear cool (True/False)
Ans. True
32. An undulating granitic terrain is largely covered with snow of varying thickness (about 1–5 m thick). It is desired to make a spatial assessment of the snow thickness. Which one of the following methods will be best suited? (a) High resolution aerial multispectral imagery including SWIR bands; (b) high resolution aerial thermal IR sensing; (c) low altitude aerial gamma ray profiling.
Ans. (c) low altitude aerial gamma ray profiling
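Relating to Questions 21 and 22 above, the short sketch below contrasts a logarithmic and an exponential mapping of an 8-bit DN range; the scaling constants are arbitrary assumptions for illustration. It simply shows that the logarithmic curve expands the dark (low-DN) end of the histogram, while the exponential curve expands the bright (high-DN) end.

```python
import numpy as np

dn = np.arange(0, 256, dtype=float)

# Logarithmic stretch: expands low DN values (dark features become separable)
log_stretch = 255.0 * np.log1p(dn) / np.log1p(255.0)

# Exponential stretch: expands high DN values (bright features become separable)
exp_stretch = 255.0 * np.expm1(dn / 255.0 * 3.0) / np.expm1(3.0)

# Output spread over the darkest 10 DN versus the brightest 10 DN for each stretch
print(log_stretch[10] - log_stretch[0], log_stretch[255] - log_stretch[245])
print(exp_stretch[10] - exp_stretch[0], exp_stretch[255] - exp_stretch[245])
```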

33. On a colour ratio composite (CRC-FCC) using ASTER bands such that Red = Quartz Index ((B11·B11)/(B10·B12)), Green = Carbonate Index (B13/B14), and Blue = Mafic Index (B12/B13), the granitic rocks would appear in: (a) shades of red; (b) shades of blue; (c) shades of purple; (d) shades of gray. Give reasons for your answer
Ans. (d) shades of gray
34. On a colour ratio composite (CRC-FCC) using ASTER bands such that: Red = phyllic index, Green = argillic index, and Blue = propylitic index, the carbonate-bearing minerals, if present, would appear in: (a) shades of red; (b) shades of blue; (c) shades of yellow; (d) shades of gray. Give reasons for your answer
Ans. (b) shades of blue
35. On a TIR image colour composite generated from ASTER bands (B10, B11 and B13) it is observed that desert sand appears in shades of cyan. Which of the following colour coding schemes appears to have been followed? (a) Red = B10, Green = B11, Blue = B13; (b) Red = B11, Green = B13, Blue = B10; (c) Red = B13, Green = B10, Blue = B11.
Ans. (a) Red = B10, Green = B11, Blue = B13
36. Considering blackbody radiation, if a certain magnitude of spectral radiance at a wavelength corresponds to a certain temperature, then the same magnitude of spectral radiance at a longer wavelength must correspond to a lower temperature (True/False)
Ans. False
37. In an area, there was an overnight snowfall giving rise to a blanket of snow cover about 20–25 cm thick. Which one of the following techniques/data can be used to detect buried frozen glacial lakes? (a) SAR image data; (b) high resolution SWIR band imagery; (c) multispectral thermal IR data; (d) high resolution panchromatic imagery.
Ans. (a) SAR image data
38. If an agricultural field appears light gray on an X-band SAR image and dark gray on an L-band SAR image, how would it appear on a P-band SAR image? (a) Dark gray; (b) medium gray; (c) light gray
Ans. (a) dark gray
39. A certain surface being sensed by C-band radar appears rough at an incidence angle of 20°. If the incidence angle is increased to 40°, how would the surface appear? (a) Smooth, (b) intermediate, (c) rough
Ans. (c) rough
40. A certain surface being sensed by P-band radar appears smooth at an incidence angle of 40°. If the incidence angle is made 20°, how will the surface appear? (a) Smooth, (b) intermediate, (c) rough
Ans. (a) smooth
41. Which one of the following methods is best suited for detecting moving objects such as vehicles, boats etc.? (a) Along-track interferometry; (b) across-track interferometry; (c) repeat pass interferometry
Ans. (a) along-track interferometry
42. If two DEMs are generated, one using C-band and the other using S-band InSAR, which DEM would possess a higher vertical accuracy and why?
Ans. C-band
43. If several SAR data pairs for InSAR processing are available for generation of a DEM, which one of the following data pairs would have a higher sensitivity to height variation of ground objects, and why (other factors remaining the same)? (a) Longer normal baseline distance (less than the critical distance); (b) shorter normal baseline distance; (c) longer temporal difference; (d) longer parallel baseline distance
Ans. (a) longer normal baseline distance
44. Surface deformation is in progress at two sites: site A (deformation rate 0.2 cm/day) and site B (deformation rate 1.5 cm/day). Two sets of InSAR data pairs are available: one pertaining to X-band (3.1 cm wavelength) and another pertaining to P-band (50 cm wavelength), both data sets with 5-day intervals. Comment on which of the following data sets should be used for measuring the surface deformation at A and B, giving reasons for your answer: (a) X-band at both sites; (b) P-band at both sites; (c) X-band at A and P-band at B; (d) X-band at B and P-band at A.
Ans. (c) X-band at A and P-band at B
45. What are the two general data formats used in GIS? (a) Points and lines, (b) digital and paper maps, (c) features and attributes, (d) vector and raster
Ans. (d) vector and raster
46. What is the difference between metadata and attribute data?
Ans. Metadata deals with general information about the geospatial data, whereas attribute data describe the characteristics of specific spatial features in GIS.
47. An image is density sliced and displayed in gray tone. On which of the following scales are the data thus converted? (a) Nominal, (b) ordinal, (c) interval, (d) ratio.
Ans. (b) ordinal

Section B: Answers

1. If an optical filter absorbs blue radiation, the remaining radiation that is transmitted is yellow (white minus blue is yellow; yellow is the complementary colour of blue); therefore the minus-blue filter appears yellow.
2. A haze-cutter filter does not affect the visible range; therefore it is colourless.
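Relating to Questions 33–35 above, a colour ratio composite is built by assigning three band-ratio images to the red, green and blue display channels, exactly as for an ordinary FCC. A minimal sketch with hypothetical ASTER TIR band arrays is given below; the index definitions follow Question 33, while the random input arrays and the simple min–max scaling are assumptions made here for illustration only.

```python
import numpy as np

def scale01(a):
    """Min-max scale an index image to 0-1 for display (illustrative choice)."""
    return (a - a.min()) / (a.max() - a.min())

# Hypothetical ASTER TIR band images on the same grid (placeholders for real data)
b10, b11, b12, b13, b14 = (np.random.rand(50, 50) + 0.5 for _ in range(5))

quartz_index    = (b11 * b11) / (b10 * b12)   # red channel (as defined in Q.33)
carbonate_index = b13 / b14                   # green channel
mafic_index     = b12 / b13                   # blue channel

crc = np.dstack([scale01(quartz_index),
                 scale01(carbonate_index),
                 scale01(mafic_index)])       # colour ratio composite, ready for display
```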

3. A yellow filter would block the blue radiation but would transmit the green radiation; therefore the petals would be black and the leaves light toned.
4. The terrain appears geometrically distorted in the 3-D mental model; therefore the distortion occurs in the perceptor's mind.
5. Data acquired from a higher altitude platform have less geometric distortion associated with relief and look angles; therefore Set I would be preferred.
6. When one peeps through the window, the two eyes become the two perspective centres, the distance between the two perspective centres (eyebase) being quite small. On the other hand, when aerial photography is done from the same altitude and the photographs are viewed through the stereoscope, the eyebase gets transposed to the airbase. This is responsible for the increased B/H ratio, which results in higher vertical exaggeration.
7. A geostationary satellite revolves around the Earth but is stationary relative to the Earth, viewing the same part of the globe continuously.
8. A satellite at an altitude of 900 km has a lower orbital velocity than one at 700 km altitude from the Earth's surface, as governed by Kepler's laws.
9. It is not imperative for a satellite to spin on its axis while it revolves around the Earth.
10. The term Ground Resolution Distance (GRD) is used during interpretation of remote sensing data; it is the distance that can actually be resolved on remote sensing products during interpretation. The term Ground Sampled Distance (GSD) refers to the ground distance sampled by the scanner; it equals the IFOV (linear distance) of the scanner and is a sensor design parameter.
11. In the case of an aerial optomechanical line scanner having a constant mirror speed, the V/H ratio needs to be kept constant; therefore, an increase in flying height needs to be accompanied by a commensurate increase in aircraft velocity.
12. In line imaging devices, scanning is carried out line by line. As the sensor completes one scan operation and positions itself for the next scan operation, the Earth, in relative terms, rotates around its axis from west to east, causing image skew.
13. With the same sensor FOV, the swath width of the image acquired from a higher altitude is wider, which results in a relatively greater effect of the Earth's curvature.
14. The striping will be vertical (i.e. in the N–S direction) because the linear CCD array is aligned perpendicular to the orbital track and each sensor cell generates a scan line in the along-track (nearly N–S) direction.
15. Radiation intensity reaching the sensor is directly related to both IFOV (ground resolution) and spectral range (spectral resolution). An adequate sensor signal-to-noise ratio can be maintained by reducing the IFOV for a larger spectral range (panchromatic sensor), or reducing the spectral range for a larger IFOV (multispectral sensor).
16. In a digital camera, each cell generates a pixel, but no line; therefore, no striping is possible in an image acquired by a digital camera.
17. X-rays attenuate very quickly with distance in the atmosphere and cannot be used for remote sensing.
18. Cell structure in plants governs the near-IR reflectance.
19. The visible–NIR carries information from about the top 50 µm of the surface, whereas the thermal-IR region carries information from an about 10–20 cm thick top surface zone; therefore, the visible–NIR is more influenced by surface coatings, oxidation, hydration etc.
20. The characteristic spectral features that mark various mineral groups occur in the thermal-IR, and not in the visible–NIR region; therefore sensors in the thermal-IR, though they have a coarser resolution, have more potential for lithologic discrimination/identification than those in the visible–NIR region.
21. A logarithmic stretch is able to more effectively rescale and stretch DN values lying in the lower range of the original image.
22. DN values in the upper range of the original image are more effectively rescaled and stretched by an exponential stretch.
23. Although there is little ground reflectance, some radiation arising from atmospheric scattering (path radiance) enters the sensor FOV; this is responsible for the DN values of such dark ground features not being close to zero in the image.
24. In edge enhancement, the DN values of individual pixels are changed depending upon the neighbouring pixel values; therefore it is a local operation.
25. Ratioing deals with the computation of ratios of DN values in different spectral bands, pixel by pixel, irrespective of the adjacent pixel values; therefore it is a point operation.
26. NDVI is computed as ((NIR − R)/(NIR + R)). The red wavelength is additively affected by path radiance whereas the NIR is not. In the presence of path radiance, due to over-subtraction, the numerator has an unduly low value; therefore the NDVI gets underestimated.
27. The ferric iron index is computed as (green band/blue band), with a higher value of the ratio corresponding to a higher ferric iron content. The blue band is more additively affected by atmospheric path radiance than the green band; therefore, the ratio (green/blue) gets unduly reduced due to path radiance; this results in underestimation of the ferric iron content.
28. Hydroxyl alteration is computed from SWIR bands, which are not affected by atmospheric path radiance.
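The effect explained in answers 26 and 27 can be verified with a few lines of arithmetic. The sketch below uses hypothetical band values for a vegetated pixel and an arbitrary additive path-radiance term in the red band only (both are assumptions for illustration); the NDVI computed from the uncorrected values comes out lower.

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir, red = 0.40, 0.10        # hypothetical at-surface values for a vegetated pixel
path_radiance_red = 0.04     # assumed additive haze contribution, mainly at the shorter (red) wavelength

print(ndvi(nir, red))                          # ~0.60, path-radiance corrected
print(ndvi(nir, red + path_radiance_red))      # ~0.48, uncorrected -> NDVI underestimated
```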

29. Radiant temperatures depend upon the ground (kinetic) temperature and the emissivity of the surface; the emissivity of the rough and broken surface is higher than that of a smooth surface; therefore the radiant temperature at A will be higher.
30. As the lava flow gradually cools down, the outer peripheral areas would have the lowest temperatures, and the central areas the highest. The relatively lower temperature lava on the periphery would be detected on the longer wavelength thermal data, whereas it might be missed on the shorter wavelength thermal band images. Therefore, the surface area of the lava flow will be greater on the longer wavelength thermal band image.
31. On a noon-time thermal IR image, a hot aluminium roof top would appear cool because of the low emissivity of aluminium metal.
32. Attenuation of the gamma rays emanating from the granitic bedrock can be used for estimating the spatially variable snow cover thickness.
33. Granites contain quartz and feldspar, and no carbonate or mafic minerals; however, quartz and feldspar have mutually opposite responses in the bands used for the quartz index QI (ASTER B10, B11, B12); therefore, felsic rocks (granites) appear in shades of gray to dark gray.
34. Carbonate minerals closely interfere with propylitic minerals; therefore, they would appear in shades of blue.
35. Desert sand has a low emissivity in B10, as compared to B11 and B13; therefore, on a colour composite (Red = B10, Green = B11, Blue = B13), it will appear in shades of cyan.
36. The answer to this statement is hidden in the shape of Planck's function. Figure BS.1 shows a plot of Planck's function. The horizontal line A–A′ corresponds to the magnitude of spectral radiance at wavelength λ3 emitted by the blackbody at a temperature of 600 °C. It is obvious that the same magnitude of spectral radiance corresponds to different temperatures at wavelengths λ1, λ2, λ3 and λ4. By comparing T2 and T3, one can see that even a higher spectral radiance may correspond to a lower temperature. Further, by comparing T3, T1 and T4, one finds that even a lower magnitude of spectral radiance at a shorter or longer wavelength may correspond to a higher temperature. Therefore, the basic relationship between temperature and radiance is wavelength dependent, and is governed by Planck's function.

Fig. BS.1 A plot of Planck's function. The horizontal line A–A′ corresponds to the magnitude of spectral radiance at wavelength λ3 emitted by the blackbody at a temperature of 600 °C. It is obvious that the same magnitude of spectral radiance corresponds to different temperatures at wavelengths λ1, λ2, λ3 and λ4

37. SAR has the capability to penetrate the snow cover and would be able to detect the frozen glacial lakes.
38. Dark gray, because the P-band has a wavelength longer than the L-band; when the agricultural field appears smooth at L-band, it will appear even smoother at P-band.
39. An increase in the incidence angle will lead only to an enhanced rough appearance of the surface.
40. A decrease in the incidence angle will lead only to an enhanced smooth appearance of the surface.
41. Along-track interferometry is best suited for detecting moving objects such as vehicles, boats etc.
42. The C-band SAR wavelength (5.7 cm) is smaller than the S-band SAR wavelength (15 cm); therefore, C-band will generate a DEM with a higher vertical accuracy than S-band.
43. The InSAR technique utilizes the relative phase differences between the radar responses from the ground scatterers at the two SAR antenna positions. This relative phase difference would be greater for a longer normal baseline distance (within limits, such that it is less than the critical distance that may cause loss of coherence). Therefore, the InSAR data pair with a longer normal baseline distance would have a higher sensitivity to elevation variation on the ground.
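The wavelength dependence described in answer 36 (and Fig. BS.1) is easy to verify numerically. In the sketch below, the spectral radiance emitted at 10 µm by a blackbody at 600 °C (873 K) is matched at two other wavelengths; the temperature required at the shorter wavelength is lower and at the longer wavelength higher. The chosen wavelengths are arbitrary assumptions for illustration; the closed-form inversion of Planck's function is standard.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant (SI)

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
    lam = wavelength_um * 1e-6
    return 2 * H * C**2 / (lam**5 * (np.exp(H * C / (lam * K * temp_k)) - 1.0))

def inverse_planck(wavelength_um, radiance):
    """Temperature (K) at which a blackbody emits the given spectral radiance at this wavelength."""
    lam = wavelength_um * 1e-6
    return (H * C / (lam * K)) / np.log(1.0 + 2 * H * C**2 / (lam**5 * radiance))

# Radiance emitted at 10 um by a blackbody at 600 degC (873 K) ...
L = planck_radiance(10.0, 873.0)
# ... corresponds to quite different temperatures at other wavelengths:
print(inverse_planck(10.0, L))   # ~873 K (consistency check)
print(inverse_planck(6.0, L))    # ~600 K: a lower temperature at the shorter wavelength
print(inverse_planck(14.0, L))   # ~1780 K: a higher temperature at the longer wavelength
```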

44. The total deformation during the 5-day interval at sites A and B would be 1 cm and 7.5 cm, respectively. Considering the case of X-band (3.1 cm wavelength), it can easily measure the phase difference caused by the deformation at A (1 cm), but not at B (deformation 7.5 cm). In the case of P-band (50 cm wavelength), the phase difference caused by the deformation at A (1 cm) is too small for this wavelength, but it is well suited to measure the deformation at B (7.5 cm). Therefore, the data pair of X-band should be used at A and that of P-band at B.
45. Vector and raster are the two basic data formats used in GIS.
46. Metadata provides general information about the geospatial data (such as remote sensing data), whereas attribute data describe the characteristics of specific spatial entities (features) in GIS.
47. By density slicing the image, the image tones get placed into a hierarchy of states in which the intervening lengths are not equal, i.e. on an ordinal scale.
