
Using Image Analysis for ArcGIS

Geographic Imaging by Leica Geosystems GIS & Mapping


Julie Booth-Lamirand
Using the Image Analysis Extension for ArcGIS

Copyright © 2003 Leica Geosystems GIS & Mapping, LLC


All rights reserved.
Printed in the United States of America.

The information contained in this document is the exclusive property of Leica Geosystems GIS & Mapping, LLC. This work is protected under United States copyright law
and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by Leica Geosystems GIS & Mapping, LLC. All
requests should be sent to Attention: Manager of Technical Documentation, Leica Geosystems GIS & Mapping, LLC, 2801 Buford Highway NE, Suite 400, Atlanta, GA,
30329-2137, USA.

The information contained in this document is subject to change without notice.

CONTRIBUTORS

Contributors to this book and the On-line Help for Image Analysis for ArcGIS include: Christine Beaudoin, Jay Pongonis, Kris Curry, Lori Zastrow, Mladen Stojić, and
Cheryl Brantley of Leica Geosystems GIS & Mapping, LLC.

U. S. GOVERNMENT RESTRICTED/LIMITED RIGHTS

Any software, documentation, and/or data delivered hereunder is subject to the terms of the License Agreement. In no event shall the U.S. Government acquire greater than
RESTRICTED/LIMITED RIGHTS. At minimum, use, duplication, or disclosure by the U.S. Government is subject to restrictions set forth in FAR §52.227-14 Alternates I,
II, and III (JUN 1987); FAR §52.227-19 (JUN 1987), and/or FAR §12.211/12.212 (Commercial Technical Data/Computer Software); and DFARS §252.227-7015 (NOV
1995) (Technical Data) and/or DFARS §227.7202 (Computer Software), as applicable. Contractor/Manufacturer is Leica Geosystems GIS & Mapping, LLC, 2801 Buford
Highway NE, Suite 400, Atlanta, GA, 30329-2137, USA.

ERDAS, ERDAS IMAGINE, and IMAGINE OrthoBASE are registered trademarks. Image Analysis for ArcGIS is a trademark.

ERDAS® is a wholly owned subsidiary of Leica Geosystems GIS & Mapping, LLC.

Other companies and products mentioned herein are trademarks or registered trademarks of their respective trademark owners.
Contents



Foreword vii

Getting started
1 Introducing Image Analysis for ArcGIS 3
Learning about Image Analysis for ArcGIS 10

2 Quick-start tutorial 11
Exercise 1: Starting Image Analysis for ArcGIS 12
Exercise 2: Adding images and applying Histogram Stretch 14
Exercise 3: Identifying similar areas in an image 18
Exercise 4: Finding areas of change 22
Exercise 5: Mosaicking images 30
Exercise 6: Orthorectification of camera imagery 33
What’s Next? 38

3 Applying data tools 39


Using Seed Tool Properties 40
Image Info 45
Options 47

Working with features


4 Using Data Preparation 55
Create New Image 56
Subset Image 58
Mosaic Images 63
Reproject Image 66
5 Performing Spatial Enhancement 69
Convolution 70
Non-Directional Edge 75
Focal Analysis 77
Resolution Merge 79

6 Using Radiometric Enhancement 83


LUT Stretch 84
Histogram Equalization 87
Histogram Matching 91
Brightness Inversion 93

7 Applying Spectral Enhancement 95


RGB to IHS 96
IHS to RGB 99
Vegetative Indices 101
Color IR to Natural Color 104

8 Performing GIS Analysis 107


Information versus data 108
Neighborhood Analysis 109
Thematic Change 111
Recode 114
Summarize Areas 120

9 Using Utilities 123


Image Difference 124
Layer Stack 126

10 Understanding Classification 129


The Classification Process 130



Classification tips 132
Unsupervised Classification/Categorize Image 134
Supervised Classification 138
Classification decision rules 140

11 Using Conversion 143


Conversion 144
Converting raster to features 145
Converting features to raster 147

12 Applying Geocorrection Tools 149


When to rectify 150
Geocorrection property dialogs 153
SPOT 158
The Spot Properties dialog 160
Polynomial transformation 161
The Polynomial Properties dialog 168
Rubber Sheeting 169
Camera Properties 171
IKONOS, QuickBird, and RPC Properties 173
Landsat 177

Glossary 183
References 201
Index 205

Foreword

An image of the earth’s surface is a wealth of information. Images capture a


permanent record of buildings, roads, rivers, trees, schools, mountains, and other
features located on the earth’s surface. But images go beyond simply recording
features. Images also record relationships and processes as they occur in the real
world. Images are snapshots of geography, but they are also snapshots of reality.
Images chronicle our earth and everything associated with it; they record a specific
place at a specific point in time. They are snapshots of our changing cities, rivers,
and mountains. Images are snapshots of life on earth.

The data in a GIS needs to reflect reality, and snapshots of reality need to be
incorporated and accurately transformed into instantaneously ready, easy-to-use
information. From snapshots to digital reality, images are pivotal in creating and
maintaining the information infrastructure used by today’s society. Today’s
geographic information systems have been carefully created with features,
attributed behavior, analyzed relationships, and modeled processes.

There are five essential questions that any GIS needs to answer: Where, What,
When, Why, and How. Uncovering Why, When, and How are all done within the
GIS; images allow you to extract the Where and What. Precisely where is that
building? What is that parcel of land used for? What type of tree is that? The new
extensions developed by Leica Geosystems GIS and Mapping, LLC use imagery
to allow you to accurately address the questions Where and What, so you can then
derive answers for the other three.

But our earth is changing! Urban growth, suburban sprawl, industrial usage and
natural phenomena continually alter our geography. As our geography changes, so

does the information we need to understand it. Because an
image is a permanent record of features, behavior,
relationships, and processes captured at a specific moment in
time, using a series of images of the same area taken over
time allows you to more accurately model and analyze the
relationships and processes that are important to our earth.

The new extensions by Leica Geosystems are technological


breakthroughs which allow you to transform a snapshot of
geography into information that digitally represents reality in
the context of a GIS. Image Analysis™ for ArcGIS and
Stereo Analyst® for ArcGIS are tools built on top of a GIS to
maintain that GIS with up-to-date information. The
extensions provided by Leica Geosystems reliably transform
imagery directly into your GIS for analyzing, mapping,
visualizing, and understanding our world.

On behalf of the Image Analysis for ArcGIS and Stereo


Analyst for ArcGIS product teams, I wish you all the best in
working with these new products and hope you are
successful in your GIS and mapping endeavors.

Sincerely,

Mladen Stojić
Product Manager
Leica Geosystems GIS & Mapping, LLC



Getting started

Section 1
1 Introducing Image Analysis for ArcGIS
IN THIS CHAPTER

• Updating a database
• Categorizing land cover and characterizing sites
• Identifying and summarizing natural hazard damage
• Identifying and monitoring urban growth and changes
• Extracting features automatically
• Assessing vegetation stress

Image Analysis for ArcGIS™ is primarily designed for natural resource and infrastructure management. The extension is very useful in the fields of forestry, agriculture, environmental assessment, engineering, and infrastructure projects such as facility siting and corridor monitoring, and general geographic database update and maintenance.

Today, imagery of the earth’s surface is an integral part of desktop mapping and GIS, and it’s more important than ever to have the ability to provide realistic backdrops to geographic databases and to be able to quickly update details involving street use or land use data.

Image Analysis for ArcGIS gives you the ability to perform many tasks:

• Import and incorporate raster imagery into ArcGIS.
• Categorize images into classes corresponding to land cover types such as vegetation.
• Evaluate images captured at different times to identify areas of change.
• Identify and automatically map a land cover type with a single click.
• Find areas of dense and thriving vegetation in an image.
• Enhance the appearance of an image by adjusting contrast and brightness or by applying histogram stretches.
• Align an image to a map coordinate system for precise area location.
• Rectify satellite images through Geocorrection Models.
Updating databases

There are many kinds of imagery to choose from in a wide range of scales, spatial, and spectral resolutions, and map accuracies. Aerial
photography is often the choice for map updating because of its high precision. With Image Analysis for ArcGIS you are able to use imagery
to identify changes and make revisions and corrections to your geographic database.

Airphoto with shapefile of streets



Categorizing land cover and characterizing sites

Transmission towers for radio-based telecommunications must all be visible from each other, must be within a certain range of elevations,
and must avoid fragile areas like wetlands. With Image Analysis for ArcGIS, you can categorize images into land cover classes to help
identify suitable locations. You can use imagery and analysis techniques to identify wetlands and other environmentally sensitive areas.

The Classification features enable you to divide an image into many different classes, and then highlight them as you wish. In this case the
areas not suitable for tower placement are highlighted, and the placement for the towers can be sited appropriately.

Classified image for radio towers



Identifying and summarizing natural hazard damage

When viewing a forest hit by a hurricane, you can use the mapping tools of Image Analysis for ArcGIS to show where the damage occurred.
With other ArcGIS tools, you can show the condition of the vegetation, how much stress it suffers, and how much damage it sustained in
the hurricane.

Below, Landsat images taken before and after the hurricane, in conjunction with a shapefile that identifies the forest boundary, are used for
comparison. Within the shapefile, you can see detailed tree stand inventory and management information.

The upper two pictures show the area in 1987 and in 1989 after Hurricane Hugo. The lower image features the shapefile.



Identifying and monitoring urban growth and changes

Cities grow over time, and images give a good sense of how they grow, and how remaining land can be preserved by managing that growth.
You can use Image Analysis for ArcGIS to reveal patterns of urban growth over time.

Here, Landsat data spanning 21 years was analyzed for urban growth. The final view shows the differences in extent of urban land use and
land cover between 1973 and 1994. Those differences are represented as classes. The yellow urban areas from 1994 represent how much
the city has grown beyond the red urban areas from 1973.

The top two images represent urban areas in red, first in 1973 and then in 1994. The bottom image shows the actual growth.



Extracting features automatically

Suppose you are responsible for mapping the extent of an oil spill as part of a rapid response effort. You can use synthetic aperture radar
(SAR) data and Image Analysis for ArcGIS tools to identify and map the extent of such environmental hazards.

The following image shows an oil spill off the northern coast of Spain. The first image shows the spill, and the second image gives you an
example of how you can isolate the exact extent of a particular pattern using Image Analysis for ArcGIS.

Images depicting an oil spill off the coast of Spain and a polygon grown in the spill using Seed Tool.



Assessing vegetation stress

Crops experience different stresses throughout the growing season. You can use multispectral imagery and analysis tools to identify and
monitor a crop’s health.

In these images, the Vegetative Indices function is used to see crop stress. The stressed areas are then automatically digitized and saved as
a shapefile. This kind of information can be used to help identify sources of variability in growth patterns. Then, you can quickly update
crop management plans.

Crop stress shown through Vegetative Indices
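The Vegetative Indices function mentioned above rests on simple band arithmetic; the most widely used index, NDVI, contrasts near-infrared and red reflectance. The NumPy sketch below only illustrates that calculation, not the extension's implementation, and the band values and stress threshold are invented for the example.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy, dense vegetation pushes NDVI toward +1; bare soil and
    stressed crops fall toward 0 or below.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy 2x2 example: two vigorous pixels, two stressed ones.
nir_band = np.array([[200.0, 180.0], [90.0, 80.0]])
red_band = np.array([[40.0, 50.0], [80.0, 85.0]])
index = ndvi(nir_band, red_band)
stressed = index < 0.2          # assumed threshold for "stressed" areas
print(index.round(2))
print(stressed)
```

Pixels flagged this way could then be converted to polygons, much like the automatically digitized shapefile described above.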



Learning about Image Analysis for ArcGIS
If you are just learning about geographic information systems (GISs), you may want to read the books about ArcCatalog and ArcMap: Using ArcCatalog and Using ArcMap. Knowing about these applications will make your use of Image Analysis for ArcGIS much easier.

If you’re ready to learn about how Image Analysis for ArcGIS works, see the Quick-start tutorial. In the Quick-start tutorial, you’ll learn how to adjust the appearance of an image, how to identify similar areas of an image, how to align an image to a feature theme, as well as finding areas of change and mosaicking images.

Finding answers to questions

This book describes the typical workflow involved in creating and updating GIS data for mapping projects. The chapters are set up so that you first learn the theory behind certain applications, then you are introduced to the typical workflow you’d apply to get the results you want. A glossary is provided to help you understand any terms you haven’t seen before.

Getting help on your computer

You can get a lot of information about the features of Image Analysis for ArcGIS by accessing the online help. To browse the online help contents for Image Analysis for ArcGIS, click Help near the bottom of the Image Analysis menu. From this point you can use the Table of contents, index, or search feature to locate the information you need. If you need online help for ArcGIS, click Help on the ArcMap toolbar and choose ArcGIS Desktop Help.

Contacting Leica Geosystems GIS & Mapping

If you need to contact Leica Geosystems for technical support, see the product registration and support card you received with Image Analysis for ArcGIS. You can also contact Customer Support at 404/248-9777. Visit Leica Geosystems on the Web at www.gis.leica-geosystems.com.

Contacting ESRI

If you need to contact ESRI for technical support, refer to “Getting technical support” in the Help system’s “Getting more help” section. The telephone number for Technical Support is 909-793-3744. You can also visit ESRI on the Web at www.esri.com.

Leica Geosystems GIS & Mapping Education Solutions

Leica Geosystems GIS & Mapping Division offers instructor-based training about Image Analysis for ArcGIS. For more information, go to the training Web site located at www.gis.leica-geosystems.com. You can follow the training link to Training Centers, Course Schedules, and Course Registration.

ESRI education solutions

ESRI provides educational opportunities related to GISs, GIS applications, and technology. You can choose among instructor-led courses, Web-based courses, and self-study workbooks to find educational solutions that fit your learning style and pocketbook. For more information, visit the Web site www.esri.com/education.



2 Quick-start tutorial
IN THIS CHAPTER

• Starting Image Analysis for ArcGIS
• Adjusting the appearance of an image
• Identifying similar areas in an image
• Finding areas of change
• Mosaicking images
• Orthorectifying an image

Now that you know a little bit about the Image Analysis for ArcGIS extension and its potential applications, the following exercises give you hands-on experience in using many of the extension’s tools. By working through the exercises, you are going to use the most important components of the Image Analysis for ArcGIS extension and learn about the types of problems it can solve.

In Image Analysis for ArcGIS, you can quickly identify areas with similar characteristics. This is useful for identification in cases such as environmental disasters, burn areas, or oil spills. Once an area has been defined, it can also be quickly saved into a shapefile. This avoids the need for manual digitizing. This tutorial will show you how to use some Image Analysis for ArcGIS tools and give you a good introduction to using Image Analysis for ArcGIS for your own GIS needs.
Exercise 1: Starting Image Analysis for ArcGIS
In the following exercises, we’ve assumed that you are using a single monitor or dual monitor workstation that is configured for use with ArcMap and Image Analysis for ArcGIS. That being the case, you will be led through a series of tutorials in this chapter to help acquaint you with Image Analysis for ArcGIS and further show you some of the abilities of Image Analysis for ArcGIS.

In this exercise, you’ll learn how to start Image Analysis for ArcGIS and activate the toolbar associated with it. You will be able to gain access to all the important Image Analysis for ArcGIS features through its toolbar and menu list. After completing this exercise, you’ll be able to locate any Image Analysis for ArcGIS tool you need for preparation, enhancement, analysis, or geocorrection.

This exercise assumes you have already successfully completed installation of Image Analysis for ArcGIS on your computer. If you have not installed Image Analysis for ArcGIS, refer to the installation guide packaged with the Image Analysis for ArcGIS CD, and install now.

Starting Image Analysis for ArcGIS

1. Click the Start button on your desktop, then click Programs, and point to ArcGIS.
2. Click ArcMap to start the application.

Adding the Image Analysis for ArcGIS extension

1. If the ArcMap dialog opens, keep the option to create a new empty map, then click OK.
2. In the ArcMap window, click the Tools menu, then click Extensions.
3. In the Extensions dialog, click the check box for Image Analysis Extension to add the extension to ArcMap.

Once the Image Analysis Extension check box has been selected, the extension is activated.

4. Click Close in the Extensions dialog.

Adding toolbars

1. Click the View menu, then point to Toolbars, and click Image Analysis to add that toolbar to the ArcMap window.

The Image Analysis toolbar is your gateway to many of the tools and features you can use with the extension. From the Image Analysis toolbar you can choose many different analysis types from the menu, choose a geocorrection type, and set links in an image.

Exercise 2: Adding images and applying Histogram Stretch
Image data, displayed without any contrast manipulation, may appear either too light or too dark, making it difficult to begin your analysis. Image Analysis for ArcGIS allows you to display the same data in many different ways. For example, changing the distribution of pixels allows you to alter the brightness and contrast of the image. This is called histogram stretching. Histogram stretching enables you to manipulate the display of data to make your image easier to visually interpret and evaluate.

Add an Image Analysis for ArcGIS theme of Moscow

1. Open a new view. If you are starting this exercise immediately after Exercise 1, you should have a new, empty view ready.
2. Click the Add Data button.
3. In the Add Data dialog, select moscow_spot.tif. The path to the example data directory is ArcGIS\ArcTutor\ImageAnalysis.
4. Click Add to display the image in the view.

The image Moscow_spot.tif appears in the view.

Apply a Histogram Equalization

Standard deviations is the default histogram stretch applied to images by Image Analysis for ArcGIS. You can apply histogram equalization to redistribute the data so that each display value has roughly the same number of data points. More information about histogram equalization can be found in chapter 6 “Using Radiometric Enhancement”.

1. Select moscow_spot.tif in the Table of contents, right-click your mouse, and select Properties to bring up Layer Properties.
2. Click the Symbology tab and under Show, select RGB Composite.
3. Check the Bands order and click the dropdown arrows to change any of the Bands.


You can also change the order of the bands in your current image by clicking on the color bar beside each band in the Table of contents. If you want bands to appear in a certain order for each image that you draw in the view, go to Tools\Options\Raster in ArcMap, and change the Default RGB Band Combinations.

4. Click the dropdown arrow and select Histogram Equalize as the Stretch Type.
5. Click Apply and OK.
6. Click the Image Analysis menu dropdown arrow, point to Radiometric Enhancement, and click Histogram Equalization.
7. In the Histogram Equalization dialog, make sure moscow_spot.tif is in the Input Image box.
8. The Number of Bins will default to 256. For this exercise, leave the number at 256, but in the future, you can change it to suit your needs.
9. Navigate to the directory where you want your output images stored, type a name for your image, and click Save. The path will appear in Output Image.

You can go to the Options dialog, accessible from the Image Analysis toolbar, and enter the working directory you want to use on the General tab of the dialog. This step will save you time by automatically bringing up your working directory whenever you click the browse button to navigate to it in order to store an output image.
10. Click OK.

The equalized image will appear in your Table of contents and in your view.

This is the histogram equalized image of Moscow.

Apply an Invert Stretch to the image of Moscow

In this example, you apply the Invert Stretch to the image to redisplay it with its brightness values reversed. Areas that originally appeared bright are now dark, and dark areas are bright.

1. Select the equalized file in the Table of contents, and right-click your mouse. Click Properties and go to the Symbology tab.
2. If you want to see the histograms for the image, click the Histograms button located in the Stretch box.
3. Check the Invert box.
4. Click Apply and OK.


This is an inverted image of Moscow_spot.tif.
You can apply different types of stretches to your image to
emphasize different parts of the data. Depending on the
original distribution of the data in the image, one stretch may
make the image appear better than another. Image Analysis
for ArcGIS allows you to rapidly make those comparisons.
The Layer Properties Symbology tab can be a learning tool to
see the effect of stretches on the input and output histograms.
You’ll learn more about these stretches in chapter 6 “Using
Radiometric Enhancement”.
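As a rough picture of what the Histogram Equalize and Invert stretches do to display values, the sketch below builds an equalizing lookup from the image histogram and then reverses brightness. It is a generic NumPy illustration under assumed data; the extension's actual lookup-table construction and data handling are not shown here.

```python
import numpy as np

def equalize(band: np.ndarray, bins: int = 256) -> np.ndarray:
    """Histogram equalization: remap values so each display level
    holds roughly the same number of pixels."""
    hist, edges = np.histogram(band.ravel(), bins=bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # scale the cumulative counts to 0..1
    return np.interp(band.ravel(), edges[:-1], cdf * (bins - 1)).reshape(band.shape)

def invert(band: np.ndarray) -> np.ndarray:
    """Brightness inversion: bright areas become dark and vice versa."""
    return band.max() + band.min() - band

rng = np.random.default_rng(0)
img = rng.normal(100, 20, size=(50, 50)).clip(0, 255)   # stand-in for one image band
eq = equalize(img)    # flatter histogram, better contrast
inv = invert(eq)      # reversed brightness values
```

Different stretches simply build different lookup tables from the same histogram, which is why one stretch can make an image read better than another depending on how its data is distributed.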

Exercise 3: Identifying similar areas in an image
With Image Analysis for ArcGIS you can quickly identify
areas with similar characteristics. This is useful for
identification of environmental disasters or burn areas. Once
an area has been defined, it can also be quickly saved into a
shapefile. This action lets you avoid the need for manual
digitizing. To define the area, you use the Seed Tool to point
to an area of interest such as a dark area on an image
depicting an oil spill. The Seed Tool returns a graphic
polygon outlining areas with similar characteristics.
Add and draw an Image Analysis for ArcGIS theme depicting an oil spill

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. In the Add Data dialog, select radar_oilspill.img, and click Add to draw it in the view.

This is a radar image showing an oil spill off the northern coast of Spain.

Create a shapefile

In this exercise, you use the Seed Tool (also called the Region Growing Tool). The Seed Tool grows a polygon graphic in the image that encompasses all similar and contiguous areas. In order to use the Seed Tool, you will first need to create a shapefile in ArcCatalog and start editing in order to enable the Seed Tool. After going through these steps, you can point and click inside the area you want to highlight, in this case an oil spill, and create a polygon. The polygon enables you to see how much of an area the oil spill covers.

1. Click the Zoom In tool, and drag a rectangle around the black area to see the spill more clearly.


2. Click the ArcCatalog button. You can store the shapefile you’re going to create in the example data directory or navigate to a different directory if you wish.
3. Select the directory in the Table of contents and right click or click File, point to New, and click Shapefile.
4. In the Create New Shapefile dialog, name the new shapefile oilspill, and click the Feature Type dropdown arrow and select Polygon.
5. Check Show Details.
6. Click Edit.
7. In the Spatial Reference Properties dialog, click Import, and select radar_oilspill.img and click Add from the Browse for Dataset dialog that will pop up containing the example data directory.
8. Click Apply and OK.
9. Click OK in the Create New Shapefile dialog.
10. Select the oilspill shapefile, and drag and drop it in the ArcMap window. Oilspill will appear in the Table of contents.
11. Close ArcCatalog.
Draw the polygon with the Seed Tool

1. Click the Image Analysis dropdown arrow, and click Seed Tool Properties.
2. Type a Seed Radius of 10 pixels in the Seed Radius text box.
3. Uncheck the Include Island Polygons box.

The Seed Radius is the number of pixels surrounding the target pixel. The range of values of those surrounding pixels is considered when the Seed Tool grows the polygon.

4. Click OK.
5. Click the Editor toolbar button on the ArcMap toolbar to display the Editor toolbar.
6. Click Editor on the Editor toolbar in ArcMap, and select Start Editing.


7. Click the Seed Tool and click a point in the center of the oil spill. The Seed Tool will take a few moments to produce the polygon.

This is a polygon of an oil spill grown by the Seed Tool.

If you don’t automatically see the formed polygon in the image displayed in the view, click the refresh button at the bottom of the view screen in ArcMap.

You can see how the tool identifies the extent of the spill. An emergency team could be informed of the extent of this disaster in order to effectively plan a clean up of the oil.
Exercise 4: Finding areas of change
The Image Analysis for ArcGIS extension allows you to see changes over time. You can perform this type of analysis on either continuous data using Image Difference or thematic data using Thematic Change. In this exercise, you’ll learn how to use Image Difference and Thematic Change. Image Difference is useful for analyzing images of the same area to identify land cover features that may have changed over time. Image Difference performs a subtraction of one theme from another. This change is highlighted in green and red masks depicting increasing and decreasing values.
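In essence, that subtraction can be pictured as per-pixel arithmetic with percent-change thresholds, as in the hedged sketch below. The 15 percent figure matches the exercise that follows; everything else, including how the extension actually scales and stores the masks, is assumed for illustration.

```python
import numpy as np

def image_difference(before: np.ndarray, after: np.ndarray, pct: float = 15.0):
    """Per-pixel subtraction plus percent-change masks.

    Returns the raw difference and two boolean masks: cells that
    increased or decreased by more than `pct` percent of the
    before-image value.
    """
    before = before.astype(np.float64)
    after = after.astype(np.float64)
    diff = after - before
    threshold = np.abs(before) * (pct / 100.0)
    increased = diff > threshold      # e.g. cleared, brighter land in the later scene
    decreased = diff < -threshold     # e.g. new vegetation or standing water
    return diff, increased, decreased

before_img = np.array([[100, 100], [200, 50]], dtype=float)
after_img = np.array([[130, 105], [150, 50]], dtype=float)
diff, inc, dec = image_difference(before_img, after_img)
print(inc)   # [[ True False] [False False]]
print(dec)   # [[False False] [ True False]]
```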
Find changed areas

In the following example, you are going to work with two continuous data images of the north metropolitan Atlanta, Georgia, area—one from 1987 and one from 1992. Continuous data images are those obtained from remote sensors like Landsat and SPOT. This kind of data measures reflectance characteristics of the earth’s surface, analogous to exposed film capturing an image. You will use Image Difference to identify areas that have been cleared of vegetation for the purpose of constructing a large regional shopping mall.

Add and draw the images of Atlanta

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap tool bar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Click the Add Data button.
3. Press the Shift or Ctrl key, and click on atl_spotp_87.img and atl_spotp_92.img in the Add Data dialog.
4. Click OK.

With images active in the view, you can calculate the difference between them.

Compute the difference due to development

1. Click the Image Analysis dropdown arrow, click Utilities, and click Image Difference.


2. In the Image Difference dialog, click the Before Theme dropdown arrow, and select Atl_spotp_87.img.
3. Click the After Theme dropdown arrow, and select Atl_spotp_92.img.
4. Choose As Percent in the Highlight Changes box.
5. Click the arrows to 15 in the Increases more than box.
6. Click the arrows to 15 in the Decreases more than box.
7. Navigate to the directory where you want to store your Image Difference file, type the name of the file, and click Save.
8. Navigate to the directory where you want to store your Highlight Change file, type the name of the file, and click Save.
9. Click OK in the Image Difference dialog.

The Highlight Change and Image Difference files appear in the Table of contents and the view.
Image Difference calculates the difference in pixel values. With the 15 percent parameter you set, Image Difference finds areas that have increased by at least 15 percent from the Before Theme (designating clearing) and highlights them in green. Image Difference also finds areas that have decreased by at least 15 percent (designating an area that has increased vegetation or an area that was once dry, but is now wet) and highlights them in red.

Highlight Change shows the difference in red and green areas.
10. In the Table of contents, click the check box to turn off Highlight Change, and check Image Difference to display it in the view.

The Image Difference image shows the results of the subtraction of the Before Theme from the After Theme.

Close the view

You can now clear the view and either go to the next portion of this exercise, Thematic Change, or end the session by closing ArcMap. If you want to shut down ArcMap with Image Analysis for ArcGIS, click the File menu, and click Exit. Click No when asked to save changes.

Using Thematic Change

Image Analysis for ArcGIS provides the Thematic Change feature to make comparisons between thematic data images. Thematic Change creates a theme that shows all possible combinations of change and how an area’s land cover class changed over time. Thematic Change is similar to Image Difference in that it computes changes between the same area at different points in time. However, Thematic Change can only be used with thematic data (data that is classified into distinct categories). An example of thematic data is a vegetation class map.

This next example uses two images of an area near Hagan Landing, South Carolina. The images were taken in 1987 and 1989, before and after Hurricane Hugo. Suppose you are the forest manager for a paper company that owns a parcel of land in the hurricane’s path. With Image Analysis for ArcGIS, you can see exactly how much of your forested land has been destroyed by the storm.
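Conceptually, Thematic Change amounts to a cross-tabulation in which every before-and-after class pair becomes its own output class. The sketch below shows that pairing with made-up class codes; it is not the extension's actual encoding.

```python
import numpy as np

# Classified rasters: 1 = Water, 2 = Forest, 3 = Bare Soil.
before = np.array([[2, 2], [1, 3]])
after = np.array([[3, 2], [1, 3]])

n_classes = 3
# Encode each (before, after) pair as a single change-class id.
change = (before - 1) * n_classes + (after - 1)

# Count pixels per combination, e.g. "was Forest, is now Bare Soil".
pairs, counts = np.unique(change, return_counts=True)
for pair, count in zip(pairs, counts):
    b, a = divmod(int(pair), n_classes)
    print(f"was class {b + 1}, is now class {a + 1}: {count} pixel(s)")
```

Multiplying a pixel count by the cell area would give the kind of damaged-area figure the forest manager in this example is after.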


Add the images of an area damaged by Hurricane Hugo

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap toolbar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension.
2. Open a new view and click Add Data.
3. Press either the Shift key or Ctrl key, and select both tm_oct87.img and tm_oct89.img in the Add Data dialog. Click Add.

This view shows an area damaged by Hurricane Hugo.

Create three classes of land cover

Before you calculate Thematic Change, you must first categorize the Before and After Themes. You can access Categorize through Unsupervised Classification, which is an option available from the Image Analysis dropdown menu. You’ll use the thematic themes created from those classifications to complete the Thematic Change calculation.

1. Click the dropdown arrow in the Layers section of the Image Analysis toolbar to make sure tm_oct87.img is active.
2. Click the Image Analysis dropdown arrow, point to Classification, and click Unsupervised/Categorize.
3. Click the Input Image dropdown arrow to make sure tm_oct87.img is in the text box.
4. Click the arrows to 3 or type 3 in the Desired Number of Classes box.
5. Navigate to the directory where you want to store the output image, type the file name (use unsupervised_class_87 for this example), and click Save.
6. Click OK in the Unsupervised Classification dialog.
Using Unsupervised Classification to categorize continuous images into thematic classes is particularly useful when you are unfamiliar with the data that makes up your image. You simply designate the number of classes you would like the data divided into, and Image Analysis for ArcGIS performs a calculation assigning pixels to classes depending on their values. By using Unsupervised Classification, you may be better able to quantify areas of different land cover in your image. You can then assign the classes names like water, forest, and bare soil.

7. Click the check box of tm_oct87.img so the original theme is not drawn in the view. This step makes the remaining themes draw faster in the view.

Give the classes names and assign colors to represent them

1. Double-click the title unsupervised_class_87.img to access the Layer Properties dialog.
2. Click the Symbology tab.
3. Verify that Class_names is selected in the Value Field.
4. Select Class 001, and double-click Class 001 under Class_names. Type the name Water.
5. Double-click the color bar under Symbol for Class 001, and choose blue from the color palette.
6. Select Class 002, and double-click Class 002 under Class_names. Type the name Forest.
7. Double-click the color bar under Symbol for Class 002, and choose green.
8. Select Class 003, and double-click Class 003 under Class_names. Type the name Bare Soil.
9. Double-click the color bar under Symbol for Class 003, and choose a tan or light brown color.
10. Click Apply and OK.
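Unsupervised classification of this kind is, at heart, a clustering of pixel values into the number of classes you request. The k-means sketch below is only a stand-in to show the idea (Image Analysis for ArcGIS uses its own algorithm); the data is random and the class count of three simply mirrors this exercise.

```python
import numpy as np

def kmeans_classify(pixels: np.ndarray, k: int = 3, iters: int = 20, seed: int = 0):
    """Cluster pixel vectors (n_pixels x n_bands) into k classes."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied out.
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers

# Fake 6-band pixels for a tiny 10x10 image, flattened to (rows*cols, bands).
rng = np.random.default_rng(1)
data = rng.random((100, 6))
labels, centers = kmeans_classify(data, k=3)
classified = labels.reshape(10, 10)   # back to image shape; classes 0..2
```

Naming and coloring the classes, as in the steps above, is then a matter of attaching labels and symbols to those cluster numbers.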


Categorize and name the areas in the post-hurricane image

1. Follow the steps provided for the theme tm_oct87.img on pages 25 and 26 under “Create three classes of land cover” and “Give the classes names and assign colors to represent them” to categorize the classes of the tm_oct89.img theme.
2. Click the box of the tm_oct89.img theme so that it does not draw in the view.

Recode to permanently write class names and colors to a file

After you have classified both of your images, you need to do a recode in order to permanently save the colors and class names you have assigned to the images. Recode lets you create a file with the specific images you’ve classified.

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
2. Click the Input Image dropdown arrow to select one of the classified images.
3. The Map Pixel Value through Field will read <From view>. Leave this as is.
4. Click the browse button to bring up your working directory, and name the Output Image.
5. Click OK.
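In raster terms, a recode is a lookup from old cell values to new ones written out to a new file, which is also how class names and colors become a permanent part of that file. The value table in this small sketch is invented purely for illustration.

```python
import numpy as np

classified = np.array([[1, 2], [3, 2]])      # e.g. 1 = Water, 2 = Forest, 3 = Bare Soil

# Old value -> new value table; unlisted values fall through unchanged.
remap = {2: 20, 3: 30}
recoded = classified.copy()
for old, new in remap.items():
    recoded[classified == old] = new

print(recoded)   # [[ 1 20] [30 20]]
```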
Now do the same thing and perform a recode on the other classified image you did of the Hugo area. Both of the images will have your class names and colors permanently saved.

Use Thematic Change to see how land cover changed because of Hugo

1. Make sure both recoded images are checked in the Table of contents so both will be active in the view.
2. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Thematic Change.
3. Click the Before Theme dropdown arrow and select the 87 classification image.
4. Click the After Theme dropdown arrow, and select the 89 classification image.
5. Navigate to the directory where you want to store the Output Image, type the file name, and click Save.
6. Click OK.
7. Click the check box of Thematic Change to draw it in the view.
8. Double-click the Thematic Change title to access Layer Properties.
9. In the Symbology tab, double-click the symbol for was: Class 002, is now: Class 003 (was Forest, is now Bare Soil) to access the color palette.
10. Click the color red in the color palette, and click Apply. You don’t have to choose red, you can use any color you like.
11. Click OK.

You can see the amount of destruction in red. The red shows what was forest and is now bare soil.

Add a feature theme that shows the property boundary

Using Thematic Change, the overall damage caused by the hurricane is clear. Next, you will want to see how much damage actually occurred on the paper company’s land.

1. Click Add Data.
2. Select property.shp, and click Add.


Thematic Change image with the property shapefile

Make the property transparent

1. Double-click on the property theme to access Layer Properties.
2. Click the Symbology tab, and double-click the color symbol.
3. In the Symbol Selector, click the Hollow symbol.
4. Click the Outline Width arrows, or type the number 3 in the box.
5. Click the Outline Color dropdown arrow, and choose a color that will easily stand out to show your property line.
6. Click OK.
7. Click Apply and OK on the Symbology tab.

The yellow outline clearly shows the devastation within the paper company’s property boundaries.
Exercise 5: Mosaicking images
Image Analysis for ArcGIS allows you to mosaic multiple images. When you mosaic images, you join them together to form one single image that covers the entire area. To mosaic images, simply display them in the view, ensure that they have the same number of bands, then select Mosaic.

In the following exercise, you are going to mosaic two airphotos with the same resolution.

Add and draw the images

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap tool bar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension with a new map.
2. Click the Add Data button.
3. Press the Shift key and select Airphoto1.img and Airphoto2.img in the Add Data dialog. Click Add.
4. Click Airphoto1.img and drag it so that it is at the top of the Table of contents.

The two airphotos display in the view. The Mosaic tool joins them as they appear in the view: whichever is on top is also on top in the mosaicked image.

Zoom in to see image details

1. Select Airphoto1.img, and right-click your mouse.
2. Click Zoom to raster resolution.

The two images are displayed at a 1:1 resolution. You can now use Pan to see how they overlap.

3. Click the Pan button, then maneuver the images in the view.


This illustration shows where the two images overlap.

4. Click the Full Extent button so that both images display in their entirety in the view.

Use Mosaic to join the images

1. If you want to use some other extent than Union of Inputs for your mosaic, you must first go to the Extent tab in the Options dialog and change the Extent before opening Mosaic Images. After opening the Mosaic Images dialog, you cannot access the Options dialog. However, it is recommended that you keep the default of Union of Inputs for mosaicking.
2. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Mosaic Images.
3. Click the Handle Images overlaps dropdown arrow and choose Use Order Displayed.
4. If you want to automatically crop your images, check the box, and use the arrows or type the percentage by which to crop the images.
5. Choose Brightness/Contrast as the Color Balancing option.
6. If you have changed the extent to something other than Union of Inputs, check this box, but for this exercise you will need to leave the extent set at Union of Inputs and the box unchecked.
7. Navigate to the directory where you want to save your files, type the file name, and click Save.
8. Click OK.

The Mosaic function joins the two images as they appear in the view. In this case Airphoto1 is mosaicked over Airphoto2.
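The Use Order Displayed behavior can be pictured as painting each input into a common output grid in stacking order, so the image on top in the Table of contents wins wherever the inputs overlap. The toy sketch below assumes the rasters are already aligned to the same grid and ignores georeferencing and color balancing, which the real Mosaic function handles.

```python
import numpy as np

rows, cols = 4, 6
mosaic = np.full((rows, cols), np.nan)        # common output grid (Union of Inputs)

# Two aligned input rasters with NaN outside their own footprints.
photo2 = np.full((rows, cols), np.nan); photo2[:, 2:] = 2.0   # bottom layer
photo1 = np.full((rows, cols), np.nan); photo1[:, :4] = 1.0   # top layer

# Paint bottom-up so the layer on top in the Table of contents wins.
for layer in (photo2, photo1):
    valid = ~np.isnan(layer)
    mosaic[valid] = layer[valid]

print(mosaic)   # columns 0-3 come from photo1, columns 4-5 from photo2
```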


Exercise 6: Orthorectification of camera imagery

The Image Analysis for ArcGIS extension has a feature called Geocorrection Properties. The function of this feature is to rectify imagery. One of the tools that makes up Geocorrection Properties is the Camera model.

In this exercise you will orthorectify images using the Camera model in Geocorrection Properties.

Add raster and feature datasets

1. If you are starting immediately after the previous exercise, clear your view by clicking the New Map File button on your ArcMap tool bar. You do not need to save the image. If you are beginning here, start ArcMap and load the Image Analysis for ArcGIS extension with a new map.
2. Click the Add Data button.
3. Hold the Shift key down and select both ps_napp.img and ps_streets.shp in the Add Data dialog. Click Add.
4. Right click on ps_napp.img and click Zoom to Layer.

The images are drawn in the view. You can see the fiducial markings around the edges and at the top.

Select the coordinate system for the image

This procedure defines the coordinate system for the data frame in Image Analysis for ArcGIS.

1. Either select Layers in the Table of contents and right click, or move your cursor into the view and right click.
2. Select Properties at the bottom of the menu to bring up the Data Frame Properties dialog.
3. Click the Coordinate System tab.
4. In the box labeled Select a coordinate system, click Predefined.
5. Click Projected Coordinate Systems, and then click Utm.
6. Click NAD 1927, then click NAD 1927 UTM Zone 11N.
7. Click Apply, and click OK.

Orthorectifying your image using Geocorrection Properties

1. Click the Model Types dropdown arrow, and click Camera.
2. Click the Geocorrection Properties button on the toolbar to open the Camera dialog.
3. Click the Elevation tab, and select File to use as the Elevation Source.
4. Navigate to the ArcGIS ArcTutor directory, and choose ps_dem.img as the Elevation File.
5. Click the Elevation Units dropdown arrow and select Meters.
6. Check Account for Earth’s curvature.
7. Click the Camera tab.
8. Click the Camera Name dropdown arrow, and select Default Wild.
9. In the Principal Point box, enter -0.004 for X and 0.000 for Y.
10. Enter a Focal Length of 152.804.
11. Click the arrows, or type 4 for the number of Fiducials.
12. Click in the Film X and Film Y box where the number of Fiducials will reduce to 4.
13. Type the following coordinates in the corresponding fiducial spaces. Use the Tab key to move from space to space.
   1. -106.000 106.000
   2. 105.999 105.994
   3. 105.998 -105.999
   4. -106.008 -105.999
14. Name the camera in the Camera Name box.
15. Click Save to save the camera information with the Camera Name.
16. Click Apply and move to the next section.

Fiducial placement

1. Click the Fiducials tab, and make sure the first fiducial orientation is selected.
2. Click the Green fiducial, and the software will take you to the approximate location of the first fiducial placement. Your cursor has become a crosshair.
3. Click the Fixed Zoom In tool, and zoom in until you can see the actual fiducial, and click the crosshair there. The software will take you to each of the four points where you can click the crosshair in the fiducial marker.

When you are done placing fiducials, make sure to click Apply then OK to close. You can then right click on the image in the Table of contents, and click Zoom to Layer. You will notice that both the image and the shapefile are now displayed in the view. To look at the root mean square error (RMSE) on the Fiducials tab, you can reopen the Camera Properties dialog. The RMSE should be less than 1.0. Now, it is time to rectify the images.
After placing fiducials, both the image and the shapefile are shown in the view for rectification.

Placing links

1. Click the Add Links button.
2. Looking closely at the image and shapefile in the view, and using the next image as a guide, line up where you should place the first link. Follow the markers in the next image to place the first three links. You will need to click the crosshair on the point in the image first and then drag the cursor over to the point in the shapefile where you want to click.

Your first link should look approximately like this:

3. Place links 2 and 3.


After placing the third link, your image should look something like this:

Your image should warp and become aligned with the streets shapefile. You can use the Zoom tool to draw a rectangle around the aligned area and zoom in to see it more clearly.

4. Zoom to the upper left portion of the image, and place a link according to this next image.
5. Zoom to the lower left portion of the image, and place a link according to the previous image.

Now take a look at the RMS Error on the Links tab of Camera Properties. You can go to Save As on the Image Analysis menu and save the image if you wish.

What’s Next?
This tutorial has introduced you to some features and basic
functions of Image Analysis for ArcGIS. The following
chapters go into greater detail about the different tools and
elements of Image Analysis for ArcGIS, and include
instructions on how to use them to your advantage.



3 Applying data tools
IN THIS CHAPTER

• Seed Tool Properties
• Image Info
• Options

You will notice when you look at the Image Analysis menu that there are three choices called Seed Tool Properties, Image Info, and Options. All three aid you in manipulating, analyzing, and altering your data so you can produce results that are easier to interpret than they would be with no data tool input.

• Seed Tool Properties automatically generates feature layer polygons of similar spectral value.
• Image Info gives you the ability to apply a NoData Value and recalculate statistics.
• Options lets you change extent, cell size, preferences, and more.
Using Seed Tool Properties
As stated in the opening of the chapter, the main function of Seed Tool Properties is to automatically generate feature layer polygons of similar spectral value. After creating a shapefile in ArcCatalog, you can either click in an image on a single point, or you can click and drag a rectangle in a portion of the image that interests you. You can decide which method you wish to use before clicking the tool on the toolbar, or you can experiment with which method looks best with your data.

In order to use the Seed Tool, you must first create the shapefile for the image you are using in ArcCatalog. You will need to open ArcCatalog, create a new shapefile in the directory you want to use, name it, choose polygon as the type of shapefile, and then use Start Editing on the Editor toolbar in ArcMap to activate the Seed Tool. Once you are finished and you have grown the polygon, you can go back to the Editor toolbar and select Stop Editing.

The band or bands used in growing the polygon are controlled by the current visible bands as set in Layer Properties. If you only have one band displayed, such as the red band, when you are interested in vegetation analysis, then the Seed Tool only looks at the statistics of that band to create the polygon. If you have all the bands (red, green, and blue) displayed, then the Seed Tool evaluates the statistics in each band of data before creating the polygon.

When a polygon shapefile is being edited, a polygon defined using the Seed Tool is added to the shapefile. Like other ArcGIS graphics, you can change the appearance of the polygon produced by the Seed Tool using the Graphics tools.

Controlling the Seed Tool

You can use the Seed Tool simply by choosing it from the Image Analysis toolbar and clicking on an image after generating a shapefile. The defaults usually produce a good result. However, if you want more control over the parameters of the Seed Tool, you can open Seed Tool Properties from the Image Analysis menu.

Seed Tool dialog

Seed Radius

When you use the simple click method, the Seed Tool is controlled by the Seed Radius. You can change the number of pixels of the Seed Radius by opening the dialog from the Image Analysis menu. From this dialog, you select your Seed Radius in pixels. The Image Analysis for ArcGIS default Seed Radius is 5 pixels.

The Seed Radius determines how selective the Seed Tool is when selecting contiguous pixels. A larger Seed Radius includes more pixels to calculate the range of pixel values used to grow the polygon, and typically produces a larger polygon. A smaller Seed Radius uses fewer pixels to determine the range. Setting the Seed Radius to 0.5 or less restricts the polygon to growing over pixels with the exact value as the pixel you click on in the image. This can be useful for thematic images in which a contiguous area might have a single pixel value, instead of a range of values like continuous data.

Island Polygons

The other option on the Seed Tool Properties dialog is Include Island Polygons. You should leave this option checked for use with Find Like Areas. For single feature mapping where you want to see a more refined boundary, you may want to turn it off.


Preparing to use the Seed Tool

Go through the following steps to activate the Seed Tool and generate a polygon in your image.

1. Open ArcCatalog and make sure your working directory appears in ArcCatalog, or navigate to it.
2. Click File, point to New, and click Shapefile.
3. Rename the New_Shapefile.
4. Click the dropdown arrow and select Polygon.
5. Check Show Details.
6. Click Edit.


7. Click Select, Import, or New to input the coordinate
system the new shapefile will use. Clicking Import will
allow you to import the coordinates of the image you are
creating the shapefile for.
8. Click Apply and OK in the Spatial Reference Properties
dialog.
9. Click OK in the Create New Shapefile dialog.
10. Close ArcCatalog and click the dropdown arrow on the
Editor toolbar.
11. Select Start Editing.



Using the Seed Tool

These processes will take you through steps to change the Seed Radius and include Island Polygons. For an in-depth tutorial on using the Seed Tool and generating a polygon, see chapter 2 “Quick-start tutorial”.

Changing the Seed Radius

1. Click the Image Analysis dropdown arrow, and click Seed Tool Properties.
2. Type a new value in the Seed Radius text box.
3. If you need to enable Include Island Polygons, check the box.
4. Click OK.

After growing the polygon in the image with the Seed Tool, go back to the Editor toolbar, click the dropdown arrow, and click Stop Editing.


Image Info
When analyzing images, you often have pixel values you need to alter or manipulate in order to perceive different parts of the image better. The Image Info feature of Image Analysis for ArcGIS lets you choose a NoData Value and recalculate the statistics for your image so that a pixel value that is unimportant in your image can be designated as such.

You can apply NoData to a single layer of your image instead of to the entire image if you want or need to do so. When you choose to apply NoData to single layers, it is important that you click Apply on the dialog before moving to the next layer. You can also recalculate statistics (Recalc Stats) for single bands by choosing Current Band in the Statistics box on the Image Info dialog. It is important to remember that if you click Recalc Stats while Current Band is selected, Image Info will only recalculate the statistics for that band. If you want to set NoData for a single band, but recalculate statistics for all bands, you can choose All Bands after setting NoData in the single bands, and recalculate for all.

The Image Info dialog is found on the Image Analysis menu. When you choose it, the images in your view will be displayed on a dropdown menu under Layer Selection. You can then type the pixel value that you wish to give the NoData pixels in your image. The Statistics portion of the dialog also features a dropdown menu so you can designate the layer for which to calculate NoData. This area of the dialog also names the Pixel Type and the Minimum and Maximum values. When you click Recalc Stats, the statistics for the image are recalculated using the NoData Value, and you can close the image in the view, then reopen it to see the NoData Value applied. The Representation Type area of the dialog will automatically choose Continuous or Thematic depending on what kind of image you have in your view. If you find that a file you need to be continuous is listed as thematic, you can change it here.

NoData Value

The NoDataValue section of the Image Info dialog gives you the opportunity to label certain areas of your image as NoData. In order to do this, you assign a certain value that no other pixel in the image has to the pixels you want to classify as NoData. You will want to do this when the pixel values in that particular area of the image are not important to your statistics or image. You have to assign some type of value to those pixels to hold their place, so you need to come up with a value that’s not being used for any of the other pixels you want to include. Using 0 does not work because 0 does contain value. Look at the Minimum value and the Maximum value under Statistics on the Image Info dialog and choose your NoData value to be any number between the Minimum and Maximum.

Sometimes the pixel value you choose as NoData will already be used so that NoData matches some other part of your image. This problem becomes evident when the image is displayed in the view and there are black spots or triangles where it should be clear, or perhaps clear spots where it should be black. Also remember that you can type N/A or leave the area blank so that you have no NoData assigned if you don’t want to use this option.
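The effect of Recalc Stats with a NoData Value can be imagined as masking that value out before the statistics are computed. The NumPy illustration below is hedged accordingly; the value 255 is only a placeholder for this example, not a recommendation for your data.

```python
import numpy as np

band = np.array([[12, 14, 255],
                 [13, 255, 255]], dtype=float)   # 255 marks the fill area to ignore

nodata = 255.0
valid = band[band != nodata]

print(valid.min(), valid.max(), round(valid.mean(), 2))   # 12.0 14.0 13.0
# Compare with the unmasked statistics, which the fill pixels would distort:
print(band.min(), band.max(), round(band.mean(), 2))      # 12.0 255.0 134.0
```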


Using the Image Info dialog

1. Click the Image Analysis dropdown arrow, and click Image Info.
2. Click the Layer Selection dropdown arrow to make sure the correct image is displayed.
3. Click the Statistics dropdown arrow to make sure the layer you want to recalculate is selected.
4. Choose All Bands or Current Band.
5. Type the NoDataValue in the box.
6. Make sure the correct Representation Type is chosen for your image.
7. Click Recalc Stats.
8. Click Apply and OK.
9. Close the image and re-open to view the results visually.


Options
You can access the Options dialog through the Image Analysis menu. Through this dialog, you can set an analysis mask as well as set the extent, cell size, and preferences for future operations or a single operation. It's usually best to leave the options at their defaults, but there may be times you want or need to change them. When you're mosaicking images, you can go to the Extent tab on the Options dialog in order to set the extent to something other than Union of Inputs, which is the automatic default when mosaicking. For all other functions, the default extent is Intersection of Inputs. It is recommended that you leave the default Union of Inputs when mosaicking, but you can change it. If you do so, you will need to check the Use Extent from Analysis Options box on the Mosaic Images dialog. You can use the Options dialog with any Image Analysis feature, but you may find it particularly useful with the Data Preparation features that will be covered in the next chapter.
The Image Analysis Options dialog
The Options dialog has four tabs: General, Extent, Cell Size, and Preferences. On the General tab, your output directory is displayed, and the Analysis mask will default to none, but if you click the dropdown arrow, you can set it to any raster dataset. If you want to store your output images and shapefiles in one working directory, you can navigate to that directory or type the directory name in the Working directory box. This will allow your working directory to automatically come up every time you click the browse button for an output image. The Analysis Coordinate System lets you choose which coordinate system you would like the image to be saved with—the one for the input or the one for the active data frame. Finally, you can select whether or not to have a warning message display if raster inputs have to be projected during the analysis operation.

Extent

The Extent tab lets you control how much of a theme you want to use during processing. You do this by setting the Analysis extent. The rest of the tab becomes active when Same as Display, As Specified below, or Same as Layer "......" (whatever layer is active in the view) is chosen. Same as Display refers to the area currently displayed in the view. If the view has been zoomed in on a portion of a theme, then the functions only operate on that portion of the theme. When you choose Same as Layer, all of the information in the Table of contents for that layer is considered regardless of whether or not it is displayed in the view. As Specified below lets you fill in the information for the extent. You can also click the open file button on the Extent tab to choose a dataset to use as the Analysis extent. If you click this button, you can navigate to the directory where your data is stored and select a file that has extents falling within the selected project area.


The other options on the Analysis extent dropdown list are Intersection of Inputs and Union of Inputs. When you choose Intersection (which is the default extent for all functions except Mosaic), Image Analysis for ArcGIS performs functions on the area of overlap common to the input images. Portions of the images outside the area of overlap are discounted from analysis. Union is the default setting of Analysis extent for mosaicking. When the extent is set to Union of Inputs, Image Analysis for ArcGIS uses the union of every input theme. It is highly recommended that you keep this default setting when mosaicking images.

When you choose an extent that activates the rest of the Extent tab, the fields are Top, Right, Bottom, and Left. If you are familiar with the data and want to enter exact coordinates, you can do so in these fields. Same as Display and As Specified Below activate the Snap extent to field, where you can choose an image to snap the Analysis mask to.

The Extent tab on the Options dialog

Cell Size

The third tab on the Options dialog is Cell Size. This is for the cell size of images you produce using Image Analysis for ArcGIS. The first field on the tab is a dropdown list for Analysis cell size. You can choose Maximum of Inputs, Minimum of Inputs, As Specified below, or Same as Layer ".....". Choosing Maximum of Inputs yields an output that has the largest cell size (coarsest resolution) of the input files. For example, if you use Image Difference on a 10 meter image and a 20 meter image, the output is a 20 meter image.

The Minimum of Inputs option produces an output that has the smallest cell size (finest resolution) of the input files. For example, if you use Image Difference on a 10 meter image and a 20 meter image, the output is a 10 meter image.

When you choose As Specified below, you can enter whatever cell size you wish to use, and Image Analysis for ArcGIS will adjust the output accordingly.

If you choose Same as Layer "....", indicating a layer in the view, the cell size reflects the current cell size of that layer.

The Cell Size field will display in either meters or feet. To choose one, click View in ArcMap, click Data Frame Properties, and on the General Tab, click the dropdown arrow for Map Units and choose either Feet or Meters.

The Number of Rows and Number of Columns fields should not be updated manually, as they update automatically as analysis properties are changed.
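The extent and cell size rules described above can be expressed in a few lines of code. The following Python sketch is an illustration only, not part of Image Analysis for ArcGIS; the function and variable names are hypothetical, and extents are assumed to be (left, bottom, right, top) tuples in map units.

    # Minimal sketch of the Analysis extent and cell size rules (assumed inputs).
    def intersection_of_inputs(ext_a, ext_b):
        # Overlap common to both images: larger minimums, smaller maximums.
        left = max(ext_a[0], ext_b[0]); bottom = max(ext_a[1], ext_b[1])
        right = min(ext_a[2], ext_b[2]); top = min(ext_a[3], ext_b[3])
        if left >= right or bottom >= top:
            raise ValueError("The input extents do not overlap.")
        return (left, bottom, right, top)

    def union_of_inputs(ext_a, ext_b):
        # Rectangle covering every input theme (the mosaicking default).
        return (min(ext_a[0], ext_b[0]), min(ext_a[1], ext_b[1]),
                max(ext_a[2], ext_b[2]), max(ext_a[3], ext_b[3]))

    # Cell size rules: Maximum of Inputs keeps the larger (coarser) cell,
    # Minimum of Inputs keeps the smaller (finer) cell.
    maximum_of_inputs = max(10.0, 20.0)   # -> 20.0, as in the Image Difference example
    minimum_of_inputs = min(10.0, 20.0)   # -> 10.0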


The Cell Size tab on the Options dialog

The Preferences tab on the Options dialog

Preferences

It is recommended that you leave the preference choice at the default of Bilinear Interpolation, but you can change it to Nearest Neighbor or Cubic Convolution if your data requires one of those choices. Bilinear Interpolation is a resampling method that uses the data file values of four pixels in a 2 × 2 window to calculate an output data file value by computing a weighted average of the input data file values with a bilinear function.

The Nearest Neighbor option is a resampling method in which the output data file value is equal to the input pixel that has coordinates closest to the retransformed coordinates of the output pixel.

The Cubic Convolution option is a resampling method that uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output data file value with a cubic function.
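To make the first two resampling rules concrete, here is a minimal Python/numpy sketch written for illustration only; it samples a single-band array at one fractional pixel coordinate and is not the tool's own implementation.

    import numpy as np

    def nearest_neighbor(band, x, y):
        # Output value = input pixel whose coordinates are closest to (x, y).
        return band[int(round(y)), int(round(x))]

    def bilinear(band, x, y):
        # Weighted average of the 4 pixels in a 2 x 2 window around (x, y).
        # Assumes (x, y) is at least one pixel away from the right/bottom edge.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        window = band[y0:y0 + 2, x0:x0 + 2].astype(float)
        weights = np.array([[(1 - dx) * (1 - dy), dx * (1 - dy)],
                            [(1 - dx) * dy,       dx * dy]])
        return float((window * weights).sum())

    # Cubic Convolution would follow the same pattern but use a 4 x 4 window
    # and a cubic weighting function (not shown here).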


Using the Options dialog

The following processes will take you through the parts you can change on the Options dialog.

The General Tab

1. Click the Image Analysis dropdown arrow, and click Options.
2. Navigate to the Working directory if it's not displayed in the box.
3. Click the dropdown arrow and select the Analysis mask if you want one, or navigate to the directory where it is stored.
4. Choose the Analysis Coordinate System.
5. Check or uncheck the Display warning box according to your needs.
6. Click the Extent tab to change Extents or OK to finish.


The Extent Tab

1. Click the dropdown arrow for Analysis extent, and choose an extent, or navigate to a directory to choose a dataset for the extent.
2. If the coordinate boxes are on, you can type in coordinates if you know the exact ones to use.
3. If activated, click the dropdown arrow, and choose an image to Snap extent to, or navigate to the directory where it is stored.
4. Click the Cell Size tab, or OK.


Cell Size tab

1. Click the dropdown arrow, and choose the cell size, or navigate to the directory where it is stored.
2. If activated, type the cell size you want to use.
3. Type the number of rows.
4. Type the number of columns.
5. Click the Preferences tab or OK.

The Preferences tab has only the one option of clicking the dropdown arrow and choosing to resample using either Nearest Neighbor, Bilinear Interpolation, or Cubic Convolution.


Working with features

Section 2

4  Using Data Preparation

IN THIS CHAPTER

• Create New Image
• Subset Image
• Mosaic Images
• Reproject Image

When using the Image Analysis for ArcGIS extension, it is sometimes necessary to prepare your data first. It is important to understand how to prepare your data before moving on to the different ways Image Analysis for ArcGIS gives you to manipulate your data. You are given several options for preparing data in Image Analysis for ArcGIS.

In this chapter you will learn how to:

• Create a new image
• Subset an image
• Mosaic images
• Reproject an image
Create New Image
The Create New Image function makes it easy to create a new image file. It also allows you to define the size and content of the file as well as choosing whether or not the new image type will be thematic or continuous.

Choose thematic for raster layers that contain qualitative and categorical information about an area. Thematic layers lend themselves to applications in which categories or themes are used. They are used to represent data measured on a nominal or ordinal scale, such as soils, land use, land cover, and roads.

Continuous data is represented in raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband or single band such as Landsat, SPOT, digitized (scanned) aerial photograph, DEM, slope, and temperature.

With this feature, you also get to choose the value of columns and rows (the default value is 512, but you can change that) and you choose the data type as well. The data type determines the type of numbers and the range of values that can be stored in a raster layer.

Data Type          Minimum Value    Maximum Value
Unsigned 1 bit     0                1
Unsigned 2 bit     0                3
Unsigned 4 bit     0                15
Unsigned 8 bit     0                255
Signed 8 bit       -128             127
Unsigned 16 bit    0                65,535
Signed 16 bit      -32,768          32,767
Unsigned 32 bit    0                4 billion
Signed 32 bit      -2 billion       2 billion
Float Single       —                —

The Number of Layers allows you to select how many layers to create in the new file.

The Initial Value lets you choose the number to initialize the new file. Every cell is given this value.

When you are finished entering your information into the fields, you can click OK to create the image, or Cancel to close the dialog.
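Outside the dialog, the same parameters (columns, rows, data type, number of layers, and initial value) map naturally onto an array. The numpy sketch below is an illustration only, with hypothetical parameter values; it is not how Image Analysis for ArcGIS creates its files.

    import numpy as np

    # Assumed parameters mirroring the Create New Image dialog (hypothetical values).
    columns, rows = 512, 512          # the dialog default is 512 x 512
    number_of_layers = 3
    data_type = np.uint8              # "Unsigned 8 bit": valid range 0..255
    initial_value = 0                 # every cell is given this value

    new_image = np.full((number_of_layers, rows, columns), initial_value, dtype=data_type)

    # The data type determines the range of values that can be stored:
    info = np.iinfo(data_type)
    print(info.min, info.max)         # -> 0 255 for unsigned 8 bit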


Creating a new image

1. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Create New Image.
2. Navigate to the directory where the Output Image should be stored.
3. Choose Thematic or Continuous as the Output Image Type.
4. Type or click the arrows to enter how many Columns or Rows if different from the default number of 512.
5. Click the dropdown arrow to choose the Data Type.
6. Type or click the arrows to enter Number of Layers.
7. Type or click the arrows to enter the Initial Value.
8. Click OK.


Subset Image
This function allows you to copy a portion (a subset) of an input
data file into an output data file. This may be necessary if you have
an image file that is much larger than the particular area you need
to study. Subset Image has the advantage of not only eliminating
extraneous data, but it also speeds up processing as well, which can
be important when dealing with multiband data.

The Subset Image function works on multiband continuous data to


separate that data into bands. For example, if you are working with
a TM image that has seven bands of data, you may wish to make a
subset of bands 2, 3, and 4, and discard the rest.

The Subset Image function can be used to subset an image either


spatially or spectrally. You will probably spatially subset more
frequently than spectrally. To subset spatially, you first bring up the
Options dialog, which allows you to apply a mask or extent or set
the cell size. These options are used for all Image Analysis for
ArcGIS functions including Subset Image. Spatial subsets are
particularly useful if you have a large image and you only want to
subset part of it for analysis. You can use the Zoom In tool to draw
a rectangle around the specific area you wish to subset and go from
there. If you wish to subset an image spectrally, you do it directly
in the Subset Image dialog by entering the desired band numbers to
extract from the image.

Following are illustrations of a TM image of the Amazon as it undergoes a spectral subset.

The Amazon TM image before subsetting

Amazon TM after a spectral subset

This feature is also accessible from the Utilities menu.



The next illustrations reflect images using the spatial subsetting
option.

The Options dialog

The image of the Pentagon before spatial subsetting

In order to specify the particular area to subset, you click the Zoom
In tool, draw a rectangle over the area, open the options dialog, and
select Same As Display on the Extent tab. The rectangle is defined
by Top, Left, Bottom, and Right coordinates. Top and Bottom are
measured as the locations on the Y-axis and the Left and Right
coordinates are measured on the X-axis. You can then save the
subset image and work from there on your analysis.

The Pentagon subset image after setting the Analysis Extent in Options

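For readers who want to see the two kinds of subset side by side in code, here is a small numpy sketch; it is an illustration only, not the Subset Image tool, and band_stack is a hypothetical array ordered (bands, rows, columns).

    import numpy as np

    # Hypothetical 7-band TM scene: (bands, rows, columns).
    band_stack = np.zeros((7, 1000, 1000), dtype=np.uint8)

    # Spectral subset: keep bands 2, 3, and 4 and discard the rest
    # (band numbers are 1-based in the dialog, so subtract 1 for array indexing).
    spectral_subset = band_stack[[1, 2, 3], :, :]

    # Spatial subset: keep only the rectangle defined by Top/Bottom rows and
    # Left/Right columns (row/column indices here, not map coordinates).
    top, bottom, left, right = 200, 600, 300, 700
    spatial_subset = band_stack[:, top:bottom, left:right]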


Subsetting an image spectrally

1. Click Add Data to add the image to the view.
2. Double-click the image name in the Table of contents to open Layer Properties.
3. Click the Symbology tab in Layer Properties.
4. Click Stretched in the Show panel.
5. Click the Band dropdown arrow, and select the layer you want to subset.
6. Click Apply and OK.


7. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Subset Image.
8. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
9. Using a comma for separation, type the band numbers you want to subset in the text box.
10. Type the file name of the Output Image, or navigate to the directory where it should be stored.
11. Click OK.


Subsetting an image spatially

1. Click the Add Data button to add your image.
2. Click the Zoom In tool, and draw a rectangle over the area you want to subset.
3. Click the Image Analysis menu, and click Options.
4. Click the Extent tab.
5. Click the Analysis extent dropdown arrow, and select Same As Display.
6. Click Apply and OK.
7. Click the Image Analysis dropdown arrow and click Save As, and save the image in the appropriate directory.


Mosaic Images
Mosaicking is the process of joining georeferenced images together to form a larger image. The input images must all contain map and projection information, although they need not be in the same projection or have the same cell sizes. Calibrated input images are also supported. All input images must have the same number of layers. You can mosaic single or multiband continuous data, or thematic data.

It is extremely important when mosaicking to arrange your images in the view as you want the output theme to appear before you mosaic them. Image Analysis for ArcGIS mosaics images strictly based on their appearance in the view. This allows you to mosaic a large number of images without having to make them all active.

It is also important that the images you plan to mosaic contain the same number of bands. You cannot mosaic a seven band TM image with a six band TM image. You can, however, use Subset Image to subset bands from an existing image and then mosaic regardless of the number of bands they originally contained.

You can mosaic images with different cell sizes or resolutions. When this happens you can consult the settings in the Image Analysis Options dialog for Cell Size. The Cell Size is initially set to the maximum cell size, so if you mosaic two images, one with a 4-meter resolution and one with a 5-meter resolution, the output mosaicked image has a 5-meter resolution. You can set the Cell Size in the Options dialog to whatever cell size you like so that the output mosaicked image has the cell size you selected.

The Extent tab on the Options dialog will default to Union of Inputs for mosaicking images. If, for some reason, you want to use a different extent, you can change it in the Options dialog and check the Use Extent from Analysis Options box on the Mosaic Images dialog. It is recommended that you leave it at the default of Union of Inputs.

Another Options feature to take note of is the Preferences tab. For mosaicking images, you should resample using Nearest Neighbor. This will ensure that the mosaicked pixels do not differ in their appearance from the original image. Other resampling methods use averages to compute pixel values and can produce an edge effect.

When you apply Mosaic, the images are processed using whatever stretch you've specified in the Layer Properties dialog. During processing, each image is fed through its own lookup table, and the output mosaicked image has the stretch built in and should be viewed with no stretch. This allows you to adjust the stretch of each image independently to achieve the desired overall color balance.

With the Mosaic tool you are also given a choice of how to handle image overlaps by using the order displayed, maximum value, minimum value, or average value. Choose:

Order Displayed — replaces each pixel in the overlap area with the pixel value of the image that is on top in the view.

Maximum Value — replaces each pixel in the overlap area with the greater value of the corresponding pixels in the overlapping images.

Minimum Value — replaces each pixel in the overlap area with the lesser value of the corresponding pixels in the overlapping images.

Average Value — replaces each pixel in the overlap area with the average of the values of the corresponding pixels in the overlapping images.


The color balancing options let you choose between balancing by
brightness/contrast, histogram matching, or none. If you choose
brightness/contrast, the mosaicked image will be balanced by
utilizing the adjustments you have made in Layer Properties/
Symbology. If you choose Histogram Matching, the input images
are adjusted to have similar histograms to the top of the image in
the view. Select None if you don’t want the pixel values adjusted.

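The four overlap-handling choices are easy to express for two images that have already been aligned to the same grid. The numpy sketch below only illustrates the rules (NaN marks cells where an image has no data); it is not the Mosaic Images tool, and the names are hypothetical.

    import numpy as np

    def mosaic_overlap(top_img, bottom_img, method):
        # top_img / bottom_img: aligned 2D arrays with NaN where there is no data.
        stack = np.stack([top_img, bottom_img])
        if method == "order displayed":      # pixel of the image on top in the view
            out = np.where(np.isnan(top_img), bottom_img, top_img)
        elif method == "maximum value":
            out = np.nanmax(stack, axis=0)
        elif method == "minimum value":
            out = np.nanmin(stack, axis=0)
        elif method == "average value":
            out = np.nanmean(stack, axis=0)
        else:
            raise ValueError("unknown overlap method")
        return out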


How to Mosaic Images

1. Add the images you want to mosaic to the view.
2. Arrange images in the view in the order that you want them in the mosaic.
3. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Mosaic Images.
4. Click the Handle Image Overlaps by dropdown arrow, and click the method you want to use.
5. If you want the images automatically cropped, check the box, and enter the Percent by which to crop the images.
6. Choose the Color Balance method.
7. Check the box if you want to use the extent you set in Analysis Options.
8. Navigate to the directory where the Output Image should be stored.
9. Click OK.

For more information on mosaicking images, see chapter 2, "Quick-start tutorial".


Reproject Image
Reproject Image gives you the ability to reproject raster image data from one map projection to another. Reproject Image, like all Image Analysis for ArcGIS functions, observes the settings in the Options dialog, so don't forget to use Options to set Extent, Cell Size, and so on if desired.

ArcMap has the capability to reproject images on the fly. Simply choose View/Data Frame Properties, select the Coordinate System tab, and select the desired projection. After you select the coordinate system, you apply it and go to Reproject Image in Image Analysis for ArcGIS.

At times you may need to produce an image in a specific projection. By having the desired output projection specified in the Data Frame Properties, the only things you need to specify in Reproject Image are the input and output images.

Before Reproject Image

Here is the reprojected image after changing the Coordinate System to Mercator (world):

After Reproject Image
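If you script your work outside ArcMap, the same kind of reprojection can be done with other tools; for example, the GDAL Python bindings (which are not part of Image Analysis for ArcGIS) provide gdal.Warp. A minimal sketch with hypothetical file names and a World Mercator target follows.

    from osgeo import gdal

    # Reproject input.tif to World Mercator (EPSG:3395); file names are hypothetical.
    gdal.Warp("reprojected.tif", "input.tif", dstSRS="EPSG:3395",
              resampleAlg="bilinear")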


How to Reproject an Image

1. Click Add Data, and add the image you want to reproject to the view.
2. Right-click in the view, and click Properties to bring up the Data Frame Properties dialog.
3. Click the Coordinate System tab.
4. Click Predefined and choose whatever coordinate system you want to use to reproject the image.
5. Click Apply and OK.


6. Click the Image Analysis dropdown arrow, point to Data Preparation, and click Reproject Image.
7. Click the Input Image dropdown arrow and click the file you want to use, or navigate to the directory where it is stored.
8. Navigate to the directory where the Output Image should be stored.
9. Click OK.


5  Performing Spatial Enhancement

IN THIS CHAPTER

• Convolution
• Non-Directional Edge
• Focal Analysis
• Resolution Merge

Spatial Enhancement is a function that enhances an image using the values of individual and surrounding pixels. Spatial Enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1986) defines spatial frequency as "the number of changes in brightness value per unit distance for any part of an image."

There are three types of spatial frequency:

• zero spatial frequency — a flat image, in which every pixel has the same value
• low spatial frequency — an image consisting of a smoothly varying gray scale
• high spatial frequency — an image consisting of drastically changing pixel values such as a checkerboard of black and white pixels

The Spatial Enhancement feature lets you use convolution, non-directional edge, focal analysis, and resolution merge to enhance your images. Depending on what you need to do to your image, you will select one feature from the Spatial Enhancement menu. This chapter will focus on the explanation of these features as well as how to apply them to your data.

This chapter is organized according to the order in which the Spatial Enhancement tools appear. You may want to skip ahead if the information you are seeking is about one of the tools near the end of the menu list.
Convolution
Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels. The numbers in the matrix serve to weight this average toward particular pixels. These numbers are often called coefficients, because they are used as such in the mathematical equations.

Applying convolution filtering

Apply Convolution filtering by clicking the Image Analysis dropdown arrow, and choosing Convolution from the Spatial Enhancement menu. The word filtering is a broad term, which refers to the altering of spatial or spectral features for image enhancement (Jensen 1996). Convolution filtering is one method of spatial filtering. Some texts use the terms synonymously.

Convolution example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band) so that the pixel to be convolved is in the center of the window.

Data

2  8  6  6  6
2  8  6  6  6
2  2  8  6  6
2  2  2  8  6
2  2  2  2  8

Kernel

-1  -1  -1
-1  16  -1
-1  -1  -1

To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown in this equation:

integer [((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1)]
= int [(128 - 40) / (16 - 8)]
= int (88 / 8) = int (11) = 11


When the 2 × 2 set of pixels near the center of this 5 × 5 image is convolved, the output values are:

        1    2    3    4    5
   1    -    -    -    -    -
   2    -   11    5    -    -
   3    -    0   11    -    -
   4    -    -    -    -    -
   5    -    -    -    -    -

The kernel used in this example is a high frequency kernel. The relatively lower values become lower, and the higher values become higher, thus increasing the spatial frequency of the image.

Convolution formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = \frac{\sum_{i=1}^{q} \sum_{j=1}^{q} f_{ij} d_{ij}}{F}

Where:

f_{ij} = the coefficient of a convolution kernel at position i,j (in the kernel)
d_{ij} = the data value of the pixel that corresponds to f_{ij}
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero
V = the output pixel value

Source: Modified from Jensen 1996; Schowengerdt 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that the output values are in relatively the same range as the input values. Since F cannot equal zero (division by zero is not defined), F is set to 1 if the sum is zero.
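The formula above is short enough to implement directly. The following numpy sketch, provided only as an illustration, convolves the center pixel of the sample data with the high frequency kernel and reproduces the value 11 computed in the worked example.

    import numpy as np

    kernel = np.array([[-1, -1, -1],
                       [-1, 16, -1],
                       [-1, -1, -1]], dtype=float)

    window = np.array([[8, 6, 6],     # 3 x 3 window of data file values with the
                       [2, 8, 6],     # pixel being convolved in the center
                       [2, 2, 8]], dtype=float)

    F = kernel.sum()
    if F == 0:                        # zero sum kernel: no division is performed
        F = 1.0
    V = int((kernel * window).sum() / F)
    print(V)                          # -> 11, matching the worked example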


Zero sum kernels

Zero sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero sum kernel is used, then the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)
• low in areas of low spatial frequency
• extreme in areas of high spatial frequency (high values become much higher, low values become much lower)

Therefore, a zero sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels. The resulting image often consists of only edges and zeros.

Zero sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen 1996).

-1  -1  -1
 1  -2   1
 1   1   1

High frequency kernels

A high frequency kernel, or high pass kernel, has the effect of increasing spatial frequency.

High frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero sum kernels), they highlight edges and do not necessarily eliminate other features.

-1  -1  -1
-1  16  -1
-1  -1  -1

When a high frequency kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this...

BEFORE               AFTER
204  200  197        -    -   -
201  106  209        -   10   -
198  200  210        -    -   -

...the low value gets lower. Inversely, when the high frequency kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...


BEFORE               AFTER
 64   60   57        -    -    -
 61  125   69        -  188    -
 58   60   70        -    -    -

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.

Low frequency kernels

Below is an example of a low frequency kernel, or low pass kernel, which decreases spatial frequency.

1  1  1
1  1  1
1  1  1

This kernel simply averages the values of the pixels, causing them to be more homogeneous. The resulting image looks either more smooth or more blurred.

Convolution with High Pass


Apply Convolution

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Convolution.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Kernel dropdown arrow, and click the kernel you want to use.
4. Choose Reflection or Background Fill.
5. Navigate to the directory where the Output Image should be stored.
6. Click OK.

Applying Convolution

Reflection fills in the area beyond the edge of the image with a reflection of the values at the edge. Background fill uses zeros to fill in the kernel area beyond the edge of the image.

Convolution allows you to perform image enhancement operations such as averaging and high pass or low pass filtering.

Each data file value of the new output file is calculated by centering the kernel over a pixel and multiplying the original values of the center pixel and the appropriate surrounding pixels by the corresponding coefficients from the matrix. To make sure the output values are within the general range of the input values, these numbers are summed and then divided by the sum of the coefficients. If the sum is zero, the division is not performed.


Non-Directional Edge
The Non-Directional Edge function averages the results of two orthogonal first derivative edge detectors. The filters used are the Sobel and Prewitt filters. Both of these filters are based on a calculation of the 1st derivative, or slope, in both the x and y directions. Both use orthogonal kernels convolved separately with the original image, and then combined.

The Non-Directional Edge is based on the Sobel zero-sum convolution kernel. Most of the standard image processing filters are implemented as a single pass moving window (kernel) convolution. Examples include low pass, edge enhance, edge detection, and summary filters.

For this model, a Sobel filter has been selected. To convert this model to the Prewitt filter calculation, the kernels must be changed according to the example below.

Sobel:
  horizontal          vertical
  -1  -2  -1          1  0  -1
   0   0   0          2  0  -2
   1   2   1          1  0  -1

Prewitt:
  horizontal          vertical
  -1  -1  -1          1  0  -1
   0   0   0          1  0  -1
   1   1   1          1  0  -1

Image of Seattle before applying Non-Directional Edge

After Non-Directional Edge
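As a rough illustration of how the two orthogonal kernels are combined (a generic sketch, not the tool's exact implementation), the horizontal and vertical Sobel responses can be computed with scipy and then averaged, as the description above suggests.

    import numpy as np
    from scipy import ndimage

    sobel_horizontal = np.array([[-1, -2, -1],
                                 [ 0,  0,  0],
                                 [ 1,  2,  1]], dtype=float)
    sobel_vertical = np.array([[1, 0, -1],
                               [2, 0, -2],
                               [1, 0, -1]], dtype=float)

    def non_directional_edge(band):
        # Convolve the image separately with each orthogonal kernel,
        # then average the magnitudes of the two responses.
        gx = ndimage.convolve(band.astype(float), sobel_horizontal, mode="reflect")
        gy = ndimage.convolve(band.astype(float), sobel_vertical, mode="reflect")
        return (np.abs(gx) + np.abs(gy)) / 2.0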


Using Non-Directional Edge

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Non-Directional Edge.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Choose Sobel or Prewitt.
4. Choose Reflection or Background Fill.
5. Type the file name of the Output Image, or navigate to the directory where it should be stored.
6. Click OK.

Using Non-Directional Edge

In step 4, reflection fills in the area beyond the edge of the image with a reflection of the values at the edge. Background fill uses zeros to fill in the kernel area beyond the edge of the image.


Focal Analysis
The Focal Analysis function enables you to perform one of several
types of analysis on class values in an image file using a process
similar to convolution filtering.

This model (Median Filter) is useful for reducing noise such as


random spikes in data sets, dead sensor striping, and other impulse
imperfections in any type of image. It is also useful for enhancing
thematic images.

Focal Analysis evaluates the region surrounding the pixel of


interest (center pixel). The operations that can be performed on the
pixel of interest include:

• Standard Deviation — measure of texture
• Sum
• Mean — good for despeckling radar data
• Median — despeckle radar
• Min
• Max

These functions allow you to select the size of the surrounding region to evaluate by selecting the window size.

An image before Focal Analysis

After Focal Analysis is performed
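For readers who want to experiment outside the dialog, scipy's ndimage module offers ready-made focal (moving window) statistics. The sketch below is an illustration only, not the Focal Analysis tool itself, and uses placeholder data.

    import numpy as np
    from scipy import ndimage

    band = np.random.randint(0, 255, size=(200, 200)).astype(float)  # placeholder data

    # Median over a 3 x 3 neighborhood: useful for reducing noise such as
    # random spikes or dead sensor striping.
    median_filtered = ndimage.median_filter(band, size=3)

    # Other focal functions over the same neighborhood.
    local_mean = ndimage.uniform_filter(band, size=3)                 # Mean
    local_std = ndimage.generic_filter(band, np.std, size=3)          # Standard Deviation
    local_min = ndimage.minimum_filter(band, size=3)                  # Min
    local_max = ndimage.maximum_filter(band, size=3)                  # Max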


Applying Focal Analysis

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Focal.
2. Click the Input Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Focal Function dropdown arrow, and click the function you want to use.
4. Click the Neighborhood Shape dropdown arrow, and click the shape you want to use.
5. Click the Neighborhood Definition dropdown arrow, and click the Matrix size you want to use.
6. Type the file name of the Output Image, or navigate to the directory where it should be stored.
7. Click OK.

Focal Analysis Results

Focal Analysis is similar to Convolution in the process that it uses. With Focal Analysis, you are able to perform several different types of analysis on the pixel values in an image file.


Resolution Merge
The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution. This function merges imagery of differing spatial resolutions.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution—10 m. Combining these two images to yield a seven-band data set with 10 m resolution provides the best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R,G,B).

Chavez (1991), among others, uses the forward-reverse principal components transforms with the SPOT image, replacing PC-1.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PCs or in H and S. Since SPOT data does not cover the full spectral range that TM data does, this assumption does not strictly hold. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image.

Another technique (Schowengerdt 1980) additively combines a high frequency image derived from the high spatial resolution data (i.e., SPOT panchromatic) with the high spectral resolution Landsat TM image.

The Resolution Merge function uses the Brovey Transform method of resampling low spatial resolution data to a higher spatial resolution while retaining spectral information.

Brovey Transform

In the Brovey Transform, three bands are used according to the following formula:

DN_B1_new = [DN_B1 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image
DN_B2_new = [DN_B2 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image
DN_B3_new = [DN_B3 / (DN_B1 + DN_B2 + DN_B3)] × DN_high res. image

Where:

B = band

The Brovey Transform was developed to visually increase contrast in the low and high ends of an image's histogram (i.e., to provide contrast in shadows, water and high reflectance areas such as urban features). Brovey Transform is good for producing RGB images with a higher degree of contrast in the low and high ends of the image histogram and for producing visually appealing images.

Since the Brovey Transform is intended to produce RGB images, only three bands at a time should be merged from the input multispectral scene, such as bands 3, 2, 1 from a SPOT or Landsat TM image or 4, 3, 2 from a Landsat TM image. The resulting merged image should then be displayed with bands 1, 2, 3 to RGB.
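The Brovey Transform formula above translates directly into array arithmetic. A minimal numpy sketch for illustration only; it assumes the three multispectral bands have already been resampled to the high resolution grid.

    import numpy as np

    def brovey_merge(b1, b2, b3, high_res):
        # b1, b2, b3: three multispectral bands, already resampled to the same
        # rows/columns as high_res; all arrays of DN values as floats.
        total = b1 + b2 + b3
        total = np.where(total == 0, 1, total)     # guard against division by zero
        new_b1 = (b1 / total) * high_res
        new_b2 = (b2 / total) * high_res
        new_b3 = (b3 / total) * high_res
        return new_b1, new_b2, new_b3              # display as 1, 2, 3 -> RGB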


Resolution Merge

1. Click the Image Analysis dropdown arrow, point to Spatial Enhancement, and click Resolution Merge.
2. Click the High Resolution Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
3. Click the Multi-Spectral Image dropdown arrow, and click a file, or navigate to the directory where the file is stored.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Using Resolution Merge

Use Resolution Merge to integrate imagery of different spatial resolutions (pixel size).


The following images display the Resolution Merge function:

High Resolution Image

Multi-Spectral Image

Resolution Merge

6  Using Radiometric Enhancement

IN THIS CHAPTER

• LUT (Lookup Table) Stretch
• Histogram Equalization
• Histogram Matching
• Brightness Inversion

Radiometric enhancement deals with the individual values of the pixels in an image. It differs from Spatial Enhancement, which takes into account the values of neighboring pixels.

Radiometric Enhancement consists of functions to enhance your image by using the values of individual pixels within each band. Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust 1989).
LUT Stretch
LUT Stretch creates an output image that contains the data values as modified by a lookup table. The output is 3 bands.

Contrast stretch

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table.

Contrast stretching involves taking a narrow input range and stretching the output brightness values for those same pixels over a wider range. This process is done in Layer Properties in Image Analysis for ArcGIS.

Linear and nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data.

Linear contrast stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display.

In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).

Nonlinear contrast stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges.

Piecewise linear contrast stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables you to create a number of straight line segments that can simulate a curve. You can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast.

A piecewise linear contrast stretch normally follows two rules:

1. The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications adjust in relation to any changes to maintain the data value range.

2. The data values specified can go only in an upward, increasing direction.

The contrast value for each range represents a percentage of the available output range that particular range occupies. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.


Contrast stretch on the display

Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum
range of the display device. You can then edit and save the contrast
stretch values and lookup tables as part of the raster data image file.
These values are loaded into the view as the default display values
the next time the image is displayed.

The statistics in the image file contain the mean, standard deviation,
and other statistics on each band of data. The mean and standard
deviation are used to determine the range of data file values to be
translated into brightness values or new data file values. You can
specify the number of standard deviations from the mean that are to
be used in the contrast stretch. Usually the data file values that are
two standard deviations above and below the mean are used. If the
data has a normal distribution, then this range represents
approximately 95 percent of the data.

The mean and standard deviation are used instead of the minimum
and maximum data file values because the minimum and maximum
data file values are usually not representative of most of the data. A
notable exception occurs when the feature being sought is in
shadow. The shadow pixels are usually at the low extreme of the
data file values, outside the range of two standard deviations from
the mean.

Varying the contrast stretch

There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. By manipulating the lookup tables as in the following illustration, the maximum contrast in the features of an image can be brought out.

This figure shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others. This is also a good example of a piecewise linear contrast stretch, which is created by adding breakpoints to the histogram.
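A standard-deviation based linear stretch of the kind described above can be sketched in a few lines of numpy. This is an illustration only; in Image Analysis for ArcGIS the stretch is applied through Layer Properties rather than by code.

    import numpy as np

    def linear_stretch(band, n_std=2.0):
        # Stretch the range mean +/- n_std standard deviations to 0..255.
        band = band.astype(float)
        low = band.mean() - n_std * band.std()
        high = band.mean() + n_std * band.std()
        stretched = (band - low) / (high - low) * 255.0
        return np.clip(stretched, 0, 255).astype(np.uint8)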



Apply LUT Stretch

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click LUT Stretch.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored. Set the output type to TIFF.
4. Click OK.

LUT Stretch Class

LUT Stretch Class provides a means of producing an output image that has the stretch built into the pixel values to use with packages that have no stretching capabilities.


Histogram Equalization
Histogram Equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails.

Histogram Equalization can also separate pixels into distinct groups if there are few output values over a wide range. This can have the visual effect of a crude classification.

Original Histogram (peak and tail labeled)

After Equalization (pixels at the tails are grouped, so contrast there is lost; pixels at the peak are spread apart, so contrast there is gained)

To perform a Histogram Equalization, the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins, which are simply numbered sets of pixels. The pixels are then given new values, based upon the bins to which they are assigned.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

A = T / N

Where:

N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin

The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible. Consider the following:

There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A


Original example histogram (number of pixels plotted against data file values 0 to 9, with the level A = 24 marked)

To assign pixels to bins, the following equation is used:

B_i = \operatorname{int}\left[ \frac{\left( \sum_{k=1}^{i-1} H_k \right) + \frac{H_i}{2}}{A} \right]

Where:

A = equalized number of pixels per bin (see above)
H_i = the number of pixels with the value i (the histogram)
int = integer function (truncating real numbers to integer)
B_i = bin number for pixels with value i

Source: Modified from Gonzalez and Wintz 1977

The 10 bins are rescaled to the range 0 to M. In this example, M = 9, because the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image looks like the following illustration:

Output histogram after equalization (numbers inside the bars are the input data file values; the x-axis shows output data file values 0 to 9)

Effect on contrast

By comparing the original histogram of the example data with the one above, you can see that the enhanced image gains contrast in the peaks of the original histogram. For example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data values at the tails of the original histogram are grouped together. Input values 0 through 2 all have the output value of 0. So, contrast among the tail pixels, which usually make up the darkest and brightest regions of the input image, is lost.


The resulting histogram is not exactly flat, since the pixels can
rarely be grouped together into bins with an equal number of pixels.
Sets of pixels with the same value are never split up to form equal
bins.
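The bin-assignment formula above can be implemented directly. The numpy sketch below is an illustration only; it equalizes a single band into N bins and rescales the bin numbers to 0..M as in the example, and the helper name is hypothetical.

    import numpy as np

    def histogram_equalize(band, n_bins=10, out_max=9):
        values, counts = np.unique(band, return_counts=True)   # H_i for each value i
        A = band.size / float(n_bins)                          # pixels per bin, A = T / N
        cumulative = np.cumsum(counts) - counts                # sum of H_k for k < i
        bins = ((cumulative + counts / 2.0) / A).astype(int)   # B_i = int[(sum + H_i/2) / A]
        bins = np.clip(bins, 0, n_bins - 1)
        rescaled = (bins * out_max) // (n_bins - 1)            # rescale bins to 0..M
        lookup = dict(zip(values.tolist(), rescaled.tolist()))
        return np.vectorize(lookup.get)(band)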



Performing Histogram Equalization

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click Histogram Equalization.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Type or click the arrows to enter the Number of Bins.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Histogram Equalization

Perform Histogram Equalization when you need to redistribute pixels to approximate a flat histogram.

The Histogram Equalization process works by redistributing pixel values so that there are approximately the same number of pixels with each value within a range.

Histogram Equalization can also separate pixels into distinct groups if there are few output values over a wide range. This process can have the effect of a crude classification.


Histogram Matching
Histogram Matching is the process of determining a lookup table that converts the histogram of one image so that it resembles the histogram of another. Histogram Matching is useful for matching data of the same or adjacent scenes that were collected on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

To achieve good results with Histogram Matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area.

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated here.

Source histogram (a), mapped through the lookup table (b), approximates model histogram (c).
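Outside the tool, the same idea (deriving a lookup table that maps one image's cumulative histogram onto another's) can be sketched with numpy. This is a generic illustration, not the dialog's internal code.

    import numpy as np

    def histogram_match(source, reference):
        s_values, s_counts = np.unique(source.ravel(), return_counts=True)
        r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size      # cumulative histograms
        r_cdf = np.cumsum(r_counts) / reference.size
        # Lookup table: for each source value, the reference value whose
        # cumulative frequency is closest.
        lookup = np.interp(s_cdf, r_cdf, r_values)
        indices = np.searchsorted(s_values, source.ravel())
        return lookup[indices].reshape(source.shape)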


Performing Histogram Matching

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click Histogram Match.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the Match Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Histogram Matching

Perform Histogram Matching when using matching data of the same or adjacent scenes that were gathered on different days and have differences due to the angle of the sun or atmospheric effects.

Histogram Matching mathematically determines a lookup table that will convert the histogram of one image to resemble the histogram of another, and is particularly useful for mosaicking images or change detection.


Brightness Inversion
The Brightness Inversion functions produce images that have the
opposite contrast of the original image. Dark detail becomes light,
and light detail becomes dark. This can also be used to invert a
negative image that has been scanned to produce a positive image.

Inverse is useful for emphasizing detail that would otherwise be lost


in the darkness of the low DN pixels. This function applies the
following algorithm:

DN_out = 1.0 if 0.0 < DN_in < 0.1

DN_out = 0.1 / DN_in if 0.1 < DN_in < 1

An image before Brightness Inversion

The same image after Brightness Inversion
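The inverse algorithm above assumes DN values scaled to the range 0..1. A small numpy sketch, for illustration only:

    import numpy as np

    def brightness_inverse(dn):
        # dn: data file values scaled to the range 0..1.
        dn = np.asarray(dn, dtype=float)
        # 1.0 where dn < 0.1, otherwise 0.1 / dn (guarded against division by zero).
        return np.where(dn < 0.1, 1.0, 0.1 / np.maximum(dn, 1e-12))

    # One common linear reversal (an assumption, named here only for comparison
    # with the nonlinear inverse above) would simply be 1.0 - dn.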



Applying Brightness Inversion

1. Click the Image Analysis dropdown arrow, point to Radiometric Enhancement, and click Brightness Inversion.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored.
4. Click OK.

Brightness Inversion

This function allows both linear and nonlinear reversal of the image intensity range. Images can be produced that have the opposite contrast of the original image. Dark detail becomes light, and light becomes dark.


7  Applying Spectral Enhancement

IN THIS CHAPTER

• RGB to IHS
• IHS to RGB
• Vegetative Indices
• Color IR to Natural Color

Spectral Enhancement enhances images by transforming the values of each pixel on a multiband basis. The techniques in this chapter all require more than one band of data. They can be used to:

• extract new bands of data that are more interpretable to the eye
• apply mathematical transforms and algorithms
• display a wider variety of information in the three available color guns (R, G, B)

You can use the features of Spectral Enhancement to study such patterns as might occur with deforestation or crop rotation and to see images in a more natural state or view images in different ways, such as changing the bands in an image from red, green, and blue to intensity, hue, and saturation.
RGB to IHS
The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R,G,B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R,G,B space.

However, it is possible to define an alternate color space that uses intensity (I), hue (H), and saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension. In the following image, 0 to 255 is the selected range; it could be defined as any data range. However, hue must vary from 0 to 360 to define the entire sphere (Buchanan 1979).

The variance of intensity and hue in RGB to IHS

The algorithm used in the Image Analysis for ArcGIS RGB to IHS transform is (Conrac 1980):

R = (M - r) / (M - m)
G = (M - g) / (M - m)
B = (M - b) / (M - m)


Where:

R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b

At least one of the R, G, or B values is 0, corresponding to the color with the largest value, and at least one of the R, G, or B values is 1, corresponding to the color with the least value.

The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0
If I ≤ 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

Where:

R, G, B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
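The intensity and saturation equations above match the common HLS color model, so Python's standard colorsys module can be used to experiment with the transform. This is a closely related standard-library implementation rather than the exact Conrac algorithm: colorsys returns hue, lightness, and saturation each scaled 0 to 1, and its hue origin follows the usual HLS convention rather than the description given here.

    import colorsys

    # r, g, b scaled to 0..1 (hypothetical pixel values).
    r, g, b = 0.6, 0.4, 0.2
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    intensity = l                       # (M + m) / 2, as in the equation above
    hue_degrees = h * 360.0             # rescale hue to the 0..360 range
    saturation = s
    print(intensity, hue_degrees, saturation)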



RGB to IHS

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click RGB to IHS.
2. Click the Input Image dropdown arrow, and click the image you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored.
4. Click OK.

RGB to IHS

Using RGB to IHS applies an algorithm that transforms red, green, and blue (RGB) values to the intensity, hue, and saturation (IHS) values.


IHS to RGB
IHS to RGB is intended as a complement to the standard RGB to IHS transform. The values for hue (H), a circular dimension, are 0 to 360. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both occupy only a part of the 0 to 1 range. In this model, a min-max stretch is applied to either intensity (I), saturation (S), or both, so that they more fully utilize the 0 to 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. As the parameter Hue is not modified, and it largely defines what we perceive as color, the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. You could define I and/or S as other parameters, set Hue at 0 to 360, and then transform to RGB space. This is a method of color coding other data sets.

In another approach (Daily 1983), H and I are replaced by low- and high-frequency radar imagery. You can also replace I with radar intensity before the IHS to RGB transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

The algorithm used by Image Analysis for ArcGIS for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I (S)
m = 2 I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m) (H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M


Converting IHS to RGB

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click IHS to RGB.
2. Click the Input Image dropdown arrow, and click the image you want to use, or navigate to the directory where it is stored.
3. Navigate to the directory where the Output Image should be stored.
4. Click OK.

IHS to RGB

Using IHS to RGB applies an algorithm that transforms intensity, hue, and saturation (IHS) values to red, green, and blue (RGB) values.


Vegetative Indices
Mapping vegetation is a common application of remotely sensed imagery. To help you find vegetation quickly and easily, Image Analysis for ArcGIS includes a Vegetative Indices feature.

Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

Applications

• Indices are used extensively in mineral exploration and vegetation analysis to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images. Black and white images of individual indices, or a color combination of three ratios, may be generated.
• Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.

Index examples

The following are examples of indices that have been preprogrammed in Image Analysis for ArcGIS:

• IR/R (infrared/red)
• SQRT (IR/R)
• Vegetation Index = IR - R
• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)
• Transformed NDVI (TNDVI) = SQRT [(IR - R) / (IR + R) + 0.5]

Source: Modified from Sabins 1987; Jensen 1996; Tucker 1979


The following table shows the infrared (IR) and red (R) band for
some common sensors (Tucker 1979, Jensen 1996):

Sensor        IR Band   R Band
Landsat MSS   4         2
SPOT XS       3         2
Landsat TM    4         3
NOAA AVHRR    2         1

Image algebra

Image algebra is a general term used to describe operations that


combine the pixels of two or more raster layers in mathematical
combinations. For example, the calculation:

(infrared band) - (red band)

DNir - DNred

yields a simple, yet very useful, measure of the presence of


vegetation.

Band ratios are also commonly used. These are derived from the
absorption spectra of the material of interest. The numerator is a
baseline of background absorption and the denominator is an
absorption peak.
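As an illustration of the band arithmetic described above (a NumPy sketch, not the Image Analysis for ArcGIS implementation), NDVI can be computed from an infrared band and a red band as follows; the array names are placeholders:

```python
import numpy as np

def ndvi(ir, red):
    """Normalized Difference Vegetation Index: (IR - R) / (IR + R)."""
    ir = ir.astype("float64")
    red = red.astype("float64")
    denom = ir + red
    # Return 0 where both bands are 0 to avoid dividing by zero.
    return np.divide(ir - red, denom, out=np.zeros_like(denom), where=denom != 0)

# Example with Landsat TM band numbering (IR = band 4, R = band 3):
# vi = ndvi(bands[4], bands[3])
```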



Using Vegetative Indices

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click Vegetative Indices.
2. Navigate to the directory where the image is stored.
3. Click the dropdown list to add the Near Infrared Band number.
4. Click the dropdown list to add the Visible Red Band number.
5. Choose the Desired Index from the dropdown list.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.


Color IR to Natural Color
This function lets you simulate natural colors from other types of
data so that the output image is a fair approximation of the natural
colors from an infrared image. If you are not familiar with the bands
designated to reflect infrared and natural color for a particular type
of imagery, Image Analysis for ArcGIS can help you apply either
scheme through the Color IR to Natural Color choice in Spectral
Enhancement. You cannot apply this feature to images having only
one band of data (i.e. grayscale images).

When an image is displayed in natural color, the bands are arranged


to approximate the most natural representation of the image in the
real world. Vegetation becomes green in color, and water becomes
dark in color. To create natural color, certain bands of data need to
be assigned to red, green, and blue. You will need to assign bands
to color depending on how many bands are in the image you want
to change to natural color.

After using Color IR to Natural Color, the image appears in natural colors.

The infrared image of a golf course.



Using Color IR to Natural Color

1. Click the Image Analysis dropdown arrow, point to Spectral Enhancement, and click Color IR to Natural Color.
2. Click the dropdown arrow or navigate to the directory to select the Input Image.
3. Click the Near Infrared Band dropdown arrow, and select the appropriate band.
4. Click the Visible Red Band dropdown arrow, and select the appropriate band.
5. Click the Visible Green Band dropdown arrow, and select the appropriate band.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.


8 Performing GIS Analysis
IN THIS CHAPTER
• Performing Neighborhood Analysis
• Performing Thematic Change
• Using Recode
• Using Summarize Areas

A GIS is a unique system designed to input, store, retrieve, manipulate, and analyze layers of geographic data to produce interpretable information. A GIS should also be able to create reports and maps (Marble 1990). The GIS database may include computer images, hardcopy maps, statistical data, or any other data that is needed in a study. Although the term GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff, a training program, budgets, marketing, hardware, data, and software (Walker and Miller 1990). GIS technology can be used in almost any geography-related discipline, from Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information—the answers to real-life questions—questions such as:

• How should political districts be redrawn in a growing metropolitan area?
• How can we monitor the influence of global climatic changes on the earth’s resources?
• What areas should be protected to ensure the survival of endangered species?

This chapter is about using the different analysis functions in Image Analysis for ArcGIS to better use the images, data, maps, and so on located in a GIS. You can use GIS technology in any geography-related discipline. The tools contained in GIS Analysis will help you turn geographic data into useful information.
Information versus data
Information, as opposed to data, is independently meaningful. It is
relevant to a particular problem or question:

• “The land cover at coordinate N875250, E757261 has a data file value 8,” is data.
• “Land cover with a value of 8 is on slopes too steep for development,” is information.

You can input data into a GIS and output information. The
information you wish to derive determines the type of data that
must be input. For example, if you are looking for a suitable refuge
for bald eagles, zip code data is probably not needed, while land
cover data may be useful.

For this reason, the first step in any GIS project is usually an
assessment of the scope and goals of the study. Once the project is
defined, you can begin the process of building the database.
Although software and data are commercially available, a custom
database must be created for the particular project and study area.
The database must be designed to meet the needs and objectives of
the organization.

A major step in successful GIS implementation is analysis. In the


analysis phase, data layers are combined and manipulated in order
to create new layers and to extract meaningful information from
them.

Once the database (layers and attribute data) is assembled, the


layers can be analyzed and new information extracted. Some
information can be extracted simply by looking at the layers and
visually comparing them to other layers. However, new
information can be retrieved by combining and comparing layers
using the following procedures.



Neighborhood Analysis
Neighborhood Analysis applies to any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, and so on.

With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as scanning, but is not to be confused with data capture via a digital camera. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes 1990).

Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels is determined by a scanning window, which is defined by you. These operations are known as focal operations.

Neighborhood analysis creates a new thematic layer. There are several types of analysis that can be performed upon each window of pixels, as described below:

• Density—outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.
• Diversity—outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).
• Majority—outputs the class value that represents the majority of the class values in the window. This option operates like a low-frequency filter to clean up a salt and pepper layer.
• Maximum—outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.
• Minimum—outputs the least or smallest class value within the window. This can be used to emphasize classes with the low class values.
• Minority—outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.
• Rank—outputs the number of pixels in the scan window whose value is less than the center pixel.
• Sum—totals the class values. In a file where class values are ranked, totaling enables you to further rank pixels based on their proximity to high-ranking pixels.
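As an illustration of these focal operations (a sketch using SciPy's generic_filter, not the Image Analysis for ArcGIS implementation), the Majority and Diversity functions for a 3 x 3 scanning window can be written as:

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority(window):
    # Most common class value in the scanning window.
    values, counts = np.unique(window, return_counts=True)
    return values[np.argmax(counts)]

def diversity(window):
    # Number of distinct class values in the scanning window.
    return len(np.unique(window))

classes = np.array([[1, 1, 2],
                    [1, 3, 2],
                    [3, 3, 2]])

maj = generic_filter(classes, majority, size=3, mode="nearest")
div = generic_filter(classes, diversity, size=3, mode="nearest")
```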



Performing Neighborhood Analysis

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Neighborhood.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the Neighborhood Function dropdown arrow, and choose the function you want to use.
4. Click the Neighborhood Shape dropdown arrow, and choose the shape you want to use.
5. Click the Matrix size dropdown arrow, and choose the size you want to use.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.

Neighborhood Analysis

Neighborhood Analysis applies to any analysis function that takes neighboring pixels into account. This function creates a new thematic layer.

The Neighborhood Analysis process is similar to convolution filtering. Every pixel is spatially analyzed according to the pixels surrounding it.

The different types of analysis that can be performed on each window of pixels are listed in the dropdown menu for Neighborhood Function.


Thematic Change
Thematic Change identifies areas that undergo change over time. Typically, you use Thematic Change after you perform categorizations
of your data. By using the categorizations of Before Theme and After Theme in the dialog, you can quantify both the amount and the type
of changes that take place over time. Image Analysis for ArcGIS produces a thematic image that has all the possible combinations of
change.

Thematic Change creates an output image from two input raster files. The class values of the two input files are organized into a matrix.
The first input file specifies the columns of the matrix, and the second one specifies the rows. Zero is not treated specially in any way. The
number of classes in the output file is the product of the number of classes from the two input files.
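A minimal sketch of that matrix combination follows (illustration only; the class-numbering convention used here is an assumption, not necessarily the one Image Analysis for ArcGIS uses):

```python
import numpy as np

def thematic_change(before, after, n_after_classes):
    """Give every (before, after) class pair its own output class value.

    With class values starting at 1, the output ranges from 1 to
    (number of before classes) * (number of after classes).
    """
    return (before - 1) * n_after_classes + after

before = np.array([[1, 1], [2, 3]])
after = np.array([[1, 2], [2, 3]])
combined = thematic_change(before, after, n_after_classes=3)
# combined -> [[1, 2], [5, 9]]
```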

Both before and after images prior to performing Thematic Change.



Performing Thematic Change

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Thematic Change.
2. Click the Before Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the After Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.

Thematic Change

Use Thematic Change to identify areas that have undergone change over time.


The following illustration is an example of the previous image after undergoing Thematic Change. In the Table of contents you see the
combination of classes from the Before and After images.

Note the areas of classification that show the changes between 1973 and 1994.



Recode
By using Recode, class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes of an existing file. Recoding is used to:

• reduce the number of classes
• combine classes
• assign different class values to existing classes
• write class name and color changes to the Attribute table

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that outputs good, better, and best areas, it may be beneficial to recode the input layers so all of the best classes have the highest class values.

You can also use Recode to save any changes made to the color scheme or class names of a classified image to the Attribute Table for later use. Just saving an image will not record these changes.

Recoding an image involves two major steps. First, you must group the discrete classes together into common groups. Secondly, you perform the actual recoding process, which rewrites the Attribute table using the information from your grouping process.

The three recoding methods described below are more accurately described as three methods of grouping the classified image to get it ready for the recode process. These methods are recoding by class name, recoding by symbology, and recoding a previously grouped image. The following exercises will take you through each of the three recoding methods.

Thematic image of South Carolina soil types before Recode by class name.

South Carolina soils after the recode. Notice the changed and grouped class names in the Table of contents.
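Under the hood, the recode step amounts to remapping class values through a lookup table. A minimal NumPy sketch (illustration only; the class values and groupings are made up):

```python
import numpy as np

classes = np.array([[1, 2, 5], [3, 5, 7]])        # original class values
recode_map = {1: 1, 2: 1, 3: 2, 5: 2, 7: 3}       # grouped into three new classes

lut = np.zeros(classes.max() + 1, dtype="uint8")  # lookup table indexed by old value
for old, new in recode_map.items():
    lut[old] = new

recoded = lut[classes]                            # rewrite every pixel's class value
# recoded -> [[1, 1, 2], [2, 2, 3]]
```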



Performing Recode by class name

You will group the classified image in the ArcMap Table of contents, and then perform the recode.

1. Click Add Data to open a classified image.
2. Identify the classes you want to group together in the Table of contents.
3. Triple-click each class you wish to rename, and rename it.
4. Click the color of each class, and change it to the color scheme you want to use.
5. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
6. Navigate to the directory where the Output Image should be stored.
7. Click OK.


Performing Recode by symbology

This process shows you how to recode by symbology. You will see similarities with recoding by class name, but you should be aware of some different procedures. You will notice that steps 1-3 and 11-13 are the same as the previous Recode exercise.

1. Click Add Data to open a classified image.
2. Identify the classes you want to group together.
3. Click the colors of the classes to change to your desired color scheme.
4. Double-click the image name in the Table of contents.
5. Click the Symbology tab in the Layer Properties dialog.
6. Press the Ctrl key while clicking on the first set of classes you want to group together.
7. Right-click on the selected classes, and click Group Values.
8. Click in the Label column and type the new name for the class.
9. Follow steps 6-8 to group the rest of your classes.
10. Click Apply and OK.
11. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
12. Navigate to the directory where the Output Image should be stored.
13. Click OK.


Recoding with a previously grouped image

You may need to open an image that has been classified and grouped in another program such as ERDAS IMAGINE®. These images may have more than one valid attribute column that can be used to perform the recode.

1. Click Add Data and add the grouped image.
2. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Recode.
3. Click the Map Pixel Value through Field dropdown arrow, and select the attribute you want to use to recode the image.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.


The following images depict soil data that was previously grouped
in ERDAS IMAGINE.

Previously grouped before Recode

After Recode in Image Analysis for ArcGIS



Summarize Areas
Image Analysis for ArcGIS also provides Summarize Areas as a
method of assessing change in thematic data. Once you complete
the Thematic Change analysis, you can use Summarize Areas to
limit the analysis to include only a portion of the entire image.

Summarize Areas works by using a feature theme or an Image


Analysis for ArcGIS theme to compile information about that area
in tabular format. Summarize Areas produces cross-tabulation
statistics that compare class value areas between two thematic files,
including number of points in common, number of acres (or
hectares or square miles) in common, and percentages.

Summarize Areas might be used to assist a regional planning office


in preparing a study of urban change for certain counties within the
jurisdiction or even within one county or city. A file containing the
area to be inventoried can be summarized by a file for the same
geographical area containing the land cover categories. The
summary report could indicate the amount of urban change in a
particular area of a larger thematic change.
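The cross-tabulation itself can be sketched with pandas (an illustration only; the zone and class arrays and the per-pixel area are placeholders):

```python
import numpy as np
import pandas as pd

zones = np.array([[1, 1, 2], [1, 2, 2], [2, 2, 2]])             # zone theme
classes = np.array([[10, 10, 20], [10, 20, 20], [20, 20, 10]])  # class theme

pixel_area_ha = 0.09  # e.g., 30 m x 30 m pixels expressed in hectares

table = pd.crosstab(zones.ravel(), classes.ravel(),
                    rownames=["zone"], colnames=["class"])   # pixels in common
area = table * pixel_area_ha                                 # hectares in common
percent = table.div(table.sum(axis=1), axis=0) * 100         # percent of each zone
```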



Using Summarize Areas

1. Click the Image Analysis dropdown arrow, point to GIS Analysis, and click Summarize Areas.
2. Click the Zone theme dropdown arrow, and click the theme you want to use, or navigate to the directory where it is stored.
3. Click the dropdown arrow for the Zone Attribute, and click the condition for each value of the attribute.
4. Click the dropdown arrow for the Class Theme, and click the class theme, or navigate to the directory where it is stored.
5. Click OK.

Summarize Areas

Use Summarize Areas to produce cross-tabulation statistics for comparison of class value areas between two thematic files, or one thematic and one shapefile, including number of points in common, number of acres (or hectares or square miles) in common, and percentages.


9 Using Utilities
IN THIS CHAPTER
• Image Difference
• Layer Stack

The core of Image Analysis for ArcGIS is the ability it gives you to interpret and manipulate your data. The Utilities part of Image Analysis for ArcGIS provides a number of features for you to use in this capacity. The different procedures offered in the Utilities menu allow you to alter your images in order to see differences, set new parameters, create images, or subset images. The information about Subset Image, Create New Image, and Reproject Image can be found in chapter 4 “Using Data Preparation” since the options are also accessible through that menu.

This chapter explains the following functions and shows you how to use them:

• Image Difference
• Layer Stack
Image Difference
The Image Difference function gives you the ability to conveniently perform change detection on aspects of an area by comparing two images of the same place from different times.

The Image Difference tool is particularly useful in plotting environmental changes such as urban sprawl and deforestation or the destruction caused by a wildfire or tree disease. It is also a handy tool to use in determining crop rotation or the best new place to develop a neighborhood.

Image Difference is used for change analysis with imagery that depicts the same area at different points in time. With Image Difference, you can highlight specific areas of change in whatever amount you choose. Two images are generated from this image-to-image comparison; one is a grayscale continuous image, and the other is a five-class thematic image.

The first image generated from Image Difference is the Difference image. The Difference image is a grayscale image composed of single band continuous data. This image is created by subtracting the Before Image from the After Image. Since Image Difference calculates change in brightness values over time, the Difference image simply reflects that change using a grayscale image. Brighter areas have increased in reflectance. This may mean clearing of forested areas. Dark areas have decreased in reflectance. This may mean an area has become more vegetated, or the area was dry and is now wet.

The second image is the Highlight Difference image. This thematic image divides the changes into five categories. The five categories are Decreased, Some Decrease, Unchanged, Some Increase, and Increased.

The Decreased class represents areas of negative (darker) change greater than the threshold for change and is red in color. The Increased class shows areas of positive (brighter) change greater than the threshold and is green in color. Other areas of positive and negative change less than the thresholds and areas of no change are transparent. For your application, you may edit the colors to select any color desired for your study.

Algorithm

Subtract the two images on a pixel by pixel basis:

1. Subtract the Before Image from the After Image.
2. Convert the decrease percentage to a value.
3. Convert the increase percentage to a value.
4. If the difference is less than the decrease value, then assign the pixel to Class 1 (Decreased).
5. If the difference is greater than the increase value, then assign the pixel to Class 5 (Increased).
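A sketch of those five steps with NumPy follows (illustration only; the threshold handling and class numbering follow the description above, but the exact behavior of the Image Analysis for ArcGIS tool may differ):

```python
import numpy as np

def image_difference(before, after, decrease_value, increase_value):
    """Return the Difference image and a five-class Highlight Difference image.

    Classes: 1 Decreased, 2 Some Decrease, 3 Unchanged, 4 Some Increase, 5 Increased.
    """
    diff = after.astype("float64") - before.astype("float64")

    highlight = np.full(diff.shape, 3, dtype="uint8")    # Unchanged by default
    highlight[diff < 0] = 2                              # Some Decrease
    highlight[diff > 0] = 4                              # Some Increase
    highlight[diff <= -abs(decrease_value)] = 1          # Decreased
    highlight[diff >= abs(increase_value)] = 5           # Increased
    return diff, highlight
```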



Using Image Difference

1. Click the Image Analysis dropdown arrow, point to Utilities, and click Image Difference.
2. Click the Before Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the After Theme dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Choose As Percent or As Value for the Highlight Changes.
5. Enter the Increases and Decreases values.
6. Click the color bar to choose the color you want to represent the increases and decreases.
7. Type the Image Difference file name, or navigate to the directory where it should be stored.
8. Type the Highlight Change file name, or navigate to the directory where it should be stored.
9. Click OK.

The Image Difference Output file showing highlight change.



Layer Stack
Layer Stack lets you stack layers from different images in any order
to form a single theme. It is useful for combining different types of
imagery for analysis such as multispectral and radar data. For
example, if you stack three single-band grayscale images, you
finish with one three band image. In general, you will find that
stacking images is most useful for combining grayscale single-band
images into multiband images.

Stacking works based on the order in the Table of contents. Before


you initiate stacking, you should first ensure that the images are in
the order that you want. This order represents the order in which the
bands will be arranged in the output file.

There are several applications of this feature such as change visualization, combining and viewing multiple resolution data, and viewing disparate data types. Layer Stack is particularly useful if you have received a multispectral dataset with each of the individual bands in separate files. You can also use Layer Stack to analyze datasets taken during different seasons when different sets show different stages for vegetation in an area.

An example of a multispectral dataset with individual bands in separate files would be Landsat TM data. Layer Stack quickly consolidates the bands of data into one file.

A stacked image with bands 1 and 3 taken from the Amazon LBAND image and the rest of the layers taken from Amazon TM.

The image on this page is an example of a Layer Stack output. The


files used are from the Amazon, and the red and blue bands were
chosen from one image, while the green band was chosen from the
other.
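Outside the dialog, the stacking operation amounts to concatenating single-band arrays along a new band axis. A minimal NumPy sketch (array names and sizes are placeholders; the list order sets the band order, mirroring the Table of contents order described above):

```python
import numpy as np

# Three single-band grayscale images of the same size (placeholders).
red = np.zeros((512, 512), dtype="uint8")
green = np.zeros((512, 512), dtype="uint8")
blue = np.zeros((512, 512), dtype="uint8")

# Stack into one (bands, rows, cols) image.
stacked = np.stack([red, green, blue], axis=0)
print(stacked.shape)  # (3, 512, 512)
```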



Using Layer Stack

1. Click the Image Analysis dropdown arrow, point to Utilities, and click Layer Stack.
2. Select a currently open layer, and click Add to include it in the layer stack.
3. Click the browse button to navigate to a file containing layers you want to add to the layer stack.
4. Select any files you want to remove from the layer stack and click Remove.
5. Navigate to the directory where the Output Image should be stored.
6. Click OK.


10 Understanding Classification
IN THIS CHAPTER
• The Classification Process
• Classification Tips
• Unsupervised Classification
• Supervised Classification
• Classification Decision Rules

Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those criteria.

Depending on the type of information you want to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that look different to the computer. An example of a classified image is a land cover map that shows vegetation, bare land, pasture, urban, and so on.

This chapter covers the two ways to classify pixels into different categories:

• Unsupervised Classification
• Supervised Classification

The differences in the two are basically as their titles suggest. Supervised Classification is more closely controlled by you than Unsupervised Classification.
The Classification Process
Pattern recognition

Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. However, in Supervised Classification, the statistics are derived from the training samples, and not the entire image. After the statistics are derived, pixels are sorted based on mathematical criteria. The classification process breaks down into two parts: training and classifying (using a decision rule).

Training

First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised training

Supervised training is closely controlled by the analyst. In this process, you select pixels that represent patterns or land cover features that you recognize, or that you can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification.

By identifying patterns, you can instruct the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that you originally identified.

Unsupervised training

Unsupervised training is more computer-automated. It enables you to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst’s responsibility, after classification, to attach meaning to the resulting classes (Jensen 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures

The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures contain both parametric class definitions (mean and covariance) and non-parametric class definitions (parallelepiped boundaries that are the per band minima and maxima).

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.


Decision rule

After the signatures are defined, the pixels of the image are sorted
into classes based on the signatures by use of a classification
decision rule. The decision rule is a mathematical algorithm that,
using data contained in the signature, performs the actual sorting of
pixels into distinct class values.

Parametric decision rule

A parametric decision rule is trained by the parametric signatures.


These signatures are defined by the mean vector and covariance
matrix for the data file values of the pixels in the signatures. When
a parametric decision rule is used, every pixel is assigned to a class
since the parametric decision space is continuous (Kloer 1994).
There are three parametric decision rules offered:

• Minimum distance
• Mahalanobis distance
• Maximum likelihood
Nonparametric decision rule

When a nonparametric rule is set, the pixel is tested against all of


the signatures with nonparametric definitions. This rule results in
the following conditions:

• If the nonparametric test results in one unique class, the pixel


is assigned to that class.
• If the nonparametric test results in zero classes (for example,
the pixel lies outside all the nonparametric decision
boundaries), then the pixel is assigned to a class called
unclassified.

Parallelepiped is the only nonparametric decision rule in Image


Analysis for ArcGIS.



Classification tips
Classification scheme

Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen 1983). The proper classification scheme includes classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally-developed schemes are listed below:

• Anderson, J. R., et al. 1976. “A Land Use and Land Cover Classification System for Use with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies.

It is recommended that the classification process is begun by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.

Supervised versus Unsupervised Classification

In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. You must also have some way of recognizing pixels that represent the classes that you want to extract.

Supervised classification is usually appropriate when you want to identify relatively few classes, when you have selected training sites that can be verified with ground truth data, or when you can identify distinct, homogeneous regions that represent each class. In Image Analysis for ArcGIS, if you need to correctly classify small areas with actual representation, you should choose Supervised Classification.

On the other hand, if you want the classes to be determined by spectral distinctions that are inherent in the data so that you can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables you to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

If you have areas that have a value of zero, and you do not classify them as NoData (see chapter 3 “Applying data tools”), they will be assigned to the first class when performing Unsupervised Classification. You can assign a specific class by taking a training sample when performing a Supervised Classification.


Classifying enhanced data

For many specialized applications, classifying data that have been


merged, spectrally merged or enhanced—with principal
components, image algebra, or other transformations—can produce
very specific and meaningful results. However, without
understanding the data and the enhancements used, it is
recommended that only the original, remotely-sensed data be
classified.

Limiting dimensions

Although Image Analysis for ArcGIS allows an unlimited number


of layers of data to be used for one classification, it is usually wise
to reduce the dimensionality of the data as much as possible. Often,
certain layers of data are redundant or extraneous to the task at
hand. Unnecessary data take up valuable disk space and cause the computer system to perform more arduous calculations, which slows down processing.



Unsupervised Classification/Categorize Image
Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are created by the unsupervised training algorithm. Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space.

If you need to classify small areas with small representation, you should use Supervised Classification. Due to the skip factor of 8 used by the Unsupervised Classification signature collection, small areas such as wetlands, small urban areas, or grasses can be wrongly classified on rural data sets.

Clusters

Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster.

The Iterative Self-Organizing Data Analysis Technique (ISODATA) (Tou and Gonzalez 1974) clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.

ISODATA clustering

ISODATA is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to the way in which it locates clusters with minimum user input.

The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. The process begins with a specified number of arbitrary cluster means or the means of existing signatures, and then it processes repetitively, so that those means shift to the means of the clusters in the data.

Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the one-pass clustering algorithms.

Initial cluster means

On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain 1973).

The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates (µ1-σ1, µ2-σ2, µ3-σ3, ... µn-σn) and the coordinates (µ1+σ1, µ2+σ2, µ3+σ3, ... µn+σn). Such a vector in two dimensions is illustrated below. The initial cluster means are evenly distributed between (µA-σA, µB-σB) and (µA+σA, µB+σB).


Five arbitrary ISODATA cluster means in two-dimensional spectral space, evenly distributed along the diagonal between (µA-σA, µB-σB) and (µA+σA, µB+σB); the axes are Band A and Band B data file values.

Pixel analysis

Pixels are analyzed beginning with the upper left corner of the image and going left to right, block by block.

The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output image file with a thematic raster layer as a result of the clustering. At the end of each iteration, an image file exists that shows the assignments of the pixels to the clusters.

Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm always gives results similar to those in this illustration.

Pixels in Band A/Band B feature space assigned to clusters 1 through 5 after the first ISODATA iteration.

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated—each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

Percentage unchanged

After each iteration, the normalized percentage of pixels whose


assignments are unchanged since the last iteration is displayed on
the dialog. When this number reaches T (the convergence
threshold), the program terminates.

It is possible for the percentage of unchanged pixels to never


converge or reach T (the convergence threshold). Since you are not
able to control the convergence threshold, it may be beneficial to
monitor the percentage, or specify a reasonable maximum number
of iterations, M, so that the program does not run indefinitely.
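For orientation, here is a bare-bones sketch of the assign-and-recalculate loop described above, written with NumPy. It is an illustration only: it omits the skip factor used during signature collection and the cluster splitting and merging of full ISODATA:

```python
import numpy as np

def isodata_like(pixels, n_clusters, max_iterations=20, threshold=0.95):
    """pixels: (n_pixels, n_bands) array. Returns a cluster label per pixel."""
    lo = pixels.mean(axis=0) - pixels.std(axis=0)
    hi = pixels.mean(axis=0) + pixels.std(axis=0)
    # Initial means evenly spaced along the (mean - std) to (mean + std) vector.
    means = np.linspace(lo, hi, n_clusters)
    labels = np.full(len(pixels), -1)

    for _ in range(max_iterations):
        # Assign each pixel to the cluster with the minimum spectral distance.
        dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dist.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)   # fraction of unchanged pixels
        labels = new_labels
        # Recalculate each cluster mean from the pixels assigned to it.
        for k in range(n_clusters):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if unchanged >= threshold:   # convergence threshold T
            break
    return labels
```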



Performing Unsupervised Classification/Categorize Image

1. Click the Image Analysis dropdown arrow, point to Classification, and click Unsupervised/Categorize.
2. Click the Input Image dropdown arrow, or navigate to the directory where it is stored.
3. Type or click the arrows to enter the Desired Number of Classes.
4. Navigate to the directory where the Output Image should be stored.
5. Click OK.


Supervised Classification
Supervised classification requires a priori (already known)
information about the data, such as:

• What type of classes need to be extracted? Soil type? Land


use? Vegetation?
• What classes are most likely to be present in the data? That is,
which types of land cover, soil, or vegetation (or whatever) are
represented by the data?

In supervised training, you rely on your own pattern recognition


skills and a priori knowledge of the data to help the system
determine the statistical criteria (signatures) for data classification.

To select reliable samples, you should know some information—


either spatial or spectral—about the pixels that you want to classify.

The location of a specific characteristic, such as a land cover type,


may be known through ground truthing. Ground truthing refers to
the acquisition of knowledge about the study area from field work,
analysis of aerial photography, personal experience, and so on.
Ground truth data are considered to be the most accurate (true) data
available about the area of study. It should be collected at the same
time as the remotely sensed data, so that the data correspond as
much as possible (Star and Estes 1990). However, some ground
data may not be very accurate due to a number of errors and
inaccuracies.



Performing Supervised Classification

1. Click the Image Analysis dropdown arrow, point to Classification, and click Supervised.
2. Click the Input Image dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
3. Click the Signature Features dropdown arrow, and click the file you want to use, or navigate to the directory where it is stored.
4. Click the Class Name Field dropdown arrow, and click the field you want to use.
5. Choose All Features or Selected Features to use during classification.
6. Click the Classification Rule dropdown arrow, and click the rule you want to use.
7. Navigate to the directory where the Output Image should be stored.
8. Click OK.


Classification decision rules
Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. Image Analysis for ArcGIS enables you to classify the data parametrically with statistical representation.

Parametric rules

Image Analysis for ArcGIS provides these commonly-used decision rules for parametric signatures:

• minimum distance
• Mahalanobis distance
• maximum likelihood (with Bayesian variation)

Nonparametric rule

• Parallelepiped

Minimum distance

The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.

The candidate pixel and the signature means µ1, µ2, and µ3 plotted against Band A and Band B data file values; the lines show the spectral distance from the candidate pixel to each mean.

In this illustration, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean.

The equation for classifying by spectral distance is based on the equation for Euclidean distance:

SDxyc = √( Σ (µci - Xxyi)² ), summed over bands i = 1 to n


Where:

n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis 1978

When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.

Maximum likelihood

Note: The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the minimum distance decision rule.

The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

The Equation for the Maximum Likelihood/Bayesian Classifier is as follows:

D = ln(ac) - [0.5 ln(|Covc|)] - [0.5 (X - Mc)ᵀ (Covc)⁻¹ (X - Mc)]

Where:

D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori data)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc⁻¹ = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

Mahalanobis distance

Note: The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied lead to similarly varied classes, and vice versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).


The equation for the Mahalanobis distance classifier is as follows:

D = (X - Mc)ᵀ (Covc)⁻¹ (X - Mc)

Where:

D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the signature of class c
Covc = the covariance matrix of the pixels in the
signature of class c
Covc⁻¹ = inverse of Covc
T = transposition function

The pixel is assigned to the class, c, for which D is the lowest.
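Both distance rules reduce to a few lines of array math. The sketch below (an illustration with NumPy, not the Image Analysis for ArcGIS implementation) assigns a single pixel using the minimum distance and Mahalanobis distance rules; the signature statistics are placeholders you would derive from training samples:

```python
import numpy as np

def minimum_distance(x, means):
    """x: (n_bands,) pixel vector; means: (n_classes, n_bands). Returns class index."""
    sd = np.sqrt(((means - x) ** 2).sum(axis=1))    # Euclidean spectral distance
    return int(sd.argmin())

def mahalanobis(x, means, covariances):
    """covariances: one (n_bands, n_bands) covariance matrix per class."""
    d = []
    for mc, cov in zip(means, covariances):
        diff = x - mc
        d.append(diff @ np.linalg.inv(cov) @ diff)  # (X - Mc)T Covc^-1 (X - Mc)
    return int(np.argmin(d))
```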

Parallelepiped

Image Analysis for ArcGIS provides the parallelepiped decision


rule as its nonparametric decision rule. In the parallelepiped
decision rule, the data file values of the candidate pixel are
compared to upper and lower limits which are the minimum and
maximum data file values of each band in the signature.

There are high and low limits for every signature in every band.
When a pixel’s data file values are between the limits for every
band in a signature, then the pixel is assigned to that signature’s
class. In the case of a pixel falling into more than one class, then the
first class is the one assigned. When a pixel falls into no class
boundaries, it is labeled unclassified.



11 Using Conversion
IN THIS CHAPTER
• Conversion
• Convert Raster to Features
• Convert Features to Raster

The Conversion feature gives you the ability to convert shape files to raster images and raster images to shape files. This tool is very helpful when you need to isolate or highlight certain parts of a raster image or when you have a shape file and you need to view it as a raster image. Possible applications include viewing deforestation patterns, urban sprawl, and shore erosion.

The Image Info tool that is discussed in chapter 3 “Applying data tools” is also an important part of Raster/Feature Conversion. The ability to assign certain pixel values as NoData is very helpful when converting images.
Conversion
Always be aware of how the raster dataset will represent the
features when converting points, polygons, or polylines to a raster,
and vice versa. There is a trade-off when working with a cell-based system: points do not have area, but cells do. Even though a point is represented by a single cell, that cell has area. The smaller the cell size, the smaller that area, and thus the closer the representation of the point feature. Points converted to cells have an accuracy of plus or minus half the cell size. For many users,
having all data types in the same format and being able to use them
interchangeably in the same language is more important than a loss
of accuracy.

Linear data is represented by a polyline that is also composed of cells, so it has area even though, by definition, lines do not. Because of this, the accuracy of the representation varies according to the scale of the data and the resolution of the raster dataset.

With polygonal or areal data, problems can occur from trying to


represent smooth polygon boundaries with square cells. The
accuracy of the representation is dependent on the scale of the data
and the size of the cell. The finer the cell resolution and the greater
the number of cells that represent small areas, the more accurate the
representation.



Converting raster to features
During a conversion of a raster representing polygonal features to
polygonal features, the polygons are built from groups of
contiguous cells having the same cell values. Arcs are created from
cell borders in the raster. Contiguous cells with the same value are
grouped together to form polygons. Cells that are NoData in the
input raster will not become features in the output polygon feature.

When a raster that represents linear features is converted to a


polyline feature, a polyline is created from each cell in the input
raster, passing through the center of each cell. Cells that are NoData
in the input raster will not become features in the output polyline
feature.

When you convert a raster representing point features to point


features, a point will be created in the output for each cell of the
input raster. Each point will be positioned at the center of the cell it
represents. NoData cells will not be transformed into points.
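Outside Image Analysis for ArcGIS, the same idea (building features from groups of contiguous cells of equal value and skipping NoData) can be sketched with the open-source rasterio library; the file name is a placeholder:

```python
import rasterio
from rasterio.features import shapes

with rasterio.open("landcover.img") as src:
    band = src.read(1)
    # Keep only cells that are not NoData.
    mask = band != src.nodata if src.nodata is not None else None
    polygons = [
        {"class_value": value, "geometry": geometry}
        for geometry, value in shapes(band, mask=mask, transform=src.transform)
    ]
```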

When you choose Convert Raster to Features, the dialog gives you the choice of a Field to specify from the image in the conversion. You are also given the choice of an Output geometry type, so you can choose whether the feature will be a point, a polygon, or a polyline according to the Field and data you’re using. To avoid jagged or sharp edges in the new feature file, you can check Generalize Lines to smooth out the edges. Note that regardless of which Field you pick, the category will not be populated in the Attribute Table after conversion.

A raster image before conversion

After conversion to a shapefile using Value as the Field


Performing raster to feature conversion

1. Click the Image Analysis dropdown arrow, point to Convert, and click Convert Raster to Features.
2. Click the Input raster dropdown arrow, or navigate to the directory where the raster image is stored.
3. Click the Field dropdown arrow and choose a Field to use.
4. Click the Output geometry type dropdown arrow, and choose point, polygon, or polyline.
5. Check or uncheck Generalize Lines according to your preference.
6. Navigate to the directory where the Output feature should be stored.
7. Click OK.


Converting features to raster
Any polygons, polylines, or points from any source file can be
converted to a raster. You can convert features using both string and
numeric fields. Each unique string in a string field is assigned a
unique value to the output raster. A field is added to the table of the
output raster to hold the original string value from the features.

When you convert points, cells are given the value of the points
found within each cell. Cells that do not contain a point are given
the value of NoData. You are given the option of specifying the cell
size you want to use in the Feature to Raster dialog. You should
choose the cell size based on several different factors: the resolution
of the input data, the output resolution needed to perform your
analysis, and the need to maintain a rapid processing speed.

Polylines are features that, at certain resolutions, only appear as


lines representing streams or roads. When you convert polylines,
cells are given the value of the line that intersects each cell. Cells
that are not intersected by a line are given the value NoData. If more
than one line is found in a cell, the cell is given the value of the first
line encountered while processing. Using a smaller cell size during
conversion will alleviate this.

Polygons are used for buildings, forests, fields, and many other
features that are best represented by a series of connected cells.
When you convert polygons, the cells are given the value of the
polygon found at the center of each cell.
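Going the other way, a comparable feature-to-raster burn can be sketched with rasterio's rasterize helper (illustration only; the geometry, cell size, and extent are made up):

```python
from rasterio.features import rasterize
from rasterio.transform import from_origin

# GeoJSON-like geometry and burn value; coordinates are invented for illustration.
field = {"type": "Polygon",
         "coordinates": [[(500300, 4199700), (500300, 4196000),
                          (503000, 4196000), (503000, 4199700),
                          (500300, 4199700)]]}

cell_size = 30.0
transform = from_origin(500000.0, 4200000.0, cell_size, cell_size)

grid = rasterize([(field, 1)], out_shape=(400, 400), transform=transform,
                 fill=0, dtype="uint8")   # 0 plays the role of NoData here
```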



Performing Feature to Raster conversion

1. Click the Image Analysis dropdown arrow, point to Convert, and click Convert Feature to Raster.
2. Click the Input features dropdown arrow, or navigate to the directory where the file is stored.
3. Click the Field dropdown arrow, and select the Field option you want to use.
4. Type the Output cell size.
5. Navigate to the directory where the Output Raster should be stored.
6. Click OK.


12 Applying Geocorrection Tools
IN THIS CHAPTER
• Geocorrection Properties
• Spot Properties
• Polynomial Properties
• Rubber Sheeting
• Camera Properties
• IKONOS Properties
• Landsat Properties
• QuickBird Properties
• RPC Properties

The tools and methods described in this chapter concern the process of geometrically correcting the distortions in images caused by sensors and the curvature of the earth. Even images of seemingly flat areas are distorted, but these images can be corrected, or rectified, so they can be represented on a planar surface, conform to other images, and have the integrity of a map.

The terms geocorrection and rectification are used synonymously when discussing geometric correction. Rectification is the process of transforming the data from one grid system into another grid system using a geometric transformation. Since the pixels of a new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a DEM of the study area. It is based on collinearity equations, which can be derived by using 3D Ground Control Points (GCPs). In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.
When to rectify
Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:

• comparing pixels scene to scene in applications, such as change detection or thermal inertia mapping (day and night comparison)
• developing GIS databases for GIS modeling
• identifying training samples according to map coordinates prior to classification
• creating accurate scaled photomaps
• overlaying an image with vector data, such as ArcInfo
• comparing images that are originally at different scales
• extracting accurate distance and area measurements
• mosaicking images
• performing any other analyses requiring precise geographic locations

Before rectifying the data, you must determine the appropriate coordinate system for the database. To select the optimum map projection and coordinate system, the primary use for the database must be considered. If you are doing a government project, the projection may be predetermined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps and conformal or equal area projections for presentation maps. Before selecting a map projection, consider the following:

• How large or small an area is mapped? Different projections are intended for different size areas.
• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.
• What is the extent of the study area? Circular, north-south, east-west, and oblique areas may all require different projection systems (ESRI 1992).

Disadvantages of rectification

During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image. An unrectified image is more spectrally correct than a rectified image.

Georeferencing

Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image to image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.


Georeferencing only

Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning or digitizing produces images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:

• the map coordinate of the upper left corner of the image
• the cell size (the area represented by each pixel)

This information is usually the same for each layer of an image file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Ground control points

GCPs are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of coordinates:

• source coordinates — usually data file coordinates in the image being rectified
• reference coordinates — the coordinates of the map or reference image to which the source image is being registered

The term map coordinates is sometimes used loosely to apply to reference coordinates and rectified coordinates. These coordinates are not limited to map coordinates. For example, in image to image registration, map coordinates are not necessary.

Entering GCPs

Accurate GCPs are essential for an accurate rectification. From the GCPs, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs are, the more reliable the rectification is. GCPs for large scale imagery might include the intersection of two roads, airport runways, utility corridors, towers, or buildings. For small scale imagery, larger features such as urban areas or geologic features may be used. Landmarks that can vary (edges of lakes, other water bodies, vegetation, and so on) should not be used.

The source and reference coordinates of the GCPs can be entered in the following ways:

• They may be known a priori, and entered at the keyboard.
• Use the mouse to select a pixel from an image in the view. With both the source and destination views open, enter source coordinates and reference coordinates for image to image registration.
• Use a digitizing tablet to register an image to a hardcopy map.

Tolerance of RMS error (RMSE)

Acceptable RMS error is determined by the end use of the database, the type of data being used, and the accuracy of the GCPs and ancillary data being used. For example, GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have an accuracy of about 20 m.

It is important to remember that RMS error is reported in pixels. Therefore, if you are rectifying Landsat TM data and want the rectification to be accurate to within 30 meters, the RMS error should not exceed 1.00. Acceptable accuracy depends on the image area and the particular project.
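
Because the reported RMS error is in pixels, a ground tolerance has to be divided by the cell size before the two can be compared. A small illustrative check using the Landsat TM numbers above (the function and variable names are ours, not the software's):

    # Convert a ground tolerance (meters) to a pixel RMS threshold and test a value.
    def rms_threshold_pixels(ground_tolerance_m, cell_size_m):
        return ground_tolerance_m / cell_size_m

    cell_size = 30.0      # Landsat TM pixel size in meters
    tolerance = 30.0      # desired ground accuracy in meters
    threshold = rms_threshold_pixels(tolerance, cell_size)   # 1.0 pixel, as stated above

    reported_rms = 0.87   # hypothetical RMS error from a rectification
    print("acceptable" if reported_rms <= threshold else "re-examine the GCPs")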



Classification

Some analysts recommend classification before rectification since the classification is then based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial to rectify the data first, especially when using GPS data for the GCPs. Since this data is very accurate, the classification may be more accurate if the new coordinates help to locate better training samples.

Thematic files

Nearest neighbor is the only appropriate resampling method for thematic files, which may be a drawback in some applications. The available resampling methods are discussed in detail later in Geocorrection property dialogs.



Geocorrection property dialogs

The individual Geocorrection Tools have their own dialog that appears whenever you choose a model type and click the Geocorrection Properties button. Some of the tool dialogs offer certain option tabs pertaining to that specific tool, but they all have several tabs in common. Every Geocorrection Tool dialog has a General tab and a Links tab, and all but Polynomial Properties and Rubber Sheeting Properties have an Elevation tab.

The General tab has a Link Coloring section, a Displayed Units section, and a Link Snapping section. The Link Coloring section lets you set a Threshold and select or change link colors. The Displayed Units section gives you the Horizontal and Vertical Units if they are known. Often one is known and the other is not, so the dialog may show Meters for Vertical Units and Unknown for Horizontal Units. Displayed Units does not have any effect on the original data in latitude/longitude format, and the image in the view will not show the changes either.

The Link Snapping section is only activated when you have a vector layer (shapefile) active in ArcMap. The purpose of this portion of the tool is to allow you to snap an edge, end, or vertex to the edge, end, or vertex of another layer. The vector layer you want to snap to another layer is defined in the Link Snapping box. You then check either Vertex, Edge, or End, depending on what you want to snap to in the other layer. The choice is completely up to you.

1. Click the arrows to set the Threshold, and click the Within and Over Threshold boxes to change the link colors.
2. The Displayed Units area shows the measurement of the Vertical Units.
3. If you have shapefiles (a vector layer) active in ArcMap, check Vertex, Boundary, or End Point. Checking one will activate Snap Tolerance and Snap Tolerance Units.



Links tab

The Links tab (this display is also called a CellArray) shows information about the links in your image, including reference points and RMS Error. If you have already added links to your image, they will be listed under this tab. The program is interactive between the image and the Links tab, so when you add links in an image or between two images, information is automatically updated in the CellArray. You can edit and delete information displayed in the CellArray as well. For example, if you want to experiment with coordinates other than the ones you've been given, you can plug your own coordinates into the CellArray on the Links tab.

Before adding links or editing the links table, you need to select the Coordinate System in which you want to store the link coordinates.

1. Right-click in the view area and click Properties at the bottom of the popup menu. The Data Frame Properties dialog displays.
2. Click the Coordinate System tab.
3. If your link coordinates are predefined, click the appropriate Predefined coordinate system. If you want to use the coordinate system from a specific layer, select that layer from the list of Layers.

There are a few additional checks you need to make before proceeding.

1. Make sure that the correct layer is displayed in the Layers box on the Image Analysis toolbar.
2. Choose your Model Type from the dropdown list.
3. Click the Add Links button to set your new links.

You can proof and edit the coordinates of the links as you enter them.

1. Click the Geocorrection Properties button.
2. Click the Links tab. The coordinates will be displayed in the cell array on this tab.
3. Click inside a cell and edit the contents.
4. When you are finished, you can click Export Links to Shapefile and save the new shapefile.



Elevation tab

The Elevation tab is in all Geocorrection Model Properties except for Polynomial and Rubber Sheeting. When you click the Elevation tab in any of the Geocorrection Model Types, the default selection will allow you to choose a file to use as an Elevation Source, because most of the time you will have an Elevation File to use as your elevation source. If you do not have an Elevation File, you should use a Constant elevation value as the elevation source. Choosing Constant changes the options in the Elevation Source section to allow you to specify the Elevation Value and Elevation Units. The Constant value you should use is the average ground elevation for the entire scene. The following examples use the Landsat Properties dialog, but the Elevation tab is the same on all of the Model Types that allow you to specify elevation information.

Elevation Source File



Elevation Source Constant

After the Elevation Source section, you can check the box if you want to Account for Earth's curvature as part of the Elevation.

The following steps take you through the Elevation tab. The first set of instructions pertains to using File as your Elevation Source. The second set uses Constant as the Elevation Source.

1. Choose File.
2. Type the file name or navigate to the directory where the Elevation File is stored.
3. Click the dropdown arrow and choose Feet or Meters.
4. Check if you want to Account for the Earth's curvature.
5. Click Apply to set the Elevation Source. Click OK if you are finished with the dialog.



These are the steps to take when using a Constant value as the
elevation source.

1. Choose Constant.
2. Click the arrows to enter the Elevation Value.
3. Click the dropdown arrow, and choose either Feet or Meters.
4. Check if you want to Account for the Earth’s curvature.
5. Click Apply to set the Elevation Source. Click OK if you are
finished with the dialog.



SPOT

The first SPOT satellite, developed by the French Centre National d'Etudes Spatiales (CNES), was launched in early 1986. The second SPOT satellite was launched in 1990, and the third was launched in 1993. The sensors operate in two modes, multispectral and panchromatic. SPOT is commonly referred to as a pushbroom scanner, which means that all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner. SPOT pushes 3000/6000 sensors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.

The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the detectors, but off to an angle. Using this off-nadir capability, one area on the earth can be viewed as often as every 3 days.

This off-nadir viewing can be programmed from the ground control station, and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in collecting stereo data from which elevation data can be extracted.

The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen 1996).

Panchromatic

SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).

XS

SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and contains 3 bands (Jensen 1996).

SPOT XS Bands and Wavelengths

Band              Wavelength (microns)   Comments
1, Green          0.50 to 0.59 µm        Corresponds to the green reflectance of healthy vegetation.
2, Red            0.61 to 0.68 µm        Useful for discriminating between plant species. Also useful for soil boundary and geological boundary delineations.
3, Reflective IR  0.79 to 0.89 µm        Especially responsive to the amount of vegetation biomass present in a scene. Useful for crop identification; emphasizes soil/crop and land/water contrasts.



SPOT 4

The SPOT 4 satellite was launched in 1998. SPOT 4 carries High Resolution Visible Infrared (HR VIR) instruments that obtain information in the visible and near-infrared spectral bands.

The SPOT 4 satellite orbits the earth at 822 km above the Equator. The SPOT 4 satellite has two sensors on board: a multispectral sensor and a panchromatic sensor. The multispectral scanner has a pixel size of 20 × 20 m and a swath width of 60 km. The panchromatic scanner has a pixel size of 10 × 10 m and a swath width of 60 km.

Figure: SPOT Panchromatic versus SPOT XS (Panchromatic: 1 band, 1 pixel = 10 m × 10 m; XS: 3 bands, 1 pixel = 20 m × 20 m; radiometric resolution 0-255).

SPOT 4 Bands and Wavelengths

Band            Wavelength
1, Green        0.50 to 0.59 µm
2, Red          0.61 to 0.68 µm
3, (near-IR)    0.78 to 0.89 µm
4, (mid-IR)     1.58 to 1.75 µm
Panchromatic    0.61 to 0.68 µm

Stereoscopic pairs

Two observations can be made by the panchromatic scanner on successive days, so that the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. This type of imagery can be used to produce a single image, or topographic and planimetric maps (Jensen 1996).

Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances between objects (Star and Estes 1990).


The Spot Properties dialog

In addition to the General, Links, and Elevation tabs, the Spot Properties dialog also contains a Parameters tab. Most of the Geocorrection Properties dialogs do contain a Parameters tab, but each one offers different options.

1. Click the Model Types dropdown arrow, and choose Spot.
2. Click the Geocorrection Properties button.
3. Click the Parameters tab on the Spot Properties dialog.
4. Choose the Sensor type.
5. Click the arrows to enter the Number of Iterations.
6. Click the arrows to enter the Incidence Angle.
7. Click the arrows to enter the Background Value, and the layer.
8. Click OK.



Polynomial transformation

Polynomial equations are used to convert source file coordinates to rectified map coordinates. Depending upon the distortion in the imagery, complex polynomial equations may be required to express the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order of transformation is the order of the polynomial used in the transformation. Image Analysis for ArcGIS allows 1st through nth order transformations. Usually, 1st order or 2nd order transformations are used.

Transformation matrix

A transformation matrix is computed from the GCPs. The matrix consists of coefficients that are used in polynomial equations to convert the coordinates. The size of the matrix depends upon the order of transformation. The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. It is not always possible to derive coefficients that produce no error. For example, in the figure below, GCPs are plotted on a graph and compared to the curve that is expressed by a polynomial.

Figure: GCPs plotted against the polynomial curve (source X coordinate versus reference X coordinate).

Every GCP influences the coefficients, even if there isn't a perfect fit of each GCP to the polynomial that the coefficients represent. The distance between the GCP reference coordinate and the curve is called RMS error, which is discussed later in this chapter.

Linear transformations

A 1st order transformation is a linear transformation. It can change:

• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation

1st order transformations can be used to project raw imagery to a planar map projection, to convert a planar map projection to another planar map projection, and to rectify relatively small image areas. You can perform simple linear transformations to an image displayed in a view or to the transformation matrix itself. Linear transformations may be required before collecting GCPs on the displayed image. You can reorient skewed Landsat TM data, rotate scanned quad sheets according to the angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be rectified to the desired map projection. When doing this type of rectification, it is not advisable to increase the order of transformation if at first a high RMS error occurs. Examine other factors first, such as the GCP source and distribution, and look for systematic errors.

The transformation matrix for a 1st-order transformation consists of six coefficients—three for each coordinate (X and Y):



    a0  a1  a2
    b0  b1  b2

Coefficients are used in a 1st order polynomial as follows:

    x0 = a0 + a1 x + a2 y
    y0 = b0 + b1 x + b2 y

Where:

x and y are source coordinates (input)
x0 and y0 are rectified coordinates (output)
the coefficients of the transformation matrix are as above

Nonlinear transformations

Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the earth's curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps, and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

The transformation matrix for a transformation of order t contains this number of coefficients:

    2 \sum_{i=0}^{t+1} i

It is multiplied by two for the two sets of coefficients — one set for X and one for Y. An easier way to arrive at the same number is:

    (t + 1) × (t + 2)

Clearly, the size of the transformation matrix increases with the order of the transformation.

High order polynomials

The polynomial equations for a t order transformation take this form:

    x_0 = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k · x^{i-j} · y^{j}

    y_0 = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k · x^{i-j} · y^{j}



Where:

t is the order of the polynomial
a and b are coefficients
the subscript k in a and b is determined by:

    k = (i · i + i) / 2 + j

Effects of order

The computation and output of a higher polynomial equation are more complex than that of a lower order polynomial equation. Therefore, higher order polynomials are used to perform more complicated image rectifications. To understand the effects of different orders of transformation in image rectification, it is helpful to see the output of various orders of polynomials.

The following example uses only one coordinate (X) instead of the two (X,Y) that are used in the polynomials for rectification. This enables you to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image. Because only the X coordinate is used in these examples, the number of GCPs used is less than the number required to actually perform the different orders of transformation.

Coefficients like those presented in this example would generally be calculated by the least squares regression method. Suppose GCPs are entered with these X coordinates:

Source X Coordinate (input)   Reference X Coordinate (output)
1                             17
2                             9
3                             1

These GCPs allow a 1st order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

    x_r = (25) + (-8) x_i

Where:

x_r = the reference X coordinate
x_i = the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed below.



Figure: the 1st-order equation x_r = (25) + (-8) x_i graphed as a straight line (source X coordinate versus reference X coordinate).

However, what if the second GCP were changed as follows?

Source X Coordinate (input)   Reference X Coordinate (output)
1                             17
2                             7
3                             1

These points are plotted against each other below:

Figure: the three changed GCPs plotted (source X coordinate versus reference X coordinate).

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial like the one above. In this case, a 2nd-order polynomial equation expresses these points:

    x_r = (31) + (-16) x_i + (2) x_i^2

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn below:

Figure: the 2nd-order curve x_r = (31) + (-16) x_i + (2) x_i^2 passing through all three GCPs.



What if one more GCP were added to the list?

Source X Coordinate (input)   Reference X Coordinate (output)
1                             17
2                             7
3                             1
4                             5

Figure: the 2nd-order curve with the fourth GCP (4, 5) plotted off the curve.

As illustrated in the graph above, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The equation and graph below could then result:

    x_r = (25) + (-5) x_i + (-4) x_i^2 + (1) x_i^3

Figure: the 3rd-order curve passing through all four GCPs (source X coordinate versus reference X coordinate).

This figure illustrates a 3rd-order transformation. However, this equation may be unnecessarily complex. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a 3rd-order transformation probably would be too high, because the output pixels in the X direction would be arranged in a different order than the input pixels in the X direction.

Source X Coordinate (input)   Reference X Coordinate (output)
1                             x_0(1) = 17
2                             x_0(2) = 7
3                             x_0(3) = 1
4                             x_0(4) = 5



    x_0(1) > x_0(2) > x_0(4) > x_0(3)

    17 > 7 > 5 > 1

Figure: input image X coordinates 1, 2, 3, 4 map to output image X coordinates 17, 7, 1, 5, so along the output X axis the pixels fall in the order 3, 4, 2, 1.

In this case a higher order of transformation would probably not produce the desired results.

Minimum number of GCPs

Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation. The minimum number of points required to perform a transformation of order t equals:

    ((t + 1)(t + 2)) / 2

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.
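
The behavior in this example can be reproduced numerically. The sketch below uses numpy's least squares polynomial fit (an illustration, not the software's own solver) to recover the 1st-, 2nd-, and 3rd-order coefficients quoted above from the sample GCPs.

    import numpy as np

    # First table: source X 1, 2, 3 -> reference X 17, 9, 1 (used for the 1st-order fit).
    src1, ref1 = np.array([1.0, 2.0, 3.0]), np.array([17.0, 9.0, 1.0])
    # Changed and extended table: 1, 2, 3, 4 -> 17, 7, 1, 5 (2nd- and 3rd-order fits).
    src2, ref2 = np.array([1.0, 2.0, 3.0, 4.0]), np.array([17.0, 7.0, 1.0, 5.0])

    cases = [(1, src1, ref1), (2, src2[:3], ref2[:3]), (3, src2, ref2)]
    for order, x, y in cases:
        coeffs = np.polyfit(x, y, order)[::-1]   # reorder so the constant term comes first
        print(f"order {order}: {np.round(coeffs, 2)}")
    # order 1: 25, -8
    # order 2: 31, -16, 2
    # order 3: 25, -5, -4, 1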



For 1st through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the following table:

Number of GCPs

Order of Transformation   Minimum GCPs Required
1                         3
2                         6
3                         10
4                         15
5                         21
6                         28
7                         36
8                         45
9                         55
10                        66
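
The table follows directly from the formula above; a quick illustrative check:

    # Minimum GCPs for a transformation of order t: (t + 1)(t + 2) / 2.
    def minimum_gcps(t):
        return (t + 1) * (t + 2) // 2

    print([minimum_gcps(t) for t in range(1, 11)])
    # [3, 6, 10, 15, 21, 28, 36, 45, 55, 66] -- matches the table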



The Polynomial Properties dialog

Polynomial Properties has a Parameters tab in addition to the General and Links tabs. It does not need an Elevation tab. The General tab and the Links tab are the same as the ones featured at the beginning of this chapter.

The Parameters tab contains a CellArray that shows the transformation coefficients table. These are filled in when the model is solved.

1. Click the Parameters tab.
2. Using the arrows, enter the Polynomial Order.



Rubber Sheeting

Triangle-based finite element analysis

The finite element analysis is a powerful tool for solving complicated computation problems which can be approached by small, simpler pieces. It has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles. Each triangle has three control points as its vertices. Then, the polynomial transformation can be used to establish mathematical relationships between source and destination systems for each triangle. Because the transformation passes exactly through each control point and is not uniform across the image, finite element analysis is also called Rubber Sheeting. It can also be called triangle-based rectification, because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.

This triangle-based technique should be used when other rectification methods, such as Polynomial Transformation and photogrammetric modeling, cannot produce acceptable results.

Triangulation

To perform the triangle-based rectification, it is necessary to triangulate the control points into a mesh of triangles. Watson (1994) summarily listed four kinds of triangulation, including the arbitrary, optimal, Greedy, and Delaunay triangulation. Of the four kinds, the Delaunay triangulation is most widely used and is adopted because of the smaller angle variations of the resulting triangles.

The Delaunay triangulation can be constructed by the empty circumcircle criterion: the circumcircle formed from the three points of any triangle does not have any other point inside it. The triangles defined this way are the most equiangular possible.

Triangle-based rectification

Once the triangle mesh has been generated and the spatial order of the control points is available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-based method is appealing because it breaks the entire region into smaller subsets. If the geometric problem of the entire region is very complicated, the geometry of each subset can be much simpler and modeled through simple transformation.

For each triangle, the polynomials can be used as the general transformation form between source and destination systems.

Linear transformation

The easiest and fastest transformation is the linear transformation with the first order polynomials:

    x_o = a_0 + a_1 x + a_2 y
    y_o = b_0 + b_1 x + b_2 y

There is no need for extra information because there are three conditions in each triangle and three unknown coefficients for each polynomial.
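
A minimal sketch of the triangle-by-triangle idea, using a Delaunay triangulation from scipy and solving the six first-order coefficients of each triangle from its three control points. The control point values are made up, and this is an illustration of the approach rather than the tool's own implementation.

    import numpy as np
    from scipy.spatial import Delaunay

    # Control points: source (x, y) and corresponding destination (x', y') coordinates.
    src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
    dst = np.array([[1, 1], [11, 2], [2, 12], [12, 13], [6, 7]], dtype=float)

    tri = Delaunay(src)   # built with the empty circumcircle criterion

    def triangle_coefficients(vertex_ids):
        """Solve xo = a0 + a1*x + a2*y and yo = b0 + b1*x + b2*y for one triangle."""
        A = np.c_[np.ones(3), src[vertex_ids]]      # three conditions, three unknowns each
        a = np.linalg.solve(A, dst[vertex_ids, 0])
        b = np.linalg.solve(A, dst[vertex_ids, 1])
        return a, b

    for simplex in tri.simplices:
        a, b = triangle_coefficients(simplex)
        # a and b reproduce the destination coordinates exactly at the three vertices.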



Nonlinear transformation

Even though the linear transformation is easy and fast, it has one disadvantage: the transitions between triangles are not always smooth. This phenomenon is obvious when shaded relief or contour lines are derived from a DEM that is generated by the linear rubber sheeting. It is caused by incorporating the slope change of the control data at the triangle edges and vertices. In order to distribute the slope change smoothly across triangles, the nonlinear transformation with polynomial order larger than one is used by considering the gradient information.

The fifth order, or quintic, polynomial transformation is chosen here as the nonlinear rubber sheeting technique in this example. It is a smooth function. The transformation function and its first order partial derivative are continuous. It is not difficult to construct (Akima 1978). The formulation is simply as follows:

    x_0 = \sum_{i=0}^{5} \sum_{j=0}^{i} a_k · x^{i-j} · y^{j}

    y_0 = \sum_{i=0}^{5} \sum_{j=0}^{i} b_k · x^{i-j} · y^{j}

The 5th-order has 21 coefficients for each polynomial to be determined. For solving these unknowns, 21 conditions should be available. For each vertex of the triangle, one point value is given, and two 1st-order and three 2nd-order partial derivatives can be easily derived by establishing a 2nd-order polynomial using vertices in the neighborhood of the vertex. Then the total 18 conditions are ready to be used. Three more conditions can be obtained by assuming that the normal partial derivative on each edge of the triangle is a cubic polynomial, which means that the sum of the polynomial items beyond the 3rd-order in the normal partial derivative has a value of zero.

Checkpoint analysis

It should be emphasized that the independent checkpoint analysis is critical for determining the accuracy of rubber sheeting modeling. For an exact modeling method like rubber sheeting, the ground control points, which are used in the modeling process, do not have much geometric residual remaining. To evaluate the geometric transformation between source and destination coordinate systems, the accuracy assessment using independent checkpoints is recommended.

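
A short sketch of such a checkpoint evaluation: transform independent checkpoints with the fitted model, then report the root mean square of the residuals (the names and numbers are illustrative).

    import numpy as np

    def checkpoint_rmse(predicted_xy, true_xy):
        """RMSE over independent checkpoints that were not used to build the model."""
        residuals = np.asarray(predicted_xy, float) - np.asarray(true_xy, float)
        return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

    # Hypothetical checkpoints: model-predicted versus surveyed reference coordinates.
    predicted = [[100.2, 200.1], [150.4, 250.3], [199.6, 299.8]]
    surveyed  = [[100.0, 200.0], [150.0, 250.0], [200.0, 300.0]]
    print(round(checkpoint_rmse(predicted, surveyed), 3))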


Camera Properties

The Camera model is derived by space resection based on collinearity equations, and is used for rectifying any image that uses a camera as its sensor. In addition to the General, Links, and Elevation tabs, Camera Properties has tabs for Orientation, Camera, and Fiducials.

The Orientation feature allows you to choose different rotation angles and perspective center positions for the camera. The Rotation Angle lets you customize the Omega, Phi, and Kappa rotation angles of the image to determine the viewing direction of the camera. If you can fill in all the degrees and meters for the Rotation Angle and the Perspective Center Position, then you do not need the three links you normally would need for the Camera model. If you are going to fill in this information on the Orientation tab, then you will need to make sure you do not check Account for Earth's curvature on the Elevation tab. You can see the areas to fill in on the Orientation tab below.

Camera Properties dialog

Rotation offers the following options when you click the dropdown arrows:

• Unknown — select when the rotation angle is unknown
• Estimated — select when estimating the rotation angle
• Fixed — select when the rotation angle is defined
• Omega — omega rotation angle is roll: around the x-axis of the ground system
• Phi — phi rotation angle is pitch: around the y-axis (after Omega rotation)
• Kappa — kappa rotation angle is yaw: around the z-axis rotated by Omega and Phi

The Perspective Center Position is given in meters and allows you to enter the perspective center for ground coordinates. You can choose from the following options:

• Unknown — select when the ground coordinate is unknown
• Estimated — select when estimating the ground coordinate
• Fixed — select when the ground coordinate is defined
• X — enter the X coordinate of the perspective center
• Y — enter the Y coordinate of the perspective center
• Z — enter the Z coordinate of the perspective center
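
The three angles combine into a single orientation matrix for the camera. A minimal sketch under one common photogrammetric convention (omega about X, then phi about Y, then kappa about Z); the exact convention used by the software is not stated here, so treat this as illustrative.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Compose roll (omega, about X), pitch (phi, about Y), and yaw (kappa, about Z)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return R_kappa @ R_phi @ R_omega   # kappa applied last in this convention

    R = rotation_matrix(np.radians(1.5), np.radians(-0.8), np.radians(92.0))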



The next tab on Camera Properties is also called Camera. This is where you can specify the Camera Name, the Number of Fiducials, the Principal Point, and the Focal Length for the camera that was used to capture your image.

Camera tab on Camera Properties dialog

You can click Load or Save to open or save a file with certain camera information in it.

The last tab on the Camera Properties dialog is the Fiducials tab. Fiducials are used to compute the transformation from data file to image coordinates. Fiducial orientation defines the relationship between the image/photo-coordinate system of a frame and the actual image orientation as it appears within a view. The image/photo-coordinate system is defined by the camera calibration information. The orientation of the image is largely dependent on the way the photograph was scanned during the digitization stage.

The fiducials for your image will be fixed on the frame and visible in the exposure. The Fiducial information you enter on the Camera tab will be displayed in a cell array on the Fiducials tab after you click Apply on the Camera Properties dialog.

In order to select the appropriate fiducial orientation, compare the axis of the photo-coordinate system (defined in the calibration report) with the orientation of the image. Based on the relationship between the photo-coordinate system and the image, the appropriate fiducial orientation can be selected. Do not use over 8 fiducials in an image. The following illustrations demonstrate the fiducial orientation used under the various circumstances.

Fiducial One — places the marker at the left of the image
Fiducial Two — places the marker at the top of the image
Fiducial Three — places the marker at the right of the image
Fiducial Four — places the marker at the bottom of the image

Click to select where to place the fiducial in the viewer.

Selecting the inappropriate fiducial orientation results in large RMS errors during the measurement of fiducial marks for interior orientation and errors during the automatic tie point collection. If initial approximations for exterior orientation have been defined and the corresponding fiducial orientation does not correspond, the automatic tie point collection capability provides inadequate results. Ensure that the appropriate fiducial orientation is used as a function of the image/photo-coordinate system.
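
The data file to image/photo coordinate transformation that the fiducials support is commonly modeled as an affine fit between the measured fiducial positions and their calibrated coordinates. A minimal least-squares sketch with made-up numbers (not from any calibration report); large residuals here are the kind of symptom a wrong fiducial orientation produces.

    import numpy as np

    # Measured fiducial positions in the scanned file (column, row) and their calibrated
    # photo coordinates in millimeters, listed in matching order (hypothetical values).
    pixels = np.array([[120.0, 4020.0], [4015.0, 118.0], [7912.0, 4023.0], [4018.0, 7928.0]])
    photo  = np.array([[-110.0, 0.0], [0.0, 110.0], [110.0, 0.0], [0.0, -110.0]])

    # Affine model per axis: photo = p0*col + p1*row + p2, solved by least squares.
    design = np.c_[pixels, np.ones(len(pixels))]
    params_x, *_ = np.linalg.lstsq(design, photo[:, 0], rcond=None)
    params_y, *_ = np.linalg.lstsq(design, photo[:, 1], rcond=None)

    residual_x = design @ params_x - photo[:, 0]
    residual_y = design @ params_y - photo[:, 1]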



IKONOS, QuickBird, and RPC Properties

IKONOS, QuickBird, and RPC Properties are sometimes referred to together as the Rational Function Models. They are virtually the same except for the files they use. The dialogs for the three in Geocorrection Properties are identical as well. IKONOS files are images captured by the IKONOS satellite. QuickBird files are images captured by the QuickBird satellite. RPC Properties uses NITF data.

It is important that you click the Add Links button before you click the Geocorrection Properties button to open one of these three property dialogs. Once you click the Add Links button and click the Geocorrection Properties button, the dialog will appear. The Parameters tab in IKONOS, QuickBird, and RPC Properties calls for an RPC file and the Elevation Range. Click the Parameters tab, and enter the RPC File before proceeding with anything else.

IKONOS

IKONOS images are produced from the IKONOS satellite, which was launched in September of 1999 by the Athena II rocket.

The resolution of the panchromatic sensor is 1 m. The resolution of the multispectral scanner is 4 m. The swath width is 13 km at nadir. The accuracy without ground control is 12 m horizontally and 10 m vertically; with ground control it is 2 m horizontally and 3 m vertically.

IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m resolution.

IKONOS Bands and Wavelengths

Band            Wavelength (microns)
1, Blue         0.45 to 0.52 µm
2, Green        0.52 to 0.60 µm
3, Red          0.63 to 0.69 µm
4, NIR          0.76 to 0.90 µm
Panchromatic    0.45 to 0.90 µm

The IKONOS Properties dialog gives you the ability to rectify IKONOS images from the satellite. Like the other property dialogs in Geocorrection, IKONOS has General, Links, and Elevation tabs as well as Parameters and Chipping.

IKONOS Properties Parameters tab

The Parameters tab is the same in all three of these Geocorrection models.



The RPC file is generated by the data provider based on the position of the satellite at the time of image capture. The RPCs can be further refined by using ground control points (GCPs). This file should be located in the same directory as the image you intend to use in the Geocorrection process.

On the Parameters tab, there is also a check box for Refinement with Polynomial Order. This is provided so you may apply polynomial corrections to the original rational function model. This option corrects the remaining error and refines the mathematical solution. Check the box to enable the refinement process, then specify the order by clicking the arrows.

The 0-order results in a simple shift to both image X and Y coordinates. The 1st-order is an affine transformation. The 2nd-order results in a second order transformation, and the 3rd-order in a third order transformation. Usually, a 0 or 1st-order is sufficient to reduce error not addressed by the rational function model (RPC file).

After the Parameters tab on the IKONOS Properties dialog, there is the Chipping tab. The Chipping process allows the RPCs of the full, original image to be used for an image chip derived from it. This is made possible by specifying an affine relationship (pixel) between the chip and the full, original image.

IKONOS Properties Chipping tab

The Chipping tab is the same for IKONOS, QuickBird, and RPC Properties.

On the Chipping tab you are given the choice of Scale and Offset or Arbitrary Affine as your chipping parameters. The dialog will change depending on which chipping parameter you choose. Scale and Offset is the simpler of the two. The formulas for calculating the affine using scale and offset are listed on the dialog. X and Y correspond to the pixel coordinates for the full, original image.



The following is an example of the Scale and Offset dialog on the Chipping tab:

IKONOS Chipping tab using Scale and Offset

• Row Offset — This value corresponds to value f, an offset value. In the absence of header data, this value defaults to 0.
• Row Scale — This value corresponds to value e, a scale factor that is also used in rotation. In the absence of header data, this value defaults to 1.
• Column Offset — This value corresponds to value c, an offset value. In the absence of header data, this value defaults to 0.
• Column Scale — This value corresponds to value a, a scale factor that is also used in rotation. In the absence of header data, this value defaults to 1.

The Arbitrary Affine formulas are listed on the dialog when you choose that option. In the formulas, x′ (x prime) and y′ (y prime) correspond to the pixel coordinates in the chip with which you are currently working. Values for the variables above are either obtained from the header data of the chip, or they default to the predetermined values described there. Also under the Chipping tab, you'll find a box for Full Row Count and Full Column Count. For Full Row Count, if the chip header contains the appropriate data, this value is the row count of the full, original image. If the header count is absent, this value corresponds to the row count of the chip. For Full Column Count, if the chip header contains the appropriate data, this value is the column count of the full, original image. If the header count is absent, the value corresponds to the column count of the chip.

The following is an example of the Arbitrary Affine dialog on the Chipping tab:

IKONOS Chipping tab using Arbitrary Affine
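
A minimal sketch of how chip pixel coordinates map back to the full image under these parameters, assuming the standard affine form X = a·x′ + b·y′ + c and Y = d·x′ + e·y′ + f (Scale and Offset being the special case b = d = 0); the variable letters follow the bullets above, and the numbers are made up.

    def chip_to_full(x_prime, y_prime, a=1.0, b=0.0, c=0.0, d=0.0, e=1.0, f=0.0):
        """Map chip pixel coordinates (x', y') to full-image coordinates (X, Y).

        The defaults mirror the fallbacks described above: scale factors of 1 and
        offsets of 0 when the chip header carries no information.
        """
        X = a * x_prime + b * y_prime + c
        Y = d * x_prime + e * y_prime + f
        return X, Y

    # Scale and Offset case: a chip that starts at column 2048, row 1024 of the full image.
    print(chip_to_full(10, 20, a=1.0, c=2048.0, e=1.0, f=1024.0))   # -> (2058.0, 1044.0)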



QuickBird

QuickBird Properties allows you to rectify images captured with the QuickBird satellite. Like IKONOS, QuickBird requires the use of an RPC file to describe the relationship between the image and the earth's surface at the time of image capture.

The QuickBird satellite was launched in October of 2001. Its orbit has an altitude of 450 kilometers, a 93.5 minute orbit time, and a 10:30 A.M. equator crossing time. The inclination is 97.2 degrees sun-synchronous, and the nominal swath width is 16.5 kilometers at nadir. The sensor has both panchromatic and multispectral capabilities. The dynamic range is 11 bits per pixel for both panchromatic and multispectral. The panchromatic bandwidth is 450-900 nanometers. The multispectral bands are as follows:

QuickBird Bands and Wavelengths

Band        Wavelength (microns)
1, Blue     0.45 to 0.52 µm
2, Green    0.52 to 0.60 µm
3, Red      0.63 to 0.69 µm
4, NIR      0.76 to 0.90 µm

Just like IKONOS, QuickBird has a Parameters tab as well as a Chipping tab on its Properties dialog. The same information applies to both tabs as is discussed in the IKONOS section.

RPC

RPC stands for rational polynomial coefficients. When you choose it, the function allows you to specify the associated RPC file to be used in Geocorrection. RPC Properties in Image Analysis for ArcGIS allows you to work with NITF data.

NITF stands for National Imagery Transmission Format Standard. NITF data is designed to pack numerous image compositions with complete annotation, text attachments, and imagery-associated metadata.

The RPC file associated with the image contains rational function polynomial coefficients that are generated by the data provider based on the position of the satellite at the time of image capture. These RPCs can be further refined by using GCPs. This file should be located in the same directory as the image or images you intend to use in orthorectification.

Just like IKONOS and QuickBird, the RPC dialog contains the Parameters and Chipping tabs. These work the same way in all three model properties.



Landsat

The Landsat dialog is used for orthorectification of any Landsat image that uses TM or MSS as its sensor. The model is derived by space resection based on collinearity equations. The elevation information is required in the model for removing relief displacement.

Landsat 1-5

In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit gathering data.

Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collect MSS and TM data.

MSS

The MSS from Landsats 4 and 5 has a swath width of approximately 185 × 170 km from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data is widely used for general geologic studies as well as vegetation inventories.

The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV (instantaneous field of view). A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer 1987).

Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.
• Bands 4, 3, and 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, etc.
• Bands 5, 4, and 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.

Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.

TM

The TM scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.

TM has a swath width of approximately 185 km from a height of approximately 705 km. It is useful for vegetation type and health determination, soil moisture, snow and cloud differentiation, rock type discrimination, and so on.



The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen 1996; Lillesand and Kiefer 1987).



TM Bands and Wavelengths

Band      Wavelength (microns)   Comments
1, Blue   0.45 to 0.52 µm        For mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.
2, Green  0.52 to 0.60 µm        Corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.
3, Red    0.63 to 0.69 µm        For discriminating between many plant species. Also useful for determining soil boundary and geological boundary delineations as well as cultural features.
4, NIR    0.76 to 0.90 µm        Especially responsive to the amount of vegetation biomass present in a scene. Useful for crop identification; emphasizes soil/crop and land/water contrasts.
5, MIR    1.55 to 1.75 µm        Sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. Also one of the few bands that can be used to discriminate between clouds, snow, and ice.
6, TIR    10.40 to 12.50 µm      For vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. Can also be used to locate geothermal activity.
7, MIR    2.08 to 2.35 µm        Important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.

Figure: Landsat MSS vs. Landsat TM (MSS: 3 bands, radiometric resolution 0-127, 1 pixel = 57 m × 79 m; TM: 7 bands, radiometric resolution 0-255, 1 pixel = 30 m × 30 m).

Band Combinations for Displaying TM Data

Different combinations of the TM bands can be displayed to create different composite effects. The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the monitor. The following combinations are commonly used to display images:

• Bands 3, 2, 1 create a true color composite. True color means that objects look as they would to the naked eye—similar to a color photograph.
• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, etc.
• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.

Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.
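
A minimal sketch of stacking three band arrays into such a composite with numpy (the band arrays here are random placeholders; in Image Analysis for ArcGIS the same effect is achieved through the layer's RGB band assignments rather than code).

    import numpy as np

    def composite(red_band, green_band, blue_band):
        """Stack three 8-bit band arrays into an RGB image array (rows x cols x 3)."""
        return np.dstack([red_band, green_band, blue_band]).astype(np.uint8)

    # Placeholder TM band arrays; in practice these would be read from the image file.
    bands = {i: np.random.randint(0, 256, (100, 100), dtype=np.uint8) for i in range(1, 8)}

    true_color  = composite(bands[3], bands[2], bands[1])   # bands 3, 2, 1
    false_color = composite(bands[4], bands[3], bands[2])   # bands 4, 3, 2
    pseudo      = composite(bands[5], bands[4], bands[2])   # bands 5, 4, 2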


Landsat 7

The Landsat 7 satellite, launched in 1999, uses Enhanced Thematic Mapper Plus (ETM+) to observe the earth. The capabilities new to Landsat 7 include the following:

• 15 m spatial resolution panchromatic band
• 5% radiometric calibration with full aperture
• 60 m spatial resolution thermal IR channel

The primary receiving station for Landsat 7 data is located in Sioux Falls, South Dakota at the USGS EROS Data Center (EDC). ETM+ data is transmitted using X-band direct downlink at a rate of 150 Mbps. Landsat 7 is capable of capturing scenes without cloud obstruction, and the receiving stations can obtain this data in real time using the X-band. Stations located around the globe, however, are only able to receive data for the portion of the ETM+ ground track where the satellite can be seen by the receiving station.

Landsat 7 data types

One type of data available from Landsat 7 is browse data. Browse data is a lower resolution image for determining image location, quality, and information content. The other type of data is metadata, which is descriptive information on the image. This information is available via the internet within 24 hours of being received by the primary ground station. Moreover, EDC processes the data to Level 0r. This data has been corrected for scan direction and band alignment errors only. Level 1G data, which is corrected, is also available.

Landsat 7 specifications

Information about the spectral range and ground resolution of the bands of the Landsat 7 satellite is provided in the following table:

Landsat 7 Characteristics

Band Number        Wavelength (microns)   Resolution (m)
1                  0.45 to 0.52 µm        30
2                  0.52 to 0.60 µm        30
3                  0.63 to 0.69 µm        30
4                  0.76 to 0.90 µm        30
5                  1.55 to 1.75 µm        30
6                  10.4 to 12.5 µm        60
7                  2.08 to 2.35 µm        30
Panchromatic (8)   0.50 to 0.90 µm        15



Landsat 7 has a swath width of 185 kilometers. The repeat coverage
interval is 16 days, or 233 orbits. The satellite orbits the earth at 705
kilometers.

The Landsat dialog

The Landsat Properties dialog in Geocorrection Properties has the General, Links, and Elevation tabs already discussed in this chapter. It also has a Parameters tab, which is different from the ones discussed so far. The Parameters tab has areas where you select the type of sensor used to capture your data, the Scene Coverage (if you choose Quarter Scene you also choose the quadrant), the Number of Iterations, and the Background.



Glossary

Terms
abstract symbol
An annotation symbol that has a geometric shape, such as a circle, square, or triangle. These
symbols often represent amounts that vary from place to place, such as population density, yearly
rainfall, and so on.

accuracy assessment
The comparison of a classification to geographical data that is assumed to be true. Usually, the
assumed true data is derived from ground truthing.

American Standard Code for Information Interchange (ASCII)
A basis of character sets...to convey some control codes, space, numbers, most basic punctuation, and unaccented letters a-z and A-Z.

analysis mask
An option that uses a raster dataset in which all cells of interest have a value and all other cells are
no data. Analysis mask lets you perform analysis on a selected set of cells.

ancillary data
The data, other than remotely sensed data, that is used to aid in the classification process.
annotation
The explanatory material accompanying an image or a map. Annotation can consist of lines, text,
polygons, ellipses, rectangles, legends, scale bars, and any symbol that denotes geographical
features.

AOI
See area of interest.

a priori
Already or previously known.

area
A measurement of a surface.

area of interest
(AOI) a point, line, or polygon that is selected as a training sample or as the image area to be used in an operation.

ASCII
See American Standard Code for Information Interchange.

aspect
The orientation, or the direction that a surface faces, with respect to the directions of the compass: north, south, east, west.

attribute
The tabular information associated with a raster or vector layer.

average
The statistical mean; the sum of a set of values divided by the number of values in the set.

band
A set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, and so on) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. Sometimes called channel.

bilinear interpolation
Uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

bin function
A mathematical function that establishes the relationship between data file values and rows in a descriptor table.

bins
Ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are then given new values based upon the bins to which they are assigned.

border
On a map, a line that usually encloses the entire map, not just the image area as does a neatline.

boundary
A neighborhood analysis technique that is used to detect boundaries between thematic classes.


brightness value
The quantity of a primary color (red, green, blue) to be output to a pixel on the display device. Also called intensity value, function memory value, pixel value, display value, and screen value.

buffer zone
A specific area around a feature that is isolated for or from further analysis. For example, buffer zones are often generated around streams in site assessment studies so that further analyses exclude these areas that are often unsuitable for development.

Cartesian
A coordinate system in which data are organized on a grid and points on the grid are referenced by their X,Y coordinates.

camera properties
Camera properties are for the orthorectification of any image that uses a camera for its sensor. The model is derived by space resection based on collinearity equations. The elevation information is required in the model for removing relief displacement.

categorize
The process of choosing distinct classes to divide your image into.

cell
1. A 1 × 1 area of coverage. DTED (Digital Terrain Elevation Data) are distributed in cells. 2. A pixel; grid cell.

cell size
The area that one pixel represents, measured in map units. For example, one cell in the image may represent an area 30' × 30' on the ground. Sometimes called the pixel size.

checkpoint analysis
The act of using check points to independently verify the degree of accuracy of a triangulation.

circumcircle
A triangle's circumscribed circle; the circle that passes through each of the triangle's three vertices.

class
A set of pixels in a GIS file that represents areas that share some condition. Classes are usually formed through classification of a continuous raster layer.

class value
A data file value of a thematic file that identifies a pixel as belonging to a particular class.

classification
The process of assigning the pixels of a continuous raster image to discrete categories.

classification accuracy table
For accuracy assessment, a list of known values of reference pixels, supported by some ground truth or other a priori knowledge of the true class, and a list of the classified values of the same pixels, from a classified file to be tested.

classification scheme (or classification system)
A set of target classes. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data.

clustering
Unsupervised training; the process of generating signatures based on the natural groupings of pixels in image data when they are plotted in spectral space.

clusters
The natural groupings of pixels when plotted in spectral space.

coefficient
One number in a matrix, or a constant in a polynomial expression.

collinearity
A nonlinear mathematical model that photogrammetric triangulation is based upon. Collinearity equations describe the relationship among image coordinates, ground coordinates, and orientation parameters.

contiguity analysis
A study of the ways in which pixels of a class are grouped together spatially. Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified by their sizes and multiplied.

continuous
A term used to describe raster data layers that contain quantitative and related values. See continuous data.

continuous data
A type of raster data that are quantitative (measuring a characteristic) and have related, continuous values, such as remotely sensed images (Landsat, SPOT, and so on).

contrast stretch
The process of reassigning a range of values to another range, usually according to a linear function. Contrast stretching is often used in displaying continuous raster layers, since the range of data file values is usually much narrower than the range of brightness values on the display device.

convolution filtering
The process of averaging small sets of pixels across an image. Used to change the spatial frequency characteristics of an image.

convolution kernel
A matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels.

coordinate system
A method of expressing location. In two-dimensional coordinate systems, locations are expressed by a column and row, also called X and Y.

correlation threshold
A value used in rectification to determine whether to accept or discard GCPs. The threshold is an absolute value threshold ranging from 0.000 to 1.000.
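As an illustration of the convolution kernel and convolution filtering entries above, the following minimal Python/NumPy sketch applies a 3 × 3 averaging (low-frequency) kernel to a small hypothetical single-band image. The sample values and the simple border handling are assumptions made for the example, not the software's own routine.

import numpy as np

# A hypothetical 5 x 5 single-band image (data file values).
image = np.array([
    [10, 10, 10, 50, 50],
    [10, 10, 10, 50, 50],
    [10, 10, 10, 50, 50],
    [10, 10, 10, 50, 50],
    [10, 10, 10, 50, 50],
], dtype=float)

# A 3 x 3 low-frequency (averaging) kernel: every neighbor is weighted equally.
kernel = np.ones((3, 3)) / 9.0

def convolve(img, k):
    """Convolve img with kernel k, leaving the 1-pixel border unchanged for brevity."""
    out = img.copy()
    pad = k.shape[0] // 2
    for r in range(pad, img.shape[0] - pad):
        for c in range(pad, img.shape[1] - pad):
            window = img[r - pad:r + pad + 1, c - pad:c + pad + 1]
            out[r, c] = np.sum(window * k)   # weighted average of the neighborhood
    return out

print(convolve(image, kernel))   # the edge between the 10s and the 50s is smoothed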



correlation windows
Windows that consist of a local neighborhood of pixels.

corresponding GCPs
The GCPs that are located in the same geographic location as the selected GCPs, but are selected in different files.

covariance
Measures the tendencies of data file values for the same pixel, but in different bands, to vary with each other in relation to the means of their respective bands. These bands must be linear. Covariance is defined as the average product of the differences between the data file values in each band and the mean of each band.

covariance matrix
A square matrix that contains all of the variances and covariances within the bands in a data file.

cubic convolution
Uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output with a cubic function.

data
1. In the context of remote sensing, a computer file containing numbers that represent a remotely sensed image, and can be processed to display that image. 2. A collection of numbers, strings, or facts that requires some processing before it is meaningful.

database
A relational data structure usually used to store tabular information. Examples of popular databases include SYBASE, dBASE, Oracle, INFO, etc.

data file
A computer file that contains numbers that represent an image.

data file value
Each number in an image file. Also called file value, image file value, DN, brightness value, pixel.

decision rule
An equation or algorithm that is used to classify image data after signatures have been created. The decision rule is used to process the data file values based upon the signature statistics.

density
A neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a user-specified window.

digital elevation model (DEM)
Continuous raster layers in which data file values represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can be produced with terrain analysis programs.
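Restating the covariance entry above as an equation (a sketch using the population form; an implementation could equally use N − 1 in the denominator):

\[ \mathrm{Cov}_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(x_{ij}-\mu_j\right)\left(x_{ik}-\mu_k\right) \]

where x_ij is the data file value of pixel i in band j, μ_j is the mean of band j, and N is the number of pixels.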

digital terrain model (DTM)
A discrete expression of topography in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines.

dimensionality
In classification, dimensionality refers to the number of layers being classified. For example, a data file with three layers is said to be three dimensional.

divergence
A statistical measure of distance between two or more signatures. Divergence can be calculated for any combination of bands used in the classification; bands that diminish the results of the classification can be ruled out.

diversity
A neighborhood analysis technique that outputs the number of different values within a user-specified window.

edge detector
A convolution kernel, which is usually a zero-sum kernel, that smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. High spatial frequency is at the edges between homogeneous groups of pixels.

edge enhancer
A high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.

enhancement
The process of making an image more interpretable for a particular application. Enhancement can make important features of raw, remotely sensed data more interpretable to the human eye.

extension
The three letters after the period in a file name that usually identify the type of file.

extent
1. The image area to be displayed in a View. 2. The area of the earth's surface to be mapped.

feature collection
The process of identifying, delineating, and labeling various types of natural and human-made phenomena from remotely-sensed images.

feature extraction
The process of studying and locating areas and objects on the ground and deriving useful information from images.

feature space
An abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).

fiducial center
The center of an aerial photo.



fiducials
Four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure that are used to compute the transformation from data file to image coordinates.

file coordinates
The location of a pixel within the file in x,y coordinates. The upper left file coordinate is usually 0,0.

filtering
The removal of spatial or spectral features for data enhancement. Convolution filtering is one method of spatial filtering. Some texts may use the terms filtering and spatial filtering synonymously.

focal
The process of performing one of several analyses on data values in an image file, using a process similar to convolution filtering.

GCP matching
For image to image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix.

geocorrection
The process of rectifying remotely sensed data that has distortions due to a sensor or the curvature of the earth.

geographic information system (GIS)
A unique system designed for a particular application that stores, enhances, combines, and analyzes layers of geographic data to produce interpretable information. A GIS may include computer images, hardcopy maps, statistical data, and any other data needed for a study, as well as computer software and human knowledge. GISs are used for solving complex geographic planning and management problems.

georeferencing
The process of assigning map coordinates to image data and resampling the pixels of the image to conform to the map projection grid.

ground control point (GCP)
A specific pixel in image data for which the output map coordinates (or other output coordinates) are known. GCPs are used for computing a transformation matrix, for use in rectifying an image.

high frequency kernel
A convolution kernel that increases the spatial frequency of an image. Also called a high-pass kernel.

histogram
A graph of data distribution, or a chart of the number of pixels that have each possible data file value. For a single band of data, the horizontal axis of a histogram graph is the range of all possible data file values. The vertical axis is a measure of pixels that have each data value.

histogram equalization
The process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. The result is a nearly flat histogram.

histogram matching
The process of determining a lookup table that converts the histogram of one band of an image or one color gun to resemble another histogram.

hue
A component of IHS (intensity, hue, saturation) that is representative of the color or dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red = 120, yellow = 180, green = 240, and cyan = 300.

IKONOS properties
Use the IKONOS Properties geocorrection dialog to perform orthorectification on images gathered with the IKONOS satellite. The IKONOS satellite orbits at an altitude of 423 miles, or 681 kilometers. The revisit time is 2.9 days at 1 meter resolution, and 1.5 days at 1.5 meter resolution.

image data
Digital representations of the earth that can be used in computer image processing and GIS analyses.

image file
A file containing raster image data.

image matching
The automatic acquisition of corresponding image points on the overlapping area of two images.

image processing
The manipulation of digital image data, including (but not limited to) enhancement, classification, and rectification operations.

indices
The process used to create output images by mathematically combining the DN values of different bands.

IR
The infrared portion of the electromagnetic spectrum.

island polygons
When using Seed Tool, island polygons represent areas in the polygon that have differing characteristics from the areas in the larger polygon. You have the option to use the island polygons feature or to turn it off when using Seed Tool.

ISODATA (Iterative Self-Organizing Data Analysis Technique)
A method of clustering that uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again so that the spectral distance patterns in the data gradually emerge.

Landsat
A series of earth-orbiting satellites, operated by EOSAT, that gather MSS and TM imagery.
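A minimal sketch of the histogram equalization idea defined above, assuming an 8-bit NumPy array for the band; the bin count and the cumulative-distribution mapping follow the standard textbook approach, not necessarily the exact breakpoints the software computes.

import numpy as np

def equalize(band, n_values=256):
    """Redistribute pixel values so each output value holds roughly the same
    number of pixels, giving a nearly flat histogram."""
    hist, _ = np.histogram(band, bins=n_values, range=(0, n_values))
    cdf = hist.cumsum()                          # cumulative count of pixels per value
    cdf = cdf / cdf[-1]                          # normalize to the range 0..1
    lut = np.round(cdf * (n_values - 1)).astype(np.uint8)
    return lut[band]                             # map every pixel through the lookup table

dark_band = np.random.randint(0, 64, size=(100, 100)).astype(np.uint8)  # narrow, dark input
flat_band = equalize(dark_band)                  # values now spread across 0..255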



layer
1. A band or channel of data. 2. A single band or set of three bands displayed using the red, green, and blue color guns. 3. A component of a GIS database that contains all of the data for one theme. A layer consists of a thematic image file, and may also include attributes.

linear
A description of a function that can be graphed as a straight line or a series of lines. Linear equations (transformations) can generally be expressed in the form of the equation of a line or plane. Also called 1st-order.

linear contrast stretch
An enhancement technique that outputs new values at regular intervals.

linear transformation
A 1st-order rectification. A linear transformation can change location in X and/or Y, scale in X and/or Y, skew in X and/or Y, and rotation.

lookup table (LUT)
An ordered set of numbers that is used to perform a function on a set of input values. To display or print an image, lookup tables translate data file values into brightness values.

low frequency kernel
A convolution kernel that decreases spatial frequency. Also called a low-pass kernel.

majority
A neighborhood analysis technique that outputs the most common value of the data file values in a user-specified window.

map projection
A method of representing the three-dimensional spherical surface of a planet on a two-dimensional map surface. All map projections involve the transfer of latitude and longitude onto an easily flattened surface.

maximum
A neighborhood analysis technique that outputs the greatest value of the data file values in a user-specified window.

maximum likelihood
A classification decision rule based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

mean
1. The statistical average; the sum of a set of values divided by the number of values in the set. 2. A neighborhood analysis technique that outputs the mean value of the data file values in a user-specified window.

median
1. The central value in a set of data such that an equal number of values are greater than and less than the median. 2. A neighborhood analysis technique that outputs the median value of the data file values in a user-specified window.
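Tying the linear contrast stretch and lookup table entries above together, the sketch below builds a lookup table that maps a band's narrow minimum-to-maximum range onto the full 0-255 brightness range at regular intervals. Using the simple minimum and maximum as breakpoints (rather than, say, standard deviations) is an assumption for the example.

import numpy as np

def linear_stretch_lut(band, out_min=0, out_max=255):
    """Build a LUT that maps the band's min..max onto out_min..out_max at regular intervals."""
    in_min, in_max = int(band.min()), int(band.max())
    values = np.arange(256, dtype=float)
    lut = (values - in_min) / max(in_max - in_min, 1) * (out_max - out_min) + out_min
    return np.clip(lut, out_min, out_max).astype(np.uint8)

band = np.random.randint(30, 90, size=(50, 50)).astype(np.uint8)   # narrow range of data file values
brightness = linear_stretch_lut(band)[band]     # translate data file values into brightness values
print(band.min(), band.max(), "->", brightness.min(), brightness.max())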

minimum
A neighborhood analysis technique that outputs the least value of the data file values in a user-specified window.

minimum distance
A classification decision rule that calculates the spectral distance between the measurement vector for each candidate pixel and the mean vector for each signature. Also called spectral distance.

minority
A neighborhood analysis technique that outputs the least common value of the data file values in a user-specified window.

modeling
The process of creating new layers from combining or operating upon existing layers. Modeling allows the creation of new classes from existing classes and the creation of a small set of images, or a single image, which, at a glance, contains many types of information about a scene.

mosaicking
The process of piecing together images side by side to create a larger image.

multispectral classification
The process of sorting pixels into a finite number of individual classes, or categories of data, based on data file values in multiple bands.

multispectral imagery
Satellite imagery with data recorded in two or more bands.

multispectral scanner (MSS)
Landsat satellite data acquired in four bands with a spatial resolution of 57 × 79 meters.

nadir
The area on the ground directly beneath a scanner's detectors.

NDVI
See Normalized Difference Vegetation Index.

nearest neighbor
A resampling method in which the output data file value is equal to the input pixel that has coordinates closest to the retransformed coordinates of the output pixel.

neighborhood analysis
Any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning.

no data
NoData is what you assign to pixel values you do not want to include in a classification or function. By assigning pixel values NoData, they are not given a value. Images that georeference to non-rectangles need a NoData concept for display even if they are not classified. The values that NoData pixels are given are understood to be just placeholders.

non-directional
The process of using the Sobel and Prewitt filters for edge detection. These filters use orthogonal kernels convolved separately with the original image, and then combined.



nonlinear
Describing a function that cannot be expressed as the graph of a line or in the form of the equation of a line or plane. Nonlinear equations usually contain expressions with exponents. Second-order (2nd-order) or higher-order equations and transformations are nonlinear.

nonlinear transformation
A 2nd-order or higher rectification.

nonparametric signature
A signature for classification that is based on polygons or rectangles that are defined in the feature space image for the image file. There is no statistical basis for a nonparametric signature; it is simply an area in a feature space image.

normalized difference vegetation index (NDVI)
The formula for NDVI is (IR - R) / (IR + R), where IR stands for the infrared portion of the electromagnetic spectrum, and R stands for the red portion of the electromagnetic spectrum. NDVI finds areas of vegetation in imagery.

observation
In photogrammetric triangulation, a grouping of the image coordinates for a GCP.

off-nadir
Any point that is not directly beneath a scanner's detectors, but off to an angle. The SPOT scanner allows off-nadir viewing.

orthorectification
A form of rectification that corrects for terrain displacement and can be used if a DEM of the study area is available.

overlay
1. A function that creates a composite file containing either the minimum or the maximum class values of the input files. Overlay sometimes refers generically to a combination of layers. 2. The process of displaying a classified file over the original image to inspect the classification.

panchromatic imagery
Single-band or monochrome satellite imagery.

parallelepiped
1. A classification decision rule in which the data file values of the candidate pixel are compared to upper and lower limits. 2. The limits of a parallelepiped classification, especially when graphed as rectangles.

parameter
1. Any variable that determines the outcome of a function or operation. 2. The mean and standard deviation of data, which are sufficient to describe a normal curve.

parametric signature
A signature that is based on statistical parameters (such as mean and covariance matrix) of the pixels that are in the training sample or cluster.
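A minimal sketch of the NDVI formula above, assuming the infrared and red bands are available as NumPy arrays (the tiny sample values are invented for illustration):

import numpy as np

def ndvi(ir, red):
    """NDVI = (IR - R) / (IR + R), computed per pixel; results range from -1 to +1."""
    ir = ir.astype(float)
    red = red.astype(float)
    total = ir + red
    total[total == 0] = 1e-6          # guard against division by zero on dark or NoData pixels
    return (ir - red) / total

ir_band  = np.array([[200, 60], [180, 40]], dtype=float)
red_band = np.array([[ 50, 55], [ 60, 45]], dtype=float)
print(ndvi(ir_band, red_band))        # vegetated pixels (high IR, low red) approach +1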

pattern recognition
The science and art of finding meaningful patterns in data, which can be extracted through classification.

piecewise linear contrast stretch
An enhancement technique used to enhance a specific portion of data by dividing the lookup table into three sections: low, middle, and high.

pixel
Abbreviated from picture element; the smallest part of a picture (image).

pixel depth
The number of bits required to store all of the data file values in a file. For example, data with a pixel depth of 8, or 8-bit data, have 256 values ranging from 0-255.

pixel size
The physical dimension of a single light-sensitive element (13 × 13 microns).

polygon
A set of closed line segments defining an area.

polynomial
A mathematical expression consisting of variables and coefficients. A coefficient is a constant that is multiplied by a variable in the expression.

principal components analysis (PCA)
1. A method of data compression that allows redundant data to be compressed into fewer bands (Jensen 1996; Faust 1989). 2. The process of calculating principal components and outputting principal component bands. It allows redundant data to be compacted into fewer bands (that is, the dimensionality of the data is reduced).

principal point
The point in the image plane onto which the perspective center is projected, located directly beneath the interior orientation.

profile
A row of data file values from a DEM or DTED file. The profiles of DEM and DTED run south to north (that is, the first pixel of the record is the southernmost pixel).

pushbroom
A scanner in which all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner, such as the SPOT scanner.

QuickBird
The QuickBird model requires the use of rational polynomial coefficients (RPCs) to describe the relationship between the image and the earth's surface at the time of image capture. By using QuickBird Properties, you can perform orthorectification on images gathered with the QuickBird satellite.



radar data
The remotely sensed data that are produced when a radar transmitter emits a beam of micro or millimeter waves, the waves reflect from the surfaces they strike, and the backscattered radiation is detected by the radar system's receiving antenna, which is tuned to the frequency of the transmitted waves.

radiometric correction
The correction of variations in data that are not caused by the object or scene being scanned, such as scanner malfunction and atmospheric interference.

radiometric enhancement
An enhancement technique that deals with the individual values of pixels in an image.

radiometric resolution
The dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided. See pixel depth.

rank
A neighborhood analysis technique that outputs the number of values in a user-specified window that are less than the analyzed value.

raster data
A data type in which thematic class values have the same properties as interval values, except that ratio values have a natural zero or starting point.

recoding
The assignment of new values to one or more classes.

rectification
The process of making image data conform to a map projection system. In many cases, the image must also be oriented so that the north direction corresponds to the top of the image.

rectified coordinates
The coordinates of a pixel in a file that has been rectified, which are extrapolated from the GCPs. Ideally, the rectified coordinates for the GCPs are exactly equal to the reference coordinates. Because there is often some error tolerated in the rectification, this is not always the case.

reference coordinates
The coordinates of the map or reference image to which a source (input) image is being registered. GCPs consist of both input coordinates and reference coordinates for each point.

reference pixels
In classification accuracy assessment, pixels for which the correct GIS class is known from ground truth or other data. The reference pixels can be selected by you, or randomly selected.

reference plane
In a topocentric coordinate system, the tangential plane at the center of the image on the earth ellipsoid, on which the three perpendicular coordinate axes are defined.

reproject
Transforms raster image data from one map projection to another.

resampling
The process of extrapolating data file values for the pixels in a new grid when data have been rectified or registered to another image.

resolution
A level of precision in data.

resolution merging
The process of sharpening a lower-resolution multiband image by merging it with a higher-resolution monochrome image.

RGB
Red, green, blue. The primary additive colors that are used on most display hardware to display imagery.

RGB clustering
A clustering method for 24-bit data (three 8-bit bands) that plots the pixels in three-dimensional spectral space and divides that space into sections that are used to define clusters. The output color scheme of an RGB-clustered image resembles that of the input file.

RMS error
The distance between the input (source) location of the GCP and the retransformed location for the same GCP. RMS error is calculated with a distance equation.

RPC properties
RPC Properties uses rational polynomial coefficients to describe the relationship between the image and the earth's surface at the time of image capture. You can specify the associated RPC file to be used in your geocorrection.

rubber sheeting
The application of nonlinear rectification (2nd-order or higher).

saturation
A component of IHS that represents the purity of color and also varies linearly from 0 to 1.

scale
1. The ratio of distance on a map as related to the true distance on the ground. 2. Cell size. 3. The processing of values through a lookup table.

scanner
The entire data acquisition system, such as the Landsat scanner or the SPOT panchromatic scanner.

seed tool
An Image Analysis for ArcGIS feature that automatically generates feature layer polygons of similar spectral value.

shapefile
A vector format that contains spatial data. Shapefiles have the .shp extension.
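The distance equation behind the RMS error entry above can be written, for a single GCP, as the Euclidean distance between the two locations (a sketch of the per-point form; an overall figure is typically the root mean square of these distances over all GCPs):

\[ \mathrm{RMS} = \sqrt{\left(x_r - x_i\right)^2 + \left(y_r - y_i\right)^2} \]

where (x_i, y_i) is the input (source) location of the GCP and (x_r, y_r) is its retransformed location.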



signature
A set of statistics that defines a training sample or cluster. The signature is used in a classification process. Each signature corresponds to a GIS class that is created from the signatures with a classification decision rule.

source coordinates
In the rectification process, the input coordinates.

spatial enhancement
The process of modifying the values of pixels in an image relative to the pixels that surround them.

spatial frequency
The difference between the highest and lowest values of a contiguous set of pixels.

spatial resolution
A measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel.

speckle noise
The light and dark pixel noise that appears in radar data.

spectral distance
The distance in spectral space computed as Euclidean distance in n-dimensions, where n is the number of bands.

spectral enhancement
The process of modifying the pixels of an image based on the original values of each pixel, independent of the values of surrounding pixels.

spectral resolution
A measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel.

spectral space
An abstract space that is defined by spectral units (such as an amount of electromagnetic radiation). The notion of spectral space is used to describe enhancement and classification techniques that compute the spectral distance between n-dimensional vectors, where n is the number of bands in the data.

SPOT
SPOT satellite sensors operate in two modes, multispectral and panchromatic. SPOT is often referred to as the pushbroom scanner, meaning that all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner.

standard deviation
1. The square root of the variance of a set of values, which is used as a measurement of the spread of the values. 2. A neighborhood analysis technique that outputs the standard deviation of the data file values of a user-specified window.
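Written out, the Euclidean form referred to in the spectral distance entry above is:

\[ D = \sqrt{\sum_{b=1}^{n}\left(d_b - e_b\right)^2} \]

where n is the number of bands and d_b and e_b are the data file values of the two pixels (or of a pixel and a class mean) in band b.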

striping
A data error that occurs if a detector on a scanning system goes out of adjustment, that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

subsetting
The process of breaking out a portion of a large image file into one or more smaller files.

sum
A neighborhood analysis technique that outputs the total of the data file values in a user-specified window.

supervised training
Any method of generating signatures for classification in which the analyst is directly involved in the pattern recognition process. Usually, supervised training requires the analyst to select training samples from the data that represent patterns to be classified.

swath width
In a satellite system, the total width of the area on the ground covered by the scanner.

summarize areas
A common workflow step in which a feature theme corresponding to an area of interest is used to summarize change within just that area.

temporal resolution
The frequency with which a sensor obtains imagery of a particular area.

terrain analysis
The processing and graphic simulation of elevation data.

terrain data
Elevation data expressed as a series of x, y, and z values that are either regularly or irregularly spaced.

thematic change
Thematic Change is a feature in Image Analysis for ArcGIS that allows you to compare two thematic images of the same area captured at different times to notice change in vegetation, urban areas, and so on.

thematic data
Raster data that is qualitative and categorical. Thematic layers often contain classes of related information, such as land cover, soil type, slope, etc.

thematic map
A map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.

thematic mapper (TM)
Landsat data acquired in seven bands with a spatial resolution of 30 × 30 meters.



theme
A particular type of information, such as soil type or land use, that is represented in a layer.

threshold
A limit, or cutoff point, usually a maximum allowable amount of error in an analysis. In classification, thresholding is the process of identifying a maximum distance between a pixel and the mean of the signature to which it was classified.

training
The process of defining the criteria by which patterns in image data are recognized for the purpose of classification.

training sample
A set of pixels selected to represent a potential class. Also called sample.

transformation matrix
A set of coefficients that is computed from GCPs, and used in polynomial equations to convert coordinates from one system to another. The size of the matrix depends upon the order of the transformation.

triangulation
Establishes the geometry of the camera or sensor relative to objects on the earth's surface.

true color
A method of displaying an image (usually from a continuous raster layer) that retains the relationships between data file values and represents multiple bands with separate color guns. The image memory values from each displayed band are translated through the function memory of the corresponding color gun.

unsupervised training
A computer-automated method of pattern recognition in which some parameters are specified by the user and are used to uncover statistical patterns that are inherent in the data.

variable
1. A numeric value that is changeable, usually represented with a letter. 2. A thematic layer. 3. One band of a multiband image. 4. In models, objects that have been associated with a name using a declaration statement.

vector data
Data that represents physical forms (elements) such as points, lines, and polygons. Only the vertices of vector data are stored, instead of every point that makes up the element.

vegetative indices
A gray scale image that clearly highlights vegetation.

zoom
The process of expanding displayed pixels on an image so they can be more closely studied. Zooming is similar to magnification, except that it changes the display only temporarily, leaving image memory the same.

References

This appendix lists references used in the creation of this book.

Akima, H. 1978. "A Method for Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed Data Points." ACM Transactions on Mathematical Software 4(2): 148-159.

Buchanan, M. D. 1979. "Effective Utilization of Color in Multidimensional Data Presentation." Proceedings of the Society of Photo-Optical Engineers, Vol. 199: 9-19.

Chavez, Pat S., Jr., et al. 1991. "Comparison of Three Different Methods to Merge
Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic.”
Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.

Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California:
Conrac Corp.

Daily, Mike. 1983. "Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery." Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3: 349-355.

ERDAS 2000. ArcView Image Analysis. Atlanta, Georgia: ERDAS, Inc.

ERDAS 1999. Field Guide. 5th ed. Atlanta: ERDAS, Inc.

ESRI 1992. Map Projections & Coordinate Management: Concepts and Procedures.
Redlands, California: ESRI, Inc.

Faust, Nickolas L. 1989. "Image Enhancement." Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology, edited by Allen Kent and James G. Williams. New York: Marcel Dekker, Inc.

Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading,
Massachusetts: Addison-Wesley Publishing Company.

Holcomb, Derrold W. 1993. “Merging Radar and VIS/IR Imagery.” Paper submitted to the
1993 ERIM Conference, Pasadena, California.

Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.

Jensen, John R., et al. 1983. “Urban/Suburban Land Use Analysis.” Chapter 30 in Manual of Remote Sensing, edited by Robert N.
Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey:
Prentice-Hall.

Kloer, Brian R. 1994. “Hybrid Parametric/Non-parametric Image Classification.” Paper presented at the ACSM-ASPRS Annual
Convention, April 1994, Reno, Nevada.

Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.

Marble, Duane F. 1990. “Geographic Information Systems: An Overview.” Introductory Readings in Geographic Information
Systems, edited by Donna J. Peuquet and Duane F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.

McCoy, Jill, and Kevin Johnston. Using ArcGIS Spatial Analyst. Redlands, California: ESRI, Inc.

Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.

Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.

Schowengerdt, Robert A. 1980. “Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content.”
Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 10: 1325-1334.

Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.

Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West
Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.

Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.

Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing
Company.

Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.” Remote Sensing of
Environment, Vol. 8: 127-150.

Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and
Products. Madison, Georgia: SEAI Technical Publications.

Watson, David. 1994. Contouring: A Guide to the Analysis and Display of Spatial Data. New York: Elsevier Science.



Welch, R., and W. Ehlers. 1987. "Merging Multiresolution SPOT HRV and Landsat TM Data." Photogrammetric Engineering &
Remote Sensing, Vol. 53, No. 3: 301-303.

Index

A
A priori 183
Absorption spectra 101
Abstract symbol 183
Accuracy assessment 183
Ancillary data 183
Annotation 183
AOI 183
Area 184
Area of interest 184
ASCII 183
Aspect 184
Atmospheric correction 91
Attribute 184
Average 184
AVHRR 102

B
Band 184
Bilinear interpolation 184
Bin 87
Bin function 184
Bins 184
Border 184
Boundary 184
brightness inversion 94
Brightness value 184
Brovey Transform 79
Buffer zone 185

C
Camera Model
  tutorial 33
Camera Properties
  Fiducials 172
Camera properties 185
Camera Properties dialog 171
Cartesian 185
Categorize 185
Cell 185
Cell Size 48
Cell Size Tab
  workflow 51
Checkpoint analysis 170
Class 185
  value
    numbering systems 114
Class value 185
Classification 152, 185
Classification accuracy table 185
Classification scheme 185
Clustering 186
Clusters 186
Coefficient 186
Collinearity 186
Contiguity analysis 186
Continuous 186
Continuous data 186
Contrast stretch
  for display 85
  linear 84
  min/max vs. standard deviation 85
  nonlinear 84
  piecewise linear 84
Convolution 70
  filtering 109
Convolution Filtering 70
Convolution filtering 186
Convolution kernel 186
Coordinate system 186
Correlation threshold 186
Correlation windows 186
Corresponding GCPs 187
Covariance 187
Covariance matrix 187
Creating a shapefile
  tutorial 18
Cubic convolution 187

D
Data 108, 187
Data file 187
Data file value 187
  display 84
Database 187
Decision rule 187
Digital elevation model 187
Digital terrain model 187
Display device 84, 85, 96

E
Edge detector 188
Edge enhancer 188
Effects of order 163
Enhancement 188
  linear 84
  nonlinear 84
  radiometric 83
  spatial 83
Extension 188
Extent 47
Extent Tab
  workflow 51

F
Feature collection 188
Feature extraction 188
Feature space 188
Fiducial center 188
Fiducials 188
File coordinates 189
Filtering 189
Finding areas of change 22
Focal 189
Focal Analysis 77
  workflow 78
Focal operation 109

G
GCP matching 189
GCPs 151
General Tab
  workflow 50
Geocorrection 189
  tutorial 33
Geocorrection property dialogs 153
  Elevation tab 155
  General tab 153
  Links tab 154
Geographic information system 189
Georeferencing 150, 189
GIS
  defined 107
Ground control point 189
Ground control points 151

H
High frequency kernel 189
High Frequency Kernels 72
High order polynomials 162
Histogram 189
  breakpoint 85
Histogram Equalization
  tutorial 14
Histogram equalization 189
  formula 88
Histogram match 91
Histogram matching 190
histogram matching 92
Histogram Stretch
  tutorial 14
Hue 96, 190

I
Identifying similar areas 18
IHS to RGB 99
IKONOS
  Chipping tab 174
IKONOS properties 190
IKONOS Properties dialog 173
Image data 190
Image Difference
  tutorial 22
Image file 190
Image Info 45
  workflow 46
Image matching 190
Image processing 190
Index 101
Indices 190
Information (vs. data) 108
Intensity 96
IR 190
Island Polygons 41
ISODATA 190

L
Landsat 190
  bands and wavelengths 177
  MSS 102
  TM 99, 102
Landsat 7 180
Landsat Properties 177
Landsat Properties dialog 181
Layer 190
Linear 191
Linear transformation 169, 191
Linear transformations 161
Lookup table 84
  display 85
Lookup table (LUT) 191

M
Majority 191
Map projection 191
Maximum likelihood 191
Mean 85, 191
Median 191
Minimum 191
Minimum distance 192
Minimum GCPs 166
Minority 192
Modeling 192
Mosaicking 192
Mosaicking images
  tutorial 30
MSS 177
Multispectral classification 192
Multispectral imagery 192
Multispectral scanner (MSS) 192

N
Nadir 192
NDVI 192
Nearest neighbor 152, 192
Neighborhood analysis 109, 192
  density 109
  diversity 109
  majority 109
  maximum 109
  minimum 109
  minority 109
  rank 109
  sum 109
NITF 176
NoData Value 45
Non-directional 192
Non-Directional Edge 75
  workflow 76
Nonlinear transformation 170, 193
Nonlinear transformations 162
Normalized difference vegetation index 193

O
Observation 193
Off-nadir 193
Options
  dialog 47
Options Dialog
  workflow 50
Orientation tab 171
Orthorectification 193
  tutorial 33
Overlay 193

P
Panchromatic imagery 193
Parallelepiped 193
Parameter 193
Parametric 131
Parametric signature 193
Pattern recognition 193
Pixel 194
Pixel depth 194
Pixel size 194
Placing links
  tutorial 36
Polygon 194
Polynomial 194
Polynomial Properties dialog 168
Polynomial Transformation 161
Preference Tab 51
Preferences 49
Principal components analysis (PCA) 194
Profile 194
Pushbroom 194

Q
QuickBird 194
QuickBird Properties 176
QuickBird Properties dialog 173

R
Radar data 194
Radiometric correction 195
Radiometric enhancement 195
Radiometric resolution 195
Raster data 195
Recode 114
Recoding 195
Rectification 150, 195
Rectified coordinates 195
Reference coordinates 195
Reference pixels 195
Reference plane 195
Reflection spectra
  see absorption spectra
Reproject 195
Resampling 196
Resolution 196
  spatial 91
Resolution Merge 79
  workflow 80
Resolution merging 196
RGB 196
RGB clustering 196
RMS error 151, 196
RMSE 35
RPC properties 196
RPC Properties dialog 173, 176
Rubber Sheeting 169
Rubber sheeting 196

S
Saturation 96, 196
Scale 196
Scanner 196
Scanning window 109
Seed Radius 40
  workflow 44
Seed Tool 18
  controlling 40
  workflow 42
Seed Tool Properties 40
Shadow
  enhancing 84
Shapefile 196
Signature 196
Source coordinates 197
Spatial Enhancement 69
Spatial enhancement 197
Spatial frequency 197
Spatial resolution 197
Speckle noise 197
Spectral distance 197
Spectral enhancement 197
Spectral resolution 197
Spectral space 197
SPOT 197
  panchromatic 99
  XS 102
Spot 158
  Panchromatic 158
  XS 158
Spot 4 159
Spot Properties dialog 160
Standard deviation 85, 197
Starting Image Analysis for ArcGIS 12
Stereoscopic pairs 159
Striping 197
Subsetting 198
Summarize areas 198
Supervised training 198
Swath width 198

T
Temporal resolution 198
Terrain analysis 198
Terrain data 198
Thematic Change
  tutorial 24
Thematic data 198
Thematic files 152
Thematic map 198
Thematic mapper (TM) 198
Theme 198
Threshold 199
TM 177
TM data 179
Training 199
Training sample 199
Transformation matrix 161, 199
Triangle-based finite element analysis 169
Triangle-based rectification 169
Triangulation 169, 199
True color 199
  tutorial 18

U
Unsupervised Classification
  tutorial 25
Unsupervised training 199

V
Variable 199
Vector data 199
Vegetative indices 199

Z
Zero Sum Kernels 72
Zoom 199
