Robert M. Rohland

Department of Resource Analysis, Saint Mary's University of Minnesota, Minneapolis, MN 55404
Abstract

Environmental Systems Research Institute (ESRI) ArcGIS 9.x has limited image processing capabilities in the form of the Spatial Analyst Extension. Spatial Analyst contains a tool called the Maximum Likelihood (ML) classification tool, which can be used to derive a thematic map from an air photo or satellite image. Working with multispectral image data (such as a Landsat MSS or TM+) or with multitemporal image data (such as data obtained at multiple times during a growing season) it is possible to derive thematic maps with their associated legends. However, RGB and color IR air photos having only three layers often contain too little information for ML pixel classification to have an acceptable level of accuracy. The accuracy of classification can be improved by using context information in the form of texture measures. Custom texture filters have been designed for that purpose.
________________________________________________________________________________

Rohland, Robert M. 2008. Improving the Accuracy of Pixel Classification by Including Texture Information. Volume 10, Papers in Resource Analysis. 21 pp. Saint Mary's University of Minnesota University Central Services Press. Winona, MN. Retrieved (date) from http://www.gis.smumn.edu.
However, the whole process is referred to more loosely as classification by Spatial Analyst Tools in Environmental Systems Research Institute (ESRI) ArcGIS 9.x. This discussion focuses on the classification of pixels rather than delineation between regions. Because the ESRI ML classifier is being used, the whole process will be referred to as classification for the purpose of this project.

Figure 2. The Process of Segmentation. (Diagram: a photo-interpreter segments a remotely-sensed image; segmentation uses both classification and delineation.)

Multilayer Classification of Remotely Sensed Images

A trained human interpreter may be able to classify grayscale images, but automated classification, especially by a tool like the ESRI ArcGIS ML classifier, requires multilayer images (Landgrebe, 2003; Richards and Jia, 2006; Tso and Mather, 2001). Ordinarily, pixel-by-pixel classification is performed on multispectral or multitemporal images. An example of multispectral data is that which is produced by the Landsat series of satellites. These images include red, green, and blue layers and also extra spectral bands in the IR (infrared), giving additional information about vegetation and geology.

An example of multitemporal data would be a series of multispectral images taken of agricultural land at different times during the growing season. As crops grow and mature, their spectral signatures change.

The Need for Greater Classification Accuracy and the Purpose of this Project

When working with six or seven spectral layers, e.g. Landsat TM+ datasets, there are enough bands to give a definitive spectral signature that allows for accurate classification. However, in cases where there are only three layers in a dataset, such as RGB color or color IR images, there is too little information to perform an accurate classification using the ML classifier tool in ESRI ArcGIS 9.x. In that case, additional information must be extracted from the pixel's context.

In this project, custom software tools have been developed to extract additional information from each pixel's neighborhood. Typically, texture information is derived from the value (brightness) of an RGB image. There is such high correlation between the red, green, and blue bands (usually over 95%) that it does not make sense to extract texture information for each spectral band (Maenpaa and Pietikainen, 2004).

Kinds of Classifiers

Classifiers are broadly divided into two kinds: unsupervised and supervised. An unsupervised classifier looks for natural clustering of data in feature space. These clusters in feature space are called spectral classes, and they may or may not have informational value.

Supervised classifiers can be "trained." Classifiers can be taught to classify samples according to categories known as informational classes. Supervised classification is performed by presenting training samples, which have class labels attached to them, to the classifier. After the classifier is taught to assign labeled sample vectors to the appropriate classes, then the classifier is allowed to classify samples whose class is not known.
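The train-then-classify workflow just described can be sketched with a minimal nearest-mean classifier (a simple relative of the minimum-distance-to-class-means idea, not the ML tool itself); the 3-band samples below are invented for illustration:

```python
import numpy as np

def train(samples, labels):
    """Learn one mean vector per class from labeled training samples."""
    return {c: samples[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(pixels, class_means):
    """Assign each pixel to the class whose mean is nearest in feature space."""
    classes = sorted(class_means)
    means = np.array([class_means[c] for c in classes])           # (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)  # (n, k)
    return np.array([classes[i] for i in d.argmin(axis=1)])

# Invented training data: two classes in a 3-band (RGB) feature space.
X = np.array([[10, 10, 10], [12, 11, 9], [200, 190, 180], [210, 205, 195]])
y = np.array([1, 1, 2, 2])
means = train(X, y)
print(classify(np.array([[11, 10, 11], [205, 198, 190]]), means))  # → [1 2]
```

Once trained, the classifier labels pixels it has never seen, which is exactly the step that separates supervised from unsupervised classification.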
Supervised classifiers are further divided into statistical classifiers, neural network classifiers, fuzzy logic classifiers, and rule-based classifiers. Statistical classifiers can be yet further subdivided into parametric and non-parametric classifiers. Parametric classifiers assume classes can be modeled by a multivariate normal distribution (Gaussian distribution). Non-parametric classifiers do not make that assumption.

The ML classifier is a parametric statistical classifier because it uses statistics and it assumes the classes are normally distributed. It is therefore important to remember that the performance of the ML classifier is compromised if the data are not normally distributed.

Goals of Supervised Classification

There are three goals in the supervised classification of pixels (Landgrebe, 2003):

• The classes must have informational value. They must correspond to a category useful to humans, such as a land cover type.
• The classes must be separable. A pixel must be unambiguously assignable to one class or another.
• The set of classes must be exhaustive. That is, every pixel must have a class into which it can fall – including a class labeled "other" where pixels may be assigned if they do not neatly fit into an existing informational class.

In practice, it is rarely possible to achieve all three goals at the same time. Classes may overlap, and some pixels could be assigned to more than one class. In those cases, one is forced to pick the most likely class assignment (ML classifier). Other pixels may have such a low probability of belonging to any class that they are labeled "unknown."

The Multivariate Normal Distribution

When the random variable has more than one dimension (a vector) one is dealing with multivariate statistics. The ML classifier makes the assumption that the sample data are multivariate normal. The performance of the ML classifier depends upon how well this assumption holds in reality.

Points to Remember about the Maximum Likelihood Classifier

• It must be kept firmly in mind that the ML classifier is a parametric statistical classifier, and its effectiveness depends on the class sample data being multivariate normal.
• The training samples must be a good and faithful representation of all the samples.
• The class covariance matrices must not be singular. The ML classifier crashes if it encounters a singular covariance matrix.
• There must be very little overlap between classes.

What is Texture?

Researchers agree that texture is a spatial variation in brightness or tone from one pixel to the next within an image. However, there is no absolute or universal consensus as to how texture should be modeled or measured (Davies, 2005; Haralick, et al., 1973; Haralick and Shanmugan, 1974; Parker, 1997; Richards and Jia, 2006).

Texture is not an attribute of a single point or pixel. Texture is a variation between or among several pixels in the neighborhood of a pixel. The texture filter computes a value based on variations in the
neighborhood pixel values. This value is written to the output target pixel.

Methods

The ESRI ArcGIS 9.x Maximum Likelihood Classifier Tool

The ESRI Spatial Analyst Extension must be installed in order to perform classification in ArcGIS 9.x. ESRI ArcGIS 9.x provides only one kind of supervised classifier, the Maximum Likelihood Classification tool in the Spatial Analyst / Multivariate toolbox. It performs a pixel-by-pixel classification of a multilayer dataset.

Some of the Focal Statistics tools in Spatial Analyst may be used to perform limited extraction of context and texture information. ML classification in ArcGIS 9.x is a two-step process (Figure 3). With the addition of some customized VBA code (the core of this project), much more can be done.

Figure 3. The first step of classification: image raster bands and a training fields file (feature class or raster) are input to the Create Signature File step, which produces a signature file (*.gsg).

The first step is to use the Create Signatures tool, which takes the image raster bands as input. Figure 4 shows an RGB orthophoto of some agricultural land which includes a lake. The three bands, red, green, and blue are specified as inputs for the Create Signatures tool. Also required is a file, either a polygon vector file or a raster, which defines which areas are the training samples. The attributes of the polygons or pixels (depending on the file type) indicate the class assignment. Figure 5 shows an example of a file marking the training fields.

Figure 4. FSA Orthophoto of a Rural Scene.
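Conceptually, the Create Signatures step reduces each training class to per-band statistics: a mean vector and a covariance matrix. This numpy sketch uses an invented raster and training-field layout and does not reproduce the .gsg file format:

```python
import numpy as np

def create_signatures(bands, training):
    """bands: (rows, cols, nbands) image; training: (rows, cols) int raster
    where 0 = not a training pixel and >0 = class id. Returns, per class,
    the mean vector and covariance matrix of its training pixels."""
    sigs = {}
    for c in np.unique(training[training > 0]):
        pixels = bands[training == c]          # (n, nbands) sample vectors
        sigs[c] = {"mean": pixels.mean(axis=0),
                   "cov": np.cov(pixels, rowvar=False)}
    return sigs

# Invented 3-band image with two rectangular training fields.
rng = np.random.default_rng(0)
img = rng.normal(100, 5, size=(20, 20, 3))
fields = np.zeros((20, 20), dtype=int)
fields[2:6, 2:6] = 1       # training field for class 1
fields[10:15, 10:15] = 2   # training field for class 2
sigs = create_signatures(img, fields)
print(len(sigs), sigs[1]["mean"].shape, sigs[1]["cov"].shape)  # → 2 (3,) (3, 3)
```

These per-class statistics are exactly what the second step (the ML classification itself) consumes.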
[…] Minimum Distance to Class Means classifier.

[…] the image raster bands (red, green, and blue) as well as the signature file as inputs.

The ML Classification dialog is shown in Figure 7. A name needs to be specified in the dialog for the output raster file. In this example, IMAGINE Image format was chosen by specifying the ".img" extension.
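The decision rule behind the ML tool can be illustrated outside ArcGIS: assign each pixel vector to the class whose multivariate normal log-likelihood, built from a signature's mean and covariance, is largest. This numpy sketch assumes equal priors and uses invented two-band signatures; it is not ESRI's implementation:

```python
import numpy as np

def ml_classify(pixel, signatures):
    """Assign pixel to the class maximizing the Gaussian log-likelihood
    -0.5*ln|Sigma| - 0.5*(x-mu)^T Sigma^-1 (x-mu), with equal priors.
    A singular covariance matrix would make inv() fail here, mirroring
    the singular-covariance caveat noted earlier."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in signatures.items():
        d = pixel - mu
        ll = -0.5 * np.log(np.linalg.det(cov)) - 0.5 * d @ np.linalg.inv(cov) @ d
        if ll > best_ll:
            best, best_ll = c, ll
    return best

sigs = {  # invented 2-band signatures: class -> (mean, covariance)
    "water":  (np.array([30.0, 40.0]), np.array([[25.0, 5.0], [5.0, 25.0]])),
    "forest": (np.array([80.0, 60.0]), np.array([[100.0, 0.0], [0.0, 100.0]])),
}
print(ml_classify(np.array([35.0, 42.0]), sigs))  # → water
```

Note that the covariance term lets a diffuse class claim pixels far from its mean, which is what distinguishes ML from a plain minimum-distance rule.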
Brief descriptions for each texture filter follow.

Gaussian Smoother

[…]

Gabor Filters

[…] Manthalkar, et al., 2003). The Gabor kernel is a sine wave (or cosine wave) modulated by a Gaussian envelope (Figure 10).
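A minimal sketch of such a kernel — a cosine carrier multiplied by a Gaussian envelope — assuming a square kernel and illustrative parameter values (the project's own filters may parameterize it differently):

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (cosine) Gabor kernel: a 2-D cosine wave of the given wavelength
    and orientation theta, multiplied by a Gaussian envelope of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength)       # cosine wave
    return envelope * carrier

k = gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0)
print(k.shape, round(float(k[7, 7]), 3))  # center of kernel → (15, 15) 1.0
```

Convolving an image with a bank of such kernels at several orientations is what makes a multidirectional Gabor filter.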
Hurst Filter

The Hurst filter calculates a Hurst coefficient for each input pixel based on a circular neighborhood. The Hurst coefficient is a measure of fractal dimension similar to that defined by Mandelbrot (1983). The Hurst coefficient is the slope of a regression line (Hurst plot, Figure 13) which plots the log distance from the central pixel versus the log range (max - min).

Figure 13. A Hurst plot: log range (log R) plotted against log distance; the slope of the fitted regression line is the Hurst coefficient.

LBP / VAR Filter

The local binary pattern (LBP) filter and the variance (VAR) filter deal only with the eight immediate neighbors in this particular implementation (Figure 15). These filters were originally developed by the Machine Vision Group at the University of Oulu, Finland (Maenpaa and Pietikainen, 2005; Ojala, et al., 2002; Pietikainen, 2000).

Figure 15. LBP Primitives.
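The Hurst computation just described can be sketched as follows; a square window stands in for the circular neighborhood, and the cumulative distance rings are an assumption of this sketch rather than the project's exact implementation:

```python
import numpy as np

def hurst_coefficient(window):
    """Slope of log(range) vs. log(distance) around the window center."""
    c = window.shape[0] // 2
    y, x = np.indices(window.shape)
    dist = np.hypot(y - c, x - c)
    log_d, log_r = [], []
    for d in np.unique(np.round(dist, 3)):
        if d == 0:
            continue                              # skip the center pixel
        ring = window[dist <= d]                  # pixels within distance d
        log_d.append(np.log(d))
        log_r.append(np.log(ring.max() - ring.min()))
    slope, _ = np.polyfit(log_d, log_r, 1)        # slope of the Hurst plot
    return slope

rng = np.random.default_rng(1)
print(round(float(hurst_coefficient(rng.random((7, 7)) * 255)), 3))
```

Applied per pixel, this slope becomes the output value of the Hurst texture filter.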
Other Custom Tools
One thematic raster is called the reference, and the other, the target. A reference pixel is assumed to be true, and if the corresponding target pixel matches it, it is said that the classification is correct, otherwise it is in error.

This custom tool was created for this very purpose. The classification matches and mismatches are cross-tabulated (Table 3). The reference classes are shown along the horizontal axis and vertical axis. Tallies along the main diagonal of the matrix are correct, and all off-diagonal tallies are errors.

The correct tallies by column divided by the column totals are called producer's accuracy, and the correct tallies by row divided by the row totals are called user's accuracy. The total correct divided by the overall total is the overall accuracy.

It is expected that a certain number of pixels will be classified correctly by random chance. Many statisticians consider κ, or "kappa," to be a better figure of merit than overall accuracy (Congalton, 1991; Congalton, et al., 1983; Rosenfield and Fitzpatrick-Lins, 1986). Kappa, as a population parameter, is:

(1.1)  κ = (po − pc) / (1 − pc)

po is the probability of observing a correct classification, and pc is the probability of a correct classification happening by chance. A sampled estimate for kappa is called kappa-hat (κ̂).

The kappa-hat-variance tells how good that estimate is. The formula for the kappa-hat-variance is rather complicated; see Hudson and Ramm (1987). The Create Error Matrix Tool calculates all of the above, as well as a 95% confidence interval for kappa.

Table 3. An Error Matrix.
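The producer's, user's, and overall accuracies defined above, together with kappa-hat from equation (1.1), can be computed directly from an error matrix; the 3 × 3 matrix below is invented for illustration (it is not Table 3):

```python
import numpy as np

# Rows = target (classified) classes, columns = reference (true) classes.
m = np.array([[50,  4,  1],
              [ 3, 40,  5],
              [ 2,  6, 60]], dtype=float)

n = m.sum()
overall = np.trace(m) / n                  # total correct / overall total
producers = np.diag(m) / m.sum(axis=0)     # correct by column / column totals
users = np.diag(m) / m.sum(axis=1)         # correct by row / row totals

# kappa-hat per eq. (1.1): p_c estimated from the row/column marginals
p_o = overall
p_c = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
kappa_hat = (p_o - p_c) / (1 - p_c)

print(round(float(overall), 3), round(float(kappa_hat), 3))  # → 0.877 0.814
```

Kappa-hat is lower than overall accuracy because it discounts the agreement expected by chance.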
The Bhattacharyya Matrix Tool

The Bhattacharyya Matrix tool (not on the Thematic Classification toolbar, but under ArcTools) takes an ESRI classification signature file and generates a cross-tabulation matrix of statistical distances between the classes as typified by their signatures. The Bhattacharyya distance is the distance between two distributions in feature space. The formula for the B-distance is:

(1.2)  D = (1/8) (μ1 − μ2)T [ (Σ1 + Σ2)/2 ]⁻¹ (μ1 − μ2) + (1/2) ln( | (Σ1 + Σ2)/2 | / √( |Σ1| |Σ2| ) )

The μ variables are the centroids of the distributions and the Σ variables are the covariance matrices. The B-distance is used in the MultiSpec© software by Landgrebe and Biehl (1995), and is considered to be one of the best measures of class separation (Landgrebe, 2003; Richards and Jia, 2006). For an example of a B-matrix, see Table 4.

Table 4. Bhattacharyya Matrix.

Bhattacharyya matrix for ArcGIS signatures file:
C:\Grad_Project\Development\TextureClassification\rgb_lake.gsg
Type = 1
Number of Classes = 7
Number of Layers = 3
Parameter Layers = 3

[…] from an RGB raster dataset. It also extracts hue, saturation, and value as separate layers.

Show Raster Cell Values Tool

This modal tool shows the pixel value at the cursor of the topmost active layer. For RGB datasets, the values of all three layers are shown.

Autocorrelation Tools

The two autocorrelation tools allow one to select an area of the image and create an autocorrelation plot of that selection. This can be used to help select the scale for applying texture filters.

Miscellaneous Tools

Additional utility tools are included in the Thematic Classification toolbar:

• Extract Using Mask
• Histogram Equalization
• Compare Rasters
• Convert Raster Type
• General Help

The General Help selection is really not a tool, but rather a set of Help dialogs that gives additional information beyond what is available with each software tool.

Datasets
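Returning to equation (1.2): given two class signatures (a mean vector and a covariance matrix each), the Bhattacharyya distance can be computed in a few lines. The signature values below are invented for illustration (they are not those of Table 4):

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """B-distance of eq. (1.2): a Mahalanobis-like term for the separation
    of the means plus a term for the dissimilarity of the covariances."""
    cov = (cov1 + cov2) / 2
    d = mu1 - mu2
    term1 = d @ np.linalg.inv(cov) @ d / 8
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, cov1 = np.array([30.0, 40.0]), np.diag([25.0, 25.0])
mu2, cov2 = np.array([80.0, 60.0]), np.diag([100.0, 100.0])
print(round(float(bhattacharyya(mu1, cov1, mu2, cov2)), 3))  # → 6.023
```

Larger distances indicate better-separated classes, which is why a B-matrix with small off-diagonal entries warns of likely confusion between those class pairs.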
Figure 17. 12-Tile RGB Mosaic.
Alaska Wetland Data
Results
[…] (GLCM) statistics, and the Multidirectional Gabor filter, in descending order.

[…] relatively slow. The primary reason for its slow performance had to do with computing the order statistics: median and range. The software uses the Quicksort algorithm, which has an average time complexity of O(n log n), but is nonetheless slower than computing statistical moments, which are O(n).

Table 5. Summary of Filter Speed Tests.
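The cost difference noted above can be illustrated with two ways of obtaining a neighborhood median: a full O(n log n) sort versus an expected-O(n) selection (here via `numpy.partition`). The project's own tools used Quicksort, so this is an illustration of the complexity argument, not their code:

```python
import numpy as np

def median_by_sort(values):
    """O(n log n): fully sort the neighborhood, then take the middle value."""
    s = np.sort(values)
    return s[len(s) // 2]          # assumes an odd-sized neighborhood

def median_by_selection(values):
    """Expected O(n): selection places only the middle element correctly."""
    k = len(values) // 2
    return np.partition(values, k)[k]

window = np.array([7, 1, 9, 3, 5, 8, 2, 6, 4])  # a 3x3 neighborhood, flattened
print(median_by_sort(window), median_by_selection(window))  # → 5 5
```

Moments (mean, variance) remain cheaper still, since they need only a single O(n) pass with no data movement.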
[…] handle a raster dataset as large as 284 megabytes. The tools threw "out of memory" exceptions. Thus, the decision was made to analyze a somewhat smaller section measuring 1600 × 2500 meters, resulting in an 11.8 megabyte file (Figure 25).

Figure 25. A smaller (1600 × 2500) area of interest.

Before undergoing classification, the dataset was analyzed using the Principal Components Analysis tool in ArcTools. It was discovered that the three bands, red, green, and blue, are very highly correlated with each other (Table 6).

Table 6. Variance of the Itasca RGB Dataset.

An inspection of the eigenvectors reveals that the red, green, and blue bands make a nearly equal contribution to the first principal component, which can be considered to be total image brightness. Therefore, the strategy of deriving texture information from the value (brightness) of the pixels is considered to be valid, as suggested by Maenpaa and Pietikainen (2004). Principal components analysis was also performed on the color IR dataset, and the result was very similar to Table 6.

It was decided to combine the color IR dataset (IR, red, and green layers) with the blue layer from the RGB dataset. Principal component analysis of the four layers yielded the results in Table 7.

Table 7. Variance of the Itasca IRGB (4-layer) Dataset.

Princ. Comp.      1         2        3        4
Eigenvalues:   8268.59   837.38   302.01    8.42
% Variance:     87.81%    8.89%    3.21%    0.09%
Eigenvectors:
IR               0.652   -0.128    0.735    0.135
red              0.558   -0.014   -0.361   -0.747
green            0.504   -0.001   -0.568    0.651
blue             0.092    0.992    0.089    0.008

Table 8 illustrates the poor accuracy of the IRGB-only classification. This is to be expected, because of the granularity of forest images. Each tree casts a shadow, so a pixel-by-pixel classification scheme is bound to fail.

Table 8. Classification Results of IRGB Layers.

In Figure 26, it is evident that the brightness varies over the course of several meters, but the variance in brightness pattern varies from one stand of trees to another. Note that if the imaging device had been the Landsat TM+ instrument, the pixel resolution would be 30 meters, and the variance seen here would be lost. In our dataset the spatial resolution is one meter, and the crowns of trees are several meters wide. It is precisely
this spatial variance of brightness between one stand of trees and another that is the basis for our expectation that texture measurement should improve land cover classification.

Figure 26. Autocorrelation analysis of forest.

It was decided that the texture filters should be set so that the raster is subsampled by a factor of 4 before filtering and supersampled by 4 afterwards to restore the raster to full scale.

Registration

Categories

In experiment #2, the results of using four texture methods were compared with the results of using IRGB-only. Categories used for training areas are shown in Table 9, with the training areas shown in Figure 28.

Table 9. Experiment #2 Landcover Categories.

Category         Code   Description
Hardwood           1    ash
"                 12    aspen
"                 13    birch
Conifers          52    red pine
"                 53    jack pine
"                 62    balsam fir
"                 71    black spruce
"                 72    tamarack
Non-productive    75    stagnant spruce
Non-forest        85    lowland brush
"                 96    permanent water
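The subsample-by-4, filter, supersample-by-4 scheme described above can be sketched with array slicing and repetition; nearest-neighbor resampling and the trivial placeholder filter are assumptions of this sketch, not necessarily what the project's tools do internally:

```python
import numpy as np

def at_coarser_scale(raster, factor, texture_filter):
    """Apply texture_filter at a coarser scale: keep every factor-th pixel,
    filter the small raster, then repeat rows/columns to restore size."""
    small = raster[::factor, ::factor]                # subsample
    filtered = texture_filter(small)                  # texture at coarse scale
    full = filtered.repeat(factor, 0).repeat(factor, 1)
    return full[:raster.shape[0], :raster.shape[1]]   # trim to original size

r = np.arange(64, dtype=float).reshape(8, 8)
out = at_coarser_scale(r, 4, lambda a: a * 0 + a.std())  # placeholder "filter"
print(out.shape)  # → (8, 8)
```

Filtering the subsampled raster lets a fixed-size neighborhood capture texture at the spatial scale suggested by the autocorrelation analysis.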
[…] that the hardwood and conifer categories were collapsed (Table 10 and Figure 30).

Figure 29. Experiment #2 Classification Results.

Table 10. Experiment #3 - Collapsed Landcover Categories.

Category          Code
Hardwood            1
Conifers            2
Stagnant            3
Brush               4
Permanent Water     5

[…] with an overall accuracy of 56.1% and a kappa-hat of 32.58%.

Figure 31. Experiment #3 Classification Results.

Experiment #4: Alaska Wetland Dataset, Full Categories

Principal Components Analysis

As was done with the Itasca forest dataset, a principal components analysis of the Alaska wetland dataset was performed first (Table 11). Unlike the Itasca dataset, the first principal component, which carried over 90% of the variance, was heavily weighted in the red band by over 80%. So, rather than using the value band derived from an HSV color transform, it was decided to use the
first principal component as the basis for texture filtering.

Scale

The next task was to perform an autocorrelation on the 1st principal component at various places in the image. Based upon this test it was decided that the 1st principal component should be subsampled by a factor of 10 before applying texture filters, and afterwards supersampled by 10 to bring the result back to full scale.

Full Categories

Table 12 shows the full set of categories used. As with the Itasca dataset, the four texture filters associated with the most accurate classifications in previous experiments were used. The results are displayed in Figure 32.

Table 12. Alaska Dataset Landcover Types.

Wetland_Type                  Code   Attribute
Lake                            5    L1UBH
"                               6    L2EM2/UBH
"                               7    L2EM2H
"                               8    L2USE
Freshwater Emergent Wetland     9    PEM1/2F
"                              10    PEM1/SS1B
"                              11    PEM1/SS1E
"                              12    PEM1/SS1F
"                              13    PEM1/UBF
"                              14    PEM1/UBH
"                              15    PEM1B
"                              16    PEM1E
"                              17    PEM1F
"                              18    PEM1H
"                              19    PEM2/1F
"                              20    PEM2/1H
"                              21    PEM2H
Freshwater Pond                22    PUB/EM1H
"                              23    PUBH

Figure 32. Experiment #4 Classification Results.

It appears that first order statistics yielded the best classification at 50.51% overall accuracy with a kappa-hat of 41.81%. The Laws Energy filter came in second with 44.66% overall accuracy and a kappa-hat of 38.42%. Apparently, the classifier performed much better with full Alaska wetland categories than with the Itasca forestry dataset with full arboreal species categories.

The same experiments were run with three collapsed categories of lake, freshwater emergent wetland, and freshwater pond. The best classification results were obtained with the red, green, and blue bands only, yielding 90.55% overall accuracy and a kappa-hat of 79.88%. Classification involving any of the texture filters yielded poorer results than RGB alone (Figure 33). This was quite unexpected.

Discussion

Experiment #1: VisTex Mosaic

A step-by-step description of the classification performed here is given in the ancillary document Texture Classification Tools for ArcGIS 9.3 (Rohland, 2008a).
Figure 33. Experiment #5 Classification Results.

This was the only experiment performed with a texture mosaic. While the software tools have the ability to perform texture filtering at scales coarser than 1:1, this particular feature was not used. It is very possible that better classification results could have been obtained with a 2:1 or greater scaling.

In this project the results obtained from one section of the project were used to inform choices made for sections following. Because the results obtained from the VisTex Mosaic showed that the Laws Energy filter and the GLCM statistics offered the best balance of accuracy and speed performance, these tools were chosen for analyzing the Itasca and Alaska datasets. First order statistics and Multidirectional Gabor filters were also used for later experiments even though they were slow.

Experiments 2 and 3: Itasca County Forest Dataset

[…] stage of development. The reason for the "out of memory" error is that the tools require the entire dataset to be buffered in RAM at once, rather than handling it piecemeal. For the time being, we will have to deal with the data in smaller pieces. A section of size 2500 × 1600 = 4,000,000 pixels was chosen.

It was noted that classification accuracy improved when the training area categories were collapsed so that all hardwoods were grouped together, and all conifers were grouped together. Apparently, species of trees cannot be reliably identified by texture plus near IR plus RGB. Only when the trees are collapsed into hardwoods and conifers do accuracies start to approach 50%. It is possible that there might not be a strong correlation between arboreal species and texture.

Experiments 4 and 5: Alaska Wetland Data

Apparently the Alaska wetland data were photographed during the Arctic summer. The dataset did not come with metadata, so the date of acquisition is unknown. This area near the shore of the Arctic Ocean appears to be much more than just a frozen wasteland. Texture-based classification fares much better here. With 19 categories of wetland / lake / pond, the overall accuracies approach 50% and, in the case of 1st order statistics, exceed it.

In experiment #5, with three collapsed categories, the classifier overall accuracy exceeded 90% without texture filtering and without subsampling. The use of texture filtering actually worsened classifier performance. The researcher is unable to offer an explanation for this unexpected result.
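Of the filters discussed, the GLCM statistics are the most standard. A minimal sketch of a gray-level co-occurrence matrix and its contrast statistic, in the spirit of Haralick, et al. (1973), using an invented 4-level image (the project's GLCM implementation may differ in offsets and normalization):

```python
import numpy as np

def glcm(img, levels, dy, dx):
    """Normalized co-occurrence counts of gray-level pairs at offset (dy, dx)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: p(i, j) weighted by the squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4, dy=0, dx=1)   # horizontal neighbor pairs
print(round(contrast(p), 3))          # → 0.333
```

Statistics such as contrast, energy, and homogeneity read off from this matrix become the per-pixel texture values when the matrix is built over a moving window.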
[…] filter improves the accuracy of ML classification is quite easy because of the very narrow confidence intervals for kappa-hat. An example follows:

(1.3)  H0: t = 0,  HA: t ≠ 0  (tested at α = 0.05),  where  t = (κ̂1 − κ̂2) / √(σ̂²1 + σ̂²2)

and σ̂²1, σ̂²2 are the kappa-hat variances of the two classifications.

Table 13. Classification Accuracy of Alaska Wetland, RGB-only.

Overall Accuracy = 33.28%
Estimated Kappa-Hat = 0.2563
Est. Kappa-Variance = 7.5516E-9
95% Confidence Interval = [0.2561, 0.2565]

Table 14. Classification Accuracy of Alaska Wetland, Multidirectional Gabor Filter.

Perhaps an ML pixel classifier is not the appropriate instrument for landcover segmentation, at least not in the problem domains tried here. What this study does demonstrate is that texture filters can be used to significantly improve the accuracy of an ML pixel classifier when used with real-world data.

Suggestions for Future Research and Development

The software tools (texture filters and all) in their present form suffice as a proof of concept. However, before they could be used in a full production setting or be sold commercially, several things would have to happen:
• […] extra overhead handling cells containing "NoData."
• One could build a new classifier. The ML classifier is not the highest performing classifier for landcover classification (Rohland, 2008b; Tso and Mather, 2001). For classes that have a multimodal distribution, a non-parametric Parzen window classifier could be used, for example. Higher-level segmentation could be performed by using a Hidden Markov Model and simulated annealing (Tso and Mather, 2001). There are many approaches to classification that could be tried as an adjunct to ESRI ArcGIS imaging.

Acknowledgements

The author wishes to express his gratitude to the following persons for this project:

• Mr. John Ebert, committee chairman, faculty of St. Mary's University of MN Resource Analysis Dept.
• Mr. Jeff Knopf from Geospatial Services, St. Mary's University in Winona, MN, who provided the Alaska Wetland dataset.
• Mr. Tim Loesch, Mr. Tim Aunan, and Mr. Jim Rack of the Minnesota Department of Natural Resources, who provided the Itasca County Forestry dataset.
• Mr. Kurt Swendson, alumnus of St. Mary's University GIS Program, fellow student and comrade, for his insight and advice regarding programming with ESRI ArcObjects.
• The faculty of St. Mary's University of MN Department of Resource Analysis, especially Dr. David McConville, department chair, without whom this project would not have been possible.

Appendix: Ancillary Documents

Details of texture based classification and the particular software tools used here can be found in the following ancillary documents:

• Texture Classification Tools for ESRI ArcGIS 9.3: A Technical Walkthrough: "how-to" information about the software tools described in this study (Rohland, 2008a).
• An Overview of Classification Methods for Remote Sensing: information on theory and issues related to classification (Rohland, 2008b).
• Texture Classification Tools Help Dialogs: an emphasis on the actual methods and formulas used in each software tool (Rohland, 2008c).

References

Campbell, J. B. 2002. Introduction to remote sensing. The Guilford Press, New York.
Clausi, D. A. and Jernigan, E. M. 2000. Designing Gabor filters for optimal texture separability. Pattern Recognition 33: 1835-1849.
Congalton, R. G. 1991. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sensing of Environment 37: 35-46.
Congalton, R. G., Oderwald, R. G. and Mead, R. A. 1983. Assessing Landsat classification accuracy using discrete multivariate analysis statistical techniques. Photogram. Eng. Remote Sens. 49: 1671-1678.
Davies, E. R. 2005. Machine vision: theory, algorithms, practicalities. Elsevier, San Francisco, CA.
Haralick, R. M., Shanmugan, K. and Dinstein, I. 1973. Textural features for image classification. IEEE Transactions on Systems, Man & Cybernetics 3: 610-621.
Haralick, R. M. and Shanmugan, K. S. 1974. Combined spectral and spatial processing of ERTS imagery data. Remote Sensing of the Environment 3: 3-13.
Hudson, W. D. and Ramm, C. W. 1987. Correct formulation of the Kappa coefficient of agreement. Photogram. Eng. Remote Sens. 53: 421-422.
Idrissa, M. and Acheroy, M. 2002. Texture classification using Gabor filters. Pattern Recognition Letters 23: 1095-1102.
Landgrebe, D. A. 2003. Signal theory methods in multispectral remote sensing. John Wiley & Sons, Inc., Hoboken, NJ.
Landgrebe, D. A. and Biehl, L. 1995. MultiSpec software. Multispectral classification software. West Lafayette, IN: Laboratory for Applications in Remote Sensing (LARS). http://cobweb.ecn.purdue.edu/~biehl/MultiSpec/download_win.html
Laws, K. I. 1980a. Rapid texture identification. Proceedings of the SPIE 238, Image Processing for Missile Guidance: 376-380.
Laws, K. I. 1980b. Textured Image Segmentation. University of Southern California: Los Angeles. 178 pp.
Maenpaa, T. and Pietikainen, M. 2004. Classification with color and texture: jointly or separately? Pattern Recognition 37: 1629-1640.
Maenpaa, T. and Pietikainen, M. 2005. Texture analysis with local binary patterns. Handbook of Pattern Recognition and Computer Vision, 3rd ed., pp. 197-216. World Scientific, Singapore.
Mandelbrot, B. B. 1983. The fractal geometry of nature. W.H. Freeman & Co., New York.
Manthalkar, R., Biswas, P. K. and Chatterji, B. N. 2003. Rotation invariant texture classification using even symmetric Gabor filters. Pattern Recognition Letters 24: 2061-2068.
MIT Media Lab. 1995. VisTex texture database. http://vismod.media.mit.edu/pub/VisTex/
Ojala, T., Pietikainen, M. and Maenpaa, T. 2002. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis & Machine Intelligence 24: 971-987.
Parker, J. R. 1997. Algorithms for image processing and computer vision. John Wiley & Sons, New York.
Pietikainen, M. K., ed. 2000. Texture analysis in machine vision. World Scientific Publishing Co. Pte. Ltd., Singapore.
Richards, J. A. and Jia, X. 2006. Remote sensing digital image analysis. Springer-Verlag, Berlin.
Rohland, R. M. 2008a. Texture Classification Tools for ESRI ArcGIS 9.3: A Technical Walkthrough. 63 pp. http://www.rmrohland.com
Rohland, R. M. 2008b. An Overview of Classification Methods for Remote Sensing. 84 pp. http://www.rmrohland.com
Rohland, R. M. 2008c. Texture Classification Tools Help Dialogs. 39 pp. http://www.rmrohland.com
Rosenfield, G. H. and Fitzpatrick-Lins, K. 1986. A coefficient of agreement as a measure of thematic classification accuracy. Photogram. Eng. Remote Sens. 52: 223.
Tso, B. and Mather, P. M. 2001. Classification methods for remotely-sensed data. Taylor & Francis, London.