
Instituto Superior Técnico

Report:

Remote sensing

Academic year 2015-16

Teacher: Pedro Pina

Edited by:

Antonela Colic 85534


Claudio Finocchiaro 85539
Franco Franchino 85537

Table of contents

Problem 1
    Thesis statement
    Dissertation
Problem 2
    Thesis statement
    Dissertation
Problem 3
    Thesis statement
    Dissertation
Problem 4
    Thesis statement
    Dissertation
Problem 5
    Thesis statement
    Dissertation
Problem 6
    Thesis statement
    Dissertation

Problem 1
Thesis statement:
Consider the image from the GeoEye-1 satellite, panchromatic band with a spatial resolution of 0.50
m/pixel, of the city of Funchal, acquired on 23rd February 2010.
Increase the contrast of the image using different approaches based on the grey level histogram of
the image. Which one achieved better results? Justify the answer, comparing and commenting on the
several outputs.

Dissertation:
Two main objectives were considered in this problem: the study of the debris flow from the landslides
in Funchal and the identification of the city buildings.
For this purpose it was necessary to choose a contrast enhancement method, based on the values at
both ends of the histogram, among those available in the SPRING software. These methods are: linear,
square root, logarithm, square, histogram equalization and negative.
The method should be chosen depending on the information that is intended to be underlined in the
picture. The square root and logarithm types are suitable for highlighting the darker areas, whereas
the square method highlights the lighter areas. These enhancement techniques consist mainly in the
modification of the contrast of the images and depend strongly on the type of application, that is,
a given technique valid for one set of images can be unusable on another set.
The enhancement of the visual aspect of the input image (Figure 1) can be conveniently performed
so that its content can be analyzed faster and more easily.

Figure 1 Initial (input) image - Funchal2010.tif

The increase of contrast can be obtained individually for each pixel with different types of transfer
functions. This contrast enhancement procedure is based on two values of the histogram (Table 1).
Normally these are its extreme values, minimum and maximum, which are used by default for this
operation.

As can be seen, the chosen minimum and maximum grey levels are 7 (69,457 pixels) and 80 (971 pixels),
so these values will be used in the operations.
Table 1 Values of the histogram (pixel = grey level, mono = pixel count). The extreme values used, 7 and 80, were highlighted in the original.

pixel  mono     |  pixel  mono   |  pixel  mono  |  pixel  mono
0      0        |  31     20629  |  61     2716  |  91     239
1      56       |  32     18777  |  62     2427  |  92     222
2      30       |  33     17743  |  63     2158  |  93     194
3      53       |  34     18910  |  64     2000  |  94     184
4      76       |  35     15798  |  65     1761  |  95     146
5      224      |  36     15032  |  66     1606  |  96     151
6      544      |  37     14607  |  67     1467  |  97     128
7      69457    |  38     13534  |  68     1317  |  98     110
8      1350760  |  39     11655  |  69     1234  |  99     93
9      1261573  |  40     11795  |  70     1166  |  100    91
10     772633   |  41     9379   |  71     1057  |  101    96
11     625618   |  42     8724   |  72     1042  |  102    85
12     454403   |  43     7649   |  73     1014  |  103    63
13     362669   |  44     6857   |  74     1060  |  104    76
14     451549   |  45     6853   |  75     1203  |  105    40
15     411993   |  46     5671   |  76     1489  |  106    45
16     243580   |  47     4893   |  77     1674  |  107    33
17     160439   |  48     4250   |  78     1385  |  108    28
18     155365   |  49     3987   |  79     1101  |  109    24
19     129693   |  50     3902   |  80     971   |  110    29
20     119998   |  51     4491   |  81     843   |  111    31
21     103554   |  52     3750   |  82     733   |  112    20
22     82662    |  53     3544   |  83     664   |  113    27
23     75187    |  54     3564   |  84     651   |  114    30
24     54327    |  55     3437   |  85     668   |  115    26
25     45572    |  56     3306   |  86     565   |  116    24
26     38414    |  57     3274   |  87     448   |  117    16
27     33080    |  58     3354   |  88     373   |  118    26
28     29212    |  59     3191   |  89     298   |  119    16
29     29546    |  60     2918   |  90     270   |  120    21
30     23128    |

The choice of transfer function depends on what is intended to be enhanced in the images. For
instance, if the regions of interest correspond to dark components of the image it is preferable to
use the square root or logarithm types, whereas for light regions the square or exponential functions
are preferable.
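
For illustration only, the following sketch shows how such transfer functions could be applied outside SPRING (the software actually used): it stretches an 8-bit band between the histogram extremes 7 and 80 found above with linear, square-root, logarithmic, square and negative mappings. NumPy is assumed, and the random array is only a stand-in for the Funchal band.

import numpy as np

def stretch(band, vmin, vmax, mode="linear"):
    """Map grey levels in [vmin, vmax] onto the full 0-255 range."""
    x = np.clip(band.astype(np.float64), vmin, vmax)
    x = (x - vmin) / (vmax - vmin)            # normalize to [0, 1]
    if mode == "sqrt":                        # lifts dark areas
        x = np.sqrt(x)
    elif mode == "log":                       # lifts dark areas even more
        x = np.log1p(255.0 * x) / np.log(256.0)
    elif mode == "square":                    # emphasizes light areas
        x = x ** 2
    elif mode == "negative":
        x = 1.0 - x
    return np.round(x * 255.0).astype(np.uint8)

# Example usage with a random stand-in for the panchromatic band:
band = np.random.randint(0, 121, size=(512, 512), dtype=np.uint8)
linear = stretch(band, 7, 80, "linear")
sqrt_s = stretch(band, 7, 80, "sqrt")
log_s  = stretch(band, 10, 60, "log")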
The window with the available options can be seen in Figure 2. It allows choosing the function
(operation) to apply and setting the minimum and maximum values of the histogram. The channel
selector allows choosing the color channel on which to operate, but in this problem we did not use
this option.

Figure 2 Contrast options and operations.

TRANSFER FUNCTIONS

Figure 3 Linear operation with values between 7 and 80.

Figure 4 Logarithm operation with values between 10-60.

Figure 5 Square root operation with values between 7-80.

Figure 6 Square operation with values between 7-80 .

Figure 7 Negative operation with values between 2-45.

Figure 8 Histogram equalization operation with values between 7-80.

Conclusion:
After studying the images obtained, it was found that the histogram equalization method allows a
good visualization of the debris flow into the ocean. It also reveals the existence of small lighter
areas in the water that can be confused with debris, but a comparison with the other images suggests
that they correspond to waves.
The linear method was more suitable for displaying the built-up areas of the city, with the
buildings and streets standing out clearly.
On the other hand, the logarithm and square methods are those which best enhance the concentration
of debris in the various zones of the ocean. With the second method, it was still possible to
observe what appears to be layers of sediment.
The total area covered by debris is best defined in the image obtained by the square root method.

Finally, it was found that the negative method was not suitable for achieving the objectives
initially set, since it did not highlight any of the areas under study.
In conclusion, for the study of the debris in the ocean, the best result would be obtained by
combining the histogram equalization, logarithm, square and square root methods, as each one
highlights a different property. For the second objective, i.e. the identification of buildings and
streets, we recommend the application of the linear method.

Problem 2
Thesis statement:
Consider the image captured by the HiRISE camera on board the Mars Reconnaissance Orbiter with a
spatial resolution of 0.25 m/pixel.
The image shows the impact crater named Victoria, located at 2.0° S latitude and 354.5° E longitude
on the Martian surface.
Apply the several filters you know to delineate the edge of the crater and estimate its average
diameter. Which one has produced the best results? Comment on it.

Dissertation:
The aim of this problem is to highlight the edge of the crater in an image of high spatial
resolution (Figure 9).

Figure 9 Initial image.

For this purpose, it was necessary to eliminate the details of the image through filtering methods,
which consist in the suppression of information at different scales.
The information contained in an image can be symbolically described through the following model:
Image data = regional pattern + local pattern + noise
           = background + foreground + noise
           = low frequencies + high frequencies + noise
Normally, a given pattern is obscured by a local variability pattern or by noise. The removal of
those features from the image through filtering may enhance that pattern, making it more clearly
visible.
The simplest spatial filter consists in computing, for each point in a given region, the average of
its attribute and those of its neighboring points.

For this, we used the linear low-mid pass filter (Figure 10), whose function is the blurring of
structures, causing an attenuation of the details, which results in a blurring of the boundaries.
There are other options of linear filtering (Figure 11), but we used only the 7x7 low-mid pass
filter, with different numbers of iterations.

Figure 10 Linear low-mid pass filtering: average 7x7, 20 iterations.

Figure 11 Linear low-mid pass filtering options: 3x3, 5x5 and 7x7.

However, this filter alone did not show satisfactory results; consequently, the Sobel operator
(Figure 12) was applied, a high-pass filter normally used for edge detection in digital images, in
order to identify the discontinuities or abrupt changes in the grey level values of the pixels.
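
A minimal sketch of this two-step sequence (repeated 7x7 averaging followed by a Sobel gradient), assuming NumPy and SciPy instead of the SPRING filters actually used; the random input is only a placeholder for the HiRISE image.

import numpy as np
from scipy import ndimage as ndi

def low_pass_then_sobel(image, size=7, iterations=20):
    smoothed = image.astype(np.float64)
    for _ in range(iterations):               # low-mid pass: blur the details
        smoothed = ndi.uniform_filter(smoothed, size=size)
    gx = ndi.sobel(smoothed, axis=1)          # horizontal gradient
    gy = ndi.sobel(smoothed, axis=0)          # vertical gradient
    return np.hypot(gx, gy)                   # gradient magnitude = edge strength

edges = low_pass_then_sobel(np.random.rand(256, 256))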

Figure 12 Sobel operator applied to the average 7x7, 20 iterations image.

Figure 13 Non-linear filters: Sobel operator.

The final goal of this problem is the estimation of the average crater diameter. Therefore, using
the metric operations of the SPRING software, we made four measurements of the crater diameter in
different directions and then computed the arithmetic average of the values. The measurement
direction of each diameter and its value are shown in the following figures.

Diameter = 736 m

Diameter = 701 m

Diameter = 610 m

Diameter = 747 m

The arithmetic average of the values is (736 + 701 + 610 + 747) / 4 = 698.5 m.

Problem 3
Thesis statement:
Consider the Landsat 5 TM satellite image, constituted by 6 spectral bands with a spatial resolution
of 30 m/pixel. The image is geometrically corrected and shows a region of Ribatejo, in Chamusca
county, where a section of the Tagus river is noticeable in the top-left corner of the image.
1. Make several color compositions with the several bands. Which one permitted a visually higher
discrimination of the scene? Why?
2. Compute different spectral vegetation and wetness indexes. Comment on the results obtained.
3. Compute a brightness image from the scene. What operations were involved?
4. Compute the dimensionality of this dataset. Include the results considered more relevant to
better justify the answer.

Dissertation:
1. In this first phase, we made various combinations of bands that make it possible to highlight
the kinds of objects in the image corresponding to farm areas, forest areas and water lines.
It is important to consider that vegetation reflects a greater amount of solar energy, making the
image lighter in these zones, while water absorbs the radiation, producing dark tones.
In this analysis we used six different bands with their respective wavelengths, namely:

TM1: Blue band (0.4 µm to 0.5 µm);
TM2: Green band (0.5 µm to 0.6 µm);
TM3: Red band (0.6 µm to 0.7 µm);
TM4: NIR band (0.7 µm to 1.3 µm);
TM5: MIR band (1.3 µm to 3 µm);
TM7: MIR band (1.3 µm to 3 µm);

The combination of these different bands allows for the following images:

Figure 14 Image obtained by combination of TM3 TM2 TM1 (RGB).

Figure 15 Image obtained by combination of TM4 TM2 TM1 (RGB).

Figure 16 Image obtained by combination of TM4 TM3 TM2 (RGB).

Figure 17 Image obtained by combination of TM5 TM4 TM1 (RGB).

Figure 18 Image obtained by combination of TM7 TM4 TM3 (RGB).

Figure 19 Image obtained by combination of TM7 TM5 TM3 (RGB).

The image obtained by combining bands TM3, TM2 and TM1 (Figure 14) is similar to what the human eye
would perceive (true color). The substitution of the TM3 band (Red) by TM4 (Near Infrared)
(Figure 15) makes it possible to distinguish the cultivated fields (range of green colors) from the
uncultivated ones (reddish color).
The combination of TM4, TM3 and TM2 (Figure 16) is the one that best highlights the sediments
deposited along the river, since the blue wavelength has been removed.

In Figure 17, with the combination of bands TM5, TM4 and TM1, we can see that the tone of the
vegetation is reinforced by the TM5 and TM4 bands. In this figure the green color identifies the
forest areas, while the lighter green identifies the farm areas, where there is recent flora. In
this image it is still possible to easily identify the dark areas where the presence of water is
confirmed.
Figure 18, composed of the TM7, TM4 and TM1 bands, introduces the Medium Infrared TM7 band, which
differs from TM5 in having a longer wavelength (and therefore a lower associated energy); it further
enhances the green color, the contrast and the outlines of the structures.
Figure 19 was obtained from bands TM7, TM5 and TM3. The predominance of green tones can be justified
by the use of two medium infrared bands. The identification of the various greens in the figure
may be useful for the classification of plant species.
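
As an aside, a color composite of this kind can be sketched outside SPRING as follows; NumPy is assumed, and tm4, tm3, tm2 are hypothetical names for already-loaded band arrays of equal shape.

import numpy as np

def composite(r_band, g_band, b_band):
    """Stack three bands into an RGB image scaled to [0, 1] for display."""
    rgb = np.dstack([r_band, g_band, b_band]).astype(np.float64)
    rgb -= rgb.min()
    if rgb.max() > 0:
        rgb /= rgb.max()   # simple global scaling; a per-band stretch is also possible
    return rgb

# e.g. the false-color composite of Figure 16: fc = composite(tm4, tm3, tm2)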
2. In this section we discuss the spectral vegetation and wetness indices. First of all we
calculated the NDVI index. It assumes that vegetation strongly reflects in the NIR band and
absorbs the radiation in the Red band. This means that the lighter grey areas correspond to
more densely vegetated areas. Comparing the image obtained with the NDVI index with the
images initially obtained by the combination of the bands, we can see that the lighter areas
coincide with the high density forest, while the areas with water correspond to the dark areas
(Figure 20).
NDVI = (TM4 - TM3) / (TM4 + TM3)

Figure 20 Image obtained by NDVI index.

NDWI = (TM4 - TM5) / (TM4 + TM5)

Figure 21 Image obtained by NDWI index (TM5).

NDWI = (TM4 - TM7) / (TM4 + TM7)

Figure 22 Image obtained by NDWI index (TM7).

In these images we can see that the areas with water are those with lighter shades of grey, as can
be seen in the river area, and the abundance of light tones in the forest zone can be explained by
the fact that the plants retain moisture.
The main difference between using the TM5 or the TM7 band to calculate the index appears in the
forest areas. The TM7 band intensifies the greens compared to the TM5 band, brightening the image.
As a consequence, the difference of tones between the wet and non-wet areas, such as the residential
area, is more significant when using the TM7 band.
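
A minimal sketch of the indices defined above, assuming NumPy is available and that tm3, tm4, tm5 and tm7 are placeholders for the loaded Landsat bands (the report computed them with SPRING):

import numpy as np

def normalized_difference(a, b):
    """(a - b) / (a + b), leaving pixels where the sum is zero at 0."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    denom = a + b
    out = np.zeros_like(denom)
    np.divide(a - b, denom, out=out, where=denom != 0)
    return out

# ndvi     = normalized_difference(tm4, tm3)   # vegetation index
# ndwi_tm5 = normalized_difference(tm4, tm5)   # wetness index with TM5
# ndwi_tm7 = normalized_difference(tm4, tm7)   # wetness index with TM7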
3. To compute a brightness image from the scene we used the arithmetic average of the six bands
considered. Five operations were needed in the SPRING software, with a gain value of 1/6 in the
first three operations and 1 in the remaining two, and an offset value of 0 in all of them (a
minimal sketch of this computation is given after the list of operations).
1 Operation = (TM1 + TM2) x 1/6
2 Operation = (TM3 + TM4) x 1/6
3 Operation = (TM5 + TM7) x 1/6
4 Operation = 1 Operation + 2 Operation
5 Operation = 3 Operation + 4 Operation = Average brightness image
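
The same computation, collapsed into a single expression, could be sketched as follows; NumPy is assumed and tm1 ... tm7 are hypothetical names for the loaded bands.

import numpy as np

def brightness(bands):
    """Pixel-wise arithmetic mean of a list of equally sized bands."""
    stack = np.stack([b.astype(np.float64) for b in bands])
    return stack.mean(axis=0)   # same result as the five SPRING operations above

# bright = brightness([tm1, tm2, tm3, tm4, tm5, tm7])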

Figure 23 Image obtained by 1 Operation.

Figure 24 Image obtained by 2 Operation.

Figure 25 Image obtained by 3 Operation.

Figure 26 Image obtained by 4 Operation.

Figure 27 Image obtained by 5 Operation: Average brightness image.

4. The dimensionality of this dataset can be studied through Principal Component Analysis (PCA).
Table 2 Covariance and correlation matrix.

The results obtained are the eigenvalues, the percentage of information represented by each axis,
and the covariance and correlation matrices. The dimensionality of the dataset can be explained
with the first two axes, because together they represent 98.30% of the information (Table 2).
The first component image is very similar to the average brightness image, because the vast
majority of the information (94.24%) is contained in this axis. As can be seen in the images of
the remaining axes, from the second component onwards the images progressively lose intensity.
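
A minimal sketch of such an analysis, computed from the band covariance matrix with NumPy (the report used the SPRING principal components tool):

import numpy as np

def principal_components(bands):
    """Return the component images and the % of variance explained by each axis."""
    x = np.stack([b.astype(np.float64).ravel() for b in bands], axis=1)
    x -= x.mean(axis=0)                          # centre each band
    cov = np.cov(x, rowvar=False)                # 6x6 covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]             # re-sort in descending order
    eigval, eigvec = eigval[order], eigvec[:, order]
    components = x @ eigvec                      # project the pixels onto the axes
    explained = 100.0 * eigval / eigval.sum()
    shape = bands[0].shape
    return [c.reshape(shape) for c in components.T], explained

# pcs, explained = principal_components([tm1, tm2, tm3, tm4, tm5, tm7])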

Figure 28 Image PCI PC1

Figure 29 Image PCI PC2.

Figure 30 Image PCI PC3.

Figure 31 Image PCI PC4.

Figure 32 Image PCI PC5.

Figure 33 Image PCI PC6.

Problem 4
Thesis statement:
Consider the image captured by the MOC-NA camera on board the Mars Global Surveyor probe with a
spatial resolution of 1.65 m/pixel.
The image shows a sand dune field on the surface of Mars. Using the morphological operators that
we know, process the binary image of the dunes:
1. To filter the noise;
2. To find markers of the medium/large and large dunes;
3. To estimate approximately the main direction of displacement of the dunes.

Dissertation:
1. Considering that the noise corresponds to small white spots in dark areas and small black spots
in light areas, it was possible to filter it through erosion and dilation processes using the
MTOT (square) structuring element.
Initially, one erosion was applied to the original image (Figures 34-35), followed by four
iterations of dilation (Figure 36), in order to remove the noise in all areas of the image.
Although the noise was removed, the dilation caused an increase of the white areas, so it was
necessary to apply erosion with three iterations (Figure 37), so that the shapes resumed their
initial size. In general the iterations of erosion and dilation should be balanced, as in this
case (a minimal sketch of this sequence is given after the figures below).

Figure 34 Initial image.

Figure 35 Image after one iteration of erosion.

Figure 36 Image after four iterations of dilation.

Figure 37 Image after three iterations of erosion.
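
The noise-filtering sequence described in point 1 could be sketched as follows, assuming NumPy and SciPy rather than the morphological tools actually used; the binary dune image is a placeholder.

import numpy as np
from scipy import ndimage as ndi

def filter_noise(binary, square=3):
    """One erosion, four dilations, three erosions with a square structuring element."""
    struct = np.ones((square, square), dtype=bool)
    out = ndi.binary_erosion(binary, structure=struct, iterations=1)   # removes small white spots
    out = ndi.binary_dilation(out, structure=struct, iterations=4)     # removes small black spots
    out = ndi.binary_erosion(out, structure=struct, iterations=3)      # shrinks the shapes back
    return out

# cleaned = filter_noise(dunes > 0)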

2. In order to identify the medium and large dunes, 30 iterations of the erosion method were
performed to eliminate the smaller dunes. Then, in order to restore the initial shape of the
remaining dunes, 30 iterations of the dilation method were carried out. Figure 38 allows the
comparison of the result of this process with the figure reported in point 1: the green areas
were eliminated (the smaller dunes and some small parts of the larger ones), while the yellow
color identifies the medium and large dunes.

Figure 38 Image with the large and medium dunes represented in yellow and the remaining ones in green.

In order to find only the largest dunes, it was necessary to carry out the process mentioned above
increasing the number of iterations to 50, for both the erosion and the dilation method. The
increase in the number of iterations was needed to remove elements with a larger area than the
previous ones.

Figure 39 Image with the large dunes represented in yellow and the others in green.

3. In order to study the main direction of displacement of the dunes, directional structuring
elements should be used for the filtering, such as the vertical (M|), horizontal (M-) and diagonal
(M\ and M/) elements.
Considering that the structuring elements must have the same effective size so that the resulting
images can be compared, the number of iterations used with the diagonal elements must be multiplied
by √2 with respect to the vertical and horizontal ones. The images shown below were obtained: the
first two from 99 iterations of the erosion method and the other two from 70 (99 ≈ 70 × √2).

Figure 40 Image obtained by 99 iterations of erosion method with structuring element M\ (Md).

Figure 41 Image obtained by 99 iterations of erosion method with structuring element M/ (Me).

Figure 42 Image obtained by 70 iterations of erosion method with structuring element M|.

Figure 43 Image obtained by 70 iterations of erosion method with structuring element M-.

By applying the erosion method with each structuring element, we can see the main direction of
displacement from the elements that remain. Finally, we can conclude that the main direction of
displacement of the dunes is along the main diagonal, M\, because in this situation the remaining
clear zones are the smallest compared with the other directions.

Problem 5
Thesis statement:
Consider the two images from the surface of Mars, one showing an impact crater, the other a typical
polygonal network from a permafrost region. The images were acquired in distinct regions of the
planet by different probes: the crater image by the HiRISE camera on board Mars Reconnaissance
Orbiter with a resolution of 0.25 m/pixel, while the polygonal network image belongs to the MOC
camera of Mars Global Surveyor with a resolution of 3 m/pixel. Make the segmentation of the crater
and of the polygons of the network using the segmentation techniques you are acquainted with.
Present different processing sequences to justify the best solution obtained for each image.

Dissertation:
Segmentation is a process that allows the subdivision of the image into regions with the same
meaning, considering groups of similar pixels. In this way, it is possible to delimit homogeneous
regions with a certain spectral characteristic or property, such as basins or craters.
There are several methods that allow segmentation; the most common of them is the watershed. This
algorithm simulates the flooding of a surface, treating the digital image as a topographic surface
in which the higher the grey level, the higher the elevation. Flooding starts from each minimum of
the image, and the water level increases gradually and with the same speed in all regions.
Another segmentation method that was used is region growing, which, starting from a pixel, absorbs
the similar pixels near it and grows.
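
A minimal sketch of the region growing idea just described, assuming NumPy (the report used the SPRING implementation); the seed coordinates and the tolerance are hypothetical parameters.

import numpy as np
from collections import deque

def region_growing(image, seed, tolerance=10):
    image = image.astype(np.float64)
    grown = np.zeros(image.shape, dtype=bool)
    grown[seed] = True
    total, count = image[seed], 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):        # 4-connected neighbours
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1] and not grown[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tolerance:   # similar to the region mean
                    grown[nr, nc] = True
                    total += image[nr, nc]
                    count += 1
                    queue.append((nr, nc))
    return grown

# mask = region_growing(crater_image, seed=(120, 140), tolerance=5)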
Victoria Crater

The initial image of the Victoria crater is very detailed (Figure 44), so, before proceeding to the
segmentation, it was necessary to smooth it. For this reason the image was smoothed with the average
filter 30 times (Figure 45), then the Sobel filter was applied (Figure 46), and finally the average
filter was applied again 10 times (Figure 47).

Figure 44 Initial image.

Figure 45 smoothed image by applying the average linear filter (x30).

Figure 46 smoothed image by applying the average linear filter (x30) then Sobel filter.

Figure 47 smoothed image by applying the average linear filter (x30) then Sobel filter and in the end again average linear filter
(x10).

Applying the watershed to the initial image, the result obtained can be seen in Figure 48. We
observe an over-segmentation that does not allow obtaining any information about the basin boundary.
For this reason, the watershed was applied to the final filtered image with three different digital
levels (ND, which corresponds to the selection of the minimum depth considered meaningful, from
which the regions or structures of interest are defined). Initially a null ND was considered, then
a value equal to 1 and, finally, an ND of 2 (Figures 49-51).
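
For illustration only, a comparable watershed step could be sketched with SciPy and scikit-image (assumed libraries; the report used SPRING, and the marker threshold below only loosely plays the role of the ND level):

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_crater(smoothed, marker_threshold):
    gradient = sobel(smoothed)                           # topographic surface to flood
    markers, _ = ndi.label(smoothed < marker_threshold)  # seeds in the darkest regions
    return watershed(gradient, markers)

# labels = segment_crater(filtered_image, marker_threshold=40)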

Figure 48 Watershed applied to the initial image.

Figure 49 Watershed applied to the filtered image with a null ND.

Figure 50 - Watershed applied to the filtered image with ND=1.

Figure 51 Watershed applied to the filtered image with ND=2

As can be seen, the image smoothed by the linear filter is more blurred. This occurs because both
the minimum and the maximum grey level values are replaced by the average value of the neighboring
pixels, which leads to an attenuation of the noise and of the details. Therefore, it is concluded
that the use of the average filter leads to better results.
In this case it was more appropriate to use only the watershed for the segmentation, because if we
had used the region growing method we would have obtained a breakdown of the crater into its light
part and its dark part. We would therefore not have obtained the contour around the crater from
which it is possible to measure the diameter (Figure 52).

Figure 52 Diameter calculation of the crater (167 m).

Polygonal network

The initial image of the polygonal network has a resolution of 3 m/pixel. In this case both
segmentation methods, watershed and region growing, were applied with different parameter values,
both to the initial image and to the image filtered by closing.

Figure 53 initial image.

Figure 54 region growing applied to the initial image (s1 - a300).

Figure 55 watershed applied to the initial image.

Figure 56 closing filter applied to the initial image.

Figure 57 region growing applied to the closing image (s1).

Figure 58 region growing applied to the closing image (s1 - a20).

Figure 59 region growing applied to the closing image (s1 - a300).

Figure 60 watershed applied to the closing image.

As can be seen, there was no particular difference when applying the region growing method to the
initial and to the filtered image, unlike the case of the watershed. In fact, if we apply the
watershed to both images, we can see an over-segmentation in the first case and a well delineated
outline of the polygons in the second case. The region growing method was also applied with
increasing values of its area (pixel) parameter, from which it can be seen that increasing this
value makes the larger polygons better surrounded at the expense of the smaller ones, so it is
difficult to find a balanced value for polygons of various sizes.
Comparing the two methods, we can say that for this type of application it is more appropriate to
use the region growing method, because it outlines the polygons in more detail, unlike the
watershed, in which the contours are not adjacent to the polygons but in the middle of the lighter
areas between one polygon and the next.

Problem 6
Thesis statement:
Consider the satellite image from the GeoEye-1 satellite over a region of Madeira Island, acquired
3 days after the catastrophic meteorological event of 20th February 2010. Several tens of small
landslide scars, created during that event of extreme rainfall, are clearly discernible in this
region.
This is a multispectral image constituted by 4 individual images (R, G, B and NIR) with a spatial
resolution of 2 m/pixel.
You are asked to obtain a thematic map of the region, proposing a legend with the categories that
you consider most relevant to describe this surface.
Indicate also the best performance obtained, with adequate detail, using different automatic
classification methods. Compare also the results between pixel and object based methods.

Dissertation:
Classification is defined as a decision process based on object recognition, assigning labels to
selected image items so that they are recognized within classes, which are defined according to the
study to be performed. It is also necessary to define the criteria for assigning labels to each of
the elements of the image, in order to have a representative selection of objects. The
classification is performed for each existing pixel in the image, which is assigned to an object
class such as water, clouds, trees, etc. Pixels are characterized by a vector whose elements
represent the grey levels in each spectral band, holding the properties of that point.
In this problem different classification methods are applied to the collected images of the region
of Madeira. The information provided is composed of images in the Red, Green, Blue and NIR bands.
Contrast

Contrast manipulation improves the visual legibility of the images and therefore their
interpretation. Starting from the image presented in Figure 61, a linear transformation of the
color histogram distribution is performed in each of the R, G and B channels.

Figure 61 Initial Image GeoEye1_NIR(RGB)

By manipulating the image contrast and superimposing the channels it is possible to see the details
and colors more clearly. It is possible to recognize the terrain, as shown in Figure 62.

Figure 62 Contrasted image from GeoEye1_NIR(RGB).

Classification

The classification aims to recognize the objects and patterns that make up the image and the
distribution of the pixels. For this it is necessary to define "classes" for each soil type to be
studied. In this case 7 classes are recognized, labeled with the following names:
1. Water
2. Clouds
3. Shadows
4. Vegetation
5. Bushes and Trees
6. Bare Soil
7. Landslides

Using the Training tool, the samples for each class are defined. The function of this tool is to
label by class, defined by grey levels, each pixel with similar levels. A rectangular selection is
performed, using "Acquisition" as the Themes Export option, for a first sample acquisition,
defining each labeled class with a representative color, as shown in Figure 63.

Figure 63 Acquisition theme export.

Then a rectangular selection of sample acquisition is performed in all classes using "Test" as the
Themes Export option, ensuring that there is a reasonable ratio between the size and number of
pixels selected and the previous "Acquisition" samples, and avoiding overlap between them, as shown
in Figure 64. This selection is performed to evaluate the performance of the classification.

Figure 64 Test theme export.

Having acquired samples of each class, we proceed with the classification tool to the analysis of
the image, having as variables the "Classifier" attributes or decision rules, which are:

1. MaxVer: a supervised classification method in which each pixel is assigned to the class to which
it most probably belongs, following a Bayesian rule that minimizes the classification error of the
input data. Maximizing the likelihood, each class is represented by a multivariate normal
distribution with the mean vector and covariance matrix of that class, so that a new value is
assigned to the class with the greatest likelihood.
2. Euclidean: a supervised classification method based on the Euclidean distance measure, used to
decide whether or not a point belongs to a given class. To decide to which class each point
belongs, the distance between the point and each class is calculated, and the class presenting the
lowest distance is the one assigned to that point (a minimal sketch of this rule is given after
this list).
3. K-mediuns (K-means): an unsupervised classification method based on clustering algorithms that
divide a set of points into k clusters, so that the points of each class tend to be close to each
other. The decision rule labels each point with the class whose centre is closest.
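
The sketch below illustrates the Euclidean (minimum distance) rule of point 2 with NumPy; it is an illustration under stated assumptions, not the SPRING classifier, and the band arrays and training masks are placeholders.

import numpy as np

def minimum_distance_classify(bands, class_masks):
    """bands: list of 2-D arrays (R, G, B, NIR); class_masks: {name: boolean training mask}."""
    stack = np.stack([b.astype(np.float64) for b in bands], axis=-1)          # H x W x 4
    names = list(class_masks)
    means = np.array([stack[class_masks[n]].mean(axis=0) for n in names])     # one mean vector per class
    dists = np.linalg.norm(stack[..., None, :] - means, axis=-1)              # distance to every class mean
    return np.array(names)[dists.argmin(axis=-1)]                             # label of the nearest mean

# themes = minimum_distance_classify([r, g, b, nir], {"Water": water_mask, "Clouds": cloud_mask})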

1. MaxVer
a. 100%

Figure 65 MaxVer Classification Image, 100% Acceptance Threshold.

i. Acquisition
Table 3 Acquisition Error Matrix, 100% Acceptance Threshold.

ii. Test

Table 4 Test Error Matrix, 100% Acceptance Threshold.

Analysis
In this case observing the classification matrix error, shows that where there is more confusion is
between classes, "vegetation" and "water", where a large number of pixels corresponding to
vegetation are classified as water and can be seen in the (figure 65). Despite the above, the overall
accuracy is good having a value of 83.17%, as its statistical measure K having a value of 77.97%.
Making a visual comparison between the original image and this classification, this image, there is a
very good classification of areas of vegetation , trees and shadows, but paying attention to the shape
of the cloud a relationship with the initial image is not seen , there are some areas classified as water
that does not correspond.
b. 99%

Figure 66 MaxVer Classification Image, 99% Acceptance Threshold.

i. Acquisition
Table 5 Acquisition Error Matrix, 99% Acceptance Threshold.

ii. Test
Table 6 Test Error Matrix, 99% Acceptance Threshold.

Analysis
In this case also a great confusion seen in the classification matrix of errors between vegetation and
water. Moreover there is a great abstention in "Bare Soil" class that represents approximately 50%
of the pixels. The overall accuracy and statistical measure K also have high values, being 80.68%
and 75.06% respectively. It is expected to have lower percentages because the pixels not classified
in class.
As in the previous image, there is a very good classification of areas of vegetation, trees and
shadows. Unlike the above, there are now no defined areas demarcated with white color in the
vicinity of the cloud, which should represent Landslide, but in turn the shape of the cloud is better
defined, but not so clearly. Also they classified as unexpected water areas are observed.
c. 95%

Figure 67 MaxVer Classification Image, 95% Acceptance Threshold

i. Acquisition
Table 7 Acquisition Error Matrix, 95% Acceptance Threshold

ii. Test
Table 8 Test Error Matrix, 95% Acceptance Threshold.

Analysis
It is necessary to define a level of acceptance associated with each class, which defines the limits
for which it is considered a point within a class. This means that the higher the level is the worst
classification, as they are considered more distant points to the levels of the class. That's why in this
classification there is a greater amount of points undefined.
In this case also the main source of confusion is between the vegetation and water. There is a great
abstention in the "Bare Soils" and "Shadows" class. The overall accuracy and decrease K statistical
measure, being 74.25% and 68.06% respectively. It is expected to have lower percentages because
the pixels not classified in class.
As in previous images, there is a very good classification of areas of vegetation , trees and shadows.
There are also no defined areas demarcated with white color in the vicinity of the cloud , which
should represent Landslide , Now it is possible to observe the shape of the clouds more clearly.
Also they classified as unexpected water areas and the amount of undefined areas do not help a
good appreciation of the image.

2. Euclidean

Figure 68 Euclidean Classification Image.

i. Acquisition
Table 9 Acquisition Error Matrix, Euclidean.

ii. Test
Table 10 Test Error Matrix, Euclidean.

Analysis
This classification method has better performance among classes and water vegetation, which in
previous cases were the most contentious. Overall, the error matrix has few deviations from the
diagonal, with exception of the confusions between "vegetation" classes and "bare soils", this being
even greater than the class which should belong pixels. The overall accuracy and statistical
decreasing compared to previous cases, being 76.22% and 69,54% respectively.
Making a visual comparison between the original image and this classification, using this method a
better definition of all areas is observed, especially of clouds and water, which in previous
occasions were overvalued. They can clearly identify the elements that make up the image.

3. K-Mediuns

Figure 69 K-Mediuns Classification Image, 8 Themes and 10 Iterations.

Analysis
This method does not clearly define the water, being observed in relation to the rest of the images
an excessive overvaluation of this element. The definition of the clouds improves, but there are also
no classification data within it. Unlike previous pictures in this predominantly the presence of bare
soils, which when compared with the initial image has no relationship.
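
For completeness, the unsupervised rule can also be sketched as a plain K-means in NumPy (again an illustration under assumptions, not the SPRING implementation), here with 8 themes and 10 iterations as above:

import numpy as np

def kmeans_classify(bands, k=8, iterations=10, seed=0):
    data = np.stack([b.astype(np.float64).ravel() for b in bands], axis=1)
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), size=k, replace=False)]      # initial centres
    for _ in range(iterations):
        dists = np.linalg.norm(data[:, None, :] - centres, axis=2)    # point-to-centre distances
        labels = dists.argmin(axis=1)                                 # nearest centre
        for j in range(k):                                            # recompute each centre
            if np.any(labels == j):
                centres[j] = data[labels == j].mean(axis=0)
    return labels.reshape(bands[0].shape)

# themes = kmeans_classify([r, g, b, nir], k=8, iterations=10)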

Final Comparison

Figure 70 Image comparison: 1. Contrasted initial image, 2. MaxVer 100%, 3. MaxVer 99%, 4. MaxVer 95%, 5. Euclidean, 6. K-Mediuns.

Statistically, the "MaxVer 100%" classification has the least confusion between pixels, which is
somewhat expected because this method has a greater acceptance range for each grey level.
Making a visual comparison of the images, it is possible to note that the classification made with
the Euclidean method (5) has the greatest similarity to the initial image.
This conclusion comes from observing 3 elements in the image:
1. The shape of the cloud is well defined compared to the other methods.
2. This method recognizes the presence of landslides near the cloud.
3. There is a better representation of the areas of water; in the other methods an overestimation
of this element can be seen.
As discussed in the previous topics, the selection of the image should depend on what one wants to
study, so having a better overall accuracy does not necessarily mean that the classification
faithfully represents the actual image.
