International Journal of Electronics and Communication Engineering and Technology (IJECET)
Volume 8, Issue 1, January-February 2017, pp. 18-31, Article ID: IJECET_08_01_003
Available online at
http://www.iaeme.com/IJECET/issues.asp?JType=IJECET&VType=8&IType=1
ISSN Print: 0976-6464 and ISSN Online: 0976-6472
IAEME Publication
Hyperspectral Imagery Classification using Technologies of Computational Intelligence
Priya G. Deshmukh
Prof. M. P. Dongare
Assistant Professor, Electronics Department, Amrutvahini College of Engineering,
Sangamner, Maharashtra, India
ABSTRACT
Texture information is exploited for the classification of hyperspectral imagery (HSI) at high
spatial resolution. For this purpose, the framework employs local binary patterns (LBPs) to extract
local image features such as edges, corners, and spots. After LBP feature extraction, two levels of
fusion are applied together with Gabor features and the original spectral features: feature-level
fusion and decision-level fusion. In feature-level fusion, multiple features are concatenated before
pattern classification. Decision-level fusion, in contrast, operates on the probability outputs of each
individual classification pipeline and combines the distinct decisions into a final one. Decision-level
fusion uses either a hard or a soft fusion method: hard fusion takes a majority vote, while soft fusion
applies a linear or logarithmic opinion pool (LOGP) at the probability level. In addition, an extreme
learning machine (ELM) classifier, which is more efficient than the support vector machine (SVM), is
used to provide the probability classification outputs. It has a simple structure with one hidden layer
and one linear output layer, and it trains much faster than an SVM.
Key words: Decision fusion, extreme learning machine (ELM), Gabor filter, hyperspectral imagery
(HSI), local binary patterns (LBPs), pattern classification.
Cite this Article: Priya G. Deshmukh and Prof. M. P. Dongare, Hyperspectral Imagery
Classification using Technologies of Computational Intelligence, International Journal of
Electronics and Communication Engineering and Technology, 8(1), 2017, pp. 18-31.
http://www.iaeme.com/IJECET/issues.asp?JType=IJECET&VType=8&IType=1
1. INTRODUCTION
The aim is to develop an innovative technique to classify hyperspectral images using tools of
computational intelligence. Classification of hyperspectral imagery (HSI) at high spatial resolution is
done by exploiting texture information. The method uses local binary patterns to extract local features,
and a simple, efficient extreme learning machine with a very simple structure is employed as the
classifier. Many algorithms have been proposed to improve local image features for the classification
of hyperspectral images. Currently, feature-level fusion simply concatenates a pair of different features
(i.e., Gabor features, LBP features, and spectral features) in the feature space.
A) LBP Features: LBP for HSI classification works on a gray-scale image with a single spectral band.
In this method, linear prediction error (LPE) is used for unsupervised band selection: LPE is first
applied to select a set of informative and distinct bands. For each selected band, an LBP code is
generated for every pixel in the image, producing an LBP code image. From the LBP code image, a local
LBP image patch is extracted and its histogram is calculated. The performance of LPE is better than
that of principal component analysis.
After LBP feature extraction, two levels of fusion are applied together with Gabor features and
spectral features: feature-level fusion and decision-level fusion. In feature-level fusion, multiple
features are concatenated before pattern classification, while decision-level fusion operates on the
probability outputs of each individual classification pipeline and combines the distinct decisions into
a final one. Decision-level fusion uses either a hard fusion method (majority voting) or a soft fusion
method (a linear or logarithmic opinion pool at the probability level, LOGP).
B) ELM: The extreme learning machine (ELM) classifier is used to provide probability classification
outputs from the LBP features. ELM is a neural-network-based method with a very simple structure
consisting of one hidden layer and one linear output layer. It is much faster than comparable
techniques because the input weights are generated randomly and the output weights are computed
analytically (by the least-squares method), which reduces the computational cost.
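The training recipe just described (random input weights, analytical least-squares output weights) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the hidden-layer size L, the regularization parameter rho, and the sigmoid activation are placeholder choices.

```python
import numpy as np

def train_elm(X, Y, L=50, rho=100.0, seed=0):
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, L))                 # random input weights (never trained)
    b = rng.normal(size=L)                      # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # hidden-layer outputs (sigmoid)
    # Regularized least squares for the linear output layer:
    # beta = (H'H + I/rho)^-1 H'Y, solved analytically in one step.
    beta = np.linalg.solve(H.T @ H + np.eye(L) / rho, H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                             # one output node per class
```

With one-hot (+1/-1) label rows as in the paper, the predicted class is the index of the largest output node. The only "training" is a single linear solve, which is why ELM trains much faster than an SVM.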
2. LITERATURE REVIEW
There is great interest in exploiting spatial information to improve HSI classification. In previous
systems, an SVM classifier with composite kernels was employed to combine both spectral and spatial
information, referred to as SVM-CK [9]. Further research includes SVM-MRF [10], based on the
segmentation map obtained by a pixel-wise SVM classifier, and the Gaussian mixture model classifier
(MRF-GMM). Moreover, the morphological profile (MP) [7], generated by certain morphological operators,
is widely used for modeling structural information. MPs are extracted from principal components (PCs),
but fine structures tend to be present in minor PCs rather than in major PCs. Before SVM-CK [8], kernel
discriminant analysis was employed, but it suffered from a heavy computational load. Later research
includes Gabor texture features, Gabor texture features combined with the gray-level co-occurrence
matrix, different MPs, the urban complexity index, and Gabor features with band selection.
3. PROBLEM STATEMENT
Hyperspectral image processing has been a very dynamic area in remote sensing and other applications in
recent years. Hyperspectral images provide ample spectral information to identify and distinguish
spectrally similar materials for more accurate and detailed information extraction. A wide range of
advanced classification techniques is available based on spectral and spatial information. To improve
classification accuracy, it is essential to identify and reduce uncertainties in the image-processing
chain. A large number of high-spatial-resolution images is available through various advances in sensor
technology. In conventional HSI classification systems, classifiers consider only spectral signatures
and ignore the spatial information at neighboring locations. We therefore focus on the classification
of hyperspectral images using local binary patterns and technologies of computational intelligence.
4. PROPOSED SOLUTION
There are two primary stages in this method: effective texture feature extraction, and fusion of the
extracted local LBP features, global Gabor features, and original spectral features. First, LPE is
applied to the image for band selection, generating a gray-scale image per selected band. Then an LBP
code is generated for each pixel in the image; a local LBP image patch is extracted, and its histogram
is calculated. After this, fusion of the extracted local LBP features is carried out along with the
Gabor and spectral features. In this process, LOGP plays a vital role in merging the probability
outputs of the multiple texture and spectral features. The Gabor filter is used as a global operator to
capture global texture properties such as orientation and scale, while LBP characterizes local spatial
textures such as edges, corners, and knots. Finally, the Gabor and LBP texture features are combined to
improve HSI classification.
5. SYSTEM OVERVIEW
5.1. Hyper Spectral Image Classification Approaches
The prefix "hyper" means "over", i.e., too many, and refers to the large number of measured wavelength
bands. Hyperspectral images are spectrally over-determined, which provides ample spectral information
to recognize and distinguish spectrally unique materials. Hyperspectral imagery thus offers the
potential for more accurate and detailed information extraction than any other type of remotely sensed
data [1]. Hyperspectral images are 3-D data, with a spectral signature for the scene spread over
several bands. Generally, the high-dimensional spectral information is used to perform operations such
as pixel-by-pixel classification of the scene. Band selection and feature extraction methods have been
developed to improve the performance of parametric classifiers such as maximum likelihood (ML),
distance classifiers, and clustering methods. However, the classification accuracies of these methods
do not match those achieved for gray-scale/color images.

Identifying groups of pixels that have similar spectral characteristics, and determining the various
features represented by these groups, is an important part of image analysis called classification. To
classify an image visually, the analyst relies on visual elements (tone, contrast, shape, etc.).
Digital classification, in contrast, uses the spectral information from which the image was created:
each individual pixel is classified based on its spectral characteristics, and all pixels in the image
are then assigned to particular classes (e.g., water, coniferous forest, deciduous forest, corn,
wheat). The classified image is called a thematic map of the original image. Classification is
therefore performed for the observation of land-use patterns, geology, vegetation types, or rainfall.
In image classification we must distinguish between spectral classes and information classes: spectral
classes are groups of pixels with approximately uniform spectral characteristics. The main objective
of image classification procedures is to automatically categorize all pixels in an image into
land-cover classes.

Based on the pixel information used, classifiers are grouped into per-pixel, sub-pixel, per-field,
knowledge-based, contextual, and multiple-classifier approaches. Per-pixel classifiers are parametric
or non-parametric. Depending on the use of training samples, images can be classified by supervised or
unsupervised classification. Unsupervised classification is the identification of natural groups.
Supervised classification is the method of using samples of known identity to assign unclassified
pixels to one of several informational classes. The supervised method follows steps such as feature
extraction, training, and labeling. In the first step, the image is transformed into a feature image to
reduce the data dimensionality and improve interpretability; this optional phase comprises techniques
such as the IHS (intensity-hue-saturation) transformation, principal component analysis, and the linear
mixture model. In the training phase, a set of training samples is selected in the image to specify
each class. The training samples train the classifiers to identify the classes and are used to
determine the rules that allow the assignment of a class label to each pixel in the image.
Hyperspectral image classification approaches are classified as shown in Figure 1.
Figure 2 Example of an input image, the corresponding LBP image and histogram
Figure 3 The circular (8,1), (16,2), and (8,2) neighborhoods. The pixel values are bilinearly
interpolated whenever the sampling point is not in the center of a pixel.
"shift-difference" statistics. In this method, the difference between adjacent pixels is assumed to be an
estimate of noise.
6. SYSTEM ANALYSIS
6.1. Band Selection
Hyperspectral images consist of a large number of spectral bands, many of which contain redundant
information. By selecting a subset of spectral bands with distinctive and informative features, band
selection methods such as LPE [1] reduce the dimensionality. Linear projections such as PCA can also
transform the high-dimensional data into a lower-dimensional subspace. Previous studies [1], [5]
investigated both LPE and PCA for spatial-feature-based hyperspectral image classification and found
that the classification performance of LPE was superior to that of PCA; the reason might be that fine
spatial structures tend to be present in minor PCs rather than in major PCs. Thus, band selection
(i.e., LPE) is employed in this research.

LPE [1] is a simple, efficient band-selection method based on band-similarity measurement. Assume there
are two initial bands B1 and B2. Every other band B can be approximated as
X̂_B = a0 + a1 X_B1 + a2 X_B2, where a0, a1, and a2 are the parameters that minimize the linear
prediction error e = ||X_B − X̂_B||². Let the parameter vector be a = (a0, a1, a2)ᵀ. A least-squares
solution is a = (X_B1B2ᵀ X_B1B2)⁻¹ X_B1B2ᵀ X_B, where X_B1B2 is an N×3 matrix whose first column is
all 1s, whose second column is the B1 band, and whose third column is the B2 band. Here, N is the total
number of pixels, and X_B is the B-th spectral band. The band that produces the maximum error e is
considered the most dissimilar to B1 and B2, and it is selected. Using these three bands, a fourth band
can be found with the same strategy, and so on. More implementation details can be found in [5].
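The greedy LPE loop above can be sketched as follows. This is an illustrative reconstruction, not the reference implementation of [1] or [5]: the seed band pair and the flattening of each band into a pixel column are assumptions.

```python
import numpy as np

def lpe_select_bands(cube, n_bands, seed_pair=(0, 1)):
    """Greedy LPE band selection: repeatedly add the band that is worst
    predicted (largest linear prediction error e) by the bands selected so far."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)        # N x B, one column per band
    selected = list(seed_pair)
    while len(selected) < n_bands:
        # Design matrix: all-ones column plus the already-selected bands.
        A = np.column_stack([np.ones(X.shape[0])] + [X[:, b] for b in selected])
        errors = np.full(B, -np.inf)
        for b in range(B):
            if b in selected:
                continue
            coef, *_ = np.linalg.lstsq(A, X[:, b], rcond=None)  # least-squares fit
            errors[b] = np.sum((X[:, b] - A @ coef) ** 2)       # prediction error e
        selected.append(int(np.argmax(errors)))  # most dissimilar band wins
    return selected
```

A band that is (nearly) a linear combination of the selected bands has e close to zero and is never picked, which is exactly the redundancy-removal behavior described above.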
Figure 6 Example of LBP binary thresholding. (a) Center pixel t_c and its eight circular neighbors
{t_i}, i = 0, ..., 7, with radius r = 1. (b) 3×3 sample block. (c) Binary labels of the eight neighbors.

With center pixel t_c, each of its neighbors t_0 to t_7 is assigned a binary label, either 0 or 1,
depending on whether its intensity is below or above that of the center pixel t_c. All the samples are
equispaced at radius r, where r is the distance between a neighbor and the center pixel. For m
neighbors {t_i}, i = 0, ..., m−1, the LBP code for t_c is given by

    LBP_{m,r}(t_c) = Σ_{i=0}^{m−1} U(t_i − t_c) 2^i,        (1)

where U(x) = 1 if x ≥ 0 and U(x) = 0 otherwise.
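The thresholding and bit-packing of equation (1) can be sketched for the common (m, r) = (8, 1) case. As a simplification, the circular neighborhood of Figure 3 is approximated here by the eight immediate square neighbors, so no bilinear interpolation is performed; the neighbor ordering is an arbitrary but fixed choice.

```python
import numpy as np

def lbp_code_image(img):
    """Basic (m, r) = (8, 1) LBP: threshold the 8 neighbors of each interior
    pixel at the center intensity and pack the bits into a code in [0, 255]."""
    img = np.asarray(img, dtype=float)
    # Offsets of the 8 neighbors t_0..t_7 (fixed ordering, used consistently).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros(center.shape, dtype=np.uint8)
    for i, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy: img.shape[0] - 1 + dy,
                       1 + dx: img.shape[1] - 1 + dx]
        codes |= ((neighbor >= center).astype(np.uint8) << i)  # U(t_i - t_c) * 2^i
    return codes

def lbp_histogram(codes, bins=256):
    """Normalized histogram of an LBP patch: the local texture feature."""
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```

Running `lbp_code_image` over a band produces the LBP-coded image; `lbp_histogram` over a local patch then yields the feature vector that enters the fusion stage.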
Figure 8 Example. (a) Input image. (b) LBP-coded image (different intensities representing different
codes). (c)-(f) Filtered images obtained by the Gabor filter with different orientations θ. (c) Gabor
feature image, θ = 0. (d) Gabor feature image, θ = π/4. (e) Gabor feature image, θ = π/2. (f) Gabor
feature image, θ = 3π/4.

Figure 8 illustrates a comparison between LBP and Gabor features in a natural image (namely, boat) of
size 256×256. Figure 8(b) shows the LBP-coded image obtained using (1) with (m, r) = (8, 1), and
Figure 8(c)-(f) illustrates the filtered images obtained by the Gabor filter with different
orientations θ (i.e., 0, π/4, π/2, and 3π/4). In Figure 8, the Gabor features, produced by the average
magnitude response of each Gabor-filtered image, reflect the global signal power, while the LBP-coded
image gives a better expression of detailed local spatial features such as edges, corners, and knots.
Hence, for better results it is necessary to apply the global Gabor filter as a supplement to the local
LBP operator, which lacks consideration of distant pixel interactions. As stated earlier, the Gabor
filter captures the global texture information of an image while LBP represents the local texture
information. HSI data usually contain homogeneous regions where pixels fall into the same class; Gabor
features can reflect such global texture information because the Gabor filter effectively captures the
orientation and scale of physical structures in the scene. Hence, combining Gabor and LBP features can
achieve better classification performance than using LBP features alone.
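The "average magnitude response" Gabor feature described above can be sketched as follows. The kernel parameters (Gaussian width sigma, wavelength lam, kernel size) are illustrative choices, not values taken from the paper, and only the real part of the Gabor function is used.

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=11):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope times a cosine
    carrier oriented along angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_feature(img, theta, **kw):
    """Average magnitude response of the Gabor-filtered image: the single
    global texture feature per orientation, as used in Figure 8."""
    k = gabor_kernel(theta, **kw)
    half = k.shape[0] // 2
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    padded = np.pad(img, half, mode="reflect")
    for dy in range(k.shape[0]):                     # direct 2-D convolution
        for dx in range(k.shape[1]):
            out += k[dy, dx] * padded[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return np.abs(out).mean()
```

A kernel whose orientation matches the dominant structure in the image responds strongly, which is how the orientation of physical structures in the scene is captured.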
6.5. Classifier
ELM [2], [4] is a classifier based on a neural network with only one hidden layer and one linear output
layer. The weights between the input and hidden layers are randomly assigned, and the weights of the
output layer are computed using a least-squares method, which makes the computational cost much lower
than that of other neural-network-based methods. For C classes, let the class labels be defined as
y_k ∈ {1, −1} (1 ≤ k ≤ C). A constructed row vector y = [y_1, ..., y_k, ..., y_C] then indicates the
class to which a sample belongs: for example, if y_k = 1 and the other elements of y are −1, then the
sample belongs to the kth class. The training samples and corresponding labels are represented as
{x_i, y_i}, i = 1, ..., n, where x_i ∈ R^d and y_i ∈ R^C. The output function of an ELM with L hidden
nodes can be expressed as

    f_L(x_i) = h(x_i) β,

where h(x_i) is the hidden-layer output for sample x_i and β is the matrix of output weights.
For better stability and generalization, a positive value 1/ρ is normally added to each diagonal
element of HHᵀ, where H stacks the hidden-layer outputs h(x_i) of all training samples and Y stacks
their label vectors. As a result, the output function of the ELM classifier is expressed as

    f_L(x_i) = h(x_i) β = h(x_i) Hᵀ (I/ρ + HHᵀ)⁻¹ Y.        (8)
In ELM, the feature mapping h(x_i) is assumed to be known. Recently, kernel-based ELM (KELM) [4] has
been proposed by extending the explicit activation functions in ELM to implicit mapping functions,
which exhibit a better generalization capability. If the feature mapping is unknown, a kernel matrix
for ELM can be considered:

    Ω_ELM = HHᵀ,  with  Ω_ELM(i, j) = h(x_i) · h(x_j) = K(x_i, x_j).        (9)

Hence, the output function of KELM is given as

    f(x) = [K(x, x_1), ..., K(x, x_n)] (I/ρ + Ω_ELM)⁻¹ Y.        (10)
The label of the input data is finally determined according to the index of the output node with the
largest value. In these experiments, the kernel version of ELM is implemented. Training an ELM takes
only one analytical step, whereas the standard SVM needs to solve a large constrained optimization
problem. It will be demonstrated that ELM can provide a classification accuracy that is similar to, or
even better than, that of SVM.
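Equations (8)-(10) amount to a single linear solve. The sketch below assumes a Gaussian RBF kernel (the kernel choice and the parameters rho and gamma are assumptions for illustration, not fixed by the text):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix: K(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, Y, rho=100.0, gamma=0.5):
    """One analytical step, eq. (10): solve (I/rho + Omega_ELM) alpha = Y."""
    omega = rbf_kernel(X, X, gamma)                       # Omega_ELM, eq. (9)
    return np.linalg.solve(np.eye(len(X)) / rho + omega, Y)

def kelm_predict(Xtest, Xtrain, alpha, gamma=0.5):
    """f(x) = [K(x, x_1), ..., K(x, x_n)] alpha; label = argmax output node."""
    return rbf_kernel(Xtest, Xtrain, gamma) @ alpha
```

Contrast this one `solve` call with the iterative constrained optimization an SVM requires; this is the computational advantage the text refers to.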
The conditional class probability should be higher for a larger output of the decision function.
Platt's empirical analysis using a scaling function of the following form is adopted:

    p_q(y_k | x) = 1 / (1 + exp(A_k f_ELM(x) + B_k)),        (11)

where p_q(y_k | x) denotes the conditional class probability of the qth classifier, f_ELM(x) is the
output decision function of each ELM, and (A_k, B_k) are parameters estimated for the ELM of class k
(1 ≤ k ≤ C). The parameters A_k and B_k are found by minimizing the cross-entropy error over the
validation data; note that A_k is negative. In the proposed framework, LOGP [3], [5] uses the
conditional class probabilities to estimate a global membership function P(y_k | x) as a weighted
product of these output probabilities. The final class label y is given according to

    y = arg max_{k ∈ {1, ..., C}} P(y_k | x),        (12)

where the global membership function is

    P(y_k | x) = Π_{q=1}^{Q} p_q(y_k | x)^{α_q},        (13)

    log P(y_k | x) = Σ_{q=1}^{Q} α_q log p_q(y_k | x),        (14)

with the classifier weights {α_q}, q = 1, ..., Q, uniformly distributed over all of the classifiers,
and Q being the number of pipelines (classifiers) in Figure 10.
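The soft fusion of equations (12)-(14) reduces to a weighted sum of log-probabilities followed by an argmax. A minimal sketch, assuming each pipeline already outputs a (samples × classes) probability matrix:

```python
import numpy as np

def logp_fuse(prob_stacks, alphas=None):
    """Logarithmic opinion pool (LOGP) decision fusion, eq. (12)-(14):
    combine per-pipeline class probabilities into a global membership
    function and pick the class with the largest fused value."""
    P = np.asarray(prob_stacks)           # shape (Q pipelines, n samples, C classes)
    Q = P.shape[0]
    if alphas is None:
        alphas = np.full(Q, 1.0 / Q)      # uniform classifier weights alpha_q
    # eq. (14): log P(y_k|x) = sum_q alpha_q * log p_q(y_k|x)
    log_global = np.tensordot(alphas, np.log(P + 1e-12), axes=1)
    return log_global.argmax(axis=1)      # eq. (12): final class labels
```

Because the pool multiplies probabilities, a pipeline that is confidently wrong is outvoted only if the others are confidently right; this is the "soft" counterpart of hard majority voting.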
7. CONCLUSION
In this paper, a framework based on LBP was proposed to extract local image features for the
classification of HSI. Specifically, LBP was applied to a subset of the original bands selected by the
LPE method. Two types of fusion (at the feature level and the decision level) were defined on the
extracted LBP features along with the Gabor features and the selected spectral bands. A soft
decision-fusion process for ELM utilizing LOGP was proposed to merge the probability outputs of the
multiple texture and spectral features. The experimental results show that local LBP representations
are effective for HSI spatial feature extraction because they encode the image texture configuration
while providing local structure patterns. Moreover, the decision-level fusion of kernel ELM provides
effective classification and is superior to SVM-based methods. At the feature level, fusion simply
concatenates the different features (i.e., Gabor features, LBP features, and spectral features) in the
feature space.
REFERENCES
[1] C. Chen, W. Li, H. Su, and K. Liu, "Spectral-spatial classification of hyperspectral image based on
kernel extreme learning machine," Remote Sens., vol. 6, no. 6, pp. 5795-5814, Jun. 2014.
[2] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for
soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, no. 27, pp.
207-216, Mar. 2014.
[3] W. Li, S. Prasad, and J. E. Fowler, "Decision fusion in kernel-induced spaces for hyperspectral
image classification," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 6, pp. 3399-3411, Jun. 2014.
[4] Y. Bazi et al., "Differential evolution extreme learning machine for the classification of
hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 6, pp. 1066-1070, Jun. 2014.
[5] Z. Guo, L. Zhang, and D. Zhang, "Rotation invariant texture classification using LBP variance
(LBPV) with global matching," Pattern Recogn., vol. 43, no. 3, pp. 706-719, Mar. 2010.
[6] X. Kang, S. Li, and J. A. Benediktsson, "Spectral-spatial hyperspectral image classification with
edge-preserving filtering," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 5, pp. 2666-2677, May 2014.
[7] M. Fauvel, J. A. Benediktsson, J. Chanussot, and J. R. Sveinsson, "Spectral and spatial
classification of hyperspectral data using SVMs and morphological profiles," IEEE Trans. Geosci.
Remote Sens., vol. 46, no. 11, pp. 3804-3814, Nov. 2008.
[8] C. Chen and J. E. Fowler, "Single image super-resolution using multihypothesis prediction," in
Proc. 46th Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, USA, Nov. 2012, pp. 608-612.
[9] C. Chen et al., "Multihypothesis prediction for noise-robust hyperspectral image classification,"
IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 4, pp. 1047-1059, Apr. 2014.
[10] X. Huang and L. Zhang, "An SVM ensemble approach combining spectral, structural, and semantic
features for the classification of high-resolution remotely sensed imagery," IEEE Trans. Geosci.
Remote Sens., vol. 51, no. 1, pp. 257-272, Jan. 2013.
[11] Sorna Percy G. and T. Arumuga Maria Devi, "An Efficiently Identify the Diabetic Foot Ulcer Based
on Foot Anthropometry Using Hyper Spectral Imaging," International Journal of Information Technology &
Management Information System, 7(2), 2016, pp. 36-44.
[12] Preethi N. Patil and G. G. Rajput, "Detection and Classification of Non Proliferative Diabetic
Retinopathy Stages Using Morphological Operations and SVM Classifier," International Journal of
Computer Engineering & Technology, 4(6), 2013, pp. 1-8.