
IAETSD JOURNAL FOR ADVANCED RESEARCH IN APPLIED SCIENCES, VOLUME 4, ISSUE 1, JAN-JUNE /2017

ISSN (ONLINE): 2394-8442

MODIFIED FUZZY HYPER-LINE SEGMENT CLUSTERING NEURAL NETWORK (MFHLSCNN) FOR PATTERN RECOGNITION AND ITS PARALLEL IMPLEMENTATION ON GPU

Priyadarshan Dhabe #1, Akshay Prakash Mahajan *2
# Computer Engineering, Vishwakarma Institute of Technology, Pune, India.
1 priyadarshan.dhabe@vit.edu, 2 akshaymahajan0111@gmail.com

ABSTRACT.

The Modified Fuzzy Hyper-Line Segment Clustering Neural Network (MFHLSCNN) is a modified version of the Fuzzy Hyper-Line Segment Clustering Neural Network (FHLSCNN) [1]. This hybrid system, combining fuzzy logic and neural networks, is used for pattern recognition. MFHLSCNN learns patterns in terms of n-dimensional Hyper-Line Segments (HLSs), which are fuzzy sets. The fuzzy HLSs are created during the training of MFHLSCNN and are defined by two endpoints. After HLS creation, we cluster the HLSs and iteratively remove the clustered ones based on a membership criterion. For a large dataset, MFHLSCNN creates a large number of HLSs, which increases the time of the training and testing phases. In this work, we propose a GPU (Graphics Processing Unit) parallel implementation of MFHLSCNN using CUDA [2] and achieve a 4.64x speedup for the Online Retail dataset [3]. For the parallel implementation, we used a single NVIDIA Tesla K20 GPU and the CUDA (Compute Unified Device Architecture) computing platform.

Keywords: CUDA, Fuzzy Neural Network, Parallel computing on GPU, Pattern Recognition, Market segmentation.

I. INTRODUCTION
The market segmentation process is used by numerous kinds of businesses such as clothing companies, banks, financial institutions, car companies, etc. Market segmentation divides customers into groups on the basis of similar needs, desires and preferences. The basis for segmenting customers into groups can be gender, income, occupation, age group, marital status, etc. In this work, we used the recency, frequency and monetary (RFM) model for segmenting customers into meaningful groups [4].

Pattern recognition is the study of how machines observe, learn and make decisions about categories of patterns. A brief literature survey shows applications of FHLSNN [5] and its modifications in engineering fields such as rotation-invariant handwritten character recognition [6] and in scientific fields such as thyroid disease detection [7] and heart disease detection [8]. FHLSNN has been successfully modified for classification, clustering, and hybrid classification and clustering. The real challenge is to design systems that can take decisions like a human.

A neural network has the ability to deal with uncertainties [9], [10]. The hybrid combination of fuzzy logic and a neural network, i.e. a fuzzy neural network [11], is a commonly used and promising way to meet these challenges. FHLSNN and FHLSCNN [1] are fuzzy neural networks, and thus we consider them in this work. Pattern recognition has become one of the most important problems in engineering and scientific fields. Numerous pattern recognition applications have large datasets and require a large number of computations, so we propose a parallel approach which gives a remarkable speedup.

II. ONLINE RETAIL DATASET


The Online Retail dataset is taken from the UCI repository [3]. It is a transactional dataset containing all transactions that occurred between 01/12/2010 and 09/12/2011. The company mainly sells occasional gifts and most of its customers are wholesalers. Since we are doing a parallel implementation of MFHLSCNN, we needed a large dataset. The Online Retail dataset has a total of 541909 instances and 8 features. This dataset is associated with both classification and clustering tasks.


A. Data Pre-processing

Before performing cluster analysis based on RFM, the original data should be pre-processed. Initially, we have 8 features: Invoice No, Stock Code, Description, Quantity, Invoice Date (invoice date and time), Unit Price, Customer ID and Country. The stepwise data pre-processing is discussed below; a host-side code sketch of these steps follows the list:

i. Select only the variables of interest. In this step, the Description variable is removed.
ii. Assign a unique number to each country in the Country column (for ex. United Kingdom = 1, France = 2, United Arab Emirates = 37).
iii. Separate the attribute Invoice Date into two attributes, Date and Time, so that transactions made by the same consumer/customer on the same date but at different times can be distinguished.
iv. Create a single variable Amount by multiplying Quantity with Unit Price. The new variable gives the total amount spent by a customer on a product in each transaction.
v. Remove whole tuples that have missing data (for ex. a tuple with a missing Customer ID). After removing such tuples, the number of instances is 406830.
vi. Delete the instances of cancelled orders by filtering out invoices starting with C (where C stands for cancellation).
vii. Sort the dataset with respect to the Customer ID column. There are 3448 unique customers in total.
viii. Finally, the target dataset consists of 313999 instances and 7 attribute variables.
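
A minimal host-side C++ sketch of this pipeline, assuming a hypothetical Transaction record and an already-parsed vector of rows (the field names, day index and helper structure are illustrative, not part of the paper's implementation):

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Hypothetical parsed row of the Online Retail dataset.
struct Transaction {
    std::string invoiceNo, stockCode, country, date, time;
    double quantity = 0.0, unitPrice = 0.0;
    int customerId = -1;     // -1 marks a missing Customer ID
    int countryCode = 0;     // numeric code assigned to the country (step ii)
    int dayIndex = 0;        // numeric day index parsed from the Date field (step iii)
    double amount = 0.0;     // Quantity * Unit Price (step iv)
};

// Steps i-viii applied to the raw rows (Description is assumed to be dropped at parse time).
std::vector<Transaction> preprocess(std::vector<Transaction> rows) {
    // ii. Assign a unique number to each country.
    std::map<std::string, int> countryCodes;
    for (auto &t : rows) {
        auto it = countryCodes.find(t.country);
        if (it == countryCodes.end())
            it = countryCodes.emplace(t.country, (int)countryCodes.size() + 1).first;
        t.countryCode = it->second;
    }
    // iv. Amount = Quantity * Unit Price.
    for (auto &t : rows) t.amount = t.quantity * t.unitPrice;
    // v, vi. Drop rows with a missing Customer ID and cancelled orders (invoice starting with 'C').
    rows.erase(std::remove_if(rows.begin(), rows.end(), [](const Transaction &t) {
                   return t.customerId < 0 || (!t.invoiceNo.empty() && t.invoiceNo[0] == 'C');
               }),
               rows.end());
    // vii. Sort by Customer ID.
    std::sort(rows.begin(), rows.end(),
              [](const Transaction &a, const Transaction &b) { return a.customerId < b.customerId; });
    return rows;
}
```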

III. RFM MODEL


The Recency, Frequency and Monetary (RFM) model is used for analysing and segmenting the best customers. The basic concept of this model is to identify customers who have made a purchase recently, who purchase regularly and frequently from your company, and who make large transactions involving a lot of money. The RFM model helps to perform market segmentation in order to find the least profitable customers, regular customers and highly profitable customers. The variables are described below, followed by a small code sketch that computes them per customer:

1. Recency: how recently the customer made a purchase with your company; a customer who has made a transaction recently is more likely to make further transactions.
2. Frequency: how frequently the customer purchases from your company; it also helps predict future purchases.
3. Monetary: the amount of money spent by a customer during a particular time period. Depending on the monetary value, the least profitable, regular and highly profitable customers are treated differently.
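
A hedged sketch of computing R, F and M per customer from the pre-processed transactions, assuming the Transaction record from the earlier sketch and an integer day index for dates (the reference date and field names are our assumptions):

```cpp
#include <algorithm>
#include <map>
#include <vector>

struct RFM {
    int recencyDays = 0;    // days since the customer's last purchase
    int frequency = 0;      // number of transactions
    double monetary = 0.0;  // total amount spent
};

// referenceDay: the analysis date expressed as a day index (e.g. days since 01/12/2010).
std::map<int, RFM> computeRFM(const std::vector<Transaction> &rows, int referenceDay) {
    std::map<int, int> lastPurchaseDay;  // customerId -> most recent day index
    std::map<int, RFM> result;
    for (const auto &t : rows) {
        RFM &r = result[t.customerId];
        r.frequency += 1;                 // F: count of transactions
        r.monetary += t.amount;           // M: sum of Amount
        int &last = lastPurchaseDay[t.customerId];
        last = std::max(last, t.dayIndex);
    }
    for (auto &kv : result)               // R: days since the last purchase
        kv.second.recencyDays = referenceDay - lastPurchaseDay[kv.first];
    return result;
}
```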

IV. GPU COMPUTING WITH CUDA


The CPU (Central Processing Unit) is the central processing component of a computer system, and it can be enhanced by an additional accelerator device called a GPU. This hybrid CPU-GPU combination can accelerate many large applications in areas like machine learning, computational finance, defense and intelligence, etc. A CPU has very few but more powerful cores compared to a GPU, whereas a modern GPU has thousands of cores.

The design philosophy is that CPU cores are like a few elephants, while GPU cores are like millions of ants. Because of its few cores, a CPU can handle only a small number of software threads at a time, whereas a GPU can handle thousands of threads simultaneously. A GPU is also a more energy-efficient device than a CPU. Hence, with the help of GPU computing, many applications can be accelerated by up to 100x compared with the CPU alone [12].

NVIDIA's CUDA is a computing platform for parallel programming. With modern CUDA-enabled GPUs, many engineering and scientific applications can achieve maximum speedup. CUDA code is saved with the .cu file extension and compiled by the NVIDIA NVCC (NVIDIA CUDA Compiler) compiler. The NVCC compiler separates the code into host (CPU) code and device (GPU) code; the host code is compiled by a C/C++ compiler.

Before developing an application on the CUDA platform, the algorithm should be designed and decomposed into a number of SIMT threads. A GPU consists of SMs (streaming multiprocessors). The threads in a CUDA application are organized according to the CUDA thread hierarchy: a CUDA block contains a number of CUDA threads, and CUDA blocks are in turn arranged into a CUDA grid [2]. Further, each block is split into warps [13]. A CUDA warp is a scheduling unit on a single streaming multiprocessor. To date, each CUDA warp can have a maximum of 32 CUDA threads. All of the CUDA threads in a warp execute concurrently and are scheduled together on a streaming multiprocessor.

CUDA allows the developer to control the organization of threads into blocks and of blocks into the grid. The GPU has different types of memory, such as global, constant, texture, shared and local memory. Global memory has high latency and is accessible by all threads as well as by the host; the other types of GPU memory are accessible only from the GPU. The limited amount of CUDA memory limits the number of threads that can execute simultaneously on an SM. Threads should be arranged so that as much data as possible is shared between threads through these memories [14].
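
As a minimal illustration of this thread organization, the sketch below launches a one-dimensional grid of blocks in which every thread computes its own global index and processes one array element (the array size, block size and kernel are illustrative, not part of the paper's implementation):

```cpp
#include <cuda_runtime.h>

// Each thread scales one element; blockIdx/blockDim/threadIdx give its position in the grid.
__global__ void scaleKernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) data[i] *= factor;                   // guard against the last partial block
}

int main() {
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    int threadsPerBlock = 256;                                  // threads per CUDA block
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;   // blocks in the grid
    scaleKernel<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```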

V. THE MFHLSCNN

As shown in Fig. 1, MFHLSCNN has a three-layer architecture. The first layer accepts unlabeled patterns as inputs. At the second layer, the HLSs are created; the connections between the first and second layers are represented by the two endpoints V and W of each HLS, as shown in Fig. 3, and these endpoints are stored in the V and W matrices. At the third layer, clusters are created by checking certain criteria as in [1].

Fig.1. Architecture of MFHLSCNN

MFHLSCNN includes three steps, namely the creation of HLSs, the clustering of HLSs and an intersection test. The dataset used here is a large dataset with n dimensions (in total, 7 dimensions), so in this work the intersection step is eliminated, since in a high-dimensional dataset it is rare for HLSs to intersect in two or more clusters. In the fuzzy membership function calculated for each HLS, the parameter γ controls the fuzziness: as the value of γ increases, the membership becomes more crisp. The plot of the fuzzy membership function for γ = 1 and HLS end points V = [0.5, 0.7] and W = [0.5, 0.3] is shown in Fig. 2.

Fig. 2. Plot of the fuzzy membership function for γ = 1, V = [0.5, 0.7] and W = [0.5, 0.3]

As shown in Fig. 3, different distances need to be calculated during the creation of HLSs [15]. For the creation of an HLS, the test pattern should be tested for inclusion in the previously created HLS(s), and this is done by calculating fuzzy membership values. The fuzzy membership function requires three parameters: the test pattern Rh and the two end points, which are stored in the matrices V and W. The fuzzy membership value is calculated using the fuzzy membership function in (1).

$$e_j(R_h, V_j, W_j) = 1 - f(x, \gamma, l) \qquad (1)$$

The parameter x passed to the ramp function is calculated as $x = l_1 + l_2 - l$, where the distances $l$, $l_1$ and $l_2$ shown in Fig. 3 are defined in (2), (3) and (4):

$$l = \left[ \sum_{i=1}^{n} (w_{ji} - v_{ji})^2 \right]^{1/2} \qquad (2)$$

$$l_1 = \left[ \sum_{i=1}^{n} (w_{ji} - r_{hi})^2 \right]^{1/2} \qquad (3)$$

$$l_2 = \left[ \sum_{i=1}^{n} (v_{ji} - r_{hi})^2 \right]^{1/2} \qquad (4)$$

The ramp threshold function $f(\cdot)$ is defined in (5):

Fig.3. Distances between Test pattern and HLS

$$f(x, \gamma, l) = \begin{cases} x\gamma, & \text{if } 0 \le x\gamma \le 1 \\ 1, & \text{otherwise} \end{cases} \qquad (5)$$
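
A device-side sketch of equations (1)-(5), computing the membership of a test pattern R_h in the HLS with endpoints V_j and W_j. The __host__ __device__ qualifiers and parameter layout are our illustrative choices, and in this reconstruction f does not depend on l, so the helper takes only x and γ:

```cpp
#include <math.h>

// Ramp threshold function f(x, gamma, l) of equation (5).
__host__ __device__ inline float ramp(float x, float gamma) {
    float xg = x * gamma;
    return (xg >= 0.0f && xg <= 1.0f) ? xg : 1.0f;
}

// Fuzzy membership e_j(R_h, V_j, W_j) of equations (1)-(4) for one n-dimensional HLS.
__host__ __device__ inline float membership(const float *rh, const float *vj,
                                            const float *wj, int n, float gamma) {
    float l = 0.0f, l1 = 0.0f, l2 = 0.0f;
    for (int i = 0; i < n; ++i) {
        float dv = wj[i] - vj[i];   // endpoint-to-endpoint component, eq. (2)
        float d1 = wj[i] - rh[i];   // pattern-to-W component, eq. (3)
        float d2 = vj[i] - rh[i];   // pattern-to-V component, eq. (4)
        l  += dv * dv;
        l1 += d1 * d1;
        l2 += d2 * d2;
    }
    l = sqrtf(l); l1 = sqrtf(l1); l2 = sqrtf(l2);
    float x = fmaxf(l1 + l2 - l, 0.0f);  // x = l1 + l2 - l, clamped against rounding error
    return 1.0f - ramp(x, gamma);        // equation (1)
}
```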
The clustering step itself consists of three steps: determination of the centroid, bunching of HLSs into a cluster, and removal of the HLSs already bunched in previous iterations so that the remaining HLSs are considered for clustering in the next iteration. Two parameters are used, one for adjusting the centroid, called the centring factor, and one for bunching HLSs, called the bunching factor. The whole algorithm is summarized in Algo. 1; a rough code sketch of the clustering loop follows.
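
Algo. 1 itself is not reproduced here. The loop below is only a hedged host-side sketch of the three clustering steps described above, reusing the membership helper from the previous sketch; the way the centroid is chosen and the bunchingThreshold parameter are our assumptions, not the paper's exact criteria:

```cpp
#include <vector>

// V, W: one endpoint vector per HLS; n: dimensionality; gamma: fuzziness parameter.
// bunchingThreshold: assumed stand-in for the paper's bunching factor.
std::vector<std::vector<int>> clusterHLSs(const std::vector<std::vector<float>> &V,
                                          const std::vector<std::vector<float>> &W,
                                          int n, float gamma, float bunchingThreshold) {
    std::vector<std::vector<int>> clusters;
    std::vector<bool> bunched(V.size(), false);
    for (size_t c = 0; c < V.size(); ++c) {
        if (bunched[c]) continue;               // step 3: skip HLSs removed in earlier iterations
        std::vector<int> cluster{(int)c};       // step 1: take the next free HLS as the centroid (assumed)
        bunched[c] = true;
        for (size_t j = 0; j < V.size(); ++j) { // step 2: bunch HLSs close enough to the centroid
            if (bunched[j]) continue;
            float m = membership(V[c].data(), V[j].data(), W[j].data(), n, gamma);
            if (m >= bunchingThreshold) { cluster.push_back((int)j); bunched[j] = true; }
        }
        clusters.push_back(cluster);
    }
    return clusters;
}
```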

VI. GPU PARALLELISATION OF MFHLSCNN

In this work, a GPU parallelization of MFHLSCNN is proposed for faster execution of training and testing on a larger dataset. The algorithm, described in Algo. 2, is an unsupervised learning process. Since the GPU gives good performance for large amounts of data [16], for a small number of HLSs the fuzzy membership value is calculated on the CPU, and once the number of HLSs becomes large the fuzzy membership is calculated on the GPU. The fuzzy membership value on the GPU is calculated by launching the MembershipKernel. The parallel algorithm for the creation of HLSs is shown in Algo. 2: for each pattern, the fuzzy membership value is calculated with respect to all HLSs, and the index which gives the maximum fuzzy membership is found.

In the GPU parallelization of HLS creation on the multi-core GPU [17], each thread calculates the fuzzy membership value for one HLS as in Algo. 2. The MembershipKernel calculates the fuzzy membership values in parallel and stores all of them in an array. After the membership values are calculated, the array containing them is passed as input to the Reduction Kernel [18], which is launched to find the index giving the maximum membership value in the array. Each pattern from the training pattern set is then checked with the inclusion test as in the MFHLSCNN training algorithm. Throughout the algorithm, fuzzy membership values are calculated by launching the MembershipKernel and the maximum is found by the Reduction Kernel. After the HLS creation step, fuzzy membership values also need to be calculated in various other steps, such as determining the centroid and clustering the HLSs, so these kernels are launched with additional parameters.
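
A minimal sketch of the two kernels under the assumptions of one thread per HLS, row-major V and W matrices, a power-of-two block size and a shared-memory max-reduction that leaves one candidate per block for the host to merge (the paper's exact signatures may differ; the membership helper from the earlier sketch is reused):

```cpp
// One thread per HLS: compute the membership of pattern rh in HLS j and store it.
__global__ void MembershipKernel(const float *rh, const float *V, const float *W,
                                 float *memberships, int numHLS, int n, float gamma) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < numHLS)
        memberships[j] = membership(rh, V + j * n, W + j * n, n, gamma);
}

// Per-block max-reduction over the membership array; one (value, index) pair per block.
// blockDim.x is assumed to be a power of two.
__global__ void ReductionKernel(const float *memberships, int numHLS,
                                float *blockMax, int *blockArg) {
    extern __shared__ float sdata[];
    float *sval = sdata;
    int   *sidx = (int *)(sdata + blockDim.x);

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    sval[threadIdx.x] = (i < numHLS) ? memberships[i] : -1.0f;
    sidx[threadIdx.x] = (i < numHLS) ? i : -1;
    __syncthreads();

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s && sval[threadIdx.x + s] > sval[threadIdx.x]) {
            sval[threadIdx.x] = sval[threadIdx.x + s];
            sidx[threadIdx.x] = sidx[threadIdx.x + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {               // thread 0 writes the block's winner
        blockMax[blockIdx.x] = sval[0];
        blockArg[blockIdx.x] = sidx[0];
    }
}
```

On the host side, MembershipKernel is launched first and ReductionKernel is then launched with dynamic shared memory, e.g. ReductionKernel<<<blocks, 256, 256 * (sizeof(float) + sizeof(int))>>>(...); the final maximum over the per-block winners is taken on the CPU, and the same pair of launches is reused, with extra parameters, in the centroid-determination and clustering steps.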

VII. COMPUTATIONAL RESULTS


In this section, the serial and parallel computational results of MFHLSCNN are compared. Visual Studio 2010 VC++ was used for writing the code. The implementation environment used for this work includes a GPU server with 16 GB RAM, an Intel Xeon CPU E5-2620 v3 at 2.64 GHz and 64-bit Windows 7 OS (Operating System). The parallel CUDA code executes on the server with a single NVIDIA Tesla K20 GPU, which has 5 GB RAM and 2496 CUDA cores. The results are taken for the two datasets mentioned in Table I. Table I describes the serial and parallel time required for training MFHLSCNN, the number of HLSs created, the achieved speedup and the % gain in time for the Online Retail dataset. The maximum length of the HLSs and the other parameters were kept the same for both datasets. The speedup increases as more HLSs are created, so a dataset with a large number of instances creates a large number of HLSs and hence achieves more speedup than a small dataset. The Online Retail dataset is described in detail in the dataset section above. The Iris dataset is taken from the UCI repository [19]; it has 150 instances and 4 features.

We calculate the speedup as follows:

$$\text{Speedup} = \frac{\text{CPU Execution Time}}{\text{GPU Execution Time}}$$

And the % gain in time is calculated as follows:

$$\text{Gain (\%)} = \frac{\text{CPU Execution Time} - \text{GPU Execution Time}}{\text{CPU Execution Time}} \times 100$$
TABLE I
COMPARISON OF SERIAL AND PARALLEL IMPLEMENTATION OF MFHLSCNN TRAINING

Dataset         Serial (ms)   Parallel (ms)   HLSs Created   Speedup   % gain time
Online Retail   4358800       939396          25632          4.64x     78.44%
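
As a check, substituting the Table I values into the two formulas above reproduces the reported figures:

$$\text{Speedup} = \frac{4358800}{939396} \approx 4.64\times, \qquad \text{Gain} = \frac{4358800 - 939396}{4358800} \times 100 \approx 78.44\,\%$$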

VIII. CONCLUSION
It is concluded that the parallel implementation on the GPU gives much better performance for large datasets and achieves the maximum speedup; hence, it should be adopted for applications with large datasets. The learning of MFHLSCNN is done in a single pass, so it can be applied in real-time applications where a large amount of data needs to be handled. In the proposed GPU parallelization of MFHLSCNN, we achieved a 4.64x speedup and a 78.44% gain for the Online Retail dataset.

REFERENCES

[1] S. K. Pal, "Soft computing tools and pattern recognition," vol. 44, pp. 61-87, 1988.
[2] H. K. Kwan and Yaling Cai, "A fuzzy neural network and its applications to pattern recognition," IEEE Trans. Fuzzy Systems, vol. 2, no. 3, pp. 185-192, Aug. 1994.
[3] K. S. Kadam and S. B. Bangal, "Fuzzy Hyperline Segment Neural Network Pattern Classifier with Different Distance Metrics," International Journal of Computer Applications (0975-8887), vol. 95, no. 8, June 2014.
[4] Jacek M. Zurada, "Neuron Modelling of Artificial Neural Systems," in Fundamental Concepts and Models of Artificial Neural Systems. St. Paul: West Publishing Company, 1992, ch. 2, sec. 2.1, pp. 30-36.
[5] D. Chen, S. L. Sain, and K. Guo, "Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining," Journal of Database Marketing & Customer Strategy Management, vol. 19, no. 3, pp. 197-208, 2012.
[6] S. B. Bagal and U. V. Kulkarni, "Modified Fuzzy Hyperline Segment Neural Network for Pattern Classification and Recognition," Proceedings of the World Congress on Engineering, vol. 1, London, U.K., July 2014.
[7] D. Kirk and W. Hwu, Programming Massively Parallel Processors.
[8] U. V. Kulkarni, T. R. Sontakke, and A. B. Kulkarni, "Fuzzy hyperline segment clustering neural network," Electronics Letters, vol. 37, no. 5, pp. 301-303, 2001.
[9] NVIDIA, "CUDA programming guide." [Online]. Available: http://docs.nvidia.com/cuda/cuda-c-programming-guide/#axzz3U9G5eYRu
[10] J. Nickolls, I. Buck, and M. Garland, "Scalable parallel programming," vol. 6, pp. 40-53, 2008.
[11] Satish N. Kulkarni and A. R. Karwankar, "Thyroid disease detection using modified fuzzy hyperline segment clustering neural network," International Journal of Computers and Technology, vol. 3, no. 3, Nov.-Dec. 2012.
[12] K. A. Hawick and D. P. Playne, "Mixing multi-core CPUs and GPUs for scientific simulation software," Res. Lett. Inf. Math. Sci., vol. 14, pp. 25-77, 2010.
[13] NVIDIA CUDA. [Online]. Available: http://www.nvidia.com/object/cuda_home_new.html
[14] UCI Repository of Machine Learning Databases, University of California at Irvine, Department of Computer Science. [Online]. Available: http://archive.ics.uci.edu/ml/datasets/online+retail
[15] P. S. Dhabe, A. S. Natu, A. S. Naval, P. H. Parmar, A. M. Padwal, and M. L. Dhore, "Modified Fuzzy Hyper-Line Segment Neural Network and its Application to Heart Disease Detection," Journal of Artificial Intelligent Systems and Machine Learning, 2010.
[16] P. M. Patil, P. S. Dhabe, and T. R. Sontakke, "Recognition of handwritten characters using modified fuzzy hyperline segment neural network," vol. 2, 2003.
[17] P. S. Dhabe and Prashant Vyas, "Pattern classification using updated fuzzy hyper-line segment neural network and its GPU parallel implementation for large datasets using CUDA," 2016 International Conference on Computing, Analytics and Security Trends (CAST), College of Engineering Pune, India, Dec. 19-21, 2016.
[18] M. Harris, "Optimizing parallel reduction in CUDA." [Online]. Available: http://developer.download.nvidia.com/assets/cuda/files/reduction.pdf
[19] R. A. Fisher, "Iris data set." [Online]. Available: https://archive.ics.uci.edu/ml/datasets/Iris
