
7th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING and DATA BASES (AIKED'08), University of Cambridge, UK, Feb 20-22, 2008

Classifier Based Text Mining For Radial Basis Function


M.GOVINDARAJAN
Lecturer (Senior Scale)
Department of CSE
Annamalai University
Annamalai Nagar -608002
Tamil Nadu
INDIA
RM.CHANDRASEKARAN
Professor
Department of CSE
Annamalai University
Annamalai Nagar -608002
Tamil Nadu
INDIA

Abstract: - Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, the testing set, and the learning rate are key elements: a collection of input/output patterns is used to train the network, a second collection is used to assess the network's performance, and the learning rate sets the rate of weight adjustments. This paper describes a proposed radial basis function (RBF) neural net classifier that performs cross validation for the original RBF neural network, in order to optimize classification accuracy and training time. The feasibility and the benefits of the proposed approach are demonstrated by means of two data sets, mushroom and weather.symbolic. It is shown that, for mushroom (a large dataset), the accuracy with the proposed RBF neural network was on average around 1.4 % less than with the original RBF neural network, with a larger improvement in speed. For weather.symbolic (a smaller dataset), the accuracy with the proposed RBF neural network was on average around 35.7 % less than with the original RBF neural network, with a smaller improvement in speed. The algorithm is independent of specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
Keywords: Radial Basis Function, Classification accuracy, Text mining, Time complexity.

1 Introduction

1.1 Radial basis function networks

In supervised learning, we are given a set of example pairs (x, y), x ∈ X, y ∈ Y, and the aim is to find a function f in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data, and it implicitly contains prior knowledge about the problem domain. In this article we start with the following assumptions.


Radial Basis Function (RBF) networks [13] are also feedforward, but have only one hidden layer. Like the MLP, RBF nets can learn arbitrary mappings; the primary difference is in the hidden layer. RBF hidden layer units have a receptive field which has a centre, that is, a particular input value at which they have a maximal output. Their output tails off as the input moves away from this point. Generally, the hidden units have a Gaussian transfer function, and the objective is to have each hidden node respond only to a subset of the input. This is usually


accomplished via supervised learning. When RBF


functions are used as the activation functions on the
hidden layer, the nodes can be sensitive to a subset of
input values.
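For illustration, the Gaussian transfer function of a hidden unit with centre c and width sigma can be written (a standard formulation; the notation here is ours, not taken from the paper) as

    phi(x) = exp( - ||x - c||^2 / (2 * sigma^2) )

so that phi(x) is maximal at x = c and tails off as x moves away from the centre.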

1.2 Proposed RBF networks

A radial function, or radial basis function (RBF), is a class of functions whose value decreases (or increases) with the distance from a central point. An RBF network is typically a neural network with three layers. The input layer is used simply to input the data. A Gaussian activation function is used at the hidden layer, while a linear activation function is used at the output layer. The proposed RBF classifier, by contrast, implements a normalized Gaussian radial basis function network. It uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems) or a linear regression (numeric class problems) on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster. If the class is nominal, it uses the given number of clusters per class. It standardizes all numeric attributes to zero mean and unit variance.
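The following Python sketch illustrates this design: k-means supplies the centres, the hidden layer computes normalized Gaussian activations, and a logistic regression is learned on top. It is a minimal sketch under simplifying assumptions (a single shared width per basis function, no per-class clustering); names such as NormalizedRBF and n_clusters are ours for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    class NormalizedRBF:
        """Sketch of a normalized Gaussian RBF network classifier."""
        def __init__(self, n_clusters=10):
            self.n_clusters = n_clusters

        def _activations(self, X):
            # Gaussian activation of each hidden unit for each pattern.
            d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(axis=2)
            g = np.exp(-d2 / (2.0 * self.sigma2))
            # Normalization: hidden activations sum to one per pattern.
            return g / g.sum(axis=1, keepdims=True)

        def fit(self, X, y):
            # Standardize all numeric attributes to zero mean, unit variance.
            self.scaler = StandardScaler().fit(X)
            Xs = self.scaler.transform(X)
            # k-means provides the centres of the basis functions.
            km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(Xs)
            self.centres = km.cluster_centers_
            # One shared width (a simplification; the paper fits symmetric
            # multivariate Gaussians to the data of each cluster).
            self.sigma2 = np.var(Xs, axis=0).mean() + 1e-8
            # Logistic regression head for discrete class problems.
            self.head = LogisticRegression(max_iter=1000)
            self.head.fit(self._activations(Xs), y)
            return self

        def predict(self, X):
            return self.head.predict(self._activations(self.scaler.transform(X)))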

1.3 Applications
These days, neural networks are used in a very large
number of applications. Application areas include
system identification and control (Vehicle control,
process control), game-playing and decision making
(backgammon, chess, racing), pattern recognition (radar
systems, face identification, object recognition and
more), sequence recognition (gesture, speech,
handwritten text recognition), medical diagnosis,
financial applications, data mining (or knowledge
discovery in databases, "KDD"), visualization, and e-mail spam filtering.

1.4 Radial Basis Function


The radial basis function neural network is in no way a new idea, but existing approaches typically suffer from high runtime and from poor classification accuracy on small datasets. Often, these two problems are closely related. Hence, we take over the best of existing approaches, such as cross validation, to improve the runtime and the classification accuracy of the RBF algorithm radically. In particular, we use the following methods [4]:
1) k-fold cross validation (for the Radial Basis Function Neural Network)
2) Stratified cross validation
3) Leave-one-out
k-fold cross validation: The initial data are randomly partitioned into k mutually exclusive subsets or folds, s1, s2, ..., sk, each of approximately equal size. Training and testing are performed k times. The accuracy estimate is the overall number of correct classifications from the k iterations, divided by the total number of samples in the initial data.
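In symbols (notation ours, introduced only to make the estimate explicit): if c_i is the number of correct classifications obtained in iteration i and n is the total number of samples, the k-fold estimate is

    accuracy = (c_1 + c_2 + ... + c_k) / n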
In stratified cross validation, the folds are stratified so that the class distribution of the samples in each fold is approximately the same as that in the initial data.
Leave-one-out: k-fold cross validation with k set to s, the number of initial samples. In general, stratified 10-fold cross-validation is recommended for estimating classifier accuracy (even if computation power allows using more folds) due to its relatively low bias and variance. We use these established techniques as well as innovative novel ideas, whose advantages are outlined by means of two data sets, mushroom and weather.symbolic. Experimental results show that for mushroom (a large dataset) the accuracy with the proposed RBF neural network was on average around 1.4 % less than with the original RBF neural network, with a larger improvement in speed. For weather.symbolic (a smaller dataset) the accuracy with the proposed RBF neural network was on average around 35.7 % less than with the original RBF neural network, with a smaller improvement in speed.
The remainder of this article is structured as follows. First, the state of the art is analyzed to motivate our work (Section 2), and the radial basis function neural network is described (Section 3). Then, the RBF for architecture optimization is introduced, with a strong focus on the innovative aspects mentioned above (Section 4). After that, the advantages of the approach are set out by means of two datasets (Section 5). Finally, the main findings are summarized and an outlook on future work is given (Section 6).

2 State of the Art


In this section, the state of the art concerning cross validation of the RBF algorithm is investigated. The results of this survey motivate a new approach.


2.1 Related Work


This article focuses on the training time and classification accuracy obtained with cross validation of the RBF neural network. Cross validation methods are described in [4]; in general, a filter approach is used there. The problem of training time and classification accuracy for neural networks is discussed in [5][6]. Here, we discuss examples of the combination of the RBF and PRBF algorithms. Altogether, we investigated five datasets to which cross validation methods are applied in order to optimize the RBF algorithm. The following steps are carried out to classify with the radial basis function [3]:
1. The input layer is used to simply input the data.
2. A Gaussian activation function is used at the hidden layer.
3. A linear activation function is used at the output layer.
The objective is to have the hidden nodes learn to respond only to a subset of the input, namely, the region where the Gaussian function is centered. This is usually accomplished via supervised learning. When RBF functions are used as the activation functions on the hidden layer, the nodes can be sensitive to a subset of the input values.

2.2 Motivation for a New Approach


As set out in Section 1.2, a standard RBF network uses a Gaussian activation function at the hidden layer and a linear activation function at the output layer. The proposed classifier instead implements a normalized Gaussian radial basis function network: the k-means clustering algorithm provides the basis functions, and either a logistic regression (discrete class problems) or a linear regression (numeric class problems) is learned on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster, the given number of clusters is used per class when the class is nominal, and all numeric attributes are standardized to zero mean and unit variance. Combining this classifier with the cross validation methods of Section 1.4 addresses the runtime and accuracy problems of existing approaches and motivates the new approach investigated here.


3 Classification with radial basis function neural network
Supervised training involves providing an ANN with specified input and output values, and allowing it to iteratively reach a solution. MLP and RBF networks employ the supervised mode of learning.
The RBF design involves deciding on the centres and the sharpness (standard deviation) of the Gaussians. Generally, the centres and SDs (standard deviations) are decided first by examining the vectors in the training data. RBF networks are then trained in a similar way to MLP: the output layer weights are trained using the delta rule. MLP is the most widely applied neural network technique; RBF networks have the advantage that one can add extra units with centres near parts of the input space which are difficult to classify. Simple perceptrons, MLP, and RBF networks are supervised networks. In the unsupervised mode, the network adapts purely in response to its inputs; such networks can learn to pick out structure in their input. One of the most popular models in the unsupervised framework is the self-organizing map (SOM).
Radial basis function (RBF) networks combine a number of different concepts from approximation theory, clustering, and neural network theory. A key advantage of RBF networks for practitioners is the clear and understandable interpretation of the functionality of the basis functions. Also, fuzzy rules may be extracted from RBF networks for deployment in an expert system. The RBF networks used here may be defined as follows.
1) RBF networks have three layers of nodes: input layer, hidden layer, and output layer.
2) Feed-forward connections exist between input and hidden layers, between input and output layers (shortcut connections), and between hidden and output layers. Additionally, there are connections between a bias node and each output node. A scalar weight is associated with each connection.
3) The activation of each input node (fanout) is equal to its external input: the jth input node takes the value x_j of the external input vector (pattern) x of the network.
4) Each hidden node (neuron) determines the Euclidean distance between its own weight vector c (its centre) and the activations of the input nodes, i.e., the external input vector x. The distance ||x - c|| is used as the input of a radial basis function in order to determine the activation of the node. Here, Gaussian functions are employed; the parameter r of a node is the radius of the basis function, and the vector c is its centre. Any other function which satisfies the conditions derived from the theorems of Schoenberg or Micchelli could be used instead; localized basis functions such as the Gaussian or the inverse multiquadric are usually preferred.
5) Each output node (neuron) computes its activation as a weighted sum of the activations of the hidden nodes and, via the shortcut connections, of the input nodes. The external output vector of the network consists of the activations of the output nodes.
The activation of a hidden node is high if the current input vector of the network is similar (depending on the value of the radius) to the centre of its basis function. The centre of a basis function can, therefore, be regarded as a prototype of a hyperspherical cluster in the input space of the network; the radius of the cluster is given by the value of the radius parameter. A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin (or, more generally, from a centre). RBFs are used in function approximation, time series prediction, and control; in artificial neural networks, radial basis functions are utilized as activation functions.
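To make the above definition concrete, the following Python sketch computes one forward pass of such a network (Gaussian hidden layer, shortcut connections, bias node, weighted-sum outputs) together with one delta-rule update of the output weights, as mentioned in the training discussion above. The function names, variable names, and shapes are our own illustration, not the paper's implementation.

    import numpy as np

    def rbf_forward(x, centres, radii, W_hid_out, W_in_out, bias):
        """One forward pass of the RBF network defined in Section 3.

        x         : external input vector, shape (d,)
        centres   : hidden-node centres, shape (h, d)
        radii     : hidden-node radii, shape (h,)
        W_hid_out : hidden-to-output weights, shape (o, h)
        W_in_out  : input-to-output shortcut weights, shape (o, d)
        bias      : bias-node-to-output weights, shape (o,)
        """
        # 4) Euclidean distance of the input to each centre, fed to a Gaussian.
        dist = np.linalg.norm(x - centres, axis=1)
        hidden = np.exp(-(dist ** 2) / (2.0 * radii ** 2))
        # 5) Each output node computes a weighted sum (with shortcuts and bias).
        output = W_hid_out @ hidden + W_in_out @ x + bias
        return hidden, output

    def delta_rule_step(W_hid_out, hidden, output, target, eta=0.01):
        # Delta rule for the linear output layer: dW = eta * (t - y) * h^T.
        return W_hid_out + eta * np.outer(target - output, hidden)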

4 Optimization of RBF Algorithm


In this section, a schematic overview of cross validation as used for RBF optimization is given. Then, the standard techniques are sketched and our innovative extensions are described in detail.

4.1 Overview
From an algorithmic perspective, optimization seeks a least value of the quantity being minimized and can be used to solve a wide range of optimization tasks; here, the most important parameters of the neural network are optimized.

4.2 Standard Methods of Cross Validation

The development of the new approach was guided by the idea that well known cross validation methods should be applied as far as possible. To keep the runtime of the cross validation low, only the most important parameters are optimized. We discuss techniques [2] for estimating runtime and classifier accuracy, such as:
(i) Holdout
(ii) k-fold cross validation


Holdout: The given data are randomly partitioned into two independent sets, a training set and a test set. Random subsampling is a variation of the holdout method in which the holdout method is repeated k times. The overall accuracy estimate is taken as the average of the accuracies obtained from each iteration.
k-fold cross-validation: The initial data are randomly partitioned into k mutually exclusive subsets or folds, s1, s2, ..., sk, each of approximately equal size. Training and testing are performed k times. The accuracy estimate is the overall number of correct classifications from the k iterations, divided by the total number of samples in the initial data.
In stratified cross validation, the folds are stratified so that the class distribution of the samples in each fold is approximately the same as that in the initial data.
Bootstrapping: The training instances are sampled uniformly with replacement.
Leave-one-out: k-fold cross validation with k set to s, the number of initial samples. In general, stratified 10-fold cross-validation is recommended for estimating classifier accuracy (even if computation power allows using more folds) due to its relatively low bias and variance. The use of such techniques to estimate classifier accuracy increases the overall computation time, yet is useful for selecting among several classifiers.
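As a concrete illustration, the recommended stratified 10-fold estimate can be computed as in the following Python sketch. We use scikit-learn's StratifiedKFold for the fold construction; clf stands for any classifier with fit/predict methods (for instance the RBF sketch of Section 1.2), and X, y are assumed to be numpy arrays.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    def stratified_cv_accuracy(clf, X, y, k=10, seed=0):
        """Stratified k-fold accuracy: correct over all folds / total samples."""
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
        correct = 0
        for train_idx, test_idx in skf.split(X, y):
            clf.fit(X[train_idx], y[train_idx])          # train on k-1 folds
            correct += np.sum(clf.predict(X[test_idx]) == y[test_idx])
        return correct / len(y)                          # overall estimate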
Techniques for increasing classifier accuracy include:
(i) Bagging (or bootstrap aggregation)
(ii) Boosting

5 Experimental results
In this section we demonstrate the properties and advantages of our approach by means of two data sets, mushroom and weather.symbolic. The performance of classification algorithms is usually examined by evaluating the accuracy of the classification. However, since classification is often a fuzzy problem, the correct answer may depend on the user. Traditional algorithm evaluation approaches such as determining the space and time overhead can be used, but these approaches are usually secondary. Classification accuracy [13] is usually calculated by determining the percentage of tuples placed in the correct class. This ignores the fact that there may also be a cost associated with an incorrect assignment to the wrong class. This perhaps should also
be determined. We examine the performance of classification much as is done with information retrieval systems. With only two classes, there are four possible outcomes of the classification: the upper left and lower right quadrants of the outcome matrix are correct actions, and the remaining two quadrants are incorrect actions.
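For illustration (the layout is ours, not taken from the paper), the four outcomes can be arranged as follows:

                        Predicted positive    Predicted negative
    Actual positive     true positive         false negative
    Actual negative     false positive        true negative

The upper left and lower right quadrants referred to above are the true positives and true negatives.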


Table 1
Properties of data sets

    Dataset             Instances    Attributes
    Mushroom            8124         23
    Weather.symbolic    14           5

Table 2
Training Time (seconds)

    Dataset             Proposed RBF (PRBF)    Original RBF (ORBF)    Faster by
    Mushroom            217.23                 246.61                 29.38
    Weather.symbolic    0.05                   0.06                   0.01
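The "Faster by" column is the difference between the two training times: 246.61 - 217.23 = 29.38 seconds for mushroom, and 0.06 - 0.05 = 0.01 seconds for weather.symbolic.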


Fig.1 Training Time (bar chart comparing the training times of PRBF and ORBF on the two data sets)


Table 3 Classification accuracy


    Dataset             % Correct using 10-fold    % Correct      Difference
                        cross validation (PRBF)    class (ORBF)
    Mushroom            65.5465                    66.9498        1.4033 %
    Weather.symbolic    64.2857                    100            35.7143 %
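The last column is the difference between the two accuracies: 66.9498 - 65.5465 = 1.4033 % for mushroom, and 100 - 64.2857 = 35.7143 % for weather.symbolic, matching the figures quoted in the abstract.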

Fig.2 Classification Accuracy (bar chart comparing the classification accuracy of PRBF and ORBF on the two data sets)

6 Conclusions
In this work we developed a text mining classifier using neural network methods to measure the training time for two data sets, mushroom and weather.symbolic. First, we utilized our developed text mining algorithms, including text mining techniques based on classification of data, on the two data collections. After that, we employed an existing neural network to measure the training time for the two data sets. Experimental results


show that for mushroom (a large dataset) the accuracy with the proposed RBF neural network was on average around 1.4 % less than with the original RBF neural network, with a larger improvement in speed. For weather.symbolic (a smaller dataset) the accuracy with the proposed RBF neural network was on average around 35.7 % less than with the original RBF neural network, with a smaller improvement in speed.

Acknowledgement
The authors gratefully acknowledge the authorities of Annamalai University for the facilities offered and the encouragement to carry out this work. This work is supported in part by a Career Award for Young Teachers (CAYT) grant from the All India Council for Technical Education, New Delhi, received by the first author. They would also like to thank the reviewers for their valuable remarks.

References:
[1] Guobin Ou, Yi Lu Murphey, Multi-class pattern classification using neural networks, Pattern Recognition 40 (2007).
[2] M. Govindarajan, RM. Chandrasekaran, Classifier Based Text Mining for Neural Network, Proceedings of the XII International Conference on Computer, Electrical and Systems Science and Engineering, Vienna, Austria, May 24-26, 2007, pp. 200-205.
[3] Oliver Buchtala, Manuel Klimek, and Bernhard Sick, Evolutionary Optimization of Radial Basis Function Classifiers for Data Mining Applications, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 35, No. 5, October 2005.
[4] Jiawei Han, Micheline Kamber, Data Mining: Concepts and Techniques, Elsevier, 2003, pp. 303-311, 322-325.
[5] Srinivas Mukkamala, Guadalupe Janoski, Andrew Sung, Intrusion Detection: Support Vector Machines and Neural Networks, Department of Computer Science, New Mexico Institute of Mining and Technology, Socorro, New Mexico 87801, IEEE, 2002.
[6] N. Jovanovic, V. Milutinovic, and Z. Obradovic, Foundations of Predictive Data Mining, 2002.
[7] Yochanan Shachmurove (Department of Economics, The City College of the City University of New York and The University of Pennsylvania) and Dorota Witkowska (Department of Management, Technical University of Lodz), Utilizing Artificial Neural Network Model to Predict Stock Markets, CARESS Working Paper #00-11, September 2000.
[8] Bharath, Ramachandran, Neural Network Computing, McGraw-Hill, Inc., New York, 1994, pp. 4-43.
[9] George F. Luger and William A. Stubblefield, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 2nd Edition, Benjamin/Cummings Publishing Company, Inc., California, 1993, pp. 516-527.
[10] Andrew T. Wilson, Off-line Handwriting Recognition Using Artificial Neural Networks.
[11] David M. Skapura, Building Neural Networks, ACM Press, New York, pp. 29-33.
[12] Bhavit Gyan (University of Canterbury), Kevin E. Voges (University of Canterbury), and Nigel K. Ll. Pope (Griffith University), Artificial Neural Networks in Marketing from 1999 to 2003: A Region of Origin and Topic Area Analysis.
[13] Margaret H. Dunham, Data Mining: Introductory and Advanced Topics, Pearson Education, 2003, pp. 106-112.
