
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org Email: editor@ijettcs.org


Volume 4, Issue 3, May-June 2015
ISSN 2278-6856

Study of Support Vector Machine (SVM) for Emotion Recognition

Soma Bera¹, Shanthi Therese², Madhuri Gedam³

¹Department of Computer Engineering, Shree L.R. Tiwari College of Engineering, Mira Road, Thane, India
²Department of Information Technology, Thadomal Shahani Engineering College, Bandra, Mumbai, India
³Department of Computer Engineering, Shree L.R. Tiwari College of Engineering, Mira Road, Thane, India

Abstract
This paper is mainly concerned with speech emotion recognition. The main focus is the Support Vector Machine (SVM), which is trained on a desired data set drawn from emotional speech databases. The SVM performs classification by finding the decision boundary, also known as the hyperplane, that maximizes the margin between two different classes. The vectors that define the hyperplane are the support vectors.

Keywords: Speech, Emotion Recognition, Linear SVM, Nonlinear SVM, hyperplane

1. INTRODUCTION
Speech emotion recognition is the task of identifying the emotional state of a human being from his or her voice [12]. Speech is a complex signal carrying information about the message, the speaker, the language and the emotions. Emotions, by contrast, are individual mental states that arise spontaneously rather than through conscious effort. Speech may convey various kinds of emotion, such as happiness, sarcasm, sadness, compassion, anger, surprise, disgust, fear and neutrality.

Figure 1 Block diagram of a speech emotion recognition system using SVM classification

2. SUPPORT VECTOR MACHINE CLASSIFICATION

The Support Vector Machine (SVM) is a promising method for the classification of both linear and nonlinear data [18]. The SVM is a binary classifier, that is, one that distinguishes exactly two classes, but it can also be used to classify multiple classes: a multiclass SVM classifies more than two classes. Each feature vector is associated with a class label, e.g. happy, sad, fear, angry, neutral, disgust, etc. Figure 1 shows the process of SVM classification. The SVM classifier may separate the data elements linearly or nonlinearly, depending on the placement of the data elements within the dimensional space, as illustrated in figure 2.


Figure 2 Examples illustrating linear and nonlinear classification
Algorithm
1. Generate an optimal hyperplane, i.e., one that maximizes the margin.
2. Apply the same approach to linearly inseparable problems:
3. Map the data elements to a higher dimensional space where it is easier to separate them with a linear hyperplane.

2.1 Linear Separability
Suppose we have two features X1 and X2 and we want to classify all the elements shown (X1 = green dots, X2 = red dots).

Figure 3 Graph showing support vectors with margin width
The objective of the SVM is therefore to generate a hyperplane that classifies all training vectors into two different classes. The line or boundary dividing the two classes is the hyperplane. To define an optimal decision boundary we need to maximize the width of the margin, where the margin is the distance between the hyperplane and the closest elements on either side of it.
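For a fitted linear SVM whose decision function is w·x + b, with the support vectors lying on the planes w·x + b = ±1, the margin width is 2/||w||. A small sketch on made-up separable data, assuming scikit-learn:

```python
# Sketch of the margin computation: for a fitted linear SVM with weight
# vector w, the margin width between the two supporting hyperplanes is
# 2 / ||w||. The data here is a toy linearly separable set (an assumption).
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 1.0], [2.0, -1.0], [4.0, 0.0], [5.0, 1.0]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]                   # normal vector of the hyperplane
print("margin width =", 2.0 / np.linalg.norm(w))
```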

An easy way to separate two groups of data elements is with a straight line (one dimension), a flat plane (two dimensions) or an N-dimensional hyperplane. However, there are cases where a nonlinear region separates the groups more proficiently. The SVM achieves this by using a nonlinear kernel function to map the data into a different space, in which a linear hyperplane can be used to do the separation. This means a linearly inseparable function is learned by a linear learning machine in a high dimensional feature space, while the capacity of the system is controlled by a parameter that is independent of the dimensionality of that space. This is termed the kernel trick [17]: the kernel function transforms the data into a higher dimensional feature space in which linear separation becomes possible. The main notion of SVM classification is thus to convert the original input set into a high dimensional feature space by making use of a kernel function [4].
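A small sketch of the kernel trick on a standard synthetic data set (our choice, not the paper's): two concentric circles that no straight line can separate, which an RBF-kernel SVM handles easily because it implicitly maps the points into a higher dimensional space.

```python
# Sketch of the kernel trick: a linear SVM fails on concentric circles,
# while an RBF-kernel SVM separates them almost perfectly.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)      # Gaussian radial basis kernel

print("linear kernel accuracy:", linear.score(X, y))  # ~0.5, near chance
print("RBF kernel accuracy:   ", rbf.score(X, y))     # ~1.0
```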

Figure 4 Process of calculating margin width

Figure 5 Maximizing margin

Figure 6 Illustrating the difference between nonlinear and linear hyperplanes respectively
The beauty of the SVM is that if the data is linearly separable, a unique global minimum is obtained. An ideal SVM analysis should produce a hyperplane that completely separates the vectors into two non-overlapping classes [16]. However, exact separation may not be possible, in which case some of the data is classified incorrectly. In this situation, the SVM finds the hyperplane that maximizes the margin while minimizing the misclassifications.
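The paper does not name the parameter controlling this trade-off; in common implementations such as scikit-learn it is exposed as the regularization constant C. A sketch on made-up overlapping data:

```python
# Sketch of the margin/misclassification trade-off: small C widens the
# margin but tolerates misclassified points; large C narrows the margin
# to fit the training data more tightly.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # overlapping classes,
               rng.normal(+1.0, 1.0, (50, 2))])  # so exact separation fails
y = np.array([-1] * 50 + [1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C}: {clf.n_support_.sum()} support vectors, "
          f"training accuracy {clf.score(X, y):.2f}")
```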


Consider the following simple example to understand the concept of the SVM. Suppose we have two features, X1 and X2, located on the X axis and Y axis respectively, and two sets of elements:
1. blue dots
2. red squares
We take the blue dots to be the negative class and the red squares to be the positive class. The objective of the SVM classifier is to define an optimal boundary that differentiates the two classes.


Figure 7 Graph showing the blue (negative) and red (positive) classes

The following figures demonstrate the possible hyperplanes that could be plotted to classify these elements separately.

Figure 8 Illustrating one and two possible hyperplanes respectively

Figure 9 Illustrating multiple possible hyperplanes

Here, we select 3 support vectors to start with: s1, s2 and s3.

Figure 10 Graph illustrating the selected support vectors for classification

Here, we use vectors augmented with 1 as a bias input, and for clarity we differentiate these from the originals with an overtilde.

Figure 11 Augmented vectors with 1 as a bias input

Now, we need to find the 3 parameters α1, α2 and α3 from the following 3 equations:

α1 s̃1·s̃1 + α2 s̃2·s̃1 + α3 s̃3·s̃1 = -1 (negative class)
α1 s̃1·s̃2 + α2 s̃2·s̃2 + α3 s̃3·s̃2 = -1 (negative class)
α1 s̃1·s̃3 + α2 s̃2·s̃3 + α3 s̃3·s̃3 = +1 (positive class)

Let us substitute the values into the above equations.

Figure 12

After simplification, we get:

6α1 + 4α2 + 9α3 = -1
4α1 + 6α2 + 9α3 = -1
9α1 + 9α2 + 17α3 = +1

Solving all the above simultaneous equations, we get:

α1 = α2 = -3.25 and α3 = 3.5


The hyperplane that differentiates the positive class from the negative class is given by:


w̃ = Σᵢ αᵢ s̃ᵢ

where the sum runs over the support vectors. Substituting the values, we get:

w̃ = -3.25 (2, 1, 1) - 3.25 (2, -1, 1) + 3.5 (4, 0, 1) = (1, 0, -3)


Our vectors are augmented with a bias, so we can read the last entry of w̃ as the offset b of the hyperplane. Therefore, the separating hyperplane equation is given by:

y = w·x + b, with w = (1, 0) and offset b = -3
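A short NumPy sketch can verify this worked example. The figures are not reproduced in this text, so the support vector coordinates below are an inference: s̃1 = (2, 1, 1), s̃2 = (2, -1, 1) and s̃3 = (4, 0, 1) are consistent with the printed dot products and with the final result w = (1, 0), b = -3.

```python
# Verification sketch of the worked linear SVM example above.
import numpy as np

S = np.array([[2.0, 1.0, 1.0],    # s1 (negative class), assumed coordinates
              [2.0, -1.0, 1.0],   # s2 (negative class)
              [4.0, 0.0, 1.0]])   # s3 (positive class)
t = np.array([-1.0, -1.0, 1.0])   # right-hand sides of the three equations

G = S @ S.T                       # Gram matrix of dot products (6, 4, 9, ...)
alpha = np.linalg.solve(G, t)     # -> [-3.25, -3.25, 3.5]

w_tilde = alpha @ S               # -> [1, 0, -3]: w = (1, 0), offset b = -3
print(alpha, w_tilde)
```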

This is the expected decision surface of the linear SVM.

Figure 13 Plotted hyperplane

Figure 14 Offset location for plotting the hyperplane

2.2 Nonlinear Separability
Nonlinear separability stems from the fact that training data can rarely be separated by a hyperplane, so some of the data samples end up misclassified. When the linear method fails, the data is projected nonlinearly into a higher dimensional space. Nonlinearly separable data is handled by the nonlinear SVM, but it is not as efficient as the linear classifier: its complexity grows with the number of support vectors. Figure 15 illustrates data items that are not linearly separable.

Figure 15 Example of nonlinear separability

The nonlinear SVM may classify multiple classes using any of several suitable techniques, such as one-vs-all, one-vs-one, all-vs-all, polynomial kernels (homogeneous and inhomogeneous), Lagrange multipliers, kernel functions, the Gaussian radial basis function and the hyperbolic tangent, as sketched below.
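A brief sketch of the one-vs-one and one-vs-rest schemes just mentioned; the iris data set stands in for a multi-emotion feature set (an illustrative substitution, not the paper's data).

```python
# Multiclass SVM sketch: scikit-learn's SVC trains one binary SVM per
# pair of classes (one-vs-one), while OneVsRestClassifier trains one
# binary SVM per class against all the others.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # 3 classes

ovo = SVC(kernel="rbf").fit(X, y)          # one-vs-one internally
ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

print("one-vs-one accuracy:", ovo.score(X, y))
print("one-vs-rest accuracy:", ovr.score(X, y))
```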

Consider the following scenario to understand the concept of nonlinear separability:

Figure 16 Illustrating nonlinear separability

In the above figure, it is clearly seen that the two classes, the blue class and the red class, are not linearly separable. The blue class vectors are:

(1, 1), (-1, 1), (-1, -1), (1, -1)

The red class vectors are:


(2, 0), (0, 2), (-2, 0), (0, -2)

Next, the aim is to find a nonlinear function which can transform this problem into a new feature space where a separating hyperplane can be found. Consider the following mapping function:

Figure 17 Mapping function

From the above equation, it is very clear that the blue class falls under the second condition and the red class under the first. Now let us transform the blue class and the red class vectors using the nonlinear mapping function. The blue class vectors are:

(1, 1), (-1, 1), (-1, -1), (1, -1)

and the red class vectors are:

(2, 0), (0, 2), (-2, 0), (0, -2)

So, the transformed features are as follows:

Figure 18 Transformation of features into higher dimensions

Figure 19

Figure 20 Nonlinearly separable elements transformed to a higher dimensional feature space

So now, the problem has been transformed into a linear SVM problem. Our task is to find suitable support vectors to classify the two classes using linear separability, exactly as illustrated above for the linear case. The new support vectors turn out to be:

Figure 21 New support vectors in the higher dimensional feature space

So, the new coordinates are:

Figure 22

The problem is then solved exactly like the linear separability case. After solving all the simultaneous equations, the values we get are:

Figure 23


And the offset is b = -1.2501/0.1243 = -10.057. So, the new hyperplane that is plotted is:

Figure 24 Expected decision surface of the nonlinear SVM

This is the expected decision surface of the nonlinear SVM.
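The mapping function of Figure 17 is not reproduced in this extraction, so the sketch below substitutes a simple squaring map φ(x1, x2) = (x1², x2²), an assumption of ours that also makes these eight points linearly separable, to show how the transformed problem reduces to an ordinary linear SVM.

```python
# Sketch of the transformation step: map the points nonlinearly, then
# train a linear SVM in the new feature space. With phi(x) = x**2, all
# blue points land on (1, 1) and all red points on (4, 0) or (0, 4).
import numpy as np
from sklearn.svm import SVC

blue = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
red = np.array([[2, 0], [0, 2], [-2, 0], [0, -2]], dtype=float)

def phi(X):
    """Stand-in nonlinear map into the new feature space (assumption)."""
    return X ** 2

X = phi(np.vstack([blue, red]))
y = np.array([-1] * 4 + [1] * 4)          # blue negative, red positive

clf = SVC(kernel="linear").fit(X, y)      # now an ordinary linear SVM
print(clf.predict(phi(np.array([[1.0, -1.0], [0.0, 2.0]]))))  # -> [-1, 1]
```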

3. OPTIMIZATION OF SVM MODEL

In order to optimize the problem, the distance between the hyperplane and the support vectors should be maximized [13] [14]. Following [14], the objective can be motivated from logistic regression, whose hypothesis is

h_θ(x) = 1 / (1 + e^(-θᵀx))

The cost contributed by a single training example is

-(y log h_θ(x) + (1 - y) log(1 - h_θ(x))) = -y log(1 / (1 + e^(-θᵀx))) - (1 - y) log(1 - 1 / (1 + e^(-θᵀx)))

If y = 1, then we want h_θ(x) ≈ 1, which requires θᵀx >> 0.

Figure 25 Optimized SVM: when y = 1

If y = 0, then we want h_θ(x) ≈ 0, which requires θᵀx << 0.

Figure 26 Optimized SVM: when y = 0
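This intuition can be checked numerically. The sketch below compares the smooth logistic costs above with the piecewise-linear surrogate costs (often written cost1 and cost0) that [14] substitutes for them in the SVM objective; the sample points for z are an illustrative choice.

```python
# Sketch comparing the logistic costs with hinge-style SVM surrogates.
# Both penalize theta^T x << 0 when y = 1 and theta^T x >> 0 when y = 0.
import numpy as np

z = np.linspace(-3, 3, 7)                 # z = theta^T x

log_cost_y1 = -np.log(1.0 / (1.0 + np.exp(-z)))        # y = 1
log_cost_y0 = -np.log(1.0 - 1.0 / (1.0 + np.exp(-z)))  # y = 0

hinge_y1 = np.maximum(0.0, 1.0 - z)       # SVM cost1(z): zero once z >= 1
hinge_y0 = np.maximum(0.0, 1.0 + z)       # SVM cost0(z): zero once z <= -1

for row in zip(z, log_cost_y1, hinge_y1, log_cost_y0, hinge_y0):
    print("z=%+.1f  log1=%.3f hinge1=%.3f  log0=%.3f hinge0=%.3f" % row)
```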

4. COMPARATIVE STUDY AND ANALYSIS

This section describes the related work done in the field of emotion recognition. It briefly surveys the different techniques used for feature extraction from the audio speech signal and the various classifiers used for the classification of emotions. Table 1 illustrates the application of various classifiers to speech emotion recognition. Annexure I tabulates the year and authors of each paper and the databases used for testing and training. It also states the emotions (happy, sad, anger, disgust, boredom, fear, neutral) that have been classified, the accuracy levels reached in the research work over the years, and the corresponding drawbacks of the designed systems.

5. ANNEXURE I

The feature extraction techniques used are Mel Frequency Cepstrum Coefficients (MFCC), Mel Energy Spectrum Dynamic Coefficients (MEDC), Linear Predictive Cepstral Coefficients (LPCC), spectral features, Modulation Spectral Features (MSF), Linear Predictive Coding (LPC), etc. The various classifiers used are the Support Vector Machine (SVM), Gaussian Mixture Model (GMM), Granular SVM (GSVM), SVM with Radial Basis Function (RBF), Hidden Markov Model (HMM), Back Propagation Neural Network (BPNN), k-NN, etc.
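As one concrete illustration of the front-ends listed above, the sketch below extracts MFCC features from a single utterance. It assumes the librosa library and a hypothetical file name, and the mean-pooling step is a simple choice of ours, not a method prescribed by the surveyed papers.

```python
# Sketch of MFCC feature extraction for an SVM emotion classifier.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)       # hypothetical file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, n_frames)

feature_vector = mfcc.mean(axis=1)                    # fixed-length summary
print(feature_vector.shape)                           # -> (13,)
```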

6. CONCLUSION

In this paper, a survey of current research work on speech emotion recognition systems and a detailed description of the concepts underlying the SVM have been presented. The paper describes briefly what linear and nonlinear classification are and how classes are separated, with the help of worked examples. It also describes a method for optimizing the SVM. In addition, it presents a brief survey of different applications of speech emotion recognition. This study aims to provide a simple guide for researchers carrying out studies on speech emotion recognition systems.


Table 1: Survey on different Classifiers for Emotion Recognition



References
[1] Akshay S. Utane, S. L. Nalbalwar, "Emotion Recognition Through Speech Using Gaussian Mixture Model and Hidden Markov Model," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 4, April 2013, ISSN: 2277-128X.
[2] Siqing Wu, Tiago H. Falk, Wai-Yip Chan, "Automatic speech emotion recognition using modulation spectral features," Speech Communication 53 (2011) 768-785, Elsevier.
[3] Yixiong Pan, Peipei Shen, Liping Shen, "Speech Emotion Recognition Using Support Vector Machine," International Journal of Smart Home, Vol. 6, No. 2, April 2012.
[4] Aastha Joshi, "Speech Emotion Recognition Using Combined Features of HMM & SVM Algorithm," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 8, August 2013, ISSN: 2277-128X.
[5] Bhoomika Panda, Debananda Padhi, Kshamamayee Dash, Sanghamitra Mohanty, "Use of SVM Classifier & MFCC in Speech Emotion Recognition System," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 3, March 2012, ISSN: 2277-128X.
[6] K. Suri Babu, Srinivas Yarramalle, Suresh Varma Penumatsa, "Text-Independent Speaker Recognition using Emotional Features and Generalized Gamma Distribution," International Journal of Computer Applications (0975-8887), Volume 46, No. 2, May 2012.


[7] Nitin Thapliyal, Gargi Amoli, "Speech based Emotion Recognition with Gaussian Mixture Model," International Journal of Advanced Research in Computer Engineering & Technology, Volume 1, Issue 5, July 2012, ISSN: 2278-1323.
[8] S. Demircan, H. Kahramanli, "Feature Extraction from Speech Data for Emotion Recognition," Journal of Advances in Computer Networks, Vol. 2, No. 1, March 2014.
[9] Jagvir Kaur, Abhilash Sharma, "Speech Emotion-Speaker Recognition Using MFCC and Neural Network," Global Journal of Advanced Engineering Technologies, Vol. 3, Issue 3, 2014, ISSN: 2277-6370.
[10] Eun Ho Kim, Kyung Hak Hyun, Soo Hyun Kim, Yoon Keun Kwak, "Improved Emotion Recognition With a Novel Speaker-Independent Feature," IEEE/ASME Transactions on Mechatronics, Vol. 14, No. 3, June 2009.
[11] S. Ramakrishnan, Ibrahiem M. M. El Emary, "Speech emotion recognition approaches in human computer interaction," Telecommunication Systems 52 (2013) 1467-1478, Springer Science+Business Media, published online 2 September 2011, DOI 10.1007/s11235-011-9624-z.
[12] Showkat Ahmad Dar, Zahid Khaki, "Emotion Recognition Based On Audio Speech," IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 6 (May-June 2013), PP 46-50.
[13] Bo Yu, Haifeng Li, Chunying Fang, "Speech Emotion Recognition based on Optimized Support Vector Machine," Journal of Software, Vol. 7, No. 12, December 2012.

[14] "Optimization Objective," Machine Learning course video, Coursera / Stanford University, https://www.coursera.org/course/ml.
[15] "Non Linear Support Vector Machines (Non Linear SVM)," Scholastic Tutors, June 2014, http://scholastictutors.webs.com.
[16] Saed Sayad, "Support Vector Machine - Classification (SVM)," 2015, http://www.saedsayad.com/support_vector_machine.htm.
[17] J. Christopher Bare, "Support Vector Machines," Digitheads Lab Notebook, November 30, 2011.
[18] Jiawei Han, Micheline Kamber, Data Mining: Concepts and Techniques, Second Edition, Morgan Kaufmann Publishers / Elsevier, 2006, ISBN: 978-1-55860-901-3.
AUTHOR
Soma Bera received the B.E. degree in Information Technology from Atharva College of Engineering in 2013. She is presently pursuing a Master of Engineering (M.E.) in Computer Engineering at Shree L.R. Tiwari College of Engineering, and is with the Atharva Institute of Information Technology, training young and dynamic IT undergraduates.
