Volume: 4 Issue: 4
ISSN: 2321-8169
466 - 470
______________________________________________________________________________________
Abstract: Humans display their emotions through facial expressions. To achieve more effective human-computer interaction, recognizing emotion from the human face can prove to be an invaluable tool. In this work an automatic facial emotion recognition system that operates on video is described. The main aim is to detect the human face in the video and classify the emotion on the basis of facial features. Human facial expressions have been studied extensively, including among preliterate cultures, and these studies found much commonality in the expression and recognition of emotions on the face. The expressions considered here represent happiness, sadness, anger, fear, surprise and disgust.
Emotion detection from speech also has many important applications: in human-computer based systems, emotion recognition allows services to be improved according to the user's emotional state. The body of work on detecting emotion in speech is still quite limited. Researchers continue to debate which features affect emotion identification in speech, and there is no agreement on the best algorithm for classifying emotion or on which emotions to class together.
Keywords: human-computer interaction, human emotion, facial expression
__________________________________________________*****_________________________________________________
I. INTRODUCTION
Depression is a disorder that affects a person's life functions. A variety of features can be extracted from human speech. We use statistics relating to the pitch, Mel Frequency Cepstral Coefficients (MFCCs) and formants of speech as inputs to classification algorithms [1]. Measuring emotion recognition accuracy allows us to identify the features that carry the most emotional information, and using these methods we achieve high emotion recognition accuracy. In this paper we use k-means and Support Vector Machines (SVMs) to classify emotions. Emotion capture involves several phases such as preprocessing, feature extraction and face detection. Preprocessing removes unwanted noise from the speech signal; feature extraction keeps only the data that are useful for computing the result [2].
The authors of [3] presented a system to determine emotions from facial expressions displayed in live video streams and video sequences. The system is based on the Piecewise Bezier Volume Deformation tracker and has been extended with a face detector to capture the human face automatically at initialization. They used Naive Bayes and the Tree Augmented Naive Bayes (TAN) classifiers in person-dependent and person-independent tests on the Cohn-Kanade database.
The authors of [4] implement a framework for emotional state classification from still images together with a real-time feature extraction and emotion analysis application [4]. The application automatically detects the face and codes it in seven different dimensions, i.e. neutral, anger, happiness, sadness, fear, surprise and disgust.
A. Live Streaming
In Fig. 1, live streaming is the important first step of image acquisition in real time. Image frames are obtained using streaming media [5]. In this stage the application receives images from the video camera device, and streaming continues until an input image frame is acquired.
B. Frontal Face
This file is used to capture the image and code it with respect to two dimensions in real time, i.e. normal and abnormal [5].
D. Face Detection
Face detection discovers the size and location of face objects within an input image. The face is detected from certain facial features, ignoring all the elements that are not useful for detecting the face. As shown in Fig. 4, it is necessary to convert the original image into binary format and scan the whole image for the forehead. To convert the color image into a binary image, we measure the average value of RGB for each pixel: if the value is smaller than 110 we replace the pixel with a black pixel, otherwise we replace it with a white pixel. By this method we obtain a binary image from the RGB image [6].
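The thresholding rule above can be sketched directly. A minimal NumPy version, assuming an H x W x 3 RGB array and the threshold of 110 stated in the text (the function name is illustrative):

```python
import numpy as np

def binarize_rgb(image, threshold=110):
    """Convert an H x W x 3 RGB image to a binary image:
    pixels whose average RGB value is below the threshold become
    0 (black), all others become 255 (white)."""
    mean_rgb = image.mean(axis=2)              # average of R, G, B per pixel
    return np.where(mean_rgb < threshold, 0, 255).astype(np.uint8)

# tiny 1x2 example: one dark pixel and one bright pixel
img = np.array([[[20, 30, 40], [200, 210, 220]]], dtype=np.uint8)
print(binarize_rgb(img))   # dark pixel -> 0, bright pixel -> 255
```

The resulting binary image is what the forehead scan described above operates on.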
E. Eye Detection
For eye detection we first convert the RGB (red-green-blue) image into binary (black and white) form. We then scan the image using the (W/4) formula, where W is the width of the image.
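The text gives only the W/4 starting height for the scan. The sketch below works under that assumption; the band height (W/8 here) is chosen purely for illustration and is not specified in the paper:

```python
import numpy as np

def eye_search_band(binary_image):
    """Return the horizontal band where eyes are searched for,
    assuming (as in the text) the scan starts at height W/4,
    where W is the image width. The band height W/8 is an
    illustrative assumption, not specified in the paper."""
    h, w = binary_image.shape
    top = w // 4                       # scan starts at row W/4
    bottom = min(h, top + w // 8)      # assumed band height
    return binary_image[top:bottom, :]

img = np.zeros((120, 80), dtype=np.uint8)   # 120-tall, 80-wide binary image
band = eye_search_band(img)
print(band.shape)   # (10, 80): rows 20..29 of the image
```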
match with the database values, then the average result is calculated, and a decision is made according to that result [13].
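The paper does not specify how extracted values are matched with the database. As one hedged sketch, the fragment below assumes per-emotion template vectors and decides on the emotion with the smallest average absolute difference; all names and values are illustrative:

```python
import numpy as np

# Hypothetical stored per-emotion feature templates (the paper does not
# describe the database format; these entries are purely illustrative).
templates = {
    "happy": np.array([0.8, 0.2, 0.6]),
    "sad":   np.array([0.1, 0.7, 0.3]),
}

def classify(features, templates):
    """Compare extracted features against each stored template and
    pick the emotion whose template has the smallest average
    absolute difference from the features."""
    scores = {name: float(np.mean(np.abs(features - tmpl)))
              for name, tmpl in templates.items()}
    return min(scores, key=scores.get)

print(classify(np.array([0.75, 0.25, 0.55]), templates))   # -> happy
```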
improve the emotion detection technique using the Bezier Curve Algorithm. The system works well for faces with different shapes and skin tones, as well as for audio speech and voice modulations. The key design principles behind this successful implementation of a large real-time system included choosing efficient data structures and algorithms and employing suitable software engineering tools. In addition, the paper presents an understanding of a wide area of Computer Science to demonstrate that highly accurate speech and facial emotion detection analysis is possible, and that it can be done in real time.
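The conclusion credits the Bezier Curve Algorithm, but the paper gives no formula. As a minimal sketch, a cubic Bezier curve, the form usually used to fit facial-feature contours such as lips or eyes, is evaluated from four control points (the control points here are illustrative, not from the paper):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter values t (array in [0, 1]):
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# illustrative control points for a mouth-like contour
p0, p1, p2, p3 = (np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                  np.array([3.0, 2.0]), np.array([4.0, 0.0]))
curve = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 5))
print(curve[0], curve[-1])   # endpoints coincide with p0 and p3
```

Fitting such curves to detected feature points yields a compact contour representation whose shape parameters can be compared across expressions.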
ACKNOWLEDGMENT
I wish to express my sincere thanks to my guide, Prof. Shamla Mantri, and to the Head of Department, Prof. (Dr.) A. S. Hiwale, as well as to our Principal, Prof. (Dr.) M. S. Nagmode. Grateful thanks also to our Coordinator, Prof. Neha Sathe, and, last but not least, to the departmental staff members for their support.
REFERENCES
[1] Shamla Mantri, Dr. Dipti Patil, Dr. Pankaj Agarwal, Dr. Vijay Wadhai, "Cumulative Video Analysis Based Smart Framework for Detection of Depression Disorders," 2015 International Conference on Pervasive Computing (ICPC).
[2] Shamla Mantri, Dr. Dipti Patil, Ria Agarwal, Shraddha Bhattad, Ankit Padiya, Rakshit Rathi, "A Survey: Pre-processing and Feature Extraction Techniques for Depression Analysis Using Speech Signal," International Journal of Computer Science Trends and Technology (IJCST), Volume 2, Issue 2, Mar-Apr 2014.
[3] Aitor Azcarate, Felix Hageloh, Koen van de Sande, Roberto Valenti, "Automatic Facial Emotion Recognition," Universiteit van Amsterdam, June 2005.
[4] Liyanage C. De Silva, Chun Hui, "Real Time Facial Feature Extraction and Emotion Recognition," 2003.
[5] P. M. Chavan, Manan C. Jadhav, Jinal B. Mashruwala, "Real Time Emotion Recognition through Facial Expressions for Desktop Devices," International Journal of Emerging Science and Engineering, Volume 1, Issue 7, May 2013.
[6] Alex Mordkovich, Kelly Veit, Daniel Zilber, "Detecting Emotion in Human Speech," December 16, 2011.
[7] Asthana, A., Saragih, J., Wagner, M., Goecke, "Evaluating AAM Fitting Methods for Facial Expression Recognition," Proceedings of the IEEE International Conference on Affective Computing and Intelligent Interaction (ACII09), pp. 598-605, 2009.
[8] Casale, S., Russo, A., Scebba, G., Serrano, "Speech Emotion Classification Using Machine Learning Algorithms," IEEE International Conference on Semantic Computing, 2008.
[9] Zhu, X., Ramanan, "Face detection, pose estimation, and landmark localization in the wild," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2879-2886, 2012.
IJRITCC | April 2016, Available @ http://www.ijritcc.org