
Detection of Driver Drowsiness during Driving

1. Abstract
The increasing number of automobiles has increased the accident rate. Several factors can lead to accidents, and one of the major factors is drowsiness. The National Highway Traffic Safety Administration (NHTSA) estimates that a total of 100,000 vehicle crashes each year are the direct result of driver drowsiness. A reliable driver drowsiness detection system should therefore be developed to alert the driver. In this study, an in-vehicle vision system for monitoring driver facial features using a video camera is proposed. Given an input video sequence, the driver's face is first located in a video image. To identify the face, the Viola-Jones detector is used. After identifying the face, facial features such as the eyes and mouth are located within the face. The head position of the driver can also be detected, and yawn detection can be performed. Human skin has specific characteristics in the RGB color space, but skin segmentation in RGB is quite sensitive to illumination variation; therefore, another set of rules is added in the YCbCr space. The first step towards mouth segmentation is to highlight the mouth region. The red color is the strongest component in the mouth area while the blue color is the weakest, so the Cr component is larger than the Cb component in the mouth area. Even though the mouth area is highlighted in the output, some other regions may be falsely classified as mouth, so a post-processing step is needed to find the mouth contour with better precision.
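As a rough illustration of the Cr-versus-Cb rule summarized above, the following Python/OpenCV sketch highlights candidate mouth pixels. The margin threshold is an assumed illustrative value, not a parameter taken from the paper.

```python
import cv2
import numpy as np

def highlight_mouth_region(bgr_frame, margin=10):
    # Convert to YCrCb; in OpenCV the channel order is Y, Cr, Cb.
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    # In the mouth area red (Cr) dominates blue (Cb), so Cr - Cb is large.
    # The margin of 10 is an illustrative choice, not from the paper.
    mask = (cr.astype(np.int16) - cb.astype(np.int16)) > margin
    return mask.astype(np.uint8) * 255  # binary image highlighting the mouth region
```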

2. Introduction
Different techniques are used in driver-fatigue monitoring systems. These techniques fall into three categories. The first category includes intrusive techniques, which are mostly based on monitoring biomedical signals and therefore require physical contact with the driver. The second category includes non-intrusive techniques based on visual assessment of the driver's bio-behavior from face images, such as head movement and eye state. The third category includes methods based on driving performance, which monitor vehicle behavior such as course, steering angle, speed, and braking.

Among face detection approaches, the method proposed by Viola and Jones, based on statistical learning, is the most popular. It uses a variant of the AdaBoost algorithm to achieve rapid and robust face detection. Viola and Jones proposed a detection method based on the AdaBoost learning algorithm with Haar-like features that can detect faces with high accuracy.
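A minimal sketch of Viola-Jones style face detection using OpenCV's pretrained Haar cascade is shown below. The scaleFactor and minNeighbors values are common illustrative settings, not parameters taken from the cited work.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade (Viola-Jones style detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_frame):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors are typical illustrative values.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```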

3. Need and Importance of the Project


Driver assistance systems improve road safety by contributing to the reduction of vehicle crashes. Safety can be improved by designing safer vehicles and by monitoring the behavior of road users and drivers. In the first level, data is collected in real time using a camera. In the second level, the collected data is converted into the format required by the application and redundancy is removed. In the third level, features or parameters are computed using feature extraction algorithms. Finally, a decision is obtained based on the extracted parameters and a warning is generated to avoid accidents.
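A minimal sketch of this four-level flow is given below. The stage functions (preprocess, extract_features, is_drowsy) are hypothetical placeholders introduced only for illustration; they are not part of the original paper.

```python
import cv2

# Placeholder stages; these function names are illustrative and not from the paper.
def preprocess(frame):
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # level 2: format conversion

def extract_features(gray):
    return {"eyes_closed": False, "yawning": False}         # level 3: stub feature extraction

def is_drowsy(features):
    return features["eyes_closed"] or features["yawning"]   # level 4: decision

def run_pipeline(camera_index=0):
    cap = cv2.VideoCapture(camera_index)                     # level 1: real-time capture
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if is_drowsy(extract_features(preprocess(frame))):
            print("Drowsiness warning!")                     # warning to avoid accidents
    cap.release()
```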

4. Objective
The objectives are to:
1. Monitor the driver's facial features using a video camera.
2. Detect driver drowsiness by analyzing the eye state, head position, and yawning.

5. Methodology
The input video is divided into frames and successive frames are considered. Once the frames are obtained, the face is located; the Viola-Jones detector is used to identify the face. Facial features such as the eyes and mouth are then located using Haar cascades. A face model is constructed and transformation techniques are applied to it; when the driver's face turns by a significant angle, an alarm is sounded to awaken the driver. The face model is defined by three points: the two eye positions and the mouth position. The origin (0, 0, 0) is set as the midpoint between the two eyes, and the coordinates are measured from the driver's face in a monitoring image. For each point (feature), feature values such as region size (width and height), the histogram of the color distribution, and the histogram of the edge orientation are stored for tracking.
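The sketch below illustrates how the eyes and mouth might be located inside a detected face region with OpenCV Haar cascades and how the three-point face model (two eyes and mouth, origin at the midpoint between the eyes) could be assembled. The smile cascade stands in for a mouth detector, and the detection parameters are assumed illustrative values rather than settings from the paper.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")  # stand-in mouth detector

def build_face_model(gray, face_rect):
    x, y, w, h = face_rect
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    mouths = mouth_cascade.detectMultiScale(roi, scaleFactor=1.5, minNeighbors=15)
    if len(eyes) < 2 or len(mouths) < 1:
        return None
    # Centers of the two eyes and the mouth in full-image coordinates.
    eye_centers = [(x + ex + ew // 2, y + ey + eh // 2)
                   for ex, ey, ew, eh in eyes[:2]]
    mx, my, mw, mh = mouths[0]
    mouth_center = (x + mx + mw // 2, y + my + mh // 2)
    # The model origin is the midpoint between the two eyes.
    origin = ((eye_centers[0][0] + eye_centers[1][0]) // 2,
              (eye_centers[0][1] + eye_centers[1][1]) // 2)
    return {"eyes": eye_centers, "mouth": mouth_center, "origin": origin}
```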

6. Input Parameters/ Size of Samples


The input video is in YUY format. The supported frame rate is 30 frames per second. The resolutions considered are 800×600, 1024×768, and 1600×1200. For drowsiness detection, every 5th frame is considered; if drowsiness is detected, consecutive frames are examined, otherwise the system continues to check every 5th frame.
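A short sketch of this frame-sampling rule follows; check_drowsiness is a hypothetical callback standing in for the actual detector.

```python
def sample_frames(frames, check_drowsiness):
    # Check every 5th frame; once drowsiness is detected, check consecutive
    # frames until it clears, then fall back to every 5th frame.
    index, drowsy_indices = 0, []
    while index < len(frames):
        drowsy = check_drowsiness(frames[index])
        if drowsy:
            drowsy_indices.append(index)
        index += 1 if drowsy else 5
    return drowsy_indices
```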

7. Hypothesis
Consider an input video sequence. The video is divided into frames and successive frames are observed. The driver's face is then located; the Viola-Jones face detector is used to identify the face. An integral image is constructed and a detection window is moved over the image so that the face can be detected precisely. After identifying the face, facial features including the eyes, mouth, and face pose are considered. During detection some facial features may not be correctly classified, so the AdaBoost algorithm is used; AdaBoost is a machine learning algorithm that combines weak classifiers to correctly classify features that were not identified properly. Drowsiness can be detected by plotting a histogram and observing its shape. Next, the head position is checked by measuring angles and then applying transformation techniques. For yawn detection, human skin has specific characteristics in the RGB color space, but skin segmentation in RGB is quite sensitive to illumination variation, so another set of rules is added in the YCbCr space. The first step towards mouth segmentation is to highlight the mouth region. The red color is the strongest component in the mouth area while the blue color is the weakest, so the Cr component is larger than the Cb component in the mouth area. Even though the mouth area is highlighted in the output, some other regions may be falsely classified as mouth, so a post-processing step should be performed to find the mouth contour with better precision.
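A hedged sketch of such a post-processing step is shown below, assuming the highlighted mouth region is a binary mask: small falsely classified regions are removed with morphological filtering and the largest remaining contour is taken as the mouth. The kernel size is an illustrative choice, not a value from the paper.

```python
import cv2

def refine_mouth_contour(mouth_mask):
    # Morphological opening/closing removes small falsely classified regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(mouth_mask, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep the largest contour as the mouth boundary.
    return max(contours, key=cv2.contourArea)
```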

8. References:
[1] "Efficient Eyes and Mouth Detection Using Viola-Jones Technique," June 2013.

[2] J. M. Wang, H. P. Chou, C. F. Hsu, S. W. Chen, and C. S. Fuh, "Extracting Driver's Facial Features During Driving."

[3] B. Hariri, S. Abtahi, S. Shirmohammadi, and L. Martel, "A Yawning Measurement Method to Detect Driver Drowsiness."
