
International Journal of Engineering Research and General Science Volume 4, Issue 2, March-April, 2016

ISSN 2091-2730

Night Time Vehicle Detection Using Tail Lights: A Survey


Swathy S Pillai, Radhakrishnan B

PG Scholar, Dept. of Computer Science and Engineering, Baselios Mathews II College of Engineering, Sasthamcotta, Kerala, India. Ph: 9895759462
swathysp@gmail.com
Asst. Professor, Dept. of Computer Science and Engineering, Baselios Mathews II College of Engineering, Sasthamcotta, Kerala, India
radhak77@rediffmail.com

Abstract- The main challenge during night-time driving is the visibility of the road ahead. Controlling accidents at night is difficult because night conditions differ from daytime in environmental lighting, vehicle lighting, etc. The falling cost of cameras and optical devices has made it practical to install front-mounted intelligent systems for forward collision avoidance and mitigation. At night, vehicles ahead are generally seen by their tail lights. Turn signals are particularly important because they indicate lane changes and potential collisions. This survey reviews vehicle recognition systems that identify and segment tail lights in the night-time road environment.

Keywords

Segmentation, Edge detection, Taillight, Blob detection, Connected component analysis, Symmetry score, Nakagami-m distribution.

INTRODUCTION

This paper gives a literature review of various techniques used to identify vehicles at night. Computer Vision has contributed substantially to road safety and security. The appearance of vehicles at night differs from daylight conditions in environmental lighting, reflection of light on vehicle bodies, vehicle color, etc. A different image processing approach is therefore essential for the night-time road environment. The methods surveyed here segment the tail lights of vehicles and classify them. When vehicles are viewed from behind at night, they are primarily visible by the red color of their tail lights. Every car model has its own physical and structural features that distinguish it from others, and the appearance of its brake lights is one of these features. For the techniques to be most efficient, ideally a tuned camera is used which filters only the red lights, thereby eliminating image bleeding.

Fig 1 (a) Tail light of a vehicle; (b) brake light of a vehicle

TECHNIQUES
Segmentation
Segmentation is the process by which an image is partitioned into smaller parts so that processing of the image becomes more meaningful and easier, i.e. the process of partitioning a digital image into sets of pixels. Segmentation makes an image easier to analyze: it assigns a label to every pixel so that pixels with the same label share common characteristics.

Bhavinkumar M. Rohit et al. [1] used the segmentation approach and introduced low-light video frames with low exposure values. The idea behind using low-exposure image frames is that factors such as street lights, unwanted reflections from vehicle
bodies and reflections from sign boards can be removed, leaving only the bright red tail lights and the head lights of oncoming vehicles visible in the frame. Segmenting the red color from other noise is the major issue in identifying the tail lights.

Binamrata Baral et al. [2] explain special-theory-based segmentation techniques such as genetic algorithm, neural network, clustering and wavelet-based segmentation. The genetic algorithm depends on the process of natural selection and uses operators such as selection, crossover and mutation. Neural-network-based segmentation applies the principles of ANNs. For clustering, Fuzzy C-means and K-means clustering are implemented, and in wavelet-based segmentation the wavelet transform of the image is used for feature extraction.
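
As an illustration of the clustering-based approach mentioned above, the sketch below uses K-means in OpenCV to partition a night-time frame into k color clusters; the value of k and the frame path are illustrative assumptions, not parameters from [2].

```python
import cv2
import numpy as np

def kmeans_segment(frame_bgr, k=4):
    """Cluster the pixels of a frame into k color groups (K-means segmentation)."""
    # Reshape the image into an N x 3 array of float32 color samples.
    samples = frame_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    # Replace every pixel by its cluster center to visualise the segmentation.
    segmented = centers.astype(np.uint8)[labels.flatten()].reshape(frame_bgr.shape)
    return labels.reshape(frame_bgr.shape[:2]), segmented

# Example usage (the file name is hypothetical):
# frame = cv2.imread("night_frame.png")
# labels, seg = kmeans_segment(frame, k=4)
```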

Junbin Guo et al. [3] explained the Otsu algorithm for tail-light segmentation to identify vehicles at night. After analyzing the histogram of the image, the lower boundary for thresholding is determined from the average of the higher gray levels. Next, between this lower boundary and the highest gray level of the image, the optimal threshold is calculated by the Otsu method and the image is binarized with this threshold. The resulting object is computed by,

    g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 otherwise   (1)

where f(x, y) is the image and T the threshold value.

Fig 2 Histogram showing apparent classes using thresholds
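
A minimal sketch of this two-stage thresholding idea, in Python: Otsu's method is applied only to gray levels above a lower bound. The lower bound used here (the 75th percentile) is a simplified stand-in for the average-of-higher-gray-levels rule described in [3], so the values are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def restricted_otsu(gray, lower):
    """Otsu's threshold computed only over gray levels in [lower, 255]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)[lower:]
    levels = np.arange(lower, 256, dtype=np.float64)
    best_t, best_var = lower, -1.0
    for i in range(1, len(hist)):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * levels[:i]).sum() / w0   # mean of the lower class
        m1 = (hist[i:] * levels[i:]).sum() / w1   # mean of the upper class
        between = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if between > best_var:
            best_var, best_t = between, levels[i]
    return int(best_t)

def segment_taillights(gray):
    # Lower bound: 75th percentile of gray levels (simplified stand-in for [3]).
    lower = int(np.percentile(gray, 75))
    t = restricted_otsu(gray, lower)
    return (gray > t).astype(np.uint8) * 255      # binary image of Eq. (1)
```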

Edge detection

The edges of an image indicate the higher-frequency regions within that image. Edge detection refers to the process of identifying and locating abrupt changes in pixel intensity, which characterize the boundaries of objects in an image. Edge detection is applicable in image segmentation, data compression, image reconstruction and so on. There are two types of edge detection: first-order and second-order. Common edge detection techniques include the Sobel, Prewitt and Canny detectors.

P. Srinivas et al. [4] implemented Canny edge detection to detect vehicle tail lights. The algorithm smooths the image and computes the gradient to highlight regions with high spatial derivatives. Non-maxima are then suppressed and two threshold values are applied: a pixel is set to zero if its gradient magnitude is below the first threshold, it is accepted as an edge if the magnitude is above the second threshold, and it is kept as part of an edge path only if the magnitude lies between the two and the pixel connects to a strong edge.
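
A short sketch of this Canny-based step, assuming OpenCV; the smoothing kernel size and the two hysteresis thresholds below are illustrative values, not the ones used in [4].

```python
import cv2

def taillight_edges(gray, low_t=50, high_t=150):
    """Canny edge detection: smooth, compute gradients, suppress non-maxima,
    then apply double-threshold hysteresis (handled inside cv2.Canny)."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # smooth the image first
    return cv2.Canny(blurred, low_t, high_t)      # keep edges confirmed by hysteresis
```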

M. Kalpana et al. [5] explained edge detection operators such as the differential operators (Roberts, Prewitt and Sobel), the Log operator and the Canny operator. Among these, the Canny operator is the most commonly used. Because noise can remain in the detected image even after edge detection, a wavelet transformation is then used to remove it.

Manoj K. Vairalkar et al. [6] proposed Sobel edge detection for image detection. The Sobel detector is selected after examining the edge orientation, the noise environment and the edge structure; depending on these factors, either gradient-based or Laplacian-based detection is applied. The Sobel operator uses a pair of 3x3 convolution kernels, one obtained by rotating the other by 90°. The gradient magnitude G is given by,

    |G| = √(Gx² + Gy²)   (2)

where Gx and Gy are the gradient magnitudes in the x and y directions, respectively.
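
A minimal sketch of the Sobel gradient-magnitude computation of Eq. (2), assuming OpenCV; the 3x3 kernels correspond to the pair of convolution kernels mentioned above.

```python
import cv2
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude |G| = sqrt(Gx^2 + Gy^2) using the 3x3 Sobel kernels."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient Gx
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient Gy
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return cv2.convertScaleAbs(mag)                   # scale back to 8-bit for display
```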


Filtering and Enhancement

Filtering is the process of modifying or enhancing an image by removing noise from it; it is a neighborhood operation in which each output pixel value is obtained by applying some algorithm to the neighborhood of the corresponding input pixel. Image enhancement is the process by which specific features of an image are brought out, for example by histogram equalization or median filtering. Enhancement makes it easier to identify the key features of an image.

Bharti Sharma et al. [7] proposed an algorithm for vehicle detection in which the output image is filtered after a threshold value is applied to the vehicle image. A Shape Index (SI) is then computed to extract the target object, since it picks out the bright target object more precisely (3). By these operations the vehicle is detected.

Pushpalata Patil et al. [8] explained a method for vehicle detection by image processing in which the image is enhanced with the help of a computer. The main objective is to create an image that is more suitable for the specific application. The technique enhances the image through its joints and lineaments, and contrast enhancement is used to emphasize brightness differences associated with linear features.

Fig 3 Vehicle Detection Steps
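
The filtering and enhancement operations discussed in this section can be sketched as follows, assuming OpenCV: median filtering removes impulsive noise, and histogram equalization stretches contrast to emphasize bright lamp regions. This is one common combination, not necessarily the exact processing used in [7] or [8].

```python
import cv2

def enhance_night_frame(gray):
    """Denoise with a median filter, then stretch contrast by histogram equalization."""
    denoised = cv2.medianBlur(gray, 5)   # 5x5 median filter removes salt-and-pepper noise
    return cv2.equalizeHist(denoised)    # histogram equalization emphasizes bright regions
```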

LITERATURE SURVEY

Bhavinkumar M. Rohit et al. [1] describe a system for detecting vehicles based on their rear lights. They used the segmentation approach and introduced low-light video frames with low exposure values. The idea behind using the low-exposure frames is that factors such as street lights, unwanted reflections from vehicle bodies and sign boards can be removed, so that only the bright red tail lights and the head lights of oncoming vehicles remain visible in the frame. Segmenting the red color from other noise is the major issue in identifying the tail lights. The red layer is first extracted from the rear light and then converted to a gray image. The gray frame is subtracted from the red frame and the unwanted noise is removed by a median filter. A threshold value is set to convert the resulting image to its corresponding binary image. Blob analysis techniques are then used to calculate the area and the corresponding bounding boxes; blob detection methods aim to find regions of an image whose properties are similar inside the region and differ from the surroundings. Depending on symmetry, the tail lights belonging to the same vehicle are identified and nuisance detections are rejected.
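
A compact sketch of the pipeline summarised above (red-layer extraction, gray subtraction, median filtering, thresholding, blob analysis), assuming OpenCV; the threshold and minimum blob area are illustrative values, not those used in [1].

```python
import cv2

def detect_taillight_blobs(frame_bgr, thresh=60, min_area=20):
    """Red channel minus gray, denoise, threshold, then find blobs and bounding boxes."""
    red = frame_bgr[:, :, 2]                                   # red layer (OpenCV stores BGR)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.subtract(red, gray)                             # suppress white/gray light sources
    diff = cv2.medianBlur(diff, 5)                             # remove impulsive noise
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Connected-component (blob) analysis: area and bounding box of each region.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, n):                                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h, area))
    return boxes
```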

Chen et al. [11] use segmentation to identify bright objects and verify the segmented regions by spatial clustering based on symmetric properties such as shape, texture and relative position. A Nakagami-m distribution approach is used to detect turn signals by scatter modeling of the tail lights. The turn signals are detected using contrast enhancement, from which the intensity of the image is obtained. To avoid noise generated by non-tail lights, a step function is applied during preprocessing. The tail light thus obtained is modeled using the Nakagami-m distribution. Color space regulation on the CIE xy chromaticity diagram is then introduced to verify the detected turn signals. To recognize the direction of a detected turn signal, the vehicle reflectance is first decomposed, since the reflectance strength of the area near a turn signal is larger than that of other areas. The bounding area is analyzed, and to cope with variation in the event pattern a training algorithm such as AdaBoost is adopted; with this algorithm, classifiers for the left and right turn-signal directions are trained.
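
The Nakagami-m shape parameter used in [11] can be estimated from an image patch by the standard method of moments; the sketch below shows only that estimator, not the full turn-signal detector, and the decision threshold in the comment is hypothetical.

```python
import numpy as np

def nakagami_m(patch):
    """Method-of-moments estimate of the Nakagami-m shape parameter of a patch:
    m = (E[x^2])^2 / Var(x^2), computed over the pixel intensities x."""
    x2 = patch.astype(np.float64).ravel() ** 2
    return x2.mean() ** 2 / (x2.var() + 1e-12)

# Scattering from a genuine lamp tends to give a different m value than background
# glare; a threshold on m (the 1.5 below is hypothetical) can separate the two.
# is_lamp = nakagami_m(candidate_patch) > 1.5
```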


Noppakun Boonsim et al. [12] present an algorithm to detect a vehicle ahead using its tail-light pair. Tail-light detection is implemented in two steps: tail-light candidate extraction and tail-light verification. The candidate extraction phase extracts red pixels from the input image and segments red pixel regions that contain white. Tail-light verification is then done by applying symmetry scores of size, shape and position to the candidates. The symmetry of position is measured by estimating the y-direction distance of each pair by,

    DS(c_i, c_j) = (1 − |d_i − d_j| / max(d_i, d_j)) × 100   (4)

where DS is the y-distance symmetry score of the pair, and d_i and d_j are the y-axis distances between the border and the center of candidates c_i and c_j, respectively. Next, size symmetry is checked to confirm that the candidate areas are comparable, using

    AS(c_i, c_j) = (1 − |a_i − a_j| / max(a_i, a_j)) × 100   (5)

where AS is the size (area) symmetry score of the pair, and a_i and a_j are the areas of candidates c_i and c_j, respectively. The symmetry of shape is then checked by analyzing the aspect ratio of candidate width and height,

    ARS(c_i, c_j) = (1 − |r_i − r_j| / max(r_i, r_j)) × 100   (6)

where ARS is the aspect-ratio symmetry score of the pair, and r_i and r_j are the aspect ratios of candidates c_i and c_j, respectively.

The position symmetry score is experimentally found to be the most significant, followed by the size and shape symmetry scores:

    Symmetry Score_k = 0.8 · DS_k + 0.1 · AS_k + 0.1 · ARS_k   (7)

Finally, the maximum symmetry score is used to confirm the position of the tail lights (TLs). The symmetry score of a TL pair should be greater than 80; if the score is less than this threshold, the test image is discarded. The TL positions are verified by

    TL pair = Max_k(Symmetry Score_k) > 80   (8)
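
The pairing rule of Eqs. (4)-(8) can be sketched directly, assuming each candidate is described by its bounding box; scaling each sub-score to [0, 100] so that the combined score can be compared with the threshold of 80 is one reasonable reading of the equations above, not a detail confirmed by [12].

```python
def pair_symmetry(c1, c2, w_pos=0.8, w_size=0.1, w_shape=0.1):
    """Symmetry score of a candidate tail-light pair, following Eqs. (4)-(8).
    Each candidate is a bounding box (x, y, w, h); sub-scores lie in [0, 100]."""
    def sub_score(a, b):
        return (1.0 - abs(a - b) / max(a, b)) * 100.0 if max(a, b) > 0 else 0.0

    ds = sub_score(c1[1], c2[1])                        # y-position symmetry, Eq. (4)
    a_s = sub_score(c1[2] * c1[3], c2[2] * c2[3])       # area (size) symmetry, Eq. (5)
    ars = sub_score(c1[2] / c1[3], c2[2] / c2[3])       # aspect-ratio symmetry, Eq. (6)
    return w_pos * ds + w_size * a_s + w_shape * ars    # weighted combination, Eq. (7)

# A pair is accepted as tail lights when the score exceeds 80, Eq. (8):
# is_pair = pair_symmetry(cand_a, cand_b) > 80
```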

Yen-Lin Chen et al. [13] explained an effective system for detecting and tracking moving vehicles at night. Using image segmentation and pattern analysis techniques, the vehicle tail lights are detected and identified. The main contribution of this paper is the classification of car and motorbike tail lights.

A moving car is modeled as a rectangular patch represented by a tracked component group TG_t^k, whose width-to-height ratio must satisfy

    T_a1 ≤ W(TG_t^k) / H(TG_t^k) ≤ T_a2   (9)

where T_a1 and T_a2 are threshold values of 2.0 and 8.0, and W and H represent the width and height of the group of bright objects. TG_t^k should also be symmetrical and well aligned, with the sizes and vertical positions of its paired components agreeing within thresholds of 0.4 and 2.0 (10). In addition, the width of the TG_t^k of a potential car should be in a reasonable ratio with respect to the lane width,

    T_l1 · LW(TG_t^k) ≤ W(TG_t^k) ≤ T_l2 · LW(TG_t^k)   (11)

where LW(TG_t^k) is the approximate lane width associated with TG_t^k, and the thresholds T_l1 and T_l2 are 0.5 and 0.9, respectively.


To identify a motorbike, a single tracked component TP_t^i can be identified as one when

    T_m1 ≤ W(TP_t^i) / H(TP_t^i) ≤ T_m2   (12)

where the threshold values T_m1 and T_m2 are 0.6 and 1.2.
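
A hedged sketch of this rule-based classification: a tracked group whose width-to-height ratio and width-to-lane-width ratio fall inside the ranges of Eqs. (9) and (11) is labelled a car, while a single near-square component is labelled a motorbike by Eq. (12). The alignment test of Eq. (10) is simplified here to a height-similarity check with a hypothetical 0.5 threshold, since its exact form is not given above.

```python
def is_car_group(w, h, lane_width, comp_heights):
    """Rule-based car test roughly following Eqs. (9)-(11) of Chen et al. [13]."""
    aspect_ok = 2.0 <= w / h <= 8.0                        # Eq. (9): group aspect ratio
    # Simplified stand-in for Eq. (10): paired components should have similar heights
    # (the 0.5 ratio below is a hypothetical threshold, not from the paper).
    sym_ok = min(comp_heights) / max(comp_heights) >= 0.5
    lane_ok = 0.5 * lane_width <= w <= 0.9 * lane_width    # Eq. (11): width vs. lane width
    return aspect_ok and sym_ok and lane_ok

def is_motorbike(w, h):
    """Single-component motorbike test, Eq. (12)."""
    return 0.6 <= w / h <= 1.2
```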

Ronan O'Malley et al. [14] explain segmenting red lamps and pairing them to form a target vehicle. The RGB color space is converted to HSV, and a cross-correlation method is used to pair the red lights.

Table 1 Color thresholds

The cross-correlation is calculated along the line joining the center regions of the two lights. The cross-correlation between two lamp image segments A and B is calculated by,

    r = Σ_m Σ_n (A(m, n) − Ā)(B(m, n) − B̄) / √( Σ_m Σ_n (A(m, n) − Ā)² · Σ_m Σ_n (B(m, n) − B̄)² )   (13)

where Ā and B̄ are the mean intensities of the two lamp segments.
CHALLENGES

Although many techniques have been implemented for vehicle detection at night, accidents caused by heavy vehicles cannot yet be fully prevented. The detection methods described are not applied to heavy vehicles, whose tail-light arrangement differs from that of cars and motorbikes. It is also difficult to identify a vehicle whose lights are damaged or under repair. Furthermore, the available detection techniques rely on computer vision and therefore carry a high implementation cost.

CONCLUSION

This survey paper reviews different techniques used for the detection and identification of vehicle tail lights at night. Various vehicle detection methods based on image segmentation are explained, and image processing techniques such as segmentation, edge detection, filtering and image enhancement are discussed. By combining these methods, tail lights can be detected and night-time accidents can be reduced to some extent. Further study and research are required in this field to detect all types of vehicles.

REFERENCES:

[1] Bhavinkumar M. Rohit, Mitul M. Patel, "Nighttime Vehicle Tail Light Detection in Low Light Video Frame Using Matlab", International Journal for Research in Applied Science & Engineering Technology, Volume 3, Issue V, May 2015.

[2] Binamrata Baral, Sandeep Gonnade, Toran Verma, "Image Segmentation and Various Segmentation Techniques - A Review", International Journal of Soft Computing and Engineering, Volume 4, Issue 1, March 2014.

[3] Junbin Guo, Jianqiang Wang, Xiaosong Guo, Chuanqiang Yu and Xiaoyan Sun, "Preceding Vehicle Detection and Tracking Adaptive to Illumination Variation in Night Traffic Scenes Based on Relevance Analysis", Sensors 2014, 19 August 2014.

[4] P. Srinivas, Y. L. Malathilatha, M. V. N. K. Prasad, "Image Processing Edge Detection Technique Used for Traffic Control Problem", International Journal of Computer Science and Information Technologies, Vol. 4 (1), 2013, 17-20.

[5] M. Kalpana, G. Kishorebabu, K. Sujatha, "Extraction of Edge Detection Using Digital Image Processing Techniques", International Journal of Computational Engineering Research, Vol. 2, Issue 5.

[6] Manoj K. Vairalkar, S. U. Nimbhorkar, "Edge Detection of Images Using Sobel Operator", International Journal of Emerging Technology and Advanced Engineering, Volume 2, Issue 1, January 2012.

[7] Bharati Sharma, Vinod Kumar Katiyar, Arvind Kumar Gupta and Akansha Singh, "The Automated Vehicle Detection of Highway Traffic Images by Differential Morphological Profile", Journal of Transportation Technologies, Scientific Research, April 2014.

[8] Pushpalata Patil and Suvarana Nandyal, "Vehicle Detection and Traffic Assessment Using Images", Advance in Electronic and Electric Engineering, Volume 3, Number 8 (2013).

[9] Basavaprasad B, Ravi M, "A Comparative Study on Classification of Image Segmentation Methods with a Focus on Graph Based Techniques", International Journal of Research in Engineering and Technology, Volume 3, May 2014.

[10] Xiaohua Shu, Liming Liu, Xiaowei Long, Pei Shu, "Vehicle Monitoring Based on Taillight Detection", International Conference on Intelligent Systems Research and Mechatronics Engineering.

[11] Duan-Yu Chen, Yang-Jie Peng, Li-Chih Chen, and Jun-Wei Hsieh, "Nighttime Turn Signal Detection by Scatter Modeling and Reflectance-Based Direction Recognition", IEEE Sensors Journal, Vol. 14, No. 7, July 2014.

[12] Noppakun Boonsim, Simant Prakoonwit, "An Algorithm for Accurate Taillight Detection at Night", International Journal of Computer Applications, Volume 100, No. 12, August 2014.

[13] Yen-Lin Chen, Bing-Fei Wu, Hao-Yu Huang, and Chung-Jui Fan, "A Real-Time Vision System for Nighttime Vehicle Detection and Traffic Surveillance", IEEE Transactions on Industrial Electronics, Vol. 58, No. 5, May 2011.

[14] Gonzalez, R. C., Woods, R. E., and Eddins, S. L., Digital Image Processing Using MATLAB, 2nd Ed., Gatesmark Publishing, Knoxville, TN, 2009.

[15] Mahdi Rezaei, Reinhard Klette, Mutsuhiro Terauchi, "Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions", IEEE Transactions on Intelligent Transportation Systems, March 2015.

[16] http://www.wikipedia.org/
