Real-Time Traffic Light Recognition Based on Smartphone Platforms

Wei Liu, Shuang Li, Jin Lv, Bing Yu, Ting Zhou, Huai Yuan, and Hong Zhao

IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 5, May 2017
Abstract—Traffic light recognition is of great significance for driver assistance or autonomous driving. In this paper, a traffic light recognition system based on smartphone platforms is proposed. First, an ellipsoid geometry threshold model in Hue Saturation Lightness (HSL) color space is built to extract interesting color regions. These regions are further screened with a post-processing step to obtain candidate regions that satisfy both color and brightness conditions. Second, a new kernel function is proposed to effectively combine two heterogeneous features, histograms of oriented gradients (HOG) and local binary patterns (LBP), which are used to describe the candidate regions of traffic lights. A kernel extreme learning machine (K-ELM) is designed to validate these candidate regions and simultaneously recognize the phase and type of traffic lights. Furthermore, a spatial-temporal analysis framework based on a finite-state machine is introduced to enhance the reliability of the recognition of the phase and type of traffic lights. Finally, a prototype of the proposed system is implemented on a Samsung Note 3 smartphone. To achieve real-time computational performance for the proposed K-ELM, a CPU–GPU fusion-based approach is adopted to accelerate the execution. The experimental results in different road environments show that the proposed system can recognize traffic lights accurately and rapidly.

Index Terms—Finite-state machine, geometry threshold model, kernel extreme learning machine (K-ELM), smartphone, traffic light recognition.

Manuscript received December 19, 2014; revised July 13, 2015 and October 13, 2015; accepted December 22, 2015. Date of publication January 6, 2016; date of current version May 3, 2017. This work was supported in part by the National Natural Science Foundation of China under Grant 61273239 and in part by the Fundamental Research Funds for the Central Universities of China under Grant 151802001. This paper was recommended by Associate Editor A. Kokaram.
W. Liu, H. Yuan, and H. Zhao are with the Research Academy, Northeastern University, Shenyang 110179, China (e-mail: lwei@neusoft.com).
S. Li, J. Lv, B. Yu, and T. Zhou are with the Advanced Automotive Electronics Technology Research Center, Neusoft Corporation, Shenyang 110179, China.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCSVT.2016.2515338

I. INTRODUCTION

Typical traffic scenes contain a great deal of traffic information, such as road signs, road markings, and traffic lights. It is usually not easy for drivers to pay attention to all of the presented traffic information, and distraction, visual fatigue, and understanding errors can lead to severe traffic accidents. In particular, since traffic lights direct pedestrians and vehicles to pass through intersections orderly and safely, it is of great importance to recognize and understand them accurately. Therefore, many research institutions are striving to recognize traffic lights using in-car cameras in order to help drivers understand driving conditions. This function is critical to driving assistance or even autonomous driving [1]–[8]. For example, in order to drive safely through road intersections, Google's self-driving car has a camera mounted near the rear-view mirror for traffic light recognition [8]. In recent years, with the increase of computation power, the application of smartphones to driving assistance has gradually become a hot research field [9]–[12]. Compared with commercial driving assistance systems that use dedicated hardware, driving assistance applications based on smartphones have several advantages, such as low cost, ease of use, and upgradability.

Several interesting works on traffic light recognition on smartphone platforms have been reported. For instance, Roters et al. [9] present a mobile vision system that detects pedestrian lights in live video streams to help pedestrians with visual impairment cross roads. In [10], a real-time red traffic light recognition method is proposed for mobile platforms, consisting of real-time traffic light localization, circular region detection, and traffic light recognition.

Due to the ego movement of the vehicle as well as the variety of outdoor conditions, accurate traffic light recognition still faces various challenges [5], [6], [9]:
1) varying unknown environments;
2) the interference of other light sources, such as billboards and street lamps;
3) the impact of different weather and illumination conditions;
4) the change of viewing angles and sizes of traffic lights due to the ego motion of the vehicle;
5) various appearances of traffic lights, e.g., with or without a countdown timer;
6) the existence of different types of traffic lights with different meanings, such as a traffic light with a round lamp versus one with an arrow lamp;
7) the autofocus and automatic white balance functions of on-board cameras or smartphones, which may result in color cast or blur;
8) the requirement of real-time processing by the traffic light recognition algorithm.

In order to solve the above problems, we present a traffic light recognition system on smartphone platforms. Different from [9], the smartphone is fixed on the front windshield of the ego vehicle with a bracket. The system recognizes the traffic light, including its phase (red or green) and type (round or straight arrow), and reminds the driver to follow the indications of the traffic lights.

The system consists of three stages: 1) candidate region extraction; 2) recognition; and 3) spatial-temporal analysis.
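The finite-state machine used in the third stage is detailed later in the paper (outside this excerpt). As a rough, generic illustration of what the temporal stage buys, the sketch below stabilizes per-frame results by majority vote over a sliding window; this is a stand-in of our own, not the authors' state machine, and all names are hypothetical.

```python
from collections import Counter, deque

class TemporalSmoother:
    """Generic sketch of stage 3: smooth per-frame (phase, type) results
    over a sliding window. The paper's finite-state machine is richer."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)
        self.state = "NO_LIGHT"

    def update(self, detection):
        # detection: e.g. ("red", "round"), or None when nothing was found
        self.history.append(detection)
        votes = Counter(d for d in self.history if d is not None)
        if votes:
            best, n = votes.most_common(1)[0]
            if n >= (self.history.maxlen + 1) // 2:  # majority -> commit
                self.state = best
        elif all(d is None for d in self.history):
            self.state = "NO_LIGHT"                  # window empty -> reset
        return self.state
```

Calling `update()` once per frame then yields a state that only changes when a majority of recent frames agree, which is one simple way to suppress single-frame misclassifications.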
In the stage of candidate region extraction, an ellipsoid geometry threshold model in HSL color space is built to extract interesting color regions, which are then screened to obtain candidate regions that satisfy both color and brightness conditions.
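As a sketch of the kind of test such an ellipsoid model performs, the following checks whether an HSL pixel lies inside an axis-aligned ellipsoid. The center and radii below are made-up placeholders; the learned threshold parameters are given elsewhere in the paper.

```python
import colorsys

def inside_ellipsoid(h, s, l, center, radii):
    """Generic ellipsoid test in HSL: (p-c)^T diag(1/r^2) (p-c) <= 1.
    center/radii are placeholders, not the paper's learned values."""
    dh = min(abs(h - center[0]), 1.0 - abs(h - center[0]))  # hue is circular
    ds = s - center[1]
    dl = l - center[2]
    return (dh / radii[0]) ** 2 + (ds / radii[1]) ** 2 + (dl / radii[2]) ** 2 <= 1.0

# Example: test an RGB pixel against a hypothetical "red lamp" ellipsoid.
h, l, s = colorsys.rgb_to_hls(1.0, 0.2, 0.1)  # colorsys returns H, L, S in [0, 1]
print(inside_ellipsoid(h, s, l, center=(0.0, 0.9, 0.5), radii=(0.05, 0.4, 0.3)))
```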
Fig. 6. Sketch image of regions. (a) Color candidate region. (b) Expanded region.
[…] light candidate region. To solve this problem, we generate the traffic light candidate region $R_i^{TL}$ according to the aspect ratio $A_i$ ($A_i = H/W$) of the obtained lamp candidate region, as shown in (10) and (11):

$$
\begin{cases}
X_L = \max(1,\ x_l - K)\\
X_R = \min(c,\ x_r + K)\\
Y_T = \max(1,\ y_t - K)\\
Y_B = \min(r,\ y_t + 7K)
\end{cases}
\quad \text{if } R_i \text{ is a red lamp candidate region} \tag{10}
$$

$$
\begin{cases}
X_L = \max(1,\ x_l - K)\\
X_R = \min(c,\ x_r + K)\\
Y_T = \max(1,\ y_b - 7K)\\
Y_B = \min(r,\ y_b + K)
\end{cases}
\quad \text{if } R_i \text{ is a green lamp candidate region} \tag{11}
$$

where $(X_L, Y_T)$ and $(X_R, Y_B)$, respectively, denote the left-top and right-bottom vertices of the traffic light candidate region $R_i^{TL}$; $(x_l, y_t)$ and $(x_r, y_b)$, respectively, represent the left-top and right-bottom vertices of the minimum enclosing rectangle of the lamp candidate region $R_i$; and $r$ and $c$, respectively, represent the height and width of the whole image. Here, $K$ can be determined as follows:

$$
K = \begin{cases}
W/2, & \text{if } A_i \geq 1.5\\
(W+H)/4, & \text{else.}
\end{cases} \tag{12}
$$
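The expansion in (10)–(12) is easy to misread in print, so a direct transcription may help. This is a sketch assuming 1-based inclusive coordinates and that $W$ and $H$ denote the width and height of the lamp box; the function name is ours.

```python
def expand_lamp_region(xl, yt, xr, yb, img_h, img_w, is_red):
    """Expand a detected lamp box to a full traffic-light candidate box,
    following Eqs. (10)-(12)."""
    W, H = xr - xl, yb - yt
    A = H / float(W)                           # aspect ratio A_i = H / W
    K = W / 2.0 if A >= 1.5 else (W + H) / 4.0  # Eq. (12)
    XL = max(1, xl - K)
    XR = min(img_w, xr + K)
    if is_red:                    # Eq. (10): grow mainly downward from y_t
        YT = max(1, yt - K)       # (the red lamp sits at the top of the housing)
        YB = min(img_h, yt + 7 * K)
    else:                         # Eq. (11): grow mainly upward from y_b
        YT = max(1, yb - 7 * K)
        YB = min(img_h, yb + K)
    return XL, YT, XR, YB
```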
V. RECOGNITION OF THE TRAFFIC LIGHT IN A SINGLE IMAGE

After the above procedure of traffic light candidate region extraction, the influence of interfering light sources, such as car tail lights, still remains. In order to verify whether a candidate region is a traffic light or background and simultaneously recognize the type of the traffic light (round, straight arrow, left-turn arrow, right-turn arrow, etc.), in this section, a new nonlinear kernel function is proposed to effectively combine two heterogeneous features, HOG and LBP, which are used to describe the traffic lights. In addition, a K-ELM is designed to recognize the candidate regions.
A. Feature Extraction of Traffic Light Candidate Regions

HOG and LBP are two heterogeneous features with complementary information. The combination of the two features can extract contour and texture information simultaneously and has obtained effective results in applications such as pedestrian detection and face recognition [35]. Thus, we use the HOG–LBP feature to describe the traffic light candidate region. In [36], HOG and LBP are directly concatenated to form a feature vector, but the contributions of each feature are not considered, and the descriptive ability of the features is not fully exploited. Inspired by [37], a new nonlinear kernel function is proposed to combine the two heterogeneous features:

$$
K(x_i, x_j) = \exp\!\left(-\frac{(1-\beta)\,\bigl\|x_i^{\mathrm{HOG}} - x_j^{\mathrm{HOG}}\bigr\|^2 + \beta\,\bigl\|x_i^{\mathrm{LBP}} - x_j^{\mathrm{LBP}}\bigr\|^2}{\gamma}\right) \tag{13}
$$

where $K(x_i, x_j)$ represents the proposed kernel function, $x_i$ is the feature vector of sample $i$, $x_i = [x_i^{\mathrm{HOG}}, x_i^{\mathrm{LBP}}]$, and $x_i^{\mathrm{HOG}}$, $x_i^{\mathrm{LBP}}$ represent the feature vectors of HOG and LBP, respectively. $\beta$ is a combination coefficient, which determines the contribution of each feature, and $\beta \in [0, 1]$. By (13), the HOG feature and the LBP feature can be combined with different $\beta$. It is worth noting that the combined HOG–LBP feature method in [35] is only a special case of the proposed method, corresponding to $\beta = 0.5$. More details can be seen from the experimental results in Section VIII.
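A direct NumPy transcription of (13) may be useful; it assumes only that the HOG block occupies the first `d_hog` entries of the concatenated vector.

```python
import numpy as np

def hog_lbp_kernel(xi, xj, d_hog, beta=0.5, gamma=1.0):
    """Heterogeneous HOG-LBP kernel of Eq. (13).
    xi, xj: concatenated feature vectors [HOG | LBP];
    d_hog:  dimensionality of the HOG part."""
    hog_i, hog_j = xi[:d_hog], xj[:d_hog]   # HOG parts
    lbp_i, lbp_j = xi[d_hog:], xj[d_hog:]   # LBP parts
    num = (1.0 - beta) * np.sum((hog_i - hog_j) ** 2) \
          + beta * np.sum((lbp_i - lbp_j) ** 2)
    return np.exp(-num / gamma)
```

Setting `beta=0.5` recovers the equally weighted combination of [35]; `beta=0` and `beta=1` degenerate to pure HOG and pure LBP Gaussian kernels, respectively.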
In this paper, a traffic light candidate region is first converted into grayscale and is then scaled to a size of 20 × 40 pixels, which is used to extract the HOG and LBP features. For the HOG feature, the block size is 10, the cell size is 5, and the orientation bin number is 9; for the LBP feature, we extract 58-D uniform patterns and a 1-D nonuniform pattern per block, and the feature vectors of all the blocks are concatenated as the LBP feature of the candidate region. For each traffic light candidate region, the dimension of the feature vector is 1995. In order to reduce the computation burden, the between-category to within-category sums of squares (BW) method [38] is adopted to reduce the feature dimension. The strategy of BW is to select the features with large between-category distances and small within-category distances. Considering the algorithmic acceleration based on OpenCL (to be described in Section VII-B), the dimension of the feature vector is reduced to 256, which is then input to the K-ELM with the proposed kernel function.
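The quoted sizes pin the layout down almost completely: with 5 × 5 cells, 2 × 2-cell blocks, and 9 bins, HOG on a 20 × 40 patch gives 21 blocks × 36 = 756 dimensions, and 21 overlapping 10 × 10 LBP blocks with 59 bins each give 1239, reproducing the stated 1995-D total. The sketch below uses scikit-image; the LBP block layout and the choice P = 8, R = 1 are our inference, not stated in the text (the subsequent BW reduction to 256-D is not sketched).

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def extract_hog_lbp(gray_20x40):
    """gray_20x40: grayscale patch of shape (40, 20) = H x W."""
    # HOG: 5x5 cells, 2x2-cell (10x10 px) blocks, 9 bins -> 21 * 36 = 756-D
    f_hog = hog(gray_20x40, orientations=9, pixels_per_cell=(5, 5),
                cells_per_block=(2, 2), feature_vector=True)
    # 'nri_uniform' LBP: 58 uniform labels + 1 bin collecting the rest
    lbp = local_binary_pattern(gray_20x40, P=8, R=1, method="nri_uniform")
    hists = []
    for top in range(0, 40 - 10 + 1, 5):        # overlapping 10x10 blocks,
        for left in range(0, 20 - 10 + 1, 5):   # stride 5 -> 7 x 3 = 21 blocks
            block = lbp[top:top + 10, left:left + 10]
            h, _ = np.histogram(block, bins=59, range=(0, 59))
            hists.append(h / max(h.sum(), 1))   # per-block normalization
    f_lbp = np.concatenate(hists)               # 21 * 59 = 1239-D
    return np.concatenate([f_hog, f_lbp])       # 756 + 1239 = 1995-D
```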
B. Recognition of Traffic Light Candidate Regions

Extreme learning machine (ELM) is a machine learning method with fast training speed that is suitable for multicategory classification tasks [39]. Many research results show that ELM produces comparable or better classification accuracy, with lower implementation complexity, compared with artificial neural networks and support vector machines [40]. Furthermore, it has been pointed out that K-ELM achieves good generalization performance; meanwhile, there is no randomness in assigning the connection weights between the input and hidden layers, and the number of hidden nodes does not need to be given [41], [42]. Therefore, we select the K-ELM to verify whether a candidate region is a traffic light or the background and to recognize the phase and type of the traffic light. The output function of K-ELM can be written compactly as

$$
f(x) = h(x)H^T\left(\frac{I}{\lambda} + HH^T\right)^{-1}T
     = \begin{bmatrix} K(x, x_1) \\ \vdots \\ K(x, x_N) \end{bmatrix}^{T}
       \left(\frac{I}{\lambda} + \Omega_{\mathrm{ELM}}\right)^{-1} T \tag{14}
$$

where $f(x)$ is the output of K-ELM; $N$ is the number of training samples; $x_i$ ($i = 1, 2, \ldots, N$) denotes the feature vector of training sample $i$; $x$ is the feature vector of a traffic light candidate region, i.e., the input to the K-ELM classifier; $\Omega_{\mathrm{ELM}}$ is the kernel matrix for the classifier, $\Omega_{\mathrm{ELM}} = HH^T$ with $(\Omega_{\mathrm{ELM}})_{ij} = h(x_i)\,h(x_j) = K(x_i, x_j)$, where $K(x_i, x_j)$ is the proposed kernel function in (13); and $T = [t_1, t_2, \ldots, t_i, \ldots, t_N]^T$ is the matrix of target vectors of the training samples.
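Equation (14) admits a very small implementation: training solves one regularized linear system, and prediction is a kernel row times the solved coefficients. The sketch below is ours, with a batched variant of the kernel above; β = 0.8, γ = 1, and λ = 16 are the values reported later for red lights, and one-hot target rows in `T` are an assumption.

```python
import numpy as np

def kernel_matrix(X, Z, d_hog, beta, gamma):
    """Pairwise Eq. (13) kernel between rows of X and rows of Z."""
    def sq_dists(A, B):
        return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    num = (1 - beta) * sq_dists(X[:, :d_hog], Z[:, :d_hog]) \
          + beta * sq_dists(X[:, d_hog:], Z[:, d_hog:])
    return np.exp(-num / gamma)

def train_kelm(X, T, d_hog, beta=0.8, gamma=1.0, lam=16.0):
    """Solve (I/lam + Omega) alpha = T; T is an N x num_classes target matrix."""
    omega = kernel_matrix(X, X, d_hog, beta, gamma)
    return np.linalg.solve(np.eye(len(X)) / lam + omega, T)

def predict_kelm(x, X_train, alpha, d_hog, beta=0.8, gamma=1.0):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] @ alpha; argmax gives the class."""
    k = kernel_matrix(x[None, :], X_train, d_hog, beta, gamma)[0]
    return k @ alpha
```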
Fig. 11. Evaluation results of red traffic light recognition with various β values. (a) Curves of RE and PR with different β values. (b) Curve of F-measure with various β values; the red square marker indicates the point with the maximal F-measure value.

Fig. 12. Comparisons of different feature combinations. (a) Results on the test data set of red light recognition. (b) Results on the test data set of green light recognition.
[…] information are considered. This is done by treating the output of the K-ELM as a binary classification output (traffic light versus nontraffic light), regardless of the type information.

Besides the parameter β, there are two other parameters in K-ELM: the kernel parameter γ and the regularization parameter λ. The choice of the values of these two parameters can also affect the performance of the classifier. In particular, to analyze the contribution of β, we take the HOG–LBP feature in [35] as the baseline for comparison, which corresponds to the case β = 0.5. In this case, the optimal values of γ and λ are first determined by multiple experiments with a grid search strategy on the validation data set. The search range is defined as {2⁻¹⁰, 2⁻⁹, …, 2⁴} for γ and {2⁻⁵, 2⁻⁴, …, 2¹⁰} for λ. The optimal values determined on the validation data set are γ = 1 and λ = 16.
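Since each (γ, λ) pair is scored by the F-measure of the binary light/non-light decision, the search reduces to two nested loops over the quoted ranges. This sketch reuses `train_kelm` and `kernel_matrix` from the snippet in Section V-B; treating column 1 of the one-hot output as the "traffic light" class is our assumption.

```python
import numpy as np

def grid_search_kelm(X_tr, T_tr, X_val, y_val, d_hog, beta=0.5):
    """Pick (gamma, lambda) maximizing F-measure on the validation set."""
    best, best_f = (None, None), -1.0
    for gamma in (2.0 ** e for e in range(-10, 5)):    # {2^-10, ..., 2^4}
        for lam in (2.0 ** e for e in range(-5, 11)):  # {2^-5, ..., 2^10}
            alpha = train_kelm(X_tr, T_tr, d_hog, beta, gamma, lam)
            scores = kernel_matrix(X_val, X_tr, d_hog, beta, gamma) @ alpha
            pred = scores.argmax(axis=1)
            tp = np.sum((pred == 1) & (y_val == 1))    # class 1 = traffic light
            prec = tp / max(np.sum(pred == 1), 1)
            rec = tp / max(np.sum(y_val == 1), 1)
            f = 2 * prec * rec / max(prec + rec, 1e-12)
            if f > best_f:
                best, best_f = (gamma, lam), f
    return best, best_f
```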
Then, the variation of F-measure with respect to various values of β (from 0 to 1) is analyzed using the determined optimal values of γ and λ. The analytical results on the validation data set of red lights are shown in Fig. 11. From the results, it can be seen that different values of β have different effects on red light recognition when using the optimal values of γ and λ. In Fig. 11(b), there exist several feature combinations with different β values whose descriptive power is superior to that of β = 0.5. This exhibits the contribution of the parameter β. The feature combination with β = 0.8 has the highest F-measure score. Therefore, for red lights, β = 0.8 is chosen as the combination coefficient in this paper. Similarly, the optimal combination coefficient β can also be obtained for green lights.

Furthermore, in order to further validate the effectiveness of the proposed heterogeneous feature combination method, four different feature combinations are tested and compared on the test data set, with the receiver operating characteristic (ROC) curves shown in Fig. 12. The horizontal axis shows the false positive rate (FPR), and the vertical axis shows the true positive rate (TPR). The four selected parameters β = 0, β = 1, β = 0.5, and β = 0.8, respectively, represent the HOG feature, the LBP feature, HOG + LBP [35], and the proposed HOG–LBP combination feature. From these results, one can see that the proposed feature combination method outperforms the single features and the feature combination method of [36]. This shows the effectiveness of the heterogeneous features with the proposed combination method.

C. Quantitative Comparison Between K-ELM and Other Methods

A traffic light contains both phase and type information. First, we evaluate the contribution of the proposed K-ELM to the phase recognition performance. The proposed K-ELM is […]
TABLE II
TYPE RECOGNITION RATES OF DIFFERENT METHODS

TABLE III
RECOGNITION RESULTS OF THE TRAFFIC LIGHT PHASE

Fig. 15. Typical test results of the traffic light recognition system on a mobile platform.

TABLE V
PROCESSING TIME OF THE PROPOSED SYSTEM
4) A CPU–GPU fusion-based approach is adopted to accelerate the execution of the proposed K-ELM, so that a computational performance five times faster than the CPU-only implementation can be achieved.

The test results on real scenes show that, compared with the existing methods, the proposed system can simultaneously and accurately recognize the phase and type of traffic lights. Besides, the response of the system is rapid, and feedback can be given in less than a second. It is also worth pointing out that the recognition of traffic lights (especially the recognition of arrow lights) is not useful unless the results are associated with the lane information. There may be multiple lights at a crossroad that has many lanes in the same direction. In such a scenario, each traffic light indicates the traffic situation of its corresponding lane. Therefore, the recognition results of traffic lights must be associated with the lanes. In the future, we will try to fuse the recognition results with GPS navigation information. By considering both the trajectory planning and the current location information, the recognition results will be reasonably interpreted and utilized.
REFERENCES

[1] C. Yu, C. Huang, and Y. Lang, "Traffic light detection during day and night conditions by a camera," in Proc. IEEE 10th Int. Conf. Signal Process. (ICSP), Oct. 2010, pp. 821–824.
[2] R. de Charette and F. Nashashibi, "Traffic light recognition using image processing compared to learning processes," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Oct. 2009, pp. 333–338.
[3] J. Gong, Y. Jiang, G. Xiong, C. Guan, G. Tao, and H. Chen, "The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles," in Proc. IEEE Intell. Vehicles Symp. (IV), Jun. 2010, pp. 431–435.
[4] R. de Charette and F. Nashashibi, "Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates," in Proc. IEEE Intell. Vehicles Symp., Jun. 2009, pp. 358–363.
[5] M. Omachi and S. Omachi, "Traffic light detection with color and edge information," in Proc. 2nd IEEE Int. Conf. Comput. Sci. Inf. Technol. (ICCSIT), Aug. 2009, pp. 284–287.
[6] A. E. Gómez, F. A. R. Alencar, P. V. Prado, F. S. Osório, and D. F. Wolf, "Traffic lights detection and state estimation using hidden Markov models," in Proc. IEEE Intell. Vehicles Symp., Jun. 2014, pp. 750–755.
[7] J. Baber, J. Kolodko, T. Noel, M. Parent, and L. Vlacic, "Cooperative autonomous driving: Intelligent vehicles sharing city roads," IEEE Robot. Autom. Mag., vol. 12, no. 1, pp. 44–49, Mar. 2005.
[8] N. Fairfield and C. Urmson, "Traffic light mapping and detection," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2011, pp. 5421–5426.
[9] J. Roters, X. Jiang, and K. Rothaus, "Recognition of traffic lights in live video streams on mobile devices," IEEE Trans. Circuits Syst. Video Technol., vol. 21, no. 10, pp. 1497–1511, Oct. 2011.
[10] Y.-T. Chiu, D.-Y. Chen, and J.-W. Hsieh, "Real-time traffic light detection on resource-limited mobile platform," in Proc. IEEE Int. Conf. Consum. Electron.-Taiwan (ICCE-TW), May 2014, pp. 211–212.
[11] A. Acharya, J. Lee, and A. Chen, "Real time car detection and tracking in mobile devices," in Proc. Int. Conf. Connected Vehicles Expo (ICCVE), Dec. 2012, pp. 239–240.
[12] C.-W. Tang, K.-T. Feng, P.-H. Tseng, C.-H. Chen, and J.-W. Guo, "A pitch-aided lane tracking algorithm for driver assistance system with insufficient observations," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Apr. 2012, pp. 3261–3266.
[13] Y. K. Kim, K. W. Kim, and X. Yang, "Real time traffic light recognition system for color vision deficiencies," in Proc. Int. Conf. Mechatronics Autom. (ICMA), Aug. 2007, pp. 76–81.
[14] M. Omachi and S. Omachi, "Detection of traffic light using structural information," in Proc. IEEE 10th Int. Conf. Signal Process. (ICSP), Oct. 2010, pp. 809–812.
[15] M. Diaz-Cabrera, P. Cerri, and J. Sanchez-Medina, "Suspended traffic lights detection and distance estimation using color features," in Proc. 15th Int. IEEE Conf. Intell. Transp. Syst. (ITSC), Sep. 2012, pp. 1315–1320.
[16] H. Tae-Hyun, J. In-Hak, and C. Seong-Ik, "Detection of traffic lights for vision-based car navigation system," in Advances in Image and Video Technology. Heidelberg, Germany: Springer, 2006, pp. 682–691.
[17] Y. Jie, C. Xiaomin, G. Pengfei, and X. Zhonglong, "A new traffic light detection and recognition algorithm for electronic travel aid," in Proc. 4th Int. Conf. Intell. Control Inf. Process. (ICICIP), Jun. 2013, pp. 644–648.
[18] J. Choi, B. T. Ahn, and I. S. Kweon, "Crosswalk and traffic light detection via integral framework," in Proc. 19th Korea-Jpn. Joint Workshop Frontiers Comput. Vis. (FCV), Jan./Feb. 2013, pp. 309–312.
[19] J. Levinson, J. Askeland, J. Dolson, and S. Thrun, "Traffic light mapping, localization, and state detection for autonomous vehicles," in Proc. Int. Conf. Robot. Autom., May 2011, pp. 5784–5791.
[20] Z. Li-Tian, F. Meng-Yin, Y. Yi, and W. Mei-Ling, "A framework of traffic lights detection, tracking and recognition based on motion models," in Proc. IEEE 17th Int. Conf. Intell. Transp. Syst. (ITSC), Oct. 2014, pp. 2298–2303.
[21] H.-K. Kim, J. H. Park, and H.-Y. Jung, "Effective traffic lights recognition method for real time driving assistance system in the daytime," in Proc. 59th World Acad. Sci. Eng. Technol., 2011, pp. 1–4.
[22] S. Sooksatra and T. Kondo, "Red traffic light detection using fast radial symmetry transform," in Proc. 11th Int. Conf. Elect. Eng./Electron., Comput., Telecommun. Inf. Technol. (ECTI-CON), May 2014, pp. 1–6.
[23] C. Jang, C. Kim, D. Kim, M. Lee, and M. Sunwoo, "Multiple exposure images based traffic light recognition," in Proc. IEEE Intell. Vehicles Symp., Jun. 2014, pp. 1313–1318.
[24] Y. Shen, U. Ozguner, K. Redmill, and J. Liu, "A robust video based traffic light detection algorithm for intelligent vehicles," in Proc. IEEE Intell. Vehicles Symp., Jun. 2009, pp. 521–526.
[25] Y. Zhang, J. Xue, G. Zhang, Y. Zhang, and N. Zheng, "A multi-feature fusion based traffic light recognition algorithm for intelligent vehicles," in Proc. 33rd Chin. Control Conf. (CCC), Jul. 2014, pp. 4924–4929.
[26] F. Lindner, U. Kressel, and S. Kaelberer, "Robust recognition of traffic signals," in Proc. IEEE Intell. Vehicles Symp., Jun. 2004, pp. 49–53.
[27] J. Ren, J. Jiang, D. Wang, and S. S. Ipson, "Fusion of intensity and inter-component chromatic difference for effective and robust colour edge detection," IET Image Process., vol. 4, no. 4, pp. 294–301, Aug. 2010.
[28] J. Han, D. Zhang, X. Hu, L. Guo, J. Ren, and F. Wu, "Background prior-based salient object detection via deep reconstruction residual," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 8, pp. 1309–1321, Aug. 2015.
[29] V. John, K. Yoneda, B. Qi, Z. Liu, and S. Mita, "Traffic light recognition in varying illumination using deep learning and saliency map," in Proc. IEEE 17th Int. Conf. Intell. Transp. Syst. (ITSC), Oct. 2014, pp. 2286–2291.
[30] W. Hong-Jiang et al., "Research on unmanned vehicle traffic signal recognition technology," in Proc. Int. Conf. Intell. Syst. Design Eng. Appl. (ISDEA), vol. 2, Oct. 2010, pp. 298–301.
[31] C.-C. Chiang, M.-C. Ho, H.-S. Liao, A. Pratama, and W.-C. Syu, "Detecting and recognizing traffic lights by genetic approximate ellipse detection and spatial texture layouts," Int. J. Innov. Comput. Inf. Control, vol. 7, no. 12, pp. 6919–6934, 2011.
[32] G. Cheng, J. Han, L. Guo, Z. Liu, S. Bu, and J. Ren, "Effective and efficient midlevel visual elements-oriented land-use classification using VHR remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 8, pp. 4238–4249, Aug. 2015.
[33] G. Trehard, E. Pollard, B. Bradai, and F. Nashashibi, "Tracking both pose and status of a traffic light via an interacting multiple model filter," in Proc. 17th Int. Conf. Inf. Fusion (FUSION), Jul. 2014, pp. 1–7.
[34] R. Pan, W. Gao, and J. Liu, "Color clustering analysis of yarn-dyed fabric in HSL color space," in Proc. WRI World Congr. Softw. Eng. (WCSE), vol. 2, May 2009, pp. 273–278.
[35] W.-J. Park, D.-H. Kim, Suryanto, C.-G. Lyuh, T. M. Roh, and S.-J. Ko, "Fast human detection using selective block-based HOG-LBP," in Proc. 19th IEEE Int. Conf. Image Process. (ICIP), Sep./Oct. 2012, pp. 601–604.
[36] X. Wang, T. X. Han, and S. Yan, "An HOG-LBP human detector with partial occlusion handling," in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep./Oct. 2009, pp. 32–39.
[37] Y.-X. Li, S. Ji, S. Kumar, J. Ye, and Z.-H. Zhou, "Drosophila gene expression pattern annotation through multi-instance multi-label learning," IEEE/ACM Trans. Comput. Biol. Bioinform., vol. 9, no. 1, pp. 98–112, Jan./Feb. 2012.
[38] Z.-L. Sun, H. Wang, W.-S. Lau, G. Seet, and D. Wang, "Application of BW-ELM model on traffic sign recognition," Neurocomputing, vol. 128, pp. 153–159, Mar. 2014.
[39] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: A survey," Int. J. Mach. Learn. Cybern., vol. 2, no. 2, pp. 107–122, 2011.
[40] S. S. Baboo and S. Sasikala, "Multicategory classification using an extreme learning machine for microarray gene expression cancer diagnosis," in Proc. IEEE Int. Conf. Commun. Control Comput. Technol. (ICCCCT), Oct. 2010, pp. 748–757.
[41] N.-Y. Liang, G.-B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Trans. Neural Netw., vol. 17, no. 6, pp. 1411–1423, Nov. 2006.
[42] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 2, pp. 513–529, Apr. 2012.
[43] B. Peasley and S. Birchfield, "Replacing projective data association with Lucas–Kanade for KinectFusion," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2013, pp. 638–645.

Wei Liu received the M.S. and Ph.D. degrees in control theory and control engineering from Northeastern University, Shenyang, China, in 2001 and 2005, respectively. He is currently a Professor-Level Senior Engineer with the Research Academy, Northeastern University. He is also the Director of the Intelligent Vision Laboratory with Neusoft Corporation, Shenyang. His current research interests include computer vision, image processing, and pattern recognition with applications to intelligent video surveillance and advanced driver assistance systems.

Shuang Li received the master's degree in applied mathematics from Northeastern University, Shenyang, China, in 2014. She is currently a Software Engineer with Neusoft Corporation, Shenyang. Her current research interests include computer vision, image processing, and pattern recognition.

Jin Lv received the B.S. degree in electronic and information engineering from the Shenyang University of Technology, Shenyang, China, in 2008, and the M.S. degree in pattern recognition and intelligent systems from Northeastern University, Shenyang, in 2010. She is currently a Research Engineer with the Advanced Automotive Electronics Technology Research Center, Neusoft Corporation, Shenyang. Her current research interests include image processing, computer vision, and machine learning.

Bing Yu received the B.S. degree from Shanghai Jiao Tong University, Shanghai, China, in 2010, and the M.S. degree in automation from the Institut de Recherche en Communications et Cybernétique de Nantes, Nantes, France, in 2012. He is currently a Research Engineer with the Advanced Automotive Electronics Technology Research Center, Neusoft Corporation, Shenyang, China. His current research interests include image processing, computer vision, and machine learning.

Ting Zhou received the B.S. degree in automation and the M.S. degree in control engineering from Northeastern University, Shenyang, China, in 2011 and 2013, respectively. She is currently a Research Engineer with the Advanced Automotive Electronics Technology Research Center, Neusoft Corporation, Shenyang. Her current research interests include machine learning, computer vision, and image semantic segmentation.

Huai Yuan received the B.S. degree in computer software from Nankai University, Tianjin, China, in 1983, and the M.S. degree in computer software from Northeastern University, Shenyang, China, in 1986. He is currently an Associate Professor with Northeastern University. He is also the Director of the Advanced Automotive Electronics Technology Research Center with Neusoft Corporation, Shenyang. His current research interests include computer vision, image processing, and intelligent vehicles.

Hong Zhao received the M.S. and Ph.D. degrees in computer science from Northeastern University, Shenyang, China, in 1984 and 1991, respectively. He has been a Professor with Northeastern University since 1994. He is currently the Director of the National Engineering Research Center for Digital Medical Imaging Device, Shenyang. His current research interests include computer multimedia systems, distributed computer systems, image processing, and computer vision.