
Chinese Journal of Electronics

Vol.23, No.2, Apr. 2014

Optimization of Lip Contour Estimation


WU Wenchao1,2,3,5,6 , WANG Shilin3 , KURUOGLU Ercan Engin4,5 , MA Xiaoli5,6 ,
LI Shenghong3 , LI Jianhua3 and Lionel M. Ni1,2
(1.Department of Computer Science and Engineering, Hong Kong University of Science and Technology,
Kowloon, Hong Kong SAR, China)
(2.Fok Ying Tung Research Institute, Hong Kong University of Science and Technology,
Kowloon, Hong Kong SAR, China)
(3.School of Information Security Engineering, Shanghai Jiao Tong University, Shanghai 200240, China)
(4.Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Via G. Moruzzi 1, 56124, Pisa, Italy)
(5.Georgia Institute of Technology, Shanghai 200240, China)
(6.School of Electrical and Computer Engineering, Georgia Institute of Technology, Georgia 30332, USA)
Abstract  Developing algorithms based on lip contour estimation is a distinctive trend in lip segmentation, which is the first step of visual speech recognition. In order to establish a lip contour estimation that is complex enough to describe the principal features of the lip but at the same time simple enough to be implemented, we optimize the selection of the lip model, of the estimator and of the feature parameters, including the horizontal length of the snake of feature points and the horizontal distance between these feature points. Experimental results demonstrate that the optimized lip contour estimation method provides more accurate and more stable lip segmentation results.
Key words Lip contour estimation, Lip segmentation, Weighted least squares estimator, Snake.

I. Introduction
Recently, speech recognition with the aid of visual information, sometimes called lip-reading, has proved to be of great interest to many researchers[1-6]. In the category of biometrics, lip-reading is not only applied to help deaf or hearing-impaired people communicate with others, or to enable an intelligence agent to obtain information in noisy environments[1], but is also widely used in the field of personal identity recognition.
Robust and accurate lip segmentation, as the first step of most lip-reading systems, is of key importance for the accuracy and efficiency of the whole system. Many different approaches to lip segmentation have been proposed in recent years. Generally, there are two distinctive trends in lip segmentation: the first is based on classic image and region segmentation techniques[7-13], and the second is based on lip contour estimation and shape template fitting[14-20]. For classic region-based approaches, the goal is to detect every single pixel in the lip image that belongs to the lip region. This kind of approach usually assumes that lip and skin pixels have different color features. Such approaches can sometimes detect the lip region quickly; however, the result is not accurate for lip edge detection.
This paper focuses on the contour-based approach due to the inadequacy of the classic region-based approach. In this kind of approach, the goal is to find a set F = {f1, f2, ..., fn} of parameterized functions, also known as the features or observations of the estimation, which is a preferred way to represent lip information that is normally invariant to translation, rotation, scaling and illumination. The estimation of the lip contour can then be made using F. Obtaining an accurate lip contour estimation is unquestionably a difficult job. As is well known, the optimization of an estimation that performs well for a particular application, in this case lip contour estimation, depends upon many factors. The primary concern is the selection of a good model, a suitable estimator and proper features: the model should be complex enough to describe the principal features of the lip, but at the same time simple enough to allow an optimal estimator to be implemented easily. Consequently, this paper tries to determine the best model, the best estimator and the best features for lip estimation, respectively, in order to optimize the estimation of the lip contour.
The rest of the paper is organized as follows. The choice of the lip model is presented in Section II, and the optimization of the lip contour estimator is described in Section III. Section IV outlines how optimal features for the estimation are chosen, including the length of the snake of feature points and the distance between these feature points. The experimental results obtained by our estimator are shown in Section V. Finally, Section VI draws the conclusion.

Manuscript Received Aug. 2013; Accepted Sept. 2013. This work is supported by the National Natural Science Foundation of China
(No.61271319, No.60702043, No.61071152).


II. Lip Model for Lip Contour Estimation

The selection of a proper lip model is a great challenge owing to the high deformation of the lip contour. If the chosen model is not suitable, the estimation result will be poor.

Several parametric models have been proposed to model the lip contour[18,21,22]. A simple model made of two parabolas (Fig.1) was studied first.

Fig. 1. A simple lip model made of two parabolas

However, experiments (Fig.2) show that the precision of this model is limited, since the lip contour is not always strictly symmetric in the image, especially for the upper contour. A better model should be established to describe the lip contour more accurately.

Fig. 2. Lip model with 2 parabolas

To deal with the asymmetric situation, a model with three independent curves[1] (Fig.3) is used to describe the lip contour. The experiments described below show that such a three-curve model is adequate to describe the lip contour, and that more complex models are not necessary. In the model, each curve describes part of the lip boundary, and the number of lip feature points is adaptive, depending on the width of the lip (the distance between the two lip corners), so that the lip boundary can be located accurately.

Fig. 3. Proposed lip parametric model

The feature points in the model can be divided into three groups: Q_{-n} to Q_{n} together with C1 and C2 make up Curve 3, which describes the lower lip; P_0 to P_{-n} and C1 make up Curve 1, which describes the upper-left lip; and P_0 to P_{n} and C2 make up Curve 2, which describes the upper-right lip.

Next, in order to build a complete model, the functions of the three curves should be determined; in other words, the proper order of the curves in the model should be defined. Based on the properties of different curves, comparative experiments are conducted with parabola, cubic and quartic curves, as shown in Fig.4.

Fig. 4. Comparison of results of lip model implementation with parabola, cubic and quartic

For the comparative experiment, several lip images are arbitrarily chosen from the AR face database[23]; half of them are female lips and the others male lips. According to the results shown above, the model with quartic curves works obviously worse than the models with parabolas or cubics. The reason might be that feature points far from the dip point are not close to the lip contour, and the quartic attempts to fit these bad points. A curve of higher order is more deformable, which makes models with higher-order curves fail to correct the errors of individual feature points; hence, high-order curves are not suitable for lip estimation. For the models with parabolas and cubics the results are quite similar. Therefore, the model consisting of three independent parabolas is chosen, so as to reduce the computational complexity and improve the efficiency of the estimation.

In brief, the proposed lip model can be described as follows.

The function for the lower lip contour, from the left lip corner C1(x_cornerL, y_cornerL) to the right lip corner C2(x_cornerR, y_cornerR), is

    y = a_1 x^2 + b_1 x + c_1    (1)

The function for the upper-left lip contour, from the left lip corner C1(x_cornerL, y_cornerL) to the dip point P0(x_dip, y_dip), is

    y = a_2 x^2 + b_2 x + c_2    (2)

The function for the upper-right lip contour, from the dip point P0(x_dip, y_dip) to the right lip corner C2(x_cornerR, y_cornerR), is

    y = a_3 x^2 + b_3 x + c_3    (3)

The geometric constraints of the lip model can be described by the parameter set

    g = {x_cornerL, y_cornerL, x_cornerR, y_cornerR, x_dip, y_dip, a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3}.
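As a concrete illustration of the model, the following minimal Python sketch stores the parameter set g and evaluates Curves 1-3 at a given horizontal position. The class and attribute names are our own, introduced only for illustration; they are not taken from the original implementation.

```python
# Illustrative sketch of the three-parabola lip model of Eqs.(1)-(3).
# All names (LipModel, corner_l, ...) are hypothetical.
class LipModel:
    def __init__(self, corner_l, corner_r, dip, coeffs):
        self.corner_l = corner_l      # C1 = (x_cornerL, y_cornerL)
        self.corner_r = corner_r      # C2 = (x_cornerR, y_cornerR)
        self.dip = dip                # P0 = (x_dip, y_dip)
        # coeffs[m] = (a_m, b_m, c_m) for Curve m:
        # 1 = upper-left, 2 = upper-right, 3 = lower lip
        self.coeffs = coeffs

    @staticmethod
    def _parabola(coeff, x):
        a, b, c = coeff
        return a * x ** 2 + b * x + c

    def upper(self, x):
        """Upper contour: Curve 1 left of the dip point, Curve 2 right of it."""
        m = 1 if x <= self.dip[0] else 2
        return self._parabola(self.coeffs[m], x)

    def lower(self, x):
        """Lower contour: Curve 3 between the two lip corners."""
        return self._parabola(self.coeffs[3], x)
```

The curve coefficients themselves are obtained by the estimator described in the next section.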

III. Weighted Least Squares Estimator



In Section II a suitable model has been chosen for lip contour estimation. Now a proper estimator should be found to fit the lip contour.

For many estimation problems, one attempts to find an optimal, or almost optimal, estimator by considering the class of unbiased estimators and determining the one exhibiting minimum variance, the so-called MVU (Minimum variance unbiased) estimator[24]. In many cases, however, including lip contour estimation, no probabilistic assumptions can be made about the data, which makes it difficult to find such an MVU estimator. For this reason, we depart from this philosophy and investigate another class of estimators: the least squares estimator. A salient feature of the method is that no probabilistic assumptions are made about the data; only a signal model is assumed. Least squares estimators therefore have no general optimality properties associated with them, but they make good sense for many problems of interest, and the method of least squares is a good choice for lip contour estimation.
To estimate the lip contour properly with the model established in the previous section, the parameters a_m, b_m, c_m (m = 1, 2, 3) of Eqs.(1)-(3) should be estimated so that the difference between the estimated and the actual curve is minimized. The difference to be minimized can be described by

    J(u) = (Au - v)^T (Au - v)    (4)

where

    A = \begin{bmatrix} x_i^2 & x_i & 1 \\ x_{i+1}^2 & x_{i+1} & 1 \\ \vdots & \vdots & \vdots \\ x_{i+n}^2 & x_{i+n} & 1 \end{bmatrix}, \quad u = \begin{bmatrix} a_m \\ b_m \\ c_m \end{bmatrix}, \quad v = \begin{bmatrix} y_i \\ y_{i+1} \\ \vdots \\ y_{i+n} \end{bmatrix}

Minimizing Eq.(4) leads to the normal equations and their solution

    A^T A u = A^T v,    u = (A^T A)^{-1} A^T v    (5)

After calculating Eq.(5), the estimation of the lip contour can be made.

Yet in some cases the result of this estimation is not good enough (Fig.5). In order to get a more accurate and stable estimation of the lip contour, the estimator should be improved. Observing the estimated results, we notice that some points in the lip contour model, namely the central parts of the upper and lower lips and the two lip corners, are more important than others; the accuracy of these points directly affects the performance of the estimator, so it is useful to give them more weight during the estimation. We therefore introduce a weight matrix W. The weight associated with a feature point decreases as its distance from the dip point increases, and the weight associated with the corner points equals the weight associated with the dip point. We then minimize the weighted difference between the estimated and the actual curve,

    J(u, W) = (Au - v)^T W (Au - v)    (6)

where

    W = diag(W_1, W_2, ..., W_N)

By simple derivation, we have

    u = (A^T W A)^{-1} A^T W v    (7)

The results of the non-weighted LSE (Least squares estimator) and the weighted LSE are compared in Fig.5, which shows that the weighted LS estimator outperforms the non-weighted one. The introduction of the weight matrix ensures the accuracy of the positions of the points that are more important in the lip contour model, and in particular ensures that the upper and lower lips always intersect at the lip corners. Furthermore, the use of the weighted LS estimator helps, to some extent, to avoid errors caused by the inaccuracy of some individual feature points. Therefore, the weighted LS estimator is chosen as the estimator for the lip contour.

Fig. 5. Comparison of results of LS estimator and weighted LS estimator
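To make Eqs.(4)-(7) concrete, here is a minimal NumPy sketch of the weighted least squares fit for one parabola of the model. The matrix construction follows Eq.(4) and the solution follows Eq.(7); the particular weight function (an assumed exponential decay with the horizontal distance from the dip point, with the corner points reset to the maximum weight) is only one possible choice consistent with the description above, not the exact weights used in the paper.

```python
import numpy as np

def fit_parabola_wls(x, y, dip_x, corner_idx=(0, -1), decay=0.05):
    """Weighted least squares fit of y ~ a*x^2 + b*x + c (Eq.(7)).

    x, y       : coordinates of the feature points of one curve
    dip_x      : x-coordinate of the dip point P0
    corner_idx : indices of the lip-corner points, given the same
                 (maximum) weight as the dip point
    decay      : assumed decay rate of the weights (illustrative)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # Design matrix A with rows [x^2, x, 1], as in Eq.(4)
    A = np.column_stack([x ** 2, x, np.ones_like(x)])

    # Weights decrease with distance from the dip point; corner points
    # are reset to the maximum weight, as described in the text.
    w = np.exp(-decay * np.abs(x - dip_x))
    w[list(corner_idx)] = w.max()
    W = np.diag(w)

    # u = (A^T W A)^{-1} A^T W v (Eq.(7)); solve() avoids an explicit inverse
    u = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return u  # (a_m, b_m, c_m)

# Example: fit the lower-lip parabola to noisy synthetic feature points
xs = np.arange(0.0, 41.0, 4.0)
ys = 0.02 * (xs - 20.0) ** 2 + np.random.normal(0.0, 0.3, xs.size)
a3, b3, c3 = fit_parabola_wls(xs, ys, dip_x=20.0)
```

Replacing W with the identity matrix reduces this to the non-weighted estimator of Eq.(5).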

IV. Optimized Features for Lip Contour Estimation
The algorithm employed for the extraction of lip feature points was proposed by the authors in Ref.[4]. Lip corner detection based on pixel information is the first step of this algorithm; the detected lip corners are used as starting points, which helps to simplify the algorithm and improve its robustness. Then an improved "jumping snake" algorithm is used to obtain the other feature points. Different from other snake algorithms[25,26], specific color features from the DHT and CIE-LUV color spaces are utilized for the snake, which separate lip and skin better. An adaptive snake length is also proposed in Ref.[4]. In this way, the features for lip contour estimation are obtained.

Nevertheless, the number and quality of the features directly affect the efficiency, robustness and accuracy of the estimation.


For instance, too few feature points lead to larger errors, while too many feature points make the convergence of the algorithm very slow, which gives rise to poor real-time performance.

In order to make the estimation more efficient, more stable and more accurate, in this section we try to optimize the parameters for feature extraction, including the horizontal distance between feature points and the horizontal length of the snake of feature points, so as to obtain optimized features for lip contour estimation.
1. Horizontal distance between feature points
The horizontal distance d between feature points directly affects the number of features used for the estimation. Either too large or too small a distance between feature points causes problems for the estimation, so a proper distance should be found in order to obtain an appropriate number of features.

Several comparative experiments are conducted with different distances (d = 1, 2, 3, ..., 10 pixels) between feature points. The results of the comparative experiments with distances d = 2, 3, 4, 5, 7, 10 pixels are shown in Fig.6.

Fig. 6. Comparison of the results of models with different distances (d = 2, 3, 4, 5, 7, 10) between feature points

In general, a small distance means high accuracy, yet low convergence speed and high complexity. However, according to the results of the comparative experiments shown above, too small a distance (in this case d = 2 or 3) does not lead to a highly accurate estimation; instead, dense feature points make the estimation more sensitive to image noise, shadows and wrinkles, which results in a bad estimation. For d = 4, 5, 7, 10, we can see that the error grows as the distance d between feature points increases. Hence, neither too small nor too large a distance between feature points produces a satisfactory lip contour estimation, and a balance between the accuracy and the complexity of the algorithm has to be found. Through the comparative experiments, d = 4 is chosen as the proper distance between feature points for our estimation.

2. Horizontal length of the snake of feature points

The horizontal length of the snake of feature points, defined as the horizontal distance between the two ends of the snake, is another parameter which should be optimized so as to obtain better features for lip contour estimation. According to Ref.[4], the feature extraction algorithm works better for the central part of the lip, and its accuracy decreases as the distance between a feature point and the dip point increases. On one hand, too long a snake is not only wasteful but also affects the accuracy of the estimation; on the other hand, too short a snake cannot ensure the accuracy of the lip contour estimation. In short, it is important to decide on a suitable and adaptive length for the snake of feature points.

Similarly, another group of comparative experiments is conducted. To make the length of the snake adaptive, the length is associated with the width of the lip; in the experiments, we choose the length of the snake to be 1/3, 1/2, 3/4, 4/5, 5/6 and 1 of the lip width. The results are shown in Fig.7.

Fig. 7. Comparison of results of different lengths (1/3, 1/2, 3/4, 4/5, 5/6, 1 width of lip) of Snake

The result of the experiments verifies the analysis above that neither too long nor too short a snake is good for the estimation. Snakes with a length of 5/6 or 1 of the lip width not only slow down the convergence of the algorithm, but in some cases their accuracy is strongly affected by the poor performance of the feature extraction algorithm on points far from the dip point; to avoid this problem, some feature points have to be omitted and only those in the central part of the lip kept. A problem arises again with short snakes: for snakes with a length of 1/2 or 1/3 of the lip width, although they have high convergence speed and low complexity, the accuracy of the estimation is limited because short snakes of feature points cannot describe the overall shape of the lip contour accurately. Comparatively, snakes with a length of 3/4 or 4/5 of the lip width work better. However, snakes with a length of 3/4 of the lip width not only have higher convergence speed but also work more accurately around the lip corners (Fig.8) than those with a length of 4/5 of the lip width. Therefore, the estimation by snakes with a length of 3/4 of the lip width is stable and accurate, and it establishes the balance between the complexity and the accuracy of the algorithm.

Fig. 8. Comparison of results around lip corners of 3/4 and 4/5 width of lip of Snake

Thus, through comparative experiments, 3/4 of the lip width is chosen as the proper length of the snake of feature points.
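The two optimized parameters can be summarized in a small sketch of how the snake's feature points might be placed horizontally. The helper below is hypothetical (it is not taken from Ref.[4]); it simply samples a span of 3/4 of the lip width, centred between the detected lip corners, at a spacing of d = 4 pixels.

```python
import numpy as np

def snake_sample_positions(x_corner_l, x_corner_r, d=4.0, length_ratio=0.75):
    """Horizontal positions of the snake's feature points (illustrative).

    The snake covers length_ratio (3/4 by default) of the lip width,
    centred between the two lip corners, with points every d pixels.
    """
    lip_width = x_corner_r - x_corner_l
    span = length_ratio * lip_width
    start = x_corner_l + (lip_width - span) / 2.0
    return np.arange(start, start + span + 1e-9, d)

# Example: lip corners detected at x = 100 and x = 180 (lip width of 80 px)
xs = snake_sample_positions(100.0, 180.0)   # 16 positions, 4 px apart
```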

V. Experimental Results

According to the study outlined above, the complete optimized lip contour estimation method is formed. The chosen lip model consists of three parabolas, and the weighted least squares estimator is chosen as our estimator. The two important parameters that determine the number of features (observations) used in the estimation are defined as follows: the length of the snake of feature points is 3/4 of the lip width, and the distance between feature points is 4 pixels.

The optimized estimation method was first tested on the AR face database[23]. Each lip image was acquired from an image in the first five parts of the AR face database (consisting of 500 lip images).

Features for the estimation are obtained using the algorithm proposed in Ref.[4]. It is noted that lip contours can be estimated accurately using our optimized estimation method in most cases, with different shapes of lips and even in the presence of facial hair and shadow. The experimental results for the optimized estimation method are shown in Fig.9.

Fig. 9. Experimental results: (a)-(f) good results, (g)-(i) acceptable results

In some cases, the optimized lip contour estimation method does not improve the poor results obtained before optimization (Fig.10). The main reason is that the optimized algorithm is still not able to extract features accurately in the presence of dense facial hair.

Fig. 10. Experimental results: poor results

The statistic results before[4] and after optimization are shown in Table 1. According to these results, approximately 80.6% of the results after optimization are perceived to be "good", while only 70.8% were perceived to be "good" before optimization. Thus, we can infer that the optimized estimation method performs much better in our experiments. In a word, the good rate of lip contour estimation is improved noticeably after optimization.

Table 1. Experimental statistic results before and after optimization

                Before optimization           After optimization
           Good   Acceptable   Poor      Good   Acceptable   Poor    Total
  Part 1    76        21         3        82        15         3      100
  Part 2    67        30         3        84        13         3      100
  Part 3    74        20         6        79        16         5      100
  Part 4    70        26         4        78        18         4      100
  Part 5    67        26         7        80        14         6      100
  Total    354       123        23       403        76        21      500
           70.8%     24.6%     4.6%      80.6%     15.2%      4.2%
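The summary percentages in the last row of Table 1 follow directly from the per-category totals over the 500 test images; the short Python check below simply reproduces that arithmetic from the counts in the table.

```python
# Totals taken from Table 1 (sum over Parts 1-5, 500 images in all)
before = {"good": 354, "acceptable": 123, "poor": 23}
after = {"good": 403, "acceptable": 76, "poor": 21}
total = 500

for label, counts in (("before", before), ("after", after)):
    rates = ", ".join(f"{k}: {100.0 * v / total:.1f}%" for k, v in counts.items())
    print(f"{label}: {rates}")
# before: good: 70.8%, acceptable: 24.6%, poor: 4.6%
# after: good: 80.6%, acceptable: 15.2%, poor: 4.2%
```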

VI. Conclusion
Through the optimization of the lip model, the estimator and the feature parameters, the optimized lip contour estimation method proposed in this paper noticeably improves the estimation results. The optimized lip model consisting of three parabolas is employed to describe different lip shapes, which strengthens the robustness of the algorithm. The weighted least squares estimator makes our algorithm perform with high accuracy. In addition, the optimized feature parameters improve not only the accuracy but also the efficiency of the estimation. Experimental results demonstrate that the optimized lip contour estimation method markedly improves the result of estimation.
KURUOGLU Ercan Engin gratefully acknowledges partial support from the Image Communication and Information Processing Institute, Shanghai Jiao Tong University, in the framework of the 111 Foreign Experts Video Science and Technology Project.
References
[1] S.L. Wang, W.H. Lau, S.H. Leung, "Automatic lip contour extraction from color images", Pattern Recognition, Vol.37, No.12, pp.2375-2387, 2004.
[2] N.P. Erber, "Interaction of audition and vision in the recognition of oral speech stimuli", J. Speech Hear. Res., Vol.12, No.2, pp.423-425, 1969.
[3] M.T. Chan, "HMM-based audio-visual speech recognition integrating geometric and appearance-based visual features", IEEE Fourth Workshop on Multimedia Signal Processing, Cannes, France, pp.9-14, 2001.
[4] WU Wenchao, KURUOGLU Ercan Engin, WANG Shilin et al., "Automatic lip contour extraction using both pixel-based and parametric models", Chinese Journal of Electronics, Vol.22, No.1, pp.76-82, 2013.
[5] M.N. Kaynak, Q. Zhi, A.D. Cheok, K. Sengupta, K.C. Chung, "Audiovisual modeling for bimodal speech recognition", Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, Tucson, AZ, USA, Vol.1, pp.181-186, 2001.
[6] Y. Zhang, S. Levinson, T. Huang, "Speaker independent audio-visual speech recognition", Proceedings of IEEE International Conference on Multimedia and Expo, New York, USA, Vol.2, pp.1073-1076, 2000.
[7] S.H. Leung, S.L. Wang, W.H. Lau, "Lip image segmentation using fuzzy clustering incorporating an elliptic shape function", IEEE Transactions on Image Processing, Vol.13, No.1, pp.51-62, 2004.
[8] M. Gordan, C. Kotropoulos, A. Georgakis, I. Pitas, "A new fuzzy c-means based segmentation strategy: applications to lip region identification", Proceedings of the 2002 IEEE-TTTC International Conference on Automation, Quality and Testing, Robotics, Romania, 2002.
[9] S.L. Wang, W.H. Lau, S.H. Leung, A.W.C. Liew, "Lip segmentation with the presence of beards", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'04), Vol.3, pp.529-532, 2004.
[10] M. Sadeghi, J. Kittler, K. Messer, "Real time segmentation of lip pixels for lip tracker initialization", Lecture Notes in Computer Science, Vol.2124, pp.317-324, 2001.
[11] M. Sadeghi, J. Kittler, K. Messer, "Modelling and segmentation of lip area in face images", IEE Proceedings - Vision, Image and Signal Processing, Vol.149, pp.179-184, 2002.
[12] B. Goswami, W.J. Christmas, J. Kittler, "Statistical estimators for use in automatic lip segmentation", Proceedings of the 3rd European Conference on Visual Media Production (CVMP 2006), pp.79-86, 2006.
[13] I. Mpiperis, S. Malassiotis, M.G. Strintzis, "Expression compensation for face recognition using a polar geodesic representation", Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06), pp.224-231, 2006.
[14] I. Shdaifat, R. Grigat, D. Langmann, "Active shape lip modeling", Proceedings of the 2003 International Conference on Image Processing, Vol.3, pp.II-875-II-878, 2003.
[15] P.C. Yuen, J.H. Lai, Q.Y. Huang, "Mouth state estimation in mobile computing environment", Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR 2004), pp.705-710, 2004.
[16] K.S. Jang, S. Han, I. Lee, Y.W. Woo, "Lip localization based on active shape model and Gaussian mixture model", Lecture Notes in Computer Science, Vol.4319, pp.1049-1058, 2006.
[17] M. Jiang, Z.H. Gan, G.M. He, W.Y. Gao, "Combining particle filter and active shape models for lip tracking", Proceedings of the Sixth World Congress on Intelligent Control and Automation (WCICA 2006), Vol.2, pp.9897-9901, 2006.
[18] Z. Hammal, N. Eveno, A. Caplier, P.Y. Coulon, "Parametric models for facial features segmentation", Signal Processing, Vol.86, pp.399-413, 2006.
[19] H. Seyedarabi, W.S. Lee, A. Aghagolzadeh, "Automatic lip tracking and action units classification using two-step active contours and probabilistic neural networks", Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE'06), pp.2021-2024, 2006.
[20] B. Beaumesnil, F. Luthon, M. Chaumont, "Liptracking and MPEG4 animation with feedback control", Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), pp.II-677-II-680, 2006.
[21] L. Zhang, "Estimation of the mouth features using deformable templates", Proceedings of International Conference on Image Processing, Santa Barbara, Vol.3, pp.328-331, 1997.
[22] M.U. Ramos-Sanchez, J. Matas, J. Kittler, "Statistical chromaticity-based lip tracking with B-splines", Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Munich, pp.IV-2973-IV-2976, 1997.
[23] A.M. Martinez and R. Benavente, "The AR face database", CVC Technical Report #24, 1998.
[24] Steven M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall PTR, pp.219, 2005.
[25] M.O. Berger, R. Mohr, "Towards autonomy in active contour models", 10th International Conference on Pattern Recognition (ICPR'90), Atlantic City, pp.847-851, 1990.
[26] P. Delmas, N. Eveno, M. Lievin, "Towards robust lip tracking", 16th International Conference on Pattern Recognition (ICPR'02), Quebec, pp.528-531, 2002.
WU Wenchao received the B.E. degree in information security engineering and the M.E. degree in information and communication engineering from Shanghai Jiao Tong University, China, in 2010 and 2013, respectively. He also obtained a master's degree from the School of Electrical and Computer Engineering, Georgia Institute of Technology, in 2013. He is now pursuing the Ph.D. degree in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. His research interests include pattern recognition, big data visualization and analysis, and artificial intelligence. (Email: wenchao.wu@ust.hk)

WANG Shilin received the Ph.D. degree from City University of Hong Kong in 2004. Since 2004, he has been with the School of Information Security Engineering, Shanghai Jiao Tong University, where he is currently an associate professor. His biography is listed in Marquis Who's Who in Science and Engineering.



KURUOGLU Ercan Engin obtained the Ph.D. degree in information engineering from Cambridge University in 1998. He is an associate professor and senior researcher at ISTI-CNR, Pisa, Italy. He was a visiting professor at the Georgia Institute of Technology in Shanghai in 2007 and 2011. He is a recipient of the Alexander von Humboldt Fellowship, Germany, and the editor-in-chief of Digital Signal Processing, Elsevier. His research interests are in statistical signal processing and information theory with applications in image processing, astronomy and bioinformatics.

MA Xiaoli obtained the Ph.D. degree from the University of Minnesota in 2003. She is now an associate professor in the School of Electrical and Computer Engineering, Georgia Institute of Technology. Her research interests include signal processing and image processing.


LI Shenghong is a professor in the Department of Electronic Engineering, Shanghai Jiao Tong University. His biography is listed in Marquis Who's Who in Science and Engineering.

LI Jianhua is a professor and the director of the School of Information Security Engineering, Shanghai Jiao Tong University, Shanghai, China. Since 2000, he has been the chief scientist of the Information Security Technology Expert Board of the State High-Tech Development Plan and is involved in the General Expert Board of State Electronic Government Demonstration Pilot Projects.

Lionel M. Ni is a chair professor with the Department of Computer Science and Engineering, The Hong Kong University of Science and Technology (HKUST). He also serves as the special assistant to the President of HKUST, as dean of the HKUST Fok Ying Tung Graduate School, and as a visiting chair professor of the Shanghai Key Laboratory of Scalable Computing and Systems, Shanghai Jiao Tong University, Shanghai, China. He has chaired over 30 professional conferences and has received six awards for authoring outstanding papers.
