
Face Image Validation System

M. Subasic, S. Loncaric, T. Petkovic, H. Bogunovic
Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
marko.subasic@fer.hr

V. Krivec
Siemens AG, Strassganger Strasse 315, Graz, Austria
vuk.krivec@siemens.com

Abstract
In this paper, we present a novel face image validation system. The purpose of the system is to evaluate the quality of face images for identification documents and to detect face images that do not satisfy the image quality requirements. To determine image quality, the system first performs face detection in order to find facial features and determine the image background. The system consists of seventeen separate tests. Each test checks one quality aspect of the face or of the whole image and compares it to the requirements of the International Civil Aviation Organization (ICAO) proposals for machine readable travel documents. The requirements are designed to ensure good conditions for automatic face recognition. The tests are organized hierarchically, so the low-level tests are executed first and the high-level tests last. The result of a test is a fuzzy value representing a measure of the image quality. Each test has a set of parameters that can be tuned to produce the desired performance of the test. Initial testing of the system has been performed on a set of 190 face images and has demonstrated the feasibility of the method.

1. Introduction
The main goal of the system is to validate face images and to check whether an image is appropriate for use in identification documents. Such images should allow automatic face recognition to be performed successfully. The set of rules regarding the face image parameters that we use in our system is defined by the International Civil Aviation Organization (ICAO). The ICAO proposals define thresholds and allowed ranges for parameters of the face image. The parameters concerned are numerous and deal with image resolution, sharpness and focus, image tonal range and color, lighting, and subject and scene composition. The proposed system checks a subset of the parameters defined in the ICAO proposals [1,2]. To the best of our knowledge, no similar system has been published in the literature. The procedure can be divided into two parts: the first part consists of low-level tests based on the whole image, and the second part contains more complex, high-level tests that require a more detailed image analysis (e.g. facial feature segmentation) prior to test execution.

2. Description of the Face Validation System
The system consists of 17 tests, and each test checks one image quality parameter according to the ICAO proposals. The procedure is conducted in two steps. In the first step, low-level pixel intensity-based tests are executed, which require no higher-level analysis of the face image. In the second step, more complex tests are executed, which require high-level image analysis including face detection and facial feature detection [5-9]. This order of execution is necessary because the low-level tests check image quality aspects that can influence the results of the high-level tests. For example, if a low-level test fails, the results of the high-level tests may not be reliable. The parameters that our system checks are the following: image resolution, image aspect ratio, image sharpness, image brightness, image color balance, background uniformity, background color tone, shadows, hot spots, eye (head) tilt, horizontal and vertical eye position, head width and height, red eyes, head rotation, and eyes looking away, as indicated in Table 1.

The low-level tests are based on the image size and on pixel intensity values taken from the whole image. These tests include the image resolution and image aspect ratio tests, the sharpness test, the over/underexposure test, and the color balance test. The high-level tests require knowledge of the image structure in order to conduct testing. The basic regions in a face image are the image background and the face; both are obtained using a segmentation procedure.

2.1 Background segmentation
The background uniformity and background color tests require knowledge of the image region corresponding to the background, which is also used in other high-level tests. Background segmentation is performed on a per-pixel basis. Pixels in a narrow region in the two upper corners of the image are selected. Both corners are checked for uniformity, and if high non-uniformity is found in one corner, that corner is left out of further calculation. The average value and standard deviation of all three color components (HSV) are calculated. An image pixel is considered a background pixel if its color channel values are within a range around the corresponding average value. The span of the range is obtained as the corresponding standard deviation multiplied by a scaling factor.
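The corner-statistics rule described in Section 2.1 can be sketched as follows. This is an illustrative Python reconstruction, not the authors' implementation (which was written in C with OpenCV); the corner size, the non-uniformity threshold, and the scaling factor k are hypothetical values, not the paper's tuned parameters.

```python
import numpy as np

def background_mask(img, corner=0.1, k=2.5):
    """Per-pixel background classification from upper-corner statistics.
    Any 3-channel image works for this sketch; the paper uses HSV.
    corner, the non-uniformity threshold (30), and k are illustrative."""
    h, w, _ = img.shape
    ch, cw = max(1, int(h * corner)), max(1, int(w * corner))
    corners = [img[:ch, :cw].reshape(-1, 3), img[:ch, -cw:].reshape(-1, 3)]
    # Drop a corner whose channels vary too much (high non-uniformity).
    corners = [c for c in corners if c.std(axis=0).max() < 30]
    if not corners:
        return np.zeros((h, w), dtype=bool)
    samples = np.vstack(corners).astype(float)
    mean, std = samples.mean(axis=0), samples.std(axis=0)
    # Background pixel: every channel within mean +/- k*std (+epsilon
    # so a perfectly uniform corner still matches itself).
    diff = np.abs(img.astype(float) - mean)
    return np.all(diff <= k * std + 1e-6, axis=2)

# Synthetic example: uniform backdrop with a darker central "face" blob.
img = np.full((100, 100, 3), 128, dtype=np.uint8)
img[40:60, 40:60] = 30
mask = background_mask(img)
```

On this synthetic image the mask is true for the uniform surround and false inside the darker blob, mirroring the background/face split the paper's later tests rely on.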


Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (2005)

Test name: ICAO requirement
Image resolution: Image size is at least 420x525 pixels.
Image aspect ratio: Should be between 1:1.25 and 1:1.33.
Image sharpness: ICAO proposals do not specify exact criteria.
Image brightness: ICAO proposals do not specify exact criteria.
Color balance: ICAO proposals do not specify exact criteria.
Background uniformity: Image background is required to be uniform or smoothly transitional in one direction. ICAO proposals do not specify exact criteria.
Background tone: Image background tone is suggested to be 18% gray; no tolerance is specified by the ICAO proposals.
Shadows: ICAO proposals state that there should be no shadows in the image and that a diffuse light source should be used. The system checks for shadows on the face; the allowed amount of shadow is determined ad hoc.
Hot spots: Hot spots are very bright spots on the face caused by a directional or single-point light source. The maximum allowed size of a hot spot region is determined ad hoc.
Eyes tilt: Eye tilt is equivalent to head tilt; the maximum allowed deflection in any direction, according to ICAO, is 5 degrees. No tolerance is specified by the ICAO standard.
Horizontal eyes position: Horizontal position must be in the image center.
Vertical eyes position: Vertical position should be at 50-70% of the image height.
Head width: Head width is determined in a narrow region under the eyes. The image width to head width ratio is required to be between 7:4 and 2:1.
Head height: Head height is required to be 70-80% of the image height.
Head rotation: Head rotation should be within 5 degrees, but we extended the range by an additional tolerance.
Red eyes: ICAO proposals state that there should be no red eyes. The allowed amount of red color in the eyes is determined ad hoc.
Eyes looking away: Eyes should not look away from the camera; we check this condition by checking the symmetry of each eye.

Table 1. Facial image quality tests and requirements according to ICAO.

2.2 Skin segmentation
The shadows and hot spots tests require face segmentation. The interesting part of the face in this case is the skin, so face segmentation can be accomplished by skin detection. Skin detection is aided by the segmented background, which helps limit the skin region to non-background areas. The skin detection algorithm is based on the work of Rein-Lien Hsu [3-5] and is color based. According to Hsu, skin tone is not independent of luminance: in an experiment on an image database, Hsu determined that in the YCbCr color space skin colors form a curved shape. A nonlinear transformation straightens this shape, making the skin tone independent of the luminance. All image pixels that do not belong to the background are declared skin pixels if their Cb and Cr channel values fall within a predefined ellipse in Hsu's transformed color space.

2.3 Eye segmentation
Once the skin is segmented, hot spots and shadows are detected within the skin region only. The remaining tests require the eyes to be segmented, and the system uses the segmented background and skin to limit the search for the eyes. Eye segmentation uses a modified Rein-Lien Hsu method [3-5]. The basic idea is to find the eyes as the image regions where bright and dark regions neighbor each other; in human faces this mostly happens at the eyes. We use morphological grayscale erosion to emphasize dark regions and morphological grayscale dilation to emphasize bright regions. The image resulting from erosion should have a dark area over the eyes, while the image resulting from dilation should have a bright region over the eyes. This operation is performed on the grayscale image (Y) and on the two color channels (Cr, Cb), and from the results of the morphological operations we produce an eye map (Figure 2, rightmost image in the second row) in which regions containing borders between dark and bright areas are emphasized.

A threshold is applied to the eye map, and the connected regions in the resulting binary image represent eye candidates. Among all eye candidates, only those whose size is within an allowed range and whose shape is not too elongated are selected. If fewer than two eye candidates are found, the eyes are not segmented and the tests using eye segmentation results cannot be performed. Among the remaining eye candidates we try to find the best matching pair by checking several parameters; all parameters contribute to a fitness measure, and the pair with the best fitness is declared to be the eyes. The parameters checked are the following: the difference in size of the two eye candidates, the difference in standard deviation in the x direction, the difference in standard deviation in the y direction, and the distance between the two eye candidates. If two eyes are detected, the following image quality aspects are checked: eyes tilt, horizontal and vertical position of the eyes, red eyes, and eyes looking away.
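The eye-map construction and pair selection described in Section 2.3 can be sketched as follows. This is an illustrative Python reconstruction using only the luma channel (the original system is written in C with OpenCV and also uses the Cr and Cb channels); the structuring-element size, the threshold, the size and elongation limits, and the fitness weighting are all hypothetical, not the paper's tuned parameters.

```python
import numpy as np
from scipy import ndimage

def eye_map(gray, size=5):
    """Large where a dark region borders a bright one: dilation picks up
    nearby bright pixels, erosion nearby dark ones (luma only here)."""
    dil = ndimage.grey_dilation(gray.astype(float), size=(size, size))
    ero = ndimage.grey_erosion(gray.astype(float), size=(size, size))
    return dil / (ero + 1.0)

def eye_candidates(emap, thresh, min_size=4, max_elong=3.0):
    """Threshold the eye map; keep connected blobs of plausible size
    whose shape is not too elongated."""
    labels, n = ndimage.label(emap > thresh)
    cands = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) < min_size:
            continue
        sy, sx = ys.std() + 1e-6, xs.std() + 1e-6
        if max(sy / sx, sx / sy) > max_elong:
            continue
        # (y center, x center, size, y spread, x spread)
        cands.append((ys.mean(), xs.mean(), len(ys), sy, sx))
    return cands

def best_pair(cands):
    """Pick the candidate pair with the lowest fitness; the weighting of
    the size, spread, and distance terms is purely illustrative."""
    best, best_fit = None, float("inf")
    for i in range(len(cands)):
        for j in range(i + 1, len(cands)):
            a, b = cands[i], cands[j]
            fit = (abs(a[2] - b[2])        # size difference
                   + abs(a[3] - b[3])      # y-spread difference
                   + abs(a[4] - b[4])      # x-spread difference
                   - abs(a[1] - b[1]))     # favor horizontally distant pairs
            if fit < best_fit:
                best, best_fit = (a, b), fit
    return best

# Synthetic check: two dark squares on a bright field stand in for eyes.
gray = np.full((60, 100), 200.0)
gray[20:26, 20:26] = 20.0
gray[20:26, 70:76] = 20.0
pair = best_pair(eye_candidates(eye_map(gray), thresh=3.0))
```

On the synthetic image the eye map peaks along the dark-bright borders of the two squares, each border ring becomes one candidate, and the pairing step selects the two horizontally separated blobs.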



The rest of the tests use all available segmentation results: head width and height, and head rotation. The order of execution of the tests is illustrated in Figure 1, where the tests are grouped into execution groups. Each group can run its tests only if the previous group succeeded. The segmentations required by each group are also shown. Each test returns a fuzzy value that can be used to determine the image quality, but it can also be used to obtain a quantitative measure of the goodness or badness of the image.

[Figure 1: block diagram of the execution groups and their required segmentations: (1) resolution, aspect ratio, color balance, sharpness, and brightness; (2) background uniformity and background tone, after background segmentation; (3) shadows on face and hot spots, after skin segmentation; (4) eyes tilt, horizontal and vertical eyes position, red eyes, head rotation, head width, head height, and eyes looking away, after eye segmentation.]
Figure 1. Execution order of tests and their required segmentations.

3. Implementation details
Each test is based on the requirements stated in the ICAO proposals; the requirements for each test are shown in Table 1. For ICAO requirements that do not define quantitative criteria, we constructed appropriate thresholds and ranges. The values were obtained in experiments on several images from our image database. Some ICAO-defined ranges proved too strict, so we added some tolerance. Each test has a set of parameters that are used internally in the test's algorithm or in determining the resulting fitness measure of the image. These parameters are easily altered, so the system can be easily adapted or fine-tuned.

The software is written in the C language using the OpenCV library and runs on the Windows XP platform. Testing a single image takes less than 10 seconds on a Pentium 4, 3 GHz processor.

4. Experimental results
The proposed system has been validated using a database of 189 images, of which 44 conform to the ICAO requirements and 145 fail one or more ICAO criteria. Figure 2 (images 1-14) illustrates the steps of the face image analysis procedure on one sample image from the database. Image 1 shows the size-normalized input image. Image 2 shows the color-normalized image. Image 3 shows the result of background segmentation. Skin regions are shown in Image 4. The mask covering skin-colored face regions, including the eyes, is contained in Image 5. Shaded image regions are segmented in Image 6. Images 7 and 8 show detected shadow regions on the face. Hot spot detection resulted in an empty mask, shown in Image 9, because the example image has no hot spots. Images 10-12 show the results of the eye segmentation step. Images 13 and 14 illustrate the eyes looking away test for the left and right eye, respectively.

The validation has shown that 88% of the images are correctly classified, i.e. either the image was good and the system recognized it as good, or the image was bad and the system recognized it as such. The percentages of true positive, false positive, true negative and false negative classifications are given in Table 2.

          True     False
Positive  11.6%    0.5%
Negative  76.2%    11.6%
Table 2. Experimental results.

The infinite variability of face images can present problems for our high-level tests, or more precisely for the segmentations. Background segmentation is based on statistics from the upper image corners; if, for example, hair spans over the upper corners, erroneous background segmentation will occur. Since skin detection is color based, near-skin-tone background, clothes, or hair can be mistaken for skin. Eye segmentation is the most complex and hence the most sensitive: any region containing a dark-bright transition that satisfies our conditions can be declared an eye candidate, and in some cases such regions can produce better eye pairs according to our fitness estimation. Our system can be improved in two ways: by fine-tuning the algorithms on a larger image database, or by combining the current segmentation methods with other segmentation methods. For example, eye segmentation could be improved by a Hough transform for circles.
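As an illustration of the fuzzy test results described in Section 2, here is a sketch of a single low-level test, the aspect-ratio check. The allowed width:height range, 1:1.25 to 1:1.33, comes from Table 1; the linear fall-off and the tolerance band are hypothetical tunable parameters of the kind described in Section 3, not the authors' actual values.

```python
def aspect_ratio_test(width, height, lo=1/1.33, hi=1/1.25, tol=0.05):
    """Fuzzy aspect-ratio test: 1.0 inside the ICAO range, decaying
    linearly to 0.0 over a tolerance band outside it (tol is an
    illustrative parameter)."""
    ratio = width / height
    if lo <= ratio <= hi:
        return 1.0   # fully within the ICAO range
    dist = lo - ratio if ratio < lo else ratio - hi
    return max(0.0, 1.0 - dist / tol)

print(aspect_ratio_test(420, 525))   # 1.0: 420/525 = 0.8 is inside the range
```

Returning a score in [0, 1] instead of a hard pass/fail lets the system both rank images by quality and apply a simple threshold when a binary decision is needed.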

Figure 2. Illustration of intermediate steps of the face analysis procedure (images referred to in the text as images 1-14).

5. Conclusion
We presented a novel system for classification of facial images according to the ICAO proposals. The purpose of the system is to ensure image quality sufficient for automatic face recognition. Seventeen tests are conducted at the low and high levels of processing to determine various quality aspects of a facial image. Experimental results have shown the feasibility of the approach, and the system has proven robust. Future work will include testing on a larger image database and further improvement of the image analysis techniques.

6. Acknowledgment
This work has been supported by a grant from Siemens AG Austria. The authors thank S. Aras-Gazic, J. Rokov, and M. Strkalj for their contributions to the software implementation and experimental validation of the system.

7. References
[1] Biometric Data Interchange Formats, Technical report, http://www.icao.int/mrtd/download/technical.cfm
[2] ICAO Doc 9303, Parts I, II, III, Machine Readable Travel Documents specifications, http://www.icao.int/mrtd/publications/doc.cfm
[3] R.-L. Hsu, "Face Detection and Modeling for Recognition," Ph.D. Thesis, Dept. of Computer Science & Engineering, Michigan State University, May 2002.
[4] R.-L. Hsu and A. K. Jain, "Face modeling for recognition," IEEE Int'l Conf. Image Processing (ICIP), vol. II, pp. 693-696, Oct. 2001.
[5] R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, "Face detection in color images," IEEE Trans. Pattern Analysis and Machine Intelligence (TPAMI), vol. 24, no. 5, pp. 696-706, May 2002.
[6] B. Heisele, T. Poggio, and M. Pontil, "Face detection in still gray images," AI Memo 1687, Center for Biological and Computational Learning, MIT, Cambridge, MA, 2000.
[7] K. Hotta, T. Kurita, and T. Mishima, "Scale Invariant Face Detection Method Using Higher-Order Local Autocorrelation Features Extracted from Log-Polar Image," FG 1998, pp. 70-75.
[8] S. J. McKenna, S. Gong, and J. J. Collins, "Face Tracking and Pose Representation," BMVC 1996.
[9] A. W. Senior, "Face and Feature Finding for a Face Recognition System," Proc. Audio- and Video-based Biometric Person Authentication '99, pp. 154-159.
