JOURNAL OF COMPUTING, VOLUME 4, ISSUE 2, FEBRUARY 2012, ISSN 2151-9617

Influencing Factors on Classification of Photographic and Computer Generated Images


Ahmed Talib, Massudi Mahmuddin, Husniza Husni and Loay E. George
Abstract - Classification of images into photographic (PG) and computer graphic (CG) images is useful in many applications such as web searching, image indexing, and video classification. Distinguishing between PG and CG is still a challenging task, in spite of the many studies that have been conducted: the accuracy they attain remains below an acceptable level, ranging from 70% to 90%. These studies claim that their systems produce good results, but in practice this holds only in a limited domain (for specific datasets). This paper presents the components of a classification system and the techniques used in each component, and highlights the important factors that influence each component. Moreover, the effect of these factors on three performance terms (speed, accuracy, and diversity) is discussed. This study guides researchers who wish to contribute to and improve results in this field by identifying these influencing factors.

Index Terms - Image Classification; Computer Generated Images; Influencing Factors; Machine Learning.

A. Talib is a PhD student in the School of Computing, College of Arts and Sciences, Universiti Utara Malaysia, UUM, 06010 Sintok, Kedah, Malaysia, on leave from the Foundation of Technical Education, Baghdad, Iraq. M. Mahmuddin and H. Husni are PhD holders and senior lecturers in the Graduate Department, School of Computing, College of Arts and Sciences, Universiti Utara Malaysia, UUM, 06010 Sintok, Kedah, Malaysia. L. E. George is Ass. Prof. in the Computer Science Department, College of Science, Baghdad University, Al-Jadriya, 10071, Baghdad, Iraq.

1 INTRODUCTION


The widespread availability of digital cameras produces large amounts of images, which require management and classification. Images can be classified, according to the way they are generated, into photographic (PG) and computer graphic (CG) images. PG images are images captured by digital cameras, while CG images are images created on a computer or generated by rendering software. Classification between PG and CG images is useful in many applications such as web and desktop image search, image indexing, video classification, and other image processing applications. Distinguishing between PG and CG images remains a challenging task for many researchers. Breakthroughs in this field can reduce, to a certain extent, image forgery in criminal investigation, journalism, and intelligence services.

Any classification system has two stages: the first stage is feature extraction and the second is the classification stage. In this paper, we investigate the factors that affect each of these two stages. In the next sections, these stages are explained in detail. Section 3 identifies the factors that influence the stages of a classification system. Sections 4 and 5 contain the discussion and conclusion of this paper.

2 CLASSIFICATION SYSTEM DESCRIPTION

The feature extraction stage is considered the heart of the classification system, while the classification stage classifies images based on the features extracted in the first stage. Both stages are influenced by different factors that may affect their performance. In the following subsections, we describe these stages and discuss their associated factors.

2.1 Feature Extraction Stage


Different features are extracted to distinguish between PG and CG. These features can be divided into two categories: (i) features based on the visual content of an image, and (ii) features based on the physical characteristics of an image.

2.1.1 Visual-Based Features


Many researchers use visual features to differentiate between PG and CG images [1-4]. These features are usually used together with statistical and spatial measures, because the two are highly correlated: many statistical and spatial measures are computed over visual features to obtain the final features used in the classification process, such as the color histogram as a statistical measure and color moments as spatial measures.

Athitsos, Swain and Frankel [1] introduced a set of features for differentiating between PG and CG images that is considered the basis for all subsequent research in this field. These features are the color histogram, farthest neighbor, prevalent color, farthest neighbor histogram, saturation, number of colors, smallest dimension, and dimension ratio. Lienhart and Hartmann [2] selected some of the powerful features used in [1] for classification, using the AdaBoost algorithm. In addition, they try to differentiate between real photos and computer-generated but realistic-looking images by measuring noise with median and Gaussian filters. Furthermore, they classify graphical images themselves into presentation slides/scientific posters and comics/cartoons using some characteristics of the visual text appearing in these images.

Ianeva et al. [3] used the average saturation, the ratio of pixels with brightness greater than a certain threshold, an HSV color histogram, a histogram of the angles and absolute values of the gradient at each point (edge direction), the compression ratio, and a multi-scale pattern spectrum (distribution of object sizes). Ng and Chang [4] determined that Natural Image Statistics (NIS) are also very useful in classification. NIS include second-order power spectrum features, wavelet higher-order statistics features, and local patch features. Ng et al. [5] used the same features as [3] together with the visual [4] and physical [6] features to improve system accuracy. Wang and Kan [7] explained that the context of images can help in image classification: they used statistics of the text surrounding the images, besides the visual content of the images, as a useful feature set for classification. Chen et al. [8] used new statistical features, namely ranked histogram features and ranked region size features, as well as older features such as color moment features (mean, standard deviation, and skewness, taken from the HSV color space) and correlogram features. Chen et al. [9] examined statistical moments of the characteristic functions of wavelet sub-bands and their prediction errors for both the HSV and RGB color spaces as a comparison in terms of accuracy. Huan and De-Shuang [10] used Gabor wavelet features with different orientations at different scales for texture analysis. Sankar et al. [11] used the YCbCr color space to obtain moment-based features and RGB to obtain other features, namely the color histogram, texture interpolation, and patch statistics. Lyu et al. [12, 13] used a statistical model for photographic images consisting of first- and higher-order wavelet statistics. Chen et al. [14] used an alpha-stable distribution model based on fractional lower-order moments, built to characterize the wavelet decomposition coefficients of natural images. In [15, 16], transition probability matrices are derived by applying a Markov process to model the difference JPEG 2-D arrays, formulating distinguishing features that are second-order statistics. In [17], several visual features derived from color, edge, saturation, and texture features extracted with the Gabor filter were examined: the Gabor-filter-based texture feature shows very promising results (99% for PG and 91.5% for CG), while the other visual features show some ability to perform the differentiation. Wang and Moulin [18] used a 144-dimensional (144-D) feature vector extracted from the characteristic functions of wavelet histograms to distinguish between PG and CG.

From all the visual features and their statistical and spatial measures mentioned above, we conclude that visual features are very important for distinguishing between PG and CG when suitable features are used with the correct measures.
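As a concrete illustration of two of the simplest visual features listed above (a color histogram and HSV color moments), the following minimal sketch computes them with OpenCV and NumPy. It is not code from any of the cited systems; the function names and bin counts are our own illustrative choices.

# Minimal sketch of two simple visual features discussed above:
# HSV color moments (mean, standard deviation, skewness) and a joint
# color histogram. Uses OpenCV and NumPy; names are illustrative only.
import cv2
import numpy as np

def hsv_color_moments(image_bgr):
    """Return mean, standard deviation and skewness per HSV channel (9-D)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    feats = []
    for i in range(3):
        ch = hsv[..., i]
        mean = ch.mean()
        std = ch.std()
        # Skewness taken as the cube root of the third central moment.
        skew = np.cbrt(((ch - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)

def color_histogram(image_bgr, bins=8):
    """Return a normalized joint color histogram flattened to bins**3 values."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)

# Example usage: build one feature vector for a classifier.
# img = cv2.imread("example.jpg")
# feature_vector = np.concatenate([hsv_color_moments(img), color_histogram(img)])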

2.1.2 Physical-Based Features

Physical features are features produced by the physical generation process of PG and CG images. In other words, these features are produced by the devices used to generate the PG (digital cameras) and CG (computers). Thus, some of the methods that have been proposed are compared here.

Ng et al. [6] stated that a geometry-based image descriptor is used in their discrimination prototype. These geometry-based features make it possible to discover distinctive physical characteristics of PG and CG, such as the gamma correction of PG and the sharp structures of CG, which were not used in previous classification techniques. The prototype analyzes the differences between the physical generative processes of computer graphics and photographic images, characterized by differential geometry and local patch statistics. Furthermore, Ng [5] used these geometrical features together with the features used in [3, 4] to increase the performance of the system. Dehnie et al. [19] explained that image acquisition in a digital camera is fundamentally different from the generative algorithms deployed for computer-generated imagery. This difference is captured in the properties of the residual image (pattern noise, in the case of digital camera images) extracted by a wavelet-based denoising filter. Dirik et al. [20] used two detectors: (i) a Bayer pattern detector that characterizes the presence of color filter array (CFA) interpolation, and (ii) a chromatic aberration detector that introduces a set of features measuring the misalignment among color channels, to enable classification between camera and computer-generated images. Finally, they incorporated these detectors with the features used in [13] to improve system performance. Khanna et al. [21] proposed a scheme that utilizes statistical properties of the residual noise and the differences in the geometry of the imaging sensors of different devices; the residual noise present in computer-generated images does not have structures similar to the pattern noise of cameras and scanners. This method is based on the differences in the image generation processes of these devices and is independent of the image content. Research in image forgery uses these physical-based techniques extensively; more about physical-based features can be found in [22].

Physical-based features complement visual features in the recognition of PG and CG, because each device leaves a special signature on the image. Therefore, a combination of visual and physical features is necessary to obtain good classification accuracy.
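To make the residual-noise idea concrete, the sketch below denoises an image, subtracts the result to obtain a residual, and summarizes the residual with a few statistics. It is only an illustration of the general approach described above: a median filter stands in for the wavelet-based denoising filter of [19], and the chosen summary statistics are our own.

# Rough sketch of the residual-noise idea: denoise, subtract, and
# summarize the residual. The median filter is a stand-in for the
# wavelet-based denoising filter used in the cited work.
import cv2
import numpy as np

def residual_noise_features(image_bgr, ksize=3):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, ksize)
    residual = gray.astype(np.float64) - denoised.astype(np.float64)
    # Summary statistics of the residual; camera images tend to carry
    # sensor pattern noise that CG renderings lack.
    return np.array([residual.mean(),
                     residual.std(),
                     np.abs(residual).mean(),
                     np.percentile(np.abs(residual), 95)])

# img = cv2.imread("example.jpg")
# phys_feats = residual_noise_features(img)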

2.2 Classification Stage


Classification, or decision making, is a very important stage in a classification system, since it makes the final decision on whether an image is PG or CG. Most existing classification algorithms use supervised machine learning [1-2, 9-10, 12-13, 16-17], and one uses a semi-supervised learning technique [7]; the authors in [7] used the text around images on the web to help make classification decisions. The reason for using supervised learning is that the designers did not have an accurate threshold value for their features to decide whether an image is PG or CG. Therefore, they used supervised techniques to train their systems on samples in order to obtain a suitable threshold value, which is considered the separating line between PG and CG. In addition, training reveals the behavior of the classifier on different datasets. The factors that influence the classifier are discussed later with respect to the three performance terms.
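A minimal sketch of this supervised setup is given below, assuming scikit-learn and some feature-extraction routine such as the ones sketched earlier (extract_features is a placeholder name, not a function from any cited work; the split ratio and SVM parameters are illustrative).

# Minimal sketch of supervised PG/CG classification with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_pg_cg_classifier(features, labels):
    """features: (n_samples, n_dims) array; labels: 1 for PG, 0 for CG."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0, stratify=labels)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # non-linear SVM
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf

# clf = train_pg_cg_classifier(np.vstack(all_feature_vectors), np.array(all_labels))
# is_photo = clf.predict(extract_features(new_image).reshape(1, -1))[0] == 1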




3 INFLUENCING FACTORS ON CLASSIFICATION SYSTEMS


In this paper, we attempt to investigate all the factors that influence the classification of PG and CG images, and to assist researchers working in this area in choosing directions for improving state-of-the-art techniques based on these factors. In the following subsections, the factors affecting the two stages of classification are highlighted. These factors are also shown in Fig. 1.

Fig. 1. Influencing Factors on PG and CG Classification.



3.1 Factors Related to the Feature Extraction Stage


In this section, the important factors that influence the feature extraction stage of a classification system are discussed. It is worth mentioning that the performance of any classification system, especially an online system, is measured in three terms: speed, accuracy, and diversity of input images [5]. Therefore, all factors that affect these three terms should be considered. Table 1 lists the factors that affect the feature extraction stage, together with the performance terms affected by each factor. It is also worth noting that the image dataset used in a classification system plays a vital role in the effectiveness of the chosen features and, in turn, in the performance of the classification process. Therefore, we cannot say that one method is better than another unless they are tested on the same dataset. Among the factors listed, the quality of features is the most important one (as explained later); hence, it must be the focus of the classification, while the other factors are taken into account as minor factors.

TABLE 1
INFLUENCING FACTORS ON FEATURE EXTRACTION STAGE

Factor: Number and Quality of Features
Performance terms affected: Time and Accuracy. Increasing the number of features increases the time required for classification, while high-quality features can improve system accuracy without adding more features.
Supporting references: [6, 9, 11, 14, 23]

Factor: Color Space and its Sub-bands
Performance terms affected: Accuracy. The characteristics of the color space affect the accuracy of distinguishing between PG and CG. Three color spaces are used in this area: RGB, HSV, and YCbCr (a conversion sketch follows the table).
Supporting references: RGB is used by [1, 2, 4, 6, 7, 10, 12, 14, 17, 19, 20, 21, 24]; HSV and RGB are used by [5, 8, 9, 18, 23]; HSV alone by [3, 14]; YCbCr by [15]; the Y and Cb bands only by [16]; RGB and YCbCr by [11].

Factor: Image Format
Performance terms affected: Accuracy. The format used to save the image also affects system accuracy.
Supporting references: JPEG and GIF image formats are compared by [1, 2, 3]. In most cases GIF shows superiority over JPEG, because JPEG's lossy compression destroys features.

Factor: Size of Image
Performance terms affected: Time and Accuracy. The size of the processed image affects the speed of the classification system; downscaling reduces processing time but destroys edges in the image, which degrades accuracy.
Supporting references: Resizing or image cropping is used by [5, 12] to speed up classification. In our view, cropping a central region captures the image characteristics without affecting accuracy, provided the region is large enough for feature extraction.

Factor: PG and CG Characteristics
Performance terms affected: Accuracy. The characteristics of PG and CG images also have some effect on system accuracy.
Supporting references: CG characteristics are used by [1, 3, 10, 24] to differentiate between PG and CG; natural image statistics are used by [4]; both types are used by [2, 5, 8, 11, 12, 14, 18, 23].
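The color-space row above only names the spaces; the sketch below shows one way to obtain all three views of an image with OpenCV, so that the same feature measures can be computed per space. It is illustrative only and not taken from any cited work.

# Sketch of obtaining the three color spaces listed in Table 1
# (RGB, HSV, YCbCr) from one BGR image with OpenCV.
import cv2

def color_space_views(image_bgr):
    return {
        "RGB":   cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB),
        "HSV":   cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV),
        "YCbCr": cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb),  # OpenCV stores Y, Cr, Cb
    }

# views = color_space_views(cv2.imread("example.jpg"))
# y_channel, cr, cb = cv2.split(views["YCbCr"])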

3.2 Factors Related to the Classification Stage


In this section, the important factors that influence the classification stage of a classification system are discussed, together with their effect on the three performance terms. These factors are shown in Table 2. From Table 2, and in our view, guaranteeing the diversity of images is the most important factor, especially for a frequently updated (online) database, but it is also the most challenging task.

4 DISCUSSION
In this study, we identified the stages of a classification system and then specified all factors that could affect these stages, as presented in Fig. 1.


In the first stage of a classification system (feature extraction), two types of features are used to distinguish between PG and CG: visual-based and physical-based features. Visual-based features are the visual elements that can be noticed in the image, such as color and texture. Physical-based features are features related to the device (for example, a camera or a computer) that generated the image, such as the gamma correction applied in a camera or the simplified 3-D object models used in a computer. The percentage of the literature using each type of feature is listed in Table 3. We note from Table 3 that visual features are the most used features in this type of classification.

TABLE 2
INFLUENCING FACTORS ON CLASSIFICATION STAGE

Factor: Classifier Type (Learning Algorithm)
Performance terms affected: Accuracy. The learning algorithm has a clear impact on classification accuracy, owing to the different nature of each learning algorithm.
Supporting references: Multiple Decision Tree (DT) is used by [1]; a boosting learning algorithm by [2, 8]; linear Support Vector Machines (SVM) by [3, 4, 6, 9, 11, 15, 16, 18, 20, 21, 23], while non-linear SVM is used by [12, 14, 17]; One-Class SVM (OCSVM) by [10]; Itemized Dichotomizer 3 (ID3) by [24]; a Linear Discriminant Analysis (LDA) classifier by [12].

Factor: Training Set Size and Diversity
Performance terms affected: Time, Accuracy, and Diversity of input images. The size of the training samples and the diversity of their content are very important factors for improving the recognition ability of the classifier. Sample size by itself is not a highly weighted characteristic, because many researchers show that classification accuracy does not depend much on the size of the training set: a large training set without content diversity will not enhance classifier performance and will instead lead to overfitting. The only reason to increase the training set size is to increase the probability of obtaining diverse data.
Supporting references: Diversity of training samples is pursued by [1, 3, 5, 26, 27] to increase speed, accuracy, and diversity of input images; the size of the training set is increased by [1, 2, 7, 12] to increase accuracy and diversity of input images.

TABLE 3
PERCENTAGE OF UTILIZATION OF EACH FEATURE TYPE

Feature type      | Percentage of utilization (%)
Visual            | 78.3
Physical          | 21.7
Visual + Physical | 18

Color space, the number and quality of features, image format, PG and CG characteristics, and the size of the image are the factors related to these features that affect system performance. The effectiveness of each factor is discussed as follows.

Regarding the color space factor, in [10], for example, accuracy was 82.1% with HSV while it was 76.9% with RGB. Accuracy improved from 80.8% in [12] and 83.5% in [6] (which use RGB) and 82.1% in [9] (which uses HSV) to 87.6% in [16] using the YCbCr color space.

Continually increasing the number of features raises the probability of including features with low discriminative power, unless high-quality features are used. For example, in [24] accuracy was 82.3% with a 100-D feature vector reduced by a binary genetic algorithm, while the older 234-D feature set achieved 82.2%. As another example, in [16] 780 features gave 87.6% accuracy, and after Boosting Feature Selection (BFS) reduced the dimensionality to 450, accuracy rose to 92.7%. These results show that classification accuracy does not always depend on the number of features, as shown in [16]; instead it depends on the quality of the features. Therefore, feature quality is an important factor. Furthermore, we cannot ignore the effect of increasing feature dimensionality on classification speed, where the two are inversely related.

The image format also affects classification performance. Because the JPEG format has a high compression ratio, it degrades performance: the lossy compression of JPEG decreases the quality of the image features. In [1], classification of JPEG images reached 89.7% accuracy, while GIF images were classified with 93.3% accuracy on average. In [3], by contrast, using cartoon features made the classification accuracy of JPEG and GIF images approximately the same. From this we note that the type of features influences how much the image format affects performance.

The characteristics of PG and CG images are also very important factors in classification performance. Two questions arise about their effectiveness: first, "which features can we extract based on these characteristics?", and second, "what is the discriminative weight of these characteristics for the classifier?". On the first question, many features have been extracted based on PG or CG characteristics, as shown in Table 1. The second question concerns how PG or CG images respond to the classifier with specific features. In [10], classification accuracy was 73.6% for PG images and 67.3% for CG images. In [12], accuracy was 66.8% for CG and 98.8% for PG images. From these results, and more in [14, 25], we note that the effect of PG and CG characteristics on classification performance depends on the type of features and the classifier used in the classification process.
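The feature-quality point above (fewer, better features can match or beat a much larger set) can be illustrated with a generic selection step. The sketch below is not the BFS or genetic-algorithm selection of [16, 24]; it simply ranks features by mutual information with the labels and compares a small subset against the full set, assuming scikit-learn.

# Illustrative feature selection: keep the top-k features and check
# whether a smaller, higher-quality subset matches the full set.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_feature_subsets(X, y, k=100):
    full = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    selector = SelectKBest(mutual_info_classif, k=min(k, X.shape[1]))
    X_small = selector.fit_transform(X, y)
    small = cross_val_score(SVC(kernel="rbf"), X_small, y, cv=5).mean()
    print(f"all {X.shape[1]} features: {full:.3f} | top {X_small.shape[1]}: {small:.3f}")
    return selector

# selector = compare_feature_subsets(feature_matrix, labels, k=100)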

Image size is an obvious factor influencing the speed of the classification process: as the size of an image increases, the processing time increases as well. Therefore, image resizing or cropping is used to reduce the size. This process also influences accuracy, but it is important, especially for online systems. For example, in [5], when cartoon features were used, classification accuracy was 76.1% for original-size images, 73.1% for resized images, and 75.9% for cropped images. Clearly, image cropping has only a slight effect on accuracy and is better than resizing, because resizing blurs or distorts the edge areas of an image [1, 5]. Therefore, resizing always degrades the accuracy of classification.
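A small sketch of the two preprocessing options just compared, using OpenCV; the side lengths are illustrative, not values from the cited experiments.

# Downscaling the whole image versus cropping a sufficiently large
# central region (which preserves edge detail).
import cv2

def resize_image(image, max_side=512):
    h, w = image.shape[:2]
    scale = max_side / max(h, w)
    return cv2.resize(image, (int(w * scale), int(h * scale)),
                      interpolation=cv2.INTER_AREA)

def center_crop(image, crop=512):
    h, w = image.shape[:2]
    crop_h, crop_w = min(crop, h), min(crop, w)
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return image[top:top + crop_h, left:left + crop_w]

# cropped = center_crop(cv2.imread("example.jpg"))   # keeps edge detail
# resized = resize_image(cv2.imread("example.jpg"))  # faster, but blurs edges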


The color spaces, features, accuracies, and learning algorithms of most PG/CG classification systems are summarized in Table 4.

The factors influencing the second stage of a classification system, the classification stage, are the classifier type and the size and diversity of the training set. The classifier type has a strong effect on classification accuracy. Multiple decision trees (DT), linear and non-linear Support Vector Machines (SVM), boosting learning algorithms, and others are the classifier types used to distinguish between PG and CG images. SVM is used in 80% of the literature, followed by boosting algorithms with 11%, DT with 4%, and 1% for others, as depicted in Table 4. From this we note that boosting algorithms and SVM are the preferred learning algorithms for this type of classification.

The size of the training set plays a minor role in classification accuracy, as stated in [3], which showed that accuracy depends little on training set size. This is because the training set must be diverse in content to give the classifier a chance to learn as much as possible; increasing the size of the training set without suitable diversity will not improve, and may even degrade, classification performance [1, 3, 25], because the classifier may overfit. Therefore, Dorado et al. [27] try to select good training samples using low-level image features rather than collecting samples by random visual observation (a small sketch of this idea is given below).

Finally, it is worth mentioning that we could not quantify the general impact of each factor on the three performance terms, because each article in the literature uses a different dataset for training and testing. For example, a system with features F1 and classifier C1 might be tested on dataset 1 with 97% accuracy, while the same system tested on dataset 2 reaches only 83%; another system with features F2 and classifier C2 might achieve 78% on dataset 1 but 95% on dataset 2. Consequently, we cannot say that features F1 give better results than features F2, nor can we claim that classifier C1 is better than classifier C2; this depends on the combination of features, classifier, and dataset used. Moreover, the choice of classifier can be given less weight, because [18] shows that effective features give good results even with a weak classifier (a simple Fisher linear discriminant classifier), and that better classification results may be achievable with more complex classifiers such as SVM.
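The sketch below illustrates the feature-based selection idea mentioned above; it is not the method of [27], just a simple k-means stand-in that keeps the image closest to each cluster center, assuming scikit-learn.

# Select a diverse training subset from low-level features instead of
# picking images by eye: cluster the feature vectors and keep the image
# nearest each centroid.
import numpy as np
from sklearn.cluster import KMeans

def select_diverse_samples(features, n_samples=200, seed=0):
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=seed).fit(features)
    chosen = []
    for c in range(n_samples):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)

# idx = select_diverse_samples(feature_matrix, n_samples=200)
# diverse_training_set = [image_paths[i] for i in idx]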

TABLE 4
TECHNIQUES, FEATURES, COLOR SPACES AND THEIR ACCURACY FOR PG & CG CLASSIFICATION

Color Space | Technique/Classifier | Features | Result/Accuracy | Authors
RGB | Multiple Decision Tree | Visual | 90-93%, depending on JPEG or GIF images | [1]
RGB | Boosting learning algorithm | Visual | Average accuracy for multiple categories 98% | [2]
RGB | Itemized Dichotomizer 3 (ID3) | Visual | 96.1% | [24]
RGB | One-Class SVM (OCSVM) | Visual | 73.6% for PG and 67.3% for CG | [10]
RGB | Linear discriminant analysis (LDA) and a non-linear SVM | Visual | CG: LDA 54.6%, SVM 66.8%; PG: LDA 99.2%, SVM 98.8% | [12]
RGB | Non-linear SVM | Visual | PG: 68.5%, CG: 95.2% | [14]
RGB | Wavelet-based denoising filter | Physical | -- | [19]
RGB | Non-linear SVM, weighted k-nearest neighbors, and fuzzy k-nearest neighbors | Visual | 99% for PG and 91.5% for CG | [17]
RGB | Support vector machine (SVM) | Physical | 99.8% | [20]
RGB | SVM | Visual | 100% | [18]
RGB | SVM | Physical | 85.9% | [21]
RGB | SVM | Visual | 83% | [4]
RGB | SVM | Physical | 83.5% | [6]
RGB and HSV | Fusion SVM | Visual and Physical | 82% | [6, 23]
RGB and HSV | Boos-Texter (semi-supervised) | Visual and Textual | Average 93.6% | [7]
RGB and HSV | AdaBoost learning algorithm | Visual | 94.5% | [8]
RGB and HSV | SVM with radial basis function | Visual | 82.1% (HSV), 76.9% (RGB) | [9]
HSV | SVM | Visual | 92% | [3]
HSV | SVM | Visual | 82.3% | [14]
YCbCr | SVM + boosting feature selection | Visual | 92.7% | [15, 16]
YCbCr and RGB | Histogram manipulation and hybrid image detection | Visual | 90% | [11]

5 CONCLUSION
In this study, we attempted to collect all available information about the classification of PG and CG images, especially the influencing factors that could affect classification performance, which is measured by speed, accuracy, and diversity of input images.

We conclude that color space, number and quality of features, image format, PG and CG characteristics, and image size are the factors that affect the feature extraction stage, while classifier type and the size and diversity of the training set are the factors that affect the classification stage. This study is part of on-going research to build a robust classification system for detecting cartoon images. According to the aforementioned factors, we select feature quality and training-sample diversity as the candidate factors to be enhanced in order to increase the performance of a PG and CG classification system. Therefore, visual features based on the HSV color space will be used, especially to classify cartoon images.


In addition, using image features, instead of random visual observation, to select diverse training samples for training the classifier is also very important.


REFERENCES
[1] V. Athitsos, M. J. Swain, and C. Frankel, "Distinguishing photographs and graphics on the World Wide Web," Proc. 1997 Workshop on Content-Based Access of Image and Video Libraries (CBAIVL '97), 1997.
[2] R. Lienhart and A. Hartmann, "Classifying images on the web automatically," Journal of Electronic Imaging, vol. 11, 2002, pp. 445-454.
[3] T. I. Ianeva, A. P. de Vries, and H. Rohrig, "Detecting cartoons: a case study in automatic video-genre classification," Proc. 2003 International Conference on Multimedia and Expo, vol. 2, 2003.
[4] T.-T. Ng and S.-F. Chang, "Classifying photographic and photorealistic computer graphic images using natural image statistics," 2004.
[5] T.-T. Ng and S.-F. Chang, "An online system for classifying computer graphics images from natural photographs," SPIE Electronic Imaging, 2006.
[6] T.-T. Ng, S.-F. Chang, J. Hsu, L. Xie, and M.-P. Tsui, "Physics-motivated features for distinguishing photographic images and computer graphics," Proc. 13th Annual ACM International Conference on Multimedia, Singapore, 2005.
[7] F. Wang and M. Kan, "NPIC: Hierarchical synthetic image classification using image search and generic features," Image and Video Retrieval, 2006, pp. 473-482.
[8] C. Yuanhao, L. Zhiwei, L. Mingjing, and M. Wei-Ying, "Automatic classification of photographs and graphics," IEEE International Conference on Multimedia and Expo, 2006, pp. 973-976.
[9] W. Chen, Y. Q. Shi, and G. Xuan, "Identifying computer graphics using HSV color model and statistical moments of characteristic functions," IEEE International Conference on Multimedia and Expo, Beijing, China, 2007, pp. 1123-1126.
[10] X. Huan and H. De-Shuang, "One class support vector machines for distinguishing photographs and graphics," IEEE International Conference on Networking, Sensing and Control (ICNSC 2008), 2008, pp. 602-607.
[11] G. Sankar, V. Zhao, and Y. Yang, "Feature based classification of computer graphics and real images," International Conference on Acoustics, Speech, and Signal Processing, Taipei, Taiwan, 2009, pp. 1513-1516.
[12] S. Lyu and H. Farid, "How realistic is photorealistic?," IEEE Transactions on Signal Processing, vol. 53, 2005, pp. 845-850.
[13] H. Farid and S. Lyu, "Higher-order wavelet statistics and their application to digital forensics," Conference on Computer Vision and Pattern Recognition Workshop, 2003.
[14] W. Chen, Y. Shi, G. Xuan, and W. Su, "Computer graphics identification using genetic algorithm," 19th International Conference on Pattern Recognition (ICPR 2008), Tampa, Florida, USA, 2008, pp. 1-4.
[15] P. Sutthiwan, X. Cai, Y. Shi, and H. Zhang, "Computer graphics classification based on Markov process model and boosting feature selection technique," 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 2009, pp. 2913-2916.
[16] P. Sutthiwan, J. Ye, and Y. Shi, "An enhanced statistical approach to identifying photorealistic images," Digital Watermarking, 2009, pp. 323-335.
[17] J. Wu, M. Kamath, and S. Poehlman, "Detecting differences between photographs and computer generated images," Proc. 24th IASTED International Conference on Signal Processing, Pattern Recognition, and Applications, Anaheim, CA, USA, 2006, pp. 268-273.
[18] Y. Wang and P. Moulin, "On discrimination between photorealistic and photographic images," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 2006.
[19] S. Dehnie, T. Sencar, and N. Memon, "Digital image forensics for identifying computer generated and digital camera images," IEEE International Conference on Image Processing, Atlanta, GA, 2006, pp. 2313-2316.
[20] A. Dirik, S. Bayram, H. Sencar, and N. Memon, "New features to identify computer generated images," IEEE International Conference on Image Processing (ICIP 2007), 2007, pp. 433-436.
[21] N. Khanna, G. Chiu, J. Allebach, and E. Delp, "Forensic techniques for classifying scanner, computer generated and digital camera images," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008, pp. 1653-1656.
[22] B. Mahdian and S. Saic, "A bibliography on blind methods for identifying image forgery," Signal Processing: Image Communication, 2010.
[23] T. Ng, S. Chang, and M. Tsui, "Lessons learned from online classification of photo-realistic computer graphics and photographs," IEEE Workshop on Signal Processing Applications for Public Security and Forensics (SAFE '07), Washington, DC, USA, 2007, pp. 1-6.
[24] C. J. S. Oliveira and A. de Albuquerque Araujo, "Separating images collected in the World Wide Web into two semantic classes: photographs and graphics," 4th EURASIP Conference Focused on Video/Image Processing and Multimedia Communications, vol. 2, 2003, pp. 495-500.
[25] J. Wu, M. Kamath, and S. Poehlman, "Detecting differences between photographs and computer generated images," Proc. 24th IASTED International Conference on Signal Processing, Pattern Recognition, and Applications, Anaheim, CA, USA, 2006, pp. 268-273.
[26] M. A. Windhouwer, A. R. Schmidt, and M. L. Kersten, "Acoi: A system for indexing multimedia objects," International Workshop on Information Integration and Web-based Applications & Services, Yogyakarta, Indonesia, 1999.
[27] A. Dorado, D. Djordjevic, W. Pedrycz, and E. Izquierdo, "Efficient image selection for concept learning," IEE Proceedings - Vision, Image and Signal Processing, 2006, pp. 263-273.

Ahmed Talib received the B.S. and M.S. degrees from the Computer Science Department, College of Science, Baghdad University, Iraq, in 1999 and 2001, respectively. He is a senior lecturer in the IT Department, Technical College of Management, Foundation of Technical Education, Baghdad, Iraq. He is currently a Ph.D. candidate in the School of Computing, College of Arts and Sciences, Universiti Utara Malaysia. His current research interests include content-based image retrieval, pattern recognition, and cartoon image classification. He is a member of the IEEE.

Massudi Mahmuddin obtained his PhD in 2008 in the area of systems engineering from Cardiff University, United Kingdom. He is currently a senior lecturer with the Department of Computer Science, School of Computing, Universiti Utara Malaysia. During his last ten years at the school, his teaching, research, and development interests have been in the technical and social aspects of computing, computational intelligence, and expert systems.

Husniza Husni holds a Bachelor of IT (Hons.) from Universiti Utara Malaysia (UUM), obtained in 2002, and received her Master of Computer Science from The University of Western Australia in 2005. In 2010, she received her Ph.D. from UUM in the area of speech recognition for reading by dyslexic children. She is currently a senior lecturer in the School of Computing, UUM, and head of coordinators for the Computing Professional Enrichment & Development Division (CoPED), UUM. Her research interest is in the advancement of innovation in educational technology, which includes speech and pattern recognition and revolves around artificial intelligence, children, and reading.

Loay E. George received the Ph.D. degree from the College of Science, Baghdad University, in 1997. He is currently the head of the Computer Science Department at the same college. His current research interests are digital image processing applications, pattern recognition, and data compression.

