
MIPRO 2014, 26-30 May 2014, Opatija, Croatia

Determining the impact of demographic features in predicting student success in Croatia
Bruno Trstenjak*, Dženana Đonko**
*Međimurje University of Applied Sciences in Čakovec / Dept. of Computer Engineering, Čakovec, Croatia
**Faculty of Electrical Engineering / Dept. of Computer Science, Sarajevo, Bosnia and Herzegovina
btrstenjak@mev.hr; ddonko@etf.unsa.ba

Abstract - Predicting the success of students is a topic that has long been studied in different scientific fields. Evaluating the importance of the features used in prediction, and their subsequent selection, is an immensely important step in the process of classification and data mining. This paper presents a study of the importance of student demographic features in the prediction process. The study and the performed analyses used demographic data collected from the Information System of Higher Education Institutions (ISVU). The following methods were used to determine the importance of the demographic features: Information Gain (IG), Gain Ratio (GR), Sequential Backward Selection (SBS) and Sequential Forward Selection (SFS). The results show the feature ranking, the importance weight of each feature in the prediction, and a comparison of the results obtained with the different methods. Two classification algorithms were used to evaluate the impact of the ranked features on the quality of prediction: Naive Bayes and Support Vector Machine (SVM). The final results provide guidelines for the development of a new prediction model.

I. INTRODUCTION

Predicting the success of students has long been a topic studied in various scientific communities. Various studies and analyses have been conducted on success in learning, on how to achieve better results, and on the relation between student properties and their impact on success in studying. Predicting the success of students is closely associated with data mining and machine learning, and such research has used various learning techniques and classification algorithms [1],[2]. Demographic data are an integral part of every classification that includes information about people, users, students, etc. For the purposes of the research in this article, we used the Information System of Higher Education Institutions (ISVU), specifically the data for Međimurje University of Applied Sciences in Čakovec, Croatia.

Data mining and machine learning are the scientific fields in which the concept of prediction has been explored, and various algorithms have been developed in order to achieve better prediction accuracy [3],[4]. Prediction here primarily refers to the classification of data, i.e. determining class membership from features. If the prediction is learned from input data together with the achieved results, we are talking about supervised learning [5].
The amount of data, the number of features and the dimensionality of the data vary depending on the purpose of the prediction. The multi-dimensionality of the data and a large number of features increase the complexity of the algorithms used in prediction, extend the prediction time and increase the consumption of computing resources. Because of the large number of features included in the prediction model, and in order to accelerate the prediction process, some models use algorithms that determine the importance of individual features for the final prediction score. To achieve better prediction performance, it is necessary to pre-process the data. Pre-processing is the process of preparing the data before it is sent to the prediction model, transforming it into a form suitable for that model. Data pre-processing methods are divided into four activities: data cleaning, data integration, data transformation and data reduction. Data reduction refers to reducing the number of dimensions and features, i.e. determining a subset of features [6],[7]. Feature subset selection is the process of identifying and removing irrelevant and redundant information.

Generally, selection algorithms are divided into filter algorithms and wrapper algorithms [8],[9]. In its evaluation, a filter algorithm considers only the data itself and does not take the learning algorithm into account: filters rank features based on certain statistical criteria, and the features with the highest ranking values are selected into the subset. In the wrapper approach, feature selection is "wrapped" in a learning algorithm: wrappers use the learning algorithm itself to evaluate the usefulness of the features. Widely used filter algorithms include Information Gain (IG), Document Frequency (DF), the χ² statistic (CHI), Expected Cross Entropy (ECE), Weight of Evidence for Text (WET) and Odds Ratio (ODD). The choice of algorithm largely depends on the properties of the multidimensional data and on the selected machine learning technique [10]. Wrapper algorithms such as Sequential Backward Selection (SBS), Sequential Forward Selection (SFS), Plus-L Minus-R Selection (LRS) and Genetic Algorithms (GA) [11],[12] can be grouped into two categories: greedy algorithms and random/stochastic algorithms [13]. These algorithms typically give better results than filter algorithms, but they are much slower and need to be rerun whenever the data structure or the machine learning algorithm changes [14].
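To make the distinction concrete, the following minimal sketch (Python, with randomly generated stand-in data; none of the variable names come from the paper) ranks features with a statistical filter and then scores one candidate subset the way a wrapper would, using the learning algorithm itself:

```
# Minimal sketch of the filter vs. wrapper distinction (hypothetical data X, y).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 15)).astype(float)  # stand-in for 15 nominal features
y = rng.integers(0, 2, size=200)                      # successful / unsuccessful

# Filter: rank features by a statistical criterion only (here mutual information,
# the quantity behind Information Gain), without consulting any classifier.
scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
ranking = np.argsort(scores)[::-1]
print("filter ranking:", ranking)

# Wrapper: score a candidate subset by training the learning algorithm itself on it.
subset = ranking[:5]
acc = cross_val_score(SVC(), X[:, subset], y, cv=10).mean()
print("wrapper score for top-5 subset:", acc)
```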
This article presents an evaluation of selection algorithms on the same underlying data. During the study, prediction was conducted over different sets of input features; the analysis and comparison of the results show the accuracy achieved in each case.

II. OBJECTIVES OF RESEARCH

The main objective of this study was to determine the importance and influence of certain demographic features of students in the process of prediction: to rank the features in order of importance, to determine the rank threshold at which a feature has an impact on the final prediction score, to compare several feature selection algorithms, to evaluate the results and to determine the algorithm that provides the best results.

To predict student success, two machine learning techniques were used: Naive Bayes and Support Vector Machine (SVM). The prediction was performed with two binary classes: successful and unsuccessful student.
III. DATA SOURCE

The data used in this study are part of the national database of the Information System of Higher Education Institutions (ISVU) [15]. The database contains a large amount of different data about students, organised into several groups: personal demographic information, demographic information about the parents and the type of residence, information on passed exams, data on secondary school achievement, information about the current student status, the social status of the student, and student grants for food. The system uses more than 60 features to describe one student. Prediction of student success is focused on determining success in the first year of study. In addition to the demographic data, which are the basis of this research, the students' secondary school grades, one of the categories in the application for enrolment, were added to the input data.

For training and test data, the records of the students enrolled in the programme in the last three academic years, 2010-2012, were used. As the target data, the ECTS (European Credit Transfer and Accumulation System) credits which the student achieved at the end of the academic year were used. To succeed in the first year of study, a student must achieve at least 42 ECTS credits. These students were classified in the "successful students" class; all other students in the "unsuccessful students" class.
Cleaning and adjustment of the data was performed before feature selection and prediction. After removing irrelevant and redundant data (ID number, address, phone number, name of high school, job title, ...), a collection of 15 demographic features remained. Each student is described with the demographic data shown in Table 1:
TABLE I.
LIST OF STUDENT DEMOGRAPHIC DATA

No   Feature description
1.   Sex - male / female
2.   Place of birth - post code of the city or birth place of the student
3.   Nationality (Croatian) - yes / no
4.   Completed secondary school - the name of the secondary school that the student has completed
5.   Vocation - the vocation which the student completed at the secondary school
6.   Duration of education - the number of years of education in secondary school
7.   Programme (curriculum) - designation of the curriculum in secondary school
8.   Profession - the profession obtained in secondary school
9.   Student rights - designation of student rights and subventions
10.  Marital status - married / unmarried
11.  Status of residence - subtenant, lives with his/her parents, his/her own flat/house
12.  Education level of mother - the system defines six levels of education
13.  Mother's profession - the current workplace
14.  Education level of father - the system defines six levels of education
15.  Father's profession - the current workplace
IV. CLASSIFICATION ALGORITHMS

Two learning techniques were used to evaluate the quality of prediction: Naive Bayes and Support Vector Machine (SVM). For both techniques, a k-fold cross-validation method was used when forming the model: the input data are divided into k subsets of as equal size as possible, and each fold is in turn left out of the learning process and used as the testing set, while the remaining k-1 folds are used as the training set [16],[17].
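A sketch of the described procedure with scikit-learn's KFold on placeholder data (the paper does not specify an implementation; this is one plausible rendering):

```
import numpy as np
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB

X = np.random.rand(100, 15)          # placeholder feature matrix
y = np.random.randint(0, 2, 100)     # placeholder binary target

accuracies = []
# Each of the k folds is held out once as the test set; the remaining
# k-1 folds form the training set.
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    accuracies.append(model.score(X[test_idx], y[test_idx]))
print("mean 10-fold accuracy:", np.mean(accuracies))
```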

A. Support Vector Machine (SVM)

SVM is a machine learning technique used in the field of data mining. The SVM algorithm is based on statistical learning theory developed by Vapnik (1995). It is a supervised learning algorithm which solves linear and nonlinear binary classification problems. The SVM training algorithm aims to find a hyperplane that separates the multidimensional training data into a discrete, predefined number of classes, usually two. The operation of the algorithm is based on finding the hyperplane that gives the largest minimum distance to the training examples: the direction and position of the hyperplane are determined so that the distance between the training data of both classes and the hyperplane is maximised. The hyperplane divides the multidimensional training data into two sets, the positive and the negative class. The lines that pass through the data vectors closest to the hyperplane are called margins. The training data that lie on the margins, called support vectors, are the only ones that define the maximum-margin hyperplane [18],[19]. SVM ensures the maximum distance between the margins along the whole length of the hyperplane.
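A minimal sketch of fitting such a maximum-margin classifier on placeholder data; the choice of a linear kernel here is illustrative:

```
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(200, 15)
y = np.random.randint(0, 2, 200)

# A linear kernel searches for the separating hyperplane directly;
# the fitted support vectors are the training points lying on the margins.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("number of support vectors per class:", clf.n_support_)
```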
B. Naive Bayes (NB)

Naive Bayes is a linear classifier based on Bayes' theorem [20]. The algorithm analyses the relations between features and classes by calculating the conditional probability for the relations between feature values and classes. The features {x1, ..., xk} form a vector x, and vector x belongs to a class c. During training, the algorithm determines the probability of each class, computed by counting how many times the class occurs in the training dataset; this is called the prior probability P(C = c). The algorithm also calculates the probability that an instance x belongs to class c, shown in equation (1) [17]:

P(C = c \mid X = x) = \frac{P(C = c) \prod_i P(X_i = x_i \mid C = c)}{P(X = x)}.   (1)

In the denominator there is a scalar coefficient, calculated as the sum of all the associated probabilities, shown in equation (2):

P(X = x) = \sum_j P(C = c_j) P(X = x \mid C = c_j).   (2)

This algorithm carries out quality prediction even over a small amount of training data and enables the use of multiple output classes, but it is sensitive to the pre-processing of the data and to prediction over nominal data [21].
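A sketch of equations (1) and (2) evaluated directly by counting on a toy nominal dataset; a practical implementation would add smoothing (e.g. scikit-learn's CategoricalNB):

```
import numpy as np

# Toy nominal training data: rows are students, columns are features.
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([1, 0, 1, 0])
x_new = np.array([0, 1])

classes = np.unique(y)
post = []
for c in classes:
    Xc = X[y == c]
    prior = len(Xc) / len(X)                      # P(C=c) by counting
    # Product over features of P(X_i = x_i | C=c), estimated by frequency.
    likelihood = np.prod([(Xc[:, i] == x_new[i]).mean() for i in range(X.shape[1])])
    post.append(prior * likelihood)
# Denominator of Eq. (1) = Eq. (2): the sum of the joint probabilities over classes.
post = np.array(post) / np.sum(post)
print(dict(zip(classes, post)))
```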
V. FEATURE SELECTION

Feature selection is performed in order to optimise the operation of the classifier. Prediction is often performed over multi-dimensional data with a large number of features. In order to achieve simpler and faster classification, some systems reduce the number of input features and determine the features which play a significant role in the classification. The features are ranked, and a feature subset is determined and sent to the prediction model. The following selection methods were used for the purposes of this article:

A. Info Gain (IG)

The Info Gain method calculates the information value of the features with respect to the class to which they belong, thereby determining the entropy. IG uses the following expression for the calculation:

InfoGain(Klasa, Atribut) = H(Klasa) - H(Klasa \mid Atribut),

where H is the entropy, which is defined by equation (3):

Entropy(S) = -\sum_{j=1}^{m} p_j \log_2 p_j,   (3)

where p_j is the probability with which a particular value occurs in the sample space S.
The entropy value ranges from 0 to 1. A value of 0 means that all instances of the variable have the same value; a value of 1 means that every value occurs in an equal number of instances. Entropy shows how the attribute values are distributed and indicates the "purity" of an attribute. High entropy indicates a uniform distribution of attribute values; in contrast, low entropy indicates a distribution concentrated around a single point [22].
B. Gain Ratio (GR)

The Gain Ratio method is a normalized version of Information Gain. Normalization is accomplished by dividing the information gain by the entropy of the attribute itself, which reduces the bias of the information gain obtained by the algorithm. The formula used by GR is given in equation (4) [22]:

GR(Klasa, Atribut) = \frac{H(Klasa) - H(Klasa \mid Atribut)}{H(Atribut)}.   (4)
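A sketch computing the entropy of equation (3), the information gain, and the gain ratio of equation (4) by counting, for a single nominal attribute (toy arrays; the names `klasa` and `atribut` mirror the notation above):

```
import numpy as np

def entropy(values):
    # Eq. (3): H(S) = -sum_j p_j * log2(p_j) over the value frequencies.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(klasa, atribut):
    # IG = H(Klasa) - H(Klasa | Atribut); the conditional entropy is the
    # frequency-weighted entropy of the class within each attribute value.
    h_cond = sum((atribut == v).mean() * entropy(klasa[atribut == v])
                 for v in np.unique(atribut))
    return entropy(klasa) - h_cond

def gain_ratio(klasa, atribut):
    # Eq. (4): IG normalised by the entropy of the attribute itself.
    return info_gain(klasa, atribut) / entropy(atribut)

klasa = np.array([1, 1, 0, 0, 1, 0])     # successful / unsuccessful
atribut = np.array([0, 0, 1, 1, 2, 2])   # a nominal demographic attribute
print(info_gain(klasa, atribut), gain_ratio(klasa, atribut))
```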

C. Sequential forward selection (SFS)

Sequential forward selection is a simple greedy search algorithm which tries to determine the optimal set of features. The algorithm maintains a list in which the optimal features are recorded; at the beginning, the list is empty. The algorithm adds to the list the feature x+ whose objective function J(Yk + x+) gives the best result when combined with the features Yk that have already been selected and are on the list.

Algorithm pseudocode [23]:
1. Start with the empty set; initialise the feature set Y0 = {Ø}, k = 0.
2. Select the next best feature: x+ = argmax[J(Yk + x)], x ∉ Yk.
3. Update the feature set: Yk+1 = Yk + x+.
4. While k < d: set k = k + 1 and go to Step 2.
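A direct transcription of the pseudocode into Python (a sketch: the objective J is chosen here, purely for illustration, as cross-validated SVM accuracy on the candidate subset):

```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def J(X, y, subset):
    # Illustrative objective: mean cross-validated accuracy on the subset.
    return cross_val_score(SVC(), X[:, subset], y, cv=5).mean()

def sfs(X, y, d):
    Y = []                                                  # step 1: Y0 = {}, k = 0
    while len(Y) < d:                                       # step 4: while k < d
        rest = [f for f in range(X.shape[1]) if f not in Y]
        best = max(rest, key=lambda f: J(X, y, Y + [f]))    # step 2: argmax J(Yk + x)
        Y.append(best)                                      # step 3: Yk+1 = Yk + x+
    return Y

X = np.random.rand(100, 15)
y = np.random.randint(0, 2, 100)
print(sfs(X, y, d=5))
```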
D. Sequential backward selection (SBS)

Sequential backward selection is an algorithm similar to SFS, but it works in the opposite direction. Initially, the algorithm's optimal list contains all the features X from the set D. The algorithm then repeatedly removes the single feature whose removal improves (or minimally worsens) the objective function, in order to obtain the best subset of features. Using the objective function J(Yk - x-), it determines the attribute for which the minimum value is returned; this attribute is permanently removed from the list, which increases the value of the target function: J(Yk - x-) > J(Yk).
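The backward variant can be sketched symmetrically: start from the full feature set and repeatedly drop the feature whose removal leaves the objective highest (the illustrative J is the same as in the SFS sketch):

```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def J(X, y, subset):
    # Same illustrative objective as in the SFS sketch.
    return cross_val_score(SVC(), X[:, subset], y, cv=5).mean()

def sbs(X, y, d):
    Y = list(range(X.shape[1]))                 # optimal list starts with all features
    while len(Y) > d:
        # Drop the feature whose removal improves (or minimally worsens) J.
        worst = max(Y, key=lambda f: J(X, y, [g for g in Y if g != f]))
        Y.remove(worst)
    return Y

X = np.random.rand(100, 15)
y = np.random.randint(0, 2, 100)
print(sbs(X, y, d=5))
```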
VI. EXPERIMENTAL RESULTS

A. Features ranking

Approximately 62 demographic features are stored for each student in the database. The original data contained a large number of missing, redundant and irrelevant entries. Before pre-processing the input data and reducing the number of features, feature selection and prediction of student success were first performed over the "raw" data, in order to enable a later evaluation of the results and to establish the individual impact of the features on the achieved prediction accuracy.

For the classification of the raw data and the feature ranking we used the Information Gain method and the SVM classifier. The study produced "interesting" feature-ranking results and very poor prediction accuracy. Table 2 shows the ranking results for the top-ranked features.

TABLE II.
RATING LIST OF FEATURES - RAW DATA

Rating     Ranked attributes
0.94142   1 Student surname
0.86595   2 Date of birth
0.86595   3 Citizenship
0.77155   4 Address
0.71501   5 Average grade
0.71001   6 Mother's name
0.70076   7 Student name
0.41789   8 Place of residence
0.41789   9 City
0.40975  10 Father's name
0.30528  11 Student rights
...

In the classification, the input data were divided using 10-fold cross-validation. The following results were obtained:

Correctly Classified Instances     56.6038 %
Incorrectly Classified Instances   43.3962 %
Kappa statistic                     0.0225
Mean absolute error                 0.434
Root mean squared error             0.6588
Relative absolute error            93.8619 %
Root relative squared error       137.1311 %

The results show a low level of prediction accuracy, with only 56.6% correctly classified instances. The analysis of the input data showed a significant number of incomplete records, which certainly had an impact on the prediction accuracy.
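The summary statistics above are the ones commonly reported by data mining toolkits; a sketch of recomputing the main ones from out-of-fold predictions on placeholder data:

```
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, mean_absolute_error
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X = np.random.rand(150, 15)
y = np.random.randint(0, 2, 150)

# Out-of-fold predictions from 10-fold cross-validation.
pred = cross_val_predict(SVC(), X, y, cv=10)
print("correctly classified:", accuracy_score(y, pred))
print("kappa statistic:     ", cohen_kappa_score(y, pred))
print("mean absolute error: ", mean_absolute_error(y, pred))
```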
After this classification, the original data were pre-processed, and irrelevant data and redundant features were removed. The analysis yielded the 15 demographic features that were used in the further process of prediction. Ranking with all four methods was then conducted on the selected demographic features. Table 3 shows the results of the ranking depending on the chosen selection method.

TABLE III.
RATING LIST OF SELECTED FEATURES

Info Gain (IG) - ranked features:
0.74982   1 Average grade
0.29000   2 Programme (curriculum)
0.15111   3 Completed secondary school
0.14609   4 Secondary vocation
0.1079    5 Student rights
0.08909   6 Education level of the father
0.08285   7 Place of birth
0.08262   8 Education level of the mother
0.05238   9 Duration of education
0.04683  10 Mother's profession
0.03954  11 Father's profession
0.03700  12 Status of residence
0.02160  13 Marital status
0.01381  14 Nationality
0.00497  15 Sex

Gain Ratio (GR) - ranked features:
0.16713  13 Marital status
0.11926   1 Average grade
0.08177  12 Status of residence
0.07612   2 Programme (curriculum)
0.06896   5 Student rights
0.05782   6 Education level of the father
0.05516   3 Completed secondary school
0.04679   8 Education level of the mother
0.04674   4 Profession
0.04071   9 Duration of education
0.03897   7 Place of birth
0.01885  14 Nationality
0.0139   10 Mother's profession
0.01272  11 Father's profession
0.00561  15 Sex

SFS/SBS - ranked features:
0.206     1 Average grade
0.179     5 Student rights
0.177     2 Programme (curriculum)
0.176    12 Status of residence
0.172    13 Marital status
0.168     6 Education level of the father
0.166     3 Completed secondary school
0.163     9 Duration of education
0.158     8 Education level of the mother
0.154     7 Place of birth
0.150     4 Profession
0.145    14 Nationality
0.138    10 Mother's profession
0.132    11 Father's profession
0.127    15 Sex

The results are somewhat different from what we expected at the beginning of the study. All the ranking methods used have highlighted the "Average grade" feature as the most important, ahead of any demographic feature. This is somewhat logical, because various demographic factors such as the family situation, the type of curriculum, etc. have an impact on the final secondary school grades. Secondary school results are a strong indicator of the success or failure of newly enrolled students.

From the ranked demographic attributes, the analysis highlighted the curriculum that the student completed in secondary school as one of the more important for prediction. According to the ranking results, the curriculum that the student followed in secondary school significantly influences success during the study. The second demographic feature that has an impact on the prediction is "Student rights". This feature belongs to a group of motivating features, reflecting the wish to keep the acquired rights that successful students have (cheaper food, transport, students' service, etc.).

The selection methods have also underlined the parents' education as an important element in predicting student success. According to the character of the data, the father's education plays a more significant role than the mother's education. All methods have determined that the gender of students is ranked lowest.
This information leads to the conclusion that this attribute should have the least impact on the data classification. This could be due to the relatively small number of female students enrolled in the study of computer engineering.

B. Features impact

By specifying the individual impact of each attribute, the feature selection process can be improved and a more accurate classification provided. After implementing the feature selection, classifications were performed with both techniques, NB and SVM. Table 4 shows the results of the classification.
TABLE IV.
CLASSIFICATION RESULTS

Classification summary     Naive Bayes   SVM
Correctly Classified       61.607 %      70.53 %
Incorrectly Classified     38.39 %       29.46 %
Kappa statistic             0.2287        0.4027
Mean absolute error         0.4128        0.2946
Root mean squared error     0.5268        0.5428

The results show that the SVM classifier achieved significantly better prediction than the NB classifier. SVM also achieved better results with all demographic attributes involved. After measuring the classification, an additional analysis of the feature impact on classification accuracy was performed: in testing the prediction, individual target features were blocked in turn, and the accuracy with the SVM technique was then measured. The obtained results are shown in Table 5.

TABLE V.
FEATURE IMPACT ON THE RESULT OF CLASSIFICATION

Feature                        Accuracy   Impact
Average grade                  66.96      -5%
Programme (curriculum)         64.28      -9%
Completed secondary school     62.50      -11%
Secondary vocation             69.64      -1%
Student rights                 55.34      -22%
Education level of father      69.64      -1%
Place of birth                 73.21      +4%
Education level of mother      73.21      +4%
Duration of education          65.17      -8%
Mother's profession            71.42      +1%
Father's profession            69.64      -1%
Status of residence            71.42      +1%
Marital status                 70.73      0%
Nationality                    69.64      -1%
Sex                            68.75      -3%

In the table, the column "Impact" shows the influence of each feature on the prediction accuracy, expressed as a percentage. A negative value indicates the percentage of accuracy reduction when that specific feature is not taken into consideration during the classification. A positive value has the opposite meaning: the assumption is that separating that particular feature from the set of input features would increase the accuracy of the prediction.
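A sketch of this leave-one-feature-out measurement on placeholder data: each feature is blocked in turn and the change in cross-validated SVM accuracy relative to the full feature set is reported:

```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(150, 15)
y = np.random.randint(0, 2, 150)

baseline = cross_val_score(SVC(), X, y, cv=10).mean()
for f in range(X.shape[1]):
    # "Block" feature f by removing its column and re-measure accuracy.
    reduced = np.delete(X, f, axis=1)
    acc = cross_val_score(SVC(), reduced, y, cv=10).mean()
    # Negative impact: accuracy drops without the feature; positive: it rises.
    print(f"feature {f}: impact {acc - baseline:+.3f}")
```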
This classification and the analysis of its results produced interesting findings. The biggest impact was calculated for the "Student rights" feature. As mentioned in the previous section, this feature indicates the student's motive to keep student status and obtain certain benefits; very often students try to maintain their status due to a difficult economic situation or difficulties in finding employment. The second-ranked feature by impact on prediction is the "Completed secondary school" feature. Years of experience justify this result: this feature is most significant for study programmes in the area of technical sciences. For the purpose of this research, the students come from three types of secondary schools: technical school, grammar school and school of economics. A comparison of their achieved ECTS credits shows that significantly better results were achieved by the students from technical schools. Table 5 indicates four features with a positive impact factor, which means that they make no significant contribution to the accuracy of prediction. To evaluate this assumption, a classification was carried out with the indicated features removed from the input set. After extracting these attributes, classification with both techniques achieved better prediction accuracy, by approximately 5%. Table 6 shows the evaluation results:

TABLE VI.
THE CLASSIFICATION WITH A REDUCED SET OF FEATURES

Naive Bayes
Correctly Classified         65.1786 %
Incorrectly Classified       34.8214 %
Kappa statistic               0.3004
Mean absolute error           0.3876
Root mean squared error       0.5013
Classification improvement    ca. 6%

Support Vector Machine
Correctly Classified         73.2143 %
Incorrectly Classified       26.7857 %
Kappa statistic               0.4582
Mean absolute error           0.2679
Root mean squared error       0.5175
Classification improvement    ca. 4%
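The final evaluation step can be sketched the same way: drop the four positive-impact features and re-score both classifiers on the reduced set (the column indices below are purely illustrative):

```
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X = np.random.rand(150, 15)
y = np.random.randint(0, 2, 150)

# Hypothetical column indices of the four positive-impact features
# (Place of birth, Education level of mother, Mother's profession, Status of residence).
to_drop = [1, 11, 12, 10]
X_reduced = np.delete(X, to_drop, axis=1)

for name, model in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
    acc = cross_val_score(model, X_reduced, y, cv=10).mean()
    print(name, "accuracy on reduced set:", acc)
```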

VII. CONCLUSION

The evaluation of the results has definitely proved that quality pre-processing is a requirement for obtaining good classification results, in this case for the prediction of student success. In the study, four relatively commonly used feature selection and ranking methods were used: Information Gain (IG), Gain Ratio (GR), Sequential Backward Selection (SBS) and Sequential Forward Selection (SFS). None of the methods produced identical feature rankings. It can be concluded that, when selecting attributes and determining their importance for prediction, it is necessary to include additional verification and empirical methods.

The evaluation of the rankings has shown three features that have a significant impact on success in the study: Average grade, Completed secondary school and Student rights. The analysis of the results suggests that the completed secondary school and its curriculum are very important for success in the first year of study. Students with poor prior professional knowledge adapt with more difficulty to the requirements of a technical study programme, which to some extent justifies the faculties that emphasise that future students should pass certain specific courses. In addition to the list of courses, faculties also attach great importance to final secondary school grades.

Two other demographic features have a different meaning. Students who come from technical schools can adapt more easily to the demands of the study because they have the necessary background knowledge required for technical studies; in this way some students reduce the impact of lower secondary school grades. The other demographic feature acts as a motivational factor in education, as previously explained. We conducted the feature ranking with four different methods, and each method estimated a slightly different order of feature importance. In estimating the impact of demographic features on prediction accuracy, the analysis pointed out that the feature "Student rights" had the greatest impact. The results of the analysis have also shown that "Place of birth", "Education level of mother", "Mother's profession" and "Status of residence" are four features which do not have a significant impact on the classification. After determining the impact of the features, these four features with the least impact were removed from the input set; with the reduced number of features, the new classification achieved about 5% better results. The obtained results indicate the need to involve additional control in the feature selection process. One possible solution is to introduce a hybrid prediction model, a model that would use additional algorithms to increase the accuracy of prediction. This assumption gives scope for further research in the field of classification and prediction of student success.

REFERENCES

[1] C. Romero, S. Ventura, P. G. Espejo, C. Hervás, "Data mining algorithms to classify students", in 1st Int. Conf. on Educational Data Mining, 2008, pp. 187-191.
[2] R. R. Kabra, R. S. Bichkar, "Performance prediction of engineering students using decision trees", in International Journal of Computer Applications, vol. 36, no. 11, 2011.
[3] D. L. Olson, D. Delen, "Advanced data mining techniques", Springer, 2008.
[4] S. Pandya, P. V. Virparia, "Comparing the applications of various algorithms of classification technique of data mining in an Indian", in IJARCSSE, vol. 3, 2013.
[5] R. Dash, R. L. Paramguru, R. Dash, "Comparative analysis of supervised and unsupervised discretization techniques", in International Journal of Advances in Science and Technology, vol. 2, no. 3, 2011.
[6] T. Karunaratne, H. Boström, U. Norinder, "Pre-processing structured data for standard machine learning algorithms by supervised graph propositionalization - a case study with medicinal chemistry datasets", in Machine Learning and Applications (ICMLA), 2010.
[7] W. Jianping, "Research on data preprocessing in supermarket customers data mining", Information Engineering and Computer Science, 2010, pp. 1-4.
[8] P. Bermejo, L. de la Ossa, J. A. Gámez, J. M. Puerta, "Fast wrapper feature subset selection in high-dimensional datasets by means of filter re-ranking", Knowledge-Based Systems, vol. 25, 2012, pp. 35-44.
[9] S. Beniwal, J. Arora, "Classification and feature selection techniques in data mining", in International Journal of Engineering Research & Technology, 2012.
[10] Y. Xu, "A comparative study on feature selection in Chinese spam filtering", in Application of Information and Communication Technologies (AICT), 2012, pp. 1-6.
[11] S. F. Pratama, A. K. Muda, "Feature selection methods for writer identification: a comparative study", in International Conference on Computer and Computational Intelligence, 2010.
[12] X. Liu, L. Shang, "A fast wrapper feature subset selection method based on binary particle swarm optimization", in IEEE Congress on Evolutionary Computation, 2013, pp. 3347-3353.
[13] I. A. Gheyas, L. Smith, "Feature subset selection in large dimensionality domains", in Pattern Recognition, 2009.
[14] H.-Y. Wang, "A comparative study of the feature selection influence on diagnosis in traditional Chinese medicine", in Proceedings of the Ninth International Conference on Machine Learning and Cybernetics, Qingdao, 2010.
[15] Informacijski sustav visokih učilišta (ISVU), Ministarstvo znanosti i tehnologije Republike Hrvatske, 2003.
[16] J.-H. Kim, "Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap", Department of Statistics and Actuarial Science, Soongsil University, 2009.
[17] N. Williams, S. Zander, G. Armitage, "Evaluating machine learning algorithms for automated network application identification", CAIA Technical Report, 2006.
[18] G. Mountrakis, J. Im, C. Ogole, "Support vector machines in remote sensing: a review", in Journal of Photogrammetry and Remote Sensing, 2010.
[19] P.-C. Chang, C.-H. Huang, C.-Y. Tsai, "A hybrid system by the integration of case-based reasoning with support vector machine for prediction of financial crisis", in ICIC International, vol. 9, no. 6, 2013.
[20] G. H. John, P. Langley, "Estimating continuous distributions in Bayesian classifiers", in 11th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Mateo, 1995, pp. 338-345.
[21] S. B. Kotsiantis, "Supervised machine learning: a review of classification techniques", Informatica, vol. 31, 2007, pp. 249-268.
[22] [...], C. Acar, "Comparison of feature selection algorithms for medical data", in IEEE Innovations in Intelligent Systems and Applications, 2012.
[23] L. Ladha, T. Deepa, "Feature selection methods and algorithms", in International Journal on Computer Science and Engineering, vol. 3, no. 5, 2011.
[24] R. Gutierrez-Osuna, "Pattern analysis", CSCE.

