Volume: 4 Issue: 3
ISSN: 2321-8169
400 - 403
_______________________________________________________________________________________
Abstract—Subjective tests are rarely used in the assessment of online examinations. Objective-test examinations are already widely available online, but subjective-test examinations, which are considered the best way to assess understanding and knowledge, are still needed. This paper presents a survey of effective techniques for subjective test assessment. Here, the answers are unstructured data that must be evaluated; the evaluation is based on the semantic similarity between the model answer and the user answer. Different techniques are compared, and a new approach is proposed to evaluate subjective text answers.
Keywords: Subjective test assessment; Online examinations; Semantic Similarity; Evaluation.
__________________________________________________*****_________________________________________________
I. INTRODUCTION

II. LITERATURE SURVEY
Keywords are hard to define, as they are widely used within information retrieval (IR).
Example: Clustering is the process of grouping the data into classes or clusters.
The keywords of the above example can be: clustering, process, grouping, data, classes, clusters. Keyword extraction also improves the quality of the documents mentioned in the text: the words occurring in a document are analyzed to identify the most appropriate ones. The techniques below survey keyword extraction from large text documents.
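As a minimal illustration of the idea, filtering common function words out of the example sentence above leaves roughly the keywords listed. The stop-word list here is a small hand-picked sample, not a standard one:

```python
# Minimal keyword extraction by stop-word filtering.
# STOPWORDS is an illustrative hand-picked sample, not a standard list.
STOPWORDS = {"is", "the", "of", "into", "or", "a", "an", "and"}

def extract_keywords(sentence):
    words = [w.strip(".,").lower() for w in sentence.split()]
    return [w for w in words if w not in STOPWORDS]

sentence = "Clustering is the process of grouping the data into classes or clusters."
print(extract_keywords(sentence))
# -> ['clustering', 'process', 'grouping', 'data', 'classes', 'clusters']
```

Real systems replace the fixed stop-word list with statistical weighting such as TF-IDF, discussed next.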
a. Term Frequency–Inverse Document Frequency (TF-IDF)
TF-IDF is a weighting factor used in information retrieval and text mining. It evaluates how important a word is within a large text corpus. Term Frequency (TF) is the number of times the word appears in a document, and Inverse Document Frequency (IDF) is a weight measuring the importance of the term across the document collection. The weighting multiplies TF by IDF, as TF*IDF, to filter out common terms. It can be calculated as

tfidf(t, d, D) = tf(t, d) * idf(t, D)        (1)
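Equation (1) can be sketched directly; the toy corpus below is an assumption for illustration, and `idf` is taken as the plain logarithm log(N/df) (variants with smoothing also exist):

```python
import math

def tf(term, doc):
    # Raw count of the term in the document (a list of tokens).
    return doc.count(term)

def idf(term, docs):
    # log(N / df): terms appearing in fewer documents get higher weight.
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df)

def tfidf(term, doc, docs):
    # Equation (1): tfidf(t, d, D) = tf(t, d) * idf(t, D)
    return tf(term, doc) * idf(term, docs)

docs = [
    "clustering groups data into clusters".split(),
    "classification assigns labels to data".split(),
    "clustering and classification are mining tasks".split(),
]
# "data" occurs in 2 of 3 documents, "clusters" in only 1,
# so "clusters" receives the higher weight in docs[0].
print(round(tfidf("clusters", docs[0], docs), 3))  # -> 1.099
print(round(tfidf("data", docs[0], docs), 3))      # -> 0.405
```

The common term is down-weighted, which is exactly the filtering effect described above.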
Menaka S and Radha N [1] have classified text using keyword extraction. The keywords are extracted using TF-IDF and WordNet: the TF-IDF algorithm selects the words, and WordNet, a lexical database of English, finds the similarity among them. In their work, the words with the highest similarity are selected as keywords. Sungjick Lee and Han-joon Kim [2] proposed a conventional TF-IDF model for keyword extraction, involving cross-domain filtering and table term frequency (TTF). Ari Aulia Hakim, Alva Erwin, Kho I Eng, Maulahikmah Galinium, and Wahyu Muliady [3] work on the TF-IDF algorithm to create a classifier that can classify online articles. Stephen Robertson [4] explains the underlying concepts of IDF.
b. Conditional Random Fields (CRF)
CRF is a probabilistic framework for segmenting and labeling structured data. The basic idea of a conditional random field is to model the distribution over label sequences. Jasmeen Kaur and Vishal Gupta [5] present the CRF model as a suitable and efficient model for keyword extraction. Feng Yu, Hong-wei Xuan and De-quan Zheng [6] work on the CRF model to extract key phrases and use an SVM model to build the classifier; their experimental results show the method performs better than other machine learning approaches. Chengzhi Zhang, Huilin Wang, Yao Liu, Dan Wu, Yi Liao and Bo Wang [7] have proposed and implemented a CRF model to extract keywords.
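A CRF labels every token in a sequence (e.g. keyword vs. non-keyword) from per-token features. The sketch below shows only the feature-extraction step such a model might consume; the feature names are illustrative assumptions, and training the actual CRF would require a CRF library and labeled data:

```python
# Sketch of per-token features for CRF-based keyword extraction.
# A CRF would label each token (e.g. KEY / NOT-KEY) from features like
# these plus transition features between neighboring labels.
def token_features(tokens, i):
    word = tokens[i]
    return {
        "word": word.lower(),
        "is_capitalized": word[0].isupper(),
        "length": len(word),
        "prev_word": tokens[i - 1].lower() if i > 0 else "<START>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "<END>",
    }

tokens = "Conditional Random Fields segment and label sequence data".split()
feats = [token_features(tokens, i) for i in range(len(tokens))]
print(feats[0]["prev_word"], feats[0]["is_capitalized"])  # -> <START> True
```

Because the CRF conditions on the whole sequence, neighboring features like `prev_word` and `next_word` let the label of one token influence its neighbors.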
c. Query-Focused Keyword Extraction
Keywords are correlated with the query sentences, and query-related features are calculated to obtain the important words. The query is calculated over words w1 and w2 within a window of length k words. The method operates on the query with sentence pruning, followed by query-related features, after which the keywords are extracted. Liang Ma, Tingting He, Fang Li, Zhuomin Gui and Jinguang Chen [8] proposed a strategy that summarizes sentences using query-focused multi-document summarization and extracts the keywords. Massih R.
semantic relations between words and documents are captured using LSA. The categories of word attributes are shown by the search engine using the latent semantic method, while the useless ones are removed.
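At its core, LSA takes a truncated SVD of a term-document matrix and compares documents in the resulting low-dimensional latent space. The matrix below is an assumed toy example, not data from the paper:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# Counts are assumed, for illustration only.
X = np.array([
    [2.0, 0.0, 1.0],   # "cluster"
    [1.0, 1.0, 1.0],   # "data"
    [0.0, 2.0, 1.0],   # "label"
    [0.0, 0.0, 2.0],   # "ontology"
])

# LSA: keep only the k strongest latent dimensions of the SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation of X

# Documents are then compared in the k-dimensional latent space.
doc_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T
print(doc_vecs.shape)  # -> (3, 2): three documents, two latent dimensions
```

Weakly supported dimensions are discarded in `X_k`, which is the "useless attributes are removed" effect described above.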
b. Ontology Method
Ontologies are data models of entities and their interactions. An ontology is a set of concepts, objects, relations and other entities, together with the relationships among them. An ontology can be defined in the form:

O = [C, P, RC, RP, A, I]        (2)

III. PROPOSED WORK
[Figure: the model answer and the student answer are passed to a keyword extraction algorithm; the extracted keywords are compared to produce the output (score).]
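A minimal sketch of this flow (model answer and student answer → keyword extraction → score), assuming a simple stop-word filter for extraction and a coverage-style overlap score; both choices are illustrative, not the paper's final design:

```python
# Proposed pipeline sketch: extract keywords from both answers,
# then score the student by overlap with the model-answer keywords.
STOPWORDS = {"is", "the", "of", "into", "or", "a", "an", "and", "to"}

def keywords(text):
    # Stop-word filtering stands in for the keyword extraction algorithm.
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def score(model_answer, student_answer):
    # Fraction of model-answer keywords covered by the student answer.
    m, s = keywords(model_answer), keywords(student_answer)
    return len(m & s) / len(m) if m else 0.0

model = "Clustering is the process of grouping data into classes or clusters."
student = "Clustering groups data into clusters."
print(round(score(model, student), 2))  # -> 0.5
```

A full system would replace the set overlap with the semantic-similarity measures surveyed above, so that synonyms in the student answer also earn credit.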
IV. CONCLUSION
IJRITCC | March 2016, Available @ http://www.ijritcc.org