Text Classification
Today:
- K Nearest Neighbors
- Decision boundaries
- Vector space classification using centroids
- Decision Trees (briefly)
Naive Bayes recap: classify based on the prior weight of the class and conditional parameters for what each word says:

    c_{NB} = \arg\max_{c_j \in C} \Big[ \log P(c_j) + \sum_i \log P(x_i \mid c_j) \Big]

The conditional parameters are estimated from smoothed term counts:

    \hat{P}(x_k \mid c_j) = \frac{T_{c_j x_k} + \alpha}{\sum_{x_i \in V} (T_{c_j x_i} + \alpha)}

where T_{c_j x_k} is the number of occurrences of term x_k in training documents of class c_j, V is the vocabulary, and \alpha is the smoothing constant (\alpha = 1 gives add-one/Laplace smoothing).
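To make this concrete, here is a minimal sketch of multinomial Naive Bayes training and classification with add-one smoothing (alpha = 1); the function and variable names are our own, not from the lecture:

    import math
    from collections import Counter, defaultdict

    def train_nb(docs):
        """docs: list of (tokens, label). Returns log-priors and smoothed log-likelihoods."""
        class_counts = Counter(label for _, label in docs)
        term_counts = defaultdict(Counter)   # T_{c_j x_k}: per-class term counts
        vocab = set()
        for tokens, label in docs:
            term_counts[label].update(tokens)
            vocab.update(tokens)
        log_prior = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
        log_cond = {}
        for c in class_counts:
            denom = sum(term_counts[c].values()) + len(vocab)  # add-one smoothing
            log_cond[c] = {t: math.log((term_counts[c][t] + 1) / denom) for t in vocab}
            log_cond[c]["<unk>"] = math.log(1 / denom)         # unseen-term fallback
        return log_prior, log_cond

    def classify_nb(tokens, log_prior, log_cond):
        """Return argmax_c [ log P(c) + sum_i log P(x_i | c) ]."""
        def score(c):
            return log_prior[c] + sum(
                log_cond[c].get(t, log_cond[c]["<unk>"]) for t in tokens)
        return max(log_prior, key=score)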
Documents as vectors:
- Each document is a vector, with one component for each term (= word).
- Normally the vector is normalized to unit length.
- This is a high-dimensional vector space:
  - Terms are axes: 10,000+ dimensions, or even 100,000+.
  - Docs are vectors in this space.
- As before, the training set is a set of documents, each labeled with its class (e.g., topic).
- In vector space classification, this set corresponds to a labeled set of points (or, equivalently, vectors) in the vector space.
- Premise 1: documents in the same class form a contiguous region of space.
- Premise 2: documents from different classes don't overlap (much).
k Nearest Neighbor classification of a test document d:
- Define the k-neighborhood N as the k nearest neighbors of d.
- Count the number of documents k_c in N that belong to class c.
- Estimate P(c|d) as k_c / k.
- Choose as class argmax_c P(c|d) [ = majority class].
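A minimal sketch of this procedure, assuming documents are represented as unit-length tf-idf vectors so that a dot product equals cosine similarity (names are ours):

    import numpy as np

    def knn_classify(x, X_train, y_train, k=3):
        """Majority vote among the k training docs most similar to x.
        x and the rows of X_train are assumed unit-length numpy vectors;
        y_train is a numpy array of class labels."""
        sims = X_train @ x                  # cosine similarity to every training doc
        top_k = np.argsort(sims)[-k:]       # indices of the k nearest neighbors
        labels, counts = np.unique(y_train[top_k], return_counts=True)
        return labels[np.argmax(counts)]    # argmax_c P(c|d), with P(c|d) = k_c / k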
Nearest-neighbor learning:
- Does not explicitly compute a generalization, category prototypes, or class boundaries.
- Testing instance x:
  - Compute the similarity between x and all examples in D.
  - Assign x the category of the most similar example in D.
- Also called: case-based learning, memory-based learning, lazy learning.
kNN: Value of k
Using only the closest example (k = 1) to determine the category is subject to errors due to:
- a single atypical example;
- noise in the category label of a single training example.
A more robust alternative is to find the k (> 1) most-similar examples and return the majority category of these k examples. The value of k is typically odd to avoid ties; 3 and 5 are most common.
kNN gives locally defined decision boundaries between classes: far-away points do not influence each classification decision (unlike in Naive Bayes, Rocchio, etc.).
Similarity Metrics
Nearest neighbor methods depend on a similarity (or distance) metric:
- Simplest for a continuous m-dimensional instance space: Euclidean distance.
- Simplest for an m-dimensional binary instance space: Hamming distance (number of feature values that differ).
- For text, cosine similarity of tf-idf weighted vectors is typically most effective.
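For illustration, minimal implementations of the three metrics (names ours):

    import numpy as np

    def euclidean(a, b):
        """Euclidean distance for continuous instance spaces."""
        return float(np.linalg.norm(a - b))

    def hamming(a, b):
        """Number of differing feature values in binary vectors."""
        return int(np.sum(a != b))

    def cosine(a, b):
        """Cosine similarity, the usual choice for tf-idf document vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

Note that for unit-normalized vectors, ranking neighbors by Euclidean distance and ranking by cosine similarity produce the same ordering, since ||a - b||^2 = 2 - 2 cos(a, b) in that case.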
- Naively finding nearest neighbors requires a linear search through all |D| documents in the collection.
- But determining the k nearest neighbors is the same as determining the k best retrievals, using the test document as a query against a database of training documents.
- Heuristic: use standard vector-space inverted index methods to find the k nearest neighbors.
- Testing time: O(B |V_t|), where |V_t| is the number of distinct terms in the test document and B is the average number of training documents in which a test-document word appears. Typically B << |D|.
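A sketch of the inverted-index heuristic (structure and names ours): scores are accumulated only over documents that share at least one term with the test document, which is where the O(B |V_t|) bound comes from:

    from collections import defaultdict

    def build_index(doc_vectors):
        """doc_vectors: list of {term: weight} dicts.
        Returns an inverted index: term -> [(doc_id, weight)]."""
        index = defaultdict(list)
        for doc_id, vec in enumerate(doc_vectors):
            for term, w in vec.items():
                index[term].append((doc_id, w))
        return index

    def top_k_neighbors(query_vec, index, k=3):
        """Dot-product scores accumulated over postings of query terms only:
        O(B * |V_t|) work instead of a linear scan over all |D| documents."""
        scores = defaultdict(float)
        for term, qw in query_vec.items():
            for doc_id, w in index.get(term, []):
                scores[doc_id] += qw * w
        return sorted(scores, key=scores.get, reverse=True)[:k]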
kNN: Discussion
- No training necessary (though in practice some data editing may be necessary).
- Don't need to train n classifiers for n classes.
- Scores can be hard to convert to probabilities.
- May be more expensive at test time.
kNN vs. Naive Bayes: a bias/variance tradeoff.
- kNN: high variance, low bias (infinite memory).
- Naive Bayes: high bias, low variance (the decision surface has to be linear; hyperplane, see later).
- Analogy: asking a botanist "Is this object a tree?"
  - Too much capacity/variance, low bias: a botanist who memorizes will always say "no" to a new object (e.g., different # of leaves).
  - Not enough capacity/variance, high bias: a lazy botanist says "yes" if the object is green.
Consider deciding between two classes, say "government" and "non-government" [one-versus-rest classification]:
- How do we define (and find) the separating surface?
- How do we test which region a test doc is in?
Separation by Hyperplanes
A strong high-bias assumption is linear separability:
- in 2 dimensions, classes can be separated by a line;
- in higher dimensions, hyperplanes are needed.
A separating hyperplane can be found by linear programming (or a solution can be fitted iteratively via the perceptron algorithm):
Find a, b, c such that ax + by >= c for red points and ax + by < c for green points.
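As a sketch of the perceptron fitting mentioned above (a minimal implementation of the classical algorithm; variable names ours): the weights are nudged whenever a training point falls on the wrong side, and for linearly separable data, such as the AND function below, this converges to some separating line:

    import numpy as np

    def perceptron(X, y, epochs=100):
        """X: (n, d) points; y: labels in {-1, +1}.
        Learns w, b such that sign(w @ x + b) separates the classes (if separable)."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                if yi * (w @ xi + b) <= 0:   # misclassified (or on the boundary)
                    w += yi * xi             # nudge hyperplane toward correct side
                    b += yi
                    errors += 1
            if errors == 0:                  # converged: all points separated
                break
        return w, b

    # Example: learn AND (linearly separable; cf. the truth table below)
    X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
    y = np.array([1, -1, -1, -1])
    w, b = perceptron(X, y)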
AND

x | y | c
--+---+---
1 | 1 | 1
1 | 0 | 0
0 | 1 | 0
0 | 0 | 0

Linearly separable: the line x + y - 1.5 = 0 separates the two classes.
OR

x | y | c
--+---+---
1 | 1 | 1
1 | 0 | 1
0 | 1 | 1
0 | 0 | 0

Linearly separable: the line x + y - 0.5 = 0 separates the two classes.
The planar decision surface in data space for the simple linear discriminant function:

    \mathbf{w}^\top \mathbf{x} + w_0 = 0

(documents with \mathbf{w}^\top \mathbf{x} + w_0 \ge 0 fall on one side of the surface).
XOR

x | y | c
--+---+---
1 | 1 | 0
1 | 0 | 1
0 | 1 | 1
0 | 0 | 0

Linearly inseparable: no single line can separate the two classes.
Which Hyperplane?

- Lots of possible solutions for a, b, c.
- Some methods find a separating hyperplane, but not the optimal one [according to some criterion of expected goodness], e.g., the perceptron.
- Which points should influence optimality?
  - All points: linear regression, Naive Bayes.
  - Only "difficult" points close to the decision boundary: support vector machines.
Linear Classifiers
Many common text classifiers are linear classifiers:
- Naive Bayes
- Perceptron
- Rocchio
- Logistic regression
- Support vector machines (with linear kernel)
- Linear regression
- (Simple) perceptron neural networks
- For separable problems, there are infinitely many separating hyperplanes. Which one do you choose?
- What to do for non-separable problems?
- Different training methods pick different hyperplanes.
- Classifiers more powerful than linear often don't perform better. Why?
Naive Bayes is itself a linear classifier (two-class case): decide class C if the odds are greater than 1, i.e., if the log odds are greater than 0. The decision boundary is therefore the hyperplane:

    \alpha + \sum_{w \in V} \beta_w n_w = 0

where \alpha = \log \frac{P(C)}{P(\bar{C})}, \beta_w = \log \frac{P(w \mid C)}{P(w \mid \bar{C})}, and n_w = number of occurrences of w in d.
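A small sketch of this view (names ours): the Naive Bayes decision reduces to checking the sign of a linear function of the term counts n_w:

    import math
    from collections import Counter

    def nb_linear_score(tokens, p_c, p_w_given_c, p_w_given_not_c):
        """Log odds of class C for a document; positive score => decide C.
        alpha is the bias of the hyperplane, beta_w the per-term weight.
        Terms absent from the model are simply skipped in this sketch."""
        alpha = math.log(p_c / (1 - p_c))
        counts = Counter(tokens)                                  # n_w
        return alpha + sum(
            n * math.log(p_w_given_c[w] / p_w_given_not_c[w])     # beta_w * n_w
            for w, n in counts.items() if w in p_w_given_c)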
A nonlinear problem
- A linear classifier like Naive Bayes does badly on this task.
- kNN will do very well (assuming enough training data).
However, in the very high-dimensional spaces typical of text:
- Most document pairs are very far apart (i.e., not strictly orthogonal, but they share only very common words and a few scattered others).
- In classification terms: virtually all document sets are separable, for most classification tasks.
- This is partly why linear classifiers are quite successful in this domain.
More than two classes: any-of vs. one-of classification.

Any-of (multilabel) classification:
- Classes are independent of each other.
- A document can belong to 0, 1, or more than 1 class.
- Decompose into n binary problems.
- Quite common for documents.

One-of (multiclass) classification:
- Classes are mutually exclusive.
- Each document belongs to exactly one class.
- E.g., digit recognition is polytomous classification.
Set of binary classifiers: any-of.
- Build a separator between each class and its complementary set (docs from all other classes).
- Given a test doc, evaluate it for membership in each class.
- Apply the decision criterion of each classifier independently. Done.
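A sketch of the any-of scheme, assuming each binary classifier exposes a hypothetical predict(doc) -> bool method (names ours):

    def any_of_classify(doc, binary_classifiers):
        """binary_classifiers: dict class_name -> classifier with .predict(doc) -> bool.
        Each classifier is applied independently; a doc may get 0, 1, or many labels."""
        return [c for c, clf in binary_classifiers.items() if clf.predict(doc)]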
Set of binary classifiers: one-of.
- Build a separator between each class and its complementary set (docs from all other classes).
- Given a test doc, evaluate it for membership in each class.
- Assign the document to the class with the maximum score (or maximum confidence value, or maximum probability).
Rocchio (centroid) classification:
- Use standard tf-idf weighted vectors to represent text documents.
- For each category, compute a prototype vector by summing the vectors of the training documents in the category: the prototype is the centroid of the members of the class.
- Assign test documents to the category with the closest prototype vector, based on cosine similarity.
Definition of centroid
    \vec{\mu}(c) = \frac{1}{|D_c|} \sum_{d \in D_c} \vec{v}(d)

where D_c is the set of all documents that belong to class c, and \vec{v}(d) is the vector space representation of d. Note that the centroid will in general not be a unit vector, even when the inputs are unit vectors.
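A minimal sketch of centroid training and Rocchio assignment (names ours; documents are rows of a tf-idf matrix):

    import numpy as np

    def train_centroids(X, y):
        """X: (n, d) tf-idf document vectors; y: numpy array of class labels.
        Returns one (generally non-unit) centroid per class."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def rocchio_classify(x, centroids):
        """Assign x to the class whose centroid has highest cosine similarity."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(centroids, key=lambda c: cos(x, centroids[c]))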
Rocchio Properties
- Forms a simple generalization of the examples in each class (a prototype).
- The prototype vector does not need to be averaged or otherwise normalized for length, since cosine similarity is insensitive to vector length.
- Classification is based on similarity to the class prototypes.
- Does not guarantee that classifications are consistent with the given training data.
Rocchio Anomaly
[Figure omitted: prototype models have problems with polymorphic (disjunctive) categories.]
Rocchio classification
- Rocchio forms a simple representation for each class: the centroid/prototype.
- Classification is based on similarity to / distance from the prototype/centroid.
- It does not guarantee that classifications are consistent with the given training data.
- It is little used outside text classification, but has been used quite effectively for text classification.
- Again, it is cheap to train and to test documents.
Decision Trees for text classification:
- A tree with internal nodes labeled by terms.
- Branches are labeled by tests on the weight that the term has.
- Leaves are labeled by categories.
- The classifier categorizes a document by descending the tree, following the tests, to a leaf.
- The label of the leaf node is then assigned to the document.
- Most decision trees are binary trees (advantageous; may require extra internal nodes).
- Decision trees make good use of a few high-leverage features, as the sketch below illustrates.
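The sketch referenced above (structure and names ours): a toy tree whose internal nodes test a term's weight against a threshold; the terms, thresholds, and category labels here are hypothetical:

    # A node is either a category label (leaf) or a (term, threshold, low, high) test.
    # Hypothetical toy tree: first test the weight of "export", then of "wheat".
    tree = ("export", 0.5,
            "not-trade",                       # weight("export") <= 0.5
            ("wheat", 0.3, "trade", "grain-trade"))

    def dt_classify(doc_weights, node):
        """doc_weights: {term: tf-idf weight}. Descend the tests until a leaf label."""
        while not isinstance(node, str):
            term, threshold, low, high = node
            node = low if doc_weights.get(term, 0.0) <= threshold else high
        return node

    print(dt_classify({"export": 0.9, "wheat": 0.1}, tree))   # -> "trade"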
At each stage, choose the unused feature with the highest information gain (feature/class mutual information).
Entropy Calculations
If we have a set with k different values in it, we can calculate the entropy as follows:

    \mathrm{Entropy}(S) = -\sum_{i=1}^{k} P(\mathrm{value}_i) \log_2 P(\mathrm{value}_i)

where P(value_i) is the probability of getting the i-th value when randomly selecting one from the set. So, for the set R = {a, a, a, b, b, b, b, b} (three a-values, five b-values):

    \mathrm{Entropy}(R) = -\tfrac{3}{8}\log_2\tfrac{3}{8} - \tfrac{5}{8}\log_2\tfrac{5}{8} \approx 0.9544

This makes sense: it's almost a 50/50 split, so the entropy should be close to 1.
Example: a dataset of 16 examples, each with attributes Size (Small/Large) and Color (Green/Yellow) and a class label Edible? (+/-). [Original data table omitted.] Branching on Size gives two children of 8 examples each:
- Entropy of the left child (Size = Small): I(size=small) = 0.8113
- Entropy of the right child (Size = Large): I(size=large) = 0.9544
The expected entropy after branching on size is the size-weighted average of the child entropies:

    I(size) = (8/16) * 0.8113 + (8/16) * 0.9544 = 0.8829

Relative to the entropy of the full dataset (0.9836), the information gain is 0.9836 - 0.8829 = 0.1008. So, we have gained 0.1008 bits of information about the dataset by choosing size as the first branch of our decision tree.
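A small sketch of these entropy and information-gain calculations (names ours):

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (in bits) of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(parent_labels, children_labels):
        """Parent entropy minus the size-weighted mean of the child entropies."""
        n = len(parent_labels)
        expected = sum(len(ch) / n * entropy(ch) for ch in children_labels)
        return entropy(parent_labels) - expected

    print(entropy(list("aaabbbbb")))   # ~0.9544, matching the worked example above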
- Fully grown trees tend to have decision rules that are overly specific and therefore unable to categorize documents well.
- Therefore, pruning or early-stopping methods for decision trees are normally a standard part of classification packages.
- Using a small number of features is potentially bad in text categorization, but in practice decision trees do well for some text classification tasks.
- Decision trees are easily interpreted by humans, compared to probabilistic methods like Naive Bayes.
- Decision trees are normally regarded as a symbolic machine learning algorithm, though they can be used probabilistically.
Prasad
61
- Representations of text are usually very high dimensional (one feature for each word).
- High-bias algorithms that prevent overfitting in high-dimensional space generally work best.
- For most text categorization tasks, there are many relevant features and many irrelevant ones.
- Methods that combine evidence from many or all features (e.g., Naive Bayes, kNN, neural nets) often tend to work better than ones that try to isolate just a few relevant features (standard decision-tree or rule induction)*.
*Although one can compensate by using many rules
Is there a learning method that is optimal for all text classification problems?
No, because of the bias-variance tradeoff. The right choice depends on the problem:
- How much training data is available?
- How simple/complex is the problem? (linear vs. nonlinear decision boundary)
- How noisy is the problem?
- How stable is the problem over time? For an unstable problem, it's better to use a simple and robust classifier.