
Multi-Label Image Categorization With Sparse Factor Representation

Abstract: The goal of multi-label classification is to reveal the underlying label correlations in order to boost classification accuracy. Most existing multi-label classifiers attempt to exhaustively explore the dependencies between correlated labels, which increases the risk of involving unnecessary label dependencies that are detrimental to classification performance. In fact, not all label correlations are indispensable to a multi-label model: negligible or fragile correlations do not generalize well to the testing data, especially when there is a label-correlation discrepancy between the training and testing sets. To minimize this negative effect, we propose to learn a sparse structure of label dependency. The underlying philosophy is that, as long as the multi-label dependency cannot be well explained, the principle of parsimony should be applied when modeling the label correlations. The resulting sparse label dependency structure discards outlying correlations between labels, which makes the learned model more generalizable to future samples. Experiments on real-world data sets show competitive results compared with existing algorithms.
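The train/test label-correlation discrepancy the abstract refers to can be made concrete. The following is a minimal illustrative sketch, not the paper's method: it estimates a label correlation matrix from a binary label indicator matrix and measures the discrepancy between two splits as a Frobenius norm. The function names, the random toy data, and the choice of Frobenius norm are all assumptions for illustration.

```python
import numpy as np

def label_corr(Y, eps=1e-12):
    """Empirical label correlation matrix from a (samples x labels)
    binary indicator matrix Y."""
    Yc = Y - Y.mean(axis=0)
    cov = Yc.T @ Yc / len(Y)
    std = np.sqrt(np.diag(cov)) + eps  # eps guards constant columns
    return cov / np.outer(std, std)

def corr_discrepancy(Y_train, Y_test):
    """Frobenius-norm distance between the label correlation
    matrices of the training and testing label sets."""
    return np.linalg.norm(label_corr(Y_train) - label_corr(Y_test))

# Toy data: two independently drawn label matrices (100 samples, 5 labels).
rng = np.random.default_rng(0)
Y_tr = (rng.random((100, 5)) > 0.5).astype(float)
Y_te = (rng.random((100, 5)) > 0.5).astype(float)
d = corr_discrepancy(Y_tr, Y_te)
```

A large value of `d` signals exactly the situation the abstract warns about: correlations fitted on the training set that do not carry over to the test set.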

Existing method:

Most existing multi-label classifiers assume that the labels are ubiquitously correlated with each other through dense correlative dependencies.


However, this assumption does not hold, since there exist labels with negligible or quite weak correlations among them. For example, it is intuitive that bird and train have no obvious correlation. Incorporating such outlying label correlations increases the risk of imposing incorrect correlative constraints between labels that do not generalize to testing samples. From a modeling perspective, it unnecessarily increases the complexity of the multi-label model, making the learned label correlations prone to over-fitting those of a noisy training set. This problem is especially serious when there is a remarkable label-correlation discrepancy between the training and test sets.
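One simple way to see the sparsity idea is to estimate pairwise label correlations from a label indicator matrix and zero out the negligible entries. This is only an illustrative sketch of the general principle, not the paper's learning algorithm; the toy data, the hard threshold `tau`, and its value are assumptions.

```python
import numpy as np

# Toy multi-label indicator matrix: rows = samples, columns = labels
# (e.g., sky, cloud, bird, train). Data are illustrative only.
Y = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

# Empirical label correlation matrix (labels along columns).
C = np.corrcoef(Y, rowvar=False)

# Hard-threshold negligible correlations to obtain a sparse
# dependency structure; tau is a hypothetical hyperparameter.
tau = 0.5
C_sparse = np.where(np.abs(C) >= tau, C, 0.0)
```

In `C_sparse`, weakly correlated label pairs (such as bird and cloud in this toy data) are dropped, while strong dependencies survive, which is the kind of parsimonious structure the proposed method aims to learn.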

Proposed method:
Supervised learning (SL) consists of training and prediction phases and requires a batch of training examples annotated with a set of semantic labels to establish a learner with satisfactory generalization capability. Toward this goal, various methods for multi-label classification have been proposed under different problem settings. Chen et al. proposed a supervised nonnegative matrix factorization (NMF) approach for both image classification and annotation with the aid of label information, in which two supervised nonnegative matrix factorizations are combined to identify the latent image bases and to represent the training images in the basis space. Hypergraph spectral learning has been utilized for multi-label classification, where a hypergraph is constructed to exploit the correlation information among different labels. Han et al. proposed a multi-task sparse discriminant analysis approach that formulates multi-label prediction as a quadratic optimization problem. Structured visual feature selection and hierarchical correlated structures among multiple tags have also been explored together to boost the performance of image annotation.
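For context on the NMF building block mentioned above, here is a minimal sketch of plain (unsupervised) NMF with Lee-Seung multiplicative updates, which factors a nonnegative data matrix into bases and codes. This is the generic core only, not Chen et al.'s supervised variant; the function name, iteration count, and toy data are assumptions.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via multiplicative updates: V ≈ W @ H.

    V: (features x samples) nonnegative data matrix.
    r: number of latent bases.
    Returns W (feature bases) and H (sample codes in the basis space).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative data: 20 features, 10 samples, 4 latent bases.
V = np.random.default_rng(1).random((20, 10))
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The columns of `W` play the role of the "latent image bases," and each column of `H` is a training sample represented in that basis space.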

Merits:
1. Better PSNR values.
2. More enhanced output images.
3. Low bit error rate (BER).

Demerits:
1. Noise level is very high.
2. Restoration process time is very high.

Results:
