
SYNOPSIS FOR PROPOSED RESEARCH WORK

1. INTRODUCTION

Image processing can be defined as the manipulation of an image for the purpose of
either extracting information from the image or producing an alternative representation of
the image. Digital image processing is concerned with the development of computer
algorithms working on digitized images. It is quite a broad field, drawing upon many
disciplines such as optics, signal processing, electronics, computer science, pattern
recognition, perception science and cognitive science.

The first work in image processing dates back to the 1920s, when automated means of
image transmission were first used. In the 1950s, computers began to be used for processing image data.

The advent of affordable computer power in that period, the questions posed by the space
program around the same time, and the increasing availability of imaging and
visualization equipment, led to a large increase in the need for algorithms to process
image data. The goal of image processing is usually automatic detection or recognition of
image content, in which case one can speak of machine vision. However, the goal might
also be to enhance images for further processing by humans. Its current applications are
numerous, in medicine, industrial inspection, video communication, remote sensing,
robot vision etc.

Figure 1. The image processing chain: image input → pre-processing → data reduction (feature extraction) → segmentation → object recognition → image understanding → image output, with optimization applied across the stages.

The range of image processing problems is wide, encompassing everything from low-
level signal enhancement to high-level image understanding. In general, image
processing problems are solved by a chain of tasks. This chain, shown in figure 1,
outlines the possible processing needed from the initial sensor data to the outcome (e.g. a
classification or a scene description). This pipeline consists of the steps of pre-processing,
feature extraction, segmentation, object recognition and image understanding. In each
step, the input and output data can be images (pixels), measurements in images
(features), decisions made in previous stages of the chain (labels), or even object
relation information (graphs). Which type of data is appropriate at which stage depends on
the application.
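As a toy illustration of this chain, the sketch below models each stage as a function that maps one intermediate representation to the next; the stage functions are hypothetical and invented purely for illustration.

```python
from typing import Callable, List
import numpy as np

Stage = Callable[[object], object]   # each stage maps one representation to the next

def run_chain(image: np.ndarray, stages: List[Stage]) -> object:
    data = image
    for stage in stages:
        data = stage(data)           # pixels -> features -> labels -> description
    return data

# Hypothetical stage functions, invented purely for illustration.
chain = [
    lambda img: img.astype(np.float64) / 255.0,          # pre-processing (normalization)
    lambda img: img.reshape(-1, 1),                       # "feature extraction" (per-pixel features)
    lambda feats: (feats > 0.5).astype(int),              # "segmentation" (labels)
    lambda labels: {"bright_pixels": int(labels.sum())},  # crude "image understanding"
]
print(run_chain(np.random.randint(0, 256, (8, 8)), chain))
```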

With each step in the chain, the need for prior (world) knowledge increases. For
simple noise reduction, little knowledge of the image content is needed, whereas for
image understanding it is imperative to limit the domain of images that can be processed.

There are numerous specific motivations for image processing but many fall into the
following categories: (i) to remove unwanted signal components that are corrupting the
image and (ii) to extract information by rendering it in a more obvious or more useful
form.

Image enhancement is one of the simplest and most appealing areas of digital image
processing. The basic idea behind enhancement is to bring out detail that is obscured, or
to highlight certain features of interest in an image. A familiar example of enhancement
is increasing the contrast of an image. Histogram equalization (HE) is a popular
technique for automatic contrast enhancement. But the approach has several limitations.
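As a point of reference, a minimal Python/NumPy sketch of the standard global HE transfer function (for an 8-bit grayscale image) is given below; the function name and the use of NumPy are illustrative choices.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image:
    each gray level is mapped through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)   # transfer function
    return lut[img]
```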

In many cases, histogram equalization does not preserve the luminance of the image.
A technique based on equal area dualistic sub-image histogram equalization is proposed
in [6]. The technique preserves the luminance of the image to a great extent, so it
can be used directly in video systems. Some other luminance-preserving techniques are
proposed in [7-11].
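A much-simplified sketch of the dualistic sub-image idea is given below. It splits the histogram at the median gray level, so that the two sub-images have roughly equal area, and equalizes each part only within its own half of the grayscale; it follows the spirit of [6] rather than the authors' exact formulation.

```python
import numpy as np

def dsihe_sketch(img):
    """Equal-area split at the median, then per-part equalization.
    A simplified illustration, not the exact method of [6]."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    med = int(np.median(img))                     # equal-area split point
    lut = np.zeros(256, dtype=np.uint8)
    for lo, hi in [(0, med), (med + 1, 255)]:
        if hi < lo:
            continue
        cdf = np.cumsum(hist[lo:hi + 1])
        if cdf[-1] == 0:                          # empty sub-image: identity map
            lut[lo:hi + 1] = np.arange(lo, hi + 1)
            continue
        # equalize this sub-image only within its own half of the grayscale
        lut[lo:hi + 1] = lo + np.round((hi - lo) * cdf / cdf[-1]).astype(np.int64)
    return lut[img]
```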

Histogram equalization fails if amplitudes of some histogram components are very high
at one or several locations on the grayscale, while other histogram components are very
small, but not zero, in the rest of the grayscale. This makes it difficult to increase the
image contrast by simply stretching its histogram. Since the high amplitude of the
histogram components generally corresponds to the image background, the HE technique
could cause a washed-out effect on the appearance of the output image and/or amplify the
background noise. A very interesting improvement to histogram equalization is gray level
grouping (GLG) [12]. The basic procedure is to first group the histogram components of
a low-contrast image into a proper number of bins according to a selected criterion, then
redistribute these bins uniformly over the grayscale, and finally ungroup the previously
grouped gray-levels. GLG not only produces results superior to conventional contrast
enhancement techniques, but is also fully automatic in most circumstances, and is
applicable to a broad variety of images.
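The following sketch illustrates only this three-step idea (group, redistribute, ungroup) in a much-simplified form; the grouping criterion used here is an assumption for illustration and not the full GLG procedure of [12].

```python
import numpy as np

def simple_glg(img, n_groups=32):
    """Much-simplified sketch of the gray-level grouping idea: (1) group the
    occupied histogram bins, (2) redistribute the groups uniformly over the
    grayscale, (3) ungroup by spreading each group's levels inside its band.
    The merging rule (merge the adjacent pair with the smallest combined
    count) is a simplification, not the full GLG procedure of [12]."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    groups = [[g] for g in np.flatnonzero(hist)]     # one group per occupied level
    sums = [hist[g[0]] for g in groups]
    while len(groups) > n_groups:                    # step 1: grouping
        pair = [sums[i] + sums[i + 1] for i in range(len(groups) - 1)]
        i = int(np.argmin(pair))
        groups[i] += groups[i + 1]
        sums[i] = pair[i]
        del groups[i + 1], sums[i + 1]
    lut = np.zeros(256, dtype=np.uint8)
    band = 255.0 / len(groups)                       # step 2: equal band per group
    for k, levels in enumerate(groups):              # step 3: ungrouping
        lut[levels] = np.linspace(k * band, (k + 1) * band, len(levels)).astype(np.uint8)
    return lut[img]
```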
Reversible data hiding (RDH) embeds a piece of information into a host signal to
generate a marked signal from which the original can be exactly recovered after
extracting the embedded data. RDH is useful in sensitive applications where no
permanent change to the host signal is allowed. Many algorithms have been proposed
for digital images to embed invisible data (e.g. [13]–[20]) or a visible watermark
(e.g. [21]). An interesting RDH algorithm has been proposed in [22]; it also achieves
contrast enhancement by modifying the histogram of the image while embedding the data.
The authors claim that it is the first algorithm to achieve image contrast enhancement
by RDH.
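To make the notion of RDH concrete, the sketch below implements the basic histogram-shifting idea underlying several of these schemes (closest in spirit to [14]); it is not the contrast-enhancing algorithm of [22]. It assumes an 8-bit grayscale image with at least one empty histogram bin above the peak bin, and the (peak, zero) pair is side information that must reach the decoder.

```python
import numpy as np

def hs_embed(img, bits):
    """Minimal histogram-shifting RDH sketch. Assumes an 8-bit grayscale image
    with at least one empty histogram bin above its peak bin."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    empty = np.flatnonzero(hist == 0)
    assert (empty > peak).any(), "needs an empty bin above the peak"
    zero = int(empty[empty > peak][0])
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1            # vacate the bin peak + 1
    flat = out.ravel()                               # view into out
    carriers = np.flatnonzero(flat == peak)          # peak-valued pixels carry bits
    assert len(bits) <= len(carriers), "payload exceeds capacity"
    flat[carriers[:len(bits)]] += np.asarray(bits, dtype=np.int32)  # 0 -> peak, 1 -> peak+1
    return out.astype(np.uint8), peak, zero

def hs_extract(marked, peak, zero):
    """Recover the embedded bits and restore the original image exactly.
    (In practice the payload length is embedded too; here the tail of the
    returned bit stream is padding from unused carriers.)"""
    flat = marked.astype(np.int32).ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))
    bits = (flat[carriers] == peak + 1).astype(np.uint8)
    restored = marked.astype(np.int32)
    restored[(restored > peak) & (restored <= zero)] -= 1   # undo the shift
    return bits, restored.astype(np.uint8)
```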

The field of achieving contrast enhancement by RDH is relatively new and can be
explored for better RDH techniques that also deliver enhanced image quality. A possible
application is biomedical imaging, where poor-contrast medical images containing
personal information of the patient need to be processed. For such applications, it is
important that critical information is preserved while the image contrast is improved.
The immediate focus will be to look for efficient RDH techniques with a good data
hiding rate and subsequently to achieve more generalized image enhancement.

2. Motivation

There are many problems in image processing for which good, theoretically justifiable
solutions exist, especially problems for which linear solutions suffice. Segmentation
is a fundamental step in medical image analysis.
Though past researchers have addressed this problem, it remains a vast research field
because of the variability of MRI data. The authors in [1] use watershed segmentation
and the EM-GM algorithm for segmenting brain tumors, but they did not mention any
particular dataset.
Similar research was proposed by authors who used a support vector machine classifier
to separate tumor from normal tissue. However, due to a weak training set and a
suboptimal feature extraction technique, their algorithm cannot classify tumors
robustly.
Some authors have tried the scale-invariant feature transform (SIFT) algorithm to find
feature points and match the tumor region. Because this algorithm finds feature points
in high-cluster regions, it sometimes misclassifies normal tissue as high-cluster
tumor tissue [3].
Besbes et al. [4] introduced a model combined with a discrete Markov random field (MRF)
for the segmentation of brain tumors, but parameter estimation and probability
computation for this method are very difficult.
Shen et al. [5] applied the traditional fuzzy C-means (FCM) clustering algorithm, but it
is sensitive to noise that affects the pixel intensities and may produce improper segmentation.
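A minimal intensity-only FCM sketch is given below to make this noise sensitivity concrete: membership is computed from the gray value alone, so noisy pixels perturb the partition directly. It is a generic implementation of the classical algorithm, not the specific method evaluated in [5].

```python
import numpy as np

def fcm_intensity(img, c=3, m=2.0, n_iter=30, eps=1e-9, seed=0):
    """Classical fuzzy C-means on pixel intensities only (no spatial term)."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1).astype(np.float64)
    centers = rng.choice(x, size=c, replace=False)           # initial cluster centers
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + eps      # (N, c) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                    # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u.argmax(axis=1).reshape(img.shape), centers
```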
Chen et al. [6] applied K-means clustering and a knowledge-based algorithm for biomedical
image segmentation, but it takes only the image intensity into account and therefore does
not produce adequate results on noisy images.
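For comparison, the following sketch clusters pixels by intensity alone with plain K-means (a generic illustration, not the knowledge-based method of [6]); since no spatial information enters the distance computation, noisy pixels are easily assigned to the wrong cluster.

```python
import numpy as np

def kmeans_intensity_segment(img, k=3, n_iter=20, seed=0):
    """Intensity-only K-means segmentation: every pixel is clustered on its
    gray value alone, so noisy pixels are easily mislabeled."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1, 1).astype(np.float64)
    centers = rng.choice(x.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x - centers.T), axis=1)    # nearest center
        for j in range(k):
            if np.any(labels == j):
                centers[j, 0] = x[labels == j].mean()        # update center
    return labels.reshape(img.shape)
```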
Many efforts have explored artificial neural networks (ANNs) [7]. Edge-based
segmentation techniques do not work well because of inherent speckle noise and
texture characteristics.

In addition, K-nearest neighbors (KNN) [8], support vector machines (SVM) [9], Bayesian
algorithms, hidden Markov models, conditional random fields [10], and high-dimensional
features with level sets [11] have been used for segmentation. Unsupervised algorithms
have started to evolve in recent years [12, 13], although they are still at an early
stage and not fully autonomous.

Problems further along the image processing chain, such as feature extraction, object
recognition and image understanding, cannot (yet) be solved using "standard" techniques.
For example, the task of recognizing any of a number of objects against an arbitrary
background calls for the same human capabilities investigated in artificial intelligence:
the ability to generalize, associate, etc.

All this naturally leads to the idea that adaptive methods might be an ideal set of tools for
difficult image processing problems. Possible advantages are:

• Instead of designing an algorithm, one can construct an example data set and an
error criterion, and train any of a number of learning algorithms to perform the
desired input-output mapping (see the sketch after this list);
• For many adaptive methods, such as neural networks, the input can consist of
pixels or measurements in images; the output can contain pixels, decisions, labels,
etc., as long as these can be coded numerically, with no further assumptions. This
means adaptive methods can perform several steps in the image processing chain
at once;
• Some adaptive models, such as neural networks, can be highly nonlinear; the
amount of nonlinearity can be influenced by design, but also depends on the
training data [24, 25];
• Various methods, such as neural networks, have been shown to be universal
classification or regression techniques [26, 27, 28].
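The sketch below illustrates the first point in the list above: instead of designing a filter, one builds an example data set (here a synthetic stand-in image, its noisy copy, and 3×3 neighborhoods as inputs), chooses a squared-error criterion, and trains a small one-hidden-layer network by gradient descent to map noisy neighborhoods to clean center pixels. The image size, network width and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example data set: noisy 3x3 neighborhoods -> clean center pixel
# (a synthetic stand-in image; any training images could be used instead).
clean = rng.random((64, 64))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

def neighborhoods(img):
    # 9 shifted copies of the 62x62 interior, one column per 3x3 offset
    return np.stack([img[i:i + 62, j:j + 62].ravel()
                     for i in range(3) for j in range(3)], axis=1)

X = neighborhoods(noisy)                  # (3844, 9) inputs
y = clean[1:-1, 1:-1].ravel()[:, None]    # (3844, 1) targets

# One-hidden-layer network trained by gradient descent on a squared-error criterion.
W1 = 0.5 * rng.standard_normal((9, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("training MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```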
3. Main Challenges

The discussion above leads to the following main challenges:

• Can feature extraction operations in image processing be learned by adaptive
methods? To what extent can adaptive methods solve problems that are hard to
solve using standard techniques? Is nonlinearity a bonus?
• How can prior knowledge be used, if available? Can, for example, the fact that
neighboring pixels are highly correlated be used in neural network design or
training?
• What can be learned from adaptive methods trained to solve feature extraction
problems? If one finds an adaptive method that solves a certain problem, can one
learn how the problem should be approached using standard techniques? Can one
extract knowledge from the solution?

The last question is especially intriguing. One of the main drawbacks of many learning
systems is their black-box character, which seriously impedes their application in systems
in which insight into the solution is an important factor, e.g. medical systems. If a
developer can learn how to solve a problem by analyzing the solution found by a learning
algorithm, this solution may be made more explicit. It is to be expected that the answers
to these questions will differ for different types of neural networks.

4. Research Gap & Scope of Work

A great deal of work has been done in image processing, particularly in feature
extraction using supervised learning with neural networks. Although modern medical
imaging research is advancing at a booming rate, it is still a very challenging task to
detect brain tumors perfectly.

Medical imaging, unlike other imaging systems, carries the highest penalty for even a
minimal error. In the past, researchers used biopsy to distinguish tumor tissue from the
other soft tissues in the brain, which is time-consuming and error-prone.

The identification, segmentation and detection of the infected area in brain tumor MRI
images is a tedious and time-consuming task. The different anatomical structures of the
human body can be visualized using image processing concepts.

It is still very difficult to visualize the abnormal structures of the human brain and
the extent to which a tumor has grown using simple imaging techniques. Magnetic
resonance imaging distinguishes and clarifies the neural architecture of the human
brain; it offers many imaging modalities that scan and capture the internal structure of
the brain and indicate the status of the tumor so that, where possible, it can be treated
in time.

Neural networks thus find far-reaching applications in distinguishing tumor tissue from
the other soft tissues in the brain.

Image processing has already begun to move our world. But for it to shift the axis,
computers will have to see the way we do.

5. Work plan

i. First six months: Course work completion


ii. Next six months: Literature review & Problem formulation/finding research gaps,
and a review paper
iii. Next six months: Implementation & coding with a research paper
iv. Consecutive six months: Problem solving and results followed by a research
paper
v. Next six months: Result analysis & beginning of thesis writing followed by a
research paper.
vi. Final: Thesis submission and a research paper publication.
REFERENCES

[1] S. M. Kamrul Hasan and Mohiuddin Ahmad, "Two-step verification of brain tumor
segmentation using watershed-matching algorithm," Brain Informatics,
https://doi.org/10.1186/s40708-018-0086-x

[2] N. Varuna Shree and T. N. R. Kumar, "Identification and classification of brain tumor MRI
images with feature extraction using DWT and probabilistic neural network," Brain
Informatics, https://doi.org/10.1007/s40708-017-0075-5
[3] W. K. Pratt, Digital Image Processing, 2nd ed. New York, NY: John Wiley & Sons,
1991.
[4] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision.
London: Chapman & Hall, 1993.
[5] W. H. Walton, "Automatic counting of microscopic particles," Nature, vol. 169, pp. 518–520,
1952.
[6] Y. Wang, Q. Chen, and B. Zhang, “Image enhancement based on equal area dualistic sub-
image histogram equalization method,” IEEE Trans. Consum. Electron., vol. 45, no. 1, pp.
68–75, 1999.
[7] S.-D. Chen and A. R. Ramli, "Minimum mean brightness error bi-histogram equalization in
contrast enhancement," IEEE Trans. Consum. Electron., vol. 49, no. 4, pp. 1310–1319, Nov. 2003.
[8] S.-D. Chen and A. R. Ramli, "Contrast enhancement using recursive mean-separate histogram
equalization for scalable brightness preservation," IEEE Trans. Consum. Electron., vol. 49, no. 4, pp.
1301–1309, Nov. 2003.
[9] C. Wang and Z. Ye, “Brightness preserving histogram equalization with maximum entropy: a
variational perspective,” IEEE Trans. Consum. Electron., vol. 51, no. 4, pp. 1326– 1334,
Nov. 2005.
[10] H. Ibrahim and N. S. P. Kong, “Brightness preserving dynamic histogram equalization for
image contrast enhancement,” IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752–
1758, 2007.
[11] M. Abdullah-Al-Wadud, M. Kabir, M. Akber Dewan, and O. Chae, “A Dynamic Histogram
Equalization for Image Contrast Enhancement,” IEEE Trans. Consum. Electron., vol. 53,
no. 2, pp. 593–600, 2007.
[12] Z. Chen, B. R. Abidi, D. L. Page, and M. A. Abidi, "Gray-level grouping (GLG): An
automatic method for optimized image contrast enhancement - Part I: The basic method,"
IEEE Trans. Image Process., vol. 15, no. 8, pp. 2290–2302, 2006.
[13] J. Tian, “Reversible data embedding using a difference expansion,” IEEE Trans. Circuits
Syst. Video Technol., vol. 13, no. 8, pp. 890–896, Aug. 2003.
[14] Z. Ni, Y. Q. Shi, N. Ansari, and W. Su, "Reversible data hiding," IEEE Trans. Circuits Syst.
Video Technol., vol. 16, no. 3, pp. 354–362, Mar. 2006.
[15] D. M. Thodi and J. J. Rodriguez, "Expansion embedding techniques for reversible
watermarking," IEEE Trans. Image Process., vol. 16, no. 3, pp. 721–730, Mar. 2007.
[16] D. Coltuc and J.-M. Chassery, "Very fast watermarking by reversible contrast mapping,"
IEEE Signal Process. Lett., vol. 14, no. 4, pp. 255–258, Apr. 2007.
[17] V. Sachnev, H. J. Kim, J. Nam, S. Suresh, and Y. Q. Shi, “Reversible watermarking
algorithm using sorting and prediction,” IEEE Trans. Circuits Syst. Video Technol., vol.
19, no. 7, pp. 989–999, Jul. 2009.
[18] X. Li, B. Yang, and T. Zeng, “Efficient reversible watermarking based on adaptive
prediction-error expansion and pixel selection,” IEEE Trans. Image Process., vol. 20, no.
12, pp. 3524–3533, Jan. 2011.
[19] Z. Zhao, H. Luo, Z.-M. Lu, and J.-S. Pan, "Reversible data hiding based on multilevel
histogram modification and sequential recovery," Int. J. Electron. Commun. (AEÜ), vol.
65, pp. 814–826, 2011.
[20] H. T. Wu and J. Huang, "Reversible image watermarking on prediction error by efficient
histogram modification," Signal Process., vol. 92, no. 12, pp. 3000–3009, Dec. 2012.
[21] Y. Yang, X. Sun, H. Yang, C.-T. Li, and R. Xiao, "A contrast-sensitive reversible visible
image watermarking technique," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5,
pp. 656–667, May 2009.
[22] H. Wu, J. Dugelay, and Y. Shi, “Reversible Image Data Hiding with Contrast
Enhancement,” IEEE Signal Processing Letters, vol. 22, no. 1, pp. 81–85, Jan 2015.
[23] I. T. Young, J. J. Gerbrands, and L. J. van Vliet, "Image processing fundamentals," in The
Digital Signal Processing Handbook, pp. 51/1–51/81. Boca Raton, FL: CRC Press/IEEE
Press, 1998. http://www.ph.tn.tudelft.nl/Courses/FIP
[24] S. Raudys, "Evolution and generalization of a single neurone: I. Single-layer perceptron as
seven statistical classifiers," Neural Networks, vol. 11, no. 2, pp. 283–296, 1998.
[25] S. Raudys, "Evolution and generalization of a single neurone: II. Complexity of statistical
classifiers and sample size considerations," Neural Networks, vol. 11, no. 2, pp. 297–313, 1998.
[26] K.-I. Funahashi, "On the approximate realization of continuous mappings by neural
networks," Neural Networks, vol. 2, no. 3, pp. 183–192, 1989.
[27] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal
approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[28] K. Hornik, M. Stinchcombe, and H. White, "Universal approximation of an unknown
mapping and its derivatives using multilayer feedforward networks," Neural Networks, vol. 3,
no. 5, pp. 551–560, 1990.
[29] P. Sane and R. Agrawal, "Pixel normalization from numeric data as input to neural
networks for machine learning and image processing."
[30] F. Schwenker and E. Trentin, "Pattern classification and clustering: A review of partially
supervised learning approaches," Pattern Recognition Letters, vol. 37, pp. 4–14, Feb. 2014.
[31] J. Amezcua, P. Melin, and O. Castillo, "A neural network with a learning vector
quantization algorithm for multiclass classification using a modular approach," in Recent
Developments and New Directions in Soft Computing Foundations and Applications,
Springer International Publishing, pp. 171–184, 2016.
[32] Z. Jiang, Y. Wang, L. Davis, W. Andrews, and V. Rozgic, "Learning discriminative features
via label consistent neural network," 2016.
