
International Journal of Scientific Research Engineering & Technology (IJSRET)

Volume 2, Issue 10, pp. 637-641, January 2014, www.ijsret.org, ISSN 2278-0882


IJSRET @ 2014
A Literature Survey in Various Approaches for Tumor Segmentation
C. Ganga 1, C. Subbulakshmi 2
1 PG Scholar, Dept. of Communication Systems, Pet Engineering College, India.
2 Assistant Professor, Dept. of Communication Systems, Pet Engineering College, India.
Abstract - This paper presents an efficient and novel
geometric flow-driven technique for mesh optimization
of segmented tetrahedral meshes with non-manifold
boundary surfaces. The presented technique consists of
geometric optimization and topological transformation
techniques. This model intends to derive a mapping
that evolves given contours or piecewise-constant
regions towards objects in the image. It constructs an
averaged curvature flow for fairing space boundary
curves while preserving shape, and it adopts the averaged
mean curvature flow to fair surface patches with the
property of volume preservation. Meanwhile,
boundary meshes are regularized by adjusting curve
nodes and surface nodes along tangent directions.
Additionally, face-swapping and edge-removal
operations are applied to eliminate poorly shaped
elements. Lastly, the presented technique is validated on
several application examples, and the results
demonstrate that mesh quality is improved considerably.
Keywords - Geometric Flow, Optimization, Boundary
Curves, Edge Removal
1. INTRODUCTION
IMAGE segmentation is a fundamental yet still
challenging problem in computer vision and image
processing. In particular, it is an essential process for
several applications such as object recognition, target
tracking, content-based image retrieval, and medical
image processing. Generally speaking, the goal of image
segmentation is to partition an image into a certain
number of pieces that have coherent features (color,
texture, etc.) and, at the same time, to group the
meaningful pieces together for the convenience of
perceiving [Aboutanos and Dawant - 1997]. In several
practical applications, since a large number of images
must be handled, the human interaction involved in
the segmentation process should be kept to a minimum.
This makes automatic image segmentation techniques
more appealing. Several high-level segmentation
techniques (e.g., class-based object segmentation [L.K.
Arata et al. - 1995, Ashburner and Friston - 1997])
also demand sophisticated automatic
segmentation techniques.
Methods for performing segmentations vary widely
depending on the specific application, imaging modality,
and other factors. For instance, the segmentation of
brain tissue has different requirements from the segmentation of
the liver. General imaging artifacts such as noise, partial
volume effects, and motion can have significant
consequences on the performance of segmentation
algorithms. Moreover, each imaging modality has its
own idiosyncrasies with which to contend. There is
currently no single segmentation method that yields
acceptable results for every medical image. Strategies
do exist that are more general and can be applied to a
variety of data. However, methods that are specialized to
particular applications can typically achieve better
performance by taking prior knowledge into
consideration. Selecting an appropriate approach to a
segmentation problem can therefore be difficult.
This paper provides a summary of current strategies
used for computer-assisted or computer-automated
segmentation of anatomical medical images. Strategies
and applications that have appeared in the recent
literature are briefly outlined. A full description of
competing methods is beyond the scope of this survey,
and readers are referred to the references for
further details. The focus instead is on providing the reader
an introduction to the different applications of
segmentation in medical imaging and the various
issues that must be confronted. Moreover, only the
most commonly used radiological modalities
for imaging anatomy are considered: magnetic resonance imaging
(MRI), X-ray computed tomography (CT), ultrasound,
and X-ray projection radiography. Most of the concepts
described, though, are applicable to other imaging
modalities as well.
The main contributions of this paper may be
summarized as follows:
1. It presents a comparison among different
image-based segmentation approaches.
2. It extends the error-rate-based segmentation
approach with different information.
2. BASIC DESCRIPTION
This section defines the terminology used
throughout and describes important issues in the
segmentation of medical images.
Definition
An image is a collection of measurements in two-
dimensional (2-D) or three-dimensional (3-D) space. In
medical images, these measurements or image intensities
may be radiation absorption in X-ray imaging, acoustic
pressure in ultrasound, or RF signal amplitude in MRI. If
a single measurement is made at every location in the
image, then the image is known as a scalar image. If
more than one measurement is made (e.g., dual-echo
MRI), the image is termed a vector or multi-channel
image. Images may be acquired in the continuous
domain, as in X-ray film, or in discrete space, as in MRI.
In 2-D discrete images, the location of each
measurement is called a pixel, and in 3-D images, it is
called a voxel. For convenience, the term "pixel" is
often used here to refer to both the 2-D and 3-D cases.
When the constraint that regions be connected is
removed, determining the sets is termed pixel
classification, and the sets themselves are termed
classes. Pixel classification, rather than classical
segmentation, is often a desirable goal in medical images,
particularly when disconnected regions belonging to the
same tissue class need to be identified. Determining
the total number of classes in pixel classification can
be a difficult problem. Frequently, the number is known
from prior knowledge of the anatomy being
considered.
Labeling is the process of assigning a meaningful
designation to each region or class and may be
performed separately from segmentation. In medical
imaging, the labels are often visually obvious and may
be determined upon inspection by a physician or
technician. Computer-automated labeling is desirable
when labels are not obvious and in automated processing
systems. A typical circumstance involving labeling
occurs in digital mammography, where the image is
segmented into distinct regions and the regions are
subsequently labeled as being healthy tissue or
tumorous.
Methods that delineate a structure or structures in an
image, including both classical segmentation and pixel
classification methods, are considered in this review.
Although specific labeling methods are not discussed,
several techniques that perform segmentation and
labeling simultaneously are covered.
Dimensionality
Dimensionality refers to whether a segmentation
method operates in a 2-D image domain or a 3-D image
domain. Methods that rely solely on image intensities
are independent of the image domain. However, certain
methods, such as deformable models, Markov random
fields, and region growing, incorporate spatial
information and may therefore operate differently
depending on the dimensionality of the image. Usually, 2-
D methods are applied to 2-D images and 3-D methods
are applied to 3-D images. In some cases, however, 2-D
methods are applied sequentially to the slices of a 3-D
image. In addition, certain structures are more easily
defined along 2-D slices.
Figure 1: Illustration of partial volume effect: (a) Ideal
image, (b) acquired image.
Soft segmentation and partial volume effects
Segmentations that permit regions or classes to
overlap are known as soft segmentations. Soft
segmentations are important in medical imaging because of partial
volume effects, where multiple tissues contribute to a
single pixel or voxel, leading to a blurring of intensity
across boundaries. Fig.1 illustrates how the sampling
process may produce partial volume effects, resulting in
ambiguities in structural definitions. In Fig.1b, it is
difficult to precisely determine the boundaries of the two
objects. A hard segmentation forces a decision of
whether a pixel is inside or outside the object. Soft
segmentations, on the other hand, preserve additional
information from the original image by representing the
uncertainty in the location of object boundaries, which
partial volume effects may blur across significant
portions of an image.
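To make the distinction concrete, the following sketch contrasts a hard assignment with soft memberships for a 1-D intensity profile crossing a blurred boundary. The two class means and the sharpness parameter `beta` are illustrative assumptions, not taken from any of the surveyed methods.

```python
import numpy as np

def hard_segment(image, means):
    """Hard segmentation: assign each pixel to the class with the
    nearest mean intensity (a binary in/out decision)."""
    dists = np.abs(image[..., None] - np.asarray(means))  # distance to each class mean
    return np.argmin(dists, axis=-1)

def soft_segment(image, means, beta=1.0):
    """Soft segmentation: per-pixel class memberships that sum to 1.
    Larger `beta` sharpens the memberships toward a hard decision."""
    dists = np.abs(image[..., None] - np.asarray(means))
    w = np.exp(-beta * dists)
    return w / w.sum(axis=-1, keepdims=True)

# A 1-D "image" with a partial-volume-blurred boundary between
# tissues of intensity 0 and 10.
img = np.array([0.0, 0.0, 3.0, 7.0, 10.0, 10.0])
print(hard_segment(img, [0.0, 10.0]))          # [0 0 0 1 1 1]
print(soft_segment(img, [0.0, 10.0]).round(2)) # boundary pixels share membership
```

The soft memberships retain how ambiguous the boundary pixels are, whereas the hard labels discard that information.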
3. REVIEW ON VARIOUS SYSTEMS
Region Growing and Split and Merge are the
typical region-based segmentation algorithms. Though
both share the essential concept of homogeneity, the
way they carry out the segmentation process is quite
different in the decisions taken. Thus, two different
algorithms, named A1 and A2, have been developed based on
Split and Merge and Region Growing, respectively.
Split and Merge
Typical split and merge techniques [Bardinet et al. -
1998] consist of two basic steps. First, the whole image is
considered as one region. If this region does not
satisfy a homogeneity criterion, the region is split into
four quadrants (sub-regions) and each quadrant is tested
in the same way; this process is repeated recursively
until each square region created in this way contains
homogeneous pixels. Next, in the second step, all
adjacent regions with similar attributes may be merged
following other (or the same) criteria. The criterion of
homogeneity is usually based on the analysis of the
chromatic characteristics of the region. A region with a
small standard deviation in the color of its members
(pixels) is considered homogeneous. The
integration of edge information permits adding another
term to this criterion: a region is considered
homogeneous when it is totally free of
contours.
This algorithm is based on the concepts of Bonnin
and his colleagues, who proposed a region extraction
scheme based on a split and merge algorithm controlled by edge
detection. The criterion determining the split of a region
takes edge and intensity characteristics into
consideration. The homogeneity intensity criterion is
rendered necessary by the possible failures of the
edge detector. After the split phase, the contours are
thinned and chained into edges relative to the boundaries
of the initial regions. Later, a final merging process takes
edge information into consideration in order to solve
possible over-segmentation problems. In the last step, two
adjacent initial regions are merged only if there are no
edges on the common boundary.
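The split phase described above can be sketched as a recursive quadtree decomposition. The standard-deviation threshold, minimum block size, and test image below are illustrative assumptions; the merge phase is only indicated in a comment.

```python
import numpy as np

def split_homogeneous(image, threshold, min_size=2):
    """Split phase of split-and-merge: recursively divide the image into
    quadrants until each block's intensity standard deviation is at most
    `threshold` (or the block reaches `min_size`). A merge phase would
    then fuse adjacent blocks with similar attributes."""
    blocks = []

    def recurse(r0, r1, c0, c1):
        if image[r0:r1, c0:c1].std() <= threshold or (r1 - r0) <= min_size:
            blocks.append((r0, r1, c0, c1))  # block is homogeneous: keep it
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        recurse(r0, rm, c0, cm); recurse(r0, rm, cm, c1)  # top quadrants
        recurse(rm, r1, c0, cm); recurse(rm, r1, cm, c1)  # bottom quadrants

    recurse(0, image.shape[0], 0, image.shape[1])
    return blocks

# An 8x8 image whose left and right halves are each uniform:
# one split yields four homogeneous quadrants.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
blocks = split_homogeneous(img, threshold=1.0)
print(len(blocks))  # 4
```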
Region Growing:
Region growing algorithms are based on growing a
region while its interior remains homogeneous with
respect to certain features such as intensity, color, or texture.
Typical region growing grows a region by adding similar
neighbours, and it is one of the simplest and most popular
algorithms for region-based segmentation. The most
traditional implementation starts by choosing a starting
point called the seed pixel. The region then grows by
adding similar neighboring pixels according to a certain
homogeneity criterion, increasing the size of the region.
Thus, the homogeneity criterion has the function of
deciding whether a pixel belongs to the growing region or not.
The decision to merge is usually taken based only on
the contrast between the evaluated pixel and the region.
However, it is not easy to decide when this difference is
small (or large) enough to make a decision. The edge map
provides an additional criterion, such as the condition of
being a contour pixel, when deciding whether to aggregate it.
Encountering a contour signifies that the growing
process has reached the boundary of the region, so the
pixel should not be aggregated and the growth of the
region has finished.
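A minimal sketch of this seeded growth, assuming a 4-connected neighbourhood and a contrast-to-region-mean criterion; the threshold and test image are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed` by adding 4-connected neighbours whose
    intensity differs from the running region mean by at most `threshold`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1  # running sum and size of the region
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= threshold:
                    mask[nr, nc] = True       # aggregate the similar neighbour
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

img = np.array([[1, 1, 9, 9],
                [1, 2, 9, 9],
                [1, 1, 8, 9],
                [1, 1, 9, 9]], dtype=float)
region = region_grow(img, seed=(0, 0), threshold=2.0)  # grows over the left block
```

Note the criterion compares against the evolving region mean, so the result can depend on the visiting order; an edge map, as described above, would add a stopping condition at contour pixels.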
The algorithm is based on the work of Xiaohan et al.,
who proposed a homogeneity criterion consisting of the
weighted sum of the contrast between the region and the
pixel and the modulus of the gradient at the pixel.
A low value of this function signifies that it is
appropriate to aggregate the pixel into the region. A
similar proposal was made by Kara et al., where
at every iteration only pixels with low gradient values
(below a certain threshold) are aggregated to the
growing region. On the other hand, Gambotto proposed
using edge information to stop the growing process. This
assumes that the gradient takes a high value over a large part
of the region boundary, so the iterative growing process
is continued until the maximum of the average gradient
computed over the region boundary is detected.
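In the spirit of Xiaohan et al.'s weighted criterion, the cost below combines region contrast with the pixel's gradient modulus; a low value favours aggregation. The weight `alpha` and the sample values are illustrative assumptions, not taken from the original paper.

```python
def aggregation_cost(pixel_value, region_mean, gradient_modulus, alpha=0.7):
    """Weighted homogeneity criterion: a combination of the contrast
    between pixel and region and the pixel's gradient modulus.
    `alpha` (the weighting between the two terms) is a hypothetical choice."""
    contrast = abs(pixel_value - region_mean)
    return alpha * contrast + (1.0 - alpha) * gradient_modulus

# An interior pixel (low contrast, low gradient) scores lower than a
# boundary pixel (high gradient), so only the former would be aggregated.
interior = aggregation_cost(5.1, 5.0, gradient_modulus=0.2)
boundary = aggregation_cost(5.4, 5.0, gradient_modulus=9.0)
```

Thresholding this cost reproduces Kara et al.'s behaviour as a special case when the contrast term is ignored.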
Guidance of Seed Placement
The placement of the initial seed points is a
central issue for the results of region-based
segmentation. Despite their importance, the
traditional region growing algorithm chooses them
randomly or using an a priori image scan order.
In order to make a more reasonable decision,
edge information may be used to decide the
most suitable position in which to place a seed. It is
generally accepted that the growth of a region has to start
from inside it. The interior of the region is a
representative zone and allows obtaining a correct
sample of the region's characteristics. On the other hand,
it is necessary to avoid the boundaries between regions
when choosing the seeds, because they are unstable zones
that are not adequate for obtaining information about the region.
Therefore, this approach, named A3, uses the edge
information to place the seeds in the interior of the
regions. The seeds are placed in locations free of
contours and, in some proposals, as far as possible from
them.
The algorithm proposed by Sinclair has been taken as
the basic reference for the implementation of A3. In
this proposal, the Voronoi image generated from the
edge image is used to derive the placement of the
seeds. The intensity at every point in a Voronoi image is
the distance to the closest edge. The peaks in the
Voronoi image, reflecting the points farthest from the
contours, are then used as seed points for region
growing. However, A3 avoids the need to extract the
edge image, which involves the difficult step of
binarization, by generating the Voronoi image directly
from the gradient image.
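The idea can be sketched as follows: build a distance image from strong-gradient pixels and take its peak as the seed. The brute-force distance computation, the edge threshold, and the ring-shaped test image are illustrative simplifications; a real implementation would use a proper distance transform.

```python
import numpy as np

def seed_from_gradient(gradient, edge_threshold):
    """Voronoi-style seed placement: compute, for every pixel, the
    distance to the nearest strong-gradient (contour) pixel, and return
    the peak of that distance image as the seed location."""
    edges = np.argwhere(gradient >= edge_threshold)  # contour pixel coords
    h, w = gradient.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Brute-force distance to the closest contour pixel (fine for a sketch).
    d = np.sqrt((rows[..., None] - edges[:, 0]) ** 2 +
                (cols[..., None] - edges[:, 1]) ** 2).min(axis=-1)
    return np.unravel_index(np.argmax(d), d.shape)

# A gradient image whose border is one contour ring: the seed lands
# at the centre, the point farthest from any contour.
grad = np.zeros((5, 5))
grad[0, :] = grad[-1, :] = grad[:, 0] = grad[:, -1] = 10.0
print(seed_from_gradient(grad, edge_threshold=5.0))  # (2, 2)
```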
Boundary Refinement
As described above, region-based segmentation
yields a good detection of true regions, although, as is
well known, its sensitivity to noise causes the
boundary of the extracted region to be highly irregular.
A region-growing procedure is used to obtain an initial
estimate of a target region, which is then combined with
salient edge information to achieve a more accurate
representation of the target boundary. As in the over-
segmentation proposals, edge information here permits
the refinement of an initial result. Examples of this
approach are the works of Haddon and Boyce, Chu and
Aggarwal, or, more recently, Sato et al. Two
basic techniques may be considered as general ways to
refine the boundary of the regions:
1. Multiresolution: this method is based on
analysis at different scales. A coarse initial
segmentation is refined by increasing the
resolution.
2. Boundary Refinement by Snakes: a
further possibility is the integration of
region information with dynamic contours,
specifically snakes. The refinement of the
region boundary is performed by the energy
minimization of the snake.
Boundary Refinement by Snakes:
The snake method is known to resolve such problems
by locating the object boundary starting from an initial
estimate. However, snakes do not try to solve the entire
problem of finding salient image contours. The high
grey-level gradient of the image may be due to object
boundaries as well as noise and object textures, so the
optimization functions may have many local optima.
Classifier:
Classifier methods are pattern recognition techniques
that seek to partition a feature space derived from the
image using data with known labels. A feature space is
the range space of any function of the image, with the
most general feature space being the image
intensities themselves. A histogram, as shown in Fig.2a,
is an instance of a 1-D feature space. All pixels with
their associated features on the left side of the partition
would be grouped into one class.
Fig.2: Feature space methods and region growing: (a) a
histogram showing three apparent classes, (b) example
of region growing.
Classifiers are known as supervised methods since
they require training data that are manually segmented
and then used as references for automatically segmenting
new data. There are a number of ways in which training
data can be applied in classifier methods. A simple
classifier is the nearest-neighbor classifier, where each
pixel or voxel is assigned the same class as the
training datum with the closest intensity. The k-nearest-
neighbor (kNN) classifier is a generalization of this
approach. The kNN classifier is considered
nonparametric since it makes no underlying
assumption about the statistical structure of the data.
Another nonparametric classifier is the Parzen
window, where the classification is made according to
the majority vote within a predefined window of the feature
space centered at the unlabeled pixel intensity.
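A minimal intensity-based kNN sketch of the rule just described; the training intensities and labels are invented for illustration, and setting k=1 recovers the nearest-neighbor classifier.

```python
import numpy as np

def knn_classify(intensity, train_intensities, train_labels, k=3):
    """Classify a pixel intensity by majority vote among the k training
    samples with the closest intensities (k=1 gives the plain
    nearest-neighbor classifier)."""
    train_intensities = np.asarray(train_intensities, dtype=float)
    order = np.argsort(np.abs(train_intensities - intensity))[:k]  # k closest
    votes = np.asarray(train_labels)[order]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]  # majority vote

# Manually labelled training intensities for two tissue classes.
train_x = [10, 12, 11, 50, 52, 49]
train_y = [0, 0, 0, 1, 1, 1]
print(knn_classify(13, train_x, train_y, k=3))  # 0: nearest three are class 0
print(knn_classify(48, train_x, train_y, k=1))  # 1: nearest-neighbor rule
```

The Parzen-window variant would instead vote among all training samples falling inside a fixed-width window around the query intensity.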
4. CONCLUSION
Future research in the segmentation of medical images
will strive towards improving the accuracy, precision,
and computational speed of segmentation methods, as
well as reducing the amount of manual interaction.
Accuracy and precision may be improved by
incorporating prior information from atlases and by
combining discrete and continuous-based segmentation
methods. For improving computational efficiency, multiscale
processing and parallelizable methods such as neural
networks appear to be promising approaches.
Computational efficiency will be particularly important in
real-time processing applications.
Possibly the most important question surrounding
the use of image segmentation is its application in
clinical settings. Computerized segmentation methods
have already demonstrated their utility in research
applications and are now garnering increased use for
computer-aided diagnosis and radiotherapy planning. It
is unlikely that automated segmentation methods will
ever replace physicians, but they will likely become
crucial elements of medical image analysis. Segmentation
methods will be particularly valuable in areas such as
computer-integrated surgery, where visualization of
the anatomy is a critical component.
REFERENCES
[1] G.B. Aboutanos and B.M. Dawant. Automatic brain
segmentation and validation: image-based versus
atlas-based deformable models. In Medical Imaging,
SPIE Proc., volume 3034, pages 299-310, 1997.
[2] L.K. Arata, A.P. Dhawan, J.P. Broderick, M.F.
Gaskil-Shipley, A.V. Levy, and N.D. Volkow. Three-
dimensional anatomical model-based segmentation of
MR brain images through principal axes registration.
IEEE T. Biomed. Eng., 42:1069-1078, 1995.
[3] J. Ashburner and K. Friston. Multimodal image
coregistration and partitioning - a unified framework.
Neuroimage, 6:209-217, 1997.
[4] E.A. Ashton, M.J. Berg, K.J. Parker, J. Weisberg,
C.W. Chen, and L. Ketonen. Segmentation and feature
extraction techniques, with applications to MRI head
studies. Mag. Res. Med., 33:670-677, 1995.
[5] M.S. Atkins and B.T. Mackiewich. Fully automatic
segmentation of the brain in MRI. IEEE T. Med. Imag.,
17:98-109, 1998.
[6] N. Ayache, P. Cinquin, I. Cohen, L. Cohen, F.
Leitner, and O. Monga. Segmentation of complex three-
dimensional medical objects: a challenge and a
requirement for computer-assisted surgery planning and
performance. In R.H. Taylor, S. Lavallee, G.C. Burdea,
and R. Mosges, editors, Computer-integrated surgery:
technology and clinical applications, pages 59-74. MIT
Press, 1996.
[7] K.T. Bae, M.L. Giger, C. Chen, and C.E. Kahn.
Automatic segmentation of liver structure in CT images.
Med. Phys., 20:71-78, 1993.
[8] E. Bardinet, L.D. Cohen, and N. Ayache. Tracking
and motion analysis of the left ventricle with deformable
superquadrics. Med. Im. Anal., 1:129-149, 1996.
[9] E. Bardinet, L.D. Cohen, and N. Ayache. A
parametric deformable model to fit unstructured 3D
data. Comp. Vis. Im. Understand., 71:39-54, 1998.
[10] J. Besag. On the statistical analysis of dirty
pictures. CVGIP: Im. Understand., 57:359-372, 1986.
[11] J.C. Bezdek, L.O. Hall, and L.P. Clarke. Review of
MR image segmentation techniques using pattern
recognition. Med. Phys., 20:1033-1048, 1993.
[12] D.C. Bloomgarden, Z.A. Fayad, V.A. Ferrari, B.
Chin, M.G.S.J. Sutton, and L. Axel. Global cardiac
function using fast breath-hold MRI: Validation of new
acquisition and analysis techniques. Magnetic Resonance
in Medicine, 37:683-692, 1997.
[13] M.E. Brandt, T.P. Bohan, L.A. Kramer, and J.M.
Fletcher. Estimation of CSF, white and gray matter
volumes in hydrocephalic children using fuzzy clustering
of MR images. Computerized Medical Imaging and
Graphics, 18:25-34, 1994.
[14] B.H. Brinkmann, A. Manduca, and R.A. Robb.
Optimized homomorphic unsharp masking for MR
grayscale inhomogeneity correction. IEEE T. Med.
Imag., 17:161-171, 1998.
[15] M.S. Brown, M.F. McNitt-Gray, N.J. Mankovich,
J.G. Goldin, J. Hiller, et al. Method for segmenting chest
CT image data using an anatomical model: preliminary
results. IEEE T. Med. Imag., 16:828-839, 1997.
[16] M.E. Brummer, R.M. Merseerau, R.L. Eisner, and
R.R.J. Lewine. Automatic detection of brain contours in
MRI data sets. IEEE T. Med. Imag., 12:153-166, 1993.
