
Boundary Detection in Medical Images Using Edge Following Algorithm Based on Intensity Gradient and Texture Gradient Features

Krit Somkantha, Nipon Theera-Umpon*, Senior Member, IEEE, and Sansanee Auephanwiriyakul, Senior Member, IEEE

Abstract: Finding the correct boundary in noisy images is still a difficult task. This paper introduces a new edge following technique for boundary detection in noisy images. Utilization of the proposed technique is exhibited via its application to various types of medical images. Our proposed technique can detect the boundaries of objects in noisy images using the information from the intensity gradient via the vector image model and the texture gradient via the edge map. The performance and robustness of the technique have been tested by segmenting objects in synthetic noisy images and medical images, including prostates in ultrasound images, left ventricles in cardiac magnetic resonance (MR) images, aortas in cardiovascular MR images, and knee joints in computerized tomography images. We compare the proposed segmentation technique with the active contour models (ACM), geodesic active contour models, active contours without edges, gradient vector flow snake models, and ACMs based on vector field convolution, using the skilled doctors' opinions as the ground truths. The results show that our technique performs very well and yields better performance than the classical contour models. The proposed method is robust and applicable to various kinds of noisy images without prior knowledge of the noise properties.

Index Terms: Boundary extraction, edge detection, edge following, image segmentation, vector field model.

I. INTRODUCTION
IMAGE segmentation is an initial step before performing high-level tasks such as object recognition and understanding. Image segmentation is typically used to locate objects and boundaries in images. In medical imaging, segmentation is important for feature extraction, image measurements, and image display.


In some applications, it may be useful to extract the boundaries of objects of interest from ultrasound images [1], [2], microscopic images [3]-[5], magnetic resonance (MR) images [6]-[8], or computerized tomography (CT) images [9], [10]. Segmentation techniques can be divided into classes in many ways, depending on the classification scheme. The most commonly used segmentation techniques can be categorized into two classes, i.e., edge-based approaches and region-based approaches. The strategy of edge-based approaches is to detect the object boundaries by using an edge detection operator and then extract the boundaries by using the edge information. The problem of edge detection is the presence of noise, which results in random variations in level from pixel to pixel. Therefore, ideal edges are never encountered in real images [11], [12]. A great diversity of edge detection algorithms have been devised, with differences in their mathematical and algorithmic properties, such as Roberts, Sobel, Prewitt, Laplacian, and Canny, all of which are based on the difference of gray levels [13]-[16]. The difference of gray levels can be used to detect discontinuities in gray level. On the other hand, region-based approaches are based on the similarity of regional image data. Some of the more widely used approaches are thresholding, clustering, region growing, and splitting and merging [17]. However, these techniques still face a challenging problem, as they often fail to extract the correct boundaries of objects in noisy images.
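To make the notion of a difference-of-gray-levels operator concrete, the following Python sketch applies the standard Sobel masks and thresholds the gradient magnitude. It is a minimal illustration only; the threshold value is an arbitrary choice of ours rather than anything prescribed above.

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(image, threshold=50.0):
    """Binary edges from the gradient magnitude of the standard Sobel masks."""
    f = image.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gray-level differences
    ky = kx.T                                                         # vertical gray-level differences
    gx = convolve(f, kx)
    gy = convolve(f, ky)
    return np.hypot(gx, gy) > threshold  # large gray-level differences mark edges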
In recent years, there have been several new methods to solve the problem of boundary detection, e.g., the active contour model (ACM), the geodesic active contour (GAC) model, active contours without edges (ACWE), the gradient vector flow (GVF) snake model, the vector field convolution (VFC) snake model, etc. The snake models have become popular especially in boundary detection, where the problem is more challenging due to the poor quality of the images. The ACMs, also known as snakes, are curves defined within an image domain that can be moved under the influence of internal energy and external energy [18]-[20]. The internal energy is designed to keep the model smooth during deformation. The external energy is designed to move the model toward an object boundary or other desired features within an image. However, the snake has weaknesses and limitations, namely a small capture range and difficulties progressing into concave boundary regions. The GAC model is an extension of the ACM that takes into account the geometric information of an image [21]. An ACM based on curve evolution and level sets, namely the ACWE, can detect object


boundaries that are not necessarily defined by gradient [22]. The GVF is an ACM with a new external energy [23], [24]. This new external energy is computed as a diffusion of the gray-level gradient vectors of a binary edge map derived from the image. The resultant field has a large capture range and forces active contours into concave regions. VFC was applied as a new external energy of an ACM and proved to be better than the existing snake models [25]. However, when an image is highly noisy and possesses a complex background, determining the correct boundaries of objects from the gradients is not easy. Most snake models require a very accurate initial contour estimate of the object. The contour of a snake model can converge to a wrong boundary if the initial contour is not close enough to the desired boundary.
Though many algorithms for boundary detection have been developed to achieve good performance in the field of image processing, most algorithms for detecting the correct boundaries of objects have difficulties in medical images, in which ill-defined edges are encountered [26]-[28]. Medical images are often noisy and too complex to expect local, low-level operations to generate perfect primitives. The complexity of medical images renders correct boundary detection very difficult.
To remedy the problem, we propose a new technique for boundary detection of ill-defined edges in noisy images using a novel edge following scheme. The proposed edge following technique is based on the vector image model and the edge map. The vector image model provides a more complete description of an image by considering both the directions and magnitudes of image edges. From the vector image model, a derivative-based edge operator is applied to yield the edge field [29]. The proposed edge vector field is generated by averaging magnitudes and directions in the vector image. The edge map is derived from Laws' texture feature [30] and Canny edge detection [31]. The vector image model and the edge map are applied to select the best edges.
This paper is organized as follows. Section II describes the proposed boundary detection technique. In Section III, we show the experimental results on synthetic noisy images, prostate ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. The results of our technique are compared with those of five snake models by using the opinions of skilled doctors as the ground truth. Section IV concludes this paper.
II. BOUNDARY EXTRACTION ALGORITHM

A. Average Edge Vector Field Model

We exploit the edge vector field to devise a new boundary extraction algorithm [29]. Given an image f(x, y), the edge vector field is calculated according to the following equations:

$e(i,j) = \frac{1}{k}\left(M_x(i,j)\,\hat{i} + M_y(i,j)\,\hat{j}\right)$  (1)

$\approx \frac{1}{k}\left(\frac{\partial f(x,y)}{\partial y}\,\hat{i} - \frac{\partial f(x,y)}{\partial x}\,\hat{j}\right)\bigg|_{i,j}$  (2)

$k = \max_{i,j}\sqrt{M_x(i,j)^2 + M_y(i,j)^2}.$  (3)

Each component is the convolution between the image and the corresponding difference mask, i.e.,

$M_x(i,j) = G_y(x,y) \otimes f(x,y) \approx \frac{\partial f(x,y)}{\partial y}$  (4)

$M_y(i,j) = G_x(x,y) \otimes f(x,y) \approx \frac{\partial f(x,y)}{\partial x}$  (5)

where G_x and G_y are the difference masks of the Gaussian weighted image moment vector operator in the x and y directions, respectively [29]:

$G_x(x,y) = \frac{x}{\sqrt{x^2+y^2}}\,\frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)$  (6)

$G_y(x,y) = \frac{y}{\sqrt{x^2+y^2}}\,\frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right).$  (7)

Edge vectors of an image indicate the magnitudes and directions of edges, which form a vector stream flowing around an object. However, in an unclear image, the vectors derived from the edge vector field may be distributed randomly in magnitude and direction. Therefore, we extend the capability of the previous edge vector field by applying a local averaging operation in which the value of each vector is replaced by the average of all the values in its local neighborhood, i.e.,

$M(i,j) = \frac{1}{M_r}\sum_{(i,j)\in N}\sqrt{M_x(i,j)^2 + M_y(i,j)^2}$  (8)

$D(i,j) = \frac{1}{M_r}\sum_{(i,j)\in N}\tan^{-1}\left(\frac{M_y(i,j)}{M_x(i,j)}\right)$  (9)

where M_r is the total number of pixels in the neighborhood N. We apply a 3 × 3 window as the neighborhood N throughout our research.

Fig. 1. (a) Original unclear image. (b) Result from the edge vector field and zoomed-in image. (c) Result from the proposed average edge vector field and zoomed-in image.

An example of the edge vector field and the average edge vector field is displayed in Fig. 1. Fig. 1(b) and (c) shows the results of the edge vector field and the average edge vector field of the original image in Fig. 1(a). From the result, we can see that our proposed average edge vector field yields more descriptive vectors along the object edge than the original edge vector field does.


Fig. 3. Edge masks used for detecting image edges (normal direction constraint).

Fig. 2. (a) Synthetic noisy image. (b) Left ventricle in the MR image. (c) Prostate ultrasound image. (d)-(f) Corresponding edge maps derived from Laws' texture and Canny edge detection.

This idea is exploited for the boundary extraction algorithm of objects in unclear images.
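To illustrate Section II-A, the Python sketch below builds difference masks following the reconstructed forms of (6) and (7), convolves them with the image to obtain Mx and My as in (4) and (5), and then averages magnitude and direction over a 3 × 3 neighborhood as in (8) and (9). The mask radius and the value of σ are illustrative assumptions, since the paper does not state them, and the quadrant-aware arctan2 is used in place of the plain arctangent of (9).

import numpy as np
from scipy.ndimage import convolve, uniform_filter

def moment_masks(radius=3, sigma=1.5):
    """Difference masks G_x, G_y of the Gaussian-weighted moment operator, cf. (6)-(7)."""
    grid = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    y, x = grid[0], grid[1]
    r2 = x**2 + y**2
    r2[radius, radius] = 1.0                       # avoid division by zero at the center
    g = np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    gx = (x / np.sqrt(r2)) * g
    gy = (y / np.sqrt(r2)) * g
    gx[radius, radius] = gy[radius, radius] = 0.0  # the center pixel carries no moment
    return gx, gy

def average_edge_vector_field(f, radius=3, sigma=1.5):
    """Averaged magnitude M(i,j) and direction D(i,j) of the edge vector field, cf. (8)-(9)."""
    gx, gy = moment_masks(radius, sigma)
    mx = convolve(f.astype(float), gy)             # M_x ~ df/dy, cf. (4)
    my = convolve(f.astype(float), gx)             # M_y ~ df/dx, cf. (5)
    magnitude = np.hypot(mx, my)
    direction = np.arctan2(my, mx)                 # arctan(M_y / M_x), quadrant-aware
    # replace each value by the mean over its 3 x 3 neighborhood, cf. (8)-(9)
    return uniform_filter(magnitude, size=3), uniform_filter(direction, size=3)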
B. Edge Map

The edge map consists of the edges of objects in an image, derived from Laws' texture and Canny edge detection. It gives important information about the boundaries of objects in the image, which is exploited in the decision for edge following.
1) Laws' Texture: The texture feature images of Laws' texture are computed by convolving an input image with each of the masks. Given a column vector L = (1, 4, 6, 4, 1)^T, the 2-D mask l(i, j) used for texture discrimination in this research is generated by L L^T. The output image is obtained by convolving the input image with the texture mask.
2) Canny Edge Detection: The Canny approach to edge detection is optimal for step edges corrupted by white Gaussian noise. The edge detector is assumed to be the output of a filter that reduces the noise and locates the edges. The first step of Canny edge detection is to convolve the output image obtained from the aforementioned Laws' texture, t(i, j), with a Gaussian filter. The second step is to calculate the magnitude and direction of the gradient. The third step is nonmaximal suppression to identify edges: the broad ridges in the magnitude must be thinned so that only the magnitudes at the points of greatest local change remain. The last step is a double-threshold algorithm used to detect and link edges.
The edge map provides important edge information. This idea is exploited for extracting object boundaries in unclear images. Examples of the edge maps are shown in Fig. 2.
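A minimal sketch of the edge map of Section II-B follows: the image is convolved with the 5 × 5 Laws mask L L^T built from L = (1, 4, 6, 4, 1)^T, and Canny edge detection is then applied to the resulting texture image. The use of scikit-image's canny and the chosen σ are our implementation assumptions; the paper does not prescribe a particular implementation.

import numpy as np
from scipy.signal import convolve2d
from skimage.feature import canny

def edge_map(image, canny_sigma=2.0):
    """Binary edge map E(i,j): Laws' texture image followed by Canny edge detection."""
    L = np.array([1.0, 4.0, 6.0, 4.0, 1.0]).reshape(-1, 1)
    laws_mask = L @ L.T                                 # 2-D mask l(i,j) = L L^T
    t = convolve2d(image.astype(float), laws_mask, mode='same', boundary='symm')
    t = (t - t.min()) / (t.max() - t.min() + 1e-12)     # scale to [0, 1] before Canny
    return canny(t, sigma=canny_sigma)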
C. Edge Following Technique

The edge following technique is performed to find the boundary of an object. Most edge following algorithms take the edge magnitude as the primary information for edge following. However, the edge magnitude information is not efficient enough for searching for the correct boundary of objects in noisy images, because it can be very weak in some contour areas. This is exactly the reason why many edge following techniques fail to extract the correct boundary of objects in noisy images. To remedy the problem, we propose an edge following technique that uses information from the average edge vector field and the edge map. This gives more information for searching for the boundary of objects and increases the probability of finding the correct boundary. The magnitude and direction of the average edge vector field give information on the boundary, which flows around an object. In addition, the edge map gives information on edges that may be part of the object boundary. Hence, both the average edge vector field and the edge map are exploited in the decision of the edge following technique. At the position (i, j) of an image, the successive positions of the edges are calculated from a 3 × 3 matrix
$L_{ij}(r,c) = \alpha M_{ij}(r,c) + \beta D_{ij}(r,c) + \gamma E_{ij}(r,c), \quad 0 \le r \le 2,\ 0 \le c \le 2$  (10)

where α, β, and γ are the weight parameters that control how the edge flows around an object. A larger value of an element in L_ij indicates a stronger edge in the corresponding direction. The 3 × 3 matrices M_ij, D_ij, and E_ij are calculated as follows:
$M_{ij}(r,c) = \frac{M(i+r-1,\,j+c-1)}{\max_{i,j} M(i,j)}$  (11)

$D_{ij}(r,c) = 1 - \left|D(i,j) - D(i+r-1,\,j+c-1)\right|$  (12)

$E_{ij}(r,c) = E(i+r-1,\,j+c-1), \quad 0 \le r \le 2,\ 0 \le c \le 2$  (13)

where M(i, j) and D(i, j) are the proposed average magnitude and direction of the edge vector field, as given in (8) and (9), and E(i, j) is the edge map from Laws' texture and Canny edge detection. It should be noted that the value of each element in the matrices M_ij, D_ij, and E_ij ranges between 0 and 1.
Let C_k, k = 1, 2, . . . , 8, be the constraint masks of edge following to the next direction on the object boundary, as shown in Fig. 3. The constraint mask is selected by considering the direction of the vector model at a position (i, j). The mask whose direction is most similar to that of the vector is selected as the constraint for edge following. The value of each element in each mask corresponds to a direction. At the position (i, j), the next direction of the edge following technique is selected as the direction that gives the maximum value of the element-wise multiplication between L_ij and C_k.


Fig. 5. (a) Aorta in cardiovascular MR image. (b) Averaged magnitude [M(i, j)]. (c) Density of edge length [L(i, j)]. (d) Initial position map [P(i, j)] and initial position of edge following derived by thresholding with T_max = 0.95.

Fig. 4. (a) Edge map [E(i, j)]. (b) Result of counting the connected pixels [C(i, j)]. (c) Density of edge length [L(i, j)].

The next direction can be calculated by

$D_{ij,\mathrm{opt}} = \arg\max_{k} \sum_{r=0}^{2}\sum_{c=0}^{2} L_{ij}(r,c)\,C_k(r,c)$  (14)

where k = 1, 2, . . . , 8 denotes the eight directions indicated by the arrows at the centers of the masks shown in Fig. 3. The edge following starts from the initial position and proceeds until the end position is reached.
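The decision rule of (10)-(14) can be sketched as below: the 3 × 3 neighborhoods of the averaged magnitude, averaged direction, and edge map around the current pixel are combined with weights α, β, and γ, and the direction whose constraint mask gives the largest element-wise product is chosen. The list C of eight 3 × 3 constraint masks follows Fig. 3 and is not reproduced here; the default weights are taken from the first parameter setting reported in Section III.

import numpy as np

def next_direction(M, D, E, C, i, j, alpha=0.6, beta=0.2, gamma=0.2):
    """One edge-following step at pixel (i, j), cf. (10)-(14).

    M, D : averaged magnitude and direction of the edge vector field.
    E    : binary edge map.  C : list of eight 3 x 3 constraint masks (Fig. 3).
    Returns the index k (0..7) of the selected direction.
    """
    # 3 x 3 neighborhood, with r and c running over 0..2 as in (11)-(13)
    Mn = M[i - 1:i + 2, j - 1:j + 2] / M.max()                  # cf. (11)
    Dn = 1.0 - np.abs(D[i, j] - D[i - 1:i + 2, j - 1:j + 2])    # cf. (12)
    En = E[i - 1:i + 2, j - 1:j + 2].astype(float)              # cf. (13)
    Lij = alpha * Mn + beta * Dn + gamma * En                   # cf. (10)
    scores = [np.sum(Lij * Ck) for Ck in C]                     # cf. (14)
    return int(np.argmax(scores))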

D. Initial Position

In this section, we present a technique for determining a good initial position of edge following that can be used for the boundary detection. The initial position problem is very important in the classical contour models. Snake models can converge to a wrong boundary if the initial position is not close enough to the desired boundary. Finding the initial position for the classical contour models is still difficult and time consuming [32], [33]. In the proposed technique, the initial position of edge following is determined by the following steps.
The first step is to calculate the average magnitude [M(i, j)] using (8). A position with high magnitude should be a good candidate for a strong edge in the image. The second step is to calculate the density of edge length for each pixel from an edge map. The edge map [E(i, j)], as a binary image, is obtained by Laws' texture and Canny edge detection. The idea of using density is to obtain a measurement of the edge length. The density of edge length [L(i, j)] at each pixel can be calculated from

$L(i,j) = \frac{C(i,j)}{\max_{i,j} C(i,j)}$  (15)

where C(i, j) is the number of connected pixels at each pixel position. An example of counting the number of connected pixels is shown in Fig. 4(a) and (b). The density of edge length from the example is shown in Fig. 4(c). The third step is to calculate the initial position map P(i, j) from the summation of the average magnitude and the density of edge length, i.e.,

$P(i,j) = \frac{1}{2}\left(M(i,j) + L(i,j)\right).$  (16)

The last step is the thresholding of the initial position map. We have to threshold the map in order to detect the initial position of edge following. If P(i, j) > T_max, then P(i, j) is an initial position of edge following. We obtained the initial position by setting T_max to 95% of the maximum value. The initial positions from our method are positions that are close to the edges of the areas of interest. An example of the initial position derived from our method is shown in Fig. 5 to illustrate that the method can be applied to ill-defined edges in medical images.
We can see that any one of the white circle points in the initial position map is a good candidate to be the initial position for our edge following technique. However, the maximum value of the white circle points is used in this research. After determining the suitable initial position, the proposed technique follows edges along the object boundary until a closed-loop contour is achieved. This causes a limitation of the technique, i.e., the boundary must be a closed loop.
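The four steps above can be sketched as follows: C(i, j) is obtained by counting, for every pixel, the size of the connected edge component it belongs to (8-connectivity is our assumption), L(i, j) follows (15), the map P(i, j) follows (16) with M normalized to [0, 1] so the two terms are comparable (also our assumption), and the candidates are thresholded at 95% of the maximum of P.

import numpy as np
from scipy.ndimage import label

def initial_positions(M, E):
    """Initial position map P(i,j) and candidate starting pixels, cf. (15)-(16)."""
    # C(i,j): number of edge pixels connected to (i,j) in the edge map (8-connected)
    labels, _ = label(E, structure=np.ones((3, 3), dtype=int))
    component_size = np.bincount(labels.ravel())
    component_size[0] = 0                          # background contributes no edge length
    C = component_size[labels]
    L = C / max(C.max(), 1)                        # density of edge length, cf. (15)
    P = 0.5 * (M / M.max() + L)                    # initial position map, cf. (16)
    T_max = 0.95 * P.max()                         # threshold at 95% of the maximum
    return P, np.argwhere(P > T_max)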

III. EXPERIMENTAL RESULTS


We tested the performance of the proposed method by comparing its results with the results from five classical contour
models. We also computed the probability of error in boundary detection and the Hausdorff distance of the six methods by
comparing their results with the opinion from expert medical
doctors. Finally, we tested the efficiency of time consumption
by comparing running times of the six methods. Each input
image was preprocessed by a 3 3 median filter prior to the
application of each method.
A. Boundary Extraction in Synthetic Noisy Images and Medical Images

The proposed method was first implemented on synthetic images. The synthetic images were generated from original binary images by corrupting them with additive white Gaussian noise. We did this to make sure that the ground truths of the boundaries were known. Several synthetic images were tested and intuitive results were achieved, i.e., the boundary detection results for images with a higher signal-to-noise ratio were better than those with a lower ratio. We further tested our method by detecting object boundaries in several types of medical images, including prostates in ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. The prostate ultrasound images were obtained from the CIMI Lab website of Nanyang Technological University [34] and the UW Image Computing Systems Lab [35]. The cardiac MR images of left ventricles were obtained from York University [36] and the Medical School, Chiang Mai University (CMU). The cardiovascular MR images of aortas were obtained from the Imaging Consult website [37] and the Medical School, CMU. The CT images of knee joints were obtained from the Learning Radiology website [38] and the Medical School, CMU.
TABLE I
ERRORS IN IMAGE SEGMENTATION AND HAUSDORFF DISTANCE BETWEEN TWO SKILLED DOCTORS ON EACH TYPE OF MEDICAL IMAGE

The total numbers of synthetic, prostate ultrasound, left ventricle MR, aorta MR, and knee joint CT images are 20, 10, 20, 30, and 22, respectively. The object boundaries of these organs are very useful in the diagnosis and treatment planning
for the corresponding diseases. For comparison, we tried to apply the traditional edge following technique [39], [40] in this
research but it did not perform well. Therefore, we turned to
five conventional boundary detection methods, i.e., the ACM, GAC,
ACWE, GVF, and VFC snake models that are normally applied
to solve the problem of boundary detection. To make the comparison fair, the initial contours of the five snake models were
selected manually to make them close to the object boundary.
We adjusted the weight parameters of the ACM, GAC, ACWE, GVF, and VFC snake models between 0.1 and 0.5 with a 0.1 increment. The results of the five snake models were selected from the best experimental results over all parameter settings. For the weight parameters of the proposed method, we used just three settings: α = 0.6, β = 0.2, γ = 0.2; α = 0.5, β = 0.2, γ = 0.3; and α = 0.4, β = 0.3, γ = 0.3 for all
images. The results of our method were selected from the best
experimental results of the three parameter settings.

Fig. 6. Prostate ultrasound images. (a) Original image. (b) Doctor's delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

Fig. 7. Left ventricle in cardiac MR images. (a) Original image. (b) Doctor's delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

B. Boundary Detection Evaluation Measures


To further evaluate the efficiency of the proposed method
in addition to the visual inspection, we evaluate our boundary
detection method numerically using the Hausdorff distance and
the probability of error in image segmentation
PE = P(O)P(B|O) + P(B)P(O|B)  (17)

where P(O) and P(B) are the a priori probabilities of objects and background in the images, and P(B|O) is the probability of error in classifying objects as background, and vice versa [41]. The objects surrounded by the contours obtained using the five snake models and the proposed method are compared with those manually drawn by skilled doctors from the Medical School, CMU. Figs. 6-9 show segmentation results on medical images from the six methods compared with the skilled doctors' opinions. The results from our proposed method are visually better than those from the five snake models. It yields contours that are very close to the experts' opinions.
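For reference, one way to compute the two measures from binary masks is sketched below; the priors in (17) are estimated from pixel counts of the reference mask, and the symmetric Hausdorff distance is taken between the boundary point sets of the two masks. The helper names and the 4-neighborhood boundary extraction are our assumptions, not part of the authors' protocol.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def probability_of_error(seg, ref):
    """PE = P(O)P(B|O) + P(B)P(O|B), cf. (17); seg and ref are boolean masks."""
    p_obj = ref.mean()                                # P(O), estimated from the reference
    p_bg = 1.0 - p_obj                                # P(B)
    p_bg_given_obj = (~seg & ref).sum() / ref.sum()   # object pixels labeled as background
    p_obj_given_bg = (seg & ~ref).sum() / (~ref).sum()
    return p_obj * p_bg_given_obj + p_bg * p_obj_given_bg

def hausdorff_distance(seg, ref):
    """Symmetric Hausdorff distance (in pixels) between the two boundary point sets."""
    def boundary_points(mask):
        # boundary pixels: mask pixels with at least one 4-neighbor outside the mask
        padded = np.pad(mask, 1)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                    & padded[1:-1, :-2] & padded[1:-1, 2:])
        return np.argwhere(mask & ~interior).astype(float)
    a, b = boundary_points(seg), boundary_points(ref)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])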
It should be noted that each of the medical images was delineated by two experts. This leads us to an investigation of the variation among the experts' opinions. Table I shows the PEs and the Hausdorff distances between the experts. The results, showing disagreements between them, confirm that object segmentation is subjective among the experts.

Fig. 8. Aorta in cardiovascular MR images. (a) Original image. (b) Doctor's delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

The average PEs of the results on the synthetic images using the ACM, GAC, ACWE, GVF, and VFC snake models, and the proposed method are 15.67%, 11.17%, 8.32%, 8.37%, 7.56%, and 7.10%, respectively. The average Hausdorff distances of the results on the synthetic images using the six methods are 6.40, 4.71, 2.76, 2.71, 2.54, and 2.17 pixels, respectively. The PEs and Hausdorff distances between the results of all six methods and each of the two experts' opinions on each type of medical image are shown in Tables II and III, respectively. The results


TABLE II
AVERAGE RESULTS ON ALL IMAGES BY MEANS OF THE PROBABILITY OF ERROR IN IMAGE SEGMENTATION (PE) (IN %)

TABLE III
AVERAGE RESULTS ON ALL IMAGES BY MEANS OF THE HAUSDORFF DISTANCE (IN PIXELS)

Fig. 9. Knee joints in CT images. (a) Original image. (b) Doctor's delineation. Results of (c) ACM, (d) GAC, (e) ACWE, (f) GVF, (g) VFC, and (h) the proposed technique.

shown in the tables are the average results over all images of each type. We can see that the results from the proposed method are much closer to the experts' opinions than those from the five snake models. The ACWE and GVF snake models provide similar performances that are better than those of the ACM and GAC models. The VFC model provides the best performance among the five snake models.

C. Efficiency of Computation Cost

We compared the efficiency in terms of time consumption between the proposed method and the snake models on two sets of synthetic images, i.e., images containing an ellipse-shaped object and images containing a U-shaped object. The sizes of the images used in this experiment were varied from 64 × 64 pixels to 1024 × 1024 pixels. The maximum number of iterations of the five snake models was set to 50 for all images. Fig. 10 shows the computation time for each method on the images containing a U-shaped object. The times on the images containing an ellipse-shaped object are similar and are not shown here to save space. From the figure, the five snake models require a huge computation cost. On the other hand, the proposed method can achieve the contour much faster. The experimental results show that the proposed method provides a much more efficient computation time than the five snake models. The reason is that it only considers a 3 × 3 neighborhood to follow the object boundary, while the other methods consider a much larger neighborhood in each iteration. This is another advantage of our proposed method over the classical contour models in detecting the correct object boundary.

Fig. 10. Calculation time comparison of the six methods on images containing a U-shaped object.

IV. CONCLUSION

We have designed a new edge following technique for boundary detection and applied it to the object segmentation problem in medical images. Our edge following technique incorporates a vector image model and edge map information. The proposed technique was applied to detect the object boundaries in several types of noisy images in which ill-defined edges were encountered. The proposed technique's performance on object segmentation and its computation time were evaluated by comparison with five popular methods, i.e., the ACM, GAC, ACWE, GVF, and VFC snake models. Several synthetic noisy images were created and tested so that the ground truths were known. The opinions of the skilled doctors were used as the ground

truths of the objects of interest in different types of medical images, including prostates in ultrasound images, left ventricles in cardiac MR images, aortas in cardiovascular MR images, and knee joints in CT images. Besides the visual inspection, all six methods were evaluated using the probability of error in image segmentation and the Hausdorff distance. The results of detecting the object boundaries in noisy images show that the proposed technique is much better than the five contour models. The results of the running time on several sizes of images also show that our method is more efficient than the five contour models. We have successfully applied the edge following technique to detect ill-defined object boundaries in medical images. The proposed method can be applied not only to medical imaging, but also to any image processing problem in which ill-defined edge detection is encountered.
ACKNOWLEDGMENT
The authors would like to thank the following surgeons
for drawing the respective ground truths: Dr. J. Tanyanopporn (prostates), Dr. W. Kultangwattana (left ventricles, aortas, and knee joints), Dr. S. Arworn (prostates). They thank
Dr. J. Euathrongchit, a diagnostic radiologist who provided the
ground truths for left ventricles, aortas, and knee joints and
the images of aortas and knee joints. They are also grateful to
Dr. A. Phrommintikul for providing the images of left ventricles.
REFERENCES
[1] J. Guerrero, S. E. Salcudean, J. A. McEwen, B. A. Masri, and S. Nicolaou, "Real-time vessel segmentation and tracking for ultrasound imaging applications," IEEE Trans. Med. Imag., vol. 26, no. 8, pp. 1079-1090, Aug. 2007.
[2] F. Destrempes, J. Meunier, M.-F. Giroux, G. Soulez, and G. Cloutier, "Segmentation in ultrasonic B-mode images of healthy carotid arteries using mixtures of Nakagami distributions and stochastic optimization," IEEE Trans. Med. Imag., vol. 28, no. 2, pp. 215-229, Feb. 2009.
[3] N. Theera-Umpon and P. D. Gader, "System level training of neural networks for counting white blood cells," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 32, no. 1, pp. 48-53, Feb. 2002.
[4] N. Theera-Umpon, "White blood cell segmentation and classification in microscopic bone marrow images," Lecture Notes Comput. Sci., vol. 3614, pp. 787-796, 2005.
[5] N. Theera-Umpon and S. Dhompongsa, "Morphological granulometric features of nucleus in automatic bone marrow white blood cell classification," IEEE Trans. Inf. Technol. Biomed., vol. 11, no. 3, pp. 353-359, May 2007.
[6] J. Carballido-Gamio, S. J. Belongie, and S. Majumdar, "Normalized cuts in 3-D for spinal MRI segmentation," IEEE Trans. Med. Imag., vol. 23, no. 1, pp. 36-44, Jan. 2004.
[7] H. Greenspan, A. Ruf, and J. Goldberger, "Constrained Gaussian mixture model framework for automatic segmentation of MR brain images," IEEE Trans. Med. Imag., vol. 25, no. 9, pp. 1233-1245, Sep. 2006.
[8] J.-D. Lee, H.-R. Su, P. E. Cheng, M. Liou, J. Aston, A. C. Tsai, and C.-Y. Chen, "MR image segmentation using a power transformation approach," IEEE Trans. Med. Imag., vol. 28, no. 6, pp. 894-905, Jun. 2009.
[9] P. Jiantao, J. K. Leader, B. Zheng, F. Knollmann, C. Fuhrman, F. C. Sciurba, and D. Gur, "A computational geometry approach to automated pulmonary fissure segmentation in CT examinations," IEEE Trans. Med. Imag., vol. 28, no. 5, pp. 710-719, May 2009.
[10] I. Isgum, M. Staring, A. Rutten, M. Prokop, M. A. Viergever, and B. van Ginneken, "Multi-atlas-based segmentation with local decision fusion: Application to cardiac and aortic segmentation in CT scans," IEEE Trans. Med. Imag., vol. 28, no. 7, pp. 1000-1010, Jul. 2009.
[11] J. R. Parker, Algorithms for Image Processing and Computer Vision. New York: Wiley, 1997.
[12] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.
[13] J. M. S. Prewitt, "Object enhancement and extraction," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds. New York: Academic Press, 1970, pp. 75-149.
[14] G. S. Robinson, "Edge detection by compass gradient masks," Comput. Graph. Image Process., vol. 6, no. 5, pp. 492-501, Oct. 1977.
[15] E. Argyle, "Techniques for edge detection," Proc. IEEE, vol. 59, no. 2, pp. 285-287, Feb. 1971.
[16] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.
[17] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. New York: McGraw-Hill, 1995.
[18] V. Caselles, F. Catte, T. Coll, and F. Dibos, "A geometric model for active contours," Numer. Math., vol. 66, pp. 1-31, 1993.
[19] F. Leymarie and M. D. Levine, "Tracking deformable objects in the plane using an active contour model," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 6, pp. 617-634, Jun. 1993.
[20] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis., vol. 1, pp. 321-331, 1987.
[21] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," Int. J. Comput. Vis., vol. 22, no. 1, pp. 61-79, 1997.
[22] T. Chan and L. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266-277, Feb. 2001.
[23] C. Xu and J. L. Prince, "Gradient vector flow: A new external force for snakes," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 1997, pp. 66-71.
[24] C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Trans. Image Process., vol. 7, no. 3, pp. 359-369, Mar. 1998.
[25] B. Li and T. Acton, "Active contour external force using vector field convolution for image segmentation," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2096-2106, Aug. 2007.
[26] I. N. Bankman, Handbook of Medical Imaging. San Diego, CA: Academic, 2000.
[27] J. Cheng and S. W. Foo, "Dynamic directional gradient vector flow for snakes," IEEE Trans. Image Process., vol. 15, no. 6, pp. 1563-1571, Jan. 2006.
[28] L. Ballerini, "Genetic snakes for color images segmentation," Lecture Notes Comput. Sci., vol. 2037, pp. 268-277, 2001.
[29] N. Eua-Anant and L. Udpa, "A novel boundary extraction algorithm based on a vector image model," in Proc. IEEE Symp. Circuits Syst., 1996, pp. 597-600.
[30] K. Laws, "Textured image segmentation," Ph.D. dissertation, Dept. Elect. Eng., Univ. Southern California, Los Angeles, 1980.
[31] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986.
[32] A. Guocheng, C. Jianjun, and W. Zhenyang, "A fast external force model for snake-based image segmentation," in Proc. Int. Conf. Signal Process., 2008, pp. 1128-1131.
[33] N. P. Tiilikainen, "A comparative study of active contour snakes," Dept. Comput. Sci., Univ. Copenhagen, Denmark, Tech. Rep. DIKU 07/04, 2007.
[34] CIMI Lab, Nanyang Technological University. (2007). [Online]. Available: http://mrcas.mpe.ntu.edu.sg/
[35] ICSL, University of Washington. (2004). [Online]. Available: http://icsl.washington.edu/node/316
[36] A. Andreopoulos and J. K. Tsotsos, "Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI," Med. Image Anal., vol. 10, no. 3, pp. 335-357, 2008.
[37] Imaging Consult. (2007). [Online]. Available: http://imaging.consult.com
[38] Learning Radiology. (2008). [Online]. Available: http://www.learningradiology.com
[39] S. Nagabhushana, Computer Vision and Image Processing. New Delhi: New Age International Publishers, 2006.
[40] I. Pitas, Digital Image Processing Algorithms and Applications. NJ: Wiley, 2000.
[41] X. W. Zhang, J. Q. Song, M. R. Lyu, and S. J. Cai, "Extraction of karyocytes and their components from microscopic bone marrow images based on regional color features," Pattern Recognit., vol. 37, no. 2, pp. 351-361, 2004.

Authors' photographs and biographies are not available at the time of publication.
