
Novel Accurate and Fast Optic Disc Detection


in Retinal Images With Vessel Distribution
and Directional Characteristics
Dongbo Zhang, Member, IEEE, and Yuanyuan Zhao

Abstract—A novel accurate and fast optic disc (OD) detection method is proposed that uses vessel distribution and directional characteristics. A feature combining three vessel distribution characteristics, i.e., local vessel density, compactness, and uniformity, is designed to find the possible horizontal coordinate of the OD. Then, according to the global vessel direction characteristic, a General Hough Transformation is introduced to identify the vertical coordinate of the OD. By confining the possible OD vertical range and by simplifying the vessel structure with blocks, we greatly decrease the computational cost of the algorithm. Four public datasets have been tested. The OD localization accuracy ranges from 93.8% to 99.7% when 8–20% vessel detection results are adopted to achieve OD detection. Average computation times for STARE images are about 3.4–11.5 s, depending on image size. The proposed method shows satisfactory robustness on both normal and diseased images and is better than many previous methods with respect to accuracy and efficiency.

Index Terms—Optic disc (OD) detection, retinal image, vessel.

I. INTRODUCTION
OPTIC disc (OD) is a major retinal structure that usually appears in retinal images as a circular bright object. The detection of the OD is useful for the analysis of retinal images. For example, it can serve as a landmark for localizing and segmenting the macula (fovea) and the vessel structure. Also, since the OD can easily be confounded with bright lesions, detecting its location is important for removing it from a set of candidate lesions. Therefore, researchers have always been interested in detecting the OD automatically in retinal images.
Many works have reported methods to detect the OD. Among them, the early methods usually use appearance characteristics, i.e., brightness, contrast, and shape information around the OD. These methods presented high success rates on normal images. However, they often failed on diseased images due to the change of OD appearance and the interference of lesions.

Manuscript received January 17, 2014; revised May 25, 2014, July 28, 2014, and October 12, 2014; accepted October 22, 2014. Date of publication October 28, 2014; date of current version December 31, 2015. This work was supported in part by the Construct Program of the Key Discipline in Hunan Province, in part by the Scientific Research Fund of Hunan Provincial Education Department under Grant 14A137, and in part by the National Natural Science Foundation of China under Grant 51277156.
The authors are with the College of Information Engineering, Xiangtan University, Xiangtan 411105, China (e-mail: zhadonbo@163.com; zhaoyuanyaun@163.com).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JBHI.2014.2365514

In more recent works, robust and accurate OD detection techniques using the anatomical structure of the OD, macula, and vasculature have been proposed. By observation, all retinal vessels originate from the OD and follow a parabola-like directional pattern, and the OD center is close to the vertex of the parabola. As a stable retinal image feature, the vessel structure is effective in helping locate the position of the OD. Foracchia et al. [1] identified the position of the OD using a geometrical model with two parabolas, which describe the global direction of the vasculature, so that the OD position can be located as the common vertex of the two parabolas. Hoover and Goldbaum [2] located the OD as the maximum convergence point of the blood vessels. In [3], the authors detected the OD by computing the degree of match between the vessel map and a vessels' direction matched filter. Lu and Lim [4] made use of the unique circular brightness structure associated with the OD and designed a line operator to capture such a circular bright object.
All these methods present relatively high success rates on diseased STARE images, but they are very time-consuming. For example, the geometrical model-based method proposed by Foracchia et al. [1] achieves a success rate of 97.5% with an average computation time of 2 min to localize the OD in a given image. The vessels' direction matched filter described by Youssif et al. [3] achieves an accuracy of 98.8%, but it takes an average computation time of 3.5 min per image to correctly locate the OD. Lu and Lim [4] need 4.5 min to achieve 96.3% detection accuracy.
To achieve fast OD detection, Mahfouz and Fahmy [5] observed that the vessels present in the OD mainly spread along the vertical direction, so in this area the vertical gradient component far outweighs the horizontal gradient, and the overall edge gradient is also greater than in other areas. By additionally considering brightness, the OD is located through a projection technique. Because it reduces the problem from one 2-D localization to two simple 1-D projections, the approach showed ultrafast detection efficiency, e.g., 0.46 s for a STARE image and 0.32 s for a DRIVE image. However, because it does not fully make use of the directional information of the blood vessels, its accuracy only reaches 92.6% on STARE images. Based on the line operator [4], Lu [6] presented an accurate and efficient OD detection and segmentation technique based on a circular transformation. The high efficiency of this method is owed to image downsampling and to the reduction of the search space by an OD probability map based on Mahfouz's method [5]. The algorithm costs about 5 s, which is substantially faster than many of the state-of-the-art methods.
Appearance and anatomical characteristics are the main features that can be used to detect the OD. But in many diseased retinal


images, the appearance characteristics show great variation, and the circular bright OD region may be destroyed, e.g., the OD has lower brightness and contrast, only part of the OD is observable, or, most seriously, the OD may be completely changed by retinopathy. Thus, methods based on appearance characteristics usually cannot achieve satisfactory accuracy for poor-quality and/or diseased images. Because the blood vessel structure is the most stable anatomical feature that can be used to locate the OD position, studies applying vessel directional characteristics [1], [3] achieved high OD detection accuracy (98.8%) on the public STARE image dataset. However, due to their complicated vessel modeling, these methods needed more than 2 min to process an image.
In this paper, a novel accurate and fast OD detection method based on vessel characteristics is proposed. The contributions of the method include two aspects. First, the vessel distribution and directional characteristics are used to locate the horizontal and vertical coordinates of the OD, respectively. To describe the vessel distribution characteristic, a feature combining local vessel density, compactness, and uniformity is proposed. Second, by reducing the 2-D search problem to two 1-D search problems, the method improves the efficiency of the OD search process. By confining the possible OD vertical coordinate range and by simplifying the vessel structure with blocks, we further decrease the computational cost of the algorithm. Experimental results show that when an appropriate vasculature is extracted, the method remains robust as long as a certain number of x_OD candidates are selected. This means that the method does not depend strongly on a fine vessel detection result, and it achieves accurate and fast detection performance on several public image datasets.
II. METHOD
A. Vessel Detection
Vessel structure detection is the premise of our method. To facilitate vessel detection by the Gabor filter, a fixed image size is preferred; thus, all test images are resized to 512 × 385 during detection. As in many previous works, the green channel of the RGB image is used because it has the highest contrast. To extract the vessel feature, a 2-D Gabor filter [7] is adopted:


g_{\lambda,\sigma,\theta,\varphi}(x, y) = \exp\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right) \cos\left(2\pi\frac{x'}{\lambda} + \varphi\right)

x' = x\cos\theta + y\sin\theta
y' = -x\sin\theta + y\cos\theta      (1)

where γ is the spatial aspect ratio, which determines the ellipticity of the receptive field; it is set to the constant 0.5 in this paper. The standard deviation σ determines the size of the receptive field. The parameter λ is the wavelength of the cosine factor. The ratio σ/λ determines the spatial frequency bandwidth; in this paper, it is fixed to σ/λ = 0.56. In our resized 512 × 385 experimental images, σ and λ are assigned to 3.36 and 6, respectively. The angle parameter θ ∈ [0, π) determines the preferred orientation. The phase offset φ ∈ (−π, π] determines the symmetry of g_{λ,σ,θ,φ}(x, y) with respect to the origin. Because the vessels appear as dark bar or line patterns, φ = π is expected to achieve vessel detection.
The response of the Gabor filter, r_{λ,σ,θ,φ}(x, y), is the convolution of g_{λ,σ,θ,φ}(x, y) with the input image I:

r_{\lambda,\sigma,\theta,\varphi}(x, y) = \iint I(u, v)\, g_{\lambda,\sigma,\theta,\varphi}(x - u, y - v)\, du\, dv.      (2)
Empirically, we observed that the responses of the Gabor filter show no great improvement when more orientations are included, while the computational time greatly increases. Thus, as in most references, 12 orientations are applied in our experiment, i.e., the difference between adjacent orientations is 15°. The responses of these orientations are computed, and their maximum value is selected as the final response of the Gabor filter:

A(x, y) = \max_{i} \mathrm{HWR}\left(r_{\lambda,\sigma,\theta_i,\varphi}(x, y)\right), \quad \theta_i = 0, \pi/12, 2\pi/12, \ldots, 11\pi/12

\mathrm{HWR}(z) = \begin{cases} 0, & z < 0 \\ z, & z \geq 0 \end{cases}      (3)

where HWR is called the half-wave rectification function.
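For illustration, the filter bank of (1)–(3) can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: it assumes OpenCV's cv2.getGaborKernel and cv2.filter2D, takes the resized green channel directly (with phase offset psi = pi the filter responds positively to dark, bar-like vessels), and leaves the exact normalization of the response unspecified.

import numpy as np
import cv2

def gabor_vessel_response(green, sigma=3.36, lam=6.0, gamma=0.5, n_orient=12):
    """Maximum half-wave-rectified Gabor response over 12 orientations (sketch)."""
    response = np.zeros_like(green, dtype=np.float32)
    ksize = int(6 * sigma) | 1                       # odd kernel size covering ~3 sigma
    for i in range(n_orient):
        theta = i * np.pi / n_orient                 # 0, 15, 30, ..., 165 degrees
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, gamma, psi=np.pi)
        r = cv2.filter2D(green.astype(np.float32), cv2.CV_32F, kern)
        r[r < 0] = 0                                 # half-wave rectification, Eq. (3)
        response = np.maximum(response, r)           # keep the strongest orientation
    return response

The resulting response map is what the hysteresis thresholding described next is applied to.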


To achieve a binary vessel map, a standard hysteresis thresholding method [8] is applied in our work. By adjusting the threshold pair, different vessel detection results are achieved. Due to the great variability of retinal images, it is impossible to find a pair of thresholds that suits retinal images from different sources. By observation, in most images the vessel pixels occupy about 10–20% of the FOV region, where FOV stands for field of view; i.e., only the retinal image region is considered, and the dark background around it is discarded. Fig. 1 illustrates different vessel detection results when different thresholds are chosen. To better appreciate the response of the Gabor filter, a false color (jet colormap) image is shown in Fig. 1(c). Fig. 1(d)–(f) shows detected vessel results occupying 5%, 10%, and 15% of the FOV region, respectively, which show coarse to fine vessel structure in this sample image. It is easy to observe that a too coarse or too fine vasculature causes false negative or false positive results. Because the main vessel structure is enough to describe the vessel distribution and global direction characteristics, it is not necessary to extract the complete and accurate vessel structure. In our experiments, the 10% FOV vessel detection result shows a relatively better vessel structure in most cases.
To obtain a vessel structure occupying about 10% of the FOV region, an appropriate pair of thresholds should be found. Therefore, an adaptive strategy for determining the threshold pair is adopted. First, a pair of large thresholds is assigned to t_l and t_h; usually, t_h is double t_l. In our experiment, their initial values are t_l^0 = 0.2 and t_h^0 = 0.4. If the detected vessel pixels cover less than 10% of the FOV region, then the threshold pair is iteratively reduced as follows:

t_l^k = k_t\, t_l^{k-1}
t_h^k = 2\, t_l^k.      (4)


Fig. 1. Illustration of vessel detection. (a) Original retinal image. (b) FOV region. (c) Response of Gabor filter (jet colormap). (d) 5% FOV vessel detection
result. (e) 10% FOV vessel detection result. (f) 15% FOV vessel detection result.

To avoid great changes in the detected vessel structure, adjacent threshold intervals should overlap, with an overlap of about 10–90%. Deduced from (4), the overlap of adjacent threshold intervals can be calculated as 2k_t − 1. Obviously, the extent of overlap can be tuned with the parameter k_t, and the appropriate range of k_t is about (0.5, 0.9); thus, we select a median value, i.e., k_t = 0.7, in our experiment.
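A possible implementation of this adaptive threshold search is sketched below, assuming scikit-image's apply_hysteresis_threshold, a boolean FOV mask, and a Gabor response normalized to [0, 1]; the 10% coverage target and the update rule follow (4) with k_t = 0.7.

import numpy as np
from skimage.filters import apply_hysteresis_threshold

def adaptive_vessel_map(response, fov_mask, target=0.10, kt=0.7, tl0=0.2, th0=0.4):
    """Lower the hysteresis threshold pair until vessels cover ~target of the FOV (sketch)."""
    resp = response / (response.max() + 1e-9)        # assumed normalization to [0, 1]
    fov_area = fov_mask.sum()
    tl, th = tl0, th0
    vessels = apply_hysteresis_threshold(resp, tl, th) & fov_mask
    while vessels.sum() < target * fov_area and tl > 1e-3:
        tl = kt * tl                                  # Eq. (4): t_l^k = k_t * t_l^(k-1)
        th = 2.0 * tl                                 #          t_h^k = 2 * t_l^k
        vessels = apply_hysteresis_threshold(resp, tl, th) & fov_mask
    return vessels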
B. Blood Vessel Distribution Characteristics and Candidate
OD Horizontal Localization
To demonstrate the distribution characteristic of the blood vessel structure, an example retinal image is shown in Fig. 2(a). Fig. 2(b) shows its corresponding binary manually labeled vessel map. Five vertical windows (of image height and twice the main vessel width) are selected to aid the observation. Among them, position 4 (centered at the OD vicinity) is marked with a bright yellow frame to distinguish it from the other four positions, i.e., positions 1, 2, 3, and 5. Fig. 2(c)–(g) illustrates these separated vertical windows and the bar graphs of the vascular segments appearing at these five specific horizontal positions. Usually, there is more than one connected vascular segment in each vertical window. In this study, the OD center is identified as the convergence point of the main vasculature. Obviously, there are fewer vascular segments in the vertical window that includes the OD center, and they present a compact distribution with high vessel density [see Fig. 2(f)]. In the other vertical windows [see Fig. 2(d), (e), and (g)], there are many connected vascular segments, scattered over a wider area with a relatively even distribution. Position 1 [see Fig. 2(c)] also has few vascular segments, but they are relatively evenly distributed and show a lower local vessel density.
Based on these observations, three distribution characteristics should be considered to locate the possible horizontal positions of the OD center. They are as follows.
1) Local vessel density, which can be estimated by the number of pixels of a vascular segment. Because the widest main vessels originate from the OD region, the local vessel density around the OD vicinity is usually higher than that of other areas.
2) Compactness: The vascular segments of the vertical window including the OD center usually concentrate around the OD center area, while at other positions, many vascular segments appear away from the OD horizontal centerline. To evaluate this characteristic, the standard deviation of the vertical coordinates of all vascular segment centers is calculated.
3) Uniformity: In the vertical window around the OD center, the majority of vessels belong to one or two vascular segments, while in other vertical windows, the vascular segments are distributed evenly; that is, the numbers of vessel pixels belonging to each vascular segment show no significant difference. The uniformity can be evaluated by the entropy concept borrowed from information theory.
To describe the vessel distribution characteristic, a feature combining the above characteristics is proposed as follows:


Fig. 2. Illustration of vascular distribution at different horizontal positions. The boxed digits in (b) mark five selected vertical windows, and (c)–(g) show the vascular segments appearing in these windows and the corresponding bar graphs, where the x-axis represents the index of the connected vascular segment i (i = 1, . . . , 13) and the y-axis represents the corresponding number of pixels v_i of each vascular segment. (a) Example retinal image. (b) Manually labeled vessel map and five marked horizontal positions.

f_D(x) = \frac{\mathrm{std}(vc) \cdot \left(-\sum_{i=1}^{n_x} p_i \log_2 p_i\right)}{\max_i\{v_i\}}      (5)

where f_D(x) is the vessel distribution feature value at horizontal coordinate x. It combines three terms.
The first term, std(vc), measures the compactness of the vascular segments in the vertical window centered at position x. Here, vc = [vc_1, vc_2, . . . , vc_{n_x}] is the vector of vertical coordinates of the vascular segment centers, vc_i (i = 1, . . . , n_x) is the vertical coordinate of the geometrical center of vascular segment i, n_x is the total number of vascular segments in the given vertical window, and std is the standard deviation function.
The second term, (-1)\sum_{i=1}^{n_x} p_i \log_2 p_i, evaluates the uniformity of the vascular segment distribution, where p_i = m_i/M is the proportion of the ith vascular segment among the vessels of the specific vertical window, m_i is the number of pixels of the ith vascular segment, and M is the total number of vessel pixels in the window. Obviously, more vascular segments and a more even distribution result in a larger (-1)\sum_{i=1}^{n_x} p_i \log_2 p_i.
The third term, \max_i\{v_i\}, is the number of pixels of the largest vascular segment in the specific vertical window. Each v_i measures, to a certain extent, the local vessel density near vascular segment i.
It is not difficult to see that the feature f_D(x) is nonnegative, i.e., f_D(x) ≥ 0. Generally, the f_D(x) of the vertical window centered at the OD vicinity is lower than that of vertical windows centered at other horizontal locations.


TABLE I
FAILED OD HORIZONTAL LOCALIZATIONS AMONG THE SPECIFIED k LOWEST EXTREME POINTS UNDER THE 10% FOV VESSEL RESULT

             k = 1   k = 3   k = 5
DRIVE          1       1       0
STARE          4       2       0
DIARETDB0      9       2       0
DIARETDB1      6       3       0
Total         20       8       0

We believe that the OD cannot appear in a vertical window that contains no vessels, or that contains a single vascular segment whose vessel pixels number less than two main vessel widths. Thus, the f_D(x) of these vertical windows is discarded by predefining it to be positive infinity.
To compute f_D(x), the width of the sliding vertical window must be assigned in advance. Because the ratio of the retinal image diameter to the OD diameter and the ratio of the OD diameter to the main vessel width are roughly constant, we can estimate the OD diameter and the main vessel width after FOV segmentation [9]. As some studies have reported [4], [5], these two ratios approximately satisfy the following relationship:

\frac{1}{8} \le \frac{D_{OD}}{D_{FOV}} \le \frac{1}{5}, \qquad \frac{1}{7} \le \frac{D_{MV}}{D_{OD}} \le \frac{1}{6}      (6)

where D_FOV is the diameter of the retinal image region, D_OD is the diameter of the OD, and D_MV is the main vessel width. According to the above relationship, the ratios D_FOV/D_OD and D_OD/D_MV are assigned to 7 and 6.5, respectively, in our experiment. That means that if the diameter of the FOV region is R, then the OD diameter is about R/7, the main vessel width is about R/45.5, and the width of the sliding vertical window is twice the main vessel width, i.e., R/22.75.
By sliding the vertical window along the horizontal direction from left to right, we obtain f_D(x) for each position x. Fig. 3 plots the f_D(x) curve along the horizontal axis of the example image. Notice that the horizontal location of the OD (x_OD) can be identified as the location of the minimum of the 1-D f_D(x) signal:

x_{OD} = \arg\min_x f_D(x).      (7)

Fig. 3. Illustration of OD horizontal localization and the possible vertical range of y_OD. (a) Vasculature of an example retinal image. (b) Horizontal projection curve.

In most cases, the minimum points of f_D(x) occur at the actual OD horizontal position. However, in some cases there is only one vessel segment in a specific vertical window, so it presents the minimum f_D(x), i.e., f_D(x) = 0, and in some special cases the minimum point may not be situated at the actual OD horizontal coordinate. By these observations, simply selecting the minimum point is not always correct. Fortunately, the OD horizontal position is usually found among the lowest extreme points. To improve the robustness of the algorithm, the k lowest minimum extreme points are chosen as the possible horizontal locations of the OD. Table I lists, for four public image datasets, the number of images whose OD horizontal coordinate is not located among the k lowest minimum extreme points of f_D(x). If k is assigned to 1 or 3, there are, respectively, 20 or 8 failed images, while if k is assigned to 5, the actual OD horizontal coordinate is always identified correctly in the test images by one of the five candidate minimum extreme points. Although Table I only shows the test results for the 10% FOV vessel detection structure, similar regularity is found under other vessel detection results. This means that we can choose a limited number k of minimum extreme points as candidate x_OD.
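For concreteness, the window feature of (5) and the selection of the k lowest minima can be sketched as follows, assuming a binary vessel map, the FOV diameter R estimated beforehand, and scipy.ndimage for connected-component labeling; only the no-vessel case of the positive-infinity rule is implemented in this sketch.

import numpy as np
from scipy import ndimage

def window_feature(window):
    """f_D for one vertical window of the binary vessel map, per Eq. (5) (sketch)."""
    labels, n = ndimage.label(window)                 # connected vascular segments
    if n == 0:
        return np.inf                                 # no vessels: OD cannot be here
    idx = np.arange(1, n + 1)
    sizes = ndimage.sum(window, labels, index=idx)    # pixel counts v_i
    centers = ndimage.center_of_mass(window, labels, index=idx)
    rows = np.array([c[0] for c in centers])          # vertical coords of segment centers
    p = sizes / sizes.sum()
    entropy = -(p * np.log2(p)).sum()                 # uniformity term
    return np.std(rows) * entropy / sizes.max()       # compactness * uniformity / density

def candidate_xod(vessels, R, k=5):
    """Slide a window of width ~R/22.75 and return the k lowest local minima of f_D (sketch)."""
    h, w = vessels.shape
    half = max(1, int(round(R / 22.75 / 2)))
    fd = np.array([window_feature(vessels[:, max(0, x - half):x + half]) for x in range(w)])
    minima = [x for x in range(1, w - 1)
              if fd[x] <= fd[x - 1] and fd[x] <= fd[x + 1] and np.isfinite(fd[x])]
    return sorted(minima, key=lambda x: fd[x])[:k]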
C. Hough Transformation for Parabola Fitting and OD Vertical Localization
It can be observed that the global direction of the vasculature can be described by the main arcade of vessels and can be roughly represented by a parabola model. Assuming the origin is the upper-left corner of the image, a parabola model with axis parallel to the image horizontal axis is used:

(y - y_{OD})^2 = 4p\,(x - x_{OD})      (8)

where p is the focal length and (x_OD, y_OD) is the vertex of the parabola, which is also assumed to be the coordinate of the OD center. Thus, three parameters determine the parabola model. Because the horizontal coordinate of the OD, x_OD, is identified by (7), only two parameters, i.e., the focal length p and the vertical coordinate y_OD, need to be estimated. The ideas of the General Hough Transformation (GHT) [10] are adopted to achieve parabola curve fitting.


Fig. 4. Illustration of OD vertical localization. (a) Thinned main arcade of the vessel map. (b) Horizontal projection curve of f_D(x). (c) Block vessel map. (d) Five parabolas fitting the most vessel pixels at the five candidate x_OD.

The GHT accumulation matrix has two dimensions: one for the vertical coordinate y_OD and the other for the focal length p.
To simplify the search process, the parameter space should be quantized. In this study, we choose 26 bins for the focal length p (between p_min = 9 and p_max = 81). Generally, the shape of the parabola depends greatly on the focal length parameter p, and the search range of p is related to the global orientation of the main arcade of the vasculature. By observation, we believe p ∈ [9, 81] is an appropriate range for our parabola fitting algorithm. The range of y_OD is the integers between 1 and the image height. However, because the OD center is the convergence point of the blood vessels, it cannot appear in the vessel-free periphery region, so the lower and upper bounds of the vessel vertical coordinates at position x_OD can be adopted to confine the possible search range of y_OD [see Fig. 3(a)]. To accelerate the search, a step greater than 1 is preferred when scanning y_OD. In our experiments, this step parameter is set to 2, which accelerates the algorithm without significantly reducing the localization precision of the OD center.
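The voting step can be sketched as below, under the quantization just described (focal lengths in [9, 81], y_OD scanned with a step of 2 within the vessel vertical range at x_OD). For each vessel point (or block center, see the blocking step described later) and each candidate y_OD, the focal length that satisfies (8) is computed and the nearest p bin receives a vote; only the parabola opening toward increasing x is voted for in this sketch, the mirrored case being handled analogously.

import numpy as np

def ght_parabola_votes(points, x_od, y_range, p_bins):
    """Accumulate Hough votes for (y_OD, p) given vessel points and a fixed x_OD (sketch).

    points : (N, 2) array of (x, y) vessel coordinates (or block centers).
    y_range: candidate y_OD values, e.g., range(y_min, y_max, 2).
    p_bins : uniformly quantized focal lengths, e.g., np.linspace(9, 81, 26).
    """
    acc = np.zeros((len(y_range), len(p_bins)), dtype=np.int32)
    for j, y_od in enumerate(y_range):
        dx = points[:, 0] - x_od
        dy = points[:, 1] - y_od
        valid = np.abs(dx) > 0.5                       # points to the side of the vertex
        p = dy[valid] ** 2 / (4.0 * dx[valid])         # solve Eq. (8) for p
        p = p[(p >= p_bins[0]) & (p <= p_bins[-1])]    # keep focal lengths in range
        idx = np.round((p - p_bins[0]) / (p_bins[1] - p_bins[0])).astype(int)
        np.add.at(acc[j], idx, 1)                      # vote for the nearest p bin
    return acc                                          # high cells = well-supported parabolas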
To reduce the search space, we implement the parabola fitting with a single-pixel-wide vessel structure; therefore, after Gabor filtering, a nonmaxima-suppression thinning technique [8] is used to obtain the central pixels of the blood vessels. The main arcade of the vessel structure provides enough global vessel directional information; thus, we only need the main arcade to achieve OD vertical localization. To extract the main arcade of the vasculature, vessel branches with an approximately vertical direction (> 70°) or with fewer than 20 pixels are removed [see Fig. 4(a)].
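This pruning step can be sketched as follows, using morphological skeletonization (skimage.morphology.skeletonize) in place of the nonmaxima-suppression thinning of [8], and a simple bounding-box proxy for the direction of each connected segment; the 20-pixel and 70-degree values follow the text.

import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import label, regionprops

def main_arcade(vessels, min_pixels=20, max_vertical_deg=70):
    """Thin the vessel map and drop short or near-vertical branches (sketch)."""
    skel = skeletonize(vessels > 0)                    # single-pixel-wide centerlines
    labels = label(skel, connectivity=2)
    keep = np.zeros_like(skel, dtype=bool)
    for seg in regionprops(labels):
        rows, cols = seg.coords[:, 0], seg.coords[:, 1]
        height = rows.max() - rows.min() + 1
        width = cols.max() - cols.min() + 1
        angle = np.degrees(np.arctan2(height, width))  # bounding-box proxy for direction
        if seg.area >= min_pixels and angle <= max_vertical_deg:
            keep[labels == seg.label] = True
    return keep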
To improve the efficiency of the follow-up parabola fitting, whose computational cost depends greatly on the number of detected vessel pixels, a blocking technique is adopted to simplify the main arcade of the vessel structure, which reduces the time of OD vertical localization by GHT parabola fitting. To obtain the block vessel map, the image space is split into nonoverlapping 10 × 10 pixel blocks; if any vessel pixels fall into a block, it is filled with 1, otherwise it is filled with 0. In this way, a binary block vessel map is acquired [see Fig. 4(c)]. Each white block is then used to fit the parabolas by the GHT. In Fig. 4(a), there are 3446 vessel pixels to be fitted, while in Fig. 4(c), only 427 white blocks need to be explored. Apparently, the computation time is greatly reduced by our blocking technique.
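A compact way to build such a block map is sketched below, assuming the pruned arcade is a binary numpy array; each 10 × 10 pixel tile that contains at least one vessel pixel is marked occupied, and its center is recorded so that it can serve as a voting point for the GHT sketch given earlier.

import numpy as np

def block_vessel_map(arcade, block=10):
    """Reduce a thinned binary vessel map to block occupancy and block centers (sketch)."""
    h, w = arcade.shape
    centers = []
    occupancy = np.zeros((int(np.ceil(h / block)), int(np.ceil(w / block))), dtype=np.uint8)
    for bi, r in enumerate(range(0, h, block)):
        for bj, c in enumerate(range(0, w, block)):
            if arcade[r:r + block, c:c + block].any():       # any vessel pixel in this tile?
                occupancy[bi, bj] = 1
                centers.append((c + block // 2, r + block // 2))  # (x, y) center used for voting
    return occupancy, np.array(centers)

Passing the returned centers as the points argument of the voting sketch means only a few hundred blocks, rather than thousands of pixels, need to be explored.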
More than one parabola will be found by the above GHT algorithm. The one that fits the most vessel pixels and accords best with the global vessel direction needs to be selected from all possible parabolas. At the specific k candidate OD horizontal


TABLE II
AMSE OF THE FIVE PARABOLAS FITTING THE MOST VESSEL PIXELS AT THE FIVE CANDIDATE x_OD

Parabola    1        2        3        4        5
AMSE        0.1081   0.2459   0.2362   0.3497   0.3706

TABLE III
INFORMATION OF THE FOUR PUBLIC IMAGE DATASETS

Dataset      Normal Images   Diseased Images   #Images
DRIVE        33              7                 40
STARE        31              50                81
DIARETDB0    20              110               130
DIARETDB1    5               84                89
Total        89              251               340

positions, the parabolas that fit the most vessel blocks are chosen first; then, the orientations of the vessel pixels lying on the k specific parabola curves are extracted to evaluate how well each parabola accords with the global vessel direction.
The direction of any point (x, y) belonging to the parabola curve is obtained by differentiating (8), which gives dy/dx = 2p/(y − y_OD), i.e.,

\theta_{mod}(x, y) = \arctan\left(\frac{2p}{y - y_{OD}}\right).      (9)
The direction of a vessel pixel is determined by the Gabor filter as the orientation giving the maximum response:

\theta_V(x, y) = \arg\max_{\theta_i} \mathrm{HWR}\left(r_{\lambda,\sigma,\theta_i,\varphi}(x, y)\right), \quad \theta_i = 0, \pi/12, 2\pi/12, \ldots, 11\pi/12.      (10)

TABLE IV
IMAGING ENVIRONMENT OF THE FOUR PUBLIC IMAGE DATASETS

Dataset      Fundus camera    Resolution    Field of view
DRIVE        Canon CR5        564 × 584     45°
STARE        TopCon TRV-50    605 × 700     35°
DIARETDB0    unknown          1500 × 1152   50°
DIARETDB1    Nikon F5         1500 × 1152   50°

The parabola with the minimum average mean square orientation error (AMSE) is selected as the final parabola:

\mathrm{AMSE} = \sqrt{\frac{1}{n_{vp}} \sum_{i=1}^{n_{vp}} \left(\theta_V(x_i, y_i) - \theta_{mod}(x_i, y_i)\right)^2}      (11)

where n_vp is the number of points common to the vessel map and the specific parabola model. Once the parabola with the minimum AMSE has been found, the corresponding parameters (p, x_OD, y_OD) of the parabola model are identified, and the expected OD center (x_OD, y_OD) is detected.
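The selection of (9)–(11) can be sketched as below, assuming the vessel orientation map theta_v from the Gabor filter bank, integer vessel (or block-center) coordinates, and an assumed tolerance for deciding whether a point lies on the candidate parabola; angle differences are folded into [0, pi/2] because orientations are defined modulo pi.

import numpy as np

def amse(parabola, vessel_points, theta_v):
    """Average mean square orientation error of Eq. (11) for one candidate parabola (sketch)."""
    p, x_od, y_od = parabola
    err2, n_vp = 0.0, 0
    for x, y in vessel_points:
        # keep only points lying (approximately) on the parabola of Eq. (8)
        if abs((y - y_od) ** 2 - 4.0 * p * (x - x_od)) > 4.0 * p:   # assumed tolerance
            continue
        theta_mod = np.arctan2(2.0 * p, (y - y_od))        # model direction, Eq. (9)
        d = abs(theta_v[y, x] - theta_mod) % np.pi         # orientations are modulo pi
        d = min(d, np.pi - d)
        err2 += d ** 2
        n_vp += 1
    return np.sqrt(err2 / n_vp) if n_vp else np.inf

# Among the candidate parabolas found at the k possible x_OD positions,
# the one with the smallest AMSE gives the final OD center (x_OD, y_OD).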
To describe the process of finding the most appropriate parabola model clearly, Fig. 4 illustrates the process of OD vertical localization by our method. The red circles marked in Fig. 4(b) indicate the five lowest minimum extreme points found in f_D(x). The five parabolas fitting the most vessel blocks at each possible x_OD are plotted over the example retinal image [see Fig. 4(d)]; among them, parabola 1, whose vertex is marked with a red circled +, has the minimum AMSE value (0.1081, Table II), while the vertices of the other parabolas are marked with green +.
III. EXPERIMENT RESULTS
A. Dataset
Four public datasets are used in our experiments, i.e., DRIVE [11], STARE [2], DIARETDB0 [12], and DIARETDB1 [13]. Tables III and IV show the information and imaging environment of these image datasets.
In the experiments, as in many previous approaches, if the estimated OD center falls within the OD boundary, as observed by the experimenter, then the detected OD location is considered correct.

TABLE V
OD DETECTION FAILED IMAGES OVER DIFFERENT VESSEL RESULTS AND k POSSIBLE x_OD FOR THE DRIVE DATASET

        5%   8%   10%   12%   15%   20%   Manual labeled vessel
k = 1   5    3    1     1     0     2     0
k = 3   2    1    1     1     0     0     0
k = 5   1    0    0     0     0     0     0

TABLE VI
OD DETECTION FAILED IMAGES OVER DIFFERENT VESSEL RESULTS AND k POSSIBLE x_OD FOR THE STARE DATASET

        5%   8%   10%   12%   15%   20%   Manual labeled vessel
k = 1   10   5    4     4     5     6     3
k = 3   4    2    2     2     1     2     1
k = 5   2    1    1     1     1     1     1

B. OD Detection Results
It should be noticed that all the characteristics used to locate the OD in this study are related to the vessel distribution and global direction information; thus, complete and accurate vessel structure extraction is important for OD detection. However, it remains a hard problem, especially for retinal images with abundant thin and weak vessels or with much pathology. Fortunately, our method is robust to the vessel structure: in most cases a fine vasculature is not necessarily needed, and the main vessel structure is enough for OD detection in our approach. To adapt to image datasets from different sources, the adaptive threshold pair (t_l, t_h) selection process, i.e., (4), is adopted to extract an appropriate main vessel structure.
To evaluate the effect of the vessel structure on OD localization, Tables V and VI show the number of failed images during OD detection by our method for the DRIVE and STARE image datasets. Coarse to fine vasculatures are presented with 5–20% FOV vessel results, and k (k = 1, 3, 5) minimum extreme points are

TABLE VII
OD DETECTION ACCURACY (%) OVER DIFFERENT VESSEL RESULTS AND k POSSIBLE x_OD FOR ALL DATASETS

        5%     8%     10%    12%    15%    20%
k = 1   87.9   93.8   94.1   94.4   94.1   93.5
k = 3   92.6   95.6   97.1   97.6   97.3   96.5
k = 5   94.1   98.2   99.7   99.7   99.1   98.5

selected as possible horizontal coordinates of the OD. Obviously, as more vessel details are presented, the number of failed images decreases; however, if there are many false positive vessel pixels, the number of failed images may increase. For example, for the DRIVE dataset, there are two failed images in the 20% FOV vessel result when k = 1, while all images are correctly detected with the 15% FOV vessel result and k = 1. Also, for the STARE dataset, there are five or six failed images with k = 1 and the 15% or 20% FOV vessel results, respectively, while there are only four failed images with the 10% and 12% FOV vessel results. Moreover, we can observe that for every vessel result, the number of failed images decreases as k increases. This means that when an appropriate vasculature is extracted, the method remains robust as long as a certain number of x_OD candidates are selected. Although the method shows excellent OD detection accuracy, it should be noticed that a complete and accurate vasculature still favors OD detection; the best OD detection accuracy is achieved when manually labeled vessels are used for OD localization in both the DRIVE and STARE image datasets. But as k increases, e.g., k = 5, our method can also achieve the best OD accuracy with 8–20% vessel results. Because too much vessel information is lost, the 5% FOV vessel results give the worst OD accuracy.
The OD detection accuracy of the four public image datasets with our method is listed in Table VII. The best accuracy (99.7%) is achieved when k = 5 and the 10% or 12% vessel results are provided. Moreover, for a given k, the accuracy shows no significant variation when 8–20% vessel results are used to localize the OD region; the variation of the accuracy is no more than 2%. This shows that our method is robust to the vessel structure and exhibits no great dependence on fine vessel detection results.
By experiment, we ultimately choose k = 5 and the 10% vessel result as the final parameters of our algorithm. There are in total 340 images in the four datasets, and only one image in STARE fails. Many of these test images suffer from different lesions and imaging artifacts. Thus, our proposed method presents excellent OD detection accuracy (99.7%) and is robust to various lesions and different qualities of retinal images.
The STARE dataset includes many pathological images and is widely used as a benchmark in many state-of-the-art OD detection methods. Table VIII shows the OD detection accuracy of our method and of other state-of-the-art methods. Lu's [6], Youssif's [3], and our method present the best accuracy (98.8%), each failing on only one image. Many OD detection papers mention the speed of their algorithms. Table VIII lists the times of the other algorithms as reported in the original papers [1]-[6].

TABLE VIII
OD DETECTION RESULTS FOR THE PROPOSED AND LITERATURE-REPORTED METHODS ON THE STARE DATASET

Method          Accuracy   #Failed images   Speed
Lu's [4]        96.3%      3                4.5 min
Lu's [6]        98.8%      1                5 s
Hoover [2]      89.0%      9                15 s
Foracchia [1]   97.5%      2                2 min
Youssif [3]     98.8%      1                3.5 min
Mahfouz [5]     92.6%      6                0.46 s
The proposed    98.8%      1                3.4–11.5 s

TABLE IX
OD DETECTION RESULTS OF THE PROPOSED METHOD ON THE STARE DATASET WHEN RESIZED IMAGES WITH DIFFERENT SCALES ARE TESTED

Scale   Accuracy   Speed
1       98.8%      11.5 s
0.7     98.8%      6.5 s
0.5     98.8%      4.2 s
0.3     98.8%      3.4 s

The most efficient method reported so far is the image feature projection approach [5]. Owing to the reduction of the 2-D problem into two 1-D projection problems, OD detection in a STARE image is achieved in only 0.46 s. The best accuracy on the STARE dataset has been reported by the methods of Youssif [3] and Lu [6], which achieve 98.8% accuracy in 3.5 min and 5 s, respectively. Obviously, 3.5 min is too long for real applications. We also note that the speed of Lu's [6] is achieved on a reduced version of the original retinal image, i.e., 0.3 of its original size. We observe that the vasculature is the most stable feature of the retinal image, and it also presents a scale-invariant characteristic. Thus, it is reasonable to believe that we can achieve OD detection at a higher speed on small-size retinal images. To evaluate the performance of our method on retinal images of different scales, we resize the STARE dataset to four scales, i.e., 1, 0.7, 0.5, and 0.3 of the original size.
Table IX shows the accuracy on the STARE dataset at different scales. The method presents excellent robustness against scale variation, i.e., the accuracy does not decrease with the change of image size, while the efficiency is greatly improved (from 11.5 to 3.4 s). In the experiments, the algorithm is implemented in MATLAB 2010 on Windows XP SP3 with an Intel Core 2 Duo P8400 2.4-GHz CPU and 2 GB of DDR2 RAM.
Fig. 5 shows the OD detection results of 12 pathological images by our method. In each image, the green parabola curve is the final parabola found to identify the OD center, and the vertex of the parabola, i.e., the OD center, is marked with a red +. All these images are chosen from the STARE dataset and suffer from much pathology or imaging artifacts. Among them, only Fig. 5(j) fails, i.e., the OD center we find falls outside of the OD boundary, although it is very close to it.


Fig. 5. OD detection results of selected STARE example images suffering from different types of lesions and imaging artifacts; the detected OD center is labeled with a red +, and the fitted parabola is plotted with a green line. (a) im0005, (b) im0020, (c) im0048, (d) im0026, (e) im0044, (f) im0042, (g) im0012, (h) im0043, (i) im0027, (j) im0041, (k) im0008, and (l) im0004.

Considering that vessel distribution and direction characteristics are applied to detect the OD location in our algorithm, we believe that the main reason for the failure is that image im0041 cannot supply a sizable vessel structure.
IV. DISCUSSION
The proposed method presents high OD detection accuracy in our experiments; only one image [im0041, Fig. 5(j)] fails out of a total of 340 retinal images. This is owed to the use of vessel distribution and directional characteristics. Although Foracchia et al. [1] and Youssif et al. [3] also use vessel characteristics, only direction information is considered in their studies. And because they search for the OD in the whole 2-D image space [3] or in a larger parameter space, i.e., (p, x_OD, y_OD) [1], their OD detection is very time-consuming: 3.5 and 2 min are needed by [3] and [1], respectively, to handle a STARE image. In our method, the possible OD horizontal coordinate x_OD is identified in advance by a feature describing the vessel distribution characteristic, and the possible search range [y_OD^1, y_OD^2] of the vertical coordinate is confined by the lower and upper bounds of the vessel ordinates at horizontal position x_OD; thus, we do not need to search for the OD in the whole image space, and only two parameters, i.e., the focal length p and the vertical coordinate y_OD, need to be determined in the Hough Transformation. On the basis of these factors, the difficulty of the problem is greatly reduced, which improves both the accuracy and the efficiency of our method.
Although a complete and accurate vasculature is favorable for describing the vessel distribution and directional characteristics, it is not a strict prerequisite for the algorithm, because it is not the only conclusive factor influencing the OD detection results. If we permit more minimum extreme points of f_D(x) as possible candidate x_OD, a fine vasculature is not necessarily needed. Provided that the main vessel information is preserved in a vessel result, we can always find a certain number of candidate OD horizontal positions to achieve robust OD detection with our method. It should be noticed that, although the vasculature of im0026 cannot be observed, the image was correctly detected thanks to the erroneous recognition of radial hemorrhages as vessels.
We noticed that different methods fail on different retinal images; e.g., Foracchia et al.'s [1] method failed on Im0027 [see Fig. 5(i)] and Im0008 [see Fig. 5(k)] with Track-1 data and on Im0041 [see Fig. 5(j)] and Im0026 [see Fig. 5(d)] with Track-2 data; Youssif et al.'s [3] failed on im0004 [see Fig. 5(l)]; Lu's [6] failed on im0044 [see Fig. 5(e)]; and our method failed on im0041 [see Fig. 5(j)]. Obviously, different methods use different features of the retinal image, and these features produce diverse OD detection errors. Based on this observation, an ensemble strategy combining different algorithms [14] or different retinal image features [15] may be introduced to further improve the OD detection accuracy. Although the reported methods have presented satisfying accuracy on the present public datasets, their performance should be evaluated on more difficult retinal images. These are our future work.
Although there are many parameters involved in the whole algorithm, some of them can be assigned in advance, e.g.,


the focal length range (between p_min = 9 and p_max = 81) for the Hough Transformation, the spatial aspect ratio γ set to the constant 0.5, and the phase offset φ = π for the Gabor filter. Other parameters can be determined automatically by the algorithm: the retinal image diameter R is estimated after FOV extraction, the main vessel width is then computed as R/45.5, and the wavelength λ and standard deviation σ of the Gabor filter can be tuned proportionally to the size of the retinal image. The hysteresis threshold pair (t_l, t_h) is determined by the adaptive process (4). Generally speaking, our method is easy to implement.
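As a worked illustration of how these quantities follow from the FOV diameter R, the derived parameters could be collected as below; this is a sketch under the stated ratios, and the proportional scaling of sigma and lambda with image width is an assumption based on the values quoted for the 512 × 385 resized images.

def derived_parameters(R, image_width=512):
    """Collect the parameters derived from the FOV diameter R (sketch)."""
    scale = image_width / 512.0                # assumed proportional scaling of Gabor parameters
    return {
        "od_diameter": R / 7.0,                # D_FOV / D_OD ~ 7
        "main_vessel_width": R / 45.5,         # = R / (7 * 6.5), since D_OD / D_MV ~ 6.5
        "window_width": R / 22.75,             # twice the main vessel width
        "gabor_sigma": 3.36 * scale,           # sigma / lambda kept at 0.56
        "gabor_lambda": 6.0 * scale,
        "p_range": (9, 81),                    # focal length search range for the GHT
        "hysteresis_init": (0.2, 0.4),         # (t_l^0, t_h^0), reduced by k_t = 0.7 per Eq. (4)
    }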
V. CONCLUSION
Vessel distribution and direction characteristics are used to propose a novel accurate and fast OD detection method in this paper. Although the vasculature is a prerequisite of our method, a fine vessel structure is not necessarily needed, which reduces the demand for a sophisticated vessel structure extraction approach. Because only vessel characteristics are used in this study, the method is robust to changes of OD appearance. By reducing the search space, through OD horizontal coordinate localization, the confinement of the possible range of the OD vertical coordinate, and the vessel blocking technique, the method achieves efficient OD detection. Because it presents excellent stability against image size changes, in real applications we can resize the original retinal image to a small size in advance, and accurate and fast OD detection can then be expected.
ACKNOWLEDGMENT
The authors would like to thank the Editor-in-Chief and the Associate Editor for their handling of the paper.
REFERENCES
[1] M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure," IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1189–1195, Oct. 2004.
[2] G. A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951–958, Aug. 2003.
[3] A. Youssif, A. Ghalwash, and A. Ghoneim, "Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter," IEEE Trans. Med. Imag., vol. 27, no. 1, pp. 11–18, Jan. 2008.
[4] S. Lu and J. H. Lim, "Automatic optic disc detection from retinal images by a line operator," IEEE Trans. Biomed. Eng., vol. 58, no. 1, pp. 88–94, Jan. 2011.
[5] A. E. Mahfouz and A. S. Fahmy, "Fast localization of the optic disc using projection of image features," IEEE Trans. Image Process., vol. 19, no. 12, pp. 3285–3289, Dec. 2010.
[6] S. Lu, "Accurate and efficient optic disc detection and segmentation by a circular transformation," IEEE Trans. Med. Imag., vol. 30, no. 12, pp. 2126–2133, Dec. 2011.
[7] C. Grigorescu, N. Petkov, and M. A. Westenberg, "Contour detection based on nonclassical receptive field inhibition," IEEE Trans. Image Process., vol. 12, no. 7, pp. 729–739, Jul. 2003.
[8] C. Grigorescu, N. Petkov, and M. A. Westenberg, "Contour and boundary detection improved by surround suppression of texture edges," Image Vision Comput., vol. 22, no. 8, pp. 609–622, 2004.
[9] M. Niemeijer, M. D. Abramoff, and B. V. Ginneken, "Fast detection of the optic disc and fovea in color fundus photographs," Med. Image Anal., vol. 13, no. 6, pp. 859–870, 2009.
[10] D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," in Readings in Computer Vision: Issues, Problems, Principles, and Paradigms. San Mateo, CA, USA: Morgan Kaufmann, 1987, pp. 714–725.
[11] J. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. V. Ginneken, "Ridge based vessel segmentation in color images of the retina," IEEE Trans. Med. Imag., vol. 23, no. 4, pp. 501–509, Apr. 2004.
[12] T. Kauppi, V. Kalesnykiene, J. K. Kamarainen, L. Lensu, I. Sorri, H. Uusitalo, H. Kalviainen, and J. Pietila, "DIARETDB0: Evaluation database and methodology for diabetic retinopathy algorithms," Lappeenranta Univ. Technol., Lappeenranta, Finland, Tech. Rep., 2006.
[13] T. Kauppi, V. Kalesnykiene, J. K. Kamarainen, L. Lensu, I. Sorri, H. Uusitalo, H. Kalviainen, and J. Pietila, "DIARETDB1: Diabetic retinopathy database and evaluation protocol," Lappeenranta Univ. Technol., Lappeenranta, Finland, Tech. Rep., 2007.
[14] R. J. Qureshi, L. Kovacs, and B. Harangi, "Combining algorithms for automatic detection of optic disc and macula in fundus images," Comput. Vis. Image Und., vol. 116, no. 1, pp. 138–145, 2012.
[15] A. P. Rovira and E. Trucco, "Robust optic disc location via combination of weak detectors," in Proc. IEEE Annu. Int. Conf. Eng. Med. Biol. Soc., 2008, pp. 3542–3545.

Dongbo Zhang (M'10) received the B.S. and M.S. degrees in computer science from Xiangtan University, Xiangtan, China, in 1996 and 2001, respectively, and the Ph.D. degree in control science and technology from Hunan University, Changsha, China, in 2007.
Since 1996, he has been with Xiangtan University, where he has been a Professor since 2012. His current research interests include digital image processing, pattern recognition, and machine learning.

Yuanyuan Zhao received the Bachelor of Engineering degree in electronics from the Hunan University of Science and Technology, Xiangtan, China, in 2011, and the Master's degree in pattern recognition and intelligent systems from Xiangtan University, Xiangtan, in 2014.
Her current research interests include digital image processing and pattern recognition.
