
A Segment based Technique for detecting Exudate from Retinal Fundus image

PRESENTED BY ATUL KUMAR


PhD RESEARCH SCHOLAR, DEPARTMENT OF ELECTRICAL & ELECTRONICS, UNIVERSITI TEKNOLOGI PETRONAS, MALAYSIA

CONTENT
1. INTRODUCTION
2. STAGES OF DIABETIC RETINOPATHY
3. DATASET SPECIFICATION
4. METHODOLOGY AND RESULTS
5. CONCLUSION
6. REFERENCES
7. APPENDIX

1.INTRODUCTION
Non-proliferative Diabetic Retinopathy (NPDR) is a microvascular complication of diabetes leading to irreversible loss of vision. It mainly comprises [1]:
1. Microaneurysms
2. Hemorrhages
3. Cotton wool spots
4. Hard exudates

Contd..
Microaneurysms - The capillaries get blocked, and the blood vessels swell to compensate for the reduced blood supply, leading to microaneurysms (MAs) [2].

Hemorrhages - MAs are usually thin-walled and prone to rupture, causing hemorrhages. Hemorrhages can also occur in blocked vessels due to undue expansion and stretching of the vascular walls [2].

Cotton wool spots - Loss of blood supply to the retina leads to the development of pale areas known as cotton wool spots [3].

Hard exudates - Hard exudates are the result of fats and proteins leaking out of permeable vessels along with water. The water is quickly reabsorbed into the vessels or into the tissue under the retina, but the fatty material is absorbed only very slowly [3].

Various stages of non-proliferative Diabetic Retinopathy

NORMAL EYE

http://www.fevr.net/THE%20EYE/normal_eye.htm

NORMAL EYE: Feature


This image shows a normal retina of a left eye. The retina is made up of two layers: the outer retinal pigment epithelium and the inner neurosensory layer. The optic disc (1) is the entry point of the optic nerve fibres; it measures approximately 1.5 mm in diameter. The macula (2) is the central area of the retina. Within it lies the fovea (3), a central depression in the macula which is approximately 1.5 mm in diameter. Using direct ophthalmoscopy, the fovea is seen as a light reflection (termed a reflex).

2.STAGES OF DIABETIC RETINOPATHY


Diabetic retinopathy has four stages:

Mild Nonproliferative Retinopathy: At this earliest stage, microaneurysms occur. They are small areas of balloon-like swelling in the retina's tiny blood vessels.

Continue
Moderate Nonproliferative Retinopathy: As the disease progresses, some blood vessels that nourish the retina are blocked.

Continue
Severe Nonproliferative Retinopathy: Many more blood vessels are blocked, depriving several areas of the retina of their blood supply. These areas of the retina send signals to the body to grow new blood vessels for nourishment.

Continue
Proliferative Retinopathy: At this advanced stage, the signals sent by the retina for nourishment trigger the growth of new blood vessels. This condition is called proliferative retinopathy. These new blood vessels are abnormal and fragile. They grow along the retina and along the surface of the clear vitreous gel that fills the inside of the eye.

CLASSIFICATION OF NPDR (National Screening Committee of UK)

3.MATERIAL AND DATASET


332 colour retinal images obtained from a Canon CR6-45 non-mydriatic retinal (3CCD) camera with a 45° field of view (FOV) are used as our initial image dataset. The images were derived from the DRIVE and STARE databases [5][6].

The image resolution was 565 × 584 pixels at 24-bit RGB. The FOV of each image is circular with a diameter of approximately 540 pixels.


These image sets also contain fundus photographs made after pupil dilation with one or more drops of phenylephrine hydrochloride (2.5%) and/or tropicamide (1%) [8].

http://www.isi.uu.nl/Research/Database/DRIVE/

3.1 Data Collection
These 332 images are subdivided into two datasets:

156 images as training and testing sets, used for feature extraction from the images and for the NPDR-based classification stages.

The remaining 176 colour images were employed to investigate the diagnostic accuracy of our system for identifying images containing any evidence of retinopathy.

Step 1 - 4.1 Image Pre-Processing and Normalization

Contd..
4.1.1 Image Pre-processing and Image Normalization
Typically, there is wide variation in the colour of the fundus from different patients, related to race and iris colour. The first step is therefore to normalize the retinal images across the set.
[a] Remove differences between the original images through brightness correction, contrast enhancement, and colour modification.
[b] Modify the pixel values of each image in the database by histogram specification.
[c] Convert the RGB image to a grayscale image.
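Steps [b] and [c] can be sketched as follows (a NumPy stand-in for the original MATLAB routines; function names are illustrative):

```python
import numpy as np

def to_gray(rgb):
    # [c] Luminance-weighted RGB -> grayscale (ITU-R BT.601 weights).
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def match_histogram(src, ref):
    # [b] Histogram specification: remap src's pixel values so its
    # histogram approximates ref's, via inverse-CDF lookup.
    s_vals, s_idx, s_counts = np.unique(src.ravel(), return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts).astype(float) / src.size
    r_cdf = np.cumsum(r_counts).astype(float) / ref.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # quantile mapping
    return matched[s_idx].reshape(src.shape)
```

In practice a reference histogram is chosen once for the whole database, so every normalized image shares the same intensity distribution.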

4.1.2 Image Filtering
[a] A 2-D filter is applied which returns the central part of the correlation, the same size as the input matrix X [7].
[b] Colour mapping is performed on the filtered image (after setting the dimensions of the image matrix).
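The 'same'-size 2-D correlation of step [a] can be sketched with SciPy (an illustrative stand-in for MATLAB's filter2, not the original code):

```python
import numpy as np
from scipy.signal import correlate2d

def filter2_same(kernel, img):
    # 2-D correlation returning the central part of the result,
    # the same size as the input image (zero-padded borders).
    return correlate2d(img, kernel, mode='same', boundary='fill',
                       fillvalue=0)
```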

Contd..
After image filtering, Adaptive Histogram Equalization (AHE) is implemented. The main problem with fundus images is uneven illumination, and this is resolved by AHE. In this methodology a point transformation is defined within a fairly large local window, assuming a local distribution of intensity values over the whole image:

g = 255 · [Ψ(f) − Ψ(Min)] / [Ψ(Max) − Ψ(Min)],  where Ψ(f) = [1 + exp((μw − f)/σw)]^(−1)   (1)

Here, Max and Min are the maximum and minimum intensity values of the image, and σw and μw are the standard deviation and mean of the local window.
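The windowed point transformation of Eq. (1) can be sketched as follows (a NumPy/SciPy approximation using box-filtered local statistics; the window size and exact border handling of the original are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_enhance(img, win=31):
    # Eq. (1): sigmoid point transformation built from the local mean
    # (mu_w) and standard deviation (sigma_w) of a window around each
    # pixel, rescaled by the global Min/Max intensities.
    f = img.astype(float)
    mu = uniform_filter(f, win)
    sigma = np.sqrt(np.clip(uniform_filter(f * f, win) - mu ** 2,
                            1e-6, None))
    def psi(x):
        # exponent clipped to keep exp() well behaved in flat regions
        return 1.0 / (1.0 + np.exp(np.clip((mu - x) / sigma, -30, 30)))
    fmin, fmax = f.min(), f.max()
    return 255.0 * (psi(f) - psi(fmin)) / (psi(fmax) - psi(fmin) + 1e-12)
```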

Resultant images after Step 1: Original Image, Color Map Image, Gray Scaled Image

Step 2 - 4.2 Image Boundary Tracing

Contd..
In the binary image, the boundary-tracing block traces object boundaries, where objects are represented by nonzero pixels and the background by zero pixels.

4.2.1 Calculating the Threshold of the Resultant Image:
[a] Convert the original RGB sampled image into a grayscale image.
[b] A global image threshold computed by Otsu's method is used to threshold the grayscale image [12].
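Otsu's global threshold can be sketched directly in NumPy (an illustrative implementation of the method cited in [12]; libraries such as scikit-image provide an equivalent routine):

```python
import numpy as np

def otsu_threshold(gray):
    # Exhaustively search for the threshold that maximizes the
    # between-class variance of the grayscale histogram.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0  # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```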

4.2.2 Calculating the Centroid of the Sampled Image:
[a] Convert the resultant image into black and white (binary image).
[b] Find the boundaries, concentrating only on the exterior boundaries.
[c] Measure properties of the image regions (blob analysis).
[d] Calculate the centroid of the resultant image.
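Steps [c] and [d] can be sketched with SciPy's labelling tools (an illustrative stand-in for MATLAB-style blob analysis / regionprops):

```python
import numpy as np
from scipy import ndimage

def blob_centroids(binary):
    # Label connected regions of the binary image (blob analysis)
    # and return the centroid of each labelled blob.
    labels, n = ndimage.label(binary)
    return ndimage.center_of_mass(binary, labels, list(range(1, n + 1)))
```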

Contd..
4.2.3 Canny Edge Detection Method:
- It calculates the gradient using the derivative of a Gaussian filter.
- The user-defined threshold option is selected to define the low and high threshold values.
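The derivative-of-Gaussian gradient with a user-defined low/high threshold can be sketched as follows (a partial Canny sketch; non-maximum suppression and hysteresis linking are omitted, and the threshold values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_gradient_edges(img, sigma=1.4, low=0.1, high=0.3):
    # Gradient via derivative-of-Gaussian filters, then a double
    # threshold separating strong from weak edge candidates.
    f = img.astype(float)
    gx = gaussian_filter(f, sigma, order=[0, 1])   # d/dx
    gy = gaussian_filter(f, sigma, order=[1, 0])   # d/dy
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12                       # normalize to [0, 1]
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong, weak
```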

4.2.4 Zero Padding
A discrete wavelet transform (DWT) extension function is implemented, which decomposes intensity values into subbands with smaller bandwidths and slower sample rates. In the DWT approach, zero-padding mode is used, from which the zero values of the fundus image are calculated to determine the minimum and maximum intensity values [13]. The original coordinates of the image are obtained and a zero matrix of the image size is created. The exterior of the mask is filled with zeros, and the minimum and maximum intensities within the mask are calculated, returning a new zero matrix.
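The zero-padding and masked min/max computation can be sketched as follows (the function name and padding width are illustrative assumptions):

```python
import numpy as np

def pad_and_mask_stats(img, mask, pad=8):
    # Zero-pad the image, zero out the exterior of the FOV mask, and
    # return the min/max intensity found inside the mask.
    padded = np.pad(img.astype(float), pad, mode='constant',
                    constant_values=0)
    pmask = np.pad(mask.astype(bool), pad, mode='constant',
                   constant_values=False)
    masked = np.where(pmask, padded, 0.0)   # exterior filled with zeros
    inside = padded[pmask]
    return masked, inside.min(), inside.max()
```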

4.2.5 Optic Disk Localization


- Before optic disk identification, the maximum pixel information from the grayscale image is calculated and stored in an array.

Contd..
- After localizing the non-zero pixel values, the standard deviation is calculated. The median with respect to rows and columns is computed and rounded towards negative intensity values.
- The initial boundary location point is then extracted and used to trace the boundary of the image and fit a circle to the boundary.
- A backslash (least-squares) operation calculates the location of the centre and the radius after tracing the boundary.
- Vessel convergence follows, in which the thicker blood-vessel skeletons are modelled as lines.
- The vessel image is transformed into Hough space through the Hough transform to detect the connecting vessels in the image, and the ROI is then extracted.
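The backslash (least-squares) circle fit to the traced boundary points can be sketched with the classic linear formulation (an illustrative reconstruction of the step, not the original code):

```python
import numpy as np

def fit_circle(x, y):
    # Kasa least-squares circle fit: a circle (x-a)^2 + (y-b)^2 = r^2
    # rearranges to the linear system 2ax + 2by + c = x^2 + y^2 with
    # c = r^2 - a^2 - b^2, solvable by least squares (MATLAB backslash).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```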

Resultant images of Step 2: Input Image, Thresholded Image, Localized Image

Step 3 - 4.3 Image Segmentation

Contd..
Image segmentation is a process of partitioning image pixels based on one or more selected image features; in this case the selected segmentation feature is the vessels [15]. The proposed algorithm is implemented on the green channel of the fundus image.
[1] First, morphological operations are performed to modify the sampled image.
[2] Then a matched filter (basically a simple Gaussian filter) is applied:

K(x, y) = −exp(−x^2 / (2σ^2)), for |y| ≤ L/2   (2)
- The size of the filter is 16 × 15 (since the resolution is not known; the size of the image is 605 × 700).
- Since the direction of the vessels is not known, the filter is rotated 12 times (every 15 degrees) [15].
[3] Here, for characterizing the projected samples, combined two-dimensional PCA (2DPCA) is implemented, which is based on 2D image matrices rather than 1D vectors:
Y = A X   (3)
Thus, an m-dimensional projected vector Y is obtained. It is called the projected feature vector of image A.
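The rotated matched-filter bank of Eq. (2) can be sketched as follows (kernel size and σ are illustrative; the vessel cross-section is modelled as an inverted Gaussian, and the maximum response over all orientations is kept):

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def gaussian_matched_kernel(sigma=2.0, length=15, width=16):
    # Vessel cross-section modelled as an inverted Gaussian profile,
    # extended along the kernel length; made zero-mean so a flat
    # background produces no response.
    x = np.arange(width) - (width - 1) / 2.0
    profile = -np.exp(-x ** 2 / (2 * sigma ** 2))
    k = np.tile(profile, (length, 1))
    return k - k.mean()

def vessel_response(green, n_angles=12):
    # Rotate the kernel every 15 degrees and keep, per pixel, the
    # maximum response over all orientations.
    k = gaussian_matched_kernel()
    resp = np.full(green.shape, -np.inf)
    for i in range(n_angles):
        ki = rotate(k, 15 * i, reshape=False, order=1)
        resp = np.maximum(resp,
                          correlate(green.astype(float), ki,
                                    mode='nearest'))
    return resp
```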

Contd..
- The total scatter of the projected samples can be used to measure the discriminatory power of the projection vector X.
- For a given image sample A, let
Yk = A Xk,  k = 1, 2, ..., d   (4)
- Y1, Y2, ..., Yd are the principal components of the sample image A. Each principal component of 2DPCA is a vector, while the principal component of PCA is a scalar.
- However, one disadvantage of 2DPCA is that more coefficients are needed to represent an image. To further reduce the dimension, PCA can be applied after 2DPCA.
- The reasons that 2DPCA outperforms PCA are that the covariance matrix of 2DPCA is quite small, so the covariance matrix is evaluated more accurately.
- 2DPCA also retains spatial information due to the 2D input matrix format rather than the 1D vector format. However, 2DPCA only emphasizes the correlation of feature vectors in one directional projection of A. In order to add the correlation information of feature vectors in the other directional projection of A, we propose combined 2DPCA:
Y1 = A X1   (5)
Y2 = Aᵀ X2   (6)
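The 2DPCA projection of Eqs. (3)-(6) can be sketched as follows (an illustrative NumPy version; the second feature set of combined 2DPCA is obtained by passing the transposed images through the same routine):

```python
import numpy as np

def twod_pca_projections(images, d=4):
    # 2DPCA: the image covariance matrix is
    #   G = mean_i (A_i - Abar)^T (A_i - Abar),
    # which is only n x n for m x n images. Each image is projected
    # onto the top-d eigenvectors of G: Y = A X.
    A = np.stack(images).astype(float)
    Abar = A.mean(axis=0)
    G = np.mean([(Ai - Abar).T @ (Ai - Abar) for Ai in A], axis=0)
    vals, vecs = np.linalg.eigh(G)
    X = vecs[:, np.argsort(vals)[::-1][:d]]   # top-d projection vectors
    return [Ai @ X for Ai in A], X
```

For combined 2DPCA, calling the same function on `[im.T for im in images]` yields the second feature set Y2 of Eq. (6).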

Resultant Image after step 3

Step 4 - 4.4 Image Classification

Contd..
- After obtaining the two different feature sets from Y1 = A X1 and Y2 = Aᵀ X2 respectively, we combine two SVM classifiers.
- Each SVM is trained on a different feature set independently. There are many ways of combining individual classifiers [17], [18]. Here, the decision value of the combined classifier is the average of the outputs of the two SVM classifiers [20].
- The combination of two SVM classifiers trained on different feature sets helps reduce the potential risk of overfitting, which causes high variance in generalization.
- The resultant pixel intensity values of the image are then converted into a binary image, an image transformation is performed, and the resultant image is inverted.
- A region-properties technique is used to extract properties of the image regions.
- The area covered by the maximum pixel-intensity feature from each pixel neighbourhood, i.e. the area occupied by the feature vector, is extracted. Finally, the optic disk is removed from the original image.
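Averaging the two SVM decision values can be sketched as follows (the weight vectors and biases are assumed to come from two already-trained linear SVMs; all names are illustrative):

```python
import numpy as np

def decision(w, b, feats):
    # Decision value of one linear SVM on one feature set: w . x + b.
    return feats @ w + b

def combined_predict(w1, b1, f1, w2, b2, f2):
    # Combined classifier: average the two SVMs' decision values,
    # then take the sign as the class label.
    avg = 0.5 * (decision(w1, b1, f1) + decision(w2, b2, f2))
    return np.where(avg >= 0, 1, -1)
```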

Contd..
- A support vector machine (SVM) is a supervised learning methodology that classifies input data by analyzing it and recognizing its patterns; it is a non-probabilistic binary linear classifier.
- Consider a labelled two-class training set {xi, yi}, i = 1, ..., l, xi ∈ R^d, where yi ∈ {−1, 1} is the associated "truth". The separating hyperplane must satisfy the following constraints:
yi[(w · xi) + b] ≥ 1 − ξi,  ξi ≥ 0   (7)
where w is the weight vector, b is the bias, and ξi is the slack variable. To find the optimal separating hyperplane, the following function is minimized subject to the above constraints:
Φ(w) = ||w||^2 / 2 + C Σ_{i=1..l} ξi   (8)
where C is a parameter chosen by the user which controls the trade-off between maximizing the margin and minimizing the training error.
- In case a linear boundary is inappropriate, the SVM can map the input vector x into a high-dimensional feature space by choosing a nonlinear mapping kernel.

Contd..
The optimal separating hyperplane in the feature space is given by:
f(x) = sgn( Σ_{i=1..l} yi αi⁰ K(xi, x) + b⁰ )   (9)

where K(x, y) is the kernel function. The following are some commonly used kernels:
Polynomial: K(x, y) = ((x · y) + 1)^d
Gaussian Radial Basis Function: K(x, y) = exp(−||x − y||^2 / (2σ^2))   (10)
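The two kernels can be sketched directly (illustrative NumPy implementations of the formulas above):

```python
import numpy as np

def poly_kernel(x, y, d=3):
    # Polynomial kernel: K(x, y) = ((x . y) + 1)^d
    return (np.dot(x, y) + 1.0) ** d

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian RBF kernel: K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2 * sigma ** 2))
```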

Resultant image after step 4

5.Conclusion
In this research, a segment-based technique for detecting exudates from retinal images is presented. The methodology combines morphological operations with the SVM algorithm.

Both qualitative and quantitative experiments on normal and abnormal retinal images indicate that the proposed approach is effective. NPDR, also known as the pre-proliferative stage of Diabetic Retinopathy, can, if diagnosed early, go a long way towards reducing DR-associated blindness.

An automated process for early diagnosis and intervention can hence be of great help to patient and specialist alike in the timely management of Non-Proliferative Diabetic Retinopathy.

6.References
[1] www.avclinic.com/nonproliferative
[2] medweb.bham.ac.uk/easdec/gradingretinopathy
[3] Catherine B. Meyerle, Emily Y. Chew and Frederick L. Ferris, "Nonproliferative Diabetic Retinopathy", Contemporary Diabetes, 2009, 1, 3-27.
[4] Neera Singh, Atul Kumar, Ramesh Chandra Tripathi, "An automated hybrid technique for detecting the stage of non-proliferative diabetic retinopathy", pp. 73-80, IITM'10, December 28-30, 2010, Allahabad, UP, India.
[4] http://www.wrongdiagnosis.com/retinopathy/intro.htm
[5] http://www.isi.uu.nl/Research/Database/DRIVE/
[6] http://www.parl.clemson.edu/stare/probing/
[7] João V. B. Soares, Jorge J. G. Leandro et al., "Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification", IEEE Trans. Med. Imag., Vol. 25, No. 9, September 2006.
[8] Cemil Kirbas and Francis K. H. Quek, "Vessel Extraction Techniques and Algorithms", Vision Interface and Systems Laboratory (VISLab), Department of Computer Science & Engineering, Wright State University, Dayton, Ohio, November 2007.
[9] Young Yang, Shuying Huang, Nini Rao, "An Automatic Hybrid Method for Retinal Blood Vessel Extraction", Int. J. Appl. Math. Comput. Sci., 2008, Vol. 18, No. 3, 399-407.

Contd..
[10] Benson Shu Yan Lam and Hong Yan, "A Novel Vessel Segmentation Algorithm for Pathological Retinal Images Based on the Divergence of Vector Fields", IEEE Trans. Med. Imag., Vol. 27, No. 2, February 2008.
[11] Shu-Chen Cheng and Yueh-Min Huang, "A Novel Approach to Diagnose Diabetes Based on the Fractal Characteristics of Retinal Images", IEEE, Vol. 7, No. 3, September 2003.
[12] Xiaohui Zhang, Opas Chutatape, "A SVM Approach for Detection of Hemorrhages in Background Diabetic Retinopathy", Proceedings of the International Joint Conference on Neural Networks, Montreal, Canada, July 31-August 4, 2005.
[13] Berrichi Fatima Zohra, Benyettou Mohamed, "Automated diagnosis of retinal images using the Support Vector Machine (SVM)", Faculté des Sciences, Département d'Informatique, USTO, Algérie, 2005.
[14] A. Osareh, M. Mirmehdi, B. Thomas, R. Markham, "Automated identification of diabetic retinal exudates in digital colour images", Br J Ophthalmol 2003; 87:1220-1223.
[15] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels", IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951-958, Aug. 2003.
[16] Ali Can, Hong Shen, James N. Turner, Badrinath Roysam et al., "Rapid Automated Tracing and Feature Extraction from Retinal Fundus Images Using Direct Exploratory Algorithms", IEEE.

Contd..
[17] M. Figueiredo, J. Leitão, "A nonsmoothing approach to the estimation of vessel contours in angiograms", IEEE Trans. Med. Imag., vol. 14, pp. 162-172, 1995.
[18] A. Hoover, M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels", IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951-958, Aug. 2003.
[19] M. Sofka, C. V. Stewart, "Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures", IEEE Trans. Med. Imag., vol. 25, no. 12, pp. 1531-1546, Dec. 2006.
[20] Atul Kumar, Abhishek Gaur, Manish Srivastava, "A Segment based Technique for detecting Exudate from Retinal Fundus image", Volume 6, Pages 1-9 (2012), 2nd International Conference on Communication, Computing & Security [ICCCS-2012], ISSN 2212-0173, 2012 Elsevier Ltd.

7.APPENDIX
A.1. List of Functions used with description

Function Name          | Description / Usage                                    | Section Applicable
MyZeropaddingRemoval.m | Removes the zeros added at the edges of the image      | Pre-processing stage
RemoveFovea_ver1.m     | Detects fovea-like material and removes it             | Classifier stage for bleeding detection
MyMainProcessing.m     | Searches for processed data                            | Main processing
MyRgb2Hsi.m            | Converts an image from RGB to HSI                      | Pre-processing stage
MyMean.m               | Calculates the mean of the data                        | Pre-processing stage
ZeroPadding.m          | Adds zeros to the edges of an image                    | Pre-processing stage
MyWinAdaptiveEq.m      | Performs windowing and adaptive histogram equalisation | Pre-processing stage

Contd..
B.1. List of Functions with implementation

Function Name        | Description / Usage                                              | Section Applicable
imgprep.m            | Exudate image resizing, brightness correction, colour conversion | Pre-processing stage
imgprepAh.m          | Noise removal, adaptive histogram equalisation                   | Image normalization
exudate1.m           | Dilation, erosion, image subtraction                             | Image boundary tracing
exudatehist.m        | Performs the histeq function                                     | Histogram plotting
dubmin.m             | Mean calculation from intensity values                           | Mean calculation
adaptive_threshold.m | Otsu's method implementation                                     | Adaptive thresholding
Function_Ex          | Vessel extraction, segmentation                                  | Main function

THANK YOU
