
Diagnosis of diabetic retinopathy is generally carried out by skilled medical personnel (usually an ophthalmologist) who evaluate the fundus either by direct examination or from fundus photographs taken with a dedicated camera. Whether performed directly or by reviewing the photographs, such examination is a time-consuming and imprecise process: its results often do not match the opinions of expert graders, and this is especially true of the early stages of the disease. In addition, such diagnosis is costly, and because of the high cost and the shortage of skilled personnel, it is often carried out at longer intervals than desired.
SUMMARY OF THE INVENTION
[Problems to be Solved by the Invention]
[0003]
One of the complications of diabetes is a gradual loss of visual acuity known as diabetic retinopathy. The majority of diabetic patients in this state are known to lose part of their sight; with current medical technology the retinal lesions that characterize the disease are treated with a laser beam over time. The diagnosis of diabetic retinopathy, however, requires a trained specialist to examine the retina (or retinal photographs) regularly and to recognize small lesions and their changes. Because the number of diabetic patients far exceeds the number of specialists, and because diabetics must undergo regular checkups, difficulties and gaps arise in the health-management system that screens for and treats diabetic retinopathy. The present invention, realized on a computer, provides an efficient screening analysis of retinal photographs to identify the stage of diabetic retinopathy.
[Means for Solving the Problems]
[0004]
The present invention includes a powerful technique for automatically grading retinal images through detection of the lesions that occur in the initial stage of diabetic retinopathy, namely dot hemorrhages (petechiae) or microaneurysms, blot hemorrhages, striate hemorrhages and lipid exudates. The present invention further provides a method of detecting nerve-fiber-layer infarcts, a method of extracting the optic nerve in appropriately identified fields, and a method of tracking identified vessels (measuring vessel diameter, tortuosity and branching angle). The present invention preferably identifies three levels: no retinopathy, microaneurysms only, and microaneurysms plus other lesions. The last two levels are the earliest detectable forms of the disease. The method can, however, be extended to as many as seven levels by detecting the lesions caused by advanced stages of the disease, and by using the method to evaluate changes in the normal vasculature (vessel diameter, tortuosity and branching angle) it is possible to evaluate the risk of developing retinopathy.
[0005]
The method of the present invention is particularly suited to overcoming the difficulties in grading retinopathy. These difficulties arise from variation between images, from uneven illumination and flare within a single image that lead to variations in background between different quadrants of the same image, and from the low contrast of some of the lesions.
[0006]
The expert judgment system realized by the present invention determines the grade of retinopathy for each eye based on the results of the low-level detectors (lesion detectors) applied to the images taken of that eye. The individual detectors can be adjusted with separate parameters for mydriatic and non-mydriatic images and according to camera type, image type, patient characteristics and fundus characteristics.
[0007]
The data archive serves as the central repository for all patient data, images, demographics and reports, and is the core of the data management system. The archive is accessible for storage and retrieval through the Internet and is designed to scale to meet growth over time. The advantages of a centralized data-management architecture include: (1) regardless of where the historical data were obtained, an ophthalmologist or retinal specialist has access to the current study and previous studies for comparison; (2) a central data store provides a means of objectively and quantitatively evaluating, over time and for individuals or groups, changes in normal blood vessels, changes in existing diabetic lesions and the occurrence of new diabetic pathology; this database also provides the basis for regression and longitudinal studies to develop risk-prediction algorithms in the future; (3) algorithms that scan the archive yield quantitative measures of vessel tortuosity, branching angle and changes in vessel diameter; these have been identified as markers of vascular disease and can be tracked over time, which again mainly strengthens early risk prediction before or after the development of retinopathy (note that such parameters cannot be evaluated by human grading); (4) data mining of the large data warehouse makes it possible to examine the proficiency of the screening and patient compliance, and also yields valuable insights into trends in various populations and into the effects of treatment in patient populations (for example, similar patients observed at each clinic) compared and tracked over time.
BEST MODE FOR CARRYING OUT THE INVENTION
[0008]
According to the present invention, images are acquired locally with immediate feedback to the photographer, using a short feedback loop that guides the photographer and helps optimize the quality of each image; the images are analyzed locally and stored locally on CD-ROM and magnetic storage. Screening results are reported immediately by the system as a printed report. Eyes for which a sufficient number of photographs cannot be analyzed (currently, even after three attempts), or for which the number/type of lesions or the level of retinopathy exceeds the acceptable thresholds, are "excluded" from the screening and sent to a remote specialist ophthalmologist for review. The specialist reports the results of the review together with a grading and recommendations. The patient is advised either to make an appointment for an examination or, if the photographs are adequate, to repeat the screening at the recommended interval. The report is kept in the database for data mining and queries about the success of the screening and its thresholds.
[0009]
In the practice of the present invention, an image can be obtained in any one of the following ways.
[0010]
(A) A monochromatic 30° image of at least 1024 × 1024 pixels at 8-bit depth, digitized from a 35 mm Ektachrome™ slide using a green Wratten filter.
[0011]
(B) Photographing the eye with a digital on-line system using a fundus camera and a color digital camera capable of at least 2 mega × 2 mega pixel resolution at 32-bit depth.
[0012]
(C) Photographing the eye with a digital on-line system using a fundus camera and a digital camera capable of monochromatic 1024 × 1024 × 8-bit depth, with one or more filters, including a 535 nm notch filter and a 605 nm notch filter, placed in between.
[0013]
The present invention preferably uses a protocol in which five photographs, each covering a 35° field, are taken per eye.
[0014]
The invention further provides a catalog of patients (providing the name as well as other demographic and clinical data together with the images) to permit batch processing, tracking of the medical history of the disease, and identification of risk indicators. The importance of the patient catalog is clear: holding digital photo files is efficient for the clinic. The images can be held, for example, in an image-management database in which ancillary patient data and the physician's notes for each photo set may also be recorded.
[0015]
The image-management system of the present invention allows a medical professional to track the progression of the disease, to determine more accurately which factors matter for which patient classes, and to determine the rate of deterioration of the disease. The deterioration rate determines the frequency of re-examination and when the patient should be re-examined.
[0016]
The system of the present invention uses an image-file naming convention that makes it possible to automatically determine the position of an image and the corresponding field for the same patient. The naming scheme for all images is as follows: PatientID-CameraType.Eye-Field-FieldOfView.ImageType.Processing
[0017]
· Eye is L or R.
· Field is a number from 1 to 5.
· PatientID is an alphanumeric identifier generated by the system.
· CameraType is 1 for a non-mydriatic camera and 2 for a mydriatic camera.
· FieldOfView is the angle of retina captured in each image by the camera (i.e. 30°, 35° or 45°).
· ImageType is a numerical code for the image type: 1 for a monochrome image digitized from an Ektachrome slide, 2 for a direct digital color image, and 3 for a monochromatic image taken through a notch filter, followed by the peak wavelength (e.g. 535 for a 535 nm peak filter).
[0018]
Processing is a code indicating the processing state of the image: RAW (original photograph) or GRA (graded image indicating the lesions present).
[0019]
The method of the present invention includes a software system that helps the photographer capture appropriate retinal images. In use, the photographer indicates the eye and the field being photographed, and as the images are captured an identifier is attached to each photograph. The current system uses the following fields: OD field #1: centered on the fovea, with the optic disc to the right or left of the field depending on the eye; OD field #2: lower left of the optic disc (superonasal); OD field #3: upper left of the optic disc (inferonasal); OD field #4: upper right of the optic disc, toward the fovea (inferotemporal); and OD field #5: lower right of the optic disc, toward the fovea (superotemporal). The fields are not limited to these. The OS fields are the exact left-right mirror of the OD fields. These five photographs, 45° images with some overlap between them and several captures of the fovea, cover all the areas of the retina that require photographic examination. The invention is improved if the photographer marks the fovea in field #1. For each photograph:
- Necessary elements: examine the required elements, the optic disc and the blood vessels, and their positions in the photograph.
- Contrast: compare the blood vessels against the background (on a Canon CR6, for example, a reduction in contrast is caused by the camera being moved too near or too far). Some bright alignment points help the photographer adjust the position.
- Focus: detect an image-focus shift by examining the resolution of the periphery of the central vessels.
- Alignment (up/down/right/left of the pupil): a misalignment with respect to the pupil creates flare at the edges of the field in the photograph (a decrease in contrast). The system evaluates the contrast of the field edges and the center to detect flare (by comparison with a database of many example images) and, when edge flare is detected, tells the photographer to re-align and re-photograph.
[0020]
If a photograph is determined to be unsuitable with respect to any of the examined parameters, instructions are immediately presented to the photographer: which field and eye to re-image and how to improve the image, with instructions for each element checked. This not only improves the quality of the images acquired for the individual patient, but in general leads to a more rapid improvement in retinal photography by inexperienced photographers.
[0021]
Grading of retinopathy
The retinopathy grading of the present invention is based on image-processing / computer-vision software that detects and quantifies lesions in digitized monochrome images. The digitized monochrome images are obtained on line from the green channel of a three-primary-color camera with a similar per-channel pixel density, from a digital monochrome camera with a pixel density of 1280 × 1024, or by digitizing at the same resolution the 30° or 50° slide-film images taken with a non-mydriatic camera. See the appendix CD-ROM containing the computer source code of the preferred embodiment of the present invention.
[0022]
Quantification (for each field): the number per field, the total area / evaluated field area, a histogram of counts by size, and a histogram of counts by density for each field.
[0023]
Screening grades into three levels: 1) no lesions (hemorrhages, lipid exudates or nerve-fiber-layer infarcts); 2) microaneurysms (dot hemorrhages) only; 3) dot hemorrhages together with other hemorrhages, exudates or infarcts.
[0024]
There are separate detectors for the following lesions and anatomical features of the retina.
[0025]
A. Hemorrhages: dot, blot, striate
B. Lipid exudates
C. Nerve-fiber-layer infarcts
D. The optic disc
E. Blood vessels (arteries, veins): detection of the primary, secondary and tertiary branches
[0026]
Detectors are also being pursued for the following other lesions of more advanced retinopathy grades.
[0027]
A. Intra-retinal microvascular abnormalities (IRMA)
B. Venous loops
C. Epi-papillary (on the optic disc) neovascularization
D. Epi-retinal (on the retina) neovascularization
E. Sub-hyaloid (under the vitreous) hemorrhage
F. Epi-retinal (on the retina) fibrosis
G. Retinal detachment
[0028]
The methods of determining the grade include the following.
[0029]
1. Geometric techniques
2. Image processing to obtain the location and size of the lesion
3. Matched filters: finding ovals; using the biological code
4. Morphological filters: grow-shrink methods
5. Removal of the vessels: find all junctions, then find and track the vessels emanating from each junction
[0030]
Extraction of the region of interest
For images digitized from film, the portion containing the retina is extracted. For both digital images and digitized images, the portions whose contrast is destroyed by flare are removed. One improvement of the present invention is to eliminate the portions of the photograph that contain no useful data as a result of image flare.
[0031]
The histograms of two different fundus images shown in FIG. 1 show that there is a considerable gap between the gray values of the background and the retina. Unfortunately, this gap occurs over a wide range of values. Flare at the edge of the retina and other imaging artifacts are further factors that severely limit the use of simple techniques.
[0032]
Using the vessel boundaries, the primary, secondary and tertiary vessels are determined. An adaptive threshold, whose setting is based on the contrast of the image, is used to determine the positions of the blood vessels. This requires determining the position of the retina in the image and measuring its contrast. To distinguish the retina from the background, we use the ISODATA clustering technique: first a threshold is defined, then the image is binarized at that value. The ISODATA algorithm is an iterative method based on the density histogram of the image. The histogram is divided into foreground and background pixels by assuming an initial threshold. The average value of the foreground pixels and the average value of the background pixels are then calculated, and the midpoint of these two averages becomes the new threshold estimate. The process is repeated with the new threshold estimate until the threshold no longer changes.
[0033]
Because of deep shadows on part of the image, this often fails to delineate the entire retina (FIG. 4). One way of overcoming this obstacle is to use a convex-hull operation: first find the largest object, then connect with a straight line every pair of contour points whose Euclidean distance is below a preset distance (d). If there are background pixels on such a line, those pixels are added to the original object. This operation also closes all holes narrower than d pixels.
[0034]
An alternative method does not use the convex hull, which is needed only to close the internal holes of the segmented retina. A binary image is generated using the ISODATA technique and everything but the largest object is removed. The binary image is then inverted and every object not adjacent to the image boundary, i.e. not part of the background, is removed. The binary image is then inverted again.
[0035]
After the position of the retina has been determined, an inner boundary is defined by insetting the retinal mask from the edge of the retina (FIG. 5). A suitable inset distance for a 1K image is 100 pixels. This operation is performed to avoid including the edge of the retina in the measurement of retinal contrast, since the edge of the retina often contains deep shadows or bright flare.
[0036]
A histogram of the retinal mask after the inset defines the image area used to create a density histogram. The standard deviation calculated from this histogram is used as a measure of image contrast; it is needed to set the rank used when extracting the blood vessels. Various ranks were used: FIG. 6 shows the results for rank positions 0.25, 0.5 (median filter) and 0.75. In this figure the original image was sub-sampled (by a factor of 4) to shorten the processing time; the sub-sampling was performed by a simple linear method. The results might be improved by other techniques such as bilinear sampling, which is discussed later. To see the effectiveness of this technique, the detected vessels were removed from the original image (the scanned image sub-sampled by a factor of 4). FIG. 7 shows the original image and the result of the removal. It can be seen that there are many false alarms, which can be separated out by the form factor and the total area of the objects.
[0037]
Identification of the vascular bed against the background is performed on the basis of density, size and shape. The algorithm uses an adaptive threshold based on a rank filter. The routine assigns a value of 0 (background) or 1 (object) to a given pixel based on the pixel's value and the pixel values in its neighborhood: the gray values of all pixels in the neighborhood are ranked, a constant offset is added to the value of the center pixel, and the result is compared with the value of the i-th element of the ranked neighborhood. If it is lower, i.e. if the center pixel is substantially darker than its surroundings, the center pixel is assigned the value 1; otherwise it is assigned the value 0. Three interrelated parameters are important in the calculation of the adaptive threshold: the kernel size, the offset and the counting threshold (i). The kernel size and offset are both held constant over the entire image for a given image. We tested both circular and rectangular kernels; as expected, the circular kernel gave fewer artifacts, and we eventually settled on a circular kernel with a diameter of 17 pixels. The offset used was 15. The counting threshold is held constant for a given image but is changed between images according to the image contrast.
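A hedged sketch of the rank-based adaptive threshold described above follows, using the stated parameter values (17-pixel circular kernel, offset 15); the function signature, the image representation and the handling of border pixels are assumptions.

    // Illustrative sketch of the rank-based adaptive threshold for vessel pixels,
    // assuming an 8-bit grayscale image in a row-major std::vector<uint8_t>.
    #include <vector>
    #include <algorithm>
    #include <cstdint>

    std::vector<uint8_t> adaptiveVesselThreshold(const std::vector<uint8_t>& img,
                                                 int width, int height,
                                                 int countThreshold,  // i, set from image contrast
                                                 int diameter = 17,
                                                 int offset   = 15)
    {
        std::vector<uint8_t> out(img.size(), 0);
        const int r = diameter / 2;

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Collect the gray values inside the circular kernel.
                std::vector<uint8_t> nbhd;
                for (int dy = -r; dy <= r; ++dy) {
                    for (int dx = -r; dx <= r; ++dx) {
                        if (dx * dx + dy * dy > r * r) continue;   // keep the kernel circular
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        nbhd.push_back(img[ny * width + nx]);
                    }
                }
                // Rank the neighborhood and compare (center + offset) with its i-th element.
                int i = std::min<int>(countThreshold, int(nbhd.size()) - 1);
                std::nth_element(nbhd.begin(), nbhd.begin() + i, nbhd.end());
                int centerPlusOffset = int(img[y * width + x]) + offset;
                // Center substantially darker than its surroundings -> vessel pixel.
                out[y * width + x] = (centerPlusOffset < nbhd[i]) ? 1 : 0;
            }
        }
        return out;
    }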
[0038]
The standard deviation of the retinal image is used to set the counting threshold. To evaluate how the counting threshold should change with the image contrast, the standard deviation of the histogram of each of 25 images was calculated, each image was convolved with the adaptive threshold, and the counting threshold was varied until the visually best segmentation was achieved. The relationship between the two was modeled by fitting a straight line (y = 234.34 - 1.607x, r = 0.88, FIG. 3).
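As a minimal illustration of how this fitted line might be used, the sketch below maps the measured standard deviation to a counting threshold; the function name and the rounding choice are assumptions.

    // Sketch: setting the counting threshold from the retinal-mask standard deviation,
    // using the fitted line y = 234.34 - 1.607 x reported in the text (r = 0.88).
    int countThresholdFromContrast(double stdDev)
    {
        double y = 234.34 - 1.607 * stdDev;   // fitted relationship
        return static_cast<int>(y + 0.5);     // round to the nearest integer threshold
    }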
[0039]
Once identified, the vessels are removed from the image so that only dot hemorrhages remain as dark structures. The present invention also removes very bright objects such as the optic disc, making it possible to obtain a clearer gray-value distribution (histogram) of the remaining image. Much of this processing depends on the gray values; these techniques, together with specialized algorithms such as k-means, corrected slope, code analysis and shape algorithms, perform an adaptive segmentation leading to a more reliable extraction of lesions.
[0040]
The present invention removes noise by applying a specially configured smoothing filter that neither destroys the structure of the hemorrhages nor causes artifacts in the image.
[0041]
The lesions detected by the various detectors are supplied to an expert system of known construction. The expert system is built from rules developed in consultation with expert ophthalmologists; it embodies the reasoning a trained ophthalmologist uses for interpretation. The features elicited from the experts and encoded in the rules are as follows.
[0042]
1. A fast scan of the image to detect vessel fragments.
2. Morphological processing (dilation followed by erosion) to connect co-linear fragments and branches into major blood vessels.
3. A skeletonization algorithm developed to find the vessel midlines.
4. Removal of the found vessels from the original image to verify the effectiveness of the process.
[0043]
The present invention uses a path-search technique. To find a path, the search starts from the position in the gray-value image corresponding to a given endpoint and proceeds in the direction in which the mean gray value is maximal. The first search direction is determined by examining the connectivity of the backbone and the endpoint and setting the initial direction (d) to the opposite direction. Given the current pixel (x, y) and the current direction (d), the three candidate points for the next pixel are the neighbors in directions (d-1), (d) and (d+1), modulo 8. To decide which of them to take, three straight lines are considered: a line in direction (d), a line between directions (d) and (d+1), and a third line between directions (d) and (d-1). Along each of these lines the average gray value is calculated over a length of <val> pixels. The next pixel is the candidate point in the direction with the greatest average. The process is repeated with the new pixel, using the newly found direction as the value of (d). The process stops, first, if the distance from the candidate pixel to the image boundary is smaller than a preset value. The process also stops if the maximum of the average values of the candidate directions is smaller than a specified value <minval>. Next, the process stops when a pixel already set in the output bit-plane is reached; in this way the system uses the outer background of the eye. Finally, the process stops when the number of found pixels becomes equal to a preset value.
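The sketch below condenses the path-search step described above, under simplifying assumptions: the three candidates are evaluated along the chain-code directions (d-1), (d), (d+1) rather than the in-between lines, and only two of the stopping criteria (leaving the image or falling below <minval>) plus the pixel-count limit are shown. Names are illustrative.

    // Condensed sketch of the vessel path search, assuming an 8-bit grayscale image
    // and 8-connected chain-code directions (0 = east, counted counter-clockwise).
    #include <vector>
    #include <cstdint>

    struct Point { int x, y; };

    static const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    static const int DY[8] = { 0, -1, -1, -1, 0, 1, 1, 1 };

    // Average gray value over `len` steps from (x, y) in direction d (-1 if off-image).
    static double lineAverage(const std::vector<uint8_t>& img, int w, int h,
                              int x, int y, int d, int len)
    {
        double sum = 0;
        for (int k = 1; k <= len; ++k) {
            int nx = x + k * DX[d], ny = y + k * DY[d];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) return -1.0;
            sum += img[ny * w + nx];
        }
        return sum / len;
    }

    // Trace a vessel path from `start` in initial direction `d`.
    std::vector<Point> tracePath(const std::vector<uint8_t>& img, int w, int h,
                                 Point start, int d,
                                 int len /* <val> */, double minVal /* <minval> */,
                                 int maxPixels)
    {
        std::vector<Point> path{ start };
        Point cur = start;
        while (int(path.size()) < maxPixels) {
            int bestDir = -1;
            double bestAvg = -1.0;
            // Candidate directions (d-1), (d) and (d+1), modulo 8.
            for (int dd = -1; dd <= 1; ++dd) {
                int cand = (d + dd + 8) % 8;
                double avg = lineAverage(img, w, h, cur.x, cur.y, cand, len);
                if (avg > bestAvg) { bestAvg = avg; bestDir = cand; }
            }
            if (bestDir < 0 || bestAvg < minVal) break;   // faded out or left the image
            d = bestDir;
            cur = { cur.x + DX[d], cur.y + DY[d] };
            path.push_back(cur);
        }
        return path;
    }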
[0044]
Alternatively, by examining vertical gradient curves generated at regular intervals, it is also possible to reject a curve.
[0045]
Slope
[Equation 1]
[0046]
Methodology of the detectors
Veins are separated from arteries, and the branching pattern of the blood vessels, the tortuosity of the blood vessels and the variation of the vessel diameter are measured. The positions of the optic disc and the fovea are determined so that the position of a lesion can be localized relative to the fovea.
[0047]
General principle: each test is encoded as a function that returns one of three possible scores:
[0048]
1 (SUCCESS), 0 (Indeterminate or DONT_KNOW), or -1 (FAIL)
Later, the output of each function is multiplied by a weight, and the weighted outputs are used to generate a combined confidence score.
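A minimal sketch of combining such three-valued tests into a weighted confidence score follows; the test functions and weights are hypothetical placeholders.

    // Combine three-valued test results (1 / 0 / -1) into a weighted confidence score.
    #include <vector>
    #include <functional>

    enum TestScore { FAIL = -1, DONT_KNOW = 0, SUCCESS = 1 };

    double combinedConfidence(const std::vector<std::function<TestScore()>>& tests,
                              const std::vector<double>& weights)
    {
        double score = 0.0;
        for (size_t i = 0; i < tests.size(); ++i)
            score += weights[i] * tests[i]();   // weight each 1 / 0 / -1 result
        return score;
    }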
[0049]
As discussed above, following the adaptive threshold and noise removal, segmentation of the vessels is achieved based on the size and form factor of the objects. The rejected objects are classified as actual hemorrhages. Size is used to distinguish dot hemorrhages (DH) from blot hemorrhages (BH); blot hemorrhages are larger than dot hemorrhages.
[0050]
Adaptive segmentation
The K-means method is run on large image segments (4 × 4 or 8 × 8 segments over the entire image) to produce three regions: the background; dark regions such as blood vessels and dot hemorrhages; and bright regions such as lipid exudates. This gives a coarse segmentation of the image that takes local shading and fluctuations into account. The average gray value of each region is later used as a key value for finer segmentation.
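The following is a rough sketch of the coarse three-class segmentation, implemented as a one-dimensional k-means (k = 3) on the gray values of one image segment; the seed values and iteration count are assumptions, and this is an illustrative reading of the text rather than the appendix implementation.

    // 1-D k-means (k = 3) on the gray values of one image segment, yielding mean
    // gray values for the dark (vessels / dot hemorrhages), background and bright
    // (lipid exudate) classes.
    #include <vector>
    #include <array>
    #include <cstdint>
    #include <cmath>

    std::array<double, 3> kmeans3(const std::vector<uint8_t>& segmentPixels, int iterations = 20)
    {
        std::array<double, 3> centers = { 32.0, 128.0, 224.0 };   // dark, background, bright seeds
        for (int it = 0; it < iterations; ++it) {
            std::array<double, 3> sum = { 0, 0, 0 };
            std::array<long, 3> cnt = { 0, 0, 0 };
            for (uint8_t p : segmentPixels) {
                // Assign the pixel to the nearest center.
                int best = 0;
                for (int c = 1; c < 3; ++c)
                    if (std::fabs(p - centers[c]) < std::fabs(p - centers[best])) best = c;
                sum[best] += p;
                ++cnt[best];
            }
            // Recompute the centers from the assigned pixels.
            for (int c = 0; c < 3; ++c)
                if (cnt[c] > 0) centers[c] = sum[c] / cnt[c];
        }
        return centers;   // later used as key values for finer segmentation
    }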
[0051]
Technique for hemorrhages
A candidate fails if any two of the following tests fail, i.e. if (test1() + test2() + ... == -2). The gray values of the N × N area are ranked and then divided into M tiles (quintiles in the case of M = 5). The pixels in the lowest tile are colored RED.
[0052]
- Ideal profile of heme, since the periphery of the central low gray value crater shape with
concentric rings of large value, the point-like heme should RED portion is closer to the center.
- lipid exudate is as mound has the same characteristics as heme reverse image.
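A small sketch of the RED-marking step described above follows: the gray values of the stamp are ranked, divided into M tiles, and the pixels in the lowest (darkest) tile are marked; the function name and tie handling are illustrative.

    // Mark the darkest 1/M of an N x N stamp's gray values as RED (1), the rest 0.
    #include <vector>
    #include <algorithm>
    #include <cstdint>

    std::vector<uint8_t> markRed(const std::vector<uint8_t>& stamp, int M = 5)
    {
        // Cutoff below which a pixel falls in the lowest tile of the ranked values.
        std::vector<uint8_t> sorted(stamp);
        std::sort(sorted.begin(), sorted.end());
        uint8_t cutoff = sorted[sorted.size() / M];

        std::vector<uint8_t> red(stamp.size(), 0);
        for (size_t i = 0; i < stamp.size(); ++i)
            red[i] = (stamp[i] <= cutoff) ? 1 : 0;   // 1 = RED (darkest tile)
        return red;
    }

For a dot hemorrhage with the crater profile described above, the RED pixels produced this way should cluster near the center of the stamp.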
[0053]
1. The center 4 × 4 of the complete stamp contains no RED.
[0054]
2. XMIN, YMIN, XMAX and YMAX are the corners of the complete stamp; (ymin, xmin1, xmax1) is taken as the first interval (ival) of RED and (ymax, xmin2, xmax2) as the last ival. These form the actual corners of the RED region. As with the triangles formed between a segment and the flare region around the image, check whether any edge of the RED region lies in a corner of the complete stamp. Since the region array has XMIN = lesionpost->supp.x1, ..., the test is as follows.
[0055]
[0056]
3. To investigate whether it is a blood vessel, see whether it extends over the entire length of the stamp.
[0057]
4. If the total size of the stamp is greater than 60 × 60, return FAIL.
[0058]
5. The form factor of the RED region eliminates trapezoids and triangles arising from vessels; it should be between 0.9 and 1.1 to accept a circular target.
[0059]
Techniques for cotton-wool (nerve-fiber-layer) infarcts and lipid exudates
CWS and EX are first distinguished from the background of the fundus image by inverting the gray values of the image, so that the exudates become darker than the background, and then applying the same adaptive filter used for the segmentation of hemorrhages. After noise removal with a median filter, the remaining objects are separated into CWS and EX. This is done on the basis of size and slope: small objects with a slope sharper than a fixed constant are classified as EX, and the remaining objects are classified as CWS. Obviously, the slope is evaluated on the original gray-value image.
[0060]
Improvement
We developed a method of detecting objects in contact with blood vessels. The method uses a distance transform to replace each pixel of an object with an estimate of the shortest distance to the background (the distance to the nearest background pixel). To increase the speed of generating the distance map, we used a pseudo-Euclidean distance: 5 to the nearest neighbor, 7 along the diagonal and 11 for a knight's move. Then, thresholding at a value of, for example, 50 captures the pixels that are 10 or more pixels away from the nearest edge of the object. In this way abnormally thick blood vessels can be detected.
[0061]
An improvement of this method focuses not on absolute thickness but on changes in thickness: it looks for a sudden change from a small vessel to a larger vessel and then back to a narrow vessel. To do this, a skeleton of the binary vessel image is found using the distance transform. The skeleton lies midway between the boundaries of the object and is defined as a topological retraction, a set of connected arcs one pixel thick having the same connectivity as the original object; it differs from the medial-axis transform obtained from the distance transform in that connectivity is maintained. Unlike the Hilditch skeleton, the distance skeleton is calculated using the pseudo-Euclidean distance rather than the city-block distance (1 to the nearest neighbor, 2 along the diagonal, 3 for a knight's move). This skeleton is then used as a mask over the distance map to obtain the distance from the backbone to the edge of the object.
[0062]
Level 1
CLesion (Dot, Blot, etc.): the grayscale image is converted into a binary image containing the small blobs, and filtering is performed using the IndicationFilter <DarkonLight / LightonDark>.
[0063]
Filtering is performed to smooth the noise and to produce the vascular bed. Which filtering to use can be varied and is determined by experiments performed on images from a particular camera to measure the relative merits.
[0064]
The filter has a 3 × 3 cross at the center and 8 neighboring "arms", each ending in a "hand" of length 2 pixels. The (weighted sum of the) gray value of the center is compared with the sum of each "hand". If the center is darker, a hit is marked in the center 3 × 3. The hands are moved outward from the center in steps of stepSize, starting from minSize and going to maxSize. Multiple passes are added in order to pick up more than one lesion size. This is an incomplete matched filter and produces many false alarms. It is called with minSize, maxSize and stepSize (ODD or EVEN), as follows.
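For illustration, a hedged sketch of the cross-and-arms filter follows. The exact weighting of the 3 × 3 center, the hit criterion ("darker than all hands" is assumed here) and the treatment of off-image pixels are assumptions; like the original, this produces many false alarms that are pruned later.

    // Cross-and-arms dark-lesion filter: compare the 3x3 center block with 2-pixel
    // "hands" at the ends of 8 arms, for radii from minSize to maxSize.
    #include <vector>
    #include <cstdint>

    static const int ADX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
    static const int ADY[8] = { 0, -1, -1, -1, 0, 1, 1, 1 };

    std::vector<uint8_t> darkLesionHits(const std::vector<uint8_t>& img, int w, int h,
                                        int minSize, int maxSize, int stepSize)
    {
        std::vector<uint8_t> hits(img.size(), 0);
        auto at = [&](int x, int y) -> int {
            if (x < 0 || y < 0 || x >= w || y >= h) return 255;   // off-image counts as bright
            return img[y * w + x];
        };

        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                // Sum of the 3x3 center block (uniform weights assumed).
                int center = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        center += at(x + dx, y + dy);

                for (int r = minSize; r <= maxSize; r += stepSize) {
                    bool darkerThanAllHands = true;
                    for (int a = 0; a < 8 && darkerThanAllHands; ++a) {
                        // 2-pixel "hand" at the end of arm a.
                        int hx = x + r * ADX[a], hy = y + r * ADY[a];
                        int hand = at(hx, hy) + at(hx + ADX[a], hy + ADY[a]);
                        // Compare mean gray values: center/9 vs hand/2.
                        if (center * 2 >= hand * 9) darkerThanAllHands = false;
                    }
                    if (darkerThanAllHands) { hits[y * w + x] = 1; break; }
                }
            }
        return hits;   // many false alarms; pruned later by size and geometry
    }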
[0065]
The last step is PruneBySize ...
[0066]
The CRa is converted to an object list, and simple geometric rules are applied to prune the objects in the list. The list is managed and held in a CIArray (see the definition). Minimum and maximum thresholds specify the color range of the objects to consider (for example, red and green is from 1 to 2). Pruning is enabled by minimum and maximum ranges for the number of pixels constituting an object and for the object extents. AND / OR can be specified for the constraint rules on the x and y ranges, and it can be specified whether filled objects pass.
[0067]
Level 2
Processing in object space. Initiated by a call to <CObjetsManipulation::PruneBySize>. PruneBySize is a common function for all of the lesions and does not have a separate body per lesion.
[0068]
BOOL CLesion::GeometricFilter(BOOL bUseDlg)
[0069]
The CRa is converted to an aggregated pixel group or object list, and simple geometric rules, with minimum and maximum ranges for the number of pixels constituting an object and for the object extents, are applied to prune the objects in the image.
[0070]
The first and second moments are calculated (area, center of gravity, major and minor axes of the best-matching ellipse).
[0071]
Test 1: area > minPix
Test 2: xExtent and yExtent are compared with the minimum and maximum thresholds, or only one of them is checked against the minimum constraint. AND / OR can be specified for this constraint, i.e. whether only one of the dimensions or both dimensions must satisfy the limit rules on the x and y ranges.
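A hedged sketch of the Level-2 geometric pruning tests described above follows (area against minPix, and the xExtent / yExtent constraint with a selectable AND / OR combination); the Object structure and parameter names are illustrative, not the patent's CLesion API.

    // Level-2 geometric pruning: area test plus AND / OR extent constraints.
    #include <vector>
    #include <algorithm>

    struct Object {
        int areaPix;        // number of pixels in the object
        int xExtent;        // bounding-box width
        int yExtent;        // bounding-box height
    };

    struct GeometricLimits {
        int  minPix;
        int  minExtent, maxExtent;
        bool requireBoth;   // true = AND (both dimensions constrained), false = OR
    };

    bool passesGeometricFilter(const Object& obj, const GeometricLimits& lim)
    {
        // Test 1: area > minPix.
        if (obj.areaPix <= lim.minPix) return false;

        // Test 2: extent constraints combined with AND / OR.
        bool xOk = obj.xExtent >= lim.minExtent && obj.xExtent <= lim.maxExtent;
        bool yOk = obj.yExtent >= lim.minExtent && obj.yExtent <= lim.maxExtent;
        return lim.requireBoth ? (xOk && yOk) : (xOk || yOk);
    }

    // Pruning: keep only the objects that pass both tests.
    void pruneBySize(std::vector<Object>& objects, const GeometricLimits& lim)
    {
        objects.erase(std::remove_if(objects.begin(), objects.end(),
                      [&](const Object& o) { return !passesGeometricFilter(o, lim); }),
                      objects.end());
    }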
[0072]
Level 3
The image-space and object-space evidence is refined with the help of expert rules.
[0073]
CLesion::SignatureFilter(BOOL bUseDlg)
[0074]
Modified 02/28/97 to allow processing based on cf5 or the raw code. A large-scale overhaul of CDialogManager and the ini files was also performed: three new string sections per lesion were added to the ini file and one was erased, i.e. L3[lesion]Dlg was replaced by L3[lesion]CommonDlg, L3[lesion]Cf5Dlg and L3[lesion]RawDlg. In the first dialog the user determines whether to use CF5 or raw (BOOL, m_param10) and the parameters common to both methods. The second dialog is selected based on the previously chosen value of m_param10 and queries the parameters specific to the particular method.
[0075]
CLesion::SignatureFilterBasedOnRaw
[0076]
An analysis function modified 1/17/97 to create a diagnostic file with a fixed name in the current results directory, and divided 2/27/97 into the original type of function (using CF5) and a new type (using raw) to allow two different types of processing. This file contains the objects that passed, filled in yellow, for the final analysis. The file is overwritten on each pass; it is useful during batch processing of a single image.
[0077]
Process
[Equation 2]
[0078]
The default constructor generates an array of SIGNATURESTATISTICS that is currently hard-coded to size = 5. Index 0 substitutes for the statistics of the CRegionarray parameters of the stamp, and indexes 1-4 hold the statistics of the R/G/Y/B stamps (the islands within the region array). For validation it is necessary to ensure that the statistical values of each color are meaningful; a NULL in the array of structs is checked there.
[Equation 3]
[0079]
1. Illustrating CStampAanalysis (referring to the signature part: <RED if dot/blot, YELLOW for lipid/cotton-wool (OIS), BLUE for lipid/cotton-wool (Kaiser)>).
[0080]
2. Initialize the statistics.
[0081]
typedef struct _SIGNATURESTATS
{
    short  nObjects;      // number of objects (of a particular color) in a given stamp (CIArray)
    double aveSize;       // average size
    double sizeSD;        // standard deviation of the sizes
    double dispersion;    // dispersion of the centroids (x, y) normalized by area
    double largestArea;   // number of pixels in the largest object
    CRect  largestRect;   // rect of the largest object
} SIGNATURESTATS;
SIGNATURESTATS m_ss.SetSize(5);   // an array of SignatureStatistics structures
Initialize: m_ss[0].nObjects, .largestArea, .aveSize, .sizeSD, .dispersion, .largestRect.left,
.largestRect.right, .largestRect.top, .largestRect.bottom
(This is consistent with the stamp Rect. The other entries are not used.)
GetSignatureStatistics
[0082]
A function added 03/04/97 to support the code-analysis expert-system design. The definition of SIGNATURESTATS is in CScreenerApp. Parameters will be added until the accuracy of the (damned) system is sufficient or until there is no further improvement. If too much redundant processing is required, the single function may be divided into a plurality of hierarchical functions; in any case a single SIGNATURESTATS structure is maintained.
[Equation 4]
[0083]
3. Examine the 8-neighbors of the colored core of the target, executing the code analysis that counts and records the transitions to the next color band; in other words, the deviation from the following ideal case.
[0084]
44444444
43333334
43222234
43211234
43211234
43222234
43333334
44444444
[0085]
Because it is difficult to compute notions such as "next to" and "substantially surrounds", this cannot be performed using the object list. This function makes it possible to use rules that provide a 0-8 scale metric of the transitions from each band to the next band, such as (7, 6, 3). The function requires a starting point in the core color region of the Regionaray. It also determines the effect of concavity on the object.
[0086]
A rule added 07/09/98 (sanjay), used when desired: if the local contrast is not sufficiently high, the result is an unconditional FAIL.
[Equation 5]
[0087]
An unconditional FAIL if the largest object is too large to be a hemorrhage.
[Equation 6]
[0088]
An unconditional FAIL if the largest object is too small.
[Equation 7]
[0089]
An unconditional FAIL if there is too much clutter.
[Equation 8]
[0090]
An unconditional FAIL if the shape is distorted.
[Equation 9]
[0091]
InvestigateNeighborhood returns SUCCEED when it yields a "good" slope {a, b, c}.
[Equation 10]
[0092]
4. After each lesion pass:
[Equation 11]
[0093]
5. After each image is processed (all lesions), the results are written to the mdb for later postprocessing (grading).
[Equation 12]
[0094]
6. After a batch updates the mdb for a patient, all the field results (7 per eye) are integrated from the database and a grade from 1 to 3 is generated.
[0095]
Generalization using quantification of severity and neural nets
A neural net improves the segmentation based on features extracted from the image.
[0096]
* Histograms of the number and size of lesions, by lesion type.
* The best type of matched filter for lesions is a classical neural network:
[A] for each lesion, select an input-layer size compatible with the size of the lesion;
[B] train on the target regions;
[C] apply to candidate regions identified by a test biased toward false positives;
[D] there is no conceivable reason to identify two or more lesions, or to view the image as a single entity, in a single pass;
[E] the problem of classifying the presence and number of lesions, in which the "entities" are small objects with defined characteristics and there is no need to be aware of "holistic" relationships or of what is "clear" in the bitmap, is fundamentally different from the reader problem under discussion. (Large objects are not relevant at this stage; we can probably note the characteristics that require cross-referencing.)
[0097]
Entities were modified to accept a penalty function for the area of the image to be segmented (dynamic-programming-based segmentation); the penalty function is initialized (the starting PARCEL indicated by the color of the circular region); a restricted zone is defined (penalty = 1).
[0098]
Extraction of lesions given an indication of the image and field
Intelligent agents
An algorithm learns parameters from images marked by a retinal specialist and associates the parameters with the image type.
[0099]
This is one JAZZ; it helps to achieve the following for 10 patients: the original, the expert markings (dot/blot, lipid, striate, cotton-wool patches), and four comparisons and intermediate results.
[0100]
Implementation issues
Several polymorphic data structures represent regions and lines so that efficient algorithms can be written for different types of processing using different representations. For example, a run-length-encoded representation of a binary image enables high-speed determination of statistical properties such as area, while an array representation enables fast random access to cells for morphological processing. A picture-coding method is used for efficient search for recognition and interpretation.
[0101]
The physical realization of the system is envisaged to take the following form.
[0102]
I) An SQL-based image database.
II) Access to an expert knowledge base validated via SQL.
III) Database scripts in LISP for finding the respective regions: rules for all types of knowledge.
IV) Inference in a LISP engine: OTTO, OPS5, etc.
V) Low-level routines in C: filters; modified entities (dynamic-programming-based segmentation accepting a penalty function for the area of the image to be split); initialization of the penalty function (the starting PARCEL indicated by the color of the annular region); definition of a restricted zone (penalty = 1).
[0103]
Comparison of sequential images
Sequential images are compared by superposition, differencing and feature comparison to detect pathological changes occurring over time and to enhance risk prediction (new lesions are detected by comparison against the database of lesion numbers and locations, or anatomical features are detected, for example their movement toward the fovea).
[0104]
While the invention has been described above with reference to specific embodiments, it will be understood that the invention is not limited to the embodiments disclosed. The present invention encompasses various modifications and equivalent structures falling within the spirit and scope of the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0105]
FIG. 1 is a diagram showing the histograms of two fundus images.
FIG. 2 is a diagram showing the first fundus image and the result of the retina-extraction process.
FIG. 3 is a diagram showing the blood vessels obtained by processing the retinal image with a coalescent filter [Bhasin and Meystel 94].
FIG. 4 is a diagram schematically showing the problem of determining the position of the retina in the photograph.
FIG. 5 is a diagram showing the inset.
FIG. 6(a) is a diagram showing the sub-sampled original retinal image, and FIG. 6(b) shows the image of (a) after removal of the blood vessels.
FIG. 7 is a diagram showing the result of searching the retina using geometry to improve the pick-up.
FIG. 8 is an overall flow diagram of the method of the present invention.
FIG. 9 is a flow diagram of the image-processing method of the present invention.
FIG. 10 is a flow diagram of the photographer-support method of the present invention.
FIG. 11 is a flow diagram of the longitudinal (over-time) study method of the present invention.

Claims (15)



1. A method of screening retinal photographs for diagnosing diabetic retinopathy, realized by a computer, comprising the steps of:
(A) receiving one or more images of a human retina and storing the retinal images in the memory of a digital computer;
(B) processing the stored original images to identify the presence of features indicative of diabetic retinopathy selected from the group consisting of petechiae, microaneurysms, blot hemorrhages, striate hemorrhages and lipid exudates; and
(C) reporting the presence or absence, and the nature, of the features indicative of diabetic retinopathy.

2. The method of claim 1, further comprising the step of selecting the images to be processed as those stored images considered acceptable with respect to at least one image-quality criterion selected from the group consisting of identifiable essential elements, contrast, focus, alignment and integrity.

3. The method of claim 1, wherein the received retinal images are recorded from a human retina by a retinal camera.

4. The method of claim 3, further comprising the steps of:
(D) evaluating the stored retinal images with respect to at least one image-quality criterion selected from the group consisting of identifiable essential elements, contrast, focus, alignment and integrity; and
(E) when the evaluation of a selected image-quality criterion in step (D) is unacceptable, prompting the operator to record another retinal image.

5. The method of claim 1, wherein retinal photographs are taken at one or more predetermined intervals and the retinal photographs taken across the one or more intervals are compared to determine the development of the features indicative of diabetic retinopathy.
6. The method of claim 1, wherein the original photographs are stored in the memory of the digital computer using an indexing rule indicating the patient identification, the imaged eye and field, and the processing applied.

7. The method of claim 1, wherein one or more images are selected on the basis of predetermined criteria and sent for examination by an expert.

8. The method of claim 7, wherein the sent images and the report of the features indicative of retinopathy from step (C) of claim 1 are provided to the expert.

9. The method of claim 7, wherein the examination result obtained by the expert is sent using a network and stored with the sent images.

10. A diabetic retinopathy diagnostic system having a computer including a processor and memory, the processor being programmed to:
(A) receive one or more images of a human retina and store the retinal images in the computer memory;
(B) process the stored original images to identify the presence of features indicative of diabetic retinopathy selected from the group consisting of petechiae, microaneurysms, blot hemorrhages, striate hemorrhages and lipid exudates; and
(C) report the presence and nature of the features indicative of diabetic retinopathy.

11. The system of claim 10, further comprising a retinal camera operably coupled to the computer for capturing images from a human retina.

12. The system of claim 10, further comprising at least one additional computer operatively coupled to said computer using a network.

13. The system of claim 12, wherein one or more stored retinal images selected on the basis of predetermined criteria are sent over the network for examination by an expert.

14. The system of claim 13, wherein the sent images and the report of the features indicative of retinopathy from step (C) of claim 10 are provided to the expert.

15. The system of claim 13, wherein the examination result of the expert is sent using said network and stored with the sent images.
