Abstract—An automatic method to detect hard exudates, a lesion associated with diabetic retinopathy, is proposed. The algorithm relies on their color, using a statistical classifier, and on their sharp edges, applying an edge detector, to localize them. A sensitivity of 79.62% with a mean of 3.2 false positives per image is obtained on a database of 20 retinal images with variable color, brightness and quality. In this way, we evaluate the robustness of the method in order to make it adequate for a clinical environment. Further efforts will be made to improve its performance.

Keywords—Diabetic retinopathy, hard exudates, image processing, retinal images.
I. INTRODUCTION
DIABETIC retinopathy (DR) is a severe eye disease that affects many diabetic patients. It remains one of the leading causes of blindness and vision defects in developed countries. Effective treatments exist that inhibit the progression of the disease, provided that it is diagnosed early enough. However, DR is usually asymptomatic in its early stages, so diabetic patients often do not undergo any eye examination until it is already too late for optimal treatment and severe retinal damage has been caused. Regular retinal examinations for diabetic patients guarantee an early detection of DR, significantly reducing the incidence of blindness. Because of the great prevalence of diabetes, mass screening is time consuming and requires many trained graders to examine the fundus photographs in search of retinal lesions. A reliable method for automated assessment of the presence of lesions in fundus images would be a valuable tool for assisting the limited number of professionals and reducing the examination time. This paper focuses only on the automatic detection of one of the lesions associated with DR: hard exudates. They usually appear in fundus photographs as small yellow-white patches with sharp margins and varying shapes. Among the lesions caused by DR, exudates are one of the most frequently occurring early lesions [1], so their detection and quantification will contribute to the mass screening and assessment of DR.

Some investigations in the past have
identified retinal exudates in fundus images based on their gray level [2],
[3], their high contrast [4]–[7], or their color [8], [9]. Because the brightness, contrast and color of exudates vary greatly among different patients and, therefore, among different photographs, these methods would not work on all the images used in a clinical environment. The main improvement introduced by the technique described in this paper is its robustness to the variable appearance of retinal fundus images, achieving good performance on all types of images, in contrast to these other approaches.
II. METHODOLOGY
The method attempts to detect hard exudates using two features of this lesion: its color and its sharp edges. Hard exudate extraction is therefore carried out in the following stages (a top-level sketch of the pipeline follows this list):
· Detection of the optic disk and the blood vessels.
· Detection of yellowish objects in the image.
· Detection of objects in the image with sharp edges.
· Combination of the previous steps to detect yellowish objects with sharp edges.
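As an overview, the sketch below chains the four stages. It is a minimal outline, not the authors' implementation; every function name (detect_od, matched_filter_vessels, classify_yellowish, kirsch_edges, feature_and, remove_fp) is hypothetical and is sketched, under stated assumptions, in the subsections that follow.

```python
import numpy as np

def detect_hard_exudates(rgb, t1=0.8):
    """Hypothetical top-level pipeline; each stage is sketched in its subsection."""
    od_mask = detect_od(rgb)                           # Sec. II-A (PCA + GVF snake, not reproduced here)
    vessel_mask = matched_filter_vessels(rgb[..., 1])  # Sec. II-A (green channel)
    yellow_mask = classify_yellowish(rgb, od_mask)     # Sec. II-B
    edge_mask = kirsch_edges(rgb[..., 1], t1)          # Sec. II-C
    lesions = feature_and(yellow_mask, edge_mask)      # Sec. II-D
    return remove_fp(lesions, od_mask, vessel_mask)    # Sec. II-D
```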
A. Detection of the optic disk and the blood vessels
In order to localize these main features, we build on work developed by other authors. We follow the method proposed in [7] to detect the center of the optic disk (OD). This method determines a number of candidate regions containing the brightest pixels in the intensity image; a PCA-based model approach is then applied to the candidate regions to give the final location of the OD. We also detect the disk boundary using a snake driven by an external field \mathbf{v}(x,y) = [u(x,y), v(x,y)] called Gradient Vector Flow (GVF) [10], obtained by minimizing over the image the energy functional

\varepsilon = \iint \mu \left( u_x^2 + u_y^2 + v_x^2 + v_y^2 \right) + |\nabla f|^2\, |\mathbf{v} - \nabla f|^2 \, dx\, dy \qquad (1)

where f is an edge map of the image and \mu is a regularization parameter. In this work the snake is initialized automatically as a circle placed at the center of the OD localized previously. The blood vessels are segmented by applying the matched filter method described in [11] to enhance the blood vessels and thresholding the resulting image.
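As an illustration, here is a minimal sketch of the vessel-enhancement step, assuming Gaussian-profile matched filters rotated over several orientations in the spirit of [11]; the kernel size, sigma, number of angles and threshold are assumptions, not the paper's values.

```python
import numpy as np
from scipy import ndimage

def matched_filter_vessels(green, n_angles=12, sigma=2.0, length=9, rel_thresh=0.5):
    """Sketch of matched-filter vessel segmentation in the spirit of [11]:
    correlate with rotated Gaussian-profile kernels, keep the maximum
    response over all orientations, then threshold."""
    green = green.astype(float)
    # Inverted Gaussian cross-section (vessels are darker than background),
    # replicated along the assumed vessel direction.
    x = np.arange(-7, 8)
    profile = -np.exp(-x**2 / (2.0 * sigma**2))
    kernel = np.tile(profile, (length, 1))
    response = np.full(green.shape, -np.inf)
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        k = ndimage.rotate(kernel, angle, reshape=True)
        k -= k.mean()                                   # keep the filter zero-mean after rotation
        response = np.maximum(response, ndimage.correlate(green, k))
    return response > rel_thresh * response.max()       # binary vessel map
```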
B. Detection of yellowish objects
The detection of this kind of object is carried out by performing color segmentation based on the statistical classification method described in [8] and [9]. This method is founded on the fact that if a group of features can be defined so that the objects in an image map to nonintersecting classes in the feature space, then we can easily identify different objects by classifying them into the corresponding classes according to a certain rule. For our algorithm, we have to discriminate between two classes, yellowish objects and background, which are well characterized using only three color features (the luminance of the pixels in each plane (R, G, B)). In order to map every pixel in the image to one of these classes, an appropriate discriminant function has to be defined. Using the posterior probability and Bayes' theory, we can obtain the minimum distance discriminant

D_i(X) = \| X - C_i \| \qquad (2)
where i = 1, ..., N and N is the number of classes (in our case N = 2). So for each pixel X = (x_R, x_G, x_B), the distances D_yell(X) and D_back(X) are calculated. If D_yell(X) is less than D_back(X), the pixel X is classified as a yellowish lesion; otherwise it is classified as background. C_yell and C_back denote the centers of the two classes in RGB space, which characterize the color of the yellowish objects and of the background respectively. Therefore, one problem has to be resolved before
applying this method: the definition of the feature centers C_yell and C_back. In [8] and [9], they are selected as global values after obtaining them from different windows in training samples. In that way, it is taken for granted that all the images have the same fundus color, and that the exudates and the background appear with the same illumination and color. In practice, there is a wide variation in the color of the fundus from different patients, strongly correlated with skin pigmentation and iris color. So global values for C_yell and C_back may work in some images but fail in others. This problem can be resolved using specific feature centers for each image. To define them while avoiding user interaction, we have to find pixels belonging to both classes in all the images. For the background, we select a group of pixels that surrounds the contour of the OD obtained in section A. And because the OD usually has the same color as the exudates, the pixels that belong to the OD are used to identify the color of the yellowish objects. So we obtain for each fundus photograph the values of C_yell and C_back:

C_{yell} = \frac{1}{m} \sum_{i=1}^{m} Y_i \qquad (3)

C_{back} = \frac{1}{n} \sum_{i=1}^{n} B_i \qquad (4)
where m and n are the numbers of pixels in the yellowish and background regions respectively that are used to calculate these centers, and Y_i and B_i are the vectors of the three color features in the respective regions.

Because of lighting variation, decreasing color saturation, skin pigmentation, etc., the color of lesions in some regions of an image may appear dimmer than the background color located in another region, and such lesions would be wrongly classified. So it is of crucial importance to perform an adjustment for non-uniformity of illumination. But if a general method to correct this phenomenon is applied, the color in some fundus photographs, due to the wide variation of this feature among different patients, could be modified, introducing artificial effects. In this work, we use a new color image. This image is obtained by performing an operation on the channels (N1, N2, N3) of the NTSC color space and then converting the resulting image (N1', N2, N3) back into the RGB color space. In this way, we improve both the contrast of the lesions and the overall color saturation of the image, so that the OD and the exudates appear with the same color independently of their location (Fig. 1(b)). Hard exudates and other yellowish objects can be detected by applying the minimum distance discriminant to all the pixels of this image, as shown in Fig. 1(c). Besides hard exudates, other yellowish regions are detected, such as the optic disk, other lesions (cotton wool spots and drusen), artifacts, etc.
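A minimal sketch of this stage follows. The paper does not specify the NTSC-channel operation, so adaptive histogram equalization of the luminance channel is used here purely as an assumed stand-in; the classification itself implements (2) with per-image centers (3)–(4), taking C_yell from the OD pixels and C_back from a ring surrounding the OD contour (the ring width is also an assumption).

```python
import numpy as np
from scipy import ndimage
from skimage import color, exposure

def enhance(rgb):
    """Assumed stand-in for the paper's unspecified NTSC-channel operation:
    equalize the luminance channel N1 in YIQ (NTSC) space, keeping N2 and N3.
    Expects rgb as floats in [0, 1]."""
    yiq = color.rgb2yiq(rgb)
    yiq[..., 0] = exposure.equalize_adapthist(yiq[..., 0])
    return color.yiq2rgb(yiq)

def classify_yellowish(rgb, od_mask, ring_width=10):
    """Minimum-distance classification (2) with per-image centers (3)-(4):
    C_yell from OD pixels, C_back from a ring around the OD contour."""
    img = enhance(rgb)
    ring = ndimage.binary_dilation(od_mask, iterations=ring_width) & ~od_mask
    c_yell = img[od_mask].mean(axis=0)     # (3): mean RGB over OD pixels
    c_back = img[ring].mean(axis=0)        # (4): mean RGB over background ring
    d_yell = np.linalg.norm(img - c_yell, axis=-1)
    d_back = np.linalg.norm(img - c_back, axis=-1)
    return d_yell < d_back                 # True where the pixel is yellowish
```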
C. Detection of objects with sharp edges
An edge-finding operator can characterize the edge strength of the objects in an image. In our case, Kirsch's mask

K = \begin{bmatrix} 5 & 5 & 5 \\ -3 & 0 & -3 \\ -3 & -3 & -3 \end{bmatrix} \qquad (5)

and its different rotations are applied to the green component of the color fundus image, and the maximum of their responses is selected to detect the edges in the fundus photograph.
Thresholding this image at grey level T1, we obtain the objects with the sharpest edges (Fig. 1(d)). T1 is a parameter of the algorithm: if T1 is chosen too low, the sensitivity increases but the specificity decreases. Other objects with sharp edges are also detected, such as the optic disk, blood vessels, hemorrhages, etc.
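As an illustration, here is a sketch of this stage. The eight compass orientations are generated by rotating the border of (5) in 45° steps, and the response is normalized to [0, 1] so that a threshold such as T1 = 0.8 (Section III) can be applied directly; the normalization is an assumption, since the paper only speaks of a grey level.

```python
import numpy as np
from scipy import ndimage

# North-facing Kirsch compass kernel, equation (5).
KIRSCH = np.array([[ 5.0,  5.0,  5.0],
                   [-3.0,  0.0, -3.0],
                   [-3.0, -3.0, -3.0]])

def rotate45(k):
    """Rotate the eight border entries of a 3x3 kernel one step (45 degrees)."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[i] for i in idx]
    out = k.copy()
    for (i, j), v in zip(idx, vals[-1:] + vals[:-1]):
        out[i, j] = v
    return out

def kirsch_edges(green, t1=0.8):
    """Maximum response over the 8 Kirsch orientations on the green channel,
    normalized to [0, 1] (an assumption) and thresholded at T1."""
    k = KIRSCH
    response = np.full(green.shape, -np.inf)
    for _ in range(8):
        response = np.maximum(response, ndimage.correlate(green.astype(float), k))
        k = rotate45(k)
    response = np.clip(response / response.max(), 0.0, 1.0)
    return response > t1
```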
D. Combination of the two previous images
To detect only hard exudates and remove the false positives introduced in the previous stages, we combine the two images obtained using a Boolean operation, the feature-based AND. In a feature-based AND, ON pixels in one binary image are used to select objects (connected groups of ON pixels) in another image. Here we use the image of objects with sharp edges to select objects in the image of yellowish elements, because in the latter the lesions are detected completely, not only their contours. In this way, we obtain lesions characterized by the two desired features: yellowish color and sharp edges. After that, some false positives remain, due to the papillary region and some artifacts near the vessels (because of reflections in young patients). To reduce them, we remove a dilated version of the segmentation results of the OD and vessel detection of section A. Fig. 2 shows the final image.
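A sketch of this combination step, assuming scipy connected-component labeling; the dilation radius used for the OD/vessel removal is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def feature_and(yellow_mask, edge_mask):
    """Feature-based AND: keep connected components of the yellowish image
    that contain at least one ON pixel of the sharp-edge image."""
    labels, _ = ndimage.label(yellow_mask)
    hit = np.unique(labels[edge_mask & (labels > 0)])
    return np.isin(labels, hit)

def remove_fp(lesions, od_mask, vessel_mask, radius=5):
    """Remove false positives by subtracting a dilated version of the OD and
    vessel segmentations (the dilation radius is an assumed parameter)."""
    mask = ndimage.binary_dilation(od_mask | vessel_mask, iterations=radius)
    return lesions & ~mask
```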
Fig. 1. Images obtained applying the method: (a) original image, (b) image after the enhancement, (c) detection of the yellowish objects, (d) detection of the objects with sharp edges.
Fig. 2. Detection of the hard exudates presented in Fig. 1(a).
III. RESULTS
We have tested the algorithm on a database of twenty 576×768 digital images taken with a TopCon TRC-NW6S Non-Mydriatic Retinal Camera, and have compared the results obtained by the algorithm with those of a specialist who marked the exudates on these images. To evaluate the detection performance of the system, the numbers of true and false positive clusters have to be determined for each image in the test set while the segmentation threshold T1 is varied. In this way the true positive (TP) rate can be plotted as a function of the number of false positive (FP) detections per image using a free-response receiver operating characteristic (FROC) curve. Each decision threshold results in a corresponding operating point on the curve. We believe that FROC analysis is an appropriate measure for our detection system, because there is a trade-off between the TP rate and the number of FP detections per image. A true exudate is considered detected if the detected clusters overlap at least 50% of its area. All findings outside this criterion are considered false detections. The curve obtained is shown in Fig. 3; with T1 = 0.8, a sensitivity of 79.62% is obtained.
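As an illustration of this counting, here is a minimal sketch under one simple reading of the 50% criterion; detection and truth are assumed to be binary masks for a single image, and the FROC points would be obtained by sweeping T1 and averaging over the test set.

```python
import numpy as np
from scipy import ndimage

def score_image(detection, truth):
    """Count true/false positive clusters for one image: a true exudate is
    detected when detected clusters cover at least 50% of its area, and a
    detected cluster touching no true exudate counts as a false positive."""
    det_lab, n_det = ndimage.label(detection)
    tru_lab, n_tru = ndimage.label(truth)
    tp = sum(
        (detection & (tru_lab == i)).sum() >= 0.5 * (tru_lab == i).sum()
        for i in range(1, n_tru + 1)
    )
    fp = sum(
        not (truth & (det_lab == j)).any()
        for j in range(1, n_det + 1)
    )
    return tp, fp
```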
IV. DISCUSSION
The best performance is achieved at the operating point with a sensitivity of 79.62% and a mean of 3.2 false detections per image. Some exudates are not detected due to their proximity to blood vessels, or because they appear very faint even after the proposed enhancement. Missing faint exudates is not of crucial importance, since even human experts are unsure about some ambiguous regions. In the present work we have evaluated the system on an independent database of retinal images with variable characteristics in order to investigate its robustness. Due to the lack of a common database and of a reliable way to measure performance, it is difficult to compare our method with those reported in the literature. Although some works [5], [7] show better performance than our algorithm, the main improvement is that good performance is obtained overall, independently of the color, illumination, size, etc., while keeping the number of FPs low. This independence from the appearance of the image is obtained by using image-specific processing (to enhance each image and to obtain the color of its background and exudates), unlike other authors, who use global approaches for all images. The behavior of our algorithm is therefore appropriate for a clinical environment. But some problems deserve comment. First of all, the algorithm depends on other detection tasks, such as the detection of the OD and blood vessels, making the results dependent on the success of these methods. This indicates the need to further improve the robustness of these tasks. On the other hand, we have used the color of the OD to characterize yellowish regions, but this may not represent their real color. It could be a good idea to localize some exudates first and then use their color.
Other issues concerning ADDR
One of the issues arising from the use of digital images for diabetic retinopathy screening is the time and space involved in the capture and storage of the files. Currently, image compression using utilities such as Joint Photographic Experts Group (JPEG) is not recommended, although there is some evidence that, while heavy file compression significantly reduces the ability of automated detection programs, a compression ratio of 1:12 or 1:20 would produce little reduction in sensitivity. Another consideration for diabetic screening is the use of routine mydriasis. Hansen et al. (2004a) address the impact of pharmacologically dilated pupils on ADDR. They report sensitivities before and after pupil dilatation of 90% and 97%, respectively, for the detection of 'red lesions' (hemorrhages/microaneurysms), while the specificity before and after pupil dilatation was reported as 86% and 75%, respectively (n = 165 eyes of 83 patients). The use of routine mydriasis for diabetic screening is controversial. Currently, the National Screening Committee in England and Wales has recommended routine mydriasis for all screened patients, whereas the Health Technology Assessment Board for Scotland recommends mydriasis only under certain defined circumstances.
Whilst the detection of sight-threatening diabetic retinopathy has received the most attention with respect to automated digital image analysis, other pathologies offer the potential to use this tool as well, including morphological evaluations of the optic nerve in glaucoma, of the macular region in age-related macular degeneration, and in retinopathy of prematurity (ROP). Table 1 summarises the sensitivities and specificities of selected studies of ADDR.
V. CONCLUSION
In this work we have evaluated an automated detection scheme for one of the primary signs of DR: hard exudates. This lesion was identified by its color, using a statistical classifier, and by the sharpness of its edges, applying a Kirsch operator. After applying our method to 20 fundus photographs, a detection sensitivity of 79.62% was obtained for hard exudates while the number of FPs was kept low (3.2/image). Our results suggest that the system is suited to complement the DR screening performed by ophthalmologists in their daily practice, because it is very robust to changes in the characteristics of the images. Future work will address the issue of improving the sensitivity by improving the results of other tasks, such as the detection of the OD and blood vessels, and by trying to localize faint and small hard exudates.
Fig. 3. FROC curve for a database of 20 retinal images using the developed method.
REFERENCES
[1] D. Klein, B. E. Klein, S. E. Moss et al., "The Wisconsin epidemiologic study of diabetic retinopathy. VII. Diabetic nonproliferative retinal lesions," Ophthalmol., vol. 94, pp. 1389–1400, 1986.
[2] N. P. Ward, S. Tomlinson, and C. J. Taylor, "Image analysis of fundus photographs – The detection and measurement of exudates associated with diabetic retinopathy," Ophthalmol., vol. 96, pp. 80–86, 1989.
[3] R. Philips, J. Forrester, and P. Sharp, "Automated detection and quantification of retinal exudates," Graefe's Arch. Clin. Exp. Ophthalmol., vol. 231, pp. 90–94, 1993.
[4] K. Akita and H. Kuga, "A computer method of understanding ocular fundus images," Pattern Recogn., vol. 21, no. 6, pp. 431–443, 1982.
[5] T. Walter, J.-C. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy – Detection of exudates in color fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236–1243, Oct. 2002.
[6] H. Li and O. Chutatape, "Fundus image features extraction," in Proc. 22nd Annual Int. Conf. of the IEEE Engin. Med. Biol. Soc., EMBS'00, Chicago, IL, 2000, pp. 3071–3073.