DeepLab and Bias Field Correction Based Automatic Cone Photoreceptor Cell Identification with Adaptive Optics Scanning Laser Ophthalmoscope Images

The identification of cone photoreceptor cells is important for the early diagnosis of eye diseases. We propose an automatic deep-learning algorithm for cone photoreceptor cell identification on adaptive optics scanning laser ophthalmoscope images. The proposed algorithm is based on DeepLab and bias field correction. Taking manual identification as the reference, our algorithm is highly effective, achieving a precision, recall, and F1 score of 96.7%, 94.6%, and 95.7%, respectively. To illustrate the performance of our algorithm, we present identification results for images with different cone photoreceptor cell distributions. The experimental results show that our algorithm achieves accurate photoreceptor cell identification on images of human retinas, comparable to manual identification.


Introduction
Vision is one of the most important human senses. Unfortunately, retinopathy, a major cause of blindness, has become increasingly common. Most retinopathy patients can avoid blindness with early diagnosis and treatment. Although optical imaging allows observation of the retina, higher-resolution imaging is required for the early diagnosis of retinopathy; however, ocular aberrations limit the resolution of optical imaging. To address this limitation, adaptive optics (AO), originally developed to remove aberrations caused by atmospheric instability [1], has been used to correct ocular aberrations in retinal imaging [2][3][4]. AO allows the resolution of in vivo retinal imaging to reach the cellular level [4][5][6]. In particular, AO scanning laser ophthalmoscopy (AO-SLO) integrates AO to image cone photoreceptor cells clearly [4]. Thus, AO-SLO allows observation of pathological changes in the distribution of photoreceptor cells on the retina, outperforming other retinal imaging techniques in the diagnosis of certain diseases characterized by disorders in the distribution of cone photoreceptor cells [7][8][9][10][11].
In 2014, Google introduced a supervised deep-learning semantic segmentation model called DeepLab [27]. Owing to its remarkable advantages, DeepLab has become a popular topic in research and engineering [28][29][30][31][32][33], and one of its variants, DeepLab v3 [34], has been widely used in medical image processing [35][36][37][38][39][40][41]. We propose an automatic cone photoreceptor cell identification algorithm based on DeepLab v3 for AO-SLO images. The proposed algorithm also uses bias field correction [42] to further improve identification accuracy. To confirm the effectiveness of the proposed algorithm, we computed various evaluation measures (i.e., precision, recall, and F1 score) with respect to manual identification, which is considered the reference providing the ground truth. The performance of the proposed algorithm is further demonstrated by showing cone photoreceptor cell identification results for images with different cone photoreceptor cell distributions.

Figure 1 shows the outline of the proposed deep-learning cone photoreceptor cell identification algorithm with its main steps of (1) training, (2) testing, and (3) postprocessing. First, the training dataset, which includes AO-SLO images and their corresponding segmented images, is used to train DeepLab [34]. Second, the bias-field-corrected images obtained from the test dataset after applying bias field correction [42] are input to the trained DeepLab to generate segmented test images. Third, the bias-field-corrected images and segmented test images are processed by a threshold-based algorithm to obtain finely segmented images and to identify individual cone photoreceptor cells by calculating their centroids.

Methods
2.1. Training. To achieve a fine segmentation of cone photoreceptor cells, we magnified the training AO-SLO images and their corresponding segmented images four times isotropically before training. In detail, the training AO-SLO images were interpolated in antialiasing mode to obtain high-quality images, and the corresponding segmented images were interpolated in nearest mode to preserve their binary values. Both interpolation operations are available in the Python Imaging Library. Then, DeepLab v3 [34], with its ResNet-101 backbone pretrained on the ImageNet dataset, was trained on the magnified images. In the training images, the area of the cone photoreceptor cells is larger than that of the background. To compensate for this imbalance, we introduced a cross-entropy loss function that weights the cone photoreceptor cells (0.3) and the background (0.7) separately. During training, we set the batch size to 2 and the number of epochs to 100. The outline of the training process is shown in Figure 2.
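The magnification step described above can be sketched as follows with the Python Imaging Library (Pillow). This is a minimal sketch, not the paper's released code; the function name `magnify_pair` is our illustrative choice, and Lanczos resampling is used as Pillow's antialiasing filter.

```python
from PIL import Image

def magnify_pair(img, lbl, factor=4):
    """Isotropically magnify an AO-SLO image and its segmentation mask.

    The grayscale image uses antialiased (Lanczos) resampling for quality;
    the mask uses nearest-neighbour resampling so its values stay binary.
    """
    size = (img.width * factor, img.height * factor)
    img_up = img.resize(size, resample=Image.LANCZOS)
    lbl_up = lbl.resize(size, resample=Image.NEAREST)
    return img_up, lbl_up
```

Nearest-neighbour resampling of the mask matters here: any smoothing filter would introduce intermediate gray values at cell boundaries and break the binary labels.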

2.2. Testing. Directly using the trained DeepLab v3 to segment four-time magnified test AO-SLO images fails with high probability. A representative failure case is shown in Figure 3, where segmentation follows the local intensity bias instead of the cone photoreceptor cells, leading to segmentation failure.
To solve this problem, we applied bias field correction to the AO-SLO images. First, a bias field image is generated by applying a Gaussian filter with a sigma of 22 pixels to the AO-SLO image [26]. Second, the AO-SLO image is divided by the bias field image to obtain the bias-field-corrected image. Third, the four-time magnified bias-field-corrected image is input to the trained DeepLab, and the segmentation results are obtained. The outline of the testing process is shown in Figure 4. Figure 5 depicts the bias field correction [42] and DeepLab segmentation [34] performed on the image shown in Figure 3. The bias field is corrected, and the segmented image is accurate.

Figure 5 also shows that some cone photoreceptor cells are merged after DeepLab segmentation. To mitigate this problem, we applied thresholding to the bias-field-corrected images [36]. The intensity values in the DeepLab segmentation mask were first extracted from the bias-field-corrected image. Then, the mean intensity value was calculated and used as the threshold to segment the bias-field-corrected image. Through thresholding, cone photoreceptor cells were identified in two steps: the contours of the segmentation results were extracted using the findContours function of OpenCV, and the centroids of the areas inside the contours were calculated as the cone photoreceptor cell positions. The final identification results are shown in Figure 6, where adjacent cell merging is mostly resolved, and individual cone photoreceptor cells are accurately identified.
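The correction-and-postprocessing chain above can be sketched as follows. This sketch substitutes `scipy.ndimage` connected components for the OpenCV contour calls named in the text, and the division-based correction and mean-intensity threshold reflect our reading of the steps above rather than the paper's exact code.

```python
import numpy as np
from scipy import ndimage

def correct_bias_field(img, sigma=22):
    """Estimate the bias field with a Gaussian filter and divide it out."""
    bias = ndimage.gaussian_filter(img.astype(np.float64), sigma=sigma)
    corrected = img / (bias + 1e-9)
    # Rescale to [0, 1] for subsequent thresholding.
    corrected -= corrected.min()
    return corrected / (corrected.max() + 1e-9)

def locate_cells(corrected, mask):
    """Threshold at the mean intensity under the DeepLab mask, then return
    the centroid of each connected component as a cell position (y, x)."""
    thr = corrected[mask > 0].mean()
    binary = corrected >= thr
    labels, n = ndimage.label(binary)
    return ndimage.center_of_mass(binary, labels, range(1, n + 1))
```

The per-mask mean threshold is what separates touching cells: pixels in the dimmer gap between two merged cells fall below the mean of the bright cell interiors and are removed, splitting one component into two.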

Results
We evaluated the proposed algorithm on a publicly available dataset [15] that contains 840 AO-SLO images and their corresponding cone photoreceptor cell segmentation results as ground truth. We used 800 AO-SLO images for the training dataset and the remaining 40 images for the test dataset. The automatic processing took 2.95 hours for training (batch size 2, 100 epochs), 8.77 s for testing, and 0.76 s for postprocessing. These computation times were obtained on a computer running 64-bit Python and equipped with an Intel Core i7-10870H processor (2.20 GHz), 16.0 GB of RAM, and an NVIDIA GeForce RTX 2060 graphics card.
To confirm the effectiveness of the proposed algorithm for cone photoreceptor cell identification, we evaluated its identification performance using three measures, namely, precision, recall, and F1 score, with respect to the manual identification results taken as reference. The overall precision, recall, and F1 score for identification are listed in Table 1, where the values are compared with those of several algorithms [15,18,25,26]. The proposed algorithm achieves accurate cone photoreceptor cell identification, outperforming the comparison algorithms [18,25,26] except the graph-theory-based algorithm [15], which is often used as the ground-truth reference for cone photoreceptor cell identification but requires substantial computation and a complex implementation.
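The three measures can be computed from matched detections as follows. The distance-tolerance matching rule below is our assumption for pairing detected and manually marked cells; the paper does not spell out its matching criterion.

```python
import math

def match_and_score(detected, reference, tol=3.0):
    """Greedily match detected centroids to reference (manual) centroids
    within a distance tolerance, then compute precision, recall, and F1."""
    unmatched = list(reference)
    tp = 0
    for d in detected:
        best = min(unmatched, key=lambda r: math.dist(d, r), default=None)
        if best is not None and math.dist(d, best) <= tol:
            tp += 1                 # true positive: close enough to a manual mark
            unmatched.remove(best)  # each manual mark can be matched only once
    fp = len(detected) - tp         # detections with no manual counterpart
    fn = len(unmatched)             # manual marks never matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, three detections against two manual marks where one detection is spurious yields a precision of 2/3, a recall of 1.0, and an F1 score of 0.8.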
To illustrate the performance of the proposed algorithm, Figure 7 shows cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images. The cone photoreceptor cells are accurately identified on the three AO-SLO images with different distributions.

Discussion
In semantic segmentation, the relationship between the target objects and the background is usually complex. Cone photoreceptor cell identification is relatively simple: (1) only one type of object, the cone photoreceptor cell, must be segmented, and (2) cone photoreceptor cells do not contain rich texture details. Thus, an algorithm can segment the images according to area-based information. Because the target area containing the cone photoreceptor cells is much larger than is typical in general semantic segmentation, the DeepLab model is trained with a bias if the cone photoreceptor cells and background are weighted equally. To prevent this bias, we designed a cross-entropy loss function that assigns a smaller weight to the cone photoreceptor cells.
In general, supervised deep-learning algorithms provide higher accuracy than nonlearning-based and unsupervised-learning algorithms. Therefore, automatic algorithms for the accurate identification of cone photoreceptor cells on AO-SLO images can be developed by applying and modifying deep-learning algorithms that have demonstrated high-performance image segmentation and identification but have not yet been used for cone photoreceptor cell identification. In this regard, we presented modified versions of three well-known methods [43][44][45] as promising solutions for developing automatic and accurate cone photoreceptor cell identification algorithms for AO-SLO images.

Conclusions
We propose an automatic deep-learning algorithm for the identification of cone photoreceptor cells on AO-SLO images. The algorithm implements DeepLab v3 and bias field correction as its core techniques. To confirm its effectiveness, we computed its precision, recall, and F1 score with respect to manual identification, obtaining 96.7%, 94.6%, and 95.7%, respectively. Furthermore, to illustrate its performance, we presented cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images.

Data Availability
The original dataset used in this paper is publicly available online (http://people.duke.edu/~sf59/Chiu_BOE_2013_dataset.htm) [25]. However, our source code is not publicly available because it contains information that could compromise research participant privacy.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.