We developed a new method for automatically classifying corneal nerves into main trunk nerves and branch nerves for further morphometric analysis after segmentation with a deep neural network. During cross-validation, our trained model achieved an average sensitivity of 86.1%, a specificity of 90.1%, and an AUC of 0.88 on the test dataset. We automatically calculated average nerve tortuosity, total nerve density, and the total number of branch points, which have been shown to be useful clinical parameters for measuring the severity of DED and ocular pain.7,8,12 Our work builds upon the first deep learning–based CNF segmentation and evaluation method based on the U-Net architecture, proposed by Williams et al.43 Their approach (the Liverpool Deep Learning Algorithm, LDLA) achieved ICCs between manual annotation and automatic segmentation of 0.933 for total CNF length, 0.891 for branching points, 0.878 for the number of nerve segments, and 0.927 for fractals, but it used a different method of corneal nerve characterization. Williams et al.43 calculated the total number of nerve segments by counting the segments between two branching points, two end points, or an end point and a branching point, and they used fractal dimensions to describe nerve curvature.
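This segment definition can be made concrete on a skeletonized segmentation mask. The sketch below is illustrative only, assuming a 1-pixel-wide binary skeleton and 8-connectivity; the helper names are hypothetical and this is not the implementation of Williams et al.43 or of our pipeline. End points have exactly one skeleton neighbour, branch points have three or more, and the segments between two such nodes are the connected pieces that remain once the branch points are removed:

```python
import numpy as np

def neighbour_counts(skel):
    """8-neighbour count for every pixel of a binary skeleton image."""
    s = np.pad(skel.astype(np.uint8), 1)
    counts = sum(
        np.roll(np.roll(s, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return counts[1:-1, 1:-1]

def classify_nodes(skel):
    """End points: exactly one neighbour; branch points: three or more."""
    nbrs = neighbour_counts(skel)
    end_points = (skel == 1) & (nbrs == 1)
    branch_points = (skel == 1) & (nbrs >= 3)
    return end_points, branch_points

def count_segments(skel):
    """Segments between nodes = connected pieces after branch points are removed."""
    _, branch_points = classify_nodes(skel)
    remaining = (skel == 1) & ~branch_points
    seen = np.zeros(skel.shape, dtype=bool)
    H, W = remaining.shape
    n = 0
    for r in range(H):
        for c in range(W):
            if remaining[r, c] and not seen[r, c]:
                n += 1                      # new segment found
                stack = [(r, c)]
                seen[r, c] = True
                while stack:                # 8-connected flood fill
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < H and 0 <= nx < W
                                    and remaining[ny, nx] and not seen[ny, nx]):
                                seen[ny, nx] = True
                                stack.append((ny, nx))
    return n
```

On a small Y-shaped test skeleton this yields three end points, one branch point, and three segments, matching the "between two nodes" counting rule described above.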
Supplementary Figure S4 illustrates the difference between the method of Williams et al.43 and our new proposal for detecting trunk and branch nerves, highlighting the resulting difference in total nerve count. Our work also differs in the loss function used: Williams et al.43 used the Dice similarity coefficient as the loss function, whereas we used clDice,55 a state-of-the-art topology-preserving loss function that preserves connectivity among the segmented nerves. Another recent deep learning–based CNF segmentation method based on U-Net was proposed by Wei et al.44 Their method achieved 96% sensitivity and 75% specificity for segmenting CNFs from IVCM images. In contrast, our U-Net–based model achieved 86.1% sensitivity and 90.1% specificity for segmenting CNFs and, in addition, provides validated automatic morphometric evaluation parameters such as the number of nerves, nerve density, nerve length, branching points, and tortuosity. Furthermore, our proposed method performs better than ACCMetrics on low-quality images. In particular, the average CNFL per segment obtained by our method is closer to that obtained by manual annotation than the values obtained by ACCMetrics. Comparing both methods with respect to the image quality of the dataset used, we found that ACCMetrics places a higher requirement on image quality when calculating CNFL, as out-of-focus, faint, or thin nerves cannot be detected properly. This is in agreement with previous studies using ACCMetrics, in which only images of high optical quality with regard to brightness, contrast, and sharpness were selected for analysis.69,70 In this context, Williams et al.43 proposed further research on interrupted CNF segments. Because our deep learning model was trained on low-quality images, it appears able to segment more nerves in this category.
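To make the contrast with the Dice loss concrete, here is a minimal NumPy sketch of the clDice measure55 in its hard, evaluation-time form. It takes binary masks and their skeletons as input; during training the clDice authors use a differentiable soft skeletonization and minimize 1 - clDice. The function name and signature are illustrative, not taken from the clDice reference implementation:

```python
import numpy as np

def cl_dice(v_pred, v_true, s_pred, s_true, eps=1e-8):
    """Hard clDice for binary masks (v_*) and their skeletons (s_*).

    tprec: fraction of the predicted skeleton lying inside the
           ground-truth mask (topology precision).
    tsens: fraction of the ground-truth skeleton covered by the
           predicted mask (topology sensitivity).
    """
    v_pred, v_true = v_pred.astype(bool), v_true.astype(bool)
    s_pred, s_true = s_pred.astype(bool), s_true.astype(bool)
    tprec = (s_pred & v_true).sum() / (s_pred.sum() + eps)
    tsens = (s_true & v_pred).sum() / (s_true.sum() + eps)
    return 2 * tprec * tsens / (tprec + tsens + eps)
```

Because both terms are computed on skeletons, a prediction that covers most of a nerve's area but breaks it into disconnected pieces scores poorly, which is how clDice encourages the connectivity preservation discussed above.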