May 2023, Volume 12, Issue 5
Open Access | Glaucoma
RGC-Net: An Automatic Reconstruction and Quantification Algorithm for Retinal Ganglion Cells Based on Deep Learning
Author Affiliations & Notes
  • Rui Ma
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
  • Lili Hao
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
    Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, China
  • Yudong Tao
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
  • Ximena Mendoza
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Mohamed Khodeiry
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Yuan Liu
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Mei-Ling Shyu
    School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA
  • Richard K. Lee
    Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
    Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
  • Correspondence: Richard K. Lee, Department of Electrical and Computer Engineering, University of Miami, 1252 Memorial Drive, Coral Gables, FL 33146, USA. e-mail: rlee@med.miami.edu
Translational Vision Science & Technology May 2023, Vol.12, 7. doi:https://doi.org/10.1167/tvst.12.5.7
Abstract

Purpose: The purpose of this study was to develop a deep learning-based, fully automated reconstruction and quantification algorithm that delineates the neurites and somas of retinal ganglion cells (RGCs).

Methods: We trained a deep learning-based multi-task image segmentation model, RGC-Net, that automatically segments the neurites and somas in RGC images. A total of 166 RGC scans with manual annotations from human experts were used to develop this model: 132 scans were used for training, and the remaining 34 scans were reserved as testing data. Post-processing techniques removed speckles or dead cells from the soma segmentation results to further improve the robustness of the model. Quantification analyses were also conducted to compare five different metrics obtained by our automated algorithm and by manual annotation.

Results: Quantitatively, our segmentation model achieves average foreground accuracy, background accuracy, overall accuracy, and dice similarity coefficient of 0.692, 0.999, 0.997, and 0.691 for the neurite segmentation task, and 0.865, 0.999, 0.997, and 0.850 for the soma segmentation task, respectively.

Conclusions: The experimental results demonstrate that RGC-Net can accurately and reliably reconstruct neurites and somas in RGC images. We also demonstrate that, in quantification analyses, our algorithm is comparable to manually curated human annotations.

Translational Relevance: Our deep learning model provides a new tool that traces and analyzes RGC neurites and somas efficiently, and much faster than manual analysis.

Introduction
Retinal ganglion cells (RGCs) are the connection neurons that transmit highly processed visual information from the retina to the brain. The irreversible damage and apoptosis of RGCs is a central common end point for many optic neuropathies.1,2 Many different subtypes of RGCs exist, each with unique morphological features, characteristic molecular markers, psychophysical and electrophysiological properties, and specific central axonal and dendritic projection patterns.3 The morphological classification of RGCs is important for understanding the pathobiology of many eye diseases, because certain RGC subtypes are presumed to be specifically targeted for cell death in different ocular disorders, such as glaucoma.4–6 Similarly, different neuronal subtypes may be specifically and critically affected in neurologic diseases. Systematic analysis of the mechanisms that modulate type-specific responses to injury in in vitro and in vivo experiments may lead to targeted therapies for these progressive, sight-threatening and/or blinding diseases. Studies of the morphological properties of RGCs often involve digital reconstruction and analysis of the soma and neurites, which are key aspects of RGC subtype phenotype and function. 
Digital reconstruction of neuron morphology is widely viewed as one of the most challenging tasks in computational neuroscience due to the complexity of neuronal morphology and the prominent noise in neuronal imaging. Fully automatic digital reconstruction tools are needed, as most manual or semi-automatic measurement methods used to date are time-consuming, labor-intensive, and error prone.7,8 Most conventional neuronal reconstruction methods were built on manually crafted feature-extraction rules and are incapable of reconstructing neurons with highly complicated morphology.9 In contrast, deep learning-based methods have demonstrated superior capability for reconstructing complex neurons and greater robustness to background noise. However, the majority of these methods were developed for brain neurons and do not generalize well to RGC images.10–14 
To address these issues, we developed a deep learning-based, fully automated neuronal reconstruction system, RGC-Net, to delineate the neurites and somas reliably and accurately in RGC images. The proposed system consists of a multi-task soma and neurite segmentation module, a post-processing module, and an automatic pipeline for quantification analyses. Specifically, given an input RGC scan, the soma and neurite segmentation module uses a multi-task learning model to segment all the somas and neurites in the scan. Post-processing techniques are then applied to remove speckles or dead cells from the soma segmentation results. Finally, an automatic pipeline for neurite and soma quantification measures the total length of the neurites, the neuritic field area, and the area, length, and width of each soma. All these steps are performed in a fully automated manner without the need for any manual intervention. RGC-Net significantly decreases RGC analysis time, manual segmentation effort, and neuronal tree mapping inaccuracies, and will help accelerate RGC research. 
Methods
Overview
RGC Culture and Immunofluorescence
Mouse RGCs were isolated and purified using immunopanning.15 Briefly, C57 mouse retinas were dissected and digested in papain solution (5 mg/mL; Worthington Biochemical Corporation, LS003126) containing 1.65 mM L-cysteine (Sigma-Aldrich, c7352) and 125 U/mL deoxyribonuclease I (DNase I; Sigma-Aldrich, 69182) at 37°C for 30 minutes. Digestion was halted by adding ovomucoid. The cell suspension was centrifuged at 300 g for 5 minutes, then resuspended in panning buffer (DPBS, 0.02% BSA, and 5 µg/mL insulin) and incubated on a panning plate coated with goat anti-rabbit IgG (Jackson ImmunoResearch; 115-005-044) to remove macrophages. Next, RGCs were purified using a panning plate coated with mouse anti-mouse Thy1.2 (CD90) IgM (Serotec MCA02R). After washing with DPBS, bound RGCs were released by trypsinization and plated on glass coverslips coated with 10 µg/mL of poly-D-lysine and 2 µg/mL of laminin in 24-well plates. RGC culture medium was composed of half neurobasal medium (Gibco; 21103-049) and half DMEM medium (Gibco; 11960-044) supplemented with 5 µg/mL insulin, 1 mM sodium pyruvate (Gibco, 11360-070), 1 × penicillin-streptomycin (Gibco; 15140-122), 1 × SATO supplement (which includes 100 µg/mL BSA, 100 µg/mL transferrin, 16 µg/mL putrescine, 60 ng/mL progesterone, and 40 ng/mL sodium selenite), 40 ng/mL thyroxine (T3), 2 mM L-glutamine, 1 × NS21 (R&D Systems; AR008), 5 µg/mL N-acetylcysteine, 5 µM forskolin, 50 ng/mL brain-derived neurotrophic factor (PeproTech, 450-02), and 10 ng/mL ciliary neurotrophic factor (PeproTech, 450-13). One-half of the culture medium was changed every other day. 
After 7 days, RGCs on cell culture chamber slides were fixed with 4% PFA (4% paraformaldehyde in PBS buffer) for 30 minutes at 4°C. After rinsing in PBS, slides were incubated in rodent blocker M (Biocare Medical, Concord, CA) for 1 hour at room temperature to minimize nonspecific antibody binding. Primary antibodies were diluted in 0.5% Triton X-100/PBS and incubated overnight at 4°C. Primary antibodies included anti-TuJ1 (Abcam, ab18207, 1/400). After rinsing in PBS, slides were incubated with the CY3 donkey anti-rabbit secondary antibody (Jackson ImmunoResearch, 711-165-152, 1/400) for 1 hour at room temperature and mounted using Antifade Mounting Medium with DAPI (Vector Laboratories H-1200). RGCs were imaged by confocal microscopy (Leica TCS SP5 Confocal Microscope; Leica Microsystems, Buffalo Grove, IL). 
RGC Manual Tracing
RGC somas and neurites were traced using NeuronJ following the developers' instructions.7 RGC neurites labeled with the fluorescent marker anti-TuJ1 were traced. Tracing was initiated by moving the mouse to the beginning of an RGC neurite and clicking the left mouse button. The neurite was then traced by clicking along the correct path until the dendrite end was reached, which was indicated by double-clicking the mouse button. After tracing was completed, a text file containing neurite length measurement data was generated for each traced RGC, and a snapshot of the tracings overlaid on the RGC was saved as a TIFF file. RGC soma position was confirmed by positive immunostaining for Brn3a (an RGC marker) and DAPI. Somas were traced by clicking each corner or bend to draw the outline. 
The widths of the neurites are automatically adjusted by NeuronJ. Tracing of a neurite is initiated by moving the mouse to the beginning of a neurite of interest and clicking the mouse button. The tracing algorithm then computes and shows the "optimal" path from the current mouse position in the image to the clicked point. To facilitate accurate positioning of the starting points, so that they lie exactly on a neurite rather than merely close to one, the program carries out a local snapping operation: as the mouse moves within the image, the program quickly searches a small window around the current mouse position for the pixel that is most likely to lie along a neurite. The size of the search window, and thus the snapping range, is set in the Parameters dialog. 
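The snapping operation described above can be sketched in a few lines of Python. This is an illustrative approximation rather than NeuronJ's actual implementation: NeuronJ scores pixels by local image structure, whereas this sketch simply uses raw intensity as the "likely on a neurite" measure, and the function name `snap_to_neurite` is our own.

```python
import numpy as np

def snap_to_neurite(image, x, y, window=5):
    """Snap a click at (x, y) to the best pixel in a (2*window+1)^2 neighborhood.

    Brightness stands in for NeuronJ's 'likely on a neurite' measure, which is
    actually derived from local image structure, not raw intensity.
    """
    h, w = image.shape
    y0, y1 = max(0, y - window), min(h, y + window + 1)
    x0, x1 = max(0, x - window), min(w, x + window + 1)
    patch = image[y0:y1, x0:x1]
    # Brightest pixel in the search window is taken as the neurite location.
    dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
    return x0 + dx, y0 + dy
```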
In total, we used 166 RGC images, each with manually traced somas and neurites. We randomly selected 80% of the data (132 RGC images) to train RGC-Net, and the remaining 34 images were used as testing data to evaluate the algorithm. All images were resized to 1024 × 1024 pixels, the input size of the model used in the experiments. For each image, we applied min-max normalization so that its pixel values fall in the range of 0 to 1. Manual annotations were generated on the resized images by human experts. Moreover, we assessed intergrader differences by having multiple human annotators trace the neurites and somas of the same set of RGC images and found no significant variation between graders; intergrader variability therefore does not affect the quality of the manual annotations. For more information about intergrader variability, please refer to Supplementary Figures S1 and S2. 
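The resizing, normalization, and random 80/20 split described above can be sketched as follows. The helper names are ours, and the nearest-neighbor resize stands in for whatever interpolation the authors actually used:

```python
import numpy as np

def preprocess(image, size=1024):
    """Resize to size x size (nearest-neighbor) and min-max normalize to [0, 1]."""
    h, w = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[np.ix_(rows, cols)].astype(np.float64)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo) if hi > lo else np.zeros_like(resized)

def train_test_split(n_images, train_frac=0.8, seed=0):
    """Random 80/20 split of image indices (166 images -> 132 train / 34 test)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    n_train = int(n_images * train_frac)
    return idx[:n_train], idx[n_train:]
```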
Neurite and Soma Segmentation
Given a raw RGC image \(X \in {\mathbb{R}}^{H \times W}\), where H is the height and W is the width of the image, the goal of the neurite and soma segmentation task is to assign each pixel x ∈ X a class label y in the label space L = {0, 1, 2}, in which 0, 1, and 2 stand for the background, neurites, and somas, respectively. A deep learning-based segmentation network, RGC-Net, was developed to perform this task, which can be represented as:  
\begin{eqnarray*}Y = f(w,X)\end{eqnarray*}
 
The model takes X as input and generates an output mask \(Y \in {\mathbb{R}}^{H \times W}\); f stands for the transformation performed by the network, and w denotes its set of parameters to be learned. The architecture of RGC-Net is presented in Figure 1; it consists of an encoder block and two parallel decoder blocks. First, a U-Net encoder with a ResNet-101 backbone16 is used to extract high-level feature maps Xe from X. Next, the extracted feature maps Xe are fed into two separate decoder networks to obtain binary segmentation results for neurites and somas, represented as Yd and Ys, respectively. The whole process can be represented as:  
\begin{eqnarray*} {X}_e &\,=& {f}_e(X)\\ {Y}_d &\,=& {f}_d({X}_e)\\ {Y}_s &\,=& {f}_s({X}_e) \end{eqnarray*}
where fe, fd, and fs stand for the transformations inside the encoder, the decoder for neurite segmentation, and the decoder for soma segmentation, respectively. The resolution of X is reduced by a factor of two five times through downsampling operations in fe and restored to its original size via upsampling operations in fd and fs, so that Yd and Ys preserve the same resolution as X. Skip connections are also added between the encoder and the two decoders to alleviate the vanishing gradient problem. 
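A toy version of this shared-encoder, two-decoder layout can be written in PyTorch as below. It is a minimal sketch with one downsampling stage and no skip connections; the real RGC-Net uses a U-Net encoder with a ResNet-101 backbone, five downsampling stages, and skip connections into both decoders.

```python
import torch
import torch.nn as nn

class TinyRGCNet(nn.Module):
    """Toy shared-encoder / two-decoder network illustrating the RGC-Net layout."""

    def __init__(self):
        super().__init__()
        # f_e: shared encoder (one conv + one 2x downsampling stage)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        def decoder():
            # Restore resolution and emit a single-channel probability map.
            return nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
            )
        self.neurite_decoder = decoder()  # f_d
        self.soma_decoder = decoder()     # f_s

    def forward(self, x):
        xe = self.encoder(x)              # X_e = f_e(X)
        return self.neurite_decoder(xe), self.soma_decoder(xe)
```

Both decoder outputs preserve the input resolution, mirroring how Yd and Ys keep the same H × W as X.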
Figure 1.
 
Overall architecture of RGC-Net.
During training, a variety of data augmentation techniques, including horizontal and vertical flipping, random rotation, cropping, scaling, adding random noise, and brightness adjustment, have been applied to enhance the robustness of the model. The network is trained using a multi-task learning loss function, Lmulti, that consists of a pixel-wise binary cross entropy loss LBCE for neurite segmentation and a dice loss LDICE for soma segmentation, which is written as:  
\begin{eqnarray*}{L}_{BCE} = \sum\limits_{j = 1}^W {\sum\limits_{i = 1}^H {\left( { - {y}_{ij}\log \,{{\hat{y}}}_{ij} - \left( {1 - {y}_{ij}} \right)\log \left( {1 - {{\hat{y}}}_{ij}} \right)} \right)} } \end{eqnarray*}
 
\begin{eqnarray*}{L}_{DICE} = \frac{{2\sum\nolimits_{j = 1}^W {\sum\nolimits_{i = 1}^H {{y}_{ij}{{\hat{y}}}_{ij}} } }}{{\sum\nolimits_{j = 1}^W {\sum\nolimits_{i = 1}^H {{y}_{ij} + \sum\nolimits_{j = 1}^W {\sum\nolimits_{i = 1}^H {{{\hat{y}}}_{ij}} } } } }}\end{eqnarray*}
 
\begin{eqnarray*}{L}_{Multi} = {L}_{BCE} + {L}_{DICE}\end{eqnarray*}
where yij and \({\hat{y}}_{ij}\) are the pixels in the i-th row (1 ≤ i ≤ H) and j-th column (1 ≤ j ≤ W) of the ground truth and predicted masks, Y and \(\hat{Y}\), respectively. The two loss functions are chosen based on the morphologies of the subjects and the objectives of the segmentation tasks. We chose pixel-wise BCE loss for neurite segmentation because of the thin, branching structure of neurites, for which we want to minimize the numbers of false-positive and false-negative pixels. For soma segmentation, our goal is to preserve the shapes of the somas in the segmentation results so that they do not deviate significantly from the ground truth. In this case, dice loss, which penalizes shape deviation, is the better choice. The network was trained with a fixed learning rate of 1 × 10−4 using the Adam optimizer for 10,000 epochs on an NVIDIA Tesla V100 GPU. 
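The multi-task loss can be sketched directly from the equations above. Note that the L_DICE formula as printed is the Dice coefficient itself; a loss to be minimized is conventionally 1 − Dice, which is what this sketch implements (a small `eps` guards against log(0) and division by zero; the helper names are ours):

```python
import numpy as np

def bce_loss(y, y_hat, eps=1e-7):
    """Pixel-wise binary cross entropy over the mask (neurite branch, L_BCE)."""
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return float(np.sum(-y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)))

def dice_loss(y, y_hat, eps=1e-7):
    """1 - Dice coefficient (soma branch, L_DICE); penalizes shape deviation."""
    return 1.0 - 2.0 * np.sum(y * y_hat) / (np.sum(y) + np.sum(y_hat) + eps)

def multi_task_loss(y_neurite, p_neurite, y_soma, p_soma):
    """L_multi = L_BCE (neurite decoder) + L_DICE (soma decoder)."""
    return bce_loss(y_neurite, p_neurite) + dice_loss(y_soma, p_soma)
```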
Post-Processing
After obtaining the segmentation results from RGC-Net, a set of post-processing operations is applied to further enhance robustness. Specifically, in the soma segmentation result, non-connected objects are detected using the "connectedComponentsWithStats" function in the OpenCV library.17 A detected object is then considered a dead cell if no neurite is connected to it, or a speckle if its area is below 10 pixels. The threshold of 10 was chosen because the smallest soma in our training dataset had an area of 14 pixels, so any object smaller than 10 pixels can safely be classified as a speckle. All dead cells and speckles are then removed from the soma segmentation results, and the remaining objects are considered somas. The effects of these post-processing techniques are visualized in Figure 2. 
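The pruning step can be sketched as below. The paper uses OpenCV's `connectedComponentsWithStats`; this sketch substitutes `scipy.ndimage.label` as an equivalent connected-component routine, and treats "connected to a neurite" as having a neurite pixel within one pixel of the soma component (an assumption on our part).

```python
import numpy as np
from scipy import ndimage

def prune_somas(soma_mask, neurite_mask, min_area=10):
    """Remove speckles (area < min_area) and dead cells (no attached neurite)."""
    labels, n = ndimage.label(soma_mask)
    # Dilate neurites by one pixel so direct adjacency counts as a connection.
    neurite_near = ndimage.binary_dilation(neurite_mask.astype(bool))
    keep = np.zeros(soma_mask.shape, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() < min_area:            # speckle: too small to be a soma
            continue
        if not (comp & neurite_near).any():  # dead cell: no neurite attached
            continue
        keep |= comp
    return keep.astype(soma_mask.dtype)
```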
Figure 2.
 
Effects of post-processing.
Results
Performance Metrics
Four different metrics, including foreground accuracy (ForegroundACC), background accuracy (BackgroundACC), overall accuracy (OverallACC), and dice similarity coefficient (DSC), are used to evaluate the performance of the neurite and soma segmentation tasks. The equations of these metrics are written as:  
\begin{eqnarray*} Foreground\,ACC &\,=& \frac{{TP}}{{TP + FN}}\\ Background\,ACC &\,=& \frac{{TN}}{{TN + FP}}\\ Overall\,ACC &\,=& \frac{{TP + TN}}{{TP + TN + FP + FN}}\\ DSC &\,=& \frac{{2TP}}{{2TP + FP + FN}} \end{eqnarray*}
where TP, TN, FP, and FN are the numbers of true-positive pixels, true-negative pixels, false-positive pixels, and false-negative pixels, respectively, in a resulting segmentation mask. Among these metrics, DSC can be seen as a harmonic mean between foreground accuracy and background accuracy, thus being more powerful than the other metrics in segmentation tasks. We have also considered using probabilistic measures to evaluate our models. However, the probabilistic outputs from the RGC-Net must be binarized during post-processing and quantification analysis. Because the main purpose of the evaluation step is to determine the best model for the quantification analysis, using probabilistic metrics for model selection may lead to the selection of an undesired model (i.e. a model with higher probabilistic scores but lower DSC). 
Results From Neurite Segmentation
Figure 3 shows multiple input RGC scans (left column) from our testing data, neurite segmentation results from our model (middle left, colored in red), manual tracing results for neurites (middle right, colored in green), and the overlap between the results from our model and manual tracing (right). Because the overlap of red and green is yellow, yellow in the rightmost image indicates correct overlapping segmentation, whereas red and green indicate false positives and false negatives, respectively. Despite the heavy speckle noise in the first image or the complicated neurite structure in the fourth image, our neurite segmentation algorithm performs very well, as seen from the dominant yellow color in all images. 
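The red/green/yellow overlay convention described above can be reproduced by writing the model mask into the red channel and the manual mask into the green channel, so agreement renders yellow (the function name is ours):

```python
import numpy as np

def overlay(pred, truth):
    """RGB overlay: red = model only (FP), green = manual only (FN),
    yellow (red + green) = agreement (TP)."""
    h, w = pred.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = np.where(pred > 0, 255, 0)   # red channel: model result
    rgb[..., 1] = np.where(truth > 0, 255, 0)  # green channel: manual tracing
    return rgb
```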
Figure 3.
 
Neurite segmentation results from RGC-Net.
Table 1 compares the average metrics for the neurite segmentation task on our testing dataset obtained from four different approaches: U-Net,18 Panoptic Feature Pyramid Network (FPN),19 our RGC-Net without multi-task learning, and our proposed RGC-Net. For the first three methods, where no multi-task learning is used, two independent models were trained for neurite and soma segmentation, respectively. "RGC-Net w/o multi-task learning" has the same encoder as our proposed RGC-Net, and its decoder shares the same architecture as the neurite or soma decoder of RGC-Net. 
Table 1.
 
Comparison Between Our RGC-Net and the Other Approaches on Neurite Segmentation Task
Among all four approaches, our RGC-Net achieves the best foreground accuracy, background accuracy, overall accuracy, and dice similarity coefficient. Notably, "RGC-Net w/o multi-task learning" performs slightly worse than U-Net or Panoptic FPN. Upon investigation, we found that it identifies speckles as neurites more frequently. One possible reason is that it is much deeper than the other two baseline methods and thus more prone to overfitting during training. Nevertheless, with the incorporation of our proposed multi-task learning approach, the neurite segmentation network is able to distinguish neurites from speckles more accurately. 
Results From Soma Segmentation
Figure 4 presents the input RGC scans (left column) from our testing data, soma segmentation results from our model before post-processing (middle left, circled in red), soma segmentation results after post-processing (middle right, circled in red), and the manual tracing results for somas (right, circled in red). Once again, our soma segmentation results closely approximated the manual tracing results in the majority of cases. Moreover, the effect of post-processing can be clearly observed from the third image, where two dead cells without neurite connections were pruned correctly. 
Figure 4.
 
Soma segmentation results from RGC-Net.
Table 2 compares the average metrics for the soma segmentation task obtained from five different approaches, among which our RGC-Net achieved the highest metrics. Even though the accuracy metrics improve only marginally after post-processing, the DSC increases by 0.02, validating the effectiveness of the post-processing step. 
Table 2.
 
Comparison Between Our RGC-Net and the Other Approaches on Soma Segmentation Task
Quantification Analyses
We conducted a series of quantitative analyses for neurites and somas: the total length of the neurites and the neuritic field area for neurite quantification, and the area, length, and width of each soma for soma quantification. Specifically, the total neurite length is calculated as the number of pixels classified as neurites in the neurite segmentation result, whereas the neuritic field area is calculated as the area of a convex hull that contains all the neurites. For soma quantification, we identify each individual soma from the post-processed soma segmentation mask using the "connectedComponentsWithStats" function in OpenCV. Then, for each soma, its area is calculated as its number of pixels, and its length and width are calculated as the lengths of the longest and shortest lines that pass through the center of its convex hull. 
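The neurite measures can be sketched as below; scipy's `ConvexHull` stands in for whatever hull routine the authors used (in 2-D, the hull's `.volume` attribute is its area). The soma length/width computation through the hull center is more involved and is omitted here; only the soma pixel-count area is shown.

```python
import numpy as np
from scipy.spatial import ConvexHull

def neurite_metrics(neurite_mask):
    """Total neurite length (foreground pixel count) and neuritic field area
    (area of the convex hull enclosing all neurite pixels)."""
    ys, xs = np.nonzero(neurite_mask)
    total_length = len(xs)
    pts = np.column_stack([xs, ys])
    field_area = ConvexHull(pts).volume  # in 2-D, .volume is the hull's area
    return total_length, field_area

def soma_area(component_mask):
    """Soma area as the number of pixels in one connected component."""
    return int(np.count_nonzero(component_mask))
```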
Figure 5 shows four scatter plots comparing the above five metrics obtained from RGC-Net (“Predicted Value”) and manual reconstruction results (“Actual Value”), each labeled with an R-squared value obtained by fitting a linear model using these data. An average R-squared value of 0.904 indicates that the quantification metrics from the model are highly similar to those obtained through manual reconstruction, thus validating the effectiveness of the quantification analyses. 
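The per-metric R-squared values can be reproduced by fitting a line to the (actual, predicted) pairs; a minimal sketch (the function and variable names are ours):

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination from fitting predicted ~ actual linearly."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    slope, intercept = np.polyfit(a, p, 1)   # least-squares linear fit
    fit = slope * a + intercept
    ss_res = np.sum((p - fit) ** 2)          # residual sum of squares
    ss_tot = np.sum((p - p.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot
```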
Figure 5.
 
Comparison among total neurite length, neuritic field area, soma area, soma length, and width obtained from RGC-Net (Predicted) and manual (Actual) reconstruction results.
Discussion
We demonstrate a fully automatic quantification algorithm, RGC-Net, for RGC images of mouse retina dendrites and somas based upon deep learning. RGC-Net demonstrates superior performance for the segmentation of both neurites and somas, whereas our post-processing algorithms effectively and accurately prune speckles or dead cells from the soma segmentation results. Our automated RGC quantification algorithm will contribute to ophthalmic research in animal models of eye disease. Manual analysis and tracing of RGCs is a time- and effort-consuming process. The RGC-Net algorithm can accurately quantify and trace the somas and neurites of RGCs in an automated fashion. More importantly, the mouse retina is a leading model for analyzing the development, structure, function, and pathology of neural circuits.20 This artificial intelligence (AI)-driven morphological characterization of RGCs will accelerate neuronal research not only in the retina but also in the brain for identifying other neuronal cell types. 
As an extension of the central nervous system (CNS), the retina is a commonly used model to study neurodegenerative diseases. Analyzing morphological changes of RGCs in vitro is one of the key methods for assessing the responses of RGCs to various stimuli related to neurodegeneration or neuroprotection. We extracted RGCs from mature mice, which is important for further delineating the function, aging, and disease pathogenesis of RGCs and other CNS neurons. Automated quantitative analysis of RGC somas and neurites is complicated by the diversity of RGC subtypes and the complexity of their morphology. So far, at least 40 different types of ganglion cells have been identified in the mouse retina based on morphological, molecular, and functional classification.5 
This work has significant translational significance given the recent molecular and cellular single-cell characterization of RGCs and their many subtypes. Many neuronal diseases are probably due to loss or dysfunction of specific neuronal types, and it is critically important to identify these neuronal subtypes for understanding the pathophysiology of CNS disease and for targeting treatments. Recent research has identified molecular and functional subtypes of RGCs.3–5 Different cellular and molecular subtypes of RGCs may have differential survival attributes depending on their cellular environment.4 Glaucoma is believed to be due to preferential loss of an RGC subtype. Using our algorithm to identify RGC structure for cellular classification, the RGC subtypes that are preferentially lost in glaucoma may be identified. This could allow for translational targeting of these RGCs for survival to treat glaucoma. Similarly, this algorithm can be used to identify different neuronal subtypes based upon morphology, including those preferentially affected by specific neurological diseases. 
Recently, with the rapid development of deep learning techniques, deep learning-based medical image segmentation algorithms have demonstrated superior performance over traditional manual approaches.21–25 Among them, end-to-end models, such as U-Net18 and its variants,26–32 are most widely used due to their flexibility and relatively high performance on medical image segmentation tasks. Meanwhile, U-Net-based multi-task learning models have been developed for tasks that involve multiple objects to be segmented.33–37 A number of deep learning algorithms have also been proposed for quantitative analyses of RGC images. For example, Ritch et al.38 developed AxoNet to count the numbers of RGC neurites in optic nerve tissue images, but their algorithm is limited to counting. Masin et al.39 proposed RGCode to automatically calculate the total RGC count, retinal area, and density, as well as isodensity maps, but their method cannot reconstruct neurite or soma morphology. Deng et al.40 developed AxonDeep to automatically segment and quantify neurite morphology, but their method cannot quantify somas. It is worth noting that the RGC image sets used in these studies differed in cell cultures and imaging techniques. RGC-Net, in contrast, performs automatic segmentation and quantification of both somas and neurites simultaneously with high accuracy because of its use of multi-task learning. 
Our study is not without limitations. For example, in the fourth row ("complex structure") of Figure 4, the soma segmentation results from the model and from manual tracing differ slightly. The reason is that our soma segmentation algorithm identifies a broader area where the soma and neurites overlap, which deviates from the human annotator, who considered this area as neurites. The model's result in this case should be considered an alternative reconstruction of the soma rather than an inaccurate one. Another limitation is that we applied RGC-Net only to murine RGC images. However, if fine-tuned on other types of neuron images, RGC-Net could be adapted to other neuron morphologies and accelerate neurobiological research. 
Acknowledgments
The Bascom Palmer Eye Institute is supported by National Institutes of Health (NIH) Center Core Grant P30EY014801 and a Research to Prevent Blindness Unrestricted Grant. L. Hao is supported by the National Natural Science Foundation of China, No. 82101116. R.K. Lee is supported by the Walter G. Ross Foundation. 
Disclosure: R. Ma, None; L. Hao, None; Y. Tao, None; X. Mendoza, None; M. Khodeiry, None; Y. Liu, None; M.-L. Shyu, None; R.K. Lee, None 
References
Kalesnykas G, Oglesby EN, Zack DJ, et al. Retinal ganglion cell morphology after optic nerve crush and experimental glaucoma. Investig Ophthalmol Vis Sci. 2012; 53(7): 3847–3857. [CrossRef]
You Y, Gupta VK, Li JC, Klistorner A, Graham SL. Optic neuropathies: Characteristic features and mechanisms of retinal ganglion cell loss. Rev Neurosci. 2013; 24(3): 301–321. [CrossRef] [PubMed]
Sanes JR, Masland RH. The types of retinal ganglion cells: Current status and implications for neuronal classification. Annu Rev Neurosci. 2015; 38: 221–246. [CrossRef] [PubMed]
Tran NM, Shekhar K, Whitney IE, et al. Single-cell profiles of retinal ganglion cells differing in resilience to injury reveal neuroprotective genes. Neuron. 2019; 104(6): 1039–1055.e12. [CrossRef] [PubMed]
Laboissonniere LA, Goetz JJ, Martin GM, et al. Molecular signatures of retinal ganglion cells revealed through single cell profiling. Sci Rep. 2019; 9(1): 15778. [CrossRef] [PubMed]
Liu X, Liu Y, Jin H, et al. Reactive fibroblasts in response to optic nerve crush injury. Mol Neurobiol. 2021; 58(4): 1392–1403. [CrossRef] [PubMed]
Meijering E, Jacob M, Sarria JCF, Steiner P, Hirling H, Unser M. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytom Part A. 2004; 58(2): 167–176.
Longair MH, Baker DA, Armstrong JD. Simple neurite tracer: Open source software for reconstruction, visualization and analysis of neuronal processes. Bioinformatics. 2011; 27(17): 2453–2454. [CrossRef] [PubMed]
Donohue DE, Ascoli GA. Automated reconstruction of neuronal morphology: An overview. Brain Res Rev. 2011; 67(1-2): 94–102. [CrossRef] [PubMed]
Li Q, Zhang Y, Liang H, et al. Deep learning based neuronal soma detection and counting for Alzheimer's disease analysis. Comput Methods Programs Biomed. 2021; 203: 106023. [CrossRef] [PubMed]
Wang H, Zhang D, Song Y, et al. Multiscale kernels for enhanced u-shaped network to improve 3D neuron tracing. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Vol. 2019-June; 2019: 1105–1113. Available at: https://ieeexplore.ieee.org/abstract/document/9025487.
Huang Q, Chen Y, Liu S, et al. Weakly supervised learning of 3D deep network for neuron reconstruction. Front Neuroanat. 2020; 14: 38. [CrossRef] [PubMed]
Li Q, Shen L. 3D neuron reconstruction in tangled neuronal image with deep networks. IEEE Trans Med Imaging. 2020; 39(2): 425–435. [CrossRef] [PubMed]
Zhou Z, Kuo HC, Peng H, Long F. DeepNeuron: An open deep learning toolbox for neuron tracing. Brain Informatics. 2018; 5(2): 1–9. [CrossRef]
Winzeler A, Wang JT. Purification and culture of retinal ganglion cells from rodents. Cold Spring Harb Protoc. 2013; 8(7): 643–652.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016: 770–778. Available at: https://ieeexplore.ieee.org/document/7780459.
Bradski G. The OpenCV Library. Dr Dobb's J Softw Tools. Published online 2000. Available at: https://opencv.org.
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 9351. New York, NY: Springer Verlag; 2015: 234–241.
Kirillov A, Girshick R, He K, Dollár P. Panoptic feature pyramid networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019: 6399–6408.
Yan W, Laboulaye MA, Tran NM, Whitney IE, Benhar I, Sanes JR. Mouse retinal cell atlas: Molecular identification of over sixty amacrine cell types. J Neurosci. 2020; 40(27): 5177–5195. [CrossRef] [PubMed]
Guo Y, Hormel TT, Xiong H, Wang J, Hwang TS, Jia Y. Automated segmentation of retinal fluid volumes from structural and angiographic optical coherence tomography using deep learning. Transl Vis Sci Technol. 2020; 9(2): 1–12. [CrossRef]
Ma R, Liu Y, Tao Y, Alawa KA, Shyu ML, Lee RK. Deep learning–based retinal nerve fiber layer thickness measurement of murine eyes. Transl Vis Sci Technol. 2021; 10(8): 21. [CrossRef] [PubMed]
Ma R, Hao L, Tao Y, et al. Synthetic retinal ganglion cell image generation for deep-learning-based neuronal tracing. Invest Ophthalmol Vis Sci. 2021. ARVO Annual Meeting Abstract, June 2021. Accessed November 7, 2021. Available at: https://iovs.arvojournals.org/article.aspx?articleid=2773719.
Prentašic P, Heisler M, Mammo Z, et al. Segmentation of the foveal microvasculature using deep learning networks. J Biomed Opt. 2016; 21(7): 075008. [CrossRef]
Ma Y, Hao H, Xie J, et al. ROSE: A retinal OCT-angiography vessel segmentation dataset and new model. IEEE Trans Med Imaging. 2021; 40(3): 928–939. [CrossRef] [PubMed]
Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 9901 LNCS; 2016: 424–432. Available at: https://arxiv.org/abs/1606.06650.
Oktay O, Schlemper J, Folgoc L, et al. Attention U-net: Learning where to look for the pancreas. arXiv Preprint. Published online 2018. Accessed November 8, 2021. Available at: http://arxiv.org/abs/1804.03999.
Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging. 2020; 39(6): 1856–1867. [CrossRef] [PubMed]
Milletari F, Navab N, Ahmadi SA. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). Stanford, CA: IEEE; 2016: 565–571.
Diakogiannis FI, Waldner F, Caccetta P, Wu C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J Photogramm Remote Sens. 2020; 162: 94–114. [CrossRef]
Jin Q, Meng Z, Sun C, Cui H, Su R. RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans. Front Bioeng Biotechnol. 2020; 8: 605132. [CrossRef] [PubMed]
Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021; 18(2): 203–211. [CrossRef] [PubMed]
Hu T, Xu X, Chen S, Liu Q. Accurate neuronal soma segmentation using 3D multi-task learning U-shaped fully convolutional neural networks. Front Neuroanat. 2021; 14: 102. [CrossRef]
Andrearczyk V, Fontaine P, Oreiller V, et al. Multi-task deep segmentation and radiomics for automatic prognosis in head and neck cancer. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 12928 LNCS. Springer Science and Business Media Deutschland GmbH; 2021: 147–156. Available at: https://link.springer.com/chapter/10.1007/978-3-030-87602-9_14.
Li W, Wang L, Qin S. CMS-UNet: Cardiac multi-task segmentation in MRI with a U-shaped network. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 12554 LNCS. Springer Science and Business Media Deutschland GmbH; 2020: 92–101. Available at: https://www.researchgate.net/publication/347785514_CMS-UNet_Cardiac_Multi-task_Segmentation_in_MRI_with_a_U-Shaped_Network.
Bui TD, Wang L, Chen J, Lin W, Li G, Shen D. Multi-task learning for neonatal brain segmentation using 3D dense-UNet with dense attention guided by geodesic distance. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol. 11795 LNCS; 2019: 243–251.
He K, Lian C, Zhang B, et al. HF-UNet: Learning hierarchically inter-task relevance in multi-task U-Net for accurate prostate segmentation in CT images. IEEE Trans Med Imaging. 2021; 40(8): 2118–2128. [CrossRef] [PubMed]
Ritch MD, Hannon BG, Read AT, et al. AxoNet: A deep learning-based tool to count retinal ganglion cell axons. Sci Rep. 2020; 10(1): 1–13. [CrossRef] [PubMed]
Masin L, Claes M, Bergmans S, et al. A novel retinal ganglion cell quantification tool based on deep learning. Sci Rep. 2021; 11(1): 1–13. [CrossRef] [PubMed]
Deng W, Hedberg-Buenz A, Soukup DA, et al. AxonDeep: Automated optic nerve axon segmentation in mice with deep learning. Transl Vis Sci Technol. 2021; 10(14): 22. [CrossRef] [PubMed]
Figure 1. Overall architecture of RGC-Net.
Figure 2. Effects of post-processing.
Figure 3. Neurite segmentation results from RGC-Net.
Figure 4. Soma segmentation results from RGC-Net.
Figure 5. Comparison among total neurite length, neuritic field area, soma area, soma length, and width obtained from RGC-Net (Predicted) and manual (Actual) reconstruction results.
Table 1. Comparison Between Our RGC-Net and the Other Approaches on Neurite Segmentation Task
Table 2. Comparison Between Our RGC-Net and the Other Approaches on Soma Segmentation Task