April 2023 | Volume 12, Issue 4 | Open Access | Artificial Intelligence
Automatic Identification and Segmentation of Orbital Blowout Fractures Based on Artificial Intelligence
Author Affiliations & Notes
  • Xiao-li Bao
    Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, China
  • Xi Zhan
    The Army Engineering University of PLA, Nanjing, China
  • Lei Wang
    Wenzhou Medical University, Wenzhou, China
  • Qi Zhu
    Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, China
  • Bin Fan
    Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, China
  • Guang-Yu Li
    Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun, China
  • Correspondence: Bin Fan and Guang-Yu Li, Department of Ophthalmology, The Second Norman Bethune Hospital of Jilin University, Changchun 130041, China. e-mail: fanb@jlu.edu.cn and liguangyu@aliyun.com 
Translational Vision Science & Technology April 2023, Vol. 12, 7. doi: https://doi.org/10.1167/tvst.12.4.7
Abstract

Purpose: The incidence of orbital blowout fractures (OBFs) is gradually increasing due to traffic accidents, sports injuries, and ocular trauma. Orbital computed tomography (CT) is crucial for accurate clinical diagnosis. In this study, we built an artificial intelligence (AI) system based on two available deep learning networks (DenseNet-169 and UNet) for fracture identification, fracture side distinguishment, and fracture area segmentation.

Methods: We established a database of orbital CT images and manually annotated the fracture areas. DenseNet-169 was trained and evaluated on the identification of CT images with OBFs. We also trained and evaluated DenseNet-169 and UNet for fracture side distinguishment and fracture area segmentation. We used cross-validation to evaluate the performance of the AI algorithm after training.

Results: For fracture identification, DenseNet-169 achieved an area under the receiver operating characteristic curve (AUC) of 0.9920 ± 0.0021, with an accuracy, sensitivity, and specificity of 0.9693 ± 0.0028, 0.9717 ± 0.0143, and 0.9596 ± 0.0330, respectively. DenseNet-169 distinguished the fracture side with an accuracy, sensitivity, specificity, and AUC of 0.9859 ± 0.0059, 0.9743 ± 0.0101, 0.9980 ± 0.0041, and 0.9923 ± 0.0008, respectively. The intersection over union (IoU) and Dice coefficient of UNet for fracture area segmentation were 0.8180 ± 0.0093 and 0.8849 ± 0.0090, respectively, showing high agreement with manual segmentation.

Conclusions: The trained AI system could realize the automatic identification and segmentation of OBFs, which might be a new tool for smart diagnoses and improved efficiencies of three-dimensional (3D) printing-assisted surgical repair of OBFs.

Translational Relevance: Our AI system, based on two available deep learning network models, could help in precise diagnoses and accurate surgical repairs.

Introduction
With the frequent occurrence of traffic and sports accidents, the incidence of orbital blowout fractures (OBFs) has gradually increased. OBFs are a common form of ocular trauma in clinical settings1,2 and cause debilitating diplopia and enophthalmos. To alleviate these symptoms, an accurate diagnosis and a precise surgical repair are key steps.3 A detailed evaluation of orbital computed tomography (CT), a noninvasive examination, is crucial for the diagnosis and surgical repair of OBFs, as it can precisely visualize the fracture areas and entrapped soft tissue.4 The major purpose of surgical repair for OBFs is to restore binocular single vision and correct the altered appearance caused by enophthalmos from an enlarged orbital cavity. Therefore, anatomic reduction of the trapped soft tissue and precise repair of the orbital defect area with artificial materials, such as artificial bone and titanium mesh, are crucial steps in the surgical treatment process.5 Meticulous preoperative planning based on a three-dimensional (3D) printed orbital model greatly assists in achieving these surgical goals.6 In addition, an appropriately designed implant template based on the orbital model benefits the surgery and minimizes postoperative complications. However, manually annotating fracture areas to design an implant template or a 3D model remains time-consuming, even for experienced clinicians.7 Thus, the automatic identification and annotation of OBF areas based on artificial intelligence (AI) may simplify the preoperative design process and improve the accuracy and efficiency of surgery. 
AI imitates the thinking process of the human brain by learning through experience via computer algorithms and plays an active role in medical diagnosis, such as the automatic detection of diabetic retinopathy and age-related macular degeneration from fundus photographs.8 Image segmentation, one of the most widely used AI technologies in medicine, refers to the pixel-level classification of images to label target objects. It enables the full extraction of anatomic structure information from medical images, which can inform clinical decision making.9 Recently, Li et al.10 automatically detected CT images with OBFs using a popular deep convolutional neural network (DCNN), InceptionV3, and achieved an accuracy of 0.92; however, their model could not automatically locate the fracture areas on the CT images. Hamwood et al.11 constructed an AI model for automatic segmentation of the bony orbit based on orbital CT and magnetic resonance imaging (MRI) images. Zhu et al.12 analyzed the characteristics of aging Asians with AI-assisted segmentation of orbital bony features. Some commercial software packages may help segment the orbital region based on CT images.13 However, these studies focused on the segmentation of the normal orbit, and, to the best of our knowledge, there are no studies yet on the automatic segmentation of OBFs. 
In this study, we developed and evaluated a CT-based AI system for the identification and segmentation of OBFs, which has the potential to become a useful tool for the smart diagnosis of OBFs and might improve the precision and efficiency of 3D printing-assisted repair. 
Methods
This study adhered to the Declaration of Helsinki principles and was approved by the Ethics Review Committee of the Second Norman Bethune Hospital of Jilin University. The requirement for informed consent was waived by the review board. 
Orbital CT Image Database
A total of 3016 orbital CT images (1997 fracture and 1019 non-fracture images) were obtained from the Second Norman Bethune Hospital of Jilin University. All patients were Asian, and the baseline demographic characteristics are shown in Table 1. The fracture CT images came from patients with monocular OBFs. For the non-fracture group (162 patients, 1019 images), we selected several consecutive CT scans with complete bony walls for every patient. For the fracture group (335 patients, 1997 images), we selected continuous scans that showed the fracture areas for each patient. The fracture and non-fracture images together composed dataset 1, which was used for training and evaluating fracture identification. The fracture images alone composed dataset 2, which was used for training and evaluating fracture side distinguishment and fracture area segmentation. The fracture CT images in the database were independently judged by three experienced radiologists; the diagnosis was established and the fracture areas were annotated when a consensus was reached. A senior physician was invited to determine and annotate the fracture areas in the event of a disagreement among the three radiologists. The direct and indirect signs of OBFs in the orbital CT images were annotated with the online tool LabelMe. Direct signs of OBFs comprised an interruption of the continuity of the orbital wall and a change in the contour of the orbit; indirect signs included effusion in the adjacent sinus cavities, thickening and swelling of the extraocular muscles, and entrapment of the orbital contents. The fracture-type and side-specific distribution of CT images in the fracture group is shown in Table 2. To minimize the computational cost, the target region was extracted automatically from the orbital CT images. We used OpenCV to automatically identify the region of interest (ROI) through template matching: the mean image of manually cropped target areas served as the template, which was then matched against each image to mark and crop a rectangular orbital region. Finally, the input images were resized to 224 × 224 pixels for training DenseNet-169 and to 128 × 256 pixels for training UNet, and the pixel values were normalized to the range 0 to 1. 
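As an illustration of this preprocessing pipeline, the following is a minimal sketch of the template-matching crop and normalization steps, assuming grayscale JPG slices and a precomputed mean template image; the function names and file paths are illustrative, not from the original implementation.

```python
# Sketch of the ROI-extraction step described above: a mean template image
# (computed from manually cropped orbital regions) is matched against each
# CT slice with OpenCV, and the best-matching rectangle is cropped out.
import cv2
import numpy as np

def extract_orbital_roi(ct_slice: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Locate and crop the orbital region via normalized cross-correlation."""
    result = cv2.matchTemplate(ct_slice, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)  # location of the best match
    x, y = top_left
    h, w = template.shape[:2]
    return ct_slice[y:y + h, x:x + w]

def preprocess_for_network(roi: np.ndarray, size: tuple) -> np.ndarray:
    """Resize to the network's input size and normalize pixels to [0, 1]."""
    resized = cv2.resize(roi, size, interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0

# Hypothetical usage: 224 x 224 input for DenseNet-169, 128 x 256 for UNet.
# slice_img = cv2.imread("ct_slice.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative path
# roi = extract_orbital_roi(slice_img, mean_template)
# densenet_input = preprocess_for_network(roi, (224, 224))
# unet_input = preprocess_for_network(roi, (256, 128))  # cv2 sizes are (width, height)
```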
Table 1. Baseline Demographic Characteristics of Each Group

Table 2. Fracture-Type and Side-Specific Distribution of CT Images in the Fracture Group
In this study, we used random rotation (0 degrees to 359 degrees) to augment the training set in dataset 1. At each transformation, an image was rotated by an angle selected at random between 0 degrees and 359 degrees. Because training ran for 100 epochs, each image underwent 100 random rotations in total. The transformed images were used only in the current training step and were not stored. 
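A minimal sketch of this on-the-fly augmentation: at every epoch each training image is rotated by a random angle in [0, 359] degrees, and the rotated copy is used once and discarded, never stored. The helper name is illustrative.

```python
# Rotate an image about its center by a random angle (0-359 degrees),
# as described for the dataset 1 training augmentation.
import random
import cv2
import numpy as np

def random_rotation(image: np.ndarray) -> np.ndarray:
    angle = random.uniform(0, 359)
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```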
To avoid data leakage, we divided the data by patient labels to ensure that there was no patient overlap between the training and test sets. Additionally, k-fold cross-validation was used to evaluate the post-trained AI algorithm: the data were randomly divided into k = 5 folds, and in each cross-validation round, k − 1 folds were used for training and the remaining fold for validation. The process was repeated k times so that each fold served once for validation. Compared with a single train/test split, cross-validation effectively reduces bias in the evaluation process. 
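One way to implement this patient-level split is scikit-learn's GroupKFold, sketched below; the paper does not specify its tooling, so the variable names and use of scikit-learn are assumptions.

```python
# Patient-level 5-fold split: all images from one patient stay in the same
# fold, so no patient appears in both training and test data.
# `images`, `labels`, and `patient_ids` are assumed to be parallel arrays.
from sklearn.model_selection import GroupKFold

def patient_level_folds(images, labels, patient_ids, k=5):
    splitter = GroupKFold(n_splits=k)
    for train_idx, val_idx in splitter.split(images, labels, groups=patient_ids):
        yield train_idx, val_idx  # one (training, validation) split per fold
```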
Network Models and Network Training
We implemented the automatic identification of fracture images and the distinguishment of fracture sides using DenseNet-169 with the pre-trained ImageNet weight, and we used UNet for fracture area segmentation. 
Within each dense block, DenseNet densely reconnects the feature layers, fully combining shallow and deep features.14 Because neural networks have strong fitting abilities, a small training set can easily cause overfitting. UNet consists of a contracting (down-sampling) path and an expanding (up-sampling) path. During down-sampling, a 3 × 3 valid convolution operation and rectified linear unit (ReLU) activation were repeated twice to reduce the image resolution, and the key information was retained with a 2 × 2 max-pooling operation.15 After each down-sampling step, the number of feature layers increased while the spatial size was compressed. The expansion path of UNet gradually restored the image details, precisely localized the lesion site, and recovered the feature map to the size of the input image. The expansion path also contained four blocks, each consisting of a 3 × 3 deconvolution and the ReLU function. After each up-sampling operation, the feature map size was doubled, and the number of channels was halved. We divided the dataset into training, validation, and test sets at an 8:1:1 ratio. Candidate hyperparameters were first narrowed by dynamically observing performance on the validation set and then finalized with a grid search. The learning rate started at 0.1 and was gradually reduced once the validation accuracy stabilized. Because of the size of the images in the database, we used small (3 × 3) filters. The number of epochs was chosen as the point at which the accuracy remained stable as training continued. Figure 1 shows the architectures of DenseNet-169 and UNet, and Table 3 lists the algorithm parameters. 
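A hedged sketch of this classification setup, assuming a Keras implementation (the paper does not name its framework or optimizer): DenseNet-169 with pre-trained ImageNet weights, a sigmoid head for binary fracture identification, an initial learning rate of 0.1, and a callback that lowers the rate when validation accuracy plateaus. Grayscale CT slices are assumed to be replicated to three channels to match the ImageNet input format.

```python
import tensorflow as tf

def build_densenet169_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    """DenseNet-169 backbone with ImageNet weights and a binary head."""
    backbone = tf.keras.applications.DenseNet169(
        weights="imagenet", include_top=False,
        input_shape=input_shape, pooling="avg")
    output = tf.keras.layers.Dense(1, activation="sigmoid")(backbone.output)
    model = tf.keras.Model(backbone.input, output)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),  # optimizer assumed
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Gradually reduce the learning rate once validation accuracy stabilizes,
# mirroring the schedule described in the text.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_accuracy", factor=0.1, patience=5)
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[reduce_lr])
```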
Figure 1. Architectures of DenseNet-169 and UNet. (A) DenseNet-169 included various convolutional layers. Each convolutional layer contained the output of all the previous convolutional layers. (B) UNet was composed of contracting (down-sampling) and expanding (up-sampling) paths. In the process of down-sampling, a 3 × 3 valid convolution operation and rectified linear unit (ReLU) activation were repeated twice to reduce image resolution. The expansion path contained four blocks, each containing a 3 × 3 deconvolution and the ReLU function.

Table 3. DenseNet-169 and UNet Algorithm Parameters
The training process was performed on a Xeon E5-2630 v3 @ 2.40 GHz server. The original UNet network model could not achieve effective segmentation of the fracture areas due to their irregular morphology and high variability. To address this challenge, we created a new loss function by adding a constraint, following the strategy of Srivastava,16 to optimize the network. 
\begin{eqnarray}
L &=& \alpha \log \left( - \sum_{i = 1}^{N} \left[ y_i \log ( \hat{y}_i ) + ( 1 - y_i ) \log ( 1 - \hat{y}_i ) \right] \right) \nonumber \\
&& +\; \frac{\beta}{N} \sum_{i = 1}^{N} \left| y_i - \hat{y}_i \right|
\end{eqnarray}
(1)

In this loss function for UNet, \(y_i\) is the true label extracted from the manually labeled segmentation map, \(\hat{y}_i\) is the label predicted by the model for the segmentation map, \(\alpha\) and \(\beta\) are hyperparameters of the model (set to 1 and 50, respectively, in our experiments), and \(N\) is the total number of pixels in the segmented image. 
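A minimal sketch of Equation 1 as a training loss, again assuming TensorFlow tensors; the `eps` clipping is an implementation detail not in the paper, added to guard against log(0).

```python
# Custom UNet loss from Equation 1: the log of the summed binary
# cross-entropy plus a beta-weighted mean-absolute-error constraint
# (alpha = 1, beta = 50 in the experiments described above).
import tensorflow as tf

def custom_unet_loss(y_true, y_pred, alpha=1.0, beta=50.0, eps=1e-7):
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)  # assumed guard against log(0)
    bce_sum = -tf.reduce_sum(
        y_true * tf.math.log(y_pred) + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    return alpha * tf.math.log(bce_sum) + beta * mae
```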
Performance Assessments
We evaluated the performance of the post-trained AI algorithm for accuracy, specificity, sensitivity, and precision by calculating true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs). Accuracy, specificity, and sensitivity were calculated at a cutoff value of 0.5. Accuracy is defined as the concordance rate between the AI results and the manual results. Precision reflects the proportion of positive predictions that are correct; the higher the precision, the fewer misjudgments made by the AI algorithm. Sensitivity, also known as recall, reflects the ability of the AI algorithm to identify positive results, whereas specificity assesses its ability to identify negative results. The formulas for these parameters are as follows: 
\begin{eqnarray}Accuracy = \frac{{{\rm{TP}} + {\rm{TN}}}}{{{\rm{TP}} + {\rm{TN}} + {\rm{FP}} + {\rm{FN}}}}\end{eqnarray}
(2)
 
\begin{eqnarray}Precision = \frac{{TP}}{{{\rm{TP}} + {\rm{FP}}}}\end{eqnarray}
(3)
 
\begin{eqnarray}Sensitivity/Recall = \frac{{{\rm{TP}}}}{{{\rm{TP}} + {\rm{FN}}}}\end{eqnarray}
(4)
 
\begin{eqnarray}Specificity = \frac{{{\rm{TN}}}}{{{\rm{TN}} + {\rm{FP}}}}\end{eqnarray}
(5)
 
The loss function curve measures the degree of inconsistency between the predicted and true outcomes and thus indicates the training progress of the AI algorithms. In addition, we analyzed the trade-off between sensitivity and specificity using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC); the closer the AUC is to 1, the better the performance of the model. We also used precision-recall (PR) curves to analyze the relationship between precision and recall. 
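As a concrete illustration of Equations 2 through 5, the following is a minimal sketch that computes these threshold-based metrics from binary labels and predicted probabilities at the 0.5 cutoff used in this study; the function name is illustrative.

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_prob: np.ndarray, cutoff=0.5):
    """Accuracy, precision, sensitivity, and specificity per Equations 2-5."""
    y_pred = (y_prob >= cutoff).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # also known as recall
        "specificity": tn / (tn + fp),
    }
```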
We assessed the similarity between the AI and the manual segmentation using the intersection over union (IoU) and the Dice coefficient, the most widely used evaluation metrics in image segmentation; both measure the similarity between two sets of pixels. The formulas for the Dice coefficient and the IoU are as follows, where |X| denotes the number of pixels in the manual label and |Y| the number of pixels in the predicted segmentation: 
\begin{eqnarray}DICE = \frac{{2\left| {X \cap Y} \right|}}{{\left| X \right| + \left| Y \right|}}\end{eqnarray}
(6)
 
\begin{eqnarray}IoU = \frac{{\left| {X \cap Y} \right|}}{{\left| X \right| + \left| Y \right| - \left| {X \cap Y} \right|}}\end{eqnarray}
(7)
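A similar sketch for Equations 6 and 7, computing the Dice coefficient and IoU from a pair of binary masks (1 = fracture pixel); the helper name is illustrative.

```python
import numpy as np

def dice_and_iou(label_mask: np.ndarray, pred_mask: np.ndarray):
    """Dice = 2|X∩Y| / (|X| + |Y|); IoU = |X∩Y| / (|X| + |Y| - |X∩Y|)."""
    x = label_mask.astype(bool)
    y = pred_mask.astype(bool)
    intersection = np.logical_and(x, y).sum()
    dice = 2.0 * intersection / (x.sum() + y.sum())
    iou = intersection / (x.sum() + y.sum() - intersection)
    return dice, iou
```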
 
Results
Selection of Proposed Network Models
To select suitable neural network models, we compared four common neural networks: ResNet, AlexNet, VGGNet, and DenseNet. In the test set, DenseNet, ResNet, AlexNet, and VGGNet achieved fracture identification accuracies of 0.9681 ± 0.0034, 0.9176 ± 0.0255, 0.9352 ± 0.097, and 0.9045 ± 0.0236, respectively (Table 4). DenseNet achieved the best performance according to the ROC curves, as shown in Figure 2. In addition, DenseNet had the fewest parameters of the four network models, which might reduce the computational cost and save calculation time in real clinical settings. 
Table 4. Comparison of DenseNet With Other Network Models

Figure 2. Selection of network models. (A) The receiver operating characteristic curves of DenseNet for fracture identification. (B) The receiver operating characteristic curves of ResNet for fracture identification. (C) The receiver operating characteristic curves of AlexNet for fracture identification. (D) The receiver operating characteristic curves of VGGNet for fracture identification. AUC, area under the curve.
Performance of DenseNet-169 for the Identification of OBFs
The accuracy of DenseNet-169 for fracture identification in the test set reached 0.9693 ± 0.0028. Table 5 summarizes the assessments of the AI algorithm in the test set. As shown in Figure 3C, the loss function gradually decreased as the training epochs increased, indicating that the identification results of the AI algorithm gradually matched the manual results. The AUC of the AI algorithm in the test sets was 0.9920 ± 0.0021 (Fig. 3A), and the area under the PR curve was 0.9957 ± 0.0017 (Fig. 3B), showing that the post-trained AI algorithm exhibited remarkable reliability in identifying OBFs on CT images. 
Table 5. Evaluation of DenseNet-169 for Fracture Identification and Fracture Side Distinguishment

Figure 3. Evaluation of post-trained DenseNet-169 for fracture identification. (A) The receiver operating characteristic curve of DenseNet-169 for fracture identification. (B) The precision-recall curve of DenseNet-169 for fracture identification. (C) The convergence of the loss function of DenseNet-169 during the training process for fracture identification. AUC, area under the curve; ROC, receiver operating characteristic.
Training and Evaluation of DenseNet-169 for Distinguishing the Fracture Side
After 100 epochs of training, the accuracy of DenseNet-169 for fracture side distinguishment in the test set was 0.9859 ± 0.0059. Table 5 summarizes the various parameters of the AI algorithm for distinguishing the fracture side in the test sets. The loss function gradually decreased as the training epochs increased, as shown in Figure 4C. The AUC of the AI algorithm in the test sets was 0.9923 ± 0.0008 (Fig. 4A), and the area under the PR curve was 0.9954 ± 0.0004 (Fig. 4B), suggesting that the AI algorithm has an excellent ability to distinguish the fracture side. 
Figure 4. Evaluation of post-trained DenseNet-169 for fracture side distinguishment. (A) The receiver operating characteristic curve of DenseNet-169 for fracture side distinguishment. (B) The precision-recall curve of DenseNet-169 for fracture side distinguishment. (C) The convergence of the loss function of DenseNet-169 during the training process for fracture side distinguishment. AUC, area under the curve; ROC, receiver operating characteristic.
Training and Evaluation of UNet for the Segmentation of the Fracture Area
We trained UNet for fracture area segmentation. After 50 training epochs, the IoU of the trained UNet in the test set was 0.8180 ± 0.0093, and the Dice coefficient was 0.8849 ± 0.0090. The trained UNet could annotate the direct and indirect signs of OBFs in the CT images, as shown in Figure 5, and could identify and annotate different types of orbital fractures, such as fractures involving both the medial and inferior walls of the orbit (Fig. 5B). Analysis of the AI-annotated images showed that the false-positive segmentations of fracture areas were located in the infraorbital canal/infraorbital foramen area (Fig. 6). 
Figure 5. Manual and AI segmentation for various types of orbital blowout fractures. AI, artificial intelligence.

Figure 6. Manual and AI segmentation located in the infraorbital canal/infraorbital foramen area.
Discussion
In this study, we developed an AI system based on orbital CT images for the automatic identification and segmentation of OBFs. The accuracy of our AI system for fracture identification reached 0.9693 ± 0.0028, and it was remarkably successful in distinguishing the fracture side, with an accuracy of 0.9859 ± 0.0059, a level that has not been previously reported. In contrast to previous studies, we trained the AI system on JPG-format images instead of the DICOM data generated by the imaging equipment17; JPG images are more broadly compatible and provide a substantial opportunity for further development and application of the AI system, such as providing AI consultation in remote areas.18 
The IoU and Dice coefficient of the trained UNet in the test set were 0.8180 ± 0.0093 and 0.8849 ± 0.0090, respectively. A minor difference between the AI and the manual segmentation can result in markedly lower IoU and Dice values; however, we believe the potential differences between AI and manual segmentation are limited and will not significantly affect the clinical implementation of AI. The OBF features on CT images differ markedly between the medial wall and the orbital floor: herniation of the orbital contents into the maxillary sinus cavity in the shape of a teardrop is a specific sign of orbital floor fractures, whereas wide damage to the bony septation of the sinus cavity is a typical feature of medial wall fractures. The features of old and fresh fractures also differ: in old fractures, the shape of the orbital wall is altered, but the orbital bone is continuous without significant interruption, whereas in fresh fractures, soft tissue swelling and effusion are often present. The trained UNet achieved effective segmentation of fracture areas with these different features. Interestingly, the segmentations mistaken by the AI were predominantly located in the region of the infraorbital canal/infraorbital foramen. The infraorbital foramen is an oval opening approximately 0.5 cm below the midpoint of the infraorbital rim through which the infraorbital nerves and blood vessels pass; it is the opening of the infraorbital canal to the outside of the orbit. This area is also often misdiagnosed in orbital fractures,19 and we will further improve the algorithm to avoid AI misjudgment in this region. 
DenseNet-169 with pre-trained ImageNet weights was used to identify OBFs and distinguish the affected side. DenseNet is a DCNN model proposed by Huang et al.14 that draws on the advantages of ResNet and GoogLeNet. Each convolutional layer of DenseNet contains the output of all the previous convolutional layers, which enables the input information to be fully reused. The fusion of shallow and deep features can alleviate the vanishing-gradient problem caused by network depth and improve the anti-overfitting performance of the network. DenseNet has shown superior performance in automatic classification tasks for pulmonary nodules, breast cancer, and other diseases.20–22 In addition, UNet was trained to segment the OBF areas in our study. As the current mainstream network for medical image segmentation,23 UNet has been trained to locate and segment intervertebral discs in MR images24 and, in CT image segmentation, to segment liver tumors and chest organs.25,26 We constructed a new loss function by adding a constraint to resolve the ineffective segmentation of the fracture areas by the traditional BCELoss function, which was the key to successful segmentation. Furthermore, our improvement to the UNet loss function provides a new template for optimizing AI algorithms that segment irregular objects with small samples and sizes. 
For better application in real clinical settings, we plan to train the network models with more images by adding axial and sagittal CT images to the database. In addition, we are preparing to extend the AI system to quantify the fracture area. High-quality CT images are a prerequisite for accurately determining the fracture area. Zhai et al.27 performed automatic calibration and quantitative error evaluation of orbital CT based on a signed distance field, which provides a direction for our study. Nevertheless, automatically calibrating large amounts of image data for AI training remains challenging. 
This study established and evaluated an AI-assisted identification and segmentation system for OBFs based on orbital CT images using two available deep networks, which exhibited remarkable reliability in the identification of OBFs and could effectively segment the fracture area. The AI system we developed may assist in the implementation of smart diagnosis of OBFs and lay a foundation for improving the accuracy and efficiency of 3D printing-assisted orbital wall fracture repair. 
Acknowledgments
Supported by the National Natural Science Foundation of China (No. 82171053 and 81570864) and the Natural Science Foundation of Jilin Province (No. 20200801043GH and 20190201083JC). The funders had no role in the study design, data collection, analysis, decision to publish, or manuscript preparation. 
Authors’ contributions: X.B. conceived and designed the experiments. X.B., B.F., and G.-Y.L. prepared the manuscript. X.B., X.Z., and Q.Z. performed the experiments. X.B. and X.Z. analyzed the data. L.W. optimized the algorithm and revised the manuscript. 
Disclosure: X. Bao, None; X. Zhan, None; L. Wang, None; Q. Zhu, None; B. Fan, None; G.-Y. Li, None 
References
1. Patel S, Andrecovich C, Silverman M, Zhang L, Shkoukani M. Biomechanic factors associated with orbital floor fractures. JAMA Facial Plast Surg. 2017; 19(4): 298–302.
2. Valencia MR, Miyazaki H, Ito M, Nishimura K, Kakizaki H, Takahashi Y. Radiological findings of orbital blowout fractures: A review. Orbit. 2021; 40(2): 98–109.
3. Brucoli M, Arcuri F, Cavenaghi R, Benech A. Analysis of complications after surgical repair of orbital fractures. J Craniofac Surg. 2011; 22(4): 1387–1390.
4. Ploder O, Klug C, Voracek M, Burggasser G, Czerny C. Evaluation of the computer-based area and volume measurement from coronal computed tomography scans in isolated blowout fractures of the orbital floor. J Oral Maxillofac Surg. 2002; 60(11): 1267–1272; discussion 1273–1274.
5. Harris GJ. Orbital blow-out fractures: Surgical timing and technique. Eye (Lond). 2006; 20(10): 1207–1212.
6. Mommaerts MY, Büttner M, Vercruysse H, Wauters L, Beerens M. Orbital wall reconstruction with two-piece puzzle 3D printed implants: Technical note. Craniomaxillofac Trauma Reconstr. 2016; 9(1): 55–61.
7. Yi WS, Xu XL, Ma JR, Ou XR. Reconstruction of complex orbital fracture with titanium implants. Int J Ophthalmol. 2012; 5(4): 488–492.
8. Du XL, Li WB, Hu BJ. Application of artificial intelligence in ophthalmology. Int J Ophthalmol. 2018; 11(9): 1555–1561.
9. Song Y, Ren S, Lu Y, Fu X, Wong KKL. Deep learning-based automatic segmentation of images in cardiac radiography: A promising challenge. Comput Methods Programs Biomed. 2022; 220: 106821.
10. Li L, Song X, Guo Y, et al. Deep convolutional neural networks for automatic detection of orbital blowout fractures. J Craniofac Surg. 2020; 31(2): 400–403.
11. Hamwood J, Schmutz B, Collins MJ, Allenby MC, Alonso-Caneiro D. A deep learning method for automatic segmentation of the bony orbit in MRI and CT images. Sci Rep. 2021; 11(1): 13693.
12. Li Z, Chen K, Yang J, et al. Deep learning-based CT radiomics for feature representation and analysis of aging characteristics of Asian bony orbit. J Craniofac Surg. 2022; 33(1): 312–318.
13. Becker M, Friese K, Wolter F, Gellrich N, Essig H. Development of a reliable method for orbit segmentation & measuring. In: Proceedings of the IEEE International Symposium on Medical Measurements and Applications (MeMeA). 2015: 285–290.
14. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017: 2261–2269.
15. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. 2015: 234–241.
16. Srivastava N. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014; 15: 1929–1958.
17. Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: A preliminary study. Radiology. 2018; 286(3): 887–896.
18. Williams C, Asi Y, Raffenaud A, Bagwell M, Zeini I. The effect of information technology on hospital performance. Health Care Manag Sci. 2016; 19(4): 338–346.
19. Kazkayasi M, Ergin A, Ersoy M, Bengi O, Tekdemir I, Elhan A. Certain anatomical relations and the precise morphometry of the infraorbital foramen: Canal and groove: An anatomical and cephalometric study. Laryngoscope. 2001; 111(4 Pt 1): 609–614.
20. Zhang F, Wang Q, Yang A, et al. Geometric and dosimetric evaluation of the automatic delineation of organs at risk (OARs) in non-small-cell lung cancer radiotherapy based on a modified DenseNet deep learning network. Front Oncol. 2022; 12: 861857.
21. Li X, Shen X, Zhou Y, Wang X, Li TQ. Classification of breast cancer histopathological images using interleaved DenseNet with SENet (IDSNet). PLoS One. 2020; 15(5): e0232127.
22. Vulli A, Srinivasu PN, Sashank MSK, Shafi J, Choi J, Ijaz MF. Fine-tuned DenseNet-169 for breast cancer metastasis prediction using FastAI and 1-cycle policy. Sensors (Basel). 2022; 22(8): 2988.
23. Bargsten L, Wendebourg M, Schlaefer A. Data representations for segmentation of vascular structures using convolutional neural networks with UNet architecture. Annu Int Conf IEEE Eng Med Biol Soc. 2019: 989–992.
24. Dolz J, Desrosiers C, Ben Ayed I. In: Zheng G, Belavy D, Cai Y, Li S, eds. Computational Methods and Clinical Applications for Spine Imaging. Cham, Switzerland: Springer International Publishing; 2020: 130–143.
25. Jin Q, Meng Z, Sun C, Cui H, Su R. RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans. Front Bioeng Biotechnol. 2020; 8: 605132.
26. Jalali Y, Fateh M, Rezvani M, Abolghasemi V, Anisi MH. ResBCDUNet: A deep learning framework for lung CT image segmentation. Sensors (Basel). 2021; 21(1): 268.
27. Zhai G, Yin Z, Li L, Song X, Zhou Y. Automatic orbital computed tomography coordinating method and quantitative error evaluation based on signed distance field. Acta Radiol. 2021; 62(1): 87–92.