January 2020 | Volume 9, Issue 2 | Open Access | Special Issue
Factors in Color Fundus Photographs That Can Be Used by Humans to Determine Sex of Individuals
Author Affiliations & Notes
  • Takehiro Yamashita
    Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
  • Ryo Asaoka
    Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
  • Hiroto Terasaki
    Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
  • Hiroshi Murata
    Department of Ophthalmology, The University of Tokyo, Tokyo, Japan
  • Minoru Tanaka
    Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
  • Kumiko Nakao
    Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
  • Taiji Sakamoto
    Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
  • Correspondence: Taiji Sakamoto, Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan. 8-35-1, Sakuragaoka, Kagoshima-shi, Kagoshima 890-0075, Japan. e-mail: tsakamot@m3.kufm.kagoshima-u.ac.jp 
Translational Vision Science & Technology January 2020, Vol.9, 4. doi:https://doi.org/10.1167/tvst.9.2.4
Abstract

Purpose: Artificial intelligence (AI) can identify the sex of an individual from color fundus photographs (CFPs). However, the mechanisms involved in this identification have not been determined. This study was conducted to determine the information in CFPs that can be used to determine the sex of an individual.

Methods: This was a prospective, observational cross-sectional study of 112 eyes of 112 healthy volunteers. The following characteristics of the CFPs were analyzed: the color of the peripapillary area, expressed as the mean red, green, and blue intensities, and the degree of tessellation, expressed as the tessellation fundus index (TFI). The optic disc ovality ratio, papillomacular angle, retinal artery trajectory, and retinal vessel angles were also quantified. Differences between the sexes were assessed by Mann-Whitney U tests. Regularized binomial logistic regression was used to select the decisive factors, and its discriminative performance was evaluated by leave-one-out cross-validation.

Results: The mean age of the 76 men and 36 women was 25.8 years. The regularized binomial logistic regression selected the following variables for the optimal model for sex: peripapillary temporal green and blue intensities, temporal TFI, supratemporal TFI, optic disc ovality ratio, artery trajectory, and supratemporal retinal artery angle. With this approach, the discrimination accuracy rate was 77.9%.

Conclusions: Human-assessed characteristics of CFPs are useful in investigating a question first raised by AI: determining the sex of an individual.

Translational Relevance: This is the first report to approach the decision-making process of AI using human-interpretable parameters, and it may represent a new approach to medical AI research.

Introduction
Artificial intelligence (AI), in particular deep learning, has become one of the most studied topics in science. In ophthalmology, AI is now about to enter the clinical phase for the diagnosis and prognosis of diseases.1–4 In the field of AI in ophthalmology, some new findings have emerged that were assumed not to be possible before AI. One of the most unexpected was the ability of AI to identify the sex of an individual from the characteristics of that individual's ocular color fundus photographs (CFPs). The report by Poplin et al. showed that the accuracy rate reached as high as 97%.5 However, because of the mechanisms of deep learning, it is not possible to identify which clinical parameters were used by the machine to discriminate the sex of the individual whose CFPs were being analyzed. To the best of our knowledge, there has not been a study that evaluated the usefulness of the characteristics of CFPs in determining the sex of the individual whose CFPs were being analyzed (Fig. 1). 
Figure 1.
 
Representative fundus photographs of a man (left) and a woman (right). The fundus of the man has a reddish color (left), whereas that of the woman is bluish to greenish (right). The supratemporal artery is located closer to the macula in the woman's eye (right) than in the man's eye (left).
Even though the overall approach may be new and its conclusion seems feasible, careful consideration is still needed when applying AI in the medical field. For example, AI can show the relationship between two phenomena but cannot differentiate cause from effect. A well-known example of such a fallacious AI conclusion: because no patients are recorded in a region that has no hospitals, AI can conclude that the presence of a hospital is the cause of the disease. Thus, it is important to investigate the bases for the conclusions provided by AI using clinically established parameters, especially in the medical field. 
One method to resolve this problem is to track the process of the AI, changing a “black box AI” into an “explainable AI,” such as the heatmap in the report by Poplin et al.5 Another is to validate the results independently using clinical parameters known to humans. 
We began this project to determine whether we could approach the conclusion of AI using only known parameters, taking as our theme the distinguishing of sex from ocular CFPs. Thus far, there have been numerous studies using various parameters of ocular function and biometry,6–25 and the parameters used in this study were derived from them. If these factors are useful for distinguishing sex, we may be able to understand the conclusions of Poplin et al.5 
We found that a combination of known clinical parameters in the CFPs is useful in identifying the sex of the individual whose CFPs are being analyzed. Most importantly, this is the first study of a diagnosis model that uses known clinical parameters to solve a new theme provided by AI in ophthalmology. 
Methods
The study was approved by the Ethics Committee of Kagoshima University Hospital, and it was registered with the University Hospital Medical Network clinical trials registry. The registration title was “Morphological analysis of the optic disc and the retinal nerve fiber in myopic eyes” and the registration number was UMIN000006040. A detailed protocol is available at https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr.cgi?function=brows&action=brows&type=summary&recptno=R00000715. Data in this manuscript are also used in our other studies.13,14 All of the procedures conformed to the tenets of the Declaration of Helsinki. Written informed consent was obtained from all subjects after an explanation of the procedures to be used. 
Subjects
This was a cross-sectional, prospective observational study. A total of 133 eyes of 133 volunteers were enrolled between November 1, 2010, and February 29, 2012. Volunteers with no known eye diseases, as determined by examining their medical charts, were studied, and only the data from the right eyes were analyzed. The eligibility criteria were: age ≥20 years but <40 years; eyes normal by slit-lamp biomicroscopy, ophthalmoscopy, and OCT; best-corrected visual acuity ≤0.1 logarithm of the minimum angle of resolution units; and intraocular pressure ≤21 mm Hg. The exclusion criteria were: eyes with known ocular diseases such as glaucoma, staphyloma, and optic disc anomalies; presence of systemic diseases such as hypertension and diabetes; presence of visual field defects; and history of refractive or intraocular surgery. Seven eyes were excluded because of an ocular disease or prior ocular surgery: 3 eyes because of superior segmental optic hypoplasia, 1 eye because of glaucoma, and 3 eyes because of laser-assisted in situ keratomileusis. Another 14 eyes were excluded because of difficulty in measuring the fundus parameters. In the end, the data of 112 right eyes of 112 individuals (76 men and 36 women) were used for the statistical analyses. The axial lengths and refractive errors were measured as in our earlier studies.13,14 
Angles of Supratemporal and Infratemporal Retinal Arteries and Veins, Location of Papillomacular Position, and Degree of Optic Disc Ovality
The CFPs and the OCT images were taken by a fundus camera (Topcon 3D OCT-1000 Mark II). The angle between the supratemporal (ST) or infratemporal (IT) major retinal artery and the temporal horizontal line was measured by using a 3.4-mm green circle centered on the optic disc center and the intersection of the ST or IT major retinal artery (RA) with the green circle. The angle between the ST or IT major retinal vein (RV) and the temporal horizontal line was measured by the same method.13–15 We named these the ST and IT retinal artery (ST-RA and IT-RA) and the ST and IT retinal vein (ST-RV and IT-RV) angles. 
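The vessel-angle measurement above reduces to finding the angle between the temporal horizontal line and the ray from the disc center to the vessel's intersection with the circle. A minimal sketch (an assumption, not the authors' measurement software; `vessel_angle` and its coordinate convention are hypothetical):

```python
# Sketch: angle (in degrees) between the temporal horizontal line and a
# major retinal vessel, measured where the vessel crosses a circle
# centered on the optic disc. Coordinates follow the image convention
# in which the y-axis points downward.
import math

def vessel_angle(disc_center, intersection):
    """Return the angle between the temporal horizontal and the line
    from the disc center to the vessel/circle intersection point."""
    dx = intersection[0] - disc_center[0]
    dy = disc_center[1] - intersection[1]  # flip sign: image y points down
    return math.degrees(math.atan2(dy, dx))

# A vessel crossing the circle 45 degrees above the temporal horizontal:
print(vessel_angle((0, 0), (10, -10)))  # → 45.0
```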
The papillomacular position is the angle formed by a horizontal line and the line connecting the optic disc center and the fovea in the CFPs (Fig. 2A).16 The ovality ratio was determined on the CFPs as described in detail previously.17 The maximum and minimum disc diameters were measured by a single observer. We defined the vertical axis of the disc as the longest diameter within 45° of the geometric vertical axis, and the horizontal axis as the longest diameter more than 45° from the geometric vertical axis. The degree of ovality, the ovality ratio, was determined by dividing the maximum by the minimum disc diameter (Fig. 2B). 
Figure 2.
 
Method of quantifying retinal vessel angles and (A) papillomacular position and (B) ovality ratio. Red double arrows point to the supra- and infratemporal retinal artery angles. Blue double arrows point to the supra- and infratemporal retinal vein angles. White double arrow is papillomacular position. The ovality ratio was determined by dividing the maximum by the minimum disc diameters.
Measurement of Retinal Artery Trajectory
The curvature of the RA trajectory was quantified by fitting it to a second-degree polynomial equation, as described in detail elsewhere.18–20 The RA and the center of the optic disc were identified in the CFPs. The fovea-to-disc axis in the CFPs was rotated to a vertical position. At least 20 points on the ST-RA and IT-RA were marked on the CFPs. The x and y coordinates of each mark were then determined automatically by the ImageJ program (ImageJ version 1.47, National Institutes of Health, Bethesda, MD; available at: http://imagej.nih.gov/ij/). The x and y coordinates in the CFPs were then converted to a new set of coordinates with the center of the disc as the origin. Finally, the converted coordinate data were fit to a second-degree polynomial equation, y = a + bx + cx²/100, with the curve-fitting program of ImageJ, where a, b, and c are constants calculated by the program. Under these conditions, a larger “c” makes the curve steeper and brings the arms of the curve closer to the fovea. Thus, the c constant was used as the curvature of the RA trajectory (Fig. 3A). 
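The fitting step above can be sketched as an ordinary least-squares fit to the three basis terms 1, x, and x²/100; the quadratic coefficient then quantifies how steeply the trajectory curves. This is an illustrative Python/NumPy sketch (the study used ImageJ's curve-fitting program), with synthetic marks rather than study data:

```python
# Sketch, assuming the marked pixel coordinates have already been
# converted so the disc center is the origin and the fovea-disc axis
# is vertical. Fits y = a + b*x + (c/100)*x^2 by least squares.
import numpy as np

def fit_artery_trajectory(x, y):
    """Fit y = a + b*x + (c/100)*x^2 and return the constants (a, b, c)."""
    # Design matrix with one column per basis term: 1, x, x^2/100
    A = np.column_stack([np.ones_like(x), x, x**2 / 100.0])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, c

# Synthetic marks lying exactly on y = 5 + (2/100) * x^2:
x = np.linspace(-50.0, 50.0, 21)
y = 5 + 0.02 * x**2
a, b, c = fit_artery_trajectory(x, y)
# The fit should recover a = 5, b = 0, c = 2.
```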
Figure 3.
 
Method of quantifying (A) retinal artery trajectory and (B) fundus color. The coordinate data (yellow dots in the fundus photograph) are fit to a second-degree polynomial equation, a + bx + cx2/100, with the curve fitting program of ImageJ (lower graph). The c constant was used as the degree of curvature of the retinal artery trajectory. ImageJ software was used to identify the mean intensity of the red (R), green (G), and blue (B) (right three graphs) within the peripapillary eight circles (yellow circles in the fundus photograph). The tessellation fundus index (TFI) is calculated by following formula: TFI = R/(R + G + B) in each of the eight locations.
Measurement of Red, Green, and Blue Intensity in Eight Peripapillary Locations and Calculation of Tessellation Fundus Index
Using the same CFPs, the ImageJ software was used to calculate the mean red (R), green (G), and blue (B) intensities within each measurement area. This was followed by the construction of histograms of the number of red, green, and blue pixels in each circular area. The tessellation fundus index (TFI) was calculated with the algorithm described in detail in earlier studies,21–24 whose findings showed that the peripapillary location of the tessellations varies greatly.25,26 The TFI was calculated from the mean red intensity (R), the mean green intensity (G), and the mean blue intensity (B) as follows: TFI = R/(R + G + B), using the mean red-green-blue intensities in each of the eight locations.21 The measurement areas were determined as follows: first, the center of the lateral circle, 48 pixels in diameter, was placed on the line between the macula and the center of the optic nerve head, contacting the lateral margin of the optic nerve head as in Figure 3B. Then, the neighboring circles were placed in contact with one another. 
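The TFI formula above is a simple normalization of the red channel against the total intensity; a minimal sketch with illustrative intensity values (not data from the study):

```python
# Sketch of the tessellation fundus index (TFI) for one peripapillary
# circle, given the mean channel intensities measured in that circle.

def tessellation_fundus_index(r_mean: float, g_mean: float, b_mean: float) -> float:
    """TFI = R / (R + G + B), using the mean red, green, and blue intensities."""
    return r_mean / (r_mean + g_mean + b_mean)

# A reddish (more tessellated) area yields a higher TFI:
print(tessellation_fundus_index(150.0, 75.0, 25.0))  # → 0.6
```

A neutral gray area gives TFI = 1/3, so values above one-third indicate red dominance in that location.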
Statistical Analyses
The sex difference of each fundus parameter was assessed by Mann-Whitney U test. Then, the odds ratio of each fundus parameters for sex was evaluated using the univariate binomial logistic regression. 
Next, the optimal model for sex was determined by using regularized binomial logistic regression. It is widely acknowledged that ordinary statistical models, such as linear or binomial logistic regression, may be overfitted to the original sample, especially when the number of predictor variables is large. The least absolute shrinkage and selection operator, proposed by Tibshirani et al., mitigates these problems in linear/logistic modeling by applying a shrinkage method so that the sum of the absolute values of the regression coefficients is regularized.27,28 This method has been used in many different fields, from the analysis of human perception to genetic analysis,29,30 and we have recently shown the usefulness of this approach in glaucoma.31,32 More specifically, in the case of L2-regularized binomial logistic regression (Ridge binomial logistic regression), the penalized log-likelihood function to be maximized is given by the formula below:  
\begin{eqnarray*} \mathop \sum \limits_{i = 1}^n \left[ y_i x_i \beta - \log \left( 1 + e^{x_i \beta} \right) \right] - \lambda \mathop \sum \limits_{j = 1}^p \beta_j^2 \end{eqnarray*}
where xi is the i-th row of a matrix of n observations with p predictors, β is the column vector of the regression coefficients, and λ represents the penalty applied. Of note, this is identical to ordinary binomial logistic regression when λ is equal to zero. Unlike deep learning, it is possible to directly observe the effect of the selected parameters in the optimal model. 
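The penalized log-likelihood described above can be evaluated directly; a minimal Python sketch (an assumption for illustration — the study's analyses used R, and the data here are random placeholders):

```python
# Direct evaluation of the L2-penalized log-likelihood:
# sum_i [ y_i * (x_i @ beta) - log(1 + exp(x_i @ beta)) ] - lam * sum_j beta_j^2
import numpy as np

def ridge_logistic_loglik(beta, X, y, lam):
    """Penalized log-likelihood for Ridge binomial logistic regression."""
    xb = X @ beta
    return np.sum(y * xb - np.log1p(np.exp(xb))) - lam * np.sum(beta**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 7))      # 112 subjects, 7 predictors (placeholders)
y = rng.integers(0, 2, size=112)   # placeholder binary sex labels
beta = np.zeros(7)

# With beta = 0 the penalty vanishes and each subject contributes -log(2):
print(np.isclose(ridge_logistic_loglik(beta, X, y, lam=1.0),
                 -112 * np.log(2)))  # → True
```

Maximizing this function over β for a fixed λ yields the regularized coefficients; larger λ shrinks the coefficients more strongly toward zero.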
Next, the diagnostic performance of the Ridge binomial logistic regression approach was evaluated using leave-one-out cross-validation. In leave-one-out cross-validation, a single observation from the original sample is used as the validation data, and the remaining observations (111 subjects) are used as the training data. This procedure was repeated so that each observation in the sample was used once as the validation data (112 iterations).33 The diagnostic accuracy was evaluated by using the area under the receiver operating characteristic curve (AROC). All statistical analyses were performed with SPSS Statistics 19 for Windows (SPSS Inc., IBM, Somers, NY) and the statistical programming language R (ver. 3.1.3, The R Foundation for Statistical Computing, Vienna, Austria). 
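The cross-validation loop above can be sketched as follows, with scikit-learn's L2-penalized logistic regression standing in for the R implementation used in the study (an assumption; the data are random placeholders, not the study data):

```python
# Sketch of leave-one-out cross-validation for a ridge (L2) logistic model:
# each of the 112 subjects is held out once, the model is trained on the
# other 111, and the held-out prediction is stored for the final AROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 7))      # 112 subjects, 7 predictors (placeholders)
y = rng.integers(0, 2, size=112)   # placeholder binary sex labels

scores = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LogisticRegression(penalty="l2", C=1.0)  # C = 1 / lambda
    model.fit(X[train_idx], y[train_idx])
    # Predicted probability of class 1 for the single left-out subject
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

# AROC over the 112 held-out predictions
auc = roc_auc_score(y, scores)
```

Because every prediction is made on a subject the model never saw during fitting, the resulting AROC is an (approximately) unbiased estimate of out-of-sample discrimination.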
Results
The demographics of the participants are shown in Table 1. The mean age was 25.8 years, and there were 76 men and 36 women. 
Table 1.
 
Participants' Data
Mann-Whitney analysis showed that the ovality ratio, ST-RA, and ST-RV of the men were significantly larger than those of the women. The green and blue intensities of the women were significantly higher than those of the men, except for the infranasal-G and inferior-G. All the TFIs in the men were significantly higher than those in the women (Table 2). 
Table 2.
 
Sex Difference of Ocular Fundus Parameters Used for Analysis
Table 3 shows the odds ratio of each fundus parameter calculated by univariate binomial logistic regression. The ovality ratio, ST-RA, and ST-RV of the men were significantly larger than those of the women. The retinal artery trajectory, the temporal, supratemporal, and infratemporal G, and all the B intensities of the women were significantly higher than those of the men, except for the inferior-B. All the TFIs in the men were significantly higher than those in the women. 
Table 3.
 
The OR of Ocular Fundus Parameters
The optimal model for the male sex obtained using the Ridge binomial logistic regression was −1.27 − 0.018 × temporal G − 0.00057 × temporal B + 14.9 × temporal TFI + 14.8 × supratemporal TFI + 0.41 × ovality ratio − 1.13 × artery trajectory + 0.014 × ST-RA. The AROC value obtained using the Ridge binomial logistic regression with leave-one-out cross-validation was 77.9% (P < 0.001, DeLong's method, Fig. 4). 
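The reported optimal model is a linear predictor; passing it through the logistic function gives the predicted probability that a subject is male. A sketch (the input values below are illustrative, not from any study participant):

```python
# Sketch: evaluating the reported optimal model. The linear predictor z
# uses the coefficients stated in the Results; the logistic function
# converts z to a probability of the male sex.
import math

def prob_male(temporal_g, temporal_b, temporal_tfi, supratemporal_tfi,
              ovality_ratio, artery_trajectory, st_ra):
    z = (-1.27
         - 0.018 * temporal_g
         - 0.00057 * temporal_b
         + 14.9 * temporal_tfi
         + 14.8 * supratemporal_tfi
         + 0.41 * ovality_ratio
         - 1.13 * artery_trajectory
         + 0.014 * st_ra)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# Illustrative (hypothetical) measurements for one eye:
p = prob_male(temporal_g=80, temporal_b=40, temporal_tfi=0.36,
              supratemporal_tfi=0.36, ovality_ratio=1.1,
              artery_trajectory=0.5, st_ra=70)
```

Consistent with the signs of the coefficients, higher TFIs, a larger ovality ratio, and a larger ST-RA push the probability toward male, whereas higher green/blue intensities and a steeper artery trajectory push it toward female.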
Figure 4.
 
The receiver operating characteristic curve with the Ridge binomial logistic regression. The area under the receiver operating characteristic curve was 77.9% (P < 0.001, DeLong's method). AROC, area under the receiver operating characteristic curve.
Discussion
The purpose of this study was to determine the factors that can be used to distinguish the sex of an individual just from the different components of CFPs. The major problem with AI, such as deep learning, is that the analyzing process used to reach the conclusion is not known. Particularly in the medical field, even if the conclusion seems correct, it would have limited application to patients if the assumption cannot be clinically understood. Thus, understanding the thinking process is no less important than the conclusion itself in medicine. 
We did not intend to determine or trace the exact thinking process of AI. Rather, we attempted to approach the AI-proposed conclusion using only known clinical parameters. Specifically, we collected quantitative data from the CFPs, such as the color of the fundus, the retinal artery angle, and others. The results showed that we could distinguish the sex of each subject with an accuracy of 77.9%. 
The advantage of the present approach is that each factor can be discussed to explain the thinking process, which cannot be done with a black box AI. The reasoning is as follows. First, the male fundus had higher TFI values than the female fundus, indicating that the male fundus looks more red-colored. Indeed, higher values of temporal TFI and supratemporal TFI were suggestive of a male fundus, as suggested by the optimal model with Ridge binomial logistic regression. The red color of the ocular fundus is supposed to reflect the color of the large choroidal vessels.21,26 Because men have a thinner choroid, the choroidal vessels are easily observed in the CFPs, which makes the male fundus more reddish in color.21,24 A large epidemiological study showed that men have a higher TFI value than women.24 In contrast, a more blue- or green-colored ocular fundus was suggestive of female subjects in this study, as suggested by the optimal model with Ridge binomial logistic regression. It is already known that a thick retina appears bluish or greenish in ocular CFPs.34,35 Jonas et al. reported that men had a thicker central fovea than women, but this difference was not observed in other retinal regions.36 Furthermore, an eye with a shorter axial length would tend to have a thicker retina.36,37 Indeed, in this study, men had significantly longer axial lengths than women (P = 0.0069). It is therefore understandable that the color of the ocular fundus was one of the significant factors differentiating the sexes. 
Second, by the optimal model, the CFPs of female eyes tended to have a larger retinal artery trajectory and a smaller ST-RA than those of male eyes. These results indicate that the temporal retinal arteries in female eyes were located closer to the macular region than in male eyes. In earlier studies on the shape of the eye, for the same axial length, women had smaller circumferential equators than men.38,39 Thus, women had more rugby ball-shaped eyes than men, with the long axis being the anteroposterior axis. In such eyes, it is likely that the temporal retinal arteries will be located closer to the macula and the optic disc will be more tilted, showing an oval-shaped optic disc head.13,14 These facts are consistent with the present findings. 
These results may be useful for determining the cause of diseases with larger sex differences.40 For example, a macular hole occurs more often in women than men. Generally, these findings obtained from CFPs would be related to the shape of the eyeball. Considering the tangential tractional force of the vitreous on the macula, it is understandable that a rugby ball-shaped eye would be more associated with a macular hole.41 At the same time, a rugby ball-shaped eye is more frequent in women. AI may detect these “hidden relationships” from the CFPs. 
We also suggest that conventional methods of recognition and quantification will be important for interpreting the validity of future AI judgments. In this Ridge binomial logistic regression method, the analysis suggested that it was most advantageous to analyze five parameters when distinguishing the sex of an individual with good accuracy. Thus, there was not necessarily a single prominent factor, or a few, that distinguished men from women among the present factors. Rather, men and women were identified comprehensively by multiple factors in the CFPs. It is understandable that, when no strong factor stands out, it would be difficult for human eyes to collect or recognize these features. 
There are other methods in machine learning, such as random forest42 and support vector machine.43 We also evaluated the discrimination ability of these methods using the same fundus parameters through leave-one-out cross-validation; however, no significant improvement in the AROC value was observed compared with the currently used Ridge logistic regression (AROC = 79.1% and P = 0.76 with random forest, and AROC = 74.0% and P = 0.22 with support vector machine [data not shown]). These AROC values were considerably lower than that in the study by Poplin et al. (97%).5 These results suggest that other, unknown parameters may enable better discriminative ability, or that the use of deep learning is more advantageous than other machine learning methods such as Ridge binomial logistic regression, random forest, and support vector machine. In addition, the current method requires the manual extraction of multiple features by human graders, whereas deep learning is fully automated. Nonetheless, this does not discredit the merit of our study, because the purpose of the current study was to investigate whether known clinical parameters are useful in discriminating the sex of an individual from the parameters of CFPs. 
This study has limitations. One limitation was that the study population was made up of young Japanese volunteers, who are among the most myopic groups in the world.44,45 More specifically, the vast majority (100 eyes) of the eyes had a refractive error (spherical equivalent) of less than −0.5 D, and only the remaining 12 eyes had a refractive error of ≥−0.5 D. Thus, our results describe the characteristics of young myopic eyes, and they might not necessarily hold for older individuals, other ethnic groups, or nonmyopic populations. A large epidemiological study needs to be conducted to further validate the current results, especially for other populations. Another limitation is the time of the measurements. It takes an expert about 10 minutes per image to measure all the fundus parameters. A semiautomated measurement program will be needed to investigate this issue in a large epidemiological study. 
In conclusion, the results showed that it is possible to identify the sex of an individual by analyzing the CFPs of the individual. Our results indicate that the mean TFIs, ovality ratio, and the angles of the ST-RA in men were significant factors for making this identification. The green and blue intensities of the fundus around the optic disc were also important factors. Thus, a new technique of AI is being instituted in ophthalmology, and its use should make it possible to diagnose more efficiently and easily. However, the results of this study indicate that the thinking process of humans will be needed to complement the AI findings. 
Acknowledgments
The authors thank Duco Hamasaki of the Bascom Palmer Eye Institute of the University of Miami for providing critical discussions and suggestions to our study and revision of the final manuscript. 
Supported by JSPS KAKENHI grant numbers 18H02957 and 17K11426 and by a grant from Japan National Society for the Prevention of Blindness. The funding organizations had no role in the design or conduct of this research. 
Disclosure: T. Yamashita, None; R. Asaoka, None; H. Terasaki, None; H. Murata, None; M. Tanaka, None; K. Nakao, None; T. Sakamoto, None 
References
Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016; 316: 2402–2410. [CrossRef] [PubMed]
Burlina PM, Joshi N, Pacheco KD, Freund DE, Kong J, Bressler NM. Use of deep learning for detailed severity characterization and estimation of 5-year risk among patients with age-related macular degeneration. JAMA Ophthalmol. 2018; 136: 1359–1366. [CrossRef] [PubMed]
Asaoka R, Murata H, Iwase A, Araie M. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology. 2016; 123: 1974–1980. [CrossRef] [PubMed]
Asaoka R, Murata H, Hirasawa K, et al. Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am J Ophthalmol. 2019; 198: 136–145. [CrossRef] [PubMed]
Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018; 2: 158–164. [CrossRef] [PubMed]
Ferris FL 3rd, Kassoff A, Bresnick GH, Bailey I. New visual acuity charts for clinical research. Am J Ophthalmol. 1982; 94: 91–96. [CrossRef] [PubMed]
Goldmann H, Schmidt T. Über Applanationstonometrie. Ophthalmologica. 1957; 134: 221–242. [CrossRef] [PubMed]
Matsumura I, Maruyama S, Ishikawa Y, Hirano R, Kobayashi K, Kohayakawa Y. The design of an open view auto-refractometer. Advances in Diagnostic Visual Optics. Berlin: Springer; 1983: 36–42.
Heijl A, Lindgren A, Lindgren G. Test-retest variability in glaucomatous visual fields. Am J Ophthalmol. 1989; 108: 130–135. [CrossRef] [PubMed]
Hitzenberger CK. Optical measurement of the axial eye length by laser Doppler interferometry. Invest Ophthalmol Vis Sci. 1991; 32: 616–624. [PubMed]
Hitzenberger CK, Drexler W, Fercher AF. Measurement of corneal thickness by laser Doppler interferometry. Invest Ophthalmol Vis Sci. 1992; 33: 98–103. [PubMed]
Fujimoto JG, Brezinski ME, Tearney GJ, et al. Optical biopsy and imaging using optical coherence tomography. Nat Med. 1995; 1: 970–972. [CrossRef] [PubMed]
Yamashita T, Asaoka R, Tanaka M, et al. Relationship between position of peak retinal nerve fiber layer thickness and retinal arteries on sectoral retinal nerve fiber layer thickness. Invest Ophthalmol Vis Sci. 2013; 54: 5481–5488. [CrossRef] [PubMed]
Yamashita T, Asaoka R, Kii Y, Terasaki H, Murata H, Sakamoto T. Structural parameters associated with location of peaks of peripapillary retinal nerve fiber layer thickness in young healthy eyes. PLoS One. 2017; 12: e0177247. [CrossRef] [PubMed]
Fujino Y, Yamashita T, Murata H, Asaoka R. Adjusting circumpapillary retinal nerve fiber layer profile using retinal artery position improves the structure-function relationship in glaucoma. Invest Ophthalmol Vis Sci. 2016; 57: 3152–3158. [CrossRef] [PubMed]
Garway-Heath DF, Poinoosawmy D, Fitzke FW, Hitchings RA. Mapping the visual field to the optic disc in normal tension glaucoma eyes. Ophthalmology. 2000; 100: 1809–1815. [CrossRef]
Tay E, Seah SK, Chan SP, et al. Optic disk ovality as an index of tilt and its relationship to myopia and perimetry. Am J Ophthalmol. 2005; 139: 247–252. [CrossRef] [PubMed]
Yamashita T, Sakamoto T, Terasaki H, Tanaka M, Kii Y, Nakao K. Quantification of retinal nerve fiber and retinal artery trajectories using second-order polynomial equation and its association with axial length. Invest Ophthalmol Vis Sci. 2014; 55: 5176–5182. [CrossRef] [PubMed]
Yamashita T, Terasaki H, Yoshihara N, Kii Y, Uchino E, Sakamoto T. Relationship between retinal artery trajectory and axial length in Japanese school students. Jpn J Ophthalmol. 2018; 62: 315–320. [CrossRef] [PubMed]
Yamashita T, Nitta K, Sonoda S, Sugiyama K, Sakamoto T. Relationship between location of retinal nerve fiber layer defect and curvature of retinal artery trajectory in eyes with normal tension glaucoma. Invest Ophthalmol Vis Sci. 2015; 56: 6190–6195. [CrossRef] [PubMed]
Yoshihara N, Yamashita T, Ohno-Matsui K, Sakamoto T. Objective analyses of tessellated fundi and significant correlation between degree of tessellation and choroidal thickness in healthy eyes. PLoS One. 2014; 9: e103586. [CrossRef] [PubMed]
Suzuki S. Quantitative evaluation of ‘‘sunset glow’’ fundus in Vogt-Koyanagi-Harda disease. Jpn J Ophthalmol. 1999; 43: 327–333. [CrossRef] [PubMed]
Neelam K, Chew RY, Kwan MH, Yip CC, Au Eong KG. Quantitative analysis of myopic chorioretinal degeneration using a novel computer software program. Int Ophthalmol. 2012; 32: 203–209. [CrossRef] [PubMed]
Yan YN, Wang YX, Xu L, Xu J, Wei WB, Jonas JB. Fundus tessellation: prevalence and associated factors: The Beijing Eye Study 2011. Ophthalmology. 2015; 122: 1873–1880. [CrossRef] [PubMed]
Terasaki H, Yamashita T, Yoshihara N, et al. Location of tessellations in ocular fundus and their associations with optic disc tilt, optic disc area, and axial length in young healthy eyes. PLoS One. 2016; 11: e0156842. [CrossRef] [PubMed]
Yamashita T, Iwase A, Kii Y, et al. Location of ocular tessellations in Japanese: population-based Kumejima study. Invest Ophthalmol Vis Sci. 2018; 59: 4963–4967. [CrossRef] [PubMed]
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B. 1996; 58: 267–288.
Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010; 33: 1–22. [CrossRef] [PubMed]
Barbosa MS, Bubna-Litic A, Maddess T. Locally countable properties and the perceptual salience of textures. J Opt Soc Am A. 2013; 30: 1687–1697. [CrossRef]
Akutekwe A, Seker H. A hybrid dynamic Bayesian network approach for modelling temporal associations of gene expressions for hypertension diagnosis. Conf Proc IEEE Eng Med Biol Soc. 2014: 804–807.
Asaoka R. Measuring visual field progression in the central 10 degrees using additional information from central 24 degrees visual fields and "lasso regression". PLoS One. 2013; 8: e72199. [CrossRef] [PubMed]
Fujino Y, Murata H, Mayama C, Asaoka R. Applying "Lasso" regression to predict future visual field progression in glaucoma patients. Invest Ophthalmol Vis Sci. 2015; 56: 2334–2339. [CrossRef] [PubMed]
Japkowicz N. Evaluating Learning Algorithms: A Classification Perspective. Cambridge, UK: Cambridge University Press; 2011.
Airaksinen PJ, Nieminen H, Mustonen E. Retinal nerve fibre layer photography with a wide angle fundus camera. Acta Ophthalmol (Copenh). 1982; 60: 362–368. [CrossRef] [PubMed]
Terasaki H, Sonoda S, Kakiuchi N, Shiihara H, Yamashita T, Sakamoto T. Ability of MultiColor scanning laser ophthalmoscope to detect non-glaucomatous retinal nerve fiber layer defects in eyes with retinal diseases. BMC Ophthalmol. 2018; 18: 324. [CrossRef] [PubMed]
Jonas JB, Xu L, Wei WB, et al. Retinal thickness and axial length. Invest Ophthalmol Vis Sci. 2016; 57: 1791–1797. [CrossRef] [PubMed]
Yamashita T, Tanaka M, Kii Y, Nakao K, Sakamoto T. Association between retinal thickness of 64 sectors in posterior pole determined by optical coherence tomography and axial length and body height. Invest Ophthalmol Vis Sci. 2013; 54: 7478–7482. [CrossRef] [PubMed]
Atchison DA, Jones CE, Schmid KL, et al. Eye shape in emmetropia and myopia. Invest Ophthalmol Vis Sci. 2004; 45: 3380–3386. [CrossRef] [PubMed]
Pope JM, Verkicharla PK, Sepehrband F, Suheimat M, Schmid KL, Atchison DA. Three-dimensional MRI study of the relationship between eye dimensions, retinal shape and myopia. Biomed Opt Express. 2017; 8: 2386–2395. [CrossRef] [PubMed]
Spaide RF, Campeas L, Haas A, et al. Central serous chorioretinopathy in younger and older adults. Ophthalmology. 1996; 103: 2070–2079. [CrossRef] [PubMed]
Yoshihara N, Sakamoto T, Yamashita T, et al. Wider retinal artery trajectories in eyes with macular hole than in fellow eyes of patients with unilateral idiopathic macular hole. PLoS One. 2015; 10: e0122876. [CrossRef] [PubMed]
Breiman L. Random forests. Mach Learn. 2001; 45: 5–32. [CrossRef]
Cristianini N, Shawe-Taylor J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge, UK: Cambridge University Press; 2000.
Yamashita T, Iwase A, Sakai H, Terasaki H, Sakamoto T, Araie M. Differences of body height, axial length, and refractive error at different ages in Kumejima study. Graefes Arch Clin Exp Ophthalmol. 2019; 257: 371–378. [CrossRef] [PubMed]
Sawada A, Tomidokoro A, Araie M, Iwase A, Yamamoto T; Tajimi Study Group. Refractive errors in an elderly Japanese population: the Tajimi study. Ophthalmology. 2008; 115: 363–370. [CrossRef] [PubMed]
Figure 1.
 
Representative fundus photographs of a man (left) and a woman (right). The fundus of the man has a reddish color (left), whereas that of the woman is bluish to greenish (right). The supratemporal artery is located closer to the macula in the woman's eye (right) than in the man's eye (left).
Figure 2.
 
Methods of quantifying the retinal vessel angles, (A) the papillomacular position, and (B) the ovality ratio. Red double arrows indicate the supra- and infratemporal retinal artery angles; blue double arrows indicate the supra- and infratemporal retinal vein angles. The white double arrow indicates the papillomacular position. The ovality ratio was calculated by dividing the maximum disc diameter by the minimum disc diameter.
Figure 3.
 
Methods of quantifying (A) the retinal artery trajectory and (B) the fundus color. The coordinate data (yellow dots in the fundus photograph) are fit to a second-degree polynomial equation, y = a + bx + cx²/100, with the curve-fitting program of ImageJ (lower graph). The constant c was used as the degree of curvature of the retinal artery trajectory. ImageJ software was used to measure the mean intensities of the red (R), green (G), and blue (B) channels (right three graphs) within the eight peripapillary circles (yellow circles in the fundus photograph). The tessellation fundus index (TFI) was calculated at each of the eight locations by the following formula: TFI = R/(R + G + B).
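The two quantification steps in the Figure 3 caption can be sketched in Python. This is only an illustrative sketch: the coordinate values and channel intensities below are hypothetical stand-ins for the points and circles marked in ImageJ, not data from the study.

```python
import numpy as np

# Hypothetical digitized coordinates (pixels) along the supratemporal
# retinal artery; in the study these points are marked on the photograph.
x = np.array([0, 20, 40, 60, 80, 100], dtype=float)
y = np.array([50, 32, 22, 20, 26, 40], dtype=float)

# Fit y = a + b*x + (c/100)*x^2, as in the caption. np.polyfit returns
# coefficients highest power first, so the quadratic term is coeffs[0].
coeffs = np.polyfit(x, y, 2)
c = coeffs[0] * 100  # degree of curvature of the artery trajectory

def tfi(r, g, b):
    """Tessellation fundus index from the mean R, G, B intensities
    of one peripapillary circle: TFI = R / (R + G + B)."""
    return r / (r + g + b)
```

A wider (more curved) trajectory yields a larger c; a redder circle yields a TFI closer to 1.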
Figure 4.
 
The receiver operating characteristic curve for the ridge binomial logistic regression. The area under the receiver operating characteristic curve was 77.9% (P < 0.001, DeLong's method). AROC, area under the receiver operating characteristic curve.
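The classification step behind the Figure 4 curve can be sketched with scikit-learn's L2-penalized (ridge) logistic regression followed by ROC analysis. The features and sex labels below are simulated stand-ins for the measured fundus parameters, and the regularization strength C=1.0 is an arbitrary assumption, not the study's setting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200

# Simulated fundus parameters (e.g., TFI values, vessel angles).
X = rng.normal(size=(n, 5))
# Simulated sex labels that depend weakly on the first two features.
sex = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Ridge (L2-penalized) binomial logistic regression.
model = LogisticRegression(penalty="l2", C=1.0).fit(X, sex)
prob = model.predict_proba(X)[:, 1]

# Area under the receiver operating characteristic curve.
auc = roc_auc_score(sex, prob)
```

In practice the AROC should be estimated on held-out data (the study reports 77.9%); the in-sample value here is optimistic by construction.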
Table 1.
 
Participants' Data
Table 2.
 
Sex Difference of Ocular Fundus Parameters Used for Analysis
Table 3.
 
The OR of Ocular Fundus Parameters