Open Access
New Developments in Vision Research  |   July 2021
A New Smartphone-Based Optic Nerve Head Biometric for Verification and Change Detection
Author Affiliations & Notes
  • Kate Coleman
    iKey, Nova, University College Dublin, Ireland
  • Jason Coleman
    iKey, Nova, University College Dublin, Ireland
  • Hector Franco-Penya
    iKey, Nova, University College Dublin, Ireland
  • Fatima Hamroush
    Drogheda Medical Eye Clinic, Drogheda, Ireland
  • Patrick Murtagh
    Mater Vision Institute, Mater University Hospital, Dublin, Ireland
  • Patricia Fitzpatrick
    Programme Evaluation Unit (PEU), National Screening Service, Dublin, Ireland
  • Mary Aiken
    Department of Law and Criminology, University of East London, East London, UK
  • Andrew Combes
    Northgate Public Services, Cambridge, UK
  • David Keegan
    Mater Vision Institute, Mater University Hospital, Dublin, Ireland
  • Correspondence: Kate Coleman, iKey Nova, University College, Dublin, Ireland. e-mail: katecoleman@ikey.ie 
Translational Vision Science & Technology July 2021, Vol.10, 1. doi:https://doi.org/10.1167/tvst.10.8.1
Abstract

Purpose: Lens-adapted smartphones are increasingly used in place of ophthalmoscopes. Glaucoma and diabetic retinopathy, the most common causes of preventable blindness worldwide, can produce asymptomatic changes to the optic nerve head (ONH), especially in the developing world, where ophthalmologists are in dire shortage but mobile phones are ubiquitous. We developed a proof-of-concept ONH biometric application (APP) for routine use on a mobile phone. The unique blood vessel pattern is verified if it maps onto a previously enrolled image.

Methods: The iKey APP platform comprises three deep neural networks (DNNs) developed from anonymous ONH images: the graticule blood vessel (GBV) and the blood vessel specific feature (BVSF) DNNs were trained on unique blood vessel vectors. A non-feature specific (NFS) baseline ResNet50 DNN was trained for comparison.

Results: Verification reached an accuracy of 97.06% with BVSF, 87.24% with GBV, and 79.8% with NFS. 

Conclusions: A new ONH biometric was developed with a hybrid platform of ONH algorithms for use as a verification biometric on a smartphone. Failure to verify will alert the user to possible changes to the image, so that silent changes may be observed before sight threatening disease progresses. The APP retains a history of all ONH images. Future longitudinal analysis will explore the impact of ONH changes to the iKey biometric platform.

Translational Relevance: Phones with iKey will host ONH images for biometric protection of both health and financial data. The ONH may be used for automatic screening by new disease detection DNNs.

Introduction
Many global communities continue to have a dire shortage of medical personnel yet have ready access to telemedicine via the mobile phone, a supercomputer in the pocket. Rapid advances in camera technology mean that smartphone ophthalmoscopes, such as the iPhone D-Eye, have been proven to be easier to use than traditional ophthalmoscopes.1 For the first time, a color image of the retina is available without any clinical expertise. Artificial intelligence analysis of such images can automatically detect manifest glaucoma cupping,2 diabetic retinopathy,3 and age-related maculopathy4 with high sensitivity and specificity, as well as systemic changes such as aging, cardiovascular disease,5 renal disease,6 and even Alzheimer's disease.7 
We have chosen to start with the easiest part of the retina to capture through a small pupil, the optic nerve head (ONH), for use as a routine biometric. We wanted to develop software that brings the color fundus image to the ubiquitous mobile phone, with a motive to use and update the image regularly from childhood. The more regularly the image is taken, the easier it is to incidentally detect any change that might indicate disease, whether by automatic artificial intelligence or by telemedicine. For example, prompt notification of silent ONH changes could herald glaucoma, which is expected to affect 111 million people by 20408 yet is usually undiagnosed until at least 25% of sight is lost.9 
We present a new biometric: a feature-specific hybrid platform of artificial intelligence and computer vision algorithms for automatic analysis of the ONH image. It maps a new ONH image onto a previously registered one for verification. If the two do not match, for example, because the features belong to a different person or are obscured by hemorrhage or new vessel changes, verification fails. The iKey includes supervised specific-feature deep neural networks (DNNs), so that routine verification should fail when, for example, specific blood vessel features change, as occurs with some potentially blinding diseases or life-threatening systemic conditions. 
Methods
A hybrid software platform of automatic ONH image analysis and verification was developed using three processes: (1) a ResNet-based10 baseline non-feature specific (NFS) system with no feature engineering, (2) a supervised “Graticule” blood vessel feature-specific system (GBV) where the features are engineered from the intersection of vessels with a geometric fixed size graticule, and (3) a partially supervised blood-vessel specific feature (BVSF) matched computer vision processing system, which does not require a training set (see Fig. 1). 
Figure 1.
 
Three automatic stages of iKey algorithm preparation are shown. The ONH is automatically cropped from the image before automatic vessel segmentation. The green channel blood vessel features are extracted for GBV and BVSF methods.
Image Acquisition and Preparation
The data analyzed in this report comprise fully anonymized fundus images from 743 subjects (1486 eyes) acquired from patients who consented to the Irish Diabetic RetinaScreen program.11 At each annual visit (encounter), two images per eye are captured: one macula-centered and one optic disc-centered. The dataset contains up to four separate encounters per patient, taken over 4 years, resulting in 11,844 fundus images. The 45-degree images were captured with eight different nonmydriatic fundus cameras. The anonymized images were transferred for analysis, and the anonymization process was independently confirmed by author A.C. and the RetinaScreen IT service. Analysis of images was performed in full compliance with the Declaration of Helsinki, the Charter of Fundamental Rights of the European Union, the European Convention on Human Rights, and the European Union's Ethics in Social Science and Humanities guidelines. The RetinaScreen graders and authors D.K. and K.C. (ophthalmologists) excluded 796 (6.7%) images because of glaucoma, optic nerve head pathology, myopic distortion, peripapillary pigmentation, and media opacities. 
Dataset Split
The resulting 10,763 images were divided into three subsets: a training set (60%; 414 patients, 6450 images) used to train the neural networks of the ResNet-based NFS and GBV systems (not used for BVSF), a development set (20%; 138 patients, 276 eyes, 2159 images) used to determine when to stop training and to optimize the distance threshold for predicting genuine versus impostor pairs, and a testing set (20%; 138 patients, 276 eyes, 2154 images) used to evaluate the performance of all three systems and compare them against each other. 
The biometric systems work over pairs of images: a genuine pair comprises two images of the same eye taken at different times (e.g., one at the year 1 encounter and another at a later annual encounter). An impostor pair comprises images of two different eyes. Note that impostor pairs can combine one right and one left eye image. 
Training Set
The training set was used for training the baseline NFS and GBV algorithms. The BVSF model does not require training. 
The training set for GBV was augmented by generating 20 similar feature vectors out of each image. 
The NFS training set was augmented by generating three similar feature vectors out of each image. 
Testing and Development Sets
The testing and development sets were not augmented. The Development Set is used on all three models (NFS, GBV, and BVSF) to determine the optimal distance threshold for which a pair will be predicted as “genuine” or “impostor.” 
The number of genuine pairs and impostor pairs is unbalanced. Genuine pairs are generated by exhausting all possible combinations of pairs of images of the same eye. Impostor pairs are generated by exhausting the combinations of different eyes, with only one image chosen at random per eye. 
In both cases, the number of impostor pairs is the number of pairwise combinations of the 276 eyes, \(\binom{276}{2} = 37{,}950\), which is much higher than the number of genuine pairs, at most \(\binom{4}{2} = 6\) per eye, or \(6 \times 276 = 1656\) pairs. 
The set of impostor pairs is preselected and fixed for all three methods; consequently, all three systems are evaluated with the same set of test pairs, and their parameters are tuned using the same development pairs. 
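As a minimal sketch of this pair-generation protocol (not the authors' code; the `images_by_eye` mapping and the fixed random seed are assumptions for illustration):

```python
# Sketch of the genuine/impostor pair-generation protocol described above.
# `images_by_eye` maps an eye ID to the list of its encounter images.
import itertools
import random

def make_pairs(images_by_eye, seed=42):
    rng = random.Random(seed)
    # Genuine pairs: all combinations of images of the same eye,
    # up to C(4, 2) = 6 pairs per eye with four annual encounters.
    genuine = [pair
               for imgs in images_by_eye.values()
               for pair in itertools.combinations(imgs, 2)]
    # Impostor pairs: one randomly chosen image per eye, then all
    # C(276, 2) = 37,950 cross-eye combinations for 276 eyes.
    one_per_eye = [rng.choice(imgs) for imgs in images_by_eye.values()]
    impostor = list(itertools.combinations(one_per_eye, 2))
    return genuine, impostor
```

Fixing the seed mirrors the preselected impostor set that is shared across all three methods.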
Data Preprocessing
Before processing, images from different cameras must be normalized so that the extracted ONH cropped areas offer the same angle of view per pixel area, independent of the camera quality. To conduct this step, the black background of each image is replaced by a square background of the size of the circular area of the fundus part of the image. This involves cropping the background on the sides and extending the background on the top and bottom, which differs across cameras (see Fig. 2). 
Figure 2.
 
The image is normalized by creating a square frame around the image before extraction of the 600 × 600 pixel area.
After making the image square, the resolution is normalized to a fixed size before extracting the 600 × 600 pixel cropped area. 
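A sketch of this normalization, assuming the fundus circle diameter has been measured in pixels; the fixed target resolution and the ONH-centering step (in practice, the Giachetti algorithm13) are illustrative choices:

```python
# Minimal sketch of the normalization described above: pad/crop to a square
# matching the circular fundus area, resize to a fixed resolution, then
# extract the 600 x 600 ONH crop.
import cv2

def square_and_crop(img, fundus_diameter_px, target_size=1536,
                    onh_center=(768, 768)):
    h, w = img.shape[:2]
    d = fundus_diameter_px
    # Crop the black background on the sides...
    x0 = max((w - d) // 2, 0)
    cropped = img[:, x0:x0 + d]
    # ...and extend the background on top and bottom (per the text, the
    # fundus circle is typically wider than the frame is tall).
    top = max((d - h) // 2, 0)
    bottom = max(d - h - top, 0)
    squared = cv2.copyMakeBorder(cropped, top, bottom, 0, 0,
                                 cv2.BORDER_CONSTANT, value=0)
    # Fixed resolution so every camera yields the same angle of view per pixel.
    resized = cv2.resize(squared, (target_size, target_size))
    cx, cy = onh_center  # in practice located automatically
    return resized[cy - 300:cy + 300, cx - 300:cx + 300]
```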
Segmentation and Cropping Methods
The Maharjan12 algorithm was used for blood vessel segmentation; it achieves an intersection-over-union (IOU) of 84% on the DRIONS-DB dataset. 
The Giachetti algorithm13 is used to identify the center of the ONH. For the purposes of the present work, the system only has to produce a 600-pixel square cropped image that contains the ONH, rather than match an exact area. The algorithm has 95.9% accuracy (probability of finding the ONH) on a sample of 219 adult images. 
After this process, a further 2.35% (253) of images were manually removed (author K.C.) because of failed cropping and blurred or missed ONH pathology. Unfocused images (any image for which the Laplacian14 operator scored under 5) are automatically filtered out. 
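The variance-of-the-Laplacian focus measure14 can be sketched as follows; treating the threshold of 5 as a direct variance cutoff is our assumption about the score's scaling:

```python
# Sketch of the automatic focus filter: Pech-Pacheco et al.'s
# variance-of-Laplacian measure, rejecting images that score under 5.
import cv2

def is_focused(gray_img, threshold=5.0):
    # A sharp image has strong edges, hence high Laplacian variance.
    score = cv2.Laplacian(gray_img, cv2.CV_64F).var()
    return score >= threshold
```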
Feature Extraction From ONH Cropped Image
For the GBV and NFS models, a Siamese15 network is trained to classify pairs of feature vectors as a genuine pair (positive class) or an impostor pair (negative class). 
Non-feature Specific Baseline
The cropped square images of 600 × 600 pixels are input into a pretrained ResNet50 network truncated at layer 175. This layer produces feature vectors of 2048 numbers that correspond to the internal states of the network. Training images were augmented by changing the contrast of the cropped image. 
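A minimal sketch of such a truncated feature extractor, under the assumption that "layer 175" corresponds to ResNet50's global average-pooling output (which yields exactly 2048 activations in the standard Keras model):

```python
# Illustrative NFS baseline feature extractor: a pretrained ResNet50
# truncated so that each 600 x 600 crop yields a 2048-number vector of
# internal activations.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

extractor = ResNet50(weights="imagenet", include_top=False,
                     pooling="avg", input_shape=(600, 600, 3))

def nfs_features(crop_rgb):
    # Batch of one image, preprocessed the way ResNet50 expects.
    x = preprocess_input(np.expand_dims(crop_rgb.astype("float32"), 0))
    return extractor.predict(x)[0]  # shape (2048,)
```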
Graticule Blood Vessel Feature-Specific System
For GBV, the intersection between the blood vessel segmentation and a fixed-size virtual graticule, placed at the center of the predicted ONH, is used to extract feature vectors that represent the relationships between the blood vessels. This method was developed to be independent of the shape of the ONH rim, by allowing repeatable mapping of vessel cross-sections with a fixed geometric overlay. This avoids possible error due to variations in peripapillary boundaries, for example, in myopic eyes or with refractive distortion.16 
The feature vectors are extracted by dropping a geometric pattern of 10 concentric circle graticules over the center of the cropped ONH image (see Fig. 3). A histogram of the sum of the intersections between each circle and the overlying blood vessels is made. Training images were augmented by micro-relocations of the vessel/graticule intersections for improved accuracy. 
Figure 3.
 
Left: The blood vessels of the cropped optic nerve head with the graticule, a set of circles placed over the center. Right: Extracted vectors plotted (green spots) over the original cropped image.
The first three internal circles have a histogram of 12 bins, the next three 24 bins, and the last four 36 bins, producing a feature vector of 252 numbers. 
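A sketch of this graticule extraction, assuming a binary vessel mask centered on the ONH; the circle radii and angular sampling density are illustrative choices, while the bin counts (3 × 12 + 3 × 24 + 4 × 36 = 252) follow the text:

```python
# Sketch of GBV feature extraction: for each of 10 concentric circles over
# the ONH center, build an angular histogram of vessel-mask crossings.
import numpy as np

def graticule_features(vessel_mask, center=(300, 300),
                       radii=range(30, 280, 25),
                       bins_per_circle=(12, 12, 12, 24, 24, 24,
                                        36, 36, 36, 36)):
    cx, cy = center
    features = []
    for r, n_bins in zip(radii, bins_per_circle):
        hist = np.zeros(n_bins)
        # Sample each circle densely; sum vessel hits into angular bins.
        for theta in np.linspace(0, 2 * np.pi, 720, endpoint=False):
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if (0 <= x < vessel_mask.shape[1]
                    and 0 <= y < vessel_mask.shape[0]
                    and vessel_mask[y, x]):
                hist[int(theta / (2 * np.pi) * n_bins)] += 1
        features.append(hist)
    return np.concatenate(features)  # 252 numbers
```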
Blood-Vessel Specific Feature
The blood vessel feature algorithm (Fig. 4) is based on the Oriented FAST and Rotated BRIEF (ORB) algorithm,17 which identifies features on images and matches them across other images. The 600 × 600 pixel cropped image is resized to 300 × 300 pixels and enhanced with a Contrast Limited Adaptive Histogram Equalization (CLAHE)18 software filter. The ORB algorithm is then applied to extract features from both images of a pair, producing a list of matches between them. 
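The front end of this pipeline might look as follows in OpenCV; the CLAHE clip limit, tile grid, and ORB feature budget are assumptions, not the authors' settings:

```python
# Sketch of the BVSF front end: resize the crop to 300 x 300, enhance the
# green channel with CLAHE, then extract and match ORB features between
# the two images of a pair.
import cv2

def orb_matches(crop_a, crop_b):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

    def prep(img):
        green = cv2.resize(img, (300, 300))[:, :, 1]  # green channel
        return clahe.apply(green)

    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(prep(crop_a), None)
    kp_b, des_b = orb.detectAndCompute(prep(crop_b), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches  # best (lowest-distance) matches first
```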
Figure 4.
 
Example of BVSF verification of two green-channel enhanced greyscale ONH images, by aligning vectors (black circles) drawn from the same feature on each image. The percentage of parallel lines reflects the level of accuracy; movement away from parallel suggests an impostor.
Given the list of matches, the algorithm first selects the top-scoring blood vessel feature matches, then analyzes the movement of each individual match in that selection. The displacement (Euclidean distance) is calculated for each match, and matches whose displacement deviates most from the average are repeatedly removed until a target set size is reached. The final score for the pair is the average deviation of each match's displacement from the mean displacement; in this way, image pairs whose matches are aligned (parallel) score as genuine, whereas misaligned matches suggest an impostor. 
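A minimal sketch of this pruning-and-scoring step under the interpretation above; the target set size is an assumption:

```python
# Compute each match's displacement vector, iteratively discard the match
# farthest from the mean displacement until a target set size remains, and
# score the pair by the mean residual deviation (low = parallel matches =
# likely genuine).
import numpy as np

def alignment_score(kp_a, kp_b, matches, target_size=20):
    disp = np.array([np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
                     for m in matches])
    while len(disp) > target_size:
        dev = np.linalg.norm(disp - disp.mean(axis=0), axis=1)
        disp = np.delete(disp, dev.argmax(), axis=0)  # drop worst outlier
    return np.linalg.norm(disp - disp.mean(axis=0), axis=1).mean()
```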
Deep Neural Network Training Methods
Two Siamese DNNs were trained: the first using the GBV feature vectors and the second using the NFS feature vectors. Both Siamese DNNs classify each pair of ONH images as a genuine pair (positive class) or as an impostor, nonmatching pair (negative class). 
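An illustrative Siamese setup over the GBV or NFS feature vectors, with a shared embedding tower and a contrastive loss on the pair distance; the layer sizes and margin are assumptions rather than the authors' architecture:

```python
# Sketch of a Siamese network over precomputed feature vectors.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_siamese(feature_dim):  # 252 for GBV, 2048 for NFS
    tower = tf.keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(feature_dim,)),
        layers.Dense(64),
    ])
    in_a = layers.Input(shape=(feature_dim,))
    in_b = layers.Input(shape=(feature_dim,))
    # Distance between the two embeddings is the pair's score.
    dist = layers.Lambda(
        lambda t: tf.norm(t[0] - t[1], axis=1))([tower(in_a), tower(in_b)])
    return Model(inputs=[in_a, in_b], outputs=dist)

def contrastive_loss(y_true, dist, margin=1.0):
    # y_true = 1 for genuine pairs (pull together), 0 for impostors
    # (push apart, up to the margin).
    return tf.reduce_mean(
        y_true * tf.square(dist)
        + (1 - y_true) * tf.square(tf.maximum(margin - dist, 0)))
```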
Threshold Optimization
Both the Siamese DNNs and the BVSF matching system produce a distance score for each pair of images, which should be high for impostor pairs and low for genuine pairs. The optimal distance boundary to discern genuine from impostor pairs is inferred on the development set. Figure 5 shows the accuracy on genuine pairs (acc genuine pairs, or true positive rate) plotted against the accuracy of rejecting impostor pairs (acc impostor pairs, or true negative rate); each point of the graph is calculated for a different threshold value below which a pair's distance score is classified as genuine. Figure 6 shows the area under the receiver operating characteristic (ROC) curve correlating specificity and sensitivity for the three approaches (the Siamese GBV and NFS DNNs and the BVSF matching system). 
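The threshold search can be sketched as a sweep over candidate distances on the development set; using balanced accuracy as the selection criterion is our assumption:

```python
# Sweep candidate distance thresholds and keep the one maximizing combined
# accuracy over genuine and impostor pairs (the quantities plotted in
# Figs. 5 and 6). Variable names are illustrative.
import numpy as np

def optimal_threshold(genuine_dists, impostor_dists):
    candidates = np.unique(np.concatenate([genuine_dists, impostor_dists]))
    best_t, best_acc = None, -1.0
    for t in candidates:
        tpr = np.mean(genuine_dists <= t)   # acc genuine pairs
        tnr = np.mean(impostor_dists > t)   # acc impostor pairs
        acc = (tpr + tnr) / 2               # balanced accuracy
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```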
Figure 5.
 
Threshold optimization. Calculation of the optimal accuracy of the system for different threshold values, using the development set. The result is used to make predictions on the test set.
Figure 6.
 
Correlation between specificity and sensitivity for different threshold values for BVSF, GBV, and NFS, showing the area under the receiver operating characteristic (ROC) curve.
Results
Results of verification are presented in Table 1, showing the performance ranking. BVSF reaches 97.06% accuracy, GBV reaches 87.24%, and the baseline NFS reaches 79.8%. 
Table 1.
 
Verification Performance and AUROC
Table 2 shows the results on a test set containing cropping errors, before and after manual removal of the images that failed to crop. Manual removal reduced the verification failure rate from 2.69% to 1.92%, improving successful verification to 98.08%. Table 3 shows the comparison between the GBV and BVSF methods. 
Table 2.
 
Comparison of Verification Sensitivity of BVSF and GBV DNNs and Cropping Errors
Table 3.
 
Analysis of Overlap Between BVSF and GBV
Error Analysis Survey
Errors found in images that failed to verify correctly are shown in Table 4, with considerable overlap between groups. A total of 2.69% of pairs (198 pairs) failed verification by both GBV and BVSF, 58 of them because of cropping errors; most of these should have been filtered out at the preprocessing stage. Examples of errors are shown in Figure 7 (Figs. 7a to 7j). Cropping failure was associated with poor image acquisition, camera edge artifact "blur," hypopigmentation or pigmentation, or more than one of these signs. One image (Fig. 7d) failed to verify because the 600 × 600 pixel cropped area did not include all of a large optic disc. Eight images had no obvious reason for verification failure. 
Table 4.
 
Observations on 198 Images Which Failed to be Correctly Verified by Both BVSF and GBV
Figure 7.
 
(7a–7j) Examples of image pairs rejected by the GBV and BVSF DNNs. Image pair 7g was verified by NFS despite the absence of an ONH in the right image. Figure 7e shows a posterior vitreous hemorrhage in the right image.
A single failed patient eye image caused all pairs containing that image to fail (i.e., up to 16 pair combinations across up to 4 years of encounters). Several images lacked contrast between the ONH and the surrounding retina ("monochromatic"). Figure 7e shows a pair of ONH images taken from macula-centered and disc-centered viewpoints, with a clear vitreous hemorrhage suspended on the posterior vitreous detachment in the disc-centered image on the right. Figure 7i shows failed cropping due to bright peripapillary hypopigmentation in the left, macula-centered image, whereas the right, disc-centered image, taken at the same sitting with different illumination, was successful. 
Fifty-two of 140 false negatives were verified by the unsupervised non-feature specific ResNet (NFS), despite several pairs having no image of the ONH to match. Analysis of images verified by GBV but not BVSF, and by BVSF but not GBV, revealed no signs that might explain the false negative verifications. This will be explored in future research. 
Discussion
We have developed a mobile phone proof-of-concept iKey biometric APP to verify an image of the ONH with an accuracy of 97.06%. The small ONH, as opposed to the larger retina, was chosen for this biometric development for two reasons: it lies close to the macula, behind the pupil, allowing a mobile phone owner to capture the image easily even through a small pupil; and it contains the uniquely positioned retinal blood vessels and nerve fibers, previously visible only to ophthalmologists, which reflect local and systemic health. 
Verification will fail if the ONH image does not map onto a previously enrolled image. Asymptomatic changes to the ONH can occur for many reasons, including aging, disease, healing, or deterioration for subclinical reasons. The capture of the image may also be hampered by media changes, such as cataract and vitreous opacities. 
Lightweight portable retinal cameras are widely available; furthermore, several fundus camera lens adaptors have already received full US Food and Drug Administration (FDA) approval for using the mobile phone as an ophthalmoscope.19 
Once the image is enrolled in the iKey APP, a silent continuous record is accumulated and may be simultaneously processed by all platform DNNs (see Fig. 1). 
The ONH contains the root of the uniquely patterned retinal vessels, allowing for easy capture even with a small pupil. There is a growing demand for a safe biometric for vulnerable groups, such as children, for digital onboarding and cybersecurity.20 The iKey verification accuracy of 97.06% surpasses that of face recognition technology,21 currently hampered as a biometric by coronavirus disease 2019 (COVID-19) induced mask wearing.22 Unlike face or iris recognition, a live image of the ONH is internal and gaze evoked, so it cannot be unknowingly captured or altered. 
There is potential for iKey to be used from an early age not just for sight protection but also for general health maintenance. The iKey can work as part of a multimodal platform allowing analysis by other DNNs. 
Two of the most common causes of preventable blindness in the world,8 diabetic retinopathy and glaucoma, can occur with silent ONH changes. 
Diabetic retinopathy is already being screened using the Remidio-adapted Android mobile phone offline as an edge device.23 Glaucoma-screening algorithms are more challenging.24 The current lack of clinical consensus on the diagnosis of structural or functional change in glaucoma25 hampers the clinical parameters used to train DNNs with adequate sensitivity. The precise pathogenesis of glaucoma remains to be established, but ONH vasculature changes with silent glaucoma.26–30 Wang et al. have demonstrated the importance of mechanical shearing factors on the cribriform plate and its remodeling, with maximum shearing forces along the vessels.31 Retinal blood vessel shifts have been postulated to be strongly linked with biomechanical forces and differential tissue deformation, including changes to ONH shape, in rapidly progressive glaucoma.32 Optical coherence tomography (OCT) and B-scan analysis have also confirmed peripapillary age-related tissue remodeling.33 Preliminary results on age classification of a limited database of children are promising and will be the subject of a future report on the completed dataset (author communication; safe data collection is currently curtailed by the COVID-19 pandemic). 
Thakur et al. developed a DNN to predict future glaucoma from color photographs of ONHs that later progressed to glaucomatous optic neuropathy.34 Mendez-Hernandez et al. have reported equivalent sensitivity for glaucoma detection using fast, cost-effective colorimetric assessment of color fundus photographs.35 
Jammal et al.36 have suggested machine-to-machine (M2M) DNN training: DNNs trained on images of patients with known retinal nerve fiber layer (RNFL) and functional loss are used to train diagnostic DNNs on color fundus images of the same patients, in order to improve detection of fast glaucoma progressors. Bowd et al.37 have combined machine learning gradient-boosting classifiers with optical coherence tomography angiography (OCTA) and OCT macula and ONH measurements to predict glaucoma. 
Future iKey DNNs with feature relationship change mapping using OCT, spectral domain-OCT (SD-OCT), and OCTA images will be included in the multimodal APP. The ability to alert to specific vessel feature change within a short time of the event may allow more time specific correlation with structural RNFL and perimetric changes. 
ONH vessels are affected by systemic vascular conditions38,39 as well as cerebral small vessel disease,5 and the homology among retinal and cerebral vascular features and diseases such as Alzheimer's disease is well established.7 The iKey may have a much broader role in disease prevention and health monitoring, as with other wearable technology.40 Future work will include the relationship of nonvascular features around the vascular framework within the geometric graticular space. 
There are several limitations to this initial development of the iKey platform and all will be addressed with ongoing research. 
The database comprises a filtered, predominantly Caucasian Irish diabetic population with normal, nonmyopic fundi. Because iKey verifies existing features rather than making a diagnosis, mixed populations would be expected to make no difference, but this remains to be confirmed in future work over a broader demographic. The ONH area is known to range from 1.8 mm² up to 6 mm² in various populations.41 Interindividual variability of the optic disc area16 and refractive errors can cause significant distortion of the ONH image.42 We designed the GBV DNN in iKey to map vectors at the intersection of the blood vessels with a concentric graticule, in order to avoid optical errors due to aberrant refractive distortion of the rim appearance, and to enable future research on mechanical change detection in progressive myopia. 
The BVSF mapping is designed to optimize verification as a biometric by mapping features common to the unique vessel patterns of both images. The GBV DNN, however, is based on mapping an identical group of blood vessel vectors, to optimize detection of specific blood vessel changes. Of the test pairs, 11.15% failed to verify with GBV but succeeded with BVSF, and 2.56% failed with BVSF but succeeded with GBV. We plan to test iKey verification on ONH images that have actually developed changes, such as manifest glaucomatous optic neuropathy and disc hemorrhages, and we are commencing a prospective longitudinal study to that end. In the interim, we performed a limited experiment to verify an ONH before and after applying a fake hemorrhage drawn with Procreate APP software and an Apple Pencil (Fig. 8). 
Figure 8.
 
Top left: ONH before and, bottom left, after segmentation of vessels. Top right: Same ONH image with a fake disc hemorrhage before and, bottom right, after vessel segmentation. Note area of hemorrhage is devoid of segmented vectors.
As expected, GBV failed to verify when the blood vessel features were occluded, whereas BVSF accurately verified the unique features common to both images despite the hemorrhage. This suggests that the GBV algorithm is, as hoped, superior at detecting change over the blood vessel feature map, although the experiment is too small to be confirmatory. Future research will explore improving sensitivity through modifications of graticule shape and size and vector bin size. 
Another limitation of this study arose from the variety of cameras used on a retrospective dataset. The verification accuracy of 97.06% was obtained on a dataset where error analysis of the false negative results (198 pairs) demonstrated that 95 pairs would not have been included under the data preprocessing protocols and that 58 failures were due to technical cropping errors. Eight different fundus cameras were used for these data from the national diabetic screening service, mandating an image normalization process to equalize pixel content. The iKey is designed to include use at a one-to-one verification level with a self-owned fundus camera, where the image can be retaken if unsatisfactory. Correct data preprocessing and good image capture would have resulted in a verification accuracy of 98%. 
Other vessel-centered methods of cropping will be explored to improve accuracy further. 
Some images had obvious signs of media opacities, as would be expected with this diabetic dataset. Uniocular visual field loss is often asymptomatic, being obscured by the contralateral visual field. Capture failure due to media opacities may therefore have inherent screening benefits for a diabetic or aging population. 
Cropping was successful below a blur index of 9%. The successful BVSF verification of some blurred images was surprising, underscoring the sensitivity of the feature-trained DNN. Further research will weigh the benefits of lowering the blur threshold against the possible loss of feature change detection at too high a blur level. 
A strength of the iKey biometric is that it gives the APP owner a motive to have their ONH image taken, ensuring its concomitant availability not only for data protection, including health clouds, but also for use with the growing pool of multimodal diagnostic retinal algorithms revealing new biomarkers. The iKey offers a unique ability to detect change; longitudinal studies that capture the point of structural change of the various features might in future allow timely intervention at the earliest opportunity to halt functional loss. Future work will include supervised DNNs trained on nonvascular features around the vascular framework within the iKey graticules. 
Here, we present a hybrid platform of ONH algorithms based on mapping of ONH vascular features, for use on a smartphone, in order to anticipate silent structural change before functional loss. It could facilitate real-time, longitudinal self-monitoring of our optic discs as markers of ocular and systemic disease, and it could join other dashboards of diagnostic algorithms42 with great potential to narrow the gap in access to preventive health care irrespective of global location. 
Acknowledgments
Disclosure: K. Coleman, iKey (I, P); J. Coleman, iKey (I, C); H. Franco-Penya, iKey (E); F. Hamroush, None; P. Murtagh, None; P. Fitzpatrick, None; M. Aiken, None; A. Combes, None; D. Keegan, None 
References
Mamtora S, Sandinha MT, Ajith A, Song A, Steel DW. Smartphone ophthalmoscopy: a potential replacement for the direct ophthalmoscope. Eye. 2018; 32: 1766–1771. [CrossRef] [PubMed]
Yang HK, Kim YJ, Sung JY, Kim DH, Kim KG, Hwang JM. Efficacy for differentiating nonglaucomatous versus glaucomatous optic neuropathy using deep learning systems. Am J Ophthalmol. 2020; 216: 140–146. [CrossRef] [PubMed]
Xie Y, Nguyen QD, Hamzah H, et al. Artificial intelligence for teleophthalmology- based diabetic retinopathy screening in a national programme: an economic analysis modelling study. Lancet Digit Health. 2020; 2(5): e240–e249. [CrossRef] [PubMed]
Peng Y, Dharssi S, Chen Q, et al. DeepSeeNet: a deep learning model for automated classification of patient -based age-related macular degeneration severity from colour fundus photographs. Ophthalmology. 2019; 126: 565–575. [CrossRef] [PubMed]
Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomed Engin. 2018; 2: 158–164. [CrossRef]
Sabanayagam C, Xu D, Ting DSW, et al. A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations. Lancet Digital Health. 2020; 2: e295–e302. [CrossRef] [PubMed]
O'Bryhim BE, Lin JB, Van Stavern GP, Apte RS. OCTA findings in preclinical Alzheimer's disease: 3-year follow-up. Ophthalmology. 2021, https://doi.org/10.1016/j.ophtha.2021.02.016.
Flaxman SR, Bourne RA, Resnikoff S, et al. Global causes of blindness and distance vision impairment 1990-2020: a systematic review and meta-analysis. Lancet Glob Health. 2017; 5(12): e1221–e1234. [CrossRef] [PubMed]
Kerrigan-Baumrind LA, Quigley HA, Pease ME, Kerrigan DF, Mitchell RS. Number of ganglion cells in glaucoma eyes compared with threshold visual field tests in the same persons. Invest Ophthalmol Vis Sci. 2000; 41(3): 741–748. [PubMed]
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016: 770–778.
Powell S, Landi L, Blaaow K, et al. Analysis of patient population presenting with proliferative diabetic retinopathy identified and referred from the Irish National Screening Program, Diabetic RetinaScreen. Invest Ophthalmol Vis Sci. 2019; 60(9): 6553.
Maharjan A. Blood Vessel Segmentation from Retinal Images. Master's Thesis. University of Eastern Finland; 2016.
Giachetti A, Ballerini L, Trucco E. Accurate and reliable segmentation of the optic disk in digital fundus images. J Med Imaging (Bellingham). 2014; 1(2): 024001. [CrossRef] [PubMed]
Pech-Pacheco JL, Cristobal G, Chamorro-Martinez J, Fernandez-Valdivia J. Diatom autofocusing in brightfield microscopy: a comparative study. Proceedings of the 15th International Conference on Pattern Recognition (ICPR-2000). Vol. 3. IEEE; 2000.
Bromley J, Guyon I, LeCun Y, Sackinger E, Shah R. Signature verification using a "Siamese" time delay neural network. Int J Pattern Recog Artific Intell. 1993; 7(4): 669–688. [CrossRef]
Baniasadi N, Wang M, Wang H, Mahd M, Elze T. Associations between optic nerve head-related anatomical parameters and refractive error over the full range of glaucoma severity. Trans Vis Sci Tech. 2017; 6(4): 9. [CrossRef]
Rublee E, Rabaud V, Konolige K, Bradski G. ORB: an efficient alternative to SIFT or SURF. Proceedings of the IEEE International Conference on Computer Vision (ICCV). November 2011: 2564–2571, https://doi.org/10.1109/ICCV.2011.6126544.
Pisano E, Zong S, Hemminger B, et al. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J Digit Imaging. 1998; 11(4): 193–200.
Karakaya M, Hacisoftaoglu RE. Comparison of smartphone-based retinal imaging systems for diabetic retinopathy detection using deep learning. BMC Bioinformatics. 2020; 21: 259. [CrossRef] [PubMed]
Donaldson S, Davidson J, Aiken MP. Safer technology, safer users: the UK as a world-leader in Safety Tech. Online Safety Technology Sectoral Analysis. Report prepared for the Department for Digital, Culture, Media & Sport, UK; 2020.
Taigman Y, Yang M, Ranzato MA, Wolf L. DeepFace: closing the gap to human-level performance in face verification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2014: 1701–1708.
Wang Z, Wang G, Huang B, et al. Masked face recognition dataset and application. arXiv:2003.09093v2 [cs.CV]. Preprint posted online March 23, 2020.
Natarajan S, Jain A, Krishnan R, Rogye A, Sivaprasad S. Diagnostic accuracy of community-based diabetic retinopathy screening with an offline artificial intelligence system on a smartphone. JAMA Ophthalmol. 2019; 137(10): 1182–1188. [CrossRef] [PubMed]
Thompson AC, Jammal AA, Medeiros FA. A review of deep learning for screening, diagnosis and detection of glaucoma progression. Trans Vis Sci Tech. 2020; 9(2): 42. [CrossRef]
Jampel HD, Friedman D, Quigley H, et al. Agreement among glaucoma specialists in assessing progressive disc changes from photographs in open-angle glaucoma patients. Am J Ophthalmol. 2009; 147 (1): 39–44.e1. [CrossRef] [PubMed]
Varma R, Spaeth G, Hanau C, Steinmann WC, Feldman RM. Positional changes in the vasculature of the optic disc in glaucoma. Am J Ophthalmol. 1987; 104: 457–464. [CrossRef] [PubMed]
Radcliffe NM, Smith SD, Syed ZA, et al. Retinal blood vessels positional shifts and glaucoma progression. Ophthalmology. 2014; 121: 842–848. [CrossRef] [PubMed]
Alward WLM, Longmuir SQ, Miri MS, Garvin MK, Kwon YH. Movement of retinal vessels to optic nerve head with intraocular pressure elevation in a child. Ophthalmology. 2015; 122(7): 1532–1534. [CrossRef] [PubMed]
WuDunn D, Takusagawa HL, Sit AJ, et al. OCT angiography for the diagnosis of glaucoma. A Report by the American Academy of Ophthalmology. Ophthalmology. 2021, https://doi.org/10.1016/j.ophtha.2020.12.027.
Wang YX, Yang H, Luo H, et al. Peripapillary scleral bowing increases with age and is inversely associated with peripapillary choroidal thickness in healthy eyes. Am J Ophthalmol. 2020; 217: 91–103. [CrossRef] [PubMed]
Fortune B. Pulling and tugging on the retina: mechanical impact of glaucoma beyond the optic nerve head. Invest Ophthalmol Vis Sci. 2019; 60: 26–35. [CrossRef] [PubMed]
Tun TA, Wang X, Baskaran M, et al. Variation of peripapillary scleral shape with age. Invest Ophthalmol Vis Sci. 2019; 60(10): 3275–3282. [CrossRef] [PubMed]
Thakur A, Goldbaum M, Yousefi S. Predicting glaucoma before onset using deep learning. Ophthalmol Glaucoma. 2020: 1–7.
Mendez-Hernandez C, Wang S, Arribas-Pardo P, et al. Diagnostic validity of optic nerve colorimetric assessment and optical coherence tomography angiography in patients with glaucoma. Br J Ophthalmol. 2020, https://doi.org/10.1136/bjophthalmol-2020-316455.
Jammal AA, Thompson AC, Mariottoni EB, et al. Rates of glaucomatous structural and functional change from a large clinical population: The Duke Glaucoma Registry Study. Am J Ophthalmol. 2021; 222: 238–247. [CrossRef] [PubMed]
Bowd C, Belghith A, Proudfoot JA, et al. Gradient-boosting classifiers combining vessel density and tissue thickness measurements for classifying early to moderate glaucoma. Am J Ophthalmol. 2020; 217: 131–139.
Chang J, Ko A, Park SM, et al. Association of cardiovascular mortality and deep learning funduscopic atherosclerosis score derived from retinal fundus images. Am J Ophthalmol. 2020; 217: 121–130. [CrossRef]
Patton N, Aslam T, MacGillivray T, Pattie A, Deary IJ, Dhillon B. Retinal vascular image analysis as a potential screening tool for cerebrovascular disease: a rationale based on homology between cerebral and retinal microvasculatures. J Anat. 2005; 206(4): 319–348. [CrossRef] [PubMed]
Krummel TM. The rise of wearable technology in health care. JAMA Netw Open. 2019; 2(2): e187672, doi:10.1001/jamanetworkopen.2018.7672. [CrossRef]
Jonas JB, Gusek GC, Naumann GOH. Optic disc, cup and neuroretinal rim size, configuration and correlations in normal eyes. Invest Ophthalmol Vis Sci. 1988; 29: 1151–1158. [PubMed]
Flitcroft DI, He M, Jonas JB, et al. IMI - defining and classifying myopia: a proposed set of standards for clinical and epidemiological studies. Invest Ophthalmol Vis Sci. 2019; 60(3): M20–M30. [CrossRef] [PubMed]
Yousefi S, Elze T, Pasquale LR, et al. Monitoring glaucomatous functional loss using an artificial intelligence-enabled dashboard. Ophthalmology. 2020; 127: 1170–1178. [CrossRef]