Open Access
Articles  |   November 2018
Depth-Based, Motion-Stabilized Colorization of Microscope-Integrated Optical Coherence Tomography Volumes for Microscope-Independent Microsurgery
Author Affiliations & Notes
  • Isaac D. Bleicher
    Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
  • Moseph Jackson-Atogi
    Department of Biomedical Engineering, Duke University, Durham, NC, USA
  • Christian Viehland
    Department of Biomedical Engineering, Duke University, Durham, NC, USA
  • Hesham Gabr
    Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
    Department of Ophthalmology, Ain-Shams University, Cairo, Egypt
  • Joseph A. Izatt
    Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
    Department of Biomedical Engineering, Duke University, Durham, NC, USA
  • Cynthia A. Toth
    Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
    Department of Biomedical Engineering, Duke University, Durham, NC, USA
  • Correspondence: Cynthia A. Toth, Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, Box 3802, Durham, NC 27710, USA. e-mail: cynthia.toth@duke.edu 
Translational Vision Science & Technology November 2018, Vol.7, 1. doi:10.1167/tvst.7.6.1
Abstract

Purpose: We develop and assess the impact of depth-based, motion-stabilized colorization (color) of microscope-integrated optical coherence tomography (MIOCT) volumes on microsurgical performance and ability to interpret surgical volumes.

Methods: Color was applied in real-time as gradients indicating axial position and stabilized based on calculated center of mass. In a test comparing colorization versus grayscale visualizations of prerecorded intraoperative volumes from human surgery, ophthalmologists (N = 7) were asked to identify retinal membranes, the presence of an instrument, its contact with tissue, and associated deformation of the retina. In a separate controlled trial, trainees (N = 15) performed microsurgical skills without conventional optical visualization and compared colorized versus grayscale MIOCT visualization on a stereoptic screen. Skills included thickness identification, instrument placement, and object manipulation, and were assessed based on time, performance metrics, and confidence.

Results: In intraoperative volume testing, colorization improved the ability to differentiate membrane from retina (P < 0.01) and to correctly identify instrument contact with membrane (P = 0.03) and retinal deformation (P = 0.01). In model microsurgical skills testing, trainees working with colorized volumes were faster (P < 0.01) and more correct (P < 0.01) in assessments of thickness for recessed and elevated objects, were less likely to inadvertently contact a surface when approaching with an instrument (P < 0.01), and were uniformly more confident (P < 0.01 for each) in conducting each skill. 

Conclusions: Depth-based colorization enables effective identification of retinal membranes and tissue deformation. In microsurgical skill testing, it improves user efficiency and confidence in microscope-independent, OCT-guided model surgical maneuvers. 

Translational Relevance: Novel depth-based colorization and stabilization technology improves the use of intraoperative MIOCT.

Introduction
Advances in imaging and computer processing speeds have enabled development of novel intraoperative imaging techniques. Volumetric rendering of data collected peri- and intraoperatively has been adopted to guide surgical planning and maneuvers in ophthalmology, neurosurgery, orthopedic surgery, and reconstructive surgery.1–11 Volumetric display creates a view of the surgical field that can be manipulated intuitively and interacted with to provide critical feedback to trainee and experienced surgeons. 
In ophthalmology, microscope-integrated optical coherence tomography (MIOCT) is used increasingly to augment the en face-only view of the operating microscope in posterior and anterior segment settings.1–5,12,13 Live, three-dimensional (3D) rendering of OCT scans and visualization on a heads-up display have been described previously by our group.14–16 These technologies allow for intraoperative imaging and real-time guidance of surgical maneuvers and have been shown to improve visualization of epiretinal membrane elevation, localization of instruments, and monitoring of retinal contour deformation during surgery. As surgeon experience with these systems has developed, it has started to impact surgical decision-making.16,17 
MIOCT faces a fundamental data visualization problem as the scanning technology advances: how can the surgeon view and analyze large quantities of continuously changing OCT data while actively operating and remaining safe in surgery? Current volumetric renderings do not solve this problem. As a 3D object is compressed into a two-dimensional (2D) display, foreground, midground, and background structures can be difficult to resolve, and instruments may be difficult to differentiate from surrounding tissue. Artificial shadowing, stereoptic displays, and rotation of the rendered volume can be used to highlight boundaries between surfaces, but they remain insufficient solutions, adding complexity to the MIOCT system and its operation.15 These issues have limited MIOCT volumes to ancillary intraoperative use and reinforced the need for the traditional optical view through the microscope. 
In other settings, colorization of medical imaging has been used to provide contextual information for complex 3D structures and address this data visualization question. Topographic maps have been used to visualize table-top OCT and magnetic resonance imaging (MRI) scans of the retina in the evaluation of myopia, retinal detachment, and age-related macular degeneration (AMD).18–21 In other fields, position-based colorization of 3D ultrasound scans of the mitral valve assists cardiac surgeons intraoperatively, and colorized mapping of brain shift guides neurosurgical tumor resection.22–24 Additionally, nonmedical fields, such as earth and atmospheric science, widely use colorization for topography of 3D mappings.25 The addition of data overlain on the 3D volume improves interpretation of complex imaging. 
However, to our knowledge colorization of volumetric imaging has not been applied to data acquired in real time to guide surgical maneuvers due to computational challenges. To be useful, colorization must carry meaning not otherwise inherent in the volume. This requires additional computation time, which can add to lag between image capture and display to the surgeon. Second, real-time imaging of surgical fields is subject to motion induced by the patient, surgeon, and/or instrumentation. Therefore, registration and stabilization of the imaged object is necessary to provide a consistent reference for colorization. 
We developed computational techniques to apply colorization to MIOCT volumes based on depth, and stabilized the color gradient relative to the scanned object's axial motion, to test the following hypotheses: (1) colorization would improve perception of thickness and relative positioning in 3D volumes compared to grayscale volumes, (2) use of colorization intraoperatively would allow for faster and more accurate microsurgical maneuvers, (3) stabilization against a relative reference point would enable use of colorization in real-life surgical scenarios in which axial motion is significant, and (4) with the improved visualization afforded by colorization, it may be feasible to perform microsurgical maneuvers without the microscope optical view. 
Methods
Colorization and Stabilization Algorithm
Real-time, microscope integrated OCT and 3D rendering has been described previously by this group.15,16 In brief, the MIOCT uses a 100 kHz, 1060 nm swept-source OCT engine and a custom scanner that introduces the OCT light into the infinity space of the surgical microscope, such that the OCT and microscope views are parfocal and coaxial. The systems share the objective of the operating microscope and can scan at up to 10 volumes per second. This technology produces real-time, live, volumetric 4D (i.e., 3D over time) imaging to guide anterior and posterior segment surgeries. Volumes are rendered with a ray casting methodology and displayed to the surgeon via an external monitor, a heads-up display in the microscope oculars, or an external 3D display. 
Color mapping was integrated into the established MIOCT rendering process. A unique color was assigned to several positions along the B-scan axial dimension and a color gradient applied as a linear interpolation in the RGB color space between each position. Axial positions above the most superficial position and below the deepest position were assigned the color of those respective positions. When rendering the volume, voxels at particular depths took on a color as specified by the gradient. 
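The gradient assignment described above amounts to piecewise-linear interpolation between RGB control points, with clamping beyond the shallowest and deepest positions. A minimal NumPy sketch follows (the production implementation runs per voxel on the GPU; the stop positions and colors below are illustrative, matching the red/yellow/blue scheme used later in skills testing):

```python
import numpy as np

def depth_colormap(depths, stops, colors):
    """Map axial depths to RGB by linear interpolation between color stops.

    depths: array of voxel axial positions (arbitrary units).
    stops:  increasing axial positions of the control colors.
    colors: (len(stops), 3) RGB values assigned at each stop.
    Positions above the most superficial stop or below the deepest stop
    clamp to the end colors, as described in the text.
    """
    depths = np.clip(depths, stops[0], stops[-1])
    r = np.interp(depths, stops, colors[:, 0])
    g = np.interp(depths, stops, colors[:, 1])
    b = np.interp(depths, stops, colors[:, 2])
    return np.stack([r, g, b], axis=-1)

# red at the top, yellow in the middle, blue at the bottom
stops = np.array([0.0, 0.5, 1.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
rgb = depth_colormap(np.array([-0.2, 0.25, 1.3]), stops, colors)
```

Each voxel's color is then looked up from its axial position during ray casting, so the tint encodes depth directly.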
Color gradients were assigned relative to the volume's center of mass to stabilize colors against movement of the scanned object. Before each volume was rendered, a histogram of pixel intensity values was constructed for the fully processed OCT data, and a threshold value at the 99th percentile of pixel intensity was identified. All voxels with reflectivity below this threshold were eliminated, isolating the brightest surface, which can be tracked between volumes. When imaging the retina, this surface typically is the retinal pigment epithelium. The axial center of mass of these data was calculated using the formula:  
\begin{equation}{\it{Center\ of\ Mass}}_k = \frac{\sum_i \sum_j \sum_k k \cdot A(i,j,k)}{\sum_i \sum_j \sum_k A(i,j,k)}.\end{equation}
Here i, j, and k represent the fast-scanning, slow-scanning, and axial dimensions of the MIOCT volume, respectively, and A(i, j, k) represents the voxel intensity at a specific location in the scan. The color gradient then was specified based on positions relative to this center of mass. As a result, color changes due to movements of the scanned surface (i.e., from patient motion, surgical manipulation, and so forth) were mitigated (Fig. 1, Supplementary Video S1). 
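The equation above, together with the 99th-percentile thresholding step, can be sketched in a few lines of Python/NumPy (a simplified illustration of the method, not the authors' GPU code; the synthetic volume is hypothetical):

```python
import numpy as np

def axial_center_of_mass(volume):
    """Axial (k-axis) center of mass after thresholding at the 99th
    percentile of voxel intensity, as in the equation above.

    volume: ndarray shaped (i, j, k) = (fast-scan, slow-scan, axial).
    """
    thresh = np.percentile(volume, 99)
    bright = np.where(volume >= thresh, volume, 0.0)  # keep only the brightest surface
    k = np.arange(volume.shape[2])                    # axial indices
    return (bright * k).sum() / bright.sum()

# synthetic check: one bright plane at axial index 40 dominates a dim volume
vol = np.random.default_rng(0).uniform(0.0, 0.1, size=(32, 32, 100))
vol[:, :, 40] = 1.0
com = axial_center_of_mass(vol)
```

Because the gradient is anchored to this value, axial motion of the whole scene shifts the colors with the tissue rather than across it.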
Figure 1
 
MIOCT volumes and B-scans demonstrate the colorization and stabilization processes. A non-colorized MIOCT volume (A) is filtered with a threshold at the 99th percentile of reflectivity values (B), center of mass is calculated (C, red line) and color is applied (D). Top and bottom sequences demonstrate stability with axial motion.
Given the need to process a large quantity of data in real-time, the algorithm was written using a parallel computational approach (NVIDIA CUDA Toolkit; NVIDIA Corp., Santa Clara, CA) on a graphics processing unit (GPU). Performance analysis was conducted using a GPU profiler (NVIDIA Visual Profiler; NVIDIA) to measure the time to calculate center of mass and apply colorization. 
Colorization and stabilization were validated using two models: layered tape (3M, Maplewood, MN) to emulate retinal layers and a porcine eye. This study adhered to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research in the use of porcine eyes. Each model was translated across the scanning range of the MIOCT system in discrete 1 mm increments, and the calculated center of mass was recorded at each position. Validation was achieved by comparing changes in the calculated center of mass against the known movement of the stage. Expert review of the MIOCT volumes was performed to assess subjective stability of the colorization. 
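The validation logic (comparing calculated center-of-mass shifts against the known 1 mm stage moves, plus a linear fit) can be sketched as follows; the center-of-mass readings here are hypothetical:

```python
import numpy as np

def validate_tracking(stage_mm, com_mm):
    """Compare calculated center-of-mass positions against known stage moves.

    Returns the mean error of the per-step shifts (ideally 0 mm) and the
    R^2 of a linear fit of center of mass to stage position.
    """
    stage_mm = np.asarray(stage_mm)
    com_mm = np.asarray(com_mm)
    step_err = np.diff(com_mm) - np.diff(stage_mm)
    slope, intercept = np.polyfit(stage_mm, com_mm, 1)
    fit = slope * stage_mm + intercept
    ss_res = np.sum((com_mm - fit) ** 2)
    ss_tot = np.sum((com_mm - np.mean(com_mm)) ** 2)
    return step_err.mean(), 1.0 - ss_res / ss_tot

stage = np.arange(0.0, 5.0, 1.0)                        # 1 mm increments
com = stage + np.array([0.0, 0.01, -0.02, 0.01, 0.0])   # hypothetical readings
mean_err, r2 = validate_tracking(stage, com)
```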
Intraoperative Volume Testing
Colorization was applied to prerecorded 4D MIOCT data from previous human vitreoretinal surgeries. Experienced surgeons (N = 7) were shown a combination of five grayscale still volumes and videos of surgical membrane peeling (Supplementary Document S1) and asked to determine for each whether retinal membranes were differentiable from retina, an instrument was present in the volume, and the instrument was in contact with tissue and/or deforming the retina if present. Surgeons then were shown each volume in color and asked to reassess using the same questions. Their subjective preference for color or grayscale also was recorded for each volume. Survey responses were compared with independent review of B-scans from the volumetric data as the gold standard. Statistical testing was performed using McNemar's test for paired, nominal data with a significance level of 0.05. 
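McNemar's test evaluates paired yes/no responses using only the discordant pairs. A minimal sketch of the exact (binomial) form with hypothetical counts follows; the article does not specify whether the exact or the chi-square variant was used:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) two-sided McNemar p-value for paired nominal data.

    b: pairs correct with color but not grayscale;
    c: pairs correct with grayscale but not color.
    Under H0 the discordant pairs split 50/50, so the p-value is a
    doubled binomial tail probability.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical example: 9 assessments improved with color, 1 worsened
p = mcnemar_exact(9, 1)
```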
Surgical Skills Testing
MIOCT Setup and Surgical Tasks
The MIOCT scanner described above was used to display volumes in stereo on an external, 65-inch, 3D OLED television (LG, Seoul, South Korea), viewed with polarized glasses. B-scans of the volumes were available to the MIOCT operator and retrospectively to the grader, but not to the participants. The optical view through the microscope was obscured to ensure that the participants were using the OCT only. Two sets of scanning parameters were used: a 10 × 10 × 3.7 mm field of view with 350 × 96 × 800 voxels for the thickness identification task, and a 5 × 5 × 3.7 mm field of view with 250 × 85 × 800 voxels for the other tasks to provide smoother surgical guidance. Colorization was applied with red superiorly, yellow medially, and blue inferiorly, and the color boundaries were set across 20% of the volume at positions described for each skill below. 
Thickness Identification
Scenes each containing five objects of varying height either elevated from a flat surface or recessed into a flat surface were constructed from clay (Polyform Products Company, Elk Grove Village, IL). Color gradients were positioned across the range of object heights and/or depths. Subjects were shown each scene sequentially as an MIOCT volume and were asked to rank each of the five objects by thickness on a provided scoring sheet. They were not permitted to directly see or manipulate the object during testing. The time to complete each assessment and number of incorrect assessments were recorded for each object. This test was repeated five times with elevated objects and five times with recessed objects. 
Surface Approach
A globe eye model was composed of a posterior, flat, clay surface 2 cm in diameter with an elevated rim, covered by a soft plastic hemisphere (Phillips Studio, Bristol, United Kingdom) with its apex cut away to allow MIOCT visualization of the clay, and a 25 g cannula (Alcon, Ft. Worth, TX) placed 3 mm posterior to the cut-away margin of the hemisphere. 
Subjects were provided with a flexible loop (Finesse Loop; Alcon) and instructed to bring the tip of the loop as close to the surface as possible without touching it. Each trial was stopped when the subject indicated satisfaction with the position of the instrument. Color gradients were positioned such that the surface was blue, with yellow indicating the space immediately above the surface. The time to complete this task was recorded. MIOCT data were recorded, and retrospective analysis identified the closest position of the instrument to the surface in the volumes two to three volumes before the final volume. These volumes were used to minimize the impact of inadvertent motion of the instrument as the subject indicated completion. The distance between the instrument and surface was measured and recorded. This trial was repeated four times. 
Object Grasp
The model eye described above was used in this task. A 4 mm diameter clay ring was placed on the clay surface and a 2 mm square of transparency film (Grafix Plastics, Maple Heights, OH) was folded to form a V-shape and placed within the clay ring. Subjects were instructed to use a 25 g forceps (Alcon) to remove the object without contacting surrounding structures. Color gradients were positioned such that the surface was blue and the ring and object were yellow and red, respectively. The time to complete this task was recorded. MIOCT data were recorded and retrospective analysis recorded the number of grasps (closure of the forceps) and inadvertent contacts with the underlying surface. This trial was repeated three times. 
Human Participants and Randomization
This study was approved by the institutional review board of the Duke University Health System. The study adhered to the tenets of the Declaration of Helsinki, and all subjects participated in surgical tests on model eyes with full written consent explaining that participation was voluntary, private, and confidential, and that participation and performance would not be factored into future grades and/or evaluations. The identity of study participants was masked from senior authors to avoid potential conflict. 
A sample of 15 students and ophthalmic surgeons in training were recruited from the Duke University School of Medicine and the Duke University ophthalmology department. Subjects were block randomized by training level (student, resident, fellow) into colorized (C+) and gray-scale (C−) groups. A total of five medical students, three residents, and seven fellows were enrolled in the study. 
Subjects received a brief orientation to the experiment, MIOCT system, and colorization technique. Testing consisted of two trials: (1) subjects completed the described skills with visualization according to their assigned group and (2) subjects crossed over to the opposite group and repeated the skills with the opposite visualization (C+ in grayscale and C− with color). This crossover was intended to control for a learning effect associated with practice of the tested skills. All subjects performed one attempt at each task in each visualization as training and data from this introduction were excluded from analysis. After each task, subjects were asked to report their subjective confidence in completing the task on a numeric scale from 1 (least) to 5 (most). 
Survey
Subjects completed a brief qualitative survey (Supplementary Document S2) upon completion of both trials asking the extent to which colorized MIOCT improved subject's performance of the tasks, subject's preference for grayscale or colorized visualization, and their interest in using MIOCT for surgical guidance. All survey questions were reported on a five-level Likert scale with a score of 5 indicating highest alignment with colorized visualization. Subjects also were provided the opportunity to give qualitative feedback. 
Data Analysis
All statistical testing was performed as paired comparisons between colorized and grayscale trials of all participants using R (R Foundation for Statistical Computing, Vienna, Austria). Times, measured data, and confidence assessments were compared using the Wilcoxon signed-rank test for ratio, interval, and ordinal data. All tests were two-tailed with a significance level of 0.05. Comparisons also were made between the first and second trials to determine whether the observed difference was due to a learning effect from repeated performance of the skill. 
Data were combined between randomized groups based on the visualization used, regardless of whether the subject used the particular visualization in the first or second trial. Where subjects completed multiple attempts during a single task, measures were summed across attempts. Results from the qualitative survey were reported with descriptive statistics. 
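The analysis above was performed in R; an equivalent two-sided paired Wilcoxon signed-rank comparison in Python (SciPy) might look like the following, with hypothetical per-subject task times:

```python
from scipy.stats import wilcoxon

# hypothetical paired task times (seconds) per subject: colorized vs. grayscale
color     = [34.1, 41.0, 28.5, 39.2, 30.7, 45.3, 33.8]
grayscale = [52.6, 48.9, 47.2, 55.0, 41.8, 60.1, 49.5]

# two-sided paired test on the within-subject differences,
# mirroring the paper's analysis plan
stat, p = wilcoxon(color, grayscale, alternative="two-sided")
```

With small samples and no tied or zero differences, SciPy uses the exact null distribution, which matches the small group sizes (N = 15 split into two arms) in this study.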
Results
Colorization Feasibility and Validation
Performance analysis was conducted on rendered volumes from real-time imaging of model eyes (N = 50). Colorization and stabilization required 2.69 ms (standard deviation [SD] 0.11 ms) of processing time, adding 2% to the volume processing time (130.48 ms, SD 2.93 ms). 
Center of mass normalized to the known position change yielded an average of 0.0 mm (SD 0.025 mm) for the tape model and 0.0 mm (SD 0.016 mm) for the porcine retina. The calculated center of mass fits a known change in axial position linearly with R2 = 0.999 for tape and porcine retina models (Supplementary Figure S1). Subjective, expert review of an MIOCT volume series with an object moving axially (Supplementary Video S1) confirmed that the colorization was stable relative to this movement. 
Intraoperative Volume Testing
Experienced surgeons viewing a standardized assessment of four still volumes and one video from human membrane peeling surgeries in color and grayscale (Fig. 2 and Supplementary Video S2) were better able to identify the presence of membranes, instrument contact with tissue, and instrument deformation of the retina when reviewing volumes in color compared to grayscale (Table). Surgeons preferred colorized volumes and videos over grayscale volumes and videos in 81% of cases and were indifferent in 17%. 
Figure 2
 
Grayscale and colorized MIOCT volumes from membrane peeling cases shown during intraoperative volume testing. Instrument traction on membrane (A), membrane pulled by forceps (B), retina deformation by flexible loop (C) and flexible loop above retina surface (D) are more clearly visualized in color. Color is applied with red superiorly, yellow medially and green (blue in [C]) inferiorly. Color boundaries were individually chosen to highlight surface features.
Table
 
Results of Intraoperative Volume Testing
Surgical Skills Testing
Data are reported as a median difference between paired colorized and noncolorized trials (color data – grayscale data). 
Thickness Identification
When ranking recessed objects by depth (Figs. 3A, 3B, 4) and elevated objects, subjects required less time (−18.9 seconds, SD 28.2 seconds, P < 0.01 and −13.3 seconds, SD 19.6 seconds, P < 0.01, respectively) and made more correct assessments (+24%, SD 17%, P < 0.01 and +16%, SD 18%, P < 0.01) in color compared to grayscale. Subjects reported increased confidence when working in color compared to grayscale (+2 on a scale of 1–5, SD 0.9, P < 0.01). 
Figure 3
 
Grayscale (top) and colorized (bottom) MIOCT volume examples from each microsurgical skill. Example surfaces with recessed (A) and elevated (B) objects respectively, the surface approach skill with the flexible loop approaching a flat surface (C), and the object grasp skill with the forceps attempting to grasp a membrane-like object (D). Color is applied with red superiorly, yellow medially and blue inferiorly. Color boundaries were applied just above (just below for [A]) the surface. While this figure uses 2D representations of the 3D volume, subjects viewed stereoptic images while completing each task.
Figure 4
 
Box and whisker plots summarizing measured outcomes of the thickness assessment (A), surface approach (B) and object grasp (C) skills. Data are presented as paired differences between colorized and grayscale trials. Data for each outcome was normalized from −1 to 1 based on the largest absolute value for that outcome. As such, time and confidence measures are plotted on consistent axes between skills while other measures are on unique axes. Axes are oriented such that values to the right of center support colorization, while values to the left of center support grayscale for all outcomes. The grey line delineates the point of no difference between colorized and grayscale visualization. Dashed lines delineate measures for each of the three tested skills. P values for each paired comparison are listed on the left and marked (*) when meeting the specified significance value of 0.05.
Comparisons between first and second trials showed no significant improvement in speed (Recessed: −15.4 seconds, SD 35.3 seconds, P = 0.08; Elevated: −11.2 seconds, SD 24.8 seconds, P = 0.28), correct assessments (Recessed: +20%, SD 36%, P = 0.26; Elevated: +4%, SD 524%, P = 0.62), or confidence (+1 on a scale of 1–5, SD 2.2, P = 0.95) with repeated trials. 
Surface Approach
Subjects asked to bring an instrument as close as possible to a surface without contacting it (Figs. 3C, 4B; Supplementary Video S3) required similar times (−0.1 seconds, SD 9.2 seconds, P = 0.98) and had no significant difference in minimum distance (−9 pixels, SD 19.6 pixels, P = 0.27) in color and grayscale. However, subjects inadvertently rested the instrument on the surface in fewer attempts (−1 touch, SD 1.0 touches, P < 0.01, with surface touches in 18 of 60 grayscale tests and in 4 of 60 color tests) and reported increased confidence (+1 on a scale of 1–5, SD 0.6, P < 0.01) in color compared to grayscale. 
Comparisons between first and second trials showed no significant improvement in speed (−3.9 seconds, SD 8.3 seconds, P = 0.09), distance to the surface (−9.8 pixels, SD 19.5 pixels, P = 0.30), contacts with the surface (+0 touches, SD 1.4 touches, P = 0.59), or confidence (+0 on a scale of 1–5, SD 1.1, P = 0.84) with repeated trials. 
Object Grasp
Subjects asked to grasp a membrane-like object on a surface without contacting surrounding surfaces (Figs. 3D, 4C; Supplementary Videos S4, S5) had no significant difference in time (−11.1 seconds, SD 72.1 seconds, P = 0.19), number of attempts to grasp (−1 attempt, SD 3.9 attempts, P = 0.16), or inadvertent contact with nontarget surfaces (−1.5 touches, SD 4.23 touches, P = 0.11), although trends suggest improvement with color. However, subjects reported increased confidence when working in color compared to grayscale (+1 on a scale of 1–5, SD 0.7, P < 0.01). One subject was excluded from analysis due to failure to complete the skill in both colorized and grayscale attempts. 
Comparisons between first and second trials showed no significant improvements in speed (+2.9 seconds, SD 79.5 seconds, P = 1.00), number of grasps (−0.5 attempts, SD 4.3, P = 0.49), contacts with the surface (+0 touches, SD 4.7 touches, P = 0.95), or confidence (+0 on a scale of 1–5, SD 1.4, P = 0.90) with repeated trials. 
Survey
Subjects responded to “Experience of using colorized MIOCT had the following effect on my performance of posterior-segment maneuvers” with an average of 4.7 (“very helpful”) and SD 0.5. They responded to “I preferred using the colorized MIOCT volumes over the noncolorized MIOCT volumes” with an average of 4.7 (“strongly agree”) and SD 0.6. They responded to “Following completion of this study, I am more or less likely to use colorized MIOCT in my future practice” with an average of 4.7 (“much more likely”) and SD 0.6. Qualitative feedback reinforced the preference for colorized volumes and indicated improved depth perception and instrument tracking with color. 
Discussion
In this study, we demonstrated a novel application of depth-based, axial motion-stabilized color gradients to MIOCT volumes in real time that improves the visualization and use of intraoperative OCT. The 2% increase in processing time was imperceptible relative to the total processing time, allowing this technique to be used during microsurgical maneuvers. Validation also demonstrated that the center-of-mass calculation was a reproducible and accurate approach to stabilizing the color gradient with respect to axial motion. 
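The colorization and stabilization steps described above (and illustrated in Fig. 1) can be summarized in a brief sketch: threshold the volume at the 99th percentile of reflectivity, compute the axial center of mass of the bright voxels, and map depth relative to that center onto a red/yellow/blue gradient. This is an illustrative reconstruction only, not the authors' GPU implementation; the function name, palette, and interpolation scheme are our assumptions.

```python
import numpy as np

def colorize_volume(volume, palette=((1.0, 0.0, 0.0),   # red (superior)
                                     (1.0, 1.0, 0.0),   # yellow (middle)
                                     (0.0, 0.0, 1.0))): # blue (inferior)
    """Illustrative sketch (hypothetical names): depth-based colorization
    stabilized by an axial center of mass. `volume` is a 3D reflectivity
    array whose axis 0 is the axial (depth) dimension."""
    # 1. Threshold at the 99th percentile so only bright voxels
    #    (tissue surfaces, instruments) drive the center of mass.
    mask = volume >= np.percentile(volume, 99)

    # 2. Axial center of mass of the bright voxels. Recomputing this for
    #    each volume lets the color gradient track bulk axial motion.
    depths = np.arange(volume.shape[0], dtype=float)[:, None, None]
    center = (depths * mask).sum() / max(mask.sum(), 1)

    # 3. Map depth relative to the center onto the palette with
    #    piecewise-linear interpolation between the color stops.
    rel = np.clip((depths - center) / volume.shape[0] + 0.5, 0.0, 1.0)
    stops = np.asarray(palette, dtype=float)
    pos = rel * (len(stops) - 1)
    idx = np.minimum(pos.astype(int), len(stops) - 2)
    frac = (pos - idx)[..., None]
    rgb = stops[idx] * (1 - frac) + stops[idx + 1] * frac

    # 4. Modulate color by reflectivity so tissue structure stays visible.
    return rgb * (volume / volume.max())[..., None]
```

Because the center of mass is recomputed per volume, a uniform axial shift of the scene shifts the gradient with it; as noted below, motion faster than the volume scan rate can still produce transient artifacts in this anchor.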
On intraoperative volume testing, colorization improved experienced surgeons' interpretation of MIOCT volumes and was preferred relative to grayscale. Surgeons were better able to visualize membranes elevated above the retina and identify interfaces between the instrument and tissue, including retina and membrane, when reviewing colorized volumes. These results suggest that colorization could improve membrane edge identification, visualization of membrane during peeling, monitoring for retinal deformation with instrumentation, and evaluation of membrane remnants after peeling. 
In the randomized, prospective study with physicians in training, we demonstrated that colorized MIOCT improved performance of microsurgical skills relative to grayscale visualization: it improved speed (24% and 40% median improvement for elevated and recessed objects, respectively) and accuracy (25% and 100% median improvement, respectively) when identifying object thickness, and reduced inadvertent contacts in the surface approach skill (contacts occurred in 40% of grayscale trials but only 9% of colorized trials). Subjects were uniformly more confident in their microsurgical skills and subjectively reported improved depth perception and feature identification when working with color volumes. While the difference was not statistically significant for the object grasp skill, trends suggest that subjects may require fewer attempts and less frequently contact a surface when conducting this skill with colorized volumes. In fact, subgroup analysis (Supplementary Table S1) showed that the most experienced subjects received greater benefit from colorization than the least experienced subjects for this skill. Paired comparison of first and second trials independent of visualization, and nonpaired comparisons between first and second trials using a specific visualization, did not demonstrate any significant effects, indicating that repeated practice of the skills was not responsible for the improved outcomes. 
Despite this benefit, stabilization of the color gradient is incomplete in the setting of rapid axial motion. Rapid movements (i.e., those faster than the volume scan rate) can cause artifacts in the center of mass, noticeably shifting color boundaries from the previous volume (Supplementary Videos S1, S2). Indeed, an observer noted that subjects who moderated their rate of movement to match the volume refresh rate performed better than those with more rapid movements. While colorization is specifically impacted, this issue stems from the limitations of MIOCT scanning and grayscale processing. Therefore, higher line-rate laser sources, faster scanning speeds, and improved computational processing, all independent of colorization, will increase the refresh rate of colorization.26 New approaches to mapping and applying the color gradients also will help improve colorization efficacy in specific surgical scenarios. 
While we demonstrated that colorization improves surgical technique in an in vitro setting, there are several limitations to this study. First, study participants were drawn from individuals at a single institution and had a wide range of microsurgical experience, from medical students to ophthalmologists. To mitigate the differences in experience, data were analyzed as paired differences between trials. Next, the only demonstration on human surgery used prerecorded intraoperative volumes, while the prospective component used abstractions of microsurgical techniques. Similarly, the microscope-independent nature of our trials was feasible in the context of ex vivo surgical skills, but would require significant validation before extension to human surgery. While our results were supportive of the use of colorization, further validation of this technology would require testing its impact within the complex surgical settings encountered in human ocular surgery. 
Microscope-integrated OCT has become increasingly useful with continued technologic development of OCT systems, microscope-scanner integration, and real-time data processing. Multiple studies have demonstrated the benefit of MIOCT across a wide variety of ophthalmologic surgeries.1,3–5,12,17,27–29 Nevertheless, MIOCT, and more specifically volumetric MIOCT, use has been largely limited to a small number of surgeons actively involved in developing the technology. Our work presents a novel approach to address the data visualization problem inherent in MIOCT applications. Visualization improvements, such as the colorization described here, simplify the interpretation of MIOCT data and reduce the learning curve for its use. This study demonstrates that depth-based colorization of MIOCT volumes improves surgeon interpretation of intraoperative imaging and improves user performance of and confidence with some of the most basic skills required of ophthalmologic microsurgery. Additionally, the MIOCT guidance of model microsurgical tasks in this randomized trial suggests that volume colorization may represent progress towards OCT-guided microscope-independent microsurgery. Ultimately, we hope that these improvements will translate into measurable benefits in the outcomes of procedures performed with MIOCT. 
Acknowledgments
The authors thank Alexandria Dandridge, Tammy Hsu, and James Tian for assisting with the development of the experimental procedures. 
This research was supported by a National Institutes of Health/National Eye Institute Biomedical Research Partnership Grant #R01-EY023039 and a National Eye Institute (NEI) Core Grant #P30 EY005722. 
Disclosure: I.D. Bleicher, Providing Surface Contrast in Rendering of Three-Dimensional Images for Micro-Surgical Applications, #62/592,794 (P); M. Jackson-Atogi, Providing Surface Contrast in Rendering of Three-Dimensional Images for Micro-Surgical Applications, #62/592,794 (P); C. Viehland, Providing Surface Contrast in Rendering of Three-Dimensional Images for Micro-Surgical Applications, #62/592,794 (P); H. Gabr, None; J.A. Izatt, Providing Surface Contrast in Rendering of Three-Dimensional Images for Micro-Surgical Applications, #62/592,794 (P), Leica Microsystems (P, R), Carl Zeiss Meditec (R); C.A. Toth, Providing Surface Contrast in Rendering of Three-Dimensional Images for Micro-Surgical Applications, #62/592,794 (P), Alcon (R) 
References
Knecht PB, Kaufmann C, Menke MN, Watson SL, Bosch MM. Use of intraoperative fourier-domain anterior segment optical coherence tomography during descemet stripping endothelial keratoplasty. Am J Ophthalmol. 2010; 150: 360–365.
Ehlers JP, Tam T, Kaiser PK, Martin DF, Smith GM, Srivastava SK. Utility of intraoperative optical coherence tomography during vitrectomy surgery for vitreomacular traction syndrome. Retina. 2014; 34: 1341–1346.
Ehlers JP, Ohr MP, Kaiser PK, Srivastava SK. Novel microarchitectural dynamics in rhegmatogenous detachments identified with intraoperative optical coherence tomography. Retina. 2013; 33: 1428–1434.
Ehlers JP, Kernstine K, Farsiu S, Sarin N, Maldonado R, Toth CA. Analysis of pars plana vitrectomy for optic pit–related maculopathy with intraoperative optical coherence tomography. Arch Ophthalmol. 2011; 129: 1483.
Carrasco-Zevallos O, Keller B, Viehland C, et al. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers. In: Manns F, Söderberg PG, Ho A, eds. Proceedings of SPIE. Bellingham, WA: SPIE; 2016: 969306.
Heiland M, Schulze D, Blake F, Schmelzle R. Intraoperative imaging of zygomaticomaxillary complex fractures using a 3D C-arm system. Int J Oral Maxillofac Surg. 2005; 34: 369–375.
Senft C, Bink A, Franz K, Vatter H, Gasser T, Seifert V. Intraoperative MRI guidance and extent of resection in glioma surgery: a randomised, controlled trial. Lancet Oncol. 2011; 12: 997–1003.
Tormenti MJ, Kostov DB, Gardner PA, Kanter AS, Spiro RM, Okonkwo DO. Intraoperative computed tomography image–guided navigation for posterior thoracolumbar spinal instrumentation in spinal deformity surgery. Neurosurg Focus. 2010; 28: E11.
Zausinger S, Scheder B, Uhl E, Heigl T, Morhard D, Tonn J-C. Intraoperative computed tomography with integrated navigation system in spinal stabilizations. Spine (Phila Pa 1976). 2009; 34: 2919–2926.
Roth J, Biyani N, Beni-Adani L, Constantini S. Real-time neuronavigation with high-quality 3D ultrasound SonoWand in pediatric neurosurgery. Pediatr Neurosurg. 2007; 43: 185–191.
Carrasco-Zevallos OM, Viehland C, Keller B, et al. Review of intraoperative optical coherence tomography: technology and applications [Invited]. Biomed Opt Express. 2017; 8: 1607–1637.
Tao YK, Srivastava SK, Ehlers JP. Microscope-integrated intraoperative OCT with electrically tunable focus and heads-up display for imaging of ophthalmic surgical maneuvers. Biomed Opt Express. 2014; 5: 1877–1885.
Zhang K, Kang JU. Real-time intraoperative 4D full-range FD-OCT based on the dual graphics processing units architecture for microsurgery guidance. Biomed Opt Express. 2011; 2: 764–770.
Shen L, Carrasco-Zevallos O, Keller B, et al. Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography. Biomed Opt Express. 2016; 7: 1711–1726.
Viehland C, Keller B, Carrasco-Zevallos OM, et al. Enhanced volumetric visualization for real time 4D intraoperative ophthalmic swept-source OCT. Biomed Opt Express. 2016; 7: 1815.
Carrasco-Zevallos OM, Keller B, Viehland C, et al. Live volumetric (4D) visualization and guidance of in vivo human ophthalmic surgery with intraoperative optical coherence tomography. Sci Rep. 2016; 6: 31689.
Ehlers JP, Dupps WJ, Kaiser PK, et al. The prospective intraoperative and perioperative ophthalmic imaging with optical coherence tomography (PIONEER) study: 2-year results. Am J Ophthalmol. 2014; 158: 999–1007.
Ahlers C, Simader C, Geitzenauer W, et al. Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography. Br J Ophthalmol. 2008; 92: 197–203.
Loduca AL, Zhang C, Zelkha R, Shahidi M. Thickness mapping of retinal layers by spectral-domain optical coherence tomography. Am J Ophthalmol. 2010; 150: 849–855.
Beenakker J-WM, Shamonin DP, Webb AG, Luyten GPM, Stoel BC. Automated retinal topographic maps measured with magnetic resonance imaging. Invest Ophthalmol Vis Sci. 2015; 56: 1033–1039.
Oh IK, Oh J, Yang K-S, Lee KH, Kim S-W, Huh K. Retinal topography of myopic eyes: a spectral-domain optical coherence tomography study. Invest Ophthalmol Vis Sci. 2014; 55: 4313.
Nimsky C, Ganslandt O, Cerny S, Hastreiter P, Greiner G, Fahlbusch R. Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery. 2000; 47: 1070–1080.
Chandra S, Salgo IS, Sugeng L, et al. Characterization of degenerative mitral valve disease using morphologic analysis of real-time three-dimensional echocardiographic images: objective insight into complexity and planning of mitral valve repair. Circ Cardiovasc Imaging. 2011; 4: 24–32.
Tsang W, Weinert L, Sugeng L, et al. The value of three-dimensional echocardiography derived mitral valve parametric maps and the role of experience in the diagnosis of pathology. J Am Soc Echocardiogr. 2011; 24: 860–867.
Smith MJ, Clark CD. Methods for the visualization of digital elevation models for landform mapping. Earth Surf Process Landforms. 2005; 30: 885–900.
Carrasco-Zevallos OM, Viehland C, Keller B, McNabb RP, Kuo AN, Izatt JA. Constant linear velocity spiral scanning for near video rate 4D OCT ophthalmic and surgical imaging with isotropic transverse sampling. Biomed Opt Express. 2018; 9: 5052–5070, https://doi.org/10.1364/BOE.9.005052.
Pasricha ND, Shieh C, Carrasco-Zevallos OM, et al. Real-time microscope-integrated OCT to improve visualization in DSAEK for advanced bullous keratopathy. Cornea. 2015; 34: 1606–1610.
Pasricha ND, Bhullar PK, Shieh C, et al. Four-dimensional microscope-integrated optical coherence tomography to visualize suture depth in strabismus surgery. J Pediatr Ophthalmol Strabismus. 2017; 54: e1–e5.
Kumar A, Kakkar P, Ravani RD, Markan A. Utility of microscope-integrated optical coherence tomography (MIOCT) in the treatment of myopic macular hole retinal detachment. BMJ Case Rep. 2017; 2017: bcr-2016-217671.
Supplemental Materials
Supplementary Video S1. MIOCT volume series demonstrates the impact of stabilization on colorization. The first part of the video shows a porcine retina translated through the axial dimension of the scanning area. Colorization is applied without stabilization, as demonstrated by the dramatic change in retinal surface color as the eye is translated. The second part of the video shows the same data with stabilized colorization applied; a narrow color gradient tracks well with axial motion. When the eye is translated above the scanning area mid-video, the remaining volume appears red because not enough data are available to update the center of mass. 
Supplementary Video S2. MIOCT volume series from a human membrane peeling surgery, presented first in grayscale followed by the same series with stabilized color. These videos were provided to participants of the intraoperative volume testing where they were asked to answer questions about membrane identification and instrument activity after review of these videos individually. 
Supplementary Video S3. Example MIOCT volumes demonstrate the surface approach skill. The first half of the video shows an attempt in grayscale. The video is stopped where the subject indicated they believed the flexible loop was as close as possible to the surface without touching. Provided B scans demonstrate that the subject was contacting the surface. The second half of the video demonstrates an attempt with colorization. Provided B scans demonstrate successful completion of this skill with a close approach of the flexible loop without contact with the surface. While these videos are presented in two dimensions, subjects performed these skills with stereoptic visualization of the MIOCT volumes. 
Supplementary Video S4. Two series of MIOCT volumes recorded during the object grasp skill of a single subject. The first and second halves of the video show attempts conducted with grayscale and colorization, respectively. While these videos are presented in two dimensions, the subject performed this skill with stereoptic visualization of the MIOCT volumes. 
Supplementary Video S5. Two series of MIOCT volumes recorded during the object grasp skill of different subjects. The first half of the video shows a particularly unsuccessful attempt conducted with grayscale and the second half shows a particularly skilled attempt conducted with colorization. While these videos are presented in two dimensions, subjects performed this skill with stereoptic visualization of the MIOCT volumes. 
Figure 1
 
MIOCT volumes and B-scans demonstrate the colorization and stabilization processes. A non-colorized MIOCT volume (A) is filtered with a threshold at the 99th percentile of reflectivity values (B), the center of mass is calculated (C, red line), and color is applied (D). Top and bottom sequences demonstrate stability with axial motion.
Figure 2
 
Grayscale and colorized MIOCT volumes from membrane peeling cases shown during intraoperative volume testing. Instrument traction on membrane (A), membrane pulled by forceps (B), retinal deformation by the flexible loop (C), and the flexible loop above the retinal surface (D) are more clearly visualized in color. Color is applied with red superiorly, yellow medially, and green (blue in [C]) inferiorly. Color boundaries were individually chosen to highlight surface features.
Figure 3
 
Grayscale (top) and colorized (bottom) MIOCT volume examples from each microsurgical skill. Example surfaces with recessed (A) and elevated (B) objects, respectively; the surface approach skill with the flexible loop approaching a flat surface (C); and the object grasp skill with the forceps attempting to grasp a membrane-like object (D). Color is applied with red superiorly, yellow medially, and blue inferiorly. Color boundaries were applied just above (just below for [A]) the surface. While this figure uses 2D representations of the 3D volume, subjects viewed stereoptic images while completing each task.
Figure 4
 
Box and whisker plots summarizing measured outcomes of the thickness assessment (A), surface approach (B), and object grasp (C) skills. Data are presented as paired differences between colorized and grayscale trials. Data for each outcome were normalized from −1 to 1 based on the largest absolute value for that outcome. As such, time and confidence measures are plotted on consistent axes between skills while other measures are on unique axes. Axes are oriented such that values to the right of center support colorization, while values to the left of center support grayscale for all outcomes. The grey line delineates the point of no difference between colorized and grayscale visualization. Dashed lines delineate measures for each of the three tested skills. P values for each paired comparison are listed on the left and marked (*) when meeting the specified significance value of 0.05.
Table
 
Results of Intraoperative Volume Testing