July 2018
Volume 7, Issue 4
Open Access
High-Performance Virtual Reality Volume Rendering of Original Optical Coherence Tomography Point-Cloud Data Enhanced With Real-Time Ray Casting
Author Affiliations & Notes
  • Peter M Maloca
    OCTlab, Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
    Moorfields Eye Hospital, London, UK
    Department of Ophthalmology, University of Basel, Basel, Switzerland
  • J. Emanuel Ramos de Carvalho
    Moorfields Eye Hospital, London, UK
  • Tjebo Heeren
    Moorfields Eye Hospital, London, UK
  • Pascal W Hasler
    OCTlab, Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
    Department of Ophthalmology, University of Basel, Basel, Switzerland
  • Faisal Mushtaq
    School of Psychology, University of Leeds, Leeds, West Yorkshire, UK
    Centre for Immersive Technologies, University of Leeds, Leeds, West Yorkshire, UK
  • Mark Mon-Williams
    School of Psychology, University of Leeds, Leeds, West Yorkshire, UK
    Centre for Immersive Technologies, University of Leeds, Leeds, West Yorkshire, UK
    Bradford Institute for Health Research, Bradford, UK
    National Centre for Vision, University of Southeast Norway, Kongsberg, Norway
  • Hendrik P.N. Scholl
    Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
    Department of Ophthalmology, University of Basel, Basel, Switzerland
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
  • Konstantinos Balaskas
    Moorfields Eye Hospital, London, UK
    Moorfields Ophthalmic Reading Centre, London, UK
  • Catherine Egan
    Moorfields Eye Hospital, London, UK
  • Adnan Tufail
    Moorfields Eye Hospital, London, UK
  • Lilian Witthauer
    Center for Medical Image Analysis & Navigation, University Basel, Switzerland
  • Philippe C. Cattin
    Center for Medical Image Analysis & Navigation, University Basel, Switzerland
Translational Vision Science & Technology July 2018, Vol.7, 2. doi:https://doi.org/10.1167/tvst.7.4.2
Abstract

Purpose: Feasibility testing of a novel volume rendering technology to display optical coherence tomography (OCT) data in a virtual reality (VR) environment.

Methods: A VR program was written in C++/OpenGL to import and display volumetric OCT data in real time at 180 frames per second using a high-end computer and a tethered head-mounted display. Following exposure, participants completed a Simulator Sickness Questionnaire (SSQ) to assess for nausea, disorientation, and oculomotor disturbances. A user evaluation study of this software was conducted to explore the potential utility of the application.

Results: Fifty-seven subjects completed the user testing (34 males and 23 females). Mean age was 48.5 years (range, 21–77 years). Mean acquired work experience of the 35 ophthalmologists (61.40%) included in the group was 15.46 years (range, 1–37 years). Twenty-nine participants were VR-naïve. The SSQ showed a mean total score of 5.8 (SD = 9.44) indicating that the system was well tolerated and produced minimal side effects. No difference was reported between VR-naïve participants and experienced users. Overall, immersed subjects reported an enjoyable VR-OCT presence effect.

Conclusions: A usable and satisfying VR imaging technique was developed to display and interact with original OCT data.

Translational Relevance: An advanced high-end VR image display method was successfully developed to provide new views and interactions in an ultra-high-speed projected digital scenery using point-cloud OCT data. This represents the next generation of OCT image display technology and a new tool for patient engagement, medical education, professional training, and telecommunications.

Introduction
Major progress has been made in virtual reality (VR) technology (defined as systems that support skilled sensorimotor interactions with computer generated information in three-dimensional [3D] space)1 in recent years. While the entertainment (and specifically gaming) industry has been driving much of the progress, these developments are expected to transform the medical field and are increasingly being used to provide clinicians with environments that augment their training,2–5 facilitate operative planning,6,7 and support clinical decision making.8–11 
The field of ophthalmology is particularly well placed to benefit from the VR revolution as it has been making use of 3D information for many years.12,13 Indeed, optical coherence tomography (OCT), a noninvasive imaging technology that uses low-coherence light to capture optical scattering from tissue, is one of the fastest-adopted retinal imaging technologies.14,15 OCT has made it possible to visualize the vitreoretinal interface, and especially the retina and its layers, using a cross-sectional and 3D display.16 The integration of the OCT system into the operating microscope enhanced ophthalmic surgery and neurosurgery and has allowed OCT imaging of live surgery without disturbing a surgeon's workflow.17 Yet, despite the benefits of OCT, the technology is hindered in its utility by a lack of interactivity and limited spatial navigation. Methods that provide more accurate shadow rendering with increased image clarity and sharpness using 3D-OCT image stacks have previously been described.18 Even though comprehensive volumetric animations have been created from these OCT data, major drawbacks have been identified, namely the use of compressed images, the need for long post-processing rendering times of several seconds, and the reliance on nonintegrated second-party software. This means that such approaches do not yet allow for real-time data interaction, especially in clinical settings. 
Recently, new methods have been described in which stereolithography of volume-rendered choroidal vessels was used to enhance the spatial OCT visualization. Such printable models are based on a simplification of the natural tissue structure by the use of polygons to reduce the calculation requirements and, thus, make it printable.19 Typically, these renderings represent a geometric approximation of an object and are therefore incomplete, as only the sides of the objects seen by the observer are rendered. While these polygon-based representations of tissue can be useful, their level of detail and clinical relevance may still be constrained. 
The aforementioned advances in VR have now made it possible to overcome these barriers. In contrast to previous reports, here, we introduce a novel VR imaging technique for instant volume OCT data rendering that is enhanced with real-time ray casting of shadows. Instead of polygons we used original, high-quality OCT point-cloud data. We describe how this digital transformation could alter the display of OCT and other imaging information, presenting a novel approach for visualization and interaction, for education purposes and for the conceptualization of retinal disorders. 
Methods
This research followed the tenets of the Declaration of Helsinki. The use of the data was approved by the local ethics committee (EKNZ-ID:2016-01948). Written informed consent was obtained from all subjects. 
Subjects
Participants trialed the software voluntarily and were given the option to interrupt or end the VR experience if desired or needed. Following exposure to the system, they were asked to complete a brief survey for feedback. Inclusion criteria were 18 years of age or older and anamnestic normal stereoscopic depth perception (stereopsis). The use of spectacles or contact lenses was allowed. Exclusion criteria were coexisting diseases, such as epilepsy, dementia, parkinsonian syndrome, serious mental health illness, developmental disability or cognitive impairment, general disability that would preclude adequate comprehension of the informed consent, or subjects who did not sign informed consent. 
Virtual Reality Environment
A VR program (SpectoVR/RetinopsyVR Version 1.0; Center for Medical Image Analysis & Navigation, University of Basel, Switzerland) was created specifically for an immersive experience and written in C++/OpenGL; it enabled volume data import from different medical imaging sources, such as computed tomography (CT), magnetic resonance imaging (MRI), or OCT. Volume data were transferred as original .bmp stack files or in DICOM format exported from the OCT scanner into the VR play arena (HTC Vive; HTC, Xindian District, New Taipei City, Taiwan) and instantly volume rendered as original point-cloud data. 
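The import step described above can be pictured as stacking decoded 8-bit B-scans into one dense voxel grid. The paper's source code is not published, so the following is a minimal sketch under our own naming (the `Volume` struct and `appendSlice` helper are hypothetical, not the application's API):

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Dense voxel grid built from a stack of grayscale B-scan slices.
struct Volume {
    int width = 0, height = 0, depth = 0;   // x, y: B-scan size; z: slice count
    std::vector<std::uint8_t> voxels;       // row-major, slice-contiguous

    std::uint8_t at(int x, int y, int z) const {
        return voxels[(static_cast<std::size_t>(z) * height + y) * width + x];
    }
};

// Append one decoded B-scan (width*height grayscale samples) to the volume.
void appendSlice(Volume& vol, const std::vector<std::uint8_t>& slice,
                 int width, int height) {
    if (vol.depth == 0) { vol.width = width; vol.height = height; }
    if (width != vol.width || height != vol.height)
        throw std::runtime_error("slice size mismatch");
    vol.voxels.insert(vol.voxels.end(), slice.begin(), slice.end());
    ++vol.depth;
}
```

In an OpenGL renderer, a volume assembled this way would typically be uploaded once as a single-channel 3D texture (e.g., via `glTexImage3D` with `GL_R8`/`GL_RED`) so the ray caster can sample it on the GPU.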
One major objective of the software from the outset was to provide a unique illusion of reality so that the user could both engage with and enjoy the experience. For the highest level of immersion, a VR room large enough to safely walk around was programmed in room scale (play arena ∼4 × 4 × 3 m). The VR floor texture consisted of a high-resolution photograph of a linoleum surface, which also contained scratches and thus simulated a very realistic environment. Several artificial windows with a view of the sea and sky were designed to avoid anxiety or the fear of being trapped. Room illumination consisted of an ambient light source that could be enhanced by a single ray-casting light, which itself could be manipulated in position and light intensity. The advanced ray-casting system was implemented to render shadows cast in real time at 180 frames per second (2 × 90 frames per second, one per eye) to provide high realism and clarity of the rendering. 
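The shadow-casting idea behind this can be sketched in serial form: at each sample along the viewing ray, a secondary ray is marched toward the light and its accumulated optical depth attenuates that sample's contribution. The actual application performs this per fragment on the GPU; the CPU sketch below is ours, with the volume passed as a callable density field:

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

using Density = std::function<float(Vec3)>;   // 0..1 sample of the volume

// Transmittance from p toward the light: exp(-accumulated optical depth).
float shadowTransmittance(const Density& rho, Vec3 p, Vec3 toLight,
                          float step, int nSteps) {
    float opticalDepth = 0.0f;
    for (int i = 1; i <= nSteps; ++i)
        opticalDepth += rho(add(p, mul(toLight, step * i))) * step;
    return std::exp(-opticalDepth);
}

// Front-to-back compositing of one primary ray; returns scalar radiance.
float castRay(const Density& rho, Vec3 origin, Vec3 dir, Vec3 toLight,
              float step, int nSteps) {
    float radiance = 0.0f, transmittance = 1.0f;
    for (int i = 0; i < nSteps && transmittance > 0.01f; ++i) {
        Vec3 p = add(origin, mul(dir, step * i));
        float alpha = std::min(1.0f, rho(p) * step);
        float lit = shadowTransmittance(rho, p, toLight, step, nSteps);
        radiance += transmittance * alpha * lit;   // shadowed contribution
        transmittance *= (1.0f - alpha);
    }
    return radiance;
}
```

The early-out once transmittance falls below a threshold is one of the standard optimizations that makes this compositing scheme feasible at the frame rates reported here.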
The tethered head-mounted display (HMD) headset enabled a "room scale" tracking technology, allowing the subject to move in 3D space and use motion-tracked handheld controllers to interact with the VR model and VR environment (image resolution 1080 × 1200 pixels per eye, thus 2160 × 1200 combined; nominal field of view ∼110°; lighthouse tracking system with 2 base stations emitting 60-Hz pulsed infrared lasers; wired connection using HDMI 1.4). The virtual reality refresh rate was set at 180 frames per second for both eyes and the application was deployed on an XMG high-end notebook (Windows 10 Home, 64-bit, 32 GB RAM, NVIDIA GeForce GTX 980 8192 MB GDDR5, Intel Core i7-6700K CPU @ 4.00 GHz, 4 cores; Schenker Technologies GmbH, Leipzig, Germany). 
A number of tasks were created for this VR application. Manipulation of the VR model featured change of first-person viewing angle, change of model size, a walk-through option, adjustment of the ray-casting light to highlight regions of interest, VR cutting with a plane, display and cutting of the original point cloud in any direction, and stereotactic, bimanual manipulations. The VR application is demonstrated in Supplementary Video S1. 
Participants
To explore the utility of this technology, 57 healthy participants volunteered to trial the VR experience and provide their feedback via a user preference survey constructed in collaboration with experts in the measurement of user interaction with virtual environments. Participation in the study was voluntary and written informed consent was obtained from each subject. 
Procedure
All participants were assigned to the same procedure. The HMD displayed a VR room, which contained a single virtual model of a peripheral retinal tear (swept-source OCT, 1050-nm wavelength, optical specimen of 3 × 3 × 2.6 mm; Topcon, Tokyo, Japan). No data preprocessing, speckle-noise reduction, thresholding, or image compression was applied before the application started; OCT data were imported as original .bmp volume stack files. Aspect ratio was adjusted before data import using the bounding-box diameter displayed by the manufacturer's software. Participants were instructed on the correct use of the manipulators and the fit of the VR glasses, and were immersed in the VR play arena for a minimum immersion time of 5 minutes that could be individually extended if desired. Immersion time was recorded. No extended explanation or training was given to VR-naïve participants. After the experience, participants were asked to complete the questionnaire without being disturbed or influenced. An experienced supervisor was assigned to avoid undesired incidents, for instance in case of sickness, disorientation, or a potential collision with real-world walls. The use of the OCT-imaging data for VR modeling was approved by the local ethics committee. Figure 1 visualizes the VR reconstruction system depicting the VR environment, the volume rendered retinal tear, and the manipulators. 
Figure 1
 
Stereoscopic illustration of the VR environment displaying volume OCT data of a peripheral retinal tear used in this study (images can be fused for a 3D sensation). (A) High-quality point-cloud rendering shows how the round border of the retinal tear is held with the left-hand VR handle (single arrow) and the origin of a retinal bridging vessel (white arrow head) is indicated with the right VR handle (double arrows). The neurosensory layer of the retina shows vitreoretinal traction (arrow head) and is separated from the underlying retinal pigment epithelium (RPE; double white arrow head), which is enhanced using shadow ray casting to provide realism and a sense of depth. Note that the ray casting renders a near-reality impression of floor scratches (asterisk) to produce the sensation of "being there," allowing reaction to stimuli as if they were in the real world, although the user is immersed in a synthetic environment. (B) The vitreoretinal traction is peeled off using a separate cutting plane (arrow) to highlight the bridging vessel (white arrow head). The cutting plane can be manipulated in all directions to deliver full freedom to operate. (C) Switch from VR-3D rendering (A, B) to original cross-section OCT mode in the same VR model and VR room shows the vitreoretinal traction (arrow head) being continuous with the detached retina, which is separated from underlying RPE (double arrow head).
Analysis
All responses on the survey were voluntary, which resulted in incomplete information for some items on the survey. Therefore, percentages of completed responses are reported throughout the manuscript. Response data were processed using R (2017; R Core Team, Vienna, Austria) and figures were created using Prism v6.00 for Mac (GraphPad Software, La Jolla, CA). To examine the extent to which the OCT point-cloud visualization tool induced side effects, participants were asked to complete the 16-item Simulator Sickness Questionnaire (SSQ).20 The analysis process followed the procedure described in the literature previously20 using a custom-written R script. Forty-nine participants provided complete responses for the SSQ questions, so only these participants' data were included in this specific analysis. Each item was rated on a scale of 0 to 3 and responses to these items were used to compute ratings of three distinct categories of symptoms: nausea (to capture gastrointestinal distress), disorientation (e.g., dizziness), and oculomotor disturbances (e.g., eye strain). Specifically, a weighted average of these three factors produced a total score measure, which provides some heuristic value for identifying systems that induce high levels of simulator sickness (where a score of 0 indicates no symptoms, <5 negligible symptoms, 5–10 minimal symptoms, and ≥10 significant symptoms, pointing to a problem simulation). Participants were also asked to state the extent to which they agreed (on a 5-point Likert scale) with 12 statements related to the VR tool. 
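For readers unfamiliar with the SSQ, the scoring scheme of reference 20 can be sketched as follows: each item loads on one or two of the three subscales, the raw subscale sums are multiplied by the published weights, and the total is the weighted combination of all raw sums. The analysis here was done in R; this C++ transcription and its names (`SSQScores`, `scoreSSQ`) are ours:

```cpp
#include <array>

struct SSQScores { double nausea, oculomotor, disorientation, total; };

// Standard SSQ scoring (Kennedy et al.): 16 items rated 0-3, item-to-subscale
// assignment and weights as published in the original questionnaire.
SSQScores scoreSSQ(const std::array<int, 16>& item) {
    // 1-based item indices loading on each subscale.
    static const int N[] = {1, 6, 7, 8, 9, 15, 16};   // nausea
    static const int O[] = {1, 2, 3, 4, 5, 9, 11};    // oculomotor
    static const int D[] = {5, 8, 10, 11, 12, 13, 14}; // disorientation
    auto sum = [&](const int* idx, int n) {
        int s = 0;
        for (int i = 0; i < n; ++i) s += item[idx[i] - 1];
        return s;
    };
    const int n = sum(N, 7), o = sum(O, 7), d = sum(D, 7);
    return {n * 9.54, o * 7.58, d * 13.92, (n + o + d) * 3.74};
}
```

With this scaling, a participant reporting no symptoms scores 0, and the study's mean total of 5.8 falls in the "minimal symptoms" band of the heuristic described above.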
The data were also analyzed to explore whether volunteers who were new to VR experiences had a higher discomfort score. For this purpose, a Wilcoxon rank sum test was performed to compare the scores of those who had and those who had not previously been exposed to VR. Finally, participants were asked to state what they believed to be the best feature(s) of the tool and where improvements could be made. 
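The rank-sum comparison above reports the W statistic as computed by R's wilcox.test, i.e., the rank sum of the first group minus its minimum possible value (equivalently, the Mann-Whitney U of that group), with midranks assigned to ties. A sketch under our own naming (`wilcoxonW` is hypothetical):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// W statistic of the Wilcoxon rank-sum test for samples x and y:
// rank the pooled values (midranks for ties), sum the ranks of x,
// then subtract the minimum possible rank sum n1*(n1+1)/2.
double wilcoxonW(std::vector<double> x, std::vector<double> y) {
    struct Obs { double v; bool inX; };
    std::vector<Obs> pooled;
    for (double v : x) pooled.push_back({v, true});
    for (double v : y) pooled.push_back({v, false});
    std::sort(pooled.begin(), pooled.end(),
              [](const Obs& a, const Obs& b) { return a.v < b.v; });

    double rankSumX = 0.0;
    for (std::size_t i = 0; i < pooled.size(); ) {
        std::size_t j = i;                       // span of tied values
        while (j < pooled.size() && pooled[j].v == pooled[i].v) ++j;
        double midrank = (i + 1 + j) / 2.0;      // average of ranks i+1..j
        for (std::size_t k = i; k < j; ++k)
            if (pooled[k].inX) rankSumX += midrank;
        i = j;
    }
    const double n1 = static_cast<double>(x.size());
    return rankSumX - n1 * (n1 + 1) / 2.0;
}
```

W ranges from 0 (every x below every y) to n1·n2 (the reverse); the significance level is then obtained from the exact or normal-approximation null distribution, which R handles internally.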
Results
Of the 57 subjects, 34 were males (59.65%) and 23 were females (40.35%). Thirty-five (61.40%) were eye care professionals (ophthalmologists) and 22 (38.60%) were non-ophthalmologists (comprising 6 opticians and 16 other healthcare professionals). The mean age was 48.5 years (range, 21–77 years). Ophthalmologists had a mean acquired work experience of 15.46 years (range, 1–37 years of specific subspecialty training). Twenty-six subjects (45.61%) used prescription glasses or contact lenses (1 subject had previous refractive corneal surgery): 18 (31.57%) used prescription glasses and 11 (19.30%) contact lenses. 
Twenty-nine participants (54.72%) reported that they had never used VR before and 24 (45.28%) reported that they had previous VR experience (4 participants did not provide responses). The minimum immersion time for all participants was 5 minutes (mean, 20.17 minutes; range, 5–50 minutes). 
The mean total score of 5.8 (SD = 9.44) on the SSQ indicated that the VR system was well tolerated and produced minimal symptoms. All participants completed the VR testing and no participants requested to end the experience prematurely. No external intervention by the supervisor was required in any of the tested subjects (0%). There was a stable VR application performance (100%) and no interruption of the application or technical shut down was noted. 
Almost all participant responses to the Likert scale statements exploring their perceptions of the system were positive. The percentage of each response on all items is visualized in Figure 2 alongside a summary statement for the survey item. 
Figure 2
 
Visual representation of the percentage agreement levels in response to 12 statements probing the utility of the virtual reality (VR) tool. Overall, the full volume rendering tool is well tolerated and beneficial.
A full list of the statements is provided in Supplementary Materials S1. 
The majority of participants either strongly agreed (66.66%) or agreed (31.25%) that the simulator provided a realistic presentation of the clinical case (0 disagreed or strongly disagreed). All participants agreed (16.32%) or strongly agreed (83.67%) that using the tool was an enjoyable experience. Consistent with this, 50% strongly agreed and 38% agreed that they would use the system again if offered the opportunity and only 2% stated that they would not. When asked whether participants felt that the tool was an unnecessary use of their time, only 4.16% of participants agreed (2.08%) or strongly agreed (2.08%). 
To explore the subjective educational value of the system, we asked whether participants agreed that the system enhanced their understanding of the clinical case, increased their confidence, and whether the system improved their knowledge of eye disease. For all three questions, participants had positive responses, with greater than 70% agreeing or strongly agreeing that these were valuable attributes of the system. 
Next, we wanted to explore whether the tool could be used to augment intraoperative performance (e.g., as a preoperative simulation tool).6,7 Therefore, we asked whether the ophthalmologists in our sample felt that the tool could help them anticipate potential problems and take appropriate action. Here, 14% strongly agreed and 46.9% agreed that it could be useful in this regard, 34.69% were ambivalent, and 4% disagreed. The results were similar when we asked whether participants felt that the system might make them less likely to make errors: 21% strongly agreed, 42.5% agreed, 31.9% neither agreed nor disagreed, and 4.2% disagreed. 
Finally, Wilcoxon rank sum test for comparison of the simulator sickness scores of those who had and those who had not previously been exposed to VR showed no statistically reliable difference between the two groups (W = 224, P = 0.2202). 
In summary, the system was positively viewed by our sample of participants. This is reflected in our final question, which enquired whether they would recommend the use of this system to their colleagues: 89.5% agreed (31.25%) or strongly agreed (58.33%) that they would, and only 2.08% stated that they would not. Our final two questions asked participants what they felt were the best features of the system and where improvements could be made (see Supplementary Materials S1 for the list of statements provided). In line with our expectations that current approaches are limited by a lack of interactivity and that this tool could address this problem, respondents most frequently cited this interactive capability as a strong feature of the tool. Reflecting on the VR performance, respondents also indicated that further improvements could be made in terms of usability (referring to handling and cutting), resolution, and wireless capability. 
Discussion
The physician as sole keeper of knowledge belongs to the primordial era of medicine. The advent of print marked the beginning of shared medical knowledge. Recently, information technology (IT) systems have been used for data digitisation.21 In our opinion, VR is not only the next logical evolution in knowledge diffusion but also the conveyer of a new learning experience. We explored the potential of this approach by applying VR to OCT, one of the most important imaging techniques in ophthalmology and visual science.22 
Current OCT systems normally use two-dimensional computer interfaces (e.g., flat display monitors, keyboard, and mouse) to create and interact with OCT data, which is presented as flat two-dimensional B-scans.23 The user's view is therefore restricted along an axis and is "framed." Recent techniques of 3D OCT rendering add a new dimension to OCT interpretation, disclosing important spatial relationships within the neurosensory retina and choroid.24–28 
The novel tool herein described enhances OCT image display and interactive capabilities by freeing OCT data from rigid frames—allowing access to OCT as a free-floating model that can be interactively explored and arbitrarily enlarged with no restriction on user movement. This occurs within a VR setting with no angular constraints and cutting planes in any location and direction. The application was positively accepted and judged, especially by the largest subgroup consisting of trained ophthalmologists who had undergone a mean of 15 years of postgraduate professional subspecialization. The VR software reported here is continuing in its development and is now available as a research tool. 
Furthermore, the system empowers the user to switch promptly between the 3D-OCT VR model and the corresponding original OCT data by means of the cutting plane. Instantly, the user can focus on any space, section, and junction, and selectively hide or display particular structures. This combined feature of synchronized mapping of original, highly complex, 3D-OCT point-cloud volume data and plane view offers an unprecedented and meaningful VR environment. This kind of 3D data modeling and data representation may be especially important, as it has been shown that VR rendering can mask pathologic tissue, such as intracranial aneurysms.29 Thus, the combined, multidimensional feature in this software can avoid missing critical information, as the VR image is immediately correlated and made comparable with corresponding OCT images. 
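Geometrically, the cutting-plane interaction described above reduces to a half-space test: a voxel is kept when it lies on the visible side of an arbitrarily oriented plane, defined by a point and a normal (in the application, both driven by the tracked hand controller). A minimal sketch with our own names:

```cpp
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// True if voxel position p is on or in front of the cutting plane,
// i.e., on the side the plane normal points toward.
bool visible(Vec3 p, Vec3 planePoint, Vec3 planeNormal) {
    return dot(sub(p, planePoint), planeNormal) >= 0.0f;
}
```

Because the test is evaluated per sample during ray casting rather than by remeshing the data, the plane can be moved and reoriented freely at no extra preprocessing cost, which is what enables cutting "in any location and direction" in real time.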
A key advantage of the method reported here is that it is based on real-time rendering of original point-cloud data instead of polygons or meshes. The extant polygon approach approximates an object to save computational resources, but as an unwanted side effect in medical imaging it also reduces the complexity and level of detail of a structure. Thus, this novel method offers a valuable implementation for managing highly complex data that is enhanced with ray casting of shadows for a more natural and photorealistic illumination of the rendered data. Moreover, the VR data can be displayed in different color schemes using various transfer functions to highlight specific structures. This modality aided visualization of details of the vitreoretinal interface, vitreous body, or retinal layers that were not always apparent clinically or on standard B-scan OCT images.30,31 Such an intuitive VR tool may require less user training time, increase work productivity, and represent a novel paradigm for advanced human–computer interaction.32,33 The described VR system is being introduced for medical teaching purposes at Moorfields Eye Hospital, London, United Kingdom, and at the University of Basel, Switzerland, with not only medical students and medical staff, but also patients and their relatives, including visually impaired patients. 
Given the increasing use of VR, evidence has emerged that these systems can limit the user experience by inducing general discomfort, headache, eye strain, disorientation, or even nausea, vomiting, and oculomotor symptoms, resulting in a form of motion sickness known as cybersickness.20,34 In addition, a positive correlation has been found between the frequency of negative side effects and both the technical quality of the VR implementation and the degree to which users were able to control their movements during immersion.35–37 However, in this study, all subjects were able to successfully use and complete immersive VR tasks, such as walking around a VR-3D OCT model, cutting at different angles, or handling a ray-casting light source, with only minimal discomfort reported afterward. It is possible that a longer lasting immersion would trigger more side effects, but the noted symptoms are within the range of previous studies.38,39 We did not find a difference with regard to discomfort between VR-experienced subjects and VR-naïve participants, who made up a relatively high percentage of all participants. On the contrary, the large majority of the subjects reported a positive and enjoyable VR experience. This supports the safety of the described VR system for the display of 3D-OCT data. Furthermore, disorientation, one of the commonest reported side effects of VR, was not noticed during the experiment in any subject. This could be attributed to the sophisticated VR interface, which aimed at total immersion of the subject in the VR environment by including familiar and inherent objects, such as a floor with scratches that reacted to the angle of incidence of the ray-casting light, windows with a view of the sea or the sky, as well as the representation of the three orthogonal OCT cross sections in the usual way within virtual big-screen monitors. 
Thus, this study shows that personal integrity, the individual representation of room scale, and orientation within the VR coordinates were consistent with the user's personal metric, which we associate with the extremely high rendering speed of 180 frames per second for high-resolution images. Nevertheless, we still advocate systematic monitoring of unwanted side effects before, during, and after the immersion. 
Although the potential impact of VR in ophthalmology is high, there are major issues, such as the lack of standardization in VR hardware and software and the high expense of programming and testing, that must be overcome. Among the major criticisms regarding current VR systems is the need for relatively heavy, tethered high-end headsets that require powerful (and expensive) high-end computers to operate. However, a number of forthcoming hardware and software solutions will likely reduce the financial burden of these systems. Given the current rate of progress in hardware development, higher resolution wireless head-mounted displays with controllers that provide more nuanced haptic information are expected to be introduced in coming years, and we plan to ensure that the software described here is compatible with future systems. In such an image-information and communication environment, the capacity of our VR application will be further enhanced, as the system already works across multiple imaging systems such as different OCT systems, CT, MRI, and 3D ultrasound. In Figure 3, some examples are shown to demonstrate other VR data integration, such as structural OCT, volumetric OCT angiography, and CT. 
Figure 3
 
Integration of different image data resources into the VR-imaging method to show the capability of VR as a novel multimodal imaging display platform. (A) The developed VR tool was capable of importing original, structural swept-source OCT data, for example of vitreoretinal traction, and showing the corresponding original point-cloud rendering (OCT optical specimen measuring 9 × 12 mm; Topcon) (B). (C) Another example is shown as VR OCT angiography (3 × 3 mm, spectral-domain Cirrus HD-OCT; Carl Zeiss Meditec) and en face rendering of the optic disc vessels (D). (E) VR CT of a skull with soft tissue rendering and corresponding original CT data with intensity display (F) to show that VR as a new medium may be beneficial for ophthalmology and potentially for medical education and other healthcare subspecialties, such as neuroradiology or neurosurgery.
Conclusion
This study presents the first report on the feasibility, performance, and safety of a newly developed real-time volume-rendering technique for interactive VR environments to display original OCT point-cloud data, including the integration of real-time shadows cast by a moving light source. The ability to render high-quality environments may position VR as the next-generation image display method and platform for OCT and all other volumetric medical data, and forge a new perception of how we communicate and work with this technology. 
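The core of such a renderer is ray casting with front-to-back compositing plus a shadow term. The following is a minimal CPU sketch of that idea, assuming an orthographic view along the z axis and a light placed at z = 0; the absorption coefficient and the slice-marching scheme are our simplifying assumptions, not the paper's GPU implementation.

```python
import numpy as np

def ray_cast(volume, step=1.0, absorption=0.05):
    """Front-to-back compositing of a scalar volume along the z axis,
    with a crude per-sample shadow term (a sketch, not the paper's renderer)."""
    nx, ny, nz = volume.shape
    image = np.zeros((nx, ny), dtype=np.float32)
    transmittance = np.ones((nx, ny), dtype=np.float32)  # light left per viewing ray
    accumulated = np.zeros((nx, ny), dtype=np.float32)   # density toward the light
    for z in range(nz):  # march every viewing ray one slice at a time
        density = volume[:, :, z].astype(np.float32)
        # Shadow: attenuate by the density already traversed on the light
        # side of the sample (light assumed to sit at z = 0 here).
        shadow = np.exp(-absorption * accumulated)
        alpha = 1.0 - np.exp(-absorption * density * step)
        image += transmittance * alpha * shadow
        transmittance *= 1.0 - alpha
        accumulated += density * step
    return np.clip(image, 0.0, 1.0)
```

A real-time version runs one such ray per pixel on the GPU and recomputes the shadow term whenever the light moves, which is what gives the depth cues described for Figure 1.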
Acknowledgments
The authors thank Nadia Maloca, Laura Maloca, and Noëlle Burri, who generously shared their time, experience, and materials for the purposes of this study. 
Supported by grants from the Werner Siemens Foundation through the MIRACLE project. 
Disclosures: P.M. Maloca, None; J.E.R. de Carvalho, None; T. Heeren, None; P.W. Hasler, None; F. Mushtaq, None; M. Mon-Williams, None; H.P.N. Scholl, None; K. Balaskas, None; C. Egan, National Institute for Health Research (NIHR) Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust (F), UCL Institute of Ophthalmology (F); A. Tufail, National Institute for Health Research (NIHR) Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust (F), UCL Institute of Ophthalmology (F), Heidelberg Engineering (C), Optovue (C); L. Witthauer, None; P.C. Cattin, owner of the VR application described here (S). 
The views expressed are those of the author(s) and not necessarily those of the NEI, NHS, the NIHR or the Department of Health. 
Figure 1
 
Stereoscopic illustration of the VR environment displaying volume OCT data of the peripheral retinal tear used in this study (the image pairs can be fused for a 3D sensation). (A) High-quality point-cloud rendering shows how the round border of the retinal tear is held with the left-hand VR handle (single arrow) while the origin of a retinal bridging vessel (white arrowhead) is indicated with the right-hand VR handle (double arrows). The neurosensory layer of the retina shows vitreoretinal traction (arrowhead) and is separated from the underlying retinal pigment epithelium (RPE; double white arrowheads); this separation is enhanced using shadow ray casting to provide realism and a sense of depth. Note that the ray casting also renders a near-realistic impression of floor scratches (asterisk), producing the sensation of "being there" and allowing the user to react to stimuli as if they were in the real world, although immersed in a synthetic environment. (B) The vitreoretinal traction is peeled off using a separate cutting plane (arrow) to highlight the bridging vessel (white arrowhead). The cutting plane can be manipulated in all directions, giving full freedom to operate. (C) Switching from the VR 3D rendering (A, B) to the original cross-sectional OCT mode in the same VR model and VR room shows the vitreoretinal traction (arrowhead) being continuous with the detached retina, which is separated from the underlying RPE (double arrowheads).
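The freely orientable cutting plane described in panel (B) amounts, geometrically, to clipping the point cloud against an arbitrary plane. A minimal sketch of that operation follows; the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clip_point_cloud(points, plane_point, plane_normal):
    """Keep the points on the positive side of an arbitrary cutting plane,
    given by a point on the plane and its (not necessarily unit) normal."""
    pts = np.asarray(points, dtype=np.float64)
    n = np.asarray(plane_normal, dtype=np.float64)
    n /= np.linalg.norm(n)
    # Signed distance of every point to the plane; >= 0 means "keep".
    signed = (pts - np.asarray(plane_point, dtype=np.float64)) @ n
    return pts[signed >= 0.0]

# Example: cut a random unit-cube cloud with a horizontal plane through its centre.
cloud = np.random.default_rng(1).random((1000, 3))
kept = clip_point_cloud(cloud, plane_point=(0.5, 0.5, 0.5),
                        plane_normal=(0.0, 0.0, 1.0))
```

Because the VR handles supply `plane_point` and `plane_normal` continuously, re-evaluating this test each frame is what lets the user peel structures away in any direction.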
Figure 2
 
Visual representation of the percentage agreement levels in response to 12 statements probing the utility of the virtual reality (VR) tool. Overall, the full volume-rendering tool was well tolerated and perceived as beneficial.
Supplement 1
Supplement 2