Real Time Simulator of Age-Related Macular Degeneration

From Psych 221 Image Systems Engineering
Revision as of 02:03, 12 December 2015 by Projects221

Introduction

Virtual Reality (VR) has found important applications in medical research. For this project, we have simulated some of the typical symptoms of age-related macular degeneration (AMD) within a Unity3D environment, displayed via the Oculus Rift. The goal was to produce visual effects subtle enough to capture the actual experience of affected patients. Once the results are deemed accurate via subjective comparison, this setup could be used not only as a pseudo-phenomenological model but also as a starting point for further simulation and testing of visual acuity or impairment. Additionally, it has the potential to convey to healthy individuals, e.g. family, friends and physicians, the experience of a person suffering from AMD.

Background

AMD generally affects individuals above the age of 60, though symptoms may begin earlier and are not always noticeable at onset. The region of the retina affected by AMD is called the macula and includes the center point of the visual field (the fovea). This corresponds to the point of fixation and is the region most sensitive to spatial and chromatic detail.

AMD takes one of two forms, depending on its cause. Dry AMD occurs when yellow deposits of lipids and proteins, called drusen, accumulate underneath the macula. This form of AMD also occurs when the photoreceptors or surrounding tissue break down, causing gaps in the retinal image. This can lead to partial or total loss of central vision.

Wet AMD is caused by neovascularization, in which retinal blood vessels expand and leak fluid into the macula, damaging and distorting its surface. This can again lead to vision loss, but it also introduces warping, since straight lines appear curved (a condition called metamorphopsia). This latter condition is tested qualitatively by presenting patients with the Amsler grid.

Of all patients affected by AMD, 10% have the wet form; these patients are more likely to experience permanent loss of vision at fixation.

Methods

Rendering Pipeline

The chosen rendering environment is Unity3D, a powerful multi-platform engine for real-time simulation, most commonly used for games. Simple UI elements can be designed programmatically to interact with the program while it renders the output in real time. This is a useful feature for subjective testing, as it allows patients to continuously vary the parameters driving each perceptual effect. For our purposes, a scene model containing various 3D objects was provided by Matt Vitelli.

Each perceptual effect was scripted as a post-processing shader using Unity's embedded Cg/HLSL syntax. These scripts can be called from any component within Unity and can thus be layered in any desired combination. Individual effects can be activated or deactivated according to user feedback.

A shader component can be attached to any of the camera objects positioned in the virtual space. Two cameras generate the fields of view of the left and right video channels respectively, which are then streamed directly to the Oculus Rift. This simulation of stereoscopic vision is useful not just for realism but also for testing. By applying a post-processing shader to only one of the two channels (a realistic assumption, since AMD does not affect both eyes identically), it is possible to carry out immediate left/right visual comparisons.

Implementation

We have simulated various hypothesized symptoms of AMD. We refer to them collectively as scotomas to indicate any region of impaired vision, though the exact nature of the impairment can range from unconscious blind spots to perceivable blur. The effects we considered are the following.

Desaturation

RGB values were extracted pixel by pixel and used to estimate luminance as a weighted sum of the gamma-compressed inputs. The result is then interpolated radially up to a user-defined radius from the point of fixation.
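The effect itself runs as a Cg/HLSL shader; as an illustration only, the same math can be sketched in NumPy. The Rec. 709 luma weights and the linear radial blend curve are assumptions here, not necessarily the in-engine values:

```python
import numpy as np

def desaturate_center(img, center, radius):
    """Desaturate an RGB image toward its luma inside `radius` of `center`.

    img: float array (H, W, 3), gamma-encoded values in [0, 1].
    center: (x, y) point of fixation in pixel coordinates.
    """
    h, w, _ = img.shape
    # Luma as a weighted sum of the gamma-compressed channels (Rec. 709 weights assumed).
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    gray = np.repeat(luma[..., None], 3, axis=-1)

    # Radial blend factor: 0 at the fixation point, 1 at and beyond the radius.
    ys, xs = np.mgrid[0:h, 0:w]
    t = np.clip(np.hypot(xs - center[0], ys - center[1]) / radius, 0.0, 1.0)[..., None]

    return (1.0 - t) * gray + t * img  # fully gray at fixation, untouched outside
```

In the actual shader the same per-pixel arithmetic runs on the GPU, with `radius` exposed as a user-controlled parameter.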

Blur

We implemented a simple Gaussian filter using a 4×4 kernel. As before, the result is blended according to distance from the center.
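A NumPy sketch of this logic is below; for symmetry it uses a 5-tap separable kernel rather than the shader's 4×4 kernel, and the kernel width and blend curve are illustrative assumptions:

```python
import numpy as np

def radial_blur(img, center, radius, sigma=1.0):
    """Gaussian-blur a grayscale image, faded in toward `center`."""
    h, w = img.shape
    k = 2  # half-width of a 5-tap separable kernel (an assumption, see lead-in)
    ax = np.arange(-k, k + 1, dtype=float)
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    g /= g.sum()  # normalize so uniform regions are unchanged

    # Separable convolution with edge replication: row pass, then column pass.
    pad = np.pad(img, k, mode='edge')
    tmp = sum(g[i] * pad[:, i:i + w] for i in range(2 * k + 1))
    blurred = sum(g[i] * tmp[i:i + h, :] for i in range(2 * k + 1))

    # Blend: fully blurred at the fixation point, sharp at and beyond the radius.
    ys, xs = np.mgrid[0:h, 0:w]
    t = np.clip(np.hypot(xs - center[0], ys - center[1]) / radius, 0.0, 1.0)
    return (1.0 - t) * blurred + t * img
```

Separating the kernel into two 1D passes is also how such blurs are usually kept cheap enough for real-time shaders.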

Dark spot

Another interpolated effect was implemented as a simple dark spot at the point of fixation. The radius of this effect can be manipulated independently of the others, and it constitutes perhaps the most naive interpretation of what a patient "sees."
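Sketched in NumPy under the same radial-blend scheme (the linear falloff is an assumption; the shader exposes the radius as a parameter):

```python
import numpy as np

def dark_spot(img, center, radius):
    """Darken an RGB image toward the fixation point (naive central scotoma)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Attenuation factor: 0 at fixation, rising linearly to 1 at the radius.
    t = np.clip(np.hypot(xs - center[0], ys - center[1]) / radius, 0.0, 1.0)
    return img * t[..., None]
```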

Warping

We implemented a separate shader to distort the image around the center point. The current transformation simply stretches the image radially.
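The remapping can be illustrated in NumPy as follows; the quadratic falloff, the `strength` parameter, and nearest-neighbor sampling are assumptions of this sketch (the shader samples the rendered texture with its own interpolation):

```python
import numpy as np

def radial_stretch(img, center, radius, strength=0.5):
    """Radially stretch an image around `center` via nearest-neighbor remapping.

    Pixels inside `radius` sample from a point pulled toward the center,
    magnifying the region around fixation; outside the radius the image
    is left untouched.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - center[0], ys - center[1]
    t = np.clip(np.hypot(dx, dy) / radius, 0.0, 1.0)

    # Sampling scale < 1 near the center (samples closer in => magnification),
    # smoothly returning to 1 at the edge of the affected region.
    scale = 1.0 - strength * (1.0 - t) ** 2
    sx = np.clip(center[0] + dx * scale, 0, w - 1).round().astype(int)
    sy = np.clip(center[1] + dy * scale, 0, h - 1).round().astype(int)
    return img[sy, sx]
```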

In order to produce a more interesting warping effect, we have also generated a series of noise textures in MATLAB. We applied four Gaussian bandpass filters to a 2D Gaussian noise image, with center frequency increasing by octaves from 4 to 64 cpi. These textures can now be used to determine the horizontal and vertical offsets of each pixel within the warped region. Lower-frequency textures will generate waves from straight lines in the input image, whereas high-frequency textures will produce grainier results. This is the same procedure as in [6], and can be used as a virtual experimental setup to measure the effects of metamorphopsia on image recognition.
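An equivalent of the MATLAB procedure can be sketched in NumPy. The log-Gaussian band shape, the bandwidth, and the interpretation of "cpi" as cycles per image are assumptions of this sketch:

```python
import numpy as np

def bandpass_noise(size=256, center_freq=8.0, bandwidth_octaves=1.0, seed=0):
    """Bandpass-filtered Gaussian noise texture, normalized to [-1, 1].

    center_freq: band center in cycles per image (assumed unit for "cpi").
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))

    # Radial frequency of each FFT coefficient, in cycles per image.
    f = np.fft.fftfreq(size) * size
    fr = np.hypot(*np.meshgrid(f, f))

    # Gaussian band on a log-frequency (octave) axis, centered on center_freq.
    sigma = bandwidth_octaves / 2.0
    band = np.exp(-np.log2(np.maximum(fr, 1e-9) / center_freq) ** 2 / (2 * sigma**2))
    band[fr == 0] = 0.0  # remove the DC component

    tex = np.real(np.fft.ifft2(np.fft.fft2(noise) * band))
    return tex / np.abs(tex).max()

# Octave-spaced center frequencies spanning the 4-64 cpi range mentioned above
# (the exact set of frequencies used in the project is an assumption here).
textures = [bandpass_noise(center_freq=f) for f in (4, 8, 16, 32, 64)]
```

Each texture can then be sampled twice (with an offset) to supply the horizontal and vertical displacement of every pixel inside the warped region.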

Results

In the following snapshots, we show the resulting visual effects. In the case of blur and desaturation, we hope that these may be subtle enough to affect detail in the center of the visual field without overtly disrupting the surrounding region.

Snapshot captions (images not reproduced here):

  • original view
  • central blur
  • blur and desaturation
  • exaggerated dark spot
  • exaggerated warp
  • all effects combined

Conclusions and Future Work

The immediate next step is to implement the more sophisticated warping effect by means of the textured noise. A study of the extent to which differences in spatial frequency affect recognition could provide useful comparisons with previous work.

Another step is patient validation of the simulated effects. These approximations to what a person affected by AMD actually sees are motivated by subjective reports. Carefully designed experiments are required to verify our assumptions. To be included in this design:

  • distinguish between wet and dry degeneration, and the exact associated symptoms
  • guarantee a given point of fixation, or incorporate eye tracking
  • compute the actual region of vision loss (this simulation assumes only a circular region)
  • allow A/B comparison between the healthy and affected eye, since the subjective difference must be discerned by the patients themselves

Further developments should be in the direction of less naive simulations. It could be that what characterizes a scotoma is not so much an occlusion of the visual field, in direct correspondence with the retinal image, but a higher level lack of sight or object recognition. Although this might be impossible to simulate by operating only on the input image, approximations can be made by textural and structural inpainting or object recognition and elimination. We made some early attempts, but quickly realized that real time inpainting would require the careful design of an optimized algorithm. Heavy processing could be performed in Unity by maintaining various subsampled levels of the rendered image.

Finally, the main purpose of the project is to accomplish all this in a live stream to the Oculus. The setup is simple and has already been informally tested. Now, a stereo camera for live capture can be added so that the post-processing will not occur over a simulated environment but over the actual environment. This will require more care in adjusting for distance effects, accommodation, and other details of the ocular system.

Acknowledgements

We would like to give our deepest thanks to Trisha Pei-Wei Lian and Cordelia Erickson-Davis for their proposal of this project and guidance throughout it. Thanks to Matt Vitelli for his great help in setup and introduction to the environment. Finally, we would like to thank Prof. Brian Wandell and Prof. Joyce Farrell for a fascinating course, and engaging teaching.

Appendix

Related Literature

[1] Marmor, David J., and Michael F. Marmor. "Simulating vision with and without macular disease." Archives of ophthalmology 128.1 (2010): 117-125.

[2] Lewis, J., L. Shires, and D. J. Brown. "Development of a visual impairment simulator using the Microsoft XNA Framework." Proc. 9th Intl Conf. Disability, Virtual Reality & Associated Technologies, Laval, France. 2012.

[3] Ai, Zhuming, et al. "Simulation of eye diseases in a virtual environment." System Sciences, 2000. Proceedings of the 33rd Annual Hawaii International Conference on. IEEE, 2000.

[4] Jin, Bei, Zhuming Ai, and Mary Rasmussen. "Simulation of eye disease in virtual reality." Engineering in Medicine and Biology Society, 2005. IEEE-EMBS 2005. 27th Annual International Conference of the. IEEE, 2005.

[5] Ates, Halim Cagri, Alexander Fiannaca, and Eelke Folmer. "Immersive simulation of visual impairments using a wearable see-through display." Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 2015.

[6] E. Wiecek, S. C. Dakin, and P. Bex, "Metamorphopsia and letter recognition." Journal of Vision (2014) 14(14):1, 1-10

Oculus Specifications

  • Display Technology: OLED
  • Resolution: 2160×1200 (1080×1200 per eye)
  • Refresh Rate: 90 Hz
  • FOV (Nominal): 110 degrees or greater
  • Head Tracking: 6DOF (3-axis rotational tracking + 3-axis positional tracking)
  • Weight: TBA (Lighter than 380g)
  • Platforms: Microsoft Windows (OS X and Linux planned)
  • Connection: 1x HDMI 1.3 and 2x USB 3.0

Group Members

Shaimaa Bakr: research, write-up

Gabriele Carotti-Sha: implementation, write-up

Project Link: AMD Unity Project