Simulating Vision through Retinal Prosthesis
Group Members: Alex Martinez
Back to Psych 221 Projects 2014
Introduction

Retinal degenerative diseases such as age-related macular degeneration and retinitis pigmentosa are among the leading causes of blindness in the developed world. These diseases lead to a loss of photoreceptors, while the inner retinal neurons survive to a large extent. Electrical stimulation of the surviving retinal neurons has been achieved either epiretinally, in which case the primary targets of stimulation are the retinal ganglion cells (RGCs), or subretinally, to bypass the degenerated photoreceptors and use neurons in the inner nuclear layer (bipolar, amacrine and horizontal cells) as primary targets. Other, fully optical approaches to restoring sight include optogenetics, in which retinal neurons are transfected to express light-sensitive Na and Cl channels; small-molecule photoswitches, which bind to K channels and make them light sensitive [22]; and photovoltaic implants based on thin-film polymers.
Recent clinical studies with epiretinal and subretinal prosthetic systems have demonstrated improvements in visual function on certain tasks, with some patients able to identify letters at an equivalent visual acuity of up to 20/550. Despite this progress, vision at such limited resolution still lacks much of the functionality of normal sight. Simulating vision through a retinal prosthesis, and processing the image in various ways, could help identify better methods of transferring information through the retina at this limited bandwidth. To aid the development of future image processing software, this group will simulate vision through the retinal prosthesis developed by the Palanker Lab.
What Has Been Done in the Past
Our group investigated prior simulations and descriptions given by patients about their restored vision. In particular, we will look at Project Xense with regard to vision simulation.
Project Xense


Developed at the Entertainment Technology Center at Carnegie Mellon University, Project Xense is a collection of three museum exhibits about medical implant and prosthesis technology. One of these exhibits simulates vision through a retinal implant. Guests wear a head-mounted display equipped with a camera; the display shows video from the camera that has been processed in real time to resemble the low-resolution vision afforded by retinal implants.
When guests put on the head-mounted display, their field of vision is blocked out and replaced by small screens showing processed live video from a camera mounted on the headset. The processing reduces each image to a black-and-white grid based on brightness; the resolution of the grid corresponds to important historical and theoretical milestones in retinal implant technology. Using the control buttons, guests can step through these resolutions to experience what it would be like to have the corresponding retinal implant.
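The brightness-grid reduction described above can be sketched in a few lines of NumPy. This is our own minimal illustration, not the Project Xense code: it assumes a grayscale image with values in [0, 1], block-averages it down to the chosen grid resolution, and thresholds each cell to black or white.

```python
import numpy as np

def simulate_grid_vision(image, grid_size, threshold=0.5):
    """Reduce a grayscale image (2-D array, values in [0, 1]) to a
    coarse black-and-white grid by block-averaging, then thresholding
    each cell to fully on (1) or off (0)."""
    h, w = image.shape
    bh, bw = h // grid_size, w // grid_size
    # Crop so the image divides evenly into grid_size x grid_size blocks.
    cropped = image[:bh * grid_size, :bw * grid_size]
    # Average brightness within each block.
    blocks = cropped.reshape(grid_size, bh, grid_size, bw).mean(axis=(1, 3))
    # Threshold: each grid cell becomes white or black.
    return (blocks >= threshold).astype(np.uint8)

# Example: a 64x64 horizontal gradient reduced to an 8x8 grid.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
grid = simulate_grid_vision(img, 8)
```

Stepping through `grid_size` values (e.g. 4, 16, 60) mimics the exhibit's scan through the milestone resolutions.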
While this simulation gives a good sense of vision through a retinal prosthesis, it is not flexible enough to capture variation between different models of retinal prostheses. This group will specifically model the retinal prosthesis developed by the Palanker Lab, which anticipates implanting 1000 pixels subretinally in a concentric arrangement. Additionally, improvements in hardware should allow for better localization and control of the voltage applied by each pixel, resulting in a more stable observed pixel radius with greater dynamic range.
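To make the concentric arrangement concrete, the sketch below generates roughly 1000 pixel centers on evenly spaced concentric rings. This is purely illustrative geometry of our own devising (ring spacing and pitch are assumptions), not the Palanker Lab's actual electrode layout.

```python
import numpy as np

def concentric_pixel_layout(n_pixels=1000, pitch=1.0):
    """Place n_pixels implant-pixel centers on concentric rings around a
    central pixel, with neighboring pixels roughly `pitch` apart
    (arbitrary units). Illustrative layout only."""
    centers = [(0.0, 0.0)]
    ring = 1
    while len(centers) < n_pixels:
        r = ring * pitch
        # Number of pixels that fit on this ring at ~pitch spacing.
        count = int(round(2 * np.pi * r / pitch))
        angles = np.linspace(0, 2 * np.pi, count, endpoint=False)
        centers.extend(zip(r * np.cos(angles), r * np.sin(angles)))
        ring += 1
    return np.array(centers[:n_pixels])

layout = concentric_pixel_layout(1000)
```

Sampling the input image at these centers (instead of on a square grid) would let the simulator reflect the implant's actual spatial arrangement.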
Argus II Retinal Prosthesis
With the Argus II system, a camera mounted on a pair of glasses captures images, and the corresponding signals are fed wirelessly to a chip implanted near the retina. These signals drive an array of implanted electrodes that stimulate retinal cells, producing the perception of spots of light in the patient's field of view.
So far, the Argus II can restore only limited vision. One testimonial features a male patient from Manchester, England, who received the Argus II Retinal Prosthesis System in 2009. He recalls seeing "flashing lights and rockets going off" at a fireworks display and, at a pub, being able to "know where the people are. [He] can't make out the faces."[6] Although we can acquire from patients an understanding of recovered visual function, it is difficult to understand exactly what that restored eyesight is like.
What We Intend to Do
Judging from the sparsity of information from clinical trials, as well as the difficulty of understanding from testimonials what a patient actually sees, it is beyond the scope of this project to determine exactly what a patient sees through the retinal prosthesis developed by the Palanker Lab. However, we can note a relationship between pixel density and visual acuity, as well as between voltage control/localization and perceived pixel dynamic range/size.
In light of this, our group will develop a simulation of restored vision with the adjustment of these two parameters as focal points. In addition, we will incorporate elements of computer vision, such as facial recognition, to enhance the information output by the simulation.
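The second parameter, perceived dynamic range, might be modeled by quantizing each simulated pixel's brightness to a limited number of levels, as in the sketch below. The mapping is a hypothetical model of our own, not a measured perceptual response.

```python
import numpy as np

def quantize_dynamic_range(grid, gray_levels):
    """Map a grid of simulated-pixel brightnesses (values in [0, 1])
    onto a limited number of perceived brightness levels, modeling the
    dynamic range achievable with better per-pixel voltage control.
    Hypothetical model for illustration only."""
    levels = gray_levels - 1
    return np.round(grid * levels) / levels

grid = np.array([[0.05, 0.30], [0.62, 0.95]])
# With 2 levels every cell is black or white; with 4 levels,
# intermediate brightness steps become visible.
two = quantize_dynamic_range(grid, 2)
four = quantize_dynamic_range(grid, 4)
```

Sweeping `gray_levels` alongside the grid resolution would let the simulator show how the two hardware parameters jointly shape the restored percept.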
Background
Starting from a basic understanding of computer vision, I consulted with Professors Daniel Palanker and E.J. Chichilnisky on what features my restored-vision simulator should incorporate. I also consulted with Ph.D. candidate Georges Goetz and Ariel Rokem, Ph.D., on how I should build my simulator.

Methods
Results
Conclusions and Future Work
References - Resources and related work
References
[1] "Project Xense Retinal Implant Simulation." etc.cmu.edu. Carnegie Mellon University, 2012. Web. 14 Mar 2014. <http://www.etc.cmu.edu/projects/tatrc>.
[2] "Photo to colored dot patterns with OpenCV." opencv-code.com. OpenCV Code, 13 Feb 2013. Web. 3 Mar 2014. <https://opencv-code.com/tutorials/photo-to-colored-dot-patterns-with-opencv>.
[3] "Introduction to OpenCV." opencv-python-tutroals.readthedocs.org. OpenCV-Python Tutorials, 18 Feb 2014. Web. 15 Mar 2014. <http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_setup/py_table_of_contents_setup/py_table_of_contents_setup.html>.
[4] Brian A. Wandell. Foundations of Vision, Chapter 9. <https://www.stanford.edu/group/vista/cgi-bin/FOV/chapter-9-color/#Linear_Models>.
[5] "The Argus® II Retinal Prosthesis System." 2-sight.eu. The Argus II Retinal Prosthesis System, 2014. Web. 15 Mar 2014. <http://2-sight.eu/en/about-us-en>.
[6] "Testimonial for Argus II Retinal Prosthesis System." ophthalmologytimes.modernmedicine.com. Ophthalmology Times, 28 Feb 2013. Web. 28 Feb 2014. <http://ophthalmologytimes.modernmedicine.com/ophthalmologytimes/news/user-defined-tags/paulo-stanga/testimonial-argus-ii-retinal-prosthesis-syste>.
Software
Image Systems Engineering Toolbox http://imageval.com/
Python OpenCV http://opencv.org/
Appendix I - Code and Data
In the belief that the techniques used may be illustrated best by example, the Python code used to perform the image processing is available below.
Code
All code was written in Python 2.7.5 for Mac OS X Mavericks. External dependencies include the OpenCV and NumPy libraries.
Presentation
This project was given as a 5-minute presentation to the PSYCH221 Winter 2013 class at Stanford. The presentation files used are linked below.