Simulation of Human Visual Degradation and Computer-aided Enhancement in Oculus

From Psych 221 Image Systems Engineering

Introduction

Augmented reality (AR) is an emerging technology in which a view of reality is augmented (or possibly even diminished, for certain purposes) by a computer. By contrast, virtual reality (VR) replaces the real world with a simulated one.[1] The overall project Streaming and Augmenting Stereo Camera Images, as suggested in the Psych 221 Project Ideas for 2015, aims to combine a stereo camera (an RGB-depth sensor) with a VR headset (Oculus Rift DK2) to achieve AR. One of this project's long-term goals is to simulate certain kinds of human visual degradation in the real scene, and then to create computer-aided enhancement algorithms to counteract that degradation. The whole software pipeline is illustrated in Figure 1.

Fig. 1 – Streaming and Augmenting Stereo Camera Images-Pipeline

The pipeline is almost complete except for object recognition, which involves common computer-vision problems such as edge detection and is crucial to the computer-aided algorithms. For this year, my project focuses on the image processing part of the pipeline: simulating certain degradations of the human visual system and then enhancing the images with computer-aided algorithms. In the absence of accurate object recognition, it is much more convenient to work in a virtual reality software environment instead of streaming the data all the way from the stereo camera.

Background

The particular human visual degradation studied in this project is color blindness.

Color blindness

Color blindness, or color vision deficiency, is the inability or decreased ability to see color, or perceive color differences, under normal lighting conditions. Color blindness affects a significant percentage of the population, with protanopia and deuteranopia (red–green color blindness) being the most common types.

People with protanopia, deuteranopia, or tritanopia (blue color blindness) are dichromats, which means they can match any color they see with a mixture of just two primary colors, whereas people with normal vision are trichromats and require three primary colors. Physiologically, dichromacy originates from the lack of one kind of cone cell in the retina.

Fig. 2 – Illustration of the distribution of cone cells in the retina of an individual with normal color vision (left), and a color blind (protanopic) one.

Opponent color

Opponent color theory states that the human visual system interprets information about color by processing signals from cones and rods in an antagonistic manner: responses to one color of an opponent channel are antagonistic to responses to the other color. The theory was first proposed by Ewald Hering in 1892. In 1957, Leo Hurvich and Dorothea Jameson provided quantitative data for opponent color theory through hue cancellation experiments. A hue cancellation experiment starts with a color (e.g., yellow) and determines how much of the opponent color (e.g., blue) of one of the starting color's components must be added to eliminate any hint of that component from the starting color. [3]

Methods

Hardware

The hardware in this project is an Oculus Rift headset connected to a Mac computer.

Fig. 3 – The Oculus Rift headset.

Software

The software environment is the 3D game engine Unity (version 5.2.2, Personal edition), which is used to simulate both the human visual degradation and the computer-aided enhancement.

Based on opponent color theory, I tune the illumination by adding a colored global shader to the Camera (the user's view) in Unity. For example, a yellow shader is used to suppress the response of the blue channel so as to simulate tritanopia. To help users recognize objects in the dichromat's view of the scene, different kinds of object shaders are applied to specific objects.
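The Unity shaders themselves are not reproduced here, but the effect of a global color shader can be sketched as a per-pixel multiply of the scene by the shader's tint. The following Python/NumPy snippet is a minimal illustration (the function name and the all-white test scene are illustrative, not part of the Unity project):

```python
import numpy as np

def apply_global_shader(image, shader_rgb):
    """Approximate a global color shader as a per-pixel multiply.

    image: float array of shape (H, W, 3), values in [0, 1].
    shader_rgb: the shader's tint, e.g. yellow (1, 1, 0) to suppress blue.
    """
    return image * np.asarray(shader_rgb, dtype=float)

# A yellow global shader zeroes the blue channel, mimicking the missing
# S-cone response of a tritanope while leaving red and green untouched.
scene = np.ones((2, 2, 3))                               # small all-white test scene
tritanope_view = apply_global_shader(scene, (1.0, 1.0, 0.0))
```

In Unity this multiply would be performed by an image-effect shader attached to the Camera; the sketch above only captures the arithmetic.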

Fig. 4 – Snapshot of Unity, also showing where the global shader and object shaders are applied.


Process

The overall process is to first generate a virtual 3D scene in Unity, then perform the algorithms for the simulation of visual degradation and computer-aided enhancement, and finally feed the images to the Oculus screens.

Fig. 5 – Oculus screens' view of simulation.

Results

In order to give users an optical illusion of binocular disparity and thus a 3D point of view, the two screen images of the Oculus are blurred in certain ways (notice the colorful fringes on the edges of the object in Figure 5). Because of that, the simulation results are shown as a single-screen image, which is an approximation of real human vision, instead of as two-screen images.

Simulation of color blindness

As mentioned before, the project uses the idea of hue cancellation to simulate the view of a color-blind observer, that is, suppressing the response of a certain color channel by applying an opponent-color global shader. The three primary colors of human vision are Red, Green, and Blue (RGB); thus the opponent-color global shaders are Cyan, Magenta, and Yellow (CMY).
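The pairing of each dichromacy type with its opponent shader can be summarized in code. This is a hypothetical sketch of the mapping described above (the dictionary and function names are illustrative), again modeling the shader as a per-pixel multiply:

```python
import numpy as np

# Each dichromacy type is paired with the opponent (CMY) shader that
# suppresses the corresponding RGB channel, as described in the text.
OPPONENT_SHADER = {
    "protanopia":   (0.0, 1.0, 1.0),  # cyan suppresses red
    "deuteranopia": (1.0, 0.0, 1.0),  # magenta suppresses green
    "tritanopia":   (1.0, 1.0, 0.0),  # yellow suppresses blue
}

def simulate_dichromacy(image, kind):
    """Multiply the scene by the opponent-color shader for `kind`."""
    return image * np.asarray(OPPONENT_SHADER[kind], dtype=float)

scene = np.full((1, 1, 3), 0.5)                 # a single mid-gray pixel
view = simulate_dichromacy(scene, "protanopia") # red channel driven to 0
```

This corresponds to panels (b) through (d) of Figure 6, where each shader removes one channel from the original scene.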

Fig. 6 – Simulation of color blindness. (a) Original scene. (b) Scene with yellow shader to simulate the view of tritanopia. (c) Scene with cyan shader to simulate the view of protanopia. (d) Scene with magenta shader to simulate the view of deuteranopia.


Simulation of computer-aided enhancement

In order to help users recognize objects more easily from the dichromat's point of view, individual shaders are applied to the target object, changing the object's texture, color strategy, and even shape. The effect of these computer-aided shaders is presented under continuously changing global shaders. The object chosen for this set of experiments is the ceramic vase on the desk. The original videos, which can be downloaded from the link in the appendix, are compressed to GIF images for display on the wiki page.

Fig. 7 – Original vase from the dichromat's point of view.

Due to its white ceramic texture, the original vase is quite hard to recognize from the dichromat's point of view. To enhance the user's recognition of the vase, the first object shader applied is a brown pottery texture, which makes the vase more recognizable under almost every global shader.

Fig. 8 – Vase with pottery shader in the dichromat's point of view.

The second experiment changes the color of the vase; in Figure 9 the vase is colored uniformly red.

Fig. 9 – Vase with all-red shader in the dichromat's point of view.

Further exploring the color strategy, the third experiment colors the vase according to its surface normals, which relates its shape to its color appearance.
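A common way to implement such a shader is the standard normal-visualization encoding, which maps each component of the unit normal from [-1, 1] into [0, 1]. The exact shader used in the experiment is not given in the text, so the following is an assumed sketch of that encoding:

```python
import numpy as np

def normal_to_color(n):
    """Map a surface normal to an RGB color in [0, 1].

    Uses the standard encoding color = 0.5 * n + 0.5, so each normal
    direction gets a distinct hue, tying shape to color appearance.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)       # ensure a unit normal
    return 0.5 * n + 0.5

# A normal pointing straight at the camera (+z) maps to a bluish color.
color = normal_to_color((0.0, 0.0, 1.0))
```

Because the color depends only on geometry, this cue survives any of the global CMY shaders in at least two channels.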

Fig. 10 – Vase with surface normal color strategy shader in the dichromat's point of view.

The final experiment changes the shape of the object, specifically segmenting the vase into slices.

Fig. 11 – Vase with segmenting shader from the dichromat's point of view.

Conclusions

This project presents the simulation of human visual degradation (color blindness) and computer-aided enhancement for the virtual reality headset Oculus Rift. The results show that changing the object's texture and shape (segmentation) helps users recognize objects from the dichromat's point of view, whereas the effectiveness of changing the color strategy depends on the particular type of dichromacy. Future work will conduct more careful psychological experiments to evaluate the effect of these shaders on recognition.

References

1. Steuer, Jonathan. Defining Virtual Reality: Dimensions Determining Telepresence, Department of Communication, Stanford University. 15 October 1993.
2. Wong, Bang (2011). "Color blindness". Nature Methods 8 (6): 441. doi:10.1038/nmeth.1618. PMID 21774112.
3. Hurvich, Leo M.; Jameson, Dorothea (November 1957). "An opponent-process theory of color vision". Psychological Review 64 (6, Part I): 384–404. doi:10.1037/h0041403. PMID 13505974.

Appendix

The entire Unity project is available at this Dropbox link: [1]. The uncompressed videos are also available here. [2]