RayChenPsych2012Project

From Psych 221 Image Systems Engineering




<center>[[File:rc_gweqn.png |250px]]</center>



Revision as of 21:12, 19 March 2012

== Background ==

The human visual system features color constancy, meaning that the perceived color of an object remains relatively constant under varying lighting conditions. This helps us identify objects, as our brain lets us recognize an object as being a consistent color regardless of the lighting environment. For example, a red shirt will look red under direct sunlight, but it will also look red indoors under fluorescent light.

However, if we were to measure the actual reflected light coming from the shirt under these two conditions, we would find that it differs. This is where problems arise. Think about the last time you took a picture with your digital camera and the colors just seemed wrong. This happens because cameras do not inherently perform color constancy. Fortunately, we can compensate for this by using color balancing algorithms.

== Methods ==

In this project, I explore a number of popular color balancing algorithms. Specifically, I implement Gray World, Max-RGB, and Gray-Edge. In addition, I compare the results to an existing state-of-the-art application of gamut mapping [1].

== Gray World ==

A simple but effective algorithm is Gray World. This method is based on Buchsbaum's explanation of the human visual system's color constancy property, which assumes that the average reflectance of a real-world scene is gray. That is, according to Buchsbaum, if we take an image with a large amount of color variation, the R, G, and B components should each average out to gray. The key point is that any deviation from this average is caused by the light source.

In other words, if we took a picture under orange lighting, the output image would appear more orange throughout, violating the Gray World assumption. If we rescale the RGB components of this image so that they average out to gray, we should be able to remove the effect of the orange light. To do this, we scale each RGB channel of the input image by the ratio of the gray level to that channel's average (computed using the equation shown above).
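The scaling described above can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard Gray World correction, not the project's actual code; the function name and the choice of the mean of the channel means as the gray level are assumptions:

```python
import numpy as np

def gray_world(img):
    """Gray World white balance: scale each channel so that its mean
    equals the overall gray level (the mean of the channel means)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # average R, G, B
    gray = channel_means.mean()                      # target gray level
    balanced = img * (gray / channel_means)          # per-channel gain
    return np.clip(balanced, 0, 255).astype(np.uint8)
```

Applied to an image with an orange cast (high R mean, low B mean), the R channel is attenuated and the B channel amplified until all three channel means coincide.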


== Max RGB ==
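Max-RGB (also known as the White-Patch assumption) estimates the illuminant from the maximum response in each color channel, on the premise that the brightest pixels reflect the light source itself. A minimal NumPy sketch of this standard method, not the project's actual implementation; the function name and the choice to equalize the channel maxima are assumptions:

```python
import numpy as np

def max_rgb(img):
    """Max-RGB white balance: treat the per-channel maxima as the
    illuminant estimate and scale channels so the maxima agree."""
    img = img.astype(np.float64)
    channel_max = img.reshape(-1, 3).max(axis=0)  # illuminant estimate
    target = channel_max.max()                    # brightest channel
    balanced = img * (target / channel_max)       # per-channel gain
    return np.clip(balanced, 0, 255).astype(np.uint8)
```

Under an orange light, the brightest pixel might read (200, 150, 100); Max-RGB scales the G and B channels up so that a white patch in the scene maps back to equal R, G, and B values.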

== Results ==


== Conclusions ==

== References ==

[1] A. Gijsenij, T. Gevers, J. van de Weijer. "Generalized Gamut Mapping using Image Derivative Structures for Color Constancy."

[2] J. van de Weijer, T. Gevers, A. Gijsenij. "Edge-Based Color Constancy."

== Appendix I - Code and Data ==

=== Code ===