RPoulsonPsych221Project

From Psych 221 Image Systems Engineering



Revision as of 05:00, 21 March 2012

Introduction

A beautifully rendered image on a computer screen or cell phone is the result of complex algorithms, careful measurements, intrinsically elegant machinery, and hard work. Designers must take into account the limitations and brilliance of the human visual system in order to produce an outcome that looks as close to the real scene as possible. Through a variety of processes, accounting for different technical limitations as well as human-related issues, a vivid replica is created for viewing delight. One of these steps is that of creating color constancy (or chromatic adaptation): specifically, mimicking the human visual system's ability to perceive the color of an object, or a scene of objects, as identical no matter what the illumination on the object truly is (Gevers & Gijsenij, 2011). This feature of the human visual system is necessary to correctly identify features of objects. For example, an apple viewed under the fluorescent light of a kitchen is red, and the same apple is also red when viewed in daylight.


Specifically, my project dealt with altering the illumination of a painting and attempting to create color constancy with a variety of methods, to find the closest replica of a direct rendering of the image under a preferred light source. For instance, my preferred light source was D65, or daylight; I changed the illumination on the image to fluorescent and tried a variety of transforms to perform color balancing on the resulting image. These transformations were applied to a hyperspectral image of “Virgin, Child and St. John,” a painting by the 15th-century Italian artist Jacopo del Sellaio, which is currently on display at the Cantor Art Center.

Methods

Changing the illuminant of an image is simple: one needs only to apply a linear transform. The more computationally interesting component is the color balancing. I set the illumination on the Sellaio Face image to one of five different lights (D50, D75, Fluorescent, Fluorescent11, and Tungsten); I then created four different transforms to attempt color constancy/balancing. The resulting image was analyzed using its Delta E value to find the best match.

Simple White Point XYZ Scaling

The creation of color constancy is possible through a variety of methods. In a simple first attempt, I created an easy diagonal transform. Taking a cue from the color-balancing lecture, I sampled the XYZ values of a white point in the image under both the current illumination and the illumination into which I wished to convert, and used the per-channel ratios as the diagonal entries. This is labeled the “White Conversion.” For thoroughness, I also created a full 3x3 matrix using a sample of white points to produce a richer transformation.
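The white-point scaling idea can be sketched as follows. This is a Python/NumPy illustration rather than the project's actual MATLAB code, and the sample XYZ values are hypothetical stand-ins for a white patch measured under the two illuminants.

```python
import numpy as np

# Hypothetical XYZ values of the same white patch in the image under the
# current illuminant and under the desired (daylight) illuminant.
white_current = np.array([0.92, 0.97, 0.60])   # e.g. under tungsten
white_desired = np.array([0.95, 1.00, 1.09])   # e.g. under D65

# Diagonal transform: per-channel ratios of the white-point XYZ values.
L_diag = np.diag(white_desired / white_current)

# Applying it to any pixel's XYZ scales each channel independently.
pixel = np.array([0.40, 0.35, 0.20])
balanced = L_diag @ pixel
```

Because the transform is diagonal, each of X, Y, and Z is rescaled independently; the full 3x3 variant instead fits a general matrix to a sample of white points, which can also mix channels.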


Among the worst: A rendering from tungsten to daylight using the Simple White Point XYZ Scaling diagonal transform.


Among the worst: A rendering from tungsten to daylight using the Simple White Point XYZ Scaling full 3x3 transform.


Full Image XYZ Scaling

In addition to the white conversion, I also used a script that created a transform by using the entirety of each image in relation to the other, instead of single points. These transforms were created both in full 3x3 form and as a diagonal, for comparison. Specifically, this script solved (in the least-squares sense) for the transformation L mapping one set of XYZ values into another, satisfying:

XYZ_desired = L · XYZ_current
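A least-squares fit of L over all pixels, satisfying XYZ_desired = L · XYZ_current, can be sketched as below. This is a Python/NumPy illustration (the project itself used MATLAB scripts), and the image data here is a randomly generated stand-in rather than the Sellaio image.

```python
import numpy as np

# Hypothetical stand-in data: each image is flattened to an N x 3 array
# whose rows are pixels and whose columns are X, Y, Z.
rng = np.random.default_rng(0)
xyz_current = rng.uniform(0.05, 1.0, size=(500, 3))
true_L = np.array([[1.10, 0.05, 0.00],
                   [0.02, 0.95, 0.01],
                   [0.00, 0.03, 1.40]])
xyz_desired = xyz_current @ true_L.T

# Full 3x3: solve xyz_current @ L.T ~= xyz_desired in the least-squares
# sense, i.e. the per-pixel relation xyz_desired = L @ xyz_current.
LT, *_ = np.linalg.lstsq(xyz_current, xyz_desired, rcond=None)
L_full = LT.T

# Diagonal variant: fit each XYZ channel independently, column by column.
scales = (xyz_current * xyz_desired).sum(axis=0) / (xyz_current ** 2).sum(axis=0)
L_diag = np.diag(scales)
```

Because every pixel contributes to the fit, this transform reflects the whole scene rather than a single sampled white point; the diagonal variant constrains the fit to per-channel scaling.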


Best image result -- D75 rendered into Daylight using a Full Image Full 3x3


Second Best image result -- D75 rendered into Daylight using a Full Image Diagonal.



Implementation

In order to complete this project, several functions were written to create the transforms, apply the transformation, and evaluate the quality of the resulting image. The transformations were created via four scripts, one for each type of scaling, with both full and diagonal transformations within each. The transformation script itself simply applies the transform to the XYZ value of each pixel and assembles a new version of the image. The result was evaluated using a Delta E calculation, both by comparing various single points across the image and by comparing the image as a whole. I also examined the predicted ranges of XYZ values according to each transform.

For completeness, the following scripts and functions were created (or altered) and used in the implementation. These are found in the attached code archive:

  • Please note: these functions are specific to the hyperspectral image of SellaioFace1.
  • PerformXYZTransform.m: Takes the transform, L, and the XYZ values of the image that needs to be color balanced. It performs the transformation and returns the XYZ values of a new, balanced image.
  • createDiagonalInnocentTransform.m: Takes the XYZ values of the image we have and the image that we want (created using knowledge of both the current illumination and the desired one). It returns a transform built as a diagonal of the ratios of the XYZ values of a white point in the image.
  • createFullInnocentTransform.m: This function takes the XYZ values of the image we have and the image we want, creating a transform from the inverse multiplication of the XYZ values of one with the other.
  • CreateFullImageXYZTransform.m: This is ImagEval Consult's s_XYZsceneIlluminantTransforms.m, slightly altered to be a function that accepts as a parameter the current type of lighting on the painting, in the form of a filename. The output is a transform created by using the entire Sellaio image under both lights and solving the XYZ_desired = L · XYZ_current equation to find a full 3x3 transformation.

  • CreateDiagonalImageXYZTransform.m: This is ImagEval Consult's s_XYZsceneIlluminantTransforms.m, slightly altered to be a function that accepts as a parameter the current type of lighting on the painting, in the form of a filename. The output is a transform created by using the entire Sellaio image under both lights and solving the same equation column by column to find a transformation in the form of a diagonal.

  • paintingIllumination.m: This is the function that calls all the others. In its current form, it takes as a parameter the type of light you want to start with, in file form, and displays an image that starts under that light and is color balanced for daylight.
  • getSampleXYZ.m: This is a function that takes the XYZ values for a version of the Sellaio Face image and returns 5 points in an array for a point-by-point calculation of the DeltaE values.
  • getWhiteXYZ.m: This function takes the XYZ values of a version of the Sellaio Face image and returns the XYZ values for the example white point in that particular version.
  • calculateDeltaE.m: This function takes the XYZ values of the image under the known illuminant, the image under the ideal illuminant, and the white point, to find and return the DeltaE values for five separate points in the images.
  • calculateAllImageDeltaE.m: This function takes the XYZ values of the image under the known illuminant, the image under the ideal illuminant, and the white point, to find and return the DeltaE value for the entire image.
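The Delta E evaluation used by the last two functions can be sketched as below: convert each XYZ value to CIELAB relative to the white point, then take the Euclidean distance (the CIE76 ΔE). This is a Python/NumPy illustration rather than the project's MATLAB code, and the D65 white point shown is an assumed example value.

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """Convert XYZ to CIELAB relative to the given white point (CIE 1976)."""
    t = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    # Piecewise cube-root function from the CIELAB definition.
    f = np.where(t > (6/29) ** 3, np.cbrt(t), t / (3 * (6/29) ** 2) + 4/29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(xyz1, xyz2, white):
    """CIE76 color difference between two XYZ values under one white point."""
    return np.linalg.norm(xyz_to_lab(xyz1, white) - xyz_to_lab(xyz2, white), axis=-1)

white = np.array([0.9505, 1.0, 1.089])  # assumed D65 white point
print(delta_e(white, white, white))     # identical colors -> 0.0
```

Averaging this quantity over every pixel gives a whole-image score like the ones reported in the Results section, where values below roughly one are generally taken to be imperceptible.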

Results





Transformation D50 to D65 using Full Image Scaling in a Diagonal

Transformation D50 to D65 using Full Image Scaling in a Full 3x3

Transformation D50 to D65 using White Point Scaling in a Diagonal

Transformation D50 to D65 using White Point Scaling in a Full 3x3


Transformation D75 to D65 using Full Image Scaling in a Diagonal


Transformation D75 to D65 using Full Image Scaling in a Full 3x3


Transformation D75 to D65 using White Point Scaling in a Diagonal


Transformation D75 to D65 using White Point Scaling in a Full 3x3


Transformation Fluorescent to D65 using Full Image Scaling in a Diagonal


Transformation Fluorescent to D65 using Full Image Scaling in a Full 3x3


Transformation Fluorescent to D65 using White Point Scaling in a Diagonal


Transformation Fluorescent to D65 using White Point Scaling in a Full 3x3


Transformation Fluorescent11 to D65 using Full Image Scaling in a Diagonal


Transformation Fluorescent11 to D65 using Full Image Scaling in a Full 3x3


Transformation Fluorescent11 to D65 using White Point Scaling in a Diagonal


Transformation Fluorescent11 to D65 using White Point Scaling in a Full 3x3


Transformation FluorescentOffice to D65 using Full Image Scaling in a Diagonal


Transformation FluorescentOffice to D65 using Full Image Scaling in a Full 3x3


Transformation FluorescentOffice to D65 using White Point Scaling in a Diagonal


Transformation FluorescentOffice to D65 using White Point Scaling in a Full 3x3


Transformation Tungsten to D65 using Full Image Scaling in a Diagonal


Transformation Tungsten to D65 using Full Image Scaling in a Full 3x3


Transformation Tungsten to D65 using White Point Scaling in a Diagonal


Transformation Tungsten to D65 using White Point Scaling in a Full 3x3


Delta E calculations on the images (computed as the direct difference from the directly rendered image):


White Point, Diagonal

  • D50 - 1.0277
  • D75 - 0.6308
  • Fluorescent - 2.5516
  • Fluorescent11 - 1.8634
  • FluorescentOffice - 3.2820
  • Tungsten - 2.5934

White Point, Full

  • D50 - 1.1639
  • D75 - 0.5742
  • Fluorescent - 2.2295
  • Fluorescent11 - 4.9412
  • FluorescentOffice - 3.1519
  • Tungsten - 3.1640

Full Image, Diagonal

  • D50 - 1.0277
  • D75 - 0.5181
  • Fluorescent - 2.5516
  • Fluorescent11 - 1.8634
  • FluorescentOffice - 3.2820
  • Tungsten - 2.5934

Full Image, Full

  • D50 - 0.2788
  • D75 - 0.1245
  • Fluorescent - 1.4601
  • Fluorescent11 - 1.0776
  • FluorescentOffice - 1.6314
  • Tungsten - 0.7635

Conclusions

My computationally simple transform worked reasonably well with lights close to daylight, but it was not close for the fluorescent or tungsten illuminants, adding a yellow tinge to those images. Using the full image to create the transform yielded the best results, with DeltaE values less than one, making the balanced image indistinguishable from the direct rendering. The best image resulted from a transform from D75 to daylight. Curiously, the full transform outperformed the diagonal in Full Image Scaling, but performed worse in Simple White Point Scaling.


If I could keep working on the project, I think it would be fascinating to look at what type of information the hyperspectral data could add to the color constancy effect.

References

Appendix I