PetykiewiczBuckley

From Psych 221 Image Systems Engineering
Revision as of 01:35, 21 March 2012

Introduction

Haze is caused by Rayleigh and Mie scattering of light by particles in the atmosphere, such as droplets of water or smoke. These particles, which lie between the viewer and a distant object, scatter light from other areas toward the viewer (airlight) and prevent some of the light from the distant object from reaching the viewer (see schematic). Restoring an image to non-hazy conditions is desirable because we like clear days. Since the amount of Rayleigh scattering increases in proportion to <math>\lambda^{-4}</math>, where <math>\lambda</math> is the wavelength of light, longer wavelengths scatter less, and thus the image appears less hazy at these wavelengths. Here, we investigate using NIR and red spectral data to add detail and thus dehaze images, using a modified version of the algorithm described in [1]. Our study differs from the investigation in [1] in that we do additional spectral characterization, made possible by the hyperspectral data made available to us. The algorithm takes detail from a long-wavelength channel and adds it to a visible luminance channel to dehaze the image. Our investigation includes data from 400-1000 nm; we investigate different possibilities for implementing an NIR filter for dehazing, and the possibility of using the red camera sensor from three different commercial cameras to dehaze all three color channels. We perform a viewer study to assess the effectiveness of our dehazing. We also explain our results using spectral data and by comparing to an online database [2] of reflectance spectra. Finally, we dehaze RGB images taken from the dataset from [1] (available online) and compare dehazing of these images with NIR data (also available in the dataset) and with the R pixel values, obtaining (to us) more attractive results with the R camera sensor.
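As a quick worked check on that wavelength dependence (our own illustrative arithmetic, not from the project writeup), the relative strength of Rayleigh scattering at a blue and an NIR wavelength is:

```python
# Rayleigh scattering scales as lambda**-4, so the ratio between two
# wavelengths is (lambda2 / lambda1)**4.  Comparing 450 nm to 850 nm:
ratio = (850 / 450) ** 4
print(f"Rayleigh scattering at 450 nm is ~{ratio:.1f}x stronger than at 850 nm")
```

That factor of roughly thirteen is why the scene looks much clearer through an NIR band than through the blue channel.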

Methods

We started with the hyperspectral panorama taken from the dish. Initially, we converted (scenes from) this data to the CIE XYZ space by interpolating the XYZ color-matching curves from ISET at the wavelengths at which the hyperspectral data were taken, and using these curves to weight the spectral data (see attached code). To view the image, we then converted the XYZ data to sRGB and used the MATLAB Image Processing Toolbox to display it.
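A minimal sketch of that weighting step (illustrative only: the cube, the band centers, and the bell-shaped curve standing in for the ISET color-matching data are all made-up stand-ins, not the project's MATLAB code):

```python
import numpy as np

# Hyperspectral cube (height, width, n_bands) and its band centers in nm;
# both are random stand-ins for the real panorama data.
rng = np.random.default_rng(0)
cube = rng.random((4, 5, 31))
band_nm = np.linspace(400, 1000, 31)

# Tabulated color-matching curve: a crude bell shape standing in for the
# CIE ybar curve loaded from ISET.
cmf_nm = np.arange(380.0, 781.0, 5.0)
ybar = np.exp(-0.5 * ((cmf_nm - 555.0) / 50.0) ** 2)

# Interpolate the curve onto the measurement wavelengths; outside its
# support (the NIR bands) the weight is zero.
w = np.interp(band_nm, cmf_nm, ybar, left=0.0, right=0.0)

# Weight the spectral axis and sum to get one tristimulus channel (Y here).
Y = cube @ w  # shape (height, width)
```

The X and Z channels follow the same pattern with their own curves, after which a standard XYZ-to-sRGB matrix gives a displayable image.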

To dehaze the image, we used as a starting point the algorithm described in reference [1], operating on the L* channel of the CIELAB representation of both the visible and NIR data. This algorithm uses a weighted least squares (WLS) transform (described in [3]) to decompose the image into an over-complete multiscale representation consisting of a series of approximation and normalized detail images of different degrees. The detail images formed from the NIR and visible intensity images are then compared pixel-wise and combined to form a composite image containing features from both the NIR and visible data. The approximation images are given by:

<br><math>I^a_k = W_{\lambda_k}(I), \qquad \lambda_k = c^k \lambda_0</math>

where <math>I^a_k</math> is the kth-level approximation image (which can be either visible, <math>I^{a,V}_k</math>, or NIR, <math>I^{a,N}_k</math>), and where we implement levels from 0 to n. <math>\lambda_k</math> is a measure of the coarseness of the approximation image, and is <math>\lambda_0</math> for the first image. As in [1] we chose c = 2 and n = 6, although we experimented with different values and obtained similar results. <math>I</math> is either an intensity channel of the visible image or the NIR image. We compared both a linear intensity channel and a nonlinear (L*) channel for the dehazing, obtaining similar results. The NIR intensity image was also converted to an L* nonlinear scale before applying the filter in the case that the L* visible image channel was used. The function <math>W</math> performs the approximation operation using the weighted least squares method described in [3]; the MATLAB code implementing this function is freely available online. The detail images are differences of approximation images, as described in [1],[4], and are given by

<br><math>I^d_k = \frac{I^a_{k-1}-I^a_k}{I^a_k}</math>
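The decomposition can be sketched in a few lines. This is an illustrative stand-in, not the project's code: a simple box blur (with width growing geometrically per level) replaces the edge-preserving WLS operator of Farbman et al. [3], whose actual MATLAB implementation the project used.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur -- a crude stand-in for the edge-preserving
    WLS smoothing operator W used in the actual project code."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def multiscale_decompose(img, c=2, n=6, eps=1e-6):
    """Approximation images I_k^a (k = 0..n, coarseness growing as c**k)
    and normalized detail images I_k^d = (I_{k-1}^a - I_k^a) / I_k^a."""
    approx = [np.asarray(img, dtype=float)]
    for k in range(1, n + 1):
        approx.append(box_blur(approx[0], c ** k))
    detail = [(approx[k - 1] - approx[k]) / (approx[k] + eps)
              for k in range(1, n + 1)]
    return approx, detail
```

Because each detail image is a normalized difference of adjacent approximations, multiplying the coarsest approximation by the product of (1 + detail) terms recovers the original intensity image, which is what the synthesis step below exploits.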

An example of original, approximation and detail images is shown below.

The original color image An example intensity approximation image An example intensity detail image


The synthesis procedure is based on the observation that the detail definition telescopes, so the finest approximation can be recovered from the coarsest one and the details:

<br><math>I^a_0 = I^a_n \prod_{k=1}^{n}\left(1 + I^d_k\right)</math>

and that the NIR image has higher contrast where there is haze; therefore, at each level and pixel we can take whichever detail image has the higher intensity to create the fused image

<br><math>F = I^{a,V}_n \prod_{k=1}^{n}\left(1 + \max\left(I^{d,V}_k, I^{d,N}_k\right)\right)</math>
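The fusion rule can be sketched as follows (the function and argument names are our own; the detail pyramids are assumed to be lists of equally sized arrays from the visible and NIR decompositions):

```python
import numpy as np

def fuse_details(approx_vis_n, details_vis, details_nir):
    """Per pixel and per level, keep the larger of the visible and NIR
    detail coefficients, then rebuild the image via the telescoping
    product I = I_n^a * prod_k (1 + I_k^d)."""
    fused = np.asarray(approx_vis_n, dtype=float).copy()
    for d_v, d_n in zip(details_vis, details_nir):
        fused = fused * (1.0 + np.maximum(d_v, d_n))
    return fused
```

The fused result replaces the visible luminance channel; chrominance is kept from the original color image.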

In practice, we found that simply replacing the visible detail with the NIR detail gave results not much different from this. Our first attempts at dehazing, using bands in the NIR (850 nm), resulted in a dehazed sky but "snowy" mountains (see results section). After attempting to modify the algorithm to fix this, we looked more closely at the spectral data to explain the result. Simply looking at the intensity images in different spectral bands, it is clear that the images change as you move from the visible to the NIR; the images almost seem to invert.

Z channel intensity Y channel intensity X channel intensity
700-775 nm intensity 775-850 nm intensity 850-920 nm intensity

Taking the intensity image at different wavelength bands in the hyperspectral data and summing in the x direction yielded the plot of intensity versus image y-height versus wavelength shown below left. The horizon is clearly visible, and the image intensities seem almost to invert at around 700 nm. Plotting spectra by this method for different areas of the image, we were able to discern that the problem perhaps resulted from different reflectances in the visible and NIR. In addition, atmospheric absorption lines are notable in the data.
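The x-summed intensity map is a one-liner in NumPy; the cube here is a random stand-in with the assumed (y-height, x-width, band) layout:

```python
import numpy as np

# Hyperspectral stand-in cube: (y-height, x-width, n_bands).
rng = np.random.default_rng(1)
cube = rng.random((100, 200, 61))

# Sum along the x (width) axis: rows index image y-height, columns index
# wavelength band -- the intensity-vs-height-vs-wavelength map in the text.
profile = cube.sum(axis=1)  # shape (100, 61)
```

Rendering `profile` as an image (e.g. with `imshow`) makes horizontal structure such as the horizon and any band-to-band inversion immediately visible.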

Spectral reflectances of selected materials

Consulting an online database [2] (see figure), it became apparent that this is a result of the huge jump in reflectance for vegetation at around 700 nm. At longer (NIR) wavelengths, vegetation is highly reflective, and at certain wavelengths there will be no contrast at all between, for example, soil and trees. This means that at these wavelengths, even if you can see through the haze, there will be no contrast (details) to add!


We modified this algorithm to use as the NIR data for dehazing: (1) narrow wavelength bands of hyperspectral data, from 574 nm to 974 nm (shorter wavelengths made the image more hazy); (2) Gaussian bands of hyperspectral data in the same range; and (3) a red camera sensor filter spectrum, used both to "read" the hyperspectral data (as opposed to the XYZ curves we initially used) and to dehaze it. We also examined the hyperspectral data for spectral characteristics that would help us determine the best wavelength range to use, and why specific wavelength ranges worked better than others. We performed a viewer study on five people, in which we displayed dehazed and original images side by side and asked which they thought was better (or if they couldn't tell them apart). Images included were (i) dehazed with 700 nm (2 nm spectral band), without and (ii) with Photoshop white balancing, (iii) dehazed with 775 nm (10 nm spectral band), (iv) dehazed with the QImaging red sensor, without and (v) with Photoshop white balancing, (vi) a panorama dehazed with the same 700 nm spectral band, and (vii) a panorama dehazed with the QImaging red sensor. Once we had determined that we could use the red camera sensor to dehaze images, we downloaded the images (available online) used in the paper [1] and dehazed them using both the R camera sensor and the NIR data to compare the two. Results are discussed in that section.
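Option (2), weighting the spectral axis by a Gaussian band, can be sketched as below (the names and band parameters are illustrative; replacing the Gaussian weights with a measured red-sensor curve gives option (3)):

```python
import numpy as np

def band_image(cube, band_nm, center_nm, fwhm_nm):
    """Collapse a hyperspectral cube (h, w, n_bands) to one channel by
    weighting the spectral axis with a normalized Gaussian band.
    center_nm / fwhm_nm are the band center and full width at half max."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.exp(-0.5 * ((band_nm - center_nm) / sigma) ** 2)
    return cube @ (w / w.sum())
```

The resulting single-channel image then takes the place of the NIR input to the fusion step.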

Results

Original Dehazed with X-.194*Z Dehazed with IR


Original Dehazed with 684 nm Dehazed with 694 nm
Dehazed with 704 nm Dehazed with 714 nm Dehazed with 734 nm
Dehazed with 775 nm Dehazed with 854 nm Dehazed with 914 nm


Summary and Conclusions

  • Implemented dehazing algorithm from Schaul et al.
  • Dehazed images with different spectral bands
  • Connected results with material spectral reflectivities
  • Red camera sensor can dehaze images!
  • Performed viewer study

Future Work

  • Improve the dehazing algorithm, e.g. using the work by He et al. [6]
  • Create a metric for dehazing quality, e.g. comparing to the same pictures taken on a clear day, or using polarization data and the algorithm from Schechner et al. [5]
  • Use linear combination of RGB data to extrapolate intensities in a narrow-band at an optimal wavelength

References

  1. L. Schaul, C. Fredembach, and S. Süsstrunk, "Color Image Dehazing using the Near-Infrared," IEEE International Conference on Image Processing, 2009.
  2. A. M. Baldridge, S. J. Hook, C. I. Grove, and G. Rivera, "The ASTER Spectral Library Version 2.0," Remote Sensing of Environment, vol. 113, pp. 711-715, 2009.
  3. Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," International Conference on Computer Graphics and Interactive Techniques, pp. 1-10, 2008.
  4. A. Toet, "Hierarchical image fusion," Machine Vision and Applications, vol. 3, no. 1, pp. 1-11, 1990.
  5. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Instant dehazing of images using polarization," IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 325-332, 2001.
  6. K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Conference on Computer Vision and Pattern Recognition, pp. 1957-1963, 2009.
  7. C. Fredembach and S. Süsstrunk, "Colouring the near infrared," IS&T 16th Color Imaging Conference, pp. 176-182, 2008.

Appendix I

Appendix II

  • Jan and Sonia split the work exactly down the center.