Arthur Alaniz, Tina Mantaring

Introduction

As digital cameras become more widespread, the imaging algorithms that go into them have grown more sophisticated. The typical digital camera uses a processing pipeline to convert raw sensor values into a final image, and the stages of this pipeline may include correcting for hardware defects, interpolating color filter array values, removing noise, and performing color space transformations. As more attention is given to this field, the pipeline algorithms are also becoming increasingly complex. However, one must ask the question: do these complex algorithms really outperform their simpler, more straightforward counterparts?
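As a rough illustration of this kind of staged processing, here is a minimal pipeline skeleton in Python; the stage functions (and the trivial stand-ins used in the example) are hypothetical placeholders for the algorithms discussed in the rest of this report.

```python
import numpy as np

def run_pipeline(raw_cfa, denoise, demosaic, ccm=None, black_level=0.0):
    """Illustrative camera-pipeline skeleton: offset/defect correction,
    CFA denoising, demosaicking, and an optional color transform.
    The `denoise` and `demosaic` stages are supplied by the caller."""
    data = np.clip(raw_cfa.astype(np.float64) - black_level, 0.0, None)
    data = denoise(data)                  # noise removal on the Bayer mosaic
    rgb = demosaic(data)                  # CFA interpolation -> H x W x 3
    if ccm is not None:                   # optional color-correction matrix
        rgb = rgb @ ccm.T
    return np.clip(rgb, 0.0, 1.0)

# Example with trivial stand-in stages (identity denoiser, naive demosaic):
cfa = np.random.default_rng(0).random((8, 8))
rgb = run_pipeline(cfa, denoise=lambda z: z,
                   demosaic=lambda z: np.dstack([z, z, z]))
print(rgb.shape)   # (8, 8, 3)
```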

Methods

Denoising and Demosaicking

For our project, the primary algorithm we chose for our imaging pipeline was the one presented in the paper “Denoising and Interpolation of Noisy Bayer Data with Adaptive Cross-Color Filters” by D. Paliy, A. Foi, R. Bilcu, and V. Katkovnik [1]. Their technique performs simultaneous denoising and demosaicking using directional adaptive filters based on the concepts of local polynomial approximation (LPA) and intersection of confidence intervals (ICI). Their paper is summarized in this section.

Local Polynomial Approximation

Local polynomial approximation, or LPA, works on the assumption that the data in some local region can be fitted by a polynomial function. The cross-color filters used in the algorithm are linear combinations of LPA smoothing and difference filters that operate on complementary color channels.

The above figure shows an example of an LPA smoothing filter (left) and an LPA difference filter (right). The combined LPA cross-color filter is shown below, where the differing colors illustrate that the smoothing and difference components work on different color channels.

Let $g^{(m,d)}_{h,\theta}$ denote a directional 1D convolution kernel, where $m$ is the polynomial order, $h$ is the scale or size of the kernel (kernel width), $d$ is the order of the derivative, and $\theta$ is the direction. The interpolation kernels and the denoising kernels are then built as linear combinations of these smoothing ($d = 0$) and difference ($d = 1$) kernels acting on complementary color channels; the explicit expressions can be found in [1].
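To make the LPA construction concrete, here is a small numpy sketch of how smoothing and difference kernels can be derived from a weighted least-squares polynomial fit on a one-sided support; the support, weights, and polynomial order are illustrative choices, not the ones used in [1].

```python
import numpy as np
from math import factorial

def lpa_kernel(offsets, weights, order, deriv=0):
    """Weighted least-squares LPA kernel on a set of 1D sample offsets.
    Returns g such that g @ z estimates the deriv-th derivative of the
    local polynomial model at offset 0."""
    t = np.asarray(offsets, dtype=float)
    T = np.vander(t, N=order + 1, increasing=True)    # columns: 1, t, t^2, ...
    W = np.diag(np.asarray(weights, dtype=float))
    A = np.linalg.inv(T.T @ W @ T) @ T.T @ W          # weighted least-squares fit operator
    return factorial(deriv) * A[deriv]

# One-sided (directional) support of 4 samples, first-order polynomial model:
offsets = np.arange(4)
smooth = lpa_kernel(offsets, np.ones(4), order=1, deriv=0)  # smoothing kernel
diff   = lpa_kernel(offsets, np.ones(4), order=1, deriv=1)  # difference kernel
print(smooth.sum())        # ~1.0: the smoothing kernel preserves constants
print(diff @ offsets)      # ~1.0: the difference kernel reproduces a unit slope
```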

The paper used 4 possible directions for the interpolation filters and 8 directions for the denoising kernels. Aside from this, the structures of the interpolation and denoising kernels were very similar; the primary difference between them was that their component filters operated on different support channels. This is shown in the figure below.

In the figure, (a) illustrates how the green pixel is interpolated at position g(0). The smoothing components of the interpolation filter operate on the green pixels, while the difference component operates on the red pixel. In (b), we see how denoising of the green pixel at g(0) is done. The smoothing components of the denoising filter operate on the green pixels, while the difference component operates on the red pixels. Similarly, when we are denoising a red pixel, as in (c), the smoothing components operate on the red pixels, while the difference component operates on the green pixels. Finally, since the denoising filters have 8 directions, they can operate along diagonals, which is shown in (d). When denoising green pixels along diagonals, we have only green pixels to deal with, and so there is no difference component. However, when we are denoising red or blue pixels, the smoothing component operates on the same channel while the difference component operates on the complementary color channel.

Since the filters are directional, the origin of the filter is not in the center, but is located at one side of its support. The estimate is obtained by convolving the filter with the input pixels. That is, if $z$ is our noisy CFA data, then for each scale $h$ and direction $\theta$ we obtain the interpolation and denoising estimates $\hat{y}^{\,int}_{h,\theta}(x)$ and $\hat{y}^{\,den}_{h,\theta}(x)$ at each pixel position $x$ by convolving the corresponding kernels with the data:

$\hat{y}_{h,\theta}(x) = (g_{h,\theta} \circledast z)(x).$
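As an illustration, here is a minimal numpy sketch of such a directional, one-sided convolution; the kernel is arbitrary, periodic boundaries are used purely to keep the code short, and for actual Bayer data the step would be chosen so that the kernel stays on a single color channel.

```python
import numpy as np

def directional_estimate(z, g, direction):
    """Correlate the one-sided kernel g with the image z along a direction:
    y_hat(x) = sum_k g[k] * z(x + k * direction).  Periodic boundaries
    (np.roll) are used only to keep the sketch short."""
    dy, dx = direction
    est = np.zeros_like(z, dtype=float)
    for k, gk in enumerate(g):
        est += gk * np.roll(z, shift=(-k * dy, -k * dx), axis=(0, 1))
    return est

# A 4-tap kernel applied along the +x and -x directions; for Bayer data the
# steps would be (0, 2) / (0, -2) to remain on one color channel.
z = np.random.default_rng(0).random((16, 16))   # stand-in for noisy CFA data
g = np.array([0.4, 0.3, 0.2, 0.1])              # illustrative one-sided kernel
est_right = directional_estimate(z, g, (0, 1))
est_left  = directional_estimate(z, g, (0, -1))
```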

Intersection of Confidence Intervals

The interpolation and denoising filters $g^{\,int}_{h,\theta}$ and $g^{\,den}_{h,\theta}$ were created for a set of increasing scales $h_1 < h_2 < \dots < h_J$. The goal of ICI is to choose, at each pixel, the largest scale for which the local polynomial approximation is still valid. This was done as follows:

First, the standard deviations of the interpolation and denoising estimates were computed from an estimate of the standard deviation of the noise. Since each estimate is a linear combination of the input pixels, its standard deviation is the noise standard deviation scaled by the $\ell_2$ norm of the corresponding kernel:

$\sigma_{\hat{y}_{h,\theta}} = \sigma \, \| g_{h,\theta} \|_2 .$

In the above formula, $\sigma$ is the estimated standard deviation of the noise. The paper gave two examples of signal-dependent noise models, and their corresponding standard deviation estimates (a short sketch of this computation follows the list):

  • Poisson noise, for which the variance equals the mean, so that $\sigma(y) = \sqrt{y}$; and
  • nonstationary Gaussian noise, whose standard deviation is a given signal-dependent function $\sigma(y)$.
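The sketch below computes the standard deviation of a linear LPA estimate for a signal-dependent noise model; it assumes the noise is uncorrelated across the kernel support, and the Poissonian-Gaussian form $\sqrt{a\,y + b}$ anticipates the model fitted in the Noise Estimation section (the kernel and parameter values are illustrative).

```python
import numpy as np

def estimate_std(g, y_hat, a=1.0, b=0.0):
    """Standard deviation of a linear LPA estimate under a signal-dependent
    noise model: sigma_estimate = sigma(y) * ||g||_2, with sigma(y) taken
    here as sqrt(a*y + b) (a=1, b=0 reduces to the pure Poisson case)."""
    sigma = np.sqrt(np.maximum(a * np.asarray(y_hat, dtype=float) + b, 0.0))
    return sigma * np.linalg.norm(g)     # scaled by the kernel's l2 norm

g = np.array([0.4, 0.3, 0.2, 0.1])
print(estimate_std(g, y_hat=np.array([100.0, 400.0])))   # grows with the signal level
```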

Next, for each set of estimates $\{\hat{y}_{h_j,\theta}\}_{j=1}^{J}$ (interpolation and denoising alike), they obtained a set of confidence intervals

$D_j = \left[\, \hat{y}_{h_j,\theta} - \Gamma \, \sigma_{\hat{y}_{h_j,\theta}}, \;\; \hat{y}_{h_j,\theta} + \Gamma \, \sigma_{\hat{y}_{h_j,\theta}} \,\right],$

where $j$ denotes the index of the scale that was used. Note that the threshold parameters $\Gamma$ are design parameters.

Finally, the adaptive scale for interpolation was found as the largest index $j^{+}$ for which the intersection of confidence intervals $\bigcap_{j=1}^{j^{+}} D_j$ is still nonempty. Using this scale $h_{j^{+}}$, the interpolation estimate is then $\hat{y}^{\,int}_{h_{j^{+}},\theta}$. Similarly, the adaptive scale for denoising was found as the largest index for which the intersection of the denoising confidence intervals is nonempty, giving the denoising estimate $\hat{y}^{\,den}_{h_{j^{+}},\theta}$.
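A scalar, per-pixel sketch of this ICI rule, assuming the estimates are ordered from the smallest to the largest scale and using an illustrative threshold value:

```python
import numpy as np

def ici_select(estimates, stds, gamma=2.0):
    """Intersection of Confidence Intervals: given per-scale estimates and
    their standard deviations (smallest scale first), return the estimate at
    the largest scale whose confidence interval still intersects all of the
    smaller-scale intervals.  gamma is the threshold design parameter."""
    lo, hi = -np.inf, np.inf
    best = estimates[0]
    for y, s in zip(estimates, stds):
        lo = max(lo, y - gamma * s)       # running intersection of [y - g*s, y + g*s]
        hi = min(hi, y + gamma * s)
        if lo > hi:                        # intersection became empty: stop
            break
        best = y                           # this scale is still admissible
    return best

# Example: estimates get smoother (and their stds smaller) as the scale grows,
# but the largest scale over-smooths and is rejected.
print(ici_select(estimates=[10.2, 10.0, 9.7, 6.0], stds=[2.0, 1.2, 0.7, 0.4]))  # 9.7
```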

Anisotropic Denoising and Interpolation

Combining the adaptive-scale kernels in all directions gives us an idea of the largest local neighborhood in which the polynomial model fits the data. The figure below shows the combined denoising kernels for the red pixel in the center.


After applying the ICI criterion, they obtained an adaptive-scale estimate $\hat{y}_{h_{j^{+}},\theta}(x)$ from each direction $\theta$. These directional denoised and interpolated estimates were then linearly combined to produce the final pixel value at each location $x$. The actual formulas used to do this can be found in [1].
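As a rough illustration only, the sketch below fuses the directional estimates with inverse-variance weights, which is one plausible choice; the exact combination weights should be taken from [1].

```python
import numpy as np

def combine_directions(estimates, stds):
    """Fuse the adaptive-scale estimates from each direction into one pixel
    value, weighting each direction by the inverse of its estimate variance
    (an illustrative choice, not necessarily the weights used in [1])."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.maximum(np.asarray(stds, dtype=float) ** 2, 1e-12)
    return np.sum(w * est) / np.sum(w)

# Four directional estimates of the same pixel and their standard deviations:
print(combine_directions([9.7, 10.1, 9.9, 11.0], [0.7, 0.5, 0.6, 1.5]))
```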

The block diagram of the joint denoising and demosaicking algorithm is shown below.

We see here that, using the pre-designed filter kernels, the denoising of the CFA values and the interpolation of green pixels at the red and blue locations happen simultaneously. Then, using the denoised Bayer data and the noisy, fully-interpolated green channel, the full noise-free green channel is produced, as well as the interpolated red and blue pixels at the blue and red locations, respectively. Finally, using the noise-free green channel, the red and blue pixels are interpolated at the green locations, producing the final, 3-channel image.

Noise Estimation

In the previous section, an estimate of the standard deviation of the noise was needed to properly perform demosaicking and denoising. Using the simple noise models assumed in [1] (and mentioned briefly in the previous section) gave rather poor results. Thus, we wanted to use a more accurate noise model.

We used the noise estimation algorithm presented in the paper “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data” by A. Foi, M. Trimeche, V. Katkovnik, and K. Egiazarian [3]. This is the same noise-estimation algorithm that was used in [1] to obtain their results on cameraphone images.

In the paper by Foi et al. [3], they assumed that a noisy pixel observation at a location $x$ has the form

$z(x) = y(x) + \sigma\big(y(x)\big)\,\xi(x),$

where $y(x)$ is the true noise-free value, $\xi(x)$ is zero-mean independent random noise with standard deviation 1, and $\sigma(\cdot)$ is a function of $y$ that gives the standard deviation of the overall amount of noise.

Furthermore, they assumed that the noise term is composed of a Poissonian component $\eta_p\big(y(x)\big)$, which models the photon noise of the sensor, and a Gaussian component $\eta_g(x)$, which models noise that is independent of the signal (e.g. thermal noise). That is,

$\sigma\big(y(x)\big)\,\xi(x) = \eta_p\big(y(x)\big) + \eta_g(x).$

Using properties of the Poisson and Gaussian distributions, and some elementary algebra (see [3] for the complete derivation), they found that the overall standard deviation of the observed noisy signal has the form

$\sigma(y) = \sqrt{a\,y + b},$

where $a$ and $b$ are parameters of the sensor.
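A small numpy sketch of this model, useful both for evaluating $\sigma(y)$ and for simulating raw data with known parameters; the parameter values used in the example are illustrative.

```python
import numpy as np

def sigma_pg(y, a, b):
    """Poissonian-Gaussian model: std of the raw observation as a function of
    the noise-free intensity y, with sensor parameters a (photon/Poissonian
    part) and b (signal-independent Gaussian part)."""
    return np.sqrt(np.maximum(a * y + b, 0.0))

def add_pg_noise(y, a, b, seed=0):
    """Simulate raw data under this model: z = y + sigma(y) * xi, xi ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    return y + sigma_pg(y, a, b) * rng.standard_normal(np.shape(y))

y = np.linspace(0.0, 1.0, 5)             # noise-free intensities in [0, 1]
print(sigma_pg(y, a=0.01, b=1e-4))       # the noise level grows with the signal
```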

The rest of the paper describes a method for estimating $a$ and $b$ using only a single image. First, the data is preprocessed by converting it into the wavelet domain. This allows the image to be segmented into non-overlapping regions (level sets) over which the underlying intensity should be smooth. Next, a (mean, standard deviation) pair is computed for each of these sets. Finally, a maximum-likelihood (ML) approach is used to fit a global model to the set of (mean, standard deviation) pairs.
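The sketch below is a deliberately simplified stand-in for this procedure: it replaces the wavelet-domain segmentation and the ML fit with fixed square patches and an ordinary least-squares fit of variance against mean, which is enough to illustrate the idea.

```python
import numpy as np

def fit_pg_params(z, patch=8):
    """Simplified stand-in for the fitting procedure of [3]: tile the image
    into patches, take each patch's (mean, sample variance) as one data
    point, and least-squares fit variance = a * mean + b."""
    h, w = (s - s % patch for s in z.shape)
    blocks = z[:h, :w].reshape(h // patch, patch, w // patch, patch)
    means = blocks.mean(axis=(1, 3)).ravel()
    variances = blocks.var(axis=(1, 3), ddof=1).ravel()
    A = np.stack([means, np.ones_like(means)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, variances, rcond=None)
    return a, b

# Sanity check on data simulated with the model above (a = 0.01, b = 1e-4):
rng = np.random.default_rng(1)
y = np.tile(np.repeat(np.linspace(0.1, 0.9, 8), 8), (64, 1))  # piecewise-constant image
z = y + np.sqrt(0.01 * y + 1e-4) * rng.standard_normal(y.shape)
a_hat, b_hat = fit_pg_params(z)
print(a_hat, b_hat)   # a_hat should land near 0.01; b_hat is small and noisier
```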

The paper also adjusts the model for the clipped case; that is, when the noise pushes the sensor values below the minimum or above the maximum representable values, so that they are clipped to those limits. In such a case, clipping causes the variance of the noise to be lower, something the model needs to account for.
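A quick Monte Carlo illustration of this effect, with arbitrarily chosen values:

```python
import numpy as np

# Clipping to [0, 1] shrinks the observed noise variance when the true value
# sits near the edge of the sensor's range.
rng = np.random.default_rng(2)
y, sigma = 0.02, 0.05                     # true value close to the lower bound
z = y + sigma * rng.standard_normal(100_000)
print(z.var())                            # ~ sigma**2 = 0.0025
print(np.clip(z, 0.0, 1.0).var())         # noticeably smaller than sigma**2
```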

Post-Filtering

After performing denoising and demosaicking, we noticed that there was still some noise in the final image, especially for lower luminance values. Thus, we wanted to add another filtering step that would hopefully get rid of this noise.

The filter we chose was the bilateral filter, proposed by C. Tomasi and R. Manduchi [5].
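For concreteness, here is a brute-force, single-channel version of the bilateral filter; the window radius and the spatial and range sigmas are illustrative values.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force single-channel bilateral filter [5]: each output pixel is a
    normalized average of its neighbors, weighted by both spatial distance
    (sigma_s) and intensity difference (sigma_r)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="edge")
    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))      # spatial weight
                 * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))  # range weight
            num += w * shifted
            den += w
    return num / den

# Example: smooth a noisy ramp while preserving its overall structure.
img = np.clip(np.tile(np.linspace(0, 1, 32), (32, 1)) +
              0.05 * np.random.default_rng(3).standard_normal((32, 32)), 0, 1)
out = bilateral_filter(img)
```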

Color Correction

Results

Conclusions

References

[1] Paliy, D., A. Foi, R. Bilcu, V. Katkovnik, “Denoising and Interpolation of Noisy Bayer Data with Adaptive Cross-Color Filters”, SPIE-IS&T Electronic Imaging, Visual Communications and Image Processing 2008, vol. 6822, San Jose, CA, January 2008.

[2] Paliy, D., V. Katkovnik, R. Bilcu, S. Alenius, K. Egiazarian, “Spatially Adaptive Color Filter Array Interpolation for Noiseless and Noisy Data”, International Journal of Imaging Systems and Technology (IJISP), Special Issue on Applied Color Image Processing, vol. 17, iss. 3, pp. 105-122, October 2007.

[3] Foi, A., M. Trimeche, V. Katkovnik, and K. Egiazarian, “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data”, IEEE Trans. Image Process., vol. 17, no. 10, pp. 1737-1754, October 2008.

[4] Foi, A., S. Alenius, V. Katkovnik, and K. Egiazarian, “Noise measurement for raw data of digital imaging sensors by automatic segmentation of non-uniform targets”, IEEE Sensors Journal, vol. 7, no. 10, pp. 1456-1461, October 2007.

[5] Tomasi, C., and R. Manduchi, “Bilateral Filtering for Gray and Color Images”, Proceedings of the IEEE International Conference on Computer Vision, 1998.

Appendix

Sources and Results

Work Breakdown