Stephanie Pancoast, Aaron Zarraga, Edouard Yin
Introduction
In order to obtain a high quality image from a digital camera sensor, it is necessary to demosaick color filter array (CFA) values to estimate RGB values for each pixel. It is also important to apply a denoising algorithm to make the image look crisp and clean. Challenges arise, however, when trying to pair a demosaicking algorithm with a denoising algorithm. Many demosaicking algorithms are created under the assumption that the CFA values are free of noise:
- "...most demosaicking methods are developed under the unrealistic assumption of noise-free data. In the presence of noise, the performances of the algorithms degrade drastically, since their sophisticated nonlinear mechanisms are generally not robust to noise. Moreover, denoising after demosaicking is untractable, because demosaicking distorts the characteristics of the noise in a complex and hardly computable form." - Laurent Condat
For this reason, "denoisaicking" algorithms have been developed to combine denoising and demosaicking. The purpose of this project is to implement, analyze, and characterize one such denoisaicking algorithm written by Laurent Condat. We use a "simple" algorithm (a bilinear demosaick followed by a linear color correction) as a baseline for comparison.
Methods
For this project, all major calculations and algorithms were implemented in Matlab.
Overview of Condat's Algorithm
Condat's algorithm essentially builds on the demosaicking technique called "frequency selection". This technique estimates the Red/Blue and Green/Magenta chrominance channels by modulating the CFA input and convolving the result with a low-pass filter. These two channel estimates are then remodulated and subtracted from the original input to estimate the luminance channel.
Condat turns this into a denoisaicking algorithm by denoising at two points: once after the chrominance channels are estimated, and once more after the luminance channel is estimated.
Note on Hardware Performance: The convolutions estimating the two chrominance channels are expensive operations. However, they are independent and could be implemented in parallel on a chip.
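To make the frequency-selection step concrete, here is a minimal MATLAB sketch of the idea. The carrier phases, the low-pass filter taps, and the variable names are illustrative assumptions on our part, not Condat's actual implementation.

```matlab
% Minimal sketch of frequency-selection demosaicking (illustrative only:
% the carriers and the low-pass filter are placeholders, not Condat's filters).
[rows, cols] = size(cfa);                  % cfa: Bayer CFA image
[x, y] = meshgrid(0:cols-1, 0:rows-1);

k1 = (-1).^(x + y);                        % carrier at spatial frequency (pi, pi)
k2 = ((-1).^x - (-1).^y) / 2;              % carriers at (pi, 0) and (0, pi)
h  = ones(7) / 49;                         % placeholder low-pass filter

c1 = conv2(cfa .* k1, h, 'same');          % first chrominance estimate
c2 = conv2(cfa .* k2, h, 'same');          % second chrominance estimate

% Remodulate the chrominance estimates and subtract them from the CFA
% to estimate the luminance channel.
lum = cfa - c1 .* k1 - 2 * c2 .* k2;
```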
Pipeline Integration
For our testing, we edited wrapperimage.m to call Condat's algorithm instead of the simple version. In order to integrate Condat's algorithm, we first had to normalize the provided CFA values so they would be in a range that Condat's algorithm expects. Condat's algorithm expects values in the range from 0-255, and the CFA contains values ranging from 0 to ~5. To calculate this normalization, we created a "cfa_factor":
cfa_factor = 255/max(CFA) (where max(CFA) is the maximum value found in the CFA matrix)
We scaled the CFA matrix by multiplying it by cfa_factor. We then fed this normalized CFA into Condat's algorithm. To compensate for this adjustment on the output of Condat's algorithm, we divided the output by cfa_factor. The figure below shows the integration of Condat's algorithm into the wrapperimage pipeline.
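The sketch below shows this normalization wrapper as we apply it; `condat_denoisaick` is a placeholder name for Condat's provided routine, and `CFA` is the raw mosaicked input from the pipeline.

```matlab
% Sketch of the normalization wrapper around Condat's code
% (condat_denoisaick is a placeholder name for his routine).
cfa_factor = 255 / max(CFA(:));           % raw CFA values range from 0 to ~5
cfa_scaled = CFA * cfa_factor;            % scale into the 0-255 range Condat expects
sigma      = 25;                          % noise std. dev., on the 0-255 scale
rgb_scaled = condat_denoisaick(cfa_scaled, sigma);
rgb        = rgb_scaled / cfa_factor;     % undo the scaling on the output
```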
After this was complete, we were getting visible images out of the wrapperimage pipeline, but the images were pink. This is because Condat's algorithm and the wrapperimage use shifted versions of the Bayer CFA. To fix this problem, we removed the first column of pixels. (This is acceptable, since the metrics scripts ignore the outer 10 pixels.)
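As a sketch (our reading: the crop is applied to the scaled CFA before it is passed to Condat's code, reusing the placeholder names above):

```matlab
% Dropping the first column shifts the Bayer pattern by one pixel, so the
% CFA phase matches the one Condat's code assumes.
cfa_scaled = cfa_scaled(:, 2:end);
```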
Condat's algorithm takes in the CFA as well as an estimate of the standard deviation of the noise present. To complete the integration of Condat's algorithm into the wrapperimage pipeline, we experimented with this noise parameter to see if we could improve the performance of the denoisaicking. Instead of using the suggested value of 25 (based on a range of 0-255), we calculated the actual standard deviation of the noise added to the image. In practice, this value would be unavailable, but we wanted to push the parameter to its limit. We found that using the actual standard deviation of the noise performed no better than the suggested value of 25, so we kept this parameter at 25 for the remainder of the project.
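A minimal sketch of how the true noise level can be measured in simulation, assuming the noise-free CFA is also available (variable names are illustrative, and `condat_denoisaick` is the same placeholder as above):

```matlab
% Measure the actual noise standard deviation (only possible in simulation,
% where the noise-free CFA is available).
noise        = CFA_noisy - CFA_clean;          % additive noise realization
sigma_actual = std(noise(:)) * cfa_factor;     % std. dev. on the 0-255 scale
rgb          = condat_denoisaick(CFA_noisy * cfa_factor, sigma_actual) / cfa_factor;
```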
Metrics
We obtained metrics for our pipeline by not only examining the metrics calculated in the provided project code, but also by conducting a "preference survey" with over 100 people. This allowed us to evaluate the denoisaicking algorithm both quantitatively and qualitatively. It was interesting to see how well the numerical metrics matched up with people's preferences. This will be discussed in more detail in a later section.
Results
Using the framework built by Steven Lansel, we tested Laurent Condat's joint denoising and demosaicking algorithm. This framework lets us simulate different luminance conditions for a given scene and add realistic noise. We can then feed those images to the algorithm under test and obtain processed images as outputs. The framework also provides metrics on the quality of the output images.
To better understand the strengths and weaknesses of this algorithm, it is useful to compare the output images for very different kinds of input images. We tested the algorithm with six input images, selected to be representative of different kinds of pictures and of the different challenges pictures can pose. Additionally, we tested the algorithm on the MacBeth color checker.
Here is an example of the outputs we got from this framework:
We can see that the algorithm gives very good images at high luminosity; to the human eye, the resulting images look nearly perfect. At low luminosity, however, the quality of the images is not as good: the algorithm was not able to reconstruct all the data from the noisy image.
To have an objective measure of the quality of the output images, we used several metrics: MCC color bias, MCC color noise, delta E in the S-CIELAB metric space, and signal-to-noise ratio. We used these metrics to compare our output images to those obtained from a simple pipeline that applies demosaicking and denoising sequentially.
(Metric plots comparing the two pipelines: MCC color bias, MCC color noise, S-CIELAB delta E, and SNR.)
MacBeth Color Checker
Looking at the plots for MCC color bias, we can see that the difference between the algorithms is negligible: joint denoising and demosaicking does not further reduce the measured color bias. The plots for MCC color noise, on the other hand, are very different: Condat's algorithm performs much better than the simple pipeline. This difference is reflected in the absence of grain in our outputs, as opposed to those coming from the simple pipeline. Condat's output images are smoother, and the noise has been reduced more effectively.
(MCC outputs: simple pipeline vs. Condat's algorithm.)
S-CIELAB
The S-CIELAB metric extends the CIELAB metric to include the effect of spatial pattern on human vision. We used it to measure the visual difference between the original image and the output image. The plot shows two distinct regimes:
- At low luminosity, joint denoising and demosaicking performs better than the simple pipeline.
- At higher luminosity, it does not perform as well as the simple pipeline.
The transition between these regimes occurs at approximately 50 cd/m².
At this point, however, the delta E value is only about 4. Since human vision tolerates delta E differences of a few units, the remaining error from our algorithm would not significantly disturb the human eye.
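As a rough illustration of what a delta E value means, the sketch below computes a plain per-pixel CIE76 delta E between a reference and an output image; the S-CIELAB metric used by the framework additionally applies spatial filtering tuned to human vision before this step, which is omitted here (variable names are illustrative).

```matlab
% Per-pixel CIE76 delta E between a noise-free reference and a pipeline output
% (both assumed to be double RGB images in [0, 1]). S-CIELAB adds a spatial
% filtering stage before this computation.
lab_ref = rgb2lab(img_ref);
lab_out = rgb2lab(img_out);
deltaE  = sqrt(sum((lab_ref - lab_out).^2, 3));
mean_dE = mean(deltaE(:));                  % summary value for the whole image
```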
Signal-to-noise ratio
The signal-to-noise ratio plot leads to a similar interpretation of the behavior of Condat's algorithm. The algorithm performs very well at low luminosity, improving the SNR by several units over the simple pipeline. But above approximately 50 cd/m², it no longer performs as well as the simple pipeline.
The algorithm seems to reach a plateau at an SNR of about 20. Although we wish it performed better at high luminosity, it fulfills its task of reaching a good overall signal-to-noise ratio. It is especially important that it performs well in low-luminosity conditions, where pictures usually have a poor SNR.
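For reference, one common way to compute an SNR of this kind against the noise-free image is sketched below; the framework's exact definition may differ.

```matlab
% One common SNR definition (in dB) against the noise-free reference image;
% the metric computed by the framework may be defined differently.
err    = img_out - img_ref;
snr_db = 10 * log10(sum(img_ref(:).^2) / sum(err(:).^2));
```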
The comparison below illustrates the effect of Condat's algorithm:
(Output images: simple pipeline vs. Condat's algorithm.)
Human vision: the survey
The survey corroborates our interpretation of the metrics. In low-luminosity conditions, the people we surveyed clearly preferred the output images of Condat's algorithm. At higher luminosity, they judged them to be not as good as the output images of the simple pipeline.
Conclusions
Throughout this project, we explored the potential of joint denoising and demosaicking. Whereas most pipelines perform these operations sequentially, Laurent Condat's algorithm interleaves them to improve the quality of the resulting images. Joint denoising and demosaicking is very promising, as it yielded very good results in low-luminosity conditions and satisfactory results in high-luminosity conditions. Both our metrics and the survey confirm these conclusions. Even though we wish we had obtained better results at high luminosity, we want to stress that the main goal is to find algorithms that are effective at low luminosity. Consumer cameras already produce good pictures at high luminosity, but in most cases give very poor results at low luminosity.
The algorithm we used seems to be very aggressive toward noise, which explains the good results at low luminosity. However, this is also why the results are not as good at high luminosity: details are interpreted as noise and smoothed away. As a result, images with high-frequency patterns are damaged, while images with low-frequency patterns look very clean.
Further improvements
Given more time, we would like to improve on Laurent Condat's algorithm. Condat fixed several coefficients that parameterize his denoisaicking; for example, he uses a fixed sigma value of 25. We tested the algorithm with different values of sigma to try to improve the output images. Since we generate the noise ourselves, we were able to measure its standard deviation and feed it to Condat's algorithm, which should give the theoretically best results from this algorithm. Our tests showed that the difference was barely noticeable, so we decided to keep a value of 25. We would also like to determine the role of other coefficients that Condat did not detail, especially the ones that control the aggressiveness of the denoising.
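A minimal sketch of such a parameter sweep, reusing the placeholder names from the Methods section (the metric evaluation is left as a comment because it depends on the framework):

```matlab
% Sweep over candidate sigma values and re-run the denoisaicking for each one.
sigmas = [10 15 20 25 30 40 50];            % candidate noise levels (0-255 scale)
for k = 1:numel(sigmas)
    rgb_k = condat_denoisaick(cfa_scaled, sigmas(k)) / cfa_factor;
    % ... evaluate rgb_k with the framework's metrics (S-CIELAB delta E, SNR) ...
end
```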
Given even more time, we would also like to implement Condat's algorithm in a compiled language such as C. Timing measurements would then be more meaningful: they would give us the runtime of the algorithm on a computer under normal conditions and tell us how suitable it is for picture post-processing.
A further step would be to write the code in Verilog. This would give us the performance of a dedicated chip that could be embedded in a camera.
References
- Laurent Condat's homepage [1]
- A simple, fast and efficient approach to denoisaicking: joint demosaicking and denoising [2]
Appendix I
Code
- File:Pipeline Code.zip contains all the files needed to run and test the algorithm, with the exception of the Iset files. Included are the basic simple pipeline files and the data containing the original images. For a description of these files, see Psych221 Pipeline. We slightly modified some of these files to incorporate Condat's algorithm and to save image sets and metric plots; these changes are described below.
- Iset 4.0 [3]
- A BM3D Wiener filter is needed for Condat's algorithm. The 64-bit and 32-bit versions for Mac, Linux, and Windows are included in the Pipeline Code above; if another BM3D file or a different version is needed, it is available through the Tampere University of Technology site [4]
Edits on Provided Software and New Files
- wrapperImage: Modified to incorporate Condat's algorithm, as described in the Methods section.
- metricplots: Modified to save metric plots as jpeg files in a subfolder for easier access later.
- showresultimages: Modified to save the set of 8 images created by the pipeline as a jpeg file.
- show1: Displays a single existing image file. This is useful for looking closer at single outputs from the pipelines.
- show2: Displays two images side by side. The images should come from different files but have the same image number and luminance level, and they must follow the file-naming format from the wrapperimage script. We used this function to create the image pairs displayed in the survey.
Survey Results
Using SurveyMonkey, we asked people to choose which of two side-by-side images they preferred. The results of this survey are summarized below. The questions are listed in the order they appeared, along with the corresponding image and luminance value that served as input to the denoisaicking and simple pipelines. Some respondents did not answer every question.
Question | Image # | Lum. (cd/m^2) | Prefer Condat | Prefer Simple |
---|---|---|---|---|
A | 2 | 2 | 58 | 42 |
B | 6 | 6 | 80 | 20 |
C | 1 | 2000 | 1 | 99 |
D | 4 | 200 | 60 | 40 |
E | 5 | 600 | 15 | 85 |
F | 3 | 200 | 41 | 56 |
G | 7 | 3 | 83 | 13 |
H | 6 | 600 | 11 | 85 |
I | 3 | 6 | 82 | 13 |
J | 5 | 2000 | 11 | 83 |
K | 7 | 200 | 53 | 35 |
L | 4 | 2 | 64 | 24 |
M | 2 | 60 | 60 | 28 |
N | 1 | 3 | 28 | 60 |
Images
We were provided seven images to test our algorithm against the simple algorithm provided with the project. Below are the results from our pipeline using Condat's denoisaicking function at luminances of 6 cd/m^2 and 600 cd/m^2. The final column shows the original image with no noise added, for comparison.
(For each test image: output at luminance 6 cd/m^2, output at 600 cd/m^2, and the original noise-free image.)
Appendix II
All members of the group worked equally on obtaining the code and figuring out how to adapt it to the wrapper files provided to us at the beginning of the project. This was the main aspect of the assignment. The remaining work was broken down as follows:
Stephanie Pancoast
- Modified provided code to save files and display images
- Created survey and compiled results to test human component of algorithm
- Human component aspect of presentation
- References, Appendix I, and Appendix II sections of Wiki
Edouard Yin
- Tested effectiveness of scaling sigma values according to noise in image
- Modified code to account for magnitude scaling
- Metric results aspect of presentation
- Results and Conclusions sections of Wiki
Aaron Zarraga
- Modified code to account for magnitude scaling
- Studied Condat's algorithm closely to better understand what the code was doing
- Algorithm aspect of presentation
- Introduction and Methods section of Wiki