Psych221 Pipeline
Processing Pipeline Project for Psych 221
Currently this page is not linked anywhere while it is being worked on.
We provide noisy sensor images and the desired sRGB renderings of those images. You implement processing algorithms (color transform(s), denoising, demosaicking, and display rendering steps) to approximate the high-quality desired sRGB renderings. This can (and should) be a team project, with students implementing different parts of the image processing pipeline. We evaluate your methods by running your pipeline on new test images and evaluating the quality of the resulting sRGB renderings.
The goal of this project is to implement and evaluate an image processing pipeline to take raw output from a camera sensor and generate a pleasing image of the original scene.
The project involves two important steps:
- Leverage existing image processing algorithm(s) to generate a functioning pipeline.
- Evaluate the perceptual quality of the rendered images.
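For the evaluation step, one simple full-reference baseline is PSNR. A minimal sketch in Python/NumPy (the function name is ours; note that PSNR is not a perceptual metric, so it is only a starting point for quality evaluation):

```python
import numpy as np

def psnr(reference, rendered, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a rendered image."""
    diff = reference.astype(np.float64) - rendered.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error over all pixels/channels
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the rendering is numerically closer to the desired sRGB image; metrics designed around human vision generally track perceived quality more closely.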
Processing Pipeline Background
A number of calculations are required to take the output from a camera sensor and generate a high-quality sRGB image.
- Demosaicking: In almost all sensors for color imaging, individual photoreceptors (pixels) have one of a few optical filters placed on them so the photoreceptor measures only a particular color of light. These optical filters over each individual pixel make up the color filter array (CFA). Demosaicking is the process of estimating the unmeasured color bands at each pixel to generate a full color image.
- Denoising: Since measurements from the sensor contain noise, denoising attempts to remove any unwanted noise in the image while still preserving the underlying content of the image.
- Color transformation: A color transformation is necessary to convert from the color space measured by the sensor into a desired standard color space such as XYZ or sRGB.
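To make the demosaicking step concrete, here is a minimal bilinear sketch in Python/NumPy. The RGGB layout and the normalized-convolution approach are assumptions for illustration only; the published algorithms listed below are far more sophisticated:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaicking sketch assuming an RGGB Bayer mosaic.

    raw: 2-D array of sensor values. Returns an H x W x 3 float image.
    """
    h, w = raw.shape
    raw = raw.astype(np.float64)

    # Binary masks marking which channel each pixel measured (RGGB assumed).
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    def box3(a):
        # Sum over each pixel's 3x3 neighborhood (zero padding at the border).
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    def interp(mask):
        # Normalized convolution: average the measured samples in each window.
        return box3(raw * mask) / np.maximum(box3(mask), 1e-12)

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])
```

Each output channel is estimated by averaging the samples of that channel within a 3x3 window, which is roughly bilinear interpolation; it blurs edges and leaves noise untouched, which is exactly why better demosaicking and denoising algorithms matter.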
There are more steps in real pipelines, but these are the most challenging and the ones relevant to this project. Dozens of algorithms have been published for both demosaicking and denoising. Traditionally, pipelines perform these calculations as independent steps.
Recently, some researchers have suggested combining the demosaicking and denoising calculations into a single algorithm that performs both. Although a combined approach is not required for the project, we recommend it: implementing and understanding a single algorithm is much easier than implementing two separate algorithms.
The easiest color transform you could implement is a linear transformation (multiplication by a 3x3 matrix) from the sensor's color space to XYZ. There are reasons this can be improved upon, especially at low noise levels. Maybe your project will improve upon this basic approach, maybe not.
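The linear transform amounts to one matrix multiply per pixel, with the 3x3 matrix typically estimated from calibration data. A minimal sketch in Python/NumPy (function names are ours, and the least-squares fit assumes you have corresponding sensor/XYZ color samples, e.g. from imaged calibration patches):

```python
import numpy as np

def fit_color_matrix(sensor_samples, xyz_samples):
    """Least-squares fit of a 3x3 matrix M so that sensor @ M.T approximates XYZ.

    sensor_samples, xyz_samples: N x 3 arrays of corresponding colors.
    """
    M_t, *_ = np.linalg.lstsq(sensor_samples, xyz_samples, rcond=None)
    return M_t.T

def apply_color_matrix(img, M):
    """Apply a 3x3 linear color transform to an H x W x 3 image."""
    return img @ M.T
```

Once `M` is estimated, `apply_color_matrix` converts every pixel from sensor space to XYZ in one vectorized operation; rendering to sRGB then requires the standard XYZ-to-sRGB matrix and gamma encoding.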
Recommended Existing Algorithms
The following algorithms perform joint demosaicking and denoising. The authors have provided Matlab implementations, although I cannot vouch for the quality of the code. Feel free to pick an algorithm from this list or select one from the literature that you find. If you intend to use a particular algorithm, please let me know so that other students do not pick the same one.
- D. Paliy, A. Foi, R. Bilcu, and V. Katkovnik, "Denoising and interpolation of noisy Bayer data with adaptive cross-color filters," 2008, p. 68221K. [1]
- K. Hirakawa and T. W. Parks, "Joint Demosaicing and Denoising." [2] (and related papers from the same site)
- L. Zhang, X. Wu, and D. Zhang, "Color Reproduction from Noisy CFA Data of Single Sensor Digital Cameras," IEEE Trans. Image Processing, vol. 16, no. 9, pp. 2184-2197, Sept. 2007. [3]
- L. Condat, "A simple, fast and efficient approach to denoisaicking: Joint demosaicking and denoising," IEEE ICIP, Hong Kong, China, 2010. [4]
Assistance
Send questions to Steven Lansel, slansel@stanford.edu.