Tony Wu and Samuel Yang
Back to Psych 221 Projects 2014
PSF analysis and image deblurring using a simulated camera lens
Introduction
The image capture process often produces a sampled image containing a blurred signal, the result of various optical and sensor properties of the camera. Because the blur is fully characterized by a set of point spread functions, one for each position in the scene, a sharp image can be recovered by deconvolving the captured image with the known point spread functions. In this way, optical aberrations in the camera system can be corrected computationally.
The first column of Figure 1 shows the scene (ground truth image), whereas the second column shows the blurred simulated image. The goal of this project is to recover the sharp image from the blurred image via spatially varying deconvolution with the known point spread functions.
- Figure 1
-
Sample scenes (left column) and blurry simulated images (center column) along with the recovered sharp image (right column) from our deconvolution implementation. High frequency detail in all of the scenes was recovered, though noise is amplified.
Background
Much work has been done in the areas of blind and non-blind image deconvolution. [1] discusses the use of deconvolution to compensate for lens aberrations. Some methods attempt to indirectly estimate the point spread function (PSF), such as the work in [2], which uses sharp edges to estimate the PSF; this is challenging, however, and requires regularization that can result in oversmoothing. More accurate approaches determine the PSF by measuring it directly through a calibration process, but this can be time-consuming. In [3], the authors develop a method for extending a PSF calibration measurement taken at a single depth to other depths, greatly reducing the amount of calibration data needed. To simplify this process further, the authors in [4] developed a calibration method relying only on imaging a binary white noise target, and introduced a novel cross-channel prior in their optimization formulation for estimating the PSFs. Finally, in [5] the concept of using a calibration target to form a well-posed inverse problem for solving for the PSFs is extended to sub-pixel accuracy. Overall, these methods involve accurate estimation of the spatially varying PSFs followed by deconvolution to remove the blur introduced by the optics.
Methods
We start by obtaining the spatially varying PSF of the system. A grid of points is used as the scene (Figure 2a) and then captured with the sensor through the optics, yielding a grid of PSFs (Figure 2b). To unblur an image, it is deconvolved with each PSF using the Richardson-Lucy deconvolution algorithm (Figure 2c). The resulting images are windowed at the appropriate places according to the location of the PSF used and superimposed to form the final image. The effect of different window shapes is shown on a slanted bar image in Figures 2d and 2e.
- Figure 2
-
Image processing pipeline. (a) a grid of dots is passed into the camera simulation in order to produce (b) the spatially varying point spread functions. (c) a blurry target input image is deconvolved once per PSF and merged to produce the final image using either (d) simple cropping or (e) a linear weighting function.
Image Simulations
All data used in this project was simulated using Image Systems Engineering Toolbox (ISET) (http://imageval.com/).
We used the Zemax ray tracing data found in rtZemaxExample.mat. In all of the simulations, we used 1 µm × 1 µm pixels on a 512 × 512 pixel sensor. Autoexposure was used to obtain the simulated photos.
Image Formation Model
A scene image radiance, assumed to be planar (at a single fixed depth), is used as input to an optical ray tracing simulation, which captures any optical blurring introduced in the sampling process within the camera. Next, various types of sensor noise are included to produce the final simulated output image. We used a monochromatic camera sensor for all of the simulations. The first column of Figure 1 shows sample input scenes.
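The forward model described above can be sketched as follows. This is an illustrative toy in Python rather than the ISET simulation we actually used; the function name `simulate_capture`, the constant dark offset, and the Gaussian read-noise model are simplifying assumptions (ISET models sensor noise in far more detail).

```python
import numpy as np

def simulate_capture(scene, psf, read_noise_sigma=0.01, dark_level=0.005, rng=None):
    """Toy monochrome image formation: blur the scene with a PSF, then add a
    constant dark offset and Gaussian read noise, and clip to the sensor range."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Circular convolution via FFT; psf is assumed centered and the same size as scene.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    noisy = blurred + dark_level + rng.normal(0.0, read_noise_sigma, scene.shape)
    return np.clip(noisy, 0.0, 1.0)
```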
Measuring Point Spread Functions
To accurately measure the point spread functions, we simulated an image of a grid of points spaced 32 pixels apart, shown in Figure 2a. We ran this simulation with no sensor noise, so as to get a measurement most resembling the PSFs as measured in the ideal case. Afterwards, we cropped the resulting image to get a PSF for each region of the output image. Figure 2b shows cropped images of the PSFs, which are spatially varying.
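The cropping step can be sketched in a few lines, assuming a noise-free dot-grid image with one dot per tile (the function name and the convention that the first dot sits at the center of the first tile are our illustrative choices, not project code):

```python
import numpy as np

def extract_psf_grid(grid_image, spacing=32):
    """Crop one PSF patch per grid point from a noise-free image of a dot grid.
    Dots are assumed spaced `spacing` pixels apart, one per spacing x spacing
    tile; each patch is normalized to unit sum so it acts as a valid PSF."""
    h, w = grid_image.shape
    psfs = {}
    for i in range(h // spacing):
        for j in range(w // spacing):
            patch = grid_image[i*spacing:(i+1)*spacing,
                               j*spacing:(j+1)*spacing].astype(float)
            s = patch.sum()
            if s > 0:
                psfs[(i, j)] = patch / s   # normalize so each PSF integrates to 1
    return psfs
```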
Spatially Varying Deconvolution
Once we had measured our PSFs, along with simulated (potentially noisy) data, we implemented a spatially varying deconvolution. This was done by first deconvolving the blurred image with each image region's local PSF using Richardson-Lucy deconvolution. Figure 2c shows the intermediate result: a stack of images, each deconvolved with a different PSF that is optimal only for its specific region of the image. Once this was repeated for each subimage, the final output image was formed as a weighted combination of the deconvolved images.
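For reference, the Richardson-Lucy update applied to each region can be written in a few lines. This is a textbook sketch with circular boundary handling, not the code we used; real pipelines typically also taper image edges to suppress boundary artifacts.

```python
import numpy as np

def fftconv(img, psf):
    """Circular convolution via FFT; psf is centered and the same size as img."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def fftcorr(img, psf):
    """Circular correlation (equivalent to convolution with the flipped PSF)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.conj(np.fft.fft2(np.fft.ifftshift(psf)))))

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution: iteratively apply a multiplicative
    correction so the re-blurred estimate matches the observed blurry image."""
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    for _ in range(iters):
        reblurred = fftconv(estimate, psf)    # forward model
        ratio = blurred / (reblurred + eps)   # multiplicative correction
        estimate = estimate * fftcorr(ratio, psf)
    return estimate
```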
Choice of weighting function
Of particular importance here was the choice of the weighting function. The weighting function is the weight any particular intermediate deconvolved image (any image in Figure 2c) has in the construction of the final output image. In the first naive approach, we used a rectangular windowing function, where we simply cropped parts of the intermediate deconvolved images in order to piece together the final deconvolved image, as shown in Figure 2d. Note the grid artifact created by the boundaries where the subimages are stitched together. The scene in this case was a slanted bar resolution target. We found a much better approach was to use a triangle function which linearly blends adjacent subimages together, as shown in Figure 2e. The triangle window function was used for the remainder of the experiments.
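The triangle blending can be illustrated with 1-D weights; a 2-D weight map for each tile is then the outer product of its row and column weights. This is a sketch with our own function name and grid convention, assuming tile centers on a regular grid:

```python
import numpy as np

def triangle_weights(n_pixels, centers):
    """1-D triangle (linear) blending weights for tiles centered at `centers`.
    Returns an array of shape (len(centers), n_pixels): each row falls off
    linearly from 1 at its tile center to 0 one tile-spacing away, and the
    rows are normalized so the weights sum to 1 at every pixel."""
    x = np.arange(n_pixels, dtype=float)
    spacing = centers[1] - centers[0] if len(centers) > 1 else float(n_pixels)
    w = np.maximum(0.0, 1.0 - np.abs(x[None, :] -
                                     np.asarray(centers, float)[:, None]) / spacing)
    w /= w.sum(axis=0, keepdims=True)   # normalize so adjacent tiles blend to 1
    return w
```

A rectangular window is the degenerate case where each pixel takes weight 1 from exactly one tile, which is what produced the stitching artifact in Figure 2d.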
Modulation Transfer Function Characterization
The modulation transfer function (MTF) was characterized according to the ISO 12233 standard. A slanted bar image was simulated and deconvolved using our method at various noise levels, and the resulting output was analyzed per the standard.
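Once an edge spread function (ESF) has been extracted from the slanted bar, the core of the MTF50 computation can be sketched as below. This is a simplified 1-D illustration, not an ISO 12233-compliant implementation (the standard additionally prescribes edge-angle estimation and 4x-oversampled binning of the ESF across the slanted edge):

```python
import numpy as np

def mtf50_from_esf(esf, dx=1.0):
    """Estimate MTF50 (in cycles per pixel for dx=1) from a 1-D edge spread
    function: differentiate to get the line spread function, FFT it to get
    the MTF, and interpolate the frequency where the MTF first drops to 0.5."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(len(lsf))        # taper to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    below = np.nonzero(mtf < 0.5)[0]
    if len(below) == 0:
        return freqs[-1]                    # never drops below 0.5 before Nyquist
    i = below[0]
    # linear interpolation between the samples straddling 0.5
    return freqs[i-1] + (0.5 - mtf[i-1]) * (freqs[i] - freqs[i-1]) / (mtf[i] - mtf[i-1])
```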
Noise Performance Characterization
We characterized the performance of the deconvolution method with various types and levels of noise using the visual signal-to-noise ratio (VSNR) metric.
Results
Qualitative Comparison
The last column of Figure 1 shows some simulated images deconvolved with our method. In a dark scene, such as the one in Figure 1c, it is evident that noise in the scene is amplified. In scenes with smooth backgrounds, the deconvolution process introduces some high frequency noise, as seen in Figure 1f. Lastly, we can see that the spatially varying blur is removed essentially equally well across the field of view, as seen in Figure 1i.
Modulation Transfer Function Analysis
- Figure 3
-
Noise analysis of modulation transfer function. (a) Sample MTF graph of a sensor image and the deconvolved image with no read or dark noise. (b) MTF50 compared to the amount of read noise. (c) MTF50 compared to the amount of dark noise.
Visual Signal to Noise Ratio Analysis
- Figure 4
-
Signal to noise ratio in comparison to read and dark noise. (a) VSNR vs. amount of read noise. As read noise increases, the VSNR decreases, as expected. However, the deconvolved image always has a lower VSNR than the original sensor image because the deconvolution amplifies noise. (b) VSNR vs. amount of dark noise. There is little correlation between the VSNR and the dark noise because the dark noise model assumes a constant dark current for all pixels, which effectively raises the background level of the image.
Conclusions
A method to unblur images by deconvolution with the spatially varying PSF was implemented and tested on simulated camera data. With this method we qualitatively see a sharpening of the image, as well as a quantitative increase in the MTF50, a measurement of resolution. However, the deconvolution also amplified the noise in the image, as evidenced by a lower VSNR score. While using a triangle window function to stitch the deconvolved images together worked better than the square window function, it did not reduce the amount of noise amplified by the deconvolution. Our conclusions were found to hold across varying levels of both read noise and dark noise: in all cases, the images were sharper but noisier. In the future, a more advanced method of deconvolution, such as that of [4], could be used to keep the noise amplification to a minimum while maximizing the MTF50.
References
[1] Scalettar, B. A., et al. "Dispersion, aberration and deconvolution in multi‐wavelength fluorescence images." Journal of microscopy 182.1 (1996): 50-60.
[2] Joshi, Neel, Richard Szeliski, and David Kriegman. "PSF estimation using sharp edge prediction." Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. IEEE, 2008.
[3] Shih, Yichang, Brian Guenter, and Neel Joshi. "Image enhancement using calibrated lens simulations." Computer Vision–ECCV 2012. Springer Berlin Heidelberg, 2012. 42-56.
[4] Heide, Felix, et al. "High-quality computational imaging through simple lenses." ACM Transactions on Graphics (TOG) 32.5 (2013): 149.
[5] Delbracio, Mauricio, Pablo Musé, and Andrés Almansa. "Non-parametric sub-pixel local point spread function estimation." Image Processing On Line (2012).
Appendix I
Source code: http://white.stanford.edu/teach/images/a/ad/Project.zip
Image Systems Engineering Toolbox http://imageval.com/
Appendix II
Division of labor:
Tony Wu - wrote PSF acquisition code, implemented read and dark noise sweeps, wrote wiki documentation, made figures and slides
Samuel Yang - wrote PSF acquisition code, implemented weighting function, wrote wiki documentation, made figures and slides