

Analysis of Real Camera Lenses - Atinuke Ademola-Idowu, Ayesha Khwaja, Pallabi Ghosh

Introduction

Image blur can be caused by a variety of factors that are either external or internal to the camera. Factors external to the camera include motion blur, a misplaced focal point, shallow depth of field, slow shutter speed, a dirty lens, and so on. Factors internal to the camera include pixel size, the anti-aliasing filter, sensor resolution, lens aberrations, light diffraction, and so on. While the external factors can usually be corrected through good shooting practice, the internal factors cannot be easily corrected since they are inherent and unique to the camera. These internal factors can be accounted for by estimating the Point Spread Function (PSF) of the camera lens.

Point Spread Function (PSF)

The PSF of a camera system can be described as the image of a point object captured by the system; in other words, it is the 2D impulse response of the system. Due to factors inherent to the camera, the point will be blurred into a blob, circular or elliptical, depending on the point's location in the field. If the external factors are corrected, the captured image is expected to be the convolution of the ideal image with the PSF. Therefore, once the PSF of the camera is known, any image captured by the camera can be deblurred to recover the ideal image.

Project Objective

The aim of this project is to estimate the PSF of a camera lens. The external factors will be corrected as much as possible, leaving the internal factors to be captured by the estimated PSF. The estimated PSF will then be applied to a test image to see whether the result matches the same image captured by the camera.

Background

Several methods have been used to estimate the PSF, each focusing on a different technique. In PSF estimation using Sharp Edge Prediction [4], Joshi et al. estimate the PSF at sub-pixel resolution from a single image. Their algorithm handles image blur due to external factors such as defocus and slight camera motion as well as the internal factors. It operates by predicting a sharp version of a blurry input image and then using the sharp and blurry images to solve for the PSF.

Delbracio et al. [1] developed an algorithm for non-parametric sub-pixel local PSF estimation. In this approach, the pattern position and its illumination conditions are accurately estimated to provide geometric and radiometric correction, and the PSF is then obtained by inverting a linear system. Their method is quite accurate, with a relative error on the order of 2 to 5%.

Another method uses random noise targets with markers to estimate the PSF [2]. In this approach, the PSF is obtained by direct comparison between a synthetic prototype image and the captured image. Using a noise target allows evaluation at all frequencies because the target's spectrum is approximately 'white'.

Method

Overview

We use a method similar to that of Delbracio et al. [1] to compute the average PSF over various parts of the camera lens. The main idea is to account for all external factors so that the captured scene is as close as possible to the displayed scene, leaving the camera's PSF as the only difference between them. To do this, the pattern position and its illumination conditions are first estimated to provide geometric and radiometric correction. The PSF can then be computed by inverting a linear system.

Set-up

In order to obtain the PSF of the camera lens, a test pattern of patches arranged in a 3x5 array was displayed on a monitor and captured using a Nikon D2Xs camera. We did this for different exposure values in order to determine how the PSF varies with exposure time.

We displayed the following pattern on the monitor:

Figure 1: Pattern displayed on the monitor

The following was the scene the camera captured:

Figure 2: Scene captured by the camera

Next, we estimate the pattern position and illumination conditions, as discussed in the following sections.

Radiometric Correction

A white point displayed on the screen does not appear white in the captured image, as can be seen in Figure 2 above. Furthermore, the monitor renders these white points darker at the edges of the screen than at the middle. To account for this, we use a capture of a white background taken with the same settings as the test pattern. The idea is that each pixel is affected in the same way in both images, so taking the ratio of the intensity values at each pixel yields an image that is free of the illumination non-uniformity.
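The correction itself is a simple pixelwise division. Below is a minimal sketch of this step, assuming both captures have been loaded as linear floating-point arrays; the file names are placeholders, not our actual data.

import numpy as np
from imageio.v3 import imread

# Hypothetical file names for the two captures taken with identical settings.
pattern_img = imread("captured_pattern.png").astype(np.float64)
white_img = imread("captured_white.png").astype(np.float64)

# Pixelwise ratio removes the shared illumination fall-off; the small
# epsilon guards against division by zero in very dark pixels.
eps = 1e-6
corrected = pattern_img / np.maximum(white_img, eps)
corrected = corrected / corrected.max()   # rescale to [0, 1]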

The following image is obtained after applying radiometric correction to the captured scene:

Figure 3: Captured scene, independent of illumination

Geometrical Registration

As can be seen from Figures 1 and 3, the plane of the display and that of the digital pattern are not parallel to each other. To compute the PSF, we want these two planes to be parallel. The plane of the display in the captured scene of Figure 3 can be made parallel to the plane of the digital pattern by computing the image homography.

Figure 4: Geometrical registration of an image

A homography is an invertible transformation between two planes with a one-to-one correspondence between their points. Let P be a point (in homogeneous coordinates) in plane 1 and P' be its corresponding point in plane 2. Then the homography matrix H relates P and P' as P = HP'. H is a 3x3 matrix with 8 degrees of freedom, so it can be determined from the positions of four points in one plane and the positions of the corresponding four points in the other plane. To obtain these four correspondences, we select the corners manually.

Once we have the homography matrix H, we transform each point in Figure 3 by pre-multiplying it with H. The result is a geometrically registered image, as shown in the figure below.

Figure 5: Geometrically registered image of the captured scene
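As an illustration, the sketch below estimates and applies the homography with OpenCV from four manually selected corner correspondences; the coordinates, file name, and output size are placeholders rather than our actual values.

import cv2
import numpy as np

# Radiometrically corrected capture from the previous step (placeholder file).
corrected = cv2.imread("corrected_scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Manually selected corners of the pattern in the captured image ...
src = np.float32([[112, 95], [1890, 120], [1870, 1240], [130, 1215]])
# ... and the corresponding corners in the digital pattern.
dst = np.float32([[0, 0], [1500, 0], [1500, 900], [0, 900]])

H = cv2.getPerspectiveTransform(src, dst)             # 3x3 homography, 8 DOF
registered = cv2.warpPerspective(corrected, H, (1500, 900))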

PSF Estimation

We now have an illumination-corrected and geometrically registered image, which differs from the test pattern only in that the test pattern has been blurred by the camera PSF. This is a linear system, so the camera PSF can be found by inverting it. We do this in the Fourier domain.

Let D be the digital pattern, C be the illumination-corrected and geometrically registered captured image, and PSF be the camera PSF. Then

C = D ∗ PSF

Therefore, in the Fourier domain,

F(C) = F(D) · F(PSF)

Thus, the PSF of the camera lens can be found by pointwise dividing the Fourier transform of the illumination-corrected and geometrically registered captured image by that of the digital pattern, and then taking the inverse Fourier transform of the result.

The PSF obtained this way is an average PSF of the camera lens over the region where the computation is performed. Since the PSF is generally space-variant, we get a better picture of the lens PSF by dividing D and C into their individual patches (15 patches, arranged as a 3x5 array) and repeating the computation for each patch. Concatenating the 15 per-block PSFs then gives the PSF across the whole image.
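A minimal sketch of this per-patch estimation is given below, assuming D and C are same-sized 2D grayscale float arrays; the small threshold that keeps the pointwise division stable is our addition and is not part of the method in [1].

import numpy as np

def estimate_psf(D_patch, C_patch, eps=1e-6):
    # Pointwise division in the Fourier domain: F(PSF) = F(C) / F(D).
    Fd = np.fft.fft2(D_patch)
    Fc = np.fft.fft2(C_patch)
    Fd_safe = np.where(np.abs(Fd) > eps, Fd, eps)   # avoid dividing by ~0
    psf = np.real(np.fft.ifft2(Fc / Fd_safe))
    return np.fft.fftshift(psf)                     # center the kernel

def patchwise_psf(D, C, rows=3, cols=5):
    # Estimate one PSF per patch and concatenate the 3x5 blocks.
    h, w = D.shape
    ph, pw = h // rows, w // cols
    blocks = [[estimate_psf(D[r*ph:(r+1)*ph, c*pw:(c+1)*pw],
                            C[r*ph:(r+1)*ph, c*pw:(c+1)*pw])
               for c in range(cols)]
              for r in range(rows)]
    return np.block(blocks)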

Results

Using the algorithm described above, the PSF of the camera system was computed as shown in the figure below:

Figure 6: Estimated PSF

Next, we magnify each block so that the structure of the PSF is more clearly visible; the results are shown in the next figure.


Figure 7: PSF in each block, magnified

We see that there is a lot of noise in the estimated PSF. This noise can be attributed to errors in selecting the corner points when removing the perspective distortion. The selection was manual and therefore prone to human error. Repeating the selection several times improved the results slightly but did not remove the noise entirely. This is also why selecting different regions for the division of Fourier transforms produced different erroneous regions in the final PSF estimate.

Testing the validity of the estimated PSF

To check whether the estimated PSF is correct, we generated five patterns, as shown below:

Figure 8: The five generated patterns

We captured images of these five patterns using three different camera settings with three different f-numbers. We took the Fourier transform of each pattern, multiplied it by the Fourier transform of the PSF, took the inverse Fourier transform, and compared the result to the captured images. The following figure shows the results for the radial pattern. The first column is the pattern, the second column shows the images captured at f/5.6, f/18, and f/34, and the third column shows the corresponding convolved images (obtained by convolving the PSF with the pattern).

Figure 9: Results for the radial pattern
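The check itself can be sketched as follows, using the single averaged PSF; the file names and the list of pattern names are placeholders for illustration.

import numpy as np
from imageio.v3 import imread

def load_gray(path):
    # Collapse RGB to a single channel if necessary (assumed grayscale data).
    img = imread(path).astype(np.float64)
    return img.mean(axis=2) if img.ndim == 3 else img

psf = np.load("estimated_psf.npy")
psf = psf / psf.sum()                                  # normalize the kernel

for name in ["radial", "pattern2", "pattern3", "pattern4", "pattern5"]:
    pattern = load_gray(name + ".png")
    captured = load_gray(name + "_captured_f5p6.png")
    # Multiply the Fourier transforms (PSF zero-padded to the pattern size),
    # then invert, as described above.
    Fp = np.fft.fft2(pattern)
    Fpsf = np.fft.fft2(psf, s=pattern.shape)
    convolved = np.real(np.fft.ifft2(Fp * Fpsf))
    # Compare after rescaling both to [0, 1], since exposures differ.
    diff = np.mean(np.abs(convolved / convolved.max() - captured / captured.max()))
    print(name, "mean absolute difference:", diff)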

In theory the two should be equivalent, but due to a number of factors they are not. For example, the exposure duration of the captured test-pattern images differs from the exposure duration of the capture from which the PSF was computed. The test-pattern exposure is longer, so those images are brighter and more blurry.

Some features are nevertheless similar in the two sets. For example, in the middle column of captured images, the first image is the least blurry and the blurriness increases down the column. The same trend can be seen in the last column. This shows that the two sets behave similarly in nature, although, for the reasons mentioned above, they look quite different.

To further check the algorithm, we ran the same experiment on synthetic data, generating the captured image in ISET and following the same process. In this case the captured image matches the convolution of the pattern with the PSF well. The following figure shows the scatter plot of the captured and convolved images. The results appear correct, which indicates that the algorithm is producing a proper PSF.

Figure 10: Scatter plot of the captured image (generated using ISET) versus the convolved image (obtained using our algorithm)
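A sketch of how such a scatter plot can be produced is shown below; the .npy file names are placeholders for the synthetic capture and the convolved result.

import numpy as np
import matplotlib.pyplot as plt

captured = np.load("iset_captured.npy")                 # synthetic capture rendered in ISET
convolved = np.load("pattern_convolved_with_psf.npy")   # pattern blurred with our PSF

plt.scatter(captured.ravel(), convolved.ravel(), s=1, alpha=0.2)
plt.xlabel("Captured pixel value (ISET)")
plt.ylabel("Convolved pixel value")
plt.title("Captured vs. convolved pixel values")
plt.show()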

Conclusions

The project led us to three main conclusions:

  • The results in Figure 7 show that the central block has a symmetric PSF, whereas the PSF in the corner blocks is asymmetric and bent toward the corners. This is characteristic of a lens PSF, which is space-variant, so our results appear reasonable.
  • Our algorithm uses the white image to correct for illumination, so it is robust to illumination changes.
  • On the other hand, our PSF estimate is noisy, which causes the large difference between the captured and convolved images in Figure 9. In future work we would try to reduce this noise. Including the camera characteristics in the calculation might help, and a better method of selecting the points used to remove perspective distortion could also reduce the noise.

References

[1] Mauricio Delbracio, Pablo Musé and Andrés Almansa. Non-parametric Sub-pixel Local Point Spread Function Estimation. Image Processing On Line, vol. 2012, pp. 8–21.

[2] Johannes Brauers, Claude Seiler and Til Aach. Direct PSF Estimation Using a Random Noise Target. Digital Photography, volume 7537 of SPIE Proceedings, page 75370. SPIE, 2010.

[3] Felix Heide, Mushfiqur Rouf, Matthias B. Hullin, Bjorn Labitzke, Wolfgang Heidrich and Andreas Kolb. High-Quality Computational Imaging through Simple Lenses.

[4] Neel Joshi, Richard Szeliski and David Kriegman. PSF Estimation Using Sharp Edge Prediction.

Appendix A - Source codes and results

Source codes, results and images used, along with a README for this project can be downloaded from here.

Appendix B - Breakdown of Work

Atinuke: Error analysis, Reading raw camera data

Ayesha: Geometric and Radiometric correction on captured scene; Testing generated scenes with the computed PSF

Pallabi: Computing patchwise PSF, Generating different scenes for testing

Atinuke, Ayesha, Pallabi: Literature survey, Result analysis, Conclusions, Wiki page and Presentation Slides