Google Pixel 4a Lens Shading
Jiahui Wang
Introduction
Lens shading, also known as vignetting, is common in optics and photography. As shown in Fig. 1 [1,2], the peripheral region of an image is darker than the central region. The phenomenon is intrinsic to the geometry of the lens, which lowers the relative illumination toward the edges [2]. Photographers and artists have long used this artifact deliberately to add depth and mood to their photographs [1]. As engineers, however, we should correct the artifact and leave people free to add the effect back later.
In this project, we use photos taken with a Google Pixel 4a to model the lens shading of its camera. We analyze the lens shading for different color channels, different exposure times, and different ISO speeds, and we compare our model with the standard cos4th model [2, 4, 5]. Finally, we build a lens shading correction pipeline and apply it to different images.
Datasets and pre-processing
The pictures were taken with a Google Pixel 4a pointed into an 'integrating sphere', which provides a uniform illumination field, so that any nonuniformity in the pictures corresponds to lens shading and camera noise.
The pictures span different exposure times and ISO speeds, which we use later to analyze how the model depends on these settings. The data are stored as raw images in the DNG format [7]. We use ieDNGRead [3,4] to read and extract the image data.
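The analysis itself reads the DNG files with ieDNGRead in MATLAB/ISET [3,4]. As a minimal sketch of the same step, assuming instead the Python rawpy library and a hypothetical file name:

    # Minimal sketch: read a raw DNG and subtract the black level (Python + rawpy).
    import numpy as np
    import rawpy

    with rawpy.imread('sphere_iso100_t10ms.dng') as raw:    # hypothetical file name
        mosaic = raw.raw_image_visible.astype(np.float64)   # Bayer mosaic, one value per pixel
        black = float(np.mean(raw.black_level_per_channel))
    mosaic = np.clip(mosaic - black, 0.0, None)             # remove the sensor black level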
The pictures are raw images containing pixel-wise sensor values. As shown in Fig. 2, if we plot the values along the black line directly from the raw DNG data, two interleaved color channels are mixed together, so the profile oscillates and appears to vary far more than it really does. We therefore need to separate the color channels before image processing.
Methods
The DNG file records the color filter array pattern; in this case, the uncompressed data follow a 'bggr' pattern. We separate the raw data into the b, g1, g2, and r channels directly from the mosaic, and then fit our models separately for each color channel. We use 2D polynomial models to fit the data of each color channel.
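A minimal sketch of this channel separation, assuming the mosaic array from the previous snippet and a BGGR layout:

    # Split a BGGR Bayer mosaic into four half-resolution color planes.
    b  = mosaic[0::2, 0::2]   # blue:   even rows, even columns
    g1 = mosaic[0::2, 1::2]   # green1: even rows, odd columns
    g2 = mosaic[1::2, 0::2]   # green2: odd rows,  even columns
    r  = mosaic[1::2, 1::2]   # red:    odd rows,  odd columns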
Poly22
We first fit a polynomial of up to second order:
sfr(x,y) = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2,
where the pij are the fitted coefficients and x, y are normalized pixel coordinates.
After obtaining the fitted map sfr(x,y), we divide the original data elementwise by the fitted map to obtain the corrected data. Figure 3 shows the original data, correction maps, and corrected data for the different color channels as 2D color maps. Because g1 and g2 differ very little, we plot only one of them, g1, so the three channels shown are the R, G, and B channels respectively.
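As a sketch of this fit-and-divide step (the polynomial is fitted here by ordinary least squares on an explicit design matrix; numpy and the g1 plane from the previous snippets are assumed):

    # Fit sfr(x,y) = p00 + p10*x + p01*y + p20*x^2 + p11*x*y + p02*y^2 by least squares.
    def fit_poly22(plane):
        h, w = plane.shape
        y, x = np.mgrid[0:h, 0:w]
        x = x / (w - 1) - 0.5                  # normalized coordinates in [-0.5, 0.5]
        y = y / (h - 1) - 0.5
        A = np.stack([np.ones_like(x), x, y, x**2, x*y, y**2], axis=-1).reshape(-1, 6)
        p, *_ = np.linalg.lstsq(A, plane.ravel(), rcond=None)
        return (A @ p).reshape(h, w)           # fitted shading map sfr(x, y)

    sfr = fit_poly22(g1)
    corrected = g1 / sfr                       # elementwise division; ~flat for sphere data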
Poly44
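As noted in the conclusions, a higher-order polynomial corrects the shading better; poly44 extends the same fit to fourth order. A minimal sketch of the generalization, assuming poly44 keeps all terms x^i*y^j with i + j <= 4 (15 coefficients), as in MATLAB's poly44 surface fit:

    # Generalized 2D polynomial fit; deg=2 reproduces poly22, deg=4 gives poly44.
    def fit_poly2d(plane, deg):
        h, w = plane.shape
        y, x = np.mgrid[0:h, 0:w]
        x = x / (w - 1) - 0.5
        y = y / (h - 1) - 0.5
        terms = [x**i * y**j for i in range(deg + 1)
                 for j in range(deg + 1) if i + j <= deg]
        A = np.stack(terms, axis=-1).reshape(-1, len(terms))
        p, *_ = np.linalg.lstsq(A, plane.ravel(), rcond=None)
        return (A @ p).reshape(h, w)

Figures for this model are in the supplementary slides and code [9].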
Lens shading correction pipeline
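Combining the steps above gives the full correction pipeline: read the raw DNG, split the mosaic into the four Bayer planes, fit a per-channel shading map on the integrating-sphere data, normalize each map, and divide each plane of a target image by its map. A minimal sketch, reusing fit_poly2d from the previous section:

    # End-to-end sketch: build per-channel correction maps from the sphere image,
    # then apply them to any raw mosaic of the same size.
    def split_bggr(m):
        return m[0::2, 0::2], m[0::2, 1::2], m[1::2, 0::2], m[1::2, 1::2]

    def make_correction_maps(sphere_mosaic, deg=4):
        maps = [fit_poly2d(p, deg) for p in split_bggr(sphere_mosaic)]
        return [m / m.max() for m in maps]      # unit gain at the brightest point

    def correct(mosaic, maps):
        out = mosaic.astype(np.float64)         # astype copies, so division is safe
        out[0::2, 0::2] /= maps[0]              # b
        out[0::2, 1::2] /= maps[1]              # g1
        out[1::2, 0::2] /= maps[2]              # g2
        out[1::2, 1::2] /= maps[3]              # r
        return out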
Results
Exposure
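As summarized in the conclusions, exposure duration mainly scales the overall signal level, so the same correction map can be reused across exposures after normalizing the data; the figures are in the supplementary slides [9]. A minimal sketch of one plausible normalization (an assumption about the exact scheme used):

    # Divide out the mean level so that only the shading *shape* remains; maps
    # fitted at different exposure times can then be compared or reused directly.
    def shading_shape(plane):
        return plane / plane.mean()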
ISO speed
We now fix the exposure time and analyze the influence of ISO speed. ISO speed strongly affects both the noise level and the overall signal level. The provided data contain only a few ISO settings per exposure time; we found at most three ISO speeds sharing the same exposure. We then performed an analysis similar to the one for exposure duration; the data and figures are in the supplementary slides and code [9]. The conclusion is that ISO speed has a larger influence on the correction map, so it may be better to re-fit the model for each ISO speed.
Cos4th
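We also compare our fitted maps against the classical cos4th falloff model [2, 5], in which the relative illumination drops as the fourth power of the cosine of the off-axis angle. A minimal sketch of building that reference map on the pixel grid, with the effective focal length in pixel units left as a free (assumed) parameter:

    # cos4th falloff: I(x, y) / I(center) = cos(theta)^4, with tan(theta) = r / f.
    def cos4th_map(h, w, f_pixels):
        y, x = np.mgrid[0:h, 0:w]
        r = np.hypot(x - (w - 1) / 2, y - (h - 1) / 2)   # distance from the image center
        theta = np.arctan(r / f_pixels)                  # off-axis angle
        return np.cos(theta) ** 4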
Lens shading correction for real pictures
The teaching team provided DNG files taken with the Google Pixel 4a in various situations. We use the correction maps obtained from our poly22 and poly44 models to correct the lens shading artifacts. From the examples shown in Figs. 13 and 14, we observe that the corners become lighter. Poly22 introduces more unwanted distortion, while poly44 behaves better. A slight color aberration appears when we apply the lens shading correction; this could be studied in a future color correction project.
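A short usage sketch of this step, assuming the maps and functions from the pipeline section and a hypothetical scene file of the same sensor size as the sphere images:

    # Correct a real photo with maps fitted on the integrating-sphere image.
    with rawpy.imread('scene.dng') as raw:               # hypothetical file name
        scene = raw.raw_image_visible.astype(np.float64)
    maps = make_correction_maps(mosaic, deg=4)           # poly44 maps from the sphere data
    scene_corrected = correct(scene, maps)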
Conclusions
In conclusion, we have explored different polynomial models for lens shading correction under different conditions. The results show that lens shading is corrected better with a higher-order polynomial. We conclude that each color channel needs its own fitted lens shading map. Across exposure durations, the same correction map can be reused together with data normalization, whereas across ISO speeds it may be better to re-fit the lens shading model before correction. We also encountered some other issues during the data analysis.
When processing the images, we noticed some color distortion introduced by the correction, which has also been reported in Ref. [8]. It would be interesting in the future to combine color calibration with the lens shading algorithms.
Slides and Codes
Check the OneDrive link: https://office365stanford-my.sharepoint.com/:f:/g/personal/jiahuiw_stanford_edu/EuQqRyY9ZmZLqCuHfQ_84GEBggFib-E5s55BCpqunB4ZAw?e=t2Y67k
References
[1]. When to use vignetting in photographs. https://www.lightstalking.com/vignetting/
[2]. Psych221 Class slides; Wiki Vignetting. https://en.wikipedia.org/wiki/Vignetting
[3]. Farrell, Joyce E., et al. "A simulation tool for evaluating digital camera image quality." Image Quality and System Performance. Vol. 5294. International Society for Optics and Photonics, 2003.
[4]. Farrell, Joyce E., Peter B. Catrysse, and Brian A. Wandell. "Digital camera simulation." Applied optics 51.4 (2012): A80-A90.
[5]. Kerr, Douglas A. "Derivation of the Cosine Fourth Law for Falloff of Illuminance Across a Camera Image." (2007).
[6]. Chen, Chih-Wei, and Chiou-Shann Fuh. "Lens Shading Correction for Dirt Detection." Pattern Recognition, Machine Intelligence and Biometrics. Springer, Berlin, Heidelberg, 2011. 171-195.
[7]. Adobe Digital Negative (DNG) specification. https://wwwimages2.adobe.com/content/dam/acom/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf
[8]. Silva, Varuna De, Viacheslav Chesnokov, and Daniel Larkin. "A novel adaptive shading correction algorithm for camera systems." Electronic Imaging 2016.18 (2016): 1-5.
[9]. For more information and figures, please check our slides and codes.