Google Pixel 4a Lens Shading

Jiahui Wang

Introduction

Lens shading, also known as vignetting, is common in optics and photography. As shown in Fig. 1 [1,2], the peripheral region of the image is darker than the center region. This phenomenon is intrinsic to the geometry of the lens: the relative illumination at the edge of the field is lower than at the center [2]. The artifact has been widely used by photographers and artists to add depth and a sense of history to their photographs [1]. As engineers, however, we should correct the artifact in the camera pipeline and leave users free to add the effect back later.

In this project, we use photos taken with a Google Pixel 4a to model the lens shading of its camera. We analyze the lens shading for different color channels, exposure times, and ISO speeds, and we compare our model with the standard cos4th model [2, 4, 5]. Finally, we build a lens shading correction pipeline and test it on different images.

Figure 1: Left: lens geometry that causes lens shading. Right: a picture with the lens shading artifact (from [1]).

Datasets and pre-processing

The pictures are taken with a Google Pixel 4a pointed into an integrating sphere, which provides a uniform illumination field, so any nonuniformity in the pictures comes from lens shading and camera noise.

The pictures are captured at different exposure times and ISO speeds, which we use later to analyze how the model depends on these parameters. The data are stored as raw images in the DNG format [7]. We use ieDNGRead [3,4] to read the files and extract the image information.
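As a minimal sketch (in MATLAB with ISETCam), reading one capture could look like the code below. The exact outputs of ieDNGRead and the file name are assumptions here; check the ISETCam documentation for the precise signature.

 % Read a raw DNG capture of the integrating sphere.
 % NOTE: the return values of ieDNGRead are assumed (raw mosaic plus metadata),
 % and 'pixel4a_sphere.dng' is a placeholder file name.
 [mosaic, info] = ieDNGRead('pixel4a_sphere.dng');
 mosaic = double(mosaic);    % work in double precision from here on
 disp(info);                 % exposure time, ISO speed, CFA pattern, ...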

The raw DNG files store pixel-wise values from the color filter array. As shown in Fig. 2, if we plot a line of the raw DNG data directly (the black curve), the interleaved color channels are mixed together and the profile appears to vary much more strongly than the true signal. We therefore separate the color channels before any further processing.
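A rough sketch of this separation is shown below, continuing from the mosaic read above. An RGGB layout is assumed for illustration; the true CFA pattern should be taken from the DNG metadata.

 % Split the Bayer mosaic into its four color planes (RGGB assumed).
 R  = mosaic(1:2:end, 1:2:end);    % red pixels
 G1 = mosaic(1:2:end, 2:2:end);    % green pixels on red rows
 G2 = mosaic(2:2:end, 1:2:end);    % green pixels on blue rows
 B  = mosaic(2:2:end, 2:2:end);    % blue pixels
 % 1D profile through the middle row of each plane (cf. Figure 2).
 row = round(size(R, 1) / 2);
 plot(R(row, :), 'r'); hold on;
 plot(G1(row, :), 'g');
 plot(B(row, :), 'b'); hold off;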

Figure 2: 1D raw data (black) vs color separated data (colored).

Methods

Poly22
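A minimal sketch of the poly22 fit, assuming 'plane' is one of the separated color planes (e.g. G1) and that MATLAB's Curve Fitting Toolbox is available:

 % Fit a second-order 2D polynomial ('poly22') to the normalized shading surface.
 [ny, nx] = size(plane);
 [X, Y]   = meshgrid(1:nx, 1:ny);
 shading  = plane ./ max(plane(:));            % normalize so the center is ~1
 ftPoly22 = fit([X(:), Y(:)], shading(:), 'poly22');
 plot(ftPoly22, [X(:), Y(:)], shading(:));     % visual check of the fitted surface

Replacing 'poly22' with 'poly44' gives the fourth-order fit used in the next subsection.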

Poly44

Cos4th
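A sketch of one way to fit the cos4th model [5], in which the relative illumination falls off as the fourth power of the cosine of the off-axis angle. The optical center (cx, cy) and the effective focal length f (in pixels) are treated as free parameters here; all variable names are illustrative and the Optimization Toolbox is assumed.

 % cos4th model: shading(x, y) ~ cos( atan( r / f ) )^4, with r the distance from center.
 [ny, nx] = size(plane);
 [X, Y]   = meshgrid(1:nx, 1:ny);
 shading  = plane ./ max(plane(:));
 xy       = [X(:), Y(:)];
 cos4th = @(p, xy) cos(atan(hypot(xy(:,1) - p(1), xy(:,2) - p(2)) ./ p(3))).^4;
 p0 = [nx/2, ny/2, nx];                           % initial guess: center and focal length
 p  = lsqcurvefit(cos4th, p0, xy, shading(:));    % fitted [cx, cy, f]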

Lens shading correction pipeline
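As a sketch of the correction step itself (continuing from the poly22 fit above): the fitted surface serves as a per-pixel gain map, and dividing the raw channel by it flattens the integrating-sphere image. The reshaping and normalization details are assumptions.

 % Build a gain map from the fitted shading surface and apply it.
 gain      = 1 ./ reshape(ftPoly22(X(:), Y(:)), ny, nx);   % per-pixel correction gain
 corrected = plane .* gain;                                % corrected color plane
 imagesc(corrected); axis image; colorbar;                 % should now be nearly flat

In the full pipeline this step would presumably be repeated for each color plane, and the corrected planes reassembled into the mosaic.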

Results

Poly22

Poly44

Cos4th

Conclusions

Slides and Codes

Check the OneDrive link: https://office365stanford-my.sharepoint.com/:f:/g/personal/jiahuiw_stanford_edu/EuQqRyY9ZmZLqCuHfQ_84GEBggFib-E5s55BCpqunB4ZAw?e=t2Y67k

References

[1]. When to use vignetting in photographs. https://www.lightstalking.com/vignetting/

[2]. Psych221 Class slides; Wiki Vignetting. https://en.wikipedia.org/wiki/Vignetting

[3]. Farrell, Joyce E., et al. "A simulation tool for evaluating digital camera image quality." Image Quality and System Performance. Vol. 5294. International Society for Optics and Photonics, 2003.

[4]. Farrell, Joyce E., Peter B. Catrysse, and Brian A. Wandell. "Digital camera simulation." Applied optics 51.4 (2012): A80-A90.

[5]. Kerr, Douglas A. "Derivation of the Cosine Fourth Law for Falloff of Illuminance Across a Camera Image." (2007).

[6]. Chen, Chih-Wei, and Chiou-Shann Fuh. "Lens Shading Correction for Dirt Detection." Pattern Recognition, Machine Intelligence and Biometrics. Springer, Berlin, Heidelberg, 2011. 171-195.

[7]. Adobe Digital Negative (DNG) specification. https://wwwimages2.adobe.com/content/dam/acom/en/products/photoshop/pdfs/dng_spec_1.5.0.0.pdf

[8]. Silva, Varuna De, Viacheslav Chesnokov, and Daniel Larkin. "A novel adaptive shading correction algorithm for camera systems." Electronic Imaging 2016.18 (2016): 1-5.

[9]. For more information and figures, please check our slides and codes.