== Introduction ==
The goal of this project is to evaluate the image sensor characteristics of the Google Pixel 4a from an image-systems perspective. The lens shading effect of the illuminant on the image sensor is analyzed. Furthermore, image sensors are typically characterized by their signal-to-noise ratio. This project looks at the different settings a user would set on a camera, estimates the noise levels, and compares the trends to the theoretical understanding gained in the course. The project was executed by (i) Tzu-Sheng Kuo (MS EE student), (ii) Aravindh Vangal (SCPD student), and (iii) Phani Kondapani (SCPD student).
== Theory: Lens Shading ==
The lens shading effect, also known as vignetting, describes an image's reduction of brightness from the center to peripheral areas. The illustration below describes the cause of the lens shading effect. Specifically, the central area of the sensor receives more incoming light (green rays) in comparison to the peripheral regions (purple rays), even if the rays arriving at the lens span all angles uniformly. As a result, the relative illumination at the edge of the sensor is lower.
In this project, we are interested in modeling the lens shading effect using images captured under different exposure durations, ISO speeds, and color channels. Furthermore, we would like to apply our model to remove the effect from several real-world images and to support the analysis of sensor noise, particularly the DSNU and PRNU estimations.
[[File:Theory lens shading.png|thumb|500px|center|An illustration of the lens shading effect.]]
== Theory: Sensor Noise ==
The block diagram of the sensor can be divided into two parts: the photodiode pixel and the readout circuitry. Each of the components contributes to the total charge collected at the output. We start by looking at each component in order of time, from when the photon arrives up to the point where voltages are available to the user. Component 1 marked in the picture is the photodiode. The photodiode works as a bucket that receives photons and fills up with electrons: the more photons incident on the photodiode, the more electrons are collected. This is the operation happening fundamentally at point 1 in the picture. At point 2, the charges collected by the pixel are sensed for driving to the output; this is called the sense node. The charge sensed at point 2 has some contribution from the reset signal as well, as this flag is necessary to transfer charges out of the pixel. Hence, what is sensed at point 2 is the bucket of electrons that the photodiode has collected plus the signal from the reset switch.
These electrons then travel through a series of transistors and find their way into the readout circuit. All the electrons lost or added along the travel path from the sense node to the output can be modeled as transfer charges. The readout circuit includes an analog gain, labelled 'g' in the figure, whose function is to amplify the signal at the input by a factor of g at the output. The reasons a user would want to do so are discussed in a later section on ISO speeds. What is clear from the travel path is that all the charges or electrons available at the input of the analog gain circuit are scaled by the gain factor. With this setup as the context for our camera, we will now attempt to understand from theory the effects one should expect when varying the settings available to the user. We first define some of them.
=== A. Camera Options and Impact ===
1. '''Exposure time:''' The exposure time is the time for which the photodiode is allowed to integrate photons. It follows from the diagram that the charge accumulated at the sense node before readout is a direct function of the exposure time. All electrons accumulated in the photodiode, irrespective of the process that generated them, are subject to change by a variation of the exposure time.
2. '''ISO speed:''' The ISO speed in the camera is an option available to the user to capture images of objects in motion at high speeds. It works by switching the read column shutter at a faster rate for a higher ISO setting. It follows that if the shutter is switched faster, there must be circuitry to compensate for the loss of signal; this is exactly what the gain stage in the analog circuit provides.
[[File:Image_sensor_noise_sources2.PNG|thumb|600px|center|An illustration of the different components of the sensor and their resultant noise contributions. [6]]]
=== B. Electron generation phenomenon ===
1. '''Photoelectric voltage:''' Photons are converted to electrons by the photoelectric effect in the photodiode. The voltage accumulated by the photodiode this way is the photoelectric voltage. This is the signal voltage that we would ideally like to see in our image. What is important to note is that photon arrivals follow a Poisson distribution, as is fundamental to nature. This statistical variation is captured in the shot noise term, which is a function of the Poisson distribution and, since the circuit multiplies everything by the gain factor g, also a function of the gain.
2. '''Dark voltage or thermal voltage:''' Electrons are also generated in the photodiode by virtue of heating in the pixel. This is referred to as thermal voltage or dark voltage. Note that even in the absence of any photons or illumination, these electrons are generated simply by heat.
Note that for both 1) and 2) above, allowing the pixel to integrate charge will create more electrons at the sense node. Thus, in the equation shown in the image above, we see that both the dark current and the signal are functions of the exposure time and of the gain 'g', which is downstream in the flow of charge. Also note that the electrons accumulated in the pixel due to the photoelectric effect are a function of the incident photons (or illumination), while the dark current is independent of this process since its driving factor is heat.
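To make these dependencies concrete, here is a minimal single-pixel sketch of the model described above. This is illustrative only, not the project code: all parameter values are made up, and poissrnd assumes the Statistics and Machine Learning Toolbox is available.
 % Illustrative single-pixel signal model (all parameter values made up).
 photonRate = 1e4;  % mean photon arrivals per second (illustrative)
 darkRate   = 50;   % thermally generated electrons per second (illustrative)
 resetSigma = 1;    % reset noise at the sense node, electrons rms (illustrative)
 circSigma  = 0.5;  % downstream circuit noise in digital values (illustrative)
 t = 0.01;          % exposure time in seconds
 g = 2;             % analog gain set by the ISO speed
 
 signalElectrons = poissrnd(photonRate * t); % shot noise: Poisson photon arrivals
 darkElectrons   = poissrnd(darkRate * t);   % dark current also integrates over t
 resetNoise      = resetSigma * randn;       % added at the sense node, so amplified by g
 circuitNoise    = circSigma * randn;        % added after the gain stage
 
 % Both the signal and the dark current scale with t and g; the reset
 % component of the read noise scales with g, which is one reason read
 % noise grows with ISO speed.
 outputDV = g * (signalElectrons + darkElectrons + resetNoise) + circuitNoise;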
== Theory: Dark Current Noise ==
'''Introduction:'''
Dark current noise is induced by thermally generated electrons, which are indistinguishable from photo-generated electrons. Dark current depends on the sensor temperature; thus, high-end CCDs are designed with mechanisms such as cooling, and CMOS sensors with special circuitry, to reduce the amount of dark current.
Even in the absence of light, charge will accumulate in each pixel, and this accumulation of charge constitutes 'dark current' noise. A well-known method for measuring this dark current noise is to check the slope of the dark images with respect to different exposure durations.
 
[[File: DC Intro.png|500px|center|thumb|Example of dark current (Source Kodak, KA 0401 sensor).]]
<br /> Another aspect of noise associated with the dark signal is DSNU, the Dark Signal Non-Uniformity, which is seen as an offset between pixels in the dark.
== Methods ==
The method used to evaluate each sensor parameter is to identify the parameters with the fewest dependencies and to control all the other parameters during the evaluation. We start with lens shading.
=== A. Lens Shading ===
Before diving into details of the algorithm that models the lens shading effect, we first introduce the data provided for this project.
==== Data ====
The main folder is PRNU_DSNU, which contains six subfolders with images captured under different ISO speeds (55, 99, 198, 299, 395, and 798). Within each of these subfolders, there are five image bursts that correspond to five different exposure durations. Each image burst contains five images. The illustration below summarizes the described folder structure.
[[File:Folder structure.png|thumb|400px|center|An illustration of the folder structure.]]
==== Algorithm ====
Given the provided data, we run our algorithm for each image burst independently. The illustration below shows the modeling pipeline.
First, we perform element-wise matrix averaging on the five images within an image burst to mitigate random noise. We then extract the digital values of each color channel in the averaged image to get a smaller matrix with half the original image size in each dimension. For example, in the illustration below, we plot the matrix of the blue channel in 3 dimensions with the digital value on the z-axis. We can clearly see the lens shading effect, which causes a higher digital value in the central area. Note that for this step, we get different matrices for the G1 and G2 green channels in the Bayer pattern. Next, for each of the smaller matrices, we use the [https://www.mathworks.com/help/curvefit/fit.html fit] function from MATLAB's [https://www.mathworks.com/help/curvefit/index.html Curve Fitting Toolbox] to fit a 2-dimensional polynomial function to the digital values of each color channel. For example, the illustration below shows the fitting process for the blue channel. We refer to these fitted values for each color channel as a color-wise lens shading map. In total, we get four maps that correspond to the four color channels of each image burst. For the final step, which is not shown here, we combine the four color-wise maps into a single lens shading map by assigning their values to a new image-size matrix based on their positions in the Bayer pattern of the averaged matrix. This new matrix is the final lens shading map for the image burst. A condensed code sketch follows the pipeline figure below.
[[File:Algorithm full.png|thumb|800px|center|The lens shading modeling pipeline.]]
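As a condensed, self-contained sketch of the per-channel fitting step (the full scripts are in the Appendix), the code below fits one channel of a small synthetic mosaic. The synthetic dv array merely stands in for the real 5-image average read with sensorDNGRead, and the array sizes are reduced for speed.
 % Synthetic stand-in for the averaged raw mosaic (radial falloff); the
 % real data is a 3024x4032 mosaic from sensorDNGRead.
 [cc, rr] = meshgrid(linspace(-1,1,404), linspace(-1,1,304));
 dv = 800 * (1 - 0.4*(cc.^2 + rr.^2));
 % Extract one Bayer channel (the B sites of the B G1 / G2 R pattern).
 B = dv(1:2:end, 1:2:end);
 [X, Y] = meshgrid(1:size(B,2), 1:size(B,1));
 % Fit a 2-D quadratic surface (Curve Fitting Toolbox) and normalize so
 % the peak of the color-wise map equals one.
 sfB  = fit([X(:) Y(:)], B(:), 'poly22');
 mapB = reshape(sfB(X(:), Y(:)), size(B));
 mapB = mapB ./ max(mapB, [], 'all');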
=== B. Read Noise ===
It can be clearly inferred from the circuit diagram above that the read noise in the circuit is physically not a consequence of the pixel: it is independent of the pixel but adds onto the pixel signal. Hence, exposure time is of no consequence for readout noise. The way to isolate it is to set everything else to zero. We know from the previous section that both the dark current and the signal need a finite exposure (integration) time in which to collect charge. By denying the camera any exposure time and illumination, we ensure that all the voltages being read out are solely a function of the read noise. Read noise can include the combined effect of reset-switch noise as well as circuit noise, and it is expected to increase with increasing gain. Typically, information is read out either all pixels at once or column by column, so read noise can sometimes show up as column noise as well.
1. '''Read noise estimation flowchart:''' The read noise in the images corresponds to the noise floor of the images. There are a couple of ways to estimate it, depending on the measurement accuracy and conditions. The first method is to evaluate difference pairs from the raw image dataset and compare their standard deviations. The second method, a more accurate one, is to calculate mean images, subtract each individual image from the mean image, and derive the noise statistics by taking the standard deviation of all the pixels in the image.
[[File:readnoise_flow1.PNG|thumb|center|500px|Read noise Estimation Procedure. Flow 1 - With difference image pairs]] [[File:readnoise_flow2.PNG|thumb|center|500px|Read noise Estimation Procedure. Flow 2 - Using subtraction from the mean image]]
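A minimal sketch of the core computation in flow 2, using a small synthetic burst in place of the real zero-exposure frames (the full script is in Appendix B):
 % Synthetic burst: 5 zero-exposure frames, black level 64, noise sigma = 2 DV.
 imgs = 64 + 2*randn(5, 100, 100);
 meanImg = squeeze(mean(imgs, 1));      % per-pixel mean over the burst
 d = squeeze(imgs(1,:,:)) - meanImg;    % residual of one frame about the mean
 readNoiseDV = std(d(:));               % read noise in digital values
 % (approximately: the mean image itself still contains a little noise)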
2. '''Results:''' The difference images typically look like salt-and-pepper noise spread around the entire image sensor. We couldn't find any spatial correlation along the rows or columns of the sensor. The pixel channel is irrelevant to this analysis, as the illumination is fixed at zero and there is no accumulation of electrons in the pixel for this experiment. For flow 1, a normalizing factor of sqrt(2) was used to express the noise of the target image instead of the difference image. The results below are plotted in microvolts versus the gain.
===='''Difference Image'''====
A few words on the difference images. A typical difference image looks like the one below: salt-and-pepper noise spread across the image sensor.
[[File: diffimage.PNG|thumb|500px|center|Difference image of pair of images at zero exposure and illumination. ]]
Spatial samples at fixed portions of the sensor were selected to analyze for any variations in the noise statistics. Histograms of the difference images were plotted, and the observation was that there was no spatial correlation between pixel position and readout noise. This is also what was expected.
[[File: diffimagespatial.PNG |thumb|500px|center|Difference image evaluated across different spatial regions within the sensor.]]  [[File: diffimagespatial_hist.PNG|thumb|500px|center|Corresponding histograms evaluated for each of these difference images. Note that there is almost no difference, the position of the sensor doesn't show correlation to read noise indicating the readout circuit components are fabricated very uniformly.]]
===='''Normalization principle for flow 1'''====
[[File:noise_estimate_readnoise.PNG]]
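For two frames a and b with independent read noise of the same standard deviation sigma, the variance of their difference is Var(a − b) = Var(a) + Var(b) = 2·sigma^2. The standard deviation of a difference image therefore overestimates the single-image read noise by a factor of sqrt(2), which is why we divide by sqrt(2) in flow 1.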
=== C. Dark Current ===
The provided DarkCurrentRate data was used to perform the dark current noise estimation. The data consists of images captured in the dark with exposure durations of 0.01, 0.02, 0.04, 0.08, and 0.16 seconds, at ISO speeds ranging from ISO 55 to ISO 798. The data is used to measure the growth of the dark noise with time; we used the ISO 55 data to simplify the mathematical computation, since the gain for ISO 55 is 1. The Image Systems Engineering Toolbox for Cameras (ISETCAM), a MATLAB toolbox provided by Stanford University, was used to calculate the dark current noise and DSNU measurements.
For the dark current, the mean value of the 5 images was extracted at each exposure. Since we can only get digital values, this is the best accuracy we could have in terms of the "unprocessed signal". The slope was then calculated by fitting the mean values against the various exposure times.
[[File:DC_Procedure.png|200px|center|thumb|Dark current measurement procedure.]]
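The slope estimate itself reduces to a straight-line fit. A minimal sketch with illustrative numbers (the full script is in the Appendix):
 % Illustrative values only: meanDV(k) is the mean digital value of the
 % 5-image average captured at exposure time t(k).
 t      = [0.01 0.02 0.04 0.08 0.16];        % exposure durations in seconds
 meanDV = 64 + 0.064*t + 0.0001*randn(1,5);  % synthetic dark means near black level 64
 coeff  = polyfit(t, meanDV, 1);             % straight-line fit
 darkCurrentSlope = coeff(1);                % digital values per second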
The zoomed-in view of the .dng image below shows the various pixels that correspond to the accumulated noise in the image.
[[File:DC Image Zoom.png|800px|center]]
=== D. DSNU Estimation ===
DSNU, the Dark Signal Non-Uniformity, is seen as an offset between pixels in the dark.
For DSNU, the method used is similar to the dark current noise measurement, except that we corrected the images for lens shading and used the y-intercept of the fit to calculate the result. A condensed sketch follows the figure below.
[[File:DSNU.png|200px|center]]
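A condensed sketch of this procedure with illustrative numbers (the full script is in Appendix D):
 % y(k) is the spatial standard deviation (std2) of the lens-shading-
 % corrected dark frame at exposure time t(k); illustrative values only.
 t = [0.01 0.02 0.04 0.08 0.16];
 y = [0.0110 0.0115 0.0125 0.0150 0.0200];
 coeff = polyfit(t, y, 1);
 dsnu  = coeff(2);   % y-intercept: non-uniformity remaining at zero exposure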
=== E. PRNU Estimation ===
PRNU stands for pixel response non-uniformity. The aim is to calculate the pixel response and determine its non-uniformity. To understand this in more detail, please refer to the figure below, which plots the output voltage versus the number of photons. Even under no illumination (0 photons), there is a finite dark current by virtue of thermal effects, as seen in the previous section. Hence, the output voltage does not start at zero for zero incoming photons but contains some dark current contribution.
Now, as light becomes incident on the pixel, the pixel starts converting photons into electrons; this is the linear regime. There is indeed a minimum number of photons that must be incident on the pixel to start the creation of electrons; this is determined by the pixel design (pixels that can count single photons do exist). For the purpose of our analysis with constant illumination, we vary the exposure time to simulate the x-axis (number of photons). The output voltage is expressed in volts, which then gets converted to a gray level in our image. Essentially, the idea is to fit a line between the pixel value and the exposure time in the linear regime of the graph and compare the variation in the slopes. The standard deviation of the slopes is measured, and its ratio with respect to the mean is expressed as a percentage. The mathematical formula is defined below:
[[File: formula.PNG|thumb|400px|center|PRNU formula, expressed as a percentage [7]]]
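In symbols, writing sigma_slope for the standard deviation of the per-pixel slopes and mu_slope for their mean, the quantity reported is PRNU (%) = 100 · sigma_slope / mu_slope.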
'''Discussion:''' We saw in section A that the lens shading effect spatially modulates the reception of photons at the image sensor. Hence, as a precursor to the estimation of PRNU, we first compensate for the lens shading effect using the results generated in section A. This gives rise to the workflow below for the estimation of the sensor's PRNU. The first step is to evaluate a mean image from the repeated experiments per ISO value and exposure time. This compensates for shot noise to the extent possible, although taking the mean will not remove the noise entirely. The second step is to correct for lens shading, followed by cropping out a 1 MP portion of the sensor and further dividing the 1 MP crop into 16 tiles. The idea is then to evaluate PRNU on patches of size 64x64 pixels.
[[File: prnuflow1.PNG|thumb|1000px|center|Workflow for PRNU estimation]]
We observe that there are still some residual errors after correcting for lens shading with the polynomial fit. The following image demonstrates the need to be more rigorous in order to arrive at the right estimates for PRNU. We use a cropped portion at the center (~1 MP). However, even in this 1 MP crop out of the 12 MP image sensor, we observe that the pixel responses still follow a gradual spatial variation in pixel values. Hence, the approach is to divide this 1 MP cropped portion of the image into 16 different patches of size 64x64 px, thereby analyzing 4k pixels at a time. The results are shared in the results section.
[[File: prnu_images_1.PNG|thumb|600px|center|An illustration of the gray-level intensity profiles at the sensor from left to right (a): Before correction for lens shading, (b): After applying map for lens shading correction, (c): A localized ~ 1 MP patch within the 12 MP camera and (d): A further localized 64x64 px. patch ]] 
Shown below is a surface comparison of the pixel values and pixel responses extracted from the data with varying exposure times. The strong correlation between the surfaces tells us that any standard deviation evaluated on these pixel responses is likely to contain a significant contribution from the lens shading artefacts and thus won't be truly indicative of the non-uniformities in the pixel response. Hence the idea of subdividing the crop into 16 further patches.
[[File: prnu_images_2.PNG|center|thumb|500px|center|From left to right (a): Gray-level , (b): Pixel response evaluated as a slope evaluated by plotting exposure time on x-axis and gray-level on the y-axis per pixel]]
'''Slope estimation routine:''' A function that evaluates PRNU from an image stack given a tile size was created as follows. The critical part is the fitting of the line itself, which is done using the MATLAB function polyfit(x, y, 1) to fit a degree-1 polynomial between x and y. The split_channels() function used in this code splits the evaluated slopes and intercepts per channel and then estimates the mean and standard deviation per channel. Thus, we obtain the PRNU per channel and per ISO speed for our experiments. A link to the complete code is shared at the end of the report, and a hypothetical usage example follows the function below.
'''Function taking in Image and tilesize as arguments and retrieving the slopes and intercepts for each color channel as well as the entire pixel overall'''
function [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b] = evaluate_prnu_dsnu(x, Im_stack_cropped, tileSize)
    % x: exposure times; Im_stack_cropped: exposures x rows x cols stack
    rows = tileSize(1);
    cols = tileSize(2);
 
    % final_coeffs(1,:,:) stores the slopes, final_coeffs(2,:,:) the intercepts
    final_coeffs = zeros([2, rows, cols]);
    tic
    for i = 1:rows
        for j = 1:cols
            y = Im_stack_cropped(:,i,j);     % one pixel's values across exposures
            coeff = polyfit(x,y,1);          % straight-line fit
            final_coeffs(1,i,j) = coeff(1);  % slope = pixel response
            final_coeffs(2,i,j) = coeff(2);  % intercept = offset in the dark
        end
    end
    toc
    slopes = reshape(final_coeffs(1,:,:), tileSize);
    intercepts = reshape(final_coeffs(2,:,:), tileSize);
    [slopes_r, slopes_g, slopes_b] = split_channels(slopes);
    [intercepts_r, intercepts_g, intercepts_b] = split_channels(intercepts);
end
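A hypothetical usage example (the variable names and the synthetic stack are illustrative; it assumes the split_channels helper from the complete code is on the path):
 % Hypothetical usage with a synthetic stack (25 exposures, one 64x64 tile).
 x = linspace(0.01, 0.16, 25);                            % exposure times
 Im_stack_cropped = 64 + x' .* (400 + 4*randn(1, 64*64)); % linear pixel responses
 Im_stack_cropped = reshape(Im_stack_cropped, [25 64 64]);
 [slopes, intercepts] = evaluate_prnu_dsnu(x, Im_stack_cropped, [64 64]);
 prnu_percent = 100 * std(slopes(:)) / mean(slopes(:));   % PRNU as a percentage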
We notice from the figure below that the pixel response varies to a much smaller extent over the 64x64 pixel patch clips. This validates our choice of a 64x64 pixel patch clip for evaluating PRNU for this image sensor. The PRNU values calculated per channel are available in the Results section.
[[File: prnu_pr_1.PNG|thumb|500px|center|Plot of the pixel responses across the 1 MP patch. Illustrates the variance in slope even in a 1MP cropped portion of the image sensor]]
[[File: prnu_pr_2.PNG|thumb|500px|center|Plot of the pixel responses across a 64x64 tile. Shows significantly lower variance in slope. PRNU is more effectively measured this way as the residual effects of lens shading correction are ignored.]]
== Results and discussion ==
===='''A. Lens Shading '''====
For the experimental results, we first look at the lens shading map of a single image burst. We then compare the maps across different exposure durations and ISO speeds. In addition, we compare our map with another standard approximation approach. Finally, we apply the maps to a set of real-world images to remove their lens shading effect.
===== Single Image Burst =====
In this section, we model the lens shading map of a single image burst captured with an ISO speed of 55 and an exposure duration of 1/100 second. Following the modeling pipeline described above, we first visualize the digital value of each color channel using the 3-dimensional plots shown below. While the range of digital values (z-coordinates) differs among color channels, every plot is higher in the central area and lower in the periphery. This observation indicates that all the color channels exhibit a lens shading effect.
[[File:Raw.png|thumb|500px|center|The 3-dimensional plots of digital values.]]
Next, we fit 2-dimensional polynomial surfaces to each of these plots separately. The resulting color-wise maps are shown below.
[[File:Fit raw.png|thumb|500px|center|The 3-dimensional plots of color-wise maps.]]
Since our goal is to get the relative illumination, we normalize each color-wise map as shown below. Now, the maximal value of each map is equal to one.
[[File:Fit normal.png|thumb|500px|center|The 3-dimensional plots of normalized color-wise maps.]]
Finally, we stack these normalized color-wise maps together by assigning their values to an image-size matrix based on their corresponding positions in the original Bayer pattern. The resulting matrix, visualized below, is our final lens shading map for the image burst captured with an ISO speed of 55 and an exposure duration of 1/100 second. As we can see, there are multiple unaligned layers in this final map, indicating that the lens shading effect of each color channel is slightly different. We call this lens shading map our base map; we will use it to compare against the others and apply it in the analysis of the sensor noise.
[[File:Iso 55 map 1.png|thumb|500px|center|The lens shading map.]]
===== Exposure Duration & ISO Speed =====
Following the same modeling pipeline, we may compute the lens shading map for every image burst. For example, consider the image bursts with the same ISO speed of 55. We get each of their maps as shown below. To understand how different they are, we compute their relative difference to the base map, which is the one we obtained above and also the left-most one in this figure. We visualize their differences in the 2-dimensional plots shown under each lens shading map. The differences are about 0.001% in the central area and 10% in the corners. We conclude that the exposure duration does not significantly affect the lens shading effect.
[[File:Diff.png|frame|center|The lens shading maps for different exposure durations and their corresponding differences to the base map.]]
Similarly, we may compute the lens shading map for each ISO speed and compare their differences to our base map. The image below visualizes their differences. We again conclude that the ISO speed does not significantly affect the lens shading effect.
[[File:All.png|frame|center|The differences for each lens shading map in comparison to our base map.]]
In summary, neither the exposure duration nor the ISO speed significantly changes the lens shading map in our experiment. We believe this experimental result is reasonable because exposure duration and ISO speed have nothing to do with the cause of lens shading, as introduced previously in the theory section. One way these factors could affect the lens shading effect is if the sensor saturates after a long exposure period. In this case, the digital values at the central and peripheral areas of the sensor would be identical, and the fitted map would become a flat surface. However, we do not observe any saturation in our data and are confident that our conclusion is valid for our experiment.
===== Standard Approach =====
According to our analysis, we may use a single lens shading map for all exposure durations and ISO speeds because there are no significant differences. As a result, we choose our base map as the final lens shading map and compare it to another standard modeling approach: the general approximation. As shown in the illustration below, the general approximation estimates the value of the lens shading map using a cosine function. When theta equals zero, i.e., on the central axis, the cosine equals one, which is identical to our lens shading map.
[[File:Approx.png|frame|400px|center|An illustration of the general approximation approach and its equation.]]
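In symbols, for a pixel at radial distance h from the optical axis and a lens of focal length d (4.38 mm for this camera, as used in cos4th.m in the Appendix), the general approximation is: relative illumination ≈ cos^4(theta) = (d / sqrt(d^2 + h^2))^4.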
As shown below, we plot our lens shading map (left) and the one obtained from general approximation (right) side-by-side.
[[File:Cos.png|frame|500px|center|The 3-dimensional plot of our lens shading map and the one obtained using general approximation.]]
To further visualize their difference, we compute their relative difference and plot it both 2- and 3-dimensionally, as shown below. Overall, we find that the difference is closer to zero in the central area and at the corners, which means the two maps are similar in these regions, except for one color channel that shows a more salient difference between the two.
[[File:Compare cos.png|frame|500px|center|The comparison between our map and the general approximation.]]
===== Application =====
Finally, we apply our map to remove the lens shading effect of the five images provided in this project. The images in the first row below are the original images without processing. The images in the second row have the lens shading effect removed. Comparing the two rows, we notice some artifacts at the corners. We hypothesize that these artifacts are due to the microlenses, which cause a second lens shading effect. Specifically, as the viewing angle increases, the light may fall outside the photosensitive area due to the microlens, causing our modeled lens shading map to be smaller than expected at the corners. When we later apply the map by multiplying the original images with its reciprocal, the corners thus end up with higher values than they should have. Further experiments are required to verify this hypothesis.
[[File:161310_raw.jpg]]
[[File:173503_raw.jpg]]
[[File:110402_raw.jpg]]
[[File:111511_raw.jpg]]
[[File:161055_raw.jpg]]
[[File:161310_lens.jpg]]
[[File:173503_lens.jpg]]
[[File:110402_lens.jpg]]
[[File:111511_lens.jpg]]
[[File:161055_lens.jpg]]
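The correction itself is a single element-wise division, condensed here from interesting.m in the Appendix (sensor comes from sensorDNGRead, and map is the lens shading map, whose peak value is one):
 % Divide the raw mosaic by the shading map: peripheries are scaled up.
 imageA = double(sensorGet(sensor, 'dv')) ./ map;
 sensor = sensorSet(sensor, 'dv', imageA);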
==='''B. Read Noise'''===
The results follow the expected trend with respect to gain: the read noise shows a near-linear increase with increasing gain (ISO speed) on the x-axis. Moreover, it is understandable that the noise estimates from the difference images are slightly higher than those from method 2, as difference images nearly double the noise power, and the standard sqrt(2) factor for uncorrelated images might not work as effectively because the images are not truly uncorrelated: they are burst images, so each pixel corresponds to the same physical readout circuitry. Hence, we will use the results obtained from flow 2 for the rest of the exercise.
[[File:finalresults_readnoise.png|thumb|1000px|center|Final read noise estimates evaluated on the Google Pixel 4a image sensor.]]
=== '''C. Dark Current ''' ===
The plot below consists of a .dng image (left), the intensity histogram of the grayscale image (middle), and the slope of the dark current image with respect to the different exposure times (right).
The pixel concentration at 64 in the histogram shows the black level.
[[File:DC output plot.png|900px|center]]
'''Dark Noise Result:'''
[[File:DC noise result.png|800px|center]]
'''Voltage conversion:'''
[[File:DC voltage conversion.png|400px|left]]
This is a 10-bit sensor, so the white level is 1023.
<br /> The sensor model was created from the dark image using the ISETCAM software, and the voltage swing obtained from the sensor is 0.459 V.
* The ADC reading is 0.0640 (the slope of the dark image vs. exposure time).
* The method used for the voltage conversion:
* Voltage measured = (0.0640) * (0.459) / 1023 = 2.87e-05 V = 28.7 microvolts
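The same conversion written as a small anonymous function, matching the numbers above (10-bit ADC, 0.459 V swing):
 dv_to_volts = @(dv) dv * 0.459 / 1023;  % digital values -> volts
 dv_to_volts(0.0640)                     % approx. 2.87e-05 V = 28.7 microvolts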
'''Examine the relationship with different channels (R,G,B):'''
Once the dark noise estimate was complete, we further examined the noise relationship with respect to the different channels (R, G, B).
<br /> Interestingly, the slope for R (0.053) is lower, that for G (0.071) is higher, and that for B (0.062) is close to the mean value.
<br /> We believe the variability in the slope can be quite high if the longest exposure duration is not very long.
[[File:DC Noise wrt RGB.png|center|1080px]]
=== '''D. DSNU''' ===
[[File:DSNU Result.png|750px|center]]
'''Relationship between noise estimates and gain/exposure duration:'''
<br /> The table below shows the slope and y-intercept results corresponding to the different ISO settings (55 to 798). The results show a linear relationship, i.e., the slope increases with respect to an increase in the gain value.
[[File: DC Noise & Gain Relationship.png|350px|center]]
=== '''E. PRNU ''' ===
'''PRNU by channel:''' The figure below compares the per-channel PRNU estimates with and without taking into account the residual illumination pattern (which is a function of the lens) on the image sensor. The data shows that, on average, the PRNU values estimated after locally analyzing 64x64 pixel patch clips are roughly 50% lower than those estimated with the residual lens shading effects present. This is a remarkable improvement in the accuracy of the PRNU evaluation. A direct inference from the graph below is that it is not a good idea to estimate PRNU without analyzing each channel separately. The illuminant has a spectral power distribution to which the color filters at the head of the sensor have a specific response, and this response differs between channels because the spectral power each channel sees is different. The results are reported as percentages, as defined in section E of the Methods.
[[File: prnu_results2.PNG|thumb|1000px|center|PRNU evaluation (expressed as percentage as described by formula in the previous section) at ISO 55 per channel of the sensor with the 1MP patch and '''one of the''' 64x64 tiles.]]
'''Workflow validation:''' To confirm that the PRNU in the 1 MP image was a result of the lens shading artefact, we plotted the PRNU values of each of the 64x64 patch clips and found all of them to be consistently around the same value. This gives us confidence that our methodology for evaluating pixel non-uniformities is pointing in the right direction. See the figure below:
[[File: prnu_results1.PNG|thumb|1000px|center|PRNU data (expressed as percentage as described by formula in the previous section) at ISO 55 for each tile of the 16 tiles of 64x64 px. in the central 1MP patch. Although the pixel responses at these tiles are different, the variation in the pixel responses (standard deviation) is nearly constant.]]
'''Variation with gain:''' The PRNU was evaluated for different values of gain. We observed a weakly increasing trend with higher ISO speed. Since we already posited that the right way to calculate PRNU is to analyze each channel individually, only the trends noted in the individual channels are considered valid for this analysis. The trend makes sense, as the pixel responses are expected to increase with higher gain, and when the pixel responses increase, their standard deviation also tends to increase slightly. Please see the figure below:
[[File: prnu_results3.PNG|thumb|1000px|center|Variation of PRNU across ISO speeds. Shows a weak increasing trend with increase in the value of gain, which is expected.]]
== Conclusions ==
In this project, we first model the lens shading effect by fitting a 2-dimensional polynomial to images of a uniform field captured with the Google Pixel 4a. We compare the lens shading effect under different exposure times and ISO speeds, and against one standard approximation approach. We then use our modeled map to remove the lens shading effect from several real-world images and from the data used for the sensor noise analysis.
Secondly, the dark current noise was calculated by measuring the slope of the dark images with respect to different exposure durations; the corresponding voltage value was obtained from the voltage swing of the sensor model and the slope results. We also examined the noise relationship with respect to the different channels (R, G, B), followed by slope calculations with different gain settings. For DSNU, the method used is similar to the dark current noise measurement, except that we corrected for the lens shading and used the y-intercept to calculate the result.
Finally, the pixel non-uniformities were estimated from the pixel responses of the camera. Since we couldn't completely correct the effect of lens shading, we adopted a smaller, tile-based approach to address the issue. The results show that, in general, the green pixels suffer from lower non-uniformities across the board for different ISO speeds. Moreover, each color filter triggers a unique pixel response, so it is not a good idea to pool the pixel responses of all channels to evaluate an aggregated PRNU. The right approach is to segment the PRNU estimation by the color filter array pattern.
== References ==
[1] J. E. Farrell, F. Xiao, P. Catrysse, and B. Wandell, "[http://scarlet.stanford.edu/~brian/papers/pdc/SPIE-2004-Simulator-5294-17.pdf A simulation tool for evaluating digital camera image quality]", Proc. SPIE, vol. 5294, pp. 124-131, Image Quality and System Performance (Miyake and Rasmussen, Eds.), January 2004.
<br /> [2] J. E. Farrell, P. B. Catrysse, and B. A. Wandell, "[http://scarlet.stanford.edu/~brian/papers/pdc/2012-DCS-OSA-Farrell.pdf Digital camera simulation]", Applied Optics, vol. 51, no. 4, pp. A80-A90, 2012.
<br /> [3] J. E. Farrell and B. A. Wandell, "[http://scarlet.stanford.edu/~brian/papers/pdc/2014-ISE-Farrell-Kriss.pdf Image Systems Simulation]", in Handbook of Digital Imaging (Kriss, Ed.), Chapter 8, 2015. ISBN: 978-0-470-51059-9.
<br /> [4] J. Farrell and B. Wandell, ISETCam, 2020. https://github.com/ISET/isetcam
<br /> [5] P. Kaaret, "Astronomical Laboratory ASTR:4850", Spring 2018.
<br /> [6] A. Boukhayma, A. Peizerat, and C. Enz, "Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors", Sensors, vol. 16, no. 4, p. 514, 2016. doi: 10.3390/s16040514
<br /> [7] "EMVA Standard 1288: Standard for Characterization of Image Sensors and Cameras", Release 3.0, November 2010. https://www.emva.org/wp-content/uploads/EMVA1288-3.0.pdf
== Appendix ==
=== A. Lens Shading ===
The code for the lens shading map includes five MATLAB files: main.m, getMap.m, analysis.m, cos4th.m, and interesting.m. To run these files, they should be on the same path as the folder cameranoise, which contains two subfolders: PRNU_DSNU and Interesting. While we have introduced the subfolder PRNU_DSNU above, the subfolder Interesting includes five real-world .dng images. The folder structure looks like this:
.
├── main.m              % model the lens shading map
├── getMap.m            % a function that helps model the lens shading map
├── analysis.m          % analyze maps of different exposure duration and ISO speed
├── cos4th.m            % compare maps between ours and the one using general approximation
├── interesting.m       % remove the lens shading of real-world images
└── cameranoise
    ├── PRNU_DSNU      % data folder, as introduced above
    └── Interesting    % contains five .dng real-world images
==== main.m ====
The script main.m is responsible for generating the 3-D plots of the raw digital values, the fitted maps, the coefficients of the 2-D polynomials, and the final lens shading map. To switch between these four outputs, set the variable 'task' to one of the following: 'Raw', 'Fit', 'Table', or 'Map'. You may keep only the exposure durations and ISO speeds of interest by removing elements from the arrays 'exposure' and 'isoSpeed'.
clc; clear; ieInit;
task = 'Map'; % task = 'Raw', 'Fit', 'Table', 'Map'
imageSize  = [3024,4032]; % 1.4 μm pixel width
burstNumber = 5;
exposure  = [1, 2, 3, 4, 5];
isoSpeed  = [55, 99, 198, 299, 395, 798];
tableR    = zeros(length(isoSpeed) * length(exposure), 8); % 8 columns: iso, exposure, six poly22 coefficients
tableG1   = zeros(length(isoSpeed) * length(exposure), 8);
tableG2   = zeros(length(isoSpeed) * length(exposure), 8);
tableB    = zeros(length(isoSpeed) * length(exposure), 8);
normalize = true;
% meshgrid
x = 1:imageSize(1)/2;
y = 1:imageSize(2)/2;
[mesh_x,mesh_y] = meshgrid(x,y);
mesh_x = mesh_x';
mesh_y = mesh_y';
for iso_index = 1:length(isoSpeed)
    iso = isoSpeed(iso_index);
    folder = ['cameranoise/PRNU_DSNU/ISO_' int2str(iso) '/'];
   
    if strcmp(task,'Raw') || strcmp(task,'Fit')
        figure('Name',[task ' Relative Illumination - isospeed ' int2str(iso)]);
    end
   
    for i = 0:4 % filenames end with i
        folderInfo  = dir([folder, '*', int2str(i), '.dng']);
        folderName  = {folderInfo.name};
        digitalValue = zeros(imageSize);
        for j = 1:5 % 5 burst
            fname = [folder, folderName{j}];
            [sensor, info] = sensorDNGRead(fname);
            digitalValue = digitalValue + double(sensorGet(sensor, 'dv'));
        end
        % extract digital values
        % B  G1
        % G2  R
        digitalValue = digitalValue ./ 5;
        G1 = digitalValue(1:2:end,2:2:end);
        G2 = digitalValue(2:2:end,1:2:end);
        R  = digitalValue(2:2:end,2:2:end);
        B  = digitalValue(1:2:end,1:2:end);
        % fitting poly22
        sfR  = fit([mesh_x(:) mesh_y(:)],  R(:), 'poly22');
        sfG1 = fit([mesh_x(:) mesh_y(:)], G1(:), 'poly22');
        sfG2 = fit([mesh_x(:) mesh_y(:)], G2(:), 'poly22');
        sfB  = fit([mesh_x(:) mesh_y(:)],  B(:), 'poly22');
        if strcmp(task,'Raw') 
           
            subplot(4,5,i+0*5+1);
            scatter3(mesh_x(:), mesh_y(:),  R(:), 1, 'r');
            title('Red');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            subplot(4,5,i+1*5+1);
            scatter3(mesh_x(:), mesh_y(:), G1(:), 1, 'g');
            title('Green 1');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
           
            subplot(4,5,i+2*5+1);
            scatter3(mesh_x(:), mesh_y(:), G2(:), 1, 'g');
            title('Green 2');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            subplot(4,5,i+3*5+1);
            scatter3(mesh_x(:), mesh_y(:),  B(:), 1, 'b');
            title('Blue');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
           
        elseif strcmp(task,'Fit')
           
            tmp = subplot(4,5,i+0*5+1);
            plot(sfR);
            colormap(tmp, autumn);
            title('Red');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            tmp = subplot(4,5,i+1*5+1);
            plot(sfG1);
            colormap(tmp, summer);
            title('Green 1');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            tmp = subplot(4,5,i+2*5+1);
            plot(sfG2);
            colormap(tmp, summer);
            title('Green 2');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
           
            tmp = subplot(4,5,i+3*5+1);
            plot(sfB);
            colormap(tmp, winter);
            title('Blue');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);           
           
        elseif strcmp(task,'Table')
            row = (iso_index - 1) * 5 + i + 1;
            tableR(row,1:8)  = [iso, exposure(i+1), sfR.p00, sfR.p10, sfR.p01, sfR.p20, sfR.p11, sfR.p02];
            tableB(row,1:8)  = [iso, exposure(i+1), sfB.p00, sfB.p10, sfB.p01, sfB.p20, sfB.p11, sfB.p02];
            tableG1(row,1:8) = [iso, exposure(i+1), sfG1.p00, sfG1.p10, sfG1.p01, sfG1.p20, sfG1.p11, sfG1.p02];
            tableG2(row,1:8) = [iso, exposure(i+1), sfG2.p00, sfG2.p10, sfG2.p01, sfG2.p20, sfG2.p11, sfG2.p02];           
        elseif strcmp(task,'Map')
            map = getMap(sfR, sfG1, sfG2, sfB, imageSize);
        else
            disp('error');
        end
    end
end
==== getMap.m ====
This is a helper function for main.m. Basically, it stacks the fitted color-wise maps together to generate the lens shading map.
function map = getMap(sfR, sfG1, sfG2, sfB, imageSize)
    % meshgrid
    x = 1:imageSize(1)/2;
    y = 1:imageSize(2)/2;
    [mesh_x,mesh_y] = meshgrid(x,y);
    mesh_x = mesh_x';
    mesh_y = mesh_y';
    % B  G1
    % G2  R
    map = zeros(imageSize);
    R  =  sfR(mesh_x(:),mesh_y(:));
    G1 = sfG1(mesh_x(:),mesh_y(:));
    G2 = sfG2(mesh_x(:),mesh_y(:));
    B  =  sfB(mesh_x(:),mesh_y(:));
    R  =  R ./ max(R, [],'all');
    G1 = G1 ./ max(G1,[],'all');
    G2 = G2 ./ max(G2,[],'all');
    B  =  B ./ max(B, [],'all');
    R  = reshape( R, imageSize./2);
    G1 = reshape(G1, imageSize./2);
    G2 = reshape(G2, imageSize./2);
    B  = reshape( B, imageSize./2);
    map(2:2:end,2:2:end) = R;
    map(1:2:end,2:2:end) = G1;
    map(2:2:end,1:2:end) = G2;
    map(1:2:end,1:2:end) = B;
end
==== analysis.m ====
To run this script, we will need to generate the map of every image burst first using main.m introduced above. The script will create a 2d visualization of the differences between each map and base map.
imageSize = [3024,4032]; % 1.4 μm pixel width
isoSpeed  = [55, 99, 198, 299, 395, 798];
Ni = length(isoSpeed);
Ne = 5;
mapBase = load('map_55_1.mat');
figure('Name','Relative Illumination - Analysis');
for i = 1:Ni
    for e = 1:Ne
        mapCompare = load(['map_' int2str(isoSpeed(i)) '_' int2str(e) '.mat']);
        subplot(Ni, Ne, (i-1)*Ne+e);
        imagesc((mapCompare.map - mapBase.map) ./ mapBase.map);
        title(['ISO ' int2str(isoSpeed(i)) ' EXP ' int2str(e)]);
        ax2 = gca;
        caxis(ax2,[-1 1]);
        colorbar;
    end   
end
==== cos4th.m ====
This script will create 3d visualizations of our base map and the one obtained with general approximation. It will also compute their relative difference and plot it in 2d.
data      = load('map_55_1.mat');
mapBase  = data.map;
imageSize = [3024,4032];
unit      = 1.4 * 1e-6; % 1.4 μm pixel width
center_x  = imageSize(1)/2;
center_y  = imageSize(2)/2;
% meshgrid
xx = 1:imageSize(1);
yy = 1:imageSize(2);
[mesh_xx,mesh_yy] = meshgrid(xx,yy);
mesh_xx = mesh_xx';
mesh_yy = mesh_yy';
d = 4.38 * 1e-3; %focal length: 4.38 mm
h = sqrt( (mesh_xx - center_x).^2 + (mesh_yy - center_y).^2 ) .* unit;
mapCompare = ( d ./ sqrt(d^2 + h.^2) ).^4;
figure('Name', 'mapCompare');
scatter3(mesh_xx(:), mesh_yy(:), mapCompare(:), 1, mapCompare(:));
figure('Name', 'mapBase');
scatter3(mesh_xx(:), mesh_yy(:), mapBase(:)  , 1, mapBase(:));
figure('Name', 'diff');
imagesc((mapCompare - mapBase)./mapBase);
ax2 = gca;
caxis(ax2,[-1 1]);
colorbar;
==== interesting.m ====
This script can visualize the image before and after the removal of lens shading using the provided map. We may load different maps using the 'load' function. We may also specify different images by changing the variable 'fname'. Set the variable 'remove' as true to remove lens shading. For some images, the rotation of the Bayer pattern may be different from the provided map. We may check its rotation using the function 'sensorWindow(sensor)'.
load('map.mat');
remove = false;
rotate = false;
fname = 'cameranoise/Interesting/IMG_20201027_173503.dng';
[sensor, info] = sensorDNGRead(fname);
if remove
    imageA = sensorGet(sensor, 'dv');
    imageA = double(imageA);
    if rotate
        imageA = rot90(imageA,3); % counterclock
    end
    imageA = imageA ./ map;
    if rotate
        imageA = rot90(imageA,1);
    end
    sensor = sensorSet(sensor, 'dv', imageA);
end
ip = ipCreate;
ip = ipSet(ip, 'render demosaic only', true);
ip = ipCompute(ip, sensor);
ipWindow(ip);
=== B. Read Noise===
The code below implements the read noise estimation with flow 2, i.e., evaluating the differences from the mean (after feedback, we decided to retain only flow 2 for the final report write-up, as it is the better method for calculating read noise). Please note that the code was written on Windows, so reading the image files could fail on a Mac because the directory paths use backslashes instead of the forward slashes that macOS expects.
% This section reads in the images and stores the digital values
clc; clear; ieInit;
% macro
imageType = 'Read_Noise'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
readNoiseFolder = 'C:\Users\shwet\Downloads\ReadNoise';    % 55, 99, 198, 299, 395, 798
outfilePath = 'D:\documents_bkup\C\Course\psych221\project_data\reset_noise_solutions';
if (strcmp(imageType,'PRNU_DSNU'))
    imageSize   = [3024,4032];
    burstNumber = 5;
    digitalValue = zeros(imageSize);
elseif (strcmp(imageType,'DarkCurrentRate'))
    imageSize   = [3024,4032];
    burstNumber = 5;
    digitalValue = zeros(imageSize);
elseif (strcmp(imageType,'Read_Noise'))
    imageSize   = [3024,4032,6,6]; % rows x cols x ISO settings x frames
    burstNumber = 5;
    digitalValue = zeros(imageSize);
else
    imageSize = 'others';
    burstNumber = -1;
end
dirS = dir(readNoiseFolder);
isoVal = zeros(6,1);
k = 1;
for j = 1:length(dirS)
    if contains(dirS(j).name, '.')
        % skip '.', '..' and any plain files
    else
        inpFolder = strcat(readNoiseFolder, '\', dirS(j).name);
        files = dir( fullfile(inpFolder,'*.dng') );
        temp = split(inpFolder, '_');
        isoVal(k) = str2num(temp{2});
        for i = 1:length(files)
            cd(inpFolder);
            fname = files(i).name;
            [sensor, info] = sensorDNGRead(fname);
            digitalValue(:,:,k,i) = digitalValue(:,:,k,i) + double(sensorGet(sensor, 'dv'));
        end
        k = k + 1;
    end
end
% Special code to synthesize the 6th image for ISO = 99, because only 5
% images are available there; it is the average of the first 5 images.
for mc = 1:5
    digitalValue(:,:,6,6) = digitalValue(:,:,6,6) + digitalValue(:,:,6,mc);
end
digitalValue(:,:,6,6) = digitalValue(:,:,6,6)./5;
read_noise_dv = zeros(6,5);
read_noise_v  = zeros(6,5);
%% This section computes the read noise, using the functions defined below
average_px_value_per_ISO = mean(digitalValue(:,:,:,1:5), 4);
for k = 1:6
    for j = 1:5
        Im1 = reshape(average_px_value_per_ISO(:,:,k), [3024 4032]);
        Im2 = reshape(digitalValue(:,:,k,j), [3024 4032]);
        read_noise_dv(k,j) = estimate_read_noise(Im1, Im2);
        read_noise_v(k,j)  = dv_to_v(read_noise_dv(k,j), 10, 0.4591);
    end
end
read_noise_per_ISO = mean(read_noise_v, 2);
lgd = strcat('ISO=', num2str(isoVal));
T = table([lgd], [read_noise_per_ISO]);
cd(outfilePath);
writetable(T, 'read_noise_data2.xls');
%% This section plots all the outputs
f = figure();
set(gcf, 'position', [400, 400, 850, 400])
cls = ['b','g','r','c','m','y'];
for k = 1:6
    scatter(1:5, read_noise_dv(k,:), 18, cls(k), 'filled');
    hold on;
end
legend(lgd);
legend('Location','northeastoutside');
ylim([0,3])
xlim([0,6])
xlabel('Experiment # ');
ylabel('Read noise measured in gray levels')
title('Read noise (in gray levels) over repeated measurements at zero illumination and zero exposure time')
disp('output file saved at:')
disp(outfilePath)
disp('Readout noise values in the output file are specified in volts');
%% This section contains the functions developed to perform the read noise analysis.
% 1. Read noise estimation: standard deviation of the difference between
%    two frames (here, an individual frame and the burst mean).
function rn = estimate_read_noise(im1, im2)
    im_df = im1 - im2;
    imsize = size(im_df);
    im_df_rs = reshape(im_df, [imsize(1)*imsize(2),1]);
    rn = std(im_df_rs);
end
% 2. Voltage conversion from digital values, given the ADC bit depth and
%    the sensor's voltage swing.
function op_volts = dv_to_v(gray_lev, quantization, voltage_swing)
    n_gls = 2^quantization;
    volt_per_gl = voltage_swing/n_gls;
    op_volts = gray_lev*volt_per_gl;
end
=== C. Dark Current Noise  ===
The code used for the dark current noise measurement is shown below. The .dng files listed under the DarkCurrentRate folder were used to compute the measurements.
==== Dark Noise Measurement.m ====
imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
folder = ['Phani_Project/' imageType '/' isoFolder '/'];
strcmp(imageType,'DarkCurrentRate');
imageSize  = [3024,4032];
burstNumber = 5;
x = [];
y = [];
figure('Name','DarkNoise');
for i = 0:burstNumber-1 % filenames end with i
    folderInfo  = dir([folder, '*', int2str(i), '.dng']);
    folderName  = {folderInfo.name}; 
    for j = 1:5
        fname = [folder, folderName{j}];
        [sensor, info] = sensorDNGRead(fname);
        img = ieDNGRead(fname);
        E = info.ExposureTime;
        y = [y; mean2(img)];
        x = [x;E];
    end
end
img = ieDNGRead(fname);
[rows, columns, numberOfColorBands] = size(img);
subplot(2, 3, 1);
imshow(img, []);
FontSize = 14;
title('img', 'FontSize', FontSize);
%Set up figure properties:
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
% Let's compute and display the histogram.
img = ieDNGRead(fname);
subplot(2, 3, 2);
histogram(img);
xlim([0, 255]);
grid on;
title('Intensity Histogram of Gray Scale Image', 'FontSize', FontSize, 'Interpreter', 'None');
xlabel('Gray Level', 'FontSize', FontSize);
ylabel('Pixel Count', 'FontSize', FontSize);
% Do the fit to a line:
coeff = polyfit(x,y,1)
fitted_y = polyval(coeff, x);
subplot(2, 3, 3);
plot(x, y, 'bd', 'MarkerSize', 10);
hold on;
plot(x, fitted_y, 'r-', 'LineWidth', 3);
grid on;
title('Dark current plot', 'FontSize', FontSize);
xlabel('Exposure time ', 'FontSize', FontSize);
ylabel('Dark current image ', 'FontSize', FontSize);
message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
    coeff(1), coeff(2));
uiwait(msgbox(message));
%code for voltage conversion
sensor = sensorDNGRead(fname);
sensorWindow(sensor);
voltageswing = 0.459; % Measured from ('IMX363') sensor
% This is a 10-bit sensor so the whitelevel is 1023. ISOSpeed 55 corresponds to analoggain = 1
Resolution = 1023;
Voltagemeasured = (coeff(1)*voltageswing)/Resolution;
==== Noise relationship w.r.t RGB Channels.m ====
We modified the above code to compare the values when using data from the different color channels (R, G, B).
    ieInit; clear; close all;
    imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
    isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
    folder = ['Phani_Project/' imageType '/' isoFolder '/'];
    strcmp(imageType,'DarkCurrentRate');
    imageSize  = [3024,4032];
    burstNumber = 5;
    x = [];
    y = [];
    y_r = [];
    y_g = [];
    y_b = [];
    figure('Name','DarkNoise');
    for i = 0:burstNumber-1 % filenames end with i
        folderInfo  = dir([folder, '*', int2str(i), '.dng']);
        folderName  = {folderInfo.name};
        for j = 1:5
            fname = [folder, folderName{j}];
            [sensor, info] = sensorDNGRead(fname);
            img = ieDNGRead(fname);
            E = info.ExposureTime;
            y = [y; mean2(img)];
            x = [x;E];
            % similar codes for r,g,b components:
            [R, G, B] = split_channels(img);
            y_r = [y_r; mean2(R)];
            y_g = [y_g; mean2(G)];
            y_b = [y_b; mean2(B)];
        end   
    end
    img = ieDNGRead(fname);
    [rows, columns, numberOfColorBands] = size(img);
    subplot(2, 3, 1);
    imshow(img, []);
    FontSize = 14;
    % Enlarge figure to full screen.
    set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
    % Do the fit to a line:
    coeff = evaluate_dark_current(x, y, 'RGB');
    coeff_r = evaluate_dark_current(x, y_r, 'R');
    coeff_g = evaluate_dark_current(x, y_g, 'G');
    coeff_b = evaluate_dark_current(x, y_b, 'B');
    % Functions for splitting image into its constituent channels
    function [R, G, B] = split_channels(img)
        G1 = img(1:2:end,2:2:end);
        G2 = img(2:2:end,1:2:end);
        R  = img(2:2:end,2:2:end);
        B  = img(1:2:end,1:2:end);
        G  = (G1 + G2) ./ 2;
    end
    function coeff = evaluate_dark_current(x, y, channel)
        coeff = polyfit(x,y,1);
        fitted_y = polyval(coeff, x);
        f = figure();
        plot(x, y, 'bd', 'MarkerSize', 10);
        hold on;
        plot(x, fitted_y, 'r-', 'LineWidth', 3);
        grid on;
        title(strcat('Fitted Y, Channel = ', channel));
        %ylabel('Fitted Y', 'FontSize', fontSize);
        message = sprintf('Channel = %s\n The slope = %.3f\nThe intercept is %.3f',...
        channel, coeff(1), coeff(2));
        uiwait(msgbox(message));
    end
=== D. DSNU estimation  ===
clc; clear; ieInit;
    imageType = 'PRNU_DSNU'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
    isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
    folder = ['Phani_Project/' imageType '/' isoFolder '/'];
    load('map.mat');
    strcmp(imageType,'PRNU_DSNU');
    imageSize  = [3024,4032];
    burstNumber = 5;
    x = [];
    y = [];
    figure('Name','DSNU');
    for i = 0:burstNumber-1 % filenames end with i
        folderInfo  = dir([folder, '*', int2str(i), '.dng']);
        folderName  = {folderInfo.name};
        for j = 1:5
            fname = [folder, folderName{j}];
            [sensor, info] = sensorDNGRead(fname);
            %sensor = sensorSet(sensor, 'exp time', 5);
            img = ieDNGRead(fname);
            img = im2double(img);
            img = img./map;
            E = info.ExposureTime;
            y = [y; std2(img)];
            x = [x;E];
            %sensorWindow(sensor);  % ISETCAM window
            %imagesc(digitalValue); % visualize
        end   
    end
        img = ieDNGRead(fname);
        [rows, columns, numberOfColorBands] = size(img);
        subplot(2, 3, 1);
        imshow(img, []);
        FontSize = 14;
        title('img', 'FontSize', FontSize);
        %Set up figure properties:
        % Enlarge figure to full screen.
        set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
        % Do the fit to a line:
        coeff = polyfit(x,y,1)
        fitted_y = polyval(coeff, x);
        subplot(2, 3, 3);
        plot(x, y, 'bd', 'MarkerSize', 10);
        hold on;
        plot(x, fitted_y, 'r-', 'LineWidth', 3);
        grid on;
        title('Dark current plot', 'FontSize', FontSize);
        xlabel('Exposure time ', 'FontSize', FontSize);
        ylabel('Dark current image ', 'FontSize', FontSize);
        message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
        coeff(1), coeff(2));
        uiwait(msgbox(message));
    %%Extra code for voltage conversion
    sensor = sensorDNGRead(fname);
    sensorWindow(sensor);
    voltageswing = 0.459; % Measured from ('IMX363') sensor
    % This is a 10-bit sensor so the whitelevel is 1023. ISOSpeed 55 corresponds to analoggain = 1
    Resolution = 1023;
    coeff(2) = 0.0001;
    Voltagemeasured = (coeff(2)*voltageswing)/Resolution;
==== Noise relationship w.r.t gain/exposure duration ====
The relationship between the noise estimates and the main camera parameter, the gain, was studied; the slope result increased with respect to increased gain.
    clc; clear; ieInit;
    imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
    isoFolder1 = {'ISO_55','ISO_99','ISO_198','ISO_299','ISO_395','ISO_798'};    % 55, 99, 198, 299, 395, 798
    for k = 1:length(isoFolder1)
        isoFolder = isoFolder1{k};
        folder = ['Phani_Project/' imageType '/' isoFolder '/'];
        assert(strcmp(imageType,'DarkCurrentRate')); % sanity check on the expected image type
        imageSize  = [3024,4032];
        burstNumber = 5;
        x = [];
        y = [];
        figure('Name','DarkCurrentRate');
        for i = 0:burstNumber-1 % filenames end with i
            folderInfo  = dir([folder, '*', int2str(i), '.dng']);
            folderName  = {folderInfo.name};
            for j = 1:1 % only the first image of each burst is used here
                fname = [folder, folderName{j}];
                [sensor, info] = sensorDNGRead(fname);
                img = ieDNGRead(fname);
                E = info.ExposureTime;
                y = [y; mean2(img)];
                x = [x;E];
                %sensorWindow(sensor);  % ISETCAM window
                %imagesc(digitalValue); % visualize
            end
        end
        FontSize = 14;
        %Set up figure properties:
        % Enlarge figure to full screen.
        set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
        % Do the fit to a line:
        coeff = polyfit(x,y,1); % slope = dark current rate, intercept = offset
        fitted_y = polyval(coeff, x);
    %    subplot(2, 3, 3);
        plot(x, y, 'bd', 'MarkerSize', 10);
        hold on;
        plot(x, fitted_y, 'r-', 'LineWidth', 3);
        grid on;
        title('Dark current plot', 'FontSize', FontSize);
        xlabel('Exposure time ', 'FontSize', FontSize);
        ylabel('Dark current image ', 'FontSize', FontSize);
        message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
            coeff(1), coeff(2));
        uiwait(msgbox(message));
    end
=== E. PRNU/DSNU Measurement using pixel response ===
The code for PRNU measurement is as follows. Please note that this code was written on Windows, so the file-reading portion could fail on a Mac. This code extracts the PRNU and DSNU for each ISO speed; change the isoFolder variable in the 4th line of code to run it with a different ISO speed. Once the R, G, B and all-channel PRNU are obtained, the data is written to files. The key files are listed on the MATLAB console after the code is run. The way to interpret the numbers in the output files is described after the code.
  %11-13
  %% Initialize: Define folder paths, initialize iSETCam
 
  ieInit; clear; close all;
  load('D:\documents_bkup\C\Course\psych221\project_data\map.mat');
  imageType = 'PRNU_DSNU'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
  isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
  folder = ['D:\documents_bkup\C\Course\psych221\project_data\' imageType '\' isoFolder '\'];
  out_path = 'D:\documents_bkup\C\Course\psych221\project_data\';
 
  imageSize  = [5,3024,4032];        % Define 3-D image size which stores 5 images, each of 3024x4032 pixels
  digitalValueRaw = zeros(imageSize);    % Create zero 3-D array
  digitalValueMean = zeros(imageSize);    % Create zero 3-D array
  burstNumber = 5;
 
  x = [];
  y = [];
 
  y_r = [];
  y_g = [];
  y_b = [];
 
  %% Read data from different exposures
 
  for i = 0:4        % i runs from 0 through 4, each value signifying a different exposure setting
      folderInfo  = dir([folder, '*', int2str(i), '.dng']);
      folderName  = {folderInfo.name};
     
      for j = 1:5    % j runs from 1 to 5, each value signifying repeat number. Total 5 repeats (image is grabbed 5 times with same exposure setting and ISO speed)
          fname = [folder, folderName{j}];   
          cd(folder);                       
          [sensor, info] = sensorDNGRead(folderName{j});  % Read .dng file
          img = ieDNGRead(fname); % Read image
          img2 = double(img);      % Convert to double before lens correction
          img3 = img2./map;        % Lens shading correction
          E = info.ExposureTime;      % Read exposure time
          digitalValueRaw(j,:,:) = img3;  % Store 5 repeatedly grabbed images with same exposure time settings
         
      end
     
      x = [x;E];
      digitalValueMean(i+1,:,:) = mean(digitalValueRaw);  % Average the repeated image captures and arrive at one image per exposure setting
     
  end
 
  imageSize2 = [5,968,1318];
  digitalValueMeanCropped = zeros(imageSize2);
 
  for k = 1:1:5
     
      Im = reshape(digitalValueMean(k,:,:), [3024 4032]);
      Im_crop = imcrop(Im,[917 1505 1317 967]);
      digitalValueMeanCropped(k,:,:) = Im_crop;
     
  end
 
  %% Evaluating PRNU, DSNU in this section
 
  % Evaluate PRNU and DSNU for entire cropped image
  tileSize = [968 1318];
  [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b]  = evaluate_prnu_dsnu(x, digitalValueMeanCropped, tileSize);
 
 
  % Divide the cropped image into 16 patches at different locations.
  % 4D array will now contain: 16x5x64x64
  %      where 16: number of patches
  %      5: representing different exposure settings
  %      64x64: representing the patch size
 
 
  patches_4d_dim = [16,5,64,64];
  patchDigitalValues = zeros(patches_4d_dim);
 
  patchSize = [64 64];
  numPatches = 16;
 
  r_coord = [1, floor(tileSize(1)*0.2), floor(tileSize(1)*0.4), floor(tileSize(1)*0.6) floor(tileSize(1)*0.8)];
  c_coord = [1, floor(tileSize(2)*0.2), floor(tileSize(2)*0.4), floor(tileSize(2)*0.6) floor(tileSize(2)*0.8)];
 
  l=1;
  for k=1:1:5
      for j=1:1:4
          for i = 1:1:4
             
              % Force odd start coordinates so each patch stays aligned
              % to the Bayer pattern (R/G/B positions preserved)
              if mod(r_coord(i),2)==0
                  r_coord(i) = r_coord(i)+1;
              end
 
              if mod(c_coord(i),2)==0
                  c_coord(i) = c_coord(i)+1;
              end
             
              Im_orig = reshape(digitalValueMeanCropped(k,:,:), tileSize);
              patchDigitalValues(l,k,:,:) = imcrop(Im_orig, [r_coord(i) c_coord(j) 63 63]);
              l = l+1;           
             
          end
      end
     
      l = 1;
 
  end
 
 
  % Now that the cropped image is divided into 16 patches, PRNU  and DSNU
  % measurements are calculated for each patch to understand if there's a
  % spatial effect that's contributing to high PRNU / DSNU.
 
  data_out_dim = [16, 64, 64];
  channel_out_dim = [16, 32, 32];
 
  % initialize output variables
 
  p_slopes = zeros(data_out_dim);   
  p_intercepts = zeros(data_out_dim);
  p_slopes_r = zeros(channel_out_dim);
  p_slopes_g = zeros(channel_out_dim);
  p_slopes_b = zeros(channel_out_dim);
  p_intercepts_r = zeros(channel_out_dim);
  p_intercepts_g = zeros(channel_out_dim);
  p_intercepts_b = zeros(channel_out_dim);
 
 
  % compute pixel response for all the 16 patches
 
  for l = 1:1:16
     
      [p_slopes(l,:,:), p_intercepts(l,:,:), p_slopes_r(l,:,:), p_slopes_g(l,:,:), p_slopes_b(l,:,:), p_intercepts_r(l,:,:), p_intercepts_g(l,:,:), p_intercepts_b(l,:,:)] = evaluate_prnu_dsnu(x, reshape(patchDigitalValues(l,:,:,:), [5 64 64]), patchSize);
     
  end
 
 
  %% Output all pixel response data obtained to files
 
  cd(out_path)
  mkdir solutions
  cd solutions;
  mkdir(isoFolder);
  cd(isoFolder);
  writematrix(slopes, 'px_resp_all_px.csv');
  writematrix(intercepts, 'darkv_all_px.csv');
  writematrix(slopes_r, 'px_resp_r.csv');
  writematrix(slopes_g, 'px_resp_g.csv');
  writematrix(slopes_b, 'px_resp_b.csv');
  writematrix(intercepts_r, 'darkv_r.csv');
  writematrix(intercepts_g, 'darkv_g.csv');
  writematrix(intercepts_b, 'darkv_b.csv');
 
 
  for l = 1:1:16
     
      suffix = strcat('patch_', num2str(l));
     
      writematrix(reshape(p_slopes(l,:,:), [64 64]), strcat(suffix,'px_resp_all_px.csv'));
      writematrix(reshape(p_intercepts(l,:,:), [64 64]), strcat(suffix,'darkv_all_px.csv'));
      writematrix(reshape(p_slopes_r(l,:,:), [32 32]), strcat(suffix,'px_resp_r.csv'));
      writematrix(reshape(p_slopes_g(l,:,:), [32 32]), strcat(suffix,'px_resp_g.csv'));
      writematrix(reshape(p_slopes_b(l,:,:), [32 32]), strcat(suffix,'px_resp_b.csv'));
      writematrix(reshape(p_intercepts_r(l,:,:), [32 32]), strcat(suffix,'darkv_r.csv'));
      writematrix(reshape(p_intercepts_g(l,:,:), [32 32]), strcat(suffix,'darkv_g.csv'));
      writematrix(reshape(p_intercepts_b(l,:,:), [32 32]), strcat(suffix,'darkv_b.csv'));
 
  end
 
 
  % Calculate PRNU from the pixel responses for the cropped image [968x1318 pixels]
  prnu_all_px = std2(slopes); prnu_r = std2(slopes_r);
  prnu_g = std2(slopes_g); prnu_b = std2(slopes_b);
 
  px_resp_mean = mean2(slopes); px_resp_mean_r = mean2(slopes_r);
  px_resp_mean_g = mean2(slopes_g); px_resp_mean_b = mean2(slopes_b);
 
  PRNU_final_cropped_image = [prnu_all_px px_resp_mean;
      prnu_r px_resp_mean_r;
      prnu_g px_resp_mean_g;
      prnu_b px_resp_mean_b];
 
 
  PRNU_1288_cropped_image = 100*(PRNU_final_cropped_image(:,1)./PRNU_final_cropped_image(:,2)) ;
 
  % Calculate DSNU from the pixel responses for the cropped image [968x1318 pixels]
  dsnu_all_px = std2(intercepts); dsnu_r = std2(intercepts_r);
  dsnu_g = std2(intercepts_g); dsnu_b = std2(intercepts_b);
 
  darkv_mean = mean2(intercepts); darkv_mean_r = mean2(intercepts_r);
  darkv_mean_g = mean2(intercepts_g); darkv_mean_b = mean2(intercepts_b);
 
  DSNU_final_cropped_image = [dsnu_all_px darkv_mean;
      dsnu_r darkv_mean_r;
      dsnu_g darkv_mean_g;
      dsnu_b darkv_mean_b];
 
 
  % Output the PRNU and DSNU values for the cropped image [968x1318 pixels]
  writematrix(PRNU_final_cropped_image, 'PRNU_final_cropped_image.csv');
  writematrix(DSNU_final_cropped_image, 'DSNU_final_cropped_image.csv');
 
  % Calculate PRNU from the pixel responses for the 16 patches [64x64 px]
 
  p_prnu_all_px = zeros(16,1); p_prnu_r = zeros(16,1);
  p_prnu_g = zeros(16,1); p_prnu_b = zeros(16,1);
 
  for l = 1:1:16
      p_prnu_all_px(l) = std2(reshape(p_slopes(l,:,:), [64 64]));
      p_prnu_r(l) = std2(reshape(p_slopes_r(l,:,:), [32 32]));
      p_prnu_g(l) = std2(reshape(p_slopes_g(l,:,:), [32 32]));
      p_prnu_b(l) = std2(reshape(p_slopes_b(l,:,:), [32 32]));
  end
 
  PRNU_final_patches = [p_prnu_all_px p_prnu_r p_prnu_g p_prnu_b];
 
 
  p_pxr_all_px = zeros(16,1); p_pxr_r = zeros(16,1);
  p_pxr_g = zeros(16,1); p_pxr_b = zeros(16,1);
 
  for l = 1:1:16
      p_pxr_all_px(l) = mean2(reshape(p_slopes(l,:,:), [64 64]));
      p_pxr_r(l) = mean2(reshape(p_slopes_r(l,:,:), [32 32]));
      p_pxr_g(l) = mean2(reshape(p_slopes_g(l,:,:), [32 32]));
      p_pxr_b(l) = mean2(reshape(p_slopes_b(l,:,:), [32 32]));
  end
 
  PX_Response_final_patches = [p_pxr_all_px p_pxr_r p_pxr_g p_pxr_b];
 
  PRNU1288_patches = 100*(PRNU_final_patches./PX_Response_final_patches);
 
  % Calculate DSNU from the pixel responses for the 16 patches [64x64 px]
  p_dsnu_all_px = zeros(16,1); p_dsnu_r = zeros(16,1);
  p_dsnu_g = zeros(16,1); p_dsnu_b = zeros(16,1);
 
  for l = 1:1:16
      p_dsnu_all_px(l) = std2(reshape(p_intercepts(l,:,:), [64 64]));
      p_dsnu_r(l) = std2(reshape(p_intercepts_r(l,:,:), [32 32]));
      p_dsnu_g(l) = std2(reshape(p_intercepts_g(l,:,:), [32 32]));
      p_dsnu_b(l) = std2(reshape(p_intercepts_b(l,:,:), [32 32]));
  end
 
  DSNU_final_patches = [p_dsnu_all_px p_dsnu_r p_dsnu_g p_dsnu_b];
 
 
  p_dv_all_px = zeros(16,1); p_dv_r = zeros(16,1);
  p_dv_g = zeros(16,1); p_dv_b = zeros(16,1);
 
  for l = 1:1:16
      p_dv_all_px(l) = mean2(reshape(p_intercepts(l,:,:), [64 64]));
      p_dv_r(l) = mean2(reshape(p_intercepts_r(l,:,:), [32 32]));
      p_dv_g(l) = mean2(reshape(p_intercepts_g(l,:,:), [32 32]));
      p_dv_b(l) = mean2(reshape(p_intercepts_b(l,:,:), [32 32]));
  end
 
  DarkV_final_patches = [p_dv_all_px p_dv_r p_dv_g p_dv_b];
 
 
  % Output PRNU and DSNU values for the 16 64x64 patch images
  writematrix(PRNU_final_patches, 'PRNU_final_16patches.csv');
  writematrix(DSNU_final_patches, 'DSNU_final_16patches.csv');
 
  writematrix(DarkV_final_patches, 'DarkV_final_patches.csv');
 
 
  writematrix(PRNU1288_patches, 'PRNU1288_patches.csv');
  writematrix(PRNU_1288_cropped_image, 'PRNU_1288_cropped_image.csv'); 
  writematrix(PX_Response_final_patches, 'PX_Response_final_patches.csv');
 
  disp('Please look for the output data at specified output folder in the first line of code --> solutions --> ISO setting folder')
  disp('--------------------')
  disp('Lots of raw data are stored for troubleshooting/other analysis...')
  disp('Key files for the 64x64 clip approach are : PRNU_final_16patches.csv and PRNU1288_patches.csv')
  disp('--------------------')
  disp('Key files for the centrally cropped analysis: PRNU_final_cropped_image.csv and PRNU_1288_cropped_image.csv')
  disp('--------------------')
  disp('The PRNU1288 contain standard deviation divided by mean pixel response across the pixels and is expressed as a percentage')
  disp('The PRNU_final* output data contain standard deviation in gray levels of the pixel response slope')
  disp('The Pixel Response (mean slope) for the sixteen 64x64 patch clips is saved at PX_response_final_patches.csv')
  disp('Please send email to vangal87@stanford.edu in case of any issues with the code. Thanks! ')
 
 
  %% Useful functions for splitting image into its constituent channels
 
  function [R, G, B] = split_channels(img)
 
      G1 = img(1:2:end,2:2:end);
      G2 = img(2:2:end,1:2:end);
      R  = img(2:2:end,2:2:end);
      B  = img(1:2:end,1:2:end);
      G  = (G1 + G2) ./ 2;
 
  end
 
 
  function coeff = evaluate_dark_current(x, y, channel)
 
      coeff = polyfit(x,y,1);
      fitted_y = polyval(coeff, x);
      f = figure();
      plot(x, y, 'bd', 'MarkerSize', 10);
      hold on;
      plot(x, fitted_y, 'r-', 'LineWidth', 3);
      grid on;
      title(['Fitted Y, Channel = ', channel]); % bracket concatenation preserves the trailing space
      %ylabel('Fitted Y', 'FontSize', fontSize);
      message = sprintf('Channel = %s\n The slope = %.3f\nThe intercept is %.3f',...
      channel, coeff(1), coeff(2));
      uiwait(msgbox(message));
 
  end
 
 
  function [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b]  = evaluate_prnu_dsnu(x, Im_stack_cropped, tileSize)
 
      rows = tileSize(1);
      cols = tileSize(2);
     
      out_3d_dim = [2, rows, cols];
     
      final_coeffs = zeros(out_3d_dim);
 
      tic
      for i=1:1:rows
          for j = 1:1:cols
              y = Im_stack_cropped(:,i,j);
              coeff = polyfit(x,y,1);
              final_coeffs(1,i,j) = coeff(1);
              final_coeffs(2,i,j) = coeff(2);
          end
      end
      toc
      t = toc;
 
      slopes = reshape(final_coeffs(1,:,:), tileSize);
      intercepts = reshape(final_coeffs(2,:,:), tileSize);
      [slopes_r, slopes_g, slopes_b] = split_channels(slopes);
      [intercepts_r, intercepts_g, intercepts_b] = split_channels(intercepts);
 
  end
The PRNU1288_patches.csv file will contain data in the following format, as an example. The first column contains PRNU1288, calculated as a percentage over all the pixels of the 64x64 clip. The second, third, and fourth columns are the PRNU measurements for the individual channels (R, G, and B respectively). The 16 rows correspond to the 16 patches. To report a single value per color channel, the median across the 16 patches is chosen (see the sketch after the figure below).
[[File: Image_prnu_results.PNG|thumb|1000px|center|PRNU evaluation (expressed as percentage as described by formula in the previous section) at ISO 55 per color filter channel using the multiple patches of 64x64 px. approach]]
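A minimal MATLAB sketch of this median reduction, assuming the PRNU1288_patches matrix produced by the code above (one row per patch; columns for all pixels, R, G, and B):
  % Median across the 16 patch rows gives one summary PRNU1288 value per
  % column (all pixels, R, G, B); dimension 1 is the patch index.
  prnu_summary = median(PRNU1288_patches, 1);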
=== F. Final presentation (with corrections) as presented on 18-Nov-2020 ===
The final presentation can be found at this link: https://drive.google.com/file/d/1tL_vB67qgA9THTCS8tp6gK0AWrjP6uvH/view
== Work Breakdown ==
* ''Tzu-Sheng Kuo'' – Lens Shading
* ''Phani Kondapani'' – Dark Noise and DSNU estimation
* ''Vangal Aravindh'' – Read Noise and PRNU estimation


A. Camera Options and Impact

1. Exposure time: The exposure time is the time for which the photodiode is allowed to integrate photons. It follows from the diagram that the charge accumulated at the sense node before readout is a direct function of the exposure time. All electrons accumulated in the photodiode, irrespective of the generating process, scale with the exposure time.

2. ISO speed: The ISO speed is an option available to the user for capturing images of objects in motion at high speeds. At a higher speed setting, the read column shutter switches faster; it follows that there must be circuitry to compensate for the resulting loss of signal, and this is exactly what the gain stage in the analog circuit provides.

An illustration of the different components of the sensor and their resultant noise contributions. [6]

B. Electron generation phenomenon

1. Photoelectric voltage: Photons are converted to electrons in the photodiode via the photoelectric effect. The voltage accumulated by the photodiode this way is the photoelectric voltage, which is the signal we would ideally like to see in our image. Importantly, the photon arrivals follow a Poisson distribution, as is fundamental to nature. This statistical variation is captured in the shot noise term, which is therefore a function of the Poisson distribution and, since the circuit multiplies everything by the gain factor g, also a function of gain.

2. Dark voltage or thermal voltage: Electrons are also generated in the photodiode by heating in the pixel. This is referred to as thermal voltage or dark voltage. Note that even in the absence of any photons or illumination, these electrons are generated simply by heat.

Note that for both 1) and 2) above, allowing the pixel to integrate charge for longer creates more electrons at the sense node. Thus, in the equation shown in the image above, both the dark current and the signal are functions of the exposure time and of the gain g, which sits downstream in the flow of charge. Also note that the electrons accumulated in the pixel due to the photoelectric effect depend on the incident photons (illumination), while the dark current is independent of illumination, since its driving factor is heat.
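To make these dependencies concrete, the following is a minimal MATLAB sketch of the mean signal model described above; the rate constants and gain value are illustrative assumptions, not measured numbers.

photoRate = 500;   % photo-generated electrons per second (assumed)
darkRate  = 5;     % thermally generated electrons per second (assumed)
g         = 2;     % analog gain (assumed)
t = 0.01:0.01:0.16;                     % exposure times in seconds
DN = g * (photoRate + darkRate) .* t;   % both terms scale with t and with g
plot(t, DN); xlabel('Exposure time (s)'); ylabel('Mean output (a.u.)');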

Theory: Dark Current Noise

Introduction: Dark current noise is induced by thermally generated electrons, which are indistinguishable from photo-generated electrons. Dark current depends on the sensor temperature; thus, high-end CCDs are designed with mechanisms such as cooling (or, in CMOS, special circuitry) to reduce the amount of dark current. Even in the absence of light, charge accumulates in each pixel, and this accumulation of charge constitutes the 'dark current' noise. A well-known method for measuring dark current noise is to compute the slope of the dark images with respect to different exposure durations.

Example of dark current (Source Kodak, KA 0401 sensor).


Another noise component associated with the dark signal is DSNU, the Dark Signal Non-Uniformity, which is seen as a fixed offset between pixels in the dark.
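A minimal MATLAB sketch of the slope method described above, with placeholder dark levels standing in for values measured from the captured frames:

t = [0.01 0.02 0.04 0.08 0.16];  % exposure durations in seconds
% meanDark(k): mean digital value of the dark frame at exposure t(k)
% (placeholder values here; in practice computed from the .dng files)
meanDark = [64.1 64.2 64.4 64.8 65.6];
coeff = polyfit(t, meanDark, 1); % slope = dark current rate, intercept = offset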

Methods

The method used to evaluate each sensor parameter is to identify the parameter with the fewest dependencies and to control all the other parameters while estimating it. We start with lens shading.

A. Lens Shading

Before diving into details of the algorithm that models the lens shading effect, we first introduce the data provided for this project.

Data

The main folder is PRNU_DSNU, which contains six subfolders with images captured under different ISO speeds (55, 99, 198, 299, 395, and 798). Within each of these subfolders, there are five image bursts corresponding to five different exposure durations. Each image burst contains five images. The illustration below summarizes the described folder structure.

An illustration of the folder structure.

Algorithm

Given the provided data, we run our algorithm for each image burst independently. The illustration below shows the modeling pipeline.

First, we perform element-wise matrix averaging on the five images within an image burst to mitigate random noise. We then extract the digital values of each color channel in the averaged image to get a smaller matrix with half the original image size. For example, in the illustration below, we plot the matrix of the blue channel in 3 dimensions with the digital value on the z-axis. We can clearly see the lens shading effect, which causes a higher digital value in the central area. Note that for this step, we get different matrices for the G1 and G2 green channels in the Bayer pattern. Next, for each of the smaller matrices, we use the fit function from MATLAB's Curve Fitting Toolbox to fit a 2-dimensional polynomial to the digital values of each color channel. For example, the illustration below shows the fitting process for the blue channel. We refer to these fitted values for each color channel as a color-wise lens shading map. In total, we get four maps that correspond to the four color channels of each image burst. For the final step, which is not shown here, we combine the four color-wise maps into a single lens shading map by assigning their values to a new image-size matrix based on their positions in the Bayer pattern of the averaged matrix. This new matrix is the final lens shading map for the image burst.

The lens shading modeling pipeline.
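A condensed MATLAB sketch of this pipeline for a single image burst and one channel is shown below; the complete scripts appear in the Appendix, and the variable fnames (the burst's five .dng file names) is an assumption for illustration.

avg = zeros(3024, 4032);
for j = 1:5                                % average the five burst images
    sensor = sensorDNGRead(fnames{j});
    avg = avg + double(sensorGet(sensor, 'dv')) / 5;
end
B  = avg(1:2:end, 1:2:end);                % split the Bayer mosaic:
G1 = avg(1:2:end, 2:2:end);                %   B   G1
G2 = avg(2:2:end, 1:2:end);                %   G2  R
R  = avg(2:2:end, 2:2:end);
[mx, my] = meshgrid(1:size(B,1), 1:size(B,2));
mx = mx'; my = my';
sfB = fit([mx(:) my(:)], B(:), 'poly22');  % 2-D polynomial surface per channel
mapB = reshape(sfB(mx(:), my(:)), size(B));
mapB = mapB ./ max(mapB, [], 'all');       % normalize so the peak equals one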

B. Read Noise

It can be clearly inferred from the circuit diagram above that the read noise is physically not a consequence of the pixel: it is independent of the pixel but adds onto the pixel signal. Hence, exposure time is of no consequence for readout noise, and the way to isolate it is to set everything else to zero. We know from the previous paragraph that both the dark current and the signal need a finite exposure (integration) time in which to collect charge. By denying the camera any exposure time and illumination, we ensure that all the voltages being read out are solely a function of the read noise. Read noise can include the combined effect of reset-switch noise and circuit noise, and it is expected to increase with increasing gain. Typically, information is read out either all pixels at once or column by column, so read noise can sometimes show up as column noise as well.

1. Read noise estimation flowchart: The read noise in the images corresponds to the noise floor. There are a couple of ways to estimate it, given the measurement accuracy and conditions. The first method evaluates difference pairs from the raw image dataset and compares their standard deviations. The second, more accurate method computes a mean image, subtracts each individual image from it, and derives the noise statistics from the standard deviation of all the pixels in the residual images.

Read noise Estimation Procedure. Flow 1 - With difference image pairs
Read noise Estimation Procedure. Flow 2 - Using subtraction from the mean image


2. Results: The difference images typically look like salt-and-pepper noise spread across the entire image sensor. We couldn't find any spatial correlation along the rows or columns of the sensor. The pixel channel is irrelevant to this analysis, since the illumination is fixed at zero and there is no accumulation of electrons in the pixel for this experiment. For flow 1, a normalizing factor of sqrt(2) was used to convert noise in the difference image to noise in a single image. The results below are plotted in microvolts versus the gain.

Difference Image

A few words on the difference images. A typical difference image looks like the one below: salt-and-pepper noise spread across the image sensor.

Difference image of pair of images at zero exposure and illumination.


Spatial samples at fixed portions of the sensor were selected to analyze for any variation in noise statistics. Histograms of the difference images were plotted, and the observation was that there was no spatial correlation between pixel position and readout noise. This is also what was expected.

Difference image evaluated across different spatial regions within the sensor.
Corresponding histograms evaluated for each of these difference images. Note that there is almost no difference: the position on the sensor shows no correlation with read noise, indicating that the readout circuit components are fabricated very uniformly.

Normalization principle for flow 1
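A minimal MATLAB sketch of the two estimators, assuming imgs is a stack of N zero-exposure dark frames (rows x cols x N) already converted to double:

% Flow 1: standard deviation of a difference pair, normalized by sqrt(2)
d = imgs(:,:,1) - imgs(:,:,2);
readNoise1 = std2(d) / sqrt(2);  % estimate for a single image

% Flow 2: subtract the mean image and pool the residuals of all N frames
m = mean(imgs, 3);
res = imgs - m;                  % implicit expansion over the third dimension
readNoise2 = std(res(:));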

C. Dark Current

The provided DarkCurrentRate data was used to perform the dark current noise estimation. The data consists of images captured in the dark with exposure durations of 0.01, 0.02, 0.04, 0.08, and 0.16 seconds, at ISO speeds ranging from ISO 55 to ISO 798. The data is used to measure the growth of the dark noise with time; we used the ISO 55 data to simplify the computation, since the gain at ISO 55 is 1. ISETCAM (the Image Systems Engineering Toolbox for Cameras), a MATLAB toolbox provided by Stanford University, was used for the dark current noise and DSNU measurements.

For dark current, the mean value of 5 images was extracted. Since we can only access digital values, this is the best accuracy we can achieve in terms of an "unprocessed signal". The slope was then calculated from the mean values with respect to the various exposure times.

Dark current measurement procedure.

A zoomed-in view of the .dng image shows the pixels corresponding to accumulated noise in the image.

D. DSNU Estimation

DSNU, the Dark Signal Non-Uniformity, is seen as a fixed offset between pixels in the dark. For DSNU, the method is similar to the dark current noise measurement, except that we corrected the result for lens shading and used the y-intercept of the fit to compute the result.
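A minimal sketch of this summary step, assuming intercepts is the per-pixel y-intercept map obtained from the line fits after lens shading correction:

% DSNU: spatial standard deviation of the per-pixel dark offsets
dsnu = std2(intercepts);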

E. PRNU Estimation

PRNU stands for pixel response non-uniformity. The aim is to calculate the pixel response and determine its non-uniformity. To understand this in more detail, refer to the figure below, which plots output voltage versus the number of photons. Even under no illumination (0 photons), there is a finite dark current due to thermal effects, as seen in the previous section. Hence, the output voltage does not start at zero for zero incoming photons but contains some dark contribution. As light becomes incident on the pixel, the pixel starts converting photons into electrons, and this occurs in the linear regime. There is a minimum number of photons that must be incident on the pixel to start the creation of electrons; this is determined by the pixel design, and pixels that can count single photons do exist. For the purpose of our analysis with constant illumination, we vary the exposure time as a proxy for the x-axis (number of photons). The output voltage is expressed in volts, which is then converted to a gray level in our image. Essentially, the idea is to fit a line between pixel value and exposure time in the linear regime of the graph and compare the variation in the slopes. The standard deviation of the slopes is measured, its ratio to the mean slope is taken, and the result is expressed as a percentage. The mathematical formula is defined below:
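In the notation used here, with the per-pixel response slopes collected over a region:

\mathrm{PRNU}_{1288} = 100\% \times \frac{\sigma(\text{slopes})}{\mu(\text{slopes})}

where \sigma and \mu denote the standard deviation and the mean of the slopes.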


PRNU formula, expressed as a percentage [7]


Discussion: We saw in section A that the lens shading effect spatially modulates the photons received at the image sensor. Hence, as a precursor to PRNU estimation, we first compensate for lens shading using the results generated in section A. This gives rise to the workflow below for estimating the sensor's PRNU. The first step is to compute a mean image from the repeated captures per ISO value and exposure time; this compensates for shot noise to the extent possible, although taking the mean will not remove the noise entirely. The second step is to correct for lens shading, followed by cropping a ~1 MP portion from the sensor and further dividing it into 16 tiles. PRNU is then evaluated on patches of size 64x64 pixels.

Workflow for PRNU estimation


We observe that some residual error remains after the polynomial-fit lens shading correction. The following image demonstrates the need to be more rigorous in order to arrive at the right PRNU estimates. We use a cropped portion at the center (~1 MP). However, even within this 1 MP crop out of the 12 MP sensor, the pixel responses still follow a gradual spatial variation. Hence, the approach is to divide this 1 MP crop into 16 patches of 64x64 px, thereby analyzing 4k pixels at a time. The results are shared in the Results section.

An illustration of the gray-level intensity profiles at the sensor from left to right (a): Before correction for lens shading, (b): After applying map for lens shading correction, (c): A localized ~ 1 MP patch within the 12 MP camera and (d): A further localized 64x64 px. patch


Shown below is a surface comparison of the pixel values and the pixel responses extracted from the data with varying exposure times. The strong correlation between the surfaces tells us that any standard deviation evaluated on these pixel responses is likely to contain a significant contribution from the lens shading artefacts, and thus won't be truly indicative of the non-uniformities in pixel response. Hence the idea to subdivide further into 16 patches.

From left to right (a): Gray-level , (b): Pixel response evaluated as a slope evaluated by plotting exposure time on x-axis and gray-level on the y-axis per pixel


Slope estimation routine: A function that evaluates PRNU from an image stack, given a tile size, is shown below. The critical part is the line fit itself, done with the MATLAB function polyfit(x, y, 1), which fits a degree-1 polynomial between x and y. The split_channels() function splits the evaluated slopes and intercepts per channel, after which the mean and standard deviation are estimated per channel. Thus we obtain PRNU per channel per ISO speed for our experiments. A link to the complete code is shared at the end of the report.

Function taking in Image and tilesize as arguments and retrieving the slopes and intercepts for each color channel as well as the entire pixel overall
function [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b]  = evaluate_prnu_dsnu(x, Im_stack_cropped, tileSize)
   rows = tileSize(1);
   cols = tileSize(2);
  
   out_3d_dim = [2, rows, cols];
  
   final_coeffs = zeros(out_3d_dim);
   tic
   for i=1:1:rows
       for j = 1:1:cols
           y = Im_stack_cropped(:,i,j);
           coeff = polyfit(x,y,1);
           final_coeffs(1,i,j) = coeff(1);
           final_coeffs(2,i,j) = coeff(2);
       end
   end
   toc
   t = toc;
   slopes = reshape(final_coeffs(1,:,:), tileSize);
   intercepts = reshape(final_coeffs(2,:,:), tileSize);
   [slopes_r, slopes_g, slopes_b] = split_channels(slopes);
   [intercepts_r, intercepts_g, intercepts_b] = split_channels(intercepts);
end

We notice from the figure below that the pixel response varies to a much smaller extent over the 64x64 pixel patch clips. This validates our choice of a 64x64 pixel patch clip for evaluating PRNU for this image sensor. The PRNU values calculated per channel are available in the Results section.

Plot of the pixel responses across the 1 MP patch. Illustrates the variance in slope even in a 1MP cropped portion of the image sensor
Plot of the pixel responses across a 64x64 tile. Shows significantly lower variance in slope. PRNU is more effectively measured this way as the residual effects of lens shading correction are ignored.

Results and discussion

A. Lens Shading

For the experiment results, we first look at the lens shading map of a single image burst. We then compare the maps of different exposure duration and ISO speed. In addition, we also compare our map with another standard approximation approach. Finally, we apply the maps to a set of real-world images to remove their lens shading effect.

Single Image Burst

In this section, we model the lens shading map of a single image burst captured with an ISO speed of 55 and an exposure duration of 1/100 second. Following the modeling pipeline described above, we can first visualize the digital value of each color channel using 3-dimensional plots, which are shown below. While the range of digital values (z-coordinates) is different among color channels, every plot is higher in the central area and lower in the periphery. This observation indicates that all the color channels have a lens shading effect.

The 3-dimensional plots of digital values.

Next, we fit 2-dimensional polynomial surfaces to each of these plots separately. The resulting color-wise maps are shown below.

The 3-dimensional plots of color-wise maps.

Since our goal is to get the relative illumination, we normalize each color-wise map as shown below. Now, the maximal value of each map is equal to one.

The 3-dimensional plots of normalized color-wise maps.

Finally, we stack these normalized color-wise maps together by assigning their values to an image-size matrix based on their corresponding positions in the original Bayer pattern. The resulting matrix, visualized below, is our final lens shading map for the image burst captured with an ISO speed of 55 and an exposure duration of 1/100 second. As we can see, there are multiple unaligned layers in this final map, indicating that the lens shading effect of each color channel is slightly different. We call this map our base map; we will compare the others against it and apply it in the analysis of sensor noise.

The lens shading map.
Exposure Duration & ISO Speed

Following the same modeling pipeline, we may compute the lens shading map for every image burst. For example, consider image bursts with the same 55 ISO speed. We get each of their maps as shown below. To understand how different they are, we compute their relative difference to the base map, which is the one we get above and also the left-most one in this figure. We visualize their differences in 2-dimensional plots shown under each lens shading map. The value of the differences is about 0.001% in the central area and 10% in the corners. We conclude that the exposure duration does not significantly affect the lens shading effect.

The lens shading map for different exposure duration and their corresponding differences to the base map.

Similarly, we may compute the lens shading map for each ISO speed and compare their differences to our base map. The image below visualizes their differences. We again conclude that the ISO speed does not significantly affect the lens shading effect.

The differences for each lens shading map in comparison to our base map.

In summary, neither the exposure duration nor the ISO speed significantly changes the lens shading map in our experiment. We believe this result is reasonable because exposure duration and ISO speed have nothing to do with the cause of lens shading, as introduced previously in the theory section. One way these factors could affect the lens shading effect is if the sensor saturates after a long exposure: in that case, the digital values at the central and peripheral areas of the sensor would be identical, and the fitted map would become a flat surface. However, we do not observe any saturation in our data and are confident that our conclusion is valid for our experiment.

Standard Approach

According to our analysis, we may use a single lens shading map for all exposure durations and ISO speeds because there are no significant differences. As a result, we choose our base map as the final lens shading map and compare it to another standard modeling approach: general approximation. As shown in the illustration below, general approximation estimates the value of the lens shading map using a cosine function. When theta equals zero, i.e., on the central axis, the cosine equals one, which is identical to our lens shading map.

An illustration of the general approximation approach and its equation.
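For reference, the cos-fourth falloff used by this approximation (with focal length d and off-axis image height h, matching the cos4th.m script in the Appendix) is:

\mathrm{RI}(\theta) = \cos^4\theta = \left(\frac{d}{\sqrt{d^2 + h^2}}\right)^4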

As shown below, we plot our lens shading map (left) and the one obtained from general approximation (right) side-by-side.

The 3-dimensional plot of our lens shading map and the one obtained using general approximation.

To further visualize their difference, we compute their relative difference and plot it both 2- and 3- dimensionally, as shown below. Overall, we find that the central area and the corners have a difference closer to zero, which means that the two maps are similar in these regions, except for one color channel that has a more salient difference between the two.

The comparison between our map and the general approximation.
Application

Finally, we apply our map to remove the lens shading effect from the five provided images in this project. The images in the first row below are the original images without processing; the images in the second row have the lens shading effect removed. Comparing the two rows, we notice some artifacts at the corners. We hypothesize that these artifacts are due to the microlenses, which cause a second lens shading effect. Specifically, as the viewing angle increases, light may fall outside the photosensitive area due to the microlens, causing our modeled lens shading map to be smaller than expected at the corners. When we later multiply the original images by the reciprocal of our lens shading map, the corners thus end up with higher values than they should have. Further experiments are required to verify this hypothesis.



B. Read Noise

The results follow the expected trend with respect to gain: the read noise shows a near-linear increase with increasing gain (ISO speed) on the x-axis. Moreover, it is understandable that the noise estimates from the difference images are slightly higher than those from method 2: difference images nearly double the noise power, and the standard sqrt(2) normalization for uncorrelated images may not work perfectly because the images are not truly uncorrelated. They are burst images, so each pixel corresponds to the same physical readout circuitry. Hence, we use the results obtained from flow 2 for the rest of the exercise.


Final read noise estimates evaluated on the Google Pixel 4a image sensor.

C. Dark Current

The plot below consists of a .dng image (left), the overall intensity histogram of the grayscale image (middle), and the slope of the dark current image with respect to different exposure times (right). The pixel concentration in the histogram at 64 shows the black level.



Dark Noise Result:


Voltage conversion:



This is a 10-bit sensor, so the white level is 1023
The sensor model was created from the dark image using the ISETCAM software, and the voltage swing obtained from the sensor model is 0.459 V.

  • ADC reading is 0.0640 (Slope of dark image vs exposure time)
  • Here is the method used for voltage conversion,
  • Voltage measured = (0.0640) *(0.459)/1023 = 2.87 e-05V = 28.7 Micro-Volts
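A quick MATLAB check of this conversion, using the values above:

slope      = 0.0640;  % slope of dark level vs. exposure time (DN per second)
vswing     = 0.459;   % voltage swing in volts, from the ISETCAM sensor model
whiteLevel = 1023;    % 10-bit sensor
v = slope * vswing / whiteLevel   % = 2.8716e-05 V, i.e. about 28.7 microvolts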


Examine the relationship with different channels (R,G,B):

Once the dark noise estimate was complete, we further examined the noise relationship with respect to the different channels (R, G, B).
Interestingly, the slope for R (0.053) is lower, the slope for G (0.071) is higher, and the slope for B (0.062) is close to the mean value.
We believe the variability in the slope can be quite high if the longest exposure duration is not long enough.




D. DSNU


Relationship between noise estimates and gain/exposure duration:
The table below shows the slope and y-intercept results for the different ISO settings (55 to 798). It shows a linear relationship, i.e., the slope increases with an increase in gain.

E. PRNU

PRNU by channel: The figure below compares the per-channel PRNU estimates with and without accounting for the residual illumination pattern (a function of the lens) on the image sensor. The data show that, on average, the PRNU values estimated by locally analyzing 64x64 pixel patch clips are roughly 50% lower than those estimated with residual lens shading effects, a remarkable improvement in the accuracy of the PRNU evaluation. A direct inference from the graph below is that PRNU should be analyzed separately for each channel. The illuminant has a spectral power distribution to which the sensor's color filters have a specific response; this response differs between channels because the spectral power each channel sees is different. The results are reported in percent, as defined in the Method section 4.E.


PRNU evaluation (expressed as percentage as described by formula in the previous section) at ISO 55 per channel of the sensor with the 1MP patch and one of the 64x64 tiles.

Workflow validation: To confirm that the PRNU in the 1 MP image was a result of lens shading artefacts, we plotted the PRNU values of each of the 64x64 patch clips and found all of them to be consistently around the same value. This gives us confidence that our methodology for evaluating pixel non-uniformities is pointing in the right direction. See the figure below:

PRNU data (expressed as percentage as described by formula in the previous section) at ISO 55 for each tile of the 16 tiles of 64x64 px. in the central 1MP patch. Although the pixel responses at these tiles are different, the variation in the pixel responses (standard deviation) is nearly constant.

Variation with gain: The PRNU was evaluated for different values of gain. We observed a weakly increasing trend with higher ISO speed. Since we already posited that the right way to calculate PRNU is to analyze each channel individually, only the trends noted in individual channels are considered valid trends for this analysis. The trend makes sense as the pixel responses are expected to increase with higher gain. So when the pixel responses increase, the standard deviation also tends to increase slightly. Please see the figure below:

Variation of PRNU across ISO speeds. Shows a weak increasing trend with increase in the value of gain, which is expected.

Conclusions

In this project, we first model the lens shading effect by fitting a 2-dimensional polynomial to images of a uniform field captured with the Google Pixel 4a. We compare the lens shading effect under different exposure times and ISO speeds, and against one standard approximation approach. We use our modeled map to remove the lens shading effect from several images and to correct the data for the sensor noise analysis. Secondly, the dark current noise was calculated by measuring the slope of the dark images with respect to different exposure durations, and the corresponding voltage value was obtained from the voltage swing of the sensor model and the slope results. We also examined the noise relationship with respect to the different channels (R, G, B), followed by slope calculations at different gain settings. For DSNU, the method is similar to the dark current noise measurement, except that we corrected the result for lens shading and used the y-intercept to compute the result. Finally, pixel non-uniformities were estimated from the pixel responses of the camera's pixels. Since we couldn't completely correct the effect of lens shading, we adopted a smaller tile-based approach to address the issue. In general, the green pixels show lower non-uniformities across the different ISO speeds. Moreover, each color filter produces a distinct pixel response, so it is not a good idea to aggregate the pixel responses of all channels into one PRNU estimate; the right approach is to segment the PRNU estimation by the color filter array pattern.

References

[1] J. E. Farrell, F. Xiao, P. Catrysse, B. Wandell, "A simulation tool for evaluating digital camera image quality," Proc. SPIE vol. 5294, pp. 124-131, Image Quality and System Performance, Miyake and Rasmussen (Eds.), January 2004.
[2] J. E. Farrell, P. B. Catrysse, B. A. Wandell, "Digital camera simulation," Applied Optics, Vol. 51, Iss. 4, pp. A80-A90, 2012.
[3] J. E. Farrell and B. A. Wandell, "Image Systems Simulation," Handbook of Digital Imaging (Edited by Kriss), Chapter 8, 2015. ISBN: 978-0-470-51059-9.
[4] Joyce Farrell and Brian Wandell, 2020, ISETCam, https://github.com/ISET/isetcam.
[5] Philip Kaaret, "Astronomical Laboratory ASTR:4850", Spring 2018.
[6] Assim Boukhayma, Arnaud Peizerat and Christian Enz, "Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors," Sensors 2016, 16(4), 514; doi: 10.3390/s16040514.
[7] "Standard for Characterization of Image Sensors and Cameras," EMVA 1288, Nov 2010. https://www.emva.org/wp-content/uploads/EMVA1288-3.0.pdf

Appendix

A. Lens Shading

The code for lens shading map includes five MATLAB files: main.m, getMap.m, analysis.m, cos4th.m, and interesting.m. To run these files, they should be on the same path as the folder cameranoise, which contains two subfolders: PRNU_DSNU and Interesting. While we have introduced the subfolder PRNU_DSNU above, the subfolder Interesting includes five .dng real-world images. The folder structure looks like this:

.
├── main.m              % model the lens shading map
├── getMap.m            % a function that helps model the lens shading map
├── analysis.m          % analyze maps of different exposure duration and ISO speed
├── cos4th.m            % compare maps between ours and the one using general approximation
├── interesting.m       % remove the lens shading of real-world images
└── cameranoise
    ├── PRNU_DSNU       % data folder, as introduced above
    └── Interesting     % contains five .dng real-world images

main.m

The script main.m generates the 3D plots of raw digital values, fitted maps, coefficients of the 2D polynomials, and the final lens shading map. To switch between these four outputs, set the variable 'task' to one of the following: 'Raw', 'Fit', 'Table', or 'Map'. You may keep only the exposure durations and ISO speeds of interest by removing elements from the arrays 'exposure' and 'isoSpeed'.


clc; clear; ieInit;

task = 'Map'; % task = 'Raw', 'Fit', 'Table', 'Map'

imageSize   = [3024,4032]; % 1.4 μm pixel width
burstNumber = 5;
exposure  = [1, 2, 3, 4, 5];
isoSpeed  = [55, 99, 198, 299, 395, 798];
tableR    = zeros(length(isoSpeed) * length(exposure), 8); % columns: [iso, exposure, p00, p10, p01, p20, p11, p02]
tableG1   = zeros(length(isoSpeed) * length(exposure), 8);
tableG2   = zeros(length(isoSpeed) * length(exposure), 8);
tableB    = zeros(length(isoSpeed) * length(exposure), 8);

normalize = true;

% meshgrid
x = 1:imageSize(1)/2;
y = 1:imageSize(2)/2;
[mesh_x,mesh_y] = meshgrid(x,y);
mesh_x = mesh_x';
mesh_y = mesh_y';

for iso_index = 1:length(isoSpeed)
    iso = isoSpeed(iso_index);
    folder = ['cameranoise/PRNU_DSNU/ISO_' int2str(iso) '/'];
   
    if strcmp(task,'Raw') || strcmp(task,'Fit')
        figure('Name',[task ' Relative Illumination - isospeed ' int2str(iso)]);
    end
   
    for i = 0:4 % filenames end with i
        folderInfo   = dir([folder, '*', int2str(i), '.dng']);
        folderName   = {folderInfo.name};
        digitalValue = zeros(imageSize);
        for j = 1:5 % 5 burst
            fname = [folder, folderName{j}];
            [sensor, info] = sensorDNGRead(fname);
            digitalValue = digitalValue + double(sensorGet(sensor, 'dv'));
        end
        % extract digital values
        % B  G1
        % G2  R
        digitalValue = digitalValue ./ 5;
        G1 = digitalValue(1:2:end,2:2:end);
        G2 = digitalValue(2:2:end,1:2:end);
        R  = digitalValue(2:2:end,2:2:end);
        B  = digitalValue(1:2:end,1:2:end);

        % fitting poly22
        sfR  = fit([mesh_x(:) mesh_y(:)],  R(:), 'poly22');
        sfG1 = fit([mesh_x(:) mesh_y(:)], G1(:), 'poly22');
        sfG2 = fit([mesh_x(:) mesh_y(:)], G2(:), 'poly22');
        sfB  = fit([mesh_x(:) mesh_y(:)],  B(:), 'poly22');

        if strcmp(task,'Raw')  
            
            subplot(4,5,i+0*5+1);
            scatter3(mesh_x(:), mesh_y(:),  R(:), 1, 'r');
            title('Red');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);

            subplot(4,5,i+1*5+1);
            scatter3(mesh_x(:), mesh_y(:), G1(:), 1, 'g');
            title('Green 1');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
           
            subplot(4,5,i+2*5+1);
            scatter3(mesh_x(:), mesh_y(:), G2(:), 1, 'g');
            title('Green 2');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);

            subplot(4,5,i+3*5+1);
            scatter3(mesh_x(:), mesh_y(:),  B(:), 1, 'b');
            title('Blue');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            
        elseif strcmp(task,'Fit')
            
            tmp = subplot(4,5,i+0*5+1);
            plot(sfR);
            colormap(tmp, autumn);
            title('Red');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);

            tmp = subplot(4,5,i+1*5+1);
            plot(sfG1);
            colormap(tmp, summer);
            title('Green 1');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);

            tmp = subplot(4,5,i+2*5+1);
            plot(sfG2);
            colormap(tmp, summer);
            title('Green 2');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);
            
            tmp = subplot(4,5,i+3*5+1);
            plot(sfB);
            colormap(tmp, winter);
            title('Blue');
            xlabel('x');
            ylabel('y');
            xlim([0 x(end)]);
            ylim([0 y(end)]);            
            
        elseif strcmp(task,'Table')
            row = (iso_index - 1) * 5 + i + 1;
            tableR(row,1:8)  = [iso, exposure(i+1), sfR.p00, sfR.p10, sfR.p01, sfR.p20, sfR.p11, sfR.p02];
            tableB(row,1:8)  = [iso, exposure(i+1), sfB.p00, sfB.p10, sfB.p01, sfB.p20, sfB.p11, sfB.p02];
            tableG1(row,1:8) = [iso, exposure(i+1), sfG1.p00, sfG1.p10, sfG1.p01, sfG1.p20, sfG1.p11, sfG1.p02];
            tableG2(row,1:8) = [iso, exposure(i+1), sfG2.p00, sfG2.p10, sfG2.p01, sfG2.p20, sfG2.p11, sfG2.p02];            
        elseif strcmp(task,'Map')
            map = getMap(sfR, sfG1, sfG2, sfB, imageSize);
        else
            disp('error');
        end
    end
end

getMap.m

This is a helper function for main.m. Basically, it stacks the fitted color-wise maps together to generate the lens shading map.

function map = getMap(sfR, sfG1, sfG2, sfB, imageSize)

    % meshgrid
    x = 1:imageSize(1)/2;
    y = 1:imageSize(2)/2;
    [mesh_x,mesh_y] = meshgrid(x,y);
    mesh_x = mesh_x';
    mesh_y = mesh_y';

    % B  G1
    % G2  R
    map = zeros(imageSize);

    R  =  sfR(mesh_x(:),mesh_y(:));
    G1 = sfG1(mesh_x(:),mesh_y(:));
    G2 = sfG2(mesh_x(:),mesh_y(:));
    B  =  sfB(mesh_x(:),mesh_y(:));

    R  =  R ./ max(R, [],'all');
    G1 = G1 ./ max(G1,[],'all');
    G2 = G2 ./ max(G2,[],'all');
    B  =  B ./ max(B, [],'all');

    R  = reshape( R, imageSize./2);
    G1 = reshape(G1, imageSize./2);
    G2 = reshape(G2, imageSize./2);
    B  = reshape( B, imageSize./2);

    map(2:2:end,2:2:end) = R;
    map(1:2:end,2:2:end) = G1;
    map(2:2:end,1:2:end) = G2;
    map(1:2:end,1:2:end) = B;

end

analysis.m

To run this script, we will need to generate the map of every image burst first using main.m introduced above. The script will create a 2d visualization of the differences between each map and base map.

imageSize = [3024,4032]; % 1.4 μm pixel width
isoSpeed  = [55, 99, 198, 299, 395, 798];

Ni = length(isoSpeed);
Ne = 5;
mapBase = load('map_55_1.mat');
figure('Name','Relative Illumination - Analysis');
for i = 1:Ni
    for e = 1:Ne
        mapCompare = load(['map_' int2str(isoSpeed(i)) '_' int2str(e) '.mat']);
        subplot(Ni, Ne, (i-1)*Ne+e);
        imagesc((mapCompare.map - mapBase.map) ./ mapBase.map);
        % set the title after imagesc, which would otherwise reset the axes
        title(['ISO ' int2str(isoSpeed(i)) ' EXP ' int2str(e)]);
        ax2 = gca;
        caxis(ax2,[-1 1]);
        colorbar;
    end    
end

cos4th.m

This script will create 3d visualizations of our base map and the one obtained with general approximation. It will also compute their relative difference and plot it in 2d.

data      = load('map_55_1.mat');
mapBase   = data.map;
imageSize = [3024,4032];
unit      = 1.4 * 1e-6; % 1.4 μm pixel width
center_x  = imageSize(1)/2;
center_y  = imageSize(2)/2;

% meshgrid
xx = 1:imageSize(1);
yy = 1:imageSize(2);
[mesh_xx,mesh_yy] = meshgrid(xx,yy);
mesh_xx = mesh_xx';
mesh_yy = mesh_yy';

d = 4.38 * 1e-3; %focal length: 4.38 mm
h = sqrt( (mesh_xx - center_x).^2 + (mesh_yy - center_y).^2 ) .* unit;
mapCompare = ( d ./ sqrt(d^2 + h.^2) ).^4;
figure('Name', 'mapCompare');
scatter3(mesh_xx(:), mesh_yy(:), mapCompare(:), 1, mapCompare(:));
figure('Name', 'mapBase');
scatter3(mesh_xx(:), mesh_yy(:), mapBase(:)   , 1, mapBase(:));
figure('Name', 'diff');
imagesc((mapCompare - mapBase)./mapBase);
ax2 = gca;
caxis(ax2,[-1 1]);
colorbar;

interesting.m

This script visualizes an image before and after the removal of lens shading using the provided map. Different maps can be loaded with the 'load' function, and a different image can be specified by changing the variable 'fname'. Set the variable 'remove' to true to remove lens shading. For some images, the rotation of the Bayer pattern may differ from the provided map; the rotation can be checked with 'sensorWindow(sensor)'.

load('map.mat');

remove = false;
rotate = false;

fname = 'cameranoise/Interesting/IMG_20201027_173503.dng'; 

[sensor, info] = sensorDNGRead(fname);

if remove
    imageA = sensorGet(sensor, 'dv');
    imageA = double(imageA);
    if rotate
        imageA = rot90(imageA,3); % rotate 270° counterclockwise
    end
    imageA = imageA ./ map;
    if rotate
        imageA = rot90(imageA,1);
    end
    sensor = sensorSet(sensor, 'dv', imageA);
end

ip = ipCreate;
ip = ipSet(ip, 'render demosaic only', true);
ip = ipCompute(ip, sensor);
ipWindow(ip);

B. Read Noise

The code below estimates read noise with flow 2, i.e., evaluating the difference of each frame from the burst mean (after feedback, we decided to retain only flow 2 for the final report write-up, as it is the better method for calculating read noise). Please note that the code was written on Windows, so reading the image files may fail on a Mac, which uses forward slashes as the directory separator.
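
In equation form, flow 2 computes, for each ISO setting, the pixel-wise standard deviation of the difference between the burst mean and each individual dark frame, then converts gray levels to volts using the voltage swing and bit depth:

\[ \bar{I} = \frac{1}{N}\sum_{j=1}^{N} I_j, \qquad \sigma_{\mathrm{read}}^{(\mathrm{DN})} = \operatorname{std}_{x,y}\!\big(\bar{I} - I_j\big), \qquad \sigma_{\mathrm{read}}^{(\mathrm{V})} = \sigma_{\mathrm{read}}^{(\mathrm{DN})} \cdot \frac{V_{\mathrm{swing}}}{2^{B}} \]

with N = 5 frames per burst, B = 10 bits, and V_swing ≈ 0.459 V, matching the helper functions estimate_read_noise and dv_to_v at the end of the script.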

% This section reads in images and stores the digital value

clc; clear; ieInit;

% macro
imageType = 'Read_Noise'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
readNoiseFolder = 'C:\Users\shwet\Downloads\ReadNoise';    % 55, 99, 198, 299, 395, 798
outfilePath = 'D:\documents_bkup\C\Course\psych221\project_data\reset_noise_solutions';

if (strcmp(imageType,'PRNU_DSNU'))
	imageSize   = [3024,4032];
	burstNumber = 5;
	digitalValue = zeros(imageSize);
elseif(strcmp(imageType,'DarkCurrentRate'))
	imageSize   = [3024,4032];
	burstNumber = 5;
	digitalValue = zeros(imageSize);
elseif(strcmp(imageType,'Read_Noise'))
	imageSize   = [3024,4032,6,6];
	burstNumber = 5;
	digitalValue = zeros(imageSize);
else
	imageSize = 'others';
	burstNumber = -1;
end

%figure('Name','Relative Illumination');
dirS = dir(readNoiseFolder);
isoVal = zeros(6,1);
k=1;
for j = 1:1:length(dirS)

	if contains(dirS(j).name, '.')

		% skip '.', '..', and any entry whose name contains a dot
		
	else

		inpFolder = strcat(readNoiseFolder, '\', dirS(j).name);
		files = dir( fullfile(inpFolder,'*.dng') );
		temp = split(inpFolder, '_');
		isoVal(k) = str2num(temp{2});
		
		for i=1:1:length(files)
			
			cd(inpFolder);
			fname = files(i).name;
			[sensor, info] = sensorDNGRead(fname);
			digitalValue(:,:,k,i) = digitalValue(:,:,k,i) + double(sensorGet(sensor, 'dv'));
			
		end
		
		k=k+1;
		
	end

end

%Special-case code to synthesize a 6th image for ISO = 99, because only 5 images
%are available. It is the average of the first 5 images.

for mc=1:1:5
	digitalValue(:,:,6,6) = digitalValue(:,:,6,6) + digitalValue(:,:,6,mc);
end
digitalValue(:,:,6,6) = digitalValue(:,:,6,6)./5;

read_noise_dv = zeros(6,5);
read_noise_v = zeros(6,5);

%% This section computes read noise, uses simple functions defined in the functions section


average_px_value_per_ISO = mean(digitalValue(:,:,:,1:5), 4);

for k = 1:1:6
	
	for j = 1:1:5
		
		Im1 = reshape(average_px_value_per_ISO(:,:,k), [3024 4032]);
		Im2 = reshape(digitalValue(:,:,k,j), [3024 4032]);
		
		read_noise_dv(k,j) = estimate_read_noise(Im1, Im2);
		read_noise_v(k,j) = dv_to_v(read_noise_dv(k,j), 10, 0.4591);
		
	end
	
end

read_noise_per_ISO = mean(read_noise_v, 2);
lgd = strcat('ISO=', num2str(isoVal));
T = table([lgd], [read_noise_per_ISO]);
cd(outfilePath);
writetable(T, 'read_noise_data2.xls');


%% This section plots all the outputs

f = figure();
set(gcf, 'position', [400, 400, 850, 400]) 
cls = ['b','g','r','c','m','y'];

for k = 1:1:6
	
	scatter(1:5, read_noise_dv(k,:), 18, cls(k), 'filled'); 
	%plot(1:15, read_noise_dv(k,:), cls(k)); 
	hold on;
	
end
legend(lgd);
legend('Location','northeastoutside');
ylim([0,3])
xlim([0,6])
xlabel('Experiment # ');
ylabel('Read noise measured in gray levels')
title('Read noise (in graylevels) over repeated measurements at zero illumination and zero exposure time')

disp('output file saved at:')
disp(outfilePath)
disp('Readout Noise Values in the output file are specified in Volts');

%% This section contains functions developed to perform read noise analysis.


% 1. Read noise estimation
function rn = estimate_read_noise(im1, im2)

	im_df = im1 - im2;
	imsize = size(im_df);
	im_df_rs = reshape(im_df, [imsize(1)*imsize(2),1]);
	rn = std(im_df_rs); 

end


% 2. Voltage conversion
function op_volts = dv_to_v(gray_lev, quantization, voltage_swing)

	n_gls = 2^quantization;
	volt_per_gl = voltage_swing/n_gls; 
	op_volts = gray_lev*volt_per_gl;

end

C. Dark Current Noise

The code used for the dark current noise measurement is shown below. The .dng files under the DarkCurrentRate folder were used to compute the measurements.
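
The script fits the mean dark level against exposure time, y(t) ≈ slope · t + intercept; the dark current in volts per second then follows by scaling the fitted slope by the voltage swing over the 10-bit white level:

\[ \frac{dV_{\mathrm{dark}}}{dt} \;=\; \mathrm{slope}\ (\mathrm{DN/s}) \times \frac{V_{\mathrm{swing}}}{1023}, \]

which is the Voltagemeasured computation at the end of the script.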

Dark Noise Measurement.m

imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
folder = ['Phani_Project/' imageType '/' isoFolder '/'];
strcmp(imageType,'DarkCurrentRate');
imageSize   = [3024,4032];
burstNumber = 5;
x = [];
y = [];
figure('Name','DarkNoise');
for i = 0:burstNumber-1 % filenames end with i
   folderInfo   = dir([folder, '*', int2str(i), '.dng']);
   folderName   = {folderInfo.name};  
   for j = 1:5
       fname = [folder, folderName{j}];
       [sensor, info] = sensorDNGRead(fname);
       img = ieDNGRead(fname);
       E = info.ExposureTime;
       y = [y; mean2(img)];
       x = [x;E];
   end
end
img = ieDNGRead(fname);
[rows, columns, numberOfColorBands] = size(img);
subplot(2, 3, 1);
imshow(img, []);
FontSize = 14;
title('img', 'FontSize', FontSize);
%Set up figure properties:
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
% Let's compute and display the histogram.
img = ieDNGRead(fname);
subplot(2, 3, 2);
histogram(img);
xlim([0, 255]);
grid on;
title('Intensity Histogram of Gray Scale Image', 'FontSize', FontSize, 'Interpreter', 'None');
xlabel('Gray Level', 'FontSize', FontSize);
ylabel('Pixel Count', 'FontSize', FontSize);
% Do the fit to a line:
coeff = polyfit(x,y,1)
fitted_y = polyval(coeff, x);
subplot(2, 3, 3);
plot(x, y, 'bd', 'MarkerSize', 10);
hold on;
plot(x, fitted_y, 'r-', 'LineWidth', 3);
grid on;
title('Dark current plot', 'FontSize', FontSize);
xlabel('Exposure time ', 'FontSize', FontSize);
ylabel('Dark current image ', 'FontSize', FontSize);
message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
   coeff(1), coeff(2));
uiwait(msgbox(message));
%code for voltage conversion
sensor = sensorDNGRead(fname);
sensorWindow(sensor);
voltageswing = 0.459; % Measured from ('IMX363') sensor
% This is a 10-bit sensor so the whitelevel is 1023. ISOSpeed 55 corresponds to analoggain = 1
Resolution = 1023;
Voltagemeasured = (coeff(1)*voltageswing)/Resolution;

Noise relationship w.r.t RGB Channels.m

The code above was modified to compare the values obtained when using data from the different color channels (R, G, B).

   ieInit; clear; close all;
   imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
   isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
   folder = ['Phani_Project/' imageType '/' isoFolder '/'];
   strcmp(imageType,'DarkCurrentRate');
   imageSize   = [3024,4032];
   burstNumber = 5;
   x = [];
   y = [];
   y_r = [];
   y_g = [];
   y_b = [];
   figure('Name','DarkNoise');
   for i = 0:burstNumber-1 % filenames end with i
       folderInfo   = dir([folder, '*', int2str(i), '.dng']);
       folderName   = {folderInfo.name};
       for j = 1:5
           fname = [folder, folderName{j}];
           [sensor, info] = sensorDNGRead(fname);
           img = ieDNGRead(fname);
           E = info.ExposureTime;
           y = [y; mean2(img)];
           x = [x;E];
           % similar codes for r,g,b components:
           [R, G, B] = split_channels(img);
           y_r = [y_r; mean2(R)];
           y_g = [y_g; mean2(G)];
           y_b = [y_b; mean2(B)];
       end    
   end
   img = ieDNGRead(fname);
   [rows, columns, numberOfColorBands] = size(img);
   subplot(2, 3, 1);
   imshow(img, []);
   FontSize = 14;
   % Enlarge figure to full screen.
   set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
   % Do the fit to a line:
   coeff = evaluate_dark_current(x, y, 'RGB');
   coeff_r = evaluate_dark_current(x, y_r, 'R'); 
   coeff_g = evaluate_dark_current(x, y_g, 'G'); 
   coeff_b = evaluate_dark_current(x, y_b, 'B'); 
   % Functions for splitting image into its constituent channels
   function [R, G, B] = split_channels(img)
       G1 = img(1:2:end,2:2:end);
       G2 = img(2:2:end,1:2:end);
       R  = img(2:2:end,2:2:end);
       B  = img(1:2:end,1:2:end);
       G  = (G1 + G2) ./ 2;
   end
   function coeff = evaluate_dark_current(x, y, channel)
       coeff = polyfit(x,y,1);
       fitted_y = polyval(coeff, x);
       f = figure();
       plot(x, y, 'bd', 'MarkerSize', 10);
       hold on;
       plot(x, fitted_y, 'r-', 'LineWidth', 3);
       grid on;
       title(strcat('Fitted Y, Channel = ', channel));
       %ylabel('Fitted Y', 'FontSize', fontSize);
       message = sprintf('Channel = %s\n The slope = %.3f\nThe intercept is %.3f',...
       channel, coeff(1), coeff(2));
       uiwait(msgbox(message));
   end

D. DSNU estimation

clc; clear; ieInit;
   imageType = 'PRNU_DSNU'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
   isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
   folder = ['Phani_Project/' imageType '/' isoFolder '/'];
   load('map.mat');
   strcmp(imageType,'PRNU_DSNU');
   imageSize   = [3024,4032];
   burstNumber = 5;
   x = [];
   y = [];
   figure('Name','DSNU');
   for i = 0:burstNumber-1 % filenames end with i
       folderInfo   = dir([folder, '*', int2str(i), '.dng']);
       folderName   = {folderInfo.name}; 
       for j = 1:5
           fname = [folder, folderName{j}];
           [sensor, info] = sensorDNGRead(fname);
           %sensor = sensorSet(sensor, 'exp time', 5);
           img = ieDNGRead(fname);
           img = im2double(img);
           img = img./map;
           E = info.ExposureTime;
           y = [y; std2(img)];
           x = [x;E];
           %sensorWindow(sensor);  % ISETCAM window
           %imagesc(digitalValue); % visualize
       end    
   end
   img = ieDNGRead(fname);
   [rows, columns, numberOfColorBands] = size(img);
   subplot(2, 3, 1);
   imshow(img, []);
   FontSize = 14;
   title('img', 'FontSize', FontSize);
   %Set up figure properties:
   % Enlarge figure to full screen.
   set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
   % Do the fit to a line:
   coeff = polyfit(x,y,1)
   fitted_y = polyval(coeff, x);
   subplot(2, 3, 3);
   plot(x, y, 'bd', 'MarkerSize', 10);
   hold on;
   plot(x, fitted_y, 'r-', 'LineWidth', 3);
   grid on;
   title('DSNU plot', 'FontSize', FontSize);
   xlabel('Exposure time', 'FontSize', FontSize);
   ylabel('Std. of corrected dark image', 'FontSize', FontSize);
   message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
       coeff(1), coeff(2));
   uiwait(msgbox(message));
   %%Extra code for voltage conversion
   sensor = sensorDNGRead(fname);
   sensorWindow(sensor);
   voltageswing = 0.459; % Measured from ('IMX363') sensor
   % This is a 10-bit sensor so the whitelevel is 1023. ISOSpeed 55 corresponds to analoggain = 1
   Resolution = 1023;
   coeff(2) = 0.0001; % Note: overrides the fitted intercept with a fixed value before the voltage conversion
   Voltagemeasured = (coeff(2)*voltageswing)/Resolution;

Noise relationship w.r.t gain/exposure duration

The relationship between the noise estimates and the camera's gain setting was studied; the fitted slope increased as the gain increased.

   clc; clear; ieInit;
   imageType = 'DarkCurrentRate'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
   isoFolder1 = {'ISO_55','ISO_99','ISO_198','ISO_299','ISO_395','ISO_798'};    % 55, 99, 198, 299, 395, 798
   for k = 1:length(isoFolder1)
       isoFolder = isoFolder1{k};
       folder = ['Phani_Project/' imageType '/' isoFolder '/'];
       strcmp(imageType,'DarkCurrentRate');
       imageSize   = [3024,4032];
       burstNumber = 5;
       x = [];
       y = [];
       figure('Name','DarkCurrentRate');
       for i = 0:burstNumber-1 % filenames end with i
           folderInfo   = dir([folder, '*', int2str(i), '.dng']);
           folderName   = {folderInfo.name}; 
           for j = 1:1
               fname = [folder, folderName{j}];
               [sensor, info] = sensorDNGRead(fname);
               img = ieDNGRead(fname);
               E = info.ExposureTime;
               y = [y; mean2(img)];
               x = [x;E];
               %sensorWindow(sensor);  % ISETCAM window
               %imagesc(digitalValue); % visualize
           end
       end
       FontSize = 14;
       %Set up figure properties:
       % Enlarge figure to full screen.
       set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
       % Do the fit to a line:
       coeff = polyfit(x,y,1)
       fitted_y = polyval(coeff, x);
   %     subplot(2, 3, 3);
       plot(x, y, 'bd', 'MarkerSize', 10);
       hold on;
       plot(x, fitted_y, 'r-', 'LineWidth', 3);
       grid on;
       title('Dark current plot', 'FontSize', FontSize);
       xlabel('Exposure time ', 'FontSize', FontSize);
       ylabel('Dark current image ', 'FontSize', FontSize);
       message = sprintf('The slope = %.3f\nThe intercept is %.3f',...
           coeff(1), coeff(2));
       uiwait(msgbox(message));
   end

E. PRNU/DSNU Measurement using pixel response

The code for the PRNU measurement is as follows. Please note that it was written on Windows, so the file-reading part could fail on a Mac. The code extracts the PRNU and DSNU for each ISO speed; change isoFolder (the 4th line of code) to run it with a different ISO speed. Once the R, G, B, and all-channel PRNU are obtained, the data are written out, and the key files are listed in the MATLAB console after the code runs. How to interpret the numbers in the output files is explained after the code.
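
For reference, the per-pixel model fitted by evaluate_prnu_dsnu at the end of this script is a line through the exposure series,

\[ DV(x,y;t) \;=\; a(x,y)\,t + b(x,y), \]

and the reported quantities are \( \mathrm{PRNU} = \operatorname{std}_{x,y}(a) \), \( \mathrm{PRNU}_{1288} = 100\cdot\operatorname{std}_{x,y}(a)/\operatorname{mean}_{x,y}(a) \) (a percentage), and \( \mathrm{DSNU} = \operatorname{std}_{x,y}(b) \).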

 %% Initialize: Define folder paths, initialize iSETCam
 
 ieInit; clear; close all;
 load('D:\documents_bkup\C\Course\psych221\project_data\map.mat');
 imageType = 'PRNU_DSNU'; % PRNU_DSNU, DarkCurrentRate, ReadNoise
 isoFolder = 'ISO_55';    % 55, 99, 198, 299, 395, 798
 folder = ['D:\documents_bkup\C\Course\psych221\project_data\' imageType '\' isoFolder '\'];
 out_path = 'D:\documents_bkup\C\Course\psych221\project_data\';
 
 imageSize   = [5,3024,4032];        % Define 3-D image size which stores 5 images, each of 3024x4032 pixels
 digitalValueRaw = zeros(imageSize);     % Create zero 3-D array
 digitalValueMean = zeros(imageSize);    % Create zero 3-D array
 burstNumber = 5;
 
 x = [];
 y = [];
 
 y_r = [];
 y_g = [];
 y_b = [];
 
 %% Read data from different exposures
 
 for i = 0:4         % i runs from 0 through 4, each value signifying a different exposure setting
     folderInfo   = dir([folder, '*', int2str(i), '.dng']);
     folderName   = {folderInfo.name};
     
     for j = 1:5     % j runs from 1 to 5, each value signifying repeat number. Total 5 repeats (image is grabbed 5 times with same exposure setting and ISO speed)
         fname = [folder, folderName{j}];    
         cd(folder);                         
         [sensor, info] = sensorDNGRead(folderName{j});  % Read .dng file
         img = ieDNGRead(fname); % Read image
         img2 = double(img);      % Convert to double before lens correction
         img3 = img2./map;         % Lens shading correction
         E = info.ExposureTime;      % Read exposure time
         digitalValueRaw(j,:,:) = img3;  % Store 5 repeatedly grabbed images with same exposure time settings
         
     end
     
     x = [x;E];
     digitalValueMean(i+1,:,:) = mean(digitalValueRaw);  % Average the repeated image captures and arrive at one image per exposure setting
     
 end
 
 imageSize2 = [5,968,1318];
 digitalValueMeanCropped = zeros(imageSize2);
 
 for k = 1:1:5
     
     Im = reshape(digitalValueMean(k,:,:), [3024 4032]);
     Im_crop = imcrop(Im,[917 1505 1317 967]);
     digitalValueMeanCropped(k,:,:) = Im_crop;
     
 end
 
 %% Evaluating PRNU, DSNU in this section
 
 % Evaluate PRNU and DSNU for entire cropped image
 tileSize = [968 1318];
 [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b]  = evaluate_prnu_dsnu(x, digitalValueMeanCropped, tileSize);
 
 
 % Divide the cropped image into 16 patches at different locations.
 % The 4-D array will now contain: 16x5x64x64
 %       where 16: number of patches
 %       5: number of exposure settings
 %       64x64: the patch size
 
 
 patches_4d_dim = [16,5,64,64];
 patchDigitalValues = zeros(patches_4d_dim);
 
 patchSize = [64 64];
 numPatches = 16;
 
 r_coord = [1, floor(tileSize(1)*0.2), floor(tileSize(1)*0.4), floor(tileSize(1)*0.6) floor(tileSize(1)*0.8)];
 c_coord = [1, floor(tileSize(2)*0.2), floor(tileSize(2)*0.4), floor(tileSize(2)*0.6) floor(tileSize(2)*0.8)];
 
 l=1;
 for k=1:1:5
     for j=1:1:4
         for i = 1:1:4
             
             if mod(r_coord(i),2)==0
                 r_coord(i) = r_coord(i)+1;
             end
 
             if mod(c_coord(i),2)==0
                 c_coord(i) = c_coord(i)+1;
             end            
             
             Im_orig = reshape(digitalValueMeanCropped(k,:,:), tileSize);
             patchDigitalValues(l,k,:,:) = imcrop(Im_orig, [r_coord(i) c_coord(j) 63 63]);
             l = l+1;            
             
         end
     end
     
     l = 1;
 
 end
 
 
 % Now that the cropped image is divided into 16 patches, PRNU  and DSNU
 % measurements are calculated for each patch to understand if there's a
 % spatial effect that's contributing to high PRNU / DSNU.
 
 data_out_dim = [16, 64, 64];
 channel_out_dim = [16, 32, 32];
 
 % initialize output variables
 
 p_slopes = zeros(data_out_dim);     
 p_intercepts = zeros(data_out_dim);
 p_slopes_r = zeros(channel_out_dim);
 p_slopes_g = zeros(channel_out_dim);
 p_slopes_b = zeros(channel_out_dim);
 p_intercepts_r = zeros(channel_out_dim);
 p_intercepts_g = zeros(channel_out_dim);
 p_intercepts_b = zeros(channel_out_dim);
 
 
 % compute pixel response for all the 16 patches
 
 for l = 1:1:16
     
     [p_slopes(l,:,:), p_intercepts(l,:,:), p_slopes_r(l,:,:), p_slopes_g(l,:,:), p_slopes_b(l,:,:), p_intercepts_r(l,:,:), p_intercepts_g(l,:,:), p_intercepts_b(l,:,:)] = evaluate_prnu_dsnu(x, reshape(patchDigitalValues(l,:,:,:), [5 64 64]), patchSize);
     
 end
 
 
 %% Output all pixel response data obtained to files
 
 cd(out_path)
 mkdir solutions
 cd solutions;
 mkdir(isoFolder);
 cd(isoFolder);
 writematrix(slopes, 'px_resp_all_px.csv');
 writematrix(intercepts, 'darkv_all_px.csv');
 writematrix(slopes_r, 'px_resp_r.csv');
 writematrix(slopes_g, 'px_resp_g.csv');
 writematrix(slopes_b, 'px_resp_b.csv');
 writematrix(intercepts_r, 'darkv_r.csv');
 writematrix(intercepts_g, 'darkv_g.csv');
 writematrix(intercepts_b, 'darkv_b.csv');
 
 
 for l = 1:1:16
     
     suffix = strcat('patch_', num2str(l));
     
     writematrix(reshape(p_slopes(l,:,:), [64 64]), strcat(suffix,'px_resp_all_px.csv'));
     writematrix(reshape(p_intercepts(l,:,:), [64 64]), strcat(suffix,'darkv_all_px.csv'));
     writematrix(reshape(p_slopes_r(l,:,:), [32 32]), strcat(suffix,'px_resp_r.csv'));
     writematrix(reshape(p_slopes_g(l,:,:), [32 32]), strcat(suffix,'px_resp_g.csv'));
     writematrix(reshape(p_slopes_b(l,:,:), [32 32]), strcat(suffix,'px_resp_b.csv'));
     writematrix(reshape(p_intercepts_r(l,:,:), [32 32]), strcat(suffix,'darkv_r.csv'));
     writematrix(reshape(p_intercepts_g(l,:,:), [32 32]), strcat(suffix,'darkv_g.csv'));
     writematrix(reshape(p_intercepts_b(l,:,:), [32 32]), strcat(suffix,'darkv_b.csv'));
 
 end
 
 
 % Calculate PRNU from the pixel responses for the cropped image [968x1318 pixels]
 prnu_all_px = std2(slopes); prnu_r = std2(slopes_r);
 prnu_g = std2(slopes_g); prnu_b = std2(slopes_b);
 
 px_resp_mean = mean2(slopes); px_resp_mean_r = mean2(slopes_r);
 px_resp_mean_g = mean2(slopes_g); px_resp_mean_b = mean2(slopes_b);
 
 PRNU_final_cropped_image = [prnu_all_px px_resp_mean;
     prnu_r px_resp_mean_r;
     prnu_g px_resp_mean_g;
     prnu_b px_resp_mean_b];
 
 
 PRNU_1288_cropped_image = 100*(PRNU_final_cropped_image(:,1)./PRNU_final_cropped_image(:,2)) ;
 
 % Calculate DSNU from the pixel responses for the cropped image [968x1318 pixels]
 dsnu_all_px = std2(intercepts); dsnu_r = std2(intercepts_r);
 dsnu_g = std2(intercepts_g); dsnu_b = std2(intercepts_b);
 
 darkv_mean = mean2(intercepts); darkv_mean_r = mean2(intercepts_r); 
 darkv_mean_g = mean2(intercepts_g); darkv_mean_b = mean2(intercepts_b);
 
 DSNU_final_cropped_image = [dsnu_all_px darkv_mean;
     dsnu_r darkv_mean_r;
     dsnu_g darkv_mean_g;
     dsnu_b darkv_mean_b];
 
 
 % Output the PRNU and DSNU values for the cropped image [968x1318 pixels]
 writematrix(PRNU_final_cropped_image, 'PRNU_final_cropped_image.csv');
 writematrix(DSNU_final_cropped_image, 'DSNU_final_cropped_image.csv');
 
 % Calculate PRNU from the pixel responses for the 16 patches [64x64 px]
 
 p_prnu_all_px = zeros(16,1); p_prnu_r = zeros(16,1);
 p_prnu_g = zeros(16,1); p_prnu_b = zeros(16,1);
 
 for l = 1:1:16
     p_prnu_all_px(l) = std2(reshape(p_slopes(l,:,:), [64 64]));
     p_prnu_r(l) = std2(reshape(p_slopes_r(l,:,:), [32 32]));
     p_prnu_g(l) = std2(reshape(p_slopes_g(l,:,:), [32 32]));
     p_prnu_b(l) = std2(reshape(p_slopes_b(l,:,:), [32 32]));
 end
 
 PRNU_final_patches = [p_prnu_all_px p_prnu_r p_prnu_g p_prnu_b];
 
 
 p_pxr_all_px = zeros(16,1); p_pxr_r = zeros(16,1);
 p_pxr_g = zeros(16,1); p_pxr_b = zeros(16,1);
 
 for l = 1:1:16
     p_pxr_all_px(l) = mean2(reshape(p_slopes(l,:,:), [64 64]));
     p_pxr_r(l) = mean2(reshape(p_slopes_r(l,:,:), [32 32]));
     p_pxr_g(l) = mean2(reshape(p_slopes_g(l,:,:), [32 32]));
     p_pxr_b(l) = mean2(reshape(p_slopes_b(l,:,:), [32 32]));
 end
 
 PX_Response_final_patches = [p_pxr_all_px p_pxr_r p_pxr_g p_pxr_b];
 
 PRNU1288_patches = 100*(PRNU_final_patches./PX_Response_final_patches);
 
 % Calculate DSNU from the pixel responses for the 16 patches [64x64 px]
 p_dsnu_all_px = zeros(16,1); p_dsnu_r = zeros(16,1);
 p_dsnu_g = zeros(16,1); p_dsnu_b = zeros(16,1);
 
 for l = 1:1:16
     p_dsnu_all_px(l) = std2(reshape(p_intercepts(l,:,:), [64 64]));
     p_dsnu_r(l) = std2(reshape(p_intercepts_r(l,:,:), [32 32]));
     p_dsnu_g(l) = std2(reshape(p_intercepts_g(l,:,:), [32 32]));
     p_dsnu_b(l) = std2(reshape(p_intercepts_b(l,:,:), [32 32]));
 end
 
 DSNU_final_patches = [p_dsnu_all_px p_dsnu_r p_dsnu_g p_dsnu_b];
 
 
 p_dv_all_px = zeros(16,1); p_dv_r = zeros(16,1);
 p_dv_g = zeros(16,1); p_dv_b = zeros(16,1);
 
 for l = 1:1:16
     p_dv_all_px(l) = mean2(reshape(p_intercepts(l,:,:), [64 64]));
     p_dv_r(l) = mean2(reshape(p_intercepts_r(l,:,:), [32 32]));
     p_dv_g(l) = mean2(reshape(p_intercepts_g(l,:,:), [32 32]));
     p_dv_b(l) = mean2(reshape(p_intercepts_b(l,:,:), [32 32]));
 end
 
 DarkV_final_patches = [p_dv_all_px p_dv_r p_dv_g p_dv_b];
 
 
 % Output PRNU and DSNU values for the 16 64x64 patch images 
 writematrix(PRNU_final_patches, 'PRNU_final_16patches.csv');
 writematrix(DSNU_final_patches, 'DSNU_final_16patches.csv');
 
 writematrix(DarkV_final_patches, 'DarkV_final_patches.csv');
 
 
 writematrix(PRNU1288_patches, 'PRNU1288_patches.csv');
 writematrix(PRNU_1288_cropped_image, 'PRNU_1288_cropped_image.csv');  
 writematrix(PX_Response_final_patches, 'PX_Response_final_patches.csv');
 
 disp('Please look for the output data at specified output folder in the first line of code --> solutions --> ISO setting folder')
 disp('--------------------')
 disp('Lots of raw data are stored for troubleshooting/other analysis...')
 disp('Key files for the 64x64 clip approach are : PRNU_final_16patches.csv and PRNU1288_patches.csv')
 disp('--------------------')
 disp('Key files for the centrally cropped analysis: PRNU_final_cropped_image.csv and PRNU_1288_cropped_image.csv')
 disp('--------------------')
 disp('The PRNU1288 files contain the standard deviation divided by the mean pixel response across pixels, expressed as a percentage')
 disp('The PRNU_final* output files contain the standard deviation (in gray levels) of the pixel response slope')
 disp('The Pixel Response (mean slope) for the sixteen 64x64 patch clips is saved in PX_Response_final_patches.csv')
 disp('Please send email to vangal87@stanford.edu in case of any issues with the code. Thanks! ')
 
 
 %% Useful functions for splitting image into its constituent channels
 
 function [R, G, B] = split_channels(img)
 
     G1 = img(1:2:end,2:2:end);
     G2 = img(2:2:end,1:2:end);
     R  = img(2:2:end,2:2:end);
     B  = img(1:2:end,1:2:end);
     G  = (G1 + G2) ./ 2;
 
 end
 
 
 function coeff = evaluate_dark_current(x, y, channel)
 
     coeff = polyfit(x,y,1);
     fitted_y = polyval(coeff, x);
     f = figure();
     plot(x, y, 'bd', 'MarkerSize', 10);
     hold on;
     plot(x, fitted_y, 'r-', 'LineWidth', 3);
     grid on;
     title(strcat('Fitted Y, Channel = ', channel));
     %ylabel('Fitted Y', 'FontSize', fontSize);
     message = sprintf('Channel = %s\n The slope = %.3f\nThe intercept is %.3f',...
     channel, coeff(1), coeff(2));
     uiwait(msgbox(message));
 
 end
 
 
 function [slopes, intercepts, slopes_r, slopes_g, slopes_b, intercepts_r, intercepts_g, intercepts_b]  = evaluate_prnu_dsnu(x, Im_stack_cropped, tileSize)
 
     rows = tileSize(1);
     cols = tileSize(2);
     
     out_3d_dim = [2, rows, cols];
     
     final_coeffs = zeros(out_3d_dim);
 
     tic 
     for i=1:1:rows
         for j = 1:1:cols
             y = Im_stack_cropped(:,i,j);
             coeff = polyfit(x,y,1);
             final_coeffs(1,i,j) = coeff(1);
             final_coeffs(2,i,j) = coeff(2);
         end
     end
     toc
     t = toc;
 
     slopes = reshape(final_coeffs(1,:,:), tileSize);
     intercepts = reshape(final_coeffs(2,:,:), tileSize);
     [slopes_r, slopes_g, slopes_b] = split_channels(slopes);
     [intercepts_r, intercepts_g, intercepts_b] = split_channels(intercepts);
 
 end


As an example, PRNU1288_patches.csv contains data in the format below. The first column is the PRNU1288 value, calculated as a percentage over all pixels of the 64x64 clip. The second, third, and fourth columns are the PRNU measurements for the individual channels (R, G, and B respectively). Each of the 16 rows corresponds to one patch. To report a single value per color channel, the median of the measurements across the 16 patches is chosen.

PRNU evaluation (expressed as a percentage, as described by the formula in the previous section) at ISO 55, per color-filter channel, using the multiple 64x64 px patch approach.
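
As a minimal sketch of that last step (assuming PRNU1288_patches.csv sits in the current folder with the four-column layout described above; readmatrix requires MATLAB R2019a or later):

% Minimal sketch: collapse the 16 per-patch PRNU1288 values (one row per
% patch, columns = [all, R, G, B]) into a single median value per channel.
prnu1288 = readmatrix('PRNU1288_patches.csv');
channelNames = {'All', 'R', 'G', 'B'};
prnuMedian = median(prnu1288, 1);   % median across the 16 patches, per column
for c = 1:numel(channelNames)
    fprintf('%s-channel PRNU1288 (median of 16 patches): %.3f %%\n', ...
        channelNames{c}, prnuMedian(c));
end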

F. Final presentation (with corrections) as presented on 18-Nov-2020

The final presentation can be found at this link: https://drive.google.com/file/d/1tL_vB67qgA9THTCS8tp6gK0AWrjP6uvH/view

Work Breakdown

  • Tzu-Sheng Kuo – Lens Shading
  • Phani Kondapani – Dark Noise and DSNU estimation
  • Vangal Aravindh – Read Noise and PRNU estimation