QuachFu

Back to Psych 221 Projects 2013




Background

Digital Image Forgery

Digital cameras have long since replaced traditional film cameras as the dominant form of photography. Their practicality, low cost, and high performance have made them extremely popular. Additionally, with the advent of the camera phone, nearly every person now has a camera on them at all times. This has led to an explosion of images, on nearly any subject, that can be easily found on the internet. Combined with very powerful photo-editing software such as Photoshop, this has also dramatically increased the number of forged images. Images of a subject or event can no longer be blindly taken at face value, since it has become so easy to forge them. Because of this, it has become increasingly important to be able to discern legitimate images from forged ones. It is possible to leverage how digital camera sensors record images to determine whether an image has been altered and post-processed.

Digital Camera Image Sensors

Each pixel in a color digital image consists of three values: its red, green, and blue intensities. The entire image can therefore be thought of as having three separate channels, one for each color. However, each of the CMOS imaging chip's photosensors, which records the light in the scene for a single pixel, can only detect light intensity, not wavelength; it cannot separate color information and record a separate value for each channel. To work around this limitation, image sensors typically employ a color filter array (CFA), a mosaic of tiny color filters placed on top of the image sensor, so that only a single band of light passes through to each pixel and is recorded. Figure 1 depicts a common CFA known as a Bayer array, which employs a pattern of red, blue, and green color filters.


Figure 1: Bayer Array
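As a concrete illustration, here is a minimal Python sketch of which color each photosite records under the common RGGB variant of the Bayer pattern (an assumed layout for this sketch; real sensors may start the pattern differently):

<pre>
import numpy as np

def bayer_mask(height, width):
    """Return boolean masks marking which photosites record R, G, and B
    under an RGGB Bayer layout (assumed variant for illustration)."""
    r = np.zeros((height, width), dtype=bool)
    g = np.zeros((height, width), dtype=bool)
    b = np.zeros((height, width), dtype=bool)
    r[0::2, 0::2] = True   # red on even rows, even columns
    g[0::2, 1::2] = True   # green shares each row with red or blue
    g[1::2, 0::2] = True
    b[1::2, 1::2] = True   # blue on odd rows, odd columns
    return r, g, b

r, g, b = bayer_mask(4, 4)
print(r.astype(int))  # 1s mark photosites that record red
</pre>

Note that green is sampled twice as densely as red or blue, which is why the missing values at each pixel must be estimated rather than measured.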

CFA Interpolation

In order to construct a complete image with RGB values at every pixel, we must estimate the two missing color values at each pixel by interpolating the values of neighboring pixels. There are numerous interpolation algorithms in use, and they vary between camera makes and models.
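For instance, the simplest such algorithm, bilinear interpolation, fills each missing value with a weighted average of the recorded neighbors in the same channel. A minimal Python sketch of that idea, assuming the masks from the Bayer sketch above (real cameras use more sophisticated, often proprietary, methods):

<pre>
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic, mask):
    """Fill missing samples of one color channel by averaging recorded
    neighbors. `mosaic` holds raw sensor values; `mask` is True where
    this channel was actually recorded."""
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    sampled = np.where(mask, mosaic, 0.0)
    num = convolve(sampled, k, mode='mirror')             # weighted sum of known neighbors
    den = convolve(mask.astype(float), k, mode='mirror')  # total weight of known neighbors
    filled = num / np.maximum(den, 1e-12)
    return np.where(mask, mosaic, filled)                 # keep recorded samples untouched
</pre>

Running this once per channel yields a full RGB image; neighbor-averaging weights of this kind are what the EM algorithm described below tries to recover.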

Camera Forensics using CFA Interpolation

A technique for determining whether an image has been altered leverages the properties of CFA interpolation. In an undoctored image, there will be a strong correlation between each estimated color value and the values of its neighbors. However, if an image has been digitally altered, these correlations will likely be destroyed. Even if an image has been created by combining two unaltered images, the interpolation pattern will not remain consistent across the subimage boundaries. Therefore, by measuring the strength of the periodic pattern of interpolated pixels in an image, we can tell whether it has been doctored.

Expectation Maximization

Given an image, we know neither the exact parameters of the interpolation algorithm nor which pixels were estimated. Since we have two unknowns, we use an expectation-maximization (EM) algorithm to estimate both at the same time. We assume a simple linear model for the interpolation algorithm, which may not be exact, but is simple and gives a decent approximation of more complex CFA algorithms. We also assume each color channel is interpolated independently, so we run the EM algorithm on one color channel at a time. Lastly, we assume each pixel belongs to one of two models: Model 1 if the pixel was estimated from its neighbors, and Model 2 if it was not.
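Concretely, the linear model says that a Model 1 pixel is approximately a weighted sum of its neighbors within some window. One natural way to write this (with N the neighborhood half-width, the coefficients α the unknown interpolation weights, and n(x,y) a residual error term; the notation here is this sketch's, not necessarily the camera's actual algorithm):

<math>f(x,y) = \sum_{u,v=-N}^{N} \alpha_{u,v}\, f(x+u, y+v) + n(x,y), \qquad \alpha_{0,0} = 0</math>

The constraint <math>\alpha_{0,0} = 0</math> simply says that a pixel never predicts itself.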

E-Step

In the expectation step, we estimate the probability of each pixel belonging to a particular model. We can estimate the probability of a sample belonging to Model 1 using Bayes' Rule:

<math>\Pr\{f(x,y) \in M_{1} \mid f(x,y)\} = \frac{\Pr\{f(x,y) \mid f(x,y) \in M_{1}\}\Pr\{f(x,y) \in M_{1}\}}{\sum_{i=1}^{2}\Pr\{f(x,y) \mid f(x,y) \in M_{i}\}\Pr\{f(x,y) \in M_{i}\}}</math>
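A minimal Python sketch of this step, assuming a zero-mean Gaussian residual with variance σ² for Model 1, a uniform likelihood for Model 2, and equal priors (assumptions of this sketch, not statements from the text above):

<pre>
import numpy as np
from scipy.ndimage import correlate

def e_step(channel, alpha, sigma, p_uniform):
    """Posterior probability that each pixel belongs to Model 1
    (i.e., was interpolated from its neighbors) via Bayes' rule."""
    kernel = np.array(alpha, dtype=float)
    center = tuple(s // 2 for s in kernel.shape)
    kernel[center] = 0.0                      # a pixel never predicts itself
    # Residual between each pixel and its linear prediction.
    residual = channel - correlate(channel, kernel, mode='mirror')
    # Likelihood under Model 1: zero-mean Gaussian on the residual.
    p_m1 = np.exp(-residual**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    # Bayes' rule; the equal priors Pr{M1} = Pr{M2} = 1/2 cancel.
    return p_m1 / (p_m1 + p_uniform)
</pre>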

M-Step
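In the maximization step, the interpolation coefficients are re-estimated by weighted least squares, with each pixel weighted by its Model 1 posterior from the E-step. A minimal Python sketch under the same assumptions as above (a hypothetical helper, not the project's actual code):

<pre>
import numpy as np

def m_step(channel, weights, N):
    """Re-estimate the interpolation coefficients alpha by weighted
    least squares, weighting each pixel by its Model 1 posterior."""
    h, w = channel.shape
    rows, targets = [], []
    for y in range(N, h - N):
        for x in range(N, w - N):
            rows.append(channel[y - N:y + N + 1, x - N:x + N + 1].ravel())
            targets.append(channel[y, x])
    A = np.asarray(rows, dtype=float)
    b = np.asarray(targets, dtype=float)
    A[:, A.shape[1] // 2] = 0.0            # exclude the pixel itself
    wts = weights[N:h - N, N:w - N].ravel()
    # Weighted normal equations: (A^T W A) alpha = A^T W b
    Aw = A * wts[:, None]
    alpha = np.linalg.lstsq(Aw.T @ A, Aw.T @ b, rcond=None)[0]
    return alpha.reshape(2 * N + 1, 2 * N + 1)
</pre>

Alternating this step with the E-step until the coefficients converge yields both the estimated weights and the per-pixel probability map.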

Probability Map

2D FFT
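The periodicity that CFA interpolation imposes on the probability map shows up as localized peaks in the Fourier domain. A minimal sketch of this step (assuming a probability map p produced by the E-step; the mean is subtracted so the DC component does not dominate the spectrum):

<pre>
import numpy as np

def probability_spectrum(p):
    """Magnitude of the 2D DFT of the probability map, shifted so the
    DC term is central. Periodic interpolation correlations appear as
    strong off-center peaks."""
    return np.abs(np.fft.fftshift(np.fft.fft2(p - p.mean())))
</pre>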

Similarity Measure

Thresholding

Results

Uncompressed Images

JPEG Images



Conclusions


References - Resources and related work

References

Software

Appendix I - Code and Data

Code


Data


Appendix II - Work partition