Campiotti
Introduction
The primary motivation for this project is the proliferation of forged images in modern society (e.g., in advertising, viral images, and even political races) and the increasing need to detect these forgeries in a forensic setting. This project seeks to reproduce the results of [1], which proposes using the statistics imposed on an image by color filter array (CFA) interpolation (i.e., demosaicking) to detect localized forgeries.
CFA Interpolation
Most CMOS sensors used in cameras today have sensitivities spanning the entire visible spectrum or more. To obtain information about the different color bands when a photograph is taken, a CFA is placed in front of the sensor array. With the CFA in place, each pixel in the array detects only one band of colors, determined by the color filter in front of it. Numerous CFA patterns are in use, each featuring three or more colors; the most common is the Bayer pattern, shown in Figure 1.
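To make the sampling concrete, the sketch below simulates how a Bayer CFA records only one color band per pixel. The code is a minimal illustration in Python/NumPy (an assumption; the original project code is not reproduced here), and the RGGB phase is likewise an assumption, with Figure 1 defining the actual pattern.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate CFA sampling with an RGGB Bayer pattern.

    rgb: (H, W, 3) float array. Returns a single-channel mosaic in which
    each pixel keeps only the color band passed by the filter above it.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites (even row, even col)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites on red rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites on blue rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites (odd row, odd col)
    return mosaic
```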
The remaining color channels at each pixel must be interpolated from neighboring pixels. This is done with a CFA demosaicking algorithm, of which there are literally hundreds (for a description of some of these algorithms, see [1]). The common theme among all CFA algorithms is that the interpolated values are some combination of neighboring measured values. The method proposed in [1] and emulated here assumes a linear model, i.e. that interpolated values are a weighted sum of neighboring measured values.
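A minimal example of such a linear algorithm is bilinear demosaicking, in which each missing value is a fixed weighted average of same-band neighbors. The sketch below (same hypothetical Python/NumPy setting and RGGB layout as above) expresses it as convolution with small kernels; it only illustrates the weighted-sum idea and is not claimed to be the interpolation used by any particular camera or by [1].

```python
import numpy as np
from scipy.ndimage import convolve

# Fixed kernels: each missing sample becomes a weighted average of
# measured neighbors of the same band (the "weighted sum" linear model).
G_KERNEL = np.array([[0, 1, 0],
                     [1, 4, 1],
                     [0, 1, 0]]) / 4.0
RB_KERNEL = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]]) / 4.0

def bilinear_demosaic(mosaic):
    """Reconstruct full R, G, B planes from an RGGB Bayer mosaic."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    red = convolve(mosaic * r_mask, RB_KERNEL, mode='mirror')
    green = convolve(mosaic * g_mask, G_KERNEL, mode='mirror')
    blue = convolve(mosaic * b_mask, RB_KERNEL, mode='mirror')
    return np.dstack([red, green, blue])
```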
Methods
As mentioned previously, the method assumes a linear interpolation model: each interpolated pixel value is treated as a weighted sum of its neighboring measured values.
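To make that assumption concrete, here is a minimal sketch (hypothetical Python/NumPy, continuing the assumptions above) that estimates one set of neighbor weights for an image channel by ordinary least squares. This is only an illustration of "interpolated value = weighted sum of measured neighbors"; it is not the estimation procedure of [1].

```python
import numpy as np

def fit_neighbor_weights(channel, radius=2):
    """Least-squares fit of a single weight per neighbor offset.

    Models each interior pixel as a weighted sum of its neighbors within
    `radius` (the center pixel is excluded) and returns one weight per
    offset. Illustrative only; not the estimator used in [1].
    """
    h, w = channel.shape
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0)]
    samples, targets = [], []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            samples.append([channel[y + dy, x + dx] for dy, dx in offsets])
            targets.append(channel[y, x])
    A = np.asarray(samples, dtype=float)
    b = np.asarray(targets, dtype=float)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(offsets, weights))
```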
Measuring retinotopic maps
Retinotopic maps were obtained in 5 subjects using population receptive field (pRF) mapping methods (Dumoulin and Wandell, 2008). These data were collected for another research project in the Wandell lab; we re-analyzed them for this project, as described below.
Subjects
Subjects were 5 healthy volunteers.
MR acquisition
Data were obtained on a GE scanner.
MR Analysis
The MR data were analyzed using the mrVista software tools.
Pre-processing
All data were slice-time corrected and motion corrected, and repeated scans were averaged together to create a single average scan for each subject.
PRF model fits
pRF models were fit with a two-Gaussian model.
MNI space
After a pRF model was solved for each subject, the model was transformed into MNI template space. This was done by first aligning the high-resolution T1-weighted anatomical scan from each subject to an MNI template. Since the pRF model was coregistered to the T1 anatomical scan, the same alignment matrix could then be applied to the pRF model.
Once each pRF model was aligned to MNI space, four model parameters (x, y, sigma, and r^2) were averaged across the 5 subjects at each voxel.
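As a sketch of the bookkeeping described above (hypothetical Python/NumPy; the function names and data layout are assumptions, not the actual mrVista calls), the alignment matrix maps subject-space coordinates into MNI space, after which the pRF parameter volumes can be averaged voxel-by-voxel across subjects:

```python
import numpy as np

def apply_alignment(coords_xyz1, subject_to_mni):
    """Map homogeneous subject-space coordinates (N x 4) through a 4 x 4
    alignment matrix into MNI space (hypothetical helper)."""
    return (coords_xyz1 @ subject_to_mni.T)[:, :3]

def average_prf_parameters(subject_maps):
    """Average pRF parameter volumes across subjects at each voxel.

    subject_maps: list of dicts (one per subject) holding 'x', 'y',
    'sigma', and 'r2' arrays already resampled onto the MNI grid.
    """
    return {name: np.mean([m[name] for m in subject_maps], axis=0)
            for name in ('x', 'y', 'sigma', 'r2')}
```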
Results
Retinotopic models in native space
Retinotopic models in individual subjects transformed into MNI space
Retinotopic models in group-averaged data on the MNI template brain
Retinotopic models in group-averaged data projected back into native space
Conclusions
References
Software
Appendix I - Code and Data
Code
Data
Appendix II - Work partition