Revision as of 04:12, 22 March 2012
Implementation and analysis of a perceptual metric for photo retouching
Introduction
Retouched images are everywhere today. Magazine covers feature impossibly fit, blemish-free models, and advertisements frequently show people too thin to be real. While some of these alterations could be considered comical, a growing number of studies show that these pictures contribute to low self-image and other mental health problems for many of the people who view them. To help address this problem, lawmakers in several countries, including France and the UK, have proposed legislation that would require publishers to label any severely retouched images, and in the last few days Israel has passed the first law requiring labels for retouched images (in this case, images in which the model has been digitally thinned).
Legislation requiring the labeling of modified images raises a number of issues. The first is how to define "severely retouched." Nearly all published images are modified in some way, whether through basic cropping and color adjustments or through more significant alterations. Which, if any, of these changes are acceptable? The second is scale: a huge number of photographs are published every day. How can they all be analyzed for retouching in a timely, cost-effective manner?
In their 2011 paper “A perceptual metric for photo retouching,” Kee and Farid proposed a perceptual photo rating scheme to solve these problems. With their method, an algorithm would analyze the original and retouched versions of an image to determine the extent of the geometric (e.g., stretching, warping) and photometric (e.g., blurring, sharpening) changes made to the original. The results of this analysis would be compared to a database of human-rated altered images to automatically assign a perceptual modification score between 1 (“very similar”) and 5 (“very different”). This scheme, intended to deliver an objective measure of perceptual modification with minimal human involvement, would allow authorities or publishers to define a threshold for a “severely retouched” image and label them accordingly.
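The mapping from measured distortions to a 1–5 score can be sketched as a regression problem. The snippet below is a minimal illustration, not the authors' actual model: the feature names, weights, and the simple linear-plus-clamp form are all assumptions; Kee and Farid fit their predictor to a database of human ratings.

```python
import numpy as np

def predict_rating(features, weights, bias):
    """Map distortion summary statistics to a perceptual rating.

    `features` is a vector of geometric and photometric distortion
    statistics (e.g., mean/max warp magnitude, blur filter energy);
    `weights` and `bias` would be learned by regressing against
    human ratings of before/after image pairs. All values here are
    hypothetical.
    """
    raw = float(np.dot(weights, features) + bias)
    return min(5.0, max(1.0, raw))  # clamp to the 1 ("very similar") to 5 ("very different") scale

# Toy example: two geometric and two photometric statistics with made-up weights.
w = np.array([0.8, 0.5, 0.6, 0.3])
stats = np.array([1.2, 0.4, 2.0, 0.5])
print(predict_rating(stats, w, 1.0))  # a score between 1 and 5
```

In the real scheme the regression target is the mean human rating for each image pair, so the quality of the learned weights depends directly on the rating database.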
This project is largely an effort to reproduce the results of the Kee and Farid paper. Accordingly, the algorithm and methods described in the paper were implemented and tested on a set of images. The rest of this report describes the implementation process, discusses the results of applying the algorithm to a set of retouched images, and suggests potential refinements to improve the algorithm's effectiveness and practicality.
Methods
Measuring retinotopic maps
Retinotopic maps were obtained in 5 subjects using population receptive field (pRF) mapping methods (Dumoulin and Wandell, 2008). These data were collected for another research project in the Wandell lab; we re-analyzed them for this project, as described below.
Subjects
Subjects were 5 healthy volunteers.
MR acquisition
Data were obtained on a GE scanner. Et cetera.
MR Analysis
The MR data were analyzed using the mrVista software tools.
Pre-processing
All data were slice-time corrected and motion corrected, and repeated scans were averaged together to create a single average scan for each subject. Et cetera.
PRF model fits
PRF models were fit with a 2-gaussian model.
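As one plausible reading of the "2-gaussian model," the receptive field can be written as a difference of Gaussians: an excitatory center minus a broader, weighted surround. The sketch below illustrates that form; the function name, parameterization, and all numeric values are assumptions for illustration, not the mrVista implementation.

```python
import numpy as np

def two_gaussian_prf(X, Y, x0, y0, sigma_c, sigma_s, beta):
    """Difference-of-Gaussians pRF (one interpretation of a
    2-gaussian model): excitatory center minus weighted surround.

    X, Y      : visual-field coordinate grids (degrees)
    x0, y0    : pRF center position
    sigma_c   : center width; sigma_s > sigma_c : surround width
    beta      : surround amplitude relative to the center
    """
    d2 = (X - x0) ** 2 + (Y - y0) ** 2
    center = np.exp(-d2 / (2 * sigma_c ** 2))
    surround = np.exp(-d2 / (2 * sigma_s ** 2))
    return center - beta * surround

# Evaluate the receptive field on a small grid of visual-field positions.
xs = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(xs, xs)
rf = two_gaussian_prf(X, Y, x0=2.0, y0=-1.0, sigma_c=1.0, sigma_s=3.0, beta=0.3)
```

In a full pRF fit, this receptive field would be multiplied by the stimulus aperture at each time point and convolved with a hemodynamic response function to predict the BOLD time series.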
MNI space
After a pRF model was solved for each subject, the model was transformed into MNI template space. This was done by first aligning each subject's high-resolution T1-weighted anatomical scan to an MNI template. Since the pRF model was coregistered to the T1 anatomical scan, the same alignment matrix could then be applied to the pRF model.
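Applying the alignment matrix amounts to multiplying homogeneous coordinates by a 4x4 affine. The sketch below shows that step in isolation; the function name and the toy translation-only matrix are illustrative assumptions, not the actual alignment code.

```python
import numpy as np

def apply_affine(xyz, A):
    """Apply a 4x4 affine alignment matrix to an N x 3 array of
    3D coordinates, as when carrying model coordinates from a
    subject's native space into MNI space."""
    xyz = np.asarray(xyz, dtype=float)
    homog = np.hstack([xyz, np.ones((xyz.shape[0], 1))])  # N x 4 homogeneous coords
    return (homog @ A.T)[:, :3]

# Toy alignment matrix: a pure translation by (2, -3, 5) mm.
A = np.eye(4)
A[:3, 3] = [2.0, -3.0, 5.0]
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(apply_affine(pts, A))  # each point shifted by the translation
```

A real subject-to-MNI alignment would combine rotation, scaling, and translation in the same 4x4 form, so the code path is identical.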
Once each pRF model was aligned to MNI space, 4 model parameters - x, y, sigma, and r^2 - were averaged across the 5 subjects in each voxel.
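The voxel-wise averaging step can be sketched as follows. This is a minimal illustration assuming each subject's aligned model is stored as same-shape arrays keyed by parameter name; a plain mean is shown, though an r^2-weighted mean would be a natural variant.

```python
import numpy as np

def average_prf_params(models):
    """Average pRF parameters voxel-wise across subjects.

    `models` is a list of per-subject dicts, each holding arrays of
    the same shape (already in MNI space) for the keys 'x', 'y',
    'sigma', and 'r2'. Returns the unweighted voxel-wise mean.
    """
    keys = ('x', 'y', 'sigma', 'r2')
    return {k: np.mean([m[k] for m in models], axis=0) for k in keys}

# Two toy "subjects" with a 2-voxel model each.
s1 = {'x': np.array([1.0, 2.0]), 'y': np.array([0.0, 1.0]),
      'sigma': np.array([1.0, 2.0]), 'r2': np.array([0.5, 0.7])}
s2 = {'x': np.array([3.0, 0.0]), 'y': np.array([2.0, 1.0]),
      'sigma': np.array([2.0, 4.0]), 'r2': np.array([0.7, 0.9])}
avg = average_prf_params([s1, s2])
print(avg['x'])  # per-voxel mean of x across subjects
```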
Et cetera.
Results - What you found
Retinotopic models in native space
Some text. Some analysis. Some figures.
Retinotopic models in individual subjects transformed into MNI space
Some text. Some analysis. Some figures.
Retinotopic models in group-averaged data on the MNI template brain
Some text. Some analysis. Some figures. Maybe some equations.
Equations
If you want to use equations, you can use the same formats that are used on Wikipedia.
See the Wikimedia help page on displaying formulas.
The following example of equation use is copied from Wikipedia's article on the DFT.
The sequence of N complex numbers x_0, ..., x_{N-1} is transformed into the sequence of N complex numbers X_0, ..., X_{N-1} by the DFT according to the formula:

<math>X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i k n / N}, \qquad k = 0, \dots, N-1,</math>

where i is the imaginary unit and <math>e^{-2\pi i/N}</math> is a primitive N'th root of unity. (This expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the X_k can thus be viewed as coefficients of x in an orthonormal basis.)

The transform is sometimes denoted by the symbol <math>\mathcal{F}</math>, as in <math>\mathbf{X} = \mathcal{F}\{\mathbf{x}\}</math> or <math>\mathcal{F}(\mathbf{x})</math> or <math>\mathcal{F}\mathbf{x}</math>.

The inverse discrete Fourier transform (IDFT) is given by

<math>x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{2\pi i k n / N}, \qquad n = 0, \dots, N-1.</math>
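The forward and inverse transforms can be checked directly with a naive O(N^2) implementation; this is a sketch for verification only (a real analysis would use an FFT library).

```python
import cmath

def dft(x):
    """Naive DFT: X_k = sum_n x_n * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT: x_n = (1/N) * sum_k X_k * exp(2*pi*i*k*n/N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
X = dft(x)        # X[0] is the sum of the inputs (the DC term)
x_back = idft(X)  # recovers x up to floating-point error
```

The round trip idft(dft(x)) returning x (up to rounding) is a direct check that the two formulas above are mutual inverses.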
Retinotopic models in group-averaged data projected back into native space
Some text. Some analysis. Some figures.
Conclusions
Here is where you say what your results mean.
References - Resources and related work
References
Software
Appendix I - Code and Data
Code
Data
Appendix II - Work partition (if a group project)
Brian and Bob gave the lectures. Jon mucked around on the wiki.