Kaja Johnson

From Psych 221 Image Systems Engineering


= Methods =
== Measuring retinotopic maps ==
 
Retinotopic maps were obtained in 5 subjects using Population Receptive Field (pRF) mapping methods [http://white.stanford.edu/~brian/papers/mri/2007-Dumoulin-NI.pdf Dumoulin and Wandell (2008)]. These data were collected for another [http://www.journalofvision.org/9/8/768/ research project] in the Wandell lab and were re-analyzed for this project, as described below.
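In pRF mapping, each voxel's response is modeled as the overlap between a 2D Gaussian receptive field and the stimulus aperture at each time point, and the resulting time course is convolved with a hemodynamic response function before comparison with the data. A minimal sketch of the prediction step (the grid size, parameter values, and random placeholder apertures are illustrative, not mrVista's actual implementation):

```python
import numpy as np

def gaussian_rf(xs, ys, x0, y0, sigma):
    """2D Gaussian pRF centered at (x0, y0) with spread sigma (deg)."""
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

# Visual-field grid in degrees (illustrative resolution).
xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
rf = gaussian_rf(xs, ys, x0=2.0, y0=-1.0, sigma=1.5)

# Binary stimulus apertures, one per TR (placeholder random apertures).
stimulus = np.random.rand(198, 101, 101) > 0.5

# Predicted neural time course: RF-aperture overlap at each TR.
# (In practice this is convolved with an HRF before fitting.)
prediction = stimulus.reshape(198, -1) @ rf.ravel()
```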


=== MR acquisition ===
Brain imaging was performed on a 3-tesla whole-body General Electric Signa MRI scanner. During fMRI the subject viewed images of faces of children and adults, abstract objects, cars, indoor scenes, and outdoor scenes presented foveally and peripherally. Stimuli presented in the periphery were larger than stimuli in the fovea to control for the size of the object on the retina, as it has been shown that stimuli in the periphery must be larger to be processed at equivalent levels (Latham and Whitaker, 1996). Stimuli were presented in 12-s blocks. Four blocks of each stimulus type were presented (face fovea, face periphery, place fovea, place periphery); 16 blank blocks in total were placed between stimulus presentations. Motion and still images were also used to monitor activity in a region known not to be selective for face and place categories (Visual Area Middle Temporal; hereafter MT). Each run lasted 198 TRs (TR = 2 s); subjects participated in two 396-s runs with different images.
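The timing numbers quoted above are internally consistent; a quick arithmetic check:

```python
TR = 2        # seconds per TR
n_TRs = 198   # TRs per run
block = 12    # seconds per stimulus block

run_seconds = n_TRs * TR
assert run_seconds == 396     # matches the stated 396-s runs
assert block % TR == 0        # each block spans a whole number of TRs
trs_per_block = block // TR   # 6 TRs per 12-s block
```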
<br>
''[Figure: hrf_design.jpg]''


=== MR Analysis ===


==== Pre-processing ====
All data were slice-time corrected and motion corrected, and repeated scans were averaged together to create a single average scan for each subject.
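Averaging the repeated scans is a voxelwise mean across runs, which raises SNR roughly by the square root of the number of runs; a sketch with illustrative array shapes:

```python
import numpy as np

# Two slice-time- and motion-corrected runs: (x, y, z, time) volumes.
run1 = np.random.randn(64, 64, 30, 198)
run2 = np.random.randn(64, 64, 30, 198)

# Voxelwise mean across the repeated runs.
mean_scan = np.stack([run1, run2]).mean(axis=0)
assert mean_scan.shape == run1.shape
```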


==== PRF model fits ====
pRF models were fit with a two-Gaussian model.
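A two-Gaussian pRF is commonly a center-surround form: an excitatory center Gaussian minus a broader, weighted surround Gaussian (a difference of Gaussians). The exact parameterization used here is not stated, so the sketch below is illustrative:

```python
import numpy as np

def dog_rf(xs, ys, x0, y0, sigma_c, sigma_s, w_s):
    """Difference-of-Gaussians pRF: center minus weighted surround."""
    r2 = (xs - x0) ** 2 + (ys - y0) ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2))
    surround = np.exp(-r2 / (2 * sigma_s ** 2))
    return center - w_s * surround

xs, ys = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
rf = dog_rf(xs, ys, x0=0.0, y0=0.0, sigma_c=1.0, sigma_s=3.0, w_s=0.3)
# Peak response at the center; a negative (suppressive) annulus around it.
```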


==== MNI space ====
After a pRF model was solved for each subject, the model was transformed into MNI template space. This was done by first aligning the high-resolution T1-weighted anatomical scan from each subject to an MNI template. Since the pRF model was coregistered to the T1 anatomical scan, the same alignment matrix could then be applied to the pRF model. <br>
Once each pRF model was aligned to MNI space, four model parameters - x, y, sigma, and r^2 - were averaged across the 6 subjects at each voxel.
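The two steps above, reusing the anatomical alignment matrix for the coregistered pRF volume and then averaging the parameter maps voxelwise, can be sketched as follows (the identity affine, grid sizes, and array layout are placeholders, not mrVista's actual representation):

```python
import numpy as np

# 4x4 affine aligning the subject's T1 to the MNI template; because the
# pRF model is coregistered to the T1, the same matrix applies to it.
subj_to_mni = np.eye(4)  # placeholder: a real alignment is not identity

pts = np.array([[10.0, 20.0, 30.0], [12.0, 22.0, 28.0]])  # subject-space mm
mni_pts = (subj_to_mni @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]

# After resampling to the template grid: subjects x parameters x voxels,
# with the 4 parameters being x, y, sigma, and r^2.
params = np.random.rand(6, 4, 91, 109, 91)
group_mean = params.mean(axis=0)  # voxelwise average over the subjects
```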






= Results =


== Retinotopic models in native space ==

Revision as of 02:02, 20 March 2010

Back to Psych 204 Projects 2009



Foveal and Peripheral Perception in Face and Place Processing

Background

It has been well established in the fMRI literature that there are regions in the brain that are selective for particular objects over others, namely faces and places. The Fusiform Face Area (FFA) has been shown to be significantly more active when subjects view faces than other objects (e.g. Kanwisher et al., 1997); the Occipital Face Area (OFA) is known to be responsible for perceiving faces at a basic level (Gauthier et al., 2000), while the right OFA specifically has been implicated in face-part recognition (Pitcher et al., 2007); the Parahippocampal Place Area (PPA) activates more strongly to places than to other visual stimuli (e.g. Epstein et al., 1999); and the Retrosplenial Cortex (RSC) is involved in spatial learning and memory in animal models (Vann et al., 2003) and has an analogous function in humans, being responsible for processing the perception of location (Epstein et al., 2007).

The limitation of such studies seeking specialized localization in the brain is that it remains unclear whether particular regions of interest (ROIs) are involved in more complex processing than this simplistic one-to-one mapping of region to function. Visual stimuli of faces and places may be processed in more comprehensive ways that incorporate information about other factors, such as location in the visual field. The current study aims to examine this and to determine whether these specialized regions for face and place processing respond differently to stimuli presented foveally and peripherally. The differential functions of foveal and peripheral vision suggest that external stimuli may be encoded and processed in different ROIs. Foveal vision is used for examining detailed objects, while peripheral vision is used for organizing the broad spatial scene and seeing large objects (Peripheral Vision). Because the fovea is responsible for attention to central details of an expression in the visual field (i.e. eyes, nose, and mouth), facial processing likely recruits foveal processing systems. Likewise, because recognition of a scene or place requires attention to the broader context of the visual field, where details are maintained in the periphery and not solely the fovea, stimuli of this kind should recruit peripheral processing systems. In line with these functions, I hypothesized that 1) face-selective areas (FFA, OFA) will also be selective for foveal stimuli, and 2) place-selective areas (PPA, RSC) will also be selective for peripheral stimuli.


Methods

Subjects

Data was sampled from a session with one subject (female).


MR Analysis

The MR data was analyzed using mrVista software tools.



Model

I estimated a GLM with two contrasts to generate parameter maps for analysis: one for the face-selective areas and one for the place-selective areas. For face-selective areas, the active conditions were child and man faces, and the control conditions were indoor scenes, outdoor scenes, cars, and abstract objects. For place-selective areas, the active conditions were indoor and outdoor scenes, and the control conditions were child faces, man faces, cars, and abstract objects. For the control scans in MT, the contrast used motion as the active condition and still images as the control condition.
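With one GLM regressor per stimulus condition, each of these contrasts is a zero-sum weight vector over the condition betas; a sketch (the condition ordering and equal weighting are assumptions, not the actual design-matrix layout):

```python
import numpy as np

# Assumed design-matrix column order, one regressor per condition.
conditions = ["child", "man", "indoor", "outdoor", "car", "abstract"]

# Faces (child, man) vs. non-face controls; places vs. non-place controls.
face_contrast = np.array([0.5, 0.5, -0.25, -0.25, -0.25, -0.25])
place_contrast = np.array([-0.25, -0.25, 0.5, 0.5, -0.25, -0.25])

# A contrast map is c^T * beta evaluated at every voxel.
betas = np.random.randn(6)          # placeholder betas for one voxel
face_effect = face_contrast @ betas
```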


Results

Retinotopic models in native space

Some text. Some analysis. Some figures.

Retinotopic models in individual subjects transformed into MNI space

Some text. Some analysis. Some figures.

Retinotopic models in group-averaged data on the MNI template brain

Some text. Some analysis. Some figures. Maybe some equations.


Equations

If you want to use equations, you can use the same formats that are used on Wikipedia.
See the Wikimedia help on formulas for help.
This example of equation use is copied and pasted from Wikipedia's article on the DFT.

The sequence of N complex numbers <math>x_0, \dots, x_{N-1}</math> is transformed into the sequence of N complex numbers <math>X_0, \dots, X_{N-1}</math> by the DFT according to the formula:

<math>X_k = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} k n} \qquad k = 0, \dots, N-1</math>

where i is the imaginary unit and <math>e^{-\frac{2\pi i}{N}}</math> is a primitive N'th root of unity. (This expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the <math>X_k</math> can thus be viewed as coefficients of x in an orthonormal basis.)

The transform is sometimes denoted by the symbol <math>\mathcal{F}</math>, as in <math>\mathbf{X} = \mathcal{F}\{\mathbf{x}\}</math> or <math>\mathcal{F}(\mathbf{x})</math> or <math>\mathcal{F}\mathbf{x}</math>.

The inverse discrete Fourier transform (IDFT) is given by

<math>x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{\frac{2\pi i}{N} k n} \qquad n = 0, \dots, N-1.</math>
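As a check that the two formulas are inverses, the DFT can be evaluated directly from its definition and compared against a library FFT (numpy used for illustration):

```python
import numpy as np

def dft(x):
    """Direct O(N^2) DFT: X_k = sum_n x_n * exp(-2*pi*i*k*n / N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.array([1.0, 2.0, -1.0, 0.5])
X = dft(x)
assert np.allclose(X, np.fft.fft(x))   # matches the library FFT
assert np.allclose(np.fft.ifft(X), x)  # the IDFT recovers the sequence
```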

Retinotopic models in group-averaged data projected back into native space

Some text. Some analysis. Some figures.


Conclusions

Here is where you say what your results mean.

References - Resources and related work

References

Software

Appendix I - Code and Data

Code

File:CodeFile.zip

Data

zip file with my data

Appendix II - Work partition (if a group project)

Brian and Bob gave the lectures. Jon mucked around on the wiki.