2009 Jimmy Chion
Revision as of 10:30, 8 December 2009
Project: Intersubject mapping correlations
Multisubject analysis remains a core problem in neuroimaging because cortical activity varies more across individuals in less well-defined areas of the brain. This project correlated an ROI in one subject's brain with multiple ROIs in another subject's brain. In our first exploration, Dr. Nathan Witthoft and I wrote MATLAB code to find, for a given voxel in subject A, the ROI in subject B containing the voxel with the highest correlation. We iterated over all voxels in subject A's ROI and took the mode of the ROIs that produced the highest correlations. The resulting correlations were then mapped in various ways. In our second exploration, we wrote code to map, on a voxel-by-voxel basis, the average correlations of an ROI in subject A to the ROIs in subject B. Our code provides a tool to help quantify the similarity or dissimilarity of ROIs across subjects.
Background
Research

The problem
Functional regions in visual cortex are identified using various types of experiments, and there are two common approaches. Retinotopic regions are often well-defined, with agreed-upon criteria for distinguishing different areas of visual cortex. For researchers working with category selectivity (object localizers), that is, finding areas responsive to particular stimulus categories, the regions become much more vague. The fusiform face area (FFA), for example, is usually one of several areas that show functional activity when face stimuli are presented. These multiple clusters can vary greatly across individuals, which becomes a significant problem in multisubject analysis, where identifying corresponding regions is critical. Selectivity alone ceases to be a sufficient condition for calling two regions the same.
Methods
First approach
We have subject A and some number of ROIs, including the pEBA and aEBA. We also have some scans of various kinds. In subject B, we have the same scans and ROIs, as well as a large number of additional ROIs. For subject A, we choose a single voxel in the pEBA and extract the fMRI data associated with it (time series, GLM betas, or the like). We then search through all the data from subject B, voxel by voxel, until we determine the voxel in subject B that is most correlated with the voxel from subject A. We then write down the correlation and the name of the ROI in subject B where that voxel was found. We repeat this for every voxel in subject A's pEBA. If the pEBA is 100 voxels, then at the end we should have 100 correlations and 100 ROI choices. If the pEBA is distinct from the aEBA (at least for the type of data we have collected), the majority of those ROI choices should be the pEBA in subject B. We then repeat this for every ROI in subject A. So in the end, we know which ROI in subject B each ROI in subject A is best matched with.
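The actual analysis was done in MATLAB with mrVista (see the linked samediff3.m); the following is only an illustrative Python/NumPy sketch of the search described above, with hypothetical function and variable names:

```python
import numpy as np

def best_match_rois(voxels_a, voxels_b, roi_labels_b):
    """For each voxel in subject A's ROI, find the subject-B voxel with
    the highest Pearson correlation and record that voxel's ROI label.

    voxels_a:     (nA, T) array of responses (time series or GLM betas)
    voxels_b:     (nB, T) array covering all of subject B's voxels
    roi_labels_b: length-nB list of ROI names, one per subject-B voxel
    """
    correlations, roi_choices = [], []
    for va in voxels_a:
        # Pearson correlation of this A voxel with every B voxel
        r = np.array([np.corrcoef(va, vb)[0, 1] for vb in voxels_b])
        best = int(np.argmax(r))          # index of the most correlated B voxel
        correlations.append(r[best])
        roi_choices.append(roi_labels_b[best])
    return correlations, roi_choices
```

For a 100-voxel pEBA in subject A, this returns 100 correlations and 100 ROI choices, whose mode gives the best-matched ROI in subject B.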
Code for the first approach: samediff3.m
Zip file with code: MyCodeZipFile.zip
Second approach
Because our first approach recorded only the ROI containing the maximally correlated voxel, it discarded information that would be valuable to display; in particular, outliers may have been the voxels selected as the maximum correlation. A second approach would let us see the general distribution of which ROIs in subject B an ROI in subject A correlates with.
Our code was very similar to the code in the first approach, except this time we selected only one ROI in subject A. For each voxel in A's ROI, we found the correlation with every voxel in all ROIs in subject B (just as in the first approach). This produces a map of correlations from one voxel in A to all voxels in B. We did this for every voxel in subject A's ROI and then averaged the maps together to get the average correlations across subject B.
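Again, the real implementation was MATLAB/mrVista; the averaging step above can be sketched in Python/NumPy (function name hypothetical). Z-scoring each voxel's response turns the full correlation matrix into a single matrix product:

```python
import numpy as np

def average_correlation_map(voxels_a, voxels_b):
    """Average, over all voxels in subject A's ROI, the map of Pearson
    correlations with every voxel in subject B.

    voxels_a: (nA, T) array, subject A's ROI
    voxels_b: (nB, T) array, all of subject B's voxels
    returns:  length-nB array of average correlations
    """
    # z-score each voxel's response (population std) so that Pearson
    # correlation reduces to a dot product divided by T
    za = (voxels_a - voxels_a.mean(1, keepdims=True)) / voxels_a.std(1, keepdims=True)
    zb = (voxels_b - voxels_b.mean(1, keepdims=True)) / voxels_b.std(1, keepdims=True)
    corr = za @ zb.T / voxels_a.shape[1]   # (nA, nB) correlation matrix
    return corr.mean(axis=0)               # average map over A's voxels
```

Grouping the resulting per-voxel averages by subject B's ROI labels then gives the average correlation of A's ROI with each ROI in B.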
Code for the second approach
Results
Retinotopic models in native space
Some text. Some analysis. Some figures.
Retinotopic models in individual subjects transformed into MNI space
Some text. Some analysis. Some figures.
Retinotopic models in group-averaged data on the MNI template brain
Some text. Some analysis. Some figures.
Retinotopic models in group-averaged data projected back into native space
Some text. Some analysis. Some figures.
Conclusions
Here is where you say what your results mean.
References - Resources and related work
References
Software
Appendix I - Code and Data
Code
Appendix II - Work partition
To give a sense of the dynamic: Nathan did a lot of the explaining, gave me some papers, and outlined the steps for how the analysis should be coded. I coded up the first approach; he debugged it. We worked on it together and beefed it up, and then we worked on the second approach together (though that was mostly Nathan, because he knew which mrVista functions to use). All in all, I learned a lot more working with Nathan than I would have by myself.