2009 Jimmy Chion


Back to Psych 204 Projects 2009


Multisubject analysis remains a core problem in neuroimaging because of the greater variance in cortical activity in less well-defined areas of the brain. This project correlated an ROI in one subject's brain with multiple ROIs in another subject's brain. In our first exploration, Dr. Nathan Witthoft and I wrote MATLAB code to find, for a given voxel in subject A, the ROI in subject B that contained the voxel with the highest correlation. We iterated across all voxels in subject A's ROI and took the mode of the ROIs from which the highest correlations came. The resulting correlations were then mapped in various ways. In our second exploration, we wrote code to map (on a voxel-by-voxel basis) the average correlations of an ROI in subject A to the ROIs in subject B. Our code provides a tool to help quantify the similarity or dissimilarity of ROIs in different subjects.

Intersubject correlation mapping

The problem

Figure: functional regions identified when face stimuli are shown


Functional regions in visual cortex are identified using various types of experiments, and there are two common ways to identify them. Retinotopic areas are often well defined, with agreed-upon criteria for distinguishing different areas of visual cortex. For regions defined by category selectivity (object localizers), that is, areas that respond preferentially to certain stimulus categories, the boundaries are much vaguer. The fusiform face area (FFA), for example, is usually one of several areas that show functional activity when face stimuli are presented. These multiple clusters can vary greatly across individuals, which becomes a significant problem in multisubject analysis, where identifying corresponding regions is critical. Selectivity alone ceases to be a sufficient condition for calling two regions the same.

Methods

All data were obtained through Nathan Witthoft, and many functions from mrVista were used.

First approach

We have subject A and some number of ROIs, including the pEBA and aEBA, along with scans of various kinds. In subject B, we have the same scans and ROIs as well as a large number of additional ROIs. For subject A we choose a single voxel in the pEBA and extract the fMRI data associated with it (time series, GLM betas, etc.). We then search through all the data from subject B, voxel by voxel, until we determine the voxel in subject B that is most correlated with the voxel from subject A. We record that correlation and the name of the ROI in subject B where the voxel was found, and repeat for every voxel in subject A's pEBA. If the pEBA is 100 voxels, then at the end we should have 100 correlations and 100 ROI choices. If the pEBA is distinct from the aEBA (at least using the type of data we have collected), the majority of those ROI choices should be the pEBA in subject B. We then repeat this for every ROI in subject A, so in the end we know which ROI in subject B each ROI in subject A is best matched with.

The code allows you to load all ROIs or enumerate the ones you want to compare. It currently outputs a cross-tabulation of ROIs in A against ROIs in B, showing what percentage of the per-voxel "votes" from each ROI in A went to each region in B.


MATLAB script for the first approach (code much prettier in matlab, albeit still not so pretty)
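
The linked script is the actual implementation; below is only a minimal sketch of the voting logic, assuming the ROI data have already been pulled out of mrVista into plain matrices. The variables dataA, dataB, and roiNamesB are hypothetical names, not part of the real script, and corr is the Statistics Toolbox pairwise correlation.

 % Minimal sketch of the first approach (hypothetical variables; the real
 % script loads and organizes these matrices with mrVista functions).
 % dataA     : voxels-by-timepoints matrix for one ROI in subject A
 % dataB     : cell array with one voxels-by-timepoints matrix per subject-B ROI
 % roiNamesB : cell array of the corresponding subject-B ROI names
 
 nVoxA   = size(dataA, 1);
 votes   = cell(nVoxA, 1);    % winning subject-B ROI for each subject-A voxel
 maxCorr = zeros(nVoxA, 1);   % the winning correlation value
 
 for v = 1:nVoxA
     bestR   = -Inf;
     bestROI = '';
     for r = 1:numel(dataB)
         % correlate this subject-A voxel with every voxel in subject-B ROI r
         R = corr(dataA(v, :)', dataB{r}');   % 1-by-nVoxB correlations
         if max(R) > bestR
             bestR   = max(R);
             bestROI = roiNamesB{r};
         end
     end
     votes{v}   = bestROI;
     maxCorr(v) = bestR;
 end
 
 % One row of the cross table: percentage of votes each subject-B ROI received
 for r = 1:numel(roiNamesB)
     pct = 100 * sum(strcmp(votes, roiNamesB{r})) / nVoxA;
     fprintf('%s: %.1f%% of votes\n', roiNamesB{r}, pct);
 end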

Second approach

Because our first approach kept only the ROI containing the maximally correlated voxel, it threw away information that would be valuable to display, and the maximum could easily be driven by outliers. Mapping the full set of correlations lets us see the general distribution of which ROIs a given ROI "likes".
Our code was very similar to the code in the first approach, except this time we selected only one ROI in subject A. For each voxel in A's ROI, we found the correlation with every voxel in all ROIs in subject B (just as in the first approach). This produces a map of correlations from one voxel to all voxels in B. We did this for every voxel in subject A's ROI and then averaged the maps together to get the average correlations across subject B.


MATLAB script for correlation maps
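
A correspondingly minimal sketch of the averaging step, under the same assumptions and hypothetical variable names as the sketch above (the real script again relies on mrVista functions to load the data and to display the resulting map):

 % Minimal sketch of the second approach (same hypothetical variables as above).
 % Build the average correlation map from one subject-A ROI to all of subject B.
 
 allVoxB = vertcat(dataB{:});   % stack every subject-B ROI: voxels-by-timepoints
 
 % Correlate every subject-A voxel with every subject-B voxel in one call;
 % rows of R are subject-A voxels, columns are subject-B voxels.
 R = corr(dataA', allVoxB');
 
 % Average across subject-A voxels: one mean correlation per subject-B voxel,
 % which can then be projected back onto subject B's anatomy for display.
 meanMap = mean(R, 1);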

Results

These are example outputs using actual subject data.

Rows correspond to ROIs in subject kw and columns to ROIs in subject jc. Each row is the histogram, expressed as percentages, of voxels that picked the ROI in each column; converting the histogram in the figure below into percentages gives one row of this chart.
Note: these are not the same subjects as in the previous figure. The histogram shows which ROI in subject B the voxels in subject A correlated with most; the subject-A ROI is given in the graph title.
Produced from the second approach. This figure maps subject A's left PPA onto subject B's brain. The green ROIs are the PPA and the pink ones are the FFA; the blue and white ones are retrosplenial. I've included them because they appear using the same contrast (places > objects) that is used to define the PPA.
Subject A's left FFA mapped onto subject B's brain; this picks out the face areas much better.

Conclusions

The purpose of this endeavor was to create a tool to quantify how similar regions are between subjects in relation to other scanned regions. To the best of our knowledge, the code works, but I have not assessed its utility as a tool or the effectiveness of the methodology we used. For instance, it is likely that the correlations are confounded by the periodicity of the stimulus turning on and off, which creates a large wave of activation following the on/off cycle. The mappings for some pairs of subjects were not as correlated as expected, and some ROIs mapped to seemingly unrelated ROIs. There are likely several factors our method does not capture.

All in all, however, we successfully built working code to conduct rudimentary multisubject analysis, with which regions or sets of regions can be compared. It is one exploratory solution to the aforementioned problem, but its effectiveness has yet to be determined.

Possible improvements or changes

To calculate our correlations, we used the time-series data, but this was an arbitrary decision. The betas from the GLMs or the percent signal change could be used in its place to find the correlation.
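
For instance, if the betas were pulled into voxels-by-conditions matrices (betasA and betasB are hypothetical names), they could be dropped in wherever the time series appear in the sketches above:

 % Hypothetical: correlate condition-wise GLM betas instead of time series.
 % betasA, betasB are voxels-by-conditions matrices for an ROI in each subject.
 R = corr(betasA', betasB');   % same call as before, just different feature vectors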

As mentioned earlier, in the first approach only the ROI containing the maximum correlation is recorded and the rest is discarded. Finding the 'mean' or center of the distribution would probably improve the first approach.

As a tool, its interface is simply through the code, but a GUI could easily be developed for choosing subjects and ROIs and sifting through the output. Additionally, the data could be visualized in several other ways.

References - Resources and related work

  1. Kriegeskorte, N., Mur, M., & Bandettini, P. (2008). Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience. Front Syst Neurosci, 2, 4. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2605405/
  2. Hasson, U., Nir, Y., Levy, I., Fuhrmann, G., & Malach, R. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303, 1634-1640.

Software: mrVista

Appendix I - Code

zip file with code

Appendix II - Work partition

To give you a sense of the dynamic, Nathan did a lot of explaining to me, gave me some papers and the steps on how it should be coded. I coded up the first approach. He debugged it. We worked on it together and beefed it up, and then we worked on the second approach together (but it was mostly Nathan because he knew what mrVista functions to use). All in all, I learned a lot more working with Nathan than I would have by myself.