Nicole

From Psych 221 Image Systems Engineering



Introduction

Broad Context

Instructional methods influence the likelihood that an individual will learn a symbolic rule, its relation to the referent, or both. If students are provided with the rule and later shown example situations in which it applies, they may not fully understand the problem space. In contrast, inventing a symbolic representation based on the context gives students a better sense of the relationships among variables (Schwartz, Chase, Oppezzo, & Chin, 2011; Star & Rittle-Johnson, 2008). Alternatively, students may not know why operations are used in the ways that they are. To teach formulas, many typical instructional methods rely on clear lectures. These can effectively guide students toward procedural fluency, but this type of instruction does not always emphasize the relationships between the symbolic notation and the referenced context. Consequently, students may perform poorly in new contexts, failing to transfer or showing evidence of negative transfer, which underscores their inability to understand the limits of the problem space. They may also exhibit only a brittle understanding of the rules and their meanings (Lehrer, Schauble, Carpenter, & Penner, 2000). Moreover, people may attempt to develop connections to the referent but fail to do so if they already have the formula to fall back on (Schwartz, Chase, Oppezzo, & Chin, 2011). With my colleagues, I hypothesize that with only the referent or only symbolic understanding, knowledge will not transfer to new contexts.

Prior behavioral piloting has shown that learners who search for a formula are less likely to overgeneralize (negatively transfer) the formula to new contexts than their counterparts who are told the formula directly.

This study explores the role of the mental model, comparing different ways to integrate knowledge: using two representations versus only one. Individuals who receive direct instruction about the formula are compared to those who learn both the formula and a spatial strategy that ties it to the referent. With this design, this study compares purely symbolic knowledge to an integrated understanding of multiple aspects of the relationship. It is predicted that people who have both an algebraic and spatial representation of the problem will be more flexible in applying either strategy and will show neurological overlap between the two types of problem solving. Additionally, individual differences in neurological signatures among individuals who successfully generalize the formula and those who find this task difficult may provide insight into the effects of understanding the spatial referent of the formula.

Materials

All participants solved the Polygon Problem, a growth pattern problem in which the solver’s task is to determine the perimeter of a row of regular polygons arranged in a single line, as in Figure 1. These figures consisted of shapes ranging from 3 sides (triangles) to 6 sides (hexagons) in rows of 3 to 10 shapes. The relationship between the total perimeter and these two variables can be expressed in various formulae, which simplify to this canonical form: Perimeter = (s-2)n + 2, where s represents the number of sides per polygon and n represents the number of polygons in a row. The Polygon Problem has been used in professional development programs with teachers as an example of a growth pattern problem that allows a general abstract solution to be built from a range of possible contexts (Koellner et al., 2007).
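The canonical form can be sanity-checked in a few lines of code (an illustrative sketch; the function name is mine, not part of the study materials). Each interior join between neighboring polygons hides two edges, one from each shape, which is where the (s-2)n + 2 form comes from.

```python
def polygon_perimeter(s: int, n: int) -> int:
    """Perimeter of a row of n regular s-sided polygons joined edge-to-edge.

    Each of the (n - 1) interior joins hides two edges, so the total is
    s*n - 2*(n - 1), which simplifies to the canonical (s - 2)*n + 2.
    """
    return (s - 2) * n + 2

# Two triangles sharing one edge: 3 + 3 sides minus the 2 hidden edges = 4
print(polygon_perimeter(3, 2))   # 4
# Ten hexagons in a row: 4*10 + 2 = 42
print(polygon_perimeter(6, 10))  # 42
```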


Example polygon problem stimuli.


Trial Paradigm

The trial procedure is shown in the figure below. To ensure that potential differences in brain activations to the different conditions would not simply be an effect of different visual stimuli, the presentation of each problem was the same for all trials. As shown below, this screen consisted of the graphic representation of the problem, the formula, and the values of the variables ‘s’ and ‘n’ such that it provided all of the necessary information to use the formula or use the spatial strategy. In each trial, participants clicked a button to indicate that they had solved the problem. On the next screen, participants used the trackball to scroll to their answer.


Schematic of each trial. Note self-paced task and jittered ISI.

The study timing was self-paced, a method that has been used in other studies of cognitive processing (e.g., Kalbfleisch, VanMeter, & Zeffiro, 2007). The interstimulus interval (ISI) was jittered between trials.

Blocks consisted of 24 trials with varying lengths and types of shapes presented. Each block lasted approximately six to eight minutes depending on the participants’ efficiency in working through the problems.

Study Design

In this study, participants received training about the Polygon Problem prior to entering the scanner, solved several blocks of problems in the scanner, and were tested on transfer questions after scanning. The study included a between-subjects manipulation of instruction. In the Formula + Spatial condition, participants built up the formula from the referent, learning both the formula and an analogous spatial strategy involving skip-counting along the figure to geometrically represent the formula. Participants in the Formula Only condition learned the formula well enough to do the task but did not learn about its relationship to the spatial referent. This design is outlined below.


Overall study design.

In the present investigation, data from seven participants in the Formula Only condition are examined. In particular, I considered data from the first two scanner blocks, during which the participants were using the formula to solve the problems.

The present investigation: Subject Motion

This project was designed to explore the effects of subject motion on individual and group level analyses. While motion correction algorithms are frequently applied, I was particularly interested in learning the specific ways in which these algorithms affect data and potential interpretation. Ultimately, my goal was to use this knowledge to make an educated decision about whether to include motion correction in my future preprocessing analyses.

The first example below is a graphical representation of typical subject motion. Note that the translational motion stays within approximately 0.5mm throughout the sessions and that rotational motion stays within 1 degree. These small levels of motion and gradual drifts are easily modeled in a general linear model by adding the motion parameters as regressors of no interest.
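As a sketch of how such regressors of no interest work, the toy example below (all values and the boxcar design are hypothetical, not the study's actual design matrix) fits a task effect by least squares while six motion parameters are modeled out alongside it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Toy boxcar task regressor: 10 scans on, 10 scans off.
task = (np.arange(n_scans) % 20 < 10).astype(float)

# Six hypothetical motion parameters (x/y/z translation, pitch/roll/yaw)
# simulated as slow random-walk drifts, like the gradual drifts above.
motion = np.cumsum(rng.normal(0, 0.02, (n_scans, 6)), axis=0)

# Design matrix: task effect, motion regressors of no interest, intercept.
X = np.column_stack([task, motion, np.ones(n_scans)])

# Simulated voxel time series: true task effect of 2.0 plus motion + noise.
y = 2.0 * task + motion @ rng.normal(0, 1, 6) + rng.normal(0, 0.5, n_scans)

betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas[0])  # task effect, estimated with motion variance absorbed
```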

Typical Motion.


In contrast, the subject motion represented below is atypical and far outside the usual range of ‘acceptable’ motion. Again, note the scales of both Y-axes; translational motion ranges to more than 4mm and rotational motion approaches 1.5 degrees. The spiky quality of the motion is very apparent and makes this subject’s data a good case for motion correction algorithms.

Atypical Motion.


Interpolation using motion correction algorithms

To identify and correct motion outliers, the ArtRepair toolbox was used in conjunction with SPM. The following graphic is produced by the ArtRepair software’s feature “Bad Volumes: Detect and Repair”:
Motion during study block 1.

In this interactive figure, ArtRepair displays four graphs of various features of the data. First, the global average signal is plotted in arbitrary scanner units in the top graph. The second plot conveys similar information but instead displays this intensity in terms of standard deviations away from the mean signal intensity of the run. The final two plots display motion information; translational and rotational motion are shown in the third graph, and overall scan-to-scan movement is plotted in the fourth graph.

The user can select the threshold at which to repair volumes using the “Up” and “Down” buttons located in the lower left corner of the figure. As shown in the images here, I selected a motion threshold of 0.5mm/TR, which corresponds to a threshold of 0.9 SD in this particular dataset. One could imagine that researchers studying patient populations or children may prefer a more lenient motion threshold. This same figure output generated for the second block of the subject with the most motion is displayed below:

Motion during study block 2.
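The scan-to-scan flagging step can be approximated in a few lines. This is an illustrative sketch, not ArtRepair's exact criterion (which also weighs rotations and global signal intensity); it simply flags any volume whose translational displacement from the previous volume exceeds the chosen mm/TR threshold:

```python
import numpy as np

def flag_motion_outliers(translations_mm, threshold=0.5):
    """Flag volumes whose scan-to-scan displacement exceeds threshold (mm/TR).

    translations_mm: (n_volumes, 3) array of x/y/z realignment estimates.
    Returns indices of volumes to repair. Illustrative only; ArtRepair's
    actual detection also considers rotations and intensity outliers.
    """
    deltas = np.diff(translations_mm, axis=0)          # movement per TR
    displacement = np.linalg.norm(deltas, axis=1)      # Euclidean mm/TR
    return np.where(displacement > threshold)[0] + 1   # volume after the jump

motion = np.zeros((10, 3))
motion[5] = [3.0, 0.0, 0.0]          # a 3 mm spike at volume 5
print(flag_motion_outliers(motion))  # volumes 5 (jump in) and 6 (jump out)
```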

Volumes flagged as outliers can be repaired using one of three methods: interpolation, despiking, or replacing with the mean. In interpolation, the volumes in question are replaced with the average of the nearest uncorrected neighbors. Despiking also interpolates from neighbor volumes, but instead averages the two nearest neighbors regardless of their outlier status. Finally, replacing each outlier volume with the mean signal intensity can be used to statistically remove these volumes from future analyses. For this project, I selected interpolation as the best way to treat large chunks of consecutive outlier volumes. As such, a new series of volumes was generated with corrections applied to the outlier volumes denoted with red lines in the images above. Additionally, ArtRepair outputs a vector of scans to deweight (shown with green lines above) that can be used in future level 1 analyses or in fixed effects group level analyses.
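The interpolation option described above can be sketched as follows. This is an approximation of the behavior, not ArtRepair's code; note how a run of consecutive outliers is bridged from the nearest good volumes on either side:

```python
import numpy as np

def repair_by_interpolation(volumes, outliers):
    """Replace each outlier volume with the mean of its nearest
    non-outlier neighbors, mimicking the 'interpolation' repair option.

    volumes: array with volumes along axis 0; outliers: volume indices.
    """
    outliers = set(outliers)
    repaired = volumes.astype(float).copy()
    good = [i for i in range(len(volumes)) if i not in outliers]
    for i in outliers:
        before = max((g for g in good if g < i), default=None)
        after = min((g for g in good if g > i), default=None)
        neighbors = [repaired[g] for g in (before, after) if g is not None]
        repaired[i] = np.mean(neighbors, axis=0)
    return repaired

# Toy 1-D global signal with two consecutive spike volumes (indices 2, 3):
signal = np.array([100.0, 101.0, 180.0, 175.0, 102.0, 99.0])
print(repair_by_interpolation(signal, [2, 3]))
```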

To ensure quality control, I created movies of both the uncorrected and corrected data. To see these movies, please follow the links below. Note the significant reduction in motion and intensity artifacts after the application of ArtRepair’s interpolation methods.

Uncorrected, realigned raw images.

Realigned raw images that have been motion corrected.

Methods

Subjects

Sixteen right-handed participants were recruited from the Stanford Psychology Participant Pool to participate in a functional MRI study of the Polygon Problem. The age range was constrained to 18 to 40 years because some older participants in the pilot study had difficulty with simple math facts. Nine of the participants were male, and the average age was 23.2 years.

The dataset selected for the present study comprises seven healthy volunteers who participated in the Formula Only condition of the overall study. Their mean age was 23.8 years, and four of these participants were male.

MR acquisition

Data were obtained on a 3 Tesla GE scanner at Stanford's Center for Cognitive and Neurobiological Imaging (CNI) using a 32-channel head coil. Axial slices covering the whole brain were imaged using a T2*-weighted gradient echo EPI pulse sequence (TR = 2000ms, TE = 30ms, flip angle = 77°, interleaved slices). The field of view was 230mm and the spatial resolution was 3.6mm isotropic voxels.

A high-resolution T1-weighted BRAVO pulse sequence was used with the following parameters: flip angle = 12°, interleaved axial slices, FOV = 240mm, acquired resolution 0.8mm x 0.8mm x 0.8mm.

MR Analysis

The MR data were analyzed using SPM software tools in MATLAB.


Pre-processing

As shown in the graphic below, all data were realigned and resliced before motion correction was applied in cases of extreme motion. Next, all data were slice-time corrected.

To accommodate future group analyses, each subject’s data were transformed into MNI (Montreal Neurological Institute) template space. First, each subject’s high-resolution T1-weighted anatomical scan was coregistered to an MNI template (‘Coregister’) and segmented into grey and white matter (‘Segment’). Each subject’s functional data were then smoothed with a 6mm kernel and normalized by applying the same transformation as was applied to his or her T1 scan (‘Normalize’).
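The 6mm smoothing step amounts to Gaussian filtering at a fixed full width at half maximum (FWHM). A minimal sketch, using SciPy rather than SPM and assuming the 3.6mm isotropic voxels reported above, shows the FWHM-to-sigma conversion that such a step requires:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm=6.0, voxel_mm=3.6):
    """Gaussian smoothing at a given FWHM, analogous to SPM's 'Smooth' step.

    gaussian_filter expects sigma in voxel units, so convert:
    sigma = FWHM / (2 * sqrt(2 * ln 2)), then divide by the voxel size.
    """
    sigma_voxels = fwhm_mm / (voxel_mm * 2 * np.sqrt(2 * np.log(2)))
    return gaussian_filter(volume, sigma=sigma_voxels)

# Smoothing an impulse spreads it out but preserves the total signal.
vol = np.zeros((20, 20, 20))
vol[10, 10, 10] = 1.0
smoothed = smooth_fwhm(vol)
print(smoothed.max() < 1.0, round(smoothed.sum(), 6))
```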

Preprocessing pipeline with addition of ART repair step.

Of interest in this particular project is the interjection of the Motion Correction step into the standard preprocessing pipeline.

Level 1 Analysis

Subject-level analyses were performed twice on the participant who exhibited the most motion. First, the subject’s uncorrected, preprocessed data were used to determine his activation during the formula use task (vs. rest). Then, the same analyses were performed using deweighted and motion corrected data to explore the effects of these repair programs on the subject’s parameter maps.

As shown in the task design below, three regressors of interest were included and convolved with the HRF: correct trials, incorrect trials, and time spent answering questions using the sliders. In addition, subjects’ motion vectors were included as covariates.
Task design specification.
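Convolving the trial onsets with the HRF can be sketched as follows. The double-gamma shape here is an illustrative approximation, not SPM's exact parameterization, and the event timings are invented:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # seconds, matching the EPI sequence above

def canonical_hrf(tr, duration=32.0):
    """Simplified double-gamma hemodynamic response function, sampled at
    the TR: an early positive peak minus a smaller, later undershoot."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, 6)           # response peaking around 5 s
    undershoot = gamma.pdf(t, 16)    # post-stimulus undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()           # normalize to unit sum

# Hypothetical onset vector: 1 at scans where a correct trial began.
events = np.zeros(60)
events[[5, 20, 40]] = 1.0

# Convolve with the HRF and trim back to the scan count, giving one
# regressor of interest for the design matrix.
regressor = np.convolve(events, canonical_hrf(TR))[:60]
print(regressor.shape)
```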
The following results show the subject’s activation while doing the formula task over rest. In these images, a T-threshold of 2.5 was used and no corrections for multiple comparisons were applied.

No Motion Correction

Subject-level parameter map with no motion correction.
In this image, the contrast between formula use and rest is shown in orange. This analysis has been conducted using data that have been preprocessed but not motion corrected. The widespread activation patterns and large portions of activation outside of the brain and on the brain stem are artifacts of the subject’s motion.

Using Motion Correction

Subject-level parameter map with motion correction.
Now, the same contrast (formula use – rest) is shown for motion corrected files in blue. This image illustrates the effects of motion correction on single subject analyses. Although some false positives still remain, the activation is more concentrated as the outlier volumes have been repaired using interpolation.

Group Level Analyses

One-sample t-tests were used to analyze several contrasts at the group level. Of interest in the images below is the contrast between using the formula and rest (activations indicate regions that are significantly more active during formula processing than during rest). These analyses were modeled using random effects (each subject’s level 1 statistics were computed separately and then analyzed at the group level). Because this study is exploratory, a relatively low T-threshold of 2.0 was used in the following images and no cluster-thresholding was applied. These data are displayed on the standard MNI-152 template, as they have been smoothed and normalized to the template.
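A voxelwise random-effects one-sample t-test amounts to testing each subject's contrast estimate against zero at every voxel. A minimal sketch for a single voxel, with invented per-subject betas (not the study's actual values):

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical per-subject contrast estimates (formula > rest) at one
# voxel: one beta per subject, the level 1 output of the 7 subjects.
subject_betas = np.array([0.8, 1.2, 0.5, 0.9, 1.1, 0.7, 1.0])

# Random effects: only the between-subject variability enters the test.
t, p = ttest_1samp(subject_betas, popmean=0.0)

# Would this voxel survive the exploratory T > 2.0 threshold used above?
print(t > 2.0)
```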

No Motion Correction

Group level parameter map with no motion correction.
Here, the group level activations of all subjects with no motion correction are displayed in orange.

Using Motion Correction

Group level parameter map with motion correction.
In this image, the group level statistics have been computed using the motion-corrected level-1 data of the participant with the most severe motion outliers. These activations are overlaid in blue and share many regions in common with the uncorrected activations (displayed in orange).

Excluding a Subject with Too Much Motion

Group level parameter map with motion subject excluded.
Finally, this image shows the result of excluding the subject with the most motion from the dataset; only 6 of the 7 total subjects’ data are included in this analysis. These results, overlaid in green, are clearly more spatially constrained than the other two analyses. This comparison highlights the potential of one subject to affect group analyses of a small sample size.

Conclusions

At the level of subject-specific analyses, it is evident that motion correction reduces the prevalence of false positive results. For example, the level 1 visualizations highlight areas outside of the brain, on the skull, and on the brain stem that are measured as significantly activated during formula processing without motion correction. Although these effects are not entirely eliminated by motion correction, the degree of false positives decreases.

In group level analyses, the effects of motion correction and exclusion of the worst mover were examined. As in first-level analyses, motion correction tightens the spread of the activated areas. Furthermore, removing the worst mover from the dataset results in less distributed activations. Going forward, it is important to consider the tradeoff between including more participants to increase statistical power and the cost of including additional datapoints (i.e., degrees of freedom). This project demonstrates the potentially harmful effects of including just one case of extreme motion in a group.

References - Resources and related work

References

Kalbfleisch, M.L., VanMeter, J.W., & Zeffiro, T.A. (2007). The influences of task difficulty and response correctness on neural systems supporting fluid reasoning. Cognitive Neurodynamics, 1: 71-84.

Koellner, K., Jacobs, J., Borko, H., Schneider, C., Pittman, M.E., Eiteljorg, E., Bunning, K., & Frykholm, J. (2007). The problem-solving cycle: A model to support the development of teachers’ professional knowledge. Mathematical Thinking and Learning, 9, (3), 273-303.

Lehrer, R., Schauble, L., Carpenter, S., & Penner, D. (2000). The interrelated development of inscriptions and conceptual understanding. In P. Cobb, E. Yackel, & K. McClain (Eds.), Symbolizing and communicating in mathematics classrooms (pp. 325-360). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Schwartz, D.L., Chase, C.C., Oppezzo, M.A., & Chin, D.B. (2011). Practicing versus inventing with contrasting cases: The effects of telling first on learning and transfer. Journal of Educational Psychology, 103(4), 759-775.


Star, J.R., & Rittle-Johnson, B. (2008). Flexibility in problem solving: The case of equation solving. Learning and Instruction, 18(6), 565-579.


Software

Art Repair

For more information about the SPM plugin ArtRepair, see the ArtRepair documentation.