PengStarobinets

Back to Psych 221 Projects 2013




Background

The performance of long-distance imaging systems is often strongly affected by atmospheric turbulence, caused by variations in the refractive index along the optical transmission path. Such turbulence can produce geometric distortion, space- and time-variant defocus blur, and motion blur. An example is shown in the following video of the moon.

Video link showing an example of atmospheric turbulence: http://vimeo.com/21417297

There have been many approaches to this problem that attempt to restore a single high-quality image from an observed frame sequence distorted by air turbulence. These approaches, including the one addressed in this paper, work under the assumption that the scene and the image sensor are both static, so that the observed motions, as in the video, are due to the air turbulence alone.

The imaging process can be modeled as:

<math> g_k = D_k H_k f + n_k </math>

where <math>f</math> denotes the ideal image, <math>D_k</math> and <math>H_k</math> represent the geometric deformation and blurring matrices respectively, <math>n_k</math> denotes additive noise, and <math>g_k</math> is the <math>k</math>-th observed frame.
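
This forward process can be simulated directly. Below is a minimal Python sketch of one observed frame, assuming a smooth random warp as a stand-in for the deformation <math>D_k</math> and a Gaussian kernel as a stand-in for the blur <math>H_k</math>; the true turbulence operators are space- and time-variant.

<pre>
# Minimal simulation of one observed frame g_k = D_k H_k f + n_k.
# The smooth random warp (D_k) and Gaussian blur (H_k) are illustrative
# stand-ins; real turbulence operators vary in space and time.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def observe_frame(f, warp_amp=2.0, blur_sigma=1.5, noise_sigma=0.01, rng=None):
    """Simulate one turbulence-degraded frame from the ideal image f."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = f.shape
    # D_k: geometric deformation, a smooth random displacement field
    dy = gaussian_filter(rng.standard_normal((h, w)), 8) * warp_amp
    dx = gaussian_filter(rng.standard_normal((h, w)), 8) * warp_amp
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    deformed = map_coordinates(f, [yy + dy, xx + dx], order=1, mode='reflect')
    # H_k: blur (space-invariant Gaussian here, for simplicity)
    blurred = gaussian_filter(deformed, blur_sigma)
    # n_k: additive noise
    return blurred + noise_sigma * rng.standard_normal((h, w))
</pre>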

The key is then to invert this process in order to recover the desired corrected image.

Methods

Existing restoration algorithms for this problem can generally be categorized in two ways.

Multi-Frame Reconstruction Framework

To correct geometric distortion and reduce space- and time-varying blur, this paper proposes a new approach capable of restoring a single high-quality image from a given image sequence distorted by atmospheric turbulence. The approach reduces the space- and time-varying deblurring problem to a shift-invariant one. It first registers each frame to suppress geometric deformation through B-spline based non-rigid registration. Next, a temporal regression process is carried out to produce an image from the registered frames, which can be viewed as being convolved with a space-invariant, near-diffraction-limited blur. Finally, a blind deconvolution algorithm is applied to deblur the fused image, generating the final output. Experiments using real data illustrate that this approach can effectively alleviate blur and distortions, recover details of the scene, and significantly improve visual quality.
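
A minimal sketch of the three-stage pipeline follows. It is not the paper's implementation: OpenCV's Farneback dense optical flow stands in for the B-spline non-rigid registration, a temporal median stands in for the regression step, and Richardson-Lucy deconvolution with a guessed Gaussian PSF stands in for true blind deconvolution.

<pre>
# Illustrative three-stage restoration pipeline (not the paper's code).
import numpy as np
import cv2
from skimage.restoration import richardson_lucy

def restore(frames, psf):
    """frames: list of float32 grayscale images in [0, 1]; psf: 2-D kernel guess."""
    ref = np.median(np.stack(frames), axis=0)        # rough reference frame
    h, w = ref.shape
    grid = np.mgrid[0:h, 0:w].astype(np.float32)
    registered = []
    for f in frames:
        # Stage 1: non-rigid registration of each frame to the reference
        flow = cv2.calcOpticalFlowFarneback(
            (ref * 255).astype(np.uint8), (f * 255).astype(np.uint8),
            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = (grid[1] + flow[..., 0]).astype(np.float32)
        map_y = (grid[0] + flow[..., 1]).astype(np.float32)
        registered.append(cv2.remap(f, map_x, map_y, cv2.INTER_LINEAR))
    # Stage 2: fuse registered frames into one near-diffraction-limited image
    fused = np.median(np.stack(registered), axis=0)
    # Stage 3: deconvolve the fused image with the assumed PSF
    return richardson_lucy(fused, psf, num_iter=30)
</pre>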


Measuring retinotopic maps

Retinotopic maps were obtained in 5 subjects using population receptive field (pRF) mapping methods (Dumoulin and Wandell, 2008). These data were collected for another research project in the Wandell lab. We re-analyzed the data for this project, as described below.

Subjects

Subjects were 5 healthy volunteers.

MR acquisition

Data were obtained on a GE scanner. Et cetera.

MR Analysis

The MR data were analyzed using the mrVista software tools.

Pre-processing

All data were slice-time corrected, motion corrected, and repeated scans were averaged together to create a single average scan for each subject. Et cetera.
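
mrVista is a MATLAB toolbox; purely as an illustration, the averaging step can be sketched in Python with nibabel, assuming the repeated scans have already been slice-time and motion corrected (the file names are hypothetical):

<pre>
# Illustrative averaging of corrected repeated scans (file names hypothetical).
import numpy as np
import nibabel as nib

runs = [nib.load(f"scan{k}_corrected.nii.gz") for k in (1, 2, 3)]
mean_data = np.mean([r.get_fdata() for r in runs], axis=0)
nib.save(nib.Nifti1Image(mean_data, affine=runs[0].affine), "scan_average.nii.gz")
</pre>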

PRF model fits

pRF models were fit with a two-Gaussian model.
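
As a sketch, a two-Gaussian pRF can be written as a difference of Gaussians: a positive center at position (x, y) with size sigma, plus a weaker, broader surround. The surround ratios below are illustrative values, not those used in the actual fits.

<pre>
# Illustrative two-Gaussian (difference-of-Gaussians) pRF; the surround size
# and amplitude ratios are made-up values, not those used in the actual fits.
import numpy as np

def prf(x0, y0, sigma, X, Y, surround_ratio=2.0, surround_amp=0.3):
    """Evaluate a center-surround receptive field on a visual-field grid X, Y."""
    r2 = (X - x0) ** 2 + (Y - y0) ** 2
    center = np.exp(-r2 / (2 * sigma ** 2))
    surround = surround_amp * np.exp(-r2 / (2 * (surround_ratio * sigma) ** 2))
    return center - surround

# Predicted response to a binary stimulus aperture S on the same grid:
# response = (prf(x0, y0, sigma, X, Y) * S).sum()
</pre>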

MNI space

After a pRF model was solved for each subject, the model was transformed into MNI template space. This was done by first aligning the high-resolution T1-weighted anatomical scan from each subject to an MNI template. Since the pRF model was coregistered to the T1 anatomical scan, the same alignment matrix could then be applied to the pRF model.
Once each pRF model was aligned to MNI space, 4 model parameters - x, y, sigma, and r^2 - were averaged across the 5 subjects at each voxel.

Et cetera.
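
Purely as an illustration, the transform-and-average step might look like the following, assuming each pRF parameter is stored as a NIfTI volume whose affine already encodes the subject-to-MNI alignment (file names and the helper function are hypothetical):

<pre>
# Illustrative transform-and-average step (file names and helper hypothetical).
import numpy as np
import nibabel as nib
from nibabel.processing import resample_from_to

template = nib.load("mni_template.nii.gz")

def to_mni(param_file):
    """Resample one pRF parameter volume onto the MNI template grid."""
    img = nib.load(param_file)  # affine assumed to include the T1-to-MNI alignment
    return resample_from_to(img, template, order=1).get_fdata()

for param in ("x", "y", "sigma", "r2"):
    vols = [to_mni(f"subj{s}_{param}.nii.gz") for s in range(1, 6)]
    mean_vol = np.mean(vols, axis=0)  # voxelwise average across the 5 subjects
    nib.save(nib.Nifti1Image(mean_vol, template.affine),
             f"group_{param}_mni.nii.gz")
</pre>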

Results - What you found

Retinotopic models in native space

Some text. Some analysis. Some figures.

Retinotopic models in individual subjects transformed into MNI space

Some text. Some analysis. Some figures.

Retinotopic models in group-averaged data on the MNI template brain

Some text. Some analysis. Some figures. Maybe some equations.


Equations

The initial test points are taken with equal spacing and can be represented as:

<math> (\hat{x}_{0i}, \hat{y}_{0i}) </math>

Running the correlation algorithm, we found the deformed locations of the test points. The differences between the original positions and the deformed positions are then stored in the deformation vector:

<math> \vec{\mathbf{p}} = [\hat{x}_{0i}-\hat{x}_{i},\hat{y}_{0i}-\hat{y}_{i}]^T = [\Delta \hat{x}_{1},...,\Delta \hat{x}_{i},\Delta \hat{y}_{1},...,\Delta \hat{y}_{i}]^T </math>
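
As an illustration of the tracking step, the sketch below uses normalized cross-correlation via OpenCV template matching as a stand-in for the correlation algorithm above; the stacked displacements of the grid of test points form <math> \vec{\mathbf{p}} </math>:

<pre>
# Illustrative tracking of equally spaced test points by normalized
# cross-correlation (OpenCV template matching as a stand-in for the
# project's correlation algorithm).
import numpy as np
import cv2

def deformation_vector(ref, frame, spacing=16, patch=8):
    """Return p = [dx_1, ..., dx_i, dy_1, ..., dy_i] for a grid of test points."""
    h, w = ref.shape
    dxs, dys = [], []
    for y0 in range(2 * patch, h - 2 * patch, spacing):
        for x0 in range(2 * patch, w - 2 * patch, spacing):
            tmpl = ref[y0 - patch:y0 + patch, x0 - patch:x0 + patch]
            # search window around the original location in the deformed frame
            wy, wx = y0 - 2 * patch, x0 - 2 * patch
            win = frame[wy:y0 + 2 * patch, wx:x0 + 2 * patch]
            res = cv2.matchTemplate(win.astype(np.float32),
                                    tmpl.astype(np.float32), cv2.TM_CCORR_NORMED)
            _, _, _, (mx, my) = cv2.minMaxLoc(res)
            x1, y1 = wx + mx + patch, wy + my + patch   # deformed location
            dxs.append(x0 - x1)                          # delta x_i = x_0i - x_i
            dys.append(y0 - y1)                          # delta y_i = y_0i - y_i
    return np.concatenate([dxs, dys])
</pre>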

Using B-spline interpolation, we defined the spline basis <math> c_i </math> and the basis function matrix <math> \mathbf{A(x)} </math> for each pixel <math> \mathbf{x} </math>.
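
A minimal sketch of this interpolation step, using SciPy's bicubic B-splines on the test-point grid as a stand-in for assembling <math> \mathbf{A(x)} </math> explicitly:

<pre>
# Illustrative interpolation of the sparse grid displacements to every pixel
# with bicubic B-splines (SciPy's RectBivariateSpline stands in for building
# the basis function matrix A(x) explicitly).
import numpy as np
from scipy.interpolate import RectBivariateSpline

def dense_field(grid_y, grid_x, dx_grid, dy_grid, h, w):
    """grid_y, grid_x: sorted 1-D test-point coordinates;
    dx_grid, dy_grid: 2-D displacement samples of shape (len(grid_y), len(grid_x))."""
    sx = RectBivariateSpline(grid_y, grid_x, dx_grid, kx=3, ky=3)
    sy = RectBivariateSpline(grid_y, grid_x, dy_grid, kx=3, ky=3)
    yy, xx = np.arange(h), np.arange(w)
    return sx(yy, xx), sy(yy, xx)   # per-pixel displacement maps, shape (h, w)
</pre>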



The sequence of N complex numbers <math>x_0, \ldots, x_{N-1}</math> is transformed into the sequence of N complex numbers <math>X_0, \ldots, X_{N-1}</math> by the DFT according to the formula:

<math> X_k = \sum_{n=0}^{N-1} x_n e^{-i 2 \pi k n / N}, \qquad k = 0, \ldots, N-1 </math>

where <math>i</math> is the imaginary unit and <math>e^{-i 2 \pi / N}</math> is a primitive <math>N</math>th root of unity. (This expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the <math>X_k</math> can thus be viewed as coefficients of <math>x</math> in an orthonormal basis.)

The transform is sometimes denoted by the symbol <math>\mathcal{F}</math>, as in <math>\mathbf{X} = \mathcal{F}\{\mathbf{x}\}</math> or <math>\mathcal{F}(\mathbf{x})</math> or <math>\mathcal{F}\mathbf{x}</math>.

The inverse discrete Fourier transform (IDFT) is given by

<math> x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi k n / N}, \qquad n = 0, \ldots, N-1. </math>
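
As a quick check of these formulas, a direct evaluation of the DFT can be compared against NumPy's FFT:

<pre>
# Direct O(N^2) evaluation of the DFT, checked against NumPy's FFT.
import numpy as np

def dft(x):
    """X_k = sum_n x_n * exp(-2*pi*i*k*n/N)"""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix
    return W @ x

x = np.random.randn(8) + 1j * np.random.randn(8)
assert np.allclose(dft(x), np.fft.fft(x))
# IDFT via conjugation: x = conj(DFT(conj(X))) / N
assert np.allclose(np.conj(dft(np.conj(dft(x)))) / 8, x)
</pre>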

Retinotopic models in group-averaged data projected back into native space

Some text. Some analysis. Some figures.


Conclusions

Here is where you say what your results mean.

References - Resources and related work

References

Software

Appendix I - Code and Data

Code

File:CodeFile.zip

Data

zip file with my data

Appendix II - Work partition (if a group project)

Brian and Bob gave the lectures. Jon mucked around on the wiki.