WooLee

Back to Psych 221 Projects 2013




Background

Vision is one of the most important senses in our lives, and with the tremendous improvements in technology, research on 3D vision has become a hot topic. 3D reconstruction is interesting in that it lets us recover a 3D scene from 2D images. In this project, we present techniques for developing scene models that can be used for analysis in ISET. We build the scene in Blender and export it to PBRT files. Then, in Matlab, we run ISET to analyze the converted scene. We also attempted 3D reconstruction, demonstrating a possible way of using such scenes for computer vision testing.

Figure 1: Overview

Figure 2



Blender

Blender was first conceived in December 1993 and born as a usable product in August 1994 as an integrated application that allows the creation of a broad range of 2D and 3D content. Blender provides a broad spectrum of modeling, texturing, lighting, animation and video post-processing functionality in one package. Through its open architecture, Blender provides cross-platform interoperability, extensibility, an incredibly small footprint, and a tightly integrated workflow. Aimed world-wide at media professionals and artists, Blender can be used to create 3D visualizations and stills as well as broadcast and cinema quality videos, while the incorporation of a real-time 3D engine allows for the creation of 3D interactive content for stand-alone playback.

Key features:
* Fully integrated creation suite, offering a broad range of essential tools for the creation of 3D content, including modeling, UV-mapping, texturing, rigging, skinning, animation, particle and other simulation, scripting, rendering, compositing, post-production, and game creation.
* Cross platform, with a uniform OpenGL GUI on all platforms, ready to use for all versions of Windows (98, NT, 2000, XP, Vista, 7), GNU/Linux, OS X, FreeBSD, Irix, SunOS, and numerous other operating systems.
* High quality 3D architecture enabling a fast and efficient creation work-flow.
* Small executable size, easy distribution.


The figure below shows the Blender windows.


Figure 2
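
To make the Blender step concrete, here is a minimal Python (bpy) sketch that builds a toy scene from a script rather than through the GUI. The object placements and the output path are illustrative assumptions, not values from our project, and the export to PBRT itself is done afterwards with a PBRT exporter add-on rather than by this script.

<pre>
# Minimal Blender (bpy) sketch: build a toy scene from a script and save it.
# Written against the Blender 2.6x-era Python API; object locations and the
# output path are illustrative assumptions, not values from our project.
import bpy

# Start from an empty scene by deleting the default objects.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Add a cube as the object to be rendered.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))

# Add a camera in front of the cube and make it the active camera.
bpy.ops.object.camera_add(location=(0.0, -6.0, 3.0), rotation=(1.1, 0.0, 0.0))
bpy.context.scene.camera = bpy.context.object

# Add a point lamp so the cube is lit.
bpy.ops.object.lamp_add(type='POINT', location=(3.0, -3.0, 5.0))

# Save the scene; the export to PBRT is done separately with an exporter add-on.
bpy.ops.wm.save_as_mainfile(filepath='/tmp/toy_scene.blend')
</pre>

The exported PBRT files are then read into ISET in Matlab for the analysis described in the Background.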



Methods

Measuring retinotopic maps

Retinotopic maps were obtained in 5 subjects using population receptive field (pRF) mapping methods (Dumoulin and Wandell, 2008). These data were collected for another research project in the Wandell lab. We re-analyzed the data for this project, as described below.

Subjects

Subjects were 5 healthy volunteers.

MR acquisition

Data were obtained on a GE scanner. Et cetera.

MR Analysis

The MR data were analyzed using the mrVista software tools.

Pre-processing

All data were slice-time corrected, motion corrected, and repeated scans were averaged together to create a single average scan for each subject. Et cetera.

PRF model fits

PRF models were fit with a 2-gaussian model.

MNI space

After a pRF model was solved for each subject, the model was transformed into MNI template space. This was done by first aligning the high-resolution T1-weighted anatomical scan from each subject to an MNI template. Since the pRF model was coregistered to the T1 anatomical scan, the same alignment matrix could then be applied to the pRF model.

Once each pRF model was aligned to MNI space, four model parameters - x, y, sigma, and r^2 - were averaged across the 5 subjects in each voxel.
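
For illustration only, the sketch below shows the kind of voxel-wise averaging described above in Python/NumPy. The actual analysis was done with the mrVista tools in Matlab; the parameter names come from the text, but the MNI grid size, array layout, and placeholder data are assumptions.

<pre>
# Illustrative NumPy sketch of voxel-wise averaging of pRF parameters across
# subjects in MNI space. This is NOT the mrVista code used for the analysis;
# it assumes each subject's x, y, sigma, and r^2 maps have already been
# resampled to a common MNI grid. The grid size and data are placeholders.
import numpy as np

n_subjects = 5
mni_shape = (91, 109, 91)            # assumed 2 mm MNI template grid

# maps[p] stacks the 3-D map of parameter p for every subject (subject axis first).
param_names = ['x', 'y', 'sigma', 'r2']
maps = {p: np.random.rand(n_subjects, *mni_shape) for p in param_names}  # placeholder data

# Average each parameter across subjects at every voxel.
group_maps = {p: maps[p].mean(axis=0) for p in param_names}

print({p: group_maps[p].shape for p in param_names})   # each map keeps the MNI grid shape
</pre>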

Et cetera.


Results - What you found

Retinotopic models in native space

Some text. Some analysis. Some figures.

Retinotopic models in individual subjects transformed into MNI space

Some text. Some analysis. Some figures.

Retinotopic models in group-averaged data on the MNI template brain

Some text. Some analysis. Some figures. Maybe some equations.


Equations

If you want to use equations, you can use the same formats that are used on Wikipedia.
See the Wikimedia help on formulas for details.
This example of equation use is copied and pasted from Wikipedia's article on the DFT.

The sequence of N complex numbers <math>x_0, \ldots, x_{N-1}</math> is transformed into the sequence of N complex numbers <math>X_0, \ldots, X_{N-1}</math> by the DFT according to the formula:

<math>X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, \ldots, N-1</math>

where <math>i</math> is the imaginary unit and <math>e^{-2\pi i/N}</math> is a primitive N'th root of unity. (This expression can also be written in terms of a DFT matrix; when scaled appropriately it becomes a unitary matrix and the <math>X_k</math> can thus be viewed as coefficients of <math>x</math> in an orthonormal basis.)

The transform is sometimes denoted by the symbol <math>\mathcal{F}</math>, as in <math>\mathbf{X} = \mathcal{F}\{\mathbf{x}\}</math> or <math>\mathcal{F}(\mathbf{x})</math> or <math>\mathcal{F}\mathbf{x}</math>.

The inverse discrete Fourier transform (IDFT) is given by

<math>x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{2\pi i k n / N}, \qquad n = 0, \ldots, N-1.</math>
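
As a quick numerical check of the formulas above, the short Python/NumPy sketch below evaluates the DFT sum directly and compares the result with numpy.fft.fft; the test signal is an arbitrary example.

<pre>
# Evaluate the DFT sum X_k = sum_n x_n * exp(-2*pi*i*k*n/N) directly and
# compare the result with NumPy's built-in FFT of the same signal.
import numpy as np

N = 8
x = np.random.rand(N) + 1j * np.random.rand(N)    # arbitrary example signal

n = np.arange(N)
k = n.reshape((N, 1))
dft_matrix = np.exp(-2j * np.pi * k * n / N)      # the N x N DFT matrix
X = dft_matrix @ x                                # X_k from the formula above

print(np.allclose(X, np.fft.fft(x)))              # True: both give the same result
</pre>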

Retinotopic models in group-averaged data projected back into native space

Some text. Some analysis. Some figures.


Conclusions

Here is where you say what your results mean.

References - Resources and related work

References

Software

Appendix I - Code and Data

Code

File:CodeFile.zip

Data

zip file with my data

Appendix II - Work partition (if a group project)

Brian and Bob gave the lectures. Jon mucked around on the wiki.