Psych 284
Revision as of 20:46, 6 April 2011
This Psych 284 wiki page enables students to post code, comment on the organization of the project, and document aspects of their work.
Return to main teaching page.
Introduction
The course project is to build a computational infrastructure for modeling visual circuits and behavior.
Software organization
Results
Readings
Related work
Crazy ideas
Project ideas:
(A) How does the image amplitude spectrum change across the image processing pipeline?
Generate images containing noise patterns with particular amplitude distributions in the Fourier domain. For example, a series of noise patterns such that the amplitude of the spatial frequency components in pixel space can be described by a function 1/f^n, with n spanning a range of values, such as 0 (white noise), 1 (pink noise), 2 (strongly blurred), and so on. Then we calculate the amplitude spectrum for a series of images as they are processed through the visual system, including:
- 1. the pixel image
- 2. the irradiance image (assuming a typical LCD display)
- 3. the optical image (after passing through the eye's optics)
- 4. the cone absorption image
- 5. retinal ganglion cell image
We would like to know: how does the amplitude spectrum change across these different stages of processing?
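The first step of this idea — generating 1/f^n noise patterns and measuring their amplitude spectra at the pixel stage — can be sketched as follows. This is an illustrative NumPy sketch, not code from any course toolbox; the function names and image size are made up for this example, and the later pipeline stages (irradiance, optics, cones, RGC) would need an actual rendering tool.

```python
import numpy as np

def make_noise(size, n, seed=0):
    """Return a size x size noise image whose amplitude spectrum falls as 1/f^n."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(size)
    f = np.hypot(*np.meshgrid(fx, fx))        # spatial frequency of each FFT bin
    amp = np.zeros_like(f)
    amp[f > 0] = f[f > 0] ** -n               # 1/f^n amplitude; DC set to zero
    phase = rng.uniform(0, 2 * np.pi, f.shape)
    img = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return (img - img.mean()) / img.std()     # zero mean, unit variance

def amplitude_spectrum(img):
    """Radially averaged amplitude spectrum of a square image."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    size = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - size // 2, y - size // 2).astype(int)  # radius of each bin
    return np.bincount(r.ravel(), weights=F.ravel()) / np.bincount(r.ravel())

# Spectra for white (n=0), pink (n=1), and strongly blurred (n=2) noise:
spectra = {n: amplitude_spectrum(make_noise(64, n)) for n in (0, 1, 2)}
```

The same `amplitude_spectrum` measurement would then be repeated on the images produced at each later stage, so the spectra can be compared across the pipeline.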
(B) Image Classification
Take a series of standardized images of object categories (e.g., Caltech 101?), and transform each into a series of new images, so that we have (again):
- 1. the pixel image
- 2. the irradiance image (assuming a typical LCD display)
- 3. the optical image (after passing through the eye's optics)
- 4. the cone absorption image
- 5. retinal ganglion cell image
Then we can ask: how well can a standard image classification algorithm learn to distinguish image classes at each of these stages? Does classification get easier or harder across these processing steps?
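The comparison could be structured as in the sketch below. Everything here is a hypothetical stand-in: the "stages" are synthetic images (class-dependent mean plus noise, with noise growing at later stages) rather than real Caltech 101 renderings, and the classifier is a simple nearest-class-mean rule in pure NumPy rather than a full image classification pipeline. The point is only the scaffolding — render each stage, fit a classifier per stage, compare held-out accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, size = 200, 16
labels = rng.integers(0, 2, n_images)          # two hypothetical categories

def render_stage(noise_sd):
    """Stand-in rendering for one pipeline stage: class-dependent mean
    image plus Gaussian noise whose level grows at later stages."""
    return labels[:, None, None] * 0.5 + rng.normal(0.0, noise_sd,
                                                    (n_images, size, size))

def nearest_mean_accuracy(imgs):
    """Fit a nearest-class-mean classifier on half the images,
    report accuracy on the held-out half."""
    X = imgs.reshape(n_images, -1)
    train = np.arange(n_images) % 2 == 0       # even indices train, odd test
    means = np.stack([X[train & (labels == c)].mean(axis=0) for c in (0, 1)])
    dist = np.linalg.norm(X[~train, None, :] - means[None], axis=2)
    return (dist.argmin(axis=1) == labels[~train]).mean()

# Hypothetical noise levels standing in for the five processing stages.
stage_noise = {"pixel": 1, "irradiance": 2, "optical": 4,
               "cone absorptions": 8, "RGC": 16}
accuracy = {name: nearest_mean_accuracy(render_stage(sd))
            for name, sd in stage_noise.items()}
```

In a real version, `render_stage` would be replaced by the actual image-systems rendering at each stage, and the classifier by whatever standard algorithm the project settles on; the per-stage accuracy table is the quantity of interest either way.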