Psych 284

From Psych 221 Image Systems Engineering

Revision as of 20:52, 6 April 2011

This Psych 284 wiki page enables students to post code, comment on the organization of the project, and document aspects of their work.

Return to main teaching page.


Introduction

The course project is to build a computational infrastructure for modeling visual circuits and behavior.

Software organization

Results

Readings

Related work

Crazy ideas

Project ideas:

(A) How does the image amplitude spectrum change across the image processing pipeline?

Generate images containing noise patterns with particular amplitude distributions in the Fourier domain. For example, generate a series of noise patterns in which the amplitude of the spatial frequency components in pixel space can be described by a function 1/f^n, with n spanning a range of values, such as 0 (white noise), 1 (pink noise), 2 (highly blurred), and so on. Then we calculate the amplitude spectrum for this series of images as they are processed through the visual system, including:

  • 1. the pixel image
  • 2. the irradiance image (assuming a typical LCD display)
  • 3. the radiant (optical) image
  • 4. the cone absorption image(s) (there are three)
  • 5. retinal ganglion cell image (there may be many, depending on how many cell types are created)

We would like to know: how does the amplitude spectrum change across these different stages of processing?
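The noise generation and spectrum measurement described above can be sketched in Python. This is a minimal NumPy sketch, not the course toolbox: the function names, the filtered-white-noise construction, and the rotational averaging are all illustrative assumptions, and the later pipeline stages (optics, cone absorptions, ganglion cells) would come from the course software.

```python
import numpy as np

def noise_1_over_f(size, n, seed=None):
    # Build a size x size noise image whose Fourier amplitude falls off
    # as 1/f^n, by filtering Gaussian white noise in the Fourier domain.
    # n = 0 gives white noise, n = 1 pink noise, n = 2 heavily blurred noise.
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((size, size))
    fx, fy = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size))
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0            # placeholder to avoid divide-by-zero at DC
    filt = 1.0 / f**n
    filt[0, 0] = 0.0         # zero the DC term, so the image is zero-mean
    return np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

def radial_amplitude(img):
    # Rotationally averaged Fourier amplitude spectrum: average |FFT|
    # over annuli of integer radius around the DC component.
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    size = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - size // 2, y - size // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=amp.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

# Example: pink noise should show amplitude falling with spatial frequency.
img = noise_1_over_f(64, 1.0, seed=0)
spec = radial_amplitude(img)
```

The same `radial_amplitude` measurement could then be applied to the irradiance, optical, cone, and ganglion-cell images to compare the spectra across stages.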

(B) Image Classification

Take a series of standardized images of object categories (e.g., Caltech 101?), and transform each into a series of new images, so that we again have:

  • 1. the pixel image
  • 2. the irradiance image (assuming a typical LCD display)
  • 3. the radiant (optical) image
  • 4. the cone absorption image (there are three)
  • 5. retinal ganglion cell image (there may be many, depending on how many cell types are created)

Then we can ask how well a standard image classification algorithm can learn to distinguish the image classes from each of the separate images. Does classification get easier or harder across these processing steps?
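A toy version of this comparison can be sketched in Python with NumPy. Instead of Caltech 101 images and the real pipeline stages (which the course software would supply), this sketch uses two synthetic "categories" (vertical vs. horizontal gratings in noise), models the optics stage as a simple Fourier-domain low-pass filter, and scores a nearest-class-mean classifier at each stage; every name and parameter here is an illustrative assumption.

```python
import numpy as np

SIZE = 32  # illustrative image size

def make_image(label, rng):
    # Two toy "object categories": vertical (label 0) vs. horizontal
    # (label 1) gratings embedded in Gaussian white noise.
    grating = np.sin(2 * np.pi * 4 * np.arange(SIZE) / SIZE)
    img = (np.tile(grating, (SIZE, 1)) if label == 0
           else np.tile(grating[:, None], (1, SIZE)))
    return img + rng.normal(0.0, 1.0, (SIZE, SIZE))

def blur(img, cutoff):
    # Stand-in for the optics stage: Gaussian low-pass in the Fourier domain.
    fx, fy = np.meshgrid(np.fft.fftfreq(SIZE), np.fft.fftfreq(SIZE))
    f = np.hypot(fx, fy)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.exp(-(f / cutoff) ** 2)))

def accuracy(train, test, transform):
    # Nearest-class-mean classifier on flattened, transformed images:
    # assign each test image to the class with the closest training mean.
    means = {lab: np.mean([transform(im).ravel() for im in ims], axis=0)
             for lab, ims in train.items()}
    correct = total = 0
    for lab, ims in test.items():
        for im in ims:
            v = transform(im).ravel()
            pred = min(means, key=lambda k: np.linalg.norm(v - means[k]))
            correct += (pred == lab)
            total += 1
    return correct / total

rng = np.random.default_rng(0)
train = {lab: [make_image(lab, rng) for _ in range(20)] for lab in (0, 1)}
test = {lab: [make_image(lab, rng) for _ in range(20)] for lab in (0, 1)}

# Compare classification accuracy across (stand-in) processing stages.
stages = {
    "pixel image": lambda im: im,
    "optical image (low-pass)": lambda im: blur(im, 0.1),
}
for name, transform in stages.items():
    print(name, accuracy(train, test, transform))
```

For the real project, the `stages` dictionary would instead map each stage name to the corresponding transform in the course pipeline (irradiance, optics, cone absorptions, ganglion-cell responses), and the classifier would be a standard algorithm rather than nearest-mean.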