Psych 284


This Psych 284 wiki page enables students to post code, comment on the organization of the project, and document aspects of their work.

Return to main teaching page.


Introduction

The course project is to build a computational infrastructure for image sensing and processing. The project should include modeling visual stimuli and sensors. The sensors can be part of a digital imaging system (including optics, color filter arrays, photodetectors, image processing algorithms, etc.) or a human imaging system (including wavefront aberrations, cone photodetectors, retinal ganglion cell circuitry, and behavior).

Software organization

Results

Readings

Related work

Ideas

Project ideas:

(A) How does the image amplitude spectrum change across the image processing pipeline?

Generate images containing noise patterns with particular amplitude distributions in the Fourier domain. For example, create a series of noise patterns such that the amplitude of the spatial frequency components in pixel space falls off as 1/f^n, with n spanning a range of values, such as 0 (white noise), 1 (pink noise), 2 (highly blurred), and so on. Then calculate the amplitude spectrum for the series of images as they are processed through the visual system, including:

  1. the pixel image
  2. the irradiance image (assuming a typical LCD display)
  3. the radiant (optical) image
  4. the cone absorption image(s) (there are three)
  5. the retinal ganglion cell image(s) (there may be many, depending on how many cell types are created)

We would like to know: how does the amplitude spectrum change across these different stages of processing?
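
As a concrete starting point, here is a minimal sketch of the stimulus-generation and measurement steps in plain Python/NumPy, independent of any course toolbox. The function names (noise_1_over_f, radial_amplitude) are illustrative inventions, and the later pipeline stages would each pass their own image to the same spectrum measurement.

    import numpy as np

    def noise_1_over_f(size, n, rng=None):
        """Return a size x size noise image whose amplitude spectrum falls as 1/f^n."""
        rng = np.random.default_rng() if rng is None else rng
        fx = np.fft.fftfreq(size)
        f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)
        f[0, 0] = f[0, 1]                      # avoid dividing by zero at DC
        amplitude = 1.0 / f ** n
        phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
        # Taking the real part symmetrizes the spectrum; the 1/f^n shape
        # is preserved only approximately, which is fine for a sketch.
        img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
        return (img - img.min()) / (img.max() - img.min())   # scale to [0, 1]

    def radial_amplitude(img):
        """Rotationally averaged amplitude spectrum, for comparing stages."""
        amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        half = img.shape[0] // 2
        y, x = np.indices(img.shape)
        r = np.hypot(x - half, y - half).astype(int)
        sums = np.bincount(r.ravel(), amp.ravel())
        counts = np.bincount(r.ravel())
        return sums / np.maximum(counts, 1)

    for n in (0, 1, 2):                        # white, pink, strongly blurred
        spectrum = radial_amplitude(noise_1_over_f(256, n))
        print(f"n={n}: amplitude at low/mid frequency = "
              f"{spectrum[2]:.2f} / {spectrum[64]:.2f}")

Comparing the radial_amplitude output across the five stages addresses the question above: each stage either preserves, steepens, or flattens the 1/f^n falloff.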

(B) Image Classification

Take a series of standardized images of object categories (e.g., Caltech 101?), and transform each into a series of new images, so that we again have:

  1. the pixel image
  2. the irradiance image (assuming a typical LCD display)
  3. the radiant (optical) image
  4. the cone absorption image(s) (there are three)
  5. the retinal ganglion cell image(s) (there may be many, depending on how many cell types are created)

Then we can ask how well a standard image classification algorithm learns to distinguish the image classes from each of these separate images. Does classification get easier or harder across the processing steps?
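
One hedged sketch of this comparison, assuming scikit-learn supplies the off-the-shelf classifier: the same model is cross-validated on one feature matrix per stage, so the accuracies are directly comparable. The matrices below are random stand-ins that would be replaced by the flattened images from each stage.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_images, n_classes, n_pixels = 200, 5, 32 * 32
    labels = rng.integers(0, n_classes, n_images)

    # One feature matrix per pipeline stage, flattened to one row per image.
    # Random stand-ins here; substitute the actual stage outputs.
    stages = {
        "pixel": rng.normal(size=(n_images, n_pixels)),
        "irradiance": rng.normal(size=(n_images, n_pixels)),
        "optical": rng.normal(size=(n_images, n_pixels)),
        "cone absorptions": rng.normal(size=(n_images, 3 * n_pixels)),
        "ganglion cells": rng.normal(size=(n_images, n_pixels)),
    }

    for name, X in stages.items():
        clf = LogisticRegression(max_iter=1000)
        accuracy = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name:20s} cross-validated accuracy: {accuracy:.2f}")

Chance-level accuracy on the stand-ins simply confirms the harness runs; differences across stages only appear once the real pipeline outputs are substituted.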

(C) Hyperspectral Imaging

Select one of the following image data problem domains:

  1. Porcine tissue - classification of tissue type (spleen, stomach, etc.) or tissue change (ischemia)
  2. Paintings - classification of painting pigment
  3. Faces - classification of skin pigment components (melanin and hemoglobin)
  4. Landscape - separation of surface reflectance from scene illumination (including effects of haze)

Answer the following questions:

  1. What parts of the hyperspectral signal are important for solving the classification problems? (See the sketch after this list.)
  2. How would you design a sensor to optimally capture this information?
  3. Once you have captured the relevant information, how do you optimally render it on the display?
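
For question 1, one simple first analysis, sketched below under the assumption that plain NumPy is available, is a principal components decomposition of the per-pixel spectra: if a few smooth components explain nearly all the variance, a sensor with a few well-chosen spectral channels may suffice. The cube here is synthetic; the porcine-tissue, painting, face, or landscape data would replace it.

    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols, n_bands = 64, 64, 31            # e.g., 400-700 nm in 10 nm steps
    wavelengths = np.linspace(400, 700, n_bands)

    # Synthetic cube: every pixel spectrum is a mixture of three smooth
    # spectral basis functions plus a little noise; substitute real data.
    basis = np.stack([np.exp(-((wavelengths - mu) / 60.0) ** 2)
                      for mu in (450, 550, 650)])            # 3 x n_bands
    mixing = rng.random((rows * cols, 3))
    spectra = mixing @ basis + 0.01 * rng.normal(size=(rows * cols, n_bands))

    # PCA via SVD on the mean-centered spectra (one spectrum per pixel).
    centered = spectra - spectra.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("variance explained by first 5 components:", np.round(explained[:5], 3))

Running the same decomposition within and between classes would then indicate which spectral components actually separate the categories, which feeds directly into the sensor-design question.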