Main Page
Revision as of 00:58, 22 November 2009

This wiki houses projects, links, and other information regarding [https://coursework.stanford.edu/portal/site/F09-PSYCH-204A-01 PSYCH-204A - Human Neuroimaging Methods] and PSYCH-221 - Applied Vision and Image Systems Engineering.

For information about our lab's research, please consult the lab wiki.

= Psych 204 =

== Tutorials ==

== Project Suggestions ==

Students present projects for both [http://scien.stanford.edu/class/psych221/projectinfo/PreviousYears.htm PSYCH-221 - Applied Vision and Image Systems Engineering] and [[Psych204-Projects]].

== Write-up Information (Templates) ==

== Project archives ==

= Psych 221 =

== Tutorials ==

== Project Suggestions ==

'''Visibility of Font Contours''' ISET has tools for modeling scenes, cameras, displays, and the retinal response patterns of the human eye. We will use these tools to predict 1) the irradiance image of a displayed character and 2) the retinal cone photoreceptor response. We will then apply basic edge detectors to the photoreceptor responses under various noise conditions, perhaps including eye movements. This will provide us with a measure of the perceived sharpness and continuity of the font on the display under specified viewing conditions. Project consultants: Joyce Farrell and Brian Wandell
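
ISET itself is a MATLAB toolbox, so the following is only a language-neutral sketch of the final step: a Sobel edge detector applied to a synthetic, photon-noise-limited cone response map. The image size, absorption counts, and glyph geometry are illustrative assumptions, not ISET output.

<pre>
# Minimal sketch: edge detection on a noisy photoreceptor response map.
# Stand-in for ISET output; all numbers are illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Hypothetical mean cone absorptions: a dark glyph edge on a bright field.
response = np.full((128, 128), 1000.0)
response[:, 64:] = 200.0                     # vertical contour of the glyph

# Photon (shot) noise is Poisson: variance equals the mean absorption count.
noisy = rng.poisson(response).astype(float)

# Basic Sobel edge detector applied to the photoreceptor responses.
gx = ndimage.sobel(noisy, axis=1)
gy = ndimage.sobel(noisy, axis=0)
edges = np.hypot(gx, gy)

# One crude visibility score: edge strength at the contour vs. elsewhere.
contour = edges[:, 62:67].mean()
background = edges[:, :60].mean()
print(f"contour-to-background edge ratio: {contour / background:.1f}")
</pre>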

'''Noise in the digital camera imaging pipeline''' Color imaging sensors used in digital cameras acquire three spatially subsampled color channels with a color filter array (CFA) mosaic. The final image is formed by demosaicking these color channels and transforming the interpolated image to a color space suitable for display. There are multiple stages in this imaging pipeline; several of these stages are nonlinear. The effect of these imaging pipeline stages on image noise is complex. In this project we will study the propagation of noise in the imaging pipeline via simulations in ISET. Some specific questions we'd like to address are a) the effect of the order of image processing operations on visible noise in the final image, and b) the improvement offered by simultaneously performing some imaging pipeline operations (e.g., joint demosaicking and denoising). Project consultants: Manu Parmar and Steve Lansel
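
As a toy illustration of how the pipeline reshapes noise (not an ISET simulation), the sketch below bilinearly demosaics a Bayer green channel from a flat, noisy exposure: interpolated pixels end up with roughly half the noise standard deviation of measured ones, so i.i.d. sensor noise becomes spatially structured. The flat scene, noise level, and kernel are all assumptions.

<pre>
# Toy sketch: bilinear demosaicking turns i.i.d. sensor noise into
# spatially structured noise. All parameters are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
h = w = 256

# Bayer green sites form a checkerboard; sample a flat gray scene + noise.
green_mask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == 0
raw = 0.5 + rng.normal(0.0, 0.02, size=(h, w))
mosaic = np.where(green_mask, raw, 0.0)

# Bilinear interpolation: missing greens are the mean of their 4 neighbors.
kernel = np.array([[0.0, 0.25, 0.0],
                   [0.25, 1.0, 0.25],
                   [0.0, 0.25, 0.0]])
green = ndimage.convolve(mosaic, kernel, mode="wrap")

print(f"noise std at measured pixels    : {green[green_mask].std():.4f}")
print(f"noise std at interpolated pixels: {green[~green_mask].std():.4f}")
</pre>

Whether a denoiser runs before or after this step changes which of these two noise populations it sees, which is one version of the ordering question above.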


'''Resolution in color filter array images''' The many megapixels available on modern imaging sensors offer the opportunity to trade off spatial resolution for other desirable measurements. For instance, a color filter array with more than 3 color filters may offer improved color reproduction and the ability to render scenes under arbitrary illuminants. It is important to understand the real resolution trade-off in such schemes. In this project we will address this issue via simulations in ISET. We will consider the effect on final image resolution of some novel image acquisition schemes (e.g., interleaved imaging) by considering the full imaging pipeline (imaging lens, pixel size, color filter efficiencies, etc.). Project consultants: Manu Parmar, Steve Lansel and Brian Wandell
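
A back-of-envelope version of the resolution question: per-channel sampling density, and hence the pre-demosaicking Nyquist limit, falls as the mosaic is divided among more filters. The pixel pitch, layouts, and the square-root density approximation below are rough assumptions for illustration.

<pre>
# Rough per-channel Nyquist limits for two hypothetical CFA layouts.
pitch_um = 1.4                          # assumed pixel pitch (microns)
full_nyquist = 1.0 / (2.0 * pitch_um)   # cycles/micron for the full mosaic

layouts = {
    "Bayer RGGB": {"R": 1 / 4, "G": 1 / 2, "B": 1 / 4},
    "4-color CFA": {"C1": 1 / 4, "C2": 1 / 4, "C3": 1 / 4, "C4": 1 / 4},
}

for name, channels in layouts.items():
    print(name)
    for ch, fraction in channels.items():
        # Linear sampling density per axis scales as sqrt(pixel fraction).
        nyq = full_nyquist * fraction ** 0.5
        print(f"  {ch}: ~{nyq:.3f} cycles/um before demosaicking")
</pre>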


'''Color balancing pipeline''' If displayed without any processing, the raw image data acquired under different illuminants will appear to have an unnatural color cast. Images taken under tungsten illumination will appear too yellow; images under fluorescent illumination generally appear too green. Color balancing algorithms are designed to correct these images, transforming the raw data such that the unwanted color cast is eliminated. These images appear more correct to human viewers because the human visual system also performs a color balancing transformation as we move between illumination conditions. Despite work at Stanford on this problem for nearly three decades, there is no integrated suite of software tools for color balancing algorithms. This could be the year that you help us fix this problem. Project consultants: Joyce Farrell, Jeff DiCarlo and Brian Wandell
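
As one placeholder for a stage in such a suite, here is a minimal gray-world balance: estimate the illuminant from the per-channel means, then apply a diagonal (von Kries-style) correction. The scene and the tungsten-like color cast are made-up test values.

<pre>
# Minimal sketch of gray-world color balancing on linear RGB data.
import numpy as np

def gray_world_balance(raw_rgb):
    """raw_rgb: H x W x 3 linear camera data in [0, 1]."""
    # Gray-world assumption: the average scene reflectance is achromatic,
    # so the per-channel means estimate the illuminant's color cast.
    illuminant = raw_rgb.reshape(-1, 3).mean(axis=0)
    gains = illuminant.mean() / illuminant      # diagonal correction
    return np.clip(raw_rgb * gains, 0.0, 1.0)

# Made-up test case: a gray wall under a yellowish (tungsten-like) light.
rng = np.random.default_rng(2)
scene = np.full((64, 64, 3), 0.5)
cast = np.array([1.2, 1.0, 0.6])                # assumed illuminant bias
raw = np.clip(scene * cast + rng.normal(0.0, 0.01, scene.shape), 0.0, 1.0)

balanced = gray_world_balance(raw)
print("channel means before:", raw.reshape(-1, 3).mean(axis=0).round(3))
print("channel means after :", balanced.reshape(-1, 3).mean(axis=0).round(3))
</pre>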


'''Surfaces, lights and cameras: A web database''' There are a number of online resources for surface reflectances, illuminants, and digital camera sensors (see below). Each of the existing databases has some strengths and weaknesses. We would like you to design a web database for surfaces, illuminants, and camera sensors that improves upon the current set of pages. One improvement would be to offer some functionality. For example, suppose a user has a camera with a known sensor spectral sensitivity and a known light source – could you tell the user which surface reflectance functions in the database could have generated specific RGB values? Suppose the person took a picture of a wall with a flash; could you provide an estimate of the paint reflectance function on the wall, or possibly the name of the paint? Could the site help users generate test targets that help evaluate camera accuracy in different environments, such as a chart made of natural reflectances, or paint reflectances, or automotive reflectances? The web site should have a nice user interface, some back-end functionality for simple computations, and a way for users to volunteer new datasets.
* http://www.cs.sfu.ca/~colour/data/colour_constancy_synthetic_test_data/index.html
* ftp://ftp.eos.ncsu.edu/pub/eos/pub/spectra/
* http://www.cs.utah.edu/~bes/graphics/spectra/
* http://www1.cs.columbia.edu/CAVE/databases/
* http://www.graphics.cornell.edu/online/measurements/
Project consultants: Joyce Farrell and Janice Chen
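
The reflectance query above reduces to the standard spectral rendering sum, rgb_c = Σ_λ S_c(λ) E(λ) R(λ). A minimal sketch of that back-end computation, using synthetic Gaussian spectra rather than any database's real entries:

<pre>
# Sketch of the site's core computation: predicted camera RGB from sensor
# sensitivities, an illuminant, and a reflectance. All spectra are synthetic.
import numpy as np

wl = np.arange(400, 701, 10, dtype=float)        # wavelength samples (nm)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

sensors = np.stack([gaussian(610, 40),           # assumed R, G, B curves
                    gaussian(540, 40),
                    gaussian(460, 40)])
illuminant = 0.5 + wl / 700.0                    # yellowish, tungsten-like
reflectance = gaussian(580, 60)                  # a hypothetical paint

rgb = sensors @ (illuminant * reflectance)       # rgb_c = sum S_c * E * R
print("predicted RGB:", rgb.round(2))

# Query idea: which stored reflectance best explains a user's RGB values?
candidates = np.random.default_rng(3).uniform(0, 1, (100, wl.size))
predictions = candidates @ (illuminant * sensors).T      # 100 x 3
best = int(np.argmin(np.linalg.norm(predictions - rgb, axis=1)))
print("closest candidate reflectance:", best)
</pre>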


'''Camera image quality judgments''' The ISET camera simulator was designed so that engineers can simulate properties of imaging sensors and visualize and quantify image quality. This project uses ISET to determine the effect that different optical, sensor, and image processing properties have upon perceived image quality. Image metrics will include sharpness, color accuracy, and noise visibility. These properties will be evaluated using 1) color test charts, including the Macbeth ColorChecker and others, 2) the ISO 12233 slanted edge metric, and 3) various measures of image SNR, such as Minimum Photometric Exposure (30). The project will include informal preference ratings in which people's judgments of the simulated images are compared with these metrics. Project consultants: Joyce Farrell and Jiajing Yu
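
As one small example of the color-accuracy metric, the sketch below scores a simulated chart rendering by the CIE76 ΔE*ab distance between simulated and reference patches; the Lab values here are invented, not ColorChecker data.

<pre>
# Sketch of a color-accuracy metric: mean CIE76 Delta E over chart patches.
import numpy as np

def delta_e(lab_a, lab_b):
    """CIE76 color difference between corresponding Lab rows."""
    return np.linalg.norm(np.asarray(lab_a) - np.asarray(lab_b), axis=-1)

# Invented Lab values for four patches: reference vs. simulated rendering.
reference = np.array([[50, 20, -10], [70, -5, 30], [30, 0, 0], [90, 2, 8]], float)
simulated = np.array([[52, 18, -14], [68, -2, 27], [33, 1, -2], [88, 4, 10]], float)

errors = delta_e(simulated, reference)
print("per-patch Delta E:", errors.round(2))
print("mean Delta E     :", float(errors.mean().round(2)))
</pre>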


'''Removing haze from aerial photographs''' The image quality of high resolution images captured at high altitudes is degraded by atmospheric haze. This project will consider the design of new imaging systems to estimate and remove the contribution of haze at each pixel in the high resolution image. One idea is to simultaneously capture a high resolution aerial image and multiple low resolution polarized aerial images. The project team will collaborate on the design of a camera rig to take the polarized and non-polarized shots. This rig will then be placed in a plane to capture the aerial images. Given the data, consider how to use these multiple images to estimate and subtract the haze signal from the non-polarized high resolution image with little loss of sensitivity. (Previous attempts to remove atmospheric haze can be found in Fattal, Schechner et al., and Tan.) Project consultant: Iain Mcclatchie [iainm@google.com]
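
A minimal sketch of the polarization idea (following Schechner et al.'s formulation): airlight is partially polarized, so two exposures through orthogonal polarizer angles let you estimate and subtract it. The degree of polarization p and horizon airlight a_inf below are assumed calibration values, and the two-element arrays stand in for whole images.

<pre>
# Polarization-difference dehazing sketch (after Schechner et al.).
import numpy as np

p = 0.4        # assumed degree of polarization of the airlight
a_inf = 0.9    # assumed airlight at infinite distance (horizon value)

# Per-pixel intensities through the two polarizer orientations.
i_max = np.array([0.70, 0.55])       # airlight-passing orientation
i_min = np.array([0.50, 0.45])       # airlight-blocking orientation
i_total = i_max + i_min              # equivalent non-polarized image

airlight = (i_max - i_min) / p       # estimated airlight at each pixel
direct = i_total - airlight          # direct (haze-free) transmission
transmission = 1.0 - airlight / a_inf
radiance = direct / transmission     # recovered scene radiance

print("estimated airlight:", airlight.round(3))
print("recovered radiance:", radiance.round(3))
</pre>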


'''Displays, gamuts and gamut transformations''' Projection displays use different rendering methods depending on the image content. Text and graphics are displayed at higher luminance levels but with smaller color gamuts. Video images are displayed using the widest possible gamut, but this reduces the overall brightness. This project will analyze color gamuts already measured for different projection displays in different rendering modes. We will investigate the relationship between color gamuts, image content, and perceived image quality. Project consultants: Joyce Farrell, Louis Silverstein and Karl Lang
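
One quantity the analysis needs is the gamut area spanned by each mode's primaries in CIE xy chromaticity, which the shoelace formula gives directly. The primary coordinates below are illustrative, not the measured values.

<pre>
# Sketch: compare gamut areas of two rendering modes in CIE xy chromaticity.
import numpy as np

def gamut_area(xy):
    """Shoelace area of the polygon spanned by the display primaries."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Illustrative primaries (x, y) for a wide video mode and a text mode.
video_mode = np.array([[0.64, 0.33], [0.30, 0.60], [0.15, 0.06]])
text_mode = np.array([[0.58, 0.35], [0.33, 0.52], [0.18, 0.12]])

print(f"video-mode gamut area: {gamut_area(video_mode):.4f}")
print(f"text-mode gamut area : {gamut_area(text_mode):.4f}")
</pre>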


'''Tracking individually marked ants''' A colony of ants exhibits coordinated behavior that is based on individual-based rules without central control. In addition, not all ants are the same. Some ants are lazy, others very busy; some are jacks of all trades and others are masters of one. To examine how individual variation in ants contributes to the overall organization of colony behavior, we will use paint marks to individually identify and track the behavior of all ants in a colony. The project proposed for this class is to 1) predict the camera RGB values given the spectral sensitivity of the camera, the spectral power of the light, and the spectral reflectance of objects (paints) in the scene, to determine the most discriminable colors and color combinations that should be used for tagging the ants, and 2) develop an algorithm that identifies each individual ant by her color code in each frame of a video sequence. Project consultants: Joyce Farrell and Noa Pinter-Wollman (Biology Department)
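
For step 1, the same spectral prediction sketched earlier on this page applies: predict each candidate paint's camera RGB, then rank paint pairs by their separation in RGB space. The sketch below uses synthetic Gaussian spectra as stand-ins for measured sensor, light, and paint curves.

<pre>
# Sketch of step 1: rank candidate paint pairs by RGB discriminability.
import numpy as np

wl = np.arange(400, 701, 10, dtype=float)
gauss = lambda peak, width: np.exp(-0.5 * ((wl - peak) / width) ** 2)

sensors = np.stack([gauss(610, 40), gauss(540, 40), gauss(460, 40)])
light = np.ones_like(wl)                      # assumed flat illuminant
paints = np.stack([gauss(450, 30), gauss(550, 30),
                   gauss(620, 30), gauss(500, 80)])

rgb = paints @ (light * sensors).T            # predicted RGB per paint

# Pairwise separations; the largest are the safest tag-color choices.
dist = np.linalg.norm(rgb[:, None, :] - rgb[None, :, :], axis=-1)
i, j = np.unravel_index(np.argmax(dist), dist.shape)
print(f"most discriminable paint pair: {i} and {j} (distance {dist[i, j]:.2f})")
</pre>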

== Write-up Information (Templates) ==

=== Write-up Requirements ===

This page contains information about what should be in your report, technical guidelines, and how to submit your report.

=== Write-up Content ===

The purpose of the write-up is to document the methods, results, and conclusions of your class project.

If your project involved writing any non-trivial source code or processing scripts, you should make this available. Be sure to describe the purpose of your code and if possible, edit the code for clarity. The purpose of placing the code online is to allow others to verify your methods and to learn from your ideas.

You may include your in-class presentation slides as part of your write-up, but they should not be the entire write-up, since much of your presentation's information is not on the slides and will come from what you say.

At a minimum, projects should contain the following:

* '''Introduction''': Motivate the problem. Describe what has been done in the past. What is the problem? What have people tried?
* '''Methods''': Describe the techniques used to measure data and/or the source code algorithms. Did you measure something? How? Did you develop code? What utilities/algorithms did you use?
* '''Results''': Show relevant graphs and/or images, and explain them.
* '''Conclusions''': Describe what you learned. What worked? What didn't? Why? What should someone next year try?
* '''References''': List all references. Include links if papers were found online.
* '''Appendix I''': Link in all source code, test images, etc., and give a description of each link.

In some cases, your acquired data may be too large to store practically. In this case, use your judgement (or consult one of us) and link only the most relevant data.

* '''Appendix II''' (for groups only): Work breakdown. Explain how the project work was divided among group members.

== Project archives ==

Students present projects for both [http://scien.stanford.edu/class/psych221/projectinfo/PreviousYears.htm PSYCH-221 - Applied Vision and Image Systems Engineering] and [[Psych204-Projects]].