This wiki houses projects, links, and other information regarding [https://coursework.stanford.edu/portal/site/F09-PSYCH-204A-01 PSYCH-204A - Human Neuroimaging Methods] and [https://coursework.stanford.edu/portal/site/W09-PSYCH-221-01 PSYCH-221 - Applied Vision and Image Systems].


For information about our lab's research, please consult the [http://white.stanford.edu/newlm/index.php/Main_Page lab wiki].


= Psych 221 Project Information =


== Project Suggestions ==
* [[Project Guidelines]] describes the project write-up rubric.
* [[Psych221 Project Suggestions]] for this and previous years.


== Project writeups ==
* [[Psych221-Projects-2023-Fall]]


== Past project writeups ==


* [[Psych221-Projects-2022-Fall]]
* [[Psych221-Projects-2021-Fall]]
* [[Psych221-Projects-2020-Fall]]
* [[Psych221-Projects-2019-Fall]]
* [[Psych221-Projects-2018-Fall]]
* [[Psych221-Projects-2017-Fall]]
* [[Psych221-Projects-2016-Fall]]
* [[Psych221-Projects-2015-Fall]]
* [[Psych221-Projects-2015-Winter]]
* [[Psych221-Projects-2014]]
* [[Psych221-Projects-2013]]
* [[Psych221-Projects-2012]]
* [[Psych221-Projects-2011]]


Even older project pages (from acorn):


* [http://acorn.stanford.edu/psych221/projects/2010/index.html Projects 2009-10]
* [http://acorn.stanford.edu/psych221/projects/2009/index.htm Projects 2008-09]
* [http://acorn.stanford.edu/psych221/projects/2008/index.htm Projects 2007-08]
* [http://acorn.stanford.edu/psych221/projects/2007/index.htm Projects 2006-07]
* [http://acorn.stanford.edu/psych221/projects/2006/index.htm Projects 2005-06]
* [http://acorn.stanford.edu/psych221/projects/2005/index.htm Projects 2004-05]
* [http://acorn.stanford.edu/psych221/projects/2003/index.htm Projects 2002-03]
* [http://acorn.stanford.edu/psych221/projects/2002/index.htm Projects 2001-02]
* [http://acorn.stanford.edu/psych221/projects/2000/index.html Projects 1999-00]
* [http://acorn.stanford.edu/psych221/projects/1999/index.html Projects 1998-99]
* [http://acorn.stanford.edu/psych221/projects/1998/index.html Projects 1997-98]


= Helpful MediaWiki pages =
* Consult [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents MediaWiki User's Guide] for information on editing the wiki and using wiki software.
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/postorius/lists/mediawiki-announce.lists.wikimedia.org/ MediaWiki release mailing list]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]
* [https://www.mediawiki.org/wiki/Cheatsheet MediaWiki formatting cheat sheet]


= Past project suggestions =

== Visibility of Font Contours ==
ISET has tools for modeling scenes, cameras, displays, and the retinal response patterns of the human eye. We will use these tools to predict 1) the irradiance image of a displayed character and 2) the retinal cone photoreceptor response. We will then apply basic edge detectors to the photoreceptor responses under various noise conditions, perhaps including eye movements. This will provide us with a measure of the perceived sharpness and continuity of the font on the display under specified viewing conditions.
Project consultants: Joyce Farrell and Brian Wandell
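
As a rough illustration of the edge-detection step (ISET itself is a MATLAB toolbox; the standalone Python/NumPy sketch below only mimics the idea, and every number in it is an invented placeholder), one can blur a toy "retinal image" of a character stroke, add Poisson photon noise, and score the edge response:

<pre>
# A standalone toy version of the pipeline's last step (nothing here is ISET).
# A dark stroke on a bright background stands in for the retinal image of a
# character; Poisson draws stand in for cone photon noise; a Sobel detector
# probes edge visibility.
import numpy as np

rng = np.random.default_rng(0)

# Toy "retinal image": mean cone absorptions per exposure (invented units).
img = np.full((64, 64), 100.0)
img[:, 28:36] = 40.0                      # the vertical stroke of a glyph

# Crude optical blur: repeated 4-neighbour averaging.
for _ in range(3):
    img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

# Photon noise: absorptions are approximately Poisson distributed.
absorptions = rng.poisson(img).astype(float)

# Basic 3x3 Sobel edge detector, implemented with shifts.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def conv3(a, k):
    out = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * np.roll(np.roll(a, 1 - i, 0), 1 - j, 1)
    return out

edges = np.hypot(conv3(absorptions, kx), conv3(absorptions, kx.T))

# One crude "edge continuity" score: response near the stroke's contours
# relative to the response in blank background. Purely illustrative.
score = edges[:, 26:38].mean() / edges[:, :20].mean()
print(f"edge contrast score: {score:.2f}")
</pre>

In the actual project, ISET's scene, optics, and cone-mosaic models would replace the toy image and noise draws, and eye movements could jitter the image between exposures.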


== Noise in the digital camera imaging pipeline ==
Color imaging sensors used in digital cameras acquire three spatially subsampled color channels with a color filter array (CFA) mosaic. The final image is formed by demosaicking these color channels and transforming the interpolated image to a color space suitable for display. There are multiple stages in this imaging pipeline; several of these stages are nonlinear. The effect of these imaging pipeline stages on image noise is complex. In this project we will study the propagation of noise in the imaging pipeline via simulations in ISET. Specific issues we'd like to address include a) the effect of the order of image processing operations on visible noise in the final image, and b) the improvement offered by simultaneously performing some imaging pipeline operations (e.g., joint demosaicking and denoising).
Project consultants:  Manu Parmar and Steve Lansel
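
Question a) can be previewed with a toy Monte Carlo simulation. The sketch below is plain Python/NumPy with invented stage models (a box-filter "denoiser" and a display gamma), not the ISET pipeline; it only shows the kind of order-of-operations comparison the project would run with realistic stages:

<pre>
# A toy Monte Carlo look at question a): does the order of pipeline stages
# change the noise in the output? The stages here are a box-filter "denoiser"
# and a display gamma, applied to a flat synthetic raw frame.
import numpy as np

rng = np.random.default_rng(1)
h = w = 64

def box_denoise(x):
    """Five-point box filter, a stand-in for a real denoiser."""
    return (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
              + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0

def gamma(x):
    """Nonlinear display-encoding stage."""
    return np.clip(x, 0.0, None) ** (1 / 2.2)

results = []
for _ in range(200):
    raw = 0.10 + rng.normal(0.0, 0.05, (h, w))   # flat field + sensor noise
    a = gamma(box_denoise(raw))   # order A: denoise, then nonlinearity
    b = box_denoise(gamma(raw))   # order B: nonlinearity, then denoise
    results.append((a.std(), b.std()))

a_std, b_std = np.mean(results, axis=0)
print(f"denoise -> gamma output noise: {a_std:.4f}")
print(f"gamma -> denoise output noise: {b_std:.4f}")
</pre>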




== Resolution in color filter array images ==
The many megapixels available on modern imaging sensors offer the opportunity to trade off spatial resolution for other desirable measurements. For instance, a color filter array with more than 3 color filters may offer improved color reproduction and the ability to render scenes under arbitrary illuminants. It is important to understand the real resolution trade-off in such schemes. In this project we will address this issue via simulations in ISET. We will consider the effect on final image resolution of some novel image acquisition schemes (e.g., interleaved imaging) by considering the full imaging pipeline (imaging lens, pixel size, color filter efficiencies, etc.).
Project consultants:  Manu Parmar, Steve Lansel and Brian Wandell
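
A back-of-the-envelope version of the trade-off: the more filter types in the repeating CFA block, the more sparsely each channel is sampled, and the lower that channel's Nyquist frequency. The sketch below (Python; the pixel pitch and the multi-filter layouts are assumed, not from any real sensor) makes the arithmetic explicit:

<pre>
# Per-channel Nyquist frequency from sample spacing on a square mosaic.
# The 4- and 8-filter layouts are hypothetical, just to show the trend.
pixel_pitch_um = 1.4   # assumed pixel pitch

def channel_nyquist(pitch_um, samples_per_block, block_area_px):
    # Effective spacing of one channel's samples ~ sqrt(area per sample).
    spacing = pitch_um * (block_area_px / samples_per_block) ** 0.5
    return 1000.0 / (2.0 * spacing)   # cycles per mm

# (name, samples of this channel per repeating block, block size in pixels)
layouts = [
    ("Bayer green (2 of 4)", 2, 4),
    ("Bayer red/blue (1 of 4)", 1, 4),
    ("4-filter array (1 of 4)", 1, 4),
    ("8-filter array (1 of 8)", 1, 8),
]
for name, s, area in layouts:
    print(f"{name:26s} Nyquist ~ {channel_nyquist(pixel_pitch_um, s, area):6.1f} cy/mm")
</pre>

The full ISET simulation goes well beyond this, since optics, pixel size, and the demosaicking algorithm all move the effective resolution away from these sampling limits.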




== Color balancing pipeline ==
If displayed without any processing, the raw image data acquired under different illuminants will appear to have an unnatural color cast.  Images taken under tungsten illumination will appear too yellow; images under fluorescent illumination generally appear too green.  Color balancing algorithms are designed to correct these images, transforming the raw data such that the unwanted color cast is eliminated.  These images appear more correct to human viewers because the human visual system also performs a color balancing transformation as we move between illumination conditions. Despite work at Stanford on this problem for nearly three decades, there is no integrated suite of software tools for color balancing algorithms.  This could be the year that you help us fix this problem.
Project consultants:  Joyce Farrell, Jeff DiCarlo and Brian Wandell
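
As one example of the kind of algorithm such a suite would collect, here is a minimal gray-world balance: scale each channel so the image mean comes out neutral, a von Kries-style diagonal transform. The sketch is Python/NumPy, and the "tungsten cast" is simulated by an invented per-channel scaling:

<pre>
# A minimal gray-world white-balance sketch: one classic baseline a color
# balancing suite would include.
import numpy as np

def gray_world(raw_rgb):
    """raw_rgb: float array (h, w, 3) of linear sensor values."""
    means = raw_rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # diagonal (von Kries) correction
    return np.clip(raw_rgb * gains, 0.0, 1.0)

# Toy example: a tungsten-like yellow cast pushed onto a neutral scene.
rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 0.8, (16, 16, 3)) * np.array([1.0, 0.8, 0.5])
balanced = gray_world(scene)
print("channel means before:", scene.reshape(-1, 3).mean(0).round(3))
print("channel means after: ", balanced.reshape(-1, 3).mean(0).round(3))
</pre>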




== Surfaces, lights and cameras: A web database ==
There are a number of online resources for surface reflectances, illuminants, and digital camera sensors (see below). Each of the existing databases has some strengths and weaknesses. We would like you to design a web database for surfaces, illuminants and camera sensors that improves upon the current set of pages. One improvement would be to offer some functionality. For example, suppose a user has a camera with a known sensor spectral sensitivity and a known light source – could you tell the user which surface reflectance functions in the database could have generated specific RGB values? Suppose the person took a picture of a wall with a flash; could you provide an estimate of the paint reflectance function on the wall, or possibly the name of the paint? Could the site help users generate test targets that help evaluate camera accuracy in different environments, such as a chart made of natural reflectances, or paint reflectances, or automotive reflectances, etc.? The website should have a nice user interface, some back-end functionality for simple computations, and a way for users to volunteer new datasets.
* http://www.cs.sfu.ca/~colour/data/colour_constancy_synthetic_test_data/index.html
* ftp://ftp.eos.ncsu.edu/pub/eos/pub/spectra/
* http://www.cs.utah.edu/~bes/graphics/spectra/
* http://www1.cs.columbia.edu/CAVE/databases/
* http://www.graphics.cornell.edu/online/measurements/
Project consultants:  Joyce Farrell and Janice Chen
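
Much of the back-end functionality above reduces to one computation: predicted sensor RGB as the wavelength integral of reflectance x illuminant x channel sensitivity. The sketch below (Python/NumPy; all spectra are made-up smooth curves, not entries from the databases above) shows that computation plus a nearest-match reflectance lookup:

<pre>
# Predicted sensor RGB from spectra, and a nearest-match database query.
# All spectra here are invented placeholders.
import numpy as np

wl = np.arange(400, 701, 10, dtype=float)          # wavelength samples, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illuminant = 0.4 + 0.6 * (wl - 400) / 300          # tungsten-ish ramp (assumed)
sensor = np.stack([gauss(610, 40), gauss(540, 40), gauss(460, 40)])  # R, G, B

def predict_rgb(reflectance):
    radiance = reflectance * illuminant
    return sensor @ radiance * (wl[1] - wl[0])     # Riemann-sum integration

# Which stored reflectance could have produced an observed RGB? Nearest match:
database = {"red paint": gauss(620, 50), "green paint": gauss(530, 50),
            "gray card": np.full_like(wl, 0.5)}
observed = predict_rgb(database["green paint"]) * 1.02   # slightly noisy sample
best = min(database, key=lambda k: np.linalg.norm(predict_rgb(database[k]) - observed))
print("best match:", best)
</pre>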




== Camera image quality judgments ==
The ISET camera simulator was designed so that engineers can simulate properties of imaging sensors and visualize and quantify image quality. This project uses ISET to determine the effect that different optical, sensor and image processing properties have upon perceived image quality. Image metrics will include sharpness, color accuracy and noise visibility. These properties will be evaluated using 1) color test charts, including the Macbeth ColorChecker and others, 2) the ISO 12233 slanted-edge metric, and 3) various measures of image SNR, such as Minimum Photometric Exposure (30). The project will include informal preference ratings in which people's judgments of the simulated images are compared with these metrics.
Project consultants:  Joyce Farrell and Jiajing Yu
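
For the color-accuracy metric, one standard choice is the CIELAB color difference (Delta E) between a chart's known values and the simulated camera's rendition. A minimal sketch (Python/NumPy; the XYZ patch values are placeholders that ISET would supply in practice):

<pre>
# CIELAB Delta E between reference chart patches and a simulated rendition.
import numpy as np

def xyz_to_lab(xyz, white):
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

white = np.array([95.047, 100.0, 108.883])     # D65 white point
reference = np.array([[41.2, 35.8, 20.1],      # made-up "chart" XYZ patches
                      [18.4, 20.0, 45.3]])
simulated = reference * [1.03, 1.00, 0.95]     # simulated camera's rendition

delta_e = np.linalg.norm(xyz_to_lab(reference, white)
                         - xyz_to_lab(simulated, white), axis=-1)
print("per-patch Delta E:", delta_e.round(2), " mean:", delta_e.mean().round(2))
</pre>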




== Removing haze from aerial photographs ==
The image quality of high resolution images captured at high altitudes is degraded by atmospheric haze. This project will consider the design of new imaging systems to estimate and remove the contribution of haze at each pixel in the high resolution image. One idea is to simultaneously capture a high resolution aerial image and multiple low resolution polarized aerial images. The project team will collaborate on the design of a camera rig to take the polarized and non-polarized shots. This rig will then be placed in a plane to capture the aerial images. Given the data, consider how to use these multiple images to estimate and subtract the haze signal from a non-polarized high resolution imager with little loss of sensitivity. (Previous approaches to atmospheric haze removal include those of Fattal, Schechner et al., and Tan.)
Project consultant:  Iain Mcclatchie [iainm@google.com]
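
The polarization idea can be sketched in a few lines (after Schechner et al.): haze-scattered airlight is partially polarized while the scene is mostly not, so two polarizer orientations bracket the airlight, and their difference, scaled by the degree of polarization, estimates the airlight to subtract. The sketch below is synthetic throughout and omits the transmission correction used in the full method:

<pre>
# Polarization-difference dehazing sketch (Schechner-style airlight estimate).
# The images and the degree of polarization are synthetic.
import numpy as np

rng = np.random.default_rng(3)
h = w = 32

direct = rng.uniform(0.1, 0.6, (h, w))      # scene radiance after attenuation
airlight = np.linspace(0.1, 0.4, w) * np.ones((h, 1))   # haze grows to the right
p = 0.4                                      # assumed degree of polarization of haze

# Two polarizer orientations: haze is partially polarized, the scene mostly not.
i_max = direct / 2 + airlight * (1 + p) / 2
i_min = direct / 2 + airlight * (1 - p) / 2
total = i_max + i_min                        # what a non-polarized camera sees

airlight_est = (i_max - i_min) / p           # airlight from the polarized pair
dehazed = total - airlight_est

print(f"max |dehazed - direct| = {np.abs(dehazed - direct).max():.2e}")
</pre>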


 
== Displays, gamuts and gamut transformations ==
Projection displays use different rendering methods depending on the image content. Text and graphics are displayed at higher luminance levels but with smaller color gamuts. Video images are displayed using the widest possible gamut, but this reduces the overall brightness. This project will analyze the color gamuts already measured for different projection displays in different rendering modes. We will investigate the relationship between color gamuts, image content and perceived image quality.
Project consultants:  Joyce Farrell, Louis Silverstein and Karl Lang
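
One quantity this analysis needs repeatedly is the area of a display's gamut triangle in (x, y) chromaticity coordinates. A minimal sketch (Python; the primary chromaticities for the two rendering modes are invented examples, not measurements):

<pre>
# Gamut triangle area in (x, y) chromaticity, via the shoelace formula.
def gamut_area(primaries):
    """Triangle area from three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2

video_mode = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # wide, Rec.709-like
text_mode = [(0.58, 0.35), (0.33, 0.52), (0.18, 0.10)]    # brighter, smaller gamut

print(f"video-mode gamut area: {gamut_area(video_mode):.4f}")
print(f"text-mode gamut area:  {gamut_area(text_mode):.4f}")
</pre>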
 
 
== Tracking individually marked ants ==
A colony of ants exhibits coordinated behavior that is based on individual-level rules without central control. In addition, not all ants are the same. Some ants are lazy, others very busy; some are jacks of all trades and others are masters of one. To examine how individual variation in ants contributes to the overall organization of colony behavior, we will use paint marks to individually identify and track the behavior of all ants in a colony. The project proposed for this class is to: 1) predict the camera RGB values given the spectral sensitivity of the camera, the spectral power of the light, and the spectral reflectance of objects (paints) in the scene, to determine the most discriminable colors and color combinations that should be used for tagging the ants; and 2) develop an algorithm that identifies each individual ant based on her color code in each frame of a video sequence.
Project consultants:  Joyce Farrell and Noa Pinter-Wollman (Biology Department)
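
Once step 1 predicts an RGB value for each candidate paint, choosing tag colors becomes a max-min separation problem: pick a set whose closest pair is as far apart as possible. A greedy sketch (Python/NumPy; random stand-ins for the predicted RGBs, and Euclidean distance where a perceptual metric such as Delta E would be better):

<pre>
# Greedy max-min selection of discriminable tag colors from predicted RGBs.
import numpy as np

rng = np.random.default_rng(4)
paint_rgbs = rng.uniform(0, 1, (40, 3))      # stand-in for predicted paint RGBs

def pick_discriminable(rgbs, k):
    chosen = [0]                              # start from an arbitrary paint
    while len(chosen) < k:
        d = np.linalg.norm(rgbs[:, None] - rgbs[chosen][None], axis=-1)
        chosen.append(int(d.min(axis=1).argmax()))   # farthest-from-set point
    return chosen

tags = pick_discriminable(paint_rgbs, 6)
print("chosen paint indices:", tags)
</pre>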
 
= Write-up Information (Templates) =
 
== Write-up Requirements ==
 
This section describes what should be in your report, technical guidelines, and how to submit your report.
 
== Write-up Content ==
 
The purpose of the writeup is to document the methods, results, and conclusions of your class project.
 
If your project involved writing any non-trivial source code or processing scripts, you should make this available. Be sure to describe the purpose of your code and if possible, edit the code for clarity. The purpose of placing the code online is to allow others to verify your methods and to learn from your ideas.
 
You may include your in-class presentation slides as part of your writeup, but they should not be the entire writeup, since much of your presentation's information is not on your slides but comes from what you say.
 
Projects at a minimum should contain the following:

* '''Introduction''' Motivate the problem. Describe what has been done in the past. What is the problem? What have people tried?
* '''Methods''' Describe techniques used to measure data and/or source code algorithms. Measure something? How? Develop code? What utilities/algorithms did you use?
* '''Results''' Show relevant graphs and/or images. Explain them.
* '''Conclusions''' Describe what you learned. What worked? What didn't? Why? What should someone next year try?
* '''References''' List all references. Include links if papers were found online.
* '''Appendix I''' Link in all source code, test images, etc., and give a description of each link. In some cases, your acquired data may be too large to store practically. In this case, use your judgement (or consult one of us) and only link the most relevant data.
* '''Appendix II''' (for groups only) Work breakdown. Explain how the project work was divided among group members.
 
= Project archives (Psych 221) =

Students present projects for both [http://scien.stanford.edu/class/psych221/projectinfo/PreviousYears.htm PSYCH-221 - Applied Vision and Image Systems Engineering] and [[Psych204-Projects]].

= Project archives (Psych 204) =

[http://scien.stanford.edu/class/psych221/projectinfo/PreviousYears.htm Overview]
 

= [[Psych 284]] =

In Spring 2011 the course included a shared software project. There is a [[Psych 284 |Psych 284 course wiki page]] for comments and discussion about the software project.


= [[Psych202-Projects-2013]] =


= Psych 204A/B =

Some years, but not all, we run projects in Psych 204A.

* [[Psych204B-Projects-2010 | Psych 204B Projects 2010]]
* [[Psych204B-Projects-2012 | Psych 204B Projects 2012]]
* [[Psych204B-Projects-2013 | Psych 204B Projects 2013]]
* [http://white.stanford.edu/class/Psych204a/ PSYCH-204A - Human Neuroimaging Methods (2009)]


= Administration =

This section is for CAs. It describes:

<ol>
<li> How students get their accounts and passwords: each student creates their own account. They create their own project page and work within that. </li>
<li> How we remove accounts at the end of the term: we delete them after class is over. The wiki pages stay up, though. </li>
<li> Who has Bureaucrat/Sysop status on the Psych 221 MediaWiki pages: Brian, Doug, and Joyce. </li>
</ol>

Bureaucrat/Sysop users can manually create accounts at this page: [[Special:UserLogin]]