Psych 221 Projects 2010

From Psych 221 Image Systems Engineering


Camera Identification

Lukas et al. (2005) suggest that cameras each have a unique signature based on their particular noise defects. They propose identifying which camera is the source of an image from an analysis of the fixed pattern noise in the camera. We would like to implement and test the algorithm on images from cameras owned by members of the class.
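As a starting point, the Lukas et al. idea can be sketched on synthetic data: extract a noise residual from each image, average residuals from one camera into a fingerprint, and correlate a test image's residual against candidate fingerprints. This is a simplified illustration, not their algorithm; the Gaussian denoiser stands in for their wavelet-based denoiser, and the images and noise levels are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image minus a denoised copy; scene content is suppressed
    while fixed pattern noise survives. (Gaussian filter is a stand-in for
    the wavelet denoiser used by Lukas et al.)"""
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Averaging residuals over many images cancels scene detail and
    leaves an estimate of the camera's fixed pattern noise."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def match_score(img, fingerprint):
    """Normalized correlation between a test image's residual and a
    candidate camera fingerprint."""
    r = noise_residual(img).ravel()
    f = fingerprint.ravel()
    r = r - r.mean()
    f = f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

# Synthetic demo: two "cameras" distinguished only by their pattern noise
rng = np.random.default_rng(0)
fpn_a = 0.05 * rng.standard_normal((64, 64))
fpn_b = 0.05 * rng.standard_normal((64, 64))
shots = lambda fpn: [rng.random((64, 64)) + fpn for _ in range(20)]
print_a = camera_fingerprint(shots(fpn_a))
print_b = camera_fingerprint(shots(fpn_b))
test_img = rng.random((64, 64)) + fpn_a          # taken with "camera A"
score_same = match_score(test_img, print_a)
score_other = match_score(test_img, print_b)
```

With real class photos, the same pipeline would be applied to flat-field or natural images from each camera, with the denoiser and correlation statistic as the main tuning points.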

Project consultants: Course Assistant.

Visibility of Font Contours

ISET has tools for modeling scenes, cameras, displays and the retinal response patterns of the human eye. We will use these tools to predict 1) the irradiance image of a displayed character and 2) the retinal cone photoreceptor response. We will then apply basic edge detectors to the photoreceptor responses under various noise conditions, perhaps including eye movements. This will provide us with a measure of the perceived sharpness and continuity of the font on the display under specified viewing conditions.
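The edge-detection step might look like the following sketch, which substitutes a synthetic noisy "cone response" (a single font stroke with Poisson photon noise) for an actual ISET simulation; the count levels and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

# Stand-in for an ISET cone response: one vertical font stroke plus
# Poisson photon noise (counts are illustrative, not calibrated)
rng = np.random.default_rng(1)
mean_counts = np.full((64, 64), 10.0)   # background isomerization rate
mean_counts[:, 28:36] = 110.0           # the stroke
response = rng.poisson(mean_counts).astype(float)

# Basic edge detection on the noisy response: Sobel gradient magnitude,
# thresholded at mean + 2 SD of the gradient image
grad = np.hypot(sobel(response, axis=0), sobel(response, axis=1))
edge_map = grad > grad.mean() + 2.0 * grad.std()
edge_cols = edge_map.sum(axis=0)        # detected edge pixels per column
```

How reliably the two stroke borders survive this thresholding as noise grows (or as eye movements smear the response) is one candidate measure of perceived contour continuity.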
Project consultants: Joyce Farrell and Brian Wandell

Human Flicker Sensitivity

You are relatively more sensitive to flickering objects when they occur in the periphery of your visual field. Flickering objects also tend to grab your attention such that you automatically move your eyes to the location of a flickering or moving object. The question we ask in this project is whether there is a flicker rate that you can detect in your visual periphery but cannot detect when it is presented in the center of your visual field. In other words, can you design a target that will appear to flicker in your visual periphery but will not appear to flicker when you move your eyes to center the target in the center of your visual field? This could make it possible to steer the attention of a computer user in a relatively imperceptible manner. We will design a visual psychophysical experiment to answer this question.
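One standard design for such an experiment is an adaptive staircase on flicker rate, run once at the fovea and once in the periphery. The sketch below uses a two-down one-up rule with a deterministic simulated observer; the 30 Hz and 45 Hz cutoffs are placeholder numbers, not measured critical flicker frequencies.

```python
import random

def staircase(p_detect, start_hz=30.0, step=2.0, n_trials=60, seed=0):
    """Two-down one-up staircase on flicker rate: after two consecutive
    'seen' responses the rate goes up (harder), after any 'not seen' it
    goes down (easier). This rule converges near the 70.7% detection
    point; the threshold estimate is the mean of the last reversals."""
    rng = random.Random(seed)
    rate, streak, last_dir, reversals = start_hz, 0, 0, []
    for _ in range(n_trials):
        seen = rng.random() < p_detect(rate)
        if seen:
            streak += 1
            if streak == 2:
                streak = 0
                if last_dir == -1:
                    reversals.append(rate)
                rate += step
                last_dir = +1
        else:
            streak = 0
            if last_dir == +1:
                reversals.append(rate)
            rate -= step
            last_dir = -1
    tail = reversals[-4:]
    return sum(tail) / len(tail)

# Simulated observer: flicker is visible below ~30 Hz at the fovea and
# below ~45 Hz in the periphery (illustrative values only)
foveal_cff = staircase(lambda hz: 1.0 if hz < 30 else 0.0)
peripheral_cff = staircase(lambda hz: 1.0 if hz < 45 else 0.0)
```

A target flickering between the two estimated thresholds would be the candidate stimulus: visible in the periphery, invisible at fixation.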
Project consultants: Joyce Farrell and Tom Malzbender (HPL)

Printer misprint algorithms

High-speed digital presses are increasingly used to print short runs of books or books on demand. While misprint detection is an important capability in any printing system, it is particularly important in short-run printing. We are developing hardware and algorithms to run on high-speed digital presses; the goal of these algorithms is to identify misprints quickly. We are also interested in developing methods for automatically calibrating digital printers and verifying them for production. This project will use the Image Systems Evaluation Toolbox (ISET) to model and predict the printed page response of a prototype optical sensing system that we developed. You will also be able to evaluate existing algorithms and perhaps even suggest new ones.
Project consultants: Peter Catrysse, Joyce Farrell and Brian Wandell

Identifying the limits of visual sensitivity

We are developing quantitative models of visual encoding; these models account for the properties of the lens, photoreceptor mosaic, retinal ganglion cells, and so forth. We want to know how much information is present in the visual pathways at different stages of the process. We assess this by building the model and then using ideal observers to measure performance. There are many opportunities to undertake computational projects that assess the precision of the information present at each stage of the visual pathways and compare it with the performance of human observers. The human observer performance is generally available in the literature, though it is also possible to imagine carrying out new experiments.
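For intuition, the simplest such ideal-observer calculation treats the photoreceptor stage as Poisson photon counting. Under a Gaussian approximation, sensitivity is d' = |mu_a - mu_b| / sqrt((var_a + var_b)/2), and 2AFC percent correct follows from the normal CDF. The photon counts below are made-up example numbers.

```python
import math

def dprime_poisson(mean_a, mean_b):
    """Ideal-observer sensitivity for discriminating two Poisson photon
    counts, using the Gaussian approximation (variance = mean)."""
    return abs(mean_a - mean_b) / math.sqrt((mean_a + mean_b) / 2.0)

def percent_correct_2afc(d):
    """2AFC proportion correct = Phi(d'/sqrt(2)), written via erf:
    Phi(d/sqrt(2)) = 0.5 * (1 + erf(d/2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))

# Example: discriminating 1000 vs. 1100 absorbed photons at the cone stage
d = dprime_poisson(1000, 1100)
pc = percent_correct_2afc(d)
```

Comparing such stage-by-stage limits (photoreceptors, then ganglion cells, and so on) against published human thresholds quantifies the information lost at each stage.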
Project consultants: Anthony Sherbondy and Brian Wandell

The Design of Imaging Sensors for Robotic Surgery

Many robotic surgical devices use conventional RGB sensors to capture high quality 3D images of internal organs along with the surrounding cardiovascular support and neural innervation. This project will use the Image Systems Evaluation Toolbox (ISET) to simulate the performance of imaging sensors with non-conventional color sensitivities. The project may involve making spectral measurements of organs in a living animal that will be used as input to the ISET simulations.
Project consultants: Joyce Farrell, Steve Lansel and Dave Scott (Research Director, Intuitive Surgical)

Estimation and Evaluation of Camera Spectral Sensitivities

Accurate estimates of a camera's spectral sensitivities are required for simulation. The sensitivities describe how different wavelengths of light incident on the camera are converted to the R, G, and B measurements at each pixel. To determine a camera's sensitivities, a number of images are taken of targets with known spectral power distributions. Then, the measurements are placed in an optimization function that yields the sensitivity estimates. For example, see this reference. We would like to improve our current estimation method and evaluate the accuracy of the resultant curves. The project may also include evaluation of the accuracy of image measurements of the camera combined with our multispectral illumination system. The project will include both lab measurements and Matlab calculations.
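One common form of the optimization is regularized least squares: given a matrix of known target spectra E and the measured responses r for one channel, solve for the sensitivity s that minimizes ||E s - r||^2 plus a smoothness penalty. The sketch below checks this on synthetic data; the Gaussian ground-truth curve, the noise level, and the regularization weight are all assumptions for illustration.

```python
import numpy as np

# Synthetic ground truth: a smooth sensitivity curve over 400-700 nm
wl = np.linspace(400, 700, 61)
true_s = np.exp(-0.5 * ((wl - 550) / 40.0) ** 2)

# Measurements: 200 targets with known spectral power distributions E;
# each target's sensor response is E @ s plus measurement noise
rng = np.random.default_rng(2)
E = rng.random((200, 61))
r = E @ true_s + 0.01 * rng.standard_normal(200)

# Tikhonov-regularized estimate with a second-difference smoothness
# penalty; lam is a tuning assumption, not a course-specified value
D = np.diff(np.eye(61), n=2, axis=0)
lam = 1.0
s_hat = np.linalg.solve(E.T @ E + lam * D.T @ D, E.T @ r)

rel_err = np.linalg.norm(s_hat - true_s) / np.linalg.norm(true_s)
```

Real estimation would add a nonnegativity constraint and use measured target spectra; evaluating accuracy then amounts to tracking `rel_err` (against a reference instrument) as the target set and regularization change.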
Project consultant: Steve Lansel

Resolution in color filter array images

The many megapixels available on modern imaging sensors offer the opportunity to trade off spatial resolution for other desirable measurements. For instance, a color filter array with more than 3 color filters may offer improved color reproduction and the ability to render scenes under arbitrary illuminants. It is important to understand the real resolution trade-off in such schemes. In this project we will address this issue via simulations in ISET. We will consider the effect on final image resolution of some novel image acquisition schemes (e.g., interleaved imaging) by considering the full imaging pipeline (imaging lens, pixel size, color filter efficiencies, etc.).
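A crude back-of-the-envelope version of the trade-off is the per-channel Nyquist limit: the more filter types in the array, the sparser each channel's samples and the lower that channel's resolution limit before demosaicking. The pixel pitch and CFA layouts below are assumptions for illustration (and Bayer green, sampled on a quincunx, is not captured by this simple row-spacing model).

```python
def channel_nyquist(pixel_pitch_um, channel_spacing_px):
    """Nyquist frequency (cycles/mm) for one color channel of a CFA,
    assuming that channel is sampled every channel_spacing_px pixels."""
    pitch_mm = pixel_pitch_um * channel_spacing_px / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# 2-micron pixels (assumed): monochrome vs. Bayer red vs. a hypothetical
# CFA in which one filter type appears every 4th pixel along a row
mono = channel_nyquist(2.0, 1)        # every pixel samples the channel
bayer_red = channel_nyquist(2.0, 2)   # red in a Bayer mosaic
sparse = channel_nyquist(2.0, 4)      # one channel of a 4+ filter array
```

The ISET simulations would replace this counting argument with the full pipeline, since lens MTF and the demosaicking algorithm can recover some of the apparent loss.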
Project consultants: Steve Lansel and Brian Wandell

Color balancing pipeline

If displayed without any processing, the raw image data acquired under different illuminants will appear to have an unnatural color cast. Images taken under tungsten illumination will appear too yellow; images under fluorescent illumination generally appear too green. Color balancing algorithms are designed to correct these images, transforming the raw data such that the unwanted color cast is eliminated. These images appear more correct to human viewers because the human visual system also performs a color balancing transformation as we move between illumination conditions. Despite work at Stanford on this problem for nearly three decades, there is no integrated suite of software tools for color balancing algorithms. This could be the year that you help us fix this problem.
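The simplest such transformation is the gray-world algorithm: assume the scene averages to gray and apply per-channel (von Kries-style) gains that equalize the channel means. This sketch applies it to a synthetic tungsten-like cast; the gain factors and image are made up.

```python
import numpy as np

def gray_world_balance(raw):
    """Gray-world white balance: scale each channel so the channel means
    match, removing a global color cast with diagonal (von Kries) gains."""
    means = raw.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(raw * gains, 0.0, 1.0)

# Simulated warm cast: boost R and cut B on a neutral (gray) noise image
rng = np.random.default_rng(3)
neutral = rng.random((32, 32, 1)) * np.ones((1, 1, 3)) * 0.8
cast = neutral * np.array([1.2, 1.0, 0.7])
balanced = gray_world_balance(cast)
ch_means = balanced.reshape(-1, 3).mean(axis=0)
```

Gray world is only the baseline; an integrated tool suite would put algorithms like this, white-patch, and illuminant-estimation methods behind a common interface so they can be compared on the same raw images.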
Project consultants: Joyce Farrell and Brian Wandell

Surfaces, lights and cameras: A web database

There are a number of online resources for surface reflectances, illuminants, and digital camera sensors (see below). Each of the existing databases has some strengths and weaknesses. We would like you to design a web-database for surfaces, illuminants and camera sensors that improves upon the current set of pages. One improvement would be to offer some functionality. For example, suppose a user has a camera with a known sensor spectral sensitivity and a known light source – could you tell the user which surface reflectance functions in the database could have generated specific RGB values? Suppose the person took a picture of a wall with a flash; could you provide an estimate of the paint reflectance function on the wall, or possibly the name of the paint? Could the site help users generate test targets that help evaluate camera accuracy in different environments, such as a chart made of natural reflectances, or paint reflectances, or automotive reflectances, etc.? The web-site should have a nice user-interface, some back-end functionality for simple computations, and a way for users to volunteer new datasets.


Project consultants: Joyce Farrell and Reno Bowen

Camera image quality judgments

The ISET camera simulator was designed so that engineers can simulate properties of imaging sensors and visualize and quantify image quality. This project uses ISET to determine the effect that different optical, sensor and image processing properties have upon perceived image quality. Image metrics will include sharpness, color accuracy and noise visibility. These properties will be evaluated using 1) color test charts, including the Macbeth ColorChecker and others, 2) the ISO 12233 slanted edge metric, and 3) various measures of image SNR, such as Minimum Photometric Exposure (30). The project will include informal preference ratings in which people's judgments of the simulated images are compared with these metrics.
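Color accuracy on the test charts is typically scored as a CIELAB color difference between each simulated patch and its reference. A minimal sketch, using the standard XYZ-to-Lab conversion and the Delta E 1976 distance (the XYZ triples below are arbitrary example values):

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*a*b* from XYZ, relative to a D65 white point."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(lab1, lab2):
    """Euclidean color difference in CIELAB (Delta E 1976)."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Illustrative check: a reference patch vs. a slightly shifted simulation
ref = xyz_to_lab([41.0, 35.0, 20.0])
sim = xyz_to_lab([42.0, 35.5, 19.0])
de = float(delta_e76(ref, sim))
```

Averaging Delta E over the ColorChecker patches gives a single color-accuracy score per simulated camera configuration, to be set alongside the sharpness and SNR metrics and the preference ratings.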
Project consultant: Joyce Farrell

Displays, gamuts and gamut transformations

Projection displays use different rendering methods depending on the image content. Text and graphics are displayed at higher luminance levels but with smaller color gamuts. Video images are displayed using the widest possible gamut, but this reduces the overall brightness. This project will analyze color gamuts already measured for different projection displays in different rendering modes. We will investigate the relationship between color gamuts, image content and perceived image quality.
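A simple starting comparison between rendering modes is the gamut area spanned by the measured primaries in (x, y) chromaticity coordinates, computed with the shoelace formula. The first primary set below is the standard Rec. 709 triangle; the narrower "text mode" primaries are invented for illustration.

```python
def gamut_area(xy):
    """Area of a display gamut polygon in (x, y) chromaticity
    coordinates, via the shoelace formula."""
    n = len(xy)
    s = sum(xy[i][0] * xy[(i + 1) % n][1] - xy[(i + 1) % n][0] * xy[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Rec. 709 R, G, B primaries vs. a hypothetical narrower text/graphics mode
video = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
text = [(0.60, 0.35), (0.32, 0.52), (0.18, 0.12)]
```

Chromaticity area ignores luminance, so a fuller analysis would compare gamut volumes in a perceptual space such as CIELAB, where the brightness gain of text mode also shows up.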
Project consultants: Joyce Farrell, Louis Silverstein and Karl Lang

Evaluation of the Live Databases

The Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin (directed by Prof. Alan C. Bovik) provides a collection of images and videos that have been rated by human observers. The images, videos and corresponding mean opinion scores (MOS) can be downloaded from their website (see http://live.ece.utexas.edu/research/quality/ ). The LIVE databases are now commonly used by the engineering community to evaluate image quality metrics. This project will evaluate the LIVE databases. Do the databases provide sufficient information about the displays and viewing illumination that were used to obtain the MOS such that you can replicate the viewing conditions? Will different displays generate different results? Can you replicate a subset of their data? How much variance is there in MOS? What is the effect of practice and expertise on MOS?
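Two of the basic computations in such an evaluation are (a) rank correlation between a quality metric and MOS, and (b) the spread hidden behind each mean opinion score. Both are sketched below on invented numbers; real analyses would use the downloaded LIVE scores and per-subject ratings.

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up metric scores and MOS for 8 images, for illustration only
metric = np.array([0.91, 0.85, 0.40, 0.77, 0.30, 0.62, 0.95, 0.55])
mos = np.array([4.5, 4.1, 2.0, 3.8, 1.6, 3.0, 4.7, 2.6])

# Spearman rank correlation: the standard score for metric-vs-MOS agreement
rho, p = spearmanr(metric, mos)

# A 95% confidence interval on one image's MOS from its per-subject
# ratings shows how much observer variance the single mean hides
ratings = np.array([5, 4, 5, 3, 4, 4, 5, 4, 3, 5])  # hypothetical subjects
sem = ratings.std(ddof=1) / np.sqrt(len(ratings))
ci = (ratings.mean() - 1.96 * sem, ratings.mean() + 1.96 * sem)
```

Replicating a subset of the experiment on a different display and checking whether the new ratings fall inside these per-image intervals is one concrete test of whether viewing conditions matter.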
Project consultant: Joyce Farrell

Machine learning and neuroimaging data

This project is for people already involved in neuroimaging, or who would like to learn more about some magnetic resonance imaging methods. We have collected a large amount of data about the white matter in the brain of developing children. It is possible to build classifiers and pattern analyzers to look for various effects in various ways. If you are interested in machine learning algorithms, pattern analyzers, support vector machines, and you would like to develop a project in this area that applies to human white matter data, let us know.
Project consultants: Reno Bowen, Robert Dougherty, Anthony Sherbondy, Brian Wandell

Previous Projects (Done)

Removing haze from aerial photographs

The image quality of high resolution images captured at high altitudes is degraded by atmospheric haze. This project will consider the design of new imaging systems to estimate and remove the contribution of haze at each pixel in the high resolution image. One idea is to simultaneously capture a high resolution aerial image and multiple low resolution polarized aerial images. The project team will collaborate on the design of a camera rig to take the polarized and non-polarized shots. This rig will then be placed in a plane to capture the aerial images. Given the data, consider how to use these multiple images to estimate and subtract the haze signal from the non-polarized high resolution image with little loss of sensitivity. (Previous attempts to remove atmospheric haze can be found at: Fattal, Schechner et al., and Tan)
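The polarization idea follows Schechner et al.: airlight is partially polarized, so two images through a polarizer at the best and worst orientations let you estimate the airlight per pixel and invert the haze model. The sketch below forward-simulates a hazy scene and checks the inversion; all numbers (transmission, airlight polarization, horizon radiance) are synthetic.

```python
import numpy as np

def dehaze_polarized(i_best, i_worst, p_air, a_inf):
    """Schechner-style dehazing from two polarizer orientations:
    airlight A = (i_worst - i_best) / p_air, transmission t = 1 - A/a_inf,
    recovered scene radiance L = (i_total - A) / t."""
    i_total = i_best + i_worst
    airlight = (i_worst - i_best) / p_air
    t = np.clip(1.0 - airlight / a_inf, 1e-3, 1.0)
    return (i_total - airlight) / t

# Forward-simulate haze to sanity-check the inversion (synthetic numbers;
# p_air is the degree of polarization of the airlight, a_inf the airlight
# radiance at infinite distance)
rng = np.random.default_rng(4)
scene = rng.random((16, 16))
t_true, a_inf, p_air = 0.6, 1.0, 0.4
airlight = a_inf * (1.0 - t_true)
i_worst = 0.5 * (scene * t_true + airlight) + 0.5 * p_air * airlight
i_best = 0.5 * (scene * t_true + airlight) - 0.5 * p_air * airlight
recovered = dehaze_polarized(i_best, i_worst, p_air, a_inf)
err = np.abs(recovered - scene).max()
```

In the proposed system the low-resolution polarized pair would supply the airlight estimate, which would then be upsampled and subtracted from the high resolution non-polarized image.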
Project consultant: Iain Mcclatchie [iainm@google.com]

Tracking individually marked ants

A colony of ants exhibits coordinated behavior that is based on individual-based rules without central control. In addition, not all ants are the same. Some ants are lazy, others very busy; some are jacks of all trades and others are masters of one. To examine how individual variation in ants contributes to the overall organization of colony behavior, we will use paint marks to individually identify and track the behavior of all ants in a colony. The project proposed for this class is to: 1. Predict the camera RGB values given the spectral sensitivity of the camera, the spectral power of the light, and the spectral reflectance of objects (paints) in the scene, to determine the most discriminable colors and color combinations that should be used for tagging the ants. 2. Develop an algorithm that identifies each individual ant based on her color code in each frame of a video sequence.
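Step 1 is a straightforward spectral product: each channel's response is the sensor sensitivity integrated against the illuminant times the paint reflectance. The sketch below uses made-up Gaussian sensitivities, a flat illuminant, and two hypothetical paints; real predictions would substitute the measured camera, light, and paint spectra.

```python
import numpy as np

wl = np.arange(400, 701, 10)  # wavelength samples, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera sensitivities (R, G, B rows), illuminant, and paints
sens = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])
light = np.ones_like(wl, dtype=float)        # flat illuminant (assumed)
red_paint = gauss(620, 30)
green_paint = gauss(530, 30)

def predict_rgb(reflectance):
    """RGB = sensor sensitivities applied to (illuminant * reflectance)."""
    return sens @ (light * reflectance)

# Discriminability of two tags: distance between their predicted RGBs
d = np.linalg.norm(predict_rgb(red_paint) - predict_rgb(green_paint))
```

Ranking all candidate paints by pairwise distances like `d` (ideally in a perceptual or noise-normalized space rather than raw RGB) identifies the most discriminable tag sets before any ants are painted.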
Project consultants: Joyce Farrell and Noa Pinter-Wollman (Biology Department)