Breneman

From Psych 221 Image Systems Engineering


Background

Those who venture below the surface of the ocean are privy to a magnificent seascape of colorful marine plants, coral, and fish. Unfortunately, to humans, whose eyes are adapted to viewing landscapes from above the water line, much of that color is lost. The same filtering of sunlight that tints the sky pale blue is present and magnified underwater, reducing most marine scenes to a greenish or bluish tinge. Many divers carry a flashlight to re-illuminate a scene.

Despite the limitations, there is much information to be gained via color underwater. The colors of underwater scenes can be important indicators of an ecosystem's health. Living coral, for instance, are a different color than dead coral. The CoralWatch project at the University of Queensland in Brisbane, Australia [1] uses a waterproof color card to detect coral bleaching. The color of light at depth underwater might also be used as an indicator of mid-column plankton activity. A quick internet search reveals thousands of underwater photographs that may contain clues regarding the status of the global underwater ecosystem.

Extracting this color information, however, requires detailed models describing an illuminating light spectrum's evolution as it passes through seawater. Although the basic principles of the absorption of light are described by the Beer-Lambert law [2], the specific absorption constants may vary by location or season. To provide a means to easily characterize an underwater environment's illumination, a simple color rig was designed using a commercial point-and-shoot camera in an underwater housing and a calibrated color target. The rig was used to estimate the variation of illumination with depth near Stanford University's Hopkins Marine Station in Monterey CA.
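
The exponential attenuation described by the Beer-Lambert law can be illustrated with a short sketch. This is a Python/NumPy toy (the project's own code was MATLAB), and the per-wavelength absorption coefficients are hypothetical placeholders chosen only to show the qualitative effect, not measured values for Monterey seawater:

```python
import numpy as np

# Beer-Lambert attenuation with depth: I(d) = I0 * exp(-k * d).
# The absorption coefficients k are hypothetical placeholders chosen to
# show the qualitative effect (red absorbed far faster than blue/green).
wavelengths = np.array([450.0, 532.0, 650.0])  # nm: blue, green, red
k = np.array([0.02, 0.05, 0.35])               # 1/m, illustrative only
I0 = np.ones(3)                                # normalized surface intensity

def intensity_at_depth(depth_m):
    """Spectral intensity remaining after depth_m meters of water."""
    return I0 * np.exp(-k * depth_m)

I10 = intensity_at_depth(10.0)  # blue and green survive; red is nearly gone
```

Even at a modest 10 m, the red band in this toy model is attenuated to a few percent of its surface intensity, which matches the muted reds divers observe.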

Methods

Light spectra are conventionally measured with a specialized laboratory instrument such as a colorimeter. Unfortunately, such instruments are prohibitively expensive and designed for laboratory conditions, making them unsuitable for the underwater environment. However, a digital camera combined with a calibrated target can be used to derive an estimate of scene illumination. Digital cameras are popular among SCUBA divers, and many are available with underwater housings that allow them to operate underwater. A linear model of the digital camera color capture system from the Foundations of Vision text [3] was applied to design an underwater color measurement rig. The color filter array of the digital camera used in the rig was then carefully measured in Stanford's SCIEN optics laboratory. The combined information was then used to measure how illumination in underwater scenes varies with depth.


Linear Model for Digital Photography

The digital photography process can be described by a simple linear system with four elements [3]. Light illuminating a scene interacts with surfaces within the scene according to each surface's reflectance spectrum. The resulting surface light spectrum is captured by the digital camera's lens, where it is directed onto a color filter array. Conventional digital cameras use a color filter array with three filter types, producing a set of red, green, and blue values. The linear system is shown below in matrix tableau form:

The illuminant light on the right side of the equation is multiplied by a diagonal matrix describing an individual surface reflectance spectrum. The resulting vector is passed through a 3×N matrix describing the camera's three color filter responsivities at N-point spectral resolution, producing a 3-element RGB vector. If three of the four elements are known, the fourth can be derived, provided there are enough linearly independent surfaces within the scene.
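
As a concrete sketch of the tableau above (in Python/NumPy rather than the project's MATLAB), with random spectra and a toy 5-band resolution standing in for real data:

```python
import numpy as np

# Toy version of the tableau: rgb = C @ diag(surface) @ illuminant,
# with N = 5 spectral bands for readability (the project used 51).
# All spectra here are random stand-ins.
N = 5
rng = np.random.default_rng(0)
C = rng.random((3, N))       # 3 x N color filter responsivities
surface = rng.random(N)      # diagonal of the surface reflectance matrix
illuminant = rng.random(N)   # illuminant spectrum

rgb = C @ np.diag(surface) @ illuminant
# Because the reflectance matrix is diagonal, this is the same as
# filtering the element-wise product of surface and illuminant:
rgb_alt = C @ (surface * illuminant)
```

The equivalence of the two forms is what later allows the patch reflectances to be folded into the responsivity matrix when solving for the illuminant.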

Design of an Underwater Color Rig

The underwater color rig assembles the elements required to estimate illumination: a set of color patches with known reflectances, a Canon SX260HS point-and-shoot digital camera, and a large memory card to store the resulting underwater images. The color rig used in this experiment was developed through three design iterations, described below.

Handheld

Initial data were taken using a handheld acrylic sheet with 6 color patches, each approximately 2 in × 3 in. A photo with the acrylic sheet is shown below. The configuration proved difficult to control; achieving a consistent, fixed distance was nearly impossible, and sediment in the water produced high levels of backscatter. Furthermore, divers are typically limited by SCUBA equipment to 2-3 dives per day, and each dive produced only 5 or 6 data points. Also missing was any information about the spectral reflectances of the paint pigments, which would have had to be determined through lab calibration.

PVC Rig

A PVC frame was constructed to provide better control over image framing, and a standardized color reference, the X-Rite ColorChecker card, was attached. The PVC frame holds the color card a fixed distance of 12" from the camera, and a surplus analog depth gauge is attached near the upper left side of the color target. The color card was enclosed between sheets of 3/8-inch-thick clear acrylic from TAP Plastics. Venting holes drilled in the PVC pipe allow air to escape from the frame, making the rig negatively buoyant. The rig is lowered underwater by a nylon line attached to an eye bolt in the center support.

Original acrylic color card

The second version of the underwater rig performed very well, making a total of 4 dives to 90 feet with varying camera settings. However, the original PVC frame was poorly weighted and rotated up and down as it descended, producing variable reflections within the color target. Worse, it was designed for the camera's field of view in air. In air, the Canon SX260HS has a 45-degree vertical field of view, but due to the higher index of refraction of water this is reduced to approximately 30 degrees. The reduced field of view in water hides about 25% of the color target from view, as shown below.

PVC color rig field of view in air
PVC color rig field of view in water

PVC With Index-Compensated Geometry

The original PVC color rig was extended by 12", resulting in a final camera-to-target distance of 24". This allows the full color card, along with the depth gauge, to fit easily within the SX260HS's underwater field of view. It also leaves room for the reduced field of view the camera uses when shooting HD video. A 2-lb surplus diving weight was added to the color target side of the rig to ensure the assembly descends at a constant attitude.

Final Color Rig

Future versions of the color rig should address the significant reflections produced by the acrylic sheet used to protect the X-Rite color target. Although the reflections' effect is mitigated by pixel-value averaging within each color patch, as described below, the reflections are not modeled in the linear scheme used to derive the illuminant spectrum.

Calibrating the Underwater Color Rig

From the linear model of digital camera color systems, we must know both the reflectances of each of the color patches on the X-Rite color target and the responsivities of the SX260HS color filter array. Fortunately, the X-Rite color target contains a set of 24 well-known colors commonly used by photographers to verify the color balance of their equipment. The Image Systems Engineering Toolbox (ISET) suite of MATLAB routines and data from the Stanford Center for Image Systems Engineering (SCIEN) contains the reflectances of a similar color target, the Macbeth ColorChecker. To save time, the reflectances of the Macbeth ColorChecker were assumed to closely model those of the X-Rite target used in the color rigs. Future work must verify this assumption.

Macbeth color target

Canon does not publish the responsivities of the SX260HS point-and-shoot. Furthermore, the SX260HS does not provide access to RAW pixel values, so the values must be inferred from the camera's 12-megapixel high-resolution JPEG images. These JPEG images have passed through the camera's image processing pipeline. Although the high pixel density limits artifacts from the JPEG compressor, the pipeline's color balancing algorithm is also unknown. To guarantee data consistency, the camera's color balance was fixed at the beginning of the project's data-collection phase and measured later in the SCIEN lab.

To measure the SX260HS responsivities, the camera was exposed to a series of illuminants created by a Newport Oriel Cornerstone 130 monochromator. The illuminants were directed at a white target, and the resulting spectrum was captured photographically by the SX260HS and measured by a Photo Research PR-650 SpectraScan colorimeter. A graphic of the responsivity measurement setup is shown below. To guarantee the SX260HS captured images at the correct time, its shutter was computer controlled via the CHDK firmware and an Arduino trigger signal.

The series of monochromatic inputs guarantees a set of linearly independent color stimuli for characterizing the SX260HS color filter array. Manipulation of the linear model of digital camera color systems allows the camera color filter responsivities to be written as a function of the RGB pixel values and the spectra measured by the PR-650.

Data-Collection Dives

A total of 11 dives were conducted near Stanford's Hopkins Marine Station in Monterey Bay, CA. Five of the dives were conducted with the final index-compensated underwater color rig. The site is home to a rich diversity of cold-water marine life and is well cataloged by the Stanford faculty and students stationed there. Furthermore, Monterey Bay is home to the extremely deep Monterey Submarine Canyon. Seasonal weather patterns in Monterey Bay occasionally draw nutrient-rich water from the depths of the canyon to the surface, creating a plankton bloom. The color rig may be useful for quickly measuring the plankton population throughout the year. It is hoped the results of these experiments will be relevant and useful to future researchers at the Hopkins Marine Station.

Results

Ultimately, the underwater color rig succeeded in providing convincing estimates of underwater illumination. A summary of intermediate and final results, along with overviews of the data processing techniques used, is presented below.

Macbeth/Xrite Color Checker Reflectances

The Macbeth Color Checker patch reflectances are a part of the ISET suite and were not re-measured for the purposes of this experiment. A chart of the reflectances is shown below:

The reflectances easily cover the spectral extent of the human visual system and beyond into the infrared, making them a useful set of spectrally independent color estimation targets. The X-Rite color target used in the PVC color rigs is produced by a different manufacturer than the Macbeth target characterized by the ISET suite. While the color patches used by X-Rite are assumed to be identical to Macbeth's, verifying this assumption would strengthen confidence in the results presented here.

Canon SX260HS Responsivities

The laboratory configuration described above was used to collect a series of 81 images of linearly independent spectra with the SX260HS with fixed color balance. An example of one of the images is shown below, with a red illuminant. The camera's exposure, ISO, and aperture were set manually to avoid saturation in any of the camera's RGB color channels.

Red monochromatic illuminant on color target

A section of the illumination target within the images was extracted and compared to the dark portions of the image to render an estimate of the average RGB response of the camera to the illuminant. A graph of the RGB responses measured by the SX260HS is shown below. The x-axis is the wavelength setting of the Cornerstone 130 monochromatic light source.

The RGB responses alone are not sufficient to estimate the camera's responsivities; the intensity and spectral bandwidth of the illumination target must also be known. The intensity and bandwidth of the same illumination targets as measured by the PR-650 colorimeter are shown below. Note the monochromatic light source has both a finite bandwidth and variable intensity.

Combining the SX260HS RGB response and the measured PR-650 colorimeter spectra, the Canon's overall spectral response can be estimated. The linear system of digital color photography was rearranged to describe camera sensitivities as a function of RGB pixel response and illuminant spectra and solved using a non-negative least squares solver in MATLAB's optimization toolbox, lsqnonneg. A graph of the individual color filter responsivities is shown below.
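
The per-channel solve can be sketched as follows, with Python/SciPy's non-negative least squares standing in for MATLAB's lsqnonneg, and synthetic random spectra standing in for the PR-650 measurements:

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the responsivity estimate. Each monochromatic exposure k
# satisfies  pixels[k, ch] = sum_l spectra[k, l] * resp[ch, l],
# so each filter row is its own non-negative least-squares problem.
# The spectra and responsivities below are synthetic stand-ins.
rng = np.random.default_rng(1)
num_meas, num_bands = 81, 51
spectra = rng.random((num_meas, num_bands))   # measured illuminant spectra
true_resp = rng.random((3, num_bands))        # "unknown" camera responsivities
pixels = spectra @ true_resp.T                # simulated RGB responses

est_resp = np.vstack([nnls(spectra, pixels[:, ch])[0] for ch in range(3)])
```

With 81 exposures and 51 bands the system is overdetermined, so in this noiseless toy the non-negative solver recovers the responsivities exactly; with real camera data the fit is a least-squares estimate.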

The SX260HS sensor response is significant between 405nm and 660nm. This represents the limit of the spectral band the color rig can estimate. The PR-650 colorimeter has a 5nm resolution, resulting in 51 bands within the sensor range.

Underwater Color Rig Data

The color rig's Canon SX260HS camera was programmed using the CHDK firmware to snap photos of the onboard X-Rite ColorChecker at an interval of approximately 2 seconds, limited by the camera's onboard processor. A single descent from the surface to 90 feet and back produces between 100 and 150 JPEG photos. On some of the dives, the camera was switched to HD video mode to collect higher-rate data. The video recording subsystem on the SX260HS includes automatic exposure control, making the recorded videos unsuitable for seawater spectral absorption analysis, but they do provide an excellent qualitative overview of the selective absorption effect with depth. Two videos from the final configuration of the color rig are posted on YouTube:

85ft Dive Number 1 (with Salp capture on return!)

85ft Dive Number 2 (with plastic bottle for fun)

Extracting Color Patch RGB Information

The PVC color rigs maintain a consistent field of view as the rig descends, so a map of the color patches within the X-Rite color target was defined as shown below. Each photo taken by the rig yields a set of RGB values for each of the 24 color patches, or 72 data points per photo.
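
The extraction step amounts to averaging pixels within a fixed rectangle per patch. A minimal Python/NumPy sketch (the patch coordinates here are hypothetical; the real map came from the rig's fixed camera-to-target geometry):

```python
import numpy as np

# Sketch of the patch extraction: average the RGB values inside a fixed
# rectangle for each patch. The patch_boxes coordinates are hypothetical.
def patch_means(image, patch_boxes):
    """image: H x W x 3 array; patch_boxes: list of (r0, r1, c0, c1) rects."""
    return np.array([image[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)
                     for (r0, r1, c0, c1) in patch_boxes])

img = np.zeros((100, 100, 3))
img[10:20, 10:20] = [200.0, 50.0, 50.0]       # synthetic reddish patch
means = patch_means(img, [(12, 18, 12, 18)])  # sample well inside the patch
```

Sampling a rectangle slightly inside each patch boundary keeps edge pixels and acrylic reflections from contaminating the average.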

As the rig descends the patches take on a green tint and the red patches especially lose their intensity and appear a muted gray color. A rendering of the color checker values at different depths is shown below.

Xrite Chart Colors at 5 ft
Xrite Chart Colors at 45 ft

Light is quickly attenuated by the water column as the rig descends. A plot of RGB color values throughout a dive is shown below to illustrate the effect. The RGB values correspond to the white color patch on the bottom left of the color target.

Pixel Saturation

The SX260HS aperture, exposure, ISO, and color balance settings were manually fixed at the surface using the camera's onboard light meter as a reference. Unfortunately, the light meter allows individual color channels within a pixel to saturate. As a result, many of the brighter patches within the X-Rite color target produce saturated pixel readings, a value of 255 in this case. Since saturation is a nonlinear effect, it must be avoided. A histogram of the pixel intensities recorded throughout a dive reveals that many of the color patches saturate at least one color channel.

During the RGB pixel extraction process, color patches that experienced saturation were flagged as unusable for illuminant estimation. Of the 24 patches on the X-Rite color target, only 7 were usable. A map of the usable color patches is shown below.
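
The screening rule above can be sketched in a few lines of Python/NumPy (the array shapes are illustrative, not the project's actual data layout):

```python
import numpy as np

# Sketch of the saturation screen: a patch is flagged unusable if any of
# its RGB channels reaches the 8-bit ceiling (255) in any photo of the dive.
def usable_patches(rgb_series, ceiling=255):
    """rgb_series: num_photos x num_patches x 3 array of extracted values.
    Returns a boolean mask, True for patches that never saturate."""
    return ~(rgb_series >= ceiling).any(axis=(0, 2))

series = np.full((5, 3, 3), 100.0)  # 5 photos, 3 patches, all mid-range
series[2, 1, 0] = 255.0             # patch 1 saturates its red channel once
mask = usable_patches(series)       # patch 1 is excluded
```

A single saturated reading anywhere in the dive disqualifies the patch, since the clipped value cannot be trusted in the linear model.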

Underwater Spectra

With known camera color responsivities, known color target reflectances, and a set of extracted RGB pixel data the underwater scene illuminant can be estimated. Rearranging the linear model for color digital photography, we get the following system for each camera photo:

The color patch reflectances in the model have been merged via element-by-element multiplication to form a single combined spectral response for each color patch. There are 7 usable color patches within each image and 3 pixel color values per patch, yielding a total of 21 linear equations for estimating the illuminant. There are 51 spectral bands in the responsivity matrix of the color filter array, and consequently 51 unknown values in the illuminant estimate. The system is therefore underdetermined. To solve it, an additional smoothness constraint was applied by developing the following optimization objective function, inspired by the multispectral estimation techniques of Park et al. [4].

The system was solved via non-negative least squares optimization in MATLAB using the lsqnonneg function. Since natural illuminants tend to be smooth and are guaranteed to be non-negative, these are reasonable constraints. An estimated, normalized spectrum from a depth of 65 feet is shown below.
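
The regularized solve can be sketched by stacking the smoothness penalty into the least-squares system, here in Python/SciPy standing in for MATLAB's lsqnonneg. The system matrix, the synthetic illuminant, and the smoothing weight are all illustrative choices, not the project's actual values:

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the regularized illuminant estimate: 21 equations (7 patches
# x 3 channels) against 51 spectral bands, with a second-difference
# smoothness penalty stacked into one non-negative least-squares problem.
rng = np.random.default_rng(2)
num_bands = 51
A = rng.random((21, num_bands))                # responsivity x reflectance rows
x = np.linspace(0.0, 1.0, num_bands)
true_illum = np.exp(-((x - 0.5) ** 2) / 0.02)  # smooth synthetic illuminant
b = A @ true_illum                             # simulated RGB measurements

D = np.diff(np.eye(num_bands), n=2, axis=0)    # 49 x 51 second-difference operator
lam = 1.0                                      # smoothness weight (illustrative)
A_aug = np.vstack([A, np.sqrt(lam) * D])
b_aug = np.concatenate([b, np.zeros(D.shape[0])])
est_illum, _ = nnls(A_aug, b_aug)              # smooth, non-negative estimate
```

Stacking sqrt(lam)·D under A turns the penalized objective ||Ae − b||² + lam·||De||² into an ordinary non-negative least-squares problem, which is one standard way to realize the smoothness-constrained formulation described above.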

By sorting the estimated spectra by depth an intensity map of illuminants can be created, as shown below. Depth in feet is shown along the y-axis and wavelength is shown in nanometers along the x-axis. Illuminant intensity is shown in log scale. Note the dramatic attenuation of light with depth and the spectral peak near 532nm.

Conclusions and Future Work

The proof-of-concept experiment suggests several improvements and follow-on experiments: the color target must be analyzed explicitly to confirm its patch spectra match those in the ISET toolbox; the acrylic shield over the color card must be refined to suppress reflections; a higher resolution setting on the PR-650 colorimeter should be used to refine the characterization of the camera's color filter array; a shorter exposure setting should be used during data-collection dives to prevent pixel saturation; and more illuminant data should be collected at both the Hopkins Marine Station site and other locations during different seasons to determine the geographic and temporal variation of the seawater absorption characteristic.

Despite the design challenges, the underwater color rig successfully characterized the change in underwater illumination with depth using relatively inexpensive equipment. By careful selection and characterization of both the digital camera and the color target, a sufficient basis was established to generate a true multispectral estimate of the underwater illuminant spectrum. The techniques described in this report can be used to process captured images and estimate illumination at depth in a variety of applications, from digital photography color balancing to monitoring marine plankton populations or estimating the health of a coral population. Hopefully the design of this inexpensive system will enable further underwater multispectral exploration.

References - Resources and related work

References

[1] Coral Watch Project http://www.coralwatch.org/

[2] Raimondo Schettini and Silvia Corchs, "Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods" EURASIP Journal on Advances in Signal Processing, Volume 2010, Article ID 746052, 14 pages

[3] Brian A. Wandell, Foundations of Vision, Chapter 9. https://www.stanford.edu/group/vista/cgi-bin/FOV/chapter-9-color/#Linear_Models

[4] Jong-Il Park, Moon-Hyun Lee, Michael D. Grossberg, and Shree K. Nayar, "Multispectral Imaging Using Multiplexed Illumination," IEEE 11th International Conference on Computer Vision (ICCV), 2007.

Software

Image Systems Engineering Toolbox http://imageval.com/

Canon Hack Development Kit http://chdk.wikia.com/wiki/CHDK

Appendix I - Code and Data

In the belief that the techniques used may be illustrated best by example, the MATLAB code used to perform the multispectral analysis and calibration is available below along with sample data from the project.

Code

All code was written in MATLAB R2012a for Mac OS X Mountain Lion. External dependencies include the MATLAB Image Processing Toolbox, MATLAB Optimization Toolbox, and the ISET toolbox.

File:UnderwaterColorCode.zip

Data

A total of 2.5GB of image data were collected for this project, and are available upon request. The extracted RGB color patch data, colorimeter responses, and SX260HS responsivity data are available below.

File:UnderwaterColorData.zip

File:CanonSX260HSResponsivity.zip

Presentation

This project was given as a 5-minute presentation to the PSYCH221 Winter 2013 class at Stanford. The presentation files used are linked below.

5min Powerpoint Presentation File

5min PDF Presentation File