Hypercube Waveband Registration
[[File:HyperspectralCamera.jpg|thumb|300px|center|HySpex hyperspectral imaging system]]
Revision as of 00:51, 18 March 2012
Introduction
Hyperspectral Imaging
Hyperspectral imaging allows us to visualize a vast portion of the electromagnetic spectrum and detect information that would otherwise be invisible to the naked eye. Hyperspectral sensors extend their spectral footprint significantly beyond the visible red, green, and blue bands into the infrared region. Sensor data captured at a large number of spectral bands forms a three-dimensional hypercube whose levels represent the different spectral bands and whose values at each level represent the sensor-detected light intensity at the corresponding pixel location in that band. Analyzing the information in these additional spectral bands may yield further insight into a particular object or scene. For example, different materials have different spectral signatures, so the presence of a certain material in a scene or an object may be much more evident in one waveband than in others.
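As a concrete illustration of the hypercube layout described above, the short NumPy sketch below indexes one waveband image and one pixel's spectral signature. The dimensions (200 bands, 512 x 512 pixels) and the random data are invented for illustration and do not describe the HySpex cameras.

```python
import numpy as np

# Hypothetical hypercube: 200 spectral bands of a 512 x 512 scene,
# stored as a 3-D array indexed (band, row, column).
rng = np.random.default_rng(0)
cube = rng.random((200, 512, 512))

# One "level" of the cube is the full image at a single waveband.
band_image = cube[50]            # shape (512, 512)

# The spectral signature of one pixel is its intensity across all bands.
signature = cube[:, 100, 200]    # shape (200,)

print(band_image.shape, signature.shape)
```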
Available Hyperspectral System
Hyperspectral cameras are defined by their spectral and spatial resolutions. The hyperspectral imaging system used to capture data for this project consists of two individual hyperspectral cameras mounted side-by-side. One covers the visible and near-infrared (VNIR) portion of the spectrum and operates in the range of 400 to 1000 nm; the other captures the short-wavelength infrared (SWIR) portion and works in the 900 to 2500 nm range. While a hyperspectral imaging system has many obvious advantages, imperfections are inherent in its design. In this project, we consider the misregistration (pixel misalignment) between bands within either the VNIR or SWIR camera, as well as misregistration between the two cameras. We quantify the extent of the misregistration using the mutual information measure from probability and information theory. In examining misregistration, we also encounter other imperfections in the spectral images; we pay particular attention to how noise in certain bands affects the processing of the hyperspectral data.

Methods
Feature-Based vs. Area-Based Approaches
There exist two general categories of registration techniques: feature-based and area-based. Feature-based approaches assume that the images in question contain distinct features, such as corners and edges, that can be easily detected with methods such as Harris corner detection or zero-crossing edge detection; they therefore generally work better on urban scenes than on natural terrain. Area-based methods, on the other hand, rely on the statistical correlations of pixel intensities between corresponding regions of the two images. Cross correlation is a popular area-based measure of the similarity between two images. In this project, we focus on another area-based similarity measure, mutual information. Mutual information, as proposed by Viola, requires no information about the object's surface properties besides its shape and has been shown to be robust against variation in illumination. In other words, mutual information works well on scenes that lack distinct features, where gradient-based corner and edge detectors fail. Mutual information is also superior to correlation-based methods because it does not degrade under contrast reversal, a case in which correlation-based methods have no well-defined optimal solution.
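The contrast-reversal claim can be checked numerically. The NumPy sketch below (the random test image and the 32-bin histogram estimator are illustrative choices, not part of the project's pipeline) compares the Pearson correlation and the mutual information between an image and its contrast-reversed copy: the correlation flips to -1, while the mutual information is unchanged because reversing contrast merely mirrors the joint histogram without altering the statistical dependence.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two equal-sized images,
    estimated from a binned joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                  # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image b
    nz = p_ab > 0                             # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
inverted = 1.0 - img                          # contrast reversal

# Correlation flips sign under contrast reversal...
corr = np.corrcoef(img.ravel(), inverted.ravel())[0, 1]
# ...while mutual information is unaffected.
print(corr, mutual_information(img, img), mutual_information(img, inverted))
```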
Mutual Information
Mutual information measures the dependency and redundancy between two images, each represented by a two-dimensional matrix holding an intensity value at each pixel location (matrix entry). Let the two images to be registered be random variables A and B. The mutual information I(A,B) between these images is calculated from the marginal probabilities p_A(a) and p_B(b) together with the joint probability p_AB(a,b). In the calculation of I(A,B), a denotes an intensity value in Image A, and b denotes an intensity value in Image B.
The marginal and joint probabilities are defined as

p_AB(a,b) = h(a,b) / Σ_{a,b} h(a,b),   p_A(a) = Σ_b p_AB(a,b),   p_B(b) = Σ_a p_AB(a,b),

where h(a,b) is the joint histogram, and the mutual information is then

I(A,B) = Σ_a Σ_b p_AB(a,b) log[ p_AB(a,b) / ( p_A(a) p_B(b) ) ].
While the calculation of the marginal probabilities p_A(a) and p_B(b) is quite straightforward, the computation of the joint probability p_AB(a,b) relies on a two-dimensional joint histogram matrix that tabulates the occurrences of intensity pairs across the two images. Each entry h(a,b) of the joint histogram matrix denotes the number of corresponding pixels with intensity value a in Image A and intensity value b in Image B.
Its dependence on intensity values alone categorizes mutual information as an area-based (or intensity-based) method that works well when the image lacks distinct features. The use of the joint probability statistic helps make mutual information robust against variation in illumination and random noise.
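Putting the pieces together, band-to-band misregistration can be estimated by sliding one band over the other and taking the shift that maximizes the mutual information. The NumPy sketch below is a minimal illustration on synthetic data; the scene, noise level, shift range, and 32-bin histogram are all invented for illustration and are not the project's actual data or settings.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) from a binned joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(2)
scene = rng.random((128, 128))

# band_b is the scene translated 3 columns to the right plus sensor noise,
# mimicking misregistration between two wavebands of the same scene.
true_shift = 3
band_a = scene
band_b = np.roll(scene, true_shift, axis=1) + 0.05 * rng.standard_normal(scene.shape)

# Try integer column shifts of band_b; the shift that maximizes mutual
# information re-aligns it, so the misregistration estimate is its negative.
candidates = np.arange(-6, 7)
scores = [mutual_information(band_a, np.roll(band_b, s, axis=1)) for s in candidates]
estimated_shift = -int(candidates[np.argmax(scores)])
print(estimated_shift)
```

The same one-dimensional search generalizes to a two-dimensional search over row and column shifts, and the peak sharpness of the mutual information curve indicates how confidently the alignment is determined.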
Results
- What you found
Conclusions
- What it means
References
- HySpex Main Specifications. Norsk Elektro Optikk A/S (NEO).
- Skauli, Torbjorn. Hyperspectral Sensor Technology. Norwegian Defence Research Establishment.
- Viola, Paul A. Alignment by Maximization of Mutual Information. Massachusetts Institute of Technology, 1995.
- Kern, Jeffrey P., Marios Pattichis, and Samuel D. Stearns. "Registration of Image Cubes Using Multivariate Mutual Information." IEEE 0-7803-8104-1 (2003): 1645-1649.
Appendix I
- Code and Data
Appendix II
- Work partition (if a group project)