Compressed Sensing in Astronomical Imaging

==Introduction==

Compressed sensing, which proposes the recovery of sparse signals from highly sub-Nyquist sampling, is a recent advance in signal processing that has always intrigued me. Just as image compression schemes like JPEG have helped lighten expensive data storage and transmission, compressed sensing could reform the expensive data acquisition that is an element of many modern imaging systems. (Romberg 2008 gives a great introduction.) For my project, I implemented a simple version of compressed sensing for an imaging system used in astronomy, known as radio interferometry. Early in the compressed sensing literature, radio interferometry was noted in Wiaux et al. 2009 and elsewhere as an ideal candidate for application, as it conforms to the two principles compressed sensing relies on. The first is signal sparsity: most of the signal's coefficients are zero or near-zero in some transform space. The second is incoherence of the measurement matrix, formalized as the restricted isometry property, under which (roughly speaking) the underdetermined measurement matrix has columns that are "nearly" orthogonal. The measurement matrix in radio interferometry is in essence the "partial Fourier ensemble" described in Donoho 2006, or "the collection of n × m matrices made by sampling n rows out of the m × m Fourier matrix". The partial Fourier ensemble was one of the first measurement matrices proven to work well for compressed sensing, as shown in Candès et al. 2006, which makes radio interferometry a natural candidate.

"Sampling n rows out of the m x m Fourier matrix" leaves open the question of which n rows, however. In radio interferometry, the set of rows is determined by the placement of the radio antennas in the antenna array. Random sampling has proven to be critical to much of compressed sensing theory, which clashes with the highly patterned arrays used in radio interferometry such as the prominent Very Large Array (VLA). Wenger et al. 2010 noted this clash, and ran a few numerical simulations to see whether a randomized array or the traditional patterned array would work better for compressed sensing. The authors report that the VLA's patterned array actually outperforms a uniform random array, which I thought was an interesting and somewhat counterintuitive result. So for my project, I ran similar numerical simulations, with some deviations from the paper:

* I incorporated the discrete cosine transform (DCT) covered in class to represent images sparsely, whereas Wenger et al. just took advantage of the fact that radio images tend to be sparse in the pixel domain.
* I used orthogonal matching pursuit (OMP), a greedy compressed sensing algorithm, whereas the original paper used SparseRI, a compressed sensing recovery scheme the authors designed specifically for radio interferometry.
* I compared the VLA to a Gaussian array, as many of the massive up-and-coming interferometers like the Atacama Large Millimetre/submillimetre Array (ALMA) seem to be centrally condensed as well as randomized, whereas the paper compared the VLA to a uniform random array.

To simulate signals from astronomical radio sources, I took images the VLA has published on their online image gallery [1] and vectorized them. If the radio source is thought of as an array of point sources, then the brightness value of each pixel in the image represents the intensity of each point source.
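As a minimal MATLAB sketch of this preprocessing step (the file name here is hypothetical):

<pre>
% Load a test image, collapse it to grayscale intensities in [0,1],
% and stack the pixels into a signal vector (file name is illustrative).
im = im2double(imread('crab_nebula.jpg'));
if size(im, 3) == 3
    im = rgb2gray(im);   % collapse RGB to one intensity channel
end
x = im(:);               % each entry = intensity of one point source
</pre>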

==Background==

===Imaging in Radio Astronomy===

Interferometry is the definitive imaging tool of radio astronomy, allowing us to image finely structured radio sources such as galaxies, nebulas, and supernova remnants by using an array of many antennas to emulate a single lens. The aperture of the array is the greatest pairwise distance between its antennas, which, at several kilometers for interferometers like the VLA in New Mexico, gives far higher imaging resolution than a single lens could.

''The Very Large Array, which uses 27 antennas.''

Radio interferometry works by measuring the visibility function, or the interference fringes of the radio signal, at every pair of antennas in the array. The van Cittert-Zernike theorem gives the visibility function, as measured by one pair of antennas in the array, over the viewing window of the sky <math>P</math> as

<math>\mathbf{V}(\mathbf{b}) = \int_P \mathbf{I}(\mathbf{p}) \, e^{-2 \pi i \mathbf{b} \cdot \mathbf{p}} \, d\mathbf{p}</math>

where <math>\mathbf{b}</math> is the displacement vector between the two antennas, called the baseline, and <math>\mathbf{I}(\mathbf{p})</math> is the intensity of the radiation from direction <math>\mathbf{p}</math>. Assuming a small field of view, such that the object lives on the plane instead of the sphere, the visibility measurements can be approximated as

<math>\mathbf{V} \approx (\Delta \mathbf{p})^2  \sum_k \mathbf{I} (\mathbf{p}_k) e^{-2 \pi i \mathbf{b} \cdot \mathbf{p}_k} </math>.

In effect, the array samples the two-dimensional Fourier transform of the spatial intensity distribution <math>\mathbf{I}(\mathbf{p})</math> of the radio source. It collects one Fourier coefficient with each baseline, or each pair of antennas. Ideally, if we thoroughly sample the Fourier plane, we can then invert the transform to reconstruct <math>\mathbf{I}(\mathbf{p})</math>, the image of the radio source. However, the data acquisition quickly gets expensive, as we need to capture one Fourier coefficient per desired pixel in the image. The need to capture so much data has motivated a new generation of ambitious interferometers, including ALMA, which will use 66 antennas, and the Square Kilometre Array (SKA), which will use several thousand antennas to probe the Fourier plane. Meanwhile, smaller interferometers like the 27-antenna VLA must sample over an extended period of time.
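To make the discretized relation concrete, here is a minimal MATLAB sketch of the sum above, computing one visibility sample for a single baseline (the function name and grid are my own, illustrative choices):

<pre>
% Evaluate the discretized van Cittert-Zernike sum for one baseline b.
% I:  N x N intensity image sampled on a grid of directions p_k
% dp: grid spacing (Delta p); b: 2-vector baseline (in wavelengths)
function V = visibility(I, dp, b)
    N = size(I, 1);
    [px, py] = meshgrid((0:N-1) * dp);             % direction grid p_k
    fringe = exp(-2i * pi * (b(1)*px + b(2)*py));  % e^{-2 pi i b . p_k}
    V = dp^2 * sum(I(:) .* fringe(:));             % (Delta p)^2 * sum_k
end
</pre>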

Despite such efforts, there are always irregular holes on the Fourier plane where sampling of the visibility function is thin or simply nonexistent. This data deficiency is currently managed by filling in zeros for unknown visibility values, and applying deconvolution algorithms such as CLEAN to the resulting “dirty images”. However, is it necessary to collect so much data in the first place?

Among imaging's most promising developments in recent years is the theory of compressed sensing (CS), which has shown that the information of a signal can be preserved even when sampling falls well below the Nyquist rate. The theory revolves around a priori knowledge that the signal is sparse or compressible in some basis, in which case its information naturally resides in a relatively small number of coefficients. Instead of sampling the signal directly, which would require full sampling to locate every non-zero or significant coefficient, CS lets us compute just a few inner products of the signal with measurement vectors of certain favorable characteristics. (Here, the measurement vectors are the Fourier-like measurement vectors described by the van Cittert-Zernike theorem above.) The novelty of CS over image compression is that it takes advantage of image compressibility to lighten data acquisition, not just data storage.

===Compressed Sensing===

Consider a signal <math>\mathbf{x}</math> in <math>\mathbb{R}^m</math>, which has a sparse representation with the columns of an <math>m \times m</math> basis matrix <math>\Psi</math>, such that <math>\mathbf{x} = \Psi \mathbf{s}</math>. Given that <math>\mathbf{s}</math> is <math>\mathit{S}</math>-sparse, meaning it has only <math>\mathit{S}</math> non-zero components (or, more generally, given that <math>\mathbf{s}</math> is <math>\mathit{S}</math>-compressible and has only <math>\mathit{S}</math> "significant components") and given measurement vectors of certain favorable characteristics, CS proposes that we should not have to take the complete set of <math>m</math> inner-product measurements to recover the signal. If <math>\Theta</math> is an <math>n \times m</math> matrix (<math>n < m</math>) whose rows are the measurement vectors, then CS aims to invert the underdetermined linear system <math>\mathbf{y} = \Theta \mathbf{x} = \Theta \Psi \mathbf{s}</math>, where <math>\mathbf{y}</math> is a vector of the <math>n</math> measurements of <math>\mathbf{x}</math> and we call <math>\Phi = \Theta \Psi</math> the measurement matrix.
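A minimal MATLAB sketch of this measurement model follows; the sizes <math>m</math>, <math>n</math>, and <math>\mathit{S}</math> are illustrative choices, and the rows of the Fourier matrix are picked at random here, whereas in the project they are determined by the array's baselines:

<pre>
% Set up y = Theta*x = Theta*Psi*s = Phi*s with toy dimensions.
m = 32^2;                          % signal dimension (pixels)
n = 300;                           % number of measurements, n << m
S = 20;                            % sparsity of s
s = zeros(m, 1);
s(randperm(m, S)) = randn(S, 1);   % random S-sparse coefficients
Psi = dctmtx(m)';                  % columns = DCT basis vectors
x = Psi * s;                       % the (vectorized) image
F = dftmtx(m) / sqrt(m);           % unitary m x m DFT matrix
Theta = F(randperm(m, n), :);      % partial Fourier ensemble: n rows
Phi = Theta * Psi;                 % n x m measurement matrix
y = Phi * s;                       % the n visibility-like measurements
</pre>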

If <math>\mathbf{s}</math> is sparse, intuitively we should seek a sparse solution to the inverse problem. Indeed, in the absence of noise <math>\mathbf{s}</math> is the sparsest solution, i.e., the <math>\mathbf{\hat s}</math> subject to <math>\mathbf{y} = \Phi \mathbf{\hat s}</math> that has the fewest non-zero components. A direct search for this solution, however, is computationally intractable. Given <math>\Phi</math> of favorable characteristics (described below), an <math>\ell_1</math>-minimization problem can be solved instead, such that we search for the solution <math>\mathbf{\hat s}</math> with the smallest <math>\ell_1</math>-norm, or sum of the absolute values of the coefficients. CS recovery schemes also include greedy algorithms to solve a similar "<math>\ell_0</math>-minimization" problem, such as OMP, which I used (mostly because it runs much faster than basis pursuit and LASSO, the classic <math>\ell_1</math>-minimization methods).
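For concreteness, here is a bare-bones MATLAB sketch of OMP, greedily building the support of <math>\mathbf{\hat s}</math> one atom at a time (a sketch of the standard algorithm, not my exact project code):

<pre>
% Bare-bones OMP: greedily pick the column of Phi most correlated
% with the residual, then re-fit by least squares on the support.
function s_hat = omp(Phi, y, S)
    m = size(Phi, 2);
    r = y;                              % current residual
    idx = [];                           % estimated support
    for k = 1:S
        [~, j] = max(abs(Phi' * r));    % most correlated column
        idx = [idx, j];
        coef = Phi(:, idx) \ y;         % least squares on the support
        r = y - Phi(:, idx) * coef;     % update residual
    end
    s_hat = zeros(m, 1);
    s_hat(idx) = coef;
end
</pre>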

Recovering <math>\mathbf{x}</math> requires that the measurement matrix <math>\Phi = \Theta \Psi</math> does not corrupt or lose key features of <math>\mathbf{s}</math> in mapping the higher-dimensional <math>\mathbf{s}</math> to the lower-dimensional <math>\mathbf{y}</math>. One simple and intuitive measure of the information content of <math>\mathbf{s}</math> is its Euclidean norm. So we would hope that <math>\Phi</math> nearly preserves the Euclidean norm of <math>\mathbf{s}</math>, which implies that every possible subset of <math>\mathit{S}</math> columns of <math>\Phi</math> is approximately orthonormal. (A rigorous formulation of this quality is known as the restricted isometry property, but unfortunately it cannot be checked empirically: checking every <math>\mathit{S}</math>-combination of columns of <math>\Phi</math> for near-orthogonality is a combinatorial process and NP-hard.)
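While the RIP itself cannot be verified, a cheap (though much weaker) proxy is the mutual coherence of <math>\Phi</math>, the largest normalized inner product between distinct columns. This check is my addition, not part of the original analysis:

<pre>
% Mutual coherence of Phi: max normalized inner product between
% distinct columns. Small coherence is a (loose) proxy for the RIP.
G = Phi' * Phi;                 % Gram matrix of the columns
d = sqrt(real(diag(G)));        % column norms
C = abs(G) ./ (d * d');         % normalized cross-correlations
mu = max(C(~eye(size(C))));     % exclude the diagonal
</pre>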

In radio interferometry, <math>\Theta</math> is given by the van Cittert-Zernike theorem to be a partial Fourier ensemble. For my project, I used the DCT matrix as <math>\Psi</math>. Due to the role of the antenna array in selecting the rows of <math>\Theta</math>, the array plays a key role in designing an incoherent measurement matrix <math>\Phi</math>. Understanding what arrays are best for CS, as Wenger et al. 2010 explores, is therefore an important step before CS can be used in radio interferometry in practice.

==Methods==

===Software===

All of my simulated image reconstructions were done in MATLAB.

===Array Models===

The VLA's antenna coordinates are published in the ''VLA Green Book'', and are plotted below. An array measures one Fourier coefficient with each pair of antennas, or baseline, so the set of baselines is also plotted. The baselines give the Fourier plane sampling of the array.

<center>
<gallery widths=400px heights=400px>
File:VLAAntennas.jpg | VLA antenna configuration. Coordinates taken from the ''VLA Green Book''.
File:VLABaselines.jpg | VLA baselines. Calculated by finding the displacement vectors between all pairs of antennas in the array.
</gallery>
</center>
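A MATLAB sketch of this baseline computation, assuming the antenna coordinates are stored as an <math>N \times 2</math> matrix with one row per antenna:

<pre>
% Compute all baselines (pairwise displacement vectors) of an array.
% ant: N x 2 matrix of antenna coordinates, one row per antenna.
function B = baselines(ant)
    N = size(ant, 1);
    [i, j] = find(triu(true(N), 1));   % all unordered antenna pairs
    B = ant(i, :) - ant(j, :);         % one baseline per pair
    % For a real-valued sky, V(-b) = conj(V(b)), so each baseline
    % also samples the conjugate point -b on the Fourier plane.
    B = [B; -B];
end
</pre>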


==Results==

==Conclusions==

==References==

Candès, E., Romberg, J., and Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52:489–509, 2006. [2]

Donoho, D.L. Compressed sensing. IEEE Trans. Inform. Theory, 52:1289–1306, 2006. [3]

Romberg, J. Imaging via compressive sampling. IEEE Signal Proc. Mag., 25:14–20, 2008. [4]

Wenger, S., Darabi, S., Sen, P., Glassmeier, K., and Magnor, M. Compressed sensing for aperture synthesis imaging. Proc. IEEE International Conf. on Image Proc., 1381, 2010. [5]

Wiaux, Y., Jacques, L., Puy, G., Scaife, A.M.M., and Vandergheynst, P. Compressed sensing imaging techniques for radio interferometry. Monthly Notices of the Royal Astr. Soc., 395:1733–1742, 2009. [6]

==Appendix==

Link to a PostScript-format download of the ''VLA Green Book'', which publishes the coordinates of their antenna array in Chapter 1: [https://science.nrao.edu/facilities/vla/obsolete/green-book]

Links to the test images, taken from the VLA's beautiful online image gallery:

* Radio image of the Crab Nebula: [8]
* Radio image of the Cassiopeia A supernova remnants: [9]
* Radio image of the galaxy M87: [10]
* Radio image of the galaxy 3C58: [11]