PengStarobinets

= Background =
The performance of long-distance imaging systems can often be strongly affected by atmospheric turbulence caused by variation of the refractive index along the optical transmission path. Such turbulence can produce geometric distortion, space- and time-variant defocus blur, and motion blur. An example is shown in the following video of the moon.

Below is an example of the atmospheric turbulence that was used for this project:
<center> Recording of the moon with effects of turbulence<sup>1</sup></center>
There have been many approaches to solving this problem that attempt to restore a single high-quality image from an observed frame sequence distorted by air turbulence. As in the video, these approaches, and the approach addressed in this paper, work under the assumption that the scene and the image sensor are both static and that observed motions are due to the air turbulence alone.
The imaging process can be modeled as:
<math> g_k = H_kF_kf + n_k </math> where <math> f </math> denotes the ideal image, <math> F_k </math> and <math> H_k </math> represent the geometric deformation and blurring matrices, respectively, <math> n_k </math> denotes additive noise, and <math> g_k </math> is the k-th observed frame.
The goal then becomes to invert this process so that we can recover the desired corrected image.
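The forward model above can be sketched numerically. The following is a minimal illustrative simulation; the project itself was implemented in MATLAB, and this NumPy/SciPy sketch (the `observe_frame` helper, a Gaussian blur standing in for <math>H_k</math>, and a smooth random displacement field standing in for <math>F_k</math>) is our own assumption, not the authors' exact operators:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def observe_frame(f, sigma_blur=1.5, warp_strength=1.0, noise_std=0.01, rng=None):
    """Simulate one observed frame g_k = H_k F_k f + n_k.

    F_k: smooth random geometric deformation, H_k: Gaussian blur,
    n_k: additive Gaussian noise. All parameters are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Smooth random displacement field (the geometric deformation F_k).
    dx = gaussian_filter(rng.standard_normal((h, w)), 8) * warp_strength * 10
    dy = gaussian_filter(rng.standard_normal((h, w)), 8) * warp_strength * 10
    warped = map_coordinates(f, [yy + dy, xx + dx], order=1, mode='reflect')
    blurred = gaussian_filter(warped, sigma_blur)       # H_k
    return blurred + rng.normal(0, noise_std, f.shape)  # + n_k

# Example: a synthetic "ideal" image and a short observed sequence.
f = np.zeros((64, 64)); f[24:40, 24:40] = 1.0
frames = [observe_frame(f, rng=np.random.default_rng(k)) for k in range(5)]
```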
= Methods =
Existing restoration algorithms for this problem can generally be categorized in two ways.
== Multi-Frame Reconstruction Framework ==
First, a nonrigid image registration technique is used to register each observed frame with respect to a fixed reference grid (usually generated from an average of the image sequence to eliminate spatial variation)<sup>1</sup>. Next, the registration parameters are used to estimate the corresponding motion field for each frame, and each frame is transformed back to a non-deformed position. Finally, a sharp image is formed through a Bayesian reconstruction filter.
== Lucky Imaging ==
Image selection and fusion methods are used to reduce the blurring effects caused by turbulence<sup>5</sup>. The image selection algorithm attempts to find frames of the best quality (lucky frames) from the image stream. The output image is then produced by fusing these best-quality images together. The principle behind this method is that for short-exposure images, turbulence creates 'mutations' in image quality and randomly makes some images sharp. Lucky-image-based weighting schemes can be used to select the sharpest images, or images with certain sharp regions, and combine them to create an image that is sharp everywhere.
== Our Method ==
To correct geometric distortion and reduce space- and time-varying blur, in this work we propose a new approach capable of restoring a single high-quality image from a given image sequence distorted by atmospheric turbulence. This approach uses two major steps. For the first step, we use a B-spline based non-rigid image registration algorithm to register each observed frame with respect to a reference image<sup>1,4</sup>. To improve registration accuracy, a symmetry constraint is introduced into the cost function to penalize inconsistency between the forward and backward deformation parameters during the estimation process. A fast Gauss-Newton method is used to reduce the computational cost of the registration algorithm.

Our new idea, an addition to this approach, involves separating a single frame into a set of overlapping sub-frames and running the algorithm described above on those sub-frames. In addition to reducing computational load and time, this improves on the original method by reducing the effect of systematic errors on the frame, since the errors are localized and isolated to a particular block rather than affecting the whole image. Another benefit of this approach is that scaling the number of test points causes a much smaller increase in processing time than the same number of test points would require if applied to the whole image. The second step of our proposal is to stitch the undeformed sub-frames back into a single image and then, using a high-pass sharpening filter, produce a final, high-quality image.
=== Proposed Approach ===
==== Non-rigid Image Registration ====
First, we obtain a reference image without image deformation, generally by averaging the frame sequence. The math is specified below in the equations section. In general, the geometric distortion between an observed distorted image and the reference image is obtained from the movement of control points placed in a grid on the images. B-spline interpolation is then used to map all the points from the reference image to the deformed image<sup>1</sup>. A cost function is then minimized that both reduces the error between the mapped points and enforces symmetry between the forward and backward deformations. After this cost function is minimized, a motion vector from the deformed image to the reference image is generated. This process is repeated for each sub-frame, and then the sub-frames are recombined into a single image.
The figure below shows an example of the sub-frame implementation of turbulence compensation.
[[File:Turbulance_test_points.PNG|400px|thumb|center| (Left) Test points and their deformed positions for each sub-frame. (Right) Deformed location of each pixel for a single subframe.]]
= Equations =
Equations analyzed and used are from [1] and [3].
The initial test points are taken with equal spacing and can be represented as:
<math> \hat{\mathbf{x}}_{0i} = (\hat{x}_{0i},\hat{y}_{0i})^T </math>
Running a correlation algorithm, we find the deformed locations of the test points. The differences between the original positions and the deformed positions are then stored in the deformation vector:
<center> <math> \vec{\mathbf{p}} = [\hat{x}_{0i}-\hat{x}_{i},\hat{y}_{0i}-\hat{y}_{i}]^T = [\Delta \hat{x}_{1},...,\Delta \hat{x}_{n},\Delta \hat{y}_{1},...,\Delta \hat{y}_{n}]^T </math> </center>
Using B-spline interpolation, we define the spline basis <math> c_i </math> and the basis function matrix <math> \mathbf{A(x)} </math> for each pixel, where <math> \mathbf{x}=(x,y)^T </math>.
By definition, <math> c_i </math> is:
<center> <math> c_i = \Beta (\frac{x-\hat{x}_{0i}}{h_x}) \Beta (\frac{y-\hat{y}_{0i}}{h_y}) </math>, where <math> \Beta (z) = \begin{cases} 2/3-(1-\left | z \right | /2)z^2 \mbox{, if } 0\le \left | z \right | \le 1 \\ (2-\left | z \right | )^3/6 \mbox{, if } 1\le \left | z \right | \le 2 \\ 0 \mbox{, otherwise} \end{cases} </math> </center>
The basis function matrix is then:
<center> <math> \mathbf{A(x)} = \begin{bmatrix} c_1 & \cdots & c_n & 0 & \cdots & 0 \\ 0 & \cdots & 0 & c_1 & \cdots & c_n \end{bmatrix} </math> </center>
With the basis function matrix, we can then define the deformed coordinates for every pixel:
<center> <math> \mathbf{W(x; } \vec{\mathbf{p}} \mathbf{)} = \mathbf{x} + \mathbf{A(x)} \vec{\mathbf{p}} </math> </center>
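The basis and the warp above translate directly into code. This sketch (function names are ours) evaluates <math> \Beta </math>, builds <math> \mathbf{A(x)} </math>, and computes <math> \mathbf{W(x; } \vec{\mathbf{p}} \mathbf{)} </math> for one pixel:

```python
import numpy as np

def beta(z):
    """Cubic B-spline kernel, as defined above."""
    z = np.abs(np.asarray(z, dtype=float))
    out = np.zeros_like(z)
    m1 = z <= 1
    m2 = (z > 1) & (z <= 2)
    out[m1] = 2/3 - (1 - z[m1] / 2) * z[m1]**2
    out[m2] = (2 - z[m2])**3 / 6
    return out

def basis_matrix(x, y, ctrl_x, ctrl_y, hx, hy):
    """A(x): 2 x 2n basis matrix at pixel (x, y) for n control points."""
    c = beta((x - ctrl_x) / hx) * beta((y - ctrl_y) / hy)   # c_i, length n
    n = len(c)
    A = np.zeros((2, 2 * n))
    A[0, :n] = c
    A[1, n:] = c
    return A

def warp_point(x, y, p, ctrl_x, ctrl_y, hx, hy):
    """W(x; p) = x + A(x) p: deformed coordinates of one pixel."""
    return np.array([x, y]) + basis_matrix(x, y, ctrl_x, ctrl_y, hx, hy) @ p
```

With a zero deformation vector the warp is the identity, and on a uniform control grid the weights <math> c_i </math> sum to one in the interior (the B-spline partition of unity).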
As suggested in Professor Milanfar's paper, instead of using a classic B-spline registration approach to the estimation of the deformation vector through minimization of the following cost function,
<center> <math> C(\overrightarrow{\mathbf{p}})= \sum_{x} \left | R(\mathbf{W(x; } \overrightarrow{\mathbf{p}} \mathbf{)})-G(\mathbf{x}) \right |^2 </math> </center>
which is prone to local minima traps, one can use the following cost function:
<center> <math> C(\overrightarrow{\mathbf{p}})= \sum_{x} \left | R(\mathbf{W(x; } \overrightarrow{\mathbf{p}} \mathbf{)}) - G(\mathbf{x}) \right |^2 + \sum_{x} \left | G( \mathbf{W(x; } \overleftarrow{\mathbf{p}} \mathbf{)}) - R(\mathbf{x}) \right |^2 + \gamma (\overrightarrow{\mathbf{p}} + \overleftarrow{\mathbf{p}})^T(\overrightarrow{\mathbf{p}} + \overleftarrow{\mathbf{p}}) </math> </center>
with the initial approximation that:
<center> <math> R(\mathbf{W(x; } \overrightarrow{\mathbf{p}}^{l} + \Delta \overrightarrow{\mathbf{p}} \mathbf{)}) \approx R(\mathbf{W(x; } \overrightarrow{\mathbf{p}}^{l} \mathbf{)}) + \overrightarrow{\mathbf{d}} \mathbf{(x)}^T \Delta \overrightarrow{\mathbf{p}} </math> </center>
where we initially assume that <math> \overrightarrow{\mathbf{p}}^{0} = \overleftarrow{\mathbf{p}}^{0} = \mathbf{0} </math>. Then, after each iteration, the deformation vector can be updated as:
<center> <math> \begin{bmatrix} \overrightarrow{\mathbf{p}}^{l+1} \\ \overleftarrow{\mathbf{p}}^{l+1} \end{bmatrix} = \begin{bmatrix} \overrightarrow{\mathbf{p}}^{l} \\ \overleftarrow{\mathbf{p}}^{l} \end{bmatrix} - \mathbf{H}^{-1} \begin{bmatrix} \overrightarrow{\mathbf{b}} \\ \overleftarrow{\mathbf{b}} \end{bmatrix} </math> </center>
The following equations describe the system:
<center> <math> \mathbf{H} = \begin{bmatrix} \overrightarrow{\mathbf{H}}+\mathbf{\gamma I} & \mathbf{\gamma I} \\ \mathbf{\gamma I} & \overleftarrow{\mathbf{H}}+\mathbf{\gamma I} \end{bmatrix} </math>, where <math> \gamma = 5000 </math>. </center>
<center> <math> \overrightarrow{\mathbf{H}} = \sum_{x} \overrightarrow{\mathbf{d}} \mathbf{(x)} \overrightarrow{\mathbf{d}} \mathbf{(x)}^T, \quad \overleftarrow{\mathbf{H}} = \sum_{x} \overleftarrow{\mathbf{d}} \mathbf{(x)} \overleftarrow{\mathbf{d}} \mathbf{(x)}^T </math> </center>
A faster implementation, as suggested in the paper, is to calculate <math> \overrightarrow{\mathbf{H}} </math> and <math> \overleftarrow{\mathbf{H}} </math> in the following way:
<center> <math> \overrightarrow{\mathbf{H}} = \sum_{x} \overrightarrow{\mathbf{H_x}} </math> </center>
<center> <math> \overleftarrow{\mathbf{H}} = \sum_{x} \overleftarrow{\mathbf{H_x}} </math> </center>
where:
<center> <math> \overrightarrow{\mathbf{H_x}} = \begin{bmatrix} r_x(\mathbf{W})^2 \mathbf{C_x} & r_x(\mathbf{W})r_y(\mathbf{W}) \mathbf{C_x} \\ r_x(\mathbf{W})r_y(\mathbf{W}) \mathbf{C_x} & r_y(\mathbf{W})^2 \mathbf{C_x} \end{bmatrix} </math> </center>
<center> <math> \overrightarrow{\mathbf{d}} \mathbf{(x)^T} = \frac{\part R(\mathbf{W(x; } \overrightarrow{\mathbf{p}}^l \mathbf{)})}{\part \mathbf{W}} \mathbf{A(x)}, \quad \overleftarrow{\mathbf{d}} \mathbf{(x)^T} = \frac{\part G(\mathbf{W(x; } \overleftarrow{\mathbf{p}}^l \mathbf{)})}{\part \mathbf{W}} \mathbf{A(x)} </math> </center>
<center> <math> \overrightarrow{\mathbf{b}} = \sum_{x} \overrightarrow{\mathbf{d}} \mathbf{(x)} [R( \mathbf{W(x; } \overrightarrow{\mathbf{p}}^l)) - G(\mathbf{x} )], \quad \overleftarrow{\mathbf{b}} = \sum_{x} \overleftarrow{\mathbf{d}} \mathbf{(x)} [G( \mathbf{W(x; } \overleftarrow{\mathbf{p}}^l)) - R(\mathbf{x} )] </math> </center>
where <math> \frac{\part R(\mathbf{W(x; } \overrightarrow{\mathbf{p}}^l \mathbf{)})}{\part \mathbf{W}} = [r_x(\mathbf{W}),r_y(\mathbf{W})]</math> is the gradient of the averaged image <math> R </math> at the deformed pixel location <math> \mathbf{W(x; } \overrightarrow{\mathbf{p}}^l \mathbf{)} </math>.
= Results =
After a number of iterations to minimize the cost function, the new deformation matrix <math> \mathbf{W(x; } \overleftarrow{\mathbf{p}} \mathbf{)} </math> for a single subframe can look like this:
[[File:deform_after_iter.png|300px|thumb|center|Deformation matrix after 10 iterations]]
Using our process flow, we generated a set of sub-frames. As can be seen in the figure below, each individual sub-frame suffers from edge artifacts. Overlapping removes these artifacts, and the final stitched image is also shown.
[[File:before_after_stitching.PNG|400px|thumb|center|Frame before and after the stitching algorithm]]
The images below show the progression from the distorted frame to the final improved one.
= Conclusions =
In this project, we proposed a new framework for the restoration of a single high-quality frame from an image sequence distorted by air turbulence. The proposed algorithm uses a multi-frame reconstruction approach with the addition of overlapping sub-frame evaluations and sub-image stitching to create a final image. The algorithm registers the sub-frames to suppress spatial and geometric deformation using a B-spline based non-rigid image registration method. Next, the frames are stitched together using an edge-correlation algorithm. Finally, a sharpening high-pass filter is applied to generate a final high-quality output. Our results show that our method is both an effective implementation of the multi-frame reconstruction framework and an improvement on the process in some ways, since it prevents localized errors from affecting the whole image.
= References =
[1] Zhu, X. and Milanfar, P., “<span style="text-decoration:underline">Image Reconstruction from Videos Distorted by Atmospheric Turbulence</span>”, <i>SPIE Electronic Imaging, Conference 7543 on Visual Information Processing and Communication</i>, San Jose, CA, 2010

[2] Shimizu, M., Yoshimura, S., Tanaka, M., “<span style="text-decoration:underline">Super-Resolution from Image Sequence under Influence of Hot-Air Optical Turbulence</span>”, <i>CVPR</i> 2008, 1–8 (June 2008)

[3] Zhu, X. and Milanfar, P., “<span style="text-decoration:underline">Removing Atmospheric Turbulence via Space-Invariant Deconvolution</span>”, <i>IEEE Trans. on Pattern Analysis and Machine Intelligence</i>, vol. 35, no. 1, pp. 157-170, Jan. 2013

[4] Szeliski, R. and Coughlan, J., “<span style="text-decoration:underline">Spline-Based Image Registration</span>”, <i>CRL</i>, Jan. 1994

[5] Joshi, N. and Cohen, M., “<span style="text-decoration:underline">Seeing Mt. Rainier: Lucky Imaging for Multi-Image Denoising, Sharpening, and Haze Removal</span>”, Microsoft Research, <i>ICCP</i>, 2010
= Software =
Revision as of 06:07, 20 March 2013
This project was implemented using MATLAB R2011b.
= Appendix I - Code and Data =
== Code ==
== Data ==
The data was provided by Professor Milanfar, and was originally provided by NASA Langley Research Center.
= Appendix II - Work partition =
Both of us worked together on the project/wiki as a team with an even work distribution.