Fast Alignment of Image Bursts for Google's HDR+
Introduction
Standard imaging systems often lack the dynamic range necessary to properly expose every part of a given scene. Typically, an exposure value is chosen as a compromise between overexposure, in which bright regions lose detail to saturation, and underexposure, in which dark regions consist predominantly of noise. High dynamic range (HDR) imaging refers to a family of techniques for alleviating this tradeoff. A common approach is to capture the same scene multiple times at multiple exposure values: a short exposure preserves detail in bright regions, a long exposure increases the signal-to-noise ratio in dark regions, and intermediate exposures properly expose other parts of the scene. The resulting images are then merged into a single image in which all content appears well exposed. This method is effective in some scenarios, but it degrades in the presence of scene and camera motion; the frames must be aligned before they can be merged, yet alignment becomes difficult when different features are prominent in each image, a problem that worsens when some or all portions of the scene move between frames.
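The merge step of the bracketing approach can be sketched numerically. The function below is a minimal illustration (not any particular camera's algorithm), assuming a linear sensor response with pixel values in [0, 1] and known exposure times: each frame is divided by its exposure time to estimate scene radiance, and pixels are weighted toward mid-gray so that clipped and near-black samples contribute little.

```python
import numpy as np

def merge_bracket(frames, exposure_times, sat=0.95):
    """Merge an exposure bracket of linear-response frames into a single
    radiance estimate. Illustrative sketch only: weights use a simple "hat"
    function peaking at mid-gray, and pixels at or above `sat` are treated
    as clipped and discarded."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * frame - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        w[frame >= sat] = 0.0                # discard saturated pixels entirely
        num += w * frame / t                 # exposure-normalized radiance
        den += w
    return num / np.maximum(den, 1e-12)      # avoid division by zero
```

With consistent frames (e.g. values 0.3 at exposure 1.0 and 0.6 at exposure 2.0), the merge recovers the common radiance 0.3; a pixel that clips in the long exposure is reconstructed from the short one alone.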
A great deal of photography today is done with the image sensors built into mobile phones; the very existence of platforms like Instagram and Snapchat is evidence of this. These sensors offer increasingly high resolution and overall performance, but they are inherently limited by their small size, the quality of the optical assemblies attached to them, the computing power available to them for image processing, the likelihood that they will move while capturing images, and the responsiveness and high throughput their users expect. The first two limitations suggest that mobile image sensors are good candidates for HDR imaging; the last three suggest that the common HDR method described above is a poor fit for this platform. In response, Google developed a different HDR algorithm optimized for mobile imaging platforms under the constraints discussed above. This project explores a prototype Matlab implementation of frame alignment, a key step in the algorithm, with the goal of understanding which parts of the process most affect the speed of computation.
Background
Google's HDR+ algorithm [1] is designed to operate on bursts of raw frames directly from the image sensor, eventually merging them into a single raw image that can then follow the normal demosaicking, sensor correction, optics correction, tone mapping, and associated processing steps that a traditional single-exposure image would undergo. The process is roughly broken up into four steps: capture, align, merge, finish.
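The four-stage flow can be expressed as a simple composition. The skeleton below is a hypothetical sketch of the stage ordering only (in Python rather than the project's Matlab); the stage implementations are placeholders supplied by the caller, not HDR+'s actual code.

```python
import numpy as np

def hdr_plus_pipeline(capture_burst, align, merge, finish):
    """Hypothetical skeleton of the capture -> align -> merge -> finish flow
    described above. Each stage is a caller-supplied placeholder."""
    burst = capture_burst()    # list of raw frames, all at the same exposure
    aligned = align(burst)     # motion-compensate each frame to a reference
    raw_hdr = merge(aligned)   # combine the burst into a single raw frame
    return finish(raw_hdr)     # demosaic, correct, tone map, etc.

# Toy stand-ins for each stage, just to exercise the plumbing:
result = hdr_plus_pipeline(
    capture_burst=lambda: [np.full((2, 2), 0.2), np.full((2, 2), 0.4)],
    align=lambda burst: burst,                        # assume already aligned
    merge=lambda burst: np.mean(burst, axis=0),       # naive frame average
    finish=lambda img: np.clip(2.0 * img, 0.0, 1.0),  # crude brightening
)
```

Structuring the pipeline this way makes it easy to swap in and time alternative implementations of a single stage, which matches the project's focus on the alignment step.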
Capture
Instead of attempting to perform exposure bracketing, HDR+ uses a burst of intentionally underexposed frames. These frames are captured in quick succession, each with the same exposure value. Underexposure increases the chances of preserving detail in the bright parts of each frame.
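The intuition can be demonstrated with a toy sensor model (an illustrative assumption, not HDR+'s actual noise model): signal scales linearly with exposure, picks up read noise, and clips at the full-well level. A single "correct" exposure clips a bright region, while a burst of short exposures keeps it below saturation, and averaging the burst recovers the signal-to-noise ratio lost to underexposure.

```python
import numpy as np

def simulate_capture(radiance, exposure, full_well=1.0, read_noise=0.01, rng=None):
    """Toy sensor: linear response, additive Gaussian read noise,
    hard clipping at the full-well capacity."""
    if rng is None:
        rng = np.random.default_rng(0)
    signal = radiance * exposure + rng.normal(0.0, read_noise, radiance.shape)
    return np.clip(signal, 0.0, full_well)

# A bright region whose radiance exceeds what a unit exposure can record:
bright = np.full(10_000, 1.6)
long_exp = simulate_capture(bright, exposure=1.0)   # clips at 1.0: detail lost
burst = [simulate_capture(bright, exposure=0.5, rng=np.random.default_rng(s))
         for s in range(8)]
short_avg = np.mean(burst, axis=0)                  # ~0.8: detail preserved
```

Each short frame records roughly 0.8 without clipping, and averaging eight of them cuts the read noise by about a factor of sqrt(8), which is the role the align and merge stages play for real bursts.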
Methods
Results
Conclusions
References
<references />