Stereo Depth Estimation Algorithms

Introduction

The brain is a fantastic supercomputer with unmatched 3D object recognition and depth estimation. The ocular system has over 120 million rods and 6 million cones, which send signals that are funneled into our visual cortex, resulting in our capability to see. The massive amount of data and the processing power of our brain make it a fascinating phenomenon. Stereo depth is a major part of this machinery, and in this paper we delve into the topic by investigating existing computer algorithms. Depth extraction from stereo images ultimately boils down to triangulation. The paper seeks to explain three primary approaches and the details surrounding stereo depth extraction.

Background

Computer vision has seen explosive growth since the turn of the century; stereo depth in particular has seen its fair share of research due to its applications in panorama mosaicking, improved face tracking, facial recognition, autonomous driving, high-resolution geometric sampling, and many others. The result of this high interest is a plethora of algorithms and papers describing various ways of estimating depth from two images.

A typical stereo algorithm consists of four steps: matching cost computation, cost aggregation, optimization, and disparity refinement [1]. Stereo correspondence, the process of matching pixels between stereo images, is essentially an optimization problem, and whether an algorithm is categorized as local or global is determined by the way it solves the stereo correspondence problem.
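As a rough illustration of these four stages, the sketch below chains them together for a rectified grayscale pair using NumPy/SciPy; the window size, cost-volume layout, and choice of a median filter for refinement are assumptions for this sketch, not prescriptions from [1].

import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def stereo_pipeline(left, right, max_disp=64, win=7):
    """Four-step local pipeline; left/right are rectified grayscale arrays."""
    left, right = left.astype(float), right.astype(float)
    h, w = left.shape
    # 1. Matching cost computation: absolute difference for every disparity hypothesis.
    cost = np.full((h, w, max_disp), 255.0)
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    # 2. Cost aggregation: average the cost over a win x win window.
    cost = uniform_filter(cost, size=(win, win, 1))
    # 3. Optimization: winner-takes-all over disparities (local methods stop here).
    disparity = np.argmin(cost, axis=2)
    # 4. Disparity refinement: e.g. a median filter to suppress isolated outliers.
    return median_filter(disparity, size=3)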

Methods

"The wide range of modern dense matching algorithms can be assigned to two big categories: local methods, which evaluate the correspondences one point at a time, not considering neighboring points/measures, and global methods where some constraint on the regularity of results are enforced in the estimation"[2]. However, after the advent of the MC-CNN paper[3] a whole host of learning network algorithms have dominated the KITTI data set. Below we will go over the types of depth estimation methods assuming a rectified image has been provided.

Local or Block Method


One image is picked as a reference, and a block around each pixel is searched along the epipolar line to identify the corresponding pixel in the paired image. Different cost functions can be used to determine the similarity between pixels, and a winner-takes-all strategy is used to find the location of the corresponding pixel in the search space.

Examples of cost functions:

  • Sum of absolute differences (SAD)
  • Sum of squared differences (SSD)
  • Cross-correlation (maximized rather than minimized)
  • Normalized cross-correlation (NCC)

This approach can be very efficient and accurate, making it particularly suitable for real-time applications [2]. However, block-based algorithms are susceptible to images with:

  • Sudden depth variations
  • High degrees of occlusion
  • Low-contrast or weakly textured regions, which produce very noisy results

Local solutions do not take advantage of the smoothness terms described under the global optimization solutions below. Most methods also use a "left-right consistency" disparity refinement step to improve the estimated disparity map. This step helps mitigate some of the occlusion issues by re-running the algorithm with the reference image swapped and flagging pixels where the two results disagree.
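The consistency check itself is simple; a minimal sketch follows, where the disp_left and disp_right arrays are assumed to come from running the block matcher twice, once with each image as the reference.

import numpy as np

def left_right_check(disp_left, disp_right, tol=1):
    """Flag pixels whose left->right and right->left disparities disagree."""
    h, w = disp_left.shape
    occluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d                      # matching column in the right image
            if xr < 0 or abs(disp_right[y, xr] - d) > tol:
                occluded[y, x] = True       # inconsistent: likely occlusion or mismatch
    return occluded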

Semi-Global Matching (SGM)

"Local methods, which are based on correlation can have very efficient implementations that are suitable for real-time applications. However, these methods assume constant disparities within a correlation window, which is incorrect at discontinuities and leads to blurred object boundaries"[8]. The semi-global matching approach is a spin on block matching but attempts to remedy the depth discontinuity boundaries by penalizing these areas at a higher cost than the normal cost function. The other change is the use of mutual information as a cost function, instead of the previously mentioned intensity error functions.

Global Method


Global methods handle occlusion and depth discontinuity much better than the previous methods because the optimization is performed over the whole image rather than pixel by pixel. Global implementations minimize an energy function with two parts: a data term that penalizes the error in matching pixels, and a smoothness term that models the similarity in depth between neighbouring pixels.
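A common form of this energy, with notation assumed here rather than taken from a specific paper, is

E(d) = \sum_{p} C(p, d_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V(d_p, d_q)

where C(p, d_p) is the matching (data) cost of assigning disparity d_p to pixel p, \mathcal{N} is the set of neighbouring pixel pairs, V penalizes disparity differences between neighbours, and \lambda weights smoothness against data fidelity.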

Simulated Annealing

The simulated annealing implementation was first proposed by Stuart Geman and Donald Geman for restoring images using a Bayesian approach [9] and later adapted for stereo vision by Stephen T. Barnard [10]. The simulated annealing solution to depth estimation gradually reduces a costing factor (the temperature) while sampling states from a Gibbs distribution. In the initial stages, big swings are allowed so as to avoid getting trapped in local minima of the energy minimization task. The method takes its inspiration from the physical phenomenon of high molecular activity at high temperatures and of molecules settling into stable, fixed positions at low temperatures. The method is guaranteed to reach the optimal solution, but that guarantee only holds in the limit of infinitely slow cooling.


The cooling rate is characterized by:

T(k) = \frac{c}{\log(1 + k)}


The Gibbs/Boltzmann distribution over states is characterized by:

p_i = \frac{e^{-\varepsilon_i / kT}}{\sum_{j=1}^{M} e^{-\varepsilon_j / kT}}
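A minimal sketch of this procedure applied to a disparity field is given below; the single-pixel update move, the 4-connected smoothness term, and the parameter values are assumptions made for illustration rather than Barnard's exact formulation [10].

import numpy as np

rng = np.random.default_rng(0)

def anneal_disparity(cost, smooth_weight=1.0, c=2.0, iters=20000):
    """Simulated annealing over a disparity field given a (H, W, D) cost volume."""
    h, w, d_max = cost.shape
    disp = rng.integers(0, d_max, size=(h, w))

    def local_energy(y, x, d):
        # Data term plus smoothness with the 4-connected neighbours.
        e = cost[y, x, d]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                e += smooth_weight * abs(int(d) - int(disp[ny, nx]))
        return e

    for k in range(1, iters + 1):
        temperature = c / np.log(1.0 + k)            # logarithmic cooling schedule
        y, x = rng.integers(0, h), rng.integers(0, w)
        d_new = rng.integers(0, d_max)
        delta = local_energy(y, x, d_new) - local_energy(y, x, disp[y, x])
        # Always accept downhill moves; accept uphill moves with Boltzmann probability.
        if delta <= 0 or rng.random() < np.exp(-delta / temperature):
            disp[y, x] = d_new
    return disp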

Faster Global Methods

Markov Random Fields (MRFs) provide a statistical framework for building out a disparity map: each pixel is a node whose label is its disparity, and the data and smoothness terms define an energy over the whole field. The following methods are efficient approximate solvers for this MRF energy [7].

Belief Propagation

The belief propagation implementation of energy minimization uses a message passing approach [6]. Each pixel broadcasts what it thinks its neighbours' disparities should be and in turn updates its own belief after receiving the opinions of its neighbours. You can imagine a collection of forces pulling one way and another until a consensus is reached. Belief propagation has no mathematical guarantee of reaching the global minimum on graphs with loops, but it is highly performant and is widely used for the quality of the solutions it can achieve.
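In the min-sum formulation (notation assumed here; see [6] for the full algorithm), the message that pixel p sends to its neighbour q at iteration t is

m^{t}_{p \to q}(d_q) = \min_{d_p} \Bigl( C(p, d_p) + V(d_p, d_q) + \sum_{s \in \mathcal{N}(p) \setminus \{q\}} m^{t-1}_{s \to p}(d_p) \Bigr)

and once the messages stabilize, each pixel selects the disparity that minimizes its own data cost plus the sum of its incoming messages.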

Graph cuts

Graph cut methods recast the energy minimization as a minimum-cut/maximum-flow problem on a graph and use expansion or swap moves to reach strong local minima efficiently [5].

Other Methods



CNN

MC-CNN [3] is the algorithm that started this movement, and the latest algorithms have implemented various approaches to improving on the idea of learned stereo matching. The cost optimization designs, whether global or local, seek to answer whether a reference pixel is similar to the pixel under comparison. The machine learning networks seek to improve on this comparison step by adding a priori knowledge.


In the previously mentioned implementations, difficult geometry and occlusions between the two images are simply treated as costs to be minimized. The neural networks are instead trained on many geometric image snippets, with both positive and negative matches, so that they can answer the question of similarity with a priori knowledge.

Features are first extracted from each input image, and these features are then combined to determine a similarity score.
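A hedged sketch of this idea in PyTorch, loosely in the spirit of the fast MC-CNN architecture [3] but not a reproduction of it (the layer sizes, patch size, and use of cosine similarity as the score are assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFeatureNet(nn.Module):
    """Shared (siamese) feature extractor applied to both image patches."""
    def __init__(self, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3),
        )

    def forward(self, patch):
        return self.features(patch).flatten(1)    # one feature vector per patch

def similarity(net, left_patch, right_patch):
    """Matching score for a candidate correspondence: cosine similarity of features."""
    return F.cosine_similarity(net(left_patch), net(right_patch), dim=1)

# Usage sketch: a batch of 9x9 grayscale patches from candidate correspondences.
net = PatchFeatureNet()
left = torch.randn(8, 1, 9, 9)
right = torch.randn(8, 1, 9, 9)
scores = similarity(net, left, right)             # higher score = more likely a match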

Dynamic Programming

Dynamic programming methods traverse each scan line independently, filling a 2D DP matrix indexed by pixel position and disparity; backtracking through the lowest-cost path in this matrix yields the disparity along the scan line [4].
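A minimal sketch of such a scan line optimization follows; the simple absolute-difference smoothness penalty is an assumption, and the hierarchical, occlusion-aware algorithm of [4] is considerably more involved.

import numpy as np

def dp_scanline(cost_row, smooth=5.0):
    """Lowest-cost disparity path along one scan line.

    cost_row : (W, D) matching cost for every pixel/disparity pair on the line.
    """
    w, d = cost_row.shape
    dp = np.empty((w, d))
    back = np.zeros((w, d), dtype=int)
    dp[0] = cost_row[0]
    for x in range(1, w):
        # Transition cost penalizes disparity changes between neighbouring pixels.
        trans = dp[x - 1][:, None] + smooth * np.abs(
            np.arange(d)[:, None] - np.arange(d)[None, :])
        back[x] = trans.argmin(axis=0)
        dp[x] = cost_row[x] + trans.min(axis=0)
    # Backtrack from the cheapest final disparity through the stored choices.
    disp = np.empty(w, dtype=int)
    disp[-1] = dp[-1].argmin()
    for x in range(w - 2, -1, -1):
        disp[x] = back[x + 1][disp[x + 1]]
    return disp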

Plane Sweeping

Plane sweeping takes an image and warps it to different disparity hypotheses until a plane match is found, then iterates and solves for the remaining regions. Also referred to as a segmentation method, this solution coarsely segments an image using some initial disparities and generates epipolar planes with estimated disparities. "After a set of initial cost values for each segment has been stored into a disparity space distribution (DSD), iterative relaxation (or loopy belief propagation, in the more recent work of Zitnick and Kang (2007)) is used to adjust the disparity estimates for each segment"[12].
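The sketch below illustrates only the sweeping step for fronto-parallel planes, where warping reduces to a horizontal shift; true plane sweeping warps by homographies so that slanted planes can also be tested, and the segment-level DSD relaxation of [12] is not shown.

import numpy as np

def sweep_disparity_planes(left, right, disparities):
    """Build a plane-hypothesis cost volume and pick the best hypothesis per pixel."""
    left, right = left.astype(float), right.astype(float)
    h, w = left.shape
    volume = np.full((len(disparities), h, w), np.inf)
    for i, d in enumerate(disparities):
        # For a fronto-parallel plane at disparity d, "warping" the right image
        # is just a horizontal shift; compare it photometrically with the left image.
        volume[i, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    # Each pixel (or, in the segment-based variants, each segment) takes the
    # hypothesis with the lowest cost.
    best = np.argmin(volume, axis=0)
    return np.asarray(disparities)[best]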

Results

Local block-based algorithms perform well on smooth images with low occlusion <insert mask disparity> <insert bust and light bulb disparity>


Performance degrades with increased occlusion <pipes performace> <Jade disparity map>

The choice of cost function makes only small differences within the same image <side by side and difference image of cones >


Global optimization does a much better job of handling occlusion and depth discontinuity <Block SSD vs ICM vs Graph cut comparison image here>


The optimization method selected can make a big difference < iterated conditional modes (ICM) vs graph cuts >


However, the latest KITTI benchmarks are dominated by CNNs < insert KITTI table >

Conclusions

In this paper, we have covered a wide range of topics related to the available solutions for computing depth. The paper has attempted to describe the mathematical and computational components of a small piece of computer vision, namely stereo depth estimation. This is certainly not an exhaustive review, but it seeks to introduce the various approaches and their fundamental ideas.

The state of the art on this problem is wide and varied, but as more stereo images with ground truth become available, the learned approaches are quickly becoming the norm. Although we take for granted our innate capability of understanding the 3-dimensional geometry of the objects we see, that capability is enabled by a huge trove of a priori knowledge on top of the available planar intersection, shadow and shading, relative size, convergence, defocus, and accommodation cues. The world of computer vision is barely scratching the surface of what nature has already perfected. The pace of innovation holds much promise that one day we will be able to outperform the human brain not only in depth perception but also in 3D reconstruction and recognition.

Appendix

[1] Scharstein, D., & Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47.
[2] Dall'Asta, E., & Roncella, R. (2014). A comparison of semiglobal and local dense matching algorithms for surface reconstruction. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 40(5): 187–194.
[3] Zbontar, J., & LeCun, Y. (2016). Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research, 17: 1–32.
[4] Van Meerbergen, G., et al. (2002). A hierarchical symmetric stereo algorithm using dynamic programming. International Journal of Computer Vision, 47(1): 275–285.
[5] Boykov, Y., Veksler, O., & Zabih, R. (2001). Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11): 1222–1239.
[6] Sun, J., Zheng, N.-N., & Shum, H.-Y. (2003). Stereo matching using belief propagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(7): 787–800.
[7] Szeliski, R., et al. (2008). A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(6): 1068–1080.
[8] Hirschmuller, H. (2005). Accurate and efficient stereo processing by semi-global matching and mutual information. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), Vol. 2.
[9] Geman, S., & Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6: 721–741.
[10] Barnard, S. T. (1986). A stochastic approach to stereo vision. Proceedings of AAAI-86.


Some of the content here was derived from the UCF Computer Vision course: http://crcv.ucf.edu/courses/CAP5415/Fall2012/index.php