Geometric Calibration Of A Stereo Camera

From Psych 221 Image Systems Engineering
Revision as of 06:14, 14 December 2017 by imported>Student2017 (Part I)


Introduction

Cameras have grown steadily smarter over the past several decades, offering ever better image quality and photo capturing experiences. Stereo cameras have attracted much attention because they provide the visual experience closest to that of human eyes. In this project, we take a closer look at the geometric calibration steps of stereo cameras, evaluate results from both simulations and a real camera experiment, and discuss the features and tradeoffs in the calibration process.

Background

JEDEYE

Stereo cameras use two sets of lenses and imaging sensors to capture a pair of images at a time, emulating the binocular visual system of a human being. These image pairs contain 3D depth information as well as the color content found in regular camera pictures. So far, such cameras have mostly been used in the film industry and in advanced research fields; very few products are available to replace a regular phone camera or a more advanced DSLR. The JEDEYE stereo camera from Fengyun Vision is a new product that aims to solve this problem by integrating advanced electronics with stereo cameras. However, a stereo camera first needs to be geometrically calibrated before it can be used in everyday scenarios.

Geometric calibration

Geometric camera calibration is the process of estimating the extrinsic and intrinsic parameters of the lens and imaging sensor of an image recording device. These parameters are crucial for correcting lens distortion, estimating depth, reconstructing 3D scenes, and measuring objects. The photo capturing process can be modeled as a transform from the 3D world coordinate system to the 2D image coordinates [1]:

W[x y 1]=[X Y Z 1]P

where the real-world coordinates [X Y Z 1] are projected onto the image coordinates [x y 1]. W is a scale factor of the homogeneous image coordinates, and P is the camera parameter matrix:

P=[R; t] K

[R; t] is the extrinsic matrix: the 3x3 rotation matrix (R) stacked on top of the 1x3 translation vector (t), forming a 4x3 matrix (in the row-vector convention of the equation above). It describes the 3D spatial relationship between the scene and the camera, representing where the camera sits in the 3D scene and the direction it looks, and it performs the rigid transformation from 3D world coordinates into the camera's 3D coordinate system. The intrinsic matrix K characterizes the internal geometric parameters of the camera, including the focal length, the optical center, and the skew coefficient; it projects the 3D camera coordinates onto the 2D image coordinates. The complete image capturing process therefore consists of two transformation steps, as illustrated in fig. 1.
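The two-step model can be illustrated with a short numerical sketch. It follows the row-vector convention of the equation W[x y 1] = [X Y Z 1]P above, stacking the extrinsics into a 4x3 matrix; the focal lengths, optical center, and pose used here are made-up values, not the calibrated JEDEYE parameters:

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy (pixels), optical center (cx, cy),
# zero skew. In the row-vector convention, K is arranged so that
# [camera coords] @ K gives W*[x y 1].
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, 0.0],
              [0.0, fy, 0.0],
              [cx,  cy, 1.0]])

# Made-up extrinsics: no rotation, camera translated relative to the world origin.
R = np.eye(3)                    # 3x3 rotation matrix
t = np.array([[0.5, 0.0, 2.0]])  # 1x3 translation row vector

# Step 1 + step 2 combined: P = [R; t] K is a 4x3 camera matrix.
P = np.vstack([R, t]) @ K

# Project one world point [X Y Z 1].
world = np.array([0.1, 0.2, 1.0, 1.0])
proj = world @ P                 # this is W*[x y 1]
W = proj[2]                      # the scale factor (the point's camera-frame depth)
x, y = proj[0] / W, proj[1] / W  # divide out W to get pixel coordinates
print(x, y, W)                   # -> 480.0 293.33... 3.0
```

Dividing by W is what collapses the 3D camera coordinates onto the 2D image plane; points farther from the camera (larger W) land closer to the optical center.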

Part I Calibrating the JEDEYE stereo camera

Calibration steps

A standard 7x9 checkerboard pattern, printed on letter size paper, is used as the calibration target. This pattern is widely used because it has high contrast and many sharp corners that are easy for the computer to detect. The checkerboard is taped to the back wall of a light box to keep the calibration pattern from moving. The cameras are then placed at different distances and angles from the checkerboard to capture multiple pairs of images. Alternatively, the same images could be acquired by fixing the camera and moving the calibration pattern; however, moving a checkerboard printed on paper would introduce unwanted variations between setups, such as accidental warping of the paper. All images are taken under D65 lighting conditions.

Calibration techniques for both single and stereo cameras are readily available in the MATLAB toolboxes [2]. Both tools follow the same algorithm to calibrate a camera. The stereo camera calibrator additionally computes the geometric parameters between the two cameras of the stereo pair, namely the rotation and translation of the second camera with respect to the first. Both tools are used in this project and their results are compared here; to obtain parameters for both cameras, the single camera calibrator is run twice. Interestingly, the reprojection error obtained from the stereo camera calibrator is 0.69 pixels per corner, while that from the single camera calibrator is 0.4 pixels per corner, even though the same images are used in both cases. This is not too surprising after examining the distribution of reprojection errors. The stereo camera calibrator must load each image pair together, so for a given pair the left image may have a very low reprojection error while the right image has a higher one, which makes it harder to choose a set of images that works best for both cameras. In the single camera calibrator, by contrast, the best-working images can be chosen individually for each camera, yielding better calibration results.

To obtain the best overall calibration, the extrinsics and intrinsics of each camera are taken from the single camera calibrator, while the stereo parameters are taken from the stereo camera calibrator. Fig. 3 shows the 3D reconstruction of the calibration setup, which matches the actual setup well.

Part II

Appendix
