Date of Award

1-1-2010

Document Type

Campus Access Dissertation

Department

Mechanical Engineering

First Advisor

Michael Sutton

Abstract

Basic concepts in probability are employed to develop analytic formulae for both the expectation (bias) and the variance of image motions obtained during subset-based pattern matching. Specifically, the expectation and variance of image motions in the presence of uncorrelated Gaussian intensity noise at each pixel location are obtained by optimizing a least-squares intensity matching metric. Results for both 1-D and 2-D image analyses clearly quantify both the bias and the covariance matrix for image motion estimates as a function of (a) interpolation method, (b) sub-pixel motion, (c) intensity noise, (d) contrast, (e) level of uniaxial normal strain and (f) subset size.
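The subset-matching procedure described above can be illustrated with a minimal Monte Carlo sketch. The intensity pattern, noise level, sub-pixel motion, and search grid below are all hypothetical choices for illustration, not values from the dissertation; the sketch simply estimates a 1-D sub-pixel shift by minimizing a least-squares intensity metric with linear interpolation, then reports the empirical bias and standard deviation of the estimate under additive Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D intensity pattern on an integer pixel grid (illustrative only).
x = np.arange(64)
pattern = 100.0 + 50.0 * np.sin(2 * np.pi * x / 16.0)

def shift_linear(f, u):
    """Sample f at x + u using linear interpolation, clamped at the edges."""
    xi = np.clip(x + u, 0, len(f) - 1)
    i0 = np.floor(xi).astype(int)
    i1 = np.minimum(i0 + 1, len(f) - 1)
    t = xi - i0
    return (1 - t) * f[i0] + t * f[i1]

def match_1d(ref, cur, candidates):
    """Least-squares matching: pick u minimizing sum of (ref(x+u) - cur(x))^2."""
    costs = [np.sum((shift_linear(ref, u) - cur) ** 2) for u in candidates]
    return candidates[int(np.argmin(costs))]

true_u = 0.3                          # sub-pixel motion (hypothetical)
sigma = 2.0                           # std. dev. of Gaussian intensity noise
candidates = np.linspace(0.0, 1.0, 501)  # search grid for the shift u

estimates = []
for _ in range(200):
    ref = pattern + rng.normal(0.0, sigma, x.size)
    cur = shift_linear(pattern, true_u) + rng.normal(0.0, sigma, x.size)
    estimates.append(match_1d(ref, cur, candidates))

estimates = np.array(estimates)
print(f"empirical bias = {estimates.mean() - true_u:+.4f}")
print(f"empirical std  = {estimates.std():.4f}")
```

Repeating this experiment over a range of sub-pixel motions and noise levels is how the interpolation-induced bias described in the abstract would show up numerically.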

For one-dimensional translations, excellent agreement is demonstrated between simulations, theoretical predictions and experimental measurements. The level of agreement confirms that the analytical formulae can be used to provide a priori estimates for the "quality" of local, subset-based measurements achievable with a given pattern.

For one-dimensional strain with linear interpolation, theoretical predictions are provided for the expectation and covariance matrix of the local displacement and strain parameters. For two-dimensional translations with bi-linear interpolation, theoretical predictions are provided for both the expectation and the covariance matrix for both displacement components. Theoretical results in both cases show that the expectations for the local parameters are biased and are functions of (a) the interpolation difference between the translated and reference images, (b) the magnitude of the white noise, (c) the decimal part of the motion and (d) the intensity pattern gradients. For 1-D strain, the biases and the covariance matrix for both parameters are directly affected by the strain parameter p1, since the deformed image is stretched by (1 + p1). For the 2-D rigid-body motion case, the covariance matrix for the measured motions is shown to exhibit coupling between the motions, demonstrating that the directions of maximum and minimum variability do not generally coincide with the x- and y-directions.
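The coupling in the 2-D covariance matrix can be sketched with a standard Gauss-Markov approximation for small-motion least-squares estimation: the covariance of the estimated motion (u, v) is approximately proportional to sigma^2 (A^T A)^-1, where the columns of A are the image intensity gradients. The pattern, noise level, and the factor of 2 (noise assumed in both reference and deformed images) below are illustrative assumptions, not the dissertation's formulae; the sketch shows that correlated x- and y-gradients produce a nonzero off-diagonal term, so the principal axes of variability are rotated away from the x- and y-directions.

```python
import numpy as np

# Hypothetical 2-D intensity pattern (illustrative) containing a diagonal ripple,
# which correlates the x- and y-gradients and couples the motion estimates.
n = 32
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
f = (100.0 + 40.0 * np.sin(0.4 * xx)
     + 30.0 * np.sin(0.3 * yy)
     + 20.0 * np.sin(0.2 * (xx + yy)))

# Columns of A: intensity gradients at each pixel (central differences).
fx = np.gradient(f, axis=1).ravel()
fy = np.gradient(f, axis=0).ravel()
A = np.column_stack([fx, fy])

sigma = 2.0  # std. dev. of intensity noise (assumed, for illustration)
# Factor 2: noise assumed present in both reference and deformed images.
cov = 2.0 * sigma**2 * np.linalg.inv(A.T @ A)

# Principal directions of variability from the eigen-decomposition.
evals, evecs = np.linalg.eigh(cov)
angle = np.degrees(np.arctan2(evecs[1, -1], evecs[0, -1]))
print("covariance matrix:\n", cov)
print(f"off-diagonal coupling: {cov[0, 1]:.3e}")
print(f"direction of maximum variability: {angle:.1f} deg from x-axis")
```

Removing the diagonal ripple term from the hypothetical pattern drives the off-diagonal entry toward zero, which is one quick way to see that the coupling comes from correlated gradients rather than from the noise itself.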

Combining the error-analysis results from image matching with the basic stereo-vision equations and established camera-calibration procedures, error-propagation equations for both the bias and the variability in a general 3D position are provided. These results use recent theoretical developments quantifying the bias and variance in image-plane positions introduced during image-plane correspondence identification for a common 3D point (e.g., pattern matching during the measurement process) as the basis for a preliminary application of the developments to the estimation of 3D position bias and variability.
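The propagation step above can be sketched for the simplest closed-form stereo case, an idealized rectified pair where depth is Z = fB/d. All parameters below (focal length, baseline, disparity, and the image-plane bias and variance feeding in from matching) are hypothetical illustrations, not the dissertation's calibration values; the sketch applies first-order (Jacobian) propagation of the disparity bias and variance to depth and checks it against a Monte Carlo estimate.

```python
import numpy as np

# Idealized rectified stereo pair (hypothetical parameters for illustration).
f = 2000.0   # focal length [pixels]
B = 0.2      # baseline [m]
d = 40.0     # true disparity [pixels]

# Image-plane matching errors assumed to come from subset-based pattern matching.
bias_d = 0.01      # disparity bias [pixels]
var_d = 0.02**2    # disparity variance [pixels^2]

Z = f * B / d                 # depth from disparity
dZ_dd = -f * B / d**2         # Jacobian of depth w.r.t. disparity
bias_Z = dZ_dd * bias_d       # first-order propagated bias in depth
var_Z = dZ_dd**2 * var_d      # first-order propagated variance in depth

# Monte Carlo check of the first-order propagation.
rng = np.random.default_rng(2)
d_samples = d + bias_d + rng.normal(0.0, np.sqrt(var_d), 100_000)
Z_samples = f * B / d_samples

print(f"depth Z = {Z:.4f} m")
print(f"first-order bias {bias_Z:+.6f} m,  MC {Z_samples.mean() - Z:+.6f} m")
print(f"first-order std   {np.sqrt(var_Z):.6f} m,  MC {Z_samples.std():.6f} m")
```

The general case in the dissertation replaces this scalar Jacobian with the full sensitivities of the triangulated 3D position with respect to both image-plane coordinates, but the structure of the propagation is the same.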

Extensive numerical simulations and theoretical analyses have been performed for selected stereo system configurations amenable to closed-form solution. Results clearly demonstrate that the general formulae provide a robust framework for quantifying the effect of various stereo-vision parameters and image-plane matching procedures on both the bias and variance in an estimated 3D object position.
