KEYWORDS: 3D image reconstruction, Kinematics, X-ray sources, Image intensifiers, 3D modeling, 3D metrology, 3D image processing, Accuracy assessment, Calibration, Error analysis, Magnetic resonance imaging, Fluoroscopy, Imaging systems
High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, though with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection and independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. An experiment reconstructing a rotating planar object yielded an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
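The point-based registration step can be sketched as a least-squares rigid-body fit (the Kabsch/Procrustes solution via SVD), after which the per-point distances give the accuracy measure. The coordinates below are hypothetical stand-ins for the intersected and surveyed points, not values from the study:

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rotation R and translation t mapping src -> dst
    in a least-squares sense (Kabsch/Procrustes solution via SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # diagonal correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# hypothetical surveyed reference coordinates (mm)
surveyed = np.array([[0, 0, 0], [100, 0, 0], [100, 100, 0], [0, 100, 0]], float)

# simulate intersection output: a rotated and translated copy of the reference
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
intersected = surveyed @ R_true.T + np.array([5.0, -3.0, 2.0])

R, t = rigid_register(intersected, surveyed)
aligned = intersected @ R.T + t
errors = np.linalg.norm(aligned - surveyed, axis=1)  # per-point distances (mm)
print(errors.mean())
```

With noise-free synthetic data the residual distances are essentially zero; with real intersected coordinates their mean and spread correspond to the reported accuracy figures.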
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at the attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
It is frequently necessary in archaeology to map excavated features so their structure can be recorded before they are dismantled in order for the excavation to continue. This process can be time consuming, error prone and manually intensive. Three-dimensional recording devices, which have the advantage of being faster, less labor intensive and more detailed, present an attractive alternative method of mapping. A small, portable hand scanner such as the DotProduct DPI-7 could be used for this purpose. However, the three-dimensional data collected from this device contain systematic distortions that cause errors in the recorded shape of the features being mapped. The performance of the DPI-7 scanner is evaluated in this paper using self-calibration-based techniques. A calibration field consisting of spherical targets rigidly mounted on a planar background was imaged from multiple locations, and the target deviations from their expected locations were used to quantify the performance of the device. The largest source of systematic error in the DPI-7 data was found to be a scale error affecting dimensions orthogonal to the depth. These in-plane distortions were modeled using a single scale factor parameter in the self-calibration solution, resulting in a 54% reduction in the RMS coordinate errors.
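A single in-plane scale factor like the one described can be estimated by simple least squares, solving for s in meas ≈ s·ref. The target coordinates, 2% scale error and noise level below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_scale = 1.02  # hypothetical 2% in-plane scale error

# hypothetical in-plane target coordinates (mm), relative to the field centre
ref = rng.uniform(-500, 500, size=(40, 2))
meas = true_scale * ref + rng.normal(0.0, 0.3, size=ref.shape)  # scanner output

# least-squares estimate of the single scale factor s in meas ~ s * ref
s = np.sum(meas * ref) / np.sum(ref * ref)
corrected = meas / s

rms_before = np.sqrt(np.mean(np.sum((meas - ref) ** 2, axis=1)))
rms_after = np.sqrt(np.mean(np.sum((corrected - ref) ** 2, axis=1)))
print(s, rms_before, rms_after)
```

The RMS coordinate error drops sharply once the scale factor is applied, mirroring the reduction reported for the DPI-7 self-calibration.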
KEYWORDS: Calibration, Cameras, 3D image processing, 3D-TOF imaging, Time of flight cameras, Distance measurement, Scattering, 3D acquisition, 3D modeling, Error analysis
Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world’s most compact 3D time-of-flight camera, it has applications in a wide range of domains, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
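Modeling an amplitude-dependent range error can be illustrated with a small least-squares fit. The bias model, magnitudes and noise level below are invented for illustration and are not the paper's calibration model:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical calibration data: range residuals (m) vs. signal amplitude
amplitude = rng.uniform(0.05, 1.0, 500)
true_error = 0.04 / np.sqrt(amplitude)              # assumed amplitude-dependent bias
residuals = true_error + rng.normal(0.0, 0.005, amplitude.size)

# model the bias as a linear function of 1/sqrt(amplitude)
A = np.vstack([np.ones_like(amplitude), 1.0 / np.sqrt(amplitude)]).T
coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
corrected = residuals - A @ coef                    # apply the calibration

rmse_before = np.sqrt(np.mean(residuals ** 2))
rmse_after = np.sqrt(np.mean(corrected ** 2))
print(rmse_before, rmse_after)
```

After removing the fitted amplitude-dependent term, the RMSE of the residuals drops substantially, analogous to the roughly 50% reduction the paper reports after self-calibration.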
Most of the methods described in the literature for automatic hand gesture recognition make use of classification techniques with a variety of features and classifiers. This research focuses on the most frequently used of these by performing a comparative analysis using datasets collected with a range camera. Eight different gestures were considered in this research. The features include Hu moments, orientation histograms and the hand shape together with its distance-transformation image. As classifiers, the k-nearest neighbor algorithm and the chamfer distance have been chosen. For an extensive comparison, four different databases have been collected with variation in translation, orientation and scale. The evaluation has been performed by measuring the separability of classes and by analyzing the overall recognition rates as well as the processing times. The best result is obtained from the combination of the chamfer distance classifier with the hand shape and its distance-transformation image, but the timing analysis reveals that the corresponding processing time is not adequate for real-time recognition.
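The chamfer distance classifier pairs naturally with a distance-transformation image: the query shape's edge pixels are looked up in the distance transform of the template. A minimal sketch with toy binary silhouettes (invented here, not the study's gesture data):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(template_edges, query_edges):
    """Mean distance from the query's edge pixels to the nearest template
    edge pixel, read directly off the template's distance-transform image."""
    dt = distance_transform_edt(~template_edges)  # distance to nearest edge pixel
    return dt[query_edges].mean()

# toy binary silhouettes standing in for hand shapes
a = np.zeros((8, 8), bool); a[2:6, 3] = True   # vertical stroke
b = np.zeros((8, 8), bool); b[2:6, 4] = True   # same stroke, shifted one column
c = np.zeros((8, 8), bool); c[3, 1:7] = True   # horizontal stroke

print(chamfer_distance(a, b))  # small: similar shapes
print(chamfer_distance(a, c))  # larger: dissimilar shapes
```

A query gesture would be assigned the class of the template with the smallest chamfer distance; precomputing the templates' distance transforms is what the classifier's run-time cost then hinges on.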
A three-dimensional range camera is a state-of-the-art imaging technology with strong potential for various close-range, high-precision measurement applications. One such application is the measurement of structural deformation under external loading conditions. Deformation tests have been conducted on two concrete beams, with and without steel-reinforced polymer sheets, in an indoor testing facility using an SR4000 range camera. The achieved measurement precision and accuracy were both within 1 mm when compared with a terrestrial laser scanner. Further testing on the concrete beam with the steel-reinforced polymer sheets has shown that a deformation as small as 3 mm can be reliably detected with a range camera, with a measurement precision of 0.3 mm and an accuracy of 0.4 mm. These results clearly indicate the high metric potential of 3D range cameras in spite of their coarse imaging resolution and low (centimeter-level) single-point accuracy. The high accuracy is achieved thanks to the differencing scheme used to derive the deflection estimates from two sets of range camera measurements, one with the beam at no-load and one in a loaded state, which eliminates scene-dependent range biases such as scattering and multi-path errors.
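The differencing scheme works because the scene-dependent biases are common to the no-load and loaded epochs and therefore cancel in the difference. A synthetic sketch (all magnitudes invented; the sensor resolution matches the SR4000's 176x144 array):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (144, 176)  # SR4000 sensor resolution

true_surface = np.full(shape, 2000.0)        # flat beam surface at 2 m (mm)
scene_bias = 40.0 * rng.random(shape)        # scattering/multi-path bias (mm)
deflection_true = 3.0                        # mm, response to the applied load

def frame_noise():
    """Per-frame random range noise (mm), independent between epochs."""
    return rng.normal(0.0, 5.0, shape)

# the scene-dependent bias is identical in both epochs
range_noload = true_surface + scene_bias + frame_noise()
range_loaded = true_surface + deflection_true + scene_bias + frame_noise()

# differencing cancels the bias; averaging over pixels beats down the noise
deflection = (range_loaded - range_noload).mean()
print(deflection)
```

Even with a bias far larger than the 3 mm deflection, the difference recovers the deflection to sub-millimeter level, which is the mechanism behind the reported 0.3-0.4 mm figures.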
This paper reports on an investigation designed to quantify the systematic and random error properties of range measurements from the SwissRanger SR-3000 range camera as a function of reflecting-surface color. This is achieved with an integrated self-calibrating bundle adjustment of image co-ordinate and range observations of a network of targets having three different colors (black, mid-level grey and white). Four different self-calibration adjustments are performed: one per target color and a combined one comprising all targets. The systematic effects of the different target colors are modeled with one rangefinder offset parameter per color. Results show considerable differences (up to 75 mm) between the different rangefinder offset parameters. The stochastic properties of the range observations, measured in terms of the residual root mean square error, also differed considerably among the adjustment cases. Range observations to black targets were found to be much noisier than those to the other targets, with white being the least noisy. High correlations (up to 0.96) between the rangefinder offset and perspective center co-ordinates were found in all adjustments.
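Estimating one rangefinder offset parameter per color can be reduced, in the simplest case where the true target distances are known, to the mean range residual within each color group. The offsets and noise levels below are hypothetical, chosen only to echo the qualitative findings (a larger offset and more noise for black targets):

```python
import numpy as np

rng = np.random.default_rng(2)
true_offset = {"black": 0.075, "grey": 0.020, "white": 0.010}  # m, hypothetical
noise_std = {"black": 0.012, "grey": 0.006, "white": 0.004}    # black is noisiest

ranges, truths, colors = [], [], []
for color in true_offset:
    d = rng.uniform(1.0, 5.0, 50)  # true target distances (m)
    obs = d + true_offset[color] + rng.normal(0.0, noise_std[color], d.size)
    ranges.append(obs); truths.append(d); colors += [color] * d.size

ranges = np.concatenate(ranges)
truths = np.concatenate(truths)
colors = np.array(colors)

# one rangefinder offset parameter per color: the least-squares solution
# is the mean residual within each color group
offsets = {}
for color in true_offset:
    m = colors == color
    offsets[color] = (ranges[m] - truths[m]).mean()
print({c: round(v, 4) for c, v in offsets.items()})
```

In the paper's full bundle adjustment these offsets are estimated jointly with the camera parameters, which is precisely why they correlate so strongly with the perspective center co-ordinates.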
KEYWORDS: Laser scanners, Modulation transfer functions, Spatial resolution, 3D scanning, Point spread functions, 3D modeling, Systems modeling, Imaging systems, Electro optical modeling, Laser systems engineering
Laser scanner angular resolution greatly depends on both the spatial sampling interval and laser beamwidth, though often the former is emphasized and the latter overlooked. Given the widespread use of 3-D laser scanners, a rigorous metric that unifies both factors is necessary to accurately model system resolution. A new resolution measure that incorporates both sampling and beamwidth is derived using an ensemble average linear system theory. Analysis of two commercially available scanning systems demonstrates the need for the new measure.
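The idea of unifying sampling and beamwidth can be illustrated with a simplified frequency-domain stand-in for the paper's ensemble-average derivation: treat the beam as a Gaussian blur, the sampling as a sinc term, and read off the frequency at which the combined contrast falls to 50%. The beamwidth-to-sigma conversion and the 50% criterion here are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def effective_resolution(beamwidth, sample_interval):
    """Illustrative combined resolution: product of a Gaussian beam MTF
    and a sinc sampling MTF, evaluated at the 50%-contrast frequency."""
    sigma = beamwidth / 4.0  # assumption: beamwidth as a ~4-sigma footprint
    u = np.linspace(1e-6, 2.0 / sample_interval, 20000)  # spatial frequency
    mtf = (np.exp(-2.0 * (np.pi * sigma * u) ** 2)       # beam contribution
           * np.abs(np.sinc(sample_interval * u)))       # sampling contribution
    u_c = u[np.argmin(np.abs(mtf - 0.5))]  # 50%-contrast cutoff frequency
    return 1.0 / (2.0 * u_c)               # smallest resolvable half-period

# coarse sampling dominates the result; fine sampling is beam-limited
print(effective_resolution(beamwidth=5.0, sample_interval=20.0))
print(effective_resolution(beamwidth=5.0, sample_interval=1.0))
```

The point the abstract makes falls out directly: shrinking the sampling interval stops improving the effective resolution once the beamwidth becomes the limiting factor.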
The recent emergence of high-resolution laser scanning technology offers unprecedented levels of data density for close range metrology applications such as deformation monitoring and industrial inspection. The scanner's pulsed laser ranging device coupled with beam deflection mechanisms facilitates rapid acquisition of literally millions of 3D point measurements. Perhaps the greatest advantage of such a system lies in the high sample density that permits accurate and detailed surface modeling as well as superior visualization relative to existing measurement technologies. As with any metrology technique, measurement accuracy is critically dependent upon instrument calibration. This aspect has been, and continues to be, an important research topic within the photogrammetric community. Ground-based laser scanners are no exception, and appropriate calibration procedures are still being developed. The authors' experience has shown that traditional sensor calibration techniques, in some instances, cannot be directly applied to laser scanners. This paper details an investigation into the calibration and use of the Cyrax 2400 3D laser scanner. With its variable spatial resolution and high accuracy, the Cyrax offers great potential for close range metrology applications. A series of rigorous experiments were conducted in order to quantify the instrument's precision and accuracy.
Image point displacements due to systematic errors in the image formation process are typically modeled in analytical photogrammetry with polynomial expressions. An alternative to this approach is the concept that the displacement of an image point is equivalent to a proportional change in the camera focal length at that particular location. The finite element method (FEM) of self-calibration, as developed by R.A.H. Munjy, can be used to model focal length changes due to inherent systematic errors. This paper presents the results of an investigation into the use of the FEM for charge-coupled device (CCD) camera calibration. Two CCD cameras were calibrated using both the polynomial approach and the FEM in order to determine the adequacy of this alternative model.
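The FEM concept amounts to letting the focal length vary across the image plane, interpolated from values at a grid of element nodes. A minimal sketch using bilinear interpolation over a regular node grid (the grid size and focal-length values below are hypothetical):

```python
import numpy as np

def fem_focal_length(x, y, nodes_f, grid_x, grid_y):
    """Bilinear interpolation of per-node focal-length values over a
    regular grid of finite-element nodes covering the image plane."""
    i = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    j = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    return ((1 - tx) * (1 - ty) * nodes_f[j, i]
            + tx * (1 - ty) * nodes_f[j, i + 1]
            + (1 - tx) * ty * nodes_f[j + 1, i]
            + tx * ty * nodes_f[j + 1, i + 1])

# hypothetical 3x3 grid of calibrated focal lengths (mm) over a 640x480 sensor
grid_x = np.array([0.0, 320.0, 640.0])
grid_y = np.array([0.0, 240.0, 480.0])
nodes_f = np.array([[8.02, 8.00, 8.02],
                    [8.00, 7.98, 8.00],
                    [8.02, 8.00, 8.02]])  # longer f toward the corners

print(fem_focal_length(320.0, 240.0, nodes_f, grid_x, grid_y))  # centre node
print(fem_focal_length(160.0, 120.0, nodes_f, grid_x, grid_y))  # interpolated
```

In a full self-calibration, the node values of nodes_f would be the unknowns estimated in the bundle adjustment, replacing the coefficients of a polynomial distortion model.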