The success of the next generation of instruments for ELT-class telescopes will depend upon improving image quality by exploiting sophisticated Adaptive Optics (AO) systems. One of the critical components of the AO systems for the E-ELT has been identified as the optical Laser/Natural Guide Star WFS detector. The combination of large format (1760×1680 pixels, to finely sample the wavefront and the spot elongation of laser guide stars), fast frame rate of 700 frames per second (fps), low read noise (< 3e-), and high QE (> 90%) makes the development of this device extremely challenging. Design studies identified a highly integrated Backside Illuminated CMOS Imager built on High Resistivity silicon as the technology most likely to succeed. Two generations of the CMOS Imager are being developed: a) the already designed and manufactured NGSD (Natural Guide Star Detector), a quarter-sized pioneering device of 880×840 pixels capable of meeting the first-light needs of the E-ELT; and b) the LGSD (Laser Guide Star Detector), the larger full-size device. The detailed design is presented, including the use of massive parallelism (70,400 ADCs) to achieve low read noise at high pixel rates of ~3 Gpixel/s, and the 88-channel LVDS 220 Mbps serial interface used to get the data off-chip. To enable read noise closer to the goal of 1e- to be achieved, a split wafer run has allowed the NGSD to be manufactured with more speculative, but much lower read noise, Ultra Low Threshold transistors in the unit cell. The NGSD has come out of production; it has been thinned to 12 μm, backside processed, and packaged in a custom 370-pin Ceramic PGA (Pin Grid Array). First results of tests performed both at e2v and ESO are presented.
Time-of-flight (TOF) full-field range cameras use a correlative imaging technique to generate three-dimensional measurements of the environment. Though reliable and cheap, they suffer from high measurement noise and errors that limit their practical use in industrial applications. We show how some of these limitations can be overcome with standard image processing techniques specially adapted to TOF camera data. Additional information in the multimodal images recorded in this setting, and not available in standard image processing settings, can be used to improve measurement noise reduction. Three extensions of standard techniques (wavelet thresholding, adaptive smoothing on a clustering-based image segmentation, and extended anisotropic diffusion filtering) make use of this information and are compared on synthetic data and on data acquired from two different off-the-shelf TOF cameras. Of these methods, the adapted anisotropic diffusion technique gives the best results, and can be implemented to run in real time on current graphics processing unit (GPU) hardware. Like traditional anisotropic diffusion, it requires some parameter adaptation to the scene characteristics, but allows for low visualization delay and improved visualization of moving objects by avoiding the long averaging periods required by traditional TOF image denoising.
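As an illustration of the amplitude-adapted anisotropic diffusion idea, a minimal per-frame sketch in Python/NumPy follows. It assumes a Perona-Malik style update in which the TOF amplitude image gates the conduction coefficient; the gating formula, parameter values, and function names are illustrative, not the exact method of the paper:

    import numpy as np

    def tof_anisotropic_diffusion(rng_img, amp_img, n_iter=20, kappa=0.05, step=0.2):
        """Smooth a TOF range image; low-amplitude (noisy) pixels diffuse more."""
        r = rng_img.astype(float).copy()
        conf = amp_img / amp_img.max()          # amplitude as per-pixel confidence
        k = kappa * (2.0 - conf)                # illustrative amplitude gating
        for _ in range(n_iter):
            # nearest-neighbour differences (edges wrap; acceptable for a sketch)
            dn = np.roll(r, -1, axis=0) - r
            ds = np.roll(r,  1, axis=0) - r
            de = np.roll(r, -1, axis=1) - r
            dw = np.roll(r,  1, axis=1) - r
            g = lambda d: np.exp(-(d / k) ** 2)  # Perona-Malik conductivity
            r += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return r

Because each iteration is purely local per pixel, an update like this maps directly onto GPU hardware, consistent with the real-time claim above.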
Time-of-flight range imaging sensors acquire an image of a scene, where in addition to standard intensity information,
the range (or distance) is also measured concurrently by each pixel. Range is measured using a correlation technique,
where an amplitude modulated light source illuminates the scene and the reflected light is sampled by a gain modulated
image sensor. Typically the illumination source and image sensor are amplitude modulated with square waves, leading to
a range measurement linearity error caused by aliased harmonic components within the correlation waveform. A simple
method to improve measurement linearity by reducing the duty cycle of the illumination waveform to suppress
problematic aliased harmonic components is demonstrated. If the total optical power is kept constant, the measured
correlation waveform amplitude also increases at these reduced illumination duty cycles.
Measurement performance is evaluated over a range of illumination duty cycles, both for a standard range imaging
camera configuration and for a more complicated phase encoding method designed to cancel aliased harmonics
during the sampling process. The standard configuration benefits from improved measurement linearity for
illumination duty cycles around 30%, while the measured amplitude, and hence range precision, is increased for both
methods as the duty cycle is reduced below 50% (while maintaining constant optical power).
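The linearity mechanism described above can be reproduced in a few lines of Python. The sketch below uses an idealized model and illustrative names: it correlates a 50% duty cycle sensor gate with a variable duty cycle illumination waveform at constant mean optical power, then applies the standard four-sample arctangent estimator; sweeping true_phase exposes the duty-cycle-dependent linearity error:

    import numpy as np

    def four_sample_phase_error(duty, true_phase, n=4096):
        """Error of the 4-sample arctangent estimator for square-wave modulation."""
        x = np.arange(n)
        illum = (x < duty * n).astype(float) / duty       # constant mean power
        gate = (x < n // 2).astype(float)                 # 50% duty sensor gate
        # circular correlation waveform of gate and illumination
        corr = np.fft.ifft(np.fft.fft(gate).conj() * np.fft.fft(illum)).real / n
        shift = int(round(true_phase / (2 * np.pi) * n))  # range delay in samples
        s = [corr[(k * n // 4 - shift) % n] for k in range(4)]
        est = np.arctan2(s[1] - s[3], s[0] - s[2])
        # wrapped difference: a constant offset remains, and the part that
        # varies with true_phase is the aliased-harmonic linearity error
        return np.angle(np.exp(1j * (est - true_phase)))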
Time-of-flight range imaging cameras measure distance and intensity simultaneously for every pixel in an image. With
the continued advancement of the technology, a wide variety of new depth sensing applications are emerging; however,
a number of these potential applications have stringent electrical power constraints that are difficult to meet with the
current state-of-the-art systems. Sensor gain modulation contributes a significant proportion of the total image sensor
power consumption, and as higher spatial resolution range image sensors operating at higher modulation frequencies (to
achieve better measurement precision) are developed, this proportion is likely to increase. The authors have developed
a new sensor modulation technique using resonant circuit concepts that is more power efficient than the standard mode
of operation. With a proof of principle system, a 93-96% reduction in modulation drive power was demonstrated across
a range of modulation frequencies from 1-11 MHz. Finally, an evaluation of the range imaging performance revealed
an improvement in measurement linearity in the resonant configuration due primarily to the more sinusoidal shape of the
resonant electrical waveforms, while the average precision values were comparable between the standard and resonant
operating modes.
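As a back-of-envelope illustration of why resonant drive saves power (assumed component values and a simple loss model, not the authors' actual circuit): hard-switching a capacitive modulation load dissipates roughly C·V²·f per cycle of charge and discharge, while a resonant LC tank only has to replace its resistive losses, smaller by roughly a factor of π/Q:

    import numpy as np

    C = 1e-9      # assumed modulation gate capacitance (1 nF)
    V = 3.3       # assumed drive swing (volts)
    Q = 50.0      # assumed tank quality factor
    for f in (1e6, 5e6, 11e6):
        p_direct = C * V * V * f               # hard-switched C*V^2*f estimate (W)
        p_res = np.pi * C * V * V * f / Q      # tank loss, V^2*omega*C/(2Q) (W)
        saving = 100 * (1 - p_res / p_direct)  # ~94% for Q = 50
        print(f"{f/1e6:4.1f} MHz: {p_direct*1e3:6.2f} mW -> {p_res*1e3:5.2f} mW "
              f"({saving:.0f}% reduction)")

A quality factor in the range of a few tens is enough for this first-order estimate to land in the 93-96% reduction range reported above; the tank also naturally produces the more sinusoidal waveforms credited with the linearity improvement.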
Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single
viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path
problem, which causes range distortions when stray light interferes with the range measurement in a given pixel.
Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but
enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating
the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only
been demonstrated on custom-built range cameras, and has not been suitable for general metrology applications. In this
paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000
and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization
approach, rather than relying on the processing provided by the manufacturer, to determine the individual component
returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
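A minimal per-pixel sketch of this kind of optimization is shown below. It assumes the extra raw images have been reduced to one complex (amplitude/phase) measurement per modulation frequency and that exactly two returns are fitted; the parameterization, bounds, and initial guess are illustrative rather than the paper's actual algorithm:

    import numpy as np
    from scipy.optimize import least_squares

    C_LIGHT = 299792458.0

    def two_return_model(p, freqs):
        """Complex phasor of two superimposed returns at each mod. frequency."""
        a1, d1, a2, d2 = p
        phasor = lambda a, d: a * np.exp(1j * 4 * np.pi * freqs * d / C_LIGHT)
        return phasor(a1, d1) + phasor(a2, d2)

    def separate_returns(meas, freqs, guess=(1.0, 1.0, 0.3, 3.0)):
        """Fit amplitudes/distances (a1, d1, a2, d2) to per-frequency phasors."""
        resid = lambda p: np.concatenate([
            (two_return_model(p, freqs) - meas).real,
            (two_return_model(p, freqs) - meas).imag])
        lo, hi = [0.0, 0.0, 0.0, 0.0], [np.inf, 50.0, np.inf, 50.0]
        return least_squares(resid, guess, bounds=(lo, hi)).x

With at least two modulation frequencies there are four real equations for the four unknowns, so the strongest return and one interfering return can in principle be separated; dark pixels benefit most because stray light there is proportionally largest.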
Time-of-flight range imaging is typically performed with the amplitude modulated continuous wave method. This
involves illuminating a scene with amplitude modulated light. Reflected light from the scene is received by the sensor
with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an
ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum useable range
of the camera.
This paper proposes a procedure to resolve depth ambiguity using software post processing. First, the range data is
processed to segment the scene into separate objects. The average intensity of each object can then be used to determine
which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for
various scenes using only the available depth and intensity information. This proposed method reduces the sensitivity to
objects with very high and very low reflectance, normally a key problem with basic threshold approaches.
This approach is very flexible as it can be used with any range imaging camera. Furthermore, capture time is not
extended, keeping the artifacts caused by moving objects to a minimum. This makes it suitable for applications such as
robot vision where the camera may be moving during captures.
The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by a
distance of exactly one non-ambiguous range. Overall the reliability of this method is higher than the basic threshold
approach, but not as high as the multiple frequency method of resolving ambiguity.
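A minimal sketch of the per-object disambiguation step follows, assuming a segmentation label map is already available and an inverse-square intensity model with a scene-dependent constant k; both k and the candidate wrap counts are assumptions for illustration, not the paper's exact procedure:

    import numpy as np

    C_LIGHT = 299792458.0

    def unwrap_by_intensity(wrapped, intensity, labels, f_mod=30e6, k=1.0):
        """Pick a wrap count per segmented object from its mean intensity.

        labels: integer object map from the segmentation step.
        k: assumed illumination/reflectance constant (scene dependent).
        """
        d_amb = C_LIGHT / (2 * f_mod)     # non-ambiguous range, ~5 m at 30 MHz
        out = wrapped.astype(float).copy()
        for obj in np.unique(labels):
            m = labels == obj
            r, i = wrapped[m].mean(), intensity[m].mean()
            # wrap count whose 1/d^2 intensity prediction best matches
            n = min(range(4), key=lambda n: abs(i - k / (r + n * d_amb) ** 2))
            out[m] += n * d_amb
        return out

Averaging intensity over whole objects is what dampens the sensitivity to individual very bright or very dark pixels that defeats simple per-pixel thresholds.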
Time-of-flight range imaging is an emerging technology that has numerous applications in machine vision. In this paper we cover the use of a commercial time-of-flight range imaging camera for calibrating a robotic arm. We do this by identifying retro-reflective targets attached to the arm and centroiding on calibrated spatial data, which allows precise measurement of three-dimensional target locations. The robotic arm is an inexpensive model without positional feedback, so a series of movements is performed to calibrate the servo signals to the physical position of the arm. The calibration showed a good linear response between the control signal and the servo angles, and also provided a transformation between the camera and arm coordinate systems. Inverse kinematic control was then used to position the arm. The range camera could also be used to identify objects in the scene. With an object's location known in the arm's coordinate system (transformed from the camera's coordinate system), the arm could be moved to grasp it.
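A minimal sketch of the target-finding step, assuming a normalized intensity image and a calibrated H×W×3 point cloud from the camera; the threshold and the connected-component approach are illustrative:

    import numpy as np
    from scipy import ndimage

    def target_centroids(intensity, xyz, thresh=0.8):
        """Locate retro-reflective targets and return their 3-D centroids.

        intensity: amplitude image; xyz: HxWx3 calibrated points per pixel.
        """
        mask = intensity > thresh * intensity.max()   # retro targets are brightest
        labels, n = ndimage.label(mask)
        centres = []
        for obj in range(1, n + 1):
            m = labels == obj
            w = intensity[m]                          # intensity-weighted mean
            centres.append((xyz[m] * w[:, None]).sum(0) / w.sum())
        return np.array(centres)

Weighting the centroid by intensity gives sub-pixel target localization, which is what makes the subsequent coordinate-system fit between camera and arm precise.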
Time-of-flight range imaging cameras operate by illuminating a scene with amplitude modulated light and measuring
the phase shift of the modulation envelope between the emitted and reflected light. Object distance can
then be calculated from this phase measurement. This approach does not work in multiple camera environments
as the measured phase is corrupted by the illumination from other cameras. To minimize inaccuracies in multiple
camera environments, replacing the traditional cyclic modulation with pseudo-noise amplitude modulation has
been previously demonstrated. However, this technique effectively reduced the modulation frequency, thereby
decreasing the distance measurement precision (which has a proportional relationship with the modulation frequency).
A new modulation scheme, using maximum length pseudo-random sequences binary phase encoded onto
the existing cyclic amplitude modulation, is presented. The effective modulation frequency therefore remains
unchanged, providing range measurements with high precision. The effectiveness of the new modulation scheme
was verified using a custom time-of-flight camera based on the PMD19-K2 range imaging sensor. The new
pseudo-noise modulation shows no significant performance decrease in a single camera environment. In a two-camera
environment, the precision is only reduced by the increased photon shot noise from the second illumination
source.
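A small sketch of how such a modulation signal could be generated: an LFSR produces a 127-chip maximum length sequence whose ±1 chips flip the phase of the square carrier by 180 degrees. The tap choice, chip length, and sample rate are illustrative, not the values used with the PMD19-K2 camera:

    import numpy as np

    def mls(n_bits=7, taps=(7, 6)):
        """Maximum length sequence (+/-1 chips) from a Fibonacci LFSR."""
        state = [1] * n_bits
        seq = []
        for _ in range(2 ** n_bits - 1):
            seq.append(state[-1])
            fb = state[taps[0] - 1] ^ state[taps[1] - 1]
            state = [fb] + state[:-1]
        return 2 * np.array(seq) - 1

    def phase_encoded_carrier(f_mod=30e6, cycles_per_chip=4, fs=1e9):
        """Square carrier whose phase is flipped 0/180 deg by the MLS chips."""
        chips = mls()
        t_chip = cycles_per_chip / f_mod
        t = np.arange(0, len(chips) * t_chip, 1 / fs)
        idx = np.minimum((t / t_chip).astype(int), len(chips) - 1)
        return np.sign(np.sin(2 * np.pi * f_mod * t)) * chips[idx]

Because the carrier frequency itself is unchanged, range precision is preserved, while cameras using different (uncorrelated) sequences decorrelate each other's illumination.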
A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and
distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source
is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency,
synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested
as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously
for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of
these image sensors, using a picosecond laser pulser. The characterization results allow the optimal operating
parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for
a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the
modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the
characterization data these parameters can be identified and compensated for by modifying the sensor hardware or
through post processing of the acquired range measurements.
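As an illustration of what can be extracted from such a characterization, the sketch below assumes the pulsed laser has been stepped through N equally spaced delays across one modulation period, yielding one sensor response value per delay; an FFT then gives the fundamental amplitude (which sets precision) and the harmonic ratios (which drive linearity errors):

    import numpy as np

    def modulation_response(samples):
        """Harmonic content of a gate waveform sampled at N equal delay steps."""
        spec = np.fft.rfft(samples - np.mean(samples))
        amp = 2 * np.abs(spec) / len(samples)
        fundamental = amp[1]
        # ratios of low-order harmonics to the fundamental; odd harmonics
        # are the ones that alias into the standard four-sample phase estimate
        harmonics = {k: amp[k] / fundamental
                     for k in (2, 3, 4, 5) if k < len(amp)}
        return fundamental, harmonics

Repeating this measurement across modulation frequencies traces out the sensor's modulation frequency response, from which the precision-maximizing operating frequency can be read off.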
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains
distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene,
hence is well suited to certain machine vision applications.
Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512
pixels), high precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s).
Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many
situations. The system's frame rate and length of acquisition are fully configurable in software, which means the
measurement rate can be increased by compromising precision and image resolution.
In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging
at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show
that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the
traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat
signal frequency selection.
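In the heterodyne approach the phase extraction for N samples per beat cycle reduces, per pixel, to taking the first DFT bin of the frame stack; a minimal sketch follows, with the array layout assumed as N frames of H×W pixels:

    import numpy as np

    def beat_phase(frames):
        """Per-pixel phase/amplitude from N frames over one heterodyne beat cycle.

        With N > 4, low-order harmonics fall into other DFT bins instead of
        aliasing onto the fundamental, which is why linearity improves.
        """
        n = frames.shape[0]
        w = np.exp(-2j * np.pi * np.arange(n) / n)     # DFT bin-1 kernel
        phasor = np.tensordot(w, frames, axes=(0, 0))  # sum over the frame axis
        return np.angle(phasor), 2 * np.abs(phasor) / n

This also makes the precision/speed trade-off concrete: fewer, shorter frames per beat cycle raise the measurement rate at the cost of phase (range) precision.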
Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using
an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in
the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
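The abstract does not spell out the sampling scheme. One way such harmonic cancellation can work, sketched below under that assumption, is to average copies of the correlation waveform offset in phase so that selected odd harmonics sum to zero; an offset pair of π/k radians of the fundamental cancels harmonic k:

    import numpy as np

    def cancel_harmonics(corr, harmonics=(3, 5)):
        """Average phase-offset copies of a sampled correlation waveform so
        that each harmonic k in `harmonics` cancels (pi/k offset pairs)."""
        n = len(corr)
        offsets = [0.0]
        for k in harmonics:
            offsets = [o + s for o in offsets for s in (0.0, np.pi / k)]
        out = np.zeros(n)
        for o in offsets:
            out += np.roll(corr, int(round(o / (2 * np.pi) * n)))
        return out / len(offsets)

The fundamental survives, attenuated only by cos(pi/2k) per cancelled harmonic, so range can still be decoded with the usual arctangent; because the offsets can be applied within the existing integration timing, acquisition time is unchanged, consistent with the claim above.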
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously
for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of
precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications.
Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short
measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then
describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with
attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately
0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed
to examine the physical nature and characteristics of the range imaging technology and discuss the possible causes of
these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate
for the range determination and other errors, which could possibly lead to three-dimensional measurement precision
approaching that of photogrammetry.