The evaluation of the coefficient of thermal expansion (CTE) from observed temperature-induced length changes becomes more difficult the lower the desired final uncertainty of the CTE. On a scale of nanometers, the length as a function of sample temperature clearly deviates from the linear approximation, so higher-order polynomials are used as fit functions to the measured data. From such a polynomial of a given degree, the CTE can easily be evaluated according to its definition. This paper demonstrates how the corresponding uncertainty of the CTE can be calculated in accordance with the GUM, which is done by symbolic computation in MATHEMATICA. On the other hand, the arbitrariness of the choice of polynomial order causes an additional uncertainty contribution, as discussed in this paper. Examples are given to illustrate both problems.
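As a rough illustration of the evaluation step (a Python sketch with hypothetical data and a single fixed polynomial order, not the paper's MATHEMATICA analysis), the CTE follows from its definition alpha(T) = (1/L) dL/dT, with the fit covariance propagated GUM-style:

    import numpy as np

    # Hypothetical data: length (nm) of a ~0.1 m sample vs temperature (deg C)
    T = np.array([16.0, 18.0, 20.0, 22.0, 24.0])
    L = 1.0e8 + np.array([-3.1, -1.4, 0.0, 1.6, 3.5])

    deg = 2                                    # chosen polynomial order
    p, cov = np.polyfit(T, L, deg, cov=True)   # coefficients and their covariance

    # CTE at T0 from its definition: alpha(T0) = (dL/dT at T0) / L(T0)
    T0 = 20.0
    alpha = np.polyval(np.polyder(p), T0) / np.polyval(p, T0)

    # Sensitivity of dL/dT to the fitted coefficients (constant term drops out)
    c = np.array([(deg - i) * T0 ** (deg - i - 1) for i in range(deg)])
    u_alpha = np.sqrt(c @ cov[:deg, :deg] @ c) / np.polyval(p, T0)
    print(f"alpha = {alpha:.3e} /K, u(alpha) = {u_alpha:.1e} /K (fit contribution only)")

Repeating the evaluation for several polynomial orders exposes the additional, order-dependent contribution discussed in the paper.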
Significant progress in high-precision temperature measurements of material artifacts has been achieved through the development of a new approach that explicitly takes into account the propagation of heat waves in a sample. The approach requires a complete change of the routine procedure for temperature measurements. To detect the time delay in heat-wave propagation, two detectors are needed which synchronously measure the temperatures and temperature rates at two different points of a sample. When these four parameters are recorded for a number of slow heating and cooling procedures, the measurement results can be corrected for the velocity error associated with the time delay in wave propagation between the two measurement points on the sample. As an intermediate result of the new method, we obtain the thermal field variation (temperature gradient) between the two points of the artifact. We also use a special type of synchronous detection, with complete averaging within a half-cycle of the modulation procedure, to precisely measure the self-heating effect of a resistance thermometer attached to the artifact surface. This reduces the measurement uncertainty to the level of a few μK.
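A minimal sketch of the correction idea, assuming the simple first-order model that a reading lags the true temperature by the temperature rate times a propagation delay tau (the paper's actual procedure, with paired heating/cooling runs, is more elaborate):

    import numpy as np

    # Model assumption: T_meas(t) = T_true(t) - (dT/dt) * tau, where tau is the
    # heat-wave propagation delay between the two measurement points.
    def velocity_corrected(T_meas, rate, tau):
        return T_meas + rate * tau

    # Synchronous readings from the two detectors during a slow ramp (made up)
    T1, r1 = 20.000150, 2.0e-6   # K, K/s at point 1
    T2, r2 = 20.000120, 2.0e-6   # K, K/s at point 2
    tau = 15.0                   # s, estimated from paired heating/cooling runs

    T2c = velocity_corrected(T2, r2, tau)
    print(T2c, T1 - T2c)         # corrected reading and residual thermal gradient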
The evaluation of the measurement uncertainty of a robust all-fiber-based low-coherence interferometer for measuring the absolute thickness of transparent artifacts is described. The performance of the instrument is evaluated by measuring the length of air-gaps in specially constructed artifacts, and the observed measurement errors are discussed in the context of the uncertainty associated with them. A description of the construction of the artifacts is presented, accompanied by an uncertainty analysis for the artifacts themselves. This analysis takes into account the dimensional uncertainty of the artifacts (including wringing effects), thermal effects, and effects of the environment on refractive index. The 'out-of-the-box' performance of the instrument is evaluated first. A maximum error of 350 nm for an air-gap of 10.1 mm is observed, along with a linear trend between the measured length and the error. The relative magnitude of the errors and the uncertainty associated with them suggest that this trend is real and that a performance enhancement can be expected by mapping the error. Measurements of the artifacts are used to develop an error map of the instrument, and the uncertainty associated with the predicted error is determined from the uncertainty associated with the error. This analysis suggests that the uncertainty in the predicted error at the 2σ level may be conservatively estimated as (2.9L + 37.5) nm, where L is in mm.
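A sketch of the error-mapping step with made-up length/error pairs; only the quoted 2σ expression is taken from the text:

    import numpy as np

    L_mm = np.array([1.0, 2.5, 5.0, 7.5, 10.1])           # artifact air-gaps, mm
    err_nm = np.array([40.0, 95.0, 180.0, 265.0, 350.0])  # observed errors, nm

    slope, intercept = np.polyfit(L_mm, err_nm, 1)        # linear error map

    def corrected_length_nm(raw_nm, L):
        return raw_nm - (slope * L + intercept)           # apply the error map

    def U_predicted_error(L):
        return 2.9 * L + 37.5                             # 2-sigma, nm, L in mm (from text)

    print(corrected_length_nm(10.1e6 + 350.0, 10.1), U_predicted_error(10.1))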
The large Koesters interferometer for long gauge block measurements is a unique instrument that has been used for many decades to realize the SI unit of length with extremely low uncertainty. A high-precision certification of the temperature measuring system of the original Koesters interferometer has been carried out using two alternative systems operating simultaneously. The uncertainty of each reference system is well below 1 mK. Our systems are based on platinum resistance thermometers (PRTs), calibrated directly on a gauge block with the velocity-error correction applied. Advantages and some drawbacks of the original system are outlined. We show that in the original Koesters interferometer of INMETRO, systematic thermocouple offsets, which are specific to each particular thermocouple and the way it is mounted inside the instrument, limit the accuracy of the system to about 2 mK. The basic result of this study is that the chamber of the interferometer permits improved gauge block temperature measurements in which the fine effects associated with a small overheating of the gauge block surface by the measurement current of the resistance thermometer can be detected. These effects are the energy dissipation of a heat wave during its propagation in the artifact and the velocity effect in self-heating measurements. Local overheating of long gauge blocks by the PRT current, which lies in the range of ~0.1 mK, is found to be one of the limiting factors in precise temperature measurement of the blocks.
This paper describes a fringe scanning Fourier transform method for automatically measuring the fractional interference order in gauge block interferometry. The advantages of the proposed method are demonstrated by comparing measurement results from the existing Fourier transform method and the fringe scanning Fourier transform method. The configuration of the automatic gauge block measuring system to which the proposed method is applied is also described, and a standard uncertainty evaluation of the fractional interference order measurement with this method is given.
Absolute Length, Temperature, and Thermal Expansion II
Under specific conditions, light undergoes a phase change upon reflection. This phase change affects the results of many precision dimensional measurements. The phase change on reflection was investigated at a glass-metal interface using samples with evaporated metal strips on the back surface of a wedged glass substrate. The samples were measured on a phase-shifting interferometer, and the phase change was calculated from the apparent step heights measured from the internal reflection at the back glass-metal interface. The background subtraction process was the largest contributor to the phase change uncertainty. The phase change was measured for gold, copper, and aluminum at normal incidence, and for gold at varying angles of incidence. The measured phase changes for gold, copper, and aluminum are 131.4° ±3.8°, 173.7° ±3.8°, and 200.7° ±3.8°, respectively; the measured phase change for gold at varying incident angle is also shown. In addition, the effect of the phase change on radius measurement was investigated, and a bias in the radius measurement due to the phase change was found that is interferometer dependent.
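For orientation, the expected magnitude of the effect can be sketched from the Fresnel reflection coefficient at a glass-metal interface. The complex indices below are illustrative textbook values near 633 nm, and the sign of the computed phase depends on the chosen time convention:

    import numpy as np

    n_glass = 1.515
    metals = {"gold": 0.18 + 3.0j, "copper": 0.25 + 3.4j, "aluminum": 1.4 + 7.6j}

    for name, n_metal in metals.items():
        # Normal-incidence amplitude reflection coefficient, glass -> metal
        r = (n_glass - n_metal) / (n_glass + n_metal)
        print(f"{name}: |r| = {abs(r):.3f}, phase = {np.degrees(np.angle(r)):.1f} deg")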
The influence of gauge block preparation on optical reflection properties is investigated. Results of length calibration experiments by optical interferometry, performed on nine steel gauge blocks with a wide range of surface roughness characteristics and pre-treatments, are described. These data demonstrate that different conventional cleaning procedures and the use of appropriate wringing fluids have a negligible effect on the reflection properties of the measurement surfaces of gauge blocks and platens.
To this day, one-dimensional length comparators (line scale interferometers) are used to realize and disseminate the unit of length. The performance of the vacuum length comparator of the PTB, the Nanometer Comparator, was characterized by measuring photoelectric incremental encoders. In some respects the measurements were used to optimize the performance of the instrument, e.g. with respect to its noise characteristics. The non-linearity of its vacuum interferometer was determined to have an amplitude of 0.2 nm. The reproducibility of the measurement of an incremental encoder system with 280 mm measuring range was 0.3 nm. Currently, the relative expanded measurement uncertainty for the calibration of incremental encoder systems is in the range of 2×10⁻⁸. These results show that incremental encoders are well suited to characterizing one-dimensional length measuring machines.
Moiré fringe projection techniques are gaining popularity due to their non-contact nature and high accuracy in measuring the surface shape of many objects. The fringe patterns seen with these instruments are similar to those in traditional interferometry, but differ in that the spacing between consecutive fringes in traditional interferometry is constant and set by the wavelength of the source. In moiré fringe projection, the spacing (equivalent wavelength) between consecutive fringes may not be constant over the field of view; it depends on the geometry (divergent or parallel) of the set-up. This variation in the equivalent wavelength makes surface height measurements inaccurate. This paper examines the errors caused by the varying equivalent wavelength and presents a calibration process to determine the equivalent-wavelength map.
A Gaussian model of the radius measurement of micro-optics has been developed and tested using simulations. The model is based on the propagation distances in the interferometer, a heretofore uninvestigated effect. The goal of the model is to determine the bias error in the radius due to Gaussian beam propagation. After testing the model under varying conditions, we have concluded the following: the measured radius is smaller than the input value; the cat's eye and confocal positions have approximately the same error; the radius error increases for smaller test parts; decreasing the numerical aperture increases the errors; and the propagation distances do not affect the radius. An outline of the experimental plan to be used to verify the results is given.
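One ingredient of such a model is that a focused Gaussian beam's wavefront curvature R(z) differs from the geometric distance z near focus, which biases radius measurements of small parts; a sketch with an assumed waist size:

    import numpy as np

    lam = 633e-9                       # wavelength, m (assumed HeNe)
    w0 = 5e-6                          # beam waist radius at focus, m (hypothetical)
    zR = np.pi * w0**2 / lam           # Rayleigh range

    def R(z):
        return z * (1 + (zR / z)**2)   # Gaussian wavefront radius of curvature

    for z in (0.1e-3, 1e-3, 10e-3):    # distances from the waist, m
        print(f"z = {z*1e3:5.1f} mm, R(z) = {R(z)*1e3:8.3f} mm, "
              f"bias = {(R(z) - z)*1e9:10.1f} nm")

The bias term zR²/z grows as the test part (and hence the working distance) shrinks, consistent with the trend reported above.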
This paper provides a case study for identifying radius measurement uncertainty on a commercially available optical bench using a homogeneous transformation matrix (HTM)-based formalism. In this approach, radius is defined using a vector equation, rather than relying solely on the recorded displacement between the confocal and cat's eye null positions (i.e., the projection of the true displacement between these positions on the transducer axis). The vector-based approach enables the stage error motions, as well as other well-known error sources, to be considered through the use of HTMs. An important aspect of this mathematical radius definition is the intrinsic correction for measurement biases, such as cosine error (i.e., misalignment between the stage motion and displacement transducer axis), which would lead to an artificially small radius value if the traditional projection-based radius measurand were employed. Experimental results and measurement techniques are provided for the stage error motions, which are then combined with the setup geometry to determine the radius of curvature of a spherical artifact. Comparisons are shown between the vector-based radius calculation, the traditional radius computation, and independent measurements using a coordinate measuring machine. The measurement uncertainty for the vector-based approach is determined using Monte Carlo simulation and is compared to experimental results.
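A toy numeric version of the cosine-error argument (hypothetical misalignment; the full HTM formalism also carries straightness and angular error motions):

    import numpy as np

    d = 50.0                              # mm, true cat's eye -> confocal displacement
    misalign = np.radians(0.05)           # stage axis vs. transducer axis misalignment

    # Traditional measurand: displacement projected onto the transducer axis
    R_projected = d * np.cos(misalign)    # artificially small (cosine error)

    # Vector-based measurand: reconstruct the full displacement vector from the
    # transducer reading plus the separately measured lateral error motions
    axial = d * np.cos(misalign)          # what the transducer records
    lateral = d * np.sin(misalign)        # recovered from error-motion data
    R_vector = np.hypot(axial, lateral)   # vector norm restores the true radius

    print(R_projected, R_vector, (R_vector - R_projected) * 1e6, "nm bias")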
The traditional method for calibrating the angular indexing repeatability of rotary axes on machine tools and measuring equipment uses a precision polygon (usually 12-sided) and an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, a diffractive optical element, and a CCD camera. We show that high accuracy can be achieved for angular index calibration.
An in-situ bootstrap method used at NRC to calibrate an ensemble of instruments for angle metrology, traceable to international standards for angle and length, is described. No prior knowledge is assumed, beyond nominal values with arbitrarily large uncertainty, for the index-table step angles, the autocollimator scale factor, and the sine-bar length. Only the sine-bar displacements are known as calibrated values with uncertainty traceable to the SI unit of length. First, the nominal-length sine bar is used to check the autocollimator linearity and stability and to estimate a nominal scale factor, giving a first-iteration improvement in the uncertainty of autocollimator readings. Then, the index table (and a polygon) are measured by a full-closure method at the polygon intervals, and index steps of one interval are measured by the caliper method, with results expressed using the improved autocollimator readings. This provides improved index angles. Finally, the autocollimator beam is aimed obliquely at the sine-bar mirror and deflected to the index-table mirror, where it retroreflects back to the autocollimator via the sine bar. As the index table is stepped through a sequence of angles (with improved uncertainty), the sine-bar angle is adjusted in opposite rotation to produce a zero autocollimator reading, and the required sine-bar displacement is recorded. This provides a better estimate of the sine-bar length. The steps can be re-iterated (hence 'bootstrap') to further improve the calibration of each device, until a limit is reached. In recent years, we have linked the data from the three setups in a single spreadsheet analysis, allowing the calibration variables to be jointly and optimally adjusted with just one data run through the three setups. Results using a Moeller-Wedel Elcomat HR autocollimator, a Moore 1440 index table, and the NRC sine-bar interferometer are presented, along with an uncertainty analysis.
METAS has implemented several modifications to a commercial roundness measurement instrument, upgrading its measurement capabilities and providing a full understanding of the measurement process. The Talyrond 73 instrument is equipped with a rotating spindle on an oil-hydrostatic bearing. The modifications include an incremental encoder on the spindle, a vibration-free DC motor with variable rotation speed for driving the spindle, new amplifier electronics with selectable gain for the LVDT probe, new software for data acquisition and evaluation, and a heat protection shield. The application of error separation techniques, the complete characterization of all relevant parameters, and the high level of precision achieved allow the attribute "primary" to be given to this roundness measuring machine.
Optical techniques provide non-contact, high-precision measurement tools. The most common optical technique is classical laser interferometry. Although laser interferometers offer high resolution, they suffer from a limited dynamic range, since the range is tied to the wavelength of light. Other optical techniques, such as scanning white light interferometry and holography, overcome this limitation. In this paper we propose a technique to extend the vertical measurement range of a fringe projection system without reducing its vertical resolution. It is based on the principle of inverse fringe projection: the surface form is first measured by projecting a low-frequency straight grating, and this form is then used to create high-frequency fringes with the appropriate inverse profile, which are projected back onto the surface to measure the surface finish without the influence of the form. The proposed technique is modeled, simulated, and tested for measuring the form, waviness, and roughness of surfaces.
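A minimal sketch of the inverse-projection idea, assuming a simple linear mapping from height to fringe phase (real systems need the calibrated, geometry-dependent sensitivity):

    import numpy as np

    x = np.linspace(0.0, 10e-3, 2000)               # lateral coordinate, m
    form = 2e-6 * np.sin(2 * np.pi * x / 10e-3)     # form from the coarse measurement
    k_sens = 2 * np.pi / 5e-6                       # rad per metre of height (assumed)
    f_high = 2.0e4                                  # high-frequency fringes per metre

    # Pre-distort the projected fringes with the inverse of the measured form
    projected_phase = 2 * np.pi * f_high * x - k_sens * form

    # On the real surface (form + roughness), the observed phase is then
    # straight fringes plus the roughness signal only
    rough = 5e-8 * np.random.default_rng(0).standard_normal(x.size)
    observed_phase = projected_phase + k_sens * (form + rough)
    residual_height = (observed_phase - 2 * np.pi * f_high * x) / k_sens
    print(np.std(residual_height))                  # ~ roughness, form removed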
Stylus tip reconstruction is essential for the tracing and calibration of micro- and nano-scale surface roughness measurements, whether with a surface roughness analyzer or with scanning probe microscopy. This research investigates size effects on stylus tip reconstruction in micro- and nano-roughness measurement. Aspect ratios within and between tips and gauges, such as the Tip Aspect Ratio (TAR) of tip width to height, the Gage Aspect Ratio (GAR), the Height Aspect Ratio (HAR) of tip height to gauge height, and the Width Aspect Ratio (WAR), have been formulated to develop a stylus tip reconstruction method (STRM) that estimates the tip profile from the measured profile image and the traced gauge profile. A simulation program was used to test the developed STRM with different aspect ratios of tips and gauges. Experiments were conducted on a Hommelwerke T4000 surface roughness analyzer with a TKL T100 tip of radius 5 μm and on a Veeco Dektak 200 surface roughness analyzer with a nominal stylus tip radius of 12.5 μm, measuring a traced roughness gauge (Mitutoyo Serial No. 0300042) with a step height of 10 μm and a razor blade per ISO 5436. Experimental results show that the difference between STRM results on the step gauge and razor blade measurements is about 4%, and the developed STRM can be further used to estimate geometric size effects of tip reconstruction in scanning probe microscopy (SPM).
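For comparison, the standard morphological picture of stylus imaging (not the paper's aspect-ratio-based STRM) treats the trace as a dilation of the surface by the tip, so an erosion by the same tip gives a tip-corrected surface estimate:

    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    x = np.linspace(-50e-6, 50e-6, 1001)
    dx = x[1] - x[0]
    surface = np.where(np.abs(x) < 20e-6, 10e-6, 0.0)    # 10 um step gauge

    R = 5e-6                                             # tip radius, m
    half = int(R / dx)
    xt = (np.arange(2 * half + 1) - half) * dx
    tip = -(R - np.sqrt(np.maximum(R**2 - xt**2, 0.0)))  # spherical tip profile (<= 0)

    trace = grey_dilation(surface, structure=tip)        # what the stylus records
    corrected = grey_erosion(trace, structure=tip)       # tip-corrected estimate
    print(np.max(np.abs(corrected - surface)))           # residual at the concave
                                                         # corners the tip cannot reach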
Roughness measurements are normally carried out with a rather high uncertainty, ranging from a few percent up to 10%. This is a rather poor situation considering the accuracy of primary length standards (1 part in 5×10¹¹) and of common dimensional measurements (1 part in 10⁶). In this paper we show that this situation can be improved with laser interferometer calibration techniques and extensive uncertainty evaluations. As the probe of a roughness measuring instrument moves dynamically, probe calibration should also be carried out dynamically, in the same frequency range in which the probe normally operates. For this, a simple yet very effective dynamic probe calibration device has been designed, in which traceability is achieved by a laser interferometric read-out with 30 kHz and 10 nm resolution. With this device a probe can be fully characterized over a range of amplitudes and frequencies. It is shown that for a Mitutoyo roughness measuring machine the deviations stay well within 1%. Uncertainty estimation for roughness measurements is not straightforward; for safety, one therefore often takes quite high estimates for effects such as probe size and measurement force. With the VFM ('Virtual Form Measurement') concept, this problem is treated by simulating the measurement on the same surface as actually measured, while varying probe diameter, filtering, probe angle, noise, etc., and estimating the influence of all these conditions on the calculated parameter. In this way a task-specific uncertainty estimate is obtained, with an outcome that is usually lower than expected. This is shown with some examples. With improved noise reduction techniques, the basic uncertainty for the roughness tester used can be reduced to the nanometer level.
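A minimal VFM-flavoured sketch: re-evaluate a roughness parameter on the measured profile while perturbing assumed influence quantities, and take the spread as a task-specific uncertainty (the influence ranges below are invented):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.arange(0, 4e-3, 0.5e-6)
    profile = 0.4e-6 * np.sin(2 * np.pi * x / 100e-6)   # stand-in measured profile

    def Ra(z):
        z = z - z.mean()
        return np.mean(np.abs(z))

    samples = []
    for _ in range(2000):
        z = profile + rng.normal(0.0, 3e-9, x.size)     # probe/electronics noise
        z = z + rng.uniform(-1e-9, 1e-9) * (x / x[-1])  # residual tilt
        w = rng.integers(1, 8)                          # crude probe-size low-pass
        z = np.convolve(z, np.ones(w) / w, mode="same")
        samples.append(Ra(z))

    print(f"Ra = {np.mean(samples)*1e9:.1f} nm, u = {np.std(samples)*1e9:.2f} nm")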
In Spectrally Resolved White Light Interferometry (SRWLI), the white light interferogram is spectrally decomposed by a spectrometer. The interferogram displayed at the exit plane of the spectrometer has a continuous variation of wavelength along the chromaticity axis and encodes the phase as a function of wave number. The optical phase is determined at several wavelengths simultaneously to obtain the surface profile. For a given optical path difference (OPD), the phase differs for each spectral component of the source, and the absolute value of the OPD can be determined as the slope of a linear fit of phase versus wave number. Scanned over the test surface, this OPD gives an unambiguous surface profile. The profile can be improved using the monochromatic phase data already available from the measurement: combining the monochromatic phase data, which carries a 2π ambiguity, with the slope data yields a precise surface profile. Noisy data, however, lead to misidentification of the fringe order, producing spurious jumps in the profile. This paper addresses that problem.
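A sketch of the two-stage evaluation on synthetic data: the slope of phase versus wave number gives a coarse OPD, and the wrapped monochromatic phase plus a fringe-order decision refines it (a wrong order produces exactly the jumps discussed):

    import numpy as np

    rng = np.random.default_rng(2)
    opd_true = 12.345e-6                                  # m
    k = 2 * np.pi / np.linspace(500e-9, 700e-9, 256)      # wave numbers, rad/m

    phase = k * opd_true + rng.normal(0, 0.02, k.size)    # unwrapped spectral phase
    opd_coarse = np.polyfit(k, phase, 1)[0]               # slope -> coarse OPD

    # Refine with the monochromatic phase at k0: pick the integer fringe order
    # from the coarse OPD, then use the wrapped phase for precision.
    k0 = k[128]
    phi0 = (k0 * opd_true) % (2 * np.pi)                  # measured wrapped phase
    m = np.round((k0 * opd_coarse - phi0) / (2 * np.pi))  # fringe order
    opd_fine = (2 * np.pi * m + phi0) / k0

    print(opd_coarse, opd_fine)   # a noisy slope can misidentify m -> profile jump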
A miniature sensor based on extrinsic Fabry-Perot interferometry is proposed. In this setup, two optical fibers are integrated into a miniature sensing head to produce a pair of quadrature signals that resolve the direction ambiguity. The sensor achieves nanometer resolution and at least 10 kHz dynamic range after electronic subdivision. Compared with a capacitive sensor, it offers advantages including immunity to electromagnetic interference and long-distance measurement capability. In this paper, the relevant theory is introduced and comparisons with a capacitive sensor are conducted. An experiment using this novel sensor to characterize a hard disk drive is described. Furthermore, a new method of measuring liquid refractive index is presented. All the experimental results show that this sensor performs excellently in displacement and vibration measurement and many other applications.
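The direction recovery itself is standard quadrature demodulation; a sketch with synthetic signals and an assumed 1550 nm source:

    import numpy as np

    lam = 1550e-9                                  # assumed source wavelength, m
    d = 1e-9 * np.cumsum(np.sin(np.linspace(0, 6, 5000)))  # displacement, changes sign

    phi = 4 * np.pi * d / lam                      # Fabry-Perot round-trip phase
    I = np.cos(phi)                                # in-phase fringe signal
    Q = np.sin(phi)                                # quadrature fringe signal

    phase = np.unwrap(np.arctan2(Q, I))            # sign of the motion is preserved
    d_rec = phase * lam / (4 * np.pi)
    print(np.max(np.abs(d_rec - d)))               # ~ numerical noise only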
Coherent interferometric absolute distance metrology is one of the most interesting techniques for length metrology. Without any mechanical movement, measurements are made without ambiguity, using either one or several synthetic wavelengths resulting from the beating of two or more wavelengths (multiple wavelength interferometry) or, in the case of frequency sweeping interferometry (FSI), a frequency sweep. FSI-based sensors are relatively simple devices and can fulfill an important role in the metrology chain, even for very small relative errors in the context of demanding applications (such as space). In addition, their parameterization flexibility allows trade-offs to be made, whether technology-driven or application-related. In the context of the ESA/Darwin technology package, we implemented an FSI sensor composed of a mode-hop-free frequency-swept external cavity diode laser, a high-finesse Fabry-Perot interferometer to accurately measure the frequency sweep range, homodyne detection, and data processing. In this paper, we present the uncertainty budget for the final FSI uncertainty in detail, give examples of different parameterizations, and demonstrate and evaluate the sensor's performance and robustness for high-precision optical metrology in the Darwin satellite configuration.
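The core FSI relation and a first-order uncertainty budget can be sketched as follows (the sweep range and uncertainty inputs are assumed values, not the paper's budget):

    import numpy as np

    # A frequency sweep delta_nu produces a phase change
    # delta_phi = 4*pi*L*delta_nu/c, so L = c*delta_phi/(4*pi*delta_nu).
    c = 299_792_458.0            # m/s
    dnu = 60e9                   # Hz, sweep range measured on the cavity (assumed)
    dphi = 2 * np.pi * 100.25    # rad, counted fringes plus fractional phase

    L = c * dphi / (4 * np.pi * dnu)
    print(f"L = {L:.6f} m")

    # Relative errors in delta_nu and delta_phi map one-to-one into L
    u_dnu_rel = 1e-7             # sweep-range calibration (assumed)
    u_dphi_rel = 5e-8            # phase metrology (assumed)
    print(f"u(L)/L = {np.hypot(u_dnu_rel, u_dphi_rel):.1e}")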
In this study, the non-linear errors of a commercial heterodyne interferometer are investigated. Two types of cyclic nonlinearity are present in heterodyne interferometers, and it is desirable to measure them in order to quantify the uncertainty of the interferometer setup. The current study investigates whether the nonlinearities can be detected by measuring the optical power of the interferometer's output signal as a function of its phase. In theory, in the absence of cyclic errors, the optical power traces a perfect circle in polar coordinates. Any cyclic errors present then manifest themselves as an ellipticity of this circle and a translation of its centre. In this study, large cyclic nonlinearities were deliberately introduced into a standard heterodyne interferometer setup, making them large enough to measure directly from the displacement data. Comparison with the nonlinearities predicted from the optical power data showed a good fit, indicating that it is possible to predict cyclic nonlinearities by reading the optical power from the measurement board.
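A sketch of how such an ellipse can be extracted: fit an algebraic conic to the (in-phase, quadrature)-like samples by linear least squares, Heydemann-style, and read off the centre translation (synthetic imbalances):

    import numpy as np

    rng = np.random.default_rng(3)
    t = rng.uniform(0, 2 * np.pi, 400)
    x = 1.05 * np.cos(t) + 0.03            # gain imbalance + offset (synthetic)
    y = 0.95 * np.sin(t + 0.04) - 0.02     # quadrature error + offset

    # Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    A = np.column_stack([x**2, x * y, y**2, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]

    # Centre of the fitted ellipse -> translation; axis ratio -> ellipticity
    x0 = (b * e - 2 * c * d) / (4 * a * c - b**2)
    y0 = (b * d - 2 * a * e) / (4 * a * c - b**2)
    print(f"centre offset: ({x0:+.3f}, {y0:+.3f})")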
Future space missions, among them the Darwin space interferometer, will consist of several free-flying satellites. A complex metrology system is required to have all the components fly accurately in formation and operate as a single instrument. Our work focuses on a possible implementation of the sub-system that measures the absolute distance between two satellites with high accuracy; for Darwin, the required accuracy is on the order of 70 micrometers over a distance of 250 meters. We are exploring a technique called frequency sweeping interferometry, in which a phase difference is measured interferometrically while the wavelength of a tunable laser is swept. This phase difference is directly proportional to the absolute distance. A very high finesse Fabry-Perot cavity is used as a reference standard, to which the laser is locked at the end-points of the sweep. We discuss our measurement scheme, our set-up, and some first measurements.
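A quick arithmetic check of the stated requirement (the sweep range is an illustrative assumption):

    # 70 micrometres over 250 metres is a relative accuracy of ~2.8e-7, so the
    # sweep range (and the phase) must be known to roughly that level.
    L_req, L_range = 70e-6, 250.0
    rel = L_req / L_range
    print(f"required relative accuracy: {rel:.1e}")   # ~2.8e-07

    dnu = 60e9                                        # Hz, assumed sweep range
    print(f"allowed sweep-range error: {rel * dnu / 1e3:.0f} kHz")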
The Linear Canonical Transform (LCT) is a general transform that can be used to describe linear lossless quadratic phase systems (QPS). The Optical Fourier Transform (OFT), the Optical Fractional Fourier Transform (OFRT), and the effect of a thin lens or Chirp Modulation Transform (CMT) are all special cases of the more general LCT. Using the Collins formula, these transforms can be represented as ABCD matrices; by cascading the relevant matrices, quite complicated bulk optical systems can be described in a compact manner. Digital Speckle Photography (DSP) can be used to analyze surface motion in combination with an optical LCT. It has previously been shown that OFRTs can be used in speckle-based metrology systems to vary the range and sensitivity of the system and to determine both the magnitude and direction of tilting (rotation) and translation simultaneously, provided the motion is captured in two separate OFRT domains. In this paper we extend this analysis to more general LCT systems. We demonstrate that a spherical illuminating wavefront can be conveniently described using matrix notation and show that changing the sphericity of the wavefront changes the domain of the LCT system. Hence, by illuminating a target first with a plane wavefront and then with a spherical wavefront, we capture the motion in two separate LCT domains and can fully determine the motion of a rigid body without a priori knowledge.
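A small sketch of the matrix bookkeeping (illustrative focal length and distances; the sign convention for the spherical-wave matrix varies in the literature):

    import numpy as np

    def free_space(z):
        return np.array([[1.0, z], [0.0, 1.0]])

    def thin_lens(f):
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # A 2f system: matrices compose right-to-left along the optical path
    M = free_space(0.2) @ thin_lens(0.1) @ free_space(0.2)
    print(M)   # [[-1, 0], [-10, -1]]: an (inverting) Fourier-like stage

    # A spherical illuminating wavefront of radius R acts like a chirp
    # (lens-type) matrix, shifting the domain of the overall LCT system
    def spherical_wave(R):
        return np.array([[1.0, 0.0], [1.0 / R, 1.0]])

    print(M @ spherical_wave(0.5))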
The metrology of a satellite formation is a system-level issue relying on the measurement technology, the number and selection of metrology links, the location and tolerances of the components used to materialize the optical paths, and a variety of issues related to redundancy, recovery from contingencies, failure modes, and sensor degradation. For a complex interferometric system such as ESA/Darwin, metrology must provide distances and angles and decide the right moments to "rigidify" the configuration and keep it stable within small limits. Multi-stage approaches can be used, with GPS-like sensors, absolute or relative distance/angle sensors, fringe sensors, and heterodyne interferometers. Each type of sensor is realized by several devices with different accuracies, biases, and degradation constants. In this paper, we demonstrate the applicability of geodetic compensation of the metrological network to refine the instantaneous knowledge of the constellation configuration, modelled as a network of nodes, by providing confidence regions for the location of nodes with smaller uncertainties, and we indicate how better estimates of the internal calibration parameters can be obtained. Within such a framework, the following can in principle be analyzed: the impact of the number and distribution of metrological links, the added value of a link and its level of correlation with others, the uncertainty reduction achieved by network compensation, the effects of redundant links and/or links with different accuracies on the final accuracy, and the impact of sensor degradation over time.
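A toy 1-D sketch of network compensation as weighted linearized least squares over redundant links (all values invented):

    import numpy as np

    x = np.array([0.0, 100.0, 250.0])          # approximate node positions, m
    obs = np.array([100.0000030, 150.0000010, 249.9999950])  # links 0-1, 1-2, 0-2
    pairs = [(0, 1), (1, 2), (0, 2)]
    sigma = np.array([3e-6, 3e-6, 5e-6])       # per-link accuracies, m

    A = np.zeros((3, 3))
    w = np.zeros(3)
    for idx, (i, j) in enumerate(pairs):
        A[idx, i], A[idx, j] = -1.0, 1.0       # d(range)/d(position)
        w[idx] = obs[idx] - (x[j] - x[i])      # residual

    W = np.diag(1.0 / sigma**2)
    A = A[:, 1:]                               # fix node 0 as the datum
    dx, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ w, rcond=None)
    cov = np.linalg.inv(A.T @ W @ A)           # confidence regions for the nodes
    print(dx, np.sqrt(np.diag(cov)))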
Assessing the measurement performance of a gear measuring instrument is not an easy task, since a traceable master piece of sufficient accuracy has not been available. We propose a new artifact, named the Double Ball Artifact (DBA), which consists of a base plate and two balls. It is inexpensive, accurate, and can be calibrated with traceability. Examples of measurements by gear checkers and a CMM are presented to confirm the usefulness and validity of the artifact.
We report on the performance of a new form of fiber probe that can be used in conjunction with a coordinate measuring machine (CMM) for microfeature measurement. The probe stylus is a glass fiber with a small ball (≈75 μm diameter) glued to the end. When the ball is brought into contact with a surface, the fiber bends, and this bending is measured optically. The fiber acts as a cylindrical lens, focusing transmitted light into a narrow stripe that can be magnified by a microscope and detected by a camera, providing position resolution under 10 nm. In addition to the high resolution, the primary advantage of this technique is the large attainable aspect ratio (measurements 5 mm deep inside a 100 μm diameter hole are practical). Another potential advantage of the probe is that it exerts exceptionally low forces, ranging from a few micronewtons down to hundreds of nanonewtons. Furthermore, the probe is relatively robust, capable of surviving more than 1 mm of over-travel, and the probe stylus should be inexpensive to replace if broken. To demonstrate the utility of the probe, we have used it to measure the internal geometry of a small glass hole and a fiber ferrule. Although the intrinsic resolution of the probe is better than 10 nm, there are many potential sources of larger errors, and many of these are discussed in this paper. Our practical measurement capability for hole geometry is currently limited to about 70 nm uncertainty. Hole measurements require only a two-dimensional probe, but we have now extended the probe from 2-D to 3-D measurements: the z-height of a surface can be measured by detecting buckling of the stylus as it is brought down onto the surface.
With the continued miniaturisation of mechanical and optical systems, there is an increasing demand for high-precision dimensional measurements on small parts. METAS has combined a new probe head with a recently developed ultra-precision CMM stage. The probe head, with probing spheres in the diameter range of 0.1 mm to 1 mm, has isotropic probing forces below 0.5 mN. Its unique parallel kinematic structure uses exclusively flexure hinges and is manufactured from a single piece of aluminium. This structure blocks all rotational movements of the probing sphere and separates the 3D movement into three independent 1D displacements, which are measured by inductive sensors. The repeatability for single-point probing is on the order of 5 nm.
This probe head was combined with an ultra-precision micro-CMM based on a development made at Philips CFT [1,2]. The micro-CMM features a 90 mm x 90 mm x 38 mm air bearing stage with interferometric position measurement at zero Abbe offset. At this level of precision, the shape deviation of the probing sphere becomes a major contribution to the uncertainty; therefore, a calibration method for spheres based on error separation techniques was implemented. The results of roundness measurements on three calibration spheres are presented.
This paper focuses on the design and calibration of an elastically guided vertical axis to be applied in a small high-precision 3D coordinate measuring machine aiming at a volumetric uncertainty of 25 nm. The design part of this paper discusses the principles of the system, the compensation of the stiffness of the vertical axis in the direction of motion, the weight compensation method, and the design and performance of the axis precision drive system, a Lorentz actuator. The metrology part discusses the calibration methods used to determine the linearity as well as the motion straightness and axis rotation errors. Finally, first calibration results for this axis show nanometer repeatability of the probing point over its 4 mm stroke. The causes of short-term variations with a bandwidth of about ±10 nm are under investigation. Error compensation may reduce the residual error of the probing point to the nanometer level.
Ball plates have been increasingly used for checking the performance of coordinate measuring machines (CMMs) in Japan, and in the future the ball plate is likely to become a popular gauge for checking or calibrating CMMs there. Currently, only the National Metrology Institute of Japan (NMIJ) can calibrate ball plates in Japan; no other institutes or calibration laboratories are able to do so. We therefore organized a ball plate round-robin measurement to create an opportunity to transfer our calibration technique to other institutes. This is the first domestic comparison of ball plate calibration in Japan. Sixteen institutes, including NMIJ, participated. We formed two groups and supplied two ball plates (KOBA 420×420 mm, RETTER 420×420 mm) and two material standards of length made by NMIJ for the round-robin measurement, which took place from October 2003 to September 2004. We describe the results of the comparison in this paper.
A number of methods have been proposed to evaluate the reference value for intercomparisons of laboratory measurements. Methods for establishing the reference value include the arithmetic mean, weighted mean (with weights proportional to the reciprocals of the squared uncertainties), median, and total median. In addition, it has been suggested that it might be possible to modify the weighted mean, using iterative approaches to automatically eliminate outliers or to modify the weights in light of the results of the intercomparison. No single one of the analysis methods is best for all circumstances, nor can the efficiency of any method be determined without making assumptions about the underlying nature of the intercomparison. (How well do the participants evaluate their uncertainties? What is the underlying distribution of errors, including outliers? Are the errors correlated between one laboratory and the next?) Although there is considerable divergence of opinion as to what constitutes realistic assumptions, completed international comparisons can begin to provide at least rough guidance for constructing models. In this paper, I will try to construct models that are consistent with what we have learned thus far from CCL (Consultative Committee for Length) key comparisons in the field of dimensional metrology. Based on such models, I have explored various methods for establishing a reference value, to determine which methods are likely to produce a reference value with a low uncertainty. As would be expected, there is no single method that is always superior; results depend on both the underlying assumptions and on the spread and distribution of claimed uncertainties of the participating laboratories.
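A sketch of the main candidate reference values and the usual consistency check, on synthetic lab results:

    import numpy as np

    x = np.array([9.998, 10.001, 10.003, 10.000, 10.021])  # lab results
    u = np.array([0.004, 0.002, 0.003, 0.002, 0.003])      # standard uncertainties

    mean = x.mean()
    median = np.median(x)

    w = 1.0 / u**2                                # weights 1/u_i^2
    xw = np.sum(w * x) / np.sum(w)                # weighted mean
    u_xw = 1.0 / np.sqrt(np.sum(w))               # its standard uncertainty

    chi2 = np.sum(((x - xw) / u) ** 2)            # consistency with claimed u_i
    print(f"mean={mean:.4f} median={median:.4f} wmean={xw:.4f} u={u_xw:.4f}")
    print(f"chi2={chi2:.1f} vs dof={x.size - 1}")  # large chi2 hints at outliers

Which estimator "wins" depends on exactly the modelling assumptions discussed above: the quality of the claimed uncertainties, the outlier fraction, and inter-laboratory correlations.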
National Measurement Institutes are becoming much more involved in international activities as world trade expands. One important activity is to organize and participate in comparisons aimed at establishing their calibration measurement capabilities. A typical comparison circulates a number of artefacts to between 10 and 20 institutes, which measure them following a defined technical protocol. A report is then written, presenting the results and drawing conclusions about each laboratory's performance relative to a reference value, taken in the context of its declared uncertainties. The determination of the reference value is a very important first step and often generates a lot of discussion. In general, laboratories have different capabilities, and the reference value needs to be a weighted mean of some kind. This paper evaluates an approach that determines a participant's weighting factor from the reported results, without using the participant's uncertainty estimates. It is applied to a recent key comparison that required expert judgment from the pilot to exclude some results containing measurement errors. The method avoids the need to exclude participants and is relatively insensitive to artificial noise, or an offset, added to one of the data sets.
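A generic robust-mean sketch in this spirit, deriving weights iteratively from the residuals themselves rather than from claimed uncertainties (not necessarily the paper's exact algorithm):

    import numpy as np

    x = np.array([9.998, 10.001, 10.003, 10.000, 10.021])

    ref = np.median(x)                         # robust starting point
    for _ in range(20):
        r2 = (x - ref) ** 2 + 1e-8             # small floor avoids zero division
        w = 1.0 / r2                           # weights from the data spread
        ref = np.sum(w * x) / np.sum(w)

    print(f"reference value = {ref:.4f}")      # the outlier barely contributes

Because the weights come from the data, an outlying result is automatically discounted instead of being excluded by expert judgment.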
A phase-shifting interferometer (PSI) with equal phase steps, using a frequency-tunable diode laser and a Fabry-Perot cavity, is proposed for use with the Carré algorithm. The measurement accuracy of the Carré algorithm depends on the equality of the phase steps. Using the Fabry-Perot cavity as a highly stable optical frequency reference, a high degree of phase step equality can be realized in the PSI via an optical frequency shift. Our experimental scheme realizes an optical frequency step equality better than 2.1×10⁻⁵ and a measurement repeatability of λ/850.
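For reference, the Carré phase recovery from four frames with equal but unknown steps can be sketched as follows (synthetic fringe data):

    import numpy as np

    def carre_phase(I1, I2, I3, I4):
        num = np.sqrt(np.abs((3 * (I2 - I3) - (I1 - I4)) * ((I2 - I3) + (I1 - I4))))
        den = (I2 + I3) - (I1 + I4)
        return np.arctan2(np.sign(I2 - I3) * num, den)

    # Four frames at phi - 3a, phi - a, phi + a, phi + 3a (equal steps of 2a)
    phi, a = 0.7, np.pi / 4
    steps = np.array([-3.0, -1.0, 1.0, 3.0]) * a
    I = [1.0 + 0.8 * np.cos(phi + s) for s in steps]
    print(carre_phase(*I), phi)   # recovered phase matches the true value

Because the step size cancels out, the algorithm tolerates an unknown but equal step; unequal steps, the paper's concern, break this cancellation, which is why the cavity-referenced frequency shift matters.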
The pitch and orthogonality of two-dimensional (2D) gratings have been calibrated using an optical diffractometer (OD) and a metrological atomic force microscope (MAFM). Gratings are commonly used as magnification standards for scanning probe microscopes (SPMs) and scanning electron microscopes (SEMs). Thus, to establish meter-traceability in nano-metrology using SPM/SEM, it is important to certify the pitch and orthogonality of 2D gratings accurately. ODs and MAFMs are generally used as effective metrological instruments for the calibration of gratings in the nanometer range. Since the two methods have different metrological characteristics, they provide complementary information. An OD can measure only the mean pitch of a grating, but with very low uncertainty; an MAFM can obtain individual pitch values and local profiles as well as the mean pitch, although with higher uncertainty. Two kinds of 2D gratings, with nominal pitches of 700 nm and 1000 nm, were measured, and the uncertainties of the calibrated values were evaluated. We also investigated the contribution of each uncertainty source to the combined standard uncertainty and discuss the causes of the main ones. The expanded uncertainties (k = 2) of the calibrated pitch values were less than 0.05 nm for the OD and 0.5 nm for the MAFM, and the calibration results agreed with each other within the expanded uncertainty of the MAFM.
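The diffractometer side reduces to the grating equation; a worked sketch with an assumed HeNe wavelength and a hypothetical measured angle consistent with a 1000 nm grating:

    import numpy as np

    # Grating equation at normal incidence: m * lambda = d * sin(theta_m)
    lam = 632.8e-9               # HeNe wavelength, m (assumed)
    m = 1                        # diffraction order
    theta = np.radians(39.25)    # measured first-order angle (hypothetical)

    d = m * lam / np.sin(theta)
    print(f"mean pitch d = {d*1e9:.2f} nm")

    # Sensitivity: u(d) from the angle uncertainty alone
    u_theta = np.radians(0.001)  # assumed
    u_d = np.abs(m * lam * np.cos(theta) / np.sin(theta)**2) * u_theta
    print(f"u(d) = {u_d*1e9:.3f} nm")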
Although several techniques are capable of characterizing nanoparticle sizes, their measurement results on the same sample often deviate from one another by amounts considered significant on the nanometer scale. The measurement principles on which these techniques or instruments are based may contribute a notable portion of the disagreement, and sample preparation only adds to the complexity of the problem. In the absence of international standards or widely recognized protocols for nanoparticle characterization, a comparison study was carried out to investigate the systematic deviations in measuring nanoparticle diameters. Three types of commonly used nanoparticle sizing instruments, Photon Correlation Spectroscopy (PCS), Atomic Force Microscopy (AFM), and Transmission Electron Microscopy (TEM), were used to measure traceable polystyrene latex samples of 100 nm, 50 nm, and 20 nm diameter. The final analysis showed fairly satisfactory agreement of the measured data with the samples' certified values, with the exception of the result from the Field-Emission TEM (FE-TEM). It was later determined that the major source of this deviation was the instrument rather than the sample, and instrument calibration was the course of action taken to bring the outlier to the desired accuracy. The need for standardization in nanoparticle measurements is also discussed.
A compact linear and angular displacement measurement device was developed by combining a Michelson interferometer in the Twyman-Green configuration with an autocollimator to characterize the movement of a precision stage. A precision stage generally exhibits motion in 6 degrees of freedom (3 linear and 3 angular displacements) due to parasitic motions, so linear and angular displacement should be measured simultaneously for a complete evaluation of the stage. A Michelson interferometer and an autocollimator are the typical devices for measuring linear and angular displacement, respectively. By controlling the polarization of the beam reflected from the moving mirror of the interferometer, part of the light is retro-reflected to the light source and can be used for angle measurement. Because the interferometer and the autocollimator share the same optical axis, the linear and angular displacements are measured at the same position on the moving mirror, and the moving mirror can be easily and precisely aligned orthogonal to the optical axis by monitoring the autocollimator signal. A single-mode polarization-maintaining optical fiber delivers the laser beam to the device, and all components except the moving mirror are fixed with adhesive to achieve high thermal and mechanical stability. The autocollimator part was designed to have an angular resolution of 0.1" and a measurement range of 60". The nonlinearity error of the interferometer was minimized by trimming the gain and offset of the photodiode signals.
We have developed a two-dimensional nano-displacement measuring system utilizing a combined optical and x-ray interferometer (COXI). The system consists of optical interferometers for two-dimensional displacements and an x-ray interferometer, which was used to calibrate the non-linearity of the optical interferometers. The x-ray interferometer can subdivide the optical interference signal on a 0.2 nm linear scale. The measured non-linearity of the heterodyne optical interferometer was less than 2 nm, and after compensation the error of the calibrated optical interferometers was reduced to the sub-nanometer level. The calibrated optical interferometers were used to measure two-dimensional nanoscale displacements. To demonstrate an application of the system, we measured the non-linearity of capacitive sensors using the calibrated optical interferometers.
This paper presents a new approach to surface roughness measurement using an optical method and image processing. It has advantages over traditional methods: the surface is not touched, and line-by-line scanning is not required. In this system, a CCD camera grabs an image of the roughness sample through an optical setup, together with image-processing software and hardware. This paper explains how 3D parameters can be measured to provide greater insight into surface finish. It also includes two cases in which 3D parameter measurements are essential to the design and development of high-performance surfaces. Experimental results demonstrate good correlation between the received signal parameters and the root mean square surface roughness. A roughness range of up to 10 μm was detected, with a resolution of 0.01 μm.
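Once a height map is in hand, areal (3D) parameters are straightforward; a sketch with a stand-in random height map:

    import numpy as np

    rng = np.random.default_rng(4)
    z = rng.normal(0.0, 0.5e-6, (480, 640))    # stand-in height map, m

    def Sq(z):
        zc = z - z.mean()                      # remove the mean-plane offset
        return np.sqrt(np.mean(zc**2))         # areal RMS roughness

    def Sa(z):
        zc = z - z.mean()
        return np.mean(np.abs(zc))             # areal arithmetic-mean roughness

    print(f"Sq = {Sq(z)*1e6:.3f} um, Sa = {Sa(z)*1e6:.3f} um")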