In the emerging field of computational imaging, rapid prototyping of new camera concepts becomes increasingly difficult, since the signal processing is intertwined with the physical design of the camera. As novel computational cameras capture information beyond traditional two-dimensional images, ground truth data for thoroughly benchmarking a new system design are also hard to acquire. We propose to bridge this gap through simulation. In this article, we present a raytracing framework tailored to the design and evaluation of computational imaging systems. We show that, depending on the application, the image formation on the sensor and phenomena like image noise have to be simulated accurately to achieve meaningful results, while other aspects, such as photorealistic scene modeling, can be omitted. We therefore focus on accurately simulating the essential components of computational cameras, namely apertures, lenses, spectral filters, and sensors. Besides simulating the imaging process, the framework can generate various kinds of ground truth data, which can be used to evaluate and optimize the performance of a particular imaging system. Due to its modularity, the framework is easy to extend to the needs of other fields of application. We make the source code of our simulation framework publicly available and encourage other researchers to use it to design and evaluate their own camera systems.
Spectral unmixing aims to determine the relative amounts (so-called abundances) of raw materials (so-called endmembers) in hyperspectral images (HSIs). Libraries of endmember spectra are often given. Since the linear mixing model assigns a single spectrum to each raw material, it does not account for endmember variability; computationally costly algorithms exist to nevertheless derive precise abundances. In the method proposed in this work, we use only the pseudoinverse of the matrix of endmember spectra to estimate the abundances. As can be shown, this approach circumvents the necessity of acquiring an HSI and is computationally cheaper. To become robust against model deviations, we estimate the abundances iteratively by modifying the matrix of endmember spectra used to derive the pseudoinverse. The values by which each endmember spectrum is modified are derived from the singular value decomposition and from the degree to which the physical constraints on the abundances are violated. Unlike existing algorithms, we account for endmember variability and simultaneously enforce the physical constraints. Evaluations on samples of material mixtures, such as mixtures of color powders and quartz sands, show that more accurate abundance estimates result. In most cases, a physical interpretation of these estimates is possible.
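The basic pseudoinverse step underlying this approach can be sketched with a toy example. The endmember spectra, band count, and the simple clip-and-renormalize projection below are illustrative assumptions, not the paper's iterative modification scheme:

```python
import numpy as np

# Hypothetical endmember spectra: 5 spectral bands, 3 raw materials
# (one column per endmember); values are made up for illustration.
E = np.array([
    [0.8, 0.1, 0.3],
    [0.7, 0.2, 0.4],
    [0.2, 0.9, 0.5],
    [0.1, 0.8, 0.6],
    [0.3, 0.2, 0.9],
])

a_true = np.array([0.5, 0.3, 0.2])   # abundances, non-negative, sum to one
x = E @ a_true                        # linear mixing model: x = E a

a_hat = np.linalg.pinv(E) @ x         # pseudoinverse abundance estimate

# Crude projection onto the physical constraints (non-negativity,
# sum-to-one); the paper instead modifies E iteratively.
a_proj = np.clip(a_hat, 0.0, None)
a_proj /= a_proj.sum()
```

In the noise-free, full-rank case the pseudoinverse recovers the abundances exactly; the iterative modification of `E` becomes relevant once endmember variability and noise violate the model.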
Using appropriately designed spectral filters makes it possible to determine material abundances optically. While an infinite number of possible spectral filters exists, we use neural networks to derive filters that lead to precise estimates. To overcome some drawbacks that regularly affect the determination of material abundances from hyperspectral data, we incorporate the spectral variability of the raw materials into the training of the considered neural networks. As a main result, we successfully classify quantized material abundances optically. Thus, the main part of the high computational load associated with the use of neural networks is avoided. In addition, the derived material abundances become invariant to spatially varying illumination intensity, a remarkable benefit compared with spectral filters based on the Moore-Penrose pseudoinverse, for instance.
Traditional spectral unmixing involves intensive signal processing applied to multispectral or hyperspectral data captured with an imaging device, which is highly time-consuming. In this article, a novel method, termed "optical unmixing", is proposed to alleviate the post-processing effort by replacing the heavy computation with a spectrally tunable light source. By choosing the spectral features of the light source intelligently, the abundance map of each material can be retrieved with minimal computation from gray-value images captured by a standard camera. For n unknown endmembers, 3n + 1 measurements are required to retrieve the abundance maps with the proposed algorithms.
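As an illustration of the idea behind optical unmixing (not the paper's 3n + 1 measurement scheme), the following sketch uses the non-negative endmember spectra themselves as light spectra, so that each gray value recorded by the camera is an inner product of one light spectrum with the reflected scene spectrum; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical endmember spectra: 5 spectral bands, 3 endmembers.
E = np.array([
    [0.8, 0.1, 0.3],
    [0.7, 0.2, 0.4],
    [0.2, 0.9, 0.5],
    [0.1, 0.8, 0.6],
    [0.3, 0.2, 0.9],
])
a_true = np.array([0.2, 0.5, 0.3])
scene = E @ a_true                    # reflected spectrum of a mixed pixel

# Use the endmember spectra as (non-negative) illumination spectra:
# each camera measurement integrates one light spectrum against the
# scene spectrum, so g = E^T E a.
g = E.T @ scene                       # n gray-value measurements

a_hat = np.linalg.solve(E.T @ E, g)   # minimal linear reconstruction
```

This minimal version needs only n measurements; the actual method must additionally cope with non-negative light constraints, unknown illumination scaling, and noise, which is where the 3n + 1 measurements come in.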
This paper presents a method for appearance-based 3D head pose tracking using optical flow computation. The task is to recover the head pose parameters for extreme head pose angles from 2D images. A novel method is presented that enables a robust recovery of the full motion by employing a motion-dependent regularization term within the optical flow algorithm. The rigid motion parameters are thereby coupled directly with a regularization term in the image alignment method that affects translation and rotation independently. The ill-conditioned, nonlinear optimization problem is stabilized by the proposed regularization term, yielding a suitably conditioned Hessian matrix. It is shown that the regularization of the motion parameters can be extended to full 3D motion consisting of six parameters. Experiments on the Boston University head pose dataset demonstrate improved robustness of head pose estimation compared to conventional regularization methods. Using well-chosen values for the regularization parameters, the proposed method shows a significant improvement in accuracy over existing methods in head-tracking scenarios.
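The stabilizing effect of such a motion-dependent regularization on the normal equations can be sketched as follows; the Jacobian, weights, and parameter values are hypothetical stand-ins, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Jacobian of the image residuals w.r.t. six motion
# parameters (3 translations, 3 rotations); two nearly dependent
# columns mimic an ill-conditioned pose-alignment problem.
J = rng.standard_normal((100, 6))
J[:, 5] = J[:, 4] + 1e-6 * rng.standard_normal(100)

H = J.T @ J                            # Gauss-Newton Hessian approximation

# Motion-dependent regularizer: translations and rotations are
# weighted independently (values are assumptions for illustration).
lam = 1e-2
D = np.diag([1.0, 1.0, 1.0, 10.0, 10.0, 10.0])
H_reg = H + lam * D                    # regularized Hessian

cond_raw = np.linalg.cond(H)
cond_reg = np.linalg.cond(H_reg)
print(cond_raw, cond_reg)
```

Because `H` is positive semi-definite, adding the positive diagonal term bounds the smallest eigenvalue away from zero, so the Gauss-Newton update `(H + lam * D) @ delta = -J.T @ r` stays well conditioned even when the unregularized problem is nearly singular.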
For demanding sorting tasks, the acquisition and processing of color images does not provide sufficient information to successfully discriminate between the different object classes to be sorted. An alternative to integrating three spectral regions of visible light into the three color channels is to sample the spectrum at up to several hundred evenly spaced points and acquire so-called hyperspectral images. Such images provide a complete image of the scene at each considered wavelength and contain much more information about the composition of the different materials. Hyperspectral images can also be acquired in spectral regions neighboring visible light, such as the ultraviolet (UV) and near-infrared (NIR) regions. From a mathematical point of view, it is possible to extract the spectra of the pure materials and the amounts to which these spectra contribute to material mixtures. This process is called spectral unmixing. Spectral unmixing based on the most widely used linear mixing model is a difficult task due to model ambiguities and distorting factors such as noise. Until a few years ago, the most inherent property of hyperspectral images, namely the correlation of abundances between neighboring pixels, was not used in unmixing algorithms. Only recently have researchers started to incorporate spatial information into the unmixing process, which is now known to improve the unmixing results. In this paper, we introduce two new methods and study their effect, along with that of two previously described methods, on spectral unmixing, especially regarding their ability to account for edges and other shapes in the abundance maps.
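A minimal sketch of why the abundance correlation between neighboring pixels helps, assuming known endmembers and a deliberately simple neighborhood averaging of the abundance maps (not one of the methods studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical endmembers: 5 bands, 3 materials; a 20x20 image whose
# true abundances are spatially constant, mixed under the linear
# mixing model plus sensor noise.
E = np.array([
    [0.8, 0.1, 0.3],
    [0.7, 0.2, 0.4],
    [0.2, 0.9, 0.5],
    [0.1, 0.8, 0.6],
    [0.3, 0.2, 0.9],
])
a_true = np.array([0.6, 0.3, 0.1])
h = w = 20
X = E @ np.tile(a_true[:, None], (1, h * w))
X += 0.02 * rng.standard_normal(X.shape)

A = np.linalg.pinv(E) @ X                  # purely pixelwise unmixing
maps = A.reshape(3, h, w)

# Exploit neighboring-pixel correlation: average each abundance map
# over a 3x3 window (edge padding keeps the image size).
pad = np.pad(maps, ((0, 0), (1, 1), (1, 1)), mode="edge")
win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3), axis=(1, 2))
smoothed = win.mean(axis=(-1, -2))

err_raw = np.abs(maps - a_true[:, None, None]).mean()
err_smooth = np.abs(smoothed - a_true[:, None, None]).mean()
```

In smooth regions the averaging suppresses the noise amplified by the unmixing; the methods in the paper are precisely about doing this without blurring edges and other shapes in the abundance maps.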
Regardless of whether mosaics, material surfaces, or skin surfaces are inspected, their texture plays an important role. Texture is a property that is hard to describe in words but easy to convey in pictures. Furthermore, a huge amount of digital images containing visual descriptions of textures already exists. However, this information becomes useless if there are no appropriate methods to browse the data. In addition, depending on the given task, properties like scale, rotation, or intensity invariance are desired. In this paper, we propose to analyze texture images according to their characteristic pattern. First, a classification approach is proposed to separate regular from non-regular textures. The second stage focuses on regular textures and suggests a method to sort them according to their similarity. Different features are extracted from the texture to describe its scale, orientation, texel, and the texel's relative position. Depending on the desired invariance of the visual characteristics (such as the texture's scale or the texel's form), the comparison of the features between images is weighted and combined to define the degree of similarity between them. Tuning the weighting parameters allows this search algorithm to be easily adapted to the requirements of the desired task. Not only can the total invariance of the desired parameters be adjusted, the weighting of the parameters may also be modified to adapt to an application-specific type of similarity. This search method has been evaluated using different textures and similarity criteria, achieving very promising results.
In optical inspection systems like automated bulk sorters, hyperspectral images in the near-infrared range are increasingly used for the identification and classification of materials. However, the possible applications are limited by the coarse spatial resolution and low frame rate. By adding an additional multispectral image with higher spatial resolution, the missing spatial information can be acquired. In this paper, a method is proposed to fuse the hyperspectral and multispectral images by jointly unmixing the image signals. To this end, the linear mixing model, which is well known from remote sensing applications, is extended to describe the spatial mixing of signals originating from different locations. Different spectral unmixing algorithms can be used to solve the problem. The benefit of the additional sensor and the properties of the unmixing process are presented and evaluated, as well as the quality of the unmixing results obtained with different algorithms. With the proposed extended mixing model, an improved result can be achieved, as shown with different examples.
In cases of nuclear disasters, it is desirable to know one's personal exposure to radioactivity and the related health risk. Usually, Geiger-Mueller tubes are used to assess the situation, but equipping everyone with such a device in a short period of time is very expensive. We propose a method to detect ionizing radiation using the integrated camera of a mobile consumer device, e.g., a cell phone. In emergency cases, millions of existing mobile devices could then be used to monitor the exposure of their owners. In combination with internet access and GPS, the measured data can be collected by a central server to obtain an overview of the situation.
During a measurement, the CMOS sensor of the mobile device is shielded from surrounding light by an attachment in front of the lens or an internal shutter. The high-energy radiation produces free electrons on the sensor chip, resulting in an image signal. By image analysis on the mobile device, signal components due to incident ionizing radiation are separated from the sensor noise. With radioactive sources present, significant increases in the number of detected pixels can be observed. Furthermore, the cell phone application can make a preliminary estimate of the dose collected by an individual and the associated health risks.
Wind turbine blades are made of composite materials and reach lengths of more than 42 meters; developments for modern offshore turbines are working toward blades about 60 meters long. With the increasing height of the turbines and the remote locations of the structures, health monitoring systems are becoming more and more important. Fiber-optic sensor systems are well suited for this purpose, as they are lightweight, immune to electromagnetic interference (EMI), and can be multiplexed. Based on two separately existing concepts for strain measurements and lightning detection on wind turbines, a fused system is presented. The strain measurement system is based on a reflective fiber Bragg grating (FBG) network embedded in the composite structure of the blade. For lightning detection, transmissive fiber-optic magnetic field sensors based on the Faraday effect are used to register the lightning parameters and estimate the impact point. Through the fusion, an existing lightning detection system is thus augmented with the capability to measure strain, temperature, and vibration. Load, strain, temperature, and impact detection information can be incorporated into the turbine's monitoring or SCADA system and accessed remotely by operators. Data analysis techniques allow dynamic maintenance scheduling to become a reality, which is of special interest for the cost-effective maintenance of large offshore or hard-to-reach onshore wind parks. To prove the feasibility of this sensor fusion on one optical fiber, interferences between the two sensor systems are investigated and evaluated.
We present a new method to segment images of structured surfaces from illumination series, i.e., sets of images of an object recorded with different lighting settings. We use a parallel light source whose angle of incidence is described by the azimuth and elevation angles. Depending on the surface topography, the intensities seen by the camera as a function of the illumination direction form characteristic patterns. The segmentation itself is based on cluster analysis in a multi-dimensional feature space; the resulting classes correspond to the identified segments of the surface image. A crucial step in this approach is the definition of meaningful features. We focus on features that can be extracted from the signal given by the intensities at a single surface location as a function of the illumination direction. We investigate features based on moments of this intensity signal as well as on its frequency decomposition with respect to the illumination direction. Furthermore, we show that features of this kind can be used to robustly segment a wide variety of textures on structured surfaces. Since no spatial neighbourhood is used to compute the features, i.e., "averaging" takes place only in the illumination domain, no spatial resolution has to be sacrificed. Consequently, even very small regions can be segmented reliably, as is necessary when defects are to be detected.
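The per-pixel features described above, moments of the intensity signal over the illumination direction and its frequency decomposition, can be sketched as follows; the synthetic illumination series and the two-region layout are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illumination series: 16 illumination azimuths, 32x32
# pixels. The left half responds anisotropically to the illumination
# direction, the right half is flat; small sensor noise is added.
n_dir, h, w = 16, 32, 32
phi = np.linspace(0, 2 * np.pi, n_dir, endpoint=False)
series = np.empty((n_dir, h, w))
series[:, :, :16] = (1 + np.cos(phi))[:, None, None]   # anisotropic region
series[:, :, 16:] = 1.0                                 # flat region
series += 0.01 * rng.standard_normal(series.shape)

# Per-pixel features of the intensity signal over the illumination
# direction: mean, standard deviation, and the magnitude of the first
# harmonic of its frequency decomposition.
mean = series.mean(axis=0)
std = series.std(axis=0)
harm1 = np.abs(np.fft.rfft(series, axis=0)[1]) / n_dir

features = np.stack([mean, std, harm1], axis=-1)  # one vector per pixel
# Clustering `features` (e.g. with k-means) then yields the segments.
```

Note that each feature vector is computed from a single pixel's signal across the series, so the segmentation costs no spatial resolution, exactly the property emphasized in the abstract.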
This contribution presents a new algorithm to extract a measure of the quality of honed cylinder bores based on 2D intensity or topography data, thus enabling an objective assessment of their surface texture. The method is based on an adaptive separation of the surface data into two complementary components: the groove texture and the background data. From these separation results, a scalar feature is then computed that describes the texture quality compactly and reliably. The principle of the algorithm is illustrated using a series of fax film replicas of real honing textures showing different degrees of quality. Moreover, the usefulness of the proposed approach is demonstrated by comparing classification results based on the new measure with ratings of experts. In all cases, a correct class assignment could be achieved.
We present a new image processing strategy that enables an automated comparison of striation patterns. A signal model is introduced to describe the interesting features of forensically relevant striation marks. To provide high image quality, several images of the same surface area are recorded under systematically varying conditions and then combined into an improved result by means of data fusion techniques. Based on the signal model, the signal of interest is concentrated and a compact representation of the marks is obtained. To enable an efficient description of the relevant features, even in the case of deformed surfaces or curved striation marks, a straightening of the grooves is performed beforehand. Subsequently, a meaningful "signature" describing the information of interest is extracted using the whole length of the grooves. This signature is used for an objective evaluation of the similarity of the striation patterns.
Defects of painted surfaces have proven to be visually disturbing even when their depth is only a few microns. Most inspection approaches neither enable a reliable classification of small defects nor provide a suitable human-machine interface to identify areas to be refinished. Consequently, in most cases the inspection still takes place manually and visually, an unsatisfactory compromise that lacks both objectivity and reproducibility. Our approach combines the reliability of automated methods with the acceptance and flexibility of human-based techniques. The measurement principle is based on deflectometry and features a significantly higher sensitivity than triangulation methods. The developed system consists of a light source based on a digital micromirror device (DMD), a screen onto which defined patterns are projected, and a mobile inspection device equipped with a head-mounted display (HMD) and a video camera. During operation, the camera captures images of different patterns reflected in the surface. Several images are combined using one of the two described techniques for enhancing surface defects, and the resulting feature image is displayed in the HMD. This procedure takes place in real time and is repeated continuously. The system performance is demonstrated with the visual inspection of car doors. Promising results show that our prototype allows a reliable yet cost-efficient inspection of painted surfaces matching the needs of the automotive industry.
This contribution presents a new fusion strategy to inspect specular surfaces. To cope with illumination problems, several images are recorded under different lighting. Typically, the information of interest is extracted from each image separately and then combined at the decision level. In our approach, however, all images are processed simultaneously by means of a centralized fusion, no matter whether the desired results are images, features, or symbols. Since the information being fused is closer to the source, a better exploitation of the raw data is achieved. The sensors are virtual in the sense that a single camera is employed to record all images under different illumination patterns. The fusion problem is formulated by means of an energy function whose minimization yields the desired fusion results, which describe surface defects. The performance of the proposed methodology is illustrated by two case studies: the analysis of machined surfaces and the inspection of painted free-form surfaces. The programmable light sources utilized are a DMD and an LED-based illumination device, respectively. In both cases, the results demonstrate that, by generating complementary imaging situations and using fusion techniques, a reliable yet cost-efficient inspection is attained that matches the needs of industry.
Correlation methods represent a well-known and reliable approach to detecting similarities between images, signal patterns, and stochastic processes. However, the cross-correlation function only registers linear similarities. Unfortunately, it is often not possible to avoid non-linearities in the characteristics of the sensors used or, as in image processing, in the interaction between the illumination and the scene to be captured. In such cases, correlation methods may yield poor results. In this paper, we describe alternative strategies to enhance the performance of correlation methods even when the statistical connection between the signals is non-linear. To reduce the impact of non-linearities on the signals to be analyzed, a preprocessing is performed in which certain properties affecting first-order statistics are manipulated. This step imposes the same histogram on the signals to be compared, so that typically higher correlation coefficients are obtained than without preprocessing. The performance of our approach is demonstrated with two different tasks. First, a preprocessing strategy is proposed for signals obtained from train-based sensors to enable an identification of rail switches. Second, a method for comparing striation patterns in forensic science is presented; to investigate the benefit of this approach, a large database of toolmarks is used.
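A common way to impose the same histogram on two signals is rank-based histogram specification. The following sketch, with synthetic data and a monotone non-linear distortion (both assumptions, not the paper's sensor signals), shows how such a preprocessing can raise the correlation coefficient:

```python
import numpy as np

def impose_histogram(signal, reference):
    """Give `signal` the same histogram as `reference` by rank mapping."""
    ranks = np.argsort(np.argsort(signal))   # rank of each sample
    return np.sort(reference)[ranks]         # reference values, same ordering

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = np.exp(x) + 0.05 * rng.standard_normal(500)  # non-linear distortion of x

r_raw = np.corrcoef(x, y)[0, 1]

# Impose the histogram of x onto y, then correlate again.
y_eq = impose_histogram(y, x)
r_eq = np.corrcoef(x, y_eq)[0, 1]
print(r_raw, r_eq)
```

Because the mapping preserves only the rank order of `y`, any monotone non-linearity is removed, and the linear cross-correlation can again capture the underlying statistical connection.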
We present a new image processing strategy that enables an automated extraction of signatures from striation patterns. To this end, a signal model is proposed that allows a suitable description of the interesting features of forensically relevant striation marks. To provide high image quality, several images of the same surface area are recorded under systematically varying conditions. The images obtained are then combined into an improved result by means of appropriate sensor fusion techniques. Based upon the signal model, the signal of interest is concentrated and a compact representation of the grooves is obtained. To enable an efficient description of the relevant features even in the case of deformed surfaces or curved striation marks, a straightening of the grooves is performed beforehand. Subsequently, a meaningful signature describing the information of interest is extracted using the whole length of the grooves. This signature can be used for an objective evaluation of the similarity of striation patterns.
Shot peening is a technique used to increase the flexural fatigue strength of machine parts that are heavily loaded by alternating bending. The impacts of the projectiles induce a compressive strain tangential to the surface, which increases its endurance limit. To achieve a defined surface coverage with projectile impacts, this process has to be calibrated by measuring the surface coverage as a function of time. Up to now, this has been done visually by inspecting test surfaces with a microscope and then comparing them with a catalog of reference patterns showing different coverage factors. This paper presents a model-based technique enabling an automated inspection of shot-peened surfaces. For this purpose, an image series is recorded by varying the surface illumination systematically, and the series is subsequently fused into a symbolic result describing the areas showing shot impacts. Thanks to the simultaneous analysis of the signal intensities in illumination space, no consideration of neighboring pixels is necessary to classify each single surface point. However, to ensure a consistent result, additional constraints are introduced. Thus, a robust and precise detection of the interesting areas is attained.
This paper deals with an important task within forensic science: the automatic comparison of bullets for the purpose of firearm identification. Bullets bear groove-shaped marks on their circumferential surface that can be thought of as a kind of 'fingerprint' of the firearm. To accomplish the comparison task, mainly the fine grooves on the bullet surface are of interest.
This paper deals with strategies for reliably obtaining the edges and the surface texture of metallic objects. Since illumination is a critical aspect regarding robustness and image quality, it is considered here as an active component of the image acquisition system. The performance of the presented methods is demonstrated, among other examples, with images of needles for blood sugar tests. Such objects have an optimized form consisting of several planar ground surfaces delimited by sharp edges. To allow a reliable assessment of the quality of each surface, and a measurement of their edges, methods for fusing data obtained under different illumination constellations were developed. The fusion strategy is based on the minimization of suitable energy functions. First, an illumination-based segmentation of the object is performed. To obtain the boundaries of each surface, directional light-field illumination is used. By formulating suitable criteria, nearly binary images are selected by varying the illumination direction. The surface edges are then obtained by fusing the contours of the areas obtained before. Next, an optimally illuminated image is acquired for each surface of the object by varying the illumination direction; for this purpose, a criterion describing the quality of the surface texture has to be maximized. Finally, the images of all textured surfaces of the object are fused into an improved result in which the whole object is contained with high contrast. Although the presented methods were designed for the inspection of needles, they also perform robustly in other computer vision tasks in which metallic objects have to be inspected.
A novel image processing method is presented that makes it possible to obtain images with maximal contrast by fusing a series of images acquired under different illumination constellations. For this purpose, the notion of illumination space is introduced, and strategies for sampling this space are discussed. It is shown that the signal of interest contained in a physical texture would often be lost if standard image acquisition methods were used. In contrast, the presented approach provides a robust and reproducible way to obtain high-contrast images containing the relevant information for subsequent processing steps.
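One generic way to fuse an illumination series into a high-contrast image is to select, at each pixel, the image of the series with the highest local contrast. The following sketch illustrates this with a synthetic series in which each image illuminates a different quadrant well; it is a simple stand-in, not a reproduction of the paper's fusion method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical illumination series: 4 images of a 64x64 surface. In
# each image one quadrant shows the texture with high contrast; the
# rest is washed out to a flat gray.
h = w = 64
texture = rng.random((h, w))
series = np.full((4, h, w), 0.5)
series[0, :32, :32] = texture[:32, :32]
series[1, :32, 32:] = texture[:32, 32:]
series[2, 32:, :32] = texture[32:, :32]
series[3, 32:, 32:] = texture[32:, 32:]

def local_variance(img, k=5):
    """Local contrast as the variance in a k x k sliding window."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

contrast = np.stack([local_variance(im) for im in series])
best = contrast.argmax(axis=0)                   # best-lit image per pixel
fused = np.take_along_axis(series, best[None], axis=0)[0]
```

In the interior of each quadrant the winner-take-all selection recovers the texture exactly; a practical system would additionally blend the selections to avoid seams at illumination boundaries.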