In spite of the prodigious growth in the market for digital cameras, they have yet to displace film-based cameras in the consumer market. This is largely due to the high cost of photographic-resolution sensors. One possible approach to producing a low-cost, high-resolution sensor is to linearly scan a masked low-resolution sensor. Masking of the sensor elements allows transform-domain imaging. Multiple displaced exposures of such a masked sensor permit the device to acquire a linear transform of a higher resolution representation of the image than that defined by the sensor element dimensions. Various approaches have been developed in the past along these lines, but they often suffer from poor sensitivity, difficulty in adapting to a 2D sensor, or a spatially variable noise response. This paper presents an approach based on a new class of Hadamard masks, termed Uniform Noise Hadamard Masks, which has superior sensitivity to simple sampling approaches and retains the multiresolution capabilities of certain Hadamard matrices, while overcoming the non-uniform noise response of some simple Hadamard-based masks.
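The following 1-D sketch (assuming NumPy) illustrates the underlying principle: exposures taken through a Hadamard-weighted mask amount to an invertible linear transform of a higher resolution signal, which can then be recovered by applying the transform again. It illustrates transform-domain imaging with a plain ±1 Hadamard matrix only, not the Uniform Noise Hadamard Masks introduced in the paper (a physical mask cannot realize negative weights, so practical designs use 0/1 variants).

```python
# Minimal 1-D illustration of Hadamard transform imaging (assumes NumPy).
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8                                  # "high resolution" pixels per sensor element
x = np.random.rand(n)                  # unknown high-resolution intensities
H = hadamard(n)

# Each displaced exposure measures one Hadamard-weighted sum of the
# high-resolution pixels (the mask supplies the +/-1 weights).
measurements = H @ x

# Sylvester Hadamard matrices satisfy H @ H = n * I, so the high-resolution
# signal is recovered by applying H again and rescaling.
x_recovered = (H @ measurements) / n
assert np.allclose(x, x_recovered)
```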
Expert interpretation of raster-based data, needed when, for example, automatic reconstruction of sparsely sampled data cannot produce accurate models, requires a means of interaction through which the expert's knowledge can be incorporated into the model to improve accuracy. If such expert interpretation is to be viable, the interaction must be intuitive, direct and flexible. We present a novel approach to the design of such interaction: the use of the discrete thin-plate spline permits interactive manipulation of the stiffness and tension parameters in the plate to control the behavior between control points; an object-based approach allows raster-based objects to be manipulated in an intuitive manner in the context of a visual representation of the objects. The editor adopts a problem-driven approach which allows specialized editing tools to be developed for a specific application domain. A prototype implementation of the editor is presented which provides insights into the advantages and limitations of the approach.
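A 1-D sketch of the stiffness/tension idea is given below, assuming NumPy. The parameter names alpha (tension), beta (stiffness) and w (data-spring weight), and the reduction to one dimension, are illustrative assumptions; the editor described above operates on 2-D raster objects with a discrete thin-plate spline.

```python
# 1-D discrete spline under tension with data "springs" at control points.
import numpy as np

def fit_spline(n, control_idx, control_val, alpha=1.0, beta=10.0, w=100.0):
    """Minimise  w*sum (u_i - v_i)^2 + alpha*||D1 u||^2 + beta*||D2 u||^2."""
    D1 = np.diff(np.eye(n), 1, axis=0)    # first-difference operator (tension)
    D2 = np.diff(np.eye(n), 2, axis=0)    # second-difference operator (stiffness)
    A = alpha * D1.T @ D1 + beta * D2.T @ D2
    b = np.zeros(n)
    for i, v in zip(control_idx, control_val):
        A[i, i] += w                      # spring pulling u_i toward the control value
        b[i] += w * v
    return np.linalg.solve(A, b)

# Increasing beta stiffens the curve between control points; increasing
# alpha pulls it taut, flattening the overshoot between them.
u = fit_spline(50, control_idx=[5, 25, 45], control_val=[0.0, 1.0, 0.2])
```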
The characterization of a highly non-linear color print device can involve a large number of measurements of printed color output. If the measurement process is not automated, this can represent a significant fraction of the cost of developing a color model for a device. One way to limit the number of measurements required is to ensure that, in any given region, only enough measurements are made to adequately characterize the local behavior. With no prior knowledge of the behavior, this requires an adaptive approach to the sampling. An adaptive sampling technique developed for this work, termed Model Accuracy Moderated Adaptive Sampling (MAMAS), is described. Simulation tests with and without measurement noise are presented and the results are compared with those obtained using uniform regular sampling. The technique is also applied to a real printer, the Canon CLC500, for which some results are presented. The color model used for the print device is based on an interpolated look-up table (ILUT). Because of the highly non-linear nature of the device being modeled, a flexible technique is required to translate the irregular measurement samples into a regularly gridded model. A method based on a regularized linear spline was developed. Appropriate choice of the penalty function for the regularization can achieve a compromise between fitting the measured points and reducing the impact of measurement noise. A brief overview of the technique is presented.
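The sketch below, assuming NumPy, illustrates the general accuracy-moderated adaptive sampling idea in one dimension: a region is subdivided only while the current model's prediction disagrees with a fresh measurement by more than a tolerance. It is a hedged illustration of the principle, not the MAMAS algorithm itself, and the tolerance and recursion scheme are assumptions.

```python
# Generic 1-D accuracy-moderated adaptive sampling (illustration only).
import numpy as np

def adaptive_sample(device, lo, hi, tol=0.01, max_depth=8):
    """Return sorted (x, y) samples of `device` over [lo, hi]."""
    samples = {lo: device(lo), hi: device(hi)}

    def refine(a, b, depth):
        mid = 0.5 * (a + b)
        predicted = 0.5 * (samples[a] + samples[b])   # linear-interpolation model
        measured = device(mid)
        if abs(predicted - measured) > tol and depth < max_depth:
            samples[mid] = measured                   # model too inaccurate: keep sample
            refine(a, mid, depth + 1)
            refine(mid, b, depth + 1)

    refine(lo, hi, 0)
    xs = np.array(sorted(samples))
    return xs, np.array([samples[x] for x in xs])

# Example: samples concentrate where a strongly non-linear response changes fastest.
xs, ys = adaptive_sample(lambda x: np.tanh(8 * (x - 0.5)), 0.0, 1.0)
```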
KEYWORDS: Data modeling, Nonlinear image processing, 3D image processing, 3D modeling, Image restoration, Visual process modeling, Visualization, Systems modeling, Spatial frequencies, Modeling
An adaptive multi-dimensional interpolation technique for irregularly gridded data based on a regularized linear spline is described. The regularization process imposes a penalty or energy function which depends upon a sum of quadratic functions of the error at the data points and of the gradient and curvature of the surface. The weighting of a given term in the penalty function is made to depend non-linearly on the first and second differences in the regularly gridded interpolation of the data. As a result, the method is able to provide an interpolation which is sensitive to the local behavior of the data being interpolated. For example, data containing a discontinuity or crease can be smoothed to reduce noise without smoothing the discontinuity or crease. For the 2-D problem, the technique is analogous to a rectangular grid of stiff extensible rods defining an interpolation surface, with springs resisting the displacement of the surface from the known data values, the extension of the rods, and the bending of one rod with respect to another. The weights in the penalty function are equivalent to a non-linear spring characteristic for which the spring constant is reduced at large displacements. For a given set of weights, the penalty function is quadratic. This leads to a set of linear equations which can be solved efficiently using iterative techniques. Implementations of the technique for irregular 2-D and 3-D data are described and results are presented.
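A 1-D sketch of this iteratively reweighted scheme, assuming NumPy, is shown below. The particular weight function 1/(1 + (d/s)^2), the parameter names, and the use of a direct solver rather than the iterative techniques mentioned above are assumptions chosen to mimic the described spring behavior (spring constant reduced at large displacements); the paper's exact formulation may differ.

```python
# 1-D edge-preserving regularized spline via iterative reweighting.
import numpy as np

def regularized_spline(x_data, y_data, n=200, lam1=1.0, lam2=10.0,
                       scale=0.05, n_iter=5):
    """Interpolate irregular (x, y) samples onto a regular grid of n points."""
    grid = np.linspace(x_data.min(), x_data.max(), n)
    idx = np.searchsorted(grid, x_data).clip(0, n - 1)  # grid node for each sample
    D1 = np.diff(np.eye(n), 1, axis=0)                  # gradient penalty operator
    D2 = np.diff(np.eye(n), 2, axis=0)                  # curvature penalty operator
    w1 = np.ones(n - 1)
    w2 = np.ones(n - 2)
    u = np.zeros(n)
    for _ in range(n_iter):
        # For fixed weights the penalty is quadratic, so each pass is a linear solve.
        A = lam1 * D1.T @ (w1[:, None] * D1) + lam2 * D2.T @ (w2[:, None] * D2)
        b = np.zeros(n)
        for i, y in zip(idx, y_data):
            A[i, i] += 1.0          # quadratic data-fidelity "spring"
            b[i] += y
        u = np.linalg.solve(A, b)
        # Reduce the penalty weight where the local difference is large,
        # so creases and discontinuities are not smoothed away.
        w1 = 1.0 / (1.0 + (D1 @ u / scale) ** 2)
        w2 = 1.0 / (1.0 + (D2 @ u / scale) ** 2)
    return grid, u

# Usage: a noisy step edge is smoothed without blurring the discontinuity.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
y = (x > 0.5).astype(float) + 0.05 * rng.normal(size=40)
grid, u = regularized_spline(x, y)
```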