Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition (ATR) problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general, tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors, which can be applied to matrices, to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied by a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, the coefficients corresponding to its identifying class are expected to be active while all others are zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity-inducing parameters of the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: 1.) improve accuracy when multiple views rather than a single one are invoked, and 2.) outperform state-of-the-art alternatives, particularly when training imagery is limited.
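For context, the basic sparse-representation classification recipe that this work builds on can be sketched in a few lines. The snippet below is a minimal l1-based, single-view illustration only (not the collaborative spike-and-slab prior or the joint estimation developed here); the dictionary, labels, and regularization weight are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, alpha=0.01):
    """Minimal l1-based SRC sketch: represent test sample y over the training
    dictionary D (columns are vectorized training images), then assign the
    class whose atoms give the smallest reconstruction residual."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)                      # solves min ||y - D x||^2 + alpha*||x||_1
    x = lasso.coef_
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residual = np.linalg.norm(y - D @ x_c)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```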
The set of orthogonal eigenvectors built via principal component analysis (PCA), while very effective for compression, can often lead to loss of crucial discriminative information in signals. In this work, we build a new basis set using synthetic aperture radar (SAR) target images via non-negative matrix approximations (NNMAs). Owing to the underlying physics, we expect a non-negative basis and an accompanying non-negative coefficient set to be a more accurate generative model for SAR profiles than the PCA basis, which lacks direct physical interpretation. The NNMA basis vectors, while not orthogonal, capture discriminative local components of SAR target images. We test the merits of the NNMA basis representation for the problem of automatic target recognition using SAR images with a support vector machine (SVM) classifier. Experiments on the benchmark MSTAR database reveal the merits of basis selection techniques that can model imaging physics more closely and can capture inter-class variability, in addition to identifying a trade-off between classification performance and availability of training.
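A rough sketch of the NNMA-basis-plus-SVM pipeline described above, using scikit-learn's NMF as one common non-negative matrix approximation, might look as follows. The data arrays (X_train, X_test, y_train), rank, and SVM settings are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

# X_train, X_test: rows are vectorized, non-negative SAR magnitude images.
# y_train: class labels. All three are hypothetical placeholders.
nmf = NMF(n_components=40, init='nndsvda', max_iter=500)
H_train = nmf.fit_transform(X_train)   # non-negative coefficients of training images
H_test = nmf.transform(X_test)         # project test images onto the learned basis

clf = SVC(kernel='rbf', C=10.0, gamma='scale')
clf.fit(H_train, y_train)
predictions = clf.predict(H_test)
```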
Multidimensional lookup tables (LUTs) are often used to describe the response of physical systems to multiple inputs. However, these tables are also tensors, and in this paper we use tensor decomposition to greatly reduce the number of parameters needed to generate an accurate approximation to the tensor, and we discuss how to determine these parameters from a small number of known tensor elements. We use this approach to generate printer models, which are CMY or CMYK to L*a*b* LUTs in which each element is the L*a*b* value for one colorant formulation. The approach generates accurate results with a reasonable number of L*a*b* measurements and can be used when nothing else is known about the system. It also runs much faster than the physics-based models that are sometimes available for these systems.
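As a sketch of the idea (not the paper's specific procedure for fitting from a small set of measurements), a CMY-to-L*a*b* LUT stored as a tensor can be compressed with a CP (PARAFAC) decomposition. The table size, rank, and library choice (tensorly) below are illustrative assumptions, and the decomposition here is computed from a fully known table rather than from sparse measured elements.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# lut: hypothetical 17 x 17 x 17 x 3 CMY -> L*a*b* table (last axis: L*, a*, b*).
rank = 6
channel_models = []
for ch in range(3):
    cp = parafac(tl.tensor(lut[..., ch]), rank=rank)   # rank-R CP decomposition
    channel_models.append(cp)

# A rank-6 CP model stores three 17 x 6 factor matrices (about 300 numbers)
# per output channel instead of 17**3 = 4913 table entries.
approx = tl.cp_to_tensor(channel_models[0])
rel_err = np.linalg.norm(approx - lut[..., 0]) / np.linalg.norm(lut[..., 0])
```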
Document images are obtained regularly by rasterization of document content and as scans of printed documents.
Resizing via background and white space removal is often desired for better consumption of these images, whether on
displays or in print. While white space and background are easy to identify in images, existing methods such as naïve removal and content-aware resizing (seam carving) each have limitations that can lead to undesirable artifacts, such as
uneven spacing between lines of text or poor arrangement of content. An adaptive method based on image content is
hence needed. In this paper we propose an adaptive method to intelligently remove white space and background content
from document images. Document images differ from pictorial images in structure. They typically contain objects (text letters, pictures, and graphics) separated by uniform background, which includes both white paper space and other uniformly colored background. Pixels in uniform background regions are excellent candidates for deletion when resizing is required, as their removal changes document content and style less than deletion of object pixels. We propose a background deletion method that exploits both local and global context. The method aims to retain the document's structural information and image quality.
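A very naive illustration of the basic idea, flagging nearly uniform background rows of a grayscale document image as deletion candidates while preserving some spacing, is sketched below. The thresholds and the row-only treatment are assumptions for illustration; the paper's method additionally uses global context and quality constraints.

```python
import numpy as np

def removable_rows(gray, bg_tol=8, keep_gap=10):
    """Flag rows that are (nearly) uniform background in a grayscale document
    image, keeping the first `keep_gap` rows of every background run so the
    spacing between content blocks is reduced rather than collapsed."""
    row_range = gray.max(axis=1).astype(int) - gray.min(axis=1).astype(int)
    is_bg = row_range < bg_tol                 # row varies little -> background
    removable = is_bg.copy()
    run_start = None
    for i, flag in enumerate(np.append(is_bg, False)):
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            removable[run_start:run_start + keep_gap] = False
            run_start = None
    return removable

# Usage: shrunk = gray[~removable_rows(gray)]  (the same idea applies to columns)
```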
Several color-imaging algorithms, such as color gamut mapping to a target device and resizing of color images, have traditionally involved pixel-wise operations. That is, each color value is processed independently of its neighbors in the image. In recent years, applications such as spatial gamut mapping have
demonstrated the virtues of incorporating spatial context into color processing tasks. In this paper, we
investigate the use of locally based measures of image complexity such as the entropy to enhance the
performance of two color imaging algorithms viz. spatial gamut mapping and content-aware resizing of
color images. When applied to spatial gamut mapping (SGM), the use of these spatially based local
complexity measures helps adaptively determine gamut mapping parameters as a function of image content
- hence eliminating certain artifacts commonly encountered in SGM algorithms. Likewise, developing
measures of complexity of color-content in a pixel neighborhood can help significantly enhance
performance of content-aware resizing algorithms for color images. While the paper successfully employs intuitively motivated measures of image complexity, it also aims to bring to light the potentially greater rewards that may be reaped should more formal measures of the local complexity of color content be developed.
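As one concrete example of a locally based complexity measure (illustrative only, not necessarily the exact measure used in the paper), per-pixel windowed entropy of the intensity distribution can be computed as below and then used to modulate gamut-mapping parameters or seam-carving energy.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_entropy(gray, window=15, bins=32):
    """Per-pixel entropy (bits) of the intensity histogram in a square window,
    computed by box-filtering per-bin indicator maps. High values mark busy
    regions; low values mark smooth regions."""
    gray = np.asarray(gray, dtype=float)
    edges = np.linspace(gray.min(), gray.max() + 1e-6, bins + 1)
    local_probs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        indicator = ((gray >= lo) & (gray < hi)).astype(float)
        local_probs.append(uniform_filter(indicator, size=window))
    p = np.clip(np.stack(local_probs, axis=0), 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=0)
```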
Barcodes are widely utilized for embedding data in printed format to provide automated identification and
tracking capabilities in a number of applications. In these applications, it is desirable to maximize the number
of bits embedded per unit print area in order to either reduce the area requirements of the barcodes or to
offer an increased payload, which in turn enlarges the class of applications for these barcodes. In this paper,
we present a new high capacity color barcode. Our method operates by embedding independent data in two
different printer colorant channels via halftone-dot orientation modulation. In the print, the dots of the two
colorants occupy the same spatial region. At the detector, however, by using the complementary sensor channels
to estimate the colorant channels we can recover the data in each individual colorant channel. The method
therefore (approximately) doubles the capacity of encoding methods based on a single colorant channel and
provides an embedding rate that is higher than other known barcode alternatives. The effectiveness of the
proposed technique is demonstrated by experiments conducted on xerographic printers. Data embedded at high density using the cyan and yellow colorant channels for halftone dot orientation modulation is successfully recovered using the red and blue sensor channels for detection, with an overall symbol error rate that is quite small.
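The detection step relies on the complementary relationship between colorants and sensor channels: cyan mainly absorbs red light and yellow mainly absorbs blue light. A rough, hypothetical separation of the two colorant channels from a scanned RGB patch is sketched below; the real detector additionally estimates dot orientation and corrects for print-scan distortions.

```python
import numpy as np

def estimate_colorant_channels(rgb_scan):
    """Rough separation of cyan and yellow colorant channels from an 8-bit
    scanned RGB patch: the red sensor channel 'sees' the cyan dots and the
    blue sensor channel 'sees' the yellow dots."""
    rgb = np.asarray(rgb_scan, dtype=float) / 255.0
    cyan_estimate = 1.0 - rgb[..., 0]     # cyan absorbs red
    yellow_estimate = 1.0 - rgb[..., 2]   # yellow absorbs blue
    return cyan_estimate, yellow_estimate
```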
The principal challenge in hardcopy data hiding is achieving robustness to the print-scan process. Conventional
robust hiding schemes are not well suited because they do not adapt to the print-scan distortion channel, and hence are fundamentally limited in a detection-theoretic sense. We consider data embedding in images printed with clustered-dot halftones. The input to the print-scan channel in this scenario is a binary halftone image, and hence the distortions are also intimately tied to the nature of the halftoning algorithm employed. We propose a new framework for hardcopy data hiding based on halftone dot orientation modulation. We develop analytic halftone threshold functions that generate elliptically shaped halftone dots in any desired orientation. Our hiding strategy then embeds a binary symbol as a particular choice of the orientation. The orientation is identified at the decoder via statistically motivated moments, following appropriate global and local synchronization to address the geometric distortion introduced by the print-scan channel. A probabilistic model of the print-scan process, which conditions received moments on input orientation, allows for maximum-likelihood (ML) optimal decoding. Our method bears similarities to the paradigms of informed coding and quantization index modulation (QIM), but also departs from classical results in that constant and smooth image areas are better suited for embedding via our scheme, as opposed to busy or "high entropy" regions. Data extraction is performed automatically from a scanned hardcopy, and results indicate a significantly higher embedding rate than existing methods, a majority of which rely on visual or manual detection.
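For illustration, the orientation of an elliptical halftone dot can be estimated from second-order central moments of a scanned halftone cell, in the spirit of the statistically motivated moments mentioned above. The code below is a generic moment-based orientation estimate, not the decoder's exact statistic or its ML decision rule.

```python
import numpy as np

def dot_orientation(cell):
    """Estimate the major-axis orientation (radians) of a halftone dot from
    second-order central moments of the cell's darkness."""
    cell = np.asarray(cell, dtype=float)
    w = cell.max() - cell                               # darker pixels weigh more
    total = w.sum()
    ys, xs = np.mgrid[0:cell.shape[0], 0:cell.shape[1]]
    xbar, ybar = (w * xs).sum() / total, (w * ys).sum() / total
    mu20 = (w * (xs - xbar) ** 2).sum() / total
    mu02 = (w * (ys - ybar) ** 2).sum() / total
    mu11 = (w * (xs - xbar) * (ys - ybar)).sum() / total
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)    # standard moment formula
```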
Inherent to most multi-color printing systems is the inability to achieve perfect registration between the primary
separations. Because of this, dot-on-dot or dot-off-dot halftone screen sets are generally not used, due to the
significant color shift observed in the presence of even the slightest misregistration. Much previous work has
focused on characterizing these effects, and it is well known that dot-off-dot printed patterns result in a higher
chroma (C*) relative to dot-on-dot. Rotated dot sets are used instead for these systems, as they exhibit a much
greater robustness against misregistration. In this paper, we make the crucial observation that while previous
work has used color shifts caused by misregistration to design robust screens, we can in fact exploit these color shifts to obtain estimates of misregistration. In particular, we go on to demonstrate that even low-resolution macroscopic color measurements of a carefully designed test patch can yield misregistration estimates that are accurate up to the sub-pixel level. The contributions of our work are as follows: 1.) a simple methodology to
construct test patches that may be measured to obtain misregistration estimates, 2.) derivation of a reflectance
printer model for the test patch so that color deviations in the spectral or reflectance space can be mapped to
misregistration estimates, and 3.) a practical method to estimate misregistration via scanner RGB measurements.
Experimental results show that our method achieves accuracy comparable to the state-of-the-art but expensive
geometric methods that are currently used by high-end color printing devices to estimate misregistration.
Color printer calibration is the process of deriving correction
functions for device signals (e.g., CMYK), so that the device can
be maintained with a fixed known characteristic color response.
Since the colorimetric response of the printer can be a strong function
of the halftone, the calibration process must be repeated for
every halftone supported by the printer. The effort involved in the
calibration process thus increases linearly with the number of halftoning
methods. In the past few years, it has become common for
high-end digital color printers to be equipped with a large number of
halftones, thus making the calibration process onerous. We propose
a halftone independent method for correcting color (CMY or CMYK)
printer drift. Our corrections are derived by measuring a small number
of halftone independent fundamental binary patterns based on
the 2×2 binary printer model by Wang et al. Hence, the required
measurements do not increase as more halftoning methods are
added. First, we derive a halftone correction factor (HCF) that exploits
the knowledge of the relationship between the true printer
response and the 2×2-model predicted response for a given halftoning
scheme. Therefore, the true color drift can be accurately predicted
from halftone-independent measurements and corrected correspondingly.
Further, we develop extensions of our proposed color
correction framework to the case when the measurements of our
fundamental binary patches are acquired by a common desktop
scanner. Finally, we exploit the application of the HCF to correct
color drift across different media (papers) and for halftone-independent
spatial nonuniformity correction.
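As a purely illustrative numeric sketch (the functional form below is an assumption, not the derivation in the paper), a halftone correction factor of this kind can be thought of as relating the true and 2×2-model-predicted responses measured once per halftone, and then reusing that relationship when only the halftone-independent patches are re-measured after drift.

```python
import numpy as np

def corrected_response(R_true_0, R_pred_0, R_pred_t, eps=1e-6):
    """Hypothetical multiplicative HCF sketch. At calibration time the true
    halftone response R_true_0 and the 2x2-model prediction R_pred_0 are both
    known; after drift, only the 2x2 prediction R_pred_t (from the small set of
    halftone-independent binary patches) is re-measured. Assuming the ratio is
    stable under drift, the drifted true response is estimated as shown."""
    hcf = R_true_0 / np.maximum(R_pred_0, eps)
    return hcf * R_pred_t
```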
Color printer calibration is the process of deriving correction functions for device CMYK signals, so that
the device can be maintained with a fixed known characteristic color response. Since the colorimetric
response of the printer can be a strong function of the halftone, the calibration process must be repeated for
every halftone supported by the printer. The effort involved in the calibration process thus increases
linearly with the number of halftoning methods. In the past few years, it has become common for high-end
digital color printers to be equipped with a large number of halftones, thus making the calibration process onerous. We propose a halftone independent method for correcting color (CMY/CMYK) printer drift. Our corrections are derived by measuring a small number of halftone independent fundamental binary patterns based on the 2×2 binary printer model by Wang et al. Hence, the required measurements do not increase
as more halftoning methods are added. The key novelty in our work is in identifying an invariant halftone
correction factor (HCF) that exploits the knowledge of the relationship between the true printer response
and the 2×2 predicted response for a given halftoning scheme. We evaluate our scheme both quantitatively
and qualitatively against the printer color correction transform derived with the printer in its "default
state". Results indicate that the proposed method is very successful in calibrating a printer across a wide
variety of halftones.
Color device calibration is traditionally performed using one-dimensional per-channel tone-response corrections (TRCs). While one-dimensional TRCs are attractive in view of their low implementation complexity and efficient real-time processing of color images, their use severely restricts the degree of control that can be exercised along various device axes. A typical example is that 1-D TRCs in a printer can be used either to ensure gray balance along the C = M = Y axis or to provide a linear response in ΔE units along each of the individual (C, M, and Y) axes, but not both. This paper proposes a novel two-dimensional architecture for color device calibration that enables significantly more control over the device color gamut with a modest increase in implementation cost. Results show significant improvement in calibration accuracy and stability when compared to traditional 1-D calibration.
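For reference, applying per-channel 1-D TRCs amounts to a per-channel lookup, which is why it is cheap but can impose only one constraint per channel; a minimal sketch with hypothetical node and value arrays is shown below. The proposed 2-D architecture generalizes this to two-dimensional correction functions.

```python
import numpy as np

def apply_trcs(cmy, nodes, values):
    """Apply per-channel 1-D tone-response corrections by interpolation.
    cmy:    float image in [0, 1], shape (..., 3)
    nodes:  list of three 1-D arrays of input node positions (per channel)
    values: list of three 1-D arrays of corrected outputs at those nodes"""
    out = np.empty_like(cmy)
    for ch in range(3):
        out[..., ch] = np.interp(cmy[..., ch], nodes[ch], values[ch])
    return out
```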
Conventional grayscale error diffusion halftoning produces worms and other objectionable artifacts. Tone-dependent error diffusion (Li and Allebach) reduces these artifacts by controlling the diffusion of quantization errors based on the input gray level. Li and Allebach optimize error filter weights and thresholds for each input gray level based on a human visual system model. This paper extends tone-dependent error diffusion to color. In color error diffusion, what color to render becomes a major concern, in addition to finding optimal dot patterns. We present a visually optimal design approach for tone-dependent error filters (one for each color plane).
The resulting halftones reduce traditional error diffusion artifacts and achieve greater accuracy in color rendition.
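For readers unfamiliar with the baseline, classical grayscale error diffusion with fixed Floyd-Steinberg weights is sketched below; tone-dependent error diffusion replaces the fixed weights and threshold with ones optimized per input gray level against a human visual system model, and this paper extends that idea to color.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classical grayscale error diffusion with fixed Floyd-Steinberg weights.
    Tone-dependent error diffusion instead selects the error filter weights
    and threshold as functions of the input gray level."""
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 255.0 if img[y, x] >= 128.0 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:
                img[y, x + 1] += err * 7.0 / 16.0
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3.0 / 16.0
                img[y + 1, x] += err * 5.0 / 16.0
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1.0 / 16.0
    return out
```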
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Since error diffusion is 2-D sigma-delta modulation (Anastassiou, 1989), Kite et al. linearize error diffusion by replacing the thresholding quantizer with a scalar gain plus additive noise. Sharpening is proportional to the scalar gain. Kite et al. derive the sharpness control parameter value in threshold modulation (Eschbach and Knox, 1991) to compensate for linear distortion. These unsharpened halftones are particularly useful in perceptually weighted SNR measures. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit-flipping quantizer (Damera-Venkata and Evans, 2001). We review other variations on grayscale error diffusion that reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error diffusion (Li and Allebach, 2002). We then discuss color error diffusion in several forms: color-plane-separable (Kolpatzik and Bouman, 1992); vector quantization (Shaked et al., 1996); green noise extensions (Lau et al., 2000); and matrix-valued error filters (Damera-Venkata and Evans, 2001). We conclude with open research problems.