This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6812, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
We virtually restore faded black-and-white photographic prints by decomposing the image into a smooth component, which contains edges and smoothed homogeneous regions, and a rough component, which may include grain noise but also fine detail. The decomposition into smooth and rough components is achieved using a rational filter. Two approaches are considered: in one, the smooth component is histogram-stretched and then gamma corrected before being added back to a homomorphically filtered version of the rough component; in the other, the image is initially gamma corrected and shifted towards white. Each approach improves on the previously and separately explored techniques of gamma correction alone and of stretching the smooth component combined with homomorphic filtering of the rough component. After characterizing the image with the scatter plot of a 2D local statistic of the type (local intensity, local contrast), namely (local average, local standard deviation), the effects of gamma correction are studied through their effects on the scatter plot, on the assumption that image quality is related to the distribution of data on that plot. The correlation coefficient between the local average and the local standard deviation, as well as the global average of the image, also play important roles as descriptors.
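As a minimal illustration of the characterization step just described, the sketch below computes the per-pixel (local average, local standard deviation) statistic, their correlation coefficient, and a plain gamma correction; the window size and gamma value are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_scatter(img, win=7):
    """Per-pixel local average and local standard deviation over a win x win
    window, plus the correlation coefficient between the two (the descriptors
    discussed above). The window size is an assumed parameter."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    corr = np.corrcoef(mean.ravel(), std.ravel())[0, 1]
    return mean, std, corr

def gamma_correct(img, gamma=0.8):
    """Plain gamma correction of an image with values in [0, 255]."""
    return 255.0 * (img / 255.0) ** gamma
```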
Multivariate images are now commonly produced in many applications. While their processing has become feasible thanks to increased computing power and new programming languages, theoretical difficulties remain to be solved. Standard image analysis operators are defined for scalars rather than for vectors, and their extension is not immediate. Several solutions exist, but their pertinence strongly depends on the context. In the present paper we address the segmentation of vector images while also including a priori knowledge. The proposed strategy combines a decision procedure (where points are classified) and an automatic segmentation scheme (where regions are properly extracted). The classification is made using a Bayesian classifier. The segmentation is computed via a region-growing method: the morphological watershed transform. A direct computation of the watershed transform on vector images is not possible, since vector sets are not ordered. The Bayesian classifier is therefore used to compute a scalar distance map in which regions are enhanced or attenuated depending on their similarity to a reference shape; the distance currently used is the Mahalanobis distance. This combination allows the decision function to be transferred from pixels to regions while preserving the advantages of the original watershed transform defined for scalar functions. The algorithm is applied to segmenting colour images (with a priori knowledge) and medical images, especially dermatology images where skin lesions have to be detected.
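The sketch below illustrates the core of this pipeline under stated assumptions: a Mahalanobis distance map to a reference class (learned from a handful of training pixels) converts the unordered vector image into a scalar function, which the standard watershed transform can then flood. The percentile-based marker selection and the scipy/skimage routines are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_by_reference(vector_img, ref_samples):
    """vector_img: (H, W, C) array; ref_samples: (N, C) training pixels of the
    reference class. Returns a label image from watershed flooding of the
    scalar Mahalanobis distance map."""
    mu = ref_samples.mean(axis=0)
    cov = np.cov(ref_samples, rowvar=False) + 1e-6 * np.eye(ref_samples.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = vector_img.astype(np.float64) - mu
    dist = np.sqrt(np.einsum('hwc,cd,hwd->hw', diff, cov_inv, diff))
    # seeds: pixels most similar to the reference class (assumed marker rule)
    markers, _ = ndi.label(dist < np.percentile(dist, 5))
    return watershed(dist, markers)
```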
In this paper we propose a new technique to detect random-valued impulse noise in images. In this method, the noisy pixels are detected iteratively over several phases. In each phase, a pixel is marked as noisy if it does not have a sufficient number of similar pixels inside its neighborhood window. The size of the window increases from phase to phase, as does the required number of similar neighbors. After the detection phases, all noisy pixels are corrected in a recovery process. We compare the performance of this method with other recently published methods in terms of peak signal-to-noise ratio and perceptual quality of the restored images. From the simulation results we observe that this method outperforms all other methods at medium to high noise rates. The algorithm is very fast, provides consistent performance over a wide range of noise rates, and preserves fine details of the image.
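A minimal sketch of the phase-wise detection idea, assuming illustrative window sizes, similarity quotas, and an intensity-similarity threshold (the paper's exact values are not reproduced here):

```python
import numpy as np

def detect_impulse_noise(img, phases=((1, 2), (2, 4), (3, 6)), sim_thresh=20):
    """Flag pixels that lack enough similar neighbours, phase by phase.
    phases: (window half-size, required number of similar neighbours) per phase,
    both growing together, as described above; all values are assumptions."""
    img = img.astype(np.int32)
    noisy = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for radius, required in phases:
        for y in range(h):
            for x in range(w):
                if noisy[y, x]:
                    continue                      # already marked in an earlier phase
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                window = img[y0:y1, x0:x1]
                similar = np.sum(np.abs(window - img[y, x]) <= sim_thresh) - 1  # exclude centre
                if similar < required:
                    noisy[y, x] = True
    return noisy
```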
A common distortion in videos acquired by long-range observation systems is image instability in the form of chaotic local displacements of image frames caused by fluctuations in the refractive index due to atmospheric turbulence. At the same time, such videos, which are intended to present moving objects on a stable background, contain tremendous redundancy that can potentially be used for image stabilization and enhancement, provided the stable background is reliably separated from true moving objects. Recently, it was proposed to use this redundancy for resolution enhancement of turbulent video through elastic registration, with sub-pixel accuracy, of segments of video frames that represent stable scenes. This paper presents the results of an investigation, by means of computer simulation, into how the parameters of such a resolution enhancement process affect its performance, and into its potentials and limitations.
A method for the detection of cracks in old paper photographs is presented. Cracks in photographic prints usually
result from a folding of the paper support of the photograph; they often extend across the entire image, along a
preferred orientation. A first clue we exploit for crack detection is the fact that old prints have a characteristic
sepia hue, due to aging and to the type of processing used at the time; a break of the gelatin exposes the white
support paper; likewise, a break of the paper causes a black region in the digitized image. Thus, cracks are
usually achromatic; this fact can be used for their detection on a color space with an explicit hue component.
A series of parallel microcracks running along the direction of a main crack usually results as well; even though
the gelatin may not be broken, the folds corresponding to these microcracks cause a set of image discontinuities,
observable at a high-enough resolution. In an interactive process, the user indicates the ends of the crack on
the frame of the photo and the algorithm detects the crack pixels. In addition to color, the algorithm uses a
multidirectional, multiresolution Gabor approach and mathematical morphology. The resulting method provides
crack detection with good performance, as evidenced by the corresponding Receiver Operating Characteristics
(ROC) graphs.1
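The colour clue above, expressed as a hedged sketch: flag low-saturation (achromatic) pixels, which stand out against the sepia background. The saturation threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np
from skimage.color import rgb2hsv

def achromatic_mask(rgb, sat_thresh=0.15):
    """Candidate crack pixels: low saturation in HSV space (white exposed paper
    or black paper breaks) against the sepia-toned photograph."""
    hsv = rgb2hsv(rgb)                 # accepts uint8 or float RGB images
    return hsv[..., 1] < sat_thresh    # boolean mask of achromatic pixels
```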
We propose an image restoration technique exploiting regularized inversion and the recent block-matching and 3D filtering (BM3D) denoising filter. BM3D employs a non-local modeling of images by collecting similar image patches into 3D arrays. The so-called collaborative filtering applied to such a 3D array is realized by transform-domain shrinkage. In this work, we propose an extension of the BM3D filter for colored noise, which we use in a two-step deblurring algorithm to improve the regularization after inversion in the discrete Fourier domain. The first step of the algorithm is a regularized inversion using BM3D with collaborative hard-thresholding, and the second step is a regularized Wiener inversion using BM3D with collaborative Wiener filtering. The experimental results show that the proposed technique is competitive with, and in most cases outperforms, the current best image restoration methods in terms of improvement in signal-to-noise ratio.
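For orientation, the sketch below shows only a generic regularized Fourier-domain inversion of the kind that precedes the collaborative filtering in each step; the regularization weight is an assumed constant and the BM3D stages themselves are omitted.

```python
import numpy as np

def regularized_fourier_inversion(y, psf, reg=1e-2):
    """Tikhonov-style deconvolution in the DFT domain: y is the blurred
    observation, psf the blur kernel (assumed registered with the image
    origin), reg an assumed regularization weight. The colored noise left by
    this step is what a subsequent denoising stage would then remove."""
    H = np.fft.fft2(psf, s=y.shape)
    Y = np.fft.fft2(y.astype(np.float64))
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))
```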
Automated Explosive Detection Systems (EDS) utilizing Computed Tomography (CT) generate a series of 2-D projections from a series of X-ray scans of luggage under inspection. 3-D volumetric images can also be generated from the collected data set. Extensive data manipulation of the 2-D and 3-D image sets for detecting the presence of explosives is done automatically by the EDS. The results are then forwarded to human screeners for final review. The final determination as to whether the luggage contains an explosive and needs to be searched manually is performed by trained TSA (Transportation Security Administration) screeners following an approved TSA protocol. The TSA protocol has the screeners visually inspect the resulting images and renderings from the EDS to determine whether the luggage is suspicious and consequently should be searched manually. Enhancing those projection images delivers higher quality screening, reduces screening time, and also reduces the amount of luggage that would otherwise need to be searched manually. This paper presents a novel edge detection algorithm that is geared towards, though not exclusive to, automated explosive detection systems. The goal of these enhancements is to provide a higher quality screening process while reducing the overall screening time and luggage search rates. Accurately determining the location of edge pixels within 2-D signals, often the first step in segmentation and recognition systems, indicates the boundaries between overlapping objects in a piece of luggage. Most edge detection algorithms, such as the Canny, Prewitt, Roberts, Sobel, and Laplacian methods, are based on first- and second-derivative/difference operators. These operators detect discontinuities in the differences of pixels. Such approaches are sensitive to the presence of noise and can produce false edges in noisy images. Including large-scale filters may avoid errors generated by noise, but often eliminates the finer edge details as well. This paper proposes a novel pixel-ratio-based edge detection algorithm which is immune to noise. The new method compares ratios of pixels in multiple directions to an adaptive threshold to determine edges in different directions.
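A hedged sketch of the ratio-comparison idea: for each pixel, intensity ratios across four directions are compared to a locally adaptive threshold. The specific threshold rule and the constant k below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def ratio_edge_map(img, k=1.5, eps=1e-6):
    """Mark a pixel as an edge candidate when the intensity ratio across it,
    in any of four directions, exceeds a threshold derived from the local mean."""
    img = img.astype(np.float64) + eps        # avoid division by zero
    edges = np.zeros(img.shape, dtype=bool)
    # neighbour offset pairs: horizontal, vertical, and the two diagonals
    directions = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
                  ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            local_mean = img[y - 1:y + 2, x - 1:x + 2].mean()
            thresh = k * (1.0 + 1.0 / np.sqrt(local_mean))  # assumed adaptive rule
            for (dy1, dx1), (dy2, dx2) in directions:
                a, b = img[y + dy1, x + dx1], img[y + dy2, x + dx2]
                if max(a, b) / min(a, b) > thresh:
                    edges[y, x] = True
                    break
    return edges
```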
In this contribution, we propose an adaptive multiresolution denoising technique operating in the wavelet domain that
selectively enhances object contours, extending a restoration scheme based on edge oriented wavelet representation by
means of adaptive surround inhibition inspired by the human visual system characteristics. The use of the complex edge
oriented wavelet representation is motivated by the fact that it is tuned to the most relevant visual image features. In this
domain, an edge is represented by a complex number whose magnitude is proportional to its "strength" while phase
equals the orientation angle. The complex edge wavelet is the first order dyadic Laguerre Gauss Circular Harmonic
Wavelet, acting as a band-limited gradient operator. The anisotropic sharpening function enhances or attenuates large and small edges to varying degrees, accounting for masking effects induced by a textured background. Sharpening is adapted to the local image content by identifying the local statistics of the natural and artificial textures, such as grass, foliage, and water, that compose the background. In the paper, the whole mathematical model is derived and its performance is validated through simulations on a wide data set.
The need for information extracted from remotely sensed data has increased in recent decades. To address this issue,
research is being conducted to develop a complete multi-stage supervised object recognition system. The first stage of
this system couples genetic programming with standard unsupervised clustering algorithms to search for the optimal
preprocessing function. This manuscript addresses the quantification and the characterization of the uncertainty involved
in the random creation of the first set of candidate solutions from which the algorithm begins. We used a Monte Carlo
type simulation involving 800 independent realizations and then analyzed the distribution of the final results. Two
independent convergence approaches were investigated: [1] convergence based solely on genetic operations (standard)
and [2] convergence based on genetic operations with subsequent insertion of new genetic material (restarting). Results
indicate that the introduction of new genetic material should be incorporated into the preprocessing framework to
enhance convergence and to reduce variability.
The automatic detection of runway hazards from a moving platform under poor visibility conditions is a multifaceted problem. The general approach that we use relies on looking at several frames of the video imagery to determine the presence of objects. Since the platform is in motion during the acquisition of these frames, the first step in the process is to correct for platform motion. Extracting the scene structure from the frames is our next goal. To rectify the imagery, enhance the details, and remove fog, we perform multiscale retinex followed by edge detection. In this paper, we concentrate on the automatic determination of runway boundaries from the rectified, enhanced, and edge-detected imagery. We will examine the performance of edge-detection algorithms for images that have poor contrast, and quantify their efficacy as runway edge detectors. Additionally, we will define qualitative criteria to determine the best edge output image. Finally, we will find an optimizing parameter for the detector that will help us automate the detection of objects on the runway and thus the whole process of hazard detection.
This paper presents some analytic tools for modeling and analyzing the performance of latent
fingerprint matchers. While the tools and techniques presented are not necessarily new, the
manner in which they are employed is believed to be novel. It is shown that, using relatively simple techniques, valuable insights can be gained into what is otherwise a nearly intractable problem.
In addition, three other topics are touched upon:
1) We provide a thumbnail sketch of NIST's ELFT project. The purpose of the ELFT
project is to investigate the steps required to achieve a partial lights-out latent
fingerprint processing capability;
2) Preliminary data obtained from this project are analyzed using the proposed
techniques;
3) An analysis is provided for predicting the performance of a "fully-populated"
system when measurements can realistically only be performed on a small subset of
the full-up repository/background/gallery.
The quality of wood is of increasing importance in the wood industry. One important quality aspect is the average annual ring width and its standard deviation, which are related to wood strength and stiffness. We present a camera-based measurement system for annual ring measurements. The camera system is designed for outdoor use in forest harvesters. Several challenges arise, such as the quality of the cutting process, camera positioning, and light variations. In the freshly cut surface of the log end, the annual rings are somewhat unclear due to small splinters and saw marks. In the harvester, the optical axis of the camera cannot be set orthogonally to the log end, causing non-constant resolution across the image. The amount of natural light in the forest varies from total winter darkness to midsummer brightness. In our approach the image is first geometrically transformed to an orthogonal geometry. The annual ring width is measured with two-dimensional power spectra. The two-dimensional power spectra, combined with the transformation, provide a robust method for estimating the mean and the standard deviation of the annual ring width. With laser lighting, the variability due to natural lighting can be minimized.
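A hedged sketch of the spectral measurement idea: the dominant peak of the 2-D power spectrum of a rectified log-end patch gives the annual-ring frequency, whose inverse is the mean ring width in pixels. The DC suppression radius and the simple peak picking below are illustrative simplifications.

```python
import numpy as np

def estimate_ring_width(patch):
    """Estimate the mean annual-ring period (in pixels) from the dominant peak
    of the 2-D power spectrum of a grayscale patch."""
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2
    cy, cx = power.shape[0] // 2, power.shape[1] // 2
    yy, xx = np.indices(power.shape)
    power[np.hypot(yy - cy, xx - cx) < 2] = 0.0       # suppress the DC neighbourhood
    py, px = np.unravel_index(np.argmax(power), power.shape)
    freq = np.hypot((py - cy) / patch.shape[0],
                    (px - cx) / patch.shape[1])        # cycles per pixel
    return 1.0 / freq if freq > 0 else np.inf          # ring width in pixels
```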
Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operation count and the resulting high demand placed on the memory subsystem. In this study, we present the implementation of a cone-beam reconstruction algorithm on the Cell Broadband Engine (CBE) processor aimed at real-time applications. The cone-beam backprojection performance was assessed by backprojecting a half-circle scan of 512 projections of 1024² pixels into a volume of size 512³ voxels. The projections are acquired on a C-Arm scanner and directed in real time to a CBE-based platform for real-time reconstruction. The acquisition speed typically ranges between 17 and 35 projections per second. On a CBE processor clocked at 3.2 GHz, our implementation performs this task in ~13 seconds, allowing for real-time reconstruction.
Adaptive filtering is a compute-intensive algorithm aimed at effectively reducing noise without blurring the structures
contained in a set of digital images. In this study, we take a generalized approach for adaptive filtering based on seven
oriented filters, each individual filter implemented by a two-dimensional (2D) convolution with a mask size of 11 by 11
pixels. Digital radiology workflow imposes severe real-time constraints that require the use of hardware acceleration
such as that provided by multicore processors. Implementing complex algorithms on heterogeneous multicore architectures is a complex task, especially when taking advantage of the DMA engines. We have implemented the algorithm on a Cell Broadband Engine (CBE) processor clocked at 3.2 GHz using a generic framework for multicore processors. This implementation is capable of filtering images of 512² pixels at a throughput of 40 frames per second while allowing the parameters to be changed in real time. The resulting images are directed to the DR monitor or to the real-time computed
tomography (CT) reconstruction engine.
Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real time due to the presence of speckle, shadow artifacts, partial volume effects, and patient motion during image scans, all of which are inherent in medical ultrasound imaging.
The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, obtained from training data, to delineate the middle slices of the volumetric prostate images. A "DDC" algorithm is then applied to the remaining slices, initialized with the contour obtained. The volume of the prostate is estimated from the segmentation results.
Our database consists of 36 prostate volumes acquired with a Philips ultrasound machine and a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and fully automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6%, respectively, relative to the volume obtained from the human-estimated (ideal) boundary. The overall system, which was developed using Microsoft Visual C++, runs in real time and is accurate.
We describe a set of methods to enable fully automated analysis of a novel label-free spinning-disc format microarray
system. This microarray system operates in a dual-channel mode, simultaneously acquiring fluorescence as well as
interferometric signals. The label-free interferometric component enables the design of robust gridding methods, which
account for rotational effects difficult to estimate in traditional microarray image analysis. Printing of microarray
features in a Cartesian grid is preferable for commercial systems because of the benefits of using existing DNA/protein
printing technologies. The spinning disc operation of the microarray requires spatial transformation of Cartesian
microarray features from the reader/scanner frame of reference to the disc frame of reference. For this purpose, we describe a fast spatial transformation method with no measurable degradation in the quality of the transformed data. The gridding
method uses frequency-domain information to calculate grid spacing and grid rotation. An adaptive morphological
segmentation method is used to segment microarray spots with variable sizes accurately. The entire process, from the
generation of the raw data to the extraction of biologically relevant information, can be performed without any manual
intervention, allowing for the deployment of high-throughput systems. These image analysis methods have enabled this
microarray system to achieve superior sensitivity limits.
In this paper, we present an image understanding algorithm for automatically identifying and ranking different image
regions into several levels of importance. Given a color image, specialized maps for classifying image content, namely weighted similarity, weighted homogeneity, image contrast, and memory colors, are generated and combined to provide a
metric for perceptual importance classification. Further analysis yields a region ranking map which sorts the image
content into different levels of significance. The algorithm was tested on a large database of color images that consists of
the Berkeley segmentation dataset as well as many other internal images. Experimental results show that our technique
matches human manual ranking with 90% efficiency. Applications of the proposed algorithm include image rendering,
classification, indexing and retrieval.
We present a local area-based, discontinuity-preserving stereo matching algorithm that achieves high quality
results near depth discontinuities as well as in homogeneous regions. To address the well-known challenge of defining appropriate support windows for local stereo methods, we use the anisotropic Local Polynomial Approximation (LPA) - Intersection of Confidence Intervals (ICI) technique. It can adaptively select a near-optimal anisotropic local neighborhood for each pixel in the image. Leveraging this robust pixel-wise shape-adaptive
support window, the proposed stereo method performs a novel matching cost aggregation step and an
effective disparity refinement scheme entirely within a local high-confidence voting framework. Evaluation using
the benchmark Middlebury stereo database shows that our method outperforms other local stereo methods, and
it is even better than some algorithms using advanced but computationally complicated global optimization
techniques.
Accurate and robust image change detection and motion segmentation have been of substantial interest in the image processing and computer vision communities. To date, no single motion detection algorithm has been universally superior, while biological vision systems remain remarkably adept at the task. In this paper, we analyze image sequences using phase plots generated from sequential image frames and demonstrate that the changes in pixel amplitudes due to the motion of objects in an image sequence result in phase-space behaviour resembling a chaotic signal. Recent research on neural signals has shown that biological neural systems are highly responsive to chaos-like signals resulting from aperiodic forcing functions caused by external stimuli. We then hypothesize an alternative, physics-based motion algorithm that departs from the traditional optical flow algorithm. Rather than modeling the motion of objects in an image as a flow of grayscale values, as in optical flow, we propose to model moving objects in an image scene as aperiodic forcing functions impacting the imaging sensor, be it biological or silicon-based. We explore the applicability of some popular measures for detecting chaotic phenomena in the frame-wise phase plots generated from sequential image pairs and demonstrate their effectiveness at detecting motion while robustly ignoring illumination change.
Regardless of the final targeted application (compression, watermarking, texture analysis, indexation, ...), image/video modelling in the DCT domain is generally approached by tests of concordance with some well-known pdfs (such as Gaussian, generalised Gaussian, Laplace, Rayleigh, ...). Instead of forcing the images/videos to stick to such theoretical models, our study aims at estimating the true pdf characterising their behaviour. In this respect, we considered three intensively used ways of applying the DCT, namely on whole frames, on 4x4 blocks, and on 8x8 blocks. In each case, we first prove that a law modelling the corresponding coefficients exists. Then, we estimate this law by Gaussian mixtures, and finally we assess the generality of such a model with respect to the data on which it was computed and to the estimation method it relies on.
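As a hedged illustration of the estimation step (block size, coefficient index, and number of mixture components below are assumptions, not the paper's choices), one DCT coefficient can be collected over all 8x8 blocks and its empirical law fitted with a Gaussian mixture:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.mixture import GaussianMixture

def fit_block_dct_gmm(img, block=8, coeff=(0, 1), components=3):
    """Collect one chosen DCT coefficient from every block x block tile of the
    image and fit a Gaussian mixture to its empirical distribution."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    samples = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(img[y:y + block, x:x + block].astype(np.float64), norm='ortho')
            samples.append(c[coeff])
    gmm = GaussianMixture(n_components=components)
    gmm.fit(np.asarray(samples).reshape(-1, 1))
    return gmm   # gmm.weights_, gmm.means_, gmm.covariances_ describe the estimated pdf
```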
Gradient operators are commonly used in edge detection. Usually, proper smoothing processing is performed on the
original image when a gradient operator is applied. Generally, the smoothing processing is embedded in the gradient
operator, such that each component of the gradient operator can be decomposed into some smoothing processing and a
discrete derivative operator, which is defined as the difference of two adjacent values or the difference between the two
values on the two sides of the position under check. When the image is smoothed, the edges of the main objects are also smoothed, so that the differences of adjacent pixels across edges are reduced. In this paper, we define the derivative of f at a point x as f'(x) = g(x+Δx) - g(x-Δx), where g is the result of smoothing f with a smoothing filter, and Δx is an increment of x, properly selected to work with the filter. When Δx = 2, sixteen gradient directions can be obtained, providing a finer directional measurement than usual gradient operators.
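A minimal sketch of this smoothed-difference derivative, assuming a Gaussian pre-smoothing and periodic boundary handling (both illustrative choices, not the paper's): with Δx = 2, the 16 offsets on the boundary of a 5x5 neighbourhood give the 16 directional responses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_differences(f, sigma=1.0):
    """Compute f'(x) = g(x + d) - g(x - d) for the 16 offsets d on the boundary
    of a 5x5 neighbourhood (Delta x = 2), where g is a smoothed copy of f."""
    g = gaussian_filter(f.astype(np.float64), sigma)
    # the 16 boundary offsets of a 5x5 window, ordered by angle
    ring = [(0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (2, -1), (2, -2), (1, -2),
            (0, -2), (-1, -2), (-2, -2), (-2, -1), (-2, 0), (-2, 1), (-2, 2), (-1, 2)]
    responses = []
    for dy, dx in ring:
        forward = np.roll(g, shift=(-dy, -dx), axis=(0, 1))   # g(x + d)
        backward = np.roll(g, shift=(dy, dx), axis=(0, 1))    # g(x - d)
        responses.append(forward - backward)
    return np.stack(responses)   # (16, H, W); argmax over axis 0 gives the direction
```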
Natural compound eyes combine a small eye volume with a large field of view (FOV) at the cost of comparatively low spatial resolution. Based on these principles, an artificial apposition compound-eye imaging system has been developed. In this system the total FOV is given by the number of channels along one axis multiplied by the sampling angle between channels. In order to increase the image resolution for a fixed FOV, the sampling angle is made small. However, depending on the size of the acceptance angle, the FOVs of adjacent channels overlap, which causes a reduction of contrast in the overall image. In this work we study the feasibility of using digital post-processing methods for images obtained with a thin compound-eye camera to overcome this reduction in contrast. We chose the Wiener filter for the post-processing and carried out simulations and experimental measurements to verify its use.
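A generic frequency-domain Wiener deconvolution of the kind evaluated above, with the PSF standing in for the contrast loss from overlapping channel FOVs; the constant noise-to-signal ratio below is an assumption.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Restore an image degraded by a known PSF with a Wiener filter.
    psf is assumed registered with the image origin; nsr approximates the
    noise-to-signal power ratio (assumed constant here)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred.astype(np.float64))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
    return np.real(np.fft.ifft2(W * G))
```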
The explosion of VoD and HDTV services has opened a new direction in watermarking applications: compressed-domain watermarking, promising at least a tenfold speed increase. While sound technical approaches to this emerging field are already available in the literature, to the best of our knowledge the present paper is the first related theoretical study. It considers the ISO/IEC 14496-10:2005 standard (also known as MPEG-4 AVC) and objectively describes, with information theory concepts (noisy channel, noise matrices), the effects of real-life watermarking attacks (such as rotations, linear and non-linear filtering, and StirMark). All the results are obtained on a heterogeneous corpus of 7 video sequences totalling about 3 hours.
Active site prediction, well-known for drug design and medical diagnosis, is a major step in the study and prediction
of interactions between proteins. The specialized literature provides studies of common physicochemical
and geometric properties shared by active sites. Among these properties, this paper focuses on the travel depth, which plays a major part in binding with other molecules. The travel depth of a point on the protein solvent
excluded surface (SES) can be defined as the shortest path accessible for a solvent molecule between this point
and the protein convex hull.
Existing algorithms providing an estimation of this depth are based on the sampling of a bounding box volume
surrounding the studied protein. These techniques make use of huge amounts of memory and processing time
and result in estimations with precisions that strongly depend on the chosen sampling rate. The contribution of
this paper is a surface-based algorithm that only takes samples of the protein SES into account instead of the
whole volume. We show that this technique allows a more accurate prediction and is at least 50 times faster.
A validation of this method is also proposed through experiments with a statistical classifier taking as inputs
the travel depth and other physicochemical and geometric measures for active site prediction.
Using our recently modified curve-fitting-topological-coding (CFTC) computer program, we can
automatically obtain a precise topological code to represent the topological property of a closely
reconstructed boundary of a selected object in an edge-detected picture. This topological property
is perhaps the most important property to be used for object identification. It is very accurate, yet
very robust, because the topological property is independent of geometrical location, shape, size,
orientation, and viewing angles. It is very accurate if two different objects to be differentiated or to
be identified have different boundary topologies. Patch noise and obscuring noise can also be
automatically eliminated as shown in some live experiments.
The iris is currently believed to be the most accurate biometric for human identification. The majority of fielded iris
identification systems are based on the highly accurate wavelet-based Daugman algorithm. Another promising
recognition algorithm by Ives et al. uses Directional Energy features to create the iris template. Both algorithms use
Hamming distance to compare a new template to a stored database. Hamming distance is an extremely fast computation,
but weights all regions of the iris equally. Work from multiple authors has shown that different regions of the iris contain
varying levels of discriminatory information. This research evaluates four post-processing similarity metrics for their accuracy impact on the Directional Energy and wavelet-based algorithms. Each metric builds on the Hamming distance
method in an attempt to use the template information in a more salient manner. A similarity metric extracted from the
output stage of a feed-forward multi-layer perceptron artificial neural network demonstrated the most promise. Accuracy
tables and ROC curves of tests performed on the publicly available Chinese Academy of Sciences Institute of
Automation database show that the neural network based distance achieves greater accuracy than Hamming distance at
every operating point, while adding less than one percent computational overhead.
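The baseline comparison both algorithms share, shown as a hedged sketch (the template length and the mask handling are assumptions, not values from the paper):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris templates, counting
    only bits valid in both occlusion masks. Every region is weighted equally,
    which is exactly the limitation the post-processing metrics above address."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(int(valid.sum()), 1)

# usage with assumed 2048-bit templates
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8)
b = rng.integers(0, 2, 2048, dtype=np.uint8)
m = np.ones(2048, dtype=np.uint8)
print(hamming_distance(a, b, m, m))   # ~0.5 for unrelated random codes
```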
This paper describes a new nonlinear joint fusion and anomaly detection technique for mine detection applications using two different types of sensor data: synthetic aperture radar (SAR) and hyperspectral sensor (HS) data. A well-known anomaly detector, the so-called RX algorithm, is first extended to perform fusion and detection simultaneously at the pixel level by appropriately concatenating the information from the two sensors. This approach is then extended to its nonlinear version. The nonlinear fusion-detection approach is based on statistical kernel learning theory, which explicitly exploits the higher-order dependencies (nonlinear relationships) between the two sensor data sets through an appropriate kernel. Experimental results for detecting anomalies (mines) in hyperspectral imagery are presented for linear and nonlinear joint fusion and detection on co-registered SAR and HS imagery. The results show that the nonlinear techniques outperform the linear versions.
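A hedged sketch of the linear, pixel-level variant: the classic RX score is the Mahalanobis distance of each pixel vector from the background statistics, and fusion amounts to concatenating the SAR and HS bands before scoring. The covariance regularization and the use of global (rather than local) statistics are assumptions.

```python
import numpy as np

def rx_detector(cube):
    """cube: (H, W, D) array of per-pixel feature vectors.  Returns an (H, W)
    anomaly score map (Mahalanobis distance from the global background)."""
    h, w, d = cube.shape
    X = cube.reshape(-1, d).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(d)   # regularise for stability
    cov_inv = np.linalg.inv(cov)
    diff = X - mu
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(h, w)

# pixel-level fusion (assumed usage): stack the co-registered bands, then score
# fused = np.concatenate([hs_cube, sar_bands], axis=-1); scores = rx_detector(fused)
```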
This paper presents a new image-interpolation approach in which one can adjust edge sharpness and texture intensity according to one's taste. The approach is composed of three stages. At the first stage, with the BV-G image-decomposition variational model, an image is decomposed into two components, so that the separated structural component corresponds to a cartoon approximation of the image and the separated texture component collects almost all oscillatory variations representing textures; the texture component can be amplified or attenuated according to the user's taste. At the second stage, each separated component is interpolated with an interpolation method
suitable to it. Since the structural component keeps sharp edges, its proper interpolation method is a TV-regularization
super-resolution interpolation method that can restore frequency components higher than the Nyquist frequency and
remove sample-hold blurs without producing ringing artifacts near edges. The texture component is an oscillatory
function, and its proper interpolation method is a smoothness-regularization super-resolution interpolation method that
can restore continuous variations and remove the blurs. At the final stage, the two interpolated components are
combined. The approach enlarges images without blurring edges or destroying textures, and removes blurs caused by the sample-hold and/or the optical low-pass filter without producing ringing artifacts.
The majority of image filtering techniques are designed under the assumption that the noise is of a special, a priori known type and that it is i.i.d., i.e. spatially uncorrelated. However, in many practical situations the latter assumption does not hold, for several reasons. Moreover, the spatial correlation properties of the noise might be rather different and a priori unknown. Assuming i.i.d. noise under real conditions of spatially correlated noise then commonly leads to a considerable decrease in filter effectiveness compared to the case where this spatial correlation is taken into account. Our paper deals with two basic aspects. The first is how to modify a denoising algorithm, in particular a discrete cosine transform (DCT) based filter, in order to incorporate a priori or preliminarily obtained knowledge of the spatial correlation characteristics of the noise. The second aspect is how to estimate the spatial correlation characteristics of the noise for a given image with appropriate accuracy and robustness, under the condition that there is some a priori information about, at least, the noise type and statistics such as variance (for the additive noise case) or relative variance (for multiplicative noise). We also present simulation results showing the benefit of taking noise correlation properties into consideration.
Face recognition continues to meet significant challenges in reaching accurate results and still remains one of the activities where humans outperform technology. An attractive approach to improving face identification is provided by the fusion of multiple imaging sources such as visible and infrared images. Hyperspectral data, i.e. images collected over hundreds of narrow contiguous light spectrum intervals, constitute a natural choice for expanding face recognition image fusion, especially since they may provide information beyond the normal visible range, thus exceeding normal human sensing. In this paper we investigate the efficiency of hyperspectral face recognition through an in-house experiment that collected data in over 120 bands within the visible and near-infrared range. The imagery was produced using an off-the-shelf sensor both indoors and outdoors, with the subjects photographed from various angles. Further processing included spectra collection and feature extraction. Human matching performance based on spectral properties is discussed.
Compressed video is the digital raw material provided by video-surveillance systems and used for archiving and indexing purposes. Multimedia standards therefore have a direct impact on such systems. If MPEG-2 used to be the coding standard, MPEG-4 (part 2) has now replaced it in most installations, and MPEG-4 AVC/H.264 solutions are now being released. Finely analysing the complex and rich MPEG-4 streams is the challenging issue addressed in this paper. The system we designed is based on five modules: low-resolution decoder, motion estimation generator, object motion filtering, low-resolution object segmentation, and cooperative decision. Our contributions are the statistical analysis of the spatial distribution of the motion vectors, the computation of DCT-based confidence maps, the automatic motion activity detection in the compressed file, and a rough indexation by dedicated descriptors. The robustness and accuracy of the system are evaluated on a large corpus (hundreds of hours of indoor and outdoor videos with pedestrians and vehicles). The objective benchmarking of the performance is carried out with respect to five metrics that allow the error contribution of each module to be estimated for different implementations. This evaluation establishes that our system analyses up to 200 frames (720x288) per second (2.66 GHz CPU).
DSA images suffer from challenges like system X-ray noise and artifacts due to patient movement. In this paper, we present a two-step strategy to improve DSA image quality. First, a hierarchical deformable registration algorithm is used to register the mask frame and the bolus frame before subtraction. Second, the resulting DSA image is further enhanced by background diffusion and nonlinear normalization for better visualization. Two major changes are made in the hierarchical deformable registration algorithm for DSA images: 1) a B-spline is used to represent the deformation field in order to produce a smooth deformation field; 2) two features are defined as the attribute vector for each point in the image, i.e., original image intensity and gradient. Also, to speed up the 2D image registration, the hierarchical motion compensation algorithm is implemented in a multi-resolution framework. The proposed method has been evaluated on a database of 73 subjects by quantitatively measuring the signal-to-noise ratio (SNR). DSA embedded with the proposed strategies demonstrates an improvement of 74.1% over conventional DSA in terms of SNR. Our system runs on Eigen's DSA workstation, implemented in C++ in a Windows environment.
In this paper we propose several improvements to the original non-local means algorithm introduced by Buades
et al., which obtains state-of-the-art denoising results. The strength of this algorithm is that it exploits the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm
has a high computational cost.
An improvement in image quality over the original algorithm is obtained by ignoring the contributions from dissimilar windows. Even though their individual weights are very small, the estimated pixel value can be severely biased by the many small contributions. This adverse influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only
contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar,
we will derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is
further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation
time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm,
our proposed method produces images with increased PSNR and better visual performance in less computation
time.
Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality
and PSNR values for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing
tasks which employ the concept of repetitive structures such as intra-frame super-resolution or detection of digital
image forgery.
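A hedged sketch of non-local means with the moment-based preclassification idea: candidate windows whose patch mean differs too much from the reference patch get zero weight and are skipped. The patch/search sizes, the filtering parameter h, and the mean-difference tolerance below are illustrative stand-ins for the thresholds derived in the paper.

```python
import numpy as np

def nlm_denoise_preclassified(img, patch=3, search=7, h=10.0, mean_tol=15.0):
    """Non-local means with a simple first-moment preclassification of windows."""
    img = img.astype(np.float64)
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            yc, xc = y + p + s, x + p + s
            ref = padded[yc - p:yc + p + 1, xc - p:xc + p + 1]
            ref_mean = ref.mean()
            acc, wsum = 0.0, 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    cand = padded[yc + dy - p:yc + dy + p + 1,
                                  xc + dx - p:xc + dx + p + 1]
                    if abs(cand.mean() - ref_mean) > mean_tol:
                        continue              # dissimilar window: weight forced to zero
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    acc += w * padded[yc + dy, xc + dx]
                    wsum += w
            out[y, x] = acc / wsum if wsum > 0 else img[y, x]
    return out
```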
Optical flow algorithms for estimating image local motion in video sequences are based on the first-order Taylor series approximation of image variations caused by motion, which requires computing image spatial derivatives. In this paper we report an analytical assessment of the lower bounds of optical flow estimation errors defined by the accuracy of the Taylor series approximation of image variations, together with results of an experimental comparison of the performance of known optical flow methods in which image differentiation was implemented through different commonly used numerical differentiation methods and through DFT/DCT-based algorithms for precise differentiation of sampled data. The comparison tests were carried out using simulated sequences as well as real-life image sequences commonly used for comparing optical flow methods. The simulated sequences were generated using, as test images, pseudo-random images with a uniform spectrum within a certain fraction of the image base band specified by the image sampling rate, the fraction being a parameter specifying the frequency content of the test images. The experiments have shown that the performance of the optical flow methods can be significantly improved, compared to the commonly used numerical differentiation methods, by using the DFT/DCT-based differentiation algorithms, especially for images with substantial high-frequency content.
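A minimal sketch of DFT-based (spectral) differentiation of uniformly sampled data, the kind of precise differentiator compared above; applying it along image rows and columns yields the spatial derivatives an optical-flow estimator needs. Unit sample spacing and periodic boundary handling are assumptions.

```python
import numpy as np

def dft_derivative(f):
    """Spectral differentiation of a uniformly sampled 1-D signal:
    multiply the DFT by j*omega and transform back."""
    n = f.shape[-1]
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0)   # angular frequencies
    return np.real(np.fft.ifft(1j * omega * np.fft.fft(f)))

# example: the derivative of sin(x) sampled on [0, 2*pi) approximates cos(x)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
print(np.allclose(dft_derivative(np.sin(x)), np.cos(x), atol=1e-10))
```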
A super-resolution (SR) video processing method with reduced block artifacts using a conventional block-matching algorithm (BMA) is proposed. To obtain high-quality SR results, accurate motion vectors are necessary in the registration process. For real applications, block-based motion estimators are widely used, which introduce block-based motion errors if their motion vectors are employed for the registration. Incorrectly registered pixels due to these block-based motion errors limit the image quality improvement of the SR processing and can even degrade the results by causing block artifacts. To reduce the artifacts from the inaccurately registered pixels, a weighting function using a three-dimensional confidence measure is proposed in this paper. The measure uses spatial and inter-channel analysis to suppress the weight of incorrectly registered pixels during the SR process. Motion-compensated pixel differences and motion vector variances between previous and current frames are utilized for the spatial analysis, and the motion vector variance under a constant-acceleration model and pixel difference variances through the LR frames are used for the inter-channel analysis. Experimental results show significantly improved quality in error regions while keeping the enhanced quality of accurately registered pixels when motion vectors are found by conventional BMAs.
This paper presents an efficient solution for digital image sharpening, Adaptive Directional Sharpening with Overshoot Control (ADSOC), a method based on a high-pass filter that applies stronger sharpening in the detailed zones of the image while preserving homogeneous regions. The basic objective of this approach is to reduce undesired effects: sharpening applied along strong edges or within uniform regions can produce unpleasant ringing artifacts and noise amplification, which are the most common drawbacks of sharpening algorithms. ADSOC lets the user choose the ringing intensity and does not amplify the luminance of isolated noisy pixels. Moreover, ADSOC operates orthogonally to the direction of the edges in the blurred image, yielding a more effective contrast enhancement. The experiments showed good algorithm performance in terms of both visual quality and computational complexity.
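The ADSOC filter itself is not reproduced here, but the following Python/NumPy sketch illustrates the general principle of overshoot control in sharpening: the sharpened value is clamped to the local minimum/maximum of the input plus a user-chosen overshoot margin, which bounds ringing along strong edges (the function name and all parameter values are illustrative assumptions).

import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def sharpen_with_overshoot_control(img, amount=1.0, size=3, overshoot=8):
    # Generic unsharp masking with a clamp on over/undershoot
    # (overshoot is in intensity units, assuming an 8-bit scale).
    img = img.astype(float)
    blurred = uniform_filter(img, size=size)
    sharpened = img + amount * (img - blurred)
    lo = minimum_filter(img, size=size) - overshoot
    hi = maximum_filter(img, size=size) + overshoot
    # Limiting the result to the local range + margin bounds ringing.
    return np.clip(sharpened, lo, hi)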
Due to the fast migration to high-resolution displays in home and office environments, there is a strong demand for high-quality picture scaling. This is caused on the one hand by large picture sizes and on the other hand by the enhanced visibility of picture artifacts on these displays [1]. There are many proposals for enhanced spatial interpolation adaptively matched to picture content such as edges. The drawback of these approaches is the normally integer and often limited interpolation factor. In order to achieve rational factors, there exist combinations of adaptive and non-adaptive linear filters, but due to the non-adaptive step the overall quality is notably limited. We present in this paper a content-adaptive polyphase interpolation method which uses "offline" trained filter coefficients and "online" linear filtering depending on a simple classification of the input situation. Furthermore, we present a new approach to a content-adaptive interpolation polynomial, which allows arbitrary polyphase interpolation factors at runtime and further improves the overall interpolation quality. The main goal of our new approach is to optimize interpolation quality by adapting higher-order polynomials directly to the image content. In addition, we derive filter constraints for enhanced picture quality. Furthermore, we extend the classification-based filtering to the temporal dimension in order to use it for intermediate image interpolation.
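A minimal sketch of the "offline training / online filtering" split is given below, assuming a least-squares fit of one FIR filter per content class (the helper name train_class_filters and the data layout are hypothetical; the paper's classification scheme and polynomial extension are not reproduced).

import numpy as np

def train_class_filters(lr_patches, hr_targets, class_ids):
    # Offline least-squares training of one interpolation filter per class.
    #   lr_patches : (N, k) flattened LR neighbourhoods
    #   hr_targets : (N,)   HR pixel each neighbourhood should predict
    #   class_ids  : (N,)   integer class label per sample
    # Returns a dict class_id -> (k,) filter coefficients; at runtime the
    # "online" step classifies the local neighbourhood and applies the
    # matching coefficients as an ordinary linear (polyphase) filter.
    filters = {}
    for c in np.unique(class_ids):
        idx = class_ids == c
        coeffs, *_ = np.linalg.lstsq(lr_patches[idx], hr_targets[idx], rcond=None)
        filters[int(c)] = coeffs
    return filters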
Image scrambling is used to make images visually unrecognizable so that unauthorized users have difficulty decoding the scrambled image to access the original. This article presents two new image scrambling algorithms based on the Fibonacci p-code, a parametric sequence. The first algorithm works in the spatial domain and the second in the frequency domain (including the JPEG domain). The parameter p serves as a security key and has many possible choices, guaranteeing the high security of the scrambled images. The presented algorithms can be implemented for encoding/decoding in both full and partial image scrambling, and can be used in real-time applications such as image data hiding and encryption. Examples of image scrambling are provided. Computer simulations demonstrate that the presented methods also perform well under common image attacks such as cutting (data loss), compression, and noise. The new scrambling methods can be applied to grey-level images and to the three color components of color images. A new Lucas p-code is also introduced. The scrambled images based on the Fibonacci p-code are compared to the scrambling results of the classic Fibonacci numbers and the Lucas p-code, demonstrating that the classic Fibonacci numbers are a special case of the Fibonacci p-code and showing the different scrambling results of the Fibonacci p-code and the Lucas p-code.
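For reference, one common way to define the Fibonacci p-sequence underlying the p-code is F(n) = F(n-1) + F(n-p-1) with F(0) = ... = F(p) = 1; the short Python sketch below (conventions vary, so this is an assumption rather than the paper's exact definition) shows that p = 1 reproduces the classic Fibonacci numbers.

def fibonacci_p(n_terms, p=1):
    # Fibonacci p-sequence: each term is the sum of the previous term and
    # the term p+1 positions back; the first p+1 terms are 1.
    seq = [1] * (p + 1)
    while len(seq) < n_terms:
        seq.append(seq[-1] + seq[-(p + 1)])
    return seq[:n_terms]

print(fibonacci_p(10, p=1))  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55 (classic)
print(fibonacci_p(10, p=2))  # 1, 1, 1, 2, 3, 4, 6, 9, 13, 19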
Prostate cancer is the most commonly diagnosed cancer in males in the United States and the second leading
cause of cancer death. While the exact cause is still under investigation, researchers agree on certain risk factors
like age, family history, dietary habits, lifestyle and race. It is also widely accepted that cancer distribution
within the prostate is inhomogeneous, i.e. certain regions have a higher likelihood of developing cancer. In
this regard extensive work has been done to study the distribution of cancer in order to perform biopsy more
effectively. Recently a statistical cancer atlas of the prostate was demonstrated along with an optimal biopsy
scheme achieving a high detection rate.
In this paper we discuss the complete construction and application of such an atlas that can be used in a
clinical setting to effectively target high cancer zones during biopsy. The method consists of integrating intensity
statistics in the form of cancer probabilities at every voxel in the image with shape statistics of the prostate in
order to quickly warp the atlas onto a subject ultrasound image. While the atlas surface can be registered to a
pre-segmented subject prostate surface, or instead used to segment the capsule by optimizing the shape parameters over the subject image, the strength of our approach lies in the fast mapping of cancer statistics onto the subject using shape statistics. The shape model was trained from over 38 expert-segmented prostate surfaces, and the atlas registration accuracy was found to be high, suggesting that, with some optimization, the method could be used to guide biopsy in near real time.
Enhancing an image while maintaining its edges is a difficult problem. Many current methods for image enhancement either smooth edges on a small scale while improving contrast on a global scale, or enhance edges on a large scale while amplifying noise on a small scale. One method that has been proposed for overcoming this is anisotropic diffusion, which views each image pixel as an energy sink that interacts with the surrounding pixels based upon the differences in pixel intensities and conductance values calculated from local edge estimates. In this paper, we
propose a novel image enhancement method which makes use of these smoothed images produced by diffusion methods.
The basic steps of this algorithm are: a) decompose an image into a smoothed image and a difference image, for
example by using anisotropic diffusion or as in Lee's Algorithm [14]; b) apply two image enhancement algorithms, such
as alpha rooting [7] or logarithmic transform shifting [15]; c) fuse these images together, for example by weighting the
two enhanced images and summing them for the final image. Computer simulations comparing the results of the
proposed method and current state-of-the-art enhancement methods will be presented. These simulations show the superior performance of the proposed method over current methods, on the basis of both subjective evaluation and objective measures.
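A compressed sketch of the decompose/enhance/fuse pipeline in Python/NumPy is given below; Gaussian smoothing stands in for anisotropic diffusion, alpha rooting follows the usual DFT-magnitude definition, and the detail boost and fusion weights are illustrative assumptions rather than the paper's tuned values.

import numpy as np
from scipy.ndimage import gaussian_filter

def alpha_rooting(img, alpha=0.9):
    # Alpha rooting in the DFT domain: keep the phase, raise the magnitude
    # to the power alpha (< 1 relatively boosts high frequencies).  The
    # output usually needs rescaling to the display range afterwards.
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.abs(F) ** alpha * np.exp(1j * np.angle(F))))

def decompose_enhance_fuse(img, w=0.6, sigma=2.0, alpha=0.9):
    # a) decompose into a smoothed image and a difference image
    #    (Gaussian smoothing used here in place of anisotropic diffusion)
    smooth = gaussian_filter(img.astype(float), sigma)
    detail = img - smooth
    # b) enhance each component (alpha rooting / simple detail amplification)
    enhanced = alpha_rooting(smooth, alpha) + 2.0 * detail
    # c) fuse by weighting the enhanced image with the original
    return w * enhanced + (1.0 - w) * img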
The Landsat-7 Enhanced Thematic Mapper Plus (ETM+) is the sensor payload on the Landsat-7 satellite (launched on April 15, 1999) and a derivative of the Landsat-4 and -5 Thematic Mapper (TM) land imaging sensors. A Scan Line Corrector (SLC) malfunction appeared onboard on May 31, 2003; the SLC-Off problem was caused by failure of the SLC, which compensates for the forward motion of the satellite [1]. As ETM+ is still capable of acquiring images in SLC-Off mode, applying new techniques and using other data sources to reconstruct the missing data is a challenge for scientists and end users of remotely sensed images. One of the predicted future roles of the Advanced Land Imager (ALI) onboard Earth Observing-1 (EO-1) is to offer a potential technological direction for Landsat data continuity missions [2]. Accordingly, beyond the primary purpose of filling the gapped areas in the ETM+ imagery, evaluating the capability of ALI imagery is another notable aim of this work. Several techniques and algorithms for gap filling exist in the literature, for instance local linear histogram matching [3], ordinary kriging, and standardized ordinary cokriging [4]. Here we used Regression Based Data Combination (RBDC), in which it is generally assumed that two data sets (i.e. Landsat/ETM+ and EO-1/ALI) in the same spectral ranges (for instance ETM+ band 3 and ALI band 4 in 0.63-0.69 μm) have a meaningful and usable statistical relationship. Using this relationship, the gap areas in ETM+ can be filled with EO-1/ALI data; the process is thus based on knowledge of the statistical structure of the images, which is used to reconstruct the gapped areas. This paper presents and compares four regression-based techniques. First, two baseline methods with no refinement of the statistical parameters, Scene Based (SB) and Cluster Based (CB), were applied, followed by two statistically refined algorithms, the Buffer Based (BB) and Weighted Buffer Based (WBB) techniques. All techniques are executed and evaluated over a study area in Sulawesi, Indonesia. The results indicate that the WBB and CB approaches outperform the SB and BB methods.
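A scene-based (SB-style) regression fill can be sketched in a few lines of Python/NumPy; the function name and the single global least-squares line are simplifying assumptions, and the buffer- and cluster-based variants above differ mainly in where and how the regression statistics are computed.

import numpy as np

def regression_gap_fill(etm_band, ali_band, gap_mask):
    # etm_band : ETM+ band with SLC-Off gaps
    # ali_band : co-registered ALI band in a comparable spectral range
    # gap_mask : boolean array, True where ETM+ data are missing
    # Fit etm = a * ali + b on the valid pixels, then predict the gaps.
    valid = ~gap_mask
    a, b = np.polyfit(ali_band[valid].ravel(), etm_band[valid].ravel(), 1)
    filled = etm_band.astype(float).copy()
    filled[gap_mask] = a * ali_band[gap_mask] + b
    return filled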
Prostate repeat biopsy has become one of the key requirements in today's prostate cancer detection. Urologists are
interested in knowing previous 3-D biopsy locations during the current visit of the patient. Eigen has developed a system
for performing 3-D Ultrasound image guided prostate biopsy. The repeat biopsy tool consists of three stages: (1)
segmentation of the prostate capsules from previous and current ultrasound volumes; (2) registration of segmented
surfaces using an adaptive focus deformable model; (3) mapping of old biopsy sites onto the new volume via thin-plate splines
(TPS). The system critically depends on accurate 3-D segmentation of capsule volumes. In this paper, we study the
effect of the automated segmentation technique on the accuracy of 3-D ultrasound guided repeat biopsy. Our database consists of 38 prostate volumes from different patients, acquired using a Philips side-fire transrectal ultrasound (TRUS) probe. The prostate volumes were segmented in three ways: expert segmentation, semi-automated segmentation, and fully automated segmentation. New biopsy sites were identified in the new volumes from the different segmentation methods, and we compared the mean squared distance between biopsy sites. It is demonstrated that the performance of our fully automated segmentation tool is comparable to that of the semi-automated segmentation method.
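Step (3), mapping old biopsy sites through a thin-plate-spline warp, can be sketched with SciPy's RBFInterpolator using a thin-plate-spline kernel; the function name and the use of corresponding surface points as TPS landmarks are assumptions for illustration, not Eigen's implementation.

import numpy as np
from scipy.interpolate import RBFInterpolator

def map_biopsy_sites(old_surface_pts, new_surface_pts, old_biopsy_sites):
    # old_surface_pts, new_surface_pts : (N, 3) corresponding capsule surface
    #     points from the registered prior and current segmentations
    # old_biopsy_sites : (M, 3) biopsy coordinates recorded in the prior scan
    # A TPS warp fitted on the surface correspondences is evaluated at the
    # old biopsy sites to place them in the current volume.
    tps = RBFInterpolator(old_surface_pts, new_surface_pts,
                          kernel='thin_plate_spline')
    return tps(old_biopsy_sites)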
We consider the problem of improving contour detection by filling gaps between collinear contour pieces. A fast algorithm is proposed which takes into account local edge orientation and local curvature. Each edge point is replaced by a curved elongated patch whose orientation and curvature match the local edge orientation and curvature. The proposed contour completion algorithm is integrated into a multiresolution framework for contour detection. Experimental results show the superiority of the proposed method over other well-established approaches.
In this paper a novel method for watermarking and ciphering color images is presented. The aim of the system is
to allow the watermarking of encrypted data without requiring the knowledge of the original data. By using this
method, it is also possible to cipher watermarked data without damaging the embedded signal. Furthermore, the
extraction of the hidden information can be performed without deciphering the cover data and it is also possible
to decipher watermarked data without removing the watermark. The transform domain adopted in this work is the Fibonacci-Haar wavelet transform. The experimental results show the effectiveness of the proposed scheme.
The deblurring of images corrupted by radial blur is studied. This type of blur appears in images acquired during any camera translation having a substantial component orthogonal to the image plane. The point spread functions (PSF) describing this blur are spatially varying. However, this blurring process does not mix together pixels lying on different radial lines, i.e. lines stemming from a unique point in the image, the so-called "blur center". Thus, in suitable polar coordinates, the blurring process is essentially a 1-D linear operator, described by the multiplication with the blurring matrix.
We consider images corrupted simultaneously by radial blur and noise. The proposed deblurring algorithm is based on two separate forms of regularization of the blur inverse. First, in the polar domain, we invert the blurring matrix using Tikhonov regularization. We then derive a particular modeling of the noise spectrum after both the regularized inversion and the forward and backward coordinate transformations. Thanks to this model, we successfully use a denoising algorithm in the Cartesian domain. We use a non-linear spatially adaptive filter, the Pointwise Shape-Adaptive DCT, in order to exploit the image structures and attenuate noise and artifacts.
Experimental results demonstrate that the proposed algorithm can effectively restore radially blurred images corrupted by additive white Gaussian noise.
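The Tikhonov-regularized inversion of the 1-D blurring matrix along a radial line can be written compactly; the Python/NumPy sketch below (with an illustrative regularization weight) covers only this inversion step, not the noise-spectrum modeling or the Pointwise SA-DCT denoising.

import numpy as np

def tikhonov_deblur_1d(blurred, H, lam=1e-2):
    # blurred : observed samples along one radial line (polar coordinates)
    # H       : (n, n) blurring matrix along that line
    # lam     : regularization weight; larger values suppress noise
    #           amplification at the cost of some residual blur
    # Solves (H^T H + lam * I) x = H^T y, the standard Tikhonov solution.
    n = H.shape[1]
    A = H.T @ H + lam * np.eye(n)
    return np.linalg.solve(A, H.T @ blurred)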
This paper proposes a novel data hiding scheme in which a payload is embedded into the discrete cosine transform domain. The characteristics of the Human Visual System (HVS) with respect to image viewing have been exploited to adapt the strength of the embedded data and integrated into the design of a digital image watermarking system. By using an HVS-inspired image quality metric, we study the relation between the amount of data that can be embedded and the resulting perceived quality. This study allows one to increase the robustness of the watermarked image without damaging the perceived quality, or, as an alternative, to reduce the impairments produced by the watermarking process given a fixed embedding strength. Experimental results show the effectiveness and the robustness of the proposed solution.
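As a generic, hedged illustration of DCT-domain embedding (not the HVS-adapted scheme of the paper), the following Python/SciPy snippet embeds one payload bit into a mid-frequency coefficient of an 8x8 block, with 'strength' standing in for the perceptually tuned embedding strength discussed above.

import numpy as np
from scipy.fft import dctn, idctn

def embed_bit_in_block(block, bit, strength=4.0, pos=(3, 2)):
    # Add or subtract a fixed amount to one mid-frequency DCT coefficient
    # of an 8x8 block depending on the payload bit, then transform back.
    coeffs = dctn(block.astype(float), norm='ortho')
    coeffs[pos] += strength if bit else -strength
    return idctn(coeffs, norm='ortho')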
Photometry of crowded fields is an old theme of astronomical image processing. Large space surveys in the UV (ultraviolet), like the GALEX mission (135-175 nm and 170-275 nm ranges), confront us again with challenges such as very low light levels, poor resolution, variable stray light in the background, and extended, poorly known PSFs (point spread functions). However, the morphological similarity of these UV images to their counterparts in the visible bands suggests using this high-resolution data as the starting reference for the UV analysis. We choose the Bayesian approach. However, there is no straightforward way leading from the basic idea to its practical implementation. We describe in this paper the path which starts with the original procedure (presented in a previous paper) and ends with the usable one. After a brief review of the Bayesian method, we describe the process applied to recover from the UV images the point spread function (PSF) and the background due to stray light. Finally, we present the photometric performance reached for each channel and discuss the consequences of imperfect knowledge of the background, inaccurate object centring, and inaccuracies in the PSF model. Results show a clear improvement of more than 2 mag in the magnitude limit and in the completeness of the measured objects relative to classical methods (corresponding to more than 75000 new objects per GALEX field, i.e. approximately 25% more). The simplicity of the Bayesian approach eased the analysis as well as the corrections needed to obtain a useful and reliable photometric procedure.
Face Recognition has been a major topic of research for many years and several approaches have been developed,
among which the Principal Component Analysis (PCA) algorithm using Eigenfaces is the most popular. Eigenfaces
optimally extract a reduced basis set that minimizes reconstruction error for the face class prototypes. The method is
based on second-order pixel statistics and does not address higher-order statistical dependencies such as relationships
among three or more pixels. Independent Component Analysis (ICA) is a recently developed linear transformation
method for finding suitable representations of multivariate data, such that the components of the representation are
as statistically independent as possible. The face image class prototypes in ICA are considered to be a linear mixture
of some unknown set of basis images that are assumed to be statistically independent, in the sense that the pixel
values of one basis image cannot be predicted from that of another. This research evaluates the performance of ICA
for face recognition under varying conditions like change of expression, change in illumination and partial occlusion.
We compare the results with those of standard PCA, employing the Yale face database for the experiments; the results show that ICA performs better under certain conditions.
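In sketch form, the two representations can be fitted with standard tools (scikit-learn is an assumption here, as are the component count and the data layout); recognition then proceeds by nearest-neighbour matching in the reduced space.

from sklearn.decomposition import PCA, FastICA

def fit_face_subspaces(face_matrix, n_components=40):
    # face_matrix : (n_faces, n_pixels) array of vectorized, aligned faces.
    # PCA yields the eigenface basis; FastICA yields components that are as
    # statistically independent as possible (ICA for faces is often run on
    # PCA-whitened data in practice, omitted here for brevity).
    pca = PCA(n_components=n_components, whiten=True).fit(face_matrix)
    ica = FastICA(n_components=n_components, max_iter=1000).fit(face_matrix)
    return pca, ica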
Microcalcification detection in a mammogram is an effective way to find breast tumors at an early stage. In particular, computer-aided diagnosis (CAD) improves the working performance of radiologists and doctors by offering efficient microcalcification detection. In this paper, we propose a microcalcification detection system which consists of three modules: coarse detection, clustering, and fine detection. The coarse detection module finds candidate pixels in the entire mammogram that are suspected to be part of a microcalcification. The module not only extracts two median-contrast features and two contrast-to-noise-ratio features, but also categorizes the candidate pixels with a linear-kernel SVM classifier. Then, the clustering module groups the candidate pixels into regions of interest (ROI) using a region growing algorithm. The objective of the fine detection module is to decide whether the corresponding region is a microcalcification or not. Eleven features, including distribution, variance, gradient, and various edge components, are extracted from the clustered ROIs and fed into a radial-basis-function SVM classifier to make the final determination. In order to verify the effectiveness of the proposed microcalcification detection system, experiments are performed on full-field digital mammograms (FFDM). We also compare its detection performance with that of an ANN-based detection system.
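The two classification stages could be instantiated, for example, with scikit-learn SVMs as below; the feature extraction and the data themselves are assumed to be prepared elsewhere, so the snippet only illustrates the linear-kernel (coarse) versus RBF-kernel (fine) choice.

from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Two-stage classification analogous to the coarse/fine modules above.
coarse_clf = make_pipeline(StandardScaler(), SVC(kernel='linear'))
fine_clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', gamma='scale'))

# coarse_clf.fit(pixel_features, pixel_labels)  # 4 contrast/CNR features per pixel
# fine_clf.fit(roi_features, roi_labels)        # 11 features per clustered ROI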
A novel edge detector has been developed that utilises statistical masks and neural networks for the optimal detection of edges over a wide range of image types. The failure of many common edge detection techniques has been observed when analysing concealed-weapons X-ray images, biomedical images, or images with significant levels of noise, clutter or texture. This novel technique is based on a statistical edge detection filter that uses a range of two-sample statistical tests to evaluate local image texture differences, applying a pixel-region mask (or kernel) to the image and analysing the statistical properties of that region. The range and type of tests have been greatly expanded from the previous work of Bowring et al.1 This process is further enhanced by feeding the outputs of multiple-scale pixel masks and multiple statistical tests to Artificial Neural Networks (ANN) trained to classify different edge types. Through the use of ANNs we can combine the output results of several statistical mask scales into one detector. Furthermore, we can combine several two-sample statistical tests of varying properties (for example, mean-based, variance-based and distribution-based). This combination of both scales and tests allows the optimal response from a variety of statistical masks. From this we can produce the optimum edge detection output for a wide variety of images, and the results of this are presented.
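A single-test, single-scale toy version of the statistical mask idea is sketched below in Python/SciPy: the two halves of a local neighbourhood are compared with a two-sample t-test, and a large test statistic indicates a likely (vertical) edge; the multi-scale masks, the additional tests, and the ANN combination described above are not reproduced.

import numpy as np
from scipy.stats import ttest_ind

def statistical_edge_response(img, row, col, half=3):
    # Compare the left and right halves of a (2*half+1)-wide neighbourhood
    # with Welch's t-test; a large |statistic| suggests the two sides come
    # from different intensity populations, i.e. a vertical edge.
    left = img[row - half:row + half + 1, col - half:col].ravel()
    right = img[row - half:row + half + 1, col + 1:col + half + 1].ravel()
    stat, _ = ttest_ind(left, right, equal_var=False)
    return abs(stat)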