This PDF file contains the front matter associated with SPIE Proceedings Volume 8063, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The issue of privacy in video surveillance has drawn considerable interest lately. However, thorough performance analysis and validation are still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first
review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy
protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial
information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of
face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as
pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil
face recognition.
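For concreteness, the two naïve PETs discussed above can be sketched as follows (a minimal numpy illustration of generic pixelization and box blur, not the exact filters used in the experiments):

```python
import numpy as np

def pixelize(face: np.ndarray, block: int = 8) -> np.ndarray:
    """Naive PET: replace each block x block tile by its mean intensity."""
    h, w = face.shape
    out = face.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean()
    return out

def blur(face: np.ndarray, k: int = 9) -> np.ndarray:
    """Naive PET: separable box blur with kernel width k."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1,
                              face.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, tmp)
```

Because both operations only remove high-frequency detail, enough low-frequency identity information may survive for a face recognizer to succeed, which is consistent with the reported ineffectiveness of these naïve PETs.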
Watermarking is known to be a very difficult task: robustness, distortion, payload, security, and complexity are among the many constraints to deal with. When applied to a video stream, the difficulty grows in comparison to image watermarking. Many classical, non-malicious manipulations of a compressed stream may suppress the embedded information. For example, simply re-compressing a DVD movie (MPEG-2 compression) to a DivX movie will defeat most current state-of-the-art watermarking systems. In this talk, we will survey techniques for watermarking a compressed video stream. Beforehand, we will present the H.264/AVC standard, one of the most powerful video-compression algorithms; the discussion of video watermarking will be illustrated with H.264 streams. The specific example of traitor tracing will be presented. Remaining deadlocks will be discussed, and possible extensions and future applications will conclude the presentation.
A novel technology that significantly enhances security and trust in wireless and wired communication networks has been developed. It is based on the integration of a novel encryption mechanism and a novel data packet structure with enhanced security tools. This data packet structure yields an unprecedented level of security and trust while reducing power consumption and computing/communication overhead in networks. As a result, networks are provided with protection against intrusion, exploitation, and cyberattacks, and possess self-building, self-awareness, self-configuring, self-healing, and self-protecting intelligence.
Over the past several years there has been an apparent shift in research focus in the area of digital steganography and steganalysis: a shift from primarily image-based methods to a new focus on broader multimedia techniques. More specifically, the area of digital audio steganography is of prime interest. We introduce a new high-capacity, covert
channel data embedding and recovery system for digital audio carrier files using a key based encoding and decoding
method. It will be shown that the added information file is interleaved within the carrier file and is fully indexed
allowing for segmented extraction and recovery of data at chosen start and stop points in the sampled stream. The
original audio quality is not affected by the addition of this covert data. The embedded information can also be secured
by a binary key string or cryptographic algorithm and resists statistical analytic detection attempts. We will also
describe how this new method can be used for data compression and expansion applications in the transfer and storage of
digital multimedia to increase the overall data capacity and security.
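As one hedged illustration of key-based covert embedding in an audio carrier (a generic LSB sketch, not the authors' indexed interleaving scheme; the `embed`/`extract` names and the PRNG-based index selection are assumptions):

```python
import numpy as np

def embed(samples: np.ndarray, payload: bytes, key: int) -> np.ndarray:
    """Hide payload bits in the LSBs of 16-bit PCM samples at
    key-selected positions (illustrative only)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    rng = np.random.default_rng(key)                      # key seeds the index stream
    idx = rng.choice(len(samples), size=len(bits), replace=False)
    out = samples.copy()
    out[idx] = (out[idx] & ~1) | bits                     # overwrite least significant bit
    return out

def extract(samples: np.ndarray, nbytes: int, key: int) -> bytes:
    """Regenerate the same key-derived indices and read the LSBs back."""
    rng = np.random.default_rng(key)
    idx = rng.choice(len(samples), size=nbytes * 8, replace=False)
    return np.packbits((samples[idx] & 1).astype(np.uint8)).tobytes()
```

Since each sample changes by at most one quantization step, the audible quality of the carrier is essentially unaffected, mirroring the claim in the abstract.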
Smart sensors can gather all kinds of information and process it. Cameras still dominate, and smart cameras can offer services for face recognition or person tracking. Operators are building collaborations to cover a larger area, to save costs, and to add more and different sensors. Cryptographic methods may achieve integrity and confidentiality between operators, but not trust. Even if a partner or one of his sensors is authenticated, no statements can be made about the quality of the sensor data. Hence, trust must be established between the partners and their sensors. Trust can be built on past experience: a reputation system collects operators' opinions about the behavior of sensors and calculates trust from these opinions. Many reputation systems have been proposed, e.g., for authentication of files in peer-to-peer networks. This work presents a new reputation system designed to calculate the trustworthiness of smart sensors and smart sensor systems. A new trust model, including functions to calculate and update trust from past experience, is proposed. When information from multiple sensors is fused, it cannot always be reconstructed which sensor's information led to a bad result; hence, an approach for fair rating is shown. The proposed system has been realized in a Service-Oriented Architecture for easy integration in existing smart sensor systems, e.g., smart surveillance systems. The model itself can be used in any decentralized, heterogeneous smart sensor network.
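One common way to turn collected ratings into a trust value is a beta-reputation model; the sketch below (including the fading factor, which ages old evidence) is a generic illustration of this idea, not the paper's actual trust functions:

```python
def beta_trust(good: float, bad: float) -> float:
    """Expected trust under a Beta(good+1, bad+1) model of past ratings:
    0.5 with no evidence, approaching 1.0 as positive ratings accumulate."""
    return (good + 1.0) / (good + bad + 2.0)

def update(good: float, bad: float, positive: bool, fade: float = 0.95):
    """Discount old evidence by `fade`, then add the new rating, so that
    recent sensor behavior weighs more than distant history."""
    good, bad = good * fade, bad * fade
    if positive:
        good += 1.0
    else:
        bad += 1.0
    return good, bad
```

A sensor that has received 8 positive and 2 negative ratings would score `beta_trust(8, 2) == 0.75`, and a burst of bad ratings pulls the score back down, which is the behavior a reputation system for smart sensors needs.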
Multimedia Signal Processing Algorithms and Systems
In this paper, we discuss the effects of uncommon time constraints on the stopping criteria for managing time-critical decision-making in human-machine interaction. The continued growth of multimedia signal processing demands real-time solutions supporting a human-machine interactive decision process. We provide a methodology that locates best alternatives in human-machine interactive decision-making by including human perception in the effects of uncommon time constraints, instead of the prevailing heuristics. The proposed methodology contributes to the stopping criteria of, for example, an entry-exit control procedure using a monitoring device, by introducing built-in alert and confirm functions into the human-machine interaction to mitigate decision makers' misapprehensions that originate from human perception.
Threshold operators are conventionally used in wavelet-based denoising applications. Different thresholding schemes have been suggested to achieve an improved balance between mitigating various signal distortions and preserving signal details. In general, these state-of-the-art threshold operators are nonlinear shrinkage functions, such as the well-known "soft" and "hard" thresholds and their hybrids. Recently, a nonlinear polynomial threshold was introduced which integrates several known approaches and can be optimized using a least squares technique. While significantly improving performance, this approach is computationally intensive and is not flexible enough for band-adaptive processing.
In this paper, an adaptive least mean squares (LMS) optimization approach is proposed and studied which drastically reduces the computational load and is convenient for band-adaptive denoising scenarios. The approach is successfully applied to 1D and 2D signals, and the results demonstrate improved performance in comparison with conventional methods.
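For reference, the standard shrinkage operators and a least-squares fit of a polynomial threshold can be sketched as follows (the odd-polynomial parameterization is an illustrative simplification of the cited polynomial-threshold approach):

```python
import numpy as np

def hard_threshold(w, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink every coefficient toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fit_poly_threshold(noisy, clean, degree=3):
    """Least-squares fit of an odd polynomial shrinkage
    y = c0*w + c1*w**3 + c2*w**5 to (noisy, clean) coefficient pairs."""
    A = np.stack([noisy ** (2 * k + 1) for k in range(degree)], axis=1)
    c, *_ = np.linalg.lstsq(A, clean, rcond=None)
    return c
```

Fitting the polynomial coefficients per wavelet band is what makes a band-adaptive scheme possible; an adaptive LMS variant would update `c` sample by sample instead of solving the batch least-squares problem.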
Fingerprint recognition is one of the most commonly used forms of biometrics and has been widely used in daily life due to its feasibility, distinctiveness, permanence, accuracy, reliability, and acceptability. Besides cost, issues related to accuracy, security, and processing time in practical biometric recognition systems represent the most critical factors that make these systems widely acceptable. Accurate and secure biometric systems often require sophisticated enhancement and encoding techniques that burden the overall processing time of the system. In this paper we present a comparison between common digital and optical enhancement/encoding techniques with respect to their accuracy, security, and processing time when applied to biometric fingerprint systems.
This paper aims to provide a remote fingerprint object authentication protocol dedicated to anti-counterfeiting applications, and the corresponding security model is given. The suggested scheme is based on Elliptic Curve Cryptography (ECC) encryption, with an added mechanism to control integrity at the verification stage. The privacy constraint, useful in many applications, leads us to embed a Private Information Retrieval scheme in the protocol. As in a previous SPIE presentation, we begin with an optical reader; here we drastically lower the amount of computation made at this stage.
This paper offers an innovative image processing technique (smart data compression) for some Department of Defense
and Government users, who may be disadvantaged in terms of network and resource availability as they operate at the
tactical edge. Specifically, we propose using the concept of autonomous anomaly detection to significantly reduce the
amount of data transmitted to the disadvantaged user. The primary sensing modality is hyperspectral, where a national
asset is expected to fly over the region of interest acquiring and processing data in real time, but transmitting only the
corresponding data of scene anomalies, their spatial relationships in the imagery, range and navigational direction.
Results from a proof-of-principle experiment using real hyperspectral imagery are encouraging.
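A standard choice for the on-board anomaly detector (the abstract does not name one; the global RX detector below is an assumption) is a Mahalanobis-distance test of each pixel spectrum against the scene background:

```python
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """Global RX anomaly detector: Mahalanobis distance of every pixel
    spectrum to the scene mean, under the scene covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularize for stability
    inv = np.linalg.inv(cov)
    d = X - mu
    return np.einsum("ij,jk,ik->i", d, inv, d).reshape(h, w)
```

Transmitting only the pixels where `rx_scores(cube)` exceeds a threshold, plus their spatial coordinates, is the kind of data reduction the abstract describes for disadvantaged tactical users.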
Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multi-component hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or
image components. This joint processing capability can provide improved exploitation of the information contained in
the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply
hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher
by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier domain
alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the
mathematical properties which justify use of ARBPC as an image matcher. We present numerical performance results of
a logo verification problem using real-world logo data, demonstrating the performance improvement obtained using the
hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching
approaches.
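The classical real-valued phase correlation that ARBPC generalizes can be sketched as follows; the biquaternion version replaces the complex FFT with a biquaternion Fourier transform and adds alpha-rooting enhancement:

```python
import numpy as np

def phase_correlate(a: np.ndarray, b: np.ndarray):
    """Return the circular shift (dy, dx) of image b relative to image a,
    found as the peak of the normalized cross-power spectrum."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                    # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

A sharp, isolated peak indicates a match between the probe and the template, which is why phase correlation (and its hypercomplex generalizations) serves as an image matcher for logo verification.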
Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by
the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance
and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology,
but many other applications require a software-defined encoder. High quality compression features needed for some
applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer
electronics device. An application may also need to efficiently combine compression with other functions such as noise
reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low
power, field upgradable implementation.
Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be
used to express all of the encoding processes including motion compensation, transform and quantization, and entropy
coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as
a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported
without the need for explicit global synchronization control.
An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
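The dataflow idea can be illustrated in miniature with three pipeline tasks communicating through message queues (a toy Python analogue of the hierarchical task graph; the stage names are placeholders for motion estimation, transform/quantization, and entropy coding):

```python
import queue
import threading

def stage(fn, inq, outq):
    """One dataflow task: consume messages, apply fn, forward results.
    A None message terminates the task and is propagated downstream,
    so no explicit global synchronization is needed."""
    def run():
        while (item := inq.get()) is not None:
            outq.put(fn(item))
        outq.put(None)
    t = threading.Thread(target=run)
    t.start()
    return t

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
threads = [
    stage(lambda mb: ("me", mb), q0, q1),   # stand-in for motion estimation
    stage(lambda r: ("tq", r), q1, q2),     # stand-in for transform/quantization
    stage(lambda r: ("ec", r), q2, q3),     # stand-in for entropy coding
]
for mb in range(4):                         # four macroblocks flow through the graph
    q0.put(mb)
q0.put(None)
results = []
while (out := q3.get()) is not None:
    results.append(out)
for t in threads:
    t.join()
```

Each stage runs concurrently and order is preserved by the FIFO queues; on a memory-network processor array, each task would map to its own core with hardware message channels in place of `queue.Queue`.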
Methods for embedding stream cipher rules into compressive Elias-type entropy coders are presented. Such systems can simultaneously compress and encrypt input data, thereby providing a fast and secure
means for data compression. Procedures for maintaining compressive performance are articulated with focus on
further compression optimizations. Furthermore, a novel method is proposed which exploits the compression
process to hide cipherstream information in the case of a known plaintext attack. Simulations were performed on
images from a variety of classes in order to grade and compare the compressive and computational costs of the
novel system relative to traditional compression-followed-by-encryption methods.
Secure wireless connectivity between mobile devices and financial/commercial establishments is mature, and so is the security of remote authentication for mCommerce. However, current techniques remain open to hacking, false misrepresentation, replay, and other attacks, because the authentication process lacks real-time, precise-location information. This paper proposes a new technique in which freshly generated personal biometric data of the client and the present position of the mobile device used for the mCommerce transaction together form a real-time biometric representation that authenticates the remote transaction. A fresh GPS fix generates the "time and location" used to stamp the freshly captured biometric data, producing a single, real-time biometric representation on the mobile device. A trusted Certification Authority (CA) acts as an independent authenticator of the client's claimed real-time location and freshly provided biometric data, which eliminates the need for user enrolment with many mCommerce services and application providers. The CA can also, independently from the client and at that instant of time, collect the client's mobile device "time and location" from the cellular network operator and compare it with the received information, together with the client's stored biometric information. Finally, to preserve the client's location privacy and to eliminate the possibility of cross-application client tracking, this paper proposes shielding the real location of the mobile device prior to submission to the CA or authenticators.
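Binding a fresh biometric capture to a time/location fix might look as follows (a hedged sketch: the record fields, the SHA-256 template digest, and the use of an HMAC under a CA-shared key are all assumptions, since the abstract does not specify the message format):

```python
import hashlib
import hmac
import json
import time

def stamp_biometric(template: bytes, lat: float, lon: float, key: bytes) -> dict:
    """Bind a freshly captured biometric template to a GPS time/location
    fix with an HMAC tag verifiable by the trusted CA (illustrative)."""
    record = {
        "ts": int(time.time()),                         # time of the GPS fix (assumed)
        "lat": round(lat, 5),
        "lon": round(lon, 5),
        "bio": hashlib.sha256(template).hexdigest(),    # digest of the fresh capture
    }
    msg = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return record

def verify(record: dict, key: bytes) -> bool:
    """Recompute the tag over everything but the tag itself."""
    body = {k: v for k, v in record.items() if k != "tag"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["tag"], expected)
```

Any tampering with the claimed position or timestamp invalidates the tag, and the embedded timestamp lets the CA reject replayed representations.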
Face recognition is one of the most desirable biometric-based authentication schemes for controlling access to sensitive information/locations and as a proof of identity to claim entitlement to services. The aim of this paper is to develop block-based mechanisms to reduce recognition errors that result from varying illumination conditions, with emphasis on using error correction codes. We investigate the modelling of error patterns in different parts/blocks of face images that arise from differences in illumination conditions, and we use appropriate error correction codes to deal with the corresponding distortion. We test the performance of our proposed schemes using the Extended Yale-B Face Database, which consists of face images belonging to 5 illumination subsets depending on the direction of the light source relative to the camera. In our experiments each image is divided into three horizontal regions: region 1, the three rows above the eyebrows plus the eyebrows and eyes; region 2, the nose; and region 3, the mouth and chin. By estimating statistical parameters for the errors in each region, we select suitable BCH error correction codes that yield improved recognition accuracy for that particular region in comparison to applying error correction codes to the entire image. A Discrete Wavelet Transform (DWT) to a depth of 3 is used for face feature extraction, followed by global/local binarization of the coefficients in each sub-band. We demonstrate that the use of BCH codes improves the separation between the distribution of Hamming distances of client-client samples and that of impostor-client samples.
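The per-region error-correction workflow can be illustrated with a simple repetition code standing in for BCH (BCH codes correct multiple errors per block far more efficiently; this sketch only conveys the encode, majority-decode, and Hamming-distance steps):

```python
import numpy as np

def rep_encode(bits: np.ndarray, n: int = 3) -> np.ndarray:
    """Repeat every feature bit n times (toy stand-in for BCH encoding)."""
    return np.repeat(bits, n)

def rep_decode(coded: np.ndarray, n: int = 3) -> np.ndarray:
    """Majority-vote each group of n bits, correcting up to (n-1)//2 flips
    per group, e.g. illumination-induced bit errors in one face region."""
    return (coded.reshape(-1, n).sum(axis=1) > n // 2).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two binary templates."""
    return int(np.count_nonzero(a != b))
```

After decoding, client-client Hamming distances shrink toward zero while impostor-client distances stay near half the template length, which is the distribution separation the paper measures.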
This paper proposes a simplified, fast method to estimate the head pose in monocular images. The head pose is estimated by comparing the positions of landmark points localized on the face, which correspond to pre-selected points of a 3D face model. To localize these points on a face image (such as the eyes, mouth, and nose) we use Active Shape Models. These points are detected in the 2D space, and the 3D face model is then adjusted by geometric transformations to the 2D points on the face image to estimate the head's angular position in 3D space. Our preliminary experimental results are encouraging, since they show that our simplified approach has competitive accuracy when compared to more sophisticated methods available in the literature.
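The 2D-3D adjustment step can be sketched as an affine-camera least-squares fit between the model points and the detected landmarks (a simplification; the paper's actual geometric transformations are not detailed in the abstract):

```python
import numpy as np

def fit_affine_camera(model_3d: np.ndarray, image_2d: np.ndarray) -> np.ndarray:
    """Solve image ≈ [A | t] @ [X; 1] for a 2x4 affine camera matrix.
    The 2x3 block A approximates the scaled top two rows of the head
    rotation, from which the angular pose can be recovered."""
    n = len(model_3d)
    X = np.hstack([model_3d, np.ones((n, 1))])   # homogeneous model points
    P, *_ = np.linalg.lstsq(X, image_2d, rcond=None)
    return P.T                                    # shape (2, 4): [A | t]
```

With at least four non-coplanar landmarks (eyes, nose tip, mouth corners) the fit is well posed, and normalizing the rows of `A` yields the head's yaw/pitch/roll up to the weak-perspective approximation.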
In this paper, we propose a new skin recognition approach for human identification. An automatic skin segmentation method is developed to detect the location of the eyes and the skin area. A 1D Log-Gabor filter is used for skin feature extraction and template generation, and the center area of the skin template is matched against the database. The experimental results show that the proposed skin recognition method can achieve better performance than face recognition systems when evaluated on the CASIA-Iris-Distance database.
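The 1D Log-Gabor filtering and template-generation step can be sketched as follows (the center frequency and bandwidth values are illustrative assumptions, and the phase-quantized template mirrors the common iris-code construction):

```python
import numpy as np

def log_gabor_1d(n: int, f0: float = 0.1, sigma_ratio: float = 0.55) -> np.ndarray:
    """Frequency response G(f) = exp(-(log(f/f0))^2 / (2 log(sigma_ratio)^2)),
    with G(0) = 0: a log-Gabor filter has no DC component by construction."""
    f = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = np.abs(f) > 0
    g[pos] = np.exp(-(np.log(np.abs(f[pos]) / f0) ** 2)
                    / (2 * np.log(sigma_ratio) ** 2))
    return g

def encode_row(row: np.ndarray) -> np.ndarray:
    """Filter one row of skin texture and binarize the response phase
    into a 2-bit-per-sample template."""
    resp = np.fft.ifft(np.fft.fft(row) * log_gabor_1d(len(row)))
    return np.stack([resp.real >= 0, resp.imag >= 0]).astype(np.uint8)
```

Templates produced this way are compared by Hamming distance, so matching the center area of the skin template against the database reduces to counting disagreeing bits.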
Biometrics identifies/verifies a person using his/her physiological or behavioral characteristics and is becoming an important ally for law enforcement and homeland security. However, there are safety and privacy concerns: biometric-based systems can be accessed when users are under threat, reluctant, or even unconscious. In this paper, we introduce a new method which can identify a person and detect his/her willingness. Our experimental results show that the new approach can enhance security by checking a consent signature while achieving very high recognition accuracy.
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
The aim of this article is to give a practical overview of certain commercial software for investigation of the new iPhone 4. It is demonstrated how important data stored on the iPhone are investigated. Different investigation cases are presented that are well-suited for forensics lab work.
FPGA devices with embedded DSP blocks, memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other, non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain via Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.
Digital steganographic algorithms hide secret messages in seemingly innocent cover objects, such as images. Steganographic algorithms are rapidly evolving, reducing distortions and making detection of altered cover objects by steganalysis algorithms more challenging. The value of current steganographic and steganalysis algorithms is difficult to evaluate until they are tested on realistic datasets. We propose a system approach to steganalysis for reliably detecting steganographic objects among a large number of images, acknowledging that most digital images are intact. The system consists of a cascade of intrinsic image formation filters (IIFFs), where the IIFFs in the early stages are designed to filter out non-stego images based on real-world constraints, and the IIFFs in the late stages are designed to detect intrinsic features of specific steganographic routines. Our approach makes full use of all available constraints, leading to robust detection performance and a low probability of false alarm. Our results, based on a large image set from Flickr.com, demonstrate the potential of our approach on large-scale real-world repositories.
The aim of this article is to show forensic investigation methods for mobile phones to students in a university forensic lab environment. Students have to learn the usefulness of forensic procedures for evidence collection, evidence preservation, forensic analysis, and reporting.
Open source tools as well as commercial forensic tools for the forensic investigation of modern mobile (smart) phones are used. It is demonstrated how important data stored in the mobile device are investigated. Different investigation scenarios are presented that are well-suited for forensics lab work at university.
This paper proposes a palmprint identification system using the Finite Ridgelet Transform (FRIT) and a Bayesian classifier. The FRIT is applied to the region of interest (ROI) extracted from the palmprint image to obtain a set of distinctive features, which are then classified with the help of the Bayesian classifier. The proposed system has been tested on the CASIA and IIT Kanpur palmprint databases, and the experimental results reveal better performance compared to well-known systems.
Biometric systems such as face recognition must address four key challenges: efficiency, robustness, accuracy, and security. Isometric projection has been proposed as a robust dimension reduction technique for a number of applications, but it is computationally demanding when applied to high-dimensional spaces such as the space of face images. On the other hand, wavelet transforms have been shown to provide an efficient tool for facial feature representation and face recognition with a significant reduction in dimension. In this paper, we propose a hybrid approach that combines the efficiency and robustness of wavelet transforms with isometric projection for face feature extraction in the transformed domain, to be used for recognition. We compare the recognition accuracy of our approach with that of other commonly used projection techniques in the wavelet domain, such as PCA and LDA. The security of biometric templates is addressed by adopting a lightweight random projection technique as an add-on subsystem. The results are based on experiments conducted on a publicly available benchmark face database.
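The template-protection add-on described above can be illustrated with a minimal random projection sketch. The projection dimension and the seeding scheme below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def random_projection(features, out_dim, seed):
    """Project a feature vector through a user-specific random matrix.

    The seed plays the role of a user-held secret: with a different seed, a
    different (cancellable) template is produced from the same biometric.
    """
    rng = np.random.default_rng(seed)
    d = len(features)
    # Gaussian random matrix, scaled so distances are roughly preserved
    R = rng.normal(0.0, 1.0 / np.sqrt(out_dim), size=(out_dim, d))
    return R @ np.asarray(features, dtype=float)

features = np.arange(64, dtype=float)          # stand-in wavelet-domain feature vector
t1 = random_projection(features, 16, seed=42)  # enrolled template
t2 = random_projection(features, 16, seed=99)  # re-issued template after compromise
```

Because the transform is seed-dependent, a compromised template can be revoked and re-issued without re-enrolling the biometric itself.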
Although biometric authentication is perceived to be more reliable than traditional authentication schemes, it becomes vulnerable to many attacks when used for remote authentication over open networks, and it raises serious privacy concerns. This paper proposes a biometric-based challenge-response approach for remote authentication between two parties A and B over open networks. In the proposed approach, a remote authenticator system B (e.g. a bank) challenges its client A, who wants to authenticate to the system, by sending a one-time public random challenge. The client A responds by employing the random challenge, along with secret information obtained from a password and a token, to produce a one-time cancellable representation of a freshly captured biometric sample. This one-time, multi-factor biometric representation is then sent back to B for matching. We argue that eavesdropping on the one-time random challenge and/or the resulting one-time biometric representation does not compromise the security of the system, and that no information about the original biometric data is leaked. In addition to securing biometric templates, the proposed protocol offers a practical defence against replay attacks on biometric systems. Moreover, we propose a new scheme for generating password-based pseudo-random numbers/permutations to be used as a building block in the proposed approach. The scheme is also designed to provide protection against repudiation. We illustrate the viability and effectiveness of the proposed approach with experimental results based on two biometric modalities: fingerprint and face.
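A minimal sketch of the client-side response generation is shown below, assuming (hypothetically) that the one-time transform is a challenge-seeded random projection. The key-derivation and transform choices here are illustrative stand-ins, not the paper's exact construction.

```python
import hashlib
import hmac
import secrets
import numpy as np

def one_time_response(challenge: bytes, password: bytes, token_key: bytes, biometric):
    """Derive a one-time seed from the challenge and the client's secrets,
    then apply a seeded random projection to the fresh biometric sample."""
    # Bind the per-session challenge to the password and token secret
    seed_bytes = hmac.new(token_key, challenge + password, hashlib.sha256).digest()
    seed = int.from_bytes(seed_bytes[:8], "big")
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(32, len(biometric)))  # one-time cancellable transform
    return R @ np.asarray(biometric, dtype=float)

# Server B issues a fresh public random challenge each session
challenge = secrets.token_bytes(16)
resp = one_time_response(challenge, b"pw", b"token-secret", np.ones(64))
```

Since the transform changes with every challenge, a captured response cannot be replayed against a later session.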
This paper presents a palmprint-based verification system using SIFT features and a Lagrangian network graph technique. SIFT is employed to extract invariant feature points from the region of interest (ROI), which is cropped from the wider palm texture during preprocessing. Identity is then established by finding the permutation matrix that minimizes the distance between a pair of graphs drawn on the SIFT features of the reference and probe palms. The proposed system has been tested on the CASIA and IITK palmprint databases, and the experimental results demonstrate the effectiveness and robustness of the system.
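The permutation-matching step can be sketched with the Hungarian algorithm as a stand-in for the paper's Lagrangian network: find the assignment of probe points to reference points that minimizes the total descriptor distance. This is an illustrative substitute, not the paper's actual solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_graphs(ref_desc, probe_desc):
    """Find the point-to-point assignment (a permutation when sizes match)
    minimizing total descriptor distance; return the assignment and its cost."""
    cost = cdist(ref_desc, probe_desc)        # pairwise descriptor distances
    rows, cols = linear_sum_assignment(cost)  # optimal assignment (Hungarian)
    return cols, cost[rows, cols].sum()
```

The total matching cost can then serve as the verification score: a genuine probe yields a near-zero cost, an impostor a large one.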
PreNotiS (preventive notification system) was proposed to address the current lack of consumer-oriented prevention and disaster informatics systems. The aim of this letter is to propose PreNotiS as a provider of trusted proxies for information sourcing, integral to the disaster informatics framework. To promote loose coupling among subsystems, PreNotiS has evolved into a model-view-controller (MVC) architecture through object-oriented incremental prototyping. The MVC design specifies the subsystems and how they interact with each other.
A testing framework is also proposed for PreNotiS to verify multiple concurrent user accesses, such as might be observed during disasters. The framework relies on conceptually similar self-test modules to support serviceability.
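As a minimal illustration of the loose coupling an MVC decomposition provides, the sketch below wires a model, a view, and a controller through an observer hook. The class names are hypothetical, not PreNotiS's actual subsystems.

```python
class AlertModel:
    """Holds notification state; knows nothing about how it is presented."""
    def __init__(self):
        self.alerts, self._observers = [], []

    def subscribe(self, view):
        self._observers.append(view)

    def add_alert(self, msg):
        self.alerts.append(msg)
        for view in self._observers:   # notify loosely coupled views
            view.refresh(self.alerts)

class AlertView:
    """Renders the model; replaceable without touching model or controller."""
    def __init__(self):
        self.rendered = []

    def refresh(self, alerts):
        self.rendered = list(alerts)

class AlertController:
    """Translates user or system input into model updates."""
    def __init__(self, model):
        self.model = model

    def receive(self, msg):
        self.model.add_alert(msg)
```

Each subsystem can be swapped or tested in isolation, which is what makes per-subsystem self-test modules feasible.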
This paper demonstrates a wavelet-denoising approach using polynomial threshold operators in 3-dimensional applications. It compares the efficacy of different denoising algorithms on 3D biomedical images using the 3D wavelet transform. The denoising mechanism is demonstrated by mitigating noise of different variances using polynomial thresholding. Our approach is to apply a parameterized threshold and to choose the parameters optimally for high-performance noise suppression, depending on the nature of the images and noise.
Comparative studies in the wavelet domain conclude that the presented method is viable for 3D applications. They also confirm the feasibility of using the polynomial threshold operators as a wavelet-domain, polynomial-threshold-based interpolation filter. The filter is applied to assist three spatial interpolation algorithms (nearest-neighbor, bilinear, and bicubic) and a spectral wavelet-based interpolation algorithm. Simulations show that denoising with polynomial threshold operators mitigates distortions in the interpolation.
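One simple parameterized shrinkage rule of the kind described can be sketched as follows. This particular polynomial ramp is an illustrative stand-in for the paper's operator family, with the order `p` as the tunable parameter.

```python
import numpy as np

def polynomial_threshold(coeffs, T, p=3):
    """Shrink wavelet coefficients with a polynomial rule.

    Coefficients with |c| >= T pass unchanged; smaller coefficients are
    attenuated by the factor (|c|/T)**(p-1), a smooth polynomial ramp.
    p=1 leaves the band untouched; larger p suppresses small,
    noise-dominated coefficients more aggressively.
    """
    c = np.asarray(coeffs, dtype=float)
    mag = np.abs(c)
    gain = np.where(mag >= T, 1.0, (mag / T) ** (p - 1))
    return c * gain
```

In a full pipeline this operator would be applied to the detail subbands of a (3D) wavelet decomposition before reconstruction.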
Image similarity measures are crucial for image processing applications which require comparisons to ideal reference
images in order to assess performance. The Structural Similarity (SSIM), Gradient Structural Similarity (GSSIM),
4-component SSIM (4-SSIM) and 4-component GSSIM (4-GSSIM) indexes are motivated by the fact that the human
visual system is adapted to extract local structural information. In this paper, we propose a new measure which enhances
the gradient information used for quality assessment. An analysis of the proposed image similarity measure using the
LIVE database of distorted images and their corresponding subjective evaluations of visual quality illustrates the improved performance of the proposed metric.
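The gradient component these measures rely on can be sketched as a minimal gradient-magnitude similarity map in the spirit of GSSIM. The stability constant and the gradient operator below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def gradient_similarity(img1, img2, c=1e-4):
    """Mean per-pixel similarity of the gradient magnitudes of two images."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    g1, g2 = grad_mag(img1), grad_mag(img2)
    # SSIM-style ratio: 1 where gradient magnitudes agree, toward 0 where they differ
    sim_map = (2 * g1 * g2 + c) / (g1**2 + g2**2 + c)
    return sim_map.mean()
```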
Hands-on experiments with electronic devices are recognized as an important element of engineering education, helping students become familiar with both theoretical concepts and practical tasks. However, continually increasing student numbers, costly laboratory equipment, and laboratory maintenance reduce the efficiency of physical labs.
As information technology continues to evolve, the Internet has become a common medium in modern education. An Internet-based remote laboratory removes many of these restrictions while still providing hands-on training: it is flexible in time, and the same equipment can be shared among different students. This article describes an on-going remote hands-on experimental lab project, "eComLab", covering radio modulation, networking, and mobile applications. Its main component is a remote laboratory infrastructure and server management system featuring online media familiar to modern students, such as chat rooms and video streaming.
This paper presents a new image contrast enhancement method based on the complex Ensemble Empirical Mode
Decomposition (EEMD) and alpha-rooting approaches. The new scheme also takes advantage of Fourier transform
properties and complex Intrinsic Mode Functions (IMFs). The experimental results show that the images obtained by this combination are more visually appealing than the best results yielded by the classical alpha-rooting technique. The algorithm is tested on medical images, hazy aerial images, and other low-contrast images.
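The classical alpha-rooting baseline mentioned above can be sketched in a few lines. The Fourier-domain variant is shown; `alpha` is a tunable parameter, typically somewhat below 1.

```python
import numpy as np

def alpha_rooting(img, alpha=0.9):
    """Enhance contrast by raising spectral magnitudes to the power alpha.

    Phase is preserved; compressing the magnitude spectrum (alpha < 1)
    boosts relative high-frequency content, sharpening detail.
    """
    F = np.fft.fft2(img.astype(float))
    mag, phase = np.abs(F), np.angle(F)
    F_enh = (mag ** alpha) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(F_enh))
```

The paper's contribution applies this idea per complex IMF of the EEMD rather than to the whole image at once.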
This paper presents a color image enhancement algorithm based on shifting the histogram of logarithmic DCT
coefficients of the luminance component. The novelty of this work lies in utilizing both the spatial domain histogram
concept and transform domain enhancement techniques to perform the enhancement. This paper also demonstrates a
quantitative measure based on contrast entropy to choose the optimal parameters and to determine the effectiveness of the method. In addition, different color spaces, including HSV, YCbCr, and Principal Component Analysis (PCA), have been examined to determine which yields the best visual results for the algorithm. We also present a comprehensive review of four commonly used color image enhancement techniques: multiscale retinex with color restoration, multi-contrast enhancement, multi-contrast enhancement with dynamic compression, and color image enhancement by scaling, and compare them with the proposed histogram shifting technique. Computer simulations and analysis show that the histogram shifting method outperforms these commonly used methods for most images.
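A minimal sketch of the core operation follows: shifting the histogram of logarithmic DCT magnitudes by a constant multiplies every coefficient magnitude by e^shift. The choice to pin the DC term (preserving mean brightness) and the shift value are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.fft import dctn, idctn

def log_dct_shift(lum, shift=0.2):
    """Shift the histogram of log-magnitude DCT coefficients of a
    luminance channel by `shift`, keeping the DC term fixed."""
    C = dctn(lum.astype(float), norm="ortho")
    dc = C[0, 0]
    # shifting log magnitudes by `shift` scales magnitudes by e**shift
    C = np.sign(C) * np.exp(np.log(np.abs(C) + 1e-12) + shift)
    C[0, 0] = dc                     # restore mean brightness
    return idctn(C, norm="ortho")
```

Scaling only the AC coefficients stretches variations around the mean, which is what raises contrast.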
The common treatments of melancholia are psychotherapy and medication. The psychotherapy treatment, which this study focuses on, is limited by time and location. It is easy for psychiatrists to gather information from clinical manifestations, but difficult to collect information from patients' daily conversations or emotions. A system that enables psychiatrists to capture patients' daily symptoms would therefore greatly assist treatment. This study proposes to use a fuzzy data mining algorithm to find association rules among keywords segmented from patients' daily voice/text messages, helping psychiatrists extract useful information before the outpatient visit. Patients with melancholia can use devices such as mobile phones or computers to record their emotions anytime and anywhere, and then upload the recorded files to a back-end server for further analysis. The analytical results can be used by psychiatrists to assess the severity of a patient's melancholia. Experimental results are given to verify the effectiveness of the proposed methodology.
In recent years, driven by the development of steganalysis methods, steganographic algorithms have evolved rapidly toward the ultimate goal of an unbreakable embedding procedure, resulting in algorithms with minimal distortion, exemplified by the recent family of Modified Matrix Encoding (MME) algorithms, which have proven most difficult to detect. In this paper we propose a compressed-sensing-based approach to intrinsic steganalysis for detecting MME stego messages. Compressed sensing is a recently proposed mathematical framework for representing an image (in general, a signal) by a sparse representation relative to an overcomplete dictionary, obtained by minimizing the l1-norm of the resulting coefficients. We first learn a dictionary from a training set using the KSVD algorithm so that performance is optimized; since JPEG images are processed in 8x8 blocks, the training examples are 8x8 patches rather than entire images, which improves the generalization of the compressed sensing model. For each 8x8 block, we compute its sparse representation using the orthogonal matching pursuit (OMP) algorithm. Using the computed sparse representations, we train a support vector machine (SVM) to classify 8x8 blocks into stego and non-stego classes. Given an input image, we first divide it into 8x8 blocks, compute the sparse representation of each block, and classify it with the trained SVM. After all blocks are classified, the entire image is classified by a majority vote over the block-level results. This yields a robust decision even when individual 8x8 blocks can be classified with only relatively low accuracy. We have tested the proposed algorithm on two datasets (the Corel-1000 dataset and a remote sensing image dataset) and achieved 100% accuracy in classifying images, even though the accuracy of classifying 8x8 blocks is only 80.89%.
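The per-block sparse coding step can be sketched with a minimal OMP implementation in pure NumPy. The dictionary here is a stand-in, not a KSVD-trained one, and the SVM stage is omitted.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k
    atoms (columns) of dictionary D; return the sparse coefficient vector."""
    residual = y.astype(float).copy()
    support, coef = [], np.zeros(0)
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the chosen support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In the described pipeline, each 8x8 patch (flattened to a 64-vector) would be coded this way, the coefficient vectors fed to the SVM, and the image-level label taken by majority vote over the block labels.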
In this paper the possibilities of forensic investigation of CD, DVD, and Blu-ray discs are presented. It is shown which information can be read using freeware and commercial software for forensic examination. In particular, the paper describes the visualization of hidden content and the possibility of identifying the burning hardware used for writing.