This PDF file contains the front matter associated with SPIE
Proceedings Volume 8406, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image is of poor quality. This is the result of spurious estimates of frequency and orientation by classical approaches, particularly in scratch regions. In both techniques presented in this paper, scratch marks are first identified using a reliability image computed from the gradient images. The first algorithm is based on an inpainting technique, while the second employs two different kernels, one for the scratch regions and one for the non-scratch regions, to calculate the gradient images. The simulation results show that both approaches preserve the actual information of the image while correctly connecting discontinuities, by approximating the orientation matrix more faithfully.
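As an illustration of the first step, the sketch below computes a gradient-based reliability image as the local orientation coherence of Sobel gradients; this is a common formulation, and the block size is an assumption, not the paper's exact definition.

# A minimal sketch (not the authors' exact formulation) of a gradient-based
# reliability image: local orientation coherence computed from Sobel gradients.
import numpy as np
from scipy import ndimage

def reliability_image(img, block=16):
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    # Local sums of the squared-gradient tensor entries over each block.
    k = np.ones((block, block))
    gxx = ndimage.convolve(gx * gx, k)
    gyy = ndimage.convolve(gy * gy, k)
    gxy = ndimage.convolve(gx * gy, k)
    # Coherence in [0, 1]: low values flag scratch/low-quality regions.
    num = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2)
    den = gxx + gyy + 1e-9
    return num / den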
Compared to iris recognition, sclera recognition using the line descriptor can achieve comparable recognition accuracy in visible wavelengths. However, the method is too time-consuming to be implemented in a real-time system. In this paper, we propose a GPU-based parallel computing approach to reduce the sclera recognition time. We define a new descriptor that incorporates a k-d tree structure and sclera edge information. The registration and matching task is divided into subtasks of various sizes according to their computational complexity. The affine transform parameters are generated by searching the k-d tree. Texture memory, constant memory, and shared memory are used to store templates and transformation matrices. The experimental results show that the proposed method, executed on a GPU, can speed up sclera matching by a factor of several hundred without any decrease in accuracy.
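The sketch below illustrates the k-d tree idea on the CPU with SciPy (a stand-in for the paper's GPU kernels; the descriptor layout is assumed): nearest-neighbour queries over template points produce correspondences from which affine parameters are fitted.

# Illustrative sketch, not the paper's CUDA code: use a k-d tree over template
# feature points to find correspondences, then fit affine parameters to them.
import numpy as np
from scipy.spatial import cKDTree

def candidate_affine(template_pts, probe_pts):
    tree = cKDTree(template_pts)              # build once per template
    dists, idx = tree.query(probe_pts, k=1)   # nearest template point per probe point
    src, dst = probe_pts, template_pts[idx]
    # Least-squares affine fit: dst ~ src @ B + t
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    B, t = params[:2], params[2]
    return B, t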
This paper proposes a new technique to obfuscate an authentication-challenge program (named LocProg) using randomly generated data together with a client's current location in real time. LocProg can be used to enable any handset application on mobile devices (e.g. mCommerce on smartphones) that requires authentication with a remote authenticator (e.g. a bank). The motivation for this novel technique is to a) enhance security against replay attacks, which currently rests on real-time nonce(s), and b) add a new security factor, location verified by two independent sources, to challenge/response methods for authentication. To assure a secure live transaction, thus reducing the possibility of replay and other remote attacks, the authors have devised a novel technique to obtain the client's location from two independent sources: GPS on the client's side and the cellular network on the authenticator's side. The algorithm of LocProg is based on obfuscating "random elements plus a client's data" with a location-based key generated on the bank's side. LocProg is then sent to the client and is designed to integrate automatically into the target application on the client's handset. The client can then de-obfuscate LocProg if s/he is within a certain range around the location calculated by the bank and if the correct personal data is supplied. LocProg also has features to protect against trial-and-error attacks. Analysis of LocAuth's security (trust, threat and system models) and trials based on a prototype implementation (on the Android platform) demonstrate the viability and novelty of LocAuth.
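A minimal sketch of a location-based key, assuming a simple design in which both sides quantize their independently obtained location fixes to a shared grid before keying an HMAC (the grid size, packing, and secret handling are illustrative, not the paper's):

import hmac, hashlib, os, struct

GRID = 0.01  # quantization step in degrees (~1 km; an assumed tolerance)

def location_key(lat, lon, nonce, master_secret):
    qlat, qlon = round(lat / GRID), round(lon / GRID)
    msg = struct.pack(">qq", qlat, qlon) + nonce
    return hmac.new(master_secret, msg, hashlib.sha256).digest()

# Bank side keys from the cell-network fix; client side keys from GPS.
nonce = os.urandom(16)
k_bank = location_key(51.5074, -0.1278, nonce, b"shared-secret")
k_client = location_key(51.5075, -0.1277, nonce, b"shared-secret")
assert k_bank == k_client  # keys match when both fixes fall in the same cell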
Many computer vision applications benefit from image registration, in which the mutual geometric information between images is estimated. With this estimate, the perspective of one image can be altered so that mutual information can easily be determined. This task is an essential step in object recognition. Existing methods seek to minimize some dissimilarity measure through optimization approaches such as gradient descent or particle swarm theory. The challenge associated with these optimization methods lies in unintended convergence to local minima or maxima. Feature-based approaches attempt to identify keypoints of an image that are suitable for homography estimation; however, these methods produce a large set of candidate points. We propose a comprehensive image registration method that takes advantage of feature point detection but imposes a strict method for identifying optimal interest points for the estimation of the homography matrix. The proposed method combines feature-based results with texture-based optimizations for the selection of control points. Preliminary experimental results show that our methodology can greatly reduce the computational time while improving registration accuracy.
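For context, a hedged sketch of the standard feature-based baseline (OpenCV ORB plus RANSAC; the paper's strict interest point selection and texture-based optimization are not reproduced here):

# Detect keypoints, match descriptors, and estimate a homography with RANSAC.
import cv2
import numpy as np

def estimate_homography(img1, img2):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # inlier mask in mask
    return H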
Human action analysis (e.g. smoking, eating, and phoning) is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety in urban areas, public parks, airplanes, hospitals, schools and elsewhere. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features change under different lighting and weather conditions.
This paper presents a new scheme for detecting human smoking events, or small amounts of smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. Moreover, the developed method is capable of detecting small smoking events of uncertain actions with various cigarette sizes, colors, and shapes.
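A rough skeleton of the candidate-region stage, assuming typical OpenCV building blocks (the exact segmentation rules and thresholds in the developed system differ):

# MOG2 background subtraction yields motion masks; candidate smoke regions are
# the moving areas that fail a simple skin-color test.
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def candidate_smoke_mask(frame_bgr):
    motion = bg.apply(frame_bgr)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Crude skin range in CrCb (assumed thresholds).
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.bitwise_and(motion, cv2.bitwise_not(skin))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))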
In recent work, low-frequency AC signal encryption, decryption and retrieval using system-parameter-based keys at the receiver stage of an acousto-optic (A-O) Bragg cell under first-order feedback have been demonstrated [1,2]. The corresponding nonlinear dynamics have also been investigated using the Lyapunov exponent and the so-called bifurcation maps [3]. The results were essentially restricted to A-O chaos around 10 kHz, and (baseband) signal bandwidths in the 1-4 kHz range. The results have generally been satisfactory, and parameter tolerances (prior to severe signal distortion at the output) in the ±5% to ±10% range have been obtained. Periodic AC waveforms and a short audio clip have been examined in this series of investigations. A main drawback of the above series of simulations has been the low and impractical signal bandwidths used. Increasing the bandwidth involves designing a feedback system with a much higher chaos frequency, which would then be amenable to higher-bandwidth information. In this paper, we revisit the problem for the case where the feedback delay time is reduced considerably, and the system parameters in the transmitter are adjusted so that a DC driver bias drives the system into chaos. Reducing the feedback time delay to less than 1 μs, an average chaos frequency of about 10 MHz was achieved after a few trials. For the AC application, a chaos region was chosen that allows a large enough dynamic range for the width of the available passband. Based on these choices, periodic AC signals with 1 MHz (fundamental) bandwidth were used for the RF bias driver (along with a DC bias). A triangular wave and a rectangular pulse train were chosen as examples. Results for these cases are presented here, along with comments on system performance and the possibility of including (static) images for signal encryption. Overall, the results are encouraging, and affirm the possibility of using A-O chaos for securely transmitting and retrieving information in the mid-RF range (a few tens of MHz).
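The underlying dynamics can be sketched with the standard first-order A-O feedback map, iterated with a delayed intensity term; the parameter values below are illustrative, not the paper's operating point.

# I1(t) = I0 * sin^2((a0 + beta * I1(t - TD)) / 2), iterated on a time grid.
import numpy as np

I0, beta, a0 = 1.0, 2.8, 2.0        # incident intensity, feedback gain, DC bias
TD, dt, steps = 1e-6, 1e-8, 20000   # ~1 us delay, as in the reduced-delay setup
delay = int(TD / dt)

I1 = np.zeros(steps)
for n in range(delay, steps):
    I1[n] = I0 * np.sin(0.5 * (a0 + beta * I1[n - delay])) ** 2
# For large enough beta the orbit becomes chaotic; an AC signal added to the
# bias a0 then rides on (and is hidden by) the chaotic carrier.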
To deal with severe variation in recording conditions, most biometric systems acquire multiple biometric samples for the same person at the enrolment stage, then extract the individual biometric feature vectors and store them in the gallery in the form of biometric template(s) labelled with the person's identity. The number of samples/templates and the choice of the most appropriate templates influence the performance of the system. The desired biometric template selection technique must aim to control the run time and storage requirements while improving the recognition accuracy of the biometric system. This paper is devoted to elaborating on and discussing a new two-stage approach to biometric template selection and update. The approach uses quality-based clustering, followed by a special criterion for selecting an ultimate set of biometric templates from the various clusters. It is designed to adaptively select a specific number of templates for each individual, where the number of biometric templates depends mainly on the performance of each individual (i.e. gallery size should be optimised to meet the needs of each target individual). Experiments have been conducted on two face image databases, and their results demonstrate the effectiveness of the proposed quality-guided approach.
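A minimal sketch of such a two-stage selection, assuming k-means for the clustering stage and nearest-to-centre as the selection criterion (the paper's quality-based clustering and ultimate-set criterion are its own):

# Cluster one person's enrolment samples, then keep the sample nearest each
# cluster centre as that cluster's template.
import numpy as np
from sklearn.cluster import KMeans

def select_templates(features, n_clusters=3):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    keep = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(d)])
    return sorted(keep)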
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors resulting from extreme variation in recording conditions (e.g. illumination, pose or expression) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error-correcting codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from image windows of differing sizes and for different recording conditions. By estimating statistical parameters of the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested on binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition performance.
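The window analysis can be sketched as follows, with the correction capability t taken (as an assumption) to be the midpoint between the intra-class and inter-class mean Hamming distances:

import numpy as np

def hamming(a, b):
    return np.count_nonzero(a != b)

def choose_t(intra_pairs, inter_pairs):
    d_intra = np.mean([hamming(a, b) for a, b in intra_pairs])
    d_inter = np.mean([hamming(a, b) for a, b in inter_pairs])
    # Correct within-class errors while keeping impostor templates undecodable.
    return int((d_intra + d_inter) / 2)

A BCH code for the window is then chosen with correction capability at least t.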
Face identification at a distance is very challenging since captured images are often degraded by blur and noise. Furthermore, computational resources and memory are often limited in mobile environments. Thus, it is very challenging to develop a real-time face identification system on a mobile device. This paper discusses face identification based on frequency-domain matched filtering in mobile environments. Face identification is performed by a linear or phase-only matched filter followed by sequential verification stages. The candidate window regions are determined by the major peaks of the linear or phase-only matched filtering output. The sequential stages comprise a skin-color test and an edge-mask filtering test, which verify the color and shape information of the candidate regions in order to remove false alarms. All algorithms are built on a mobile device using the Android platform. Preliminary results show that face identification of East Asian people can be performed successfully in mobile environments.
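A minimal sketch of phase-only matched filtering via the FFT (the sequential verification stages are separate):

# The filter keeps only the phase of the reference spectrum; correlation
# peaks mark candidate face locations.
import numpy as np

def pof_correlation(scene, reference):
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)
    pof = np.conj(R) / (np.abs(R) + 1e-9)   # phase-only filter
    corr = np.fft.ifft2(S * pof)
    return np.abs(corr)                      # peaks = candidate windows

# np.unravel_index(np.argmax(c), c.shape) gives the strongest candidate.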
Multimedia Signal Processing Algorithms and Systems
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and there can be multiple instances of sub-words overlapping. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlap between words and sub-words. We also investigate two approaches to the pre-processing tasks of estimating sub-word baselines and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part, and develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals through extensive experiments on publicly available and in-house databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that can benefit from analysis techniques developed for printed text.
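The baseline step can be sketched with an ordinary least-squares fit on foreground pixel coordinates (the high-density weighting and the incremental rotation search of the paper are omitted):

import numpy as np
from scipy import ndimage

def baseline_angle(binary_subword):
    ys, xs = np.nonzero(binary_subword)        # foreground pixel coordinates
    slope, intercept = np.polyfit(xs, ys, 1)   # fit y = slope * x + intercept
    return np.degrees(np.arctan(slope))

def deskew(binary_subword):
    # Rotate so the fitted baseline becomes horizontal (sign convention may
    # need flipping depending on the image coordinate system).
    angle = baseline_angle(binary_subword)
    return ndimage.rotate(binary_subword.astype(float), angle, reshape=True)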
Saccadic eye movements are important human behaviors with significant applications in commercial and security fields. In this paper, we focus on recognizing saccadic eye states from 3-D shape data acquired with a 3-D near-infrared sensor. Two salient features, the normal vectors of meshes and the curvatures of surfaces, are extracted, and their distributions are computed to represent eye states. A support vector machine (SVM) is applied to classify eye states as saccadic or non-saccadic. To verify the proposed method, we performed three groups of experiments using different sample-selection strategies on 300 3-D scans; the experimental results demonstrate the effectiveness and robustness of the proposed algorithm.
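A sketch of the classification stage with scikit-learn, assuming the normals and curvatures have already been extracted from the 3-D meshes (bin counts and value ranges are illustrative):

import numpy as np
from sklearn.svm import SVC

def eye_state_features(normals, curvatures, bins=16):
    # Histograms of the normals' z-component and of curvature values.
    h1, _ = np.histogram(normals[:, 2], bins=bins, range=(-1, 1), density=True)
    h2, _ = np.histogram(curvatures, bins=bins, range=(-1, 1), density=True)
    return np.hstack([h1, h2])

clf = SVC(kernel="rbf", C=1.0)
# clf.fit(X_train, y_train); clf.predict(X_test), rows built by eye_state_features.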
Feature extraction is a key component of typical pattern recognition algorithms, and the performance of a feature extractor is usually governed by several parameters. Characterizing the effect of parameter values on feature extraction performance is valuable for aiding appropriate parameter value selection. Often, the parameter space is discretized and the effect of discrete parameter values on feature variation is analyzed. However, it can be problematic to determine a discretization density that contains suitable parameter values. To address this issue, this paper further explores a previously introduced sensitivity measure for quantifying feature variation as a function of parameter space sampling density. Further mathematical properties of the sensitivity measure are determined, and closed-form expressions for special feature set relationships are derived. We investigate the convergence properties of the sensitivity measure as the parameter space sampling density increases, present conditions for convergence, and provide closed-form expressions for the limiting values. We show how sensitivity measure convergence can be used to choose an appropriate parameter space sampling density. Numerical examples of sensitivity measure convergence, validating the theoretical results, are presented for feature extraction on natural imagery.
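As a purely illustrative stand-in for the paper's measure (whose exact definition and closed forms are derived there), the snippet below tracks the total variation of a feature vector over an increasingly dense parameter grid and watches it converge:

import numpy as np

def total_variation(feature_fn, lo, hi, density):
    params = np.linspace(lo, hi, density)
    feats = np.array([feature_fn(p) for p in params])
    # Sum of feature changes between neighbouring parameter samples.
    return np.sum(np.linalg.norm(np.diff(feats, axis=0), axis=1))

# The value stabilizes once the grid is dense enough to resolve the feature map.
for d in (8, 32, 128, 512):
    print(d, total_variation(lambda p: np.array([np.sin(p), np.cos(2 * p)]), 0, np.pi, d))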
In this paper, we propose an algorithm for salient region detection, which can be used in object detection, tracking, video monitoring, and other machine vision fields. The proposed algorithm first computes the global similarity between video frames to create a similarity map, then finds the coarse positions of moving regions in the similarity map. Finally, the moving objects are obtained by computing color similarity. The experimental results show that the proposed algorithm can detect salient regions effectively and can extract the moving objects in video sequences with a static background.
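A rough sketch of the first stage, with the similarity formulation assumed (a normalized per-pixel product between consecutive frames):

import numpy as np

def similarity_map(prev, curr, eps=1e-6):
    p, c = prev.astype(float), curr.astype(float)
    # Equals 1 where the pixel is unchanged, drops toward 0 where it moved.
    return (2 * p * c + eps) / (p ** 2 + c ** 2 + eps)

# coarse = similarity_map(f0, f1) < 0.8 flags candidate moving regions.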
In this paper, we introduce a novel technique for covertly embedding data throughout an image using redundant number system decomposition over non-standard digital bit planes. It will be shown that this new steganography method has minimal visual distortion effects while also preserving both first- and second-order cover statistics, making it less susceptible to most general steganalysis attacks. This paper begins by reviewing the common numerical methods used in today's binary-based steganography algorithms. We then describe some of the basic concepts and challenges encountered when implementing complex embedding algorithms based on a redundant number system. Finally, we introduce a novel class of numerically based multiple bit-plane decomposition systems, which we define as Adjunctive Numerical Representations. This new system not only provides the statistical stability required for effective steganography but also improves the embedding capacity of our multimedia carrier files.
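The Adjunctive Numerical Representations themselves are the paper's contribution; as a familiar stand-in, the sketch below uses a Fibonacci (Zeckendorf) decomposition to show the mechanics of embedding in non-standard bit planes. A real embedder also needs a shared rule for skipping pixels whose modified value would leave the valid range.

FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # covers 0..255

def to_fib(v):
    digits = []
    for f in reversed(FIB):            # greedy decomposition, msb plane first
        if v >= f:
            digits.append(1)
            v -= f
        else:
            digits.append(0)
    return digits

def from_fib(digits):
    return sum(f for f, d in zip(reversed(FIB), digits) if d)

def embed_bit(pixel, bit, plane=-3):   # a mid-significance Fibonacci plane
    d = to_fib(pixel)
    d[plane] = bit
    v = from_fib(d)
    return v if v <= 255 else pixel    # skip pixels that would overflow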
Surveillance systems have become powerful. Objects can be identified and intelligent surveillance services can generate events when a specific situation occurs. Such surveillance services can be organized in a Service Oriented Architecture (SOA) to fulfill surveillance tasks for specific purposes, with the services processing information at a high level, e.g., just the position of an object. Video data is still required to visualize a situation to an operator and is required as evidence in court. Processing personal and sensitive information threatens privacy. To protect the user and to comply with legal requirements, it must be ensured that sensitive information can only be processed for a defined purpose by specific users or services. This work proposes an architecture for access control that enforces the separation of data between different surveillance tasks. Access controls are enforced at different levels: for the users starting the tasks; for the services within the tasks, processing data stored in a central store or computed by other services; and for sensor-related services that extract information from the raw data and provide it.
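A minimal sketch of the enforcement idea, with the task, purpose, and data-level model assumed for illustration:

# Every access is checked against the surveillance task it belongs to, so a
# service can only read data produced for its own task, at its granted level.
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    task_id: str
    purpose: str      # e.g. "perimeter-watch"
    level: str        # "event", "position", or "video"

POLICY = {("svc-tracker", "task-7"): Grant("task-7", "perimeter-watch", "position")}

def authorize(service_id, task_id, requested_level):
    grant = POLICY.get((service_id, task_id))
    order = ["event", "position", "video"]  # video is the most sensitive level
    return grant is not None and order.index(requested_level) <= order.index(grant.level)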
A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms that effectively remove noise from images. In practice, it is difficult to remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain, and therefore does not leverage the framework that multi-scale transforms provide, in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulation illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
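A sketch of one plausible combination (parameters assumed, not the paper's MS-NLM): apply NLM to each wavelet detail subband with PyWavelets and scikit-image, keeping the approximation band untouched.

import numpy as np
import pywt
from skimage.restoration import denoise_nl_means

def ms_nlm(img, wavelet="db2", levels=2, h=0.08):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    out = [coeffs[0]]                         # keep the approximation band
    for detail in coeffs[1:]:
        out.append(tuple(denoise_nl_means(d, h=h * np.abs(d).max(),
                                          patch_size=5, patch_distance=6)
                         for d in detail))
    return pywt.waverec2(out, wavelet)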
Existing face recognition schemes are mostly based on extracting biometric feature vectors either from whole face images or from a fixed facial region (e.g., eyes, nose, or mouth). Extreme variation in quality conditions between the biometric enrolment and verification stages badly affects the performance of face recognition systems. Such problems have partly motivated several investigations into the use of partial facial features for face recognition. Partial face recognition is also potentially useful in several applications; for instance, it is used in forensics to identify individuals after accidents such as fires or explosions. In this paper, we propose a scheme to fuse the biometric information of partial face images incrementally based on their recognition accuracy (or discriminative power) ranks. The fusion scheme uses the optimal ratio of full/partial face images for each quality condition. We found that the scheme is also useful for full face images, enhancing authentication accuracy significantly while reducing the storage requirements and processing time of the biometric system. Our experiments show that the ratio of full/partial facial images required to achieve optimal performance varies from 5% to 80% according to the quality conditions, while the authentication accuracy improves significantly for low-quality biometric samples.
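The incremental rule can be sketched as follows, assuming per-region match scores and an externally supplied validation-accuracy function (mean-score fusion is an assumption; the paper ranks regions by discriminative power):

import numpy as np

def fuse_incrementally(region_scores, region_ranks, val_accuracy):
    order = np.argsort(region_ranks)           # best-ranked region first
    best_acc, best_set = 0.0, []
    for r in order:
        trial = best_set + [r]
        fused = np.mean(region_scores[:, trial], axis=1)  # mean-score fusion
        acc = val_accuracy(fused)
        if acc <= best_acc:
            break                               # stop once accuracy saturates
        best_acc, best_set = acc, trial
    return best_set, best_acc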
In this paper we propose to develop a computer-assisted reading (CAR) tool for ocular disease. This involves identification and quantitative description of the patterns in the retinal vasculature. The features taken into account are fractal dimension and vessel branching. Subsequently, a measure combining these features is designed to help quantify the progression of the disease. The aim of the research is to develop algorithms that help with parameterization of eye fundus images, thus improving diagnostics.
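The fractal-dimension feature can be sketched with the standard box-counting estimate on a binarized vessel map (the paper's exact feature definitions may differ):

import numpy as np

def box_counting_dimension(binary_img):
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        h = binary_img.shape[0] // s * s
        w = binary_img.shape[1] // s * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # occupied boxes
    # Slope of log N(s) against log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope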
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy-consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating-point operations. If a given algorithm implemented integer convolution instead of floating-point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods fall into 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled convolution. All testing was performed on an HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor Oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time for which convolution is responsible in a typical algorithm, the energy savings would likely result in significantly longer time between battery charges.
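For illustration only (the paper's implementations are in C and Java, with energy measured in hardware), the snippet below shows a static-size 3x3 integer convolution with the inner loops effectively unrolled, the structure whose arithmetic-type cost the study isolates:

import numpy as np

def conv3x3_unrolled(img):
    # 3x3 binomial kernel [[1,2,1],[2,4,2],[1,2,1]], valid region only,
    # with every operation in integer arithmetic.
    a = img.astype(np.int32)
    out = (    a[:-2, :-2] + 2 * a[:-2, 1:-1] +     a[:-2, 2:]
           + 2 * a[1:-1, :-2] + 4 * a[1:-1, 1:-1] + 2 * a[1:-1, 2:]
           +     a[2:, :-2] + 2 * a[2:, 1:-1] +     a[2:, 2:])
    return out // 16   # integer normalization by the kernel sum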
In the process of analyzing the surfaces of 3D-scanned objects, it is desirable to perform per-vertex calculations on a region of connected vertices, much as 2D image filters perform per-pixel calculations on a window of adjacent pixels. Operations such as blurring, averaging, and noise reduction would be useful for these applications, and are already well established in 2D image enhancement. In this paper, we present a method for adapting simple windowed 2D image processing operations to the problem domain of 3D mesh surfaces. The primary obstacle is that mesh surfaces are usually not flat, and their vertices are usually not arranged in a grid, so adapting the 2D algorithms requires a change of analytical model. First we characterize 2D rectangular arrays as a special case of a graph, with edges between adjacent pixels. Next we treat filter windows as a limitation on the walks from a given source node to every other reachable node in the graph. We tested the common windowed average, weighted average, and median operations on 3D meshes comprising sets of vertices and polygons, modeled as weighted undirected graphs. The edge weights are taken as the Euclidean distance between two vertices, calculated from their XYZ coordinates in the usual way. Our method successfully provides a new way to utilize these existing 2D filters. In addition, further generalizations and applications are discussed, including potential applications in any field that uses graph theory, such as social networking, marketing, telecom networks, and epidemiology.
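A sketch of the windowed average under these definitions, assuming an adjacency-list mesh representation: Dijkstra collects every vertex whose shortest walk from the source is within the window radius, and the attached values are averaged.

import heapq

def window_average(adj, values, source, radius):
    # adj: {vertex: [(neighbour, euclidean_edge_length), ...]}
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    window = list(dist)                       # all vertices within the radius
    return sum(values[v] for v in window) / len(window)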
The SIP proxy server is without doubt the centerpiece of any SIP IP telephony infrastructure. It often also provides services other than those related to VoIP traffic. These softswitches, however, very often become victims of attacks and threats coming from public networks. The paper deals with a system that we developed as an analysis and testing tool to verify whether a target SIP server is adequately secured and protected against real threats. The system is designed as an open-source application, thus allowing independent access, and is fully extensible with other test modules.
The aim of the paper is to show the usefulness of modern forensic software tools for iPhone examination. In particular, we focus on the new version of Elcomsoft iOS Forensic Toolkit and compare it with Oxygen Forensics Suite 2012 regarding functionality, usability and capabilities. It is shown how these software tools work and how capable they are of examining non-jailbroken and jailbroken iPhones.
Much current plant biology research focuses on understanding the genetics, physiology, and ecology of plants. The root is an important organ through which a plant takes up nutrients and water from the surrounding soil, and this capability is closely related to root physiology. Quantitative measurement and analysis of plant root architecture parameters are therefore very important for understanding and studying plant growth. A fundamental aim of developmental plant root biology is to understand how the three-dimensional morphology of plant roots arises through cellular mechanisms. However, traditional anatomical studies of plant development have mainly relied on two-dimensional images. Though this may be sufficient for some aspects of plant biology, a deeper understanding of plant growth and function increasingly requires at least some three-dimensional measures, with chemical staining used as a technique to bring pseudo-structure and segmentation to the cross-section image data. Parameters like uniformity of illumination and thickness of the specimen then become critical; unfortunately, these are also the causes of major variations. The variation in specimen thickness can be interpreted as an effect that increases the latent dimensionality of the data. Addressing the variability due to specimen thickness can then be viewed in a manifold-learning framework, wherein it is assumed that the data of interest lies on a manifold embedded within the higher-dimensional space and can be visualized in a low-dimensional space using manifold-learning constraints.
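A minimal sketch of that framing with scikit-learn (the data layout and parameters are assumed): Isomap embeds the cross-section images so that thickness-driven variation can be inspected along the recovered coordinates.

import numpy as np
from sklearn.manifold import Isomap

# X: one row per cross-section image, e.g. flattened, intensity-normalized.
X = np.random.rand(200, 4096)            # placeholder data, 64x64 sections
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
# embedding[:, 0] can then be compared against known thickness variation.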
The aim of this paper is to demonstrate the possibility of forensically matching DVD-R(W) disks to the serial number of the DVD burner used. While it has already been shown that burner identification (serial number and type of burner) works for CD-ROMs, it was largely unknown that this burner identification also works for DVD-R(W) disks when special drives and special software are used. A detailed analysis is given in this paper. Furthermore, a case study for a forensics training program for investigators is developed.
Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a hardware Trojan or by powering them up with fake configuration data. How can a human verifier authenticate the reader to be used for goods verification?
In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a non-volatile memory where the configuration data are stored. Here, except for the secure block, all the blocks are exposed to attacks.
At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted M0, is computed by the mobile device without being saved, then transmitted to the user in a secure environment. At the verification stage, the reader, which is challenged with nonces, sends MACs/HMACs of both the nonces and the MAC M0 (to be recomputed), keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computation unit. Here the verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human verifier, as their participation in the authentication process is increased. In return, the secret has to be shared with the auxiliary unit. This constraint is relaxed in a second protocol derived directly from Fiat-Shamir's scheme.
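The first protocol's message flow can be sketched as follows, assuming HMAC-SHA256 for both the MAC and HMAC roles mentioned above:

import hmac, hashlib, os

def mac(key, *parts):
    return hmac.new(key, b"".join(parts), hashlib.sha256).digest()

secret, config = b"device-secret", b"configuration-data"

# Registration: M0 binds the secret to the configuration; the device does not
# store it, but the verifier's trusted MAC unit keeps it.
M0 = mac(secret, config)

# Verification: the reader recomputes M0 from its current configuration, so a
# tampered configuration yields a wrong response to the nonce challenge.
nonce = os.urandom(16)
response = mac(secret, nonce, mac(secret, config))
assert hmac.compare_digest(response, mac(secret, nonce, M0))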
IT security and computer forensics are important components of information technology. From year to year, incidents and crimes that target IT systems, or are committed with their help, increase. More and more companies and authorities have security problems in their own IT infrastructure. To respond to these incidents professionally, it is important to have well-trained staff. The fact that many agencies and companies work with very sensitive data makes it necessary to further train their own employees in the field of IT forensics. Motivated by these facts, a training concept that allows the creation of practical exercises is presented in this paper. The focus is on the practical illustration of forensically important relationships.
Natural languages like Arabic, Kurdish, Farsi (Persian), Urdu, and other similar languages have many features that distinguish them from languages written in the Latin script. One of these important features is diacritics. These diacritics are classified as compulsory, like the dots used to identify/differentiate letters, and optional, like the short vowels used to emphasize consonants. Most native and well-trained writers often omit all or some of this second class of diacritics, and expert readers can infer their presence from the context of the written text. In this paper, we investigate the use of diacritic shapes and other characteristics as parameters of feature vectors for Arabic writer identification/verification. Segmentation techniques are used to extract the diacritics-based feature vectors from examples of Arabic handwritten text.
The results of an evaluation test carried out on an in-house database of 50 writers are presented, demonstrating the viability of using diacritics for writer recognition.
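The diacritics extraction step can be sketched with connected-component labelling, using an assumed area threshold to separate detached diacritics from sub-word bodies:

import numpy as np
from scipy import ndimage

def extract_diacritics(binary_img, max_area=60):
    labels, n = ndimage.label(binary_img)
    areas = ndimage.sum(binary_img, labels, index=range(1, n + 1))
    # Small detached components are treated as candidate diacritics.
    diacritic_ids = [i + 1 for i, a in enumerate(areas) if a <= max_area]
    centroids = ndimage.center_of_mass(binary_img, labels, diacritic_ids)
    return diacritic_ids, centroids  # shapes/positions feed the feature vector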
A number of methods have been developed in the past for color image enhancement, including retinex and color constancy algorithms. Retinex theory is based on psychophysical experiments using Mondrian patterns. Recently, multi-scale retinex algorithms have been developed. They combine several retinex outputs to produce a single output image that has both good dynamic range compression and color constancy, as well as good tonal rendition. Unfortunately, multi-scale retinex processing is time-consuming. In this paper we present a new algorithm for color and thermal image enhancement. Additionally, an experimental prototype system for fusing the two data types with depth data to create a three-dimensional map of the datasets is presented. The image processing algorithm utilizes a combination of Fourier-domain and retinex algorithms. Different types of thermal and natural-scene NASA images have been tested, along with other imagery. The primary advantages of the image processing algorithm are its reduced computational complexity and its contrast enhancement performance. Experimental results demonstrate that the algorithm works well with underexposed images. The algorithm also gives better contrast enhancement in most cases, thus bringing out the true colors in the image, and helps achieve both color constancy and local contrast enhancement. We compare the presented method with enhancement based on NASA's multi-scale retinex. Statistically and quantitatively, we show that our technique indeed results in enhanced images, validated by experiments with human observers. Additionally, the fusion of 2-dimensional (2D) thermal, 2D RGB, and 3-dimensional (3D) depth data (TRGBD) can be analyzed for the purpose of extrapolating thermal conductance and other thermal properties within a scanned environment. This allows energy assessments regarding structural boundaries, the effectiveness of insulation, leakages of heat, water and refrigerant, and computing the true value of observed thermal losses/gains as they relate not only to thermal properties but to geometries as well. The data generated from this analysis can be used in many other domains and process-evaluation fields, such as medicine, geology, and architecture.
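For reference, the comparison baseline, NASA-style multi-scale retinex, in its commonly used form (the weights and scales below are the usual defaults, not necessarily those of the paper):

import numpy as np
from scipy.ndimage import gaussian_filter

def msr(img, sigmas=(15, 80, 250), eps=1.0):
    # Equal-weight sum of single-scale retinex outputs:
    # log(I) - log(Gaussian_sigma * I) at each scale.
    img = img.astype(float) + eps
    out = np.zeros_like(img)
    for s in sigmas:
        out += (np.log(img) - np.log(gaussian_filter(img, s) + eps)) / len(sigmas)
    return out   # rescale to [0, 255] for display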