This PDF file contains the front matter associated with SPIE
Proceedings Volume 6569, including the Title Page, Copyright
information, Table of Contents, the Conference Committee listing, Introduction, and a Dedication to Larry A. Stockum.
Target tracking with radar and sonar is done in either spherical or rectangular coordinates. Often tracking is done in one reference frame while filtering, usually Kalman, is done in another reference frame. It is commonly assumed that the probability density functions can be treated the same in both reference frames. An extended Kalman filter is used under the assumption that the probability density function of the measurements after conversion can be adequately characterized by its mean and standard deviation. The transformation from spherical coordinates (range, bearing, elevation) to Cartesian coordinates (x, y, z) is non-linear, so the statistical characteristics of the measurement noise are changed significantly by the transformation. Thus, the characteristics that tracking filters are designed to optimize are changed as well by these coordinate transformations. Typical engineering practice uses approximations rather than exact solutions. The objective of this paper is to provide means to analytically characterize the probability density functions under these coordinate transformations. We then investigate the impact of approximating the noise statistics of the transformed coordinates on track quality.
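As a hedged illustration of the kind of transformation the paper analyzes, the sketch below converts a range/bearing/elevation measurement to Cartesian coordinates and propagates the measurement covariance through the Jacobian of the transformation. This is the usual first-order (EKF-style) approximation whose adequacy the paper examines; the function and variable names are illustrative, not taken from the manuscript.

```python
import numpy as np

def spherical_to_cartesian(r, az, el, sigma_r, sigma_az, sigma_el):
    """Convert a (range, azimuth, elevation) measurement to Cartesian x, y, z
    and propagate the (assumed diagonal) spherical covariance through the
    Jacobian of the transformation (first-order / EKF-style approximation)."""
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)

    # Jacobian of (x, y, z) with respect to (r, az, el)
    J = np.array([
        [np.cos(el) * np.cos(az), -r * np.cos(el) * np.sin(az), -r * np.sin(el) * np.cos(az)],
        [np.cos(el) * np.sin(az),  r * np.cos(el) * np.cos(az), -r * np.sin(el) * np.sin(az)],
        [np.sin(el),               0.0,                          r * np.cos(el)],
    ])
    R_sph = np.diag([sigma_r**2, sigma_az**2, sigma_el**2])
    R_cart = J @ R_sph @ J.T          # linearized Cartesian measurement covariance
    return np.array([x, y, z]), R_cart

# Example: 10 km range, 30 deg azimuth, 5 deg elevation (values are arbitrary)
z_cart, R_cart = spherical_to_cartesian(10e3, np.radians(30), np.radians(5),
                                        sigma_r=5.0, sigma_az=1e-3, sigma_el=1e-3)
```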
A study is performed of several multiple model tracking filter architectures that do not employ a Markov Switching
Matrix in their weighting mathematics. The Markov Switching Matrix, which is common to multiple model tracking
filters, does not have an "optimum" rule for defining its constituent probabilities. The only real constraint on
the probabilities is that each row of the matrix must add to unity. The other general rule is that the diagonal
elements should be "close to unity" and the off-diagonal terms should be correspondingly "small". Other than
these constraints, values are typically selected by observing the filter tracking performance over a wide set
of trajectory types and target dynamics. Several architectures are presented and their tracking performance
discussed. Comparisons are made with the performance of a conventional IMM for the same data.
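For reference, the sketch below spells out the constraints described above using a hypothetical two-model switching matrix (rows summing to unity, diagonals near one) and shows the standard IMM mixing step that such a matrix feeds; the numerical values are purely illustrative.

```python
import numpy as np

# Hypothetical Markov switching matrix for a two-model filter bank:
# each row sums to unity, diagonals are "close to unity".
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
assert np.allclose(P.sum(axis=1), 1.0)

mu = np.array([0.7, 0.3])           # current mode probabilities

# Standard IMM step that consumes the matrix: predicted mode probabilities
# and the mixing weights used to blend the per-model state estimates.
c = P.T @ mu                        # c[j] = sum_i P[i, j] * mu[i]
mixing = (P * mu[:, None]) / c      # mixing[i, j] = P[i, j] * mu[i] / c[j]
print(c, mixing)
```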
The interacting multiple model (IMM) estimator, which mixes and blends results of multiple filters according to their
mode probabilities, is frequently used to track targets whose motion is not well-captured by a single model. This paper
extends the use of an IMM estimator to computing impact point predictions (IPPs) of small ballistic munitions whose
motion models change when they reach transonic and supersonic speeds. Three approaches for computing IPPs are
compared. The first approach propagates only the track from the most likely mode until it impacts the ground. Since
this approach neglects inputs from the other modes, it is not desirable if multiple modes have near-equal probabilities.
The second approach for computing IPPs propagates tracks from each model contained in the IMM estimator to the
ground independent of each other and combines the resulting state estimates and covariances on the ground via a
weighted sum in which weights are the model probabilities. The final approach investigated here is designed to take
advantage of the computational savings of the first without sacrificing input from any of the IMM's modes. It fuses the
tracks from the models together and propagates the fused track to the ground. Note that the second and third approaches
reduce to the first if one of the models has a mode probability of one. Results from all three approaches are compared
in simulation.
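A minimal sketch of the combination step in the second approach, under the assumption that each model's propagated impact-point estimate is Gaussian: the per-model means and covariances are collapsed into a single estimate weighted by the mode probabilities (standard moment matching; the names and numbers are illustrative).

```python
import numpy as np

def combine_impact_points(means, covs, weights):
    """Collapse per-model impact-point estimates (means, covariances) into a
    single estimate using the mode probabilities as weights (moment matching)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    mean = sum(w * m for w, m in zip(weights, means))
    cov = sum(w * (C + np.outer(m - mean, m - mean))
              for w, m, C in zip(weights, means, covs))
    return mean, cov

# Two hypothetical models' ground-impact predictions (x, y) and covariances
means = [np.array([100.0, 50.0]), np.array([110.0, 48.0])]
covs = [np.eye(2) * 4.0, np.eye(2) * 9.0]
ipp_mean, ipp_cov = combine_impact_points(means, covs, weights=[0.6, 0.4])
```

Note that if one mode probability is one, the weighted sum reduces to that model's prediction, consistent with the reduction to the first approach noted above.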
An energy-aware, collaborative target tracking algorithm is proposed for ad-hoc wireless sensor networks. At every time
step, current measurements from four sensors are chosen for target motion estimation and prediction. The algorithm is
implemented distributively by passing sensing and computation operations from one subset of sensors to another. A robust multimodel Rao-Blackwellised particle filter (RBPF) algorithm is presented for tracking a highly maneuvering ground target in the sensor field. Not only is the proposed algorithm more computationally efficient than a generic particle filter (PF) for high-dimensional nonlinear and non-Gaussian estimation problems, it also handles target maneuvers effectively through stratified sampling of particles from a set of system models. In the simulation comparison, a highly maneuvering target moves through an acoustic sensor network field and is tracked by both the generic PF and the multimodel RBPF algorithms. The results show that our approach yields substantial performance improvements, especially when the target maneuvers.
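A hedged sketch of the stratified-sampling idea described above: particles are allocated across a set of motion models in fixed strata so that every model keeps a share of particles even when its posterior weight is small. The models, strata, and noise levels are placeholders, not the authors' RBPF.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_model(x, dt, q):        # nearly constant velocity: state [px, py, vx, vy]
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
    return F @ x + rng.normal(0.0, q, size=4)

def maneuver_model(x, dt, q):  # same kinematics, larger process noise for maneuvers
    return cv_model(x, dt, q * 10.0)

models = [cv_model, maneuver_model]
strata = [0.7, 0.3]            # fixed particle shares per model (illustrative)

def propagate_stratified(particles, dt=1.0, q=0.5):
    """Propagate particles through the model set with stratified allocation."""
    n = len(particles)
    counts = [int(round(f * n)) for f in strata]
    counts[-1] = n - sum(counts[:-1])      # ensure the counts sum to n
    out, start = [], 0
    for count, model in zip(counts, models):
        out.extend(model(x, dt, q) for x in particles[start:start + count])
        start += count
    return out

particles = [rng.normal(size=4) for _ in range(100)]
particles = propagate_stratified(particles)
```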
The purpose of the research is to develop acquisition, tracking, and pointing technologies for the Bifocal Relay Mirror
Spacecraft and verify these technologies with the experimental test-bed. Because of the stringent accuracy requirement
of the laser beam and the agile maneuverability requirement, significant research is needed to develop acquisition,
tracking, and pointing technologies for the Bifocal Relay Mirror Spacecraft. In this paper, development of the Bifocal
Relay Mirror Spacecraft experimental test-bed is presented in detail. The current operational results are also presented
including precision attitude control of the spacecraft for fine tracking and pointing.
Friction is a well-known performance limitation for gimbaled EO director systems. While much research has been directed at bearing friction, the well-known friction models in the literature, which are represented in the time, position, and rate domains, are not amenable to most LOS jitter analyses. Furthermore, the types of mission profiles to which large gimbals are subjected have received limited attention in this field of research, so the selection of an appropriate friction model is not obvious. This paper fits popular friction models to experimental data and studies the models in the frequency domain.
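As a hedged example of the model-fitting step (not the authors' data or chosen models), the sketch below fits a simple Coulomb-plus-viscous friction law to rate/torque samples with scipy.optimize.curve_fit; the parameter names and synthetic data are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def coulomb_viscous(rate, f_c, b):
    """Simple friction law: Coulomb level f_c plus viscous coefficient b."""
    return f_c * np.sign(rate) + b * rate

# Synthetic rate/torque data standing in for experimental gimbal measurements
rate = np.linspace(-2.0, 2.0, 200)            # rad/s
rng = np.random.default_rng(1)
torque = coulomb_viscous(rate, 0.8, 0.3) + rng.normal(0.0, 0.05, rate.size)

(f_c, b), _ = curve_fit(coulomb_viscous, rate, torque, p0=[1.0, 0.1])
print(f"Coulomb level ~ {f_c:.2f} N*m, viscous coefficient ~ {b:.2f} N*m*s/rad")
```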
The laser beam director (LBD) is a reflective telescope with a long optical Coude path. Its optical path can drift from its original alignment if rotational jitter of the telescope turret occurs or if the LBD structure deforms under external disturbances. Such misalignment makes the laser beam deviate from the line of sight of the telescope, so it must be monitored and corrected. We adapt null optics to the telescope in order to monitor the alignment state of the LBD. Misalignment is corrected in real time through the fast steering mirror located between the primary and secondary mirrors of the telescope. Test results on rotation of the telescope turret show that the LBD remains aligned in spite of rotational jitter of the turret.
The key measurements of Pointing/Tracking system performance are the abilities to responsively acquire intended
targets and to maintain low track error (the error associated with stabilizing the target in the image scene). Typically
good target acquisition and track performance are readily attainable under "nominal" conditions, i.e., targets with a high Signal to Noise Ratio (SNR) or targets that have easily discernible relative motion with respect to other possible targets or
clutter. It is in the absence of these favorable conditions that a track preprocessor is highly advantageous, and possibly
necessary, to meet performance requirements. Typical scenarios involve a relatively small, stationary or slow moving,
distant target within a field of view with a considerable amount of background clutter. To this end, the main thrust of the
Tunable Wavelet Target Extraction Preprocessor (TWTEP) is the ability to discern targets from within complex clutter.
With this capability, the TWTEP then "extracts" the target, of arbitrary shape and size, and presents a resultant image composed solely of a high-SNR target to a track process. The result: a track process that provides an enhanced
capability to accommodate both nominal and stressful pointing/tracking scenarios.
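The TWTEP itself is not published here; as a hedged stand-in, the sketch below shows one generic way a tunable wavelet decomposition can suppress low-frequency background and weak clutter before handing the result to a track process. The wavelet choice, level, and threshold are arbitrary tuning knobs, not the TWTEP's.

```python
import numpy as np
import pywt

def wavelet_clutter_suppress(frame, wavelet="db4", level=3, keep_sigma=3.0):
    """Suppress smooth background and weak clutter by zeroing the approximation
    band and soft-thresholding the detail coefficients of a 2-D wavelet transform."""
    coeffs = pywt.wavedec2(frame, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])              # drop smooth background
    out = [coeffs[0]]
    for details in coeffs[1:]:
        thr = keep_sigma * np.median(np.abs(details[0])) / 0.6745  # robust noise estimate
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in details))
    return pywt.waverec2(out, wavelet)

frame = np.random.rand(128, 128).astype(float)        # placeholder IR frame
enhanced = wavelet_clutter_suppress(frame)
```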
This paper presents a novel approach to target tracking using a measurement process based on spatio-temporal fractal
error. Moving targets are automatically detected using one-dimensional temporal fractal error. A template derived from
the two-dimensional spatial fractal error is then extracted for a designated target to allow for correlation-based template
matching in subsequent frames. The outputs of both the spatial and temporal fractal error components are combined and
presented as input to a kinematic tracking filter. It is shown that combining the two outputs provides improved tracking
performance in the presence of noise, occlusion, other moving objects, and when the target of interest stops moving.
Furthermore, reconciliation of the spatial and temporal components also provides a useful mechanism for detecting
occlusion and avoiding template drift, a problem typically present in correlation-based trackers. Results are
demonstrated using airborne MWIR sequences from the DARPA VIVID dataset.
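As a hedged illustration of the correlation stage described above (not the fractal-error measurement itself), the sketch below performs normalized cross-correlation of a template against a search window and returns the best-match offset; with a fractal-error template substituted for raw pixels, this is the matching step the abstract refers to.

```python
import numpy as np

def best_match(search, template):
    """Return (row, col) and score of the best normalized cross-correlation
    match of `template` inside `search` (brute-force sliding window)."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-12
    best, best_score = (0, 0), -np.inf
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            w = search[r:r + th, c:c + tw]
            w = w - w.mean()
            score = float(np.sum(w * t)) / (np.linalg.norm(w) * tn + 1e-12)
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score

search = np.random.rand(64, 64)
template = search[20:36, 30:46].copy()       # template cut from the frame
print(best_match(search, template))          # expect (20, 30) with score ~1.0
```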
The increasing demand for the protection of persons and facilities requires the application of sophisticated technologies
for surveillance and object tracking. For this purpose appropriate sensors are used like imaging IR sensors suitable for
day/night operation and laser radar supplying 3D information about the scene. In this context there is a requirement for automatic and semi-automatic methods that support the human observer in the decision-making process. A prevalent task is the automatic tracking of salient objects, such as vehicles or individual persons, in an image sequence over a time slice. Classical methods are based on template matching, which has shortcomings when the background is homogeneous or when passing objects occlude the target. The authors propose a new concept for generating templates for IR target
signatures based on the interpretation of laser range data in order to optimize the tracking process. The testbed is realized
by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, IMU). Results are demonstrated by the
analysis of an exemplary data set. A vehicle situated in a complex scenario is acquired by a forward moving sensor
platform and is tracked robustly by the proposed method.
In this paper, we focus on the problem of automated surveillance in a parking lot scenario. We call our research system
VANESSA, for Video Analysis for Nighttime Surveillance and Situational Awareness. VANESSA is capable of: 1)
detecting moving objects via background modeling and false motion suppression, 2) tracking and classifying pedestrians
and vehicles, and 3) detecting events such as a person entering or exiting a vehicle. Moving object detection utilizes a multi-stage cascading approach to identify pixels that belong to true objects and reject any spurious motion (e.g.,
due to vehicle headlights or moving foliage). Pedestrians and vehicles are tracked using a multiple hypothesis tracker
coupled with a particle filter for state estimation and prediction. The space-time trajectory of each tracked object is
stored in an SQL database along with sample imagery to support video forensics applications. The detection of pedestrians
entering/exiting vehicles is accomplished by first estimating the three-dimensional pose and the corresponding entry
and exit points of each tracked vehicle in the scene. A pedestrian activity model is then used to probabilistically assign
pedestrian tracks that appear or disappear in the vicinity of these entry/exit points. We evaluate the performance of
tracking and pedestrian-vehicle association on an extensive data set collected in a challenging real-world scenario.
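As a hedged sketch of the trajectory storage described above (the schema names are invented, not VANESSA's), a minimal sqlite3 table can hold per-frame space-time samples of each track plus a path to a sample image chip for forensics:

```python
import sqlite3

conn = sqlite3.connect("tracks.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS trajectory (
        track_id   INTEGER,
        frame      INTEGER,
        t_seconds  REAL,
        x_pixels   REAL,
        y_pixels   REAL,
        class      TEXT,       -- 'pedestrian' or 'vehicle'
        chip_path  TEXT        -- sample imagery for forensics
    )
""")
conn.execute(
    "INSERT INTO trajectory VALUES (?, ?, ?, ?, ?, ?, ?)",
    (17, 1042, 34.73, 312.5, 188.0, "pedestrian", "chips/track17_f1042.png"),
)
conn.commit()

# Forensic query: full space-time path of one tracked object
rows = conn.execute(
    "SELECT frame, x_pixels, y_pixels FROM trajectory WHERE track_id = ? ORDER BY frame",
    (17,),
).fetchall()
```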
In a manually operated visual tracking system, a camera operator follows an object of interest by moving the
camera, then gains additional details about the object by zooming in. As the active vision field progresses, the
ability to automate such a system is nearing fruition. One hurdle limiting the deployment of real-time visual
tracking systems is in the object recognition algorithms that often have restrictive scale and pose requirements.
If those conditions are not met, the performance of the system rapidly degrades to failure. The ability of an
automatic fixation system to capture quality video of a non-cooperative moving target is strongly related to the
response time of the mechanical pan, tilt, and zoom platform. However, the price of such a platform rises with its
performance. The goal of this work is to investigate the feasibility and issues that arise when using inexpensive
off-the-shelf components in the development of a visual tracking system that provides scale-invariant tracking.
One of the main challenges is in the zooming action. Optical zoom acts as a measurement gain, amplifying both
resolution and tracking error. Previous work has shown that adding a second camera with fixed focal length can
assist the zooming camera if it loses fixation, effectively bounding the error. Furthermore, optical zoom has a
longer time-constant than digital zoom. This work proposes a dual camera hybrid zoom configuration where
digital zoom is combined with optical zoom to achieve a behavior closer to an ideal zooming action.
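A hedged sketch of the hybrid zoom idea under simple assumptions: the optical zoom is rate-limited (reflecting its longer time constant), so digital zoom transiently makes up the difference between the commanded and achieved optical magnification, at the cost of resolution, until the lens catches up. Parameter names and limits are illustrative, not the authors' values.

```python
def hybrid_zoom_step(target_mag, optical_mag, dt,
                     optical_rate=0.5, digital_max=4.0):
    """One control step: slew the optical zoom toward the target magnification
    at a limited rate, and fill the residual with digital zoom (clamped)."""
    error = target_mag - optical_mag
    step = max(-optical_rate * dt, min(optical_rate * dt, error))
    optical_mag += step                                  # slow optical action
    digital_mag = min(max(target_mag / optical_mag, 1.0), digital_max)
    return optical_mag, digital_mag                      # total ~= optical * digital

optical = 1.0
for _ in range(10):                                      # 10 steps toward 6x total
    optical, digital = hybrid_zoom_step(6.0, optical, dt=0.1)
    print(f"optical {optical:.2f}x  digital {digital:.2f}x  total {optical*digital:.2f}x")
```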
The depth of absorption bands in observed spectra of distant, bright sources can be used to estimate range to the source. Previous efforts in this area relied on Beer's Law to estimate range from observations of infrared CO2 bands, with disappointing results. A modified approach is presented that uses band models and observations of the O2 absorption band near 762 nm. This band is spectrally isolated from other atmospheric bands, which enables direct estimation of molecular absorption from observed intensity. Range is estimated by comparing observed values of band-average absorption against predicted curves derived from either historical data or model predictions. Accuracy of better than 0.5% has been verified in short-range (up to 3 km) experiments using a Fourier transform interferometer at 1 cm-1 resolution. A conceptual design is described for a small, affordable passive ranging sensor suitable for use on tactical aircraft for missile attack warning and time-to-impact estimation. Models are used to extrapolate experimental results (using 1 cm-1 resolution data) to analyze the expected performance of this filter-based system.
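A hedged sketch of the inversion step described above: given a predicted curve of band-average O2 absorption versus range (from historical data or an atmospheric model), an observed absorption value is mapped to range by interpolation. The curve here is synthetic and only for illustration.

```python
import numpy as np

# Predicted band-average absorption of the 762 nm O2 band vs. range.
# In practice this curve would come from historical data or an atmospheric
# model; here it is a synthetic, monotonically increasing placeholder.
range_grid_km = np.linspace(0.1, 30.0, 300)
predicted_absorption = 1.0 - np.exp(-0.15 * np.sqrt(range_grid_km))

def range_from_absorption(observed_absorption):
    """Invert the predicted curve by interpolation (absorption -> range)."""
    return float(np.interp(observed_absorption,
                           predicted_absorption, range_grid_km))

print(range_from_absorption(0.35))   # estimated range in km for one observation
```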
In order to assess an object in space, in the air, or on the ground, it is first necessary to acquire and track it. In this paper, we will discuss distant object tracking, both passively and actively, and show some recent results from the Air Force Maui Optical and Supercomputing (AMOS) site and also from the Starfire Optical Range (SOR). In the past ten years, we have moved well beyond passive tracking on objects, to obtain the first-ever high-bandwidth closed loop tracks on skin satellites. But even passive tracking has developed further, with the advent of new sensor technology and also new beam control stabilization techniques. We will review some of our results here. In addition, a colleague and I have developed some new techniques to unambiguously estimate the active tracking jitter and boresight errors solely from the signal returned by the object being illuminated. We will review some of those results as well, and point the reader toward a more thorough published paper on that topic.
The tomographic scanning (TOSCA) imager was invented by the author in 2003. Initially, the system was based on
reconstructing an image from the signal of a simple single pixel, conical scan FM-reticle sensor using tomographic
techniques. Although conical scan reticle sensors have been used for several decades for real-time tracking purposes, the imaging properties of the single-pixel conical scan reticle system were left unexplored until recently, although multi-target
discrimination was demonstrated with multi-spectral versions of the system. The initial system presented by the author
demonstrated the ability to discriminate between multiple spots in the field of view in a fairly simple scenario.
Advances have been made in both theory and technology, mainly with the introduction of a nutating circular aperture in the scanning optics and the use of Fourier-transform ramp filters during reconstruction. TOSCA is in principle found to be a perfect imaging system, limited only by practical aspects such as the number of angular scans, the spatial
sampling, noise and vibration. The simplicity of the hardware, together with the rapid advances in high performance,
low cost computing means the system has a potential for low-cost applications such as in expendable multi-spectral
thermal imagers. This paper will present the current state of the technology, including improvements in algorithms and reticle shapes, and
look at artefacts found in various images due to different geometries, as well as ways to handle these artefacts. Several
noise generating processes and their effects will be presented and illustrated with results from digital simulations.
Requirements for image processing in terms of computing power are investigated, together with the potential for
parallelization.
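As a hedged illustration of the reconstruction machinery mentioned above (ramp filtering of the angular scans followed by backprojection), a compact numpy sketch of generic filtered backprojection is shown below; it is not the TOSCA-specific processing, and the sinogram is a placeholder.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Reconstruct an image from projections: apply a Fourier-domain ramp
    filter to each angular scan, then backproject onto the image grid."""
    n_det, n_ang = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                   # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))

    # Backprojection onto an n_det x n_det grid centered at the origin
    coords = np.arange(n_det) - n_det / 2.0
    X, Y = np.meshgrid(coords, coords)
    image = np.zeros((n_det, n_det))
    for k, theta in enumerate(np.radians(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        image += np.interp(t.ravel(), np.arange(n_det), filtered[:, k]).reshape(n_det, n_det)
    return image * np.pi / (2 * len(angles_deg))

sinogram = np.random.rand(128, 60)          # placeholder: 128 detector samples, 60 angles
image = filtered_backprojection(sinogram, np.linspace(0.0, 180.0, 60, endpoint=False))
```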
Heterogeneous camera-based surveillance systems provide more robust tracking of objects. To take advantage of additional cameras, it is necessary to establish the geometrical relationship between the cameras and the relationship between an object and a camera. This paper presents an algorithm that can track non-rigid objects in real time in a night-watch system that does not have sufficient light. The proposed method adopts a hierarchical active shape model (ASM) for real-time tracking and adaptive landmark-point assignment to reduce the computational load at each level. The active shape model is robust for tracking non-rigid objects and handles occlusion, because it deforms the average shape of an object using its trained contour information. The proposed tracking algorithm uses information from the CCD sensor to track objects in the daytime and information from the IR sensor to track objects at night. When complete occlusion occurs, the proposed algorithm predicts the object's motion using the historical tracking information and can maintain the track. The experimental results show that an object can be tracked both day and night using its trained contour information, and confirm that robust tracking is possible under partial occlusion. Building on the proposed algorithm, we will develop a real-time region alignment algorithm for a heterogeneous camera-based surveillance system in a complex environment.
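A hedged sketch of the occlusion-handling idea (not the authors' ASM implementation): while the object is fully occluded, its position is coasted from the recent tracking history under a constant-velocity assumption until measurements resume.

```python
import numpy as np

def coast_through_occlusion(history, n_steps):
    """Predict positions during full occlusion by extrapolating the average
    velocity estimated from the recent tracking history (list of (x, y))."""
    pts = np.asarray(history, dtype=float)
    velocity = np.mean(np.diff(pts, axis=0), axis=0)     # mean per-frame motion
    return [pts[-1] + velocity * (k + 1) for k in range(n_steps)]

history = [(100, 50), (104, 52), (108, 54), (112, 56)]   # last tracked positions
predicted = coast_through_occlusion(history, n_steps=5)  # positions while occluded
```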
Video tracking is used in military operations and homeland defense. Multiple cameras are mounted on an airplane that flies in a circle and points to a central location. The images are pre-registered and a single large image is sent to a ground station at the rate of a frame per second. The first step needed for tracking is obtaining measurements. The video undergoes additional registration and processing to produce multi-frame motion detections. These measurements are passed to the tracking algorithm. Tracking through an urban environment has its own unique challenges. Targets frequently cross paths, go behind one another, and go behind buildings or into shadowed areas. Additional challenges include Move-Stop-Move, parallax, and track association with highly similar targets. These challenges need to be overcome with up to a thousand vehicles, so processing speed is crucial. The project is Open-Source to aid in overcoming these technical challenges. Alternative trackers (IMM, MHT), features, association methods, track-initiation and deletion (M/N or LU), state variables, or other specialized routines (for Move-Stop-Move, parallax, etc.) will be tried and analyzed with representative data. By keeping it Open-Source, any ideas to improve the system can be easily implemented and analyzed. This paper presents current findings and the state of the project.
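The abstract lists M/N track initiation as one option to be tried; as a hedged reference sketch (generic logic, not the project's code), a track is confirmed once it receives M detections within a sliding window of N frames and deleted after a run of misses:

```python
from collections import deque

class MofNInitiator:
    """Confirm a tentative track after M hits in the last N frames;
    delete it after `max_misses` consecutive missed updates."""
    def __init__(self, m=3, n=5, max_misses=4):
        self.m, self.max_misses = m, max_misses
        self.hits = deque(maxlen=n)
        self.misses = 0
        self.status = "tentative"

    def update(self, detected: bool):
        self.hits.append(1 if detected else 0)
        self.misses = 0 if detected else self.misses + 1
        if self.status == "tentative" and sum(self.hits) >= self.m:
            self.status = "confirmed"
        if self.misses >= self.max_misses:
            self.status = "deleted"
        return self.status

track = MofNInitiator()
for hit in [True, False, True, True, False, False, False, False]:
    print(track.update(hit))   # tentative ... confirmed ... deleted
```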
As more and more research effort is drawn into
object tracking algorithms, the ability to assess the performance of
these algorithms quantitatively has become a fundamental issue in
computer vision. Because tracking systems have to operate in widely
varying conditions (different weather conditions, background and
target characteristics, etc), a large test bed of video sequences is
needed in order to obtain a comprehensive evaluation of a tracker
across the whole range of its operating conditions. However, it is
very unlikely that a dataset of real video sequences representative
of the whole range of operating conditions of a tracker together
with its ground truth could be obtained, and building a realistic
synthetic dataset of such sequences would require costly advanced
simulation platforms.
In the new evaluation method proposed in this paper, the operational
criteria of the tracking system are turned into objective measures
and used to generate a synthetic dataset, non-photorealistic, but
statistically representative of the whole range of operating
conditions. The assessment of an algorithm using our method provides
both a quantitative evaluation of the algorithm and the borders of
its validity domain. The performance measurement of an algorithm on
a synthetic sequence is shown to be consistent with the measurement
on a real sequence with the same criteria. The benefit of this
approach is twofold: it provides the developer with a way to
concentrate on the weaknesses of his algorithm, and helps the system
designer to choose the algorithm that best fits the operating
constraints.
There is a desire to use Linux in military systems. Customers are requesting contractors to use open source to the
maximum possible extent in contracts. Linux is probably the operating system of choice to meet this need. It is
widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem.
Linux was not originally built to be a real time operating system. There are many places where interrupts can and will
be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is
from RTLinux, which attempts to build a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar interface; however, support for the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2]. Thus it is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
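Complementing the kernel tuning discussed in the paper, a hedged user-space sketch of how a tracking task is typically given real-time scheduling on a standard Linux kernel is shown below (requires appropriate privileges; this is generic POSIX scheduling, not the paper's specific tuning recipe).

```python
import os

def make_realtime(priority=80):
    """Request SCHED_FIFO real-time scheduling for the current process.
    On a stock kernel this bounds scheduling latency but, as the paper notes,
    interrupt handling and kernel preemption still need tuning."""
    param = os.sched_param(priority)
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, param)   # 0 = calling process
        print("SCHED_FIFO granted at priority", priority)
    except PermissionError:
        print("Need CAP_SYS_NICE / root to set real-time priority")

make_realtime()
```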
Infrared signature measurement capability has a key role in the electronic warfare (EW) self protection systems'
development activities. In this article, the IRLooK system and its capabilities are introduced. IRLooK is a truly innovative mobile infrared signature measurement system, with all of its design, manufacturing, and integration accomplished through an engineering philosophy unique to ASELSAN. IRLooK measures the infrared signatures of military and civil platforms such as fixed/rotary-wing aircraft, tracked/wheeled vehicles, and navy vessels. IRLooK has the
capabilities of data acquisition, pre-processing, post-processing, analysis, storing and archiving over shortwave, mid-wave
and long wave infrared spectrum by means of its high resolution radiometric sensors and highly sophisticated
software analysis tools.
The sensor suite of IRLooK System includes imaging and non-imaging radiometers and a spectroradiometer. Single or
simultaneous multiple in-band measurements as well as high radiant intensity measurements can be performed. The
system provides detailed information on the spectral, spatial and temporal infrared signature characteristics of the targets.
It also determines IR decoy characteristics. The system is equipped with a high-quality, field-proven two-axis tracking
mount to facilitate target tracking. Manual or automatic tracking is achieved by using a passive imaging tracker. The
system also includes a high quality weather station and field-calibration equipment including cavity and extended area
blackbodies. The units composing the system are mounted on flat-bed trailers and the complete system is designed to be
transportable by large body aircraft.
In early 2001, Boeing-SVS, Inc. (BSVS) began an internal research and development (IR&D) project, dubbed the dual
line of sight (DLOS) experiment, to perform risk reduction on the development of the control systems and mode logic
for a strategic laser relay mirror system. The DLOS experiment uses primarily commercial off-the-shelf (COTS)
hardware and real-time system software, plus internally-designed gimbals and flexible mode logic tools to emulate a
scalable relay mirror engagement. The high-level, nominal engagement sequence begins with the laser source gimbal
establishing a line of sight with the relay receiver gimbal by closing passive acquisition and fine-tracking loops.
Simultaneously, the receiver gimbal closes passive acquisition and fine-tracking loops on the laser source, and a low-power,
660-nanometer alignment laser is propagated through the system. Finally, the transmitter gimbal closes passive
acquisition and fine-track loops on a target, and the system propagates a simulated high-energy laser (HEL) on that line
of sight onto target models. In total, the DLOS experiment closes 28 control loops. For the strategic scenario, a model
rocket target is illuminated with a light-emitting diode and tracked by the BSVS advanced reconfigurable trackers using
a centroid algorithm. The strategic scenario also uses a 532-nanometer laser to close an active track loop using a Linux
tracker. To better align with our business capture strategy, the emphasis of the experiment in 2005 has shifted to
emulating an urban tactical engagement and developing weapon system operator consoles.
This paper presents an image seeker simulation including image processing, servo control, target model, and missile
trajectory. We propose a software architecture for a seeker embedded computer that makes core processing algorithms, including image processing, reusable at the source level across multiple platforms. We present the embedded software simulator implemented in C/C++, the servo control simulator implemented in Matlab, and an integrated simulator that combines the two using Windows Component Object Model (COM) technology. The integrated simulation enables developers to study the interaction between image processing and servo control for missions including lock-on and target tracking. The implemented simulator can be operated on low-cost computer systems and can be used for algorithm development and analysis during design, implementation, and evaluation.
Simulation examples for a short range ground-to-ground missile seeker are presented.
Vibration Control and Stabilization in EO Equipment: Joint Session with Conference 6561
This paper describes the use of adaptive filtering to control vibration and optical jitter. Adaptive filtering is a class of
signal processing techniques developed over the last several decades and since applied to problems ranging from
communications to image processing. Basic concepts in adaptive filtering and feedforward control are reviewed. A
series of examples in vibration, motion and jitter control, including cryocoolers, ground-based active optics systems,
flight motion simulators, wind turbines and airborne optical beam control systems, illustrates the effectiveness of the
adaptive methods. These applications make use of information and signals that originate from system disturbances and
minimize the correlations between disturbance information and error and performance measures. The examples
incorporate a variety of disturbance types including periodic, multi-tonal, broadband stationary and non-stationary.
Control effectiveness with slowly-varying narrowband disturbances originating from cryocoolers can be extraordinary,
reaching 60 dB of reduction or rejection. In other cases, performance improvements are only 30-50%, but such
reductions effectively complement feedback servo performance in many applications.
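As a hedged sketch of the feedforward adaptive filtering concept reviewed in the paper, the code below is a textbook LMS adaptive noise canceller: a reference signal correlated with the disturbance drives an FIR filter whose weights adapt to minimize the measured error. It stands in for none of the specific systems listed above, and the plant dynamics that would call for a filtered-x variant are omitted; the signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def lms_cancel(reference, disturbance, n_taps=32, mu=0.01):
    """Textbook LMS adaptive noise canceller: adapt FIR weights so the filter
    output (driven by the reference) cancels the disturbance in the error."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    error = np.zeros(len(disturbance))
    for k in range(len(disturbance)):
        buf = np.roll(buf, 1)
        buf[0] = reference[k]
        y = w @ buf                      # feedforward correction
        error[k] = disturbance[k] - y    # residual jitter
        w += mu * error[k] * buf         # LMS weight update
    return error

# Synthetic narrowband disturbance (e.g., a cryocooler tone) sensed by a reference
t = np.arange(5000) / 1000.0
reference = np.sin(2 * np.pi * 60.0 * t) + 0.05 * rng.normal(size=t.size)
disturbance = 0.8 * np.sin(2 * np.pi * 60.0 * t + 0.4)
residual = lms_cancel(reference, disturbance)
print("residual RMS:", np.sqrt(np.mean(residual[-1000:] ** 2)))
```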
The pointing and imaging performance of precision optical systems is degraded by disturbances on the system that create optical jitter. These disturbances can be caused by structural motion of optical components due to vibration sources that (1) originate within the optical system, (2) originate external to the system and are transmitted through the structural path in the environment, and (3) are air-induced vibrations from acoustic noise. Beam control systems can suppress optical jitter, and active control techniques can be used to extend performance by incorporating information from accelerometers, microphones, and other auxiliary sensors. In some applications, offline fixed gain controllers can be used to minimize jitter. However there are many applications in which a real-time adaptive control approach would yield improved optical performance. Often we would like the capability to adapt in real-time to a system which is time-varying or whose disturbances are non-stationary and hard to predict. In the presence of these harsh, ever-changing environments we would like to use every available tool to optimize performance. Improvements in control algorithms are important, but another potentially useful tool is a real-time adaptive control method employing optimal sensing strategies. In this approach,
real-time updating of reference sensors is provided to minimize optical jitter. The technique selects an optimal subset of sensors to use as references from an array of possible sensor locations. The optimal, weighted reference sensor set is well correlated with the disturbance and when used with an adaptive control algorithm, results in improved line-of-sight jitter performance with less computational burden compared to a controller which uses multiple reference sensors. The proposed technique is applied to an experimental test bed in which multiple proof-mass actuators generate structural vibrations on a flexible plate. These vibrations are transmitted to an optical mirror mounted on the plate, resulting in optical jitter as measured by a position sensing detector. Accelerometers mounted on the plate are used to form the set of possible optimal reference sensors. Reduction of the structural vibration of optical components is attained using a fast steering mirror which results in a reduction of the corresponding jitter.
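A hedged sketch of the sensor-selection idea, using a simple correlation ranking as a stand-in for the optimal weighted selection developed in the paper: candidate accelerometer channels are ranked by their correlation with the measured jitter, and the top few are kept as references for the adaptive controller. The data and channel count are synthetic.

```python
import numpy as np

def select_reference_sensors(sensor_data, jitter, n_keep=2):
    """Rank candidate reference channels by absolute correlation with the
    measured optical jitter and return the indices of the best n_keep.
    sensor_data: (n_sensors, n_samples); jitter: (n_samples,)."""
    j = jitter - jitter.mean()
    scores = []
    for channel in sensor_data:
        c = channel - channel.mean()
        scores.append(abs(np.dot(c, j)) / (np.linalg.norm(c) * np.linalg.norm(j) + 1e-12))
    ranked = np.argsort(scores)[::-1]
    return ranked[:n_keep], np.array(scores)

rng = np.random.default_rng(3)
jitter = rng.normal(size=2000)
sensors = np.vstack([0.9 * jitter + 0.1 * rng.normal(size=2000),   # well correlated
                     rng.normal(size=2000),                         # uncorrelated
                     0.5 * jitter + 0.5 * rng.normal(size=2000)])
best, scores = select_reference_sensors(sensors, jitter)
```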
Inertial Reference Units (IRUs) are the basic reference for a precision pointing system. These units must provide an
inertially stable light source to be used as the reference to align the outgoing laser beam and to reject beam train jitter
due to vibrations. The IRU will be subjected to 6 degrees of freedom motion during operation. The correct operation of
an IRU requires it to measure the angular motion and not be affected by the linear input vibration. Testing of these units
is difficult, since the vibration input motion may be perfectly correlated between the angular inputs and the linear inputs.
This correlation makes it impossible to separate the angular and linear IRU responses during a test, even with perfect
measurements of the input vibrations. The solution to this problem is to obtain a vibration test station that can provide
linear motion without any angular motion, and angular motion without linear motion. This paper will describe the
evaluation of the test tables and show test data from an IRU that indicates how these tests can be beneficial in identifying
performance problems.
This paper presents a new approach to closed-loop control of optical jitter with a new liquid crystal beam steering
device. In contrast to conventional fast steering mirrors, where the laser beam is reflected off the controlled mirror surface, the transmissive liquid crystal beam steering device optically redirects the laser beam. The new device has no moving parts and requires low operating power. This research suggests that the new device can replace the fast
steering mirrors in a variety of electro-optic systems. The functionality of the transmissive liquid crystal beam
steering device along with the analysis of real-time adaptive control experiments are described in this paper. The
experimental results show that the new liquid crystal beam steering device can reject disturbances with an LTI
feedback controller, and that the disturbance rejection capability can be improved significantly with feedforward
adaptive control.