This PDF file contains the front matter associated with SPIE Proceedings Volume 9249, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
The target range of infrared (IR) point-target warning systems is determined by the effective entrance pupil diameter of
the system's optics. In addition, the system's F/# is usually set by the detector (as in cryogenically cooled detectors),
and the detector's aspect ratio usually sets the field proportions (5:4, 4:3, etc.). Thus, for wide-angle
systems, for example, the horizontal coverage angle usually determines the vertical one.
We propose a system including anamorphic optics that changes the effective focal length of each axis independently
while keeping the detector's given F/#, thus changing the effective aperture. Since the range is approximately proportional to
the effective aperture, we achieve a range improvement equal to the square root of the vertical-to-horizontal focal-length
ratio, reducing the vertical coverage accordingly. In this way, we make the field proportions less dependent on the
detector's proportions.
Using this method, wide-angle systems can improve target detection range at the expense of
vertical coverage, without changing the horizontal coverage or increasing the number of detection units (e.g. FLIRs).
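As a quick illustration of the scaling claimed above (our arithmetic, with made-up focal lengths, not values from the paper):

```python
import math

def range_gain(f_horizontal_mm, f_vertical_mm):
    """Approximate detection-range gain of an anamorphic system.

    Per the abstract, with the detector F/# kept fixed on both axes, range
    improves roughly with the square root of the vertical-to-horizontal
    focal-length ratio. The focal lengths below are illustrative only.
    """
    return math.sqrt(f_vertical_mm / f_horizontal_mm)

# Example: doubling the vertical focal length (halving vertical coverage)
gain = range_gain(f_horizontal_mm=50.0, f_vertical_mm=100.0)
print(f"range improvement: x{gain:.2f}")  # ~1.41
```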
This paper addresses the problem of disturbing radiation originating from the interior of radiometric
microbolometer-based infrared cameras. The amount of internal radiation depends in particular on the ambient
temperature: a variation in ambient temperature changes the temperature distribution inside the camera. The
approach proposed here determines the disturbing radiation without using a shutter, by measuring the internal thermal
state with several temperature probes and deducing the disturbing radiation flux. Because of this discrete temperature
measurement, the present thermal state of the camera interior cannot be determined as precisely as by performing a
shutter process. The position of the temperature measurement is therefore crucial for the significance of the relation
between measured temperature and disturbing radiation flux. Furthermore, the transient thermal behavior during a
cooling or heating period of the camera enclosure is a non-ergodic process [1]. Two approaches addressing these problems
are analyzed.
The first approach is based on using more than one temperature probe at different positions inside the camera.
Each measurement position has its own characteristic heat-conductance and convection parameters;
therefore, the low-pass behavior and the corresponding response time of the measured temperature relative to the
ambient temperature differ. Developing a thermal model from several probes, which captures the transient
thermal trend more reliably, reduces the calculation uncertainty.
A second approach is to separate the transient and the steady-state behavior of the calculation model. If the camera is
able to follow a slow change of ambient temperature completely, it always remains in steady state and the process is
ergodic. Only in the case of an abrupt change of ambient temperature does the thermal behavior leave the steady state,
and a transient correction factor becomes necessary. This factor has to take the history of the measured temperature into account.
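The probe-based idea can be sketched as follows; the first-order low-pass model, the time constants, and the linear flux estimator are our illustrative assumptions, not the paper's calibration:

```python
# Illustrative sketch (not the paper's model): each temperature probe is
# modelled as a first-order low-pass of the ambient temperature with a
# probe-specific time constant. Combining probes with different response
# times encodes the transient thermal state without a shutter.

def lowpass_step(t_probe, t_ambient, dt, tau):
    """One explicit-Euler step of dT/dt = (T_ambient - T_probe) / tau."""
    return t_probe + dt * (t_ambient - t_probe) / tau

def disturbing_flux(temps, weights, offset=0.0):
    """Hypothetical linear model: flux estimated from N probe temperatures.
    Weights and offset would be fitted in a calibration run."""
    return offset + sum(w * t for w, t in zip(weights, temps))

# Two probes with different time constants tracking an ambient step change
t1, t2, ambient, dt = 20.0, 20.0, 35.0, 1.0
for _ in range(60):
    t1 = lowpass_step(t1, ambient, dt, tau=30.0)   # fast probe
    t2 = lowpass_step(t2, ambient, dt, tau=300.0)  # slow probe
# The fast probe has nearly settled; the slow one lags, encoding the transient
flux = disturbing_flux([t1, t2], weights=[0.4, 0.6])
print(round(t1, 1), round(t2, 1), round(flux, 1))
```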
Ensuring that a future nuclear arms control agreement can be verified is a complex technical challenge. Tamper Indicating Enclosures (TIEs) are likely to be deployed as part of a chain-of-custody regime, providing an indication of an unauthorised attempt to access an item covered by the agreement. This paper assesses optical fibre techniques for ensuring boundary control as part of a TIE design. Optical fibre damage, subsequent repair attempts, enclosure construction considerations and unique identification features have been evaluated for a selection of fused-silica optical fibres. In particular, the paper focuses on detecting a fibre repair attempt, presents a method for increasing repair resistance, and describes a method for uniquely identifying an enclosure using the optical signature from the embedded optical fibre.
Passive athermalization of lenses has become a key technology for automotive and other outdoor applications using
modern uncooled 25, 17 and 12 micron pixel pitch bolometer arrays. Typical pixel counts for thermal imaging are
384x288 (qVGA), 640x480 (VGA), and 1024x768 (XGA). Two-lens arrangements (doublets) represent a
cost-effective way to satisfy the resolution requirements of these detectors at F-numbers of 1.4 or faster.
Thermal drift of the index of refraction and geometrical changes (in lenses and housing) with temperature defocus
the initial image plane from the detector plane. Passive athermalization restricts this loss of spatial resolution over
a wide temperature range (typically -40°C to +80°C) to an acceptable value without any additional external refocusing.
Lenses with long focal lengths and high apertures in particular require athermalization. A careful choice of lens and
housing materials and sophisticated dimensioning lead to three different principles of passive compensation: Passive
Mechanical Athermalization (PMA) shifts the complete lens cell, Passive Optical and Mechanical
Athermalization (POMA) shifts only one lens inside the housing, and Passive Optical Athermalization (POA) works
without any mechanism.
All three principles are demonstrated for a typical narrow-field lens (HFOV about 12°) with high aperture
(aperture-based F-number 1.3) for the current uncooled reference detector (17-micron VGA). Six design examples
using different combinations of lens materials show the impact on spatial lens resolution, overall length, and
weight.
First-order relations are discussed; they give some hints for optimization.
The pros and cons of the different passive athermalization principles are evaluated with regard to housing design, availability
of materials and cost. Examples with a convergent GASIR®1 lens in front are distinguished by the best resolution, shortest
overall length, and lowest weight.
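The first-order relations can be illustrated with the standard thin-lens thermal-defocus estimate; the material values below are illustrative orders of magnitude, not the paper's data:

```python
# First-order thermal defocus sketch (illustrative, not the paper's data).
# A singlet of focal length f in a housing of comparable length defocuses by
#   dz = f * dT * (gamma_lens - alpha_housing),
# where gamma_lens = alpha_lens - (dn/dT)/(n - 1) is the opto-thermal
# constant of the lens material. Passive athermalization aims at dz ~ 0.

def opto_thermal_constant(alpha_lens, dn_dT, n):
    """Fractional focal-length change per kelvin of a thin lens."""
    return alpha_lens - dn_dT / (n - 1.0)

def thermal_defocus_mm(f_mm, dT, gamma_lens, alpha_housing):
    """Focus shift relative to the detector after a temperature change dT."""
    return f_mm * dT * (gamma_lens - alpha_housing)

# Illustrative material values (orders of magnitude only):
germanium = opto_thermal_constant(alpha_lens=6e-6, dn_dT=4e-4, n=4.0)
aluminium_housing = 23e-6
dz = thermal_defocus_mm(f_mm=50.0, dT=60.0, gamma_lens=germanium,
                        alpha_housing=aluminium_housing)
print(f"defocus over +60 K: {dz:.3f} mm")  # large vs. a typical depth of focus
```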
Electro-optical imaging sensors are widely distributed and used for many different tasks in military operations and civil
security. However, their operational capability can easily be disturbed by laser radiation. The likelihood of such an
incident has increased dramatically in recent years due to the free availability of high-power laser pointers. These
laser systems, offering powers of several watts, pose an increased risk to the human eye as well as to electro-optical
sensors. Adequate protection of electro-optical sensors against dazzling is therefore highly desirable. Such protection can be
accomplished with different technologies; however, none of the existing technologies provides sufficient protection
on its own, and all current protection measures have individual advantages and disadvantages.
We present results on the performance of two different protection technologies. The evaluation is based on automatic
optical pattern recognition applied to sensor images of a scene containing triangles.
Today, a new generation of powerful non-linear image-processing algorithms is used for real-time super-resolution or noise
reduction. Optronic imagers with such features are becoming difficult to assess, because spatial resolution and sensitivity
now depend on scene content. Many algorithms include a regularization process, which usually reduces image
complexity to enhance spread edges or contours; small but important scene details can then be deleted by this kind of
processing. In this paper, a binary fractal test target is presented, with a structured clutter pattern and a useful self-similar
multi-scale property. The apparent structured clutter of this test target offers a trade-off between white noise,
which is unlikely in real scenes, and very structured targets such as MTF targets. Together with the fractal design of the target, an
assessment method has been developed to evaluate automatically the non-linear effects on the acquired and processed
image of the imager. The calculated figure of merit is designed to be directly comparable to the linear Fourier MTF: the
Haar wavelet elements distributed spatially and at different scales on the target are treated as the counterparts of sine
Fourier cycles at different frequencies. The probability of correct resolution indicates the ability to read the correct Haar
contrast among all Haar wavelet elements under a position constraint. For validation of the method, two
different imager types have been simulated, a well-sampled linear system and an under-sampled one, coupled with super-resolution
or noise-reduction algorithms. The influence of the target contrast on the figures of merit is analyzed. Finally,
the possible introduction of this new figure of merit into existing analytical range performance models, such as TRM4
(Fraunhofer IOSB) or NVIPM (NVESD), is discussed. The benefits and limitations of the method are also compared to the
TOD (TNO) evaluation method.
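The Haar-element reading can be sketched as follows; this is our one-dimensional toy example (with a made-up step target and box blur), not the authors' implementation:

```python
import numpy as np

# A Haar element of half-width `scale` is a +1/-1 step of width 2*scale
# pixels; its measured contrast after imaging plays a role analogous to a
# sine MTF sample at the corresponding spatial frequency.

def haar_contrast(row, x, scale):
    """Contrast seen by a Haar element of half-width `scale` at position x:
    mean of the left half minus mean of the right half."""
    left = row[x:x + scale].mean()
    right = row[x + scale:x + 2 * scale].mean()
    return left - right

row = np.r_[np.ones(8), np.zeros(8)]                     # ideal step target
blurred = np.convolve(row, np.ones(3) / 3, mode="same")  # toy imager blur

# Fine-scale elements lose contrast faster than coarse ones, mimicking an MTF
for s in (1, 4):
    print(s, round(haar_contrast(blurred, 8 - s, s), 3))
```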
A software application, SIST, has been developed for simulating the video at the output of a thermal imager. The approach offers a more suitable representation than current identification (ID) range predictors do: end users can evaluate the adequacy of a virtual camera as if they were using it in real operating conditions. In particular, the ambiguity in the interpretation of ID range is removed. The application also allows for a cost-efficient determination of the optimal design of an imager and of its subsystems, without over- or under-specification: the performance is known early in the development cycle for the targets, scenes and environmental conditions of interest. The simulated image is also a powerful means of testing processing algorithms. Finally, the display, which can be a severe system limitation, is fully accounted for by the use of real hardware components. The application consists of MATLAB routines that simulate the effects of the subsystems: atmosphere, optical lens, detector, and image-processing algorithms. Calls to MODTRAN® for the atmosphere modeling and to Zemax for the optical modeling have been implemented. The realism of the simulation depends on the adequacy of the input scene for the application and on the accuracy of the subsystem parameters. For high-accuracy results, measured imager characteristics such as noise can be used with SIST instead of less accurate models. The ID ranges of potential imagers were assessed for various targets, backgrounds and atmospheric conditions. The optimal specifications for an optical design were determined by varying the Seidel aberration coefficients to find the worst MTF that still meets the desired ID range.
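In the linear portion of such a simulation chain, the subsystem effects combine as a product of MTFs. A generic sketch (our parameter values and function names, not SIST's) is:

```python
import numpy as np

# The system MTF is the product of the subsystem MTFs; here we cascade
# diffraction-limited optics with a square-detector footprint as a minimal
# two-element example. Cutoff and sampling frequencies are illustrative.

def optics_mtf(f, f_cutoff):
    """Diffraction-limited MTF of a circular aperture (f in cycles/mrad)."""
    x = np.clip(f / f_cutoff, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))

def detector_mtf(f, f_sample):
    """Sinc MTF of a square detector footprint (pitch = 1/f_sample)."""
    return np.abs(np.sinc(f / f_sample))

f = np.linspace(0.0, 20.0, 201)
system = optics_mtf(f, f_cutoff=25.0) * detector_mtf(f, f_sample=30.0)
print(round(float(system[0]), 3), round(float(system[-1]), 3))
```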
The U.S. Army’s target acquisition models, the ACQUIRE and Target Task Performance (TTP) models, have
been employed for many years to assess the performance of thermal infrared sensors. In recent years, ACQUIRE
and the TTP models have been adapted to assess the performance of visible sensors. These adaptations have
been primarily focused on the performance of an observer viewing a display device. This paper describes an
implementation of the TTP model to predict field observer performance in maritime scenes.
Predictions of the TTP model implementation were compared with observations of a small watercraft made in
a field trial, in which 11 Australian Navy observers viewed a small watercraft in an open-ocean scene.
Comparison of the observed probability of detection with the model predictions showed that
the normalised RSS metric overestimated the probability of detection, while the normalised pixel contrast, using a
literature value for V50, yielded a correlation of 0.58 between predicted and observed probability of detection.
With a measured value of N50 or V50 for the small watercraft used in this investigation, this implementation of
the TTP model may yield a stronger correlation with the observed probability of detection.
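For reference, the TTP metric V is commonly mapped to task probability through the target transfer probability function; a sketch using the widely published calibration (our code, not the authors') is:

```python
# Target transfer probability function (TTPF) as commonly published for the
# TTP model: a logistic-like curve in V/V50 with a V-dependent exponent.

def ttpf(V, V50):
    """Probability of task completion given V resolvable cycles on target,
    where V50 is the cycles needed for 50% probability."""
    E = 1.51 + 0.24 * (V / V50)          # published exponent calibration
    r = (V / V50) ** E
    return r / (1.0 + r)

print(round(ttpf(V=10.0, V50=10.0), 2))  # 0.5 at V = V50 by construction
```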
The detected thermal contrast is a recently defined figure of merit introduced to describe the overall performance of a
detector detecting radiation from a thermal source. We examine the detected thermal contrast for the case where the
target emissivity can be assumed to be a function of the temperature and independent of the wavelength within a narrow
wavelength interval of interest. Exact expressions are developed to evaluate the thermal contrast detected by both thermal
and quantum detectors in focal-plane radiation-detecting instruments. Expressions for the thermal contrast of a blackbody,
an intrinsic radiative quantity of a body independent of the detection process, are given, together with simplified expressions
for the detected thermal contrast when the target emissivity is well described by the grey-body approximation. It is
found that the detected thermal contrast consists of two terms: the first results from changes occurring in the
emissivity of a target with temperature, while the second results from purely radiative processes. The magnitude of the detected
thermal contrast is found to be similar for the two detector types within typical infrared wavelength intervals of interest,
contradicting a result previously reported in the literature. The exact results are presented in terms of a polylogarithmic
formulation of the problem and extend a number of approximation schemes that have been proposed and developed in the
past.
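For orientation, the thermal contrast of a blackbody over a band $[\lambda_1, \lambda_2]$ is commonly defined (our notation; the paper's exact formulation may differ) as the fractional change of in-band exitance with temperature,

$$
C_T \;=\; \frac{1}{M_{\lambda_1\text{-}\lambda_2}(T)}\,
\frac{\partial M_{\lambda_1\text{-}\lambda_2}(T)}{\partial T},
\qquad
M_{\lambda_1\text{-}\lambda_2}(T) \;=\; \int_{\lambda_1}^{\lambda_2} M_\lambda(\lambda, T)\, d\lambda ,
$$

where $M_\lambda$ is the Planck spectral exitance. For a grey body with temperature-dependent emissivity $\varepsilon(T)$, differentiating the product $\varepsilon(T)\,M(T)$ gives

$$
\frac{\partial\,[\varepsilon M]}{\partial T} \;=\; M\,\frac{\partial \varepsilon}{\partial T} \;+\; \varepsilon\,\frac{\partial M}{\partial T},
$$

which exhibits the two-term structure described in the abstract: one term from the change of emissivity with temperature and one from purely radiative processes.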
Predicting the effectiveness of a space-based sensor for its designated application (e.g. specialized earth-surface
observation or missile detection) can help to reduce expenses, especially during the phases of mission
planning and instrumentation. In order to optimize the performance of such systems, we simulate and analyse the entire
operational scenario, including:
- choice of waveband
- various orbit heights and viewing angles
- system design characteristics, e.g. pixel size and filter transmission
- atmospheric effects, e.g. different cloud types, climate zones and seasons
In the following, an evaluation of the appropriate infrared (IR) waveband for the designated sensor application is given.
The simulation environment is also capable of simulating moving objects such as aircraft or missiles. To this end, the spectral
signature of the object as well as its track along a flight path is implemented. The resulting video sequence is
then analysed by a tracking algorithm, and the effectiveness of the sensor system can be estimated.
This paper summarizes the work carried out at Fraunhofer IOSB in the field of simulation and modelling for the
performance optimization of space-based sensors.
The paper is structured as follows: first, an overview of the applied simulation and modelling software is given; then
the capability of those tools is illustrated by means of a hypothetical threat scenario for space-based early warning
(the launch of a long-range ballistic missile (BM)).
In this work, we develop an infrared (IR) sensor and performance-analysis model for space-based IR
systems designed to detect space targets against the earth and earth-limb background. Corresponding to
the sensor observation geometry, a simplified transmittance-calculation scheme applicable to large-scale scenes
and a mathematical model for pixel-by-pixel irradiance calculation are proposed. By defining the apparent contrast of
targets in simulated IR images, a model for detection-performance analysis is developed for sensors operating in different
spectral bands. Typical simulation examples are presented to validate the model and methodology.
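A minimal form of the apparent-contrast figure can be sketched as follows; this is our simplified illustration, and the paper's definition may include path transmittance and further terms:

```python
# Simplified apparent contrast of a target pixel against the local background
# irradiance (our illustrative form, not necessarily the paper's).

def apparent_contrast(E_target, E_background):
    """Relative contrast of target irradiance vs. background irradiance."""
    return (E_target - E_background) / E_background

# Hypothetical example: a target pixel 20% brighter than the limb background
print(round(apparent_contrast(1.2e-6, 1.0e-6), 3))
```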
Today, the civil market provides quite a number of different 3D sensors covering ranges up to 1 km. Typically these
sensors are based on single-element detectors, which suffer from limited spatial resolution at larger distances.
Tasks demanding reliable object classification at long range can be fulfilled only by sensors consisting of detector
arrays, which ensure sufficient frame rates and high spatial resolution. Worldwide, there are many efforts to develop
3D detectors based on two-dimensional arrays.
This paper presents first results on the performance of a recently developed 3D imaging laser radar sensor working in
the short-wave infrared (SWIR) at 1.5 μm. It consists of a novel Cadmium Mercury Telluride (CMT) linear APD
detector array with 384x1 elements at a pitch of 25 μm, developed by AIM Infrarot-Module GmbH. The APD elements are
designed to work in the linear (non-Geiger) mode. Each pixel provides a time-of-flight measurement and, due to
the linear detection mode, allows the detection of up to three successive echoes. The depth resolution is 15 cm and the
maximum repetition rate is 4 kHz. We discuss various sensor concepts with regard to possible applications and their
dependence on system parameters such as field of view, frame rate, spatial resolution and range of operation.
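The quoted depth resolution and repetition rate can be sanity-checked with simple time-of-flight arithmetic (our back-of-the-envelope check, not from the paper):

```python
# Time-of-flight relations for a pulsed laser radar: range = c * t / 2.

C = 299_792_458.0  # speed of light, m/s

def depth_resolution_m(timing_resolution_s):
    """Depth resolution implied by the receiver's timing resolution."""
    return C * timing_resolution_s / 2.0

def max_unambiguous_range_m(pulse_rate_hz):
    """With one pulse in flight at a time, the repetition rate caps the
    unambiguous range."""
    return C / (2.0 * pulse_rate_hz)

print(round(depth_resolution_m(1e-9), 3))            # ~0.15 m, i.e. 1 ns timing
print(round(max_unambiguous_range_m(4e3) / 1e3, 1))  # ~37.5 km at 4 kHz
```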
Background subtraction is one of the most commonly used techniques for applications such as human detection in images.
For background estimation, principal component analysis (PCA) is a well-established method. Since the background
sometimes changes due to illumination variation or a newly appeared stationary object, the eigenspace should
be updated continuously. A naive algorithm for eigenspace updating is to update the covariance matrix and then
solve the eigenvalue problem for the updated matrix. This procedure is very time-consuming, however, because the
covariance matrix is very large. In this paper we propose a novel method to update the
eigenspace approximately at exceedingly low computational cost. The main idea for decreasing the computational cost is to
approximate the covariance matrix by a low-dimensional matrix, so that the cost of solving the eigenvalue problem
drops dramatically. The merits of the proposed method are discussed.
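The flavor of such a low-rank update can be sketched as follows; this is our generic illustration of the idea (keep only the top-k eigenpairs and fold each new frame in via a small eigenproblem), not the paper's algorithm:

```python
import numpy as np

# Instead of re-solving the full d x d covariance eigenproblem, keep the
# top-k eigenvectors U and eigenvalues s, and fold a new frame x in by
# diagonalizing a small (k+1) x (k+1) surrogate built from the projection
# of x onto span(U) plus its residual direction.

def update_eigenspace(U, s, x, decay=0.95):
    """U: d x k eigenvectors, s: k eigenvalues, x: new (mean-removed) frame."""
    proj = U.T @ x                       # coordinates of x in current basis
    resid = x - U @ proj                 # component outside the basis
    rnorm = np.linalg.norm(resid)
    q = resid / rnorm if rnorm > 1e-12 else np.zeros_like(x)
    # Small surrogate covariance: decayed old spectrum + rank-1 new frame
    coords = np.concatenate([proj, [rnorm]])
    S = decay * np.diag(np.concatenate([s, [0.0]]))
    S += (1 - decay) * np.outer(coords, coords)
    w, V = np.linalg.eigh(S)             # cheap: (k+1) x (k+1), not d x d
    order = np.argsort(w)[::-1][:len(s)] # keep the top-k directions
    U_new = np.column_stack([U, q]) @ V[:, order]
    return U_new, w[order]

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(100, 5)))[0]   # orthonormal start basis
U2, s2 = update_eigenspace(U, np.ones(5), rng.normal(size=100))
print(U2.shape, np.allclose(U2.T @ U2, np.eye(5), atol=1e-8))
```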
We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field-Programmable
Gate Array (FPGA). The circuit is fast, compact and low-power; it can recognize faces in real time and
be embedded in a larger image-processing and computer-vision system operating locally on an IR camera. The
algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel
in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors.
Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the
information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping
regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all
64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram
using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database
of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real
time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan
distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace
database, which consists of 28 images of 81 x 150 pixels for each of 53 subjects, in indoor and outdoor conditions.
The circuit achieves a 98.6% hit ratio when trained with 16 images and tested with 12 images of each subject in the
database. Using a 100 MHz clock, the circuit classifies 8,230 images per second and consumes only 309 mW.
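The LBP and uniform-pattern steps can be sketched in software as follows (our minimal 3x3 variant; the bit ordering in the actual circuit may differ):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: each neighbor contributes a bit set
    when it is >= the center pixel."""
    center = patch[1, 1]
    # clockwise neighbors starting at top-left
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= center)

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions; exactly 58 such codes exist, and collapsing all
    non-uniform codes into one extra bin yields the 59 bins cited above."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
code = lbp_code(patch)
print(code, is_uniform(code))
```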
This paper presents a digital architecture for face detection on infrared (IR) images. We use Local Binary
Patterns (LBP) to build a feature vector for each pixel, which represents the texture of the image in a vicinity
of that pixel. We use a Support Vector Machine (SVM), trained with 306 images of 51 different subjects, to
recognize human face textures. Finally, we group the classified pixels into rectangular boxes enclosing the faces
using an algorithm for connected components. These boxes can then be used to track, count, or identify faces
in a scene, for example. We implemented our architecture on a Xilinx XC6SLX45 FPGA and tested it on 306
IR images of 51 subjects, distinct from the data used to train the SVM. The circuit correctly identifies 100%
of the faces in the images, with 4.5% false positives. We also tested the system on a set of IR video
streams featuring multiple faces per image, with varied poses and backgrounds, and obtained a hit rate of 94.5%
with 7.2% false positives. The circuit uses less than 25% of the logic resources available on the FPGA and can
process 313 images of 640x480 pixels per second with a 100 MHz clock, while consuming 266 mW of power.
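The final grouping step can be sketched as a standard 4-connected components pass (our software equivalent of the hardware algorithm; the example mask is ours):

```python
from collections import deque

def face_boxes(mask):
    """mask: 2-D list of 0/1 pixel classifications. Returns bounding boxes
    (row_min, col_min, row_max, col_max), one per 4-connected component."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                seen[r][c] = True
                q = deque([(r, c)])
                r0 = r1 = r
                c0 = c1 = c
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes

mask = [[0] * 10 for _ in range(10)]
for r in range(1, 4):
    for c in range(1, 4):
        mask[r][c] = 1          # one face blob
for r in range(6, 9):
    for c in range(5, 9):
        mask[r][c] = 1          # a second blob
print(face_boxes(mask))  # [(1, 1, 3, 3), (6, 5, 8, 8)]
```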
Our world is constantly changing, and this affects worldwide military operations. For example, conventional warfare
is shifting towards a domain that also contains asymmetric threats. The availability of high-quality
imaging information from Electro-Optical (EO) sensors is of great importance, for instance for the timely detection and
identification of small threatening vessels in an environment with a large number of neutral vessels. Furthermore, Rules
of Engagement often require a visual identification before action is allowed. The challenge in these operations is to
detect, classify and identify a target at a reasonable range while avoiding too many false alarms or missed detections.
Current sensor technology cannot meet the performance requirements under all circumstances. For example,
environmental conditions can reduce the sensor range to the point that the operational task becomes challenging or
even impossible. Furthermore, automatic detection algorithms have limitations, e.g. due to the effects of sun glints and
spray, which are not yet well modelled in the detection filters. For these reasons, Tactical Decision Aids will become an
important factor in future operations for selecting the best moment to act.
In this paper, we describe current research within the Netherlands on this topic. The Defence Research and
Development Programme “Multifunctional Electro-Optical Sensor Suite (MEOSS)” aims at developing the
knowledge necessary for optimal employment of electro-optical systems on board current and future ships of the
Royal Netherlands Navy, in order to carry out present and future maritime operations in various environments and
weather conditions.
For maritime situational awareness, it is important to match currently observed ships to earlier encounters. For
example, analysis of past location and behavior is useful for determining whether a ship is of interest in cases of piracy and
smuggling. It is beneficial to perform this verification with cameras at a distance, to avoid the cost of bringing one's own asset
closer to the ship. The focus of this paper is on ship recognition from electro-optical imagery. The main contribution is an
analysis of the effect of combining descriptor localization with compact representations. An evaluation is performed
to assess the usefulness for persistent tracking, especially over larger intervals (i.e. re-identification of ships). From the
evaluation on recorded imagery, it is estimated how well the system discriminates between different ships.
In general, long range detection, recognition and identification in visual and infrared imagery are hampered by
turbulence caused by atmospheric conditions. The amount of turbulence is often indicated by the refractive-index
structure parameter Cn2. The value of this parameter and its variation is determined by the turbulence effects over the
optical path. Especially along horizontal optical paths near the surface (land-to-land scenario) large values and
fluctuations of Cn2 occur, resulting in an extremely blurred and shaky image sequence. Another important parameter is
the isoplanatic angle, θ0, which is the angle where the turbulence is approximately constant. Over long horizontal paths
the values of θ0 are typically very small; much smaller than the field-of-view of the camera.
Typical image artefacts caused by turbulence are blur, tilt and scintillation. These artefacts often occur locally in
an image; therefore turbulence corrections are required in each image patch of the size of the isoplanatic angle. Much
research has been devoted to the field of turbulence mitigation. One of the main advantages of turbulence mitigation is
that it enables visual recognition over larger distances by reducing the blur and motion in imagery. In many (military)
scenarios this is of crucial importance. In this paper we give a brief overview of two software approaches to mitigating the
visual artefacts caused by turbulence. These approaches differ greatly in complexity. It is shown that the more complex
turbulence mitigation approach is needed to improve imagery containing medium turbulence; the basic turbulence
mitigation method is only capable of mitigating low turbulence.
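The patch-wise correction idea above can be illustrated numerically. The sketch below is a minimal tilt-correction step, not the authors' implementation: each patch (standing in for one isoplanatic angle) of each frame is aligned to a reference frame by the cyclic shift that maximizes an FFT cross-correlation. Real mitigation pipelines add blur removal and more robust shift estimation.

```python
import numpy as np

def register_shift(a, b):
    """Cyclic shift that best maps patch `a` onto reference patch `b`,
    found as the peak of the FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a)))
    return np.unravel_index(np.argmax(np.abs(corr)), a.shape)

def tilt_correct(frames, ref, patch=16):
    """Per-patch tilt correction: align each patch of each frame to `ref`.
    The patch size stands in for the isoplanatic angle."""
    out = frames.copy()
    H, W = ref.shape
    for f in range(len(frames)):
        for y in range(0, H, patch):
            for x in range(0, W, patch):
                sl = np.s_[y:y + patch, x:x + patch]
                dy, dx = register_shift(frames[f][sl], ref[sl])
                out[f][sl] = np.roll(frames[f][sl], (dy, dx), (0, 1))
    return out
```

In practice the reference would be a temporal average of the sequence rather than a clean frame.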
Current night-time training using (flight or driving) simulators is hindered by a lack of realism. Effective night-time training requires the simulation of the illusions and limitations experienced while wearing Night Vision Goggles (NVGs) at night. Various methods exist that capture certain sensor effects, such as noise and the characteristic halos around lights. However, other effects are often discarded, such as the fact that image intensifiers are especially sensitive to near-infrared (NIR) light, which makes vegetation appear bright in the image (the chlorophyll effect) and strongly affects the contrast of objects against their background. Combined with the contrast and resolution reduction in NVG imagery, a scene at night may appear totally different from the same scene during the day. In practice these effects give rise to misinterpretations and illusions. When training personnel to deal with such illusions, it is essential to simulate them as accurately as possible. We present a method based on our Colour-Fusion technique (see Toet & Hogervorst, Opt. Eng. 2012) to create a realistic NVG simulation from daytime imagery, which allows for training of the typical effects experienced while wearing NVGs at night.
A simple power-logarithm histogram modification operator is proposed to enhance digital image contrast. First a
logarithm operator reduces the effect of spikes and transforms the image histogram into a smoothed one that
approximates a uniform histogram while retaining the relative size ordering of the original bins. Then a power operator
transforms this smoothed histogram into one that approximates the original input histogram. Contrast enhancement is
then achieved by using the cumulative distribution function of the resulting histogram in a standard lookup table based
histogram equalization procedure. The method is computationally efficient, independent of image content, and (unlike
most existing contrast enhancement algorithms) does not suffer from the occurrence of artifacts, over-enhancement and
unnatural effects in the processed images. Experimental results on a wide range of different images show that the
proposed method effectively enhances image contrast while preserving overall image brightness, and yields results that
are comparable to, or of higher quality than, those provided by previous state-of-the-art methods.
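The procedure described above can be sketched in a few lines of Python. This is a minimal reconstruction from the abstract, not the authors' code; in particular the exponent of the power step is a tunable parameter here, since the paper's exact parameterization is not given.

```python
# Sketch of the power-logarithm histogram modification (assumptions noted above).
import math

def power_log_equalize(pixels, levels=256, power=0.5):
    """Contrast enhancement via a power-log modified histogram."""
    # 1. Build the raw histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # 2. Logarithm step: suppress spikes while keeping relative bin ordering.
    smoothed = [math.log1p(h) for h in hist]
    # 3. Power step: reshape the smoothed histogram (exponent is our choice).
    shaped = [s ** power for s in smoothed]
    # 4. Cumulative distribution -> standard lookup-table equalization.
    total = sum(shaped)
    cdf, acc = [], 0.0
    for s in shaped:
        acc += s
        cdf.append(acc / total)
    lut = [round((levels - 1) * c) for c in cdf]
    return [lut[p] for p in pixels]
```

Because the log step bounds the influence of any single spike, the resulting mapping avoids the over-enhancement that plain histogram equalization produces on spiky histograms.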
End-to-end Electro-Optical system performance tests such as TOD, MRTD and MTDP require the effort of several
trained human observers, each performing a series of visual judgments on the displayed output of the system. This
significantly contributes to the costs of sensor testing. Currently, several synthetic human observer models exist that can
replace real human observers in the TOD sensor performance test and can be used in a TOD based Target Acquisition
(TA) model. The reliability that may be expected with such a model is of key importance. In order to systematically test
HVS (Human Visual System) models for automated TOD sensor performance testing, two general sets of human
observer TOD threshold data were collected. The first set contains TOD data for the unaided human eye. The second set
was collected on imagery processed with sensor effects, systematically varying primary sensor parameters such as
diffraction blur, pixel pitch, and spatial noise. The set can easily be extended to other sensor effects including dynamic
noise, boost, E-zoom, or fused sensor imagery and may serve as a benchmark for competing human vision and sensor
performance models.
Airborne platforms record large amounts of video data. Extracting the events of interest is a time-consuming task for
analysts, because the sensors record hours of video in which only a fraction of the footage contains events of interest. For
the analyst, it is hard to retrieve such events from the large amounts of video data by hand. A way to extract information
more automatically from the data is to detect all humans within the scene.
This can be done in a real-time scenario (both on board and at the ground station) for strategic and tactical purposes and
in an offline scenario where the information is analyzed after recording to acquire intelligence (e.g. a daily life pattern).
In this paper, we evaluate three different methods for object detection from a moving airborne platform. The first one is a
static person detection algorithm. The main advantage of this method is that it can be used on single frames, and therefore
does not depend on the stabilization of the platform. The main disadvantage is that the number of pixels
needed for the detection is relatively large. The second method is based on the detection of motion-in-motion. Here the
background is stabilized, and clusters of pixels that move with respect to this stabilized background are detected as
moving objects. The main advantage is that all moving objects are detected; the main disadvantage is that it depends
heavily on the quality of the stabilization. The third method combines both previous detection methods.
The detections are tracked using a histogram-based tracker, so that missed detections can be filled in and a trajectory of
all objects can be determined. We demonstrate the tracking performance of the three different detection methods on
the publicly available UCF-ARG aerial dataset. The performance is evaluated for two human actions (running and
digging) and varying object sizes. It is shown that a combined detection approach (static person detection plus motion-in-motion
detection) gives better tracking results for both human actions than either detector alone. Furthermore,
it can be concluded that humans must be at least 20 pixels high to guarantee good tracking performance.
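The motion-in-motion idea reduces to a simple operation once the camera motion is known. The sketch below is a toy illustration, not the paper's pipeline: the camera motion is assumed to be a known pure translation, whereas a real system must estimate it during stabilization (e.g. by image registration or homography fitting).

```python
import numpy as np

def motion_in_motion(frame, background, shift, thresh=0.2):
    """Flag pixels that move with respect to the stabilized background.
    `shift` is the (dy, dx) camera motion between background and frame;
    it is assumed known here (an assumption of this sketch)."""
    stabilized = np.roll(background, shift, axis=(0, 1))  # undo camera motion
    return np.abs(frame - stabilized) > thresh            # movers stand out
```

Clusters of flagged pixels would then be grouped into moving-object detections and handed to the histogram-based tracker.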
This paper presents a system to extract metadata about human activities from full-motion video recorded from a UAV.
The pipeline consists of these components: tracking, motion features, representation of the tracks in terms of their motion
features, and classification of each track as one of the human activities of interest. We consider these activities: walk,
run, throw, dig, wave. Our contribution is that we show how a robust system can be constructed for human activity
recognition from UAVs, and that focus-of-attention is needed. We find that tracking and human detection are essential
for robust human activity recognition from UAVs. Without tracking, the human activity recognition deteriorates. The
combination of tracking and human detection is needed to focus the attention on the relevant tracks. The best performing
system includes tracking, human detection and a per-track analysis of the five human activities. This system achieves an
average accuracy of 93%. A graphical user interface is proposed to aid the operator or analyst during the task of
retrieving the relevant parts of video that contain particular human activities. Our demo is available on YouTube.
For years, scientists have used thermal broadband cameras to perform target characterization in the longwave (LWIR)
and midwave (MWIR) infrared spectral bands. The analysis of broadband imaging sequences typically provides energy,
morphological and/or spatiotemporal information. However, there is very little information about the chemical nature of
the investigated targets when using such systems due to the lack of spectral content in the images. In order to improve
the outcomes of these studies, Telops has developed dynamic multispectral imaging systems which allow synchronized
acquisition on 8 channels, at a high frame rate, using a motorized filter wheel. An overview of the technology is
presented in this work as well as results from measurements of solvent vapors and minerals. Time-resolved multispectral
imaging carried out with the Telops system illustrates the benefits of spectral information obtained at a high frame rate
when facing situations involving dynamic events such as gas cloud dispersion. Comparison of the results obtained using
the information from the different acquisition channels with the corresponding broadband infrared images illustrates the
selectivity enabled by multispectral imaging for characterization of gas and solid targets.
SYSIPHE is an airborne hyperspectral imaging system, the result of a cooperation between France (Onera and DGA) and
Norway (NEO and FFI). It is a unique system in its spatial sampling (0.5 m with a 500 m swath at 2000 m above
ground) combined with its wide spectral coverage, from 0.4 μm to 11.5 μm within the atmospheric transmission bands.
Its infrared component, named SIELETERS, consists of two high-étendue imaging static Fourier transform
spectrometers, one for the midwave infrared and one for the longwave infrared. These two imaging spectrometers are
closely similar in design, since both are made of a Michelson interferometer, a refractive imaging system, and a large
IRFPA (1016x440 pixels). Moreover, both are cryogenically cooled and mounted on their own stabilization platform
which allows the line of sight to be controlled and recorded. These data are useful to reconstruct and to georeference the
spectral image from the raw interferometric images.
The visible and shortwave infrared component, named HySpex ODIN-1024, consists of two spectrographs for VNIR and
SWIR based on transmissive gratings. These share a common fore-optics and a common slit, to ensure perfect
registration between the VNIR and the SWIR images. The spectral resolution varies from 5nm in the visible to 6nm in
the shortwave infrared.
In addition, the post-processing and archiving system, STAD, is developed to provide spectral reflectance and
temperature (SRT) products from the calibrated, georeferenced and inter-band registered sensor-level spectral images
acquired and pre-processed by the SIELETERS and HySpex ODIN-1024 systems.
The HySpex ODIN-1024 is an airborne VNIR-SWIR hyperspectral imaging system which advances the state of the art
with respect to both performance and system functionality. HySpex ODIN-1024 is designed as a single instrument for
both VNIR (0.4 to 1 μm wavelength) and SWIR (1 to 2.5 μm) rather than being a combination of two separate
instruments. With the common fore-optics of the single instrument, a more accurate and stable co-registration is achieved
across the full spectral range compared to having two individual instruments. For SWIR the across-the-track resolution is
1024 pixels, while for VNIR the user of the instrument can choose a resolution of either 1024 or 2048 pixels. In addition
to high spatial resolution, the optical design enables low smile and keystone distortion, and high sensitivity obtained
through low F-numbers: F1.64 for VNIR and F2.0 for SWIR. The camera utilizes state-of-the-art scientific CMOS
(VNIR) and MCT (SWIR) sensors with low readout noise, high speed and high spatial resolution. The system has an
onboard calibration subsystem to monitor the stability of the instrument under varying environmental conditions. It features
an integrated real-time processing functionality, enabling real-time detection, classification, and georeferencing. We
present an overview of the performance of the instrument and results from airborne data acquisitions.
The Institute of Optical Sensor Systems (OS) at the Robotics and Mechatronics Center of the German Aerospace Center
(DLR) has more than 30 years of experience with high-resolution imaging technology. This paper presents the institute’s
scientific results on a leading-edge CMOS detector design in a TDI (Time Delay and Integration) architecture. This
project includes the technological design of future high or multi-spectral resolution spaceborne instruments and the
possibility of higher integration. DLR OS and the Fraunhofer Institute for Microelectronic Circuits and Systems (IMS) in
Duisburg were driving the technology of new detectors and the FPA design for future projects, new manufacturing
accuracy and on-chip processing capability in order to keep pace with the ambitious scientific and user requirements. In
combination with the engineering research, the current generation of spaceborne sensor systems is focusing on VIS/NIR
high spectral resolution to meet the requirements of Earth and planetary observation systems. The combination of large
swath and high spectral resolution with intelligent synchronization control, fast-readout ADC (analog-to-digital converter)
chains and new focal-plane concepts opens the door to new remote-sensing and smart deep-space instruments. The paper
gives an overview of the detector development status and verification program at DLR, as well as of new control
possibilities for CMOS-TDI detectors in synchronization control mode.
We report on the development and optimization of mesa-processed InGaAs/InAlAs avalanche photodiodes (APD)
for short-wave infrared applications with demand for high gain and low breakdown voltage. The APDs were
grown by molecular beam epitaxy. Dark- and photocurrent measurements of fully processed APDs reveal a high
dynamic range of 10⁴ and a gain larger than 40 at 25 V reverse bias voltage and cooled operation at 140 K. A
maximum gain larger than 300 is demonstrated both at room temperature and at 140 K. Two different approaches
to determining the gain of the APD structures are discussed.
Thanks to the various developments presently available, SWIR technology is attracting growing interest and offers the
opportunity to address a wide range of applications such as defense and security (night vision, active imaging), space
(Earth observation), transport (automotive safety) and industry (non-destructive process control).
InGaAs material, initially developed for telecommunications detectors, appears to be a good candidate to satisfy SWIR
detection needs. Lattice matching with InP gives this material a double advantage: attractive production
capacity, and uncooled operation thanks to the low dark-current level enabled by high material quality.
In the context of this evolving domain, the InGaAs imagery activities of III-VLab were transferred to Sofradir, which
provides a framework for the production activity with the manufacturing of high-performance products: CACTUS320
and CACTUS640.
The developments towards VGA format with 15 μm pixel pitch have now led to the industrialization of a new product:
SNAKE SW. On the one hand, the InGaAs detection array offers high performance in terms of dark current and quantum
efficiency; on the other hand, the low-noise ROIC provides several additional functionalities. This 640×512, 15 μm
module is therefore well suited to the needs of a wide range of applications.
In this paper, we will present the Sofradir InGaAs technology, the performance of our latest product, SNAKE SW, and the
perspectives for new InGaAs developments.
Recent advances in the miniaturization of IR imaging technology have led to a growing market for mini thermal-imaging
sensors. In that respect, Sofradir's development of smaller pixel pitches has made much more compact products available to
users. When this competitive advantage is combined with smaller coolers, made possible by HOT (high operating
temperature) technology, valuable reductions in the size, weight and power of the overall package are achieved. At the
same time, we are moving towards a global offer based on digital interfaces that provides our customers with
simplifications in the IR system design process while freeing up more space. This paper discusses recent developments
in HOT and small-pixel-pitch technologies, as well as efforts made on a compact packaging solution developed by
SOFRADIR in collaboration with CEA-LETI.
The problem of early fire detection in areas classified as potentially explosive is considered in this paper. These include,
for example, some types of facilities and plants which may cause environmental disasters in case of fire. Strict safety
requirements impose demanding terms on the technical performance of detectors protecting such facilities: the
detector itself must not cause a fire. The main danger lies in exposed conductive parts in the construction of the sensitive
elements of detectors, which can generate sparks and ignite a fire. The use of fiber-optic technology allows
the creation of smoke and heat fire detectors in which only the sensors are located in the protected area, while all the
electronics that generate and process the signals can be placed at considerable distances, measured in kilometers.
A block diagram of a fiber-optic point smoke detector is considered, a mathematical description of the propagation of
optical radiation through the sensing element of the detector is provided, and its sensitivity is analyzed.
Methods of optical encryption are currently being actively developed. Most existing methods of optical
encryption use not only the light intensity distribution, which is easily registered with photosensors, but also its phase
distribution, which requires the application of complex holographic schemes in conjunction with spatially coherent
monochromatic illumination. This leads to complex optical setups and low decryption quality. To eliminate these
disadvantages, optical encryption can be implemented using spatially incoherent monochromatic illumination, which
requires registration of the light intensity distribution only.
Encryption is accomplished by optical convolution of the image of the scene to be encrypted with the point spread
function (PSF) of an encrypting diffractive optical element (DOE), which serves as the encryption key. The encryption
process is as follows. The scene is illuminated with spatially incoherent monochromatic light. In the absence of the
encryption DOE, a lens forms an image of the scene in the photosensor plane. The DOE serves as the encryption
element, and its PSF as the encryption key. Light passing through the DOE forms the convolution of the object image
with the DOE PSF; the convolution registered by the photosensor is the encrypted image. Decryption was conducted
numerically on a computer by means of inverse filtration with regularization.
Kinoforms were used as encryption DOEs because they have a single diffraction order. Two liquid-crystal (LC) spatial
light modulators (SLMs) were used to implement dynamic digital information input and dynamic change of the
encryption key. An amplitude LC SLM (HoloEye LC2002, 800×600 pixels, 32×32 μm², 256 gray levels) was used for
the input scene, and a phase LC SLM (HoloEye PLUTO VIS, 1920×1080 pixels, 8×8 μm², 256 phase levels) was used
to display the synthesized encryption kinoforms. A set of test images was successfully optically encrypted and then
numerically decrypted. The contents of the encrypted images are hidden; the decrypted images, despite rather high noise
levels, are positively recognizable. Results of optical encryption and numerical decryption are presented.
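The encryption/decryption principle can be reproduced numerically. The sketch below is not the authors' optical setup: incoherent encryption is modeled as a (circular) convolution of the scene with the DOE's PSF via FFTs, and decryption as inverse filtration with a Wiener-style regularization whose constant is our choice.

```python
import numpy as np

def encrypt(scene, psf):
    """Encrypted image = convolution of the scene with the DOE PSF
    (computed here with FFTs, i.e. a circular-convolution model)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def decrypt(encrypted, psf, eps=1e-3):
    """Inverse filtration with regularization, using the PSF as the key.
    `eps` is a Wiener-style regularization constant (an assumption)."""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(encrypted) * W))
```

Without the correct PSF, the inverse filter does not undo the convolution, which is what hides the image contents.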
Although the development of small formats (640×480 pixel arrays) and amorphous silicon microbolometers has greatly
decreased detector cost, another important component of a thermal camera, the optics, still prohibits a breakthrough for
high-volume commercial systems. Aspheric lenses used in thermal imaging are typically made using the costly single-point
diamond turning (SPDT) process on expensive single-crystal materials (Ge, ZnS, etc.). As a potential way to
reduce cost, compression molding of chalcogenide glass has attracted attention for fabricating IR optics. The present
paper reports the fabrication of a molded chalcogenide glass lens module for a thermal security camera. In addition, each
surface of the molded lens was evaluated in terms of form error, roughness and decentration. From the evaluation
results, we verified that the molded lens is suitable for thermal imaging applications.
Forward Looking InfraRed (FLIR) imaging systems have been widely used for both military and civilian purposes.
Military applications include target acquisition and tracking and night vision systems. Civilian applications include
thermal efficiency analysis, short-range wireless communication, weather forecasting and various other applications. The
dynamic range of a FLIR imaging system is larger than that of a commercial display, so auto gain control and contrast
enhancement algorithms are generally applied to FLIR imaging systems. In IR imaging systems, histogram equalization
and plateau equalization are generally used for contrast enhancement. However, they offer no solution to excessive
enhancement when the luminance histogram is concentrated in a specific narrow region. In this paper, we propose a
Regional Density Distribution based Wide Dynamic Range (WDR) algorithm for infrared camera systems. Depending on
the implementation, WDR results differ considerably. Our approach is a single-frame WDR algorithm that enhances the
contrast of both dark and bright detail, without loss of histogram bins, in real time. The significant changes in luminance
caused by conventional contrast enhancement methods may introduce luminance saturation and failure in object
tracking. The proposed method guarantees both effective contrast enhancement and successful object tracking.
Moreover, since the proposed method does not use multiple images for WDR, computational complexity is significantly
reduced in software/hardware implementations. The experimental results show that the proposed method outperforms
conventional contrast enhancement methods.
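Plateau equalization, cited above as the common IR baseline, is easy to state in code: the histogram is clipped at a plateau value before the cumulative distribution is formed, which limits the over-enhancement that plain histogram equalization produces when the histogram is concentrated in a narrow region. A minimal sketch follows; the plateau value is a tunable parameter of our choosing.

```python
def plateau_equalize(pixels, levels=256, plateau=50):
    """Histogram equalization with the histogram clipped at `plateau`."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    clipped = [min(h, plateau) for h in hist]   # limit dominant bins
    total = sum(clipped)
    lut, acc = [], 0
    for c in clipped:
        acc += c
        lut.append(round((levels - 1) * acc / total))
    return [lut[p] for p in pixels]
```

With the clip in place, a single dominant gray level can no longer consume almost the entire output range.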
High-resolution, multi-pixel, large field-of-view (FOV) infrared (IR) detectors are an important research direction, as
they greatly improve target detection capability. This paper addresses infrared target detection under the guidance of an
attention mechanism. A Gabor filter is used to extract elementary visual features of the infrared image owing to its
orientation selectivity. We then investigate the causes of visual saliency in the frequency domain and provide a
multichannel feature combination strategy to generate the feature map. Further, a novel saliency detection model using
Fourier spectrum filtering is presented to calculate salient regions of the infrared image. Experimental results on a wide
range of real IR images demonstrate that the proposed algorithm is robust and effective, yielding satisfying results for
infrared target detection in a large FOV with complex background and low SNR.
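As an illustration of saliency from Fourier spectrum filtering, here is a spectral-residual-style sketch (the spectral residual of Hou & Zhang is a well-known relative of such Fourier-domain saliency models; the paper's exact filter is not given in the abstract, so every detail below is an assumption):

```python
import numpy as np

def box_blur(a, k=3):
    """Simple k x k box filter (circular boundary via np.roll)."""
    out = np.zeros_like(a)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(a, dy, 0), dx, 1)
    return out / (k * k)

def spectral_saliency(img):
    """Saliency map via Fourier spectrum filtering (spectral-residual style)."""
    F = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(F))           # log-amplitude spectrum
    residual = log_amp - box_blur(log_amp)  # remove the smooth spectral part
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * np.angle(F)))) ** 2
    return box_blur(sal)                    # smooth the saliency map
```

Smooth background structure dominates the smooth part of the spectrum and is suppressed, so small targets stand out in the reconstructed map.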
Digital holography is a popular tool for research and practical applications in various fields of science and technology.
The most widespread method of optical reconstruction displays the digital hologram on a spatial light modulator
(SLM). Optical reconstruction of digital holograms is used for remote display of static and dynamic 2D and 3D scenes,
in optical information processing, metrology, interferometry, microscopy, etc. Holograms recorded with digital cameras
are of amplitude type; therefore the quality of their optical reconstruction with a phase SLM is worse than with an
amplitude SLM. However, a phase SLM can provide higher diffraction efficiency. To improve the quality of optical
reconstruction with a phase SLM, a method of reducing the SLM phase modulation depth at digital hologram display is
proposed. To our knowledge, this method has previously been applied only in analog holography. Two other methods of
quality improvement are also considered: hologram-to-kinoform conversion and hologram multiplexing. Numerical
experiments modelling the recording of digital Fourier holograms and their optical reconstruction by a phase SLM were
performed. The SLM phase modulation depth ranged from 0 to 2π, with 256 hologram phase levels corresponding to the
2π modulation depth. To keep the SLM settings fixed while changing the modulation depth, the hologram phase
distribution was renormalized instead. Dependencies of reconstruction quality on the hologram phase modulation depth
were obtained; the best quality is achieved at a modulation depth of 0.27π–0.31π. To reduce speckle noise, hologram
multiplexing can be applied; modelling of the optical reconstruction of multiplexed holograms confirmed the speckle-noise
reduction. For improvement of both reconstruction quality and diffraction efficiency, hologram-to-kinoform
conversion can be used: first a numerically reconstructed image of the object is obtained, and this image is then used for
kinoform synthesis. The diffraction efficiency was thereby increased 6.4-fold compared with hologram reconstruction.
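The renormalization step can be made concrete with a toy sketch (the function name and normalization are ours, not the paper's): the amplitude hologram is rescaled so its values map to phases in [0, depth] instead of [0, 2π). For small depth the SLM response exp(iφ) ≈ 1 + iφ is nearly linear in the hologram, which is one way to see why a reduced modulation depth can improve the fidelity of displaying an amplitude hologram on a phase-only SLM.

```python
import numpy as np

def display_on_phase_slm(hologram, depth):
    """Renormalize an amplitude hologram to phases in [0, depth] and
    return the complex field a phase-only SLM would impose."""
    h = (hologram - hologram.min()) / np.ptp(hologram)  # normalize to [0, 1]
    return np.exp(1j * depth * h)                       # phase-only field
```

Sweeping `depth` and scoring the reconstruction against the original scene would reproduce the kind of quality-versus-depth curve reported above.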
We present the apparent optical performance variation of an infrared sensor caused by the laminar flow field surrounding
a highly supersonic projectile with a cone-shaped head. An optical ray tracing model was constructed, and numerical
simulations of the aero-optical effects were performed by computational fluid dynamics (CFD) analysis and isodensity-surface-based
ray tracing computation. To maximize modeling and computation efficiency, the number of sampled
isodensity layers was reduced to fewer than 5 for improved discretization of the inhomogeneous gradient-index (GRIN)
media. Using this method, the simulation results show that the BSE is smaller than about 2.8 arcsec when the projectile
flies at altitudes of 25, 35 and 50 km, speeds of Mach 4 and 6, and angles of attack of 0° and 10°. The technical details
and implications of the optical ray tracing model are presented together with the simulation results.