This PDF file contains the front matter associated with SPIE Proceedings Volume 11538, including the Title Page, Copyright Information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Welcome to the conference on Electro-Optical Remote Sensing XIV
Imaging through scattering or strongly diffuse media is an outstanding challenge that persists despite significant progress over the years. We can characterise the scattering strength of a material by quantifying its thickness in terms of the number of transport mean free paths (TMFPs), the distance over which all information about a photon's initial direction is lost. This value varies by orders of magnitude depending on the medium, from roughly 1 mm in biological tissue to 1 m in fog. A relatively straightforward approach to imaging in diffuse media is to time-gate out all photons except the "ballistic" photons, which have experienced little or no scattering. Unfortunately, ballistic photons are attenuated exponentially: imaging through 10 TMFPs implies a loss of order 1e-5, which still allows detection with a decent laser and detector, whereas 100 TMFPs implies an attenuation of roughly 30 orders of magnitude. All photons have therefore undergone significant scattering, with the implication that image information must be spread over the full temporal distribution of photon arrival times at the receiver. We will overview recent work aimed at retrieving image information from highly scattering media (≫10 TMFPs) by using single-photon-sensitive cameras to record the full spatial and temporal distribution of transmitted photons, to which we then apply computational methods in order to retrieve images of embedded objects with sub-mm resolution. We will also discuss ongoing efforts to image through dynamically scattering media, i.e. media that change in time, using machine learning approaches. We are able to successfully image objects even in situations in which the scattering is so strong that no speckle memory effect is present and the continuously changing medium does not allow a transmission matrix to be measured, thus defeating all previous approaches to this problem.
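The exponential loss of ballistic photons quoted above can be sketched with a toy Beer-Lambert estimate (the true attenuation constant depends on the scattering versus transport mean free path of the medium, so the exponent here is only an order-of-magnitude illustration):

```python
import math

def ballistic_transmission(n_tmfp):
    """Fraction of unscattered (ballistic) photons remaining after a given
    number of transport mean free paths, using a toy exp(-n) decay."""
    return math.exp(-n_tmfp)

# 10 TMFPs: of order 1e-5 -- still detectable with a decent laser/detector
print(f"{ballistic_transmission(10):.1e}")   # -> 4.5e-05
# 100 TMFPs: tens of orders of magnitude -- no ballistic photons survive
print(f"{ballistic_transmission(100):.1e}")  # -> 3.7e-44
```

This is why, beyond a few tens of TMFPs, image retrieval must rely on the scattered photons' full spatio-temporal distribution rather than on time gating alone.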
We consider the problem of automatically locating, classifying and identifying an object within a point cloud that has been acquired by scanning a scene with a ladar. The recent work [E. Bae, Automatic scene understanding and object identification in point clouds, Proceedings of SPIE Volume 11160, 2019] approached the problem by first segmenting the point cloud into multiple classes of similar objects, before a more sophisticated and computationally demanding algorithm attempted to recognize/identify individual objects within the relevant class. The overall approach could find and identify partially visible objects with high confidence, but may fail if the object of interest is placed right next to other objects from the same class, or if the object of interest is scattered into several disjoint parts due to occlusions or slant view angles. This paper proposes an improvement of the algorithm that allows it to handle both clustered and scattered scenarios in a unified way. It introduces an intermediate step between segmentation and recognition that extracts objects from the relevant class based on similarity between their distance function and the distance function of a reference shape for different view angles. The similarity measure naturally accounts for occlusions and partial visibility, and can be expressed analytically in the distance coordinate for azimuth and elevation angles within the field of view (FOV). This reduces the search from three dimensions to two. Furthermore, calculations can be limited to the parts of the FOV corresponding to the relevant segmented region. In consequence, the computational efficiency of the algorithm is high and it is possible to match against the reference shape for multiple discrete view angles. The subsequent recognition step analyzes the extracted objects in more detail and avoids suffering from discretization and conversion errors. The algorithm is demonstrated on various maritime examples.
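The idea of an occlusion-tolerant comparison between a measured distance function and a reference shape on an azimuth/elevation grid can be illustrated with the following sketch. The similarity rule here (ignore pixels whose measured range is clearly shorter than the reference, since something is in front) is our own illustrative choice, not the paper's analytic measure:

```python
import numpy as np

def shape_similarity(measured, reference, tol=0.1):
    """Occlusion-tolerant similarity between a measured range image and a
    reference shape's distance map, both sampled on the same azimuth/
    elevation grid (NaN = no return). Pixels where the measured range is
    clearly shorter than the reference are treated as occluders in front
    of the object and are ignored rather than penalized."""
    valid = ~np.isnan(measured) & ~np.isnan(reference)
    occluded = valid & (measured < reference - tol)
    compare = valid & ~occluded
    if not compare.any():
        return 0.0
    # fraction of non-occluded pixels whose range agrees within tolerance
    return float(np.mean(np.abs(measured[compare] - reference[compare]) < tol))

ref = np.full((4, 4), 10.0)          # reference shape at 10 m, hypothetical
meas = ref.copy()
meas[0, :] = 7.0                     # an occluder in front of the top row
print(shape_similarity(meas, ref))   # -> 1.0 (occluded row ignored)
```

Because the comparison runs only over the azimuth/elevation grid of the segmented region, the search stays two-dimensional, mirroring the dimensionality reduction described above.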
In order to support maritime search and rescue activities, an affordable gated-viewing instrument has been developed within the TRAGVIS project. The instrument, named TRAGVIS after the project, is intended to enhance vision during night-time missions under poor visibility conditions. TRAGVIS consists of a compact, eye-safe NIR (near-infrared) laser light source and a monochromatic 1.3 Mpixel camera, and has a field of view similar to that of common field glasses. The camera sensor was recently upgraded from the Onyx to the Bora rev. A sensor from Teledyne e2v, and a thorough comparison between them will be shown. Several field tests were conducted at an out-of-service airport and in a maritime environment. The measured gray values of the instrument were calibrated to the reflectivity of the targets at different distances. Furthermore, the performance of the instrument was studied under different visibility conditions: several images of the same target were taken with the gated-viewing mode enabled and disabled. These measurements showed that even in light fog with an extinction coefficient of 3.8 km⁻¹, the measured contrast decreased by more than a factor of 3 when gated viewing was disabled. In contrast, no significant decrease in contrast could be identified when the gated-viewing feature of our instrument was used. In the maritime environment, field tests at a nearby harbor were performed for the identification of different maritime objects such as sailing boats, rubber boats and drones. Using several TRAGVIS images taken at monotonically increasing gate distances, a simple method was applied to build three-dimensional images of this maritime scene.
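A simple way to turn a sweep of gated images into a coarse depth map, in the spirit of the last sentence, is to assign each pixel the gate distance at which its intensity peaks. This is a generic sketch of the technique; the instrument's actual reconstruction method is not specified in the abstract and the gate distances below are hypothetical:

```python
import numpy as np

def depth_from_gate_sweep(frames, gate_distances):
    """Build a coarse depth map from a stack of gated-viewing images taken
    at monotonically increasing gate distances: each pixel is assigned the
    gate distance at which its intensity peaks."""
    frames = np.asarray(frames, dtype=float)     # shape (n_gates, H, W)
    peak = np.argmax(frames, axis=0)             # best gate index per pixel
    return np.asarray(gate_distances)[peak]

gates = [50.0, 60.0, 70.0]                       # metres, hypothetical
stack = np.zeros((3, 2, 2))
stack[1, 0, 0] = 1.0                             # pixel lit at the 60 m gate
stack[2, 1, 1] = 1.0                             # pixel lit at the 70 m gate
depth = depth_from_gate_sweep(stack, gates)
print(depth[0, 0], depth[1, 1])                  # -> 60.0 70.0
```

The depth resolution of such a map is set by the gate spacing, which is why a monotonically increasing sweep of gate distances is used.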
Water optical properties are crucial for airborne ocean optical sensing. Laser depth sounding has been used for many decades and is now a well-developed commercial technique for bottom depth charting. Airborne lidar also has the capability to generate depth profiles of water scattering properties. Recently, satellite-based lidars have also been shown to measure water depth. The NASA satellite ICESat-2 (Ice, Cloud and land Elevation Satellite-2) carries a photon-counting laser altimeter used to measure, for example, the elevation of ice sheets, glaciers and sea ice. The raw data can also be used to measure water depth in shallow waters. Passive EO remote sensing from aircraft and satellites can also be used for shallow-water bathymetry and environmental monitoring, as well as for estimating various optical parameters. In order to plan and execute airborne EO sensing over water, prior knowledge of the optical parameters of the water is important. There are many data sources for optical water parameters, including Secchi disc depths, turbidity profiles, phytoplankton, coloured dissolved organic matter (CDOM), humus substances and others. Many of these parameters are related to water colour and transmission. The paper will give examples of such data for the Baltic Sea and how they relate to geographical and seasonal variations. Examples of laser depth sounding in the Baltic will be given. The depth capability of airborne lidars is discussed in relation to new technology developments. Finally, the need for a comprehensive forecast model of water clarity is discussed, including its relation to seasonal, geographical and weather variations.
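Secchi disc depths, one of the data sources named above, are often converted to a diffuse attenuation coefficient for planning purposes via the classical Poole-Atkins empirical relation Kd · Z_SD ≈ 1.7. The coefficient varies with water type (Baltic coastal waters can deviate), so this is a rough planning-level sketch, with a hypothetical input value:

```python
def diffuse_attenuation_from_secchi(z_sd, coeff=1.7):
    """Rough diffuse attenuation coefficient Kd (1/m) from a Secchi disc
    depth Z_SD (m), using the classical empirical relation
    Kd * Z_SD ~ 1.7 (Poole & Atkins). The constant is water-type
    dependent, so treat the result as an order-of-magnitude estimate."""
    return coeff / z_sd

# A murky, Baltic-like Secchi depth of 5 m (hypothetical value):
kd = diffuse_attenuation_from_secchi(5.0)
print(round(kd, 2))   # -> 0.34 (1/m)
```

Estimates like this feed directly into lidar depth-capability predictions, since the usable sounding depth scales inversely with Kd.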
Due to the high availability and easy handling of small drones, the number of reported incidents caused by UAVs, both intentional and accidental, is increasing. To prevent such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object poses a potential threat, and therefore the object not only has to be detected, but also classified or identified. Typical 360° scanning LiDAR systems can be deployed to detect and track small objects in the 3D sensor data at ranges of up to 50 m. Unfortunately, in most cases the verification and classification of the detected objects is not possible due to the low resolution of this type of sensor. In high-resolution 2D images, differentiating flying objects seems more practical, and cameras in the visible spectrum in particular are well established and inexpensive. The major drawback of this type of sensor is its dependence on adequate illumination. Active illumination could be a solution to this problem, but it is usually impossible to illuminate the scene permanently. A more practical way is to select a sensor with a different spectral sensitivity, for example in the thermal IR. In this paper, we present an approach for a complete chain of detection, tracking and classification of small flying objects such as micro UAVs or birds, using a mobile multi-sensor platform with two 360° LiDAR scanners and pan-and-tilt cameras in the visible and thermal IR spectrum. The flying objects are initially detected and tracked in 3D LiDAR data. After detection, the cameras (a grayscale camera in the visible spectrum and a bolometer sensitive in the wavelength range of 7.5 µm to 14 µm) are automatically pointed at the object's position, and each sensor records a 2D image.
A convolutional neural network (CNN) performs the identification of the region of interest (ROI) as well as the object classification (we consider classes of eight different types of UAVs and birds). In particular, we compare the classification results of the CNN for the two camera types, i.e. for the different wavelengths. The large set of training data for the CNN, as well as the test data used for the experiments described in this paper, was recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment") at CENZUB, Sissonne, France.
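The building blocks of such a CNN classifier can be sketched in plain NumPy: convolution, ReLU, global pooling and a softmax head. This is a minimal illustration of the architecture class, not the paper's (unspecified) network; the weights are random and the class count of nine (eight UAV types plus a bird class) is our reading of the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution (single channel), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify(img, kernels, weights):
    """Tiny CNN: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    logits = weights @ feats
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

n_classes = 9                            # assumed: 8 UAV types + birds
kernels = rng.normal(size=(4, 3, 3))     # random, untrained filters
weights = rng.normal(size=(n_classes, 4))
probs = classify(rng.normal(size=(16, 16)), kernels, weights)
print(probs.shape, round(float(probs.sum()), 6))   # -> (9,) 1.0
```

In practice the same network is trained separately (or jointly) on the visible and thermal IR imagery, which is what makes the per-wavelength comparison in the paper possible.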
Due to the growing threat from a wide range of unmanned aerial vehicles (UAVs), including consumer micro-drones that are increasingly used for defense purposes, the need to develop active and passive countermeasures against armed and intelligence-gathering UAVs has been identified, in order to increase force protection, critical infrastructure resilience, and information security. Several counter-drone solutions have been reported. For the task of UAV localization, in addition to electro-optical detection and tracking, we consider distance determination a fundamental task for stand-alone observation stations. Laser ranging is one promising tool, currently widely used for determining the distance to large objects or slow-moving targets. Within the scope of this paper, we evaluate its features and limitations when applied to laser ranging on UAVs. As a target, we use a typical representative of commercial micro UAVs. We present theoretical analysis and experimental results of UAV laser range measurements under realistic environmental conditions, including investigations of the laser transmitter signal. We describe qualitative results of laser ranging in different operational modes, based on ranging the micro UAV from different directions and at various distances. To estimate the maximum ranging distance at the limits of our system, we apply an artificial scaled reference target that creates the same optical reflectance cross section as our commercial micro UAV at a given distance.
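The scaled-reference-target idea can be sketched with the simple point-target lidar equation, where the received power falls roughly as σ/R⁴ when the beam overfills the target. Under that assumption (our simplification; atmospheric losses and the paper's actual scaling law are ignored, and the numbers are hypothetical), the equivalent cross section a nearby target must present is:

```python
def scaled_cross_section(sigma, full_range, test_range):
    """Optical cross section a scaled reference target must present at a
    shorter test range to return the same power as a target of cross
    section `sigma` at `full_range`, assuming the simple point-target
    lidar relation P_received ~ sigma / R**4 (beam overfills the target).
    Atmospheric extinction is neglected."""
    return sigma * (test_range / full_range) ** 4

# Emulate a 0.1 m^2 micro-UAV signature at 1000 m with a target at 100 m:
print(round(scaled_cross_section(0.1, 1000.0, 100.0), 8))   # -> 1e-05 m^2
```

The steep fourth-power scaling is what makes a small, precisely characterised reference target a practical stand-in for a distant drone.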
The dependence of IR transmission on specific weather conditions is important to understanding the performance and limitations of IR sensor systems in such conditions. In this work, atmospheric transmission spectra were recorded in snowy conditions in Älvdalen, Sweden, along with simultaneous measurements of meteorological data. MODTRAN was used to model the predicted transmission spectra in order to compare with and verify the experimental data. The apparent path-averaged extinction was calculated and plotted against precipitation rate, relative humidity and wind speed for a range of wavelengths. Weak positive correlations were found between all three meteorological parameters and the measured extinction. The data showed slightly stronger correlations between the meteorological variables and extinction at longer wavelengths; however, further data collection in a wider range of conditions is needed to confirm this result.
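The apparent path-averaged extinction mentioned above follows from the Beer-Lambert law: for a measured transmittance T over a known path L, T = exp(-γL), so γ = -ln(T)/L. A minimal sketch with hypothetical numbers:

```python
import math

def path_averaged_extinction(transmittance, path_length_km):
    """Apparent path-averaged extinction coefficient (1/km) from a measured
    atmospheric transmittance over a known path length, via Beer-Lambert:
    T = exp(-gamma * L)  =>  gamma = -ln(T) / L."""
    return -math.log(transmittance) / path_length_km

# e.g. 60% transmission measured over a 2 km path (hypothetical values):
print(round(path_averaged_extinction(0.6, 2.0), 3))   # -> 0.255
```

Computing γ per wavelength band is what allows the extinction to be correlated against precipitation rate, humidity and wind speed as described.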
There is currently considerable development of small, lightweight lidar systems for applications in autonomous cars. This development makes it possible to equip small UAVs with this type of sensor. Adding an active sensor component alongside the more common passive UAV sensors can provide additional capabilities. This paper gives experimental examples of lidar data and discusses applications and capabilities of the platform and sensor concept, including the combination with data from other sensors. The lidar can be used for accurate 3D measurements and has potential for detecting partly occluded objects. Additionally, positioning of the UAV can be obtained by combining lidar data with data from other low-cost sensors (such as inertial measurement units). These capabilities are attainable for both indoor and outdoor short-range applications.
We propose a method for jointly estimating intrinsic calibration and internal clock synchronisation for a pan-tilt-zoom (PTZ) camera using only data that can be acquired in the field during normal operation. Results show that this method is a promising starting point towards using software to replace costly timing hardware in such cameras. Through experiments we provide calibration and clock synchronisation for an off-the-shelf low-cost PTZ camera, and observe greatly improved directional accuracy, even during mild manoeuvres.
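One ingredient of software clock synchronisation can be illustrated by recovering a constant time offset between the commanded pan trajectory and the motion actually observed in imagery, via the peak of their cross-correlation. This is a simplified stand-in for the paper's joint estimation (which also handles intrinsics); it assumes a constant offset and uniformly sampled signals:

```python
import numpy as np

def estimate_clock_offset(commanded, observed, dt):
    """Estimate the clock offset between commanded pan angles and the
    motion observed in imagery by locating the peak of their
    cross-correlation. Assumes a constant offset and uniform sampling."""
    c = commanded - commanded.mean()
    o = observed - observed.mean()
    corr = np.correlate(o, c, mode="full")
    lag = np.argmax(corr) - (len(c) - 1)   # samples by which `observed` lags
    return lag * dt

rng = np.random.default_rng(1)
cmd = rng.normal(size=1000)                      # commanded pan trajectory
obs = np.concatenate([np.zeros(25), cmd[:-25]])  # same motion, 25 samples late
print(estimate_clock_offset(cmd, obs, 0.01))     # -> 0.25 (seconds)
```

Once such an offset is estimated in software, pan/tilt readings can be time-shifted to match the image stream, which is the role otherwise played by dedicated timing hardware.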
There is generally a significant performance gap between theoretical and experimental results of optical designs. In theory all factors seem perfect, but experimental results are strongly affected by real-world factors such as production tolerances, atmospheric conditions, the experimental environment, etc. Therefore, a certain number of practical tests have to be performed in the field in order to adjust parameters to compensate for these factors. In the case of laser optical designs, specific tests are performed in order to match the practical spot size to the laser spot size predicted by analysis software. Generally, this requires either manual inspection of the spot after each test by an expert or complex spot-analysis tools, both of which are over-complex for simple spot size calculations. In this work, we propose a method to inspect and measure the laser spot size on a material remotely, using a camera and analysis software we developed. Using the proposed image processing pipeline, the laser spot image on the material is captured from the camera and processed to give a final spot size value in pixels. This method provides an easy, flexible and effective spot size calculation and experimental comparison. In addition, our work includes a comparison of two different processing techniques for calculating spot size. Using the proposed method, we obtained fast and accurate practical results in comparison with tests performed using manual inspection or expensive tools.
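A minimal version of such a pipeline thresholds the frame at a fraction of the peak intensity and reports the equivalent-circle diameter of the bright region in pixels. This is one of many possible spot-size definitions and is not necessarily either of the two techniques the paper compares:

```python
import numpy as np

def spot_size_pixels(image, rel_threshold=0.5):
    """Laser spot size from a camera frame: threshold at a fraction of the
    peak intensity and report the equivalent-circle diameter of the
    bright region, in pixels. Thresholding at half the peak approximates
    the full width at half maximum (FWHM) for a smooth spot."""
    mask = image >= rel_threshold * image.max()
    area = int(mask.sum())                 # bright-region area in pixels
    return 2.0 * np.sqrt(area / np.pi)    # diameter of a circle of that area

# Synthetic Gaussian spot with sigma = 5 px on a 64x64 frame:
y, x = np.mgrid[:64, :64]
spot = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))
d = spot_size_pixels(spot)    # close to the Gaussian FWHM of 2.355*sigma
print(round(d, 1))
```

Calibrating pixels to millimetres via the known camera geometry then gives the physical spot size needed for comparison with the design software's prediction.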
The article deals with the exact measurement of wavelength intensities in the vicinity of the light source frequency at the output of a polarizing fiber optic sensor. A functional polarizing fiber optic temperature sensor is used as the measuring element, and the frequency variations are induced by a birefringence change in the polarization-maintaining fiber due to temperature changes. The different speeds of light propagation in the two polarization planes of the polarization-maintaining optical fiber were used to excite the birefringence. Highly detailed measurements show that the intensity peaks vary with the changing temperature of the optical fiber due to the changing birefringence. Thanks to accurate measuring instruments, the dependence between the instantaneous change in the polarization state and the change in the maximum intensity at given wavelengths was observed. The described measurement determines whether wavelength variations may be a suitable alternative for evaluating polarization changes, which in some situations is difficult and costly.
We outline an Extended Color Acquisition Model (ECAM) that combines the European TRM4 visible/reflectivity model with human-perception color-difference metrics, enabling Detection, Recognition, Identification (DRI) range-performance prediction for color imagers. The prediction of object acquisition ranges is enabled by applying the TRM model to a color imaging system combined with a human-observer perceived-difference technique. The new concept predicts the DRI ranges of objects in a color image using two successive stages. The first is a physical/hardware 'zone' comprising a colored 4-bar pattern representing target and background, the atmosphere, an imaging color camera and a display, in which TRM4 (reflective mode) is applied independently to each of the three primary colors; the computation steps in this stage are very similar to the TRM4 reflective mode described in the Fraunhofer IOSB Technical Report 2016/09 (TRM4.v2). The second is a human-perception 'zone' in which the eye/brain system is involved: the photons emerging from the display are absorbed and processed by a human observer, and an image color-difference metric is applied. This work applies a technique for comparing an original image with its reproduction to evaluate the difference between target and background, represented by a two-color standard 4-bar pattern, observed at a given range, as captured and presented by a color camera with a color display. The S-CIELAB metric, which reflects the spatial frequency response to different colors, combined with the CIEDE2000 difference formula, is applied to calculate the perceived difference between target and background at each range.
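The perceived target/background difference is ultimately a distance between two colors in CIELAB space. As a simple stand-in for the CIEDE2000 formula the paper applies (CIEDE2000 adds lightness, chroma and hue weighting on top of this), the 1976 Euclidean ΔE*ab can be computed as follows; the patch values are hypothetical:

```python
import math

def delta_e76(lab1, lab2):
    """Euclidean CIELAB color difference (Delta E*ab, 1976 formula).
    A simplified stand-in for CIEDE2000: the modern formula starts from
    the same L*, a*, b* coordinates but weights lightness, chroma and
    hue differences separately."""
    return math.dist(lab1, lab2)

# Target vs background patch in CIELAB (hypothetical values):
target = (52.0, 18.0, 30.0)
background = (50.0, 12.0, 26.0)
print(round(delta_e76(target, background), 2))   # -> 7.48
```

In the full ECAM chain this per-patch difference is evaluated after S-CIELAB spatial filtering, so that the metric falls as range grows and the 4-bar pattern blurs, which is what ties color difference to DRI range.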