A nanosatellite control/communications concept is described using a 'state machine' control paradigm and optical communications to dramatically reduce the mass and power consumption for payloads that can tolerate a low, intermittent data rate.
Novel readout and support-circuit concepts in a CMOS imager are described for the replacement of conventional silicon photodiodes in applications requiring bandwidths of 1 MHz and more. Conventional silicon photodiodes are the detector of choice for high-speed applications but have been replaced by CCD arrays and CMOS imagers when spatial resolution is needed. These imagers can be comparable to photodiodes in responsivity and generally have the advantages of lower noise, compatibility with 'system-on-a-chip' architectures, and the ensuing low power and low cost. For space applications, the CMOS devices are highly tolerant of ionizing radiation. With a novel readout architecture and suitable digital and analog support circuitry on-chip, a CMOS imager concept has been identified to replace conventional photodiodes in some important niche applications.
KEYWORDS: Signal to noise ratio, Sensors, Target detection, Cameras, Visualization, Prototyping, Modulation transfer functions, Point spread functions, Signal detection, Optical components
A prototype, wide-field, optical sense-and-avoid instrument was constructed from low-cost commercial off-the-shelf components, and configured as a network of smart camera nodes. To detect small, general-aviation aircraft in a timely manner, such a sensor must detect targets at a range of 5-10 km at an update rate of a few Hz. This paper evaluates the flight test performance of the "DragonflEYE" sensor as installed on a Bell 205 helicopter. Both the Bell 205 and the Bell 206 (intruder aircraft) were fully instrumented to record position and orientation. Emphasis was given to the critical case of head-on collisions at typical general aviation altitudes and airspeeds. Imagery from the DragonflEYE was stored for the offline assessment of performance. Methodologies for assessing the key figures of merit, such as the signal-to-noise ratio, the range at first detection (R0) and the angular target size, were developed. Preliminary analysis indicated an airborne detection range of 6.7 km under typical visual meteorological conditions, which significantly exceeded typical visual acquisition ranges under the same conditions.
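As an illustration of the signal-to-noise figure of merit used in such assessments, the following is a minimal sketch, assuming the target and background pixel sets have already been segmented from the imagery (function and mask names are illustrative, not from the paper):

```python
import numpy as np

def target_snr(image, target_mask, background_mask):
    """Point-target SNR: mean target excess over the local background,
    normalized by the standard deviation of the background clutter."""
    target = image[target_mask].astype(float)
    background = image[background_mask].astype(float)
    return (target.mean() - background.mean()) / background.std()
```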
A compact, lightweight Earth horizon sensor has been designed based on uncooled infrared microbolometer array technology developed at INO. The design has been optimized for use on small satellites in Low Earth Orbits. The sensor may be used either as an attitude sensor or as an atmospheric limb detector. Various configurations may be implemented for both spinning and 3-axis stabilized satellites. The core of the sensor is the microbolometer focal plane array equipped with 256 x 1 VOx thermistor pixels with a pitch of 52 μm. The optics consists of a single Zinc Selenide lens with a focal length of 39.7 mm. The system's F-number is 3.8 and the detector limited Noise Equivalent Temperature Difference is estimated to be 0.75 K at 300 K for the 14 - 16 μm wavelength range. A single-sensor configuration will have a mass of less than 300 g, a volume of 125 cm^3 and a power consumption of 600 mW, making it well-suited for small satellite missions.
Expected temporal effects in a night vision goggle (NVG) include the fluorescence time constant, charge depletion at high signal levels, the response time of the automatic gain control (AGC) and other internal modulations in the NVG. There is also the possibility of physical damage or other non-reversible effects in response to large transient signals. To study the temporal behaviour of an NVG, a parametric Matlab model has been created. Of particular interest in the present work was the variation of NVG gain, induced by the AGC, after a short, intense pulse of light. To verify the model, the reduction of gain after a strong pulse was investigated experimentally using a simple technique. Preliminary laboratory measurements were performed using this technique. The experimental methodology is described, along with preliminary validation data.
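A minimal sketch of one plausible AGC dynamic (an assumption for illustration, not the authors' Matlab model): the AGC tracks a low-pass-filtered input level, and the gain varies inversely with that level, so a bright pulse depresses the gain, which then recovers with the AGC time constant:

```python
import numpy as np

def agc_gain(signal, dt, tau_agc=0.5, level_ref=1.0, g_max=1.0):
    """First-order AGC sketch: returns the gain trace for an input trace.
    tau_agc, level_ref and g_max are illustrative parameters."""
    level = level_ref
    alpha = dt / tau_agc
    gains = np.empty(len(signal))
    for i, s in enumerate(signal):
        level += alpha * (s - level)              # low-pass input tracking
        gains[i] = min(g_max, level_ref / level)  # compressive gain law
    return gains
```

After a short, intense pulse, `level` decays back toward the ambient value, so the gain recovers exponentially with time constant `tau_agc`, which is the behaviour probed by the pulse experiment.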
Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility and extend night operations for military and, increasingly, civil aviation. However, NVG imagery is not equivalent to daytime vision, and impaired depth and motion perception has been noted. One potential cause of impaired perception of space and environmental layout is NVG halo, where bright light sources appear to be surrounded by a disc-like halo. In this study we measured the characteristics of NVG halo psychophysically and objectively, and then evaluated the influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e. angular) size was only weakly dependent on both source intensity and distance, although halo intensity did vary with effective source intensity. The size of the halo image surrounding a light source is thus largely independent of the source distance and does not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and of the ability of human operators to compensate for perceptual distortions.
For objects on a plane, a "scale factor" relates the physical dimensions of the objects to the corresponding dimensions in a camera image. This scale factor may be the only calibration parameter of importance in many test applications. The scale factor depends on the angular size of a pixel of the camera, and also on the range to the object plane. A measurement procedure is presented for the determination of scale factor to high precision, based on the translation of a large-area target by a precision translator. A correlation analysis of the images of a translated target against a reference image is used to extract image shifts and the scale factor. The precision of the measurement is limited by the translator accuracy, camera noise and various other secondary factors. This measurement depends on the target being translated in a plane perpendicular to the optic axis of the camera, so that the scale factor is constant during the translation. The method can be extended to inward-looking 3D camera networks and can, under suitable constraints, yield both scale factor and transcription angle.
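A minimal sketch of the correlation step, assuming two monochrome frames of the target before and after a known translation (names and the integer-pixel peak search are illustrative; sub-pixel refinement of the correlation peak would be added for the quoted precision):

```python
import numpy as np

def image_shift(reference, shifted):
    """Integer-pixel shift between two images via FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(reference) *
                         np.conj(np.fft.fft2(shifted))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap indices above N/2 back to negative shifts
    return [p - n if p > n // 2 else p for p, n in zip(peak, xcorr.shape)]

def scale_factor(reference, shifted, translation_mm):
    """Scale factor = physical translation / measured image shift."""
    dy, dx = image_shift(reference, shifted)
    return translation_mm / np.hypot(dx, dy)   # mm per pixel
```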
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
When a bright light source is viewed through Night Vision Goggles (NVG), the image of the source can appear enveloped in a “halo” that is much larger than the “weak-signal” point spread function of the NVG. The halo phenomenon was investigated in order to produce an accurate model of NVG performance for use in psychophysical experiments. Halos were created and measured under controlled laboratory conditions using representative Generation III NVGs. To quantitatively measure halo characteristics, the NVG eyepiece was replaced by a CMOS imager. Halo size and intensity were determined from camera images as functions of point-source intensity and ambient scene illumination. Halo images were captured over a wide range of source radiances (7 orders of magnitude) and then processed with standard analysis tools to yield spot characteristics. The spot characteristics were analyzed to verify our proposed parametric model of NVG halo event formation. The model considered the potential effects of many subsystems of the NVG in the generation of halo: objective lens, photocathode, image intensifier, fluorescent screen and image guide. A description of the halo effects and the model parameters are contained in this work, along with a qualitative rationale for some of the parameter choices.
Anecdotal reports by pilots flying with Night Vision Goggles (NVGs) in urban environments suggest that halos produced by bright light sources impact flight performance. The current study developed a methodology to examine the impact of viewing distance on perceived halo size. This was a first step in characterizing the subtle phenomenon of halo. Observers provided absolute size estimates of halos generated by a red LED at several viewing distances. Physical measurements of these halos were also recorded. The results indicated that the perceived halo linear size decreased as viewing distance was decreased. Further, the data showed that halos subtended a constant visual angle on the goggles (1°48’, ±7’) irrespective of distance up to 75’. This invariance with distance may impact pilot visual performance. For example, the counterintuitive apparent contraction of halo size with decreasing viewing distance may impact estimates of closure rates and of the spatial layout of light sources in the scene. Preliminary results suggest that halo is a dynamic phenomenon that requires further research to characterize the specific perceptual effects that it might have on pilot performance.
Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. Previously, we used synthetic Aviator Night Vision Imaging System (ANVIS-9) imagery to demonstrate that the capacity to detect motion-defined form was degraded at low levels of illumination (see Macuda et al., 2004; Thomas et al., 2004). To validate our simulated NVG results, the current study evaluated observers' ability to detect motion-defined form through a real ANVIS-9 system. The image sequences consisted of a target (square) that moved at a different speed than the background, or only depicted the moving background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. Mean illumination, and hence image noise level, was varied by means of Neutral Density (ND) filters placed in front of the NVG objectives. At each noise level, we tested subjects at a series of target speeds. With both real and simulated NVG imagery, subjects had increased difficulty detecting the target with increased noise levels, at both slower and faster target speeds. These degradations in performance should be considered in operational planning. Further research is necessary to expand our understanding of the impact of NVG-produced noise on visual mechanisms.
KEYWORDS: Goggles, Visualization, Night vision, Night vision goggles, Light sources and illumination, Light sources, Modulation transfer functions, Defense and security, Standards development, Psychophysics
Several methodologies have been used to determine resolution acuity through Night Vision Goggles. The present study compared NVG acuity estimates derived from the Hoffman ANV-126 and a standard psychophysical grating acuity task. For the grating acuity task, observers were required to discriminate between horizontal and vertical gratings according to a method of constant stimuli. Psychometric functions were generated from the performance data, and acuity thresholds were interpolated at a performance level of 70% correct. Acuity estimates were established at three different illumination levels (0.06 to 5×10^-4 lux) for both procedures. These estimates were then converted to an equivalent Snellen value. The data indicate that grating acuity estimates were consistently better (i.e. lower scores) than acuity measures obtained from the Hoffman ANV-126. Furthermore, significant differences in estimated acuity were observed using different tube technologies. In keeping with previous acuity investigations, it is suggested that, although the Hoffman ANV-126 provides a rapid operational assessment of tube acuity, more rigorous psychophysical procedures such as the grating task described here be used to assess the real behavioural resolution of tube technologies.
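A minimal sketch of the threshold-interpolation step, assuming proportion-correct data collected at several grating spatial frequencies (linear interpolation is shown for brevity; a logistic or Weibull fit is the more usual choice):

```python
import numpy as np

def acuity_threshold(spatial_freq, prop_correct, criterion=0.70):
    """Interpolate the spatial frequency at the criterion performance
    level from method-of-constant-stimuli data."""
    order = np.argsort(prop_correct)           # np.interp needs ascending x
    return np.interp(criterion,
                     np.asarray(prop_correct, float)[order],
                     np.asarray(spatial_freq, float)[order])
```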
Damage in CMOS image sensors caused by heavy ions with moderate energy (~10 MeV) is discussed through the effects on transistors and photodiodes. SRIM (stopping and range of ions in matter) simulation results of heavy ion radiation damage to CMOS image sensors implemented with standard 0.35 μm and 0.18 μm technologies are presented. Total ionizing dose, displacement damage and single event damage are described in the context of the simulation. It is shown that heavy ions with an energy in the order of 10 MeV cause significant total ionizing dose and displacement damage around the active region in 0.35 μm technology, but reduced effects in 0.18 μm technology. The peak of displacement damage moves into the substrate with increasing ion energy. The effect of layer structure in the 0.18 μm and 0.35 μm technologies on heavy ion damage is also described.
A concept is described for the detection and location of transient objects, in which a "pixel-binary" CMOS imager is used to give a very high effective frame rate for the imager. The sensitivity to incoming photons is enhanced by the use of an image intensifier in front of the imager. For faint signals and a high enough frame rate, a single "image" typically contains only a few photon or noise events. Only the event locations need be stored, rather than the full image. The processing of many such "fast frames" allows a composite image to be created. In the composite image, isolated noise events can be removed, photon shot noise effects can be spatially smoothed and moving objects can be de-blurred and assigned a velocity vector. Expected objects can be masked or removed by differencing methods. In this work, the concept of a combined image intensifier/CMOS imager is modeled. Sensitivity, location precision and other performance factors are assessed. Benchmark measurements are used to validate aspects of the model. Options for a custom CMOS imager design concept are identified within the context of the benefits and drawbacks of commercially available night vision devices and CMOS imagers.
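A minimal sketch of the composite-image idea, assuming each "fast frame" is reduced to a list of (row, col) event locations (names, the neighbour test, and the use of scipy are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import convolve

def composite_from_events(frames, shape, min_neighbours=1):
    """Accumulate per-frame event locations into a composite image and
    suppress isolated single-pixel events as noise."""
    composite = np.zeros(shape, dtype=np.int32)
    for events in frames:                      # events: iterable of (r, c)
        for r, c in events:
            composite[r, c] += 1
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(composite, kernel, mode='constant')
    composite[neighbours < min_neighbours] = 0  # drop isolated noise events
    return composite
```

De-blurring a moving object would extend this by shifting each frame's events along a trial velocity vector before accumulation and keeping the vector that maximizes the composite peak.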
An optical beam combined with an array detector in a suitable geometrical arrangement is well-known to provide a range measurement based on the image position. Such a 'triangulation' rangefinder can measure range with short-term repeatability below the 10^-5 level, with the aid of spatial and temporal image processing. This level of precision is achieved by a centroid measurement precision of ±0.02 pixel. In order to quantify its precision, accuracy and linearity, a prototype triangulation rangefinder was constructed and evaluated in the laboratory using a CMOS imager and a collimated optical source. Various instrument, target and environmental conditions were used. The range-determination performance of the prototype instrument is described, based on laboratory measurements and augmented by a comprehensive parametric model. Temperature drift was the dominant source of systematic error. The temperature and vibration environments and target orientation and motion were controlled to allow their contributions to be independently assessed. Laser, detector and other effects were determined both experimentally and through modeling. Implementation concepts are presented for a custom CMOS imager that can enhance the performance of the rangefinder, especially with regard to update rate.
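A minimal sketch of the underlying geometry, assuming the simplest triangulation layout (baseline b between source and lens, focal length f, spot image x pixels of pitch p off-axis, so R = f·b/(x·p); parameter names are illustrative):

```python
import numpy as np

def spot_centroid(row_profile):
    """Intensity-weighted (sub-pixel) centroid of a 1-D spot profile;
    this is the step that delivers the ~±0.02 pixel precision."""
    idx = np.arange(row_profile.size)
    w = row_profile - row_profile.min()
    return (idx * w).sum() / w.sum()

def triangulation_range(x_pixels, pixel_pitch_m, focal_m, baseline_m):
    """Range from the off-axis spot position for a simple geometry."""
    return focal_m * baseline_m / (x_pixels * pixel_pitch_m)
```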
The capture of a wide field of view (FOV) scene by dividing it into multiple sub-images is a technique with many precedents in the natural world, the most familiar being the compound eyes of insects and arthropods. Artificial structures of networked cameras and simple compound eyes have been constructed for applications in robotics and machine vision. Previous work in this laboratory has explored the construction and calibration of sensors which produce multiple small images (of ~150 pixels in diameter) for high-speed object tracking.
In this paper design options are presented for electronic compound eyes consisting of 10^1 - 10^3 identical 'eyelets'. To implement a compound eye, multiple sub-images can be captured by distributing cameras and/or image collection optics. Figures of merit for comparisons will be developed to illustrate the impact of design choices on the field of view, resolution, information rate, image processing, calibration, environmental sensitivity and compatibility with integrated CMOS imagers.
Whereas compound eyes in nature are outward-looking, the methodology and subsystems for an outward-looking compound-eye sensor are similar to those for an inward-looking sensor, although inward-looking sensors have a common region viewable by all eyelets simultaneously. The paper addresses the design considerations for compound eyes in both outward-looking and inward-looking configurations.
Vanishing point and Z-transform image center calibration techniques are reported for a prototype "compound-eye" camera system which can contain up to 25 "eyelets". One application of this system is to track a fast-moving object, such as a tennis ball, over a wide field of view. Each eyelet comprises a coherent fiber bundle with a small imaging lens at one end. The other ends of the fiber bundles are aligned on a plane, which is re-imaged onto a commercial CMOS camera. The design and implementation of the Dragonfleye prototype is briefly described. Calibration of the image centers of the eyelet lenses is performed using a vanishing point technique, achieving an error of approximately ±0.2 pixels. An alternative technique, the Z-transform, is shown to be able to achieve similar results. By restricting the application to a two-dimensional surface, it is shown that similar accuracies can be achieved using a simple homography transformation without the need for calibrating individual eyelets. Preliminary results for object tracking between eyelets are presented, showing an error between actual and measured positions of around 3.5 mrad.
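For the surface-restricted case, a minimal sketch of applying a planar homography (the 3x3 matrix H is assumed pre-estimated from at least four pixel/surface correspondences; names are illustrative):

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 pixel coordinates to the 2-D working surface via a 3x3
    homography, using homogeneous coordinates."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the scale term
```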
Night vision devices are important tools that extend the operational capability of military and civilian flight operations. Although these devices enhance some aspects of night vision, they distort or degrade other aspects. Scintillation of the NVG signal at low light levels is one of the parameters that may affect pilot performance. We have developed a parametric model of NVG image scintillation. Measurements were taken of the output of a representative NVG at low light levels to validate the model and refine the values of the embedded parameters. A simple test environment was created using a photomultiplier and an oscilloscope. The model was used to create sequences of simulated NVG imagery that were characterized numerically and compared with measured NVG signals. The sequences of imagery are intended for use in laboratory experiments on depth and motion-in-depth perception.
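A minimal sketch of one way such simulated low-light sequences can be generated (an assumption for illustration, not the authors' parametric model): frames are drawn as Poisson photon/noise counts around the mean scene image, blurred by an event point-spread and scaled by the tube gain:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scintillation_sequence(mean_image, n_frames, gain=1.0, psf_sigma=1.0,
                           rng=None):
    """Generate n_frames of Poisson-dominated 'scintillating' imagery."""
    rng = rng or np.random.default_rng()
    return [gain * gaussian_filter(rng.poisson(mean_image).astype(float),
                                   psf_sigma)
            for _ in range(n_frames)]
```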
KEYWORDS: Visualization, Night vision, Target detection, Photons, Motion models, Visual process modeling, Signal to noise ratio, Software development, Night vision goggles, Image visualization
The influence of Night Vision Goggle-produced noise on the perception of motion-defined form was investigated using synthetic imagery and standard psychophysical procedures. Synthetic image sequences incorporating synthetic noise were generated using a software model developed by our research group. This model is based on the physical properties of the Aviator Night Vision Imaging System (ANVIS-9) image intensification tube. The image sequences either depicted a target that moved at a different speed than the background, or only depicted the background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. We tested subjects at a series of target speeds at several realistic noise levels resulting from varying simulated illumination. The results showed that subjects had increased difficulty detecting the target with increased noise levels, particularly at slower target speeds. This study suggests that the capacity to detect motion-defined form is degraded at low levels of illumination. Our findings are consistent with anecdotal reports of impaired motion perception in NVGs. Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. These degradations in performance should be considered in operational planning.
Compound eyes are a highly successful natural solution to the issue of wide field of view and high update rate for vision systems. Applications for an electronic implementation of a compound eye sensor include high-speed object tracking and depth perception. In this paper we demonstrate the construction and operation of a prototype compound eye sensor which currently consists of up to 20 eyelets, each of which forms an image of approximately 150 pixels in diameter on a single CMOS image sensor. Post-fabrication calibration of such a sensor is discussed in detail with reference to experimental measurements of accuracy and repeatability.
A pixel-parallel image sensor readout technique is demonstrated for CMOS active pixel sensors to facilitate a range of applications where the high-speed detection of the presence of an object, such as a laser spot, is required. Information concerning the object's location and size is more relevant than a captured image for such applications. A sensor for which the output comprises the numbers of pixels above a global threshold in both rows and columns is demonstrated in 0.18 μm CMOS technology. The factors limiting the ultimate performance of such a system are discussed. Subsequently, techniques for enhancing information retrieval from the sensor are introduced, including centroid calculations using multiple thresholds, multi-axis readout, and run-length encoding.
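A software emulation of this readout, as a minimal sketch (names are illustrative): the sensor reports only per-row and per-column counts of pixels above a global threshold, and an object centroid is estimated from those two projections without reading the full image:

```python
import numpy as np

def row_col_counts(image, threshold):
    """Emulated sensor output: counts of above-threshold pixels per row
    and per column."""
    binary = image > threshold
    return binary.sum(axis=1), binary.sum(axis=0)

def centroid_from_counts(row_counts, col_counts):
    """Object centroid estimated from the two count projections."""
    r = np.arange(row_counts.size)
    c = np.arange(col_counts.size)
    return ((r * row_counts).sum() / row_counts.sum(),
            (c * col_counts).sum() / col_counts.sum())
```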
Mosaic imagers increase field of view cost-effectively, by connecting single-chip cameras in a coordinated manner equivalent to a large array of sensors. Components that would conventionally have been in separate chips can be integrated on the same focal plane by using CMOS image sensors (CIS). Here, a mosaic imaging system is constructed using CIS connected through a bus line which shares common input controls and output(s), and enables additional cameras to be inserted with little system modification. The image bus consumes relatively low power by employing intelligent power control techniques. However, the bandwidth of the bus will still limit the number of camera modules that can be connected in the mosaic array. Hence, signal-processing components, such as data reduction and encoding, are needed on-chip in order to achieve high readout speed. One such method is described in which the number and sizes of pixel clusters above an intensity threshold are determined using a novel 'object positioning algorithm' architecture. This scheme identifies significant events or objects in the scene before the camera's data are transmitted over the bus, thereby reducing the effective bandwidth. In addition, basic modules in the single-chip camera are suggested for efficient data transfer and power control in the mosaic imager.
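An illustrative software stand-in for the on-chip data reduction (the paper's 'object positioning algorithm' is a hardware architecture; connected-component labeling is used here only to show the kind of cluster report that replaces the full frame):

```python
import numpy as np
from scipy.ndimage import label

def cluster_report(image, threshold):
    """Report number, sizes and bounding positions of pixel clusters
    above an intensity threshold, instead of the full image."""
    labels, n_clusters = label(image > threshold)
    report = []
    for k in range(1, n_clusters + 1):
        rows, cols = np.nonzero(labels == k)
        report.append({'size': rows.size,
                       'row_range': (rows.min(), rows.max()),
                       'col_range': (cols.min(), cols.max())})
    return report
```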
The detection of incipient wildfires from space is optimized by high spatial resolution, redundant coverage of a large swath, modest spectral resolution, and a high image frame rate. The desired information rate can exceed 10^9 bytes/s, which is difficult to achieve with conventional sensor designs. A design is described for a distributed sensor consisting of 10^2 - 10^3 identical detection modules linked by a serial bus to a central controller. Each detection module or 'chipxel' contains an intelligent bus interface, a detector array, a multiplexer, amplifiers, digitizers, local data and program memory, a local controller, and modest image preprocessing. Clock, timing, and power control can also be present. The baseline detector element is an active CMOS image sensor, although a mix of detectors can share a common readout structure. The paper will describe the specifications for a two-chip implementation of a chipxel for space-based wildfire detection, with emphasis on the intelligent bus interface, power control, and on-chip preprocessing. Key analog and digital elements of the chip have been implemented in CMOS 0.35 micrometer technology, while ancillary functions and design augmentations can be evaluated in a gate array or similar hardware.
A novel multispectral remote sensing instrument for microsatellites is described. By using 10^2 - 10^3 'chipxels,' a combination of high angular resolution, large coverage region, multispectral operation, and redundancy can be achieved. Each 'chipxel' has a detector array, optics, electronics, and an intelligent bus interface.
MeteorWatch is a concept for the observation of small meteor events from a microsatellite in low earth orbit. To achieve high spatial resolution (about 1 km), fast update rate (up to 50 Hz), and large instantaneous coverage (10^7 km^2), a distributed sensor is appropriate. The MeteorWatch sensor design has about 300 independent detection modules linked by a data bus to a central controller and image processor. Each detection module has a camera, digitizer, controller, image preprocessor, and bus interface. In operation, each detection module decides on the probability that a particular image has a meteor. Meteor event rates are expected to be low compared to the data rate, so that preprocessing at the detector modules reduces traffic on the data bus to the central controller. Image sequences with probable meteors are sent to the central controller for further processing and extraction of the meteor parameters. This paper gives an overview of MeteorWatch and describes the image processing approach, including partitioning of the tasks between the detection modules and the central image processor, the selection of clutter-rejection algorithms and the limits of detection for small meteors.
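A minimal sketch of the kind of per-module screening this partitioning implies (frame differencing with a crude noise estimate; thresholds and names are assumptions, not the MeteorWatch clutter-rejection algorithms):

```python
import numpy as np

def probable_meteor(frame, previous, k_sigma=5.0, min_pixels=3):
    """Flag a frame as a probable meteor if enough pixels rise well above
    the frame-difference noise; only flagged sequences go on the bus."""
    diff = frame.astype(float) - previous.astype(float)
    threshold = k_sigma * diff.std()     # crude noise/clutter estimate
    return np.count_nonzero(diff > threshold) >= min_pixels
```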
The night-time temperature in the altitude region from 90 to 120 km is known to be characterized by a steep gradient caused by heating due to ultraviolet absorption of sunlight in the middle thermosphere during the day, yet measurements of this gradient are scarce. Remote optical sensing methods fail in this region because the few nightglow emissions above 100 km are contaminated by photochemical reaction energy. We address this measurement by considering the scattered return signal from a laser emitting horizontally from a rocket as it traverses the region. Two analyses are presented. The ideal method consists of measurement of the Raman rotational spectrum of the combined N2 and O2 back-scattered signals by means of interference filter spatial spectral scanning. A quantitative estimate of such a measurement shows that this method, while only marginally practical at the moment, holds significant promise. The second method consists of measuring the back-scattered Rayleigh signal, which is a thousand times brighter, and deducing the temperature from the density scale height. This measurement is shown by quantitative precision estimates to be practical using today's technology. Proposed optical configurations for both methods are presented, and the limitations of both are explored.
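The scale-height relation underlying the second method is the standard hydrostatic-equilibrium result (notation assumed here, with the Rayleigh return proportional to the density profile):

```latex
% For a locally isothermal layer, density falls exponentially with a
% scale height H, from which the temperature follows directly:
\[
  \rho(z) = \rho_0 \, e^{-z/H}, \qquad
  H = -\left(\frac{d\ln\rho}{dz}\right)^{-1} = \frac{k_B T}{\bar{m}\, g}
  \;\;\Rightarrow\;\;
  T = \frac{\bar{m}\, g\, H}{k_B},
\]
% where \bar{m} is the mean molecular mass, g the gravitational
% acceleration, and k_B the Boltzmann constant.
```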
The uniformity of the output of an integrated, quasi-linear array of bolometer elements is evaluated in terms of substrate temperature gradients, variations in bolometer thermal conductivity and temperature coefficient of resistance, and self-heating during readout. With a suitable offset compensation procedure, the array non-uniformity can be as low as a few parts per million of the DC offset voltage. Uniform substrate temperature changes as large as 10 K can be tolerated.
A passive cryocooler has been developed for the cooling of small payloads to temperatures as low as 145 K. Although designed for a specific electronics experiment on the STRV-1d microsatellite, the device is suitable for a wide range of applications. The cryocooler uses coated surfaces for tailored radiative cooling. Mechanical support between components is provided by fiberglass struts. The measured end temperature reached is 151 K in a liquid nitrogen dewar, which extrapolates to an end temperature below 145 K in space. Thermal vacuum testing and random vibration testing at levels consistent with an Ariane 5 launch have been performed as part of formal qualification for the STRV mission. In this paper, details of the design, analysis, fabrication and testing of the passive cryocooler are presented.
KEYWORDS: Resistance, Bolometers, Sensors, Prototyping, Digital electronics, Electronics, Temperature metrology, Semiconductors, Detector arrays, Control systems
Integration of detector arrays and digital CMOS circuitry can confer significant performance improvements on an imaging system. In this paper we present an integrated sensor array based on microbolometer (MB) elements deposited on a CMOS substrate containing electronics for random access readout, amplification, gain and offset control and digitization. Such integrated MB arrays are effective components in a novel implementation of an earth-horizon attitude sensor for satellites. The bolometer elements are used to distinguish the earth's thermal IR from the space background. For this application, the reduced detectivity of MB arrays compared with cooled IR detectors can be tolerated. Low mass, enhanced reliability, and low power consumption are gained by using an uncooled IR detector, and by using an integrated circuit design. These considerations are especially important for microsatellites. The low cost per array facilitates the use of multiple arrays, which allows significant flexibility in the optical and systems designs. The integrated chip design allows for random-access readout, on-chip gain and offset compensation and local control of pixel geometry, which contribute to the overall system effectiveness and help to allay any performance reductions that come from reduced detectivity.
Timothy Pope, Hubert Jerominek, Christine Alain, Francis Picard, R. Wayne Fuchs, Mario Lehoux, Rose Zhang, Carol Grenier, Yves Rouleau, Felix Cayer, Simon Savard, Ghislain Bilodeau, Jean-Francois Couillard, Carl Larouche, Paul Thomas
Three types of uncooled IR bolometric detector arrays equipped with 256 x 1 and 256 x 40 VO2 thermistor pixels and on-chip readout electronics are presented. These reconfigurable arrays consist of 50 μm x 50 μm pixels and CMOS readout electronics that can be operated either in random access mode or in self-scanning mode. Depending on the operational conditions, the NETD of the arrays can be as low as 20 mK.
The Satellite Attitude Sensor (SAS) utilizes IR bolometer arrays to perform the role of a staring horizon sensor with good accuracy and a wide field of view that avoids mechanical scanning. An innovative concept has resulted in a simple and inexpensive design suitable for the requirements of both low and high altitude orbits. Although SAS was designed especially for Earth-observing micro-satellites, it is applicable to satellites with geosynchronous or highly elliptical orbits, and the update rate is sufficient for use with a spinning satellite or a rocket. The ability of the sensor to operate in both spinning and 3-axis stabilized modes will allow it to fulfill the needs of a large variety of scientific and remote sensing applications. Laboratory tests are in progress to verify the design.
Discrimination of small wildfires by dual-wavelength imaging at high spatial resolution (10 m) must be made against a variable background (scene clutter) caused by diffuse and specular sunlight reflections and self-emission from the scene. Small fires can be readily detected at wavelengths longer than about 1400 nm in the near infrared. From a specific ground spot, the ratio of signal intensities in bands near 2400 and 3700 nm gives an 'effective temperature' that is a useful discriminant against scene clutter. Co-registration of the scene images in the two wavelength regions is important when applying the technique, particularly from a space platform where the angular size of the ground footprint is small. This paper shows that atmospheric refraction and turbulence can be ignored, while window wedge angles, lens centration errors and spectral variations in the size of the ground footprint must be dealt with by calibration or additional signal processing.
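A minimal sketch of the two-band 'effective temperature' discriminant: find the blackbody temperature whose Planck-radiance ratio between the two bands matches the measured signal ratio (monochromatic band centres and the grey-body assumption are simplifications for illustration):

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance at one wavelength and temperature."""
    return (2 * H * C**2 / wavelength_m**5 /
            (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

def effective_temperature(ratio_2400_3700, lo=300.0, hi=2000.0):
    """Solve for T such that B(2.4 um, T) / B(3.7 um, T) = measured ratio;
    the ratio is monotonic in T over this bracket."""
    f = lambda t: planck(2.4e-6, t) / planck(3.7e-6, t) - ratio_2400_3700
    return brentq(f, lo, hi)
```

A hot fire pixel yields a much higher band ratio, and hence effective temperature, than sunlit or self-emitting background, which is what makes the ratio a clutter discriminant.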
KEYWORDS: Sensors, Satellites, Infrared sensors, Spatial resolution, Flame detectors, Signal to noise ratio, Telescopes, Satellite imaging, Space telescopes, Signal processing
Small, incipient wildfires in the boreal forest can be detected with a space-based infrared sensor that uses currently available technology. At night, wildfires with a temperature of 700 K or more and an area of 1 m2 should be visible if there is a clear line of sight to the sensor. Sensor refinements and signal processing could enhance this level of detection. Clouds, topographical variations and the forest canopy may obscure the line of sight, so that multiple looks would significantly improve the probability of detection of a small fire. The relatively long revisit time of a satellite-based sensor is a constraint of the fire management application. Although the cost and revisit time of a spaceborne sensor are currently too high for it to replace airborne sensors, there is an important role as an adjunct sensor operating at night. Many of the specifications of an infrared sensor for wildfire detection are similar to those for space-based surveillance applications, so that useful infrared imagery from space may become available to the forest management community at relatively low cost.
KEYWORDS: Sensors, Infrared sensors, Infrared radiation, Signal processing, Diffraction, Detector arrays, Infrared detectors, Signal detection, Signal to noise ratio, Switching
Airborne infrared fire location can be used to augment other techniques for the detection of small, incipient forest fires. Described here is a new real-time spatial/spectral scanner concept, the Adaptive Infrared Forest Fire Sensor, which employs an acousto-optical tunable filter (AOTF), an indium antimonide or similar array detector, and a steerable scan mirror to enhance the probability of detecting small wildfires and to reduce the rate of false alarms caused by variations in the forest scene, atmosphere, and sun position. Rapid switching of the wavelength of operation in response to the received signal permits the scanner system to optimize the spectral information about a particular spatial location. Measurements on a laboratory prototype system are in progress to verify the salient features of the design concept.
The Electro-Optics Laboratory of the Institute for Space and Terrestrial Science characterizes array detectors under a wide range of operating conditions in a test facility based on a uniform optical source, a flexible array controller, a cryostat, and comprehensive data acquisition hardware and software. Source characteristics, ambient temperature, clock/bias parameters, and output signal conditioning can be varied to maximize the useful information about the devices under test. Emphasis has been placed on achieving a high level of accuracy and reproducibility in the measurements. Results from representative CCD arrays are used to illustrate design highlights and facility capabilities.