This paper presents a study of face recognition performance as a function of light level using intensified near infrared imagery in conjunction with thermal infrared imagery. Intensification technology is the most prevalent in both civilian and
military night vision equipment, and provides enough enhancement for human operators to perform standard tasks under extremely low-light conditions. We describe a comprehensive data collection effort undertaken by the authors to image subjects under carefully controlled illumination and quantify the performance of standard face recognition algorithms on visible, intensified, and thermal imagery as a function of light level. Performance comparisons for automatic face recognition are reported using the standardized implementations from the CSU Face Identification Evaluation System, as well as Equinox's own algorithms. The results contained in this paper should constitute the initial step for the analysis and deployment of face recognition systems designed to work in low-light conditions.
Image fusion of complementary broadband spectral modalities has been extensively studied for providing performance enhancements to various military applications. With the growing availability of COTS and customized video cameras that image in VIS-NIR, SWIR, MWIR and LWIR, there is a corresponding increase in the practical exploitation of different combinations of fusion between any of these respective bands. Equinox Corporation has been developing a unique line of products around the concept of a single unified video image fusion device that can centrally interface with a variety of input cameras and output displays, together with a suite of algorithms that support image fusion across the diversity of possible combinations of these imaging modalities. These devices are small, lightweight, and consume roughly 1.5 W, making them easy to integrate into portable systems.
KEYWORDS: Facial recognition systems, Near infrared, Cameras, Visible radiation, Detection and tracking algorithms, Light sources and illumination, Microchannel plates, Sensors, System identification, Video
This paper presents a systematic study of face recognition performance as a function of light level using intensified near infrared imagery. This technology is the most prevalent in both civilian and military night vision equipment, and provides enough intensification for human operators to perform standard tasks under extremely low-light conditions. We describe a comprehensive data collection effort undertaken by the authors to image subjects under carefully controlled illumination and quantify the performance of standard face recognition algorithms on visible and intensified imagery as a function of light level. Performance comparisons for automatic face recognition are reported using the standardized implementations from the CSU Face Identification Evaluation System. The results contained in this paper should constitute the initial step for the analysis and deployment of face recognition systems designed to work in low-light conditions.
Equinox Corporation has developed two new video board products for real-time image fusion of visible (or intensified visible/near-infrared) and thermal (emissive) infrared video. These products can provide unique capabilities to the dismounted soldier, maritime/naval operations and Unmanned Aerial Vehicles (UAVs) with low-power, lightweight, compact and inexpensive FPGA video fusion hardware. For several years Equinox Corporation has been studying and developing image fusion methodologies using the complementary modalities of the visible and thermal infrared wavebands including applications to face recognition, tracking, sensor development and fused image visualization. The video board products incorporate Equinox's proprietary image fusion algorithms into an FPGA architecture with embedded programmable capability. Currently included are (1) user interactive image fusion algorithms that go significantly beyond standard "A+B" fusion providing an intuitive color visualization invariant to distracting illumination changes, (2) generalized image co-registration to compensate for parallax, scale and rotation differences between visible/intensified and thermal IR, as well as non-linear optical and display distortion, and (3) automatic gain control (AGC) for dynamic range adaptation.
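The "A+B" fusion that the abstract says its algorithms go significantly beyond is simple per-pixel weighted averaging of two co-registered frames. As a point of reference only (this is the standard baseline, not Equinox's proprietary method), it can be sketched as:

```python
import numpy as np

def a_plus_b_fusion(visible, thermal, alpha=0.5):
    """Standard weighted 'A+B' fusion of two co-registered frames.

    Both inputs are float arrays scaled to [0, 1]; alpha weights the
    visible channel. This is the conventional baseline the text refers
    to, not the proprietary algorithm described in the abstract.
    """
    visible = np.asarray(visible, dtype=float)
    thermal = np.asarray(thermal, dtype=float)
    fused = alpha * visible + (1.0 - alpha) * thermal
    return np.clip(fused, 0.0, 1.0)
```

In hardware this averaging is trivially cheap, which is why richer fusion (illumination-invariant color visualization, co-registration, AGC) is where FPGA implementations add value.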
Recent research has demonstrated distinct advantages of thermal infrared imaging for improving face recognition performance. While conventional video cameras sense reflected light, thermal infrared cameras primarily measure radiation emitted from objects at just above room temperature (e.g., faces). Visible and thermal infrared image data collections of frontal views of faces have been ongoing at NIST for over two years, producing the most comprehensive database known to involve thermal infrared imagery of human faces. Rigorous experimentation with this database has revealed consistently superior recognition performance of algorithms applied to thermal infrared imagery, particularly under variable illumination conditions. An end-to-end face recognition system incorporating simultaneous coregistered thermal infrared and visible imagery has been developed and tested both indoors and outdoors with good performance.
Over the last decade there has been considerable study of separating ground objects from background using multispectral imagery in the reflective spectrum from 400-2500 nm. In this paper we explore using two broadband spectral modalities, visible and ShortWave InfraRed (SWIR), for detection of minelike objects, obstacles, and camouflage. Whereas multispectral imagery is sensed over multiple narrowband wavelengths, sensing over two broadband spectral regions has the advantage of increased signal resulting from energy integrated over larger portions of the spectrum. Preliminary results presented here show that very basic image fusion processing applied to visible and SWIR imagery produces reasonably illumination-invariant segmentation of objects against background. This suggests a simplified compact camera architecture using visible and SWIR focal plane arrays for performing detection of mines and other important objects of interest.
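One minimal example of "very basic image fusion processing" with an illumination-invariant flavor is a normalized difference of the two broadband channels: the ratio form cancels a common multiplicative illumination factor. The index and threshold below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def normalized_difference_segmentation(vis, swir, threshold=0.0):
    """Segment objects from background using a normalized difference
    of broadband visible and SWIR intensities.

    (swir - vis) / (swir + vis) is invariant to a shared multiplicative
    illumination scaling of both channels, one simple route to the
    illumination-invariant behavior described above. Hypothetical
    sketch; the paper's fusion processing may differ.
    """
    vis = np.asarray(vis, dtype=float)
    swir = np.asarray(swir, dtype=float)
    eps = 1e-8                          # avoid division by zero
    index = (swir - vis) / (swir + vis + eps)
    return index > threshold            # boolean object mask
```

Note that doubling the illumination (scaling both channels) leaves the mask unchanged, which is the property motivating the broadband two-channel architecture.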
A key issue for face recognition has been accurate identification under variable illumination conditions. Conventional video cameras sense reflected light, so image gray values are a product of both intrinsic skin reflectivity and external incident illumination, obfuscating the intrinsic reflectivity of skin. It has been qualitatively observed that thermal imagery of human faces is invariant to changes in indoor and outdoor illumination, although no rigorous quantitative analysis confirming this assertion has been published in the open literature. Given the significant potential improvement to the performance of face recognition algorithms using thermal IR imagery, it is important to quantify the observed illumination invariance and to establish a solid physical basis for this phenomenon. Image measurements are presented from the two primarily used spectral regions for thermal IR: the 3-5 micron midwave IR (MWIR) and the 8-14 micron longwave IR (LWIR). All image measurements are made with respect to precise blackbody ground truth. Radiometric calibration procedures for two different kinds of thermal IR sensors are presented and are emphasized as being an integral part of data collection protocols and face recognition algorithms.
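A common way to calibrate a thermal IR sensor against blackbody ground truth is two-point calibration: image two uniform blackbodies of known radiance and fit a per-pixel linear gain/offset model. The abstract's exact procedures may differ; this is a generic sketch under the standard linear-response assumption:

```python
import numpy as np

def two_point_calibration(counts_cold, counts_hot, L_cold, L_hot):
    """Per-pixel gain/offset calibration from two blackbody references.

    counts_cold / counts_hot are raw sensor images of uniform
    blackbodies with known radiances L_cold and L_hot. Assuming the
    linear model counts = gain * L + offset (a common simplification;
    not necessarily the paper's procedure), solve per pixel and return
    a function mapping raw counts to calibrated radiance.
    """
    counts_cold = np.asarray(counts_cold, dtype=float)
    counts_hot = np.asarray(counts_hot, dtype=float)
    gain = (counts_hot - counts_cold) / (L_hot - L_cold)
    offset = counts_cold - gain * L_cold

    def to_radiance(counts):
        return (np.asarray(counts, dtype=float) - offset) / gain

    return to_radiance
```

Because gain and offset are solved per pixel, this also removes fixed-pattern non-uniformity, which is why such calibration is integral to data collection protocols of this kind.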
We present a new formalism for the treatment and understanding of multispectral images and multisensor fusion based on first order contrast information. Although little attention has been paid to the utility of multispectral contrast, we develop a theory of multispectral contrast that enables us to produce an optimal grayscale visualization of the first order contrast of an image with an arbitrary number of bands. In particular, we consider the visualization of multiple registered multi-modal medical images. We demonstrate how our methodology can reveal significantly more interpretive information to a radiologist or image analyst, and how it can be used in a number of image understanding algorithms. Existing grayscale visualization strategies are reviewed, and a discussion is given as to why our algorithm performs better. A variety of experimental results from medical imaging and remotely sensed data are presented.
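First order multispectral contrast is commonly formalized via the Di Zenzo structure tensor: accumulate outer products of per-band gradients, then take the square root of the tensor's largest eigenvalue as a scalar contrast magnitude. The sketch below follows that standard formulation, which is consistent with, but not necessarily identical to, the paper's optimal visualization:

```python
import numpy as np

def multispectral_contrast(bands):
    """Grayscale map of first order contrast for a multiband image.

    `bands` is a (B, H, W) stack of registered bands. Sum the outer
    products of per-band gradients into a 2x2 structure tensor per
    pixel, then return the square root of its largest eigenvalue
    (a standard Di Zenzo-style construction, used here as an
    illustrative assumption about the formalism).
    """
    bands = np.asarray(bands, dtype=float)
    H, W = bands.shape[1:]
    gxx = np.zeros((H, W)); gyy = np.zeros((H, W)); gxy = np.zeros((H, W))
    for band in bands:
        gy, gx = np.gradient(band)        # per-band spatial gradients
        gxx += gx * gx
        gyy += gy * gy
        gxy += gx * gy
    # closed-form largest eigenvalue of [[gxx, gxy], [gxy, gyy]]
    trace = gxx + gyy
    diff = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    lam_max = 0.5 * (trace + diff)
    return np.sqrt(lam_max)
```

Unlike averaging bands before differentiating, this keeps edges that have opposite signs in different bands, which is exactly the information a naive grayscale reduction destroys.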
A new method is introduced for the registration of MRI and CT scans of the head, based on the first order geometry of the images. Registration is accomplished by optimal alignment of gradient vector fields between respective MRI and CT images. We show that the summation of the squared inner products of gradient vectors between images is well-behaved, having a strongly peaked maximum when images are exactly registered. This supports our premise that both magnitude and orientation of edge information are important features for image registration. A number of experimental results are presented demonstrating the accuracy of our method.
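The alignment criterion above can be sketched for the pure-translation case (the paper handles richer transforms): score each candidate shift by the sum of squared inner products of gradient vectors, and pick the shift that maximizes it. A minimal illustration, assuming integer translations and circular shifts:

```python
import numpy as np

def gradient_correlation(fixed, moving, shift):
    """Sum of squared inner products of gradient vectors between a
    fixed image and a moving image translated by integer (dy, dx).

    Illustrative translation-only version of the criterion described
    in the abstract; it should peak at correct registration.
    """
    fixed = np.asarray(fixed, dtype=float)
    moving = np.roll(np.asarray(moving, dtype=float), shift, axis=(0, 1))
    fy, fx = np.gradient(fixed)
    my, mx = np.gradient(moving)
    inner = fx * mx + fy * my            # per-pixel gradient inner product
    return np.sum(inner ** 2)

def best_shift(fixed, moving, max_shift=3):
    """Exhaustive search over integer translations for the maximum score."""
    scores = {(dy, dx): gradient_correlation(fixed, moving, (dy, dx))
              for dy in range(-max_shift, max_shift + 1)
              for dx in range(-max_shift, max_shift + 1)}
    return max(scores, key=scores.get)
```

Squaring the inner product rewards gradients that agree in orientation regardless of sign, which is useful for MRI/CT pairs where the same anatomical edge can have opposite-polarity intensity transitions.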