The effect of a low sample frame rate on interpretability is often confounded with its impact on the encoding process.
In this study, that confusion was avoided by ensuring that none of the low-frame-rate clips contained coding artifacts. Under
these conditions, the lowered frame rate was not associated with a statistically significant change in interpretability.
Airborne, high-definition 720p, 60 FPS video clips were used as source material to produce test clips with varying
sample frame rates, playback rates, and degrees of target motion. Frame rates ranged from 7.5 FPS to 60 FPS.
Playback rates ranged up to 8X normal speed. Target motion ranged from near zero MPH up to 300 MPH.
KEYWORDS: Video, Video surveillance, Cognitive modeling, Video compression, Signal to noise ratio, Video processing, Quality measurement, Cameras, Motion measurement, Image processing
A processing framework for cognitive modeling to predict video interpretability is discussed. The architecture consists of
spatiotemporal video preprocessing, metric computation, metric normalization, pooling of like metric groups with
masking adjustments, multinomial logistic pooling of the Minkowski-pooled groups of similar quality metrics, and
estimation of the confidence interval of the final result.
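As a rough illustration of the last two pooling stages, the sketch below combines each group of normalized metrics with a Minkowski (p-norm) pool and then maps the pooled group scores to probabilities over discrete interpretability levels with a multinomial logistic (softmax) model. The group contents, weights, exponent, and number of levels are placeholders for illustration, not values from the paper.

    import numpy as np

    def minkowski_pool(metrics, p=2.0):
        # Minkowski (p-norm) pooling of one group of normalized quality metrics.
        m = np.asarray(metrics, dtype=float)
        return float(np.mean(np.abs(m) ** p) ** (1.0 / p))

    def softmax(z):
        z = z - np.max(z)
        e = np.exp(z)
        return e / e.sum()

    def predict_level_probabilities(metric_groups, weights, bias, p=2.0):
        # Pool each metric group, then apply a multinomial logistic model to
        # obtain probabilities over discrete interpretability levels.
        pooled = np.array([minkowski_pool(g, p) for g in metric_groups])
        logits = weights @ pooled + bias      # weights shape: (n_levels, n_groups)
        return softmax(logits)

    # Placeholder example: three metric groups, five interpretability levels.
    groups = [[0.8, 0.7, 0.9], [0.4, 0.5], [0.6, 0.65, 0.7, 0.55]]
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 3))
    b = np.zeros(5)
    print(predict_level_probabilities(groups, W, b))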
KEYWORDS: Video, Cameras, Computer programming, Video compression, Calibration, Video surveillance, Image quality, Modulation, Signal to noise ratio, Motion measurement
The effect of various video encoders and compression settings is examined using a subjective, task-based performance
metric, the Video National Imagery Interpretability Rating Scale (Video-NIIRS), and a perceptual quality metric, the
Subjective Assessment Methodology of Video Image Quality (SAMVIQ). The subjective results are compared to objective
measurements.
KEYWORDS: Video, Scattering, Modulation transfer functions, Signal attenuation, Video surveillance, Situational awareness sensors, Atmospheric particles, Turbulence, Signal to noise ratio, Target detection
The following material is given to address the effect of low slant angle on video interpretability: 1) an equation for the
minimum slant angle, as a function of field-of-view, that limits the change in GSD across the scene to no more than √2
(see the sketch following this abstract); 2) evidence for reduced situational awareness due to errors in perceived depth
at low slant angle converting into position errors; 3) an equation for the optimum slant angle and target orientation with
respect to maximizing exposed target area; 4) the impact of the increased probability of occlusion as a function of slant
angle; and 5) a derivation for the loss of resolution due to atmospheric turbulence and scattering. In addition,
modifications to Video-NIIRS for low slant angle are suggested. The recommended modifications for low-slant-angle
Video-NIIRS are: 1) to rate at or near the center of the scene; and 2) to include target orientations in the Video-NIIRS criteria.
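As an illustration of how a minimum-slant-angle relationship of this kind can be set up, the sketch below assumes a constant-altitude platform over flat terrain, with along-range GSD scaling as 1/sin²(graze angle), and searches numerically for the smallest slant angle at which the near-to-far GSD ratio across a given vertical field of view stays within √2. The geometry, the sin² scaling, and the numeric search are simplifying assumptions for illustration, not the equation derived in the paper.

    import math

    def gsd_ratio(slant_deg, fov_deg):
        # Far-to-near GSD ratio across the vertical FOV, assuming flat terrain and
        # along-range GSD proportional to 1/sin^2(graze angle) at constant altitude.
        near = math.radians(slant_deg + fov_deg / 2.0)   # steeper graze angle, finer GSD
        far = math.radians(slant_deg - fov_deg / 2.0)    # shallower graze angle, coarser GSD
        return (math.sin(near) / math.sin(far)) ** 2

    def min_slant_angle(fov_deg, max_ratio=math.sqrt(2), step=0.01):
        # Smallest slant angle (degrees) keeping the GSD ratio at or below max_ratio.
        angle = fov_deg / 2.0 + step       # far edge of the scene must stay below the horizon
        while angle < 90.0:
            if gsd_ratio(angle, fov_deg) <= max_ratio:
                return angle
            angle += step
        return 90.0

    for fov in (1.0, 2.0, 5.0, 10.0):
        print(f"FOV {fov:4.1f} deg -> minimum slant angle ~ {min_slant_angle(fov):.1f} deg")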
The effect of video compression is examined using the task-based performance metric of the new Video National
Imagery Interpretability Rating Scale (Video NIIRS). Video NIIRS is a subjective task-criteria scale similar to the
well-known Visible NIIRS used for still-image quality measurement. However, each task in the Video NIIRS includes a
dynamic component that requires video of sufficient spatial and temporal resolution. The loss of Video NIIRS due to
compression is experimentally measured for select cases. The results show that an increase in compression, and the
associated increase in artifacts, reduces task-based interpretability and lowers the Video NIIRS rating of the video clips.
The extent of the effect has implications for system design.
The Video National Imagery Interpretability Rating Standard (V-NIIRS) consists of a ranked set of subjective criteria to
assist analysts in assigning an interpretability quality level to a motion imagery clip. The V-NIIRS rating standard is
needed to support the tasking, retrieval, and exploitation of motion imagery. A criteria survey was conducted to yield
individual pair-wise criteria rankings and scores. Statistical analysis shows good agreement with expectations across
the nine levels of interpretability for each of the seven content domains.
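To make the survey analysis concrete, the sketch below shows one generic way to turn pair-wise criteria judgments into a ranked list by scoring each criterion with the fraction of its comparisons it wins; the function name and the sample judgments are hypothetical, and the paper's actual statistical analysis may differ.

    from collections import defaultdict

    def rank_from_pairwise(judgments):
        # judgments: iterable of (winner, loser) pairs collected from respondents.
        # Returns (criterion, win_fraction) tuples sorted from highest to lowest score.
        wins = defaultdict(int)
        appearances = defaultdict(int)
        for winner, loser in judgments:
            wins[winner] += 1
            appearances[winner] += 1
            appearances[loser] += 1
        scores = {c: wins[c] / appearances[c] for c in appearances}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical judgments: "identify" rated harder than "track", which is harder than "detect".
    sample = [("identify", "detect"), ("identify", "track"), ("track", "detect"),
              ("identify", "detect"), ("track", "detect")]
    print(rank_from_pairwise(sample))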
A perceptual evaluation compared tracking performance when using color versus panchromatic synthetic imagery at low
frame rates. Frame rate was found to have an effect on tracking performance for the panchromatic motion imagery.
Color was found to be associated with improved tracking performance at 2 frames per second (FPS), but not at 6 FPS or
greater. A self-estimate of task confidence given by the respondents was found to be correlated with the measured tracking
performance, which supports the use of task confidence as a proxy for task performance in the future development and
validation of a motion imagery rating scale.