KEYWORDS: Artificial intelligence, Information security, Software development, Systems modeling, Engineering, Search and rescue, Automation, Model based design, Design
Large Language Models (LLMs) provide new capabilities to rapidly reform, regroup, and reskill for new missions and opportunities, and to respond to an ever-changing operational landscape. Agile contracts can enable a greater flow of value in new development contexts. These methods of engagement and partnership establish high-performing teams through the forming, storming, norming, and performing stages, which in turn inform liberating structures that outperform traditional rigid hierarchies and even established mission engineering methods. Generative AI based on LLMs, coupled with modern agile model-based engineering in design, enables automated requirements decomposition trained in the lingua franca of the development team and translated into the dialects of other domain disciplines, with the business acumen afforded by proven industry approaches. AI automations that track and adapt knowledge, skills, and abilities across ever-changing jobs and roles are illustrated using prevailing architecture frameworks, model-based systems engineering, simulation, and decision-support approaches to emergent objectives.
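As a minimal sketch of the kind of automated requirements decomposition described above (the client library, model name, and prompt wording are illustrative assumptions, not the authors' implementation):

```python
# Hedged sketch: ask an LLM to decompose one top-level requirement into
# verifiable sub-requirements. Prompt and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a systems engineer. Decompose the given top-level requirement "
    "into verifiable sub-requirements, one per line, using 'shall' phrasing."
)

def decompose_requirement(requirement: str) -> list[str]:
    """Return the LLM's sub-requirements as a list of strings."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable LLM would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": requirement},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for sub in decompose_requirement(
        "The system shall detect and geolocate RF emitters within 5 km."
    ):
        print(sub)
```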
Reinforcement learning for autonomous agent actions requires many repetitive trials to succeed. The idea of this paper is to distribute the trials across a city-scale geospatial map. This has the advantage of providing a rationale for the trial-to-trial variance, because each location is slightly different. The technique can simultaneously train the agent and deduce where difficult, and potentially dangerous, intersections exist in the city. The concept is illustrated using readily available open-source tools.
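A minimal sketch of the distributed-trial idea (the environment, reward logic, and intersection list are illustrative assumptions, not the paper's tooling):

```python
# Spread RL trials over city intersections; per-location failure rates then
# flag difficult sites while the agent trains. All values here are toys.
import random
from collections import defaultdict

intersections = [(39.95, -75.16), (39.96, -75.17), (39.97, -75.15)]  # toy lat/lon

def run_trial(agent: dict, location: tuple) -> bool:
    """Placeholder for one episode at one intersection; True = success."""
    difficulty = hash(location) % 100 / 100.0  # stand-in for real geometry
    return random.random() > difficulty * (1.0 - agent["skill"])

agent = {"skill": 0.3}
failures = defaultdict(int)
trials = defaultdict(int)

for episode in range(3000):
    loc = random.choice(intersections)      # distribute trials over the map
    trials[loc] += 1
    if not run_trial(agent, loc):
        failures[loc] += 1
        agent["skill"] = min(1.0, agent["skill"] + 1e-4)  # crude "learning"

# Intersections with high failure rates are candidate "dangerous" sites.
for loc in intersections:
    print(loc, failures[loc] / max(trials[loc], 1))
```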
Various tools are now available to assist the roboticist in developing autonomy algorithms for tasks such as path planning and collision avoidance. Many tools support the integration of live or simulated RGB cameras, LIDAR, radar, and IMU sensors. This paper describes adding an RF sensor. The proposed RF sensor detects and locates radio emitters in the environment for the purpose of collision avoidance. We outline an approach to sharing detection data to help locate emitters and avoid collisions. The protocol is designed to maximize safety, privacy, security, timeliness, and other desirable properties discussed in the paper. Preliminary results illustrate the concepts.
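For illustration, a shared RF-detection message of the kind such a protocol might carry could look like the following; the field names and the toy integrity tag are assumptions, not the paper's message format:

```python
# Sketch of a shareable RF detection record with a timestamp for timeliness,
# a pseudonymous reporter ID for privacy, and a toy integrity tag.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RFDetection:
    emitter_bearing_deg: float   # bearing to the detected emitter
    freq_mhz: float              # center frequency of the emission
    rssi_dbm: float              # received signal strength
    timestamp_s: float           # detection time, for timeliness checks
    reporter_id: str             # pseudonymous ID to preserve privacy

def tag(msg: RFDetection, shared_key: bytes) -> str:
    """Toy integrity tag; a real protocol would use an HMAC or signature."""
    payload = json.dumps(asdict(msg), sort_keys=True).encode()
    return hashlib.sha256(shared_key + payload).hexdigest()

msg = RFDetection(41.5, 2450.0, -62.0, time.time(), "robot-7a")
print(tag(msg, b"demo-key")[:16])
```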
This paper describes our current multi-agent reinforcement learning concepts to complement or replace classic operational planning techniques. A neural planner is used to generate many possible paths. Training the neural planner is a one-time task that uses a physics-based model to create the training data. The outputs of the neural planner are achievable paths. The path intersections are represented as decision waypoint nodes in a graph, and the graph is interpreted as a Markov Decision Process (MDP). Training multi-agent reinforcement learning algorithms on the resulting MDP is much faster than training over non-discretized spaces, because only high-level decision waypoints are considered. The technique is applicable to multiple domains, including the air, space, land, sea, and cyber-physical domains.
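A minimal sketch of turning path intersections into a high-level MDP graph (the path data are toy stand-ins for the neural planner's outputs; rewards and dynamics are omitted):

```python
# Build decision-waypoint states from intersections of planned paths.
paths = [  # each path: a sequence of (x, y) waypoints from the planner
    [(0, 0), (1, 1), (2, 2), (3, 3)],
    [(0, 2), (1, 1), (2, 0)],
    [(2, 2), (2, 0), (3, 0)],
]

# Decision nodes are waypoints shared by more than one path.
counts = {}
for p in paths:
    for wp in set(p):
        counts[wp] = counts.get(wp, 0) + 1
nodes = {wp for wp, c in counts.items() if c > 1}

# Transitions: from each decision node, the available actions lead to the
# next decision node reachable along some planned path.
transitions = {}
for p in paths:
    on_path = [wp for wp in p if wp in nodes]
    for a, b in zip(on_path, on_path[1:]):
        transitions.setdefault(a, set()).add(b)

print(transitions)  # far fewer states than the underlying continuous space
```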
KEYWORDS: Neural networks, Computer simulations, Network architectures, Computer programming, Process modeling, Finite element methods, Systems modeling, Motion models, Complex systems
Recent breakthroughs in deep network processing have shown the ability to compute solutions to physics-based problems, such as the three-body problem, many orders of magnitude faster. In this paper, we show how a deep autoencoder, trained on paths generated using a dynamical physics-based model, can generate comparable routes much faster. The autogenerated routes have all the properties of a physics-based model without the computational burden of explicitly solving the dynamical equations. This result is useful for planning and for multi-agent reinforcement learning simulation. In addition, the fast route-planning capability may prove useful in real-time situations such as collision avoidance or fast dynamic targeting response.
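A minimal sketch of the approach, with an assumed fully connected architecture and random-walk stand-ins for the physics-generated training paths (the paper's network and data are not reproduced here):

```python
# Train an autoencoder on flattened paths; perturbing the latent code then
# yields new plausible routes without solving the dynamical equations.
import torch
import torch.nn as nn

N_WAYPOINTS = 32   # each path: 32 (x, y) waypoints, flattened to 64 values
paths = torch.cumsum(torch.randn(1024, N_WAYPOINTS, 2) * 0.1, dim=1)
paths = paths.reshape(1024, -1)  # stand-in for physics-model-generated paths

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 8),                      # low-dimensional route code
    nn.Linear(8, 32), nn.ReLU(),           # decoder
    nn.Linear(32, 64),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    recon = model(paths)
    loss = nn.functional.mse_loss(recon, paths)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Decode a perturbed latent code to generate a new route quickly.
with torch.no_grad():
    code = model[0:3](paths[:1]) + 0.1 * torch.randn(1, 8)
    new_route = model[3:](code).reshape(N_WAYPOINTS, 2)
```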
Sustainment of fishing stocks can be accomplished by reducing illegal fishing. Enforcement requires timely intelligence. Often the perpetrators escape the enforcement zone to meet the fish buyers at sea, where they conduct illegal transactions. Transshipments at sea enable criminal endeavors of all kinds. This paper addresses detecting fishing-related behaviors from track data, associating RF and satellite imagery to identify the vessels, and using the evidence to build a confident case to support prosecution and deterrence efforts.
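For illustration, sustained vessel-to-vessel proximity in time-aligned tracks is one simple transshipment cue; the thresholds and track format below are assumptions, not the paper's detector:

```python
# Flag index spans where two time-aligned tracks loiter within max_km,
# a simple cue for a possible at-sea rendezvous.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def rendezvous_events(track_a, track_b, max_km=0.5, min_points=6):
    """Return index spans of sustained proximity (not a mere crossing)."""
    close = [haversine_km(p, q) < max_km for p, q in zip(track_a, track_b)]
    events, start = [], None
    for i, c in enumerate(close + [False]):   # sentinel closes a final run
        if c and start is None:
            start = i
        elif not c and start is not None:
            if i - start >= min_points:
                events.append((start, i))
            start = None
    return events
```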
Advances in video compression technology can increase sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability; in the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detecting artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
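A toy version of the template idea: score each frame for horizontal block stripes by checking whether row-edge energy concentrates on the 16-pixel macroblock grid (the stripe period and the synthetic test are illustrative assumptions):

```python
# Score = on-grid row-edge energy relative to off-grid energy; values well
# above 1 suggest block-stripe artifacts aligned with the macroblock grid.
import numpy as np

def stripe_score(frame: np.ndarray, period: int = 16) -> float:
    """Higher score = row-edge energy concentrated on the block grid."""
    # Vertical gradient highlights horizontal discontinuities (stripe edges).
    row_energy = np.abs(np.diff(frame.astype(float), axis=0)).mean(axis=1)
    on_grid = row_energy[period - 1::period].mean()
    off_grid = np.delete(row_energy, np.s_[period - 1::period]).mean()
    return float(on_grid / (off_grid + 1e-9))

frame = np.random.rand(240, 320) * 10
frame[96:, :] += 50   # synthetic discontinuity on a macroblock boundary
print(stripe_score(frame))   # clearly above 1 for the injected stripe
```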
Measured indicators such as resolution, blur, noise, and artifact estimates are used to predict video interpretability. The indicators show the effects of compression, lost packets, and enhancements. The indicators and metadata-derived resolution can also be used to select appropriate algorithms for further enhancement or exploitation.
The Motion Imagery Standards Board (MISB) has previously established a metadata "micro-architecture" for standards-based tracking. The intent of this work is to facilitate both the collaborative development of competent tracking systems and the potentially distributed and dispersed execution of tracker system components in real-world execution environments. The approach standardizes a set of five quasi-sequential modules in image-based tracking. However, in order to make the plug-and-play architecture truly useful, we need metrics associated with each module (so that, for instance, a researcher who "plugs in" a new component can ascertain whether he or she did better or worse with the component). This paper proposes a new, unifying set of metrics based on an information-theoretic approach to tracking, which the MISB is nominating as DoD/IC/NATO standards.
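To make the plug-and-play notion concrete, a sketch of a module interface with a per-module metric hook follows; the class names and the toy recall metric are illustrative assumptions, not the MISB normative definitions:

```python
# Each quasi-sequential module exposes the same interface plus a metric
# hook, so a swapped-in component can be scored in isolation.
from abc import ABC, abstractmethod
from typing import Any

class TrackerModule(ABC):
    @abstractmethod
    def process(self, data: Any) -> Any:
        """Consume upstream output; produce input for the next module."""

    @abstractmethod
    def metric(self, output: Any, truth: Any) -> float:
        """Module-level score against truth (e.g., an information measure)."""

class MotionDetector(TrackerModule):
    def process(self, frames):
        # Placeholder: flag an index wherever consecutive frames differ.
        return [i for i, (a, b) in enumerate(zip(frames, frames[1:])) if a != b]

    def metric(self, output, truth) -> float:
        hits = len(set(output) & set(truth))
        return hits / max(len(truth), 1)   # toy recall, not the MISB metric

def run_pipeline(modules: list[TrackerModule], data: Any) -> Any:
    for module in modules:                 # quasi-sequential execution
        data = module.process(data)
    return data

detections = run_pipeline([MotionDetector()], ["f0", "f0", "f1", "f1", "f2"])
print(detections)   # indices where motion was flagged: [1, 3]
```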
The effect of a low sample frame rate on interpretability is often confounded with its impact on encoding processes. In this study, the confound was avoided by ensuring that none of the low-frame-rate clips had coding artifacts. Under these conditions, the lowered frame rate was not associated with a statistically significant change in interpretability. Airborne, high-definition 720p, 60 FPS video clips were used as source material to produce test clips with varying sample frame rates, playback rates, and degrees of target motion. Frame rates ranged from 7.5 FPS to 60 FPS. Playback rates ranged up to 8X normal speed. Target motion ranged from near zero MPH up to 300 MPH.
KEYWORDS: Video, Video surveillance, Cognitive modeling, Video compression, Signal to noise ratio, Video processing, Quality measurement, Cameras, Motion measurement, Image processing
A processing framework for cognitive modeling to predict video interpretability is discussed. The architecture consists of spatiotemporal video preprocessing, metric computation, metric normalization, pooling of like metric groups with masking adjustments, multinomial logistic pooling of Minkowski-pooled groups of similar quality metrics, and estimation of the confidence interval of the final result.
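For concreteness, the two pooling stages can be written in their standard forms; the exponent and coefficients are fitted model parameters whose values are not given in the abstract:

```latex
% Minkowski pooling of N normalized metrics x_i within a group (assuming
% higher values indicate stronger distortion, larger p weights the worst
% metrics more heavily):
M_p = \left( \frac{1}{N} \sum_{i=1}^{N} x_i^{\,p} \right)^{1/p}

% Multinomial logistic pooling of the K group scores M^{(k)} into
% probabilities over J interpretability levels:
\Pr(\mathrm{level} = j) =
  \frac{\exp\!\big(\beta_{j0} + \sum_{k=1}^{K} \beta_{jk} M^{(k)}\big)}
       {\sum_{l=1}^{J} \exp\!\big(\beta_{l0} + \sum_{k=1}^{K} \beta_{lk} M^{(k)}\big)}
```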
KEYWORDS: Video, Cameras, Computer programming, Video compression, Calibration, Video surveillance, Image quality, Modulation, Signal to noise ratio, Motion measurement
The effect of various video encoders and compression settings is examined using a subjective task-based performance metric, the Video National Imagery Interpretability Rating Scale (Video-NIIRS), and a perceptual quality metric, the Subjective Assessment Methodology of Video Image Quality (SAMVIQ). Subjective results are compared to objective measurements.
KEYWORDS: Video, Scattering, Modulation transfer functions, Signal attenuation, Video surveillance, Situational awareness sensors, Atmospheric particles, Turbulence, Signal to noise ratio, Target detection
The following material is given to address the effect of low slant angle on video interpretability: 1) an equation for the minimum slant angle, as a function of field of view, that limits the change in GSD across the scene to no more than √2; 2) evidence for reduced situational awareness due to errors in perceived depth at low slant angle converting to position errors; 3) an equation for the optimum slant angle and target orientation with respect to maximizing exposed target area; 4) the impact of the increased probability of occlusion as a function of slant angle; and 5) a derivation for the loss of resolution due to atmospheric turbulence and scattering. In addition, modifications to Video-NIIRS for low slant angle are suggested. The recommended modifications for low-slant-angle Video-NIIRS are: 1) to rate at or near the center of the scene; and 2) to include target orientations in the Video-NIIRS criteria.
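As an illustrative flat-earth sketch of the geometry behind item 1 (the paper's actual equation may differ): with grazing angle ψ at scene center, vertical field of view α, and sensor altitude h, the ground-range GSD varies approximately as h/sin²ψ, so the √2 bound across the scene becomes

```latex
% Flat-earth sketch: ground-range GSD ~ h / sin^2(psi), worst at the far edge.
\frac{\mathrm{GSD}_{\mathrm{far}}}{\mathrm{GSD}_{\mathrm{near}}}
  \approx \frac{\sin^2(\psi + \alpha/2)}{\sin^2(\psi - \alpha/2)} \le \sqrt{2}
```

which can be solved numerically for the minimum ψ given α.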
The effect of video compression is examined using the task-based performance metric of the new Video National Imagery Interpretability Rating Scale (Video NIIRS). Video NIIRS is a subjective task-criteria scale similar to the well-known Visible NIIRS used for still-image quality measurement. However, each task in the Video NIIRS includes a dynamic component that requires video of sufficient spatial and temporal resolution. The loss of Video NIIRS due to compression is experimentally measured for select cases. The results show that an increase in compression, and the associated increase in artifacts, reduces task-based interpretability and lowers the Video-NIIRS rating of the video clips. The extent of the effect has implications for system design.
The Video National Imagery Interpretability Rating Standard (V-NIIRS) consists of a ranked set of subjective criteria to assist analysts in assigning an interpretability quality level to a motion imagery clip. The V-NIIRS rating standard is needed to support the tasking, retrieval, and exploitation of motion imagery. A criteria survey was conducted to yield individual pairwise criteria rankings and scores. Statistical analysis shows good agreement with expectations across the 9 levels of interpretability for each of the 7 content domains.
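One standard way to turn such pairwise rankings into interval-scale scores is a Bradley-Terry model; the sketch below is an illustrative assumption, not necessarily the paper's statistical method:

```python
# Fit Bradley-Terry strengths to a pairwise-win matrix via MM updates
# (Hunter, 2004). wins[i, j] = times criterion i ranked above criterion j.
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """Return normalized strength scores; higher = ranked higher overall."""
    n = wins.shape[0]
    s = np.ones(n)
    for _ in range(iters):
        total = wins.sum(axis=1)           # total wins per criterion
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    games = wins[i, j] + wins[j, i]
                    denom[i] += games / (s[i] + s[j])
        s = total / np.maximum(denom, 1e-12)
        s /= s.sum()                       # fix the arbitrary scale
    return s

wins = np.array([[0, 8, 9], [2, 0, 7], [1, 3, 0]])  # toy 3-criteria survey
print(bradley_terry(wins))
```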
A perceptual evaluation compared tracking performance when using color versus panchromatic synthetic imagery at low frame rates. Frame rate was found to have an effect on tracking performance for the panchromatic motion imagery. Color was found to be associated with improved tracking performance at 2 frames per second (FPS), but not at 6 FPS or greater. A self-estimate of task confidence given by the respondents was found to be correlated with the measured tracking performance, which supports the use of task confidence as a proxy for task performance in the future development and validation of a motion imagery rating scale.