1. Office of Naval Research Global (United States)
2. U.S. Army Ground Vehicle Systems Ctr. (United States)
3. U.S. Air Force Civil Engineer Ctr. (United States)
Improvements to mobile robots have recently attracted a great deal of interest because of their usefulness and importance in everyday life. Robotics research has extended to legged walking robots, which use mechanical limbs for locomotion and therefore handle rough and irregular terrain more easily than wheeled or tracked machines. Standard periodic gaits cannot adapt to such obstacles and hazards, and different types of terrain require specific gait styles to achieve better locomotion performance. In this analysis, the challenge of following a predetermined path in a Cartesian environment is addressed through an adaptive walking gait. The Phantom II, a six-legged robot, is modeled and simulated using the MATLAB SimMechanics toolbox to compute the integrated adaptive walking gait and the hexapod robot dynamics. Furthermore, the kinematic model of the Phantom II hexapod is analyzed, covering both forward and inverse kinematics for all legs. The robot's performance factors and kinematic constraints are taken into account. The simulation data demonstrate the suitability of the adaptive walking gait. Reducing energy consumption plays an essential role in the locomotion of multi-legged machines used for maintenance applications, so a detailed dynamic analysis of the power efficiency of the hexapod during navigation across an inclined surface is also provided.
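As a minimal illustration of the per-leg kinematics described above, the sketch below solves the inverse kinematics of a generic 3-DOF (coxa/femur/tibia) hexapod leg in Python; the link lengths, frames, and sign conventions are illustrative assumptions, not the Phantom II parameters or the SimMechanics implementation used in the paper.

```python
import numpy as np

def leg_ik(x, y, z, l_coxa=0.05, l_femur=0.08, l_tibia=0.12):
    """Inverse kinematics for a generic 3-DOF (coxa/femur/tibia) leg.

    The foot target (x, y, z) is expressed in the coxa frame; link
    lengths are placeholders, not Phantom II values. Returns the joint
    angles (gamma, alpha, beta) in radians for one sign convention.
    """
    gamma = np.arctan2(y, x)                       # coxa yaw
    r = np.hypot(x, y) - l_coxa                    # horizontal reach past the coxa
    d = np.hypot(r, z)                             # coxa-to-foot distance
    # Law of cosines on the femur/tibia triangle
    cos_beta = (l_femur**2 + l_tibia**2 - d**2) / (2 * l_femur * l_tibia)
    beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))            # knee angle
    cos_a1 = (l_femur**2 + d**2 - l_tibia**2) / (2 * l_femur * d)
    alpha = np.arctan2(-z, r) - np.arccos(np.clip(cos_a1, -1.0, 1.0))  # hip pitch
    return gamma, alpha, beta
```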
We will describe the high-level design goals of SupplyBot, a high-performance humanoid robot with physical capabilities approaching those of a human. The robot will feature high-range-of-motion joint designs, new compact hydraulic actuators, advanced materials, and advanced manufacturing processes to achieve one of the greatest ranges of motion of any humanoid robot to date, while also maintaining one of the highest power-to-weight ratios.
We will discuss innovative joint designs, robust shell designs, and various subcomponents that reduce weight and improve the strength of the robot. Finally, we will describe algorithms for autonomously crossing a variety of mobility challenges and operator interfaces for remotely operating the robot through complex environments.
Our goal in this work is to expand the theory and practice of robot locomotion by addressing critical challenges associated with the robotic biomimicry of bat aerial locomotion. Bat wings exhibit fast articulation and can mobilize as many as 40 joints within a single wingbeat. Mimicking bat flight is a significant challenge, and current design paradigms have failed because they assume closed-loop feedback roles only through sensors and conventional actuators while ignoring the computational role carried by morphology. In this paper, we propose a design framework called Morphing via Integrated Mechanical Intelligence and Control (MIMIC), which integrates small, low-energy actuators to control the robot through a change in morphology. Using the dynamic model of Northeastern University's Aerobat, which is designed to test the effectiveness of the MIMIC framework, we show that computational structures and closed-loop feedback can be successfully combined to mimic bats' stable flight apparatus.
Biologically inspired robots are an interesting and difficult branch of robotics due to their rich dynamical and morphological complexity. Among the animals that inspire them, flying animals such as bats have been among the most difficult to draw from because they exhibit complex wing articulation. We attempt to capture several of the key degrees of freedom present in the natural flapping gait of a bat. In this work, we present the mechanical design and analysis of our flapping-wing robot, the Aerobat, which captures the plunging and flexion-extension of the bat's flapping modes. The robot uses gears, cranks, and four-bar linkage mechanisms to actuate an arm-wing structure composed of rigid and flexible components monolithically fabricated using PolyJet 3D printing. The resulting robot exhibits wing expansion during the downstroke and retraction during the upstroke, which minimizes negative lift and results in a more efficient flapping gait.
Modern robotic technologies enable the development of semiautonomous ground robots capable of supporting military field operations. Particular attention has been devoted to the robotic mule concept, which aids soldiers in transporting loads over rugged terrain. While existing mule concepts are promising, current configurations are rated for payloads exceeding 1000 lbs., placing them in the size and weight class of small cars and ATVs. These large robots are conspicuous by nature and may not successfully carry out infantry resupply missions in an active combat zone. Conversations with active soldiers, veterans, and military engineers have spotlighted a need for a compact, lightweight, and low-cost robotic mule. This platform would ensure reliable last-mile delivery of critical supplies to predetermined rally points. We present a design for such a compact robotic mule, the μSMET. The proposed design envisions a versatile platform, integrated with the Squad Multipurpose Equipment Transport (SMET), which will ferry supplies to soldiers in combat, evacuate the wounded, and help transport loads on a forced march. The μSMET can adjust its geometry to suit specific payloads and adapt to the terrain, is light enough to be carried by a soldier and sturdy enough to evacuate a soldier, and has adequate off-road mobility to follow an infantry unit. The μSMET’s variable geometry enhances mobility over challenging terrain: its rear wheel assembly can expand to increase its stability or contract to reduce its profile. This publication will describe the design and construction of a prototype μSMET.
This paper presents dynamic modeling and experimental evaluation of the reaction forces and joint torques of a hexapod robot walking in a straight line with a tripod gait. First, the kinematic model of the robot is developed by solving the forward and inverse kinematics of each leg. Based on the tripod walking gait and straight-line motion, the trajectory of each leg is generated for the support and transfer phases. Second, a complete dynamic model is presented to estimate the foot reaction forces and the torque of each joint. The foot-ground interaction is represented with a compliant contact model, and a force distribution method is used to find the required friction forces, which minimizes energy consumption and the possibility of slippage. Then, using MATLAB SimMechanics, straight-line motion with a tripod gait is simulated, and the foot force distribution and joint torques are calculated. Finally, the real-time torque of each joint and the normal force at each leg tip are measured, assuming that at least three legs always remain in contact with the ground (the support phase) while the other three are in the swing phase, which provides two settings for torque and force measurements.
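The abstract models the foot-ground interaction with a compliant contact model; the following sketch shows one common spring-damper form of such a model, with placeholder stiffness and damping values that are assumptions rather than the paper's parameters.

```python
def contact_normal_force(penetration, penetration_rate, k=2.0e4, c=1.5e2):
    """Compliant (spring-damper) foot-ground contact model.

    penetration      -- depth of the foot tip below the ground plane [m]
    penetration_rate -- time derivative of the penetration [m/s]
    k, c             -- placeholder stiffness and damping coefficients

    Returns the normal reaction force; the force is zero when the foot
    is not in contact (penetration <= 0) and never pulls downward.
    """
    if penetration <= 0.0:
        return 0.0
    return max(0.0, k * penetration + c * penetration_rate)
```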
Autonomous Ground Vehicles: Joint Session with Volumes 11748 and 11758
In this paper, we perform an analysis of sequence risk in a portfolio (defined here as the risk of a one-time 20% market loss from an enemy we call Mr. Market) with a Monte Carlo simulation of many different "fixed" lifetime returns. The worst possible case is if you have a market loss right at or near retirement (that is when you have the most money to lose). Interestingly, bonds are a poor hedge against this, unless of course you knew exactly when the loss was going to occur. Bonds can actually increase your probability of running out of money because of their poor return, at least with the assumptions that we have made. Taken in homeopathic amounts, bonds can limit the worst-case scenario, however.
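To make the sequence-risk argument concrete, here is a toy experiment in the spirit of the paper's simulation (all figures and the fixed-return assumption are illustrative, not the paper's inputs): the same one-time 20% loss shortens the portfolio's life far more when it lands at retirement than when it lands late in the plan.

```python
def years_until_depleted(start_balance, annual_return, withdrawal,
                         crash_year=None, crash_loss=0.20, horizon=40):
    """Toy sequence-risk experiment: fixed annual return, fixed withdrawal,
    and an optional one-time market loss ('Mr. Market') in crash_year.
    Returns how many years the portfolio lasts (capped at horizon)."""
    balance = start_balance
    for year in range(horizon):
        if crash_year is not None and year == crash_year:
            balance *= (1.0 - crash_loss)
        balance = balance * (1.0 + annual_return) - withdrawal
        if balance <= 0.0:
            return year + 1
    return horizon

# The same 20% loss hurts far more at retirement (year 0), when the
# balance at risk is largest, than 25 years into the plan.
print(years_until_depleted(1_000_000, 0.06, 70_000, crash_year=0))
print(years_until_depleted(1_000_000, 0.06, 70_000, crash_year=25))
```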
Leidos has completed a two-year Rapid Innovation Fund (RIF) effort with the Army CCDC Ground Vehicle Systems Center (GVSC) entitled "Vision Based Localization" (VBL) to provide long-duration precision navigation for ground vehicles in a GPS-denied environment. The Leidos system, called the Vision Integrated Spatial Estimator (VISE), uses Convolutional Neural Networks (CNNs) to extract position information from monocular camera feeds. VISE runs the Leidos Dynamically Reconfigurable Particle Filter (DRPF) as the engine for sensor fusion, enabling the incorporation of open-source road network information to aid the navigation solution in real time without having to make simplifying assumptions about the measurement likelihood distribution. The VISE system was demonstrated in September 2019 by completing a 4-hour, 160 km GPS-denied drive test in Detroit, MI, achieving a < 20 m median error with a 20 m final error. Details of the results are presented, including video of the particle filtering system and the CNN processing.
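The DRPF itself is not described in detail here, but the general idea of fusing an arbitrary, non-Gaussian measurement likelihood (such as proximity to an open-source road network) in a particle filter can be sketched as follows; the resampling rule and structure are generic assumptions, not the Leidos implementation.

```python
import numpy as np

def pf_update(particles, weights, likelihood_fn):
    """One measurement update of a generic SIR particle filter.

    particles     -- (N, d) array of state hypotheses
    weights       -- (N,) array of current particle weights
    likelihood_fn -- callable mapping particles to measurement likelihoods;
                     it can encode arbitrary, non-Gaussian information such
                     as how close each particle lies to a mapped road.
    """
    weights = weights * likelihood_fn(particles)
    weights /= np.sum(weights)
    # Systematic resampling when the effective sample size collapses
    n_eff = 1.0 / np.sum(weights**2)
    if n_eff < 0.5 * len(weights):
        positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        idx = np.minimum(idx, len(weights) - 1)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```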
In this work, a novel approach to collision-avoidance radar is presented. Leveraging the chipsets currently in development for the automotive industry, with an operational band of 77 to 81 GHz, a new front end has been developed using a distributed frequency-swept antenna. The frequency-swept antenna steers the beam based on the transmit frequency. Multiple sub-array elements are distributed across the bumper of a vehicle, increasing the aperture size of the system for improved beam resolution and thereby better system sensitivity. To this end, system modeling was used to study the tradeoff between system sensitivity (or range) and power, gain, antenna aperture size, and number of sub-array elements. The sub-arrays were designed, optimized, manufactured, and characterized using a conformal, flexible, low-loss, high-frequency Rogers 3003 material. The measured far-field patterns of the developed antenna array demonstrated consistent angular steering characteristics of -4° to 8° over 75 to 85 GHz with minimal reflection. The developed sub-array elements are cascaded and then synchronized using electronically controlled, high-resolution, wideband, low-loss W-band phase shifters. To drive such a large distributed array, we also focused on the development of high-resolution, analog phase shifters with 360° coverage from 77 to 81 GHz. The phase shifter chip was designed using a three-vector method and manufactured in a SiGe foundry run. RF integration of the fabricated chips with the control circuit was also conducted to demonstrate full phase control over the band of interest. The packaged antenna sub-arrays and phase shifters are integrated to form a distributed array; through synchronization, coherent operation of the system can be established, enhancing its angular resolution. The developed antenna array will be integrated with a Frequency Modulated Continuous Wave (FMCW) transceiver for automotive radar applications.
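Two back-of-the-envelope relations underpin the design trade-offs discussed above: the swept bandwidth sets the achievable FMCW range resolution, and the distributed aperture size sets the achievable beamwidth. The sketch below illustrates both; the aperture size is an illustrative assumption, not a system parameter.

```python
import math

C = 3.0e8  # speed of light [m/s]

def range_resolution(bandwidth_hz):
    """FMCW range resolution set by the swept bandwidth: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def angular_resolution_deg(freq_hz, aperture_m):
    """Rule-of-thumb beamwidth ~ lambda / D for a filled aperture,
    illustrating why distributing sub-arrays across the bumper (a larger
    effective aperture) sharpens the beam."""
    return math.degrees((C / freq_hz) / aperture_m)

# 77-81 GHz sweep (4 GHz) and an assumed 0.5 m distributed aperture:
print(range_resolution(4.0e9))             # ~0.0375 m
print(angular_resolution_deg(79e9, 0.5))   # ~0.44 degrees
```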
Artificial Intelligence/Machine Learning and Unmanned Systems: Joint Session with Volumes 11746 and 11758
Next-generation autonomous vehicles will require a level of team coordination that cannot be achieved using traditional Artificial Intelligence (AI) planning algorithms or Machine Learning (ML) algorithms alone. We present a method for controlling teams of military aircraft in air battle applications by using a novel combination of deep neuroevolution with an allocation-based task assignment algorithm. We describe the neuroevolution techniques that enable a deep neural network to evolve an effective policy, including a novel mutation operator that enhances the stability of the evolution process. We also compare this new method to policy gradient Reinforcement Learning (RL) techniques that we have utilized in previous work, and explain why neuroevolution presents several benefits in this particular application domain. The key analytical result is that neuroevolution makes it easier to select long sequences of actions following a consistent pattern, such as a continuous turning maneuver that occurs frequently in air engagements. We additionally describe multiple ways in which this neuroevolution approach can be integrated with allocation algorithms such as the Kuhn-Munkres Hungarian algorithm, and explain why gradient-free methods are particularly amenable to this hybrid approach and open up exciting new algorithmic possibilities. Since neuroevolution requires thousands of training episodes, we also describe an asynchronous parallelization scheme that yields an order-of-magnitude speedup by evaluating multiple individuals from the evolving population simultaneously. Our deep neuroevolution approach outperforms human-programmed AI opponents with a win rate greater than 80% in multi-agent Beyond Visual Range air engagement simulations developed using AFSIM.
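As a small illustration of the allocation step, the Kuhn-Munkres (Hungarian) algorithm is available in SciPy as `linear_sum_assignment`; the score matrix below is invented purely for demonstration and is not taken from the AFSIM experiments.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative only: rows are friendly aircraft, columns are candidate
# tasks/targets, and each entry is a policy-derived score (for example,
# the evolved network's value estimate for that pairing).
scores = np.array([[0.9, 0.2, 0.4],
                   [0.3, 0.8, 0.1],
                   [0.5, 0.6, 0.7]])

# Hungarian assignment maximizing the total score (negate to maximize).
rows, cols = linear_sum_assignment(-scores)
for r, c in zip(rows, cols):
    print(f"aircraft {r} -> task {c} (score {scores[r, c]:.2f})")
```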
Unmanned aerial vehicles (UAVs) are now used in a large number of applications. To accomplish autonomous navigation, UAVs must be equipped with robust and accurate localization systems. Most localization solutions available today rely on global navigation satellite systems (GNSS); however, such systems are known to become unstable as a result of interference. More advanced solutions now use computer vision. While deep learning has become the state of the art in many areas, few attempts have been made to use it for localization. In this paper, we present an entirely new type of approach based on convolutional neural networks (CNNs). The network is trained with a new purpose-built dataset constructed from publicly available aerial imagery. Features extracted with the model are integrated in a particle filter for localization. Initial validation using real-world data indicated that the approach is able to accurately estimate the location of a quadcopter.
Over the past few years, Unmanned Aerial Vehicles (UAVs) have seen important technological progress, spreading their adoption across various types of applications. More recently, researchers have become increasingly interested in the use of multiple UAVs and UAV swarms. In this work, we are interested in the use of vision-based deep learning algorithms for UAV tracking and pursuit. The goal is to use recent deep learning object detection, coupled with a 'Search Area' prediction approach, to detect and track a target UAV from images captured by another UAV. The detected position provides the controls necessary for real-time maneuvering and tracking. The proposed architecture was tested under different simulated conditions. The approach was able to process videos at high frame rates and achieved a mean average precision above 90%. The results show the feasibility of using vision-based deep learning for detecting and tracking UAVs.
The interaction between control algorithms and the environment poses multiple issues for the safe and reliable operation of remote-controlled and autonomous quadcopters in commercial and defense applications. This is particularly true in urban environments, which can pose significant problems for navigation and safety. We are developing a new platform for the development and testing of quadcopter control schemes in urban environments, with emphasis on the intersection of drone and environmental physics, the uncertainty inherent in each, and the control algorithms employed. As our basis, we are using Unreal Engine, which provides flexibility in the physics and controls used, in addition to state-of-the-art visualization, environmental interactions (e.g., collision simulation), and user interface tools. We incorporate the open-source, open-architecture PixHawk PX4 software platform, with the objective of transitioning control algorithms to hardware in the future. Finally, we convert models of actual cities from Mapbox and OpenStreetMap for use in Unreal Engine. We conclude with a demonstration of human-controlled drone flight in a section of Chicago, IL with light, unidirectional winds.
Low-altitude Unmanned Aerial Systems (UASs) provide a highly flexible and capable platform for remote sensing and autonomous control. Many applications would benefit from an additional bird's-eye view, including mapping, environmental reconnaissance, and search and rescue, to name a few. An autonomous (or partially autonomous) drone could assist in several of these scenarios, freeing the operator to focus on higher-level strategic planning. While numerous commercial drones exist on the market, none truly provides a flexible foundation for vision-guided autonomy research. Herein, we propose the design of a physical UAS platform, called VADER (Visually Aware Drone for Environmental Reconnaissance), and an accompanying simulation environment that addresses many of these tasks. In particular, we show how Commercial Off The Shelf (COTS) hardware and open-source software can now be combined to realize powerful end-to-end UAS research solutions. The benefit of unifying these factors is accelerated prototyping and minimal time to migrate and test in the real world. This article outlines VADER, and case studies are presented to demonstrate its capabilities.
In the near future, drones are expected to attract increasing interest and be adopted for an ever-growing number of tasks in different fields. This creates multiple challenges in terms of systems and navigation, requiring significant integration effort, multiple testbeds, deployment results, and novel protocols. One of the main aspects under study by the scientific community is the possibility of using drones as relays in the sky. This raises a series of issues around the channel link and drone behavior in wireless communications, both for sending commands and for data over the bidirectional channel. In the literature, many works are dedicated to the channel in drone environments and to the path conditions of specific scenarios. In this work, we present an analysis of the drone channel using a realistic path loss model for wireless communications, considering geometry parameters, the coverage radius, and the drone height in order to provide adequate connectivity to users.
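The abstract does not name the specific path loss model; a widely used candidate for drone relay analysis is the probabilistic line-of-sight air-to-ground model sketched below, where the environment parameters are placeholder urban values rather than results from the paper.

```python
import math

def air_to_ground_path_loss(h, r, f_hz=2.4e9, a=9.61, b=0.16,
                            eta_los=1.0, eta_nlos=20.0):
    """Probabilistic-LoS air-to-ground path loss, a common UAV channel
    model; a, b, eta_los, eta_nlos are environment-dependent placeholders
    for an urban setting, not values from the paper.

    h -- drone height [m]; r -- ground distance to the user [m].
    Returns the mean path loss in dB.
    """
    d = math.hypot(h, r)                            # slant range
    theta = math.degrees(math.atan2(h, r))          # elevation angle
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta - a)))
    fspl = 20.0 * math.log10(4.0 * math.pi * f_hz * d / 3.0e8)
    return fspl + eta_los * p_los + eta_nlos * (1.0 - p_los)

# Raising the altitude increases the elevation angle (more LoS) but also
# the slant range, so there is an optimum height for a given coverage radius.
print(air_to_ground_path_loss(h=100.0, r=500.0))
```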
The use of Unmanned Aerial Vehicles (UAVs) has attracted prominent attention from researchers, engineers, and investors in multidisciplinary fields such as agriculture, signal coverage, emergency and disaster response, farmland and environmental monitoring, 3D mapping, and so forth. This paper focuses on a two-layer architecture in which a wireless sensor network (WSN) collects data from sensors monitoring a crop, and a drone layer in which a UAV gathers the data stored by the WSN gateway and transmits them to a data center for processing and feature extraction. The architecture is evaluated in terms of the overall data-gathering task and the data storage required at the WSN gateway, considering WSN islands distributed across a crop.
Self Organizing, Collaborative Unmanned ISR Robotic Teams: Joint Session with Volumes 11753 and 11758
A promising application of swarm robotics is environmental monitoring, whereby large numbers of self-organizing agents are deployed concurrently to gather information on a short- or long-term basis. The greater resilience and attritability of a swarm system may be particularly valuable in hazardous or adversarial environments where a proportion of agents are likely to be damaged or destroyed. In such contexts the profitable information gathered by sensor payloads on robots is associated with material risk. Relatively little work has been done to consider how to manage this risk autonomously and effectively at the individual and system level. The field of financial risk management provides pre-existing tools and frameworks to get a head-start on this challenge. The method of Value at Risk (VaR) allows the easy quantification of prospective losses over a defined time period and confidence interval. Here, we consider VaR in a multi-agent context where the environment is intrinsically risky, for example containing damaging radiation sources. In agent-based simulations, individuals calculate VaR in real time and broadcast a self-triggered alert to their neighbors when their VaR limit is breached, helping them to avoid the area. This minimal communication system is effective at decreasing overall swarm exposure to hazards, while permitting agents to make risk-weighted explorations of hazardous areas. We further discuss the opportunity for finance-inspired risk management frameworks to be developed in the multi-agent systems context.
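A minimal sketch of the per-agent VaR check described above, assuming historical (empirical-quantile) VaR over a window of exposure samples; the hazard model, limit, and names are illustrative, not the paper's implementation.

```python
import numpy as np

def value_at_risk(loss_samples, confidence=0.95):
    """Historical Value at Risk: the loss not exceeded with the given
    confidence over the lookback window (positive values are losses)."""
    return float(np.quantile(loss_samples, confidence))

def var_limit_breached(recent_losses, var_limit, confidence=0.95):
    """Agent-side check sketched from the abstract: if the agent's VaR
    over its recent exposure samples exceeds its limit, it should
    broadcast a self-triggered alert to its neighbours."""
    return value_at_risk(recent_losses, confidence) > var_limit

# Example: per-timestep radiation dose samples collected by one agent
doses = np.random.default_rng(0).gamma(shape=2.0, scale=0.5, size=200)
print(var_limit_breached(doses, var_limit=2.5))
```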
In this paper, we present a novel underwater autonomy architecture that combines reasoning about prior history with dynamic selection of the best set of action options for the current environment. Prior history is recorded as a set of indexed episodes, including the features best describing the environment each episode occurred in, the set of actions and their parameters, and the system's performance during the episode. Based on the previous history most related to its current experience, the robot dynamically selects the actions and/or parameters most likely to succeed in the immediate environment; the action space is represented as a dynamic Hierarchical Task Network (HTN). We have implemented and tested the architecture in UWSim on a simulated BlueROV2 with a 4-DOF manipulator in a 3D motion planning domain, where the task goal is to touch a designated underwater object with the arm's end effector. We have shown that after just 20 episodes of learning, the robot converges on a stable global policy that maximizes the success rate of the object-touch task. The architecture is designed to be relatively domain-independent and is applicable to a variety of underwater tasks, such as survey/search, manipulation, and active perception. We are currently extending our implementation to a survey domain.
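A minimal sketch of the indexed episode store and history-based action selection described above; the feature distance, k-nearest retrieval, and scoring rule are simplifying assumptions rather than the paper's full HTN-based mechanism.

```python
import numpy as np

class EpisodeMemory:
    """Minimal sketch of an indexed episode store: each episode keeps the
    environment features it occurred in, the chosen action/parameters, and
    the observed performance score."""

    def __init__(self):
        self.features, self.actions, self.scores = [], [], []

    def record(self, feature_vec, action, score):
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.actions.append(action)
        self.scores.append(score)

    def best_action_for(self, current_features, k=5):
        """Return the highest-scoring action among the k episodes whose
        environment features are closest to the current ones."""
        feats = np.stack(self.features)
        dists = np.linalg.norm(feats - np.asarray(current_features), axis=1)
        nearest = np.argsort(dists)[:k]
        best = max(nearest, key=lambda i: self.scores[i])
        return self.actions[best]
```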
Autonomous navigation (also known as self-driving) has rapidly advanced in the last decade for on-road vehicles. In contrast, off-road vehicles still lag in autonomous navigation capability. Sensing and perception strategies used successfully in on-road driving fail in the off-road environment. This is because on-road environments can often be neatly categorized both semantically and geometrically into regions like driving lane, road shoulder, and passing lane and into objects like stop sign or vehicle. The off-road environment is neither semantically nor geometrically tidy, leading to not only difficulty in developing perception algorithms that can distinguish between drivable and non-drivable regions, but also difficulty in the determination of what constitutes "drivable" for a given vehicle. In this work, the factors affecting traversability are discussed, and an algorithm for assessing the traversability of off-road terrain in real time is developed and presented. The predicted traversability is compared to ground-truth traversability metrics in simulation. Finally, we show how this traversability metric can be automatically calculated by using physics-based simulation with the MSU Autonomous Vehicle Simulator (MAVS). A simulated off-road autonomous navigation task using a real-time implementation of the traversability metric is presented, highlighting the utility of this approach.
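The paper's traversability metric is computed with physics-based simulation in MAVS; as a hedged illustration of the general idea, the toy score below combines local slope and surface roughness against vehicle-dependent limits, which are placeholder values rather than the paper's metric.

```python
def traversability(slope_deg, roughness_m, max_slope_deg=30.0,
                   max_roughness_m=0.25):
    """Toy traversability score in [0, 1] for a terrain patch, combining
    local slope and surface roughness. Thresholds are vehicle-dependent
    placeholders, not the MAVS metric from the paper."""
    slope_term = max(0.0, 1.0 - slope_deg / max_slope_deg)
    rough_term = max(0.0, 1.0 - roughness_m / max_roughness_m)
    return slope_term * rough_term

# A gentle, smooth patch scores high; steep or rubble-strewn patches fall
# toward zero and would be treated as non-drivable.
print(traversability(slope_deg=5.0, roughness_m=0.02))   # ~0.77
print(traversability(slope_deg=28.0, roughness_m=0.2))   # ~0.01
```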
We have designed a game-theory-based framework to compute effective agent asset laydowns and courses of action (COAs) for adversarial scenarios. Our technical approach is based on Stackelberg security game theory, a specialization of game theory for adversarial situations and deterrence. Security game approaches provide a scalable optimization framework for determining geospatial COAs for agents. Specifically, they can exploit intelligence about adversaries by constraining the course-of-action search and eliminating dominated COAs. Several issues arise in applying game theory to a communication-constrained environment. Current minimax payoff models for sensing an adversary or obstacles consider the probability that the defender senses the adversary; with limited acoustic sensing, this term becomes path dependent, affected by building interiors and areas with transmission issues. Next, an extension to account for the loss or degradation of defender assets is required. A candidate solution under consideration is to have an agent update or choose degraded contingency strategies at each communication. We are also evaluating how to provide refined strategies as a function of time when communications are out, and how to account for the effect of uncertainty in our knowledge of agent member loss on updated strategies. We are employing simulators at NRL DC to model multi-agent trajectories, allowing the game theory approach to be tested under varying environmental conditions.
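As a small, hedged illustration of leader-follower reasoning in a security game (restricted to pure strategies and a zero-sum payoff, which is a simplification of the Stackelberg formulation discussed above), the defender evaluates each candidate COA assuming the attacker best-responds, then keeps the COA with the best resulting payoff. The matrices are invented for demonstration.

```python
import numpy as np

# Illustrative payoffs: defender rows are candidate COAs, attacker
# columns are attack options; not values from the paper.
defender_payoff = np.array([[ 3.0, -1.0,  0.5],
                            [ 1.0,  2.0, -2.0],
                            [ 0.0,  1.5,  1.0]])
attacker_payoff = -defender_payoff   # zero-sum for this sketch

def stackelberg_pure(defender_payoff, attacker_payoff):
    """For each defender COA, assume the attacker best-responds, then pick
    the COA with the best resulting defender payoff."""
    best_coa, best_value = None, -np.inf
    for i in range(defender_payoff.shape[0]):
        j = int(np.argmax(attacker_payoff[i]))      # follower's best response
        if defender_payoff[i, j] > best_value:
            best_coa, best_value = i, defender_payoff[i, j]
    return best_coa, best_value

print(stackelberg_pure(defender_payoff, attacker_payoff))   # (2, 0.0)
```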
A great deal of research has recently been performed on mobile robots because of their value in human life. Robots are used in various applications and areas, so maintaining a robot's safety and availability is important: if the robot cannot recover on its own, it will not be able to reach its target. This paper studies the control and path planning of a tracked mobile robot using Dijkstra's algorithm, with the Robot Operating System (ROS) as the software prototyping platform. The robot's basic mission and its control mechanism are explained. The analysis is carried out using three ultrasonic sensors, requiring only a low-effort framework to enable exploration in the robot's route zone. The presented method was verified in a simulated environment, and the results showed successful path planning with obstacle avoidance.
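The paper runs Dijkstra's algorithm within ROS; the self-contained sketch below shows the same shortest-path idea on a 4-connected occupancy grid, independent of the ROS tooling, with an invented map for illustration.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Dijkstra's algorithm on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; returns the shortest path as a
    list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:                       # reconstruct the path
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))   # routes around the obstacles
```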
The use of Unmanned Ground Vehicles (UGVs) in defence applications is increasing, and much research effort is put into the field. Many defence vehicle producers are also developing UGV platforms; however, the autonomy functionality of these systems is still in need of improvement. At the Norwegian Defence Research Establishment, a project for developing an autonomous UGV was started in 2019, using a Milrem THeMIS 4.5 from Milrem Robotics as the base platform. In this paper we describe the modifications made to the vehicle to make it ready for autonomous operations. We have added three cameras and a lidar as vision sensors; for navigation we have added a GNSS receiver, an IMU, and a velocity radar; and all sensors receive a common time stamp from a time server. The vision and navigation sensors are mounted on a common aluminium profile at the front of the vehicle, ensuring that the direction the vision sensors observe is known with as little uncertainty as possible. In addition to the hardware modifications, a control software framework has been developed on top of Milrem's controller. The vehicle is interfaced using ROS2 and is controlled by sending velocity commands for each belt. We have developed a hardware abstraction module that interfaces the vehicle and adds additional safety features, a trajectory tracking module, and a ROS simulation framework. The control framework has been field tested, and results are shown in the paper.
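A minimal sketch of the per-belt velocity command interface mentioned above, using standard skid-steer kinematics; the track width is a placeholder, not the THeMIS 4.5 value, and the actual module also handles the safety features described in the paper.

```python
def twist_to_belt_speeds(v, omega, track_width=2.0):
    """Convert a body twist (forward speed v [m/s], yaw rate omega [rad/s])
    into left/right belt speeds for a skid-steered tracked vehicle.
    track_width is a placeholder, not the THeMIS value."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

# Gentle left turn while driving forward:
print(twist_to_belt_speeds(v=1.0, omega=0.2))   # (0.8, 1.2)
```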
The applications of unmanned aerial systems (UASs) have grown in popularity due to their simplicity and availability. The quality of a UAS's performance usually depends on adding several sensors and controllers that improve accuracy and flight performance; however, this typically increases the overall cost of the system. In this paper, a technique to enhance performance while maintaining UAS affordability is proposed. The technique uses an estimation strategy to extract hidden information from only a few sensors while improving the quality of the resulting signal. Simulation results for this method show strong performance and are compared with another well-known estimation method.
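The abstract does not specify which estimation strategy is used; as one hedged example of extracting a cleaner signal from a single noisy sensor, a scalar Kalman-filter sketch is shown below with placeholder noise covariances. It stands in for, rather than reproduces, the paper's method.

```python
def kalman_1d(measurements, q=1e-3, r=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter, used purely to illustrate estimating a hidden
    state from one noisy sensor; q (process noise) and r (measurement
    noise) are placeholder covariances, not values from the paper."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                      # predict (random-walk state model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95])[-1])
```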
Agricultural land is a strategic government resource, and successful agribusiness depends on good land management. It is therefore important to take many factors into account, including the degree of overgrowth of agricultural land, since overgrown fields lead to a decline in soil fertility. Monitoring of geographical objects is most easily carried out with geographic information systems and unmanned aerial vehicles, which reduces the time spent surveying an area and improves the reliability of the results; in our case, the task is determining the overgrowth of agricultural fields. The developed methodology will be useful both for state agribusiness monitoring and control services and for landowners, which determines the relevance of this research. The work aims to develop a method, based on computer vision and automated processing of images from unmanned aerial vehicles (UAVs), to determine the degree of overgrowth of agricultural fields. As a result, a conceptual model of the methodology was built, along with an information system developed on its basis for thematic image processing. An image of an agricultural plot in the Vologda region (Russia) is used as source material. An error matrix is used to evaluate the division of the image into agricultural field and vegetation areas. The matrix is based on a comparison of a reference decoding result with the result of the proposed method and accounts for errors associated with incorrect classification. The reliability assessment consists of two stages: the creation of a matrix whose dimension is determined by the number of classes, and the calculation of statistical accuracy estimates, in percentage terms, from the results. Using the error matrix and Cohen's kappa index, the reliability of the decoding with the developed methodology was evaluated. Compared to conventional decoding without preprocessing steps and trainable classification, the confidence increased by 11%.
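The reliability assessment rests on an error (confusion) matrix and Cohen's kappa; the sketch below computes kappa from such a matrix, with counts invented purely for illustration rather than taken from the Vologda study.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from an error (confusion) matrix whose rows are
    reference classes and whose columns are classified classes."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    p_expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / total**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# 2x2 example: open field vs. overgrown vegetation (illustrative counts)
print(cohens_kappa([[85, 15],
                    [10, 90]]))   # ~0.75
```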