The intersection between control algorithms and the environment poses multiple issues for the safe and reliable operation of remote-controlled and autonomous quadcopters in commercial and defense applications. This is particularly true in urban environments, which can pose significant problems for navigation and safety. We are developing a new platform for the development and testing of control schemes for quadcopters in urban environments, with emphasis on the interaction of drone and environmental physics, the uncertainty inherent in each, and the control algorithms employed. As our basis, we use Unreal Engine, which provides flexibility in the physics and controls used, in addition to state-of-the-art visualization, environmental interactions (e.g., collision simulation), and user interface tools. We incorporate the open-source, open-architecture Pixhawk PX4 software platform, with the objective of transitioning control algorithms to hardware in the future. Finally, we convert models of actual cities from Mapbox and OpenStreetMap for use in Unreal Engine. We conclude with a demonstration of human-controlled drone flight in a section of Chicago, IL with light, uni-directional winds.
The Army anticipates that future battles will occur in more complex and dynamic environments, requiring the Army to push modernization priorities. For Soldiers to thrive within these challenging operational contexts, they must rapidly adapt to leverage and integrate technology in order to gain and maintain overmatch against near-peer adversaries. Teaming will be especially critical for mission success: Soldier teams will need to be adaptive and fluid in their roles to respond to dynamic mission demands. Technology can be leveraged to enable and enhance teaming within human and human-agent teams. Augmented reality (AR) technology may provide an adaptive solution for information sharing across individuals and teams to promote a common operational picture within future operational environments. Here, we present a small-teams study in which dyads leveraged technological tools that facilitated teaming during a simulated mission planning and rehearsal scenario. Partners worked together to plan a path to extract a high-value target while avoiding obstacles and hostile forces. Dyads completed missions using two technologies, counterbalanced across the study. The first condition reflected current methods for mission planning in the Army: dyads used a table top to plan, rehearse, and execute the simulated mission. In the second condition, dyads used the Microsoft HoloLens to complete the mission in an augmented reality environment. This paper presents findings on how perceived teaming efficacy and performance relate to mission performance and workload under the two technologies.
KEYWORDS: Visualization, Visual analytics, 3D modeling, Virtual reality, Data processing, 3D displays, Data analysis, Scientific visualization, Displays, Human-machine interfaces
Advancements in high-performance computing and the computational sciences have facilitated the generation of an enormous amount of research data by computational scientists - the volume, velocity, and variability of Big 'Research' Data has increased across all disciplines. An immersive and non-immersive analytics platform capable of handling extreme-scale scientific data will enable scientists to visualize unwieldy simulation data in an intuitive manner and will guide the development of sophisticated and targeted analytics to obtain usable information. Our immersive and non-immersive visualization work is an attempt to provide computational scientists with the ability to analyze the extreme-scale data they generate. The main purpose of this paper is to identify the different characteristics of a scientific data analysis process, providing a general outline that scientists can use to select the appropriate visualization systems for their data analytics. In addition, we include details on how the immersive and non-immersive visualization hardware and software are set up. We are confident that the findings in our paper will provide scientists with a streamlined and optimal visual analytics workflow.
Major advancements in computational and sensor hardware have enormously facilitated the generation and collection of research data by scientists - the volume, velocity, and variety of Big 'Research' Data has increased across all disciplines. A visual analytics platform capable of handling extreme-scale data will enable scientists to visualize unwieldy data in an intuitive manner and will guide the development of sophisticated and targeted analytics to obtain usable information. The Reconfigurable Visual Computing Architecture is an attempt to provide scientists with the ability to analyze the extreme-scale data they collect. It requires the research and development of new interdisciplinary technological tools that integrate data, real-time predictive analytics, visualization, and acceleration on heterogeneous computing platforms. The Reconfigurable Visual Computing Architecture will provide scientists with a streamlined visual analytics tool.