Affiliations: (1) Office of Naval Research Global (United States); (2) U.S. Army Ground Vehicle Systems Ctr. (United States); (3) U.S. Air Force Civil Engineer Ctr. (United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 12124, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Small robotic programs of record for the Army have mainly been used for short-duration video reconnaissance and explosive ordnance interrogation. While these missions are critical, the platforms rely solely on teleoperation, which limits their reach and the range of missions they can support. Improvements in technology now allow autonomy to be added to a small Army platform with little loss of current platform functionality, and the capability can be expanded over time to limit risk. One major functional improvement is that environmental mapping information can be generated and provided to the user while the robot's autonomous navigation uses that same mapping information; a three-dimensional map is then developed at a later stage. This paper discusses the integration of the mapping and autonomy capability, from both a hardware and a software perspective, onto the Man Transportable Robot System Increment II (MTRS Inc II) platform, an Army program of record.
Conventional quadrotors have received considerable attention in trajectory design and fault-tolerant control in recent years. The direction of each thrust is perpendicular to the body because of the geometry of the mechanical design. Compared with the conventional quadrotor, a novel quadrotor called the quad-tilt-rotor offers greater freedom in manipulating the thrust vector. The quad-tilt-rotor adds degrees of freedom to the thrust, allowing it to depart from the body-normal thrust direction of the conventional quadrotor and enabling greater agility in control. This paper presents a novel design of a quad-tilt-rotor (quad-cone-rotor) whose thrust can be assigned along the edge of a cone. Besides inheriting the agility merits of the quad-tilt-rotor, the quad-cone-rotor is expected to provide fault-tolerant control under severe dynamic failure (total loss of all thrusts). We simulate the control results in a UAV simulator in MATLAB/Simulink.
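As a concrete illustration of the cone constraint, the sketch below parametrizes each rotor's thrust by a cone half-angle and an azimuth about the body z-axis and accumulates the resulting body-frame force and torque. The symbols (T, beta, alpha, rotor positions) are our own notation for the idea, not necessarily the paper's.

```python
import numpy as np

def cone_thrust(T, beta, alpha):
    """Thrust vector of one rotor constrained to the edge of a cone with
    half-angle beta about the body z-axis; alpha is the azimuth of the tilt.
    Symbol names are ours, not necessarily the paper's."""
    direction = np.array([np.sin(beta) * np.cos(alpha),
                          np.sin(beta) * np.sin(alpha),
                          np.cos(beta)])
    return T * direction

def body_wrench(Ts, betas, alphas, rotor_positions):
    """Total body-frame force and torque from four cone-constrained rotors."""
    F = np.zeros(3)
    M = np.zeros(3)
    for T, b, a, r in zip(Ts, betas, alphas, rotor_positions):
        f = cone_thrust(T, b, a)
        F += f
        M += np.cross(r, f)       # thrust-induced torque (drag torque omitted)
    return F, M
```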
Sensing, Processing, and Safety for Ground Vehicles I: Joint Session with Conferences 12115 and 12124
Game theory approaches, including those of the Stackelberg form, provide leader-follower strategies for agent patrolling. Specifically, we apply the developed approaches to agent patrolling in arbitrary spatial environments. The environment is discretized into the topology of a directed graph, and the patrolling agent follows a randomized patrol path along the graph. The adversarial agent seeks to access certain target nodes in the graph and is assumed to require a certain amount of time to complete the intrusion of a target node. An optimization formulation with structural constraints is used to provide a patroller strategy that maximizes its expected utility. Several issues arise in providing a game-theoretic solution for an environment that affects sensor performance. Existing minimax payoff models consider the probability that the defender senses an adversary; with environmentally limited sensing, this term becomes path dependent, for example in building interiors and areas with transmission issues. Because this sensing limitation was not previously considered, we modify a constraint to account for it.
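To make the path-dependent sensing term concrete, here is a minimal sketch (our own simplified model, not the paper's exact constraint) of the probability that a randomized patrol, modeled as a Markov chain over the graph, detects an intrusion that takes k steps to complete when the per-visit detection probability at the target is degraded by the environment.

```python
import numpy as np

def capture_probability(P, pi0, target, k, q):
    """Probability that a randomized patrol (row-stochastic transition matrix P,
    initial location distribution pi0) detects an intrusion at node `target`
    within its k-step completion time.  q is the environment-dependent
    probability that the patroller actually senses the adversary on a visit to
    the target.  Hypothetical sketch, not the paper's exact formulation."""
    x = np.asarray(pi0, dtype=float)     # P(patroller at node v, not detected yet)
    for _ in range(k):
        x = x @ P                        # patroller moves one step
        x[target] *= (1.0 - q)           # adversary survives a visit undetected
    return 1.0 - x.sum()                 # probability mass lost to detection
```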
Accurate and robust navigation is vital for unmanned vehicles, even when GNSS (global navigation satellite systems, e.g., GPS) is unavailable or unreliable. In this paper we present MONST3R, a system for determining the absolute position and orientation of a UAV (unmanned aerial vehicle) by matching images from an onboard camera to a georeferenced 3D model of the operation area. The pose is estimated by finding the position and orientation for which an image rendered from the 3D model is maximally similar to the image from the vehicle-mounted camera. Experimental results validating the method are presented.
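The render-and-compare idea can be summarized in a few lines. The sketch below assumes hypothetical render_fn and similarity_fn callables standing in for the paper's renderer and image-similarity measure; it simply searches for the 6-DoF pose that maximizes similarity.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_pose(camera_image, render_fn, similarity_fn, pose0):
    """Find the 6-DoF pose (x, y, z, roll, pitch, yaw) whose rendering of the
    georeferenced 3D model is maximally similar to the onboard camera image.
    render_fn and similarity_fn are placeholders for the paper's renderer and
    similarity measure; this loop only sketches the idea."""
    def cost(pose):
        rendered = render_fn(pose)                     # image from the 3D model
        return -similarity_fn(camera_image, rendered)  # maximize similarity
    res = minimize(cost, np.asarray(pose0, float), method="Nelder-Mead")
    return res.x
```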
Sensing, Processing, and Safety for Ground Vehicles II: Joint Session with Conferences 12115 and 12124
Traversal in off-road conditions for Unmanned Ground Vehicles is highly relevant for defence applications, with an increasing amount of research being put into the field. A central part of autonomous traversal is path following, for which many stable controllers currently exist. However, conventional path following controllers often rely on ideal vehicle models whose assumptions about the terrain are no longer valid in off-road conditions. Therefore, research is needed into how conventional controllers are affected by off-road terrain and whether extending the vehicle model with relevant parameters can improve performance. In this paper, a controller based on Active Disturbance Rejection Control and a controller based on the Instantaneous Centre of Rotation are tested against a conventional controller in off-road conditions. Results from simulations illustrate how the conventional controller is affected by variations in the vehicle's rotation centre, while the proposed controllers show improved performance when simulating rough terrain conditions. Real-world experiments were conducted in uneven sandy terrain, where all of the controllers showed decent performance, but the proposed controllers had the lowest cross-track error. Future Unmanned Ground Vehicle operations can improve performance by using the proposed controllers when the vehicle experiences rough terrain where the Instantaneous Centre of Rotation is considerably shifted from its ideal location. On the other hand, the conventional controller should produce decent performance in moderate conditions. Further research is needed to understand what types of real-world conditions make the performance of the conventional controller significantly decrease, thus justifying the use of one of the proposed controllers.
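Since cross-track error is the comparison metric, a minimal sketch of how it can be computed against a piecewise-linear reference path is shown below; this is a generic implementation, not the evaluation code used in the paper.

```python
import numpy as np

def cross_track_error(path_xy, pose_xy):
    """Signed lateral distance from the vehicle position to the closest
    segment of a piecewise-linear path (positive when left of the path)."""
    path_xy = np.asarray(path_xy, float)
    pose_xy = np.asarray(pose_xy, float)
    d_best, e_best = np.inf, 0.0
    for p0, p1 in zip(path_xy[:-1], path_xy[1:]):
        seg = p1 - p0
        t = np.clip(np.dot(pose_xy - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
        closest = p0 + t * seg
        d = np.linalg.norm(pose_xy - closest)
        if d < d_best:
            # sign from the 2D cross product of segment and offset vectors
            e_best = np.sign(np.cross(seg, pose_xy - p0)) * d
            d_best = d
    return e_best
```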
Despite numerous investments in autonomous vehicle technology over the past decade, the ensured safe operation of these systems remains an unresolved issue for both commercial and defense systems due to decision uncertainty. In complex dynamic domains (e.g., intersections or congested terrain), the expected mode of operation for ensured safety of these unmanned systems is still direct human control (whether through direct vehicle input or through teleoperation). This paper presents research toward an autonomous vehicle safety reasoning system that provides a novel approach to temporally addressing scene uncertainty, increasing the safety envelope for commercial and defense systems.
Research in the Vehicular Ad-hoc Network (VANET) domain has seen an enormous rise in interest over the last few years. Car manufacturers are developing new models that are increasingly involved in Internet of Vehicles (IoV) networking, in which vehicles communicate with each other using Vehicle-to-Everything (V2X) communication paradigms to acquire more data about the surrounding environment. The vehicular communications field also attracts researchers developing new solutions to enhance on-board comfort and increase driver and passenger safety. In this work, a new device called the Safety On Board Device (SOBD) is proposed to improve the performance of the on-board system. A dedicated protocol promoting vehicle cooperation has been designed, exploiting the WAVE Short Message Protocol (WSMP) and the WAVE Short Message (WSM) beacon format. The main goal is to help drivers react faster to dangerous situations. The system is tested in a simulated urban environment to demonstrate the effectiveness of the proposal. It helps reduce the overall number of collisions, thereby reducing traffic jams and measured journey time.
Self-Organizing, Collaborative, Unmanned Robotics Teams: Joint Session with Conferences 12119 and 12124
The ability to explore dangerous buildings or hostile landscapes using a swarm of inexpensive mini drones is relevant to many search and rescue or surveillance scenarios encountered by civilian first responders and military personnel. Swarms of mini drones, implementing various path planning algorithms, provide a unique solution in situations where there is risk to human life, where the use of expensive Unmanned Aerial Vehicle technology would be cost-prohibitive, or both. Although inexpensive off-the-shelf drones contain stabilization circuitry and onboard cameras, they suffer from restricted flying time and lack GPS. The limited capability of such drones has curtailed their use by researchers investigating practical search and genetic algorithms, and many researchers rely on simulation rather than testing with actual drones. In this paper, we describe an ad hoc framework for testing swarm algorithms while taking the first step toward implementing swarm intelligence using low-cost, off-the-shelf drones and an inexpensive network router. We first created a public dataset, MINIUAV, including images of Tello and TelloEdu mini-drones taken from our live drone video recordings and photos scraped from various internet resources. Using these images, we then trained a deep-learning-based YOLOv4-Tiny (You Only Look Once) object detector, allowing us to implement a swarm intelligence rule in which drones act collectively based on a swarm alignment rule. Our results show the object detector allows a drone to identify a neighboring drone with greater than 90% accuracy. Finally, the dataset used to train the object detector will be made available on request.
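One way the detector output can feed a collective behaviour is sketched below: the drone estimates a bearing to each detected neighbour from its bounding box and yaws toward the mean bearing. This is a simplified proxy for the alignment behaviour described, with parameter names of our own choosing, not the paper's exact rule.

```python
import numpy as np

def alignment_yaw_command(detections, image_width, hfov_deg, gain=0.5):
    """Turn-rate command that rotates the drone toward the mean bearing of the
    neighbours detected by the object detector.  `detections` is a list of
    (x_min, y_min, x_max, y_max) boxes in pixels.  Illustrative proxy for a
    swarm alignment behaviour, not the paper's exact rule."""
    if not detections:
        return 0.0                                   # no neighbour in view
    bearings = []
    for x_min, _, x_max, _ in detections:
        cx = 0.5 * (x_min + x_max)                   # box centre, pixels
        # map pixel offset from image centre to a bearing angle in degrees
        bearings.append((cx / image_width - 0.5) * hfov_deg)
    return gain * float(np.mean(bearings))           # yaw-rate command, deg/s
```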
Visual homing is a lightweight approach to visual navigation that does not require GPS. It is very attractive for robot platforms with low computational capacity. However, a limitation is that the stored home location must initially be within the field of view of the robot. Motivated by the increasing ubiquity of camera information, we propose to address this line-of-sight limitation by leveraging camera information from other robots and fixed cameras. To home to a location that is not initially within view, a robot must be able to identify, together with another robot, a common visual landmark that can be used as an 'intermediate' home location. We call this intermediate location identification step the "Do you see what I see" (DYSWIS) task. We evaluate three approaches to this problem: SIFT-based, CNN appearance-based, and a semantic approach.
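A minimal sketch of the SIFT-based variant of the DYSWIS step is shown below, using OpenCV feature matching with Lowe's ratio test; the thresholds and matching strategy are illustrative choices, not necessarily those evaluated in the paper.

```python
import cv2

def common_landmark_matches(img_a, img_b, ratio=0.75):
    """Find SIFT feature matches between the views of two robots; a
    sufficiently matched region can serve as the intermediate home location.
    img_a, img_b: 8-bit grayscale images.  Thresholds are illustrative."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = []
    for pair in knn:
        # Lowe's ratio test keeps only distinctive matches
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```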
Thanks to the maturity of the field, Unmanned Aerial Vehicles (UAVs) have a wide range of applications. Recently, we have witnessed an increase in the usage of multiple UAVs and UAV swarms due to their ability to achieve more complex tasks. Our goal is to use deep learning methods for object detection in order to detect and track a target drone in an image captured by another drone. In this work, we review four popular object detection categories: two-stage (anchor-based) methods, one-stage (anchor-based) methods, anchor-free methods, and Transformer-based methods. We compare these methods' performance (COCO benchmark) and detection speed (FPS) for the task of real-time monocular 2D object detection between dual drones. We created a new dataset using footage from different scenes such as cities, villages, forests, highways, and factories. In our dataset, drone target bounding boxes are present at multiple scales. Our experiments show that the anchor-free and Transformer-based methods have the best performance. In terms of detection speed, the one-stage methods obtain the best results, followed by the anchor-free methods.
Remotely operated and autonomous platforms with image and video sensors are being applied to new applications every day, e.g., wildland fire monitoring and search and rescue operations. Further, edge computing devices provide significant onboard computational capability, supporting an increasingly complex range of on-board autonomy and analytics. Combined with real-world wireless network limitations, this creates increasing interest in compressed video and image products. To reduce data volumes, a viable approach is to send a task description or query to the robotic platform; the platform can then publish back to the operator a highly compressed image product designed to answer the specific query. We develop a framework for evaluating compressed image products by focusing on their ability to support specific tasks or queries. We develop a task model based on Item Response Theory, implement it as a multi-level Bayesian model, and evaluate the utility of this model with an object classification task. We demonstrate the approach by comparing two different image compression methods using inexperienced users recruited with the Amazon Mechanical Turk (AMT) platform. The result is a potential reduction in file size from gigabytes to less than a few megabytes without loss in task performance.
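For readers unfamiliar with Item Response Theory, the sketch below writes out the log-likelihood of a standard two-parameter-logistic (2PL) IRT model; it is an illustrative stand-in for the multi-level Bayesian model in the paper, with ability, discrimination, and difficulty parameters named by us.

```python
import numpy as np

def irt_2pl_loglik(theta, a, b, responses):
    """Log-likelihood of a two-parameter-logistic IRT model: subject i answers
    item j correctly with probability sigmoid(a_j * (theta_i - b_j)).
    theta: abilities (n_subjects,), a: discriminations (n_items,),
    b: difficulties (n_items,), responses: 0/1 array (n_subjects, n_items).
    Illustrative stand-in for the paper's multi-level Bayesian model."""
    logits = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12                                  # numerical safety for log()
    return float(np.sum(responses * np.log(p + eps)
                        + (1 - responses) * np.log(1 - p + eps)))
```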
In recent years, the combination of unmanned aerial vehicles (UAVs) and free-space optical (FSO) communication has gained researchers' attention, as this combination is viewed as a potential candidate for future high-capacity front-haul communication links. However, one of the main impairments affecting ground-to-UAV FSO links is signal outage or fading induced by clouds and rain. This manuscript focuses on the BER performance of ground-to-UAV FSO communication links using MODTRAN atmospheric data. The transmittance data gathered from the MODTRAN software for various cloud types, rain rates, and altitudes were used to evaluate the BER performance of the FSO system under weak atmospheric turbulence. A comparative BER performance of a 2 Gbps FSO link was analyzed for three cloud models (Cumulus, Stratus, and Altostratus) and four rain rates (no rain, drizzle, moderate, and heavy rain) at multiple zenith angles (ranging from 0 to 60°) and for a maximum FSO uplink range of 2 km. The wavelengths of interest used for this study were 850 nm, 1064 nm, and 1550 nm. It is concluded that inclement weather conditions may severely degrade the FSO link at ranges less than 1 km. Also, the 1550 nm wavelength outperformed the other two wavelengths for every chosen model.
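The sketch below shows one common way such a BER evaluation can be assembled: a MODTRAN-style transmittance scales the received power, weak turbulence is modeled as unit-mean log-normal fading, and an OOK/IM-DD Q-function BER is averaged over fading samples. The link-budget simplifications and parameter names are ours; the paper's exact channel model may differ.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def ook_ber(tx_power_w, transmittance, responsivity, noise_std,
            rytov_var, n_samples=100_000, seed=0):
    """Average BER of an OOK/IM-DD FSO link whose received power is scaled by a
    MODTRAN-style transmittance and by unit-mean log-normal fading (weak
    turbulence, Rytov variance rytov_var).  Other link-budget terms are folded
    into tx_power_w.  Formula choices are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    sigma2 = rytov_var
    h = rng.lognormal(mean=-sigma2 / 2.0, sigma=np.sqrt(sigma2), size=n_samples)
    p_rx = tx_power_w * transmittance * h              # received optical power
    # OOK with decision threshold midway between the two signal levels
    snr_arg = responsivity * p_rx / (2.0 * noise_std)
    return float(np.mean(Q(snr_arg)))                  # BER averaged over fading
```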
Previous research has shown that many luminance normalization mechanisms are engaged when viewing scenes with high-dynamic-range (HDR) luminance. In one such phenomenon, areas of similar luminance contextually facilitate the perception of ambiguous textures. Taking inspiration from biological circuitry, we developed a recurrent spiking neural network (SNN) that reproduces experimental results of contextual facilitation in HDR images. The network uses correlations between luminance and texture to correctly classify and segment ambiguous textures in images. While many deep neural networks can successfully perform many types of image analysis, they have limited ability to process images under naturalistic HDR illumination, requiring millions of neurons and power-hungry GPUs. It is an open question whether a recurrent spiking neural network can minimize the number of neurons required to perform HDR image segmentation based on texture. To that end, we designed a biologically inspired proof-of-concept recurrent SNN that can perform such a task. The network is implemented using leaky integrate-and-fire neurons with current-based (CuBa) synapses. We use the Nengo Loihi API to simulate the network so it can be run on Intel's Loihi neuromorphic hardware. The network uses a highly recurrent structure both to group image elements based on luminance and texture and to seamlessly combine these modalities to correctly segment ambiguous textures. Furthermore, we can continuously modulate how much luminance or texture contributes to the segmentation. We surmise that further development of this network will improve the resilience of optical flow computations in environments with complex naturalistic illumination.
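For reference, the neuron and synapse model mentioned (leaky integrate-and-fire with a current-based exponential synapse) can be simulated directly in a few lines; the sketch below uses generic time constants of our own choosing rather than the paper's network or the Nengo Loihi API.

```python
import numpy as np

def lif_cuba(spikes_in, w, dt=1e-3, tau_syn=5e-3, tau_m=20e-3,
             v_th=1.0, v_reset=0.0):
    """Single leaky integrate-and-fire neuron driven through a current-based
    (CuBa) exponential synapse.  spikes_in: 0/1 array (n_steps, n_inputs),
    w: input weights (n_inputs,).  Parameters are generic defaults, not the
    paper's."""
    n_steps, _ = spikes_in.shape
    i_syn, v = 0.0, 0.0
    spikes_out = np.zeros(n_steps)
    for t in range(n_steps):
        # synaptic current decays exponentially and jumps on input spikes
        i_syn += dt * (-i_syn / tau_syn) + spikes_in[t] @ w
        # membrane potential leaks and integrates the synaptic current
        v += dt * (-v / tau_m + i_syn)
        if v >= v_th:                      # threshold crossing -> output spike
            spikes_out[t] = 1.0
            v = v_reset
    return spikes_out
```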
AI/ML and Unmanned Systems: Joint Session with Conferences 12113 and 12124
Deep Learning (DL) requires a massive labeled dataset for supervised semantic segmentation. Obtaining massive labeled data for a new setting (the target domain) requires huge efforts in time and resources. One possible solution is domain adaptation (DA), in which the data distribution of existing annotated public data (the source domain) is transformed to resemble the target domain and a model is developed on the transformed data. Nevertheless, this poses the questions of which source domain(s) to use and what types of transformation to perform on them. In this work, we address these questions by benchmarking different data transformation approaches in source-only and single-source domain adaptation setups. We provide a new, well-suited dataset collected with the Husarion ROSbot 2.0 unmanned ground vehicle to analyze and demonstrate the relative performance of different DA approaches.
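As an example of the kind of source-to-target data transformation being benchmarked, the sketch below matches per-channel image statistics of a source image to statistics measured on the target domain; this simple colour-statistics transform is an illustrative stand-in, not necessarily one of the approaches evaluated in the paper.

```python
import numpy as np

def match_channel_stats(src_img, tgt_mean, tgt_std):
    """Shift each colour channel of a source-domain image so that its mean and
    standard deviation match statistics measured on the target domain.
    src_img: HxWx3 uint8; tgt_mean, tgt_std: per-channel target statistics.
    Illustrative transformation, not the paper's specific method."""
    src = src_img.astype(np.float32)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        mu, sd = src[..., c].mean(), src[..., c].std() + 1e-6
        out[..., c] = (src[..., c] - mu) / sd * tgt_std[c] + tgt_mean[c]
    return np.clip(out, 0, 255).astype(np.uint8)
```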
Humans of any age can recognize almost any kind of object easily and quickly just by observing it. Recognizing an object has been a challenge for machines since the introduction, more than 70 years ago, of a computational model that mimics how the brain's neurons work. The invention of artificial neural networks, and especially of their derivative, deep learning, has improved the object-recognition performance of intelligent machines. However, the training scheme that is part of neural-network-based learning requires additional effort, such as providing a huge amount of data along with annotations, which in turn demands high-performance computing equipment. Faced with the need for an object recognizer that requires only a small amount of information and light computation, and that can be deployed quickly without the hassle of any training, we propose a fast object recognizer inspired by human cognitive computation, called the Knowledge Growing System (KGS), which is a model of Cognitive Artificial Intelligence. Using the Iris dataset, which has served as proven test data for years, we show that KGS can obtain an average accuracy of 85.93% by observing only a small amount of information: 15 data points out of 150, or 10%. Based on this result, we plan to extend KGS to recognize more complex objects such as airplanes and unmanned vehicles.
With advances in unmanned and autonomous vehicles, camera-based navigation is increasingly being used. A new low-cost navigation solution based on monochrome polarization-filter-array cameras is presented. For this purpose, we have developed our own acquisition pipeline and an image processing algorithm to find the relative heading of an Unmanned Ground Vehicle (UGV) from skylight polarization. The precision of the method has been quantified using a rotary stage. The system has then been mounted on the UGV, and the estimated heading is compared to a reference given by a GPS/Inertial Navigation System.
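The first stage of such a pipeline is typically recovering the angle and degree of linear polarization from the four micro-polarizer orientations of the sensor; the standard Stokes-parameter relations are sketched below. Deriving the heading from the resulting sky AoP pattern is the paper's contribution and is not reproduced here.

```python
import numpy as np

def aop_dolp(i0, i45, i90, i135):
    """Angle and degree of linear polarization from the four sub-images of a
    monochrome polarization-filter-array camera (0/45/90/135 deg
    micro-polarizers), using the standard Stokes relations."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    aop = 0.5 * np.arctan2(s2, s1)                # angle of polarization, rad
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    return aop, dolp
```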
In this paper, the Smooth Variable Structure Filter (SVSF) is used to extract the position and speed of a vehicle on a complex road. The filter is implemented on a field-programmable gate array (FPGA), and the FPGA's resource usage and speed are examined at an optimal configuration. A Xilinx Z-board is used in this work. The performance is examined and presented.
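For orientation, a one-dimensional, Habibi-style SVSF correction step is sketched below: the gain combines the a-priori and previous a-posteriori measurement errors through a saturated switching term with a boundary-layer width. This is a simplified illustration of the filter family, with constants chosen by us, not the formulation or fixed-point design mapped onto the FPGA in the paper.

```python
import numpy as np

def svsf_step(x_pred, z, e_prev, gamma=0.9, psi=0.5):
    """One scalar SVSF correction.  x_pred: a-priori state estimate,
    z: measurement, e_prev: previous a-posteriori measurement error,
    gamma: memory/convergence factor, psi: smoothing boundary-layer width.
    Simplified 1-D sketch, not the paper's FPGA implementation."""
    e_pred = z - x_pred                                    # a-priori error
    k = (abs(e_pred) + gamma * abs(e_prev)) * np.clip(e_pred / psi, -1.0, 1.0)
    x_post = x_pred + k                                    # corrected estimate
    e_post = z - x_post                                    # a-posteriori error
    return x_post, e_post
```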
The advancement of modern control systems places increasingly high demands on the capability of systems to make decisions and form control strategies in an adaptive and efficient way. In many applications, the decision time and the performance index of control are determined by stochastic processes. In this paper, we develop a family of new limit theorems on the joint convergence of partial sums of independent random vectors and the associated random indexes under general assumptions. We demonstrate that the random index and the partial sum are asymptotically independent under a proper normalization, with the partial sum converging in distribution to a normally distributed random variable. Moreover, we obtain limit theorems for functions of partial sums, random indexes, and parameters, which include central limit theorems as special cases. We also extend the results to Lévy processes. An illustrative example is given on an integration system, which is a building block of control and decision systems.
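As a point of reference, a classical special case of this type of random-index limit theorem is Anscombe's theorem, stated below in our notation; the paper's results generalize well beyond this setting.

```latex
% Anscombe's theorem: X_1, X_2, \dots i.i.d. with mean \mu and variance
% \sigma^2 < \infty, S_n = \sum_{i=1}^{n} X_i, and N_n positive integer-valued
% random indexes with N_n / n \to \theta > 0 in probability.  Then
\[
  \frac{S_{N_n} - N_n \mu}{\sigma \sqrt{N_n}}
  \;\xrightarrow{\;d\;}\; \mathcal{N}(0,1),
  \qquad n \to \infty .
\]
```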
An indispensable function of intelligent systems is to perform logical reasoning in the presence of uncertainty. In this paper, we establish a mathematical framework, called statemental credibility logic (SCL), for inference under uncertainty. The proposed SCL consists of a statemental algebra and a truth calculus. The statemental algebra deals with operations on statements, which can represent deterministic, vague, and random events and their mixtures. The truth calculus addresses the evaluation and inference of the truth values of dichotomous, fuzzy, and probabilistic statements and their combinations. We generalize classical Bayesian networks and develop robust inference methods that have the potential to form more capable and reliable inference engines for intelligent systems.
The analysis of uncertain dynamic discrete-event systems is generally intractable by deterministic numeric methods. In this paper, we propose an adaptive Monte Carlo test method for analyzing such systems. In contrast to conventional methods, which estimate the probability that a system fails to satisfy prespecified requirements, our goal is to determine whether the probability that the system violates the requirements exceeds a prespecified threshold. To accomplish this goal, we exploit a testing method based on the sequential probability ratio test (SPRT) invented by Wald. We demonstrate that such a method can substantially reduce computational complexity compared to conventional methods. To make the test method rigorous, we develop exact methods for computing the probability of making wrong decisions and the average number of simulation runs. The proposed method can be applied to investigate the stability of a control system with parametric uncertainty.
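A textbook Bernoulli SPRT, shown below, conveys the basic mechanism: simulation runs are drawn sequentially and a log-likelihood ratio is compared against Wald's thresholds until a hypothesis is accepted. The hypotheses, error rates, and stopping rule here are generic; the paper's exact (and exactly analyzed) procedure may differ.

```python
import numpy as np

def sprt_bernoulli(failure_sampler, p0, p1, alpha=0.05, beta=0.05,
                   max_runs=1_000_000):
    """Wald's sequential probability ratio test for deciding between
    H0: p <= p0 and H1: p >= p1, where each call to failure_sampler() returns
    1 if a simulation run violates the requirement and 0 otherwise.  Generic
    textbook SPRT with Wald's threshold approximations."""
    a = np.log(beta / (1.0 - alpha))       # accept-H0 threshold
    b = np.log((1.0 - beta) / alpha)       # accept-H1 threshold
    llr, n = 0.0, 0
    while n < max_runs:
        x = failure_sampler()
        n += 1
        # log-likelihood ratio increment for one Bernoulli observation
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1.0 - p1) / (1.0 - p0))
        if llr <= a:
            return "accept H0 (p <= p0)", n
        if llr >= b:
            return "accept H1 (p >= p1)", n
    return "undecided", n
```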
The article proposes an algorithm for forming a combined image obtained by scanning the cutting table's working field and detecting the contours of objects. Creating a single image in an automated mode allows objects to be analyzed and their shapes to be initially formed in vector or raster form. The process of creating a merged image includes the following operations: preliminary processing, to reduce the influence of the noise component associated with lens contamination and interference on the sensor matrix; simplification of the images and selection of areas with a large number of local features; search for stable anchor points; formation of image transformation matrices; and combining the data.
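One concrete way to realize the anchor-point and transformation-matrix steps is sketched below using ORB features and a RANSAC homography in OpenCV; the detector, estimator, and the naive pixel-wise combination are illustrative choices, not necessarily those used by the authors.

```python
import cv2
import numpy as np

def merge_pair(img_ref, img_new, min_matches=12):
    """Detect stable anchor points, estimate the transform between two
    grayscale scans of the working field, and warp the new image into the
    reference frame.  img_ref, img_new: 8-bit grayscale.  Illustrative sketch."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough stable anchor points")
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # transformation matrix
    h, w = img_ref.shape[:2]
    warped = cv2.warpPerspective(img_new, H, (w, h))
    return np.maximum(img_ref, warped)                     # naive combination
```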
The paper proposes a solution to the problem of interpolating the step function of displacement obtained when motion is captured by simple object-analysis systems. The analysis of the motion curve is carried out taking into account the transformation of the data into Cartesian coordinate systems and the processing of 2D signals. A multicriteria objective function is used as the interpolation method. The approach is based on minimizing a functional simultaneously according to three criteria. The first criterion is the mean square of the discrepancy between the input values and those obtained by the minimization; it sets the degree of approximation to the input data. The second criterion is the root-mean-square spread of neighboring elements of the obtained values; it minimizes the scatter of the data and sets the smoothness of the function. The third criterion is the root-mean-square functional between adjacent elements of the second group; it increases the degree of smoothness of the function and the rate of convergence. The objective function is tuned using weighting factors. The paper provides recommendations on choosing these values, and diagrams show the rationale for this choice. Graphs showing the relationship between the rate of convergence of the results, the degree of smoothness of the function, and the selected parameters of the method are presented. Graphs of the speed at which the tool travels to the working point and calculations of the path lengths are given. Examples of plotting the curves obtained by machine-vision systems mounted on robotic portal complexes are presented for test datasets. The data were acquired in the visible range at a resolution of 1280x1024 pixels and are presented in grayscale.
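Read as a regularized least-squares problem, the three criteria correspond to a data-fidelity term plus penalties on first and second differences; a minimal closed-form sketch under that reading is given below. The weights lam1 and lam2 stand in for the paper's weighting factors, and the direct solve is only one possible minimization scheme, not necessarily the authors'.

```python
import numpy as np

def smooth_interpolate(y, lam1=1.0, lam2=10.0):
    """Minimize ||x - y||^2 + lam1*||D1 x||^2 + lam2*||D2 x||^2, i.e. fidelity
    to the measured step function y plus penalties on first and second
    differences (scatter and smoothness).  Closed-form least-squares sketch."""
    y = np.asarray(y, float)
    n = len(y)
    d1 = np.diff(np.eye(n), 1, axis=0)     # first-difference operator
    d2 = np.diff(np.eye(n), 2, axis=0)     # second-difference operator
    A = np.eye(n) + lam1 * d1.T @ d1 + lam2 * d2.T @ d2
    return np.linalg.solve(A, y)
```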