Misty Blowers,1 Dan Popa,2 Muthu B. J. Wijesundara3
1Air Force Research Lab. (United States) 2The Univ. of Texas at Arlington (United States) 3The Univ. of Texas at Arlington Research Institute (United States)
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949401 (2015) https://doi.org/10.1117/12.2199012
This PDF file contains the front matter associated with SPIE Proceedings Volume 9494, including the Title Page, Copyright information, Table of Contents, Authors, and Conference Committee listing.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949403 (2015) https://doi.org/10.1117/12.2177415
Robotic skins with multi-modal sensors are necessary to facilitate better human-robot interaction in unstructured environments. Integrating various sensors, especially onto substrates with non-uniform topographies, is challenging with standard semiconductor fabrication techniques. Printing is seen as a highly promising technology for sensor fabrication and integration, as it may allow different sensors to be printed directly onto the same substrate regardless of topology. In this work, we investigate electro-hydrodynamic (EHD) printing, a method that can print micron-sized features with a wide range of materials, for fabricating pressure sensor arrays using poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS). The sensors were fabricated by pre-patterning gold- or platinum-metallized interdigitated comb electrode arrays on a polyimide substrate and printing three custom-made PEDOT:PSS-based inks directly onto the electrode arrays. The three inks are formulations of PEDOT:PSS and NMP; PEDOT:PSS, PVP, and NMP; and PEDOT:PSS, PVP, Nafion, and NMP. All three inks were successfully printed onto sensor elements. Initial bending-induced strain tests on the fabricated sensors show that all the inks are sensitive to strain, confirming their suitability for pressure and strain sensor applications; however, the behavior of each ink (sensitivity, linearity, and stability) is unique to its formulation.
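Since the abstract characterizes the inks by their strain response, the standard piezoresistive figures of merit make a compact illustration. A minimal sketch in Python, with hypothetical substrate and resistance values (the paper's own numbers are not reproduced here):

```python
# Illustrative only: standard piezoresistive figures of merit, not values
# from the paper. All numbers below are hypothetical.

def bending_strain(thickness_m: float, bend_radius_m: float) -> float:
    """Surface strain of a thin substrate bent to a given radius: eps ~ t / (2r)."""
    return thickness_m / (2.0 * bend_radius_m)

def gauge_factor(r0_ohm: float, r_ohm: float, strain: float) -> float:
    """Gauge factor GF = (dR/R0) / strain."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

eps = bending_strain(thickness_m=125e-6, bend_radius_m=20e-3)  # 125 um polyimide
gf = gauge_factor(r0_ohm=1.00e4, r_ohm=1.02e4, strain=eps)     # 2% resistance rise
print(f"strain = {eps:.4%}, GF = {gf:.1f}")
```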
Matt Saari, Bryan Cox, Matt Galla, Paul S. Krueger, Edmond Richer, Adam L. Cohen
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949404 (2015) https://doi.org/10.1117/12.2179507
Fabricating a robotic component comprising hundreds of distributed, connected sensors can be very difficult with current approaches. To address these challenges, we are developing a novel additive manufacturing technology to enable the integrated fabrication of robotic structural elements with distributed, interconnected sensors and actuators. The focus is on resistive and capacitive sensors and electromagnetic actuators, though others are envisioned. Anticipated applications beyond robotics include advanced prosthetics, wearable electronics, and defense electronics. This paper presents preliminary results for printing polymers and conductive material simultaneously to form small sensor arrays, and approaches to optimizing sensor performance are discussed.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949405 (2015) https://doi.org/10.1117/12.2183259
This paper presents the first microscale micro-force-sensing mobile microrobot. The design consists of a planar, vision-based micro force sensor end-effector, while the microrobot body is made from photoresist mixed with nickel particles and is driven by an external magnetic field. With a known stiffness, the manipulation forces can be determined by observing the deformation of the end-effector through a camera attached to an optical microscope. After analyzing and calibrating the stiffness of a micromachined prototype, proof-of-concept tests were conducted to verify that the microrobot prototype possesses both mobility and in-situ force-sensing capabilities. This microscale micro-Force Sensing Mobile Microrobot (μFSMM) is able to translate at speeds up to 10 mm/s in a fluid environment. The calibrated stiffness of the micro force sensor end-effector of the μFSMM is on the order of 10⁻² N/m, and the force-sensing resolution with the current vision system is approximately 100 nN.
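The reported numbers fit together through Hooke's law, which is all the vision-based readout needs once the end-effector stiffness is calibrated. A minimal sketch, assuming a hypothetical pixel scale for the microscope optics:

```python
# Minimal sketch of vision-based force readout: force follows from the
# calibrated end-effector stiffness and the deflection observed in the
# microscope image. The pixel scale is a hypothetical value; the stiffness
# is the order of magnitude quoted in the abstract.

PIXEL_SCALE_M = 0.5e-6    # meters per pixel (hypothetical optics)
STIFFNESS_N_PER_M = 1e-2  # order of magnitude quoted for the end-effector

def force_from_deflection(deflection_px: float) -> float:
    """Hooke's law: F = k * x, with x recovered from the image."""
    deflection_m = deflection_px * PIXEL_SCALE_M
    return STIFFNESS_N_PER_M * deflection_m

# A 20-pixel deflection -> 10 um -> 100 nN, matching the reported resolution.
print(f"F = {force_from_deflection(20.0) * 1e9:.0f} nN")
```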
Vincent Trenchant, Micky Rakotondrabe, Yassine Haddab
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949407 (2015) https://doi.org/10.1117/12.2185682
This paper addresses the estimation of force in a two-degrees-of-freedom (2-DoF) piezoelectric actuator devoted to microrobotic manipulation tasks. Due to the limited space and the small size of the actuator, external sensors cannot be used to measure both the displacement and the force exerted during the tasks. This study therefore proposes observer techniques that bypass the use of force sensors. Based on the unknown input observer (UIO) technique, the force along the two directions (y and z axes) of the actuator can be estimated precisely and with convenient dynamics. In addition to the force, the state vector of the actuator is also estimated. Experimental tests are carried out and demonstrate the effectiveness of the method.
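The UIO idea can be illustrated by the common state-augmentation construction: treat the unknown force as an extra state with negligibly slow dynamics and design an ordinary observer for the augmented system. A sketch for one axis, with an illustrative second-order actuator model (not the paper's identified dynamics):

```python
import numpy as np
from scipy.signal import place_poles

# One axis of a second-order (mass-spring-damper) actuator model; the
# unknown force d enters as an input and is estimated by augmenting the
# state under the assumption d_dot ~ 0. Parameters are illustrative.
m, c, k = 1e-3, 0.05, 50.0   # hypothetical mass, damping, stiffness
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
Bd = np.array([[0.0], [1.0 / m]])          # unknown force input channel
C = np.array([[1.0, 0.0]])                 # only displacement is measured

# Augmented dynamics: xa = [position, velocity, force].
Aa = np.block([[A, Bd], [np.zeros((1, 3))]])
Ca = np.hstack([C, np.zeros((1, 1))])

# Observer gain by pole placement on the dual system.
L = place_poles(Aa.T, Ca.T, [-200.0, -220.0, -240.0]).gain_matrix.T

def observer_step(xa_hat, y, dt):
    """One Euler step of xa_hat' = Aa xa_hat + L (y - Ca xa_hat)."""
    return xa_hat + dt * (Aa @ xa_hat + L @ (y - Ca @ xa_hat))
```

The third component of the estimated state converges to the applied force, so no force sensor is needed.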
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949408 (2015) https://doi.org/10.1117/12.2176596
As robotic ground systems advance in capabilities and begin to fulfill new roles in both civilian and military life, slow operational speed has become a hindrance to the widespread adoption of these systems. For example, military convoys are reluctant to employ autonomous vehicles when these systems slow their movement from 60 miles per hour down to 40. However, these autonomous systems must operate at these lower speeds due to the limitations of the sensors they employ. Robotic Research, with its extensive experience in ground autonomy and the problems associated with it, in conjunction with the CERDEC Night Vision and Electronic Sensors Directorate (NVESD), has performed a study to specify system and detection requirements, determine how current autonomy sensors perform in various scenarios, and analyze how sensors should be employed to increase the operational speeds of ground vehicles. The sensors evaluated in this study represent the state of the art in LADAR/LIDAR, radar, electro-optical, and infrared sensing, and were analyzed at high speeds to study their effectiveness in detecting and accounting for obstacles and other perception challenges. By creating a common set of testing benchmarks and by testing in a wide range of real-world conditions, Robotic Research has evaluated where sensors can be successfully employed today, where sensors fall short, and which technologies should be examined and developed further. This study is the first step toward the overarching goal of doubling ground vehicle speeds on any given terrain.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 949409 (2015) https://doi.org/10.1117/12.2177641
In recent years, advances in computer vision, motion planning, and task-oriented algorithms, together with the increasing availability and falling cost of sensors, have opened the door to affordable autonomous robots tailored to assist individual humans. One of the main tasks for a personal robot is to provide intuitive and non-intrusive assistance when requested by the user. However, some base robotic platforms cannot perform autonomous tasks or be operated by general users because of their complex controls. Most users expect a robot to have an intuitive interface that lets them directly control the platform as well as access some level of autonomous behavior. We aim to introduce this level of intuitive control and autonomy into teleoperated robotics.
This paper proposes a simple sensor-based HMI (Human-Machine Interface) framework in which a base teleoperated robotic platform is sensorized, allowing basic levels of autonomous tasks while providing a foundation for new intuitive interfaces. Multiple forms of HMIs are presented and a software architecture is proposed. As test cases for the framework, manipulation experiments were performed on a sensorized KUKA YouBot® platform, and mobility experiments were performed on a LABO-3 Neptune platform with a Nexus 10 tablet and multiple users, in order to examine the robot's ability to adapt to its environment and to its user.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940A (2015) https://doi.org/10.1117/12.2177371
When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors like short-range ladar imagers. These produce three-dimensional point cloud movies which can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. The twenty actions are then recognized using a nearest-neighbors classifier that achieves good accuracy.
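A compact sketch of this recognition chain, assuming skeleton trajectories of shape (frames, joints, 3) and using correlations between joint speed profiles as the physics-motivated feature; the data here is random stand-in, and the 1-NN classifier comes from scikit-learn:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def velocity_correlation_features(traj: np.ndarray) -> np.ndarray:
    """Pairwise correlation of per-joint speed profiles, flattened."""
    vel = np.diff(traj, axis=0)                    # (frames-1, joints, 3)
    speed = np.linalg.norm(vel, axis=2)            # (frames-1, joints)
    corr = np.corrcoef(speed.T)                    # (joints, joints)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]                                # upper triangle as a vector

# Hypothetical training data: skeletonized trajectories plus action labels.
rng = np.random.default_rng(0)
X = [rng.normal(size=(60, 15, 3)) for _ in range(40)]
y = rng.integers(0, 20, size=40)                   # twenty action classes

feats = np.array([velocity_correlation_features(t) for t in X])
clf = KNeighborsClassifier(n_neighbors=1).fit(feats, y)
print(clf.predict(feats[:3]))
```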
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940B (2015) https://doi.org/10.1117/12.2179281
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and against other quantitative imaging technologies. We report the methods used for camera calibration and the precision and accuracy results in ideal and simulated surgical settings, followed by performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940C (2015) https://doi.org/10.1117/12.2177399
Surface electromyography (SEMG) has been shown to be a robust and reliable interaction method allowing basic control of powered prosthetic devices. Research has shown a marked decrease in EMG-classification efficiency throughout activities of daily life due to socket shift, movement, and fatigue, as well as changes in the fit of the socket over the subject's lifetime. Users with the most severe levels of amputation require the most complex devices with the greatest number of degrees of freedom, yet the greater the amputation severity, the fewer viable SEMG sites are available as control inputs. Controlling complex dexterous devices with limited available inputs therefore requires additional sensing and interaction modalities. Previous work reported the use of intra-socket pressure, as measured during wrist flexion and extension, and showed that it is possible to control a powered prosthetic device with pressure sensors. In this paper, we present correlations of SEMG data with intra-socket pressure data. Surface EMG sensors and force sensors were housed within a simulated prosthetic cuff fit to a healthy-limbed subject, and EMG and intra-socket force data were collected from inside the cuff as the subject performed pre-defined grip motions with their dominant hand. Data fusion algorithms were explored and allowed a subject to use both intra-socket pressure and SEMG data as control inputs for a powered prosthetic device. This additional input modality improves input classification and provides information regarding socket fit throughout activities of daily life.
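Feature-level fusion of the two modalities can be sketched as follows, with hypothetical channel counts and random stand-in data; windowed RMS is a classic SEMG feature, and the classifier choice is illustrative rather than the paper's:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(emg: np.ndarray, force: np.ndarray) -> np.ndarray:
    """Fuse one window: emg is (samples, emg_ch), force is (samples, force_ch)."""
    emg_rms = np.sqrt(np.mean(emg ** 2, axis=0))   # classic EMG amplitude feature
    force_mean = force.mean(axis=0)                # intra-socket pressure feature
    return np.concatenate([emg_rms, force_mean])   # fused feature vector

# Hypothetical recordings: 120 windows, 4 EMG channels, 8 force channels.
rng = np.random.default_rng(1)
windows = [(rng.normal(size=(200, 4)), rng.normal(size=(200, 8)))
           for _ in range(120)]
X = np.array([window_features(e, f) for e, f in windows])
y = rng.integers(0, 3, size=120)                   # three grip patterns

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:5]))
```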
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940D (2015) https://doi.org/10.1117/12.2177035
Lidar systems are well known for their ability to measure three-dimensional aspects of a scene. This attribute of Lidar has been widely exploited by the robotics community, among others. The problem of resolving ranges of layered objects (such as a tree canopy over the forest floor) has been studied from the perspective of airborne systems. However, little research exists in studying this problem from a ground vehicle system (e.g., a bush covering a rock or other hazard). This paper discusses the issues involved in solving this problem from a ground vehicle. This includes analysis of extracting multi-return data from Lidar and the various laser properties that impact the ability to resolve multiple returns, such as pulse length and beam size. The impacts of these properties are presented as they apply to three different Lidar imaging technologies: scanning pulse Lidar, Geiger-mode flash Lidar, and time-of-flight cameras. Tradeoffs associated with these impacts are then discussed for a ground vehicle Lidar application.
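The two laser properties called out above map to simple first-order relations: pulse length sets the minimum range separation between resolvable returns, and beam divergence sets the footprint that must straddle the foreground and background objects. A worked example with illustrative values:

```python
# Standard first-order Lidar relations with illustrative parameter values.

C = 299_792_458.0  # speed of light, m/s

def range_separation(pulse_length_s: float) -> float:
    """Minimum separation of two resolvable returns: dR ~ c * tau / 2."""
    return C * pulse_length_s / 2.0

def beam_footprint(range_m: float, divergence_rad: float) -> float:
    """Beam diameter at range, small-angle approximation."""
    return range_m * divergence_rad

print(f"4 ns pulse  -> {range_separation(4e-9):.2f} m return separation")
print(f"1 mrad beam -> {beam_footprint(30.0, 1e-3) * 100:.1f} cm spot at 30 m")
```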
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940E (2015) https://doi.org/10.1117/12.2186825
Mesoscale robots, including active capsules, are a promising and well-suited approach for minimally invasive intrabody intervention. However, across the numerous works to date, the main limitation of these robots is the energy embedded for their locomotion and for the tasks they must accomplish. Limited autonomy and limited power ultimately make them unusable in real situations, such as active capsules operating inside the body for several tens of minutes. In this paper, we propose an approach to power mesoscale robots by harvesting energy through a piezoelectric cantilever structure embedded on the robot, excited by an oscillating magnetic field. A physical model of the proposed system is derived, and simulation results are presented and analyzed with respect to the influencing parameters, such as the number of layers in the cantilever and its dimensions. Finally, the feasibility of this solution is demonstrated and perspectives are discussed.
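Matching the cantilever to the oscillating excitation hinges on its resonant frequency, which Euler-Bernoulli beam theory gives in closed form for a clamped-free beam. A back-of-the-envelope sketch with hypothetical dimensions and material properties (not the paper's design):

```python
import math

def first_mode_hz(E, rho, L, w, t):
    """f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), lambda1 = 1.8751
    for the first bending mode of a clamped-free beam."""
    I = w * t ** 3 / 12.0          # second moment of area of the cross-section
    A = w * t                      # cross-section area
    return (1.8751 ** 2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Hypothetical PZT-like layer, 10 mm x 2 mm x 0.1 mm:
print(f"f1 ~ {first_mode_hz(E=60e9, rho=7800.0, L=10e-3, w=2e-3, t=100e-6):.0f} Hz")
```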
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940F (2015) https://doi.org/10.1117/12.2190851
This paper describes initial work on untethered microscale flying structures as a platform for a new class of aerial MEMS microrobots. We present and analyze both biomimetic structures, based partially on the wing designs of the smallest flying insects on Earth, and stress-engineered structures powered by radiometric (thermal) forces. The latter devices, also called MEMS microfliers, are 300 μm × 300 μm × 1.5 μm in size and are fabricated out of polycrystalline silicon. A convex chassis, formed through a novel in-situ masked post-release stress-engineering process, ensures their static in-flight stability. High-speed optical micrography was used to image these MEMS microfliers in mid-flight and analyze their flight profile.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940G (2015) https://doi.org/10.1117/12.2183258
Microrobots, sub-millimeter untethered microactuators, have applications including cellular manipulation, microsurgery, microassembly, tissue culture, and drug delivery. Laser-induced opto-thermocapillary flow-addressed bubble (OFB) microrobots are promising for these applications. In the OFB microrobot system, laser patterns generate thermal gradients within a liquid medium, creating thermocapillary forces that actuate the air bubbles serving as microrobots. A unique feature of the OFB microrobot system is that optical control enables the parallel yet independent actuation of microrobots. This paper reports on the development of an automated control system for the independent addressing of many OFB microrobots in parallel. In this system, a spatial light modulator (SLM) displays computer-generated holograms to create an optical pattern consisting of up to 50 individual spots. Each spot can control a single microrobot, so an array of microrobots is controlled with a sequence of holograms. Using this control system, single microrobots, multiple microrobots, and groups of microrobots were created, repositioned, and maneuvered independently within a set workspace. Up to 12 microrobots were controlled independently and in parallel; to the best of the authors' knowledge, this is the largest number of microrobots actuated independently and in parallel reported to date.
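The abstract does not detail how the multi-spot holograms are computed; a common choice for phase-only SLMs is the Gerchberg-Saxton algorithm, sketched here under that assumption:

```python
import numpy as np

def gerchberg_saxton(target_intensity: np.ndarray, iters: int = 50) -> np.ndarray:
    """Return an SLM phase mask whose far field approximates the target,
    iterating FFTs between the SLM plane and the focal plane while keeping
    only the phase in each plane."""
    target_amp = np.sqrt(target_intensity)
    field = np.exp(2j * np.pi * np.random.default_rng(0).random(target_amp.shape))
    for _ in range(iters):
        focal = np.fft.fft2(field)
        focal = target_amp * np.exp(1j * np.angle(focal))   # enforce target amplitude
        field = np.fft.ifft2(focal)
        field = np.exp(1j * np.angle(field))                # SLM is phase-only
    return np.angle(field)

# Target: a handful of trap spots, one per microrobot (positions hypothetical).
target = np.zeros((256, 256))
for r, c in [(64, 64), (64, 192), (128, 128), (192, 64), (192, 192)]:
    target[r, c] = 1.0
phase_mask = gerchberg_saxton(target)
```

Stepping the spot positions frame by frame and recomputing the mask yields the hologram sequence that moves each microrobot independently.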
Ashis Gopal Banerjee, Andrew Barnes, Krishnanand N. Kaipa, Jiashun Liu, Shaurya Shriyam, Nadir Shah, Satyandra K. Gupta
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940H (2015) https://doi.org/10.1117/12.2181346
Collaborative teams of human operators and mobile ground robots are becoming popular in manufacturing plants, where they assist humans with many repetitive tasks such as packing related objects into different units, an operation known as kitting. In this paper, we present an ontology that provides a unified representation of all kitting-related tasks, which are decomposed into atomic actions that are either computational (involving sensing, perception, planning, and control) or physical (involving actuation and manipulation). The ontology is then used in a stochastic integer linear program for optimal partitioning of the atomic tasks between the robots and the humans. Preliminary experiments on a single-robot, single-human case yield promising results: kitting operations are completed with shorter durations and lower manipulation failure rates under human-robot partnership than with the human or the robot alone. This success is achieved by the robot seeking human assistance for visual perception tasks while performing the other tasks primarily on its own.
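The partitioning step can be sketched as a small integer linear program. The paper's formulation is stochastic; in this illustration, hypothetical expected durations stand in for the random ones, and the objective balances the two agents' workloads:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Assign each atomic kitting action to the human or the robot so as to
# minimize the longer of the two workloads. Task names and durations are
# hypothetical stand-ins for the paper's ontology-derived atomic actions.
tasks = ["locate_part", "grasp", "transport", "insert", "verify"]
robot_s = {"locate_part": 9, "grasp": 4, "transport": 6, "insert": 7, "verify": 8}
human_s = {"locate_part": 2, "grasp": 3, "transport": 5, "insert": 4, "verify": 2}

x = {t: LpVariable(f"robot_{t}", cat=LpBinary) for t in tasks}  # 1 -> robot does t
makespan = LpVariable("makespan", lowBound=0)

prob = LpProblem("kitting_partition", LpMinimize)
prob += makespan                                                     # objective
prob += lpSum(robot_s[t] * x[t] for t in tasks) <= makespan          # robot load
prob += lpSum(human_s[t] * (1 - x[t]) for t in tasks) <= makespan    # human load
prob.solve()
print({t: ("robot" if x[t].value() == 1 else "human") for t in tasks})
```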
Oguz Yetkin, Kristi Wallace, Joseph D. Sanford, Dan O. Popa
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940I (2015) https://doi.org/10.1117/12.2177449
A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on the user's sound hand to select different grasp patterns. Experiments are presented with two different gesture tracking systems: one comprised of conductive thimbles worn on each finger (the Conductive Thimble system), and another comprised of a glove that leaves the fingers free (the Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using the Conductive Glove system and the iPhone app provided by Touch Bionics; the best prosthetic device performance in this test was obtained with the developed Conductive Glove system. Results show that these low-encumbrance, gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940K (2015) https://doi.org/10.1117/12.2185683
This paper addresses feedforward control of the vibrations of a 2-DOF piezoelectric micropositioner in order to damp vibrations in both the direct axes and the cross-couplings. The actuator exhibits badly damped vibrations in its direct transfers as well as in its cross-coupling transfers. We therefore propose a bivariable controller that does not require sensors to reduce the vibrations along the different axes. The proposed scheme reduces all vibration modes for both outputs by extending the monovariable zero-placement input-shaping technique to the bivariable case. Experimental tests have been carried out and demonstrate the efficiency of the proposed method.
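The monovariable starting point is easy to state: a two-impulse zero-vibration (ZV) shaper places its zeros on the vibratory poles of one mode. A sketch with an illustrative mode; the paper's contribution, extending such shaping to the bivariable cross-coupled case, is not reproduced here:

```python
import math

def zv_shaper(wn: float, zeta: float):
    """Two-impulse ZV shaper cancelling one mode (natural frequency wn rad/s,
    damping ratio zeta). Returns (amplitudes, times)."""
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = wn * math.sqrt(1.0 - zeta ** 2)          # damped natural frequency
    amps = (1.0 / (1.0 + K), K / (1.0 + K))       # impulses sum to one
    times = (0.0, math.pi / wd)                   # half a damped period apart
    return amps, times

# Illustrative mode, e.g. an 840 Hz resonance with light damping:
amps, times = zv_shaper(wn=2 * math.pi * 840.0, zeta=0.01)
print(amps, times)
```

Convolving the reference trajectory with these two impulses leaves the commanded motion unchanged at low frequency while cancelling the residual vibration of the targeted mode.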
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940L (2015) https://doi.org/10.1117/12.2192746
This paper deals with feedforward control of vibrations in a 2-axis piezoelectric actuator devoted to precise positioning. The actuator is highly prized for high-precision spatial positioning applications, but its positioning capability, as well as the stability of the final tasks, is compromised by badly damped vibrations, especially during high-speed positioning. In addition to these vibrations, strong cross-couplings between the actuator axes pose a challenge for the feedforward control scheme. This paper proposes a bivariable feedforward standard H∞ approach to suppress the vibrations in the direct transfers and to reduce the amplitudes of the cross-couplings. The proposed approach is simple to handle and easy to implement compared to commonly used oscillation-suppression techniques. Experimental tests demonstrate the efficiency of the proposed approach.
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940M (2015) https://doi.org/10.1117/12.2175896
Spiking neural networks (SNNs) have drawn considerable excitement because of their computational
properties, believed to be superior to conventional von Neumann machines, and sharing properties
with living brains. Yet progress building these systems has been limited because we lack a design
methodology. We present a gene-driven network growth algorithm that enables a genetic algorithm
(evolutionary computation) to generate and test SNNs. The genome for this algorithm grows O(n)
where n is the number of neurons; n is also evolved. The genome not only specifies the network
topology, but all its parameters as well. Experiments show the algorithm producing SNNs that
effectively produce a robust spike bursting behavior given tonic inputs, an application suitable for
central pattern generators. Even though evolution did not include perturbations of the input spike
trains, the evolved networks showed remarkable robustness to such perturbations. In addition, the
output spike patterns retain evidence of the specific perturbation of the inputs, a feature that could be
exploited by network additions that could use this information for refined decision making if required.
On a second task, a sequence detector, a discriminating design was found that might be considered an
example of “unintelligent design”; extra non-functional neurons were included that, while inefficient,
did not hamper its proper functioning.
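The outer evolutionary loop can be sketched as follows; the genome is a flat vector whose length scales with the evolved neuron count, and decode_and_simulate is a stand-in for growing the SNN from its genome and scoring its bursting behavior under tonic input:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_and_simulate(genome: np.ndarray) -> float:
    """Hypothetical fitness: grow the network, run it, score burst quality.
    A placeholder objective stands in for the spiking simulation."""
    return -float(np.sum((genome - 0.5) ** 2))

POP, GENS, GENES_PER_NEURON = 40, 100, 8
# Each genome encodes a different (evolvable) number of neurons.
pop = [rng.random(GENES_PER_NEURON * rng.integers(3, 12)) for _ in range(POP)]

for _ in range(GENS):
    scored = sorted(pop, key=decode_and_simulate, reverse=True)
    parents = scored[: POP // 2]                        # truncation selection
    children = []
    for p in parents:
        child = p + rng.normal(0.0, 0.05, size=p.size)  # mutate parameters
        if rng.random() < 0.1:                          # occasionally grow n itself
            child = np.concatenate([child, rng.random(GENES_PER_NEURON)])
        children.append(child)
    pop = parents + children

best = max(pop, key=decode_and_simulate)
print(f"best genome encodes {best.size // GENES_PER_NEURON} neurons")
```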
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940N (2015) https://doi.org/10.1117/12.2177214
This paper presents an experimental study of a neural network modeled by an adaptive Lotka-Volterra system. With totally inhibitory connections, this system can be embedded in a simple classification network that is able to classify and monitor its inputs in a spontaneous, nonlinear fashion without prior training. We describe a framework for leveraging this behavior through an example involving breast cancer diagnosis.
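A minimal sketch of the underlying dynamics: a competitive (totally inhibitory) Lotka-Volterra system in which cross-inhibition exceeds self-limitation, so the unit receiving the strongest input suppresses the others and thereby labels the class without training. Parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import odeint

def lv(x, t, b, A):
    """Competitive Lotka-Volterra: dx_i/dt = x_i * (b_i - sum_j A_ij x_j)."""
    return x * (b - A @ x)

n = 3
A = 2.0 * np.ones((n, n)) - np.eye(n)    # self-limit 1, cross-inhibition 2
b = np.array([1.0, 1.4, 0.9])            # inputs: unit 1 gets the strongest drive
x0 = np.full(n, 0.1)                     # equal initial activity
t = np.linspace(0.0, 50.0, 500)

traj = odeint(lv, x0, t, args=(b, A))
print("winning class:", int(np.argmax(traj[-1])))   # the strongest-driven unit
```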
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940O (2015) https://doi.org/10.1117/12.2177603
Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities whose essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot easily be adapted to graph data. This gave rise to the field of relational data mining, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and to variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual strengths and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.
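The data-aggregation flavor of such a scheme can be sketched in miniature: each worker mines patterns from its own share of the graph instances, and a coordinator merges the counts. Single labeled-edge patterns stand in here for the paper's richer subgraph patterns, and the graph instances are hypothetical:

```python
from collections import Counter

def mine_partition(graphs):
    """Count (src_label, relation, dst_label) patterns in one partition."""
    counts = Counter()
    for g in graphs:
        for (u, v, rel) in g["edges"]:
            counts[(g["labels"][u], rel, g["labels"][v])] += 1
    return counts

# Hypothetical graph instances, pre-split across three workers.
partitions = [
    [{"labels": {0: "person", 1: "host"}, "edges": [(0, 1, "logs_into")]}],
    [{"labels": {0: "person", 1: "host"}, "edges": [(0, 1, "logs_into")]}],
    [{"labels": {0: "host", 1: "host"}, "edges": [(0, 1, "connects_to")]}],
]

# Each mine_partition call is independent, so in a cluster each would run on
# its own node; here they run in-process and the coordinator merges them.
total = sum((mine_partition(p) for p in partitions), Counter())
frequent = {pat: n for pat, n in total.items() if n >= 2}   # global support >= 2
print(frequent)
```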
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940S (2015) https://doi.org/10.1117/12.2177400
Improving surveillance capacity over wide zones requires a set of smart, battery-powered Unattended Ground Sensors (UGS) capable of issuing an alarm to a decision-making center; only high-level information should be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bimodal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: low-level panoramic motion analysis (peripheral vision) and high-level event-focused analysis (foveal vision). By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast detector of relevant events. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. The UGS prototype validates our system approach in laboratory tests: the peripheral analysis module demonstrates a low false alarm rate, the foveal vision correctly focuses on the detected events, and a parallel FPGA implementation of the recognition core fulfills the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By processing data locally and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded artificial intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
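The two-level analysis can be sketched as a software pipeline in which a cheap peripheral stage gates an expensive foveal stage; the OpenCV calls, thresholds, and video source below are illustrative stand-ins, not the prototype's FPGA implementation:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def peripheral(frame):
    """Fast panoramic stage: bounding boxes of moving objects big enough to matter."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

def foveal(frame, box):
    """Placeholder for the detailed texture/color/shape classifier."""
    x, y, w, h = box
    return ("event", frame[y:y + h, x:x + w])

cap = cv2.VideoCapture("panoramic.mp4")          # hypothetical input stream
ok, frame = cap.read()
while ok:
    for box in peripheral(frame):                # cheap, runs on every frame
        label, chip = foveal(frame, box)         # expensive, runs only on events
    ok, frame = cap.read()
```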
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940U (2015) https://doi.org/10.1117/12.2176536
Recent advances in machine learning with big data sets have enabled significant advances in the optimization of classification and recognition systems. However, for applications such as situational awareness systems, the entirety of the available data dwarfs the amount permissible for a training set with tractable machine learning optimization times. Furthermore, the performance of any optimized system depends heavily on the training set correctly and completely representing the entire data space of scenarios. In this paper we present a technique to characterize the entire data space, identify the key factors for representation, and subsequently select a subset that statistically represents the correct mix of scenarios. We demonstrate the effectiveness of these characterization and subset selection techniques by using a genetic algorithm to optimize the performance of a gunfire recognition system.
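The subset-selection idea reduces to matching the subset's distribution over the key factors to that of the full corpus. A sketch with a single hypothetical categorical factor; the real corpus would supply several (range, weapon type, clutter, etc.):

```python
import numpy as np

rng = np.random.default_rng(0)

# Full data space: each entry is the scenario category of one recording.
full = rng.integers(0, 4, size=100_000)
target = np.bincount(full, minlength=4) / full.size   # corpus-wide factor mix

def representative_subset(data, target, k):
    """Draw ~k samples so each category appears in its corpus-wide proportion."""
    idx = []
    for cat, frac in enumerate(target):
        pool = np.flatnonzero(data == cat)
        idx.append(rng.choice(pool, size=int(round(frac * k)), replace=False))
    return np.concatenate(idx)

subset = representative_subset(full, target, k=2_000)
print(np.bincount(full[subset], minlength=4) / subset.size)  # ~ target mix
```

The tractable subset then feeds the genetic-algorithm optimization in place of the full corpus.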
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940V (2015) https://doi.org/10.1117/12.2180153
The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross section of the attacks found on the Internet today could be useful, but would eventually succumb to the same problem as the KDD-99 Cup: its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology, and models.
Daniela I. Moody, Cathy J. Wilson, Joel C. Rowland, Garrett L. Altmann
Proceedings Volume Next-Generation Robotics II; and Machine Intelligence and Bio-inspired Computation: Theory and Applications IX, 94940W (2015) https://doi.org/10.1117/12.2177590
Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data-adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
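A miniature sketch of the pipeline's three stages, with random patches standing in for the normalized band difference index data: a Hebbian-style dictionary update, a crude sparse encoding, and clustering of the resulting codes (the CoSA step, here via k-means). All sizes and rates are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.normal(size=(5000, 64))                  # 8x8 patches, flattened
D = rng.normal(size=(64, 32))                          # 32 dictionary atoms
D /= np.linalg.norm(D, axis=0)

for x in patches:                                      # Hebbian-style learning
    a = D.T @ x                                        # atom responses
    a[np.abs(a) < np.percentile(np.abs(a), 90)] = 0.0  # keep strongest responses
    D += 0.01 * np.outer(x - D @ a, a)                 # residual-driven update
    D /= np.linalg.norm(D, axis=0)                     # renormalize atoms

codes = patches @ D                                    # sparse-ish representations
labels = KMeans(n_clusters=6, n_init=10).fit_predict(codes)
print(np.bincount(labels))                             # pixels per land cover class
```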