Behavioral experiments on fruit flies have shown that they are attracted by nearby objects and prefer front-to-back motion. In this paper a visual orientation model is implemented on the Eye-RIS vision system and tested using a roving platform. Robotic experiments are used to collect statistical data on the system behaviour: followed trajectories, dwelling time, distribution of gaze direction, and other measures closely resembling those of the biological experiments on flies. The statistical analysis has been performed in different scenarios in which the robot faces different object distributions in the arena. The acquired data have been used to validate the proposed model through comparison with the fruit fly experiments.
Visual learning is an important aspect of fly life. Flies are able to extract visual cues from objects, such as color and vertical or horizontal extent, which can be used to learn to associate a meaning (i.e. a reward or a punishment) with specific features. Interesting biological experiments show that trained, stationarily flying flies avoid flying towards specific visual objects appearing in the surrounding environment. Wild-type flies effectively learn to avoid those objects, but this is not the case for the learning mutant rutabaga, which is defective in the cyclic-AMP-dependent pathway for plasticity. A bio-inspired architecture has been proposed to model the fly behavior, and experiments on roving robots were performed. Statistical comparisons have been carried out, and a mutant-like effect on the model has also been investigated.
This work describes a software/hardware framework in which cognitive architectures can be realized and applied to control different kinds of robotic platforms. The framework can be interfaced with a robot prototype, mediating the sensory-motor loop. Moreover, 2D and 3D kinematic or dynamic simulation environments can be used to evaluate the cognitive capabilities of the robotic system. Here we address design choices and implementation issues related to the proposed robotic programming environment, paying particular attention to its modular structure, an important characteristic for a flexible and powerful framework. The main advantage introduced by the proposed architecture is the rapid development of applications, which can be easily tested on different robotic platforms, either real or simulated, because the differences are properly masked by the architecture. In addition, an ad hoc simulator is implemented to validate the functionality of the proposed system.
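A minimal sketch of the masking idea, assuming a simple differential-drive interface (the class and method names are illustrative, not the framework's actual API): the cognitive layer programs against one interface, and real or simulated platforms implement it.

```python
from abc import ABC, abstractmethod
import math

# Hypothetical platform-abstraction layer: the cognitive architecture talks
# only to RobotPlatform, so a real prototype and a simulator are interchangeable.
class RobotPlatform(ABC):
    @abstractmethod
    def read_sensors(self) -> dict: ...
    @abstractmethod
    def send_motor_command(self, left: float, right: float) -> None: ...

class SimulatedRover(RobotPlatform):
    """2D kinematic stand-in for the real rover."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
    def read_sensors(self):
        return {"distance": [1.0] * 8}              # stub range readings
    def send_motor_command(self, left, right):
        self.heading += 0.1 * (right - left)        # toy differential drive
        v = 0.5 * (left + right)
        self.x += v * math.cos(self.heading)
        self.y += v * math.sin(self.heading)

def control_step(robot: RobotPlatform):
    """One sensory-motor loop iteration; works on real or simulated robots."""
    readings = robot.read_sensors()
    turn = 0.2 if min(readings["distance"]) < 0.5 else 0.0
    robot.send_motor_command(0.5 + turn, 0.5 - turn)

rover = SimulatedRover()
for _ in range(10):
    control_step(rover)
print(rover.x, rover.y)
```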
In this paper a new general-purpose perceptual control architecture is presented and applied to robot navigation in cluttered environments. In nature, insects show the ability to react to certain stimuli with simple reflexes, using direct sensory-motor pathways that can be considered basic behaviors, while higher brain regions provide a secondary pathway that allows a cognitive behavior to emerge and modulate the basic abilities. Taking inspiration from this evidence, our architecture modulates, through reinforcement learning, a set of competing, concurrent basic behaviors in order to accomplish the task assigned through a reward function. The core of the architecture is the Representation layer, where different stimuli, triggering competing reflexes, are fused to form a unique abstract picture of the environment. The representation is formalized by means of reaction-diffusion nonlinear partial differential equations, under the paradigm of Cellular Neural Networks (CNNs), whose dynamics converge to steady-state Turing patterns. A suitable unsupervised learning scheme, introduced at the afferent (input) stage, shapes the basins of attraction of the Turing patterns so as to incrementally drive the association between sensory stimuli and patterns. In this way, at the end of the learning stage, each pattern is characteristic of a particular behavior modulation, while its trained basin of attraction contains the set of all environmental conditions, as recorded through the sensors, that lead to the emergence of that particular behavior modulation. Robot simulations are reported to demonstrate the potential and effectiveness of the approach.
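The paper's reaction-diffusion CNN is not specified here, so the following sketch uses the classic Gray-Scott reaction-diffusion system as a stand-in to show dynamics converging to a steady-state Turing pattern; all parameters are illustrative.

```python
import numpy as np

# Gray-Scott reaction-diffusion model, a stand-in for the paper's RD-CNN:
# its dynamics likewise converge to steady-state Turing patterns.
# Grid size, diffusion rates, and feed/kill parameters are illustrative.
N, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.060
U = np.ones((N, N))
V = np.zeros((N, N))
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.5       # local perturbation seeds the pattern
U += 0.02 * np.random.rand(N, N)

def lap(Z):                                  # 5-point Laplacian, periodic borders
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(5000):                        # integrate until the pattern settles
    UVV = U * V * V
    U += Du * lap(U) - UVV + F * (1 - U)
    V += Dv * lap(V) + UVV - (F + k) * V

# U now holds a steady-state Turing pattern; in the paper's scheme each such
# pattern would index a particular behavior modulation.
```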
This paper describes how AnaFocus' Eye-RIS family of vision systems has been successfully embedded within the roving robots developed in the framework of the SPARK and SPARK II European projects to solve the action-oriented perception problem in real time. The Eye-RIS family is a set of vision systems conceived for single-chip integration using CMOS technology. The Eye-RIS systems employ a bio-inspired architecture in which image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. In the first step, processing is fully parallel, thanks to dedicated circuit structures integrated close to the sensors; these structures handle basically analog information. In the second step, processing is carried out on digitally coded data by means of digital processors. SPARK and SPARK II, in turn, are European research projects whose goal is to develop completely new sensing-perceiving-moving artefacts inspired by the basic principles of living systems and based on the concept of self-organization. As a result, its low power consumption together with its huge image-processing capability makes the Eye-RIS vision system a suitable choice for embedding within the roving robots developed in the framework of the SPARK projects, and for implementing the resulting mathematical models for action-oriented perception in real time.
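The following sketch only illustrates the two-step idea in software (it is not the Eye-RIS API): a first, array-wide "analog-like" stage applies the same local operation to every pixel at once, and a second, digital stage reduces the result to a compact feature.

```python
import numpy as np

# Illustrative two-step pipeline in the spirit of the Eye-RIS architecture,
# not its actual API: step 1 mimics focal-plane processing applied to all
# pixels in parallel; step 2 does serial digital post-processing.
def step1_focal_plane(img, iters=10, dt=0.2):
    """Diffusion plus thresholding applied uniformly across the pixel array."""
    f = img.astype(float)
    for _ in range(iters):                   # resistive-grid-style smoothing
        f += dt * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return f > f.mean()                      # global threshold -> binary map

def step2_digital(binary):
    """Digital step: extract a compact feature (centroid of active pixels)."""
    ys, xs = np.nonzero(binary)
    return (xs.mean(), ys.mean()) if xs.size else None

frame = np.random.rand(64, 64)               # stand-in for an acquired frame
print(step2_digital(step1_focal_plane(frame)))
```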
This paper describes a correlation-based navigation algorithm based on an unsupervised learning paradigm for spiking neural networks called Spike-Timing-Dependent Plasticity (STDP). The algorithm was implemented on a new bio-inspired hybrid mini-robot, called TriBot, to learn and extend its behavioral capabilities. Correlation-based algorithms have been found to explain many basic behaviors in simple animals. A key consequence of STDP is that the system is able to learn high-level sensor features on the basis of a set of basic reflexes driven by low-level sensor inputs. TriBot is composed of three modules, the first two being identical and inspired by the Whegs hybrid robot. A peculiar characteristic of the robot is the innovative shape of its three-spoke appendages, which increases the stability of the structure. The last module is composed of two standard legs with three degrees of freedom each. Thanks to the cooperation among these modules, TriBot is able to cope with irregular terrain, overcoming potential deadlock situations, to climb obstacles that are high compared with its size, and to manipulate objects. Robot experiments are reported to demonstrate the potential and effectiveness of the approach.
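For reference, a minimal pair-based STDP update of the kind named in the abstract; the amplitudes and time constants are illustrative, not TriBot's values.

```python
import numpy as np

# Minimal pair-based STDP rule: pre-before-post spike pairs potentiate the
# synapse, post-before-pre pairs depress it. Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:                                    # post before pre -> depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)

w = 0.5                                      # initial synaptic weight
for t_pre, t_post in [(10, 15), (40, 38), (60, 70)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print(w)
```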
In this work a biologically inspired network of spiking neurons is used for robot navigation control. The two tasks taken into account are obstacle avoidance and landmark-based navigation. The system learns the correlation between unconditioned stimuli (pre-wired sensors) and conditioned stimuli (high-level sensors) through Spike-Timing-Dependent Plasticity (STDP). In order to improve the robot behaviours, not only the synaptic weight but also the synaptic delay is subject to learning. By modulating the synaptic delay, the robot is able to store the landmark position, as in a short-term memory, and to use this information to smooth the turning actions, prolonging the landmark's effect even when it is no longer visible. Simulations are carried out in a dynamic simulation environment, and the robotic system considered is a cockroach-inspired hexapod robot. The locomotion signals are generated by a Central Pattern Generator, and the spiking network controls the heading of the robot by acting on the amplitude of the leg steps. Several scenarios have been considered: for instance, a T-shaped labyrinth of the kind used in laboratory experiments with mice to demonstrate classical and operant conditioning. Finally, the proposed adaptive navigation control structure can be extended in a modular way to include features detected by new sensors added to the correlation-based learning process.
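The abstract does not give the delay-learning rule, so the sketch below shows one plausible form: the synaptic delay drifts toward the observed pre-to-post spike interval, storing a timing (e.g. a landmark sighting) as a short-term memory. The rule, names, and constants are assumptions, not the paper's formulation.

```python
# Hypothetical delay-plasticity rule used alongside weight STDP: the delay
# moves so that the delayed presynaptic spike lands on the postsynaptic
# spike. Learning rate is illustrative.
ETA_D = 0.2   # delay learning rate (ms per pairing)

def update_delay(delay, t_pre, t_post):
    """Move the delay toward the pre->post interval actually observed."""
    err = (t_post - t_pre) - delay           # mismatch of arrival vs. spike
    return max(0.0, delay + ETA_D * err)

delay = 1.0
for t_pre, t_post in [(0, 12), (50, 63), (100, 111)]:
    delay = update_delay(delay, t_pre, t_post)
print(delay)   # drifts toward ~12 ms, the typical stimulus-to-action lag
```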
In this paper a new methodology for landmark navigation is introduced. For both animals and artificial agents, the landmark navigation problem can be divided into two parts: first, the agent has to recognize, within the dynamic environment, space-invariant objects that can serve as suitable landmarks for driving the motion towards a goal position; second, it has to use the information on the landmarks to navigate effectively within the environment. Here, the problem of determining landmarks has been addressed by processing the external information through a spiking network with dynamic synapses plastically tuned by an STDP algorithm. The learning process establishes correlations between the incoming stimuli, allowing the system to extract from the scenario important features that can play the role of landmarks. Once the landmarks are established, the agent acquires geometric relationships between them and the goal position. This process defines the parameters of a recurrent neural network (RNN), which in turn drives the agent's navigation, filtering the information about landmarks given within an absolute reference system (e.g. the North). When the absolute reference is not available, a safety mechanism controls the motion to maintain a correct heading. Simulation results show the potential of the proposed architecture, which is able to drive an agent towards the desired position in the presence of noisy stimuli and partially obscured landmarks.
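A toy version of the second stage (the stored geometric relationships, not the paper's RNN formulation): each landmark records a goal offset in an absolute, North-referenced frame, and the estimates from the currently visible landmarks are averaged. All names and values are illustrative.

```python
import numpy as np

# Hypothetical learned geometry: goal-from-landmark offsets in an absolute
# (North-based) frame, acquired once the landmarks are established.
landmarks = {"A": np.array([0.0, 5.0]), "B": np.array([4.0, 0.0])}
goal = np.array([3.0, 4.0])
stored = {k: goal - p for k, p in landmarks.items()}   # learned offsets

def goal_estimate(visible_positions):
    """Average goal position implied by the currently visible landmarks."""
    est = [pos + stored[k] for k, pos in visible_positions.items()]
    return np.mean(est, axis=0)

# With landmark B occluded, A alone still yields the goal position.
print(goal_estimate({"A": landmarks["A"]}))            # -> [3. 4.]
```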
In this paper we study the problem of obstacle avoidance for a redundant manipulator. The manipulator is controlled through a previously developed recurrent neural network, the MMC model (Mean of Multiple Computations), able to solve the kinematics of manipulators in any configuration. This approach solves both the direct and the inverse kinematics problem by simple numerical iterations. The MMC model proposed here consists of a linear part, which performs the topological analysis without any constraint, and a second layer with nonlinear blocks used to add the constraints related to both the mechanical structure of the manipulator and the obstacles located in the workspace. The control architecture was evaluated in simulation for a planar manipulator with three links. Starting from a given initial configuration, the robot is able to reach a target position chosen in the workspace while avoiding collisions with an obstacle placed in the plane. The obstacle is detected by simulated sensors placed on each link, which measure the distance between the link and the obstacle. The reaction to obstacle proximity can be modulated through a damping factor that smooths the robot trajectory. The good results obtained open the way to a hardware implementation for the real-time control of a redundant manipulator.
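A compact sketch of the MMC iteration for the three-link planar case, without the obstacle terms and with the nonlinear layer reduced to enforcing the fixed link lengths; the damping value and iteration count are illustrative. Each variable (link vectors L, diagonals D1, D2) is updated as the damped mean of its multiple computations, with the end-effector vector clamped to the target to solve the inverse kinematics.

```python
import numpy as np

# Minimal MMC (Mean of Multiple Computations) relaxation for a 3-link planar
# arm; an illustrative reduction of the model, not the paper's full network.
lengths = np.ones(3)
L = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])   # link vectors
D1, D2 = L[0] + L[1], L[1] + L[2]                    # diagonal vectors
R = np.array([1.2, 1.8])                             # end effector clamped to target
d = 5.0                                              # damping factor

for _ in range(300):
    # each quantity is the damped mean of its alternative computations
    nL0 = (D1 - L[1] + R - D2 + d * L[0]) / (2 + d)
    nL1 = (D1 - L[0] + D2 - L[2] + d * L[1]) / (2 + d)
    nL2 = (D2 - L[1] + R - D1 + d * L[2]) / (2 + d)
    nD1 = (L[0] + L[1] + R - L[2] + d * D1) / (2 + d)
    nD2 = (L[1] + L[2] + R - L[0] + d * D2) / (2 + d)
    L = np.array([nL0, nL1, nL2])
    # nonlinear layer: enforce the mechanical link lengths |Li| = 1
    L *= (lengths / np.linalg.norm(L, axis=1))[:, None]
    D1, D2 = nD1, nD2

print(L.sum(axis=0), "vs target", R)                 # end effector near target
```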
Locomotion control of legged robots is a field in continuous evolution. In this work a bio-inspired control architecture based on the stick insect is applied to control the hexapod robot Gregor. The control scheme is an extension of Walknet, a decentralized network inspired by the stick insect that generates, on the basis of local reflexes, the control signals needed to coordinate locomotion in hexapod robots. Walknet has been adapted to the specific mechanical structure of Gregor, which is characterized by specialized legs and a sprawled posture. In particular, an innovative hind-leg geometry, inspired by the cockroach, has been considered to improve climbing capabilities. The performance of the new control architecture has been evaluated in dynamic simulation environments. The robot has been endowed with distance and contact sensors for obstacle detection. A heading control is used to avoid large obstacles, and an avoidance reflex, as found in stick insects, has been introduced to further improve the climbing capabilities of the structure. The results, obtained in different environmental configurations, stress the adaptive capabilities of the Walknet approach: even in unpredictable and cluttered environments, the walking behaviour of the simulated robot and of the robot prototype, controlled through an FPGA-based board, remained stable.
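A minimal sketch of the decentralized coordination idea behind Walknet (a Cruse-like rule: a swinging neighbor delays a leg's lift-off); thresholds, speeds, and the neighbor graph are illustrative, not Gregor's tuned values.

```python
import numpy as np

# Each leg integrates its position along the stance stroke and lifts into
# swing when it passes a threshold; swinging neighbors raise that threshold,
# so adjacent legs avoid being in the air at the same time.
LEGS = ["L1", "L2", "L3", "R1", "R2", "R3"]
NEIGH = {"L1": ["L2", "R1"], "L2": ["L1", "L3"], "L3": ["L2", "R3"],
         "R1": ["R2", "L1"], "R2": ["R1", "R3"], "R3": ["R2", "L3"]}
pos = {leg: np.random.rand() for leg in LEGS}   # position along stance stroke
swinging = {leg: False for leg in LEGS}

def step(dt=0.05, v=1.0, thresh=1.0, inhib=0.3):
    for leg in LEGS:
        if swinging[leg]:
            pos[leg] -= 4 * v * dt               # fast return stroke
            if pos[leg] <= 0.0:
                swinging[leg] = False            # touch down, resume stance
        else:
            pos[leg] += v * dt                   # body motion drags the leg back
            # coordination influence: swinging neighbors delay lift-off
            t = thresh + inhib * sum(swinging[n] for n in NEIGH[leg])
            if pos[leg] >= t:
                swinging[leg] = True

for _ in range(200):
    step()
print({leg: ("swing" if swinging[leg] else "stance") for leg in LEGS})
```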
In order to solve the navigation problem of a mobile robot in an unstructured environment, a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors while keeping a system that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the data exchange required with the motor control layer. In particular, for the visual system we use the Eye-RIS v1.1 board made by AnaFocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized, parallel processing for concurrent acquisition. Therefore, Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that operate in parallel, allowing the sensors to be driven simultaneously. With this approach the proposed multi-sensory architecture achieves real-time capabilities.
In this paper a new methodology for action-oriented perception is introduced. It is based on a previous method that used Turing patterns in CNNs to evoke "perceptual states" as representations of the environmental conditions. The emerging patterns were associated with codes that gave rise to learnable actions on a moving robot. Recently, the paradigm of Winnerless Competition (WLC) was taken into consideration as a suitable, bio-inspired, and efficient method to generate sequences of neural activations strictly related to the spatio-temporal activity of the input sensors. This fascinating property was recently measured in the olfactory system, in particular in groups of neurons belonging to the insect Antennal Lobe and to the mammalian Olfactory Bulb. Taking inspiration from these experimental results and from the analytical model of WLC, a cellular nonlinear model generating sequences of cell activations, representing the input pattern at the sensory level, is used in an action-oriented perception framework. Simulation results show the potential of the WLC approach for designing dynamic networks for discrimination and classification, with a potentially huge memory capacity. In the present manuscript the WLC principle, implemented in a network of FitzHugh-Nagumo neurons, is used within the whole framework for action-oriented perception, and the results are applied to a roving robot.
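The FitzHugh-Nagumo implementation is beyond the scope of an abstract, but the analytical WLC model referred to is commonly written as a generalized Lotka-Volterra system with asymmetric inhibition. The sketch below, with illustrative parameters, shows the characteristic behavior: activity visits the neurons in a stimulus-dependent sequence rather than settling on a single winner.

```python
import numpy as np

# Generalized Lotka-Volterra form of Winnerless Competition: asymmetric
# inhibition rho makes activity cycle through the neurons in sequence.
# All parameter values are illustrative.
N = 3
rho = np.array([[1.0, 2.0, 0.5],
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])           # asymmetric competition matrix
stim = np.array([1.0, 1.2, 0.9])            # the stimulus sets the growth rates
a = np.array([0.1, 0.2, 0.3])               # initial activities
dt, T = 0.01, 4000
trace = np.empty((T, N))
for t in range(T):
    a += dt * a * (stim - rho @ a)          # Lotka-Volterra dynamics
    trace[t] = a

# The index of the most active neuron switches over time, encoding the
# stimulus as a sequence of activations.
print(trace[::800].argmax(axis=1))
```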
A common design for a robot searching for a target that emits a sensory stimulus (e.g. odor or sound) makes use of the gradient of the sensory intensity. However, the intensity may decay rapidly with distance to the source, and the weak signal-to-noise ratio then strongly limits the maximal distance at which the robot's performance is still acceptable. We propose a simple deterministic platform for investigating the searching problem in an uncertain environment with a low signal-to-noise ratio. The robot's sensory layer is a differential sensor capable of comparing the stimulus intensity between two consecutive steps. The sensory output feeds the motor layer through two parallel sensory-motor pathways. The first, "reflex" pathway implements the gradient strategy, while the second, "integrating" pathway processes sensory information by discovering statistical dependences and eventually correcting the results of the first, fast pathway. We show that such parallel sensory information processing greatly improves the robot's performance outside of the safe area of high signal-to-noise ratio.
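A toy rendering of the two-pathway scheme (the geometry, gains, and override rule are assumptions, not the paper's model): a reflex pathway reacts to the last intensity difference, while an integrating pathway keeps a slow running estimate that takes over when the noisy one-step signal is unreliable.

```python
import numpy as np

# Toy two-pathway search: a differential sensor compares the intensity
# between consecutive steps; a slow integrator accumulates the differences
# and stands in for the statistical "integrating" pathway.
rng = np.random.default_rng(0)
src = np.array([8.0, 8.0])

def intensity(p, noise=0.1):                  # 1/r^2-like decay + sensor noise
    return 1.0 / (1.0 + np.sum((p - src) ** 2)) + noise * rng.normal()

pos, heading, prev = np.array([0.0, 0.0]), 0.0, 0.0
memory = 0.0                                  # integrating pathway state
for step in range(400):
    cur = intensity(pos)
    diff = cur - prev                         # differential sensor output
    memory = 0.95 * memory + 0.05 * diff      # slow statistical estimate
    drive = diff if abs(diff) > abs(memory) else memory
    heading += 1.5 if drive < 0 else 0.0      # turn when intensity is dropping
    pos += 0.1 * np.array([np.cos(heading), np.sin(heading)])
    prev = cur
print(np.linalg.norm(pos - src))              # distance to the source after the run
```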
In this paper a biologically inspired network of spiking neurons is used for robot navigation control. The implemented scheme is able to process information coming from the robot's contact sensors in order to avoid obstacles and, on the basis of these actions, to learn how to respond to stimuli coming from range-finder sensors. The network is therefore capable of reinforcement learning through a mechanism based on operant conditioning. This learning takes place according to a spike-timing-based plasticity law in the synapses. Simulation results discussed in the paper show the suitability of the approach and interesting adaptive properties of the network.
In this paper a model for auditory perception is introduced. The model is based on a network of integrate-and-fire and resonate-and-fire neurons and is aimed at controlling the phonotaxis behavior of a roving robot. The starting point is the model of phonotaxis in Gryllus bimaculatus: this model consists of four integrate-and-fire neurons and is able to discriminate the calling song of the male cricket and to orient the robot towards the sound source. This paper extends the model to include amplitude-frequency clustering. The proposed spiking network shows different behaviors associated with different characteristics of the input signals (amplitude and frequency). The behavior implemented on the robot is similar to cricket behavior, where some frequencies are associated with the calling song of male crickets, while others indicate the presence of predators. The whole model for auditory perception is therefore devoted to controlling different responses (attractive or repulsive) depending on the input characteristics. The performance of the control system has been evaluated through several experiments carried out on a roving robot.
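For the resonate-and-fire ingredient, a sketch based on Izhikevich's linear resonator with a complex state shows the frequency selectivity such a model relies on: inputs pulsing near the neuron's eigenfrequency drive it to threshold, while off-resonance inputs do not. Parameters are illustrative, not those of the cricket model.

```python
import numpy as np

# Resonate-and-fire neuron: z' = (b + i*omega) * z + I(t), spiking when the
# imaginary part of z crosses a threshold. Constants are illustrative.
def resonate_and_fire(input_freq, omega=2 * np.pi * 5.0, b=-1.0,
                      amp=2.0, dt=1e-3, T=2.0, thresh=0.3):
    z, spikes, t = 0.0 + 0.0j, 0, 0.0
    while t < T:
        drive = amp * np.cos(2 * np.pi * input_freq * t)
        z += dt * ((b + 1j * omega) * z + drive)  # damped rotation + input
        if z.imag > thresh:                        # threshold on Im(z)
            spikes += 1
            z = 0.0 + 0.0j                         # reset after the spike
        t += dt
    return spikes

for f in [1.0, 5.0, 12.0]:                         # eigenfrequency is 5 Hz
    print(f, "Hz ->", resonate_and_fire(f), "spikes")
```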