Recently, tomographic flow cytometry in label-free modality has been demonstrated. The tomographic apparatus operates in quantitative phase imaging (QPI) mode and makes it possible to retrieve the 3D refractive index distribution of single cells flowing along a microfluidic channel. One of the challenging topics related to QPI is the need to extract subcellular compartments, as QPI lacks the specificity guaranteed by standard fluorescence microscopy (FM). Here we show the possibility of retrieving the specific refractive index distribution of multiple subcellular organelles from the 3D tomograms of flowing cells. Furthermore, we show a novel model for the representation and exploration of 3D tomograms, displayed in an immersive virtual reality (VR) environment. Thus, new scenarios can be opened not only for visualization but also for analyzing the quantitative measurements of the whole 3D structure of each and every cell.
KEYWORDS: Virtual reality, Intelligence systems, Augmented reality, System integration, Safety, Point clouds, Deep learning, Transportation, Standards development, Decision support systems
Maintenance in the railway context has today reached very high safety standards. Still, despite these high standards, the sector's goal remains to continue employing resources and technologies to achieve a total absence of accidents. Our study proposes creating an integrated monitoring system to support the awareness of a planning operator. The system consists of three blocks that collect data from the field, process them, and identify anomalies. Subsequently, these data are displayed interactively in a virtual environment that realistically reproduces the section of railway line under analysis. The planning operator can navigate the virtual environment with full awareness and plan maintenance interventions. Finally, the prepared maintenance cards are made available in augmented reality to support the maintainers in locating the intervention area and executing the task.
The term Metaverse was introduced in 1992. Lately, the concept has become a popular buzzword among the general public, thanks to Meta. Its intrinsic reliance on the convergence of enabling technologies such as virtual reality, the internet, and social networks cannot be overstated. It might prove a powerful tool for enhancing data discovery and interpretation, especially in a collaborative setup. The work presented here aims to investigate the interaction between a real user and digital objects in a virtual world, to understand which aspects attention must be focused on to achieve natural and comfortable use. Our work involves the generation of Metaballs and a user's interaction with them in a virtual environment. Metaballs are particular implicit surfaces of arbitrary topology, widely used in Computer Graphics to model curved objects. The "organic" look and feel of how they interact with each other, and their resemblance to soft tissues, have proved a natural fit for tasks such as surgery simulation. Moreover, these implicit surfaces can also represent real objects at a much smaller scale, such as the cells of living organisms, and the ability to comfortably interact with these supersized versions of real objects could open the door to new possibilities.
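As an illustration of the underlying technique, the following minimal Python sketch evaluates a classic inverse-square metaball field on a 3D grid and polygonizes the isosurface with marching cubes (via scikit-image). The ball positions, radii, grid resolution, and threshold are hypothetical values chosen for the example, not taken from the paper.

```python
# Minimal metaball sketch: sum-of-falloffs scalar field on a 3D grid,
# polygonized with marching cubes for rendering in a VR scene.
import numpy as np
from skimage import measure  # assumes scikit-image is available

# Illustrative metaballs: (center, radius) -- placeholder values.
balls = [(np.array([0.35, 0.5, 0.5]), 0.18),
         (np.array([0.65, 0.5, 0.5]), 0.15)]

n = 64
axis = np.linspace(0.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([x, y, z], axis=-1)

# Classic inverse-square falloff: f(p) = sum_i r_i^2 / |p - c_i|^2
field = np.zeros((n, n, n))
for center, radius in balls:
    d2 = np.sum((grid - center) ** 2, axis=-1)
    field += radius ** 2 / np.maximum(d2, 1e-12)

# Points with field >= level are "inside"; marching cubes extracts
# the implicit surface as a triangle mesh (vertices and faces).
verts, faces, normals, _ = measure.marching_cubes(field, level=1.0)
print(verts.shape, faces.shape)
```

Because the field is a smooth sum of falloffs, the two balls blend into a single "organic" blob as they approach each other, which is the behavior exploited for soft-tissue-like interaction.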
Monitoring and maintaining the health state of existing bridges is a time-consuming and critical task. To reduce the time and effort required for a first screening to prioritize risks, deep-learning-based object detectors can be used. In detail, automatic defect and damage recognition on elements of existing bridges can be performed using single-stage detectors such as YOLOv5. To this end, a database of typical defects was gathered and labeled by domain experts, and YOLOv5 was trained, tested, and validated. Results showed good effectiveness and accuracy of the proposed methodology, opening new scenarios and demonstrating the potential of artificial intelligence for automatic defect detection on bridges.
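A hedged sketch of the inference side of such a pipeline is shown below, using the public Ultralytics YOLOv5 torch.hub interface. The weights file and image path are placeholders; the paper's actual dataset, classes, and training configuration are not reproduced here.

```python
# Sketch: single-stage defect detection with YOLOv5 via torch.hub.
import torch

# Load a YOLOv5 model fine-tuned on a (hypothetical) defect dataset.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="bridge_defects_best.pt")  # assumed weights

# Run inference on one inspection photo and inspect the detections.
results = model("bridge_photo.jpg")        # placeholder image path
results.print()                            # class, confidence, box summary
detections = results.pandas().xyxy[0]      # DataFrame: xmin..ymax, conf, name
print(detections.head())
```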
This paper presents a preliminary study for evaluating the quality of welds in thermomagnetic switches using 3D sensing and machine learning techniques. A 3D sensor based on laser triangulation is used to gather the point cloud of the component. The point cloud is then processed to extract hand-crafted signatures for binary classification: defective or non-defective component. Features such as Gaussian and mean curvatures, density, and quadric surface properties are used to build these signatures. Different machine learning models, including decision trees, support vector machines, k-nearest neighbors, random forests, ensemble classifiers, and artificial neural networks, are trained on the built signatures to classify the weld as defective or non-defective. Preliminary results on actual data achieve high classification accuracy (>84%) on all the tested models.
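The following Python sketch illustrates the general signature-building idea: a per-point geometric descriptor is aggregated into one fixed-length vector per weld and fed to a classical classifier. The "surface variation" curvature proxy and the histogram layout are assumptions for illustration, not the paper's exact features.

```python
# Hedged sketch: per-point descriptors -> histogram signature -> SVM.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def surface_variation(points, k=16):
    """Per-point curvature proxy: lambda_min / (l1+l2+l3) of the local
    covariance eigenvalues, a standard point-cloud descriptor."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    feats = np.empty(len(points))
    for i, neigh in enumerate(idx):
        p = points[neigh]
        cov = np.cov((p - p.mean(axis=0)).T)
        w = np.linalg.eigvalsh(cov)          # ascending eigenvalues
        feats[i] = w[0] / max(w.sum(), 1e-12)
    return feats

def signature(points, bins=32):
    """Histogram of per-point descriptors -> fixed-length signature."""
    v = surface_variation(points)
    hist, _ = np.histogram(v, bins=bins, range=(0.0, 1 / 3), density=True)
    return hist

# clouds: list of (N_i, 3) arrays from the laser-triangulation sensor;
# labels: 1 = defective, 0 = non-defective (both placeholders).
def train(clouds, labels):
    X = np.stack([signature(c) for c in clouds])
    return SVC(kernel="rbf").fit(X, labels)
```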
Augmented reality is one of the technologies that, in recent years, has been most in the spotlight for communities as diverse as researchers, industrial actors, and gamers. A common need in almost any scenario is to "register" the virtual world with the real one, so that the right virtual objects can be accurately placed in the user's view. Although positioning could be aided by global systems such as GPS, there are situations in which its accuracy or feasibility cannot be guaranteed. Indeed, some sectors could be prevented from exploring augmented reality as a disruptive technology if this need cannot be adequately fulfilled. In this work, photogrammetry is investigated for scenarios in which a few static, known, and well-defined real-world objects can be used as anchors in a broader area. The goal is to create a solid and reliable augmented reality framework in terms of precise placement of objects, with the aim of using it in contexts where other solutions lack the required accuracy. In particular, this work considers as its primary use case a solution developed with Microsoft HoloLens 2 for positioning digital objects in the context of railway maintenance, exploiting the recognition of real objects in the environment through photogrammetry techniques. Indeed, only precise positioning of objects will allow the pervasive diffusion of this technology in sectors such as health and the military, and more generally in all contexts where accuracy and reliability are essential for ensuring the safety of operations.
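The anchoring principle can be illustrated with a short sketch: once 2D-3D correspondences between a known real-world object (e.g., from a photogrammetric reconstruction) and the current camera frame are available, the camera pose follows from a Perspective-n-Point solve. This uses OpenCV's solvePnP as a stand-in; the HoloLens-specific pipeline, the feature matching, and all numeric values below are placeholders, not the paper's implementation.

```python
# Hedged sketch: camera pose from 2D-3D correspondences via PnP.
import numpy as np
import cv2

# 3D points of the known anchor object, in its own metric frame
# (e.g., from a photogrammetric model) -- placeholder values.
object_pts = np.array([[0, 0, 0], [0.2, 0, 0],
                       [0.2, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)

# Corresponding pixel locations detected in the current camera frame.
image_pts = np.array([[320, 240], [420, 238],
                      [421, 300], [319, 302]], dtype=np.float32)

# Intrinsics of the (assumed calibrated) camera.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the camera pose
    # R and tvec register virtual content to the real anchor object.
    print(R, tvec.ravel())
```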
We tackle the problem of robot localization by means of a deep learning (DL) approach. Convolutional neural networks, mainly devised for image analysis, are at the core of the proposed solution. An optical encoder neural network (OE-net) is devised to return the relative pose of a robot by processing consecutive images. A monocular camera, mounted on the robot and oriented toward the floor, collects the vision data. The OE-net takes a pair of consecutive acquired images as input and provides the relative pose information. The neural network is trained using a supervised learning approach. This preliminary study, conducted on synthetic images, suggests that a convolutional network, and hence a DL approach, can be a viable complement to traditional visual odometry for robot ego-motion estimation. The obtained outcomes look very promising.
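A hedged PyTorch sketch of this idea follows: two consecutive floor images stacked channel-wise, a small CNN regressing the relative pose (dx, dy, dtheta), trained with a supervised MSE loss. The layer sizes, image resolution, and training hyperparameters are illustrative, not the paper's architecture.

```python
# Sketch of an OE-net-style relative-pose regressor in PyTorch.
import torch
import torch.nn as nn

class OENet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 3)  # (dx, dy, dtheta)

    def forward(self, img_t, img_t1):
        # Stack the consecutive frames channel-wise: (B, 2, H, W).
        x = torch.cat([img_t, img_t1], dim=1)
        return self.head(self.features(x).flatten(1))

# One supervised training step against ground-truth relative poses.
net, mse = OENet(), nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
img_t = torch.rand(8, 1, 128, 128)   # placeholder consecutive frames
img_t1 = torch.rand(8, 1, 128, 128)
gt_pose = torch.rand(8, 3)           # placeholder supervision
loss = mse(net(img_t, img_t1), gt_pose)
opt.zero_grad(); loss.backward(); opt.step()
```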
This paper proposes an efficient method to provide a robust occupancy grid useful for robot navigation tasks. An omnidirectional indoor robot accomplishing logistics tasks has been equipped with stereo cameras for detecting the presence of moving and fixed obstacles. The stereo camera provides a 3D point cloud, from which the occupancy map can be computed. Nevertheless, the point cloud often contains unstable points, mainly due to a low-accuracy disparity map and to light reflections on the floor that produce mismatches during the stereo matching phase. The point cloud is therefore filtered using a cascade approach in order to obtain more robust occupancy grids, as sketched after this paragraph. Passthrough filters are applied to remove 3D points that are too far away. Since highly reflective floors produce unwanted 3D points, a color filter is also used to remove points with saturated intensity values. The remaining floating points, still belonging to the floor, are then filtered out by taking advantage of the knowledge of the camera tilt. At this stage, a preliminary 2D occupancy grid is built to sample the point cloud, and each bin of the occupancy map is processed. If the cell under investigation contains points, a distribution analysis of the point spread is performed. If the height of the highest point is under a given threshold value, the cell value is set to zero, so that unwanted floor points are further removed. Cells containing a low number of points are also cleared. Finally, isolated cells of the occupancy grid, and cells that do not have enough valid neighboring cells, are reset. In this way, noisy points and the edge points of objects do not contribute to inaccurate occupancy maps. Final outcomes prove that the proposed methodology provides robust occupancy maps while ensuring high performance in terms of processing time.
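The NumPy sketch below mirrors the described cascade step by step: passthrough range filter, saturation-based color filter, tilt-compensated floor removal, 2D binning with per-cell height and count checks, and a final isolated-cell reset. All thresholds and the tilt-compensation details are illustrative assumptions; the paper does not state its parameter values.

```python
# Hedged sketch of the point-cloud-to-occupancy-grid cascade.
import numpy as np

def occupancy_grid(xyz, rgb, tilt_deg=20.0, max_range=5.0,
                   cell=0.05, h_floor=0.05, h_obj=0.10, min_pts=5):
    # 1) Passthrough: drop points beyond the useful range.
    keep = np.linalg.norm(xyz, axis=1) < max_range
    # 2) Color filter: drop saturated pixels from floor reflections.
    keep &= rgb.max(axis=1) < 250
    xyz = xyz[keep]
    # 3) Tilt compensation: rotate so z is height above the floor,
    #    then drop residual floor points.
    a = np.deg2rad(tilt_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    xyz = xyz @ Rx.T
    xyz = xyz[xyz[:, 2] > h_floor]
    # 4) Bin into a 2D grid; keep only cells whose points are tall
    #    and numerous enough.
    ij = np.floor(xyz[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=np.uint8)
    for cell_idx in np.unique(ij, axis=0):
        m = np.all(ij == cell_idx, axis=1)
        if m.sum() >= min_pts and xyz[m, 2].max() > h_obj:
            grid[tuple(cell_idx)] = 1
    # 5) Single pass resetting isolated cells (no occupied neighbor).
    for i, j in np.argwhere(grid == 1):
        neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        if neigh.sum() <= 1:  # only the cell itself is occupied
            grid[i, j] = 0
    return grid
```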
Contactless, non-destructive testing has always been an important pillar of crucial tasks in industrial applications, including post-assembly testing in aircraft manufacturing. This work examines the topic from the point of view of the quality check of aircraft interior linings, such as sidewalls and hatracks, with the aim of improving safety of operation and comfort during flights through an automatic approach based on a computer vision system. In particular, it presents a multimodal approach using a 3D snapshot sensor and a color camera to identify defects and anomalies belonging to two distinct categories: geometrical and surface defects. In the aircraft manufacturing sector, due to low-volume production compared with the automotive industry, such quality check tasks are still performed manually, with a low level of automation. The proposed approach shows potential and has been demonstrated on a proof-of-concept system prototype funded through a European Commission Horizon 2020 research project under the Clean Sky 2 umbrella, aimed at the growth of the aviation sector.
Algorithms based on a clever exploitation of artificial intelligence (AI) techniques are the key to the modern multidisciplinary applications that have been developed in recent decades. The ability of AI approaches to extract relevant information from data is essential for comprehensive studies in new multidisciplinary topics such as ecological informatics. For example, improving knowledge of cetaceans' distribution patterns provides strategic expertise for developing tools aimed at the preservation of the marine environment. In this paper we present an innovative approach, based on Random Forest and RUSBoost, aimed at defining predictive models for presence/absence and abundance estimation of two cetacean species: the striped dolphin Stenella coeruleoalba and the common bottlenose dolphin Tursiops truncatus. Sighting data from 2009 to 2017 were collected and enriched with geo-morphological and meteorological data in order to build a comprehensive dataset of real observations used to train and validate the proposed algorithms. Results in terms of classification and regression accuracy demonstrate the feasibility of the proposed approach and suggest applying such artificial-intelligence-based techniques to larger datasets, with the aim of enabling large-scale studies as well as improving knowledge of data-deficient species.
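The two modeling tasks can be sketched as follows: presence/absence as classification (Random Forest, plus RUSBoost via the imbalanced-learn library, which handles the class imbalance typical of sighting data) and abundance as regression. The covariate names and all data below are synthetic placeholders standing in for the geo-morphological and meteorological variables.

```python
# Hedged sketch: presence/absence classification + abundance regression.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from imblearn.ensemble import RUSBoostClassifier

# X: sightings enriched with covariates such as depth, slope, distance
# to coast, sea state ... (placeholder synthetic data here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y_pres = rng.integers(0, 2, 500)   # presence/absence labels
y_abund = rng.poisson(5, 500)      # group size where present

Xtr, Xte, ytr, yte = train_test_split(X, y_pres, stratify=y_pres,
                                      random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
rus = RUSBoostClassifier(random_state=0).fit(Xtr, ytr)  # imbalance-aware
print("RF acc:", rf.score(Xte, yte), "RUSBoost acc:", rus.score(Xte, yte))

# Abundance estimation as a regression task on the same covariates.
reg = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y_abund)
```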
We propose a method for addressing one of the significant open issues in computer vision: material recognition. A time-of-flight range camera has been employed to analyze the characteristics of different materials. Starting from the information returned by the depth sensor, different features of interest have been extracted using transforms such as Fourier, discrete cosine, Hilbert, chirp-z, and Karhunen–Loève. These features have been used to build training and validation sets to feed a classifier (J48) that accomplishes the material recognition step. The effectiveness of the proposed methodology has been tested experimentally, obtaining good predictive accuracy for the considered materials. Moreover, experiments have shown that combining multiple transforms increases the robustness and reliability of the computed features, although the shutter value can heavily affect the prediction rates.
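A hedged sketch of such a feature pipeline follows: transform-domain statistics computed from a per-pixel signal, concatenated into one feature vector, and classified with a decision tree (scikit-learn's CART standing in for Weka's J48). Which statistics the paper actually keeps is not stated, so simple magnitude statistics are used as placeholders; the Karhunen–Loève transform, which is learned dataset-wide (e.g., via PCA), is omitted for brevity.

```python
# Hedged sketch: transform-domain features -> decision tree classifier.
import numpy as np
from scipy.fft import fft, dct
from scipy.signal import hilbert, czt
from sklearn.tree import DecisionTreeClassifier

def transform_features(sig):
    """Concatenate per-transform magnitude statistics into one vector."""
    feats = []
    for coeffs in (fft(sig), dct(sig), hilbert(sig), czt(sig)):
        mag = np.abs(coeffs)
        feats += [mag.mean(), mag.std(), mag.max()]
    return np.array(feats)

# signals: per-pixel depth/amplitude profiles from the ToF camera
# (placeholder synthetic data); labels: material classes.
rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 256))
labels = rng.integers(0, 4, 200)

X = np.stack([transform_features(s) for s in signals])
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
```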