A portable, inexpensive, and easy-to-manufacture microfluidic device is developed for the detection of SARS-CoV-2 dsDNA fragments. In this device, four reaction chambers separated by carbon fiber rods are pre-loaded with isothermal amplification and CRISPR-Cas12a reagents. The reaction is carried out simply by pulling the rods, without the need for manual pipetting. To enable power-free pathogen detection, the entire assay is designed to be heated with a disposable hand warmer. After the CRISPR reaction, the fluorescence signal generated by positive samples is identified by the naked eye using an inexpensive flashlight. This simple and sensitive device can serve as a model for next-generation viral diagnostics in both hospital and resource-limited settings.
We propose a multilayer system for ice image retrieval. Ice images are typically texture-less, which makes retrieval difficult. To achieve high accuracy, high-level local features are usually used for retrieval. However, most high-level features are high-dimensional, which slows down the retrieval process. To overcome this problem, we divide the retrieval process into three steps, each of which filters out a large portion of the images. Because the features are constructed according to the properties of ice images, a query image can be localized much more quickly than with high-level features. The ice images are captured in the Arctic, where the ice state changes dramatically due to environmental and other influences. We build the first layer of the system on color information and edges, as these are the most critical characteristics of ice images. We divide the second layer into two sublayers. The first sublayer uses an edge histogram. In the second sublayer, we detect salient points based on pixel values at edge positions and connect adjacent points with straight lines. A new feature is built from the scaled distances between adjacent salient points and the angles between the connecting lines; this feature is invariant to translation, rotation, and scaling (see the sketch below). As the features in the first two layers are holistic, their time performance is much better than that of high-level local features. The third layer applies the Harris detector to find correspondences between features on the small set of remaining filtered images. The experiments show that our system achieves good accuracy while maintaining much better time performance.
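The second-sublayer feature can be illustrated with a minimal Python sketch. It is not the authors' implementation: the Canny/Sobel-based edge and salient-point selection, the point count, and all thresholds are assumptions standing in for the paper's pixel-value-based selection; only the idea of normalized adjacent-point distances plus turning angles comes from the abstract.

import cv2
import numpy as np

def salient_point_feature(gray, max_points=32):
    """Sketch: pick salient points on edge pixels, join adjacent points with
    straight lines, and describe the shape by normalized distances between
    adjacent points and angles between consecutive segments."""
    # Edge map (Canny thresholds are illustrative, not from the paper).
    edges = cv2.Canny(gray, 100, 200)
    ys, xs = np.nonzero(edges)
    if len(xs) < 3:
        return np.zeros(2 * max_points)

    # Salient points: edge pixels with the strongest gradient magnitude
    # stand in for the paper's pixel-value-based selection.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)[ys, xs]
    order = np.argsort(-mag)[:max_points]
    pts = np.stack([xs[order], ys[order]], axis=1).astype(np.float32)

    # Order points by angle around their centroid so "adjacent" is well defined.
    centroid = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    pts = pts[np.argsort(ang)]

    # Distances between adjacent points, normalized by their sum for scale
    # invariance; turning angles between consecutive segments are unaffected
    # by translation and rotation.
    diffs = np.roll(pts, -1, axis=0) - pts
    dists = np.linalg.norm(diffs, axis=1)
    dists /= dists.sum() + 1e-9
    seg_angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    turn = np.diff(np.concatenate([seg_angles, seg_angles[:1]]))

    feat = np.concatenate([dists, turn])
    return np.pad(feat, (0, 2 * max_points - len(feat)))

Comparing two such fixed-length vectors (e.g., by Euclidean distance) is cheap, which is why this holistic layer can prune most of the database before the slower Harris-based matching in the third layer.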
KEYWORDS: 3D modeling, Visualization, 3D image processing, Image registration, 3D image reconstruction, Atomic force microscopy, Cameras, Databases, Image restoration, Data modeling
Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because of the easy availability of the necessary equipment. After an image is captured and queried against an image database, the best-matching image is returned together with navigation information. By additionally allowing camera pose estimation, an image-based localization system built on a Structure-from-Motion (SfM) reconstruction can achieve higher accuracy than methods that search a 2D image database. However, this emerging technique has so far been applied only to outdoor environments. In this paper, we introduce the 3D-SfM-model-based image-based localization system to the indoor localization task. We capture images of the indoor environment and reconstruct a 3D model. For localization, we match images captured by a mobile phone against the reconstructed 3D model. In this process, we use visual words and approximate nearest neighbor methods to accelerate finding the query features' correspondences; within each visual word, we conduct a linear search for correspondences. The experiments show that the image-based localization method based on the 3D SfM model gives good localization results in terms of both accuracy and speed.
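The correspondence search described above can be sketched as follows. This is a hedged illustration, not the paper's code: the class name, vocabulary, and descriptor layout are assumptions, and the exhaustive word assignment stands in for an approximate nearest neighbor structure (e.g., a kd-tree over the vocabulary); only the visual-word bucketing with linear search inside each word reflects the abstract.

import numpy as np

class VisualWordIndex:
    """Sketch of 2D-to-3D matching through visual words: descriptors of the
    SfM model's 3D points are quantized into words; a query descriptor is
    assigned to its word and then matched by linear search inside that word."""

    def __init__(self, vocabulary, point_descriptors, point_ids):
        # vocabulary: (K, D) word centers; point_descriptors: (N, D) model
        # descriptors; point_ids: (N,) id of the 3D point each one belongs to.
        self.vocabulary = vocabulary
        self.buckets = {}
        for w, desc, pid in zip(self._assign(point_descriptors),
                                point_descriptors, point_ids):
            self.buckets.setdefault(w, []).append((desc, pid))

    def _assign(self, descs):
        # Exhaustive assignment stands in for the ANN vocabulary lookup
        # that keeps word assignment fast in practice.
        d = np.linalg.norm(descs[:, None, :] - self.vocabulary[None, :, :], axis=2)
        return d.argmin(axis=1)

    def match(self, query_desc, ratio=0.8):
        """Return the 3D point id matched to one query descriptor, or None."""
        word = self._assign(query_desc[None, :])[0]
        candidates = self.buckets.get(word, [])
        if len(candidates) < 2:
            return None
        # Linear search within the visual word, with a ratio test to reject
        # ambiguous matches.
        dists = [(np.linalg.norm(query_desc - d), pid) for d, pid in candidates]
        dists.sort(key=lambda t: t[0])
        (d1, pid1), (d2, _) = dists[0], dists[1]
        return pid1 if d1 < ratio * d2 else None

The resulting 2D-3D correspondences would then feed a standard pose solver (e.g., PnP with RANSAC) to recover the camera pose inside the reconstructed model.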