The relationship between cellular geometry and cellular state and function is apparent but not yet completely understood. Precise characterization of cellular state is important in many fields, from pathology to synthetic biology. High-content, high-throughput microscopy is now more accessible than ever, enabling the collection of large volumes of cellular images. Such data cannot feasibly be analyzed manually and requires efficient computational algorithms for cell detection, segmentation, and tracking. Building high-quality algorithms, in turn, requires annotation, a repetitive and time-consuming task on which medical professionals and researchers spend considerable effort. Because experts' time is valuable and should be used effectively, our hypothesis is that active deep learning can shoulder some of the burden that researchers face in their everyday work. In this paper, we focus specifically on the problem of cellular segmentation, which we approach as a classification task: each pixel in the image is classified according to whether the patch around it lies in the interior, on the boundary, or in the exterior of a cell. Deep convolutional neural networks (CNNs) perform the classification, and active learning reduces the annotation burden: uncertainty sampling, a popular active learning framework, is used in conjunction with the CNN to segment the cells in the image. Three datasets of mammalian nuclei and cytoplasm are used for this work. We show that active deep learning significantly reduces the number of training samples required and also improves the quality of the segmentation.
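The uncertainty-sampling loop described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes we already have the classifier's softmax outputs over the three patch classes (interior, boundary, exterior) and simply ranks unlabeled patches by predictive entropy to choose which ones an expert should annotate next.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Predictive entropy per sample; probs has shape (n_samples, n_classes)."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_queries(probs, k):
    """Indices of the k most uncertain samples to send for expert annotation."""
    return np.argsort(-entropy_uncertainty(probs))[:k]

# toy softmax outputs for 4 patches over 3 classes (interior, boundary, exterior)
probs = np.array([
    [0.98, 0.01, 0.01],   # confident interior
    [0.34, 0.33, 0.33],   # highly uncertain
    [0.10, 0.80, 0.10],   # fairly confident boundary
    [0.50, 0.25, 0.25],   # moderately uncertain
])
print(select_queries(probs, 2))  # → [1 3], the two most ambiguous patches
```

In a full active learning loop, the selected patches would be labeled, added to the training set, and the CNN retrained before the next round of queries.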
Plankton form the base of the aquatic food chain. Microscopic phytoplankton account for about 50% of all photosynthesis on Earth, fixing roughly 50 billion tons of carbon, or about 125 billion tonnes of sugar, each year [1]. Plankton are also the food source for most species of fish and therefore the backbone of the aquatic environment, so monitoring plankton is paramount for inferring potentially dangerous changes to the ecosystem. In this work we use a collection of plankton species, extracted from a large dataset of images from the Woods Hole Oceanographic Institution (WHOI), to establish a basic set of morphological features for supporting the use of plankton as a biosensor. Using a perturbation-detection approach, we show that it is possible to detect deviations from the average feature space of each plankton species, deviations that we propose could be related to environmental threats or perturbations. Such an approach can open the way for the development of an automatic Artificial Intelligence (AI) based system for using plankton as a biosensor.
Changes in the morphology and swimming dynamics of plankton exposed to toxic chemicals are studied using a new paradigm of image acquisition and computer vision. Single-cell ciliates, Stentor coeruleus, each enclosed in a drop of water, provide a means to automatically deposit many individual samples on a flat surface. Chemicals of interest are automatically added to each drop while the dynamical and morphological changes are captured with an optical microscope. Using computer vision techniques, we analyze the motion trajectory of each plankton sample, along with its shape information, quantifying the sub-lethal impact of chemicals on plankton health. The system enables large-scale screening of hundreds of chemicals of environmental interest that may make their way into water habitats.
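Two common summary statistics for the motion-trajectory analysis mentioned above are mean speed and track straightness (net displacement divided by path length). The function below is a generic sketch of such metrics, assuming centroid tracks from the vision pipeline; the paper's actual feature set may differ.

```python
import numpy as np

def trajectory_metrics(xy, dt=1.0):
    """Mean speed and straightness (net displacement / path length) of a track.
    xy: (n_frames, 2) array of centroid positions; dt: frame interval."""
    steps = np.diff(xy, axis=0)              # per-frame displacement vectors
    step_len = np.linalg.norm(steps, axis=1)
    path_len = step_len.sum()
    net = np.linalg.norm(xy[-1] - xy[0])     # start-to-end displacement
    mean_speed = step_len.mean() / dt
    straightness = net / path_len if path_len > 0 else 0.0
    return mean_speed, straightness

# a straight track vs. a back-and-forth track with identical step lengths
straight = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
wiggle   = np.array([[0, 0], [1, 0], [0, 0], [1, 0]], float)
print(trajectory_metrics(straight))  # → (1.0, 1.0)
print(trajectory_metrics(wiggle))    # → (1.0, 0.333...)
```

Both tracks have the same mean speed, but the erratic one has much lower straightness, the kind of sub-lethal behavioral change that chemical exposure can induce.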
Biologists use optical microscopes to study plankton in the lab, but their size, complexity, and cost make widespread deployment of microscopes in lakes and oceans challenging. Monitoring the morphology, behavior, and distribution of plankton in situ is essential, as they are excellent indicators of marine environment health and provide a majority of Earth's oxygen and carbon sequestration. Direct in-line holographic microscopy (DIHM) eliminates many of these obstacles, but image reconstruction is computationally intensive and produces monochromatic images. By using one laser and one white LED, it is possible to obtain the 3D location of plankton by triangulation, limiting holographic reconstruction to only the voxels occupied by the plankton and reducing computation by several orders of magnitude. The color information from the white LED assists in the classification of plankton, as phytoplankton contain green-colored chlorophyll. The reconstructed plankton images are rendered in a 3D interactive environment, viewable from a browser, providing the user the experience of observing plankton from inside a drop of water.
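The triangulation step can be illustrated with a simplified 1D similar-triangles model; this is a hypothetical geometry for intuition, not the paper's calibration procedure. Two point sources (laser and LED) a baseline apart cast two shadows of the same object on the sensor, and the shadow separation encodes the object's height above the sensor, which is the depth at which holographic reconstruction needs to be performed.

```python
def object_height(d, b, H):
    """Height z of an object above the sensor plane, given the separation d
    between the two shadows it casts under two point sources a baseline b
    apart at height H. Similar triangles give d = b*z / (H - z), hence:"""
    return d * H / (b + d)

# an object 20 units above the sensor, with sources 10 apart at height 100,
# casts shadows separated by b*z/(H - z) = 10*20/80 = 2.5
print(object_height(2.5, b=10.0, H=100.0))  # → 20.0
```

Once the depth is known, reconstruction can be restricted to a small slab of voxels around it instead of the full volume, which is where the orders-of-magnitude savings come from.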