We present a computational method, termed Wasserstein-induced flux (WIF), to robustly quantify the accuracy of individual localizations within a single-molecule localization microscopy (SMLM) dataset without ground-truth knowledge of the sample. WIF relies on the observation that accurate localizations are stable with respect to an arbitrary computational perturbation. Inspired by optimal transport theory, we measure the stability of individual localizations and develop an efficient optimization algorithm to compute WIF. We demonstrate the advantage of WIF in accurately quantifying imaging artifacts in high-density reconstruction of a tubulin network. WIF represents an advance in quantifying systematic errors with unknown and complex distributions, which could improve a variety of downstream quantitative analyses that rely upon accurate and precise imaging. Furthermore, thanks to its formulation as layers of simple analytical operations, WIF can be used as a loss function for optimizing various computational imaging models and algorithms even without training data.
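To make the stability idea concrete, here is a minimal sketch, not the published WIF algorithm, of its optimal-transport ingredient: the Wasserstein distance between a set of localizations and a perturbed re-estimate is small when the localizations are stable. The 1D localization data and perturbation scales below are illustrative assumptions.

```python
# Minimal sketch of measuring localization stability with a Wasserstein
# distance. This is NOT the authors' WIF algorithm; it only illustrates
# quantifying stability under a computational perturbation.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical 1D localizations (nm) and two perturbed re-estimates.
locs = rng.normal(loc=500.0, scale=20.0, size=200)
stable = locs + rng.normal(scale=2.0, size=locs.size)    # accurate case
unstable = locs + rng.normal(scale=40.0, size=locs.size) # artifact case

# Accurate localizations barely move, so the transport cost stays small.
print("stable   W1 =", wasserstein_distance(locs, stable))
print("unstable W1 =", wasserstein_distance(locs, unstable))
```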
Existing radar algorithms assume stationary statistical characteristics for the environment/clutter. In practical scenarios, the statistical characteristics of the clutter can change dynamically depending on where the radar is operating. Non-stationarity in the statistical characteristics of the clutter may degrade the radar performance. Cognitive radar, which can sense changes in the clutter statistics, learn the new statistical characteristics, and adapt to these changes, has been proposed to overcome these shortcomings. We have recently developed techniques for detecting statistical changes and learning the new clutter distribution for cognitive radar. In this work, we extend the learning component. More specifically, in our previous work we developed a sparse-recovery-based clutter-distribution identification method to learn the new clutter distribution after a change in the clutter statistics is detected. In that method, we built a dictionary of clutter distributions and used it in orthogonal matching pursuit to perform sparse recovery of the clutter distribution, assuming that the dictionary includes the new distribution. In this work, we propose a hypothesis-testing-based approach to detect whether the new clutter distribution is included in the dictionary, and we suggest a method to dynamically update the dictionary. We envision that the successful outcomes of this work will be highly relevant to the adaptive learning and cognitive augmentation of radar systems used in remotely piloted vehicles for surveillance and reconnaissance operations.
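A minimal sketch of this pipeline follows: a dictionary whose atoms are candidate clutter densities, orthogonal matching pursuit to pick the active atom, and a crude residual test for out-of-dictionary data. The distribution choices and the threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: sparse recovery of a clutter distribution via OMP over a
# dictionary of candidate densities, plus a residual-based test for
# whether the observed distribution is in the dictionary at all.
import numpy as np
from scipy import stats
from sklearn.linear_model import OrthogonalMatchingPursuit

grid = np.linspace(0, 10, 256)

# Dictionary: each column is the pdf of a candidate clutter amplitude model.
D = np.column_stack([
    stats.rayleigh.pdf(grid, scale=1.0),
    stats.weibull_min.pdf(grid, c=1.5, scale=1.2),
    stats.lognorm.pdf(grid, s=0.5, scale=1.0),
])

# "New" clutter data drawn from one dictionary atom (Rayleigh).
samples = stats.rayleigh.rvs(scale=1.0, size=5000, random_state=0)
y = stats.gaussian_kde(samples)(grid)  # KDE of the observed clutter

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=1).fit(D, y)
residual = np.linalg.norm(y - omp.predict(D)) / np.linalg.norm(y)
print("selected atom:", np.nonzero(omp.coef_)[0], "residual:", residual)

# Illustrative hypothesis test: a large residual suggests the new
# distribution is not in the dictionary, triggering a dictionary update.
if residual > 0.1:  # threshold is an assumption, not from the paper
    print("out-of-dictionary: append KDE of new data as a new atom")
```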
A cognitive radar framework is being developed to dynamically detect changes in the clutter characteristics and to adapt to these changes by identifying the new clutter distribution. In our previous work, we presented a sparse-recovery-based clutter identification technique in which each column of the dictionary represents a specific distribution. More specifically, calibration radar clutter data corresponding to a specific distribution are transformed into a probability distribution through kernel density estimation. When a new batch of radar data arrives, it is transformed into a distribution through the same kernel density estimation method, and its distribution is identified through sparse recovery. In this paper, we extend our previous work by considering different kernels and kernel parameters for sparse-recovery-based clutter identification, and we present the corresponding numerical results. The impact of different kernels and kernel parameters is analyzed by comparing the identification accuracy in each scenario.
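The kernel/bandwidth comparison step could look like the sketch below, where the same clutter samples are turned into densities under different kernels and parameters. The specific kernels, bandwidths, and the L1 accuracy metric are illustrative assumptions.

```python
# Sketch: estimating the clutter density with different kernels and
# bandwidths, as a stand-in for the comparison step before sparse recovery.
import numpy as np
from scipy import stats
from sklearn.neighbors import KernelDensity

grid = np.linspace(0, 10, 256).reshape(-1, 1)
samples = stats.rayleigh.rvs(scale=1.0, size=2000,
                             random_state=1).reshape(-1, 1)

for kernel in ("gaussian", "epanechnikov", "tophat"):
    for bw in (0.1, 0.3, 0.6):
        kde = KernelDensity(kernel=kernel, bandwidth=bw).fit(samples)
        pdf = np.exp(kde.score_samples(grid))
        # In the paper's pipeline this density would be matched against
        # the dictionary; here we just report its distance to the truth.
        err = np.trapz(np.abs(pdf - stats.rayleigh.pdf(grid.ravel())),
                       grid.ravel())
        print(f"{kernel:>12} bw={bw}: L1 error {err:.3f}")
```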
KEYWORDS: Molecules, Point spread functions, Statistical analysis, Photodetectors, Deconvolution, 3D image processing, 3D modeling, Microscopy, Super resolution microscopy, Algorithms
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables accurate and precise joint estimation of the 3D location and photon counts of SMs using various PSFs under conditions of high molecular density and low SBR.
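As a toy stand-in for the joint estimation idea (RoSE itself handles engineered 3D PSFs and high molecular density), the sketch below fits a single emitter's position, photon count, and background by Poisson maximum likelihood with an assumed isotropic Gaussian PSF.

```python
# Sketch: joint position/photon-count estimation for ONE molecule via
# Poisson maximum likelihood with a simple Gaussian PSF (an assumption;
# this is not the RoSE algorithm).
import numpy as np
from scipy.optimize import minimize

def psf(x0, y0, sigma, grid):
    xx, yy = grid
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return g / g.sum()

n = 15
grid = np.meshgrid(np.arange(n), np.arange(n))
rng = np.random.default_rng(2)
truth = (7.3, 6.8, 400.0, 2.0)          # x, y, photons, background/pixel
lam = truth[2] * psf(truth[0], truth[1], 1.3, grid) + truth[3]
img = rng.poisson(lam)                  # shot-noise-limited camera frame

def nll(p):                             # negative Poisson log-likelihood
    x0, y0, N, b = p
    mu = np.clip(N * psf(x0, y0, 1.3, grid) + b, 1e-9, None)
    return (mu - img * np.log(mu)).sum()

fit = minimize(nll, x0=[7.0, 7.0, 300.0, 1.0], method="Nelder-Mead")
print("estimated x, y, photons, bg:", fit.x)
```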
Most existing radar algorithms are developed under the assumption that the environment, i.e., the clutter, is known and stationary. In practice, however, the characteristics of the clutter can vary enormously in time depending on the operational scenario. If unaccounted for, these nonstationary variations may drastically degrade radar performance. It is essential that radar systems dynamically detect changes in the environment and adapt to these changes by learning the new statistical characteristics of the environment. In this paper, we employ sparse recovery for clutter identification; specifically, we identify the statistical profile that the clutter follows. We use Monte Carlo simulations to generate and test clutter data drawn from various distributions.
Image interpolation and denoising are important techniques in image processing. Recently, there has been a growing interest in the use of Gaussian process (GP) regression for interpolation and denoising of image data. However, exact GP regression suffers from O(N^3) runtime for data size N, making it intractable for image data. Our GP-grid algorithm reduces the runtime complexity of GP regression from O(N^3) to O(N^(3/2)). We provide a comprehensive mathematical model as well as experimental results of the GP interpolation performance for a division-of-focal-plane polarimeter. The GP interpolation method outperforms the commonly used bilinear interpolation method for polarimeters.
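For orientation, the sketch below runs the dense O(N^3) GP-regression baseline that GP-grid accelerates, interpolating sparsely sampled pixel intensities along one image row. The RBF kernel, noise level, and synthetic signal are illustrative assumptions.

```python
# Sketch: GP-regression interpolation of sparsely observed pixels.
# This dense solve is the O(N^3) baseline; GP-grid exploits the grid
# structure of image data to cut that cost.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * x)                # stand-in image row
idx = rng.choice(x.size, size=50, replace=False)  # observed pixels
y = signal[idx] + rng.normal(scale=0.05, size=idx.size)

gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.01))
gp.fit(x[idx, None], y)
interp = gp.predict(x[:, None])                   # denoised interpolation
print("RMSE:", np.sqrt(np.mean((interp - signal)**2)))
```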
We propose an analytical framework to build a microfluidic microsphere-trap array device that enables simultaneous, efficient, and accurate screening of multiple biological targets in a single microfluidic channel. By optimizing the traps' geometric parameters, the trap arrays in the channel of the device can immobilize microspheres of different sizes at different regions via a hydrodynamically engineered trapping mechanism. Different biomolecules can be captured by the ligands on the surfaces of microspheres of different sizes. They are thus detected according to the microspheres' positions (position encoding), which simplifies screening and avoids target identification errors. To demonstrate the proposition, we build a device for simultaneous detection of two target types by trapping microspheres of two sizes. We evaluate the device performance using finite element fluid dynamics simulations and microsphere-trapping experiments. These results validate that the device efficiently achieves position encoding of the two sizes of microspheres with few fluidic errors, suggesting that our framework can be used to build devices for simultaneous detection of more targets. We also envision utilizing the device to separate, sort, or enumerate cells, such as circulating tumor cells and blood cells, based on cell size and deformability. The device is therefore promising as a cost-effective, point-of-care, miniaturized disease diagnostic tool.
Microsphere arrays can be used to effectively detect, identify, and quantify biological targets, such as mRNAs, proteins, antibodies, and cells. In this work, we design a microfluidic microsphere-trap array device that enables simultaneous, efficient, and accurate screening of multiple targets on a single platform. Different types of targets are captured on the surfaces of microspheres of different sizes. By optimizing the geometric parameters of the traps, the trap arrays in this device can immobilize microspheres of different sizes at different regions with microfluidic hydrodynamic trapping. The targets are thus detected according to the microspheres' positions (position encoding), which simplifies screening and avoids errors in target identification. We validate the design using finite element fluid dynamics simulations in COMSOL Multiphysics with microspheres of two different sizes. We also performed preliminary microsphere-trapping experiments on a fabricated device using microspheres of one size. Our results demonstrate that the proposed device can achieve position encoding of the microspheres with few fluidic errors. This device is promising for simultaneous detection of multiple targets and could become a cheap and fast disease diagnostic tool.
We build a microfluidic trap-based microsphere array device. In the device, we design a novel geometric structure of the trap array and employ the hydrodynamic trapping mechanism to immobilize the microspheres. We develop a comprehensive and robust framework to optimize the values of the geometric parameters to maximize the microsphere arrays’ packing density. We also simultaneously optimize multiple criteria, such as efficiently immobilizing a single microsphere in each trap, effectively eliminating fluidic errors such as channel clogging and multiple microspheres in a single trap, minimizing errors in subsequent imaging experiments, and easily recovering targets. Microsphere-trapping experiments have been performed using the optimized device and a device with un-optimized geometric parameters. These experiments demonstrate easy control of the transportation and manipulation of the microspheres in the optimized device. They also show that the optimized device greatly outperforms the un-optimized one.
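One way to see how geometry drives the trapping design is the standard lumped hydrodynamic model: an arriving microsphere follows the parallel path with the higher flow rate, so the trap gap is sized so that its flow exceeds the bypass flow when the trap is empty. The channel dimensions below are hypothetical and the rectangular-channel resistance formula is the usual low-aspect-ratio approximation, not a value or rule taken from the paper.

```python
# Sketch: trap vs. bypass flow-rate ratio from lumped hydraulic
# resistances of rectangular microchannels (valid for h < w).
import numpy as np

mu = 1e-3  # viscosity of water, Pa*s

def resistance(L, w, h):
    """Hydraulic resistance of a rectangular channel (h < w)."""
    return 12 * mu * L / (w * h**3 * (1 - 0.63 * h / w))

# Hypothetical geometry (meters): short gap through the trap vs. a
# longer loop around it, same channel depth.
R_trap = resistance(L=20e-6, w=10e-6, h=6e-6)     # gap through the trap
R_bypass = resistance(L=200e-6, w=20e-6, h=6e-6)  # loop around the trap

# Equal pressure drop across the two parallel paths, so the flow ratio
# is the inverse resistance ratio; > 1 means the sphere enters the trap.
print("Q_trap / Q_bypass =", R_bypass / R_trap)
```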
We propose an interpolation algorithm for Division-of-Focal-Plane (DoFP) polarimeters based on the correlation between neighboring pixels. DoFP polarimeters monolithically integrate pixelated nanowire polarization filters with an array of imaging elements. DoFP sensors have been realized in the visible and near-infrared regimes. The advantages of DoFP sensors are twofold: they capture polarization information at every frame, and they are compact and robust. The main disadvantage is the loss of spatial resolution due to the super-pixel sampling paradigm at the focal plane. These sensors produce four low-resolution images, each recorded through a linear polarization filter offset by 45 degrees. Our algorithm addresses the loss of spatial resolution by utilizing the correlation information between the four polarization pixels in a super-pixel configuration. The method is based on the following premise: if one or more of the three polarization parameters (angle of polarization, degree of polarization, and intensity) are known for a spatial neighborhood, then the unknown pixel values for the 0° image, for example, can be computed from the intensity values of the 45°, 90°, and 135° images. The proposed algorithm is applied to select cases and found to outperform the bicubic spline interpolation method.
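The premise has a clean algebraic core: for ideal linear polarizers, I0 + I90 = I45 + I135 = S0 (the total intensity), so a missing 0° value can be predicted from its 45°/90°/135° neighbors. The sketch below verifies this on synthetic Stokes data; real DoFP interpolation additionally weights neighbors by local correlation.

```python
# Sketch: within a super-pixel, I0 + I90 = I45 + I135 = S0, so an
# unmeasured 0-degree pixel follows from the other three channels.
import numpy as np

def polarizer_intensities(aop_deg, dop, s0):
    """Ideal linear-polarizer intensities at 0/45/90/135 degrees."""
    aop = np.deg2rad(aop_deg)
    s1 = s0 * dop * np.cos(2 * aop)
    s2 = s0 * dop * np.sin(2 * aop)
    return 0.5 * np.array([s0 + s1, s0 + s2, s0 - s1, s0 - s2])

i0, i45, i90, i135 = polarizer_intensities(aop_deg=30.0, dop=0.6, s0=100.0)
i0_pred = i45 + i135 - i90   # estimate of the unmeasured 0-degree pixel
print(i0, i0_pred)           # identical for an ideal, locally uniform scene
```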
In this paper, we propose a novel method to solve the forward and inverse problems in diffuse optical tomography. Our forward solution is based on the diffusion approximation equation and is constructed using the Feynman-Kac formula with an interacting particle method. It can be implemented using the Monte Carlo (MC) method and thus provides great flexibility in modeling complex geometries. Unlike conventional MC approaches, however, it uses excursions of the photons' random walks and produces a transfer kernel, so that only one round of MC-based forward simulation (using an arbitrary known optical distribution) is required to obtain observations associated with different optical distributions. Based on these properties, we develop a perturbation-based method to solve the inverse problem in a discretized parameter space. We validate our methods using simulated 2D examples. We compare our forward solutions with those obtained using the finite element method and find good consistency. We solve the inverse problem using the maximum likelihood method with a greedy optimization approach. Numerical results show that if we start from multiple initial points in a constrained search space, our method can locate the abnormality correctly.
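The Feynman-Kac idea at the heart of the forward solver is that the solution of a diffusion-type equation at a point equals an expectation over random walks started there. The sketch below uses the simplest instance, a walk-on-grid estimator for Laplace's equation with Dirichlet data on a unit square; the paper's solver additionally handles absorption weights, excursions, and the reusable transfer kernel.

```python
# Sketch: Feynman-Kac / random-walk estimate of u solving Laplace's
# equation on the unit square with boundary data g(x, y) = x * y
# (harmonic, so the exact interior solution is also x * y).
import numpy as np

rng = np.random.default_rng(4)
h = 0.02                                  # grid step (assumption)
g = lambda x, y: x * y                    # boundary data
steps = np.array([(h, 0), (-h, 0), (0, h), (0, -h)])

def u_estimate(x, y, n_walks=2000):
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while 0 < px < 1 and 0 < py < 1:  # walk until boundary exit
            dx, dy = steps[rng.integers(4)]
            px, py = px + dx, py + dy
        total += g(np.clip(px, 0, 1), np.clip(py, 0, 1))
    return total / n_walks

print(u_estimate(0.5, 0.5))  # exact value at the center is 0.25
```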
We analyze the performance of a novel detector array for detecting and localizing particle-emitting sources. The array is spherically shaped and consists of multiple "eyelets," each having a conical shape with a lens on top and a particle-detector subarray inside. The array's configuration is inspired by and generalizes the biological compound eye: it has a global spherical shape and allows a large number of detectors in each eyelet. The array can be used to detect particles including photons (e.g., visible light, X-rays, or γ-rays), electrons, protons, neutrons, or α particles. We analyze the performance of the array by computing statistical Cramér-Rao bounds on the errors in estimating the direction of arrival (DOA) of the incident particles. In numerical examples, we first show the influence of the array parameters on its performance bound on the mean-square angular error (MSAE). We then optimize the array's configuration according to a min-max criterion, i.e., minimizing the worst-case lower bound on the MSAE. Finally, we introduce two estimators of the source direction using the proposed array and analyze their performance, showing that the performance bound is attainable in practice. Potential applications include artificial vision, astronomy, and security.
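The bound computation can be illustrated in one dimension: for independent Poisson counts whose means depend on the source angle, the Fisher information is I(theta) = sum_k (d lambda_k / d theta)^2 / lambda_k and the CRB is 1/I(theta). The detector-response model below is an illustrative assumption, not the paper's spherical compound-eye geometry.

```python
# Sketch: Cramér-Rao bound on a 1D direction-of-arrival estimate from
# Poisson particle counts, with a hypothetical detector response.
import numpy as np

det_angles = np.linspace(-np.pi / 2, np.pi / 2, 16)  # detector look angles

def rates(theta, kappa=8.0, flux=100.0):
    """Mean particle counts per detector for a source at angle theta."""
    return flux * np.exp(kappa * (np.cos(det_angles - theta) - 1))

def crb(theta, d=1e-5):
    lam = rates(theta)
    dlam = (rates(theta + d) - rates(theta - d)) / (2 * d)
    return 1.0 / np.sum(dlam**2 / lam)   # inverse Fisher information

print("CRB (rad^2) at broadside:", crb(0.0))
print("CRB at edge of field:", crb(1.2))  # worst case for min-max design
```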
Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion, using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and for estimating the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources from a small number of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity, and location). We compute a bound on the expected delay before false detection in order to set the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
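The sequential-detection step can be sketched with a simple CUSUM statistic for a shift in mean sensor concentration, with the threshold traded against the false-alarm rate. The Gaussian model, the assumed post-release mean, and the threshold are illustrative; the paper's detector handles unknown release time, intensity, and location.

```python
# Sketch: CUSUM sequential detection of a contaminant release as a
# mean shift in one sensor's readings (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(5)
mu0, mu1, sigma = 0.0, 0.5, 1.0          # pre/post-release means, noise
x = np.concatenate([rng.normal(mu0, sigma, 300),   # clean background
                    rng.normal(mu1, sigma, 100)])  # release at t = 300

# Per-sample log-likelihood ratio for the assumed mean shift.
llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)

s, stat = 0.0, []
for z in llr:
    s = max(0.0, s + z)                  # CUSUM recursion
    stat.append(s)

h = 8.0                                  # threshold from false-alarm budget
alarms = np.nonzero(np.array(stat) > h)[0]
print("first alarm at t =", alarms[0] if alarms.size else None)
```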
KEYWORDS: Sensors, Land mines, Mining, Chemical fiber sensors, Explosives, Detection and tracking algorithms, Diffusion, Algorithm development, Chemical analysis, Palladium
We develop methods for the automatic detection and localization of landmines using chemical sensor arrays and statistical signal processing techniques. The transport of explosive vapors emanating from buried landmines is modeled as a diffusion process in a two-layered system consisting of ground and air. The measurement and statistical models are derived by exploiting the associated concentration distribution. We derive a generalized likelihood ratio detector and evaluate its performance in terms of the probabilities of detection and false alarm. To determine the unknown location of a landmine, we derive a maximum likelihood estimation algorithm and evaluate its performance by computing the Cramér-Rao bound. The results are applied to the design of chemical sensor arrays satisfying criteria specified in terms of detection and estimation performance measures, and to optimally selecting the number and positions of sensors and the number of time samples. To illustrate the potential of the proposed techniques in a realistic demining scenario, we derive a moving-sensor algorithm in which the stationary sensor array is replaced by a single moving sensor. Numerical examples are given to demonstrate the applicability of our results.
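A generalized likelihood ratio detector for a signature of unknown amplitude in Gaussian noise reduces to a matched-filter energy statistic: the amplitude is replaced by its ML estimate, and the statistic is chi-squared with one degree of freedom under the null. The decaying concentration template below is an illustrative stand-in for the two-layer vapor-diffusion model.

```python
# Sketch: GLRT for a vapor-concentration template of unknown amplitude
# in white Gaussian noise (template and noise level are assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
t = np.linspace(0.1, 10, 100)
s = np.exp(-1.0 / t) / t          # assumed concentration template
sigma = 0.05

def glrt(x):
    a_hat = s @ x / (s @ s)               # ML amplitude estimate
    return (a_hat * s) @ x / sigma**2     # = (s@x)^2 / (s@s sigma^2)

x0 = rng.normal(0, sigma, t.size)             # no mine present
x1 = 0.05 * s + rng.normal(0, sigma, t.size)  # buried source present
thr = stats.chi2.ppf(0.99, df=1)              # threshold for P_FA = 0.01
print("H0 stat:", glrt(x0), "H1 stat:", glrt(x1), "threshold:", thr)
```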
A framework for temporal analysis of left ventricular (LV) endocardial wall motion is presented. This approach uses the technique of 2D comb filtering to model the periodic nature of cardiac motion. A method for flow vector computation is presented which defines a relationship between image-derived, shape-based correspondences and a more desirable, smoothly varying, set of correspondences. A recursive filter is then constructed which takes into consideration this relationship as well as knowledge of temporal trends. Experimental results for contours derived from cycles of actual cardiac magnetic resonance images are presented. Applications to the analysis of regional LV wall function are discussed.
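The comb-filtering idea can be shown on a synthetic periodic trajectory: a feedback comb y[n] = (1 - a) x[n] + a y[n - N] reinforces components at the cardiac period N and suppresses aperiodic noise. The 1D contour-point trajectory and gain below are illustrative; the paper applies the idea in 2D to LV wall correspondences.

```python
# Sketch: feedback comb filter smoothing a noisy periodic trajectory,
# a 1D stand-in for the paper's 2D comb filtering of cardiac motion.
import numpy as np

rng = np.random.default_rng(7)
N, a, cycles = 32, 0.7, 8                 # frames/cycle, feedback gain
n = np.arange(N * cycles)
clean = np.sin(2 * np.pi * n / N)         # periodic cardiac-like motion
x = clean + rng.normal(scale=0.3, size=n.size)

y = np.zeros_like(x)
for i in range(x.size):
    y[i] = (1 - a) * x[i] + (a * y[i - N] if i >= N else 0.0)

print("noise power in: ", np.var(x - clean))
print("noise power out (after settling):", np.var((y - clean)[3 * N:]))
```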