A proper spatial characterization of a laser beam profile is indisputably important for any laser-matter experiment, as well as for the protection of beamline optical elements. The method of ablation and desorption imprints provides a thorough beam profile analysis applicable to a broad range of photon energies. This method, however, often requires up to thousands of shots, which must then be analyzed manually. Here we present a method based on a deep learning image segmentation model, which is able to substitute the human element currently indispensable in this time-consuming ex situ post-processing. It is part of the AbloCAM project, a universal device for semi-automatic beam profile analysis.
We describe a method for analyzing geometrical properties of cell nuclei from phase contrast microscopy images. This is useful in drug discovery for quantifying the effect of candidate chemical compounds, bypassing the need for fluorescence imaging. Fluorescence images are then used only for training our nuclei segmentation, avoiding the need for time-consuming expert annotations. Geometry-based descriptors are calculated, aggregated, and fed into a classifier to distinguish the different types of chemical treatments. Drug treatment can be distinguished from no treatment with an accuracy better than 95% from fluorescence images and better than 77% from phase contrast images.
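To make the idea of geometry-based descriptors concrete, here is a minimal sketch (not the feature set used in the paper; the contour input and the choice of descriptors are illustrative assumptions) computing area, perimeter, and circularity of a nucleus outline:

```python
import math

def shape_descriptors(contour):
    """Geometry descriptors for a closed contour given as (x, y) vertices:
    area (shoelace formula), perimeter, and circularity 4*pi*A/P**2,
    which equals 1 for a perfect circle."""
    n = len(contour)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x0, y0 = contour[i]
        x1, y1 = contour[(i + 1) % n]
        area += x0 * y1 - x1 * y0          # shoelace term
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    circularity = 4.0 * math.pi * area / perimeter ** 2
    return area, perimeter, circularity

# A regular 64-gon approximating the unit circle: circularity close to 1.
poly = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
        for k in range(64)]
area, perimeter, circularity = shape_descriptors(poly)
```

Such per-nucleus values would then be aggregated over an image (e.g., as means or histograms) before classification.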
Infertility is becoming an issue for an increasing number of couples. The most common solution, in vitro fertilization, requires embryologists to carefully examine light microscopy images of human oocytes to determine their developmental potential. We propose an automatic system to improve the speed, repeatability, and accuracy of this process. We first localize individual oocytes and identify their principal components using CNN (U-Net) segmentation. Next, we calculate several descriptors based on geometry and texture. The final step is an SVM classifier. Both the segmentation and the classification training are based on expert annotations. The presented approach leads to a classification accuracy of 70%.
We describe an automatic pipeline for processing ultrasound images of the carotid artery, consisting of image type classification, carotid artery localization, segmentation, feature descriptor extraction, and plaque stability classification. The aim is to distinguish between stable (safe) and progressive (dangerous) atherosclerotic plaques from a single standard transversal or longitudinal B-mode ultrasound examination. The processing pipeline uses modern deep CNN techniques, while the descriptors are based on geometry and on wavelets to characterize texture. When testing on a large dataset of 28718 images from 413 patients, we found that our automatically calculated descriptors are statistically significantly different between the two classes with a very high significance level, p < 10⁻³. We have also created a random forest-based classifier to distinguish between progressive and stable plaques, although its accuracy remains low (61–62%).
For quick, efficient, and accurate alignment and characterization of focused short-wavelength (i.e., extreme ultraviolet, soft x-ray, and x-ray) laser beams directly in vacuum interaction chambers, a dedicated instrument has to be developed and implemented. AbloCAM is intended to be such a tool, examining ablation imprints of the beam in a suitable material without breaking vacuum and without the need to remove exposed samples from the chamber for ex situ analysis. The first steps we made in this direction can be found in ref. [1]. The fluence scan technique (F-scan method; for details see [2,3]), proven at several FEL facilities, e.g., FLASH (Free-electron LASer in Hamburg) and LCLS (Linac Coherent Light Source), makes it possible to characterize the beam using just the outer contour of the damage pattern; it is not necessary to measure a crater profile for the beam reconstruction. Not only the lateral but also the longitudinal distribution of irradiance in the focused beam can be determined by imprinting (z-scan method [4]). Technically, the AbloCAM tool consists of a vacuum-compatible motorized positioning system executing a series of well-defined irradiations of a chosen slab target according to algorithms fulfilling the requirements of the combined F(z)-scan procedure. Damage patterns formed in this way are then visualized in situ by means of a Nomarski (DIC, differential interference contrast) microscope equipped with software that detects and processes the pattern outer contours. A feedback loop is established between the positioning and inspection components of the tool. The software helps to align and characterize any focused beam in the interaction chamber semi-automatically in a reasonable time.
We address the task of automatic detection of lesions caused by multiple myeloma (MM) in femurs or other long bones from CT data. Such detection is already an important part of the multiple myeloma diagnosis and staging. However, it is so far performed mostly manually, which is very time-consuming. We formulate the detection as a multiple instance learning (MIL) problem, where instances are grouped into bags and only bag labels are available. In our case, instances are regions in the image and bags correspond to images. This has the advantage of requiring only subject-level annotation (ground truth), which is much easier to get than voxel-level manual segmentation. We consider a generalization of the standard MIL formulation where we introduce a threshold on the number of required positive instances in positive bags. This corresponds better to the classification procedure used by the radiology experts and is more robust with respect to false positive instances. We extend several existing MIL algorithms to solve the generalized case by estimating the threshold during learning. We compare the proposed methods with the baseline method on a dataset of 220 subjects. We show that the generalized MIL formulation outperforms standard MIL methods for this task. For the task of distinguishing between healthy controls and MM patients with infiltrations, our best method makes almost no mistakes with a mean AUC of 0.982 and F1 = 0.965. We outperform the baseline method significantly in all conducted experiments.
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
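As a toy illustration of growing a region over superpixels by a local similarity rule (the paper's full energy additionally includes learned shape terms, multiple objects, and a graph cut solver, all of which are omitted here; the data structures are assumptions):

```python
def greedy_grow(seed, adjacency, intensity, tol=0.1):
    """Toy greedy region growing on a superpixel graph.

    `adjacency` maps superpixel id -> set of neighbour ids; `intensity`
    maps id -> mean intensity.  Starting from `seed`, repeatedly absorb
    the frontier superpixel whose intensity is closest to the current
    region mean, while the difference stays below `tol`."""
    region = {seed}
    total = intensity[seed]
    while True:
        frontier = set().union(*(adjacency[r] for r in region)) - region
        if not frontier:
            break
        mean = total / len(region)
        best = min(frontier, key=lambda s: abs(intensity[s] - mean))
        if abs(intensity[best] - mean) > tol:
            break
        region.add(best)
        total += intensity[best]
    return region
```

Working on superpixels keeps the graph small, which is the speed-up argument made above.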
Image segmentation is widely used as an initial phase of many image analysis tasks. It is often advantageous to first group pixels into compact, edge-respecting superpixels, because these reduce the size of the segmentation problem and thus the segmentation time by orders of magnitude. In addition, features calculated from superpixel regions are more robust than features calculated from fixed pixel neighborhoods. We present a fast and general multiclass image segmentation method consisting of the following steps: (i) computation of superpixels; (ii) extraction of superpixel-based descriptors; (iii) calculating image-based class probabilities in a supervised or unsupervised manner; and (iv) regularized superpixel classification using graph cut. We apply this segmentation pipeline to five real-world medical imaging applications and compare the results with three baseline methods: pixelwise graph cut segmentation, supertexton-based segmentation, and classical superpixel-based segmentation. On all datasets, we outperform the baseline results. We also show that unsupervised segmentation is surprisingly efficient in many situations. Unsupervised segmentation provides similar results to the supervised method but does not require manually annotated training data, which is often expensive to obtain.
Periodic variations in patterns within a group of pixels provide important information about the surface of interest and can be used to identify objects or regions. Hence, a proper analysis can be applied to extract particular features according to specific image properties. Recently, texture analysis using orthogonal polynomials has gained attention, since polynomials characterize the pseudo-periodic behavior of textures through the projection of the pattern of interest onto a group of kernel functions. However, the maximum polynomial order is often linked to the size of the texture, which in many cases implies a complex calculation and introduces instability at higher orders, leading to computational errors. In this paper, we address this issue and explore a pre-processing stage to compute the optimal size of the window of analysis, called the "texel." We propose Haralick-based metrics to find the main oscillation period, such that it represents the fundamental texture and captures the minimum information sufficient for classification tasks. This procedure avoids the computation of large polynomials and substantially reduces the feature space, with small classification errors. Our proposal is also compared against different fixed-size windows. We also show similarities between full-image representations and the texel-based ones in terms of visual structures and feature vectors, using two different orthogonal bases: Tchebichef and Hermite polynomials. Finally, we assess the performance of the proposal using well-known texture databases from the literature.
In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in this quality control. The individual islet volume distribution is also of interest, since it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The input to the volume estimation methods consists of segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have a spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
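A minimal sketch of two of the estimators discussed above, under the stated assumptions: the spherical-shape estimate derives a diameter from the 2D profile area, and the nucleator averages cubed ray lengths measured from a reference point along isotropic directions (function and argument names are illustrative):

```python
import math

def sphere_volume_from_area(profile_area):
    """Spherical-shape assumption: recover the diameter from the 2D
    profile area A (d = 2*sqrt(A/pi)), then return the sphere volume."""
    d = 2.0 * math.sqrt(profile_area / math.pi)
    return math.pi * d ** 3 / 6.0

def nucleator_volume(ray_lengths):
    """Nucleator estimate: V ~ (4*pi/3) * mean(l**3), where l are the
    distances from a reference point to the boundary along isotropic
    rays.  No shape assumption; unbiased for isotropic sections."""
    cubes = [l ** 3 for l in ray_lengths]
    return (4.0 * math.pi / 3.0) * sum(cubes) / len(cubes)
```

For a sphere sampled from its centre, every ray has the same length, and both estimators return the exact volume; they diverge for irregular shapes.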
This paper deals with the separation of merged Langerhans islets in segmentations in order to evaluate a correct histogram of islet diameters. The distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The testing segmentations are then over-segmented using the watershed transform, and the most probable merging of islet fragments back together is found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).
This paper deals with color normalization of microscopy images of Langerhans islets in order to increase robustness of the islet segmentation to illumination changes. The main application is automatic quantitative evaluation of the islet parameters, useful for determining the feasibility of islet transplantation in diabetes. First, background illumination inhomogeneity is compensated and a preliminary foreground/background segmentation is performed. The color normalization itself is done in either lαβ or logarithmic RGB color spaces, by comparison with a reference image. The color-normalized images are segmented using color-based features and pixel-wise logistic regression, trained on manually labeled images. Finally, relevant statistics such as the total islet area are evaluated in order to determine the success likelihood of the transplantation.
We present an algorithm to segment a set of parallel, intertwined, and bifurcating fibers from 3D images, targeted at the identification of neuronal fibers in very large sets of 3D confocal microscopy images. The method consists of preprocessing, local calculation of fiber probabilities, seed detection, tracking by particle filtering, global supervised seed clustering, and final voxel segmentation. The preprocessing uses a novel random local probability filtering (RLPF). The fiber probability computation is performed by means of an SVM using steerable filters and the RLPF outputs as features. The global segmentation is solved by discrete optimization. The combination of global and local approaches makes the segmentation robust, yet the individual data blocks can be processed sequentially, limiting memory consumption. The method is automatic, but efficient manual interactions are possible if needed. The method is validated on the Neuromuscular Projection Fibers dataset from the DIADEM Challenge. On the first 15 blocks, our method has a 99.4% detection rate. We also compare our segmentation results to a state-of-the-art method. On average, the performance of our method is either higher than or equivalent to that of the state-of-the-art method, but fewer user interactions are needed in our approach.
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with the segmentation of microscopy images of Langerhans islets and the evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and an SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We conclude that the accuracy of the presented fully automatic algorithm is comparable with that of the medical experts.
We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. Discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide registration quality similar to standard methods at a fraction of the computational cost.
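The similarity criterion can be illustrated by a small function computing discrete mutual information between the class labels observed at the same sample locations in the two segmentations (an illustrative sketch, not the paper's implementation):

```python
import math
from collections import Counter

def discrete_mutual_information(labels_a, labels_b):
    """Discrete mutual information (in nats) between two equally long
    sequences of class labels, e.g. segmentation classes sampled at the
    same set of interface locations in both images."""
    n = len(labels_a)
    pa = Counter(labels_a)                 # marginal counts, image A
    pb = Counter(labels_b)                 # marginal counts, image B
    pab = Counter(zip(labels_a, labels_b)) # joint counts
    mi = 0.0
    for (a, b), c in pab.items():
        p_joint = c / n
        # p_joint * log(p_joint / (p_a * p_b)) with counts cancelled
        mi += p_joint * math.log(p_joint * n * n / (pa[a] * pb[b]))
    return mi
```

Identical label sequences maximize the criterion, while statistically independent ones give zero, which is what drives the alignment.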
We present an algorithm for geometric matching of graphs embedded in 2D or 3D space. It is applicable for registering any
graph-like structures appearing in biomedical images, such as blood vessels, pulmonary bronchi, nerve fibers, or dendritic
arbors. Our approach does not rely on the similarity of local appearance features, so it is suitable for multimodal registration
with a large difference in appearance. Unlike earlier methods, the algorithm uses edge shape, does not require an initial
pose estimate, can handle partial matches, and can cope with nonlinear deformations and topological differences.
The matching consists of two steps. First, we find an affine transform that roughly aligns the graphs by exploring the
set of all consistent correspondences between the nodes. This can be done at an acceptably low computational expense by
using parameter uncertainties for pruning, backtracking as needed. Parameter uncertainties are updated in a Kalman-like
scheme with each match.
In the second step we allow for a nonlinear part of the deformation, modeled as a Gaussian Process. Short sequences
of edges are grouped into superedges, which are then matched between graphs. This allows for topological differences.
A maximum consistent set of superedge matches is found using a dedicated branch-and-bound solver, which is over 100
times faster than a standard linear programming approach. Geometrical and topological consistency of candidate matches
is determined in a fast hierarchical manner.
We demonstrate the effectiveness of our technique at registering angiography and retinal fundus images, as well as
neural image stacks.
The Lung Test Images from Motol Environment (Lung TIME) is a new publicly available dataset of thoracic CT scans with manually annotated pulmonary nodules, larger than other publicly available datasets. Pulmonary nodules are lesions in the lungs which may indicate lung cancer, and their early detection significantly improves the survival rate of patients. Automatic nodule detection systems using CT scans are being developed to reduce physicians' load and to improve detection quality. Besides presenting our own nodule detection system, in this article we mainly address the problem of testing and comparing automatic nodule detection methods. Our publicly available dataset of 157 CT scans with 394 annotated nodules contains almost every nodule type (pleura-attached, vessel-attached, solitary, regular, irregular) with diameters of 2–10 mm, except ground-glass opacities (GGO). Annotation was done by consensus of two experienced radiologists. The data are in DICOM format; annotations are provided in an XML format compatible with the Lung Imaging Database Consortium (LIDC). Our computer-aided diagnosis (CAD) system is based on mathematical morphology and filtration with a subsequent classification step, using an Asymmetric AdaBoost classifier. The system was tested using the Lung TIME, LIDC, and ANODE09 databases. The performance was evaluated by cross-validation for Lung TIME and LIDC, and using the supplied evaluation procedure for ANODE09. The sensitivity at the chosen working point was 94.27% with 7.57 false positives per slice for the TIME and LIDC datasets combined, 94.03% with 5.46 FPs/slice for the Lung TIME, 89.62% with 12.03 FPs/slice for LIDC, and 78.68% with 4.61 FPs/slice when applied to ANODE09.
Our task is to segment bones from 3D CT and MRI images. The main application is the creation of 3D mesh models for finite element modeling. These surface and volume vector models can be used for further biomechanical processing and analysis. We selected a novel fast level set method because of its high computational efficiency, while preserving all the advantages of traditional level set methods. Unlike traditional level set methods, we are not solving partial differential equations (PDEs). Instead, the contours are represented by two sets of points, corresponding to the inner and outer edges of the object boundary. We have extended the original implementation to 3D, where the speed advantage over classical level set segmentation is even more pronounced. We can segment a 512×512×125 CT image in less than 20 s with this method, approximately two orders of magnitude faster than standard narrow band algorithms. Our experiments with real 3D CT and MRI images, presented in this paper, showed a high ability of the fast level set algorithm to solve complex segmentation problems.
Many cervical computer-aided diagnosis (CAD) methods rely on measuring gradual appearance changes on the cervix after the application of a contrast agent. Image registration has been used to ensure pixel correspondence to the same tissue location throughout the whole temporal sequence but, to date, there is no reliable means of testing its accuracy in compensating for patient and tissue movement.
We present an independent system that uses automatically extracted and matched features from a colposcopic image sequence to generate position landmarks. These landmarks may be used either to measure the accuracy of a registration method in aligning any pair of images from the colposcopic sequence, or as a cue for registration. The algorithm selects sets of matched features that extend through the whole image sequence, making it possible to locate, in a reliable and unbiased way, a tissue point throughout the whole sequence. Experiments on real colposcopy image sequences show that the approach is robust, reliable, and leads to geometrically coherent sets of landmarks that correspond to visually recognizable regions. We use the extracted landmarks to test the precision of some of the cervical registration algorithms previously presented in the literature.
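A registration-accuracy measure based on such landmarks can be as simple as the mean distance between the landmark positions mapped by the registration and their reference positions; the following sketch (function and argument names are illustrative assumptions) shows the idea:

```python
import math

def landmark_registration_error(mapped_points, reference_points):
    """Mean Euclidean distance between landmark positions produced by a
    registration method and their reference positions (a simple
    target-registration-error style measure)."""
    dists = [math.dist(p, q)
             for p, q in zip(mapped_points, reference_points)]
    return sum(dists) / len(dists)
```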
We present a computer-aided diagnosis (CAD) system to detect small (from 2 mm to around 10 mm) pulmonary nodules in helical CT scans. A pulmonary nodule is a small round (parenchymal) or worm-shaped (juxtapleural) lesion in the lungs; both types have greater radiodensity than the lung parenchyma. Lung nodules may indicate lung cancer, and their detection at an early stage improves the survival rate of patients. CT is considered the most accurate imaging modality for the detection of nodules. However, the large amount of data per examination makes interpretation difficult, which leads to nodules being overlooked by human radiologists. The presented CAD system is designed to help lower the number of omissions. Our system uses two different schemes to locate juxtapleural and parenchymal nodules. For juxtapleural nodules, morphological closing and thresholding are used to find nodule candidates. To locate non-pleural nodule candidates, a 3D blob detector uses multiscale filtration, and an ellipsoid model is fitted to the nodules. To decide which of the nodule candidates are in fact nodules, an additional classification step is applied, using linear and multi-threshold classifiers. The system was tested on 18 cases (4853 slices) with a total sensitivity of 96% at about 12 false positives per slice. The classification step reduces the number of false positives to 9 per slice without significantly decreasing sensitivity (89.6%).
KEYWORDS: Data modeling, Magnetoencephalography, Sensors, Magnetic resonance imaging, Electroencephalography, Signal to noise ratio, Brain, Functional magnetic resonance imaging, Statistical modeling, Neurons
Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However, the poor spatial resolution and small number of sensors do not permit the reconstruction of a general spatial activation pattern, and the low signal-to-noise ratio (SNR) makes accurate reconstruction of a time course challenging as well. We therefore propose to use constrained reconstruction, modeling the relevant part of the brain using a neural mass model: there is a small number of zones that are considered as entities, and neurons within a zone are assumed to be activated simultaneously. The location and spatial extent of the zones as well as the interzonal connection pattern can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and other anatomical and brain mapping observation techniques. The observation model is linear; its deterministic part is known from EEG/MEG forward modeling, and the statistics of the stochastic part can be estimated. The dynamics of the neural model are described by a moderate number of parameters that can be estimated from the recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have physiological meaning and their plausible range is known. Since the problem is highly nonlinear, a quasi-Newton optimization method with random sampling and automatic success evaluation is used. The actual connection topology can be identified from several possibilities. The method was tested on synthetic data as well as on real MEG somatosensory-evoked field (SEF) data.
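A toy stand-in for the optimization strategy mentioned above: random sampling of start points within plausible parameter bounds, each refined locally and the best result kept. The actual method uses a quasi-Newton scheme with automatic success evaluation; here the local refinement is replaced by simple shrinking random steps, and all names are illustrative:

```python
import random

def multistart_minimize(f, bounds, n_starts=20, n_steps=200, seed=0):
    """Random-multistart local search: draw random starting points
    inside `bounds` (the physiologically plausible parameter ranges),
    refine each with shrinking Gaussian steps, and keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        step = [(hi - lo) / 4 for lo, hi in bounds]
        for _ in range(n_steps):
            cand = [min(max(xi + rng.gauss(0, s), lo), hi)
                    for xi, s, (lo, hi) in zip(x, step, bounds)]
            fc = f(cand)
            if fc < fx:                      # accept improvements only
                x, fx = cand, fc
            else:                            # shrink the search scale
                step = [0.95 * s for s in step]
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Multistart addresses the multiple local minima of a highly nonlinear criterion; comparing the minima found under different connection topologies is what allows topology identification.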
KEYWORDS: Signal to noise ratio, Magnetic resonance imaging, Receivers, Reconstruction algorithms, Error analysis, In vivo imaging, Image quality, Computer programming, Image filtering, Computer simulations
Parallel MRI is a way to use multiple receiver coils with distinct spatial sensitivities to increase the speed of the MRI acquisition. The acquisition is sped up by undersampling in the phase-encoding direction, and the resulting data loss and aliasing are compensated for by the use of the additional information obtained from the several receiver coils.
The task is to reconstruct an unaliased image from a series of aliased images. We have proposed an algorithm called PROBER that takes advantage of the smoothness of the reconstruction transformation in space. B-spline functions are used to approximate the reconstruction transformation, and their coefficients are estimated at once by minimizing the total expected reconstruction error. This makes the reconstruction less sensitive to noise in the reference images and to areas without signal in the image. We show that this approach outperforms the SENSE and GRAPPA reconstruction methods for certain coil configurations.
In this article, we propose a further improvement, consisting of a continuous representation of the B-splines for evaluating the error, instead of the discretely sampled version. This solves the undersampling issues of the discrete B-spline representation and offers higher reconstruction quality, which has been confirmed by experiments. The method is compared with the discrete version of PROBER and with the commercially used algorithms GRAPPA and SENSE in terms of artifact suppression and reconstruction SNR.
Parallel MRI (pMRI) is a way to increase the speed of the MRI acquisition by combining data obtained simultaneously from several receiver coils with distinct spatial sensitivities. The measured data contain additional information about the position of the signal compared with data obtained by a standard, uniform-sensitivity coil. The idea is to speed up the acquisition by sampling more sparsely in k-space and to compensate for the data loss using the additional information obtained from the higher number of receiver coils. Most parallel reconstruction methods work in the image domain and estimate the reconstruction transformation independently in each pixel. We propose an algorithm that uses B-spline functions to approximate the reconstruction map, which reduces the number of parameters to estimate and makes the reconstruction faster and less sensitive to noise. The proposed method is tested on both phantom and in vivo images. The results are compared with commercial implementations of the GRAPPA and SENSE algorithms in terms of time complexity and quality of the reconstruction.
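For contrast with the per-pixel methods mentioned above, the classical SENSE unfolding step for a two-coil, twofold-accelerated scan reduces to solving a 2x2 linear system per pixel. A minimal sketch (real/complex handling and regularization omitted; this is the per-pixel baseline, not the proposed B-spline method):

```python
def sense_unfold(aliased, sens_pair):
    """Solve the 2x2 SENSE unfolding system for one pixel at
    acceleration R=2: each of two coils measures
    a_c = s_c[0]*rho0 + s_c[1]*rho1, where s_c are that coil's
    sensitivities at the two image locations folded onto each other.
    Returns the two unfolded pixel values (rho0, rho1)."""
    (s00, s01), (s10, s11) = sens_pair
    a0, a1 = aliased
    det = s00 * s11 - s01 * s10            # Cramer's rule on the 2x2 system
    rho0 = (a0 * s11 - a1 * s01) / det
    rho1 = (s00 * a1 - s10 * a0) / det
    return rho0, rho1
```

Estimating such a transformation independently per pixel is noise-sensitive where the sensitivities are poorly conditioned, which motivates the smooth B-spline parameterization proposed above.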
The use of tools during surgical interventions raises the problem of their accurate localization within biological tissue. Ultrasound imaging represents an inexpensive and flexible approach for real-time image acquisition of tissue structure containing metal instruments. Several difficulties are involved in processing ultrasound images: their noisy nature makes the localization task difficult, and the objects appear irregular and incomplete. Our task is to determine the position of a curvilinear electrode in biological tissue from a three-dimensional ultrasound image. Initially, the data are segmented by thresholding and processed with the randomized version of the RANSAC algorithm (R-RANSAC). The curvilinear electrode is modeled by a three-dimensional cubic curve, whose shape is checked using a curvature measure in the hypothesis evaluation step of the R-RANSAC algorithm. Subsequently, we perform a least squares curve fit to the data that R-RANSAC has marked as corresponding to the sought object. The position estimate is optimal with respect to the mean square criterion. Finally, the localization of the electrode tips is carried out by hypothesis testing on the distances between projections of inliers onto the estimated curve. The algorithm has been tested on real three-dimensional ultrasound images of a tissue-mimicking phantom with a curvilinear object. From the results, we conclude that the method is very stable even if the data contain a high percentage of outliers. The computational cost of the algorithm indicates that real-time data processing is possible.
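The RANSAC idea can be illustrated with a stripped-down 2D variant (the paper fits a three-dimensional cubic curve with a curvature check inside a randomized R-RANSAC loop; a straight line keeps the sketch short, and the parameter names are assumptions):

```python
import math
import random

def ransac_line(points, n_iter=200, tol=0.05, seed=1):
    """Minimal RANSAC: repeatedly fit a 2D line through two random
    points and keep the model with the most inliers within distance
    `tol`; return those inliers for a final least-squares refit."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x0, y0), (x1, y1) = rng.sample(points, 2)
        nx, ny = y0 - y1, x1 - x0              # line normal
        norm = math.hypot(nx, ny)
        if norm == 0.0:                        # degenerate sample
            continue
        nx, ny = nx / norm, ny / norm
        c = -(nx * x0 + ny * y0)               # line: nx*x + ny*y + c = 0
        inliers = [p for p in points
                   if abs(nx * p[0] + ny * p[1] + c) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

As in the abstract, the consensus step tolerates a high fraction of outliers, and the final model is refit by least squares to the returned inlier set.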
We propose a way of measuring elastic properties of tissues in vivo, using a standard medical ultrasound machine without any special hardware. Images are acquired while the tissue is being deformed by a varying pressure applied by the operator on the hand-held ultrasound probe. The local elastic shear modulus is either estimated from a local displacement field reconstructed by an elastic registration algorithm, or both the modulus and the displacement are estimated simultaneously. The relation between modulus and displacement is calculated using a finite element method (FEM). The estimation algorithms were tested on synthetic, phantom, and real subject data.
We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in the pixel-based approaches. The measurements are modeled as linear operators (Radon transform), integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function. We show that this leads to a solution lying in a finite dimensional vector space which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a linear equation set, solvable either directly or by using an iterative approach. Our experiments show that our new variational method gives results comparable to classical filtered back-projection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, when filtered back-projection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device.
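The discretized measurement model leads to a linear equation set, as noted above. A tiny illustration of solving such ray-sum equations iteratively (Kaczmarz sweeps on a 2x2 image; a generic stand-in for the iterative solvers mentioned, not the paper's regularized continuous-domain system):

```python
def kaczmarz(rays, sums, n_pixels, n_sweeps=2000):
    """Solve ray-sum equations by Kaczmarz iterations: each measurement
    `sums[k]` is the sum of the pixels indexed by `rays[k]`; every sweep
    projects the current image onto each measurement's hyperplane."""
    x = [0.0] * n_pixels
    for _ in range(n_sweeps):
        for idx, s in zip(rays, sums):
            r = (s - sum(x[i] for i in idx)) / len(idx)
            for i in idx:
                x[i] += r
    return x

# 2x2 image [1, 2; 3, 4]: two row sums, two column sums, one diagonal ray.
rays = [[0, 1], [2, 3], [0, 2], [1, 3], [0, 3]]
sums = [3.0, 7.0, 4.0, 6.0, 5.0]
image = kaczmarz(rays, sums, 4)
```

With very few rays the system becomes underdetermined, which is exactly the regime where the regularization term of the variational formulation matters.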
Registration of images subject to nonlinear warping has numerous practical applications. We present an algorithm based on a double multiresolution structure of warp and image spaces. Tuning a so-called scale parameter controls the coarseness of the grid by which the deformation is described, as well as the amount of implicit regularization. Our application deals with undoing the unidirectional nonlinear geometrical distortion of echo-planar images (EPI) caused by local magnetic field inhomogeneities, induced mainly by the presence of the subject. The unwarping is based on registering the EPI images with corresponding undistorted anatomical MRI images. We present an evaluation of our method using a wavelet-based random Sobolev-type deformation generator, as well as other experimental examples.