Accurate estimation of atmospheric wind velocity plays an important role in weather forecasting, flight safety assessment and cyclone tracking. Atmospheric data captured by infrared and microwave satellite instruments provide global coverage for weather analysis. Extracting wind velocity fields from such data has traditionally been done through feature tracking, correlation/matching, or optical flow methods from computer vision. However, these methods either recover only sparse velocity estimates, oversmooth details, or are designed for quasi-rigid body motions and so over-penalize the vorticity and divergence within often turbulent weather systems. We propose a texture-based optical flow procedure tailored to water vapor data. Our method combines an L1 data term with a total variation regularizer and employs a structure-texture image decomposition to identify key features, which improves recoveries and helps preserve the salient vorticity and divergence structures. We extend this procedure to a multi-fidelity scheme and test both flow estimation methods on simulated over-ocean mesoscale convective systems and on convective and extratropical cyclone datasets, each of which has accompanying ground truth wind velocities so that we can quantitatively compare performance with existing optical flow methods.
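The structure-texture decomposition step can be illustrated with a simple low-pass split. This is a cheap stand-in, assuming a separable box blur in place of the ROF/TV-based decomposition such pipelines typically use; the window size and the texture weight `alpha` are illustrative choices, not the paper's:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur: a cheap low-pass stand-in for the
    ROF/TV smoothing used in structure-texture decompositions."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def structure_texture_split(img, k=5, alpha=1.0):
    """Split an image into a smooth 'structure' part and a residual
    'texture' part; alpha controls how much texture is kept for the
    optical-flow matching term (alpha=1 gives an exact split)."""
    structure = box_blur(img, k)
    texture = alpha * (img - structure)
    return structure, texture
```

Feeding the texture component, rather than the raw image, to the data term makes the matching less sensitive to large-scale brightness variations while retaining the fine features that carry motion information.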
We use large datasets from the Atmospheric Infrared Sounder (AIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to derive AIRS spatial response functions and study their potential variations over the mission. The new reconstructed spatial response functions can be used to reduce errors in the radiances in non-uniform scenes and improve products generated using both AIRS and MODIS data. AIRS spatial response functions are distinct for each of its 2378 channels and each of its 90 scan angles. We develop the mathematical model and the optimization framework for deriving spatial response functions for two AIRS channels with low water vapor absorption and various scan angles. We quantify uncertainties in the derived reconstructions and study how they differ from the pre-flight spatial response functions. We show that our approach generates reconstructions that agree with the data more closely than the pre-flight spatial responses do. We derive spatial response functions using data collected on successive dates in order to ascertain the repeatability of the reconstructed spatial response functions. We also compare spatial response functions derived from data collected at the beginning, in the middle, and at the current state of the mission in order to study changes in the reconstructions over time.
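A minimal sketch of the kind of inversion involved: treat each coarse footprint radiance as a weighted sum of the finer-resolution radiances beneath it, and recover the weights (the discretized spatial response) by Tikhonov-regularized least squares. The function name, matrix shapes, and regularization weight here are illustrative assumptions, not the paper's actual model or framework:

```python
import numpy as np

def fit_spatial_response(M, a, lam=1e-3):
    """Recover a discretized spatial response w by solving
        min_w ||M w - a||^2 + lam * ||w||^2.
    Each row of M holds one footprint's fine-scale radiances (e.g.
    MODIS pixels under an AIRS footprint); a holds the matching
    coarse radiances. The Tikhonov term stabilizes the ill-posed
    inversion; it is solved via an augmented least-squares system."""
    n = M.shape[1]
    A = np.vstack([M, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([a, np.zeros(n)])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

In practice many footprints over non-uniform scenes are stacked into `M`, since uniform scenes carry no information about the shape of the response.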
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important
information on how certain diseases progress. One important property is the structure of the placental fetal
stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases
like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal
stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many
problems in successfully achieving this goal, among them the large size of the images, misalignment of
consecutive H&E slides, unpredictable inaccuracies in manual tracing, and the very complicated texture patterns of
various tissue types, which lack clear distinguishing characteristics. In this paper we propose a novel algorithm
to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual
segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local
non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use
global affine image registration of all the images based on displacement, scaling and rotation. This gives us
the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the
affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration
based on an L2 similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we
have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of
the active contours model without edges where the coefficients of the fitting terms are computed iteratively to
ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as
an initial guess to obtain segmentation in the rest of the images in the sequence. This constitutes an important
step in the extraction and understanding of the fetal stem vasculature.
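The fitting step of the "active contours without edges" model can be illustrated with a stripped-down piecewise-constant iteration. This keeps only the two region means c1, c2 and the pixel reassignment rule; the length/curvature regularization, the level set representation, and the paper's dynamic coefficient update are all omitted, so this is a sketch of the fitting logic only:

```python
import numpy as np

def two_phase_fit(img, mask, iters=10):
    """Alternate between updating the region means c1 (inside) and c2
    (outside) and reassigning each pixel to the closer mean. This is
    the fitting-term core of the Chan-Vese model; the length term and
    level set machinery are deliberately left out."""
    mask = mask.astype(bool)
    c1 = c2 = 0.0
    for _ in range(iters):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        mask = (img - c1) ** 2 < (img - c2) ** 2
    return mask, c1, c2
```

Even from a rough initial mask, the alternation converges to a two-region partition separating bright and dark tissue, which is what makes an inaccurate manual tracing a usable starting point.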
Computed tomography (CT) plays an important role in medical imaging, especially for diagnosis and therapy.
However, a higher radiation dose from CT increases radiation exposure in the population. Therefore,
reducing the radiation dose from CT is an essential issue. Expectation maximization (EM) is an iterative
method used for CT image reconstruction that maximizes the likelihood function under Poisson noise assumption.
Total variation regularization is a technique used frequently in image restoration to preserve edges, given
the assumption that most images are piecewise constant. Here, we propose a method combining expectation
maximization and total variation regularization, called EM+TV. This method can reconstruct a better image
using fewer views in the computed tomography setting, thus reducing the overall dose of radiation. The numerical
results in two and three dimensions show the efficiency of the proposed EM+TV method by comparison with
those obtained by filtered back projection (FBP) or by EM only.
Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important
information on how certain diseases progress. One important property is the structure of the placental blood
vessels, which supply a fetus with all of its oxygen and nutrition. An essential step in the analysis of the vascular
network pattern is the extraction of the blood vessels, which has only been done manually through a costly
and time-consuming process. There is no existing method to automatically detect placental blood vessels; in
addition, the large variation in the shape, color, and texture of the placenta makes it difficult to apply standard
edge-detection algorithms. We describe a method to automatically detect and extract blood vessels from a given
image by using image processing techniques and neural networks. We evaluate several local features for every
pixel, in addition to a novel modification to an existing road detector. Pixels belonging to blood vessel regions
have recognizable responses; hence, we use an artificial neural network to identify the pattern of blood vessels.
A set of images where blood vessels are manually highlighted is used to train the network. We then apply
the neural network to recognize blood vessels in new images. The network is effective in capturing the most
prominent vascular structures of the placenta.
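The per-pixel feature stage can be sketched as follows. The specific features (intensity, local mean, local standard deviation, gradient magnitude) and the window size are generic illustrations, not the paper's actual feature set, and the road-detector modification is omitted; the resulting vectors would then be fed to the neural network for training on the manually highlighted masks:

```python
import numpy as np

def pixel_features(img, k=2):
    """Assemble a feature vector for every pixel: raw intensity,
    local mean, local standard deviation over a (2k+1)x(2k+1)
    window, and gradient magnitude. Returns an (H, W, 4) array."""
    pad = np.pad(img, k, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(pad, (2 * k + 1, 2 * k + 1))
    local_mean = windows.mean(axis=(2, 3))
    local_std = windows.std(axis=(2, 3))
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)
    return np.stack([img, local_mean, local_std, grad_mag], axis=-1)
```

Training then reduces to a standard supervised problem: each pixel's feature vector is a sample, and the manually highlighted mask supplies its vessel/non-vessel label.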
In two dimensions, the Mumford and Shah functional for image segmentation and regularization [15] has minimizers
(u,K), where u is a piecewise-smooth approximation of the image data f, and K represents the set
of discontinuities of u (a union of curves). Theoretically, the edge set K could include both closed and open
curves. The current level set and piecewise-smooth Mumford-Shah based segmentation algorithms [4, 23, 24] can
only detect objects with closed edges, which are boundaries of open sets. We propose an efficient Mumford-Shah
and level set based algorithm for segmenting images with edges which are made up of open curves or crack-tips.
By adapting Smereka's open level set formulation [21] to variational problems, we are able to extend the current
piecewise-smooth and level-set based image segmentation methods, such as [4, 23, 24], to the case of open curve segmentation.
The algorithm retains many of the advantages of using level sets, such as well-defined boundaries and
ability to change topology. We solve the resulting Euler-Lagrange equations by Sobolev H1 gradient descent,
avoiding instability and the need for additional regularization of the level set functions, while also accelerating
convergence to the reconstructed image. Finally, we present the numerical implementation and experimental
results on various noisy images.
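The Sobolev (H1) gradient is obtained from the ordinary L2 gradient by one linear solve, (I − α∆)g_H1 = g_L2, which on a periodic grid is a single FFT filter. A minimal sketch, assuming periodic boundary conditions and the standard 5-point discrete Laplacian; the smoothing weight α is an illustrative parameter:

```python
import numpy as np

def sobolev_gradient(g_l2, alpha=1.0):
    """Convert an L2 gradient field into an H1 (Sobolev) gradient by
    solving (I - alpha * Laplacian) g_h1 = g_l2 with periodic boundary
    conditions via the FFT. The symbol of the 5-point discrete
    Laplacian is (2 - 2cos ky) + (2 - 2cos kx)."""
    H, W = g_l2.shape
    ky = 2 * np.pi * np.fft.fftfreq(H)
    kx = 2 * np.pi * np.fft.fftfreq(W)
    lap_symbol = (2 - 2 * np.cos(ky))[:, None] + (2 - 2 * np.cos(kx))[None, :]
    g_hat = np.fft.fft2(g_l2) / (1 + alpha * lap_symbol)
    return np.real(np.fft.ifft2(g_hat))
```

Because the filter damps high frequencies, the resulting descent direction is smooth, which is what removes the need for extra regularization of the level set functions and allows larger stable time steps.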
We propose a unified variational approach for registration of gene expression data to a neuroanatomical mouse
atlas in two dimensions. The proposed energy (minimized in the unknown displacement u) is composed of three
terms: a standard data fidelity term based on L2 similarity measure, a regularizing term based on nonlinear
elasticity (allowing larger smooth deformations), and a geometric penalty constraint for landmark matching. We
overcome the difficulty of minimizing the nonlinear elasticity functional by introducing an auxiliary variable v
that approximates ∇u, the Jacobian of the unknown displacement u. We therefore now minimize the functional
with respect to the unknowns u (a vector-valued function of two dimensions) and v (a two-by-two matrix-valued
function). An additional quadratic term is added to ensure good agreement between v and ∇u. In this way,
the nonlinearity in the derivatives of the unknown u no longer exists in the obtained Euler-Lagrange equations,
producing simpler implementations. Several satisfactory experimental results show that gene expression data are
mapped to a mouse atlas with good landmark matching and smooth deformation. We also present comparisons
with the biharmonic regularization. An advantage of the proposed nonlinear elasticity model is that usually no
numerical correction such as regridding is necessary to keep the deformation smooth, while unifying the data
fidelity term, regularization term, and landmark constraints in a single minimization approach.
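The splitting described above can be written schematically as follows, with T the template, R the reference, W the nonlinear elastic stored-energy density, and γ the coupling weight (the symbols follow the text; the exact form of W and the weights are the paper's):

```latex
\min_{u,\,v}\;
\underbrace{\frac{1}{2}\int_\Omega \bigl|T(x+u(x)) - R(x)\bigr|^2\,dx}_{L^2\ \text{fidelity}}
\;+\;
\underbrace{\int_\Omega W(v)\,dx}_{\text{nonlinear elasticity, acting on } v}
\;+\;
\underbrace{\frac{\gamma}{2}\int_\Omega \bigl|v - \nabla u\bigr|^2\,dx}_{\text{couples } v \text{ to } \nabla u}
\;+\;\text{(landmark penalty)}
```

Because W acts on v rather than on ∇u, the Euler-Lagrange equation in u is linear in its derivatives, and the nonlinearity is confined to a pointwise equation in v.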
We consider several variants of the active contour model without edges, extended here to the case of noisy and blurry images, in a multiphase and a multilayer level set approach. Thus, the models jointly perform denoising, deblurring and segmentation of images in a variational formulation. In practice, one of the most standard ways to minimize the proposed functionals is a gradient descent process in a time-dependent approach. Usually, the L2 gradient descent of the functional, based on the L2 inner product, is computed and discretized. However, this computation often theoretically requires additional smoothness of the unknown, or stronger conditions.
One way to overcome this is to use the idea of Sobolev gradients. In several experiments we compare the L2 and H1 gradient descents for image segmentation using curve evolution, with applications to denoising and deblurring. The Sobolev gradient descent is preferable in many situations and has a smaller computational cost.
In this work we wish to recover an unknown image from a blurry version. We solve this inverse problem by energy minimization and regularization. We seek a solution of the form u + v, where u is a function of bounded variation (cartoon component), while v is an oscillatory component (texture), modeled by a Sobolev function with negative degree of differentiability. Experimental results show that this cartoon + texture model better recovers textured details in natural images, by comparison with the more standard models where the unknown is restricted only to the space of functions of bounded variation.
KEYWORDS: Image analysis, Image denoising, Denoising, Partial differential equations, Data modeling, Signal to noise ratio, Computational imaging, Electronic imaging, Mathematical modeling
This paper is devoted to a recent topic in image analysis: the decomposition of an image into a cartoon or
geometric part, and an oscillatory or texture part. Here, we propose a practical solution to the (BV,G) model
proposed by Y. Meyer. We impose that the cartoon is a function of bounded variation, while the texture is
represented as the Laplacian of some function whose gradient belongs to L^∞. The problem thus becomes related
to absolutely minimizing Lipschitz extensions and the infinity Laplacian. Experimental results for image
denoising and cartoon + texture separation, together with details of the algorithm, are also presented.
KEYWORDS: Image segmentation, Image restoration, Denoising, Brain, Binary data, Data modeling, Tomography, Computational imaging, Magnetic resonance imaging
This work is devoted to new computational models for image segmentation, image restoration and image decomposition. In particular, we partition an image into piecewise-constant regions using energy minimization and curve evolution approaches. Applications of denoising-segmentation in polar coordinates (motivated by impedance tomography) and of segmentation of brain images will be presented. Also, we decompose a natural image into a cartoon or geometric component and an oscillatory or texture component using a variational approach and dual functionals. Thus, new computational methods will be presented for denoising, deblurring and texture modeling.