The classic shrinkage works well for monochrome-image denoising. To utilize inter-channel color correlations, a noisy
image undergoes the color-transformation from the RGB to the luminance-and-chrominance color space, and the
luminance and the chrominance components are separately denoised. However, this approach cannot cope with the signal-dependent
noise of a digital color camera. To utilize the noise's signal-dependencies, we previously proposed the
soft color-shrinkage where the inter-channel color correlations are directly utilized in the RGB color space. The soft
color-shrinkage works well but involves a large amount of computation. To alleviate this drawback, starting from the l0-l2
optimization problem whose solution yields the hard shrinkage, we introduce the l0 norms of color differences and the l0
norms of color sums into the model, and derive hard color-shrinkage as its solution. For each triplet of three primary
colors, the hard color-shrinkage has 24 feasible solutions, and from among them selects the optimal feasible solution
giving the minimal energy. We propose a method to control its shrinkage parameters spatially-adaptively according to
both the local image statistics and the noise's signal-dependencies, and apply the spatially-adaptive hard color-shrinkage
to removal of signal-dependent noise in a shift-invariant wavelet transform domain. The hard color-shrinkage performs
better than the soft color-shrinkage in most cases, from both objective and subjective viewpoints.
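As a minimal illustration of the starting point above, the scalar l0-l2 problem min_x (x - y)^2 + lambda*||x||_0 is solved by hard thresholding, while its l1 counterpart gives soft thresholding. The sketch below (Python/NumPy, with function names of my own choosing) shows only these per-coefficient rules; it does not implement the full hard color-shrinkage with its color-difference and color-sum terms and 24 feasible solutions.

```python
import numpy as np

def hard_shrink(y, lam):
    """Solve min_x (x - y)^2 + lam * ||x||_0 per coefficient.
    The minimizer keeps y when y^2 > lam and sets it to zero otherwise,
    i.e. hard thresholding at sqrt(lam)."""
    return np.where(y * y > lam, y, 0.0)

def soft_shrink(y, tau):
    """Solve min_x (x - y)^2 + 2 * tau * |x| per coefficient (soft thresholding)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.normal(0.0, 1.0, 8)        # stand-in for noisy wavelet coefficients
    print(hard_shrink(coeffs, lam=1.0))     # keep-or-kill behaviour
    print(soft_shrink(coeffs, tau=1.0))     # shrink-towards-zero behaviour
```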
Recently, R. H. Chan, T. F. Chan, L. Shen, and Z. Shen proposed an image-restoration method, referred to as the C2-S2
method, in the shift-invariant Haar wavelet transform domain. The C2-S2 method is suitable for image-restoration in a
digital color camera and restores a sharp color image in a lightly noisy case; but in a heavily noisy case it produces
colored artifacts originating from noise. In such a case, the image-restoration process should be split into a color-interpolation
stage, a denoising stage, and a deblurring stage; and along this line we present an approach to restore a
high ISO-sensitivity color image. Our approach firstly de-mosaics observed color data with the bi-linear interpolation
method, which is robust against observation noise but causes image blurs. Next, our approach applies our previously proposed
spatially-adaptive soft color-shrinkage to each shift-invariant Haar wavelet coefficient of the demosaicked
color-image, to produce a denoised color-image. Finally, our approach applies to the denoised color-image a color-image
restoration method that corresponds to an extension of the C2-S2 method and employs our previously proposed
soft color-shrinkage. Experimental simulations conducted on raw color data observed with an SLR digital color
camera at ISO 6400 demonstrate that our approach restores a high-quality color image even in such a high ISO-sensitivity
case, without producing noticeable artifacts.
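The first stage of the pipeline above is bilinear demosaicking of the raw Bayer data. The following sketch is a generic bilinear interpolation by convolution, assuming an RGGB layout; it is meant only to illustrate this noise-robust but blur-inducing first stage, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic (assumed layout):
    each channel is zero-filled at the missing sites and interpolated
    by convolution with the standard bilinear kernels."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb, mode="mirror")
    g = convolve(raw * g_mask, k_g,  mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.stack([r, g, b], axis=-1)

# usage on a synthetic mosaic
mosaic = np.random.rand(64, 64)
rgb = bilinear_demosaic(mosaic)   # (64, 64, 3), blurry but robust to noise
```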
KEYWORDS: Wavelet transforms, Denoising, Image processing, Wavelets, Digital cameras, Cameras, Nonlinear image processing, RGB color model, Information technology, Projection systems
This paper extends the BV (Bounded Variation)-G and BV-L1 variational nonlinear image-decomposition
approaches, which are considered useful for image processing in a digital color camera, to genuine color-image
decomposition approaches. To utilize inter-channel color cross-correlations, this paper first introduces TV (Total
Variation) norms of color differences and TV norms of color sums into the BV-G and BV-L1 energy functionals, and
then derives denoising-type decomposition algorithms with an over-complete wavelet transform by applying the
Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing
undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality
color-image decomposition, which is very robust against colored random noise.
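A minimal sketch of the coupling terms mentioned above: discrete isotropic TV norms evaluated on pairwise color differences and color sums of an RGB image. The discretization (forward differences) and the helper names are my own assumptions; the sketch only shows how such terms could be computed, not how they enter the BV-G / BV-L1 minimization.

```python
import numpy as np

def tv_norm(u):
    """Discrete isotropic total variation with forward differences."""
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(dx * dx + dy * dy))

def color_tv_terms(rgb):
    """TV of each channel plus TV of pairwise color differences and sums,
    the kind of coupling terms added to the BV-G / BV-L1 functionals."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    pairs = [(r, g), (g, b), (b, r)]
    per_channel = sum(tv_norm(c) for c in (r, g, b))
    differences = sum(tv_norm(p - q) for p, q in pairs)
    sums        = sum(tv_norm(p + q) for p, q in pairs)
    return per_channel, differences, sums
```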
KEYWORDS: Image processing, Image enhancement, Digital cameras, Cameras, Image restoration, Super resolution, Visibility, Signal processing, Digital imaging, Digital image processing
This paper presents two new architectures of an image-processing (IP) pipeline for a digital color camera, which are
referred to as a prior-decomposition approach and a posterior-decomposition approach. The prior-decomposition
approach firstly decomposes each primary color channel of mosaicked raw color data as a product of its two
components, a structure component and a texture component, with a proper multiplicative monochrome-image
decomposition method such as the BV-G variational decomposition method. Each component is then demosaicked with
its proper color-interpolation method. Next, white balancing, color enhancement and inverse gamma correction are
applied to only the structure component. Finally, the two components are combined. On the other hand, the posterior-decomposition
approach firstly interpolates mosaicked raw color data with a proper demosaicing method, and
decomposes its demosaicked color image with our proposed multiplicative color-image decomposition method utilizing
inter-channel color cross-correlations. The subsequent processing is performed in the same manner as in the prior-decomposition
approach. Our two proposed architectures both produce a high-quality full-color output image but differ somewhat
in performance. We experimentally compare their performance and discuss their respective merits and demerits.
In a digital camera, several factors cause signal-dependency of additive noise. Many denoising methods have been
proposed, but many of them do not necessarily work well for actual signal-dependent noise. To solve the problem of
removing signal-dependent noise of a digital color camera, this paper presents a denoising approach via nonlinear
image-decomposition. As a pre-process, we employ the BV-L1 nonlinear image-decomposition variational model. This
variational model decomposes an input image into three components: a structural component corresponding to a cartoon
image approximation collecting geometrical image features, a texture component corresponding to fine image textures,
and a residual component. Each separated component is denoised with a denoising method suitable to it. For an image
taken with a digital color camera under the condition of high ISO sensitivity, the BV-L1 model removes its signal-dependent
noise to a large extent from its separated structural component, in which geometrical image features are well
preserved, but the structural component sometimes suffers color-smear artifacts. To remove those color-smear artifacts,
we apply the sparse 3D transform-domain collaborative filtering to the separated structural component. On the other
hand, the texture component and the residual component are rather contaminated with noise, and the effects of noise are
selectively removed from them with our proposed color-shrinkage denoising schemes utilizing inter-channel color cross-correlations.
Our method achieves efficient denoising and selectively removes signal-dependent noise of a digital color
camera.
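Schematically, the approach dispatches each separated component to its own denoiser and recombines the results additively. The sketch below only shows that dispatch, with the decomposition and the individual denoisers left as placeholders; the function names and the identity placeholders in the usage example are hypothetical.

```python
import numpy as np

def denoise_by_components(structure, texture, residual,
                          denoise_structure, denoise_texture, denoise_residual):
    """Schematic dispatch: each separated component is denoised with a method
    suited to it, then the components are recombined additively
    (BV-L1 decomposes the input as structure + texture + residual)."""
    s = denoise_structure(structure)   # e.g. collaborative 3D-transform filtering
    t = denoise_texture(texture)       # e.g. color-shrinkage in a wavelet domain
    r = denoise_residual(residual)     # e.g. color-shrinkage with stronger thresholds
    return s + t + r

# usage with trivial placeholder denoisers (identity), just to show the flow
if __name__ == "__main__":
    img = np.random.rand(32, 32, 3)
    zeros = np.zeros_like(img)
    out = denoise_by_components(img, zeros, zeros,
                                lambda x: x, lambda x: x, lambda x: x)
```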
This paper presents a new image-interpolation approach where one can adjust edge sharpness and texture intensity
according to one's taste. This approach is composed of three stages. At the first stage, with the BV-G image-decomposition
variational model, an image is represented as a product of its two components so that its separated
structural component may correspond to a cartoon image-approximation and its separated texture component may
collect almost all oscillatory variations representing textures, and the texture component can be amplified or attenuated
according to the user's taste. At the second stage, each separated component is interpolated with an interpolation method
suitable to it. Since the structural component keeps sharp edges, its proper interpolation method is a TV-regularization
super-resolution interpolation method that can restore frequency components higher than the Nyquist frequency and
remove sample-hold blurs without producing ringing artifacts near edges. The texture component is an oscillatory
function, and its proper interpolation method is a smoothness-regularization super-resolution interpolation method that
can restore continuous variations and remove the blurs. At the final stage, the two interpolated components are
combined. The approach enlarges images without blurring edges or destroying textures, and removes
blurs caused by the sample-hold and/or the optical low-pass filter without producing ringing artifacts.
KEYWORDS: Super resolution, Color difference, Image processing, Interference (communication), RGB color model, Linear filtering, Optical filters, Image restoration, Cameras, Digital cameras
Previously we presented a TV-based super-resolution sharpening-demosaicing method. Our previous method makes it
possible to restore frequency components higher than the Nyquist frequency, and to interpolate color signals effectively
while preserving their sharp color edges, without producing ringing artifacts along the edges. However, since our
previous method applies the TV regularization separately to each primary color channel, as side effects it sometimes
produces false color artifacts and/or zipper artifacts along sharp color edges. To remedy the drawback, in addition to the
TV regularization of each primary color signal, we introduce the TV regularization of color difference signals such as
G-R, and that of color sum signals such as G+R, into the TV-based super-resolution sharpening-demosaicing method.
Near sharp color edges, correct interpolation provides the smallest TV norms of color difference signals or the smallest
TV norms of color sum signals. Unlike our previous method, our new method jointly interpolates the three primary
color channels. We compare demosaicing performance of our new method with the state-of-the-art demosaicing
methods. In the evaluations, we consider a noise-free case and a noisy case. For both cases our new method achieves the
best performance, and for the noisy case our new method considerably outperforms the state-of-the-art methods.
KEYWORDS: Interference (communication), Denoising, Digital cameras, Nonlinear optics, Signal to noise ratio, Visualization, Cameras, Digital imaging, Astatine, Image processing
In a digital camera, several factors cause signal-dependency of additive noise. Many denoising methods have been
proposed, but unfortunately most of them do not work well for the actual signal-dependent noise. To solve the problem
of removing the signal-dependent noise of a digital camera, this paper presents a denoising approach via the nonlinear
image-decomposition. In the nonlinear decomposition-and-denoising approach, at the first nonlinear image-decomposition
stage, multiplicative image-decomposition is performed, and a noisy image is represented as a product of
its two components so that its structural component corresponding to a cartoon approximation of the noisy image may
not be corrupted by the noise and its texture component may collect almost all the noise. At the successive nonlinear
denoising stage, intensity of the separated structural component is utilized instead of the unknown true signal value, to
adapt the soft-thresholding-type denoising manipulation of the texture component to the signal dependency of the noise.
At the final image-synthesis stage, the separated structure component is combined with the denoised texture component,
and thus a sharpness-preserved denoised image is reproduced. The nonlinear decomposition-and-denoising approach
selectively removes the signal-dependent noise of a digital camera without blurring sharp edges or
destroying visually important textures.
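A minimal sketch of the signal-adaptive thresholding step, assuming an affine noise-variance model sigma^2(s) = a*s + b (a common approximation for sensor noise, not a formula taken from the abstract): the structural component's intensity stands in for the unknown true signal and sets a per-pixel soft threshold for the texture component.

```python
import numpy as np

def signal_dependent_soft_threshold(texture, structure, a=0.01, b=1e-4, k=3.0):
    """Soft-threshold the texture component with a per-pixel threshold derived
    from the structural component's intensity, which stands in for the unknown
    true signal value. Assumes an affine noise-variance model
    sigma^2(s) = a*s + b (illustrative parameters, not from the paper)."""
    sigma = np.sqrt(np.maximum(a * structure + b, 0.0))
    tau = k * sigma                                   # per-pixel threshold
    return np.sign(texture) * np.maximum(np.abs(texture) - tau, 0.0)
```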
KEYWORDS: Video, Cameras, Point spread functions, Motion estimation, Field emission displays, Deconvolution, Bromine, Image restoration, Data modeling, Video processing
Time-varying motion blur is sometimes caused by camera shake,
so that video clips contain alternating blurred and sharp frames.
This degradation can be regarded as a kind of breathing distortion, that is, an oscillatory change of blur.
We propose a motion deblurring method which aims to suppress breathing distortions.
A spatio-temporal regularization is introduced into the deconvolution approach to smooth the temporal change of blur.
KEYWORDS: Image restoration, Super resolution, Image compression, Radio over Fiber, Quantization, Cameras, Computer programming, Digital imaging, Signal to noise ratio, Interference (communication)
In a digital camera, the output image is sometimes heavily corrupted by additive noise, and the noisy image is often
compressed with the JPEG encoder. When the coding rate of the JPEG encoder is not high enough, noticeable artifacts
such as blocking, ringing, and false color appear in the JPEG-decoded image. In the high ISO-sensitivity
case, even if the coding rate is very high, the camera's noise produces noticeably annoying artifacts in a
JPEG-decoded image. This paper presents a restoration-type decoding approach that recovers a quality-improved image
from the JPEG-compressed data, while not only suppressing the occurrence of the coding artifacts particular to the
JPEG compression but also removing the camera's noise to some extent. This decoding approach is a kind of super-resolution
image-restoration approach based on the TV (Total Variation) regularization; to reduce the ringing artifacts
near sharp edges it selectively restores the DCT coefficients truncated by the JPEG compression, whereas in an
originally smooth image region it flattens unnecessary signal variations to eliminate the blocking artifacts and the
camera's noise. Extending the standard ROF (Rudin-Osher-Fatemi) framework of the TV image restoration, in this
paper we construct the super-resolution approach to the JPEG decoding. By introducing the JPEG-compressed data into
the fidelity term of the energy functional and adopting a nonlinear cost function constrained by the JPEG-compressed
data softly, we define a new energy functional whose minimization gives the super-resolution JPEG decoding.
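One standard building block in restoration-type JPEG decoding is keeping the DCT coefficients of the estimate consistent with the quantization intervals implied by the compressed data. The sketch below shows that consistency step as a hard projection for a single 8x8 block; the paper instead uses a soft nonlinear cost, and the assumption here is that the stored coefficients and quantization table are expressed in the same orthonormal DCT convention as scipy's dctn.

```python
import numpy as np
from scipy.fft import dctn, idctn

def project_block_to_quant_intervals(block, dequantized_coeffs, q_table):
    """Project one 8x8 image block so that its DCT coefficients stay inside the
    quantization intervals implied by the JPEG data: each decoded coefficient
    q*k can only have come from values in [q*(k-0.5), q*(k+0.5)].
    Hard projection, a simpler stand-in for the paper's soft constraint;
    assumes coefficients and q_table use the same orthonormal DCT convention."""
    c = dctn(block, norm="ortho")
    lo = dequantized_coeffs - 0.5 * q_table
    hi = dequantized_coeffs + 0.5 * q_table
    c = np.clip(c, lo, hi)
    return idctn(c, norm="ortho")
```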
At EI2006, we proposed a CMOS image sensor overlaid with organic photoconductive layers in
order to give it the large light-capturing ability that a color film owes to its multiple-layer structure, and
demonstrated pictures taken with a trial product of the proposed CMOS image sensor overlaid with an organic layer
having green sensitivity. In this study, we have tried to find the optimal spectral sensitivity for the proposed CMOS
image sensor by means of a simulation that minimizes the color difference between the original Macbeth chart and its
reproduction, with the spectral sensitivity of the sensor as a parameter. As a result, it has been confirmed that the
proposed CMOS image sensor with the multiple-layer structure possesses high potential capability in terms of image-capturing
efficiency when it is provided with the optimized spectral sensitivity.
KEYWORDS: Interference (communication), Denoising, Digital cameras, Nonlinear optics, Visualization, Digital imaging, Radio over Fiber, Sensors, Image visualization, Optical filters
In a digital camera, several factors cause signal-dependency of additive noise. Many denoising methods have been
proposed, but unfortunately most of them do not work well for the actual signal-dependent noise. To solve the problem
of removing the signal-dependent noise of a digital camera, we present a denoising approach via the nonlinear image-decomposition.
In the nonlinear decomposition-and-denoising approach, at the first nonlinear image-decomposition
stage, multiplicative image-decomposition is performed, and a noisy image is represented as a product of its two
components so that its structural component corresponding to a cartoon approximation of the noisy image may not be
corrupted by the noise and its texture component may collect almost all the noise. At the successive nonlinear denoising
stage, intensity of the separated structural component is utilized instead of the unknown true signal value, to adapt the
soft-thresholding-type denoising manipulation of the texture component to the signal dependency of the noise. At the
final image-synthesis stage, the separated structure component is combined with the denoised texture component, and
thus a sharpness-improved denoised image is reproduced. The nonlinear decomposition-and-denoising approach can
selectively remove the signal-dependent noise of a digital camera without blurring sharp edges or
destroying visually important textures.
As an imaging scheme with a single solid-state image sensor, the direct color-imaging approach is considered a promising scheme for acquiring color data with high spatial resolution. The sensor has three photo-sensing layers along its depth direction. Although each pixel has three color channels, their spectral sensitivities overlap each other to a considerable extent, and the direct color-imaging approach is therefore considered to have a problem in color separation. To cope with this problem, this paper presents a hybrid color-imaging approach between the direct color-imaging approach and the color-filter-array approach. Our hybrid approach uses a direct color-imaging sensor that has three photo-sensing layers, and pastes green and magenta color filters on the pixel surfaces of the direct color-imaging sensor according to a checkered pattern. The use of the checkered green-magenta color-filter-array improves color separation, but the sensed reddish and bluish color channels still have somewhat different color spectra from those of the red and the blue primary color channels. In addition, the sensed green color channel and the sensed reddish / bluish color channels are sub-sampled in a ratio of 2 to 1 according to the checkered pattern. To recover RGB primary color images with full spatial resolution, both the color transformation of the reddish and bluish color channels and the interpolation of the three sensed color channels are needed. This paper presents a method for the color transformation and a method for the interpolation. Our methods achieve spatial resolution higher than that of the pure color-filter-array approach, and at the same time improve color separation to some extent.
We previously presented a demosaicking method that simultaneously removes image blurs caused by an optical low-pass
filter used in a digital color camera with the Bayer's RGB color filter array. Our prototypal sharpening-demosaicking method restored only spatial frequency components lower than the Nyquist frequency corresponding to the mosaicking pattern, and it often produced ringing artifacts near color edges. To overcome this difficulty, we later introduced super-resolution into the prototypal method. We formulated the recovery problem in the DFT domain, and then introduced super-resolution by the total-variation (TV) image regularization into the sharpening-demosaicking approach. The TV-based super-resolution effectively demosaics sharp color images while preserving image structures in which intensity values are almost constant along edges, without producing ringing artifacts. However, the TV image regularization acts as a smoothing operation and tends to suppress small intensity variations excessively. Hence, the TV-based super-resolution sharpening-demosaicking approach tends to crush texture details in texture image regions. To remedy this drawback, in this paper we introduce a spatially adaptive technique that controls the TV image regularization according to the saliency of color edges around a pixel.
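As an illustration of spatially adaptive control, the sketch below builds a per-pixel TV weight from a crude edge-saliency measure (a smoothed gradient magnitude of a luminance-like channel). Both the saliency measure and the mapping from saliency to weight are assumptions of mine, not the rule used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_tv_weight(luma, lam_max=0.10, lam_min=0.01, sigma=1.5):
    """Per-pixel TV regularization weight controlled by local edge saliency.
    Saliency is crudely measured by a smoothed gradient magnitude and
    normalized to [0, 1]; the saliency-to-weight mapping is an assumption."""
    gy, gx = np.gradient(gaussian_filter(luma, sigma))
    saliency = np.hypot(gx, gy)
    saliency = saliency / (saliency.max() + 1e-12)
    # stronger TV near salient color edges, weaker TV in texture regions
    return lam_min + (lam_max - lam_min) * saliency
```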
The Retinex theory was first proposed by Land, and deals with the separation of irradiance from reflectance in an observed image. The separation problem is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically
rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, as to its extension to color images, we present two approaches to treating the color channels: the independent approach, which treats each color channel separately, and the collective approach, which treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated
irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
KEYWORDS: 3D modeling, RGB color model, Diffusion, Image processing, Bromine, Argon, Image contrast enhancement, Image enhancement, Human vision and color perception, Image filtering
Previously we presented a selective image sharpening method based on a coupled nonlinear diffusion process composed of a nonlinear diffusion term, a fidelity term and an isotropic peaking term; it can sharpen only blurred edges without increasing the noise visibility. Our previously presented prototypal color-image sharpening methods based on the coupled nonlinear-diffusion process were formulated on linear color models, namely, the channel-by-channel model and the 3D vectorial model. Our prototypal methods can sharpen blurred color step edges, but they do not necessarily enhance the contrast of signal variations in complex texture image regions as well as in simple step-edge regions. To remedy this drawback, this paper extends our coupled nonlinear-diffusion color-image sharpening method to a nonlinear non-flat color model, namely, the chromaticity-brightness model, which is known to be closely related to human color perception. We modify our time-evolution PDEs for the non-flat space of the chromaticity vector and present their digital implementations. Through experimental simulations, we compare our new color-image sharpening method based on the chromaticity-brightness model with our prototypal color-image sharpening methods based on the linear color models.
Once image motion is accurately estimated, we can utilize those motion estimates for image sharpening and we can remove
motion blurs. First, for the motion de-blurring, this paper presents a model-based PDE method that minimizes the regularized
energy functional defined with a spatially variant model of motion blurs. Unlike the case of spatially invariant image blurs,
the minimization of the energy functional cannot be achieved in a closed non-iterative way, and we derive its iterative
algorithm. The standard regularization method uses a quadratic function to measure the energy of its solution, and
employs an energy functional composed of a data-fidelity term, which measures the deviation of a solution from
the assumed model of motion blurs, and a regularization term, which imposes smoothness constraints on the solution.
However, the standard variational method is not proper for motion de-blurring, because it is sensitive to model
errors, and the occurrence of errors in motion estimation is inevitable. To improve the robustness against the model errors, we
employ a nonlinear robust estimation function for measuring energy to be minimized. Secondly, this paper experimentally
compares the model-based PDE method with our previously presented model-free PDE method that does not need any
accurate blur model. In the model-error-free case the model-based PDE method outperforms the model-free PDE method,
whereas in the model-error case the latter works better than the former.
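A minimal sketch of a robust data-fidelity measure of the kind described above, using the Huber function (my choice; the paper does not specify which robust estimation function it uses) together with its influence function, which is what a gradient-based minimization of the data term would evaluate.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber robust energy: quadratic for small residuals, linear for large ones,
    so gross errors in the assumed motion-blur model contribute less."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r * r, delta * (a - 0.5 * delta))

def huber_influence(r, delta=1.0):
    """Derivative of the Huber energy (influence function); large residuals are
    clipped, which is what makes the data-fidelity term robust."""
    return np.clip(r, -delta, delta)

# data-fidelity energy for a residual image r = (blur model applied to u) - observed
residual = np.random.randn(16, 16)
E_data = huber(residual).sum()
grad_wrt_residual = huber_influence(residual)
```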
We previously presented a method of selective image sharpening based on a coupled nonlinear diffusion process that was composed of a nonlinear diffusion term, a fidelity term and an isotropic peaking term. It could sharpen only blurred edges without increasing noise visibility. This paper extends our method to the removal of blurs in images due to image motion. Motion blur is not only shift-variant but also anisotropic. To adapt our method to motion de-blurring, we replaced our isotropic peaking term with an anisotropic peaking term steered in the direction of motion at each pixel. We then devised a discrete calculus adapted to the direction of motion. Through experiments with a test sequence of moving images containing artificial motion-blurs, we quantitatively evaluated sharpening performance. Our new method with the anisotropic peaking term performed better than our prototypal method with the isotropic peaking term, and was robust against errors in the direction of motion.
As an optical low-pass filter, a doubly refractive crystal device is used. The filter reduces frequency components lower than the Nyquist frequency, and images are blurred. We previously presented a demosaicking method that simultaneously removes blurs caused by the optical low-pass filter. For the sharpening-demosaicking approach, the Bayer's RGB color filter array is not necessarily proper, and we studied another color-filter array, namely the WRB filter array, where the W-filtering means that all the visible light passes through it. Our prototypal sharpening-demosaicking method employed an iterative algorithm, and restored only spatial frequency components of color signals lower than the Nyquist frequency corresponding to the mosaicking pattern of the W filters. However, the same recovery problem can be solved by a non-iterative method in the Discrete Fourier Transform domain. Moreover, our prototypal method often produced ringing artifacts near sharp color edges. To suppress those artifacts, we introduce the TV-based super-resolution into the sharpening-demosaicking approach. This super-resolution approach restores spatial frequency components higher than the Nyquist frequency from observed blurry spatial frequency components, so that it can enlarge and sharpen images without producing ringing artifacts, while preserving 1D image structures in which intensity values are almost constant along the edge direction.
KEYWORDS: Reflectivity, Sensors, Inverse problems, RGB color model, Optical filters, Image sensors, Data acquisition, Silicon, Color imaging, Solid state electronics
As an imaging scheme with a single solid-state sensor, the direct color-imaging approach is considered promising. The sensor has more than two photo-sensing layers along its depth direction. Although each pixel has multiple color signals, their spectral sensitivities overlap each other. The overlapped color signals should be transformed to the color signals specified by an output device. We present a color transformation method for the direct color-imaging scheme. Our method tries to recover multi-spectral reflectance and then to transform colors by utilizing it. The problem is formulated as an inverse problem in which multi-spectral reflectance with a large number of color channels is recovered from observed color signals with a smaller number of color channels, and is solved by a regularization technique that minimizes a functional composed of a color-fidelity term and a spectral-regularity term. The color-fidelity term quantifies errors in the linear transformation from multi-spectral reflectance to observed color signals, whereas the spectral-regularity term quantifies the property that the spectral reflectance at a color channel is similar to those at its neighboring channels. We simulate the direct color-imaging scheme and our method. The results show that in the case of more than five photo-sensing layers our method restores multi-spectral reflectance satisfactorily.
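A minimal sketch of the regularized inverse problem described above: per pixel, recover an N-band reflectance vector from M < N observed color signals by minimizing a color-fidelity term plus a spectral-regularity term that penalizes second differences across neighboring spectral channels. The sensitivity matrix, the regularization weight, and the band/layer counts in the usage example are illustrative placeholders.

```python
import numpy as np

def recover_reflectance(observed, sensitivity, lam=0.1):
    """Recover multi-spectral reflectance r (n_bands) from observed color
    signals c (n_sensors < n_bands) by solving
        min_r ||A r - c||^2 + lam * ||D r||^2,
    where A is the spectral sensitivity matrix (color-fidelity term) and D
    takes second differences across neighboring spectral channels
    (spectral-regularity term). Values here are placeholders."""
    n_sensors, n_bands = sensitivity.shape
    D = (np.eye(n_bands, k=-1) - 2 * np.eye(n_bands) + np.eye(n_bands, k=1))[1:-1]
    A = sensitivity
    lhs = A.T @ A + lam * D.T @ D
    rhs = A.T @ observed
    return np.linalg.solve(lhs, rhs)

# usage: 6 broad, overlapping sensing layers, 31 spectral bands (illustrative)
bands, layers = 31, 6
A = np.random.rand(layers, bands)
true_r = np.linspace(0.2, 0.8, bands)
c = A @ true_r
r_hat = recover_reflectance(c, A, lam=0.1)
```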
KEYWORDS: Optical filters, RGB color model, Linear filtering, Image processing, Crystals, Color reproduction, Solid state electronics, Inverse optics, Color image segmentation, Image filtering
As an optical low-pass filter, a doubly refractive crystal device is used; it separates its incident light into
the normal light and the abnormal light, which is shifted in a slightly different direction from the normal light. The actual optical
low-pass filter is formed by combining two types of doubly refractive crystal device: one separates its incident light
into two rays horizontally spaced from each other by one pixel, and the other into two rays vertically spaced by one
pixel. The filter cannot sharply cut off high-frequency components, and it also reduces the frequency components near the
Nyquist frequency. Thus, images projected on the imaging surface are blurred. This paper presents a demosaicking
method that can simultaneously remove image blurs caused by the optical low-pass filter. Most of the existing
demosaicking methods do not try to remove the image blurs, whereas our sharpening approach to the demosaicking
employs the Landweber-type iterative algorithm. For our sharpening-demosaicking approach, the Bayer’s pattern of the
RGB primary color filter array is not necessarily proper, and hence we study another color-filter pattern, namely the
WRB filter array that is preferable to the RGB filter array, where the W-filtering means that all the visible light passes
through it. Our mathematical formulation of the sharpening-demosaicking has the form of a least-squares problem, but
there exist multiple different least-squares solutions. To resolve this ambiguity, in the spatial frequency domain we introduce
the pass-band limitation corresponding to the sub-sampling pattern of the mosaicking of color filters, into the iterative
algorithm.
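A minimal sketch of a Landweber iteration with a passband limitation applied in the spatial-frequency domain, as described above. The forward operator in the usage example is a bare sub-sampling mask and the passband mask is an arbitrary half-band choice; both are stand-ins for the actual optical-low-pass-plus-mosaicking model and for the limitation matched to the color-filter sub-sampling pattern.

```python
import numpy as np

def landweber_bandlimited(y, A, At, band_mask, step=1.0, n_iter=100):
    """Landweber iteration x <- P_band( x + step * A^T (y - A x) ), where
    P_band keeps only the spatial-frequency components allowed by band_mask
    (the limitation corresponding to the sub-sampling pattern), removing the
    ambiguity among the multiple least-squares solutions. A / At are passed
    in as functions and are illustrative stand-ins for the actual model."""
    x = At(y)
    for _ in range(n_iter):
        x = x + step * At(y - A(x))
        x = np.real(np.fft.ifft2(np.fft.fft2(x) * band_mask))   # passband projection
    return x

# toy usage: masking (sub-sampling) as the forward operator, half-band limitation
h, w = 64, 64
mask = np.zeros((h, w)); mask[::2, ::2] = 1.0                    # sampling pattern
A  = lambda x: x * mask
At = lambda r: r * mask
fy = np.fft.fftfreq(h)[:, None]; fx = np.fft.fftfreq(w)[None, :]
band = ((np.abs(fy) < 0.25) & (np.abs(fx) < 0.25)).astype(float) # assumed passband
x_rec = landweber_bandlimited(A(np.random.rand(h, w)), A, At, band)
```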
Previously we have presented a method for selective image sharpness enhancement. Our method is based on the simultaneous nonlinear reaction-diffusion time-evolution equipped with a nonlinear diffusion term, a reaction term and an isotropic peaking term, and it can sharpen only degraded edges blurred by several causes without increasing the visibility of nuisance factors such as random noise. This paper applies our simultaneous nonlinear reaction-diffusion method to removal of image blurs due to image motion. The motion blur is not only shift-variant but also anisotropic. To adapt our
simultaneous nonlinear reaction-diffusion method for motion de-blurring, we replace the isotropic Laplacian operator, included in the isotropic peaking term of our prototypal method, with an anisotropic operator that considers the direction of the estimated image motion. Preliminary experiments using artificially generated test images show that our method achieves excellent motion de-blurring.
Previously we presented a method for selective sharpness enhancement of monochrome images. Our method is based on the simultaneous nonlinear reaction-diffusion time-evolution equipped with a nonlinear diffusion term, a reaction term and an overshooting term, and it can sharpen only degraded edges blurred by several causes without increasing the visibility of nuisance factors such as random noise. This paper extends our method to selective sharpening of color images. As to how to extend it, we consider several variations in the treatment of the three color components and the selection of the color space. Through experiments, we quantitatively evaluate the performance of these variations. Among them, the collective treatment of color components based on the simultaneous full-nonlinear reaction-diffusion time-evolution achieves the best performance, and sharpens blurred color edges selectively much better than existing sharpness enhancement methods such as the adaptive peaking method.
KEYWORDS: Image segmentation, Buildings, Data conversion, Laser range finders, 3D image processing, Defect detection, Algorithm development, Detection and tracking algorithms, 3D acquisition, Image processing algorithms and systems
Some types of laser range scanner measuring range and color data simultaneously are used to acquire 3D structure of outdoor scenery. However, a laser range scanner cannot give us perfect information about target objects such as buildings, and various factors incur defects of range data. We present a defect detection scheme based on the region segmentation using observed range and color data, and apply a time-evolution method to the repair of defective range data. As to the defect detection, performing the segmentation, we divide observed data into several regions corresponding to buildings, the ground and so on. Using the segmentation results, we determine defect regions. Given defect regions, their range data will be repaired from observed data in their neighborhoods. Reforming the transportation-based inpainting algorithm, previously developed for the defect repair of an intensity image, we construct a new defect-repair method that applies the interleaved sequential updates, composed of the transportation-based inpainting and the data projection onto the viewing direction of each range sample, to 3D point data converted from observed range data. The performance evaluations using artificially damaged test range data demonstrate that our repair method outperforms the existing repair methods both in quantitative performance and in subjective quality.
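For context only, the sketch below fills defect regions of a range map by iterated neighborhood averaging (harmonic inpainting), keeping the observed samples fixed. This is a deliberately simple baseline, not the transportation-based inpainting with per-sample data projection described above.

```python
import numpy as np
from scipy.ndimage import convolve

def diffusion_inpaint(range_img, defect_mask, n_iter=500):
    """Fill defect regions of a range map by iterated 4-neighbor averaging
    (harmonic inpainting). defect_mask is a boolean array marking defects;
    known samples are kept fixed and only defect pixels are updated.
    A deliberately simple baseline, not the transportation-based method."""
    k = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / 4.0
    x = np.where(defect_mask, range_img[~defect_mask].mean(), range_img)
    for _ in range(n_iter):
        avg = convolve(x, k, mode="nearest")
        x = np.where(defect_mask, avg, range_img)
    return x
```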
KEYWORDS: Image segmentation, Buildings, 3D image processing, Data conversion, Defect detection, Laser range finders, Data acquisition, 3D acquisition, Image processing algorithms and systems, Detection and tracking algorithms
Some types of laser range scanner can measure both range data and color texture data simultaneously from the same viewpoint, and are often used to acquire the 3D structure of outdoor scenery. However, for outdoor scenery, a laser range scanner unfortunately cannot give us perfect range information about target objects such as buildings, and various factors incur critical defects in the range data. We present a defect detection method based on region segmentation using observed range and color data, and employ a nonlinear PDE (Partial Differential Equation)-based method to repair detected defect regions of range data. As to the defect detection, performing range-and-color segmentation, we divide observed data into several regions that correspond to buildings, trees, the sky, the ground, persons, street furniture, etc. Using the segmentation results, we extract occlusion regions of buildings as defect regions. Once the defect regions are extracted, 3D position data or range data will be repaired from the observed data in their neighborhoods. For that purpose, we adapt the digital inpainting algorithm, originally developed for the color image repair problem, to this 3D range data repair problem. This algorithm is formulated as a nonlinear time-evolution procedure based on a geometrical nonlinear PDE.
In existing single solid-state color image sensors, three primary color filter arrays are pasted on the surface of the photo detectors according to the Bayer's pattern. To utilize the information conveyed by the incoming light more efficiently, we can employ new color filters whose passbands overlap each other to a larger extent. As the filter array pattern we adopt the Bayer's pattern, but unlike the three primary color filters we use only two color filters, the red and blue color filters, and we replace the green pixels of the primary color filters with white-black pixels, that is, pixels on which no light-absorption chemicals are pasted. In this scheme, the key point is to construct a demosaicking method suitable to these color filter arrays. We employ a hybrid demosaicking method that can restrain the occurrence of false color caused by the demosaicking and preserve original hue variations as thoroughly as possible while enhancing the spatial resolution of the restored image. The hybrid demosaicking method first applies the Landweber-type iterative algorithm, equipped with the frequency-band limitation corresponding to the sub-sampling pattern of the white-black pixel array, to the interpolation of the white-black pixels, and then performs the interpolation of the red and blue pixels with an existing chrominance-preserving-type method such as the gradient-based method. Experiments using test color images demonstrate that our hybrid demosaicking method reproduces a sharpened high-resolution color image without producing noticeable false-color artifacts. Our color image acquisition scheme gives a good compromise between false color occurrence and high-fidelity color reproduction.
This paper presents a Landweber-type iterative demosaicking method for a single solid-state color image sensor using three primary color filters of the Bayer's pattern. The iterative demosaicking method can restore a high-resolution color image much better than the existing non-iterative demosaicking methods using switchover-type interpolation filters. This paper forms the iterative demosaicking method by extending the idea of our previously presented image acquisition scheme using an imperfect image sensor with defect pixels. Our previous scheme prepares a defect-pixel map in advance, and then takes a defocused image with an imperfect image sensor. Utilizing the defect-pixel map and the information that each pixel shares with its adjacent pixels, it recovers defect pixels and simultaneously sharpens the blurred image. As the recovery technique, we formed the Landweber-type iterative algorithm. Taking into account that the decimated color pixels caused by the color filters correspond to defect pixels in the above recovery problem, we adapt our previous iterative recovery method to the demosaicking problem. Furthermore, to restrain the occurrence of false color caused by the demosaicking and simultaneously to preserve original hue variations as thoroughly as possible, this paper forms a hybrid demosaicking method that first applies the Landweber-type iterative algorithm to the interpolation of the green color component and then performs the interpolation of the red and the blue components with an existing hue- or chrominance-preserving-type method. Experiments using test images and color images actually taken with a high-resolution digital color camera demonstrate that our hybrid demosaicking method reproduces a sharpened high-resolution color image without producing noticeable false-color artifacts.
KEYWORDS: Image segmentation, 3D image processing, Laser scanners, 3D scanning, Image registration, 3D modeling, 3D metrology, Multimedia, Buildings, Data conversion
Toward future 3D image communication, we have started studying the Multimedia Ambiance Communication, a kind of shared-space communication, and adopted an approach to design the 3D-image space using actual images of outdoor scenery, by introducing the concept of the three-layer model of long-, mid- and short-range views. The long- and mid-range views do not require precise representation of their 3D structure, and hence we employ the setting representation like stage settings to approximate their 3D structure according to the slanting-plane-model. We deal with an approach to produce the consistent setting representation for describing long- and mid-range views from range and texture data measured with a laser scanner and a digital camera located at multiple viewpoints. The production of such a representation requires the development of several techniques: nonlinear smoothing of raw range data, plane segmentation of range data, registration of multi-viewpoint range data, integration of multi-viewpoint setting representations and texture mapping onto each setting plane. In this paper, we concentrate on the plane segmentation and the multi-viewpoint data registration. Our plane segmentation method is based on the concept of the region competition, and can precisely extract fitting planes from the range data. Our registration method uses the equations of the segmented planes corresponding between two different viewpoints to determine the 3D Euclidean transformation between them. A unifying consistent setting representation can be constructed by integrating multiple setting representations for multiple viewpoints.
In old movie film, most sharp brightness transitions have been blurred, and film materials are often corrupted by several distortions such as blotches. To restore original edge sharpness without increasing the visibility of such distortions, we first characterize such distortion areas, repair them, and then sharpen only the blurred edges selectively. This paper presents a locally-adaptive sharpening method based on the coupled nonlinear reaction-diffusion time-evolution equipped with a second-order nonlinear smoothing term, a reaction term and an overshooting term. The overshooting term adds an overshoot only to the blurred edges. The coupled nonlinear reaction-diffusion method utilizes information about local image contents and the characterized film distortions to control the degree of the second-order smoothing and the magnitude of the overshoot to be added. Our method sharpens blurred edges selectively much better than our previously presented adaptive peaking method. Our method is, of course, also applicable to sharpness enhancement of general blurred images.
Old film restoration involves the development of image processing technology. We focus on interframe image processing algorithms for the correction of film misalignment, flicker correction and the removal of blotches. In digital restoration, the film is first read with a scanner, but the scanned sequence often suffers from irregular spatial vibration due to inaccurate frame alignment. Hence, we develop a robust correction method for estimating interframe misalignment separately from camera work and compensating for it. After the correction of interframe misalignment, we perform flicker correction. Flickers are defined as undesirable brightness fluctuations. We present a hierarchy of flicker-correction models for correcting old film flickers, and develop a flicker correction method which first estimates the correction-model parameters from the input sequence and then corrects its flickers according to the estimated model. Furthermore, we present a method for blotch removal. Blotch distortions are repaired with a blending-type filter. For blotch detection we employ our previously presented spatiotemporal continuity analysis approach. Simulations on actually corrupted sequences have demonstrated that our interframe image processing techniques can reduce film misalignment and can remove flickers and blotches almost perfectly.
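As a minimal sketch of the simplest level of such a flicker-correction model hierarchy, the code below assumes a global affine brightness model per frame, frame ≈ a*reference + b, estimates its gain and offset by least squares against a reference frame, and inverts it; the affine model and the choice of reference are assumptions, not necessarily the models used in the paper.

```python
import numpy as np

def estimate_affine_flicker(frame, reference):
    """Estimate gain a and offset b of the global affine flicker model
    frame ~= a * reference + b by least squares (the simplest level of a
    flicker-correction model hierarchy; assumed here for illustration)."""
    x = reference.ravel(); y = frame.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def correct_flicker(frame, reference):
    a, b = estimate_affine_flicker(frame, reference)
    return (frame - b) / a      # invert the estimated brightness fluctuation
```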
The method of image acquisition is one of the principal problems in handling very high resolution images. This paper presents a new scheme for acquiring high resolution pictures by processing stereo images. The method integrates low resolution images into a high resolution image. An analysis of the achievable passband is given from the point of view of the imager photodetector structure. Compared to an imaging device with equivalent resolution, the scheme may be less sensitive to shot noise, which becomes more dominant as the pixel size of an imager is further reduced. Preliminary experiments have shown clear improvements in the high frequencies and image details. In addition, a scheme for further increasing the resolution is mentioned.
Super high resolution images with more than 2,000×2,000 pixels will play a very important role in a wide variety of applications of future multimedia communications, ranging from electronic publishing to broadcasting. To make communication of super high resolution images practicable, we need to develop image coding techniques that can compress super high resolution images by a factor of 1/10 to 1/20. Among existing image coding techniques, the sub-band coding technique is one of the most suitable. In its application to high-fidelity compression of super high resolution images, one of the major problems is how to encode high frequency sub-band signals. High frequency sub-band signals are well modeled as having an approximately memoryless probability distribution, and hence the best way to solve this problem is to improve the quantization of high frequency sub-band signals.
From the standpoint stated above, the work herein first compares three different scalar quantization schemes and improved permutation codes, which the authors previously developed by extending the concept of permutation codes, in terms of quantization performance for a memoryless probability distribution that well approximates the real statistical properties of high frequency sub-band signals, and thus demonstrates that at low coding rates improved permutation codes outperform the other scalar quantization schemes and that their superiority decreases as the coding rate increases. Moreover, based on these results, the work herein develops rate-adaptive quantization techniques in which the number of bits assigned to each subblock is determined according to the signal variance within the subblock and the proper quantization scheme is chosen from among different types of quantization schemes according to the allocated number of bits, and applies them to the high-fidelity encoding of high frequency sub-band signals of super high resolution images to demonstrate their usefulness.
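A minimal sketch of variance-driven bit allocation of the kind described above: the classical rule b_i = R + (1/2) log2(sigma_i^2 / geometric mean of the variances) assigns more bits to higher-variance subblocks. The clipping and rounding policy in the sketch is an illustrative choice.

```python
import numpy as np

def allocate_bits(subblock_variances, mean_rate):
    """Classical variance-based bit allocation for subblocks:
        b_i = R + 0.5 * log2(var_i / geometric_mean(var)),
    then clipped to be nonnegative and rounded (an illustrative policy).
    Subblocks with larger variance receive more bits."""
    v = np.asarray(subblock_variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    bits = mean_rate + 0.5 * np.log2(v / geo_mean)
    return np.maximum(np.round(bits), 0).astype(int)

# usage: four subblocks, average budget of 3 bits per sample
print(allocate_bits([0.1, 1.0, 4.0, 16.0], mean_rate=3))
```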
The work herein, applying the concept of irreversible coding via copying to low-rate video compression, has devised some new variants of universal pattern-matching interframe coding (PMIC), which has the additional effect of generalizing the definition of the search area used in conventional block-matching motion compensation. We have experimentally shown the performance gain provided by this generalization within the framework of irreversible coding via copying, and demonstrated that PMIC is a useful and promising basic means for low-rate video compression.