Factored light-field (LF) technology helps resolve the vergence-accommodation conflict inherent in most conventional stereoscopic displays. The remaining challenges include decreasing the computational cost of light-field factorization and improving image quality. We prototyped a dual-layer light-field stereoscope that uses a smartphone as a display, and we implemented and compared three methods of rank-one LF factorization and two ways of initializing them. The weighted rank-one residual iterations (WRRI) and weighted nonnegative matrix factorization (WNMF) proved almost twice as fast as Huang et al.'s method in our implementation. Our tests revealed that the best initialization for all three methods is the square root of the LF central-view values: with it, one or two iterations are enough to achieve acceptable image quality.
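As a rough illustration of the rank-one updates compared above, the sketch below implements a generic weighted alternating-least-squares step in the WRRI spirit, initialized with the square root of the central view as the abstract suggests. The variable names, the clamping to [0, 1], and the matrix layout (equal front/rear panel resolutions, views flattened so the two display layers become the two factors) are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_rank1_lf(L, W, central_view, n_iters=2):
    """Approximate the light-field matrix L (with weights W) by the outer
    product a * b^T, where a and b are the flattened front/rear layer images.
    Assumes both layers have the same resolution as the central view."""
    a = np.sqrt(central_view).ravel().astype(float)  # front layer init
    b = a.copy()                                     # rear layer init
    eps = 1e-8
    for _ in range(n_iters):
        # closed-form weighted least-squares update of each rank-one factor
        a = (W * L) @ b / (W @ (b * b) + eps)
        np.clip(a, 0.0, 1.0, out=a)                  # valid panel transmittance
        b = (W * L).T @ a / (W.T @ (a * a) + eps)
        np.clip(b, 0.0, 1.0, out=b)
    return a, b
```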
In this paper, we propose a new, fast, and effective approach to automatic visibility enhancement of images with poor global and local contrast. We initially developed the technique for scanned images with dark and light background regions and low visibility of foreground objects in both types of region. The proposed algorithm carries out locally adaptive tone mapping by means of a variable S-shaped curve built from a cubic Hermite spline. The starting and ending points of the spline depend on the global brightness contrast, whereas the tangents depend on the local distribution of background and foreground pixels. Tangent variation between adjacent areas is smoothed to avoid visible artifacts. We describe several optimization tricks that enable a high-speed implementation. We compare the proposed method with several well-known image enhancement techniques by estimating the Michelson contrast (also known as the visibility metric) on a number of test patterns; the proposed algorithm outperforms the tested alternatives. Finally, we extend the method to photo enhancement and correction of hazy images.
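For concreteness, here is a minimal sketch of such an S-shaped tone curve built on cubic Hermite basis functions; the endpoint/tangent parametrization is an illustrative assumption rather than the paper's exact formulation. (The Michelson contrast used for evaluation is the standard (I_max − I_min) / (I_max + I_min).)

```python
import numpy as np

def hermite_tone_curve(x, x0, x1, m0, m1):
    """Map brightness x through a cubic Hermite spline from (x0, 0) to (x1, 1).
    Endpoints would come from global contrast, tangents m0, m1 from local
    background/foreground statistics; small tangents yield an S-shape."""
    t = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
    h10 = t**3 - 2*t**2 + t            # Hermite basis functions
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    dx = x1 - x0
    y = h10*dx*m0 + h01 + h11*dx*m1    # p0 = 0, p1 = 1
    return np.clip(y, 0.0, 1.0)
```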
The paper is devoted to a novel high-performance algorithm for automatic segmentation and skew correction of several objects on a scanned image. The complex multi-stage technique includes preprocessing, initial segmentation, classification of connected regions, merging of fragmented regions by a heuristic procedure, bounding-box detection, and deskew of rectangular objects. Our method is highly efficient owing to the unification of most operations into a single pass. The algorithm provides users with additional functionality and convenience, and it is evaluated with the proposed quantitative quality criteria.
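The deskew stage alone can be sketched as below, assuming OpenCV and a per-object binary mask produced by the earlier segmentation stages; estimating the skew angle from the minimum-area rectangle is a common substitute and not necessarily the paper's method.

```python
import cv2

def deskew_object(image, mask):
    """Rotate `image` so the rectangular object in `mask` becomes axis-aligned.
    `mask` is a uint8 binary image; the angle convention follows OpenCV >= 4.5,
    where minAreaRect returns an angle in (0, 90]."""
    pts = cv2.findNonZero(mask)                    # object pixels
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)
    if angle > 45.0:                               # normalize to a small rotation
        angle -= 90.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(image, M, (image.shape[1], image.shape[0]),
                          flags=cv2.INTER_LINEAR, borderValue=255)
```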
The paper is devoted to an algorithm for generating PDF files with vector symbols from scanned documents. The complex multi-stage technique includes segmentation of the document into text/drawing areas and background, conversion of symbols to lines and Bézier curves, and storage of the compressed background and foreground. In the paper we concentrate on symbol conversion, which comprises segmentation of symbol bodies with resolution enhancement, contour tracing, and approximation. The presented method outperforms competing solutions and achieves the best compression-rate/quality ratio. Scaling the initial document to other sizes, as well as several print/scan-to-PDF iterations, exposes the advantages of the proposed way of handling document images. A numerical vectorization quality metric was elaborated. OCR results and a user opinion survey confirm the high quality of the proposed method.
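The tracing-plus-approximation step can be sketched with OpenCV as below; we substitute a Douglas-Peucker polyline for the paper's Bézier fitting, and the tolerance value is an assumption.

```python
import cv2

def vectorize_symbol(glyph_bitmap, tolerance=1.0):
    """Trace the outer and inner outlines of a binary symbol bitmap and
    approximate each with a polyline. The paper additionally enhances
    resolution first and fits Bezier curves; those steps are omitted here."""
    contours, _ = cv2.findContours(glyph_bitmap, cv2.RETR_CCOMP,
                                   cv2.CHAIN_APPROX_NONE)
    return [cv2.approxPolyDP(c, tolerance, closed=True) for c in contours]
```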
Reducing toner consumption is an important task for modern printing devices and has a significant positive ecological impact. Existing toner-saving approaches have two main drawbacks: the appearance of a hardcopy in toner-saving mode is worse than in normal mode, and processing the whole rendered page bitmap incurs significant computational cost.
We propose to add small holes of various shapes and sizes at random places inside the character bitmaps stored in the font cache. This random perforation scheme builds on the processing pipeline in the RIP of the standard printer languages PostScript and PCL. Processing text characters only, and moreover processing each character of a given font and size only once, is an extremely fast procedure. The approach does not deteriorate halftoned bitmaps or business graphics and provides toner savings of up to 15-20% for typical office documents. The toner-saving rate is adjustable.
The altered characters are almost indistinguishable from solid black text because the small holes are placed randomly inside the character regions. The method automatically skips small fonts to preserve their quality. Text processed by the proposed method remains readable, and OCR programs also process the scanned hardcopy successfully.
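A minimal sketch of such perforation on a cached glyph bitmap might look as follows; the square hole shape, the size threshold, and the parameter names are our assumptions, not the paper's exact scheme.

```python
import numpy as np

def perforate_glyph(glyph, saving_rate=0.15, hole_size=2, min_height=12,
                    rng=np.random.default_rng()):
    """Punch small square holes at random places inside a binary glyph
    bitmap (1 = toner) until roughly `saving_rate` of the ink is removed.
    Small fonts are skipped to preserve their quality."""
    h, w = glyph.shape
    if h < min_height:
        return glyph
    out = glyph.copy()
    target = int(saving_rate * out.sum())   # toner pixels to remove
    removed, attempts = 0, 0
    while removed < target and attempts < 10_000:   # bounded attempts
        attempts += 1
        y = int(rng.integers(0, h - hole_size))
        x = int(rng.integers(0, w - hole_size))
        block = out[y:y + hole_size, x:x + hole_size]
        if block.all():                     # keep holes inside the body
            block[:] = 0                    # punch a small hole
            removed += hole_size * hole_size
    return out
```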
Keywords: halftones, image filtering, image processing, RGB color model, linear filtering, visualization, optical filters, high dynamic range imaging, detection and tracking algorithms, printing
A screen, or halftone pattern, appears on the majority of images printed on electrophotographic and inkjet printers as well as offset machines. When such a halftoned image is scanned, a noisy effect called a moiré pattern often appears. Many descreening methods have been proposed; the common approach is adaptive smoothing of the scanned image. However, descreening techniques face the following dilemma: deep screen reduction and restoration of contone images blurs the sharp edges of text and other graphics primitives, while insufficient smoothing leaves the screen visible in halftoned areas.
We propose a novel descreening algorithm primarily intended to preserve the sharpness and contrast of text edges while accurately restoring contone images from halftoned ones. The proposed technique comprises five steps. The first step decreases the edge-transition slope length via local tone mapping with ordering; carried out before adaptive smoothing, it allows better preservation of edges. The adaptive low-pass filter applies a simplified idea of the Non-Local Means filter for area classification: similarity is computed between the central block of the window and a randomly selected adjacent block. If the similarity is high, the current pixel belongs to a flat region; otherwise it belongs to an edge region. To prevent edge blurring, flat regions are smoothed more strongly than edge regions. Random block selection avoids the computational overhead of exhaustive directional edge detection.
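A minimal sketch of this one-random-block classification follows, assuming a grayscale float image and a pixel far enough from the border; the block size, search radius, and threshold are illustrative assumptions.

```python
import numpy as np

def is_flat_pixel(img, y, x, block=5, radius=6, thresh=10.0,
                  rng=np.random.default_rng()):
    """Compare the block centered at (y, x) with one randomly chosen nearby
    block; high similarity -> flat region, low similarity -> edge region.
    (A real implementation would exclude the zero offset and handle borders.)"""
    h = block // 2
    center = img[y - h:y + h + 1, x - h:x + h + 1]
    dy, dx = rng.integers(-radius, radius + 1, size=2)  # random adjacent block
    other = img[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
    dist = np.mean(np.abs(center - other))              # block dissimilarity
    return dist < thresh                                # True = flat region
```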
The final three stages are an additional decrease of the edge-transition slope length using local tone mapping, an increase of local contrast via a modified unsharp-mask filter that uses a bilateral filter with a special edge-stop function for modest smoothing of edges, and global contrast stretching. These stages compensate for the sharpness and contrast lost to low-pass filtering and enhance the visual quality of the scanned image.
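The unsharp-mask stage could be sketched as below, with a plain bilateral filter standing in for the paper's special edge-stop function; all parameter values are assumptions.

```python
import cv2

def bilateral_unsharp(img, amount=0.6, d=7, sigma_color=30, sigma_space=5):
    """Unsharp mask whose smoothing kernel is a bilateral filter, so local
    contrast is boosted at edges without amplifying residual halftone noise."""
    smooth = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    # out = img + amount * (img - smooth), written as a single weighted sum
    return cv2.addWeighted(img, 1.0 + amount, smooth, -amount, 0)
```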
A test target and criteria were proposed for adjusting parameters to different scanning resolutions and for comparison with existing techniques. The quality of the proposed approach was also evaluated by surveying observers' opinions. According to the obtained results, the proposed algorithm demonstrates good descreening capability.