Manufacturers of electronic devices such as mobile phones and displays have recently introduced various audiovisual assistive technologies for people with disabilities. Since the birth of television, however, the TV "watching experience" of the visually impaired, for whom TV viewing is the top-ranked leisure activity, has seen little fundamental improvement: they have had to interpret the screen through sound alone or rely on a voice that describes what is shown. To improve this experience for visually impaired people with blurry vision, we propose the world's first visual-aid algorithm implemented on a mass-produced TV and prove its effectiveness through medical trials with many low-vision participants. The algorithm emphasizes the image features most important to human vision, including edges, color, and contrast, so that low-vision viewers with significantly reduced contrast sensitivity can better understand the screen. Because the benefit, restoring the pleasure of watching TV specifically for low-vision people, can only be judged by the viewers themselves, clinical trials were essential, and their results show that the proposed algorithm is meaningfully beneficial to the visually impaired. Together with our simulation experiments, the clinical trials indicate that the TV viewing experience of the visually impaired, and ultimately their quality of life, can be improved.
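The abstract names edge, color, and contrast emphasis as the key operations. As an illustration only (the paper's actual filter, gains, and color processing are not public), a minimal grayscale sketch of edge emphasis via unsharp masking plus a contrast stretch might look like:

```python
import numpy as np

def enhance_for_low_vision(img, edge_gain=1.5, contrast_gain=1.3):
    """Toy visual-aid filter: boost edges (unsharp mask) and contrast.

    `img` is a float grayscale array in [0, 1]. The gain values are
    illustrative assumptions, not parameters from the paper.
    """
    h, w = img.shape
    # 3x3 box blur via edge-padding and neighbourhood averaging
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    # Unsharp masking: add back the high-frequency (edge) component
    sharpened = img + edge_gain * (img - blur)
    # Simple contrast stretch about mid-grey
    out = 0.5 + contrast_gain * (sharpened - 0.5)
    return np.clip(out, 0.0, 1.0)
```

A production system would operate on color video in real time; this sketch only shows why boosting the high-frequency residual and stretching contrast helps viewers with reduced contrast sensitivity.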
This paper presents a technical ensemble of a mobile projector and a smartphone camera that provides a colorimetrically calibrated, realistic viewing experience anywhere. Since portable projectors can be used in any place without a dedicated white screen, it is essential to calibrate the color of the projected video regardless of the colored surface used as a screen. That is, a calibration process is required to deliver the consistent, intended color experience when using mobile projectors. To this end, we build an easy calibration process as follows. First, the (Android or iOS) smartphone and the projector are connected to a common local network, and a mobile application then generates a specific electro-optical (RGB-XYZ) conversion function by capturing the light reflected from a given white reference plate. The smartphone, playing the role of an optical measurement device, transmits the measured light information and calibration instructions to the display system. The projector then adjusts its light output and notifies the smartphone of its status. This 'measurement-adjustment' process is conducted recursively within a few seconds and completes when the adjustment result meets a stop condition for the calibration target on the given colored wall screen. Our experimental results of photometric calibration using many kinds of recently released Android and iOS smartphones prove the effectiveness of the method.
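The recursive 'measurement-adjustment' loop described above can be sketched as follows. The `measure` and `adjust` callbacks, the tolerance, and the iteration cap are all hypothetical stand-ins; the paper's actual conversion function and stop condition are not specified here.

```python
def calibrate(measure, adjust, target_xyz, tol=2.0, max_iters=20):
    """Iterative measurement-adjustment loop between phone and projector.

    `measure()` returns the XYZ tristimulus values the phone camera reads
    off the wall; `adjust(error)` asks the projector to compensate. Both
    are hypothetical interfaces for illustration.
    """
    for i in range(max_iters):
        xyz = measure()
        error = [t - m for t, m in zip(target_xyz, xyz)]
        if max(abs(e) for e in error) < tol:   # stop condition met
            return i, xyz
        adjust(error)                          # projector adjusts output
    return max_iters, measure()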
Digital cameras under dark illumination produce artifacts such as motion blur in a long-exposure shot or salient noise
corruption in a short-exposure (high-ISO) shot. To suppress such artifacts effectively, multi-frame fusion approaches
that combine multiple short-exposure images have been studied actively, and they have recently been
applied to various consumer digital cameras for practical still-shot stabilization. However, performing both multi-frame noise filtering and brightness/color appearance restoration well from a set of images acquired in a harsh low-light situation demands high computational complexity and cost.
In this paper, we propose a new fusion-based low-light stabilization approach that takes one proper-/long-exposure blurry image as well as multiple short-exposure noisy images as input. First, coarse-to-fine motion-compensated noise filtering produces a clean image from the multiple short-exposure images. Then, online low-light image restoration recovers a good visual appearance from the denoised image using the blurry long-exposure input image. More specifically, the noise filtering is performed by simple block-wise temporal averaging based on between-frame motion information, which yields a denoising result with even better detail preservation. Our simulation and real-scene tests demonstrate the potential of the proposed algorithm for fast and effective low-light stabilization on a programmable computing platform.
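The temporal-averaging step described above can be illustrated with a minimal sketch. Here the paper's block-wise motion search is replaced by given global integer shifts (a hypothetical interface), so only the motion-compensated averaging itself is shown.

```python
import numpy as np

def temporal_denoise(frames, motions):
    """Motion-compensated temporal averaging of short-exposure frames.

    `frames` is a list of float grayscale arrays; `motions[k]` is the
    (dy, dx) integer translation aligning frame k to frame 0. A real
    pipeline estimates motion per block; assuming it known keeps this
    sketch focused on the averaging step.
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for f, (dy, dx) in zip(frames, motions):
        # align each frame to the reference, then accumulate
        acc += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    # averaging N aligned frames reduces zero-mean noise std by sqrt(N)
    return acc / len(frames)
```

The variance reduction is the core of the approach: with N aligned frames, independent sensor noise shrinks by a factor of sqrt(N) while scene detail is preserved.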
Motion blur is usually modeled as the convolution of a latent image with a motion blur kernel, and most
current deblurring methods restrict motion blur to the uniform case under this convolution model. However,
real motion blur is often non-uniform, and consequently these methods may not remove real motion
blur caused by camera shake well. To utilize the existing methods in practice, it is necessary to understand how
much the uniform motions (i.e., translations) can approximate real camera shakes. In this paper, we analyze the
displacement of real camera motions on image pixels and present the practical coverage of uniform motions (i.e.,
translations) to approximate complicated real camera shakes. We first mathematically analyze the difference in
motion displacement between the optical axis and the image boundary under real camera shakes, and then derive
the practical coverage of uniform motion deblurring methods when used for real blurred images. The coverage
can effectively guide how far one may rely on existing uniform motion deblurring methods, and highlights the
need to model real camera shakes accurately rather than assuming uniform motions.
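The displacement difference between the optical axis and the image boundary can be made concrete with a small numerical sketch. Assuming a pure camera rotation and a distant scene, pixel motion is governed by the homography H = K R K^{-1}; the focal length and principal point below are illustrative values, not parameters from the paper.

```python
import numpy as np

def rotation_displacement(point, angle_deg, focal=1000.0, cx=960.0, cy=540.0):
    """Pixel displacement induced by a camera rotation about the y-axis.

    Uses the pure-rotation homography H = K R K^{-1}. The intrinsics
    (focal, cx, cy) are illustrative assumptions.
    """
    th = np.radians(angle_deg)
    K = np.array([[focal, 0.0, cx], [0.0, focal, cy], [0.0, 0.0, 1.0]])
    R = np.array([[np.cos(th), 0.0, np.sin(th)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(th), 0.0, np.cos(th)]])
    H = K @ R @ np.linalg.inv(K)
    p = np.array([point[0], point[1], 1.0])
    q = H @ p
    q = q[:2] / q[2]
    # Euclidean distance the pixel moves under the rotation
    return float(np.hypot(*(q - p[:2])))
```

Evaluating this at the principal point versus a point near the image boundary shows the displacement growing toward the boundary, which is exactly the non-uniformity that limits the coverage of translation-only deblurring models.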
The Lucas-Kanade algorithm and its variants have been successfully used in numerous computer vision tasks
that include image registration as a component. In this paper, we propose a Lucas-Kanade based
image registration method using camera parameters. We decompose a homography into camera intrinsic and
extrinsic parameters, and assume that the intrinsic parameters are given, e.g., from the EXIF information of
a photograph. We then estimate only the extrinsic parameters for image registration, considering two types of
camera motions, 3D rotations and full 3D motions with translations and rotations. As the known information
about the camera is fully utilized, the proposed method can perform image registration more reliably. In addition,
as the number of extrinsic parameters is smaller than the number of homography elements, our method runs
faster than the Lucas-Kanade based registration method that estimates a homography itself.
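For the rotation-only motion type described above, the warp is the homography H = K R(ω) K^{-1}, parameterized by a 3-vector axis-angle rotation ω instead of 8 homography elements. The sketch below shows only this parametrization (via the Rodrigues formula), not the authors' full Lucas-Kanade solver; the intrinsic matrix K is assumed given, e.g., from EXIF.

```python
import numpy as np

def rotation_homography(omega, K):
    """Warp of the rotation-only registration model: H = K R(omega) K^{-1}.

    `omega` is an axis-angle (Rodrigues) rotation vector, so only 3
    parameters are estimated instead of 8 homography elements.
    """
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = omega / theta
        # cross-product (skew-symmetric) matrix of the rotation axis
        Kx = np.array([[0.0, -k[2], k[1]],
                       [k[2], 0.0, -k[0]],
                       [-k[1], k[0], 0.0]])
        # Rodrigues rotation formula
        R = np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * (Kx @ Kx)
    return K @ R @ np.linalg.inv(K)
```

Because the Jacobian of this warp is taken with respect to only three (or six, with translation) extrinsic parameters, each Gauss-Newton step of a Lucas-Kanade style solver is smaller and better conditioned than estimating the full homography, which is the source of the speed and reliability gains claimed above.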