Image or video inpainting is the process of reconstructing missing or damaged portions of an image so that the restoration is undetectable to an ordinary observer and introduces no undesirable artifacts. An image or video can be damaged by a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, or loss of data during transmission through a channel. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, and superimposed date, text, and time stamps), image synthesis (texture synthesis), panorama completion, image coding, wireless transmission (recovery of missing blocks), protection of digital cultural heritage, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified into two key groups: global and local methods. Global methods generate large image regions from samples, while local methods fill in small image gaps. Each group has its own advantages and limitations. For example, global inpainting methods perform well on textured regions, whereas classical local methods perform poorly on such content. In addition, some of these techniques are computationally intensive, exceeding the capabilities of most mobile devices currently in use. In general, existing inpainting algorithms are not well suited to wireless environments.
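To make the local/global distinction concrete, the snippet below is a minimal sketch, assuming OpenCV (cv2) is installed and that "damaged.png" and "mask.png" are hypothetical input files. cv2.inpaint implements the local, diffusion-style class of methods, which fills small gaps by propagating surrounding pixel values; a global, exemplar-based method would instead search the whole known part of the image for matching patches to copy into large holes.

    # Local (diffusion-style) inpainting sketch; inputs are assumed example files.
    import cv2

    img = cv2.imread("damaged.png")                       # image with missing pixels
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # non-zero pixels mark the gap

    # Two classical local inpainting variants shipped with OpenCV:
    restored_telea = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    restored_ns = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)

    cv2.imwrite("restored_telea.png", restored_telea)
    cv2.imwrite("restored_ns.png", restored_ns)

Both variants work well for thin scratches and small gaps, which is precisely the regime where local methods excel and where a full global patch search would be unnecessarily expensive.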
This paper presents a new and efficient scheme that combines the advantages of both local and global methods in a single algorithm. In particular, it introduces a blind inpainting model that addresses the above problems by adaptively selecting the support area for the inpainting scheme (a hypothetical sketch of this idea follows below). The proposed method is applied to various challenging image restoration tasks, including restoring old photographs, recovering missing data in real and synthetic images, and removing specular reflections from endoscopic images. A number of computer simulations demonstrate the effectiveness of our scheme and illustrate the main properties and implementation steps of the presented algorithm. Furthermore, the simulation results show that the presented method is competitive with the state of the art and compares favorably against many available methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually selected "marked" region is also illustrated. We are currently extending this work to video and 3-D data.
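The abstract does not spell out how the support area is selected; the fragment below is only a hypothetical illustration of the general idea, not the paper's algorithm. It uses local intensity variance, with made-up threshold and window-size parameters, to switch between a small support window (where local, diffusion-style filling suffices) and a larger one (where a global, patch-based search is warranted).

    # Hypothetical adaptive support-area selection; all parameters are illustrative.
    import numpy as np

    def support_radius(image, y, x, smooth_r=3, texture_r=15, var_thresh=50.0):
        """Return a support radius for the damaged pixel at (y, x):
        small in smooth neighbourhoods, large in textured ones."""
        r = texture_r
        y0, y1 = max(0, y - r), min(image.shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(image.shape[1], x + r + 1)
        neighbourhood = image[y0:y1, x0:x1].astype(np.float64)
        # Low local variance suggests a smooth region -> small support suffices.
        return smooth_r if neighbourhood.var() < var_thresh else texture_r

A combined scheme could then route each damaged pixel to a local or a global filler depending on the returned radius; the actual selection rule used in the proposed method is described in the body of the paper.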