It is commonly believed that having more white (panchromatic) pixels in a color filter array (CFA) helps demosaicing performance for images collected in low-light conditions. We present a comparative study evaluating demosaicing performance on images collected under realistic low-light conditions using two CFAs: the standard Bayer pattern (also known as CFA 1.0) and the Kodak CFA 2.0 (an RGBW pattern with 50% white pixels). Using a dataset of 10 images collected in low-light conditions, we observe that having more white pixels does help demosaicing performance. However, some caution is needed in quantifying that performance.
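The two CFAs differ in their fraction of panchromatic pixels, which is easy to see from their repeating tiles. A minimal sketch comparing the 2x2 Bayer tile with a 4x4 RGBW tile follows; the specific RGBW arrangement shown is an illustrative assumption in the spirit of Kodak's CFA 2.0 (white pixels on a checkerboard), not the exact Kodak layout:

```python
import numpy as np

# Standard 2x2 Bayer tile (CFA 1.0): no white pixels.
BAYER = np.array([["R", "G"],
                  ["G", "B"]])

# Illustrative 4x4 RGBW tile: white (W) pixels on a checkerboard,
# occupying 50% of the array, as in Kodak-style CFA 2.0 designs.
RGBW = np.array([["W", "B", "W", "G"],
                 ["B", "W", "G", "W"],
                 ["W", "G", "W", "R"],
                 ["G", "W", "R", "W"]])

def white_fraction(tile: np.ndarray) -> float:
    """Fraction of panchromatic (white) pixels in a CFA tile."""
    return float(np.mean(tile == "W"))

print(white_fraction(BAYER))  # 0.0
print(white_fraction(RGBW))   # 0.5
```

The extra unfiltered pixels collect more photons per site, which is the intuition behind the low-light advantage studied in the paper.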
The objective of this paper is to detect the type of vegetation so that a more accurate Digital Terrain Model (DTM) can be generated by excluding vegetation (such as trees) from the Digital Surface Model (DSM) based on the vegetation type. Many different inpainting methods can then be applied to restore the terrain information at the removed vegetation pixels and obtain a more accurate DTM. We trained three DeepLabV3+ models with three datasets collected at different resolutions. Among the three models, the one trained on the dataset whose image resolution is closest to that of the test images performed best, and its semantic segmentation results look highly promising.
To accurately extract a digital terrain model (DTM), it is necessary to remove heights due to vegetation, such as trees and shrubs, and man-made structures, such as buildings and bridges, from the digital surface model (DSM). The resulting DTM can then be used for construction planning, land surveying, etc. Normally, extracting a DTM involves two steps. First, accurate land cover classification is required. Second, an image inpainting process is needed to fill in the pixels removed due to trees, buildings, bridges, etc. In this paper, we focus on the second step: using image inpainting algorithms for terrain reconstruction. In particular, we evaluate seven conventional and deep learning based inpainting algorithms from the literature using two datasets. Both objective and subjective comparisons were carried out, and some algorithms were observed to yield slightly better performance than others.
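The abstract does not name the seven inpainting algorithms, but the basic step they all perform, filling masked vegetation/building pixels of a DSM from surrounding terrain, can be sketched with a minimal diffusion-style fill (iterative neighbor averaging). This is an illustrative stand-in, not one of the evaluated methods:

```python
import numpy as np

def diffusion_inpaint(dsm: np.ndarray, mask: np.ndarray,
                      iters: int = 200) -> np.ndarray:
    """Fill masked pixels of a DSM by repeated 4-neighbor averaging.

    dsm  : 2-D array of surface heights
    mask : boolean array, True where vegetation/structures were removed
    """
    out = dsm.astype(float).copy()
    # Initialize holes with the mean height of the known terrain.
    out[mask] = np.mean(out[~mask])
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # Only the masked (unknown) pixels are updated each pass.
        out[mask] = avg[mask]
    return out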
KEYWORDS: Image compression, Error analysis, Imaging systems, RGB color model, Video compression, Multispectral imaging, Cameras, Principal component analysis, Mars, Video
We present a high-performance image compression framework for Mastcam images from the Mars rover Curiosity. First, we aim to achieve perceptually lossless compression: four well-known image codecs from the literature were evaluated, and their performance was assessed using four well-known performance metrics. Second, we investigated the impact of error concealment algorithms for handling pixels corrupted by transmission errors in the communication channel. Extensive experiments using actual Mastcam images demonstrate the proposed framework.
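The four performance metrics are not named in the abstract. One objective metric widely used for judging near-lossless codecs is peak signal-to-noise ratio (PSNR), sketched here as an example of the kind of measure involved:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and
    its compressed/reconstructed version. Higher is better; identical
    images give infinity."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A codec targeting perceptually lossless quality would typically be tuned until such objective scores (alongside subjective viewing tests) show no visible degradation.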