Proceedings Article | 29 January 2007
KEYWORDS: Tolerancing, Computer architecture, Motion models, Motion estimation, Error analysis, Quantization, Performance modeling, Data modeling, Image processing, Video compression
The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology,
has increased the impact of hardware defects and fabrication process variability, both of which reduce
yield rates. To address these problems, a new approach, system-level error tolerance (ET), has recently been
introduced. Given that a significant percentage of chip production is discarded due to minor imperfections,
this approach accepts imperfect chips whose faults cause only imperceptible or otherwise acceptable system-level
degradation, thereby increasing the overall effective yield. In this paper, we investigate the impact of
hardware faults on video compression performance, with a focus on the motion estimation (ME) process.
More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at faults within
the ME computation. We further present a model for estimating the system-level performance degradation due to
such faults; this model can serve as the basis of an error-tolerance decision strategy for accepting or rejecting a given faulty chip.
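To make the fault model concrete, the following is a minimal sketch of how a single stuck-at fault on one bit of a sum-of-absolute-differences (SAD) unit, the additive metric commonly used in ME, can be modeled in software. The function names, the faulty bit position, and the sample blocks are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a stuck-at fault in a SAD computation.
# All names and values here are illustrative, not from the paper.

def sad(block_a, block_b):
    """Fault-free sum of absolute differences between two blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def stuck_at(value, bit, stuck_to_one):
    """Force one bit of `value` to 0 or 1, modeling a stuck-at fault."""
    mask = 1 << bit
    return (value | mask) if stuck_to_one else (value & ~mask)

def faulty_sad(block_a, block_b, bit, stuck_to_one):
    """SAD whose output register has one stuck bit."""
    return stuck_at(sad(block_a, block_b), bit, stuck_to_one)

# Example: a stuck-at-1 fault on bit 7 inflates small SAD values,
# which can change which candidate block the ME search selects.
a = [10, 12, 9, 11]
b = [10, 13, 9, 10]
print(sad(a, b))               # 2 (fault-free)
print(faulty_sad(a, b, 7, 1))  # 130 (bit 7 forced to 1)
```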
We also show how different faults and ME search algorithms compare in terms of error tolerance, and we identify
the characteristics of a search algorithm that lead to increased error tolerance. Finally, we show that different
hardware architectures performing the same metric computation have different error tolerance characteristics,
and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME
hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where
a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on
which choice minimizes an additive metric of interest.
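As a sketch of this generalization, the snippet below selects among candidates by minimizing an additive metric and shows how a stuck-at fault in the metric unit can flip the decision while only slightly worsening the true metric of the selected candidate, which is the sense in which such faults can be tolerable at the system level. Again, all names and values are hypothetical.

```python
# Illustrative sketch: selection by minimizing an additive metric,
# with and without a stuck-at fault in the metric computation.
# Hypothetical names and data, not from the paper.

def sad(x, y):
    """Additive metric: sum of absolute differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

def stuck_at(value, bit, stuck_to_one):
    """Force one bit of `value` to 0 or 1, modeling a stuck-at fault."""
    mask = 1 << bit
    return (value | mask) if stuck_to_one else (value & ~mask)

def select_best(candidates, reference, metric):
    """Return the candidate minimizing metric(candidate, reference)."""
    return min(candidates, key=lambda c: metric(c, reference))

ref = [10, 13, 9, 10]
cands = [[10, 12, 9, 11], [20, 20, 20, 20], [10, 13, 9, 9]]

best = select_best(cands, ref, sad)
# Same selection with bit 1 of the metric output stuck at 1:
faulty = select_best(cands, ref, lambda c, r: stuck_at(sad(c, r), 1, 1))
# The fault flips the decision, but the chosen candidate's true SAD (2)
# is only slightly worse than the optimum (1): graceful degradation.
print(best, faulty)
```

The system-level cost of the fault is the gap between the true metric of the faulty choice and that of the fault-free choice, not the raw numerical error in the metric itself, which is what makes an error-tolerance analysis of the selection process meaningful.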