KEYWORDS: Visualization, Video, Cameras, Visual process modeling, Motion models, 3D modeling, 3D displays, Eye models, 3D image processing, Visual system
As three-dimensional television (3D TV) and 3D movies become popular, visual discomfort limits
further applications of 3D display technology. Visual discomfort from stereoscopic video is caused by conflicts between
accommodation and convergence, excessive binocular parallax, fast motion of objects, and other factors. Here, a novel method
for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and comfort
zone are analyzed. According to the characteristics of the human visual system (HVS), viewers need only converge their eyes on specific
objects when the camera and background are static. Relative motion should be considered under different camera conditions,
which determine different factor coefficients and weights. A novel visual fatigue prediction model is presented and compared
with the traditional model. The degree of visual fatigue is predicted using multiple linear regression
combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene,
and the total visual fatigue score is computed by the proposed algorithm. Compared with conventional
algorithms that ignore the status of the camera, the approach exhibits reliable performance in terms of correlation
with subjective test results.
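As a rough illustration of the regression step only, the Python sketch below fits a visual-fatigue predictor from per-clip factor scores with ordinary least squares. The factor names, the toy data, and the helper predict_fatigue are assumptions for illustration, not the authors' actual features or dataset.

```python
# Minimal sketch: predicting visual fatigue by multiple linear regression.
# Factor columns and data values are hypothetical placeholders.
import numpy as np

# Each row: [spatial_structure, motion_scale, comfort_zone] scores for one clip.
X = np.array([
    [0.62, 0.31, 0.80],
    [0.45, 0.72, 0.55],
    [0.90, 0.10, 0.95],
    [0.30, 0.85, 0.40],
])
# Subjective visual-fatigue scores (e.g. mean opinion scores) for the same clips.
y = np.array([2.1, 3.4, 1.5, 3.9])

# Least-squares fit of y ~ w0 + w1*x1 + w2*x2 + w3*x3.
A = np.hstack([np.ones((X.shape[0], 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fatigue(factors):
    """Predict a fatigue score from [spatial_structure, motion_scale, comfort_zone]."""
    return w[0] + factors @ w[1:]

print(predict_fatigue(np.array([0.5, 0.5, 0.7])))
```

Since the abstract states that different camera conditions determine different factor coefficients and weights, one natural design is to fit a separate coefficient set per camera condition (static versus moving) rather than a single global regression.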
Three-dimensional (3D) display technology has made great progress over the last several decades, providing
a dramatic improvement in visual experiences. The availability of 3D content is a critical factor limiting the wide
application of 3D technology. An adaptive point tracking method based on the depth map is demonstrated and used
to generate depth maps automatically. The point tracking method used in previous investigations is template
matching, which cannot track points precisely. Instead, an adaptive point tracking method is used in which the window and weights
adapt to the discontinuous edge information and texture complexity of the depth map. In the experiment, a method is realized that
automatically generates depth maps by tracing points between adjacent images. Theoretical analysis and
experimental results show that the presented method tracks feature points precisely and that the depth maps of non-key
images are generated accurately.
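The abstract gives no implementation details of the tracker; the sketch below only illustrates the general idea of choosing the matching-window size from local texture complexity and edge strength before block matching. The thresholds, window sizes, and helper names (local_stats, choose_window, track_point) are assumptions, not the authors' method.

```python
# Sketch of adaptive-window point tracking: the block-matching window is
# sized from local texture complexity and edge strength in the depth map.
# Thresholds and the SAD criterion are illustrative choices only.
import numpy as np

def local_stats(depth, x, y, r=8):
    patch = depth[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1].astype(float)
    texture = patch.std()                          # texture-complexity proxy
    gy, gx = np.gradient(patch)
    edge = np.abs(gx).mean() + np.abs(gy).mean()   # edge-strength proxy
    return texture, edge

def choose_window(depth, x, y):
    texture, edge = local_stats(depth, x, y)
    if edge > 4.0:      # near a depth discontinuity: keep the window small
        return 7
    if texture < 1.0:   # flat, textureless area: enlarge the window
        return 21
    return 13

def track_point(prev_img, next_img, x, y, depth, search=10):
    """Best match for (x, y) in next_img via SAD block matching.
    Assumes the point lies far enough from the image border."""
    w = choose_window(depth, x, y) // 2
    tmpl = prev_img[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    best, best_xy = np.inf, (x, y)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = next_img[y + dy - w:y + dy + w + 1,
                            x + dx - w:x + dx + w + 1].astype(float)
            if cand.shape != tmpl.shape:
                continue
            sad = np.abs(cand - tmpl).sum()
            if sad < best:
                best, best_xy = sad, (x + dx, y + dy)
    return best_xy
```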
Stereo vision is an active research topic in computer vision and 3D video display, and disparity map estimation is one of its
most crucial steps. A novel constant-computational-complexity algorithm based on separable successive weight
summation (SWS) is presented. The proposed algorithm eliminates iteration and makes the aggregation cost independent of the support area size, which saves
computation and memory. A gradient-based similarity measure is also applied to improve the original algorithm. Image
segmentation and edge detection are used to accelerate stereo matching and improve its accuracy:
the edge image is extracted to reduce the search scope of the matching algorithm. A dense
disparity map is obtained through local optimization. Experimental results show that the algorithm is efficient,
reduces matching noise, and improves matching precision at depth discontinuities and in low-texture regions.
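The SWS formulation itself is not spelled out in the abstract; the sketch below only illustrates the separable, constant-per-pixel aggregation idea (running sums along rows, then columns, so the cost of each output value does not grow with the support radius). Function names, the absolute-difference cost, and the winner-takes-all step are assumptions for illustration.

```python
# Illustration of separable cost aggregation with per-pixel cost independent
# of the support-window size, followed by winner-takes-all disparity selection.
# This is not the authors' exact SWS algorithm.
import numpy as np

def separable_aggregate(cost, radius):
    """Sum a per-pixel cost over (2*radius+1)^2 windows using running sums."""
    c = np.cumsum(np.pad(cost, ((0, 0), (1, 0))), axis=1)      # horizontal pass
    h = c[:, 2 * radius + 1:] - c[:, :-(2 * radius + 1)]
    c = np.cumsum(np.pad(h, ((1, 0), (0, 0))), axis=0)         # vertical pass
    return c[2 * radius + 1:, :] - c[:-(2 * radius + 1), :]

def disparity_map(left, right, max_disp=32, radius=7):
    """Dense disparities from aggregated absolute differences (valid region only)."""
    h, w = left.shape
    out_h, out_w = h - 2 * radius, w - max_disp - 2 * radius
    volume = np.empty((max_disp, out_h, out_w))
    for d in range(max_disp):
        diff = np.abs(left[:, max_disp:].astype(float) -
                      right[:, max_disp - d:w - d].astype(float))
        volume[d] = separable_aggregate(diff, radius)[:, :out_w]
    return np.argmin(volume, axis=0)
```

Edge detection and segmentation, as described above, would then restrict which pixels and disparity ranges this search is run over.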
In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede its
development. In this paper we propose several factors affecting human perception of depth as new
quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics, and
scene movement characteristics. They play important roles in the viewer's visual perception: if many objects
move at significant velocity and the scene changes rapidly, viewers feel uncomfortable. We propose a new
algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The mean square
error (MSE) of different blocks is considered both within a frame and between frames of a 3D stereoscopic video. The
depth frame is divided into a number of blocks that overlap by half a block in both the
horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution
of these data is then characterized by kurtosis, focusing on the regions that the human eye mainly gazes at, and weight values
are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, it yields the spatial variation;
when it is applied to the difference between the current and previous frames, it yields the temporal variation and the scene
movement variation. The three factors above are linearly combined, so an objective assessment value of the 3D video is obtained
directly; the coefficients of the three factors are estimated by linear regression. Finally, experimental results
show that the proposed method exhibits high correlation with subjective quality assessment results.
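As a loose sketch of the steps just described, the code below computes overlapping block statistics, summarizes their distribution with kurtosis, and combines three factors linearly. The block size, the particular per-block statistic, the scene-movement proxy, and the coefficient values are assumptions; in the paper the coefficients are obtained by regression against subjective scores.

```python
# Rough sketch: overlapping-block statistics, kurtosis-based summarization,
# and a linear combination of spatial / temporal / scene-movement factors.
# Concrete values and factor definitions here are illustrative assumptions.
import numpy as np

def overlapping_block_values(frame, block=16):
    """Per-block variance, with blocks overlapping by half a block in x and y."""
    step = block // 2
    h, w = frame.shape
    vals = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = frame[y:y + block, x:x + block].astype(float)
            vals.append(np.mean((patch - patch.mean()) ** 2))
    return np.array(vals)

def kurtosis(v):
    """Fourth standardized moment of the block statistics."""
    v = v - v.mean()
    s2 = np.mean(v ** 2)
    return np.mean(v ** 4) / (s2 ** 2 + 1e-12)

def three_factors(depth_cur, depth_prev):
    cur = depth_cur.astype(float)
    diff = cur - depth_prev.astype(float)
    spatial = kurtosis(overlapping_block_values(cur))      # within-frame variation
    temporal = kurtosis(overlapping_block_values(diff))    # inter-frame variation
    movement = np.mean(np.abs(diff))                       # scene-movement proxy
    return np.array([spatial, temporal, movement])

def objective_score(depth_cur, depth_prev, coeffs=np.array([0.5, 0.2, 0.3, 0.1])):
    """Linear combination of the three factors (coefficients are placeholders)."""
    return coeffs[0] + coeffs[1:] @ three_factors(depth_cur, depth_prev)
```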