We have been developing a wearable interface system, "BOWL ProCam (BOdy-Worn Laser Projector Camera)", that provides the user with a mixed-reality interface without any head-mounted devices. The BOWL ProCam is equipped with a laser projector with a long focal depth, a high-definition fish-eye camera for wide-range situation understanding, and attitude sensors for projection stabilization. In this paper, we first present a simulation-based evaluation of the proper body position for wearing a projector-camera system in the context of real-world task support. Based on the results, the upper chest was selected as the wearing position. Next, we briefly describe interaction techniques that effectively employ both nearby projection surfaces, such as the user's hands, and distant projection surfaces, such as a tabletop or wall. This paper then presents preliminary experiments on active-stereo and hand-posture-classification techniques for realizing such interaction, using a proof-of-concept system built with a conventional light-bulb projector.
The factorization method by Tomasi and Kanade gives a stable and accurate reconstruction; however, it is difficult to apply to real-time applications. We therefore present an iterative factorization method for the GAP model that operates while tracking feature points. In this method, the motion and shape are reconstructed at every frame from a fixed-size measurement matrix whose size is independent of the number of frames. Experiments demonstrate the performance of the proposed iterative method.
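The defining property above, a per-frame update whose cost and memory do not grow with the number of frames, can be illustrated with a generic sketch. This is not Fujiki and Kurata's exact recursion (the function name and the update rule are assumptions); it simply accumulates a fixed-size P x P Gram matrix of the tracked measurements and re-extracts a rank-3 shape basis at every frame:

```python
import numpy as np

def iterative_affine_factorization(frames):
    """Incremental rank-3 factorization sketch (illustrative, not the
    authors' exact method).

    frames: iterable of (2, P) arrays, each holding the centered image
    coordinates of P tracked feature points in one frame.

    Instead of stacking a growing 2F x P measurement matrix, accumulate
    the fixed-size P x P Gram matrix G = sum_f W_f^T W_f; its top-3
    eigenvectors span the affine shape space, and each frame's motion
    rows follow by projecting the measurements onto that basis.
    """
    G = None
    motions = []
    for W_f in frames:
        if G is None:
            G = np.zeros((W_f.shape[1], W_f.shape[1]))
        G += W_f.T @ W_f                          # fixed-size update: O(P^2) memory
        _, eigvecs = np.linalg.eigh(G)            # eigenvalues in ascending order
        S = eigvecs[:, -3:].T                     # (3, P) shape, up to a 3x3 affinity
        motions.append(W_f @ np.linalg.pinv(S))   # (2, 3) motion for this frame
    return motions, S
```

Note that the recovered shape and motion are determined only up to a 3x3 affine transform, so a separate metric-upgrade step is still required; for a globally consistent result, earlier motions would be re-projected onto the final basis.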
We present an intuitive interpretation of affine epipolar geometry for the orthographic, scaled orthographic, and paraperspective projection models in terms of the factorization method for the generalized affine projection (GAP) model proposed by Fujiki and Kurata (1997). Using the GAP model introduced by Mundy and Zisserman (1992), each affine projection model can be reduced to the orthographic projection model by introducing virtual image planes; the affine epipolar geometry can then be obtained directly from the estimates produced by the factorization method. We present experiments on synthetic data and real images, and also demonstrate reconstruction of the dense 3D structure of an object.
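The paper derives the affine epipolar geometry from quantities already estimated by the factorization; for comparison, the standard affine epipolar constraint ax' + by' + cx + dy + e = 0 can also be fitted directly to point correspondences. A minimal sketch (the function name and interface are assumptions):

```python
import numpy as np

def affine_epipolar_fit(pts1, pts2):
    """Least-squares fit of the affine epipolar constraint
        a*x' + b*y' + c*x + d*y + e = 0
    to correspondences pts1 -> pts2, each an (N, 2) array with N >= 4.
    The coefficient vector is the right singular vector of the N x 5
    design matrix associated with its smallest singular value.
    """
    rows = np.hstack([pts2, pts1, np.ones((len(pts1), 1))])  # rows [x', y', x, y, 1]
    _, _, Vt = np.linalg.svd(rows)
    coeffs = Vt[-1]                                          # (a, b, c, d, e)
    return coeffs / np.linalg.norm(coeffs[:4])               # fix the scale ambiguity
```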
The factorization method has been used for recovering both the shape of an object and the motion of a camera from sequential images. The method consists of two steps: the first decomposes the measurement matrix into a product of two matrices, and the second determines a nonsingular matrix that revises these matrices. The mathematical underpinnings of this method have received little attention. In this paper, we elucidate the mathematical meaning of the second step. This yields an intuitive interpretation of many facts about the shape-from-motion problem: it clarifies why three distinct affine projection images are needed to determine the shape and the camera motion, and what information can be obtained from two affine projection images. We also consider the factorization method for two images.
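For concreteness, here is a minimal sketch of those two steps in the orthographic (Tomasi-Kanade) case; the helper names are illustrative, and the code assumes a registered (mean-subtracted) measurement matrix and non-degenerate data:

```python
import numpy as np

def metric_upgrade(M_hat):
    """Step 2: solve the metric constraints for Q = A A^T, then recover
    the nonsingular 3x3 revision matrix A.  Under orthography, each
    frame's two rows (i, j) of M_hat must satisfy i Q i^T = j Q j^T = 1
    and i Q j^T = 0, which is linear in the 6 entries of symmetric Q.
    """
    def lin(u, v):  # coefficients of u^T Q v in the 6 unknowns of Q
        return [u[0]*v[0], u[0]*v[1] + u[1]*v[0], u[0]*v[2] + u[2]*v[0],
                u[1]*v[1], u[1]*v[2] + u[2]*v[1], u[2]*v[2]]
    rows, rhs = [], []
    for f in range(M_hat.shape[0] // 2):
        i, j = M_hat[2*f], M_hat[2*f + 1]
        rows += [lin(i, i), lin(j, j), lin(i, j)]
        rhs  += [1.0, 1.0, 0.0]
    q = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    Q = np.array([[q[0], q[1], q[2]],
                  [q[1], q[3], q[4]],
                  [q[2], q[4], q[5]]])
    return np.linalg.cholesky(Q)    # A, assuming Q is positive definite

def factorize(W):
    """Step 1 + Step 2 for a registered 2F x P measurement matrix W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M_hat = U[:, :3] * np.sqrt(s[:3])         # 2F x 3 affine motion
    S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]  # 3 x P affine shape
    A = metric_upgrade(M_hat)                 # the nonsingular 'revision' matrix
    return M_hat @ A, np.linalg.inv(A) @ S_hat
```

The role of the third view is visible in step 2: each frame contributes three linear constraints on the six unknowns of Q, and with only two views those constraints are degenerate, in general leaving a one-parameter family of solutions; a third distinct view resolves it.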