A person with an asymmetric maxillofacial skeleton reportedly exhibits asymmetric jaw function and carries a high risk of developing a temporomandibular disorder. A comprehensive analysis from the viewpoint of both morphology and function, covering maxillofacial and temporomandibular joint morphology, dental occlusion, and the features of mandibular movement paths, is therefore essential.
In this study, a 4D jaw movement visualization system was developed to provide a visual understanding of a patient's characteristic jaw movement, 3D maxillofacial skeletal structure, and the alignment of the upper and lower teeth. For this purpose, 3D reconstructed images of the cranial and mandibular bones were obtained by computed tomography, the tooth models were measured with a non-contact 3D measuring device, and the resulting morphological images were integrated and animated on 6-DOF jaw movement data. The system was experimentally applied to a patient with jaw deformity, the movement was visualized, and its usability as a clinical diagnostic support system was verified.
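As a hedged illustration of the final step described above, the sketch below animates a bone model on 6-DOF movement data: each time sample is taken as three Euler angles plus a translation and applied to the mesh vertices. The array layout and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def animate_mandible(vertices, poses):
    """Apply 6-DOF jaw-movement samples to a mandible mesh.

    vertices : (N, 3) array of mandible vertex coordinates (e.g. from CT).
    poses    : iterable of (rx, ry, rz, tx, ty, tz) samples, Euler angles
               in degrees plus a translation, one sample per time frame.
    Yields the transformed vertex array for each frame.
    """
    for rx, ry, rz, tx, ty, tz in poses:
        R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
        yield vertices @ R.T + np.array([tx, ty, tz])
```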
Real-time face and iris detection in video images has gained renewed attention because of its many possible applications: studying eye function, drowsiness detection, virtual keyboard interfaces, face recognition, video processing, and multimedia retrieval. This paper presents a study on using directional templates to detect faces rotated about the coronal axis. The templates are built by extracting directional image information from the regions of the eyes, nose, and mouth. The face position is determined by computing a line integral of the face directional image along the template; the integral reaches a maximum when the template coincides with the face position. The increased value of the line integral computed with the directional template demonstrates improved localization selectivity, and improvements were also found across face sizes and face rotation angles. Based on these results, the new templates should improve selectivity and hence make it possible to restrict computation to fewer templates and to narrow the search region during face and eye tracking. The proposed method runs in real time, is completely non-invasive, and was applied with no background limitation under normal indoor illumination.
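The following minimal sketch shows one way the line-integral criterion could be evaluated, assuming the directional image is the per-pixel gradient orientation and the template stores expected orientations at eye/nose/mouth contour points; the names and the agreement measure (cosine of the orientation difference) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import sobel

def directional_image(gray):
    """Per-pixel gradient orientation (the 'directional image')."""
    g = gray.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    return np.arctan2(gy, gx)

def line_integral(direc, template_pts, template_dirs, x0, y0):
    """Sum of directional agreement along the template placed at (x0, y0).

    template_pts  : (K, 2) pixel offsets of eye/nose/mouth contour points.
    template_dirs : (K,) expected orientation at each point.
    The integral peaks when the template coincides with the face.
    """
    total = 0.0
    for (dx, dy), t in zip(template_pts, template_dirs):
        total += np.cos(direc[y0 + dy, x0 + dx] - t)  # 1 at exact agreement
    return total
```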
To realize face recognition under unrestricted posture, it is necessary to exclude the influence of three-dimensional face pose, which causes shape distortion and changes in the distances between facial parts, and also to remain unaffected by changes in face image size. In this paper, the authors propose an invariant pattern on the face image that is shift- and rotation-invariant in the plane orthogonal to the camera axis and shift-invariant in depth. We also propose a method for detecting face position and face pose for unknown postures, and present an experimental result that verifies the efficacy of the method.
One of the major research issues in 3D range acquisition is the creation of sensor systems with varied functionality and small size. A variety of machine vision techniques have been developed to recover 3D scene geometry from 2D images. Among active sensors, the structured-lighting method has been widely used because of its robustness to illumination noise and its ability to extract the feature information of interest; among passive sensors, stereo vision is popular owing to its simple configuration and easy construction. In this work, we propose a novel visual sensor system for 3D range acquisition that uses an active technique and a passive one simultaneously. The proposed system inherently includes two types of sensor: an active trinocular vision part, which uses the structured-lighting method with multiple lasers, and a conventional passive stereo part. Since each has its own advantages and disadvantages when measuring various objects, we propose sensor fusion algorithms for acquiring more reliable range information from the pair. To see how the proposed sensing system applies to real tasks, we mount it on a mobile robot and perform a series of experimental tests over a variety of robot and environment configurations; the sensing results are discussed in detail.
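The paper proposes its own fusion algorithms; as a hedged stand-in, the sketch below fuses the two aligned range maps by per-pixel confidence weighting, with all array names assumed for illustration.

```python
import numpy as np

def fuse_range_maps(z_active, c_active, z_passive, c_passive):
    """Confidence-weighted fusion of two aligned range maps.

    z_* : (H, W) range maps from the trinocular (active) and stereo
          (passive) parts; np.nan where a sensor returned nothing.
    c_* : (H, W) per-pixel confidence weights in [0, 1].
    """
    wa = np.where(np.isnan(z_active), 0.0, c_active)
    wp = np.where(np.isnan(z_passive), 0.0, c_passive)
    za = np.nan_to_num(z_active)
    zp = np.nan_to_num(z_passive)
    w = wa + wp
    return np.where(w > 0, (wa * za + wp * zp) / np.maximum(w, 1e-9), np.nan)
```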
We present a high-precision fringe pattern projection technique based on a novel 4D hypersurface calibration method, and its application to on-machine measurement of raw stocks in the die-making industry. Our fringe pattern projection technique has the following feature: in the calibration stage, coordinates (x, y) of a CCD image sensor correspond uniquely, for every calibration plane with height Zi (i = 1, ..., n), to a phase φ of a projected fringe pattern and to coordinates (X, Y) of a machine tool. These relationships are converted to hypersurfaces in the 4D spaces (x, y, Z, φ), (x, y, Z, X), and (x, y, Z, Y), which can be regarded as functions. Using these hypersurfaces, measured data (x, y, φ) are transformed to machine tool coordinates (X, Y, Z). The hypersurface calibration method is expected to minimize systematic errors, because it feeds the observed data (x, y, φ) into precise interpolation functions created from actual measurement data, so that systematic errors cancel. The repeatability, systematic errors, and random errors obtained from the experiment show that our measurement system has the potential for highly accurate non-contact 3D shape measurement.
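A minimal sketch of the measurement stage, assuming the calibration stacks sample the three hypersurfaces on n planes and that the phase varies monotonically with height at each pixel; the variable names are illustrative, not the authors' code.

```python
import numpy as np

def to_machine_coords(phi, phi_cal, X_cal, Y_cal, Z_planes, x, y):
    """Interpolate a measurement (x, y, phi) to machine coordinates.

    phi_cal, X_cal, Y_cal : (n, H, W) stacks recorded on calibration
        planes at heights Z_planes (n,), i.e. samples of the 4D
        hypersurfaces in (x, y, Z, phi), (x, y, Z, X), (x, y, Z, Y).
    Assumes phi increases monotonically with Z at each pixel.
    """
    phis = phi_cal[:, y, x]                 # phase vs. height at this pixel
    Z = np.interp(phi, phis, Z_planes)      # invert phi(Z) by interpolation
    X = np.interp(Z, Z_planes, X_cal[:, y, x])
    Y = np.interp(Z, Z_planes, Y_cal[:, y, x])
    return X, Y, Z
```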
Accurate measurement and thorough documentation of excavated artifacts are essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and its geometry created in a virtual environment, the scientist can manipulate the pieces in virtual reality to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small- to medium-sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration, and surface reconstruction algorithms then integrates the 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to run on a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
In this paper, we focus on robust feature selection and investigate the application of the scale-invariant feature transform (SIFT) to robotic visual servoing (RVS). We consider a camera mounted on the endpoint of an anthropomorphic manipulator (eye-in-hand configuration).
The objective of such an RVS system is to control the pose of the camera so that a desired relative pose between the camera and the object of interest is maintained. Since SIFT feature-point correspondences are not unique, feature points with more than one match are disregarded. When the endpoint moves along a trajectory, the robust SIFT feature points are found, and for a similar trajectory the same selected feature points are used to keep track of the object in the current view. The point correspondences of the remaining robust feature points provide the epipolar geometry of the two scenes, from which, given the camera calibration, the motion of the camera is retrieved. The robot joint-angle vector is then determined by solving the inverse kinematics of the manipulator. We show how to select a set of robust features appropriate for the visual servoing task. Robust SIFT feature points are scale- and rotation-invariant and remain effective when the current endpoint position is farther away from, and rotated with respect to, the desired position.
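A hedged sketch of this pipeline using OpenCV, with Lowe's ratio test standing in for the discarding of non-unique matches; the paper's own selection procedure may differ.

```python
import cv2
import numpy as np

def camera_motion(img_desired, img_current, K):
    """Recover relative camera motion from robust SIFT correspondences."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_desired, None)
    kp2, des2 = sift.detectAndCompute(img_current, None)
    # Keep only unambiguous matches (ratio test), in the spirit of
    # discarding feature points with more than a unique match.
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    p1 = np.float32([kp1[m.queryIdx].pt for m in good])
    p2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Epipolar geometry of the two scenes; with the calibration K,
    # the rotation R and translation direction t are retrieved.
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
    return R, t
```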
This paper proposes a new scheme of robust tagging for landmark definition in unknown environments, using qualitative evaluations based on the Orientation Code representation and matching, which was previously proposed for robust image registration even in the presence of illumination change and occlusion. The characteristics necessary for effective tags, namely richness, similarity, and uniqueness, are considered in order to design a tag-extraction algorithm. These qualitative considerations, combined with the robust image registration algorithm, yield a simple and robust algorithm for tag definition.
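The sketch below shows a common formulation of Orientation Codes (gradient orientation quantized into N codes, with a separate code for low-contrast pixels) and a cyclic-difference matching measure; the exact definitions in the paper may differ.

```python
import numpy as np
from scipy.ndimage import sobel

def orientation_codes(gray, n_codes=16, low_contrast=10.0):
    """Quantize gradient orientation into n_codes levels; pixels with a
    weak gradient receive a separate 'no orientation' code."""
    g = gray.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    theta = np.arctan2(gy, gx) % (2 * np.pi)
    codes = np.floor(theta / (2 * np.pi / n_codes)).astype(int)
    codes[np.hypot(gx, gy) < low_contrast] = n_codes   # special code
    return codes

def dissimilarity(c1, c2, n_codes=16):
    """Mean cyclic code difference; small for well-registered patches."""
    valid = (c1 < n_codes) & (c2 < n_codes)            # skip special codes
    d = np.abs(c1 - c2)[valid]
    d = np.minimum(d, n_codes - d)                     # cyclic distance
    return d.mean() if d.size else n_codes / 4.0
```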
In this paper we put forward and evaluate a near real-time night-driving assistance system intended for land vehicles (cars in particular) to help with crossing T-junctions at night. The onboard system of the host vehicle computes the remaining distance between itself and the nearest approaching vehicle using a spatial perspective method: the algorithm evaluates the interspacing of the incoming vehicle's headlights, which allows the distance-to-contact to be determined or estimated. This work emphasizes techniques for obtaining the image quality required for distance sensing, achieved primarily through work at the hardware level. With polarizing filters in place, the acquired images show headlight signals that are clearly distinguishable from other ambient lights, which significantly simplifies the image processing. Road testing shows rather promising results. The system can be generalized to intersection settings and rear-end collision prevention, and may be extended to daytime applications with the introduction of virtual references.
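The perspective relation implied by evaluating the headlight interspacing can be sketched as follows; the assumed headlight separation of 1.5 m is illustrative, not a figure from the paper.

```python
def distance_to_vehicle(pixel_gap, focal_px, headlight_gap_m=1.5):
    """Perspective range estimate from the headlight interspacing.

    pixel_gap       : distance in pixels between the two headlights.
    focal_px        : camera focal length expressed in pixels.
    headlight_gap_m : assumed real headlight separation (about 1.5 m
                      here; the true value varies by vehicle).
    """
    return focal_px * headlight_gap_m / pixel_gap
```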
Spot observation by computer vision is a fundamental key technology. In this paper, we propose moving-object color learning and robust recognition with Hidden Markov Models (HMMs) across various scenes under different lighting conditions. A feature box, a small area in an image, is defined to observe a spot; time-series data such as the averages of the R, G, B intensities in the feature boxes are the input signals of our system. The HMMs learn the correspondence between the input signals and the object colors of moving objects and background. The Baum-Welch and Viterbi algorithms are used for learning and for interpreting the spot-scene transitions. In moving-object color interpretation, the system selects the best HMM for the input signals by the maximum-likelihood method, based on a given object-color appearance grammar. In the experiments, we examine the number of feature boxes and their shapes under several lighting conditions. Feature boxes adjoining in a vertical column whose height is almost the same as the objects' gave the best score, which shows the effectiveness of our method.
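A minimal sketch of the learn-then-select loop using the hmmlearn package, assuming one HMM per scene class and (T, 3) sequences of mean R, G, B values from a feature box; this is an illustration, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train(sequences, n_states=3):
    """Baum-Welch training of one HMM from several (T, 3) sequences
    of mean R, G, B feature-box intensities."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, n_iter=50)
    model.fit(X, lengths)
    return model

def interpret(models, X):
    """Pick the maximum-likelihood model for an observed signal, then
    use Viterbi decoding to read the spot-scene state transition.

    models : dict mapping class name (e.g. 'car', 'background') to a
             trained GaussianHMM.
    """
    best = max(models, key=lambda name: models[name].score(X))
    return best, models[best].predict(X)   # predict() is Viterbi decoding
```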
OS: New Horizon of HUTOP Production Technologies I
The goal of the HUTOP project is to rearrange the technical subjects inherent in the Total Production Life Cycle (TPLC) and to model a new human-centered TPLC by introducing new information technologies (IT) that can support and enhance KANSEI, the human sensory factors. The HUTOP concept is described again in this paper through an analysis of the basic research sub-themes, in order to investigate the next international activities. The second phase of HUTOP has been designed as HUTOP-II, and HUTOP-II research activities are now ongoing.
In this study we propose a method for creating a 3D map of a real-world environment using 3D occupancy grids. The map is created by characterizing each grid cell, associated with a certain area of the real-world environment, through multiple measurements using stereo vision and Bayesian inference. The proposed method can absorb the measurement uncertainties that arise in the stereo-matching process and in the system's calibration. Preliminary experiments show that the proposed algorithm robustly generates environment maps, and that it is also suitable for implementation as a vision system for autonomous mobile robots.
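One common realization of the Bayesian update described above is a log-odds occupancy grid; the sketch below is an illustrative stand-in, with the inverse-sensor probabilities assumed rather than taken from the paper.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid3D:
    """3D occupancy grid updated by Bayesian (log-odds) fusion of
    repeated stereo measurements."""

    def __init__(self, shape, p_hit=0.7, p_miss=0.4):
        self.L = np.zeros(shape)              # log-odds, 0 means p = 0.5
        self.l_hit, self.l_miss = logodds(p_hit), logodds(p_miss)

    def update(self, hits, misses):
        """hits/misses: boolean masks of cells observed occupied/free."""
        self.L[hits] += self.l_hit
        self.L[misses] += self.l_miss

    def probability(self):
        """Posterior occupancy probability per cell."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.L))
```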
Face detection in arbitrary scenes has become a very actively studied topic in image processing and pattern recognition. Its importance lies in its broad applications, for example human detection from visual input for security, human-machine interaction, and video archiving. The human face is composed of several components, each with large variability, and it can take many postures in an arbitrary scene, which makes detection a very difficult task. In this study we propose a method for robust face detection in arbitrary scenes that uses a neural network as a face-posture predictor together with partial template matching of the human face. The proposed model is robust to lighting conditions and to the postures of frontal faces.
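As a hedged illustration of the partial-template-matching half of the method (the neural-network posture predictor would select which template set to use), the following sketch matches per-part templates and combines their scores; all names and the score combination are assumptions.

```python
import cv2
import numpy as np

def detect_face(gray, part_templates, min_score=0.6):
    """Partial template matching: search each facial-part template
    (eyes, nose, mouth) separately and combine the evidence.

    part_templates : dict name -> small grayscale template image,
                     e.g. one set per predicted posture.
    """
    scores, locs = [], {}
    for name, tmpl in part_templates.items():
        res = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        scores.append(score)
        locs[name] = loc
    return (np.mean(scores) >= min_score), locs
```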
This paper describes a framework for the automatic generation of an image processing algorithm consisting of preprocessing, feature extraction, classification, and algorithm evaluation modules based on machine learning. With a view to applying the generated algorithms to industrial visual inspection systems, we offer a framework model with the features listed below and report experimental results for it.
1. Automatic generation, by machine learning, of an image processing algorithm that extracts regions with the same characteristics as those specified by users.
2. Generation, in particular, of a high-precision image processing algorithm that improves the statistical separation between true and false defects, a factor that can degrade classification accuracy.
3. Optimization of the image-improvement filter sequence in the preprocessing modules by means of a genetic algorithm (GA), as sketched below.
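A minimal GA sketch for item 3, assuming a fixed-length chromosome of filter identifiers and a caller-supplied fitness equal to the true/false-defect separation; the filter names and GA settings are illustrative.

```python
import random

FILTERS = ["median", "gaussian", "open", "close", "sharpen", "none"]

def optimize_filter_sequence(fit, length=4, pop=30, gens=50, p_mut=0.1):
    """Search for the filter sequence maximizing `fit`, a caller-supplied
    function returning the statistical separation between true and
    false defects after applying the sequence."""
    population = [[random.choice(FILTERS) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fit, reverse=True)
        parents = ranked[:pop // 2]                   # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)         # one-point crossover
            children.append([random.choice(FILTERS)
                             if random.random() < p_mut else g
                             for g in a[:cut] + b[cut:]])  # mutation
        population = parents + children
    return max(population, key=fit)
```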
OS: New Horizon of HUTOP Production Technologies II
Humans can exchange information smoothly by voice in different situations, such as in a noisy crowd or with several speakers present. We can detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. Realizing this mechanism on a computer will enable new applications: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.
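The abstract does not state the localization method; as a hedged illustration, GCC-PHAT time-difference-of-arrival estimation between two microphones of the array is one standard building block for locating a speaker.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time difference of arrival between two microphone signals via
    the generalized cross-correlation with PHAT weighting."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12                 # PHAT normalization
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs
    return tau                                     # seconds; sign = side
```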
Since the basic configuration of the facial caricaturing system PICASSO was constructed at our laboratory, it has been strongly desired to obtain sufficient input images of a person behaving naturally in front of the PICASSO camera system. From this viewpoint, we developed a PC-based face-tracking system for capturing facial images of sufficient size by means of a PTZ (pan-tilt-zoom) camera collaborating with a fixed CCD camera. Irises are successfully recognized from the motion images captured by the PTZ camera; these irises provide a key feature for realizing an automated facial recognition system. In this system, a person behaving naturally in pose and facial expression within the field of view of the fixed CCD camera can be tracked stably, and PTZ images of sufficient resolution were successfully analyzed for iris recognition and facial-part extraction. The face tracking and recognition system is characterized by a novel template-replacement scheme applied across successive image frames. Experimental results are also demonstrated in this paper; the system runs at a practical speed of 6-9 fps on an ordinary PC connected to the two cameras.
We have developed an experimental system for 3-D sensing using a rotary vision sensor. Visual inspection systems are widely used for assembled-PCB inspection. In many cases these systems employ sophisticated optics, and calibrating or adjusting them requires considerable specialist effort; moreover, they are highly specialized to a specific task and cannot be used for a wide variety of applications. Since CCD cameras can now be used easily in many applications, we proposed an experimental system for 3-D sensing using the relative stereo method last year. Building on that idea, we have now developed a new 3-D sensing system using the rotary vision sensor and motion-picture analysis, which extracts the 3-D shape of objects with high reliability. In this paper, we introduce the idea of this system together with experimental results.
Medium- and wide-angle off-the-shelf cameras are often used in computer vision applications despite their large lens distortion. Algorithms to correct radial and tangential distortion are available; however, they often use non-linear optimization searches that rely on carefully chosen starting points. This paper presents a method to correct both radially symmetric lens distortion and decentering lens distortion using an iterative geometric approach to find the distortion center, together with a closed-form solution for all other distortion parameters. The method is based on deriving an equivalent radially symmetric distortion model that accounts for both radial and tangential distortion, and it uses the simple geometric relationship between a straight line and its distorted counterpart under this model. The distortion calibration first determines the axes of symmetry of several distorted lines; the intersection of these axes is then computed and taken as the point of best radial symmetry (PBRS). The inclinations of the axes of symmetry of the distorted lines are then used in a closed-form solution to determine the distortion coefficients. One advantage of this approach is that higher-order coefficients can be included as needed, with their computation still achieved in closed form. The simplicity of the lens distortion calibration technique is demonstrated in a simulation using synthetic images.
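Once the PBRS and the coefficients of the equivalent radial model are known, points can be undistorted; the sketch below assumes a polynomial model r_d = r_u(1 + k1*r_u^2 + k2*r_u^4) inverted by fixed-point iteration, a common choice that may differ in detail from the paper's model.

```python
import numpy as np

def undistort_points(pts, center, k1, k2=0.0):
    """Undistort image points about the distortion center (e.g. the PBRS)
    under a radial model r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4).

    pts    : (N, 2) distorted image points.
    center : (2,) distortion center.
    """
    d = np.asarray(pts, float) - center
    r_d = np.hypot(d[:, 0], d[:, 1])
    r_u = r_d.copy()
    for _ in range(20):                      # fixed-point inversion
        r_u = r_d / (1.0 + k1 * r_u**2 + k2 * r_u**4)
    scale = np.where(r_d > 0, r_u / r_d, 1.0)
    return center + d * scale[:, None]
```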
The problem studied in this paper is to understand to what extent motion and shape parameters can be estimated from an optical flow generated on the image plane. The optical flow is generated by projecting the phase portrait of a class of motions of an object in R^3 onto the image plane in R^2. The class of motion considered is a two-dimensional plane undergoing Riccati motion, and the projection models are perspective and orthographic projection. In this paper, we show several results on the parameter estimation of Riccati motion under these two projection models. One result is that the parameters of Riccati motion can be estimated up to a choice of sign; thus, for all practical purposes, when the relative position of the object undergoing Riccati motion is known, motion and shape parameters can be recovered uniquely. This is in sharp contrast with the known result in the literature on affine motion under perspective projection, where parameters can be recovered only up to a possible depth ambiguity.
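As a hedged illustration of why planar flow under perspective projection is of Riccati (quadratic) type, consider an affine motion projected to image coordinates:

```latex
% Affine motion of P = (X, Y, Z)^T in R^3, perspective projection:
\dot{P} = AP + b, \qquad u = \frac{X}{Z}, \quad v = \frac{Y}{Z}.
% Differentiating u = X/Z gives
\dot{u} = \frac{\dot{X} - u\,\dot{Z}}{Z}
        = \bigl(a_{11}u + a_{12}v + a_{13} + b_1 Z^{-1}\bigr)
        - u\,\bigl(a_{31}u + a_{32}v + a_{33} + b_3 Z^{-1}\bigr),
% and similarly for \dot{v}. For points on a plane, Z^{-1} = pu + qv + r,
% so the image flow is quadratic (Riccati) in (u, v), with coefficients
% carrying the motion parameters (A, b) and shape parameters (p, q, r).
```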
Camera calibration of the intrinsic parameters, such as the principal point and the principal distance, is one of the most important techniques for 3-D measurement applications based on cameras' 2D images: the principal point is the intersection of the camera's optical axis with the image plane, and the principal distance is the distance between the lens center and the principal point. Although camera parameter calibration techniques have been intensively investigated by many researchers, the calibration errors have been examined only through limited experiments and simulations. Taking up the two-fiducial-plane camera calibration technique, this paper examines the calibration errors theoretically under various conditions, such as the fiducial-plane translation and the principal distance, where the extraction errors in the image coordinates of the fiducial points are considered as the error source. The estimation errors of the principal distance F and the principal point P are formulated theoretically with analytical equations, and the effectiveness of the formulas is confirmed by comparing the theoretical values with those obtained by simulation.
Compact camera modules (CCMs) are widely used in PDAs, cellular phones, and PC web cameras. With their greatly increasing use in mobile applications, there has been considerable demand for high-speed production of CCMs. The major burden in CCM production is the assembly of the lens module onto the CCD- or CMOS-packaged circuit board; after the module is assembled, the CCM is inspected. In this paper, we developed an image-capture board for CCMs and an image processing algorithm to inspect defects in the captured image of an assembled CCM. The performance of the developed inspection system and its algorithm was tested on 10,000 CCM samples. Experimental results reveal that the proposed system can focus the lens of a CCM within 5 s and can recognize various types of CCM defect with good accuracy at high speed.
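The abstract does not state the focus criterion; as an illustrative assumption, a standard contrast-based sharpness score such as the variance of the Laplacian could drive the focusing step.

```python
import cv2

def focus_measure(image):
    """Variance of the Laplacian: a common sharpness score that rises
    as the lens approaches best focus."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# A focusing loop would step the lens, capture a test image through the
# CCM, and stop at the lens position maximizing focus_measure().
```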
Content-based scene indexing has become an important technique for the effective handling of video content, such as scene retrieval and editing, and the standard multimedia content descriptor (MPEG-7) has been proposed for key-scene indexing. For automatic scene indexing, audio-visual features are the most important clues, and many indexing methods based on them have been proposed. In this paper, we propose an automatic key-scene detection method for baseball video content using video features. We regard pitching scenes as key scenes because they are the starting points of all baseball plays; once detected, they provide effective hints for detecting other scenes. In addition, a pitching-scene digest video can easily be edited by gathering the automatically extracted scenes, and such a digest is useful for pitching analysis. We extract pitching scenes using color, domain, and motion templates created from manually selected pitching-scene samples; these templates contain image features unique to pitching scenes. Template matching is applied to the video stream, and target scenes are detected by thresholding the computed matching rate. We test our method experimentally on actual baseball video content; the results are useful for pitching analysis and for editing digest news broadcasts. We are also developing a video-indexing support system in which users can attach text annotations to indexed scenes using MPEG-7 format descriptors.
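A hedged sketch of the color-template part of the matching (the paper also uses domain and motion templates): the frame's hue-saturation histogram is compared with one built from the selected pitching-scene samples, and the correlation serves as the matching rate.

```python
import cv2

def matching_rate(frame, color_template_hist):
    """Color-template match for pitching-scene detection."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(h, h, 0, 1, cv2.NORM_MINMAX)
    return cv2.compareHist(h, color_template_hist, cv2.HISTCMP_CORREL)

def is_pitching_scene(frame, color_template_hist, threshold=0.8):
    """Threshold on the matching rate (threshold value is assumed)."""
    return matching_rate(frame, color_template_hist) >= threshold
```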
In this paper, we present a practical approach to automatic visual inspection of SMT PCBs. Thousands of chip components are mounted on a notebook SMT PCB, and their images cannot be exactly the same because of variance in shift, orientation, scale, and illumination conditions; even so, it is impossible to memorize inspection reference values for all the different conditions. Most inspection algorithms with a fixed window template, such as template matching, Fourier analysis, OCR, etc., do not perform well on images with shift, orientation, scale, and illumination variation. We propose a practical automatic inspection method for SMT rectangular chips that corrects the image variance of shift, orientation, and scale at practical speed, and updates the decision reference values during the inspection process. The performance of the proposed method is tested on numerous samples of rectangular chips on SMT PCBs.
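As an illustrative assumption (the correction procedure is not detailed in the abstract), shift and orientation of a segmented chip can be normalized from image moments so that a fixed inspection window applies; scale handling is omitted here.

```python
import cv2
import numpy as np

def normalize_chip(binary):
    """Correct shift and orientation of a segmented chip image using
    image moments.

    binary : 8-bit mask of the chip region.
    """
    m = cv2.moments(binary)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]        # centroid
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    M = cv2.getRotationMatrix2D((cx, cy), np.degrees(theta), 1.0)
    M[0, 2] += binary.shape[1] / 2 - cx                      # re-center
    M[1, 2] += binary.shape[0] / 2 - cy
    return cv2.warpAffine(binary, M, binary.shape[::-1])
```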
We have developed a prototype vision system that maintains conventional data-transfer speeds while achieving both high resolution and a high-speed feedback rate of over 1000 Hz by using the "Mm-Vision" concept, a technique of intelligent selection of pixels of interest that reduces the amount of output data. To verify the effectiveness of the system and its concept, high-speed image processing experiments were conducted with the prototype built around a typical personal computer (PC) and software development environment (C or C++). In this paper, we also discuss dedicated imaging sensors based on the Mm-Vision concept to further improve its performance and usability.
We present in this paper a new method for implementing geometric moment functions in a CMOS retina. It is based on computing the correlation between the image under analysis and a second image, since the expression for the moment of an image is similar to that for the correlation of two images. The second image, stored in memory devices in the circuit, is approximated by a binary image using a dithering algorithm in order to reduce hardware implementation cost; as a result, the moment value is also approximate. Computer simulations using the COIL-100 Columbia image database on 128x128-pixel images show that the maximal relative error between the approximate and exact values is less than 1% for moments of order less than 2, and less than 5% for moments of order less than 6. Finally, we consider an object-localization application and quantify the localization error caused by using the approximate moment values instead of the exact ones.
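A minimal sketch of the idea, assuming a Floyd-Steinberg dither of the normalized moment kernel x^p y^q; the moment is then approximated by correlating the image with the stored binary kernel. Details of the paper's dithering and circuit are not reproduced here.

```python
import numpy as np

def dither(kernel):
    """Floyd-Steinberg dithering of a [0, 1] kernel to a binary image."""
    k = kernel.copy()
    out = np.zeros_like(k)
    h, w = k.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if k[y, x] >= 0.5 else 0.0
            err = k[y, x] - out[y, x]
            if x + 1 < w:
                k[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    k[y + 1, x - 1] += err * 3 / 16
                k[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    k[y + 1, x + 1] += err * 1 / 16
    return out

def approx_moment(img, p, q):
    """Moment m_pq approximated as a correlation with a binary kernel."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    kernel = (xs ** p) * (ys ** q)
    scale = kernel.max()
    binary = dither(kernel / scale)          # stored binary second image
    return scale * np.sum(img * binary)      # correlation = approx moment
```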
Real-time image processing at high frame rates can play an important role in various visual measurements. Such processing can be realized with a high-speed vision system that images at high frame rates and runs appropriate algorithms at high speed. In this paper, we describe two visual measurements using high-speed vision: target counting and rotation measurement. For these measurements, we propose methods that exploit unique features of our high-speed vision system. Both measurements achieve excellent precision and high flexibility thanks to the achievable high-frame-rate visual observation. Experimental results show the advantages of high-speed vision over conventional visual systems.
Motion tracking is becoming an essential part of entertainment, medicine, sports, education, and industry with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks, and MIDI sliders, but those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion-capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype setup was successfully applied to a boxing game, which requires very fast movement of the human character.
This paper grew out of the development of a device able to detect faults in braided ropes in real time. Many inspection devices have been developed for the textile industry; however, a rope-producing company required an intelligent inspection device able to detect faults during the finishing process, at winding speeds of 50-200 m/min. Current commercial devices focus on textile fabrics (woven or knitted) and can detect only basic faults (holes, dirt, and oil spots). Methods for detecting faults in textile structure can be found in several research papers, but only for specific types of textiles or for slow processes.
The inspection device developed in our laboratory works at high rope-winding speeds. It is based on a fast line-scan camera with a Camera Link interface. The goal of the project was to detect three basic structural faults: a missing strand, strands pulled tight, and stitch irregularity. Fault detection is based on gathering the most suitable symptoms, which are fed into recognition methods; such methods are very successful in speech recognition, and using them gave better results than neural networks. This paper shows how the most suitable symptoms are found, presents their statistical evaluation, and describes the decision-making algorithms. The most important step is reducing the problem from time-consuming image processing to one-dimensional signal processing.
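A hedged sketch of that reduction, with per-line mean and standard deviation standing in for the paper's symptoms and a running-statistics threshold standing in for its recognition methods; all parameters are assumed.

```python
import numpy as np

def line_symptom(scan_line):
    """Reduce one line-scan acquisition to scalar symptoms."""
    return scan_line.mean(), scan_line.std()

def detect_faults(lines, window=500, k=4.0):
    """Flag lines whose symptoms deviate from the running statistics.

    lines : (T, W) array, one camera line per rope position.
    """
    feats = np.array([line_symptom(l) for l in lines])     # (T, 2)
    flags = np.zeros(len(feats), dtype=bool)
    for t in range(window, len(feats)):
        ref = feats[t - window:t]
        z = np.abs(feats[t] - ref.mean(axis=0)) / (ref.std(axis=0) + 1e-9)
        flags[t] = np.any(z > k)                           # outlier line
    return flags
```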
In this research, a new non-contact breathing-motion monitoring system using a fiber-grating (FG) 3D sensor is used to measure the respiratory movement of the chest and abdomen and the shape of the human body simultaneously. Respiratory trouble during sleep brings about various diseases; in particular, Sleep Apnea Syndrome (SAS), which restricts respiration during sleep, has been in the spotlight in recent years. However, present equipment for analyzing breathing motion requires attaching various sensors to the patient's body. This system uses two CCD cameras to measure the movements of infrared bright spots projected onto the patient's body, capturing the body form and the breathing motion of the chest and the abdomen in detail. Since the equipment does not contact the patient's body, the patient feels no discomfort, and there is no need to worry about sensors coming off. SAS is classified into three types, obstructive, central, and mixed, based on the characteristic respiratory pattern. This paper reports a method of diagnosing SAS automatically; the method should be helpful not only for the diagnosis of SAS but also for the diagnosis of other kinds of complicated respiratory disease.
In recent years, crisis management in response to terrorist attacks and natural disasters, as well as accelerating rescue operations, has become an important issue. Rescue operations greatly affect human lives and require the ability to communicate information accurately and swiftly and to assess the status of the site. Currently, a considerable amount of research is being conducted on assisting rescue operations by applying engineering techniques such as information technology and radar technology.
In the present research, we consider that assessing the status of the site is most crucial in rescue and firefighting operations at a fire disaster site, and we aim to visualize space that is smothered in dense smoke. In a space filled with dense smoke, where visual or infrared sensing techniques are not feasible, three-dimensional measurements can be realized using a compact millimeter-wave radar device combined with directional information from a gyro sensor. Using these techniques, we construct a system that can build and visualize a three-dimensional geometric model of the space. The final objective is to implement such a system on a wearable computer, which will improve firefighters' spatial perception and assist them in baseline assessment and decision making. In the present paper, we report the results of basic experiments on three-dimensional measurement and visualization of a smoke-free space using a millimeter-wave radar.
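A minimal sketch of the geometric core, assuming each radar return is a range paired with the gyro-derived pointing direction of the sensor; the names and angle conventions are illustrative.

```python
import numpy as np

def radar_to_points(ranges, azimuths, elevations):
    """Convert millimeter-wave radar ranges plus gyro-derived sensor
    orientations (azimuth/elevation, radians) into 3D points for
    building and visualizing the geometric model.
    """
    r = np.asarray(ranges)
    az, el = np.asarray(azimuths), np.asarray(elevations)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=-1)    # (N, 3) world-frame points
```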
Recently, domestic accidents have been increasing in Japan. These accidents occur in private areas such as bedrooms, toilets, and bathrooms, and tend to be discovered too late; accidents in the bathroom in particular can often result in death. Many systems that have been proposed or are in use detect body motion in the bathroom and judge that a bather has suddenly been taken ill when movement ceases. However, the relaxed posture of a person bathing is actually very similar to that of a person who has passed out, so it is very difficult to differentiate between the two postures. We have developed a watching system for bathrooms whose new feature is its ability to detect a person's breathing using an FG vision sensor. From the experiment, the false-alarm rate is expected to fall below 0.0001% when the waiting time is set to 36.8 seconds.
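The quoted figure is consistent with simple compounding of independent per-check errors; as a hedged illustration with assumed numbers (not taken from the paper):

```latex
P_{\mathrm{FA}} = p^{\,n}, \qquad
p = 0.5,\; n = 20 \;\Rightarrow\;
P_{\mathrm{FA}} \approx 9.5 \times 10^{-7} \approx 0.0001\,\%,
```

where p is the probability that a single breathing check falsely reports no breathing and n is the number of independent checks that fit within the waiting time.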
This paper proposes a color-matching 3D look-up table that simplifies the complex color-matching procedure between a monitor and a mobile display device, in which the image colors are processed in a device-independent color space, such as CIEXYZ or CIELAB, and gamut mapping is performed to compensate for the gamut difference.
Compared with a monitor, mobile displays cannot show images with good color fidelity because of their smaller gamut, dimmer luminance, and the poorer color reproduction that accompanies their low power consumption. As such, the colors displayed on a monitor and on a mobile display can differ significantly for the same input digital values. To solve this problem, a color-matching process between a monitor and a mobile display is needed that includes both color management in a device-independent color space and gamut mapping to compensate for the significant gamut difference. Yet, since these procedures involve many complex arithmetic operations, simplification is required for realization on mobile devices.
Accordingly, this paper proposes a color-matching look-up table to simplify the complex color-matching procedures for use in a mobile display. Moreover, the performance of the proposed color-matching look-up table is evaluated with different table sizes to determine the minimum size. Color-matching experiments between a monitor and a mobile display show that the images on the mobile display reflect the monitor images better after color matching than without it.
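A minimal sketch of applying such a table, assuming an N x N x N x 3 layout and trilinear interpolation between lattice points; smaller N trades accuracy for the memory savings the size study targets.

```python
import numpy as np

def apply_lut3d(rgb, lut):
    """Trilinear interpolation in a 3D color-matching look-up table.

    rgb : (..., 3) array of input values in [0, 1].
    lut : (N, N, N, 3) table mapping monitor RGB to the mobile-display
          RGB that reproduces the matched color.
    """
    n = lut.shape[0]
    f = np.clip(rgb, 0.0, 1.0) * (n - 1)
    i0 = np.minimum(f.astype(int), n - 2)      # lower lattice corners
    d = f - i0                                 # per-axis fractions
    r, g, b = i0[..., 0], i0[..., 1], i0[..., 2]
    dr, dg, db = d[..., 0:1], d[..., 1:2], d[..., 2:3]
    out = np.zeros(np.shape(rgb), dtype=float)
    for cr in (0, 1):                          # 8 surrounding lattice points
        for cg in (0, 1):
            for cb in (0, 1):
                w = ((dr if cr else 1 - dr) *
                     (dg if cg else 1 - dg) *
                     (db if cb else 1 - db))
                out += w * lut[r + cr, g + cg, b + cb]
    return out
```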