The measurement of participant behavior in a conversation scene involves both verbal and nonverbal communication. Measurement validity may vary across human observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure these behaviors automatically, they prevent participants from talking in a natural way. In this study, we propose a software application that automatically analyzes participant behaviors, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded in a table, allowing participants to hold spontaneous conversations. The proposed software uses facial feature tracking based on a constrained local model to observe changes in the facial features captured by the camera, and the Japanese Female Facial Expression (JAFFE) database to recognize expressions. Our experimental results show significant correlations between the measurements made by human observers and those made by the software.
Many approaches have been proposed to build eye trackers based on visible-spectrum imaging. These efforts make it possible to create inexpensive eye trackers capable of operating outdoors. Although the resulting tracking accuracy is acceptable for a visible-spectrum head-mounted eye tracker, these approaches face many limitations when applied to remote eye tracking. In this study, we propose a high-accuracy remote eye tracker that uses visible-spectrum imaging, together with several gaze communication interfaces suited to the tracker. The gaze communication interfaces are designed to assist people with motor disabilities. Our results show that the proposed eye tracker achieved an average accuracy of 0.77° and a frame rate of 28 fps on a personal computer, and an average accuracy of 0.82° and a frame rate of 25 fps on a tablet device. The proposed gaze communication interfaces enable users to type a complete sentence of eleven Japanese characters in about one minute.