Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been evaluated by comparing the results of fully automated analysis to manual analysis of the same images. The comparison shows a high degree of correlation that validates the quality of the automatic segmentation approach.
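As an illustration of the adaptive extraction step, the following minimal C++ sketch applies a block-wise threshold that adapts to local brightness and contrast; the function name, the block size, and the mean-plus-k-sigma criterion are assumptions made for illustration and are not the Fat Volume Tool's actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Binary mask: 1 where a pixel is classified as candidate fat tissue, 0 elsewhere.
std::vector<unsigned char> adaptiveFatMask(const std::vector<float>& img,
                                           std::size_t width, std::size_t height,
                                           std::size_t block = 32, float k = 0.5f)
{
    std::vector<unsigned char> mask(img.size(), 0);
    for (std::size_t by = 0; by < height; by += block) {
        for (std::size_t bx = 0; bx < width; bx += block) {
            std::size_t yEnd = std::min(by + block, height);
            std::size_t xEnd = std::min(bx + block, width);
            // Local intensity statistics of this block.
            double sum = 0.0, sum2 = 0.0;
            std::size_t n = 0;
            for (std::size_t y = by; y < yEnd; ++y)
                for (std::size_t x = bx; x < xEnd; ++x) {
                    double v = img[y * width + x];
                    sum += v;
                    sum2 += v * v;
                    ++n;
                }
            double mean = sum / n;
            double sd = std::sqrt(std::max(0.0, sum2 / n - mean * mean));
            // The threshold adapts to the local intensity level, compensating
            // for slowly varying field non-uniformity across the slice.
            double thr = mean + k * sd;
            for (std::size_t y = by; y < yEnd; ++y)
                for (std::size_t x = bx; x < xEnd; ++x)
                    mask[y * width + x] = img[y * width + x] > thr ? 1 : 0;
        }
    }
    return mask;
}
```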
The objective of the system is inspection of individual pieces of stemware for geometry defects and glass imperfections. Cameras view the stemware from multiple angles to increase surface coverage. The inspection images are acquired at three stations. The first inspects internal glass quality, detecting defects such as chemical residue and waviness. The second inspects the rim, the geometry of the stemware body and stem, and internal defects such as cracks. The third station inspects the stemware base for geometrical and internal defects. Glass defects are optically enhanced through the use of striped-pattern back lighting combined with morphological processing. Geometry inspection is enhanced through the use of converging illumination at the second station, while the third station utilizes large-field true telecentric imaging. Progressive scan cameras and frame grabbers capable of simultaneous image capture are used at each station. The system software comprises six modules: a system manager, an I/O manager, an inspection module for each of the three stations, and a stemware sorting and logging module. Each module runs as a separate application. The applications communicate with each other through TCP/IP sockets and can be run in a multi-computer or single-computer setup. Currently, two Windows NT workstations are used to host the system.
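As a rough sketch of the inter-module communication described above, the fragment below shows how one inspection module might report a result to the sorting and logging module over a TCP/IP socket. The message format, host, and port are invented for illustration, and POSIX sockets are used here even though the original system ran on Windows NT, where the equivalent Winsock calls would apply.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Send a one-line pass/fail message for one part to a peer module.
bool reportResult(const std::string& host, unsigned short port,
                  int stationId, int partId, bool pass)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host.c_str(), &addr.sin_addr);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return false;
    }

    // Simple line-oriented message; the same code works whether the peer
    // runs on the same computer or on the second workstation.
    std::string msg = "STATION " + std::to_string(stationId) +
                      " PART " + std::to_string(partId) +
                      (pass ? " PASS\n" : " FAIL\n");
    bool ok = send(fd, msg.c_str(), msg.size(), 0) ==
              static_cast<ssize_t>(msg.size());
    close(fd);
    return ok;
}
```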
The Crosshead Inspection System (CIS) utilizes machine vision technology for on-line inspection of a diesel engine component, the crosshead. The system includes three functional modules. 1) Part handling subsystem - presents parts for inspection and accepts or rejects them based on signals from the image analysis software. 2) Image acquisition hardware - optics, light sources, and two video cameras collect images of the inspected parts. 3) Image analysis software - analyzes the images and sends pass/fail decision signals to the handling subsystem. The CIS acquires and inspects two images of each part. The upper camera generates an image of the part's top surface, while the lower camera generates an image of the so-called 'pockets' of the lower half. Both images are acquired when a part-in-place signal is received from the handling system. The surface inspection camera and light source are positioned at opposed low angles relative to the surface. Irregularities manifest themselves as shadows in the surface image. These shadows are detected, measured, and compared to user specifications. The pocket inspection detects the presence of tumbler stones. The contrast of these stones is enhanced with circularly polarized lighting and imaging. The graphical user interface of the CIS provides easy setup and debugging of the image processing algorithms. A database module collects, archives, and presents part inspection statistics to the user. The inspection rate is sixty parts per minute.
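The surface pass/fail decision can be pictured with a sketch like the one below, in which pixels darker than a shadow threshold are counted and the total is compared to a user-specified limit. The structure, names, and thresholds are illustrative assumptions rather than the CIS algorithms themselves; the actual inspection measures individual shadows against the specification rather than only their total area.

```cpp
#include <cstddef>
#include <vector>

struct SurfaceSpec {
    unsigned char shadowLevel;   // gray level below which a pixel counts as shadow
    std::size_t   maxShadowArea; // maximum allowed shadow area, in pixels
};

// Returns the pass/fail decision that would be signalled to the handling subsystem.
bool surfacePasses(const std::vector<unsigned char>& image, const SurfaceSpec& spec)
{
    std::size_t shadowArea = 0;
    for (unsigned char px : image)
        if (px < spec.shadowLevel)
            ++shadowArea;
    return shadowArea <= spec.maxShadowArea;
}
```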
Recent advances in IR system technology coupled with significant reductions in cost are making thermography a viable tool for on-line monitoring of industrial processes. This paper describes the implementation of a novel rugged thermal imaging system based on a dual-wavelength technique for a large intelligent process monitoring project. The objective of the portion described herein is to deploy a non-contact means of monitoring die cast tooling surface thermal conditions and analyzing the data in the context of the process monitor. The technical and practical challenges of developing such a non-contact thermal measurement system for continuous inspection in an industrial environment are discussed, and methods of resolving them are presented. These challenges include implementation of a wavelength filter system for quantitative determination of the surface temperature. Additionally, emissivity variations of the tooling surface as well as IR reflections are discussed. The primary issues that are addressed, however, are compensation for ambient temperature conditions and optimization of the calibration process. Other issues center on remote camera control, image acquisition, data synchronization, and data interpretation. An example application of this system, along with in-plant images and thermal data, is described.
KEYWORDS: Thermography, Inspection, Infrared imaging, Imaging systems, Infrared radiation, Infrared technology, Data acquisition, Data centers, Manufacturing, Process control
Non-contact thermal measurement techniques such as on-line thermography can be valuable tools for process monitoring and quality control. Many manufacturing processes such as welding or casting are thermally driven, or exhibit strong correlation between thermal conditions and product characteristics. Infrared inspection of self-emitted radiation can provide valuable insight into process parameters that are not routinely observed yet dominate product quality. Recent advances in IR system technology coupled with significant reductions in cost are making thermography a viable tool for such on-line monitoring. This paper describes the implementation of a novel rugged thermal imaging system based on a dual-wavelength technique for a large intelligent process monitoring project. The objective of the portion described herein is to deploy a non-contact means of monitoring tooling surface thermal conditions. The technical and practical challenges of developing such a non-contact thermal measurement system for continuous inspection in an industrial environment are discussed, and methods of resolving them are presented. These challenges include implementation of a wavelength filter system for quantitative determination of the surface temperature. Also, unlike visible-spectrum machine vision applications, surface emissivity of the test object as well as reflections from other IR emitters must be taken into account when measuring infrared radiation from a part or process. However, the primary issues that must be addressed prior to deployment are compensation for ambient temperature conditions and optimization of the calibration process. Other issues center on remote camera control, image acquisition, data synchronization, and data interpretation. An example application of this system, along with preliminary data, is described.
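To make the dual-wavelength idea concrete, the sketch below computes temperature from the ratio of radiances measured through two narrow-band filters using Wien's approximation and a gray-body assumption, which is what makes the result largely insensitive to the emissivity of the tooling surface. The wavelengths, constants, and function name are illustrative only and do not represent the deployed system's calibration.

```cpp
#include <cmath>

// Second radiation constant c2 = h*c/k_B, in metre-kelvin.
constexpr double kC2 = 1.43877e-2;

// radianceRatio = L(lambda1)/L(lambda2), the radiances measured through
// band-pass filters centred at lambda1 and lambda2 (both in metres).
double ratioTemperatureK(double radianceRatio, double lambda1, double lambda2)
{
    // Wien approximation with equal emissivity in both bands (gray body):
    //   ln(R) = 5*ln(lambda2/lambda1) - (c2/T)*(1/lambda1 - 1/lambda2)
    double numerator   = kC2 * (1.0 / lambda1 - 1.0 / lambda2);
    double denominator = 5.0 * std::log(lambda2 / lambda1) - std::log(radianceRatio);
    return numerator / denominator;
}

// Example with filters at 3.9 um and 4.6 um and a measured ratio of 0.8
// (values chosen only to exercise the formula):
//   double t = ratioTemperatureK(0.8, 3.9e-6, 4.6e-6);  // roughly 535 K
```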
There are two crucial, complementary issues faced during the design and implementation of practically any but the simplest image processing library. The first is the ability to represent a variety of image types, the discriminating feature typically being the pixel type, e.g. binary, short integer, long integer, or floating point. The second issue is the implementation of image processing algorithms able to operate on each of the supported image representations. In many traditional library designs this leads to reimplementation of the same algorithm many times, once for each possible image representation. Some attempts to alleviate this problem introduce elaborate schemes of dynamic pixel representation and registration. This yields a single algorithm implementation; however, due to dynamic pixel registration, the efficiency of these implementations is poor. In this paper, we investigate the use of parameterized algorithms and the design issues involved in implementing them in C++. We permit a single expression of the algorithm to be used with any concrete representation of an image. Advanced features of C++ and object-oriented programming allow us to use static pixel representations, where pixel types are resolved at compile time instead of run time. This approach leads to very flexible and efficient implementations. We gain both advantages: a single algorithm implementation for numerous image representations, and the best possible speed of execution.
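A minimal sketch of the approach: the algorithm is written once as a C++ template over the pixel type, and the compiler produces a statically typed instantiation for each supported image representation, avoiding per-pixel dynamic dispatch. The Image class and function below are illustrative, not the library's actual interface.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Image representation parameterized by its pixel type.
template <typename PixelT>
class Image {
public:
    Image(std::size_t w, std::size_t h) : width_(w), height_(h), data_(w * h) {}
    PixelT&       at(std::size_t x, std::size_t y)       { return data_[y * width_ + x]; }
    const PixelT& at(std::size_t x, std::size_t y) const { return data_[y * width_ + x]; }
    std::size_t width()  const { return width_; }
    std::size_t height() const { return height_; }
private:
    std::size_t width_, height_;
    std::vector<PixelT> data_;
};

// Single expression of the algorithm, usable with any concrete pixel representation.
template <typename PixelT>
void thresholdInPlace(Image<PixelT>& img, PixelT level, PixelT low, PixelT high)
{
    for (std::size_t y = 0; y < img.height(); ++y)
        for (std::size_t x = 0; x < img.width(); ++x)
            img.at(x, y) = img.at(x, y) < level ? low : high;
}

// The same source instantiates efficient code for 8-bit and floating-point images:
//   Image<std::uint8_t> a(640, 480);  thresholdInPlace<std::uint8_t>(a, 128, 0, 255);
//   Image<float>        b(640, 480);  thresholdInPlace<float>(b, 0.5f, 0.0f, 1.0f);
```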
A novel syntactic region growing and recognition algorithm called SRG will be presented. The primary function of the SRG algorithm is the detection of structured regions of interest in a given image. The recognition technique operates on a region's selected features and is the subject of a companion paper. The SRG algorithm can be outlined as follows. Preprocessing is performed to isolate the kernels of potential regions. The first-level regions are grown until region transitions are detected. Regions sharing the same transition boundary are merged. The second-level regions are grown and merged if they are within the same transition boundary. The growing process is repeated as necessary until the whole image or a specified region of interest is covered. Features such as area, depth, and others are computed for the regions at each level and are used for recognition. The recognition is based on region features without regard to region level. The algorithm was designed for the analysis of nondestructive evaluation images. In particular, it was successfully tested on x-ray images and on ultrasonic contact scan images of ceramic specimens for the detection of microstructural defects. The results of these tests are included in the paper.
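The following is a minimal sketch of a single growing pass in the spirit of the outline above: a region is grown from a seed (kernel) pixel by absorbing 4-connected neighbours until a transition is reached, approximated here by an intensity-difference limit. The merging of regions that share a boundary and the multi-level repetition are omitted; the names and the homogeneity criterion are assumptions, not the SRG algorithm itself.

```cpp
#include <cstdlib>
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

// Grow one region from a seed pixel; returns a label image (1 = inside region).
std::vector<unsigned char> growRegion(const std::vector<unsigned char>& img,
                                      std::size_t width, std::size_t height,
                                      std::size_t seedX, std::size_t seedY,
                                      int maxDiff)
{
    std::vector<unsigned char> label(img.size(), 0);
    std::queue<std::pair<std::size_t, std::size_t>> frontier;
    frontier.push({seedX, seedY});
    label[seedY * width + seedX] = 1;
    const unsigned char seedVal = img[seedY * width + seedX];

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
            long nx = static_cast<long>(x) + dx[k];
            long ny = static_cast<long>(y) + dy[k];
            if (nx < 0 || ny < 0 ||
                nx >= static_cast<long>(width) || ny >= static_cast<long>(height))
                continue;
            std::size_t idx = static_cast<std::size_t>(ny) * width +
                              static_cast<std::size_t>(nx);
            // Stop at a transition: the neighbour differs too much from the kernel value.
            if (!label[idx] && std::abs(int(img[idx]) - int(seedVal)) <= maxDiff) {
                label[idx] = 1;
                frontier.push({static_cast<std::size_t>(nx), static_cast<std::size_t>(ny)});
            }
        }
    }
    return label;
}
```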
A system for 3D gauging of small fibers has been developed for process monitoring. The basic hardware consists of a pair of orthogonally positioned 2048-element linear cameras, an IBM PC-compatible Pentium computer with a frame grabber, a stepper motor and associated hardware for translating the fiber, a bright-field light source, and special optics. The fiber is moved vertically past the two cameras as they scan. The computer acquires each scan line, processes it, and then issues control signals to the stepper motor. Several different image processing operations are used to minimize the effects of illumination nonuniformity, since fibers will sometimes have low contrast due to their small size. There are two sources of illumination variation, spatial and temporal, which are processed independently. Image analysis is performed to provide 3D fiber shape characteristics.
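The two corrections can be sketched as follows: a temporal gain compensates for overall lamp drift from one scan line to the next, while a per-pixel spatial gain removes the fixed illumination and sensitivity profile along the line. The names and normalization targets are assumptions, and in practice the temporal estimate would be computed from background pixels only so that the fiber itself does not bias it.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// flatField: per-pixel background profile of the line camera recorded with no
// fiber in view. targetLevel: nominal background gray level after correction.
std::vector<float> correctScanLine(const std::vector<float>& line,
                                   const std::vector<float>& flatField,
                                   float targetLevel)
{
    // Temporal component: overall brightness of this scan relative to the
    // brightness at the time the flat-field profile was recorded.
    float flatMean = std::accumulate(flatField.begin(), flatField.end(), 0.0f) /
                     static_cast<float>(flatField.size());
    float lineMean = std::accumulate(line.begin(), line.end(), 0.0f) /
                     static_cast<float>(line.size());
    float temporalGain = flatMean / lineMean;

    std::vector<float> out(line.size());
    for (std::size_t i = 0; i < line.size(); ++i) {
        // Spatial component: per-pixel illumination/sensitivity profile.
        out[i] = line[i] * temporalGain * (targetLevel / flatField[i]);
    }
    return out;
}
```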