Virtual colonoscopy provides a safe, minimally invasive approach to detecting colonic polyps using medical imaging and computer graphics technologies. Residual stool and fluid are problematic for optimal viewing of the colonic mucosa. Electronic cleansing techniques combining bowel preparation, oral contrast agents, and image segmentation were developed to extract the colon lumen from computed tomography (CT) images of the colon. In this paper, we present a new electronic colon cleansing technology, which employs a hidden Markov random field (MRF) model to integrate neighborhood information and thereby overcome the non-uniformity problems within the tagged stool/fluid regions. Prior to CT imaging, the patient undergoes a bowel preparation. A statistical maximum a posteriori (MAP) method was developed to identify the enhanced regions of residual stool/fluid. The method utilizes a hidden MRF Gibbs model to integrate spatial information into the Expectation-Maximization (EM) model-fitting MAP algorithm. The algorithm estimates the model parameters and segments the voxels iteratively in an interleaved manner, converging to a solution where the model parameters and voxel labels are stabilized within a specified criterion. Experimental results are promising.
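The interleaved estimation-and-labeling loop can be illustrated with a minimal sketch. The Python snippet below shows one schematic way to alternate a MAP labeling step under a Potts-style MRF prior with re-estimation of Gaussian class parameters; the function name, prior weight, class model, and stopping rule are illustrative assumptions, not the implementation described in the abstract.

```python
import numpy as np

def hmrf_map_segment(volume, n_classes=3, beta=1.0, n_iters=10):
    """Schematic MAP segmentation with a hidden MRF (Potts) prior.

    Labels are updated by maximizing the local posterior (ICM-style),
    then Gaussian class parameters are re-fit from the current labels,
    iterating until the labels stabilize. Parameter choices are
    illustrative, not the paper's settings.
    """
    flat = volume.ravel()
    # Initialize class means at evenly spaced percentiles, equal variances.
    means = np.percentile(flat, np.linspace(10, 90, n_classes))
    variances = np.full(n_classes, flat.var() / n_classes + 1e-6)
    labels = np.argmin(np.abs(flat[:, None] - means[None, :]),
                       axis=1).reshape(volume.shape)

    def neighbor_disagreement(lab, k):
        # Per voxel, count 6-connected neighbors whose label differs from k
        # (Potts energy). Boundaries wrap via np.roll; fine for a sketch.
        cost = np.zeros(lab.shape, dtype=float)
        for axis in range(lab.ndim):
            for shift in (1, -1):
                cost += (np.roll(lab, shift, axis=axis) != k)
        return cost

    for _ in range(n_iters):
        # MAP step: per voxel, pick the label minimizing
        # Gaussian data term + beta * neighborhood disagreement.
        energies = []
        for k in range(n_classes):
            data = (0.5 * np.log(2 * np.pi * variances[k])
                    + (volume - means[k]) ** 2 / (2 * variances[k]))
            energies.append(data + beta * neighbor_disagreement(labels, k))
        new_labels = np.argmin(np.stack(energies), axis=0)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # M step: re-estimate the Gaussian parameters of each class.
        for k in range(n_classes):
            vals = volume[labels == k]
            if vals.size:
                means[k] = vals.mean()
                variances[k] = vals.var() + 1e-6
    return labels
```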
We have developed a method that automatically identifies, sorts by size, and displays the areas of the colon surface not visualized during initial endoscopic navigation. While complete surface visualization is possible, we demonstrate that not all of these missed patches have to be reviewed to detect clinically significant colon polyps. CT scans were performed on 147 patients and volunteers after bowel preparation and colon distention with CO2. After automatic segmentation and electronic cleansing of the colon lumen, the medial axis (centerline) is extracted. Volume-rendered fly-through along the centerline is performed and visualized surfaces are marked. To simulate optical colonoscopy, the virtual camera is passed in the antegrade direction. For virtual colonoscopy, the camera is passed both antegrade and retrograde, and the combined visible surface voxel count is recorded. After both fly-throughs, the total visualized surface is recorded, and all 'patches' of connected surface area not yet seen are identified, measured, sorted by size, and counted. Clinically significant patches, defined as those whose smallest diameter exceeds 5 mm, are sequentially visualized by stepping through the sorted list until the patch diameter falls to 5 mm. By enabling endoscopic navigation in both antegrade and retrograde directions, virtual colonoscopy is able to evaluate behind haustral folds and around sharp bends, thereby visualizing significantly more surface area than optical colonoscopy. Furthermore, automatically marking the visualized surface area and identifying and viewing unseen patches allows examination of all clinically significant surfaces of the colon.
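As a rough illustration of how unseen surface "patches" might be grouped, measured, and ranked, the sketch below labels connected components of unseen surface voxels and sorts them by a crude bounding-box diameter proxy. The actual system measures patch diameters on the reconstructed colon surface, so the inputs, connectivity, and diameter estimate here are assumptions for demonstration only.

```python
import numpy as np
from scipy import ndimage

def unseen_patches(surface_mask, seen_mask, voxel_size_mm=1.0, min_diam_mm=5.0):
    """Group unseen surface voxels into connected patches and rank by size.

    surface_mask, seen_mask: boolean 3D arrays marking colon-surface voxels
    and voxels already visualized during the fly-throughs (illustrative inputs).
    """
    unseen = surface_mask & ~seen_mask
    # Face-connected (6-connected) components by default in 3D.
    labeled, n = ndimage.label(unseen)
    patches = []
    for idx in range(1, n + 1):
        coords = np.argwhere(labeled == idx)
        # Crude diameter proxy: largest bounding-box extent in mm.
        # (The paper's criterion is the smallest in-surface diameter.)
        extent_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * voxel_size_mm
        patches.append((idx, coords.shape[0], float(extent_mm.max())))
    # Largest first; review can stop once the proxy drops below the threshold.
    patches.sort(key=lambda p: p[2], reverse=True)
    return [p for p in patches if p[2] > min_diam_mm]
```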
We propose an interactive electronic biopsy technique for more accurate colon cancer diagnosis using advanced volume rendering technologies. The volume rendering technique defines a transfer function that maps different ranges of sample values in the original volume data to different colors and opacities, so that the interior structure of a polyp can be clearly recognized by eye. Specifically, we provide a user-friendly interface that lets physicians modify the parameters of the transfer function interactively and observe the interior structures of abnormalities. Furthermore, to speed up the volume rendering procedure, we propose an efficient space-leaping technique that exploits the observation that the virtual camera parameters are often fixed while the physician modifies the transfer function. In addition, we provide a tool that displays the original 2D CT image at the current 3D camera position, so that the physician can double-check the interior structure of a polyp against the density variation in the corresponding CT image for confirmation. Compared with traditional biopsy during optical colonoscopy, our method is more flexible and noninvasive, and therefore carries no procedural risk.
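A transfer function of the kind described can be sketched as a piecewise-linear map from sample intensity to color and opacity. The control points below are invented for illustration and are not clinically tuned settings from the paper.

```python
# Illustrative piecewise-linear transfer function: map CT intensities (HU)
# to RGBA so that air is transparent and denser interior structure stands out.
CONTROL_POINTS = [
    # (intensity, R, G, B, opacity) -- made-up demonstration values
    (-1000, 0.0, 0.0, 0.0, 0.00),  # air: fully transparent
    (-200,  0.8, 0.6, 0.5, 0.05),  # fat / outer tissue: nearly transparent
    (40,    0.9, 0.4, 0.3, 0.30),  # soft tissue
    (300,   1.0, 1.0, 0.9, 0.80),  # dense / enhanced interior structure
]

def transfer_function(intensity):
    """Return an (R, G, B, A) tuple for a scalar sample value."""
    pts = CONTROL_POINTS
    if intensity <= pts[0][0]:
        return tuple(pts[0][1:])
    if intensity >= pts[-1][0]:
        return tuple(pts[-1][1:])
    for (x0, *c0), (x1, *c1) in zip(pts, pts[1:]):
        if x0 <= intensity <= x1:
            t = (intensity - x0) / (x1 - x0)
            # Linear interpolation of color and opacity between control points.
            return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))
```

In an interactive setting, the physician would edit the control points and the renderer would re-shade the same view, which is exactly what makes the fixed-camera space-leaping optimization attractive.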
Virtual colonoscopy on powerful workstations has the distinct advantage of interactive navigation, as opposed to passive viewing of cine loops or pre-computed movies. Because of the prohibitive cost of such hardware, only passive displays have been feasible for the wide-scale deployment required for mass screening. The purpose of our work is to evaluate whether low-cost commodity hardware can serve as an effective tool for interactive colonographic navigation compared with expensive workstations.
In our previous work, we developed a virtual colonoscopy system on a high-end 16-processor SGI Challenge with an expensive hardware graphics accelerator. The goal of this work is to port the system to a low-cost PC in order to increase its availability for mass screening. Recently, Mitsubishi Electric has developed a volume-rendering PC board, called VolumePro, which includes 128 MB of RAM and a vg500 rendering chip. The vg500 chip, based on Cube-4 technology, can render a 256³ volume at 30 frames per second. High image quality of volume rendering inside the colon is guaranteed by the full lighting model and 3D interpolation supported by the vg500 chip. However, the VolumePro board lacks some features required by our interactive colon navigation. First, VolumePro currently does not support perspective projection, which is paramount for interior colon navigation. Second, the patient colon data is usually much larger than 256³ and cannot be rendered in real time. In this paper, we present our solutions to these problems, including simulated perspective projection and axis-aligned boxing techniques, and demonstrate the high performance of our virtual colonoscopy system on low-cost PCs.
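One plausible reading of the axis-aligned boxing idea is sketched below: the oversized volume is partitioned into overlapping axis-aligned boxes no larger than the hardware limit, and only the box containing the current camera position is handed to the renderer. The overlap value, box layout, and selection policy are assumptions for illustration, not the paper's exact scheme.

```python
def axis_aligned_boxes(volume_shape, max_dim=256, overlap=8):
    """Partition a large volume into axis-aligned boxes of at most max_dim
    voxels per side (e.g., the 256-voxel VolumePro limit), with a small
    overlap so samples near box faces remain consistent."""
    boxes = []
    step = max_dim - overlap
    for z in range(0, volume_shape[0], step):
        for y in range(0, volume_shape[1], step):
            for x in range(0, volume_shape[2], step):
                boxes.append((
                    (z, min(z + max_dim, volume_shape[0])),
                    (y, min(y + max_dim, volume_shape[1])),
                    (x, min(x + max_dim, volume_shape[2])),
                ))
    return boxes

def box_containing(camera_pos, boxes):
    """Return the first box whose extent contains the camera position."""
    for box in boxes:
        if all(lo <= c < hi for c, (lo, hi) in zip(camera_pos, box)):
            return box
    return None
```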
Fixed targets such as bridges, airfields, and buildings are of military significance, and their value is constantly being appraised as the battle scenario evolves. For example, a building thought to be of no significance may be reappraised, through intelligence reports, as a military command center. The ability to quickly strike these targets with a minimal amount of a priori information is necessary. The requirements placed on such a system are: (1) Rapid turnaround time from the moment the decision is made to attack; depending on the user organization, this time ranges from fifteen minutes to twelve hours. (2) Minimal a priori target information; there is likely to be no imagery database of the target, and the system may be required to operate with as little information as an overhead photograph. (3) Real-time recognition of the target; terminal guidance of the weapons delivery system to a specified destructive aimpoint will be impacted by the recognition system. (4) Flexibility to attack a variety of targets; a database of known high value fixed targets (HVFT) may be stored, but the sudden inclusion of new targets must be accommodated. This paper discusses a real-time implementation of a model-based approach to automatically recognize high value fixed targets in forward looking infrared (FLIR) imagery. This approach generates a predictive model of the expected target features to be found in the image, extracts those feature types from the image, and matches the predictive model with the image features, as sketched below. A generic approach to the description of the target features has been taken to allow rapid preparation of the models from minimal a priori target information. The real-time aspect has been achieved by implementing the system on a massively parallel single-instruction, multiple-data (SIMD) architecture. An overview of an entire system approach to attack high value fixed targets will be discussed. The automatic target recognizer (ATR), which is part of this system, will be discussed in detail, and results of the ATR operating against HVFT in FLIR imagery will be shown.
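The predict-extract-match loop can be caricatured with a small sketch: feature locations predicted from the target model at the expected sensor geometry are gated against features extracted from the FLIR image, and the matched fraction serves as a crude recognition score. The gating distance and scoring rule are illustrative assumptions, not the fielded ATR's matcher.

```python
import numpy as np

def match_model_to_image(predicted_feats, image_feats, gate_px=10.0):
    """Score a predictive target model against extracted image features.

    predicted_feats, image_feats: lists of (row, col) pixel locations
    (illustrative feature representation). Each predicted feature is paired
    with the nearest extracted feature within the gating distance; the
    fraction matched is returned as a score in [0, 1].
    """
    if not len(predicted_feats) or not len(image_feats):
        return 0.0
    image_arr = np.asarray(image_feats, dtype=float)
    matched = 0
    for p in predicted_feats:
        d = np.linalg.norm(image_arr - np.asarray(p, dtype=float), axis=1)
        if d.min() <= gate_px:
            matched += 1
    return matched / len(predicted_feats)
```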