This PDF file contains the front matter associated with SPIE
Proceedings Volume 6516, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
Appropriate use of Information and Communication Technology (ICT) and Mechatronic (MT) systems is considered by many experts to be a significant contribution to improving workflow and quality of care in the Operating Room (OR). This will require a suitable IT infrastructure as well as communication and interface standards, such as DICOM and suitable extensions, to allow data interchange between surgical system components in the OR. A conceptual design of such an infrastructure, i.e. a Therapy Imaging and Model Management System (TIMMS), is introduced in this paper.
A TIMMS should support the essential functions that enable and advance image-guided and, in particular, patient-model-guided therapy. Within this concept, the image-centric world view of classical PACS technology is complemented by a model-centric world view. Such a view is founded in the special modelling needs of an increasing number of modern surgical interventions, as compared to the imaging-intensive working mode of diagnostic radiology, for which PACS was originally conceptualised and developed.
A proper design of a TIMMS, taking into account modern software engineering principles such as service-oriented architecture, will clarify the right position of interfaces and relevant standards for a Surgical Assist System (SAS) in general and its components specifically. Such a system needs to be designed to provide a highly modular structure. Modules may be defined on different granularity levels. A first list of components (e.g. high- and low-level modules) comprising engines and repositories of an SAS, which should be integrated by a TIMMS, is introduced in this paper.
Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient Operating Room (OR). The DICOM Working Group 24 (WG24) has been established to develop DICOM objects and services related to Image Guided Surgery (IGS). To determine these standards, it is important to define day-to-day, step-by-step surgical workflow practices and to create surgical workflow models per procedure or per variable case.
A well-defined workflow and a high-fidelity patient model will be the basis of activities for both radiation therapy and surgery. Considering the present and future requirements for surgical planning and intervention, such a patient model must be n-dimensional, where n may include the spatial and temporal dimensions as well as a number of functional variables.
As the boundaries between radiation therapy, surgery and interventional radiology are becoming less well-defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG24 should, therefore, also be to serve the therapeutic disciplines by enabling modelling technology to be based on standards.
The generation, storage, transfer, and representation of image data in radiology are standardized by DICOM. To cover the needs of image-guided surgery, or computer-assisted surgery in general, patient information beyond image data must be handled. A large number of objects must be defined in DICOM to address the needs of surgery. We propose an analysis process based on Surgical Workflows that helps to identify these objects together with the use cases and requirements motivating their specification. As a first result, we confirmed the need for specifying the representation and transfer of geometric models. The analysis of Surgical Workflows has shown that geometric models are widely used to represent planned procedure steps, surgical tools, anatomical structures, or prostheses in the context of surgical planning, image-guided surgery, augmented reality, and simulation. Currently, these models are stored and transferred in several file formats devoid of contextual information. The standardization of data types including contextual information, together with specifications for the handling of geometric models, will allow broader use of such models. This paper explains the specification process leading to Geometry Mesh Service Object Pair classes. This process can serve as a template for the definition of further DICOM classes.
Purpose: to build 20 ORs equipped with independent video acquisition and broadcasting systems and powerful LAN connectivity. Methods: a digital, PC-controlled video matrix has been installed in each OR. The LAN connectivity has been developed to allow data to enter the OR and to provide high-speed connectivity to a server and to broadcasting devices. Video signals are broadcast within the OR. Fixed inputs and five additional video inputs have been placed in the OR. Images can be stored locally on a high-capacity HDD and a DVD recorder. Images can also be stored in a central archive for future retrieval and reference. Ethernet plugs have been placed within the OR to acquire images and data from the Hospital LAN; the OR is connected to the server/archive using a dedicated optical fiber. Results: 20 independent digital ORs have been built. Each OR is "self contained" and images can be digitally managed and broadcast. Security requirements concerning both image visualization and electrical safety have been met, and each OR is fully integrated in the Hospital LAN. Conclusions: the digital ORs were fully implemented; they fulfill surgeons' needs in terms of video acquisition and distribution and provide high-quality video for each kind of surgery in a major hospital.
The amount and heterogeneity of data in biomedical research, notably in transnational research, require new methods for the collection, presentation, and analysis of information. Important data from laboratory experiments as well as patient trials are available as images. Thus, the integration and processing of image data represent a crucial component of information systems in biomedical research. The Charité Medical School in Berlin has established a new information service center for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC) together with the German Research Foundation (DFG). The aims of this project are (i) to improve the availability of raw data, (ii) to establish an infrastructure for clinical trials, (iii) to monitor the occurrence of rare disease patterns, and (iv) to establish a quality assurance system. Major diagnostic procedures in medicine are based on the processing and analysis of image data. In diagnostic pathology, the availability of automated slide scanners provides the opportunity to digitize entire microscopic slides. The processing, presentation, and analysis of these image data are called virtual microscopy. The integration of this new technology into the OpEN.SC system and the link to other heterogeneous data of individual patients represent a major technological challenge. Thus, new ways of communication between clinical and scientific partners have to be established and will be promoted by the project. The technological basis of the repository is web services, enabling a scalable and adaptable system. HL7 and DICOM are considered the main medical standards of communication.
A flexible, scalable, high-resolution display system is presented to support the next generation of radiology reading rooms or interventional radiology suites. The project aims to create an environment for radiologists that will simultaneously facilitate image interpretation, analysis, and understanding while lowering visual and cognitive stress. Displays currently in use present radiologists with technical challenges in exploring complex datasets, which we seek to address. These include limits on resolution and brightness, display and ambient lighting differences, and degrees of complexity, in addition to side-by-side comparison of time-variant and 2D/3D images.
We address these issues through a scalable projector-based system that uses our custom-designed geometrical and photometrical calibration process to create a seamless, bright, high-resolution display environment that can reduce the visual fatigue commonly experienced by radiologists. The system we have designed uses an array of casually aligned projectors to cooperatively increase overall resolution and brightness. Images from a set of projectors in their narrowest zoom are combined at a shared projection surface, thus increasing the global "pixels per inch" (PPI) of the display environment.
Two primary challenges - geometric calibration and photometric calibration - remained to be resolved before our high-resolution display system could be used in a radiology reading room or procedure suite. In this paper we present a method that accomplishes those calibrations and creates a flexible high-resolution display environment that appears seamless, sharp, and uniform across different devices.
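As a hedged illustration of the geometric side of such a calibration (the authors' actual procedure may differ), the sketch below pre-warps one projector's contribution onto a shared planar display surface using a homography estimated from corresponding points; the point lists and image file names are placeholders.

```python
# Sketch of homography-based geometric alignment for one projector in a
# casually aligned array (illustrative only; not the paper's actual method).
import numpy as np
import cv2  # OpenCV

# Hypothetical correspondences: calibration target positions in the projector's
# framebuffer vs. their observed positions on the shared projection surface.
proj_pts = np.array([[0, 0], [1023, 0], [1023, 767], [0, 767]], dtype=np.float32)
wall_pts = np.array([[120, 80], [1100, 95], [1085, 820], [105, 810]], dtype=np.float32)

# Estimate the planar homography mapping wall coordinates into projector
# coordinates, then pre-warp the desired image so it lands correctly on the wall.
H, _ = cv2.findHomography(wall_pts, proj_pts)

desired = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
warped = cv2.warpPerspective(desired, H, (1024, 768))

# A feathered per-pixel blend mask would normally be applied in the tile
# overlaps for photometric blending; uniform intensity is assumed here.
cv2.imwrite("projector_frame.png", warped)
```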
The Medical Image Processing Group (MIPG) at the University of Pennsylvania has been developing (and distributing
with source code) medical image analysis and visualization software systems for a long period of time. Our most recent
system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken
place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing
standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next
generation of 3DVIEWNIX. CAVASS will be freely available and open source, and is integrated with toolkits such as ITK
and VTK. CAVASS runs on Windows, Unix, and Linux while sharing a single code base. Rather than requiring expensive
multiprocessor systems, it seamlessly provides parallel processing of the more time-consuming algorithms via inexpensive
COWs (Clusters of Workstations). Most importantly, CAVASS is directed at the visualization, processing, and
analysis of 3D and higher dimensional medical imagery, so support for DICOM data and the efficient implementation of
algorithms is given paramount importance.
Computer Aided Diagnosis (CAD), coupled with the physician's knowledge, can improve the accuracy of clinical decisions.
However, much of the CAD software developed to date has no means of integrating its results with a picture archiving and
communication system (PACS). This obstacle hinders the extensive use of independent CAD results within a more
streamlined diagnosis workflow. In this paper, we demonstrate a universal PACS-CAD toolkit that can seamlessly
integrate independent CAD results with a clinical PACS. The PACS-CAD toolkit consists of two versions, a DICOM
Secondary Capture (DICOM-SC) version and a DICOM-IHE version, to accommodate various PACS. The DICOM-SC
version toolkit installed on a CAD workstation converts the screen shot of CAD results to a DICOM image file for
storing in a PACS server and displaying on PACS workstations. The DICOM-IHE version toolkit follows DICOM and
IHE standards using DICOM Structured Report and Post-Processing Workflow Profiles; thus, results from various CAD
software can be integrated into diagnosis workflow of a PACS having DICOM and IHE-compliance and, most
importantly, these quantified CAD results can be directly queried for and retrieved from within PACS for future data
mining applications. The successful implementation of this toolkit can greatly ease the extensive use of various CAD
results in the clinical diagnosis workflow.
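To make the DICOM-SC path concrete, here is a minimal, hedged sketch (not the authors' toolkit) of wrapping a CAD screenshot, held as an 8-bit grayscale array, into a DICOM Secondary Capture object with pydicom (2.x-style API); the UIDs and patient fields are placeholders and would in practice be copied from the source study.

```python
# Sketch: wrap a CAD result screenshot as a DICOM Secondary Capture object.
import datetime
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import generate_uid, ExplicitVRLittleEndian

SC_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def cad_screenshot_to_sc(pixels: np.ndarray, patient_id: str, out_path: str) -> None:
    """pixels: H x W uint8 grayscale screenshot of the CAD result."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = SC_SOP_CLASS
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ds = Dataset()
    ds.file_meta = meta
    ds.SOPClassUID = SC_SOP_CLASS
    ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ds.Modality = "OT"                    # "other" is customary for SC objects
    ds.ConversionType = "WSD"             # workstation-created image
    ds.PatientID = patient_id             # in practice, copied from the source study
    ds.StudyInstanceUID = generate_uid()  # in practice, reuse the source StudyInstanceUID
    ds.SeriesInstanceUID = generate_uid()
    ds.ContentDate = datetime.date.today().strftime("%Y%m%d")

    ds.SamplesPerPixel = 1
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.Rows, ds.Columns = pixels.shape
    ds.BitsAllocated = ds.BitsStored = 8
    ds.HighBit = 7
    ds.PixelRepresentation = 0
    ds.PixelData = pixels.astype(np.uint8).tobytes()

    ds.is_little_endian = True            # encoding flags expected by pydicom 2.x
    ds.is_implicit_VR = False
    ds.save_as(out_path, write_like_original=False)
```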
The Image-Guided Surgery Toolkit (IGSTK) is an open source C++ software library that provides the basic components
needed to develop image-guided surgery applications. The focus of the toolkit is on robustness using a state machine
architecture. This paper presents an overview of the project based on a recent book which can be downloaded from
igstk.org. The paper includes an introduction to open source projects, a discussion of our software development process
and the best practices that were developed, and an overview of requirements. The paper also presents the architecture
framework and main components. This presentation is followed by a discussion of the state machine model that was
incorporated and the associated rationale. The paper concludes with an example application.
Pathology, the medical specialty charged with the evaluation of macroscopic and microscopic aspects of disease, is increasingly turning to digital imaging. While the conventional tissue blocks and glass slides form an "archive" that pathology departments must maintain, digital images acquired from microscopes or digital slide scanners are increasingly used for telepathology, consultation, and intra-facility communication.
Since many healthcare facilities are moving to "enterprise PACS" with departments in addition to radiology using the infrastructure of such systems, some understanding of the potential of whole-slide digital images is important. Network and storage designers, in particular, are very likely to be impacted if a significant number of such images are to be moved on, or stored (even temporarily) in, enterprise PACS.
As an example, a typical commercial whole-slide imaging system generates roughly 15 gigabytes per slide scanned (per focal plane). Many of these whole-slide scanners have a throughput of 1000 slides per day. If that full capacity is used and all the resulting digital data are moved to the enterprise PACS, it amounts to 15 terabytes per day, roughly the amount of data a large radiology department might generate in a year or two.
This paper will review both the clinical scenarios of whole-slide imaging as well as the resulting data volumes. The author will emphasize the potential PACS infrastructure impact of such huge data volumes.
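The data-volume arithmetic above is straightforward to reproduce; the short sketch below simply restates it, using the per-slide size and scanner throughput quoted in the abstract, with the annual figure being a plain extrapolation at full utilization.

```python
# Back-of-the-envelope data volume for whole-slide imaging (figures from the abstract).
GB_PER_SLIDE = 15          # per focal plane, typical commercial scanner
SLIDES_PER_DAY = 1000      # stated scanner throughput

daily_tb = GB_PER_SLIDE * SLIDES_PER_DAY / 1000    # 15 TB per day
yearly_pb = daily_tb * 365 / 1000                  # ~5.5 PB per year at full capacity

print(f"{daily_tb:.0f} TB/day, ~{yearly_pb:.1f} PB/year at full utilization")
```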
To improve radiologists' performance in lesion detection and diagnosis on 3D medical image datasets, we conducted a pilot study to test the viability and efficiency of stereo display for lung nodule detection and classification. Using our previously developed stereo compositing methods, stereo image pairs were prestaged and precalculated from CT slices for real-time interactive display. Three display modes (i.e., stereoscopic 3D, orthogonal MIP, and slice-by-slice) were compared for lung nodule detection, and a total of eight radiologists participated in this pilot study to interpret the images. The performance of lung nodule detection was analyzed and compared between the modes using FROC analysis. Subjective assessment indicates that stereo display was well accepted by the radiologists, despite some uncertainty about its benefits due to the novelty of the display. The FROC analysis indicates a trend that, among the three display modes, stereo display resulted in the best performance for nodule detection, followed by slice-based display, although no statistically significant difference was shown between the three modes. The stereo display of a stack of thin CT slices has the potential to clarify three-dimensional structures while avoiding ambiguities due to tissue superposition. Few studies, however, have addressed the actual utility of stereo display for medical diagnosis. Our preliminary results suggest a potential role for stereo display in improving radiologists' performance in medical detection and diagnosis, and also indicate some factors likely to affect performance with the new display, such as its novelty, training effects from projected-radiography interpretation, and confidence with the new technology.
In this paper, a modified medical image compression algorithm using cubic spline interpolation (CSI) is presented for
telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and to
achieve compression. It has been shown in the literature that CSI can be combined with the JPEG algorithms to develop a modified JPEG codec, which obtains a higher compression ratio and a better quality of reconstructed image than standard JPEG. However, this modified JPEG codec loses some high-frequency components of medical images during the compression process. To minimize the drawback arising from the loss of these high-frequency components, this paper further applies bit-plane compensation to the modified JPEG codec. The bit-plane compensation algorithm used in this paper is modified from the JBIG2 standard. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 20-30% with similar visual quality. This system can reduce the load on telecommunication networks and is well suited for low bit-rate telemedicine
applications.
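As an illustrative sketch of the general idea (cubic-spline subsampling followed by standard JPEG coding), and not the authors' exact codec or its JBIG2-style bit-plane compensation step, the snippet below uses SciPy's cubic-spline zoom and Pillow's JPEG encoder; the file name and quality setting are placeholders.

```python
# Cubic-spline subsampling before JPEG coding (sketch; the paper's codec also
# adds bit-plane compensation, which is omitted here).
import io
import numpy as np
from scipy.ndimage import zoom       # order=3 -> cubic spline interpolation
from PIL import Image

img = np.asarray(Image.open("ct_slice.png").convert("L"), dtype=np.float64)

# Subsample by 2 in each dimension with a cubic spline (the CSI step).
small = zoom(img, 0.5, order=3)

# Encode the subsampled image with standard JPEG.
buf = io.BytesIO()
Image.fromarray(np.clip(small, 0, 255).astype(np.uint8)).save(buf, format="JPEG", quality=75)
print("compressed size:", buf.getbuffer().nbytes, "bytes")

# Decoding reverses the steps: JPEG-decode, then cubic-spline zoom back up by 2.
decoded = zoom(np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64), 2.0, order=3)
```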
Building effective content-based image retrieval (CBIR) systems involves the combination of image creation, storage, security, transmission, analysis, evaluation, feature extraction, and feature combination in order to store and retrieve medical images effectively. This requires the involvement of a large community of experts across several fields. We have created a CBIR system called Archimedes which brings this community together without requiring disclosure of sensitive details. Archimedes' system design enables researchers to upload their feature sets and quickly compare the effectiveness of their methods against other stored feature sets. Additionally, research into the techniques used by radiologists is possible in Archimedes through double-blind radiologist comparisons based on their annotations and feature markups. This research archive contains the essential technologies of secure transmission and storage, textual and feature searches, spatial searches, annotation searching, filtering of result sets, feature creation, and bulk loading of features, while creating a repository and testbed for the community.
We present the design, development, and
pilot-deployment experiences of MIMI, a web-based, Multi-modality Multi-Resource Information Integration environment
for biomedical core facilities.
This is an easily customizable, web-based software tool that integrates scientific
and administrative support for a biomedical core facility involving
a common set of entities: researchers; projects; equipment and devices; support staff; services; samples and materials; experimental workflow; and large and complex data. With this software, one can register users, manage projects, schedule resources, bill for services, perform site-wide searches, and archive, back up, and share data. With its customizable, expandable, and scalable characteristics,
MIMI not only provides a cost-effective solution, unavailable in the marketplace, to the overarching data management problem of biomedical core facilities, but also
lays a foundation for data federation to facilitate and support discovery-driven research.
We present a medical image and medical record database for the storage, research, transmission, and evaluation of medical images, as well as tele-medicine applications. Any medical image from a source that supports the DICOM standard can be stored and accessed, as well as associated analysis and annotations. Information and image retrieval can be done based on patient information, date, doctor's annotations, features in the images, or a spatial combination of features. Secure access and transmission are addressed for tele-medicine applications. This database application follows all HIPAA regulations.
This article describes the use of a medical image retrieval system on a database of 16,000 fractures, selected from
surgical routine over several years. Image retrieval has been a very active domain of research for several years.
It was frequently proposed for the medical domain, but only few running systems were ever tested in clinical
routine. For the planning of surgical interventions after fractures, x-ray images play an important role. The
fractures are classified according to the exact fracture location, plus whether and to what degree the fracture damages articulations, to determine how complicated a repair will be. Several classification systems for fractures
exist and the classification plus the experience of the surgeon lead in the end to the choice of surgical technique
(screw, metal plate, ...). This choice is strongly influenced by the experience and knowledge of the surgeons with
respect to a certain technique. The goal of this article is to describe a prototype that supplies cases similar to a given example in order to help treatment planning and find the most appropriate technique for a surgical intervention.
Our database contains over 16,000 fracture images before and after a surgical intervention. We use an image
retrieval system (GNU Image Finding Tool, GIFT) to find cases/images similar to an example case currently
under observation. Problems encountered are varying illumination of images as well as strong anatomic differences
between patients. Regions of interest are usually small and the retrieval system needs to focus on this region.
Results show that GIFT is capable of supplying similar cases, particularly when using relevance feedback, on
such a large database. Usual image retrieval is based on a single image as the search target, but for this application we have to select images by case, as similar cases rather than similar images need to be found. A few false positive cases
often remain in the results but they can be sorted out quickly by the surgeons.
Image retrieval can well be used for the planning of operations by supplying similar cases. A variety of challenges have been identified and partly solved (varying luminosity, small regions of interest, case-based instead of image-based retrieval). This article mainly presents a case study to identify potential benefits and problems. Several
steps for improving the system have been identified as well and will be described at the end of the paper.
Medical centers collect and store a significant amount of valuable data pertaining to patients' visits in the form of medical free text. In addition, standardized diagnosis codes (International Classification of Diseases, Ninth Revision, Clinical Modification: ICD9-CM) related to those dictated reports are usually available. In this work, we have created a framework in which image searches can be initiated through a combination of free-text reports and ICD9 codes. This framework enables more comprehensive searches of existing large sets of patient data in a systematic way. The free-text search is enriched by computer-aided inclusion of additional search terms from a thesaurus. This combination of enriched search allows users to access a larger set of relevant results from a patient-centric PACS in a simpler way. Therefore, such a framework is of particular use in tasks such as gathering images for desired patient populations, building disease models, and so on. As the motivating application of our framework, we implemented a search engine. This search engine processed two years of patient data from the OSU Medical Center's Information Warehouse and identified lung nodule location information using a combination of UMLS Metathesaurus-enhanced text report searches along with ICD9 code searches on patients who had been discharged. Five different queries with various ICD9 codes involving lung cancer were carried out on 172,552 cases. Each search was completed in under a minute on average per ICD9 code, and the inclusion of the UMLS thesaurus increased the number of relevant cases by 45% on average.
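A hedged, toy illustration of the combined-query idea follows; the in-memory report records, the tiny synonym map standing in for UMLS expansion, and the field names are all hypothetical.

```python
# Toy combined search: ICD-9 code filter plus thesaurus-expanded free-text match.
# The synonym map is a small stand-in for UMLS Metathesaurus expansion.
reports = [
    {"mrn": "001", "icd9": ["162.9"], "text": "A 6 mm pulmonary nodule in the right upper lobe."},
    {"mrn": "002", "icd9": ["486"],   "text": "Consolidation consistent with pneumonia."},
    {"mrn": "003", "icd9": ["162.9"], "text": "Small coin lesion noted at the left base."},
]

SYNONYMS = {"nodule": {"nodule", "coin lesion", "mass"}}  # hypothetical expansion

def search(icd9_code: str, term: str):
    expanded = SYNONYMS.get(term, {term})
    for r in reports:
        text = r["text"].lower()
        if icd9_code in r["icd9"] and any(s in text for s in expanded):
            yield r["mrn"]

# Without expansion only case 001 matches; with expansion 003 is also found.
print(list(search("162.9", "nodule")))   # ['001', '003']
```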
Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and
communication systems (PACS). CBIR has a potentially strong impact in diagnostics, research, and education. Research
successes that are increasingly reported in the scientific literature, however, have not made significant inroads as
medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed
without sufficient analytical reasoning to the inability of these applications to overcome the "semantic gap". The
semantic gap divides the high-level scene analysis of humans from the low-level pixel analysis of computers.
In this paper, we suggest a more systematic and comprehensive view on the concept of gaps in medical CBIR research.
In particular, we define a total of 13 gaps that address the image content and features, as well as the system performance
and usability. In addition to these gaps, we identify 6 system characteristics that impact CBIR applicability and
performance. The framework we have created can be used a posteriori to compare medical CBIR systems and
approaches for specific biomedical image domains and goals and a priori during the design phase of a medical CBIR
application. To illustrate the a posteriori use of our conceptual system, we apply it, initially, to the classification of three
medical CBIR implementations: the content-based PACS approach (cbPACS), the medical GNU image finding tool
(medGIFT), and the image retrieval in medical applications (IRMA) project. We show that systematic analysis of gaps
provides detailed insight into system comparison and helps to direct future research.
Literature review is a time-consuming burden because it is hard to find relevant articles, yet it is essential because it allows researchers to find solutions to their questions and problems in previous work already performed and published by others. It is difficult to wade through documents quickly and assess their quality by looking only at the title, abstract, or even the full text. The human visual system allows us to quickly glance at images and infer the main subject of an
article and decide whether we are interested in reading more. In some cases, such as biology articles for example, figures
showing photos of experimental results quickly allow a researcher in the literature review phase to determine the quality of
the work by its results. This work describes a system for literature review that uses content-based image retrieval (CBIR)
techniques to search for relevant documents using the content of figures in a document along with relevance feedback
refinement instead of keyword search guesswork. The long-term goal is to use it as a subsystem in a content-based
document retrieval system where the figures and their captions are combined with the document's body text. This paper
describes the processing of the documents to extract available raster graphics as well as text with its layout and formatting
information intact. The process of matching a figure to its caption using this layout information is then described. While
caption-based search is implemented but not yet fully merged into the system, the figure-caption matching is complete.
Two novel modified tf-idf measures that are being considered to take into account bold/italic text, font size, and document
structure as a way to infer text importance, rather than relying on text frequency alone, are detailed mathematically and explained intuitively. CBIR queries that consist of multiple images are issued as separate queries, and their
results are then merged together.
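The standard tf-idf weight is tf(t,d) · log(N / df(t)); as a hedged sketch of how formatting cues might modulate it (the paper's two actual measures are not reproduced here), the snippet below scales each term occurrence by an assumed emphasis factor for bold or italic text before applying the idf term.

```python
# Formatting-aware tf-idf sketch: emphasized occurrences count more than plain ones.
# The emphasis weights are illustrative assumptions, not the paper's measures.
import math
from collections import Counter

EMPHASIS_WEIGHT = {"plain": 1.0, "italic": 1.5, "bold": 2.0}  # assumed factors

# Each document is a list of (term, style) pairs extracted with layout info intact.
docs = [
    [("retrieval", "bold"), ("image", "plain"), ("image", "plain")],
    [("caption", "italic"), ("image", "plain")],
    [("figure", "plain"), ("caption", "plain")],
]

def weighted_tfidf(doc, corpus):
    n_docs = len(corpus)
    df = Counter(term for d in corpus for term in {t for t, _ in d})
    tf = Counter()
    for term, style in doc:
        tf[term] += EMPHASIS_WEIGHT[style]          # emphasis-weighted term frequency
    return {t: tf[t] * math.log(n_docs / df[t]) for t in tf}

print(weighted_tfidf(docs[0], docs))
```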
The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citations Report (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. This included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e. the figure combines different images and/or graphs.
According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, 95.5% of the articles could already be retrieved by means of CBIR. The challenge for CBIR in the scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.
The decrease in reimbursement rates for radiology procedures has placed even more pressure on radiology departments
to increase their clinical productivity. Clinical faculty have less time for teaching residents, but with the advent and
prevalence of an electronic environment that includes PACS, RIS, and HIS, there is an opportunity to create electronic
teaching files for fellows, residents, and medical students. Experienced clinicians create these teaching files by selecting the most appropriate radiographic images and the clinical information relevant to that patient. Important cases are
selected based on the difficulty in determining the diagnosis or the manifestation of rare diseases. This manual process of
teaching file creation is time consuming and may not be practical under the pressure of increased demands on the
radiologist. It is the goal of this research to automate the process of teaching file creation by manually selecting key
images and automatically extracting key sections from clinical reports and laboratories. The text report is then processed
for indexing to two standard nomenclatures, UMLS and RadLex. Interesting teaching files can then be queried based
on specific anatomy and findings found within the clinical reports.
Research studies have shown that advances in computed tomography (CT) technology allow better detection of pulmonary nodules by generating higher-resolution images. However, the new technology also generates many more individual transversal reconstructions, which as a result may affect the efficiency and accuracy of the radiologists interpreting these images.
The goal of our research study is to build a content-based image retrieval (CBIR) system for pulmonary CT nodules. Currently, texture is used to quantify the image content, but any other image feature could be incorporated into the proposed system. Unfortunately, there is no texture model or similarity measure known to work best for encoding nodule texture properties or retrieving most similar nodules. Therefore, we investigated and evaluated several texture models and similarity measures with respect to nodule size, number of retrieved nodules, and radiologist agreement on the nodules' texture characteristic.
The results were generated on 90 thoracic CT scans collected by the Lung Image Database Consortium (LIDC). Every case was annotated by up to four radiologists marking the contour of nodules and assigning nine characteristics (including texture) to each identified nodule. We found that Gabor texture descriptors produce the best retrieval results regardless of the nodule size, number of retrieved items or similarity metric. Furthermore, when analyzing the radiologists' agreement on the texture characteristic, we found that when just two radiologists agreed, the average precision increased from 88% to 96% for both Gabor and Markov texture features. Moreover, once three or four radiologists agreed the precision increased to nearly 100%.
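A hedged sketch of Gabor-based retrieval in the spirit described (not the study's exact descriptor set, frequencies, or similarity metric) is given below, using scikit-image's Gabor filters and a cosine-similarity ranking; the random patches are stand-ins for segmented nodule regions.

```python
# Gabor texture descriptor and cosine-similarity retrieval for nodule patches (sketch).
import numpy as np
from skimage.filters import gabor

def gabor_descriptor(patch, frequencies=(0.1, 0.2, 0.3), n_theta=4):
    """Mean and std of Gabor response magnitudes over a small filter bank (assumed parameters)."""
    feats = []
    for f in frequencies:
        for k in range(n_theta):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

def retrieve(query_patch, database_patches, top_k=3):
    q = gabor_descriptor(query_patch)
    scores = []
    for idx, p in enumerate(database_patches):
        d = gabor_descriptor(p)
        cos = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12))
        scores.append((cos, idx))
    return sorted(scores, reverse=True)[:top_k]    # most similar nodules first

# Example with random patches standing in for segmented nodule regions.
rng = np.random.default_rng(0)
patches = [rng.random((32, 32)) for _ in range(5)]
print(retrieve(patches[0], patches))
```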
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods
are increasing in productivity due to higher-resolution imaging capabilities. A radiology core running clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed.
These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate
metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant
and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites.
Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest
and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where
multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The
centralization of metadata database management simplifies the task of adding new databases into the grid and also
decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and
implementation of a data grid metadata storage that has fault-tolerance and dynamic integration for imaging-based
clinical trials.
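To make the data access and integration (DAI) idea concrete, here is a hedged, minimal sketch of a single access layer that fans one query out to several site metadata databases and merges the results; the SQLite files, table, and column names are hypothetical stand-ins for the grid's actual databases and transport.

```python
# Minimal data-access-and-integration (DAI) sketch: one entry point, many
# site metadata databases, merged results (all names are hypothetical).
import sqlite3

SITE_DATABASES = ["site_a_metadata.db", "site_b_metadata.db"]  # placeholder files

def query_all_sites(trial_id: str):
    """Fan a single metadata query out to every site database and merge the rows."""
    merged = []
    for path in SITE_DATABASES:
        con = sqlite3.connect(path)
        try:
            rows = con.execute(
                "SELECT study_uid, roi_label, measurement "
                "FROM trial_metadata WHERE trial_id = ?", (trial_id,)
            ).fetchall()
            merged.extend((path, *row) for row in rows)
        finally:
            con.close()
    return merged

# A single web-service wrapper (the grid endpoint) would expose query_all_sites(),
# so field sites never need to talk to the individual databases directly.
```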
The EHR is a secure, real-time, point-of-care, patient-centric information resource for healthcare providers. Many
countries and regional districts have set long-term goals to build EHRs, and most EHRs are built on the
integration of different information systems with different information models and platforms. A number of hospitals in
Shanghai are also piloting the development of an EHR solution based on IHE XDS/XDS-I profiles with a
service-oriented architecture (SOA). The first phase of the project targets the Diagnostic Imaging domain and allows
seamless sharing of images and reports across the multiple hospitals. To develop EHRs for regional coordinated
healthcare, several factors should be considered in designing the architecture, one of which is security. In this paper, we
present some approaches and policies to improve and strengthen the security among the different hospitals' nodes, which
are compliant with the security requirements defined by IHE IT Infrastructure (ITI) Technical Framework. Our security
solution includes four components: Time Sync System (TSS), Digital Signature Manage System (DSMS), Data
Exchange Control Component (DECC), and Single Sign-On (SSO) System. We give a design method and implementation strategy for these security components, and then evaluate the performance and overhead of the security
services or features by integrating the security components into an image-based EHR system.
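As a hedged illustration of the role a Digital Signature Manage System plays (the actual system follows the IHE ITI requirements and is not reproduced here), the snippet below signs and verifies a report payload with RSA-PSS using the Python cryptography package; the payload and key handling are placeholders.

```python
# Sign-and-verify sketch for a shared report payload (illustrative of the DSMS role only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

report = b"<DiagnosticReport>CT chest, no acute findings.</DiagnosticReport>"

signature = private_key.sign(
    report,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The receiving hospital node verifies the signature before accepting the document.
public_key.verify(
    signature,
    report,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)  # raises InvalidSignature if the payload was altered in transit
```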
This work presents the development of an electronic infrastructure to make available a free, online, multipurpose, and multimodality medical image database. The proposed infrastructure implements a distributed architecture for the medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the datasets. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has also been used as an electronic teaching system in Radiology.
A number of hospitals in Shanghai are piloting the development of an EHR solution based on a grid concept with a
service-oriented architecture (SOA). The first phase of the project targets the Diagnostic Imaging domain and allows
seamless sharing of images and reports across the multiple hospitals. The EHR solution is fully aligned with the IHE
XDS-I integration profile and consists of the components of the XDS-I Registry, Repository, Source and Consumer
actors. By using SOA, the solution uses ebXML over secured HTTP for all transactions within the grid, while communication with the PACS and RIS uses DICOM and HL7 v3.x. The solution was installed in three hospitals and one data center in Shanghai and tested for performance of data publication, user query, and image retrieval. The results are extremely positive and demonstrate that an EHR solution based on SOA with a grid concept can scale effectively to serve a regional implementation.
A cross-continental Data Grid infrastructure has been developed at the Image Processing and Informatics (IPI) research
laboratory as a fault-tolerant image data backup and disaster recovery solution for Enterprise PACS. The Data Grid
stores multiple copies of the imaging studies as well as the metadata, such as patient and study information, in
geographically distributed computers and storage devices involving three different continents: America, Asia and
Europe. This effectively prevents loss of image data and accelerates data recovery in the case of disaster. However, the
lack of a centralized management system makes the administration of the current Data Grid difficult. Three major
challenges exist in current Data Grid management: 1. There is no single user interface to access and administer each
geographically separate component; 2. No graphical user interface is available, resulting in command-line-based
administration; 3. No single sign-on access to the Data Grid; administrators have to log into every Grid component with
different corresponding user names/passwords.
In this paper we are presenting a prototype of a unique web-based access interface for both Data Grid administrators and
users. The interface has been designed to be user-friendly; it provides necessary instruments to constantly monitor the
current status of the Data Grid components and their contents from any location, contributing to longer system uptime.
Due to the ubiquity of cell phones, SMS (Short Message Service) has become an ideal means to
wirelessly manage a Healthcare environment and in particular PACS (Picture Archival and
Communications System) data. SMS is a flexible and mobile method for real-time access and control of
Healthcare information systems such as HIS (Hospital Information System) or PACS. Unlike
conventional wireless access methods, SMS is not limited by the presence of a WiFi network or any other localized signal. It provides a simple, reliable, yet flexible method to communicate with an information system. In addition, SMS services are widely available at low cost from cellular phone service providers and allow for more mobility than other services such as wireless internet. This paper
aims to describe a use case of SMS as a means of remotely communicating with a PACS server. Remote
access to a PACS server and its Query-Retrieve services allows for a more convenient, flexible and
streamlined radiology workflow. Wireless access methods such as SMS will increase dedicated PACS workstation availability for more specialized DICOM (Digital Imaging and Communications in Medicine) workflow management. This implementation will address potential security, performance and cost issues of applying SMS as part of a healthcare information management system. This is in an effort
to design a wireless communication system with optimal mobility and flexibility at minimum material and time costs.
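A hedged sketch of the Query side of such a use case follows: an SMS text such as "FIND PATIENT 12345" is parsed and translated into a DICOM C-FIND using pynetdicom; the PACS host, port, AE titles, and the SMS gateway that would deliver the reply are all assumptions.

```python
# Translate a simple SMS command into a DICOM C-FIND query (sketch; the SMS
# gateway, PACS address, and AE titles below are placeholders).
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

PACS_HOST, PACS_PORT = "pacs.example.org", 104   # hypothetical PACS endpoint

def handle_sms(text: str):
    """Expects messages of the form 'FIND PATIENT <PatientID>'."""
    _, _, patient_id = text.strip().split(maxsplit=2)

    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = patient_id
    query.StudyDescription = ""          # ask the PACS to return this field

    ae = AE(ae_title="SMS_GATEWAY")
    ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)
    assoc = ae.associate(PACS_HOST, PACS_PORT, ae_title="PACS_SCP")
    results = []
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
            query, PatientRootQueryRetrieveInformationModelFind
        ):
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                results.append(identifier.StudyDescription)
        assoc.release()
    return results  # would be formatted into a reply SMS by the gateway

print(handle_sms("FIND PATIENT 12345"))
```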
Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become
an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and
quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial
protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and
distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core.
As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a
server robust enough to administer and quickly distribute information to participating radiologists/clinicians worldwide.
The Data Grid can satisfy the aforementioned requirements of imaging based clinical trials. In this paper, we present a
Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image
Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in
storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of
the Data Grid are presented.
During the last 4 years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study
Management Tool (SMT) application that allows Radiologists, Film librarians and PACS-related (Picture Archiving and
Communication System) users to dynamically and remotely perform Query/Retrieve operations in a PACS network.
The users utilizing a regular PDA (Personal Digital Assistant) can remotely query a PACS archive to distribute any
study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven to be convenient for managing the study workflow [1, 2], has been extended to include a DICOM viewing capability on the PDA. With this new feature, users can take a quick look at DICOM images, providing mobility and convenience at the same time. In addition, we are extending this application to Metropolitan-Area Wireless
Broadband Networks. This feature requires Smart Phones that are capable of working as a PDA and have access to
Broadband Wireless Services. With the extended application to wireless broadband technology and the preview of
DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.
A new resolution enhancement technology using an independent sub-pixel driving method was developed for medical monochrome liquid crystal displays (LCDs). Each pixel of these monochrome LCDs, which employ color liquid crystal panels with the color filters removed, consists of three sub-pixels. In the new LCD system implemented with this technology, sub-pixel intensities were modulated according to detailed image information, and consequently the resolution was enhanced threefold. In addition, combined with adequate resolution improvement by image data processing, the horizontal and vertical resolution properties were balanced. Thus the new technology realized 9-megapixel (MP) ultra-high resolution from a 3MP LCD. Physical measurements and perceptual evaluations proved that the 9MP resolution achieved through our new technology is appropriate and efficient for depicting finer anatomical structures such as microcalcifications in mammography.
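A simplified numerical sketch of independent sub-pixel driving follows (illustrative only; the product's actual signal processing is certainly more sophisticated): a grayscale image rendered at three times the panel's horizontal pixel count is mapped one sample per sub-pixel, so each nominal pixel carries three independently driven values.

```python
# Independent sub-pixel driving, simplified: map a 3x-horizontal-resolution
# grayscale image onto the three sub-pixels of each monochrome LCD pixel.
import numpy as np

panel_rows, panel_cols = 1536, 2048            # illustrative 3MP-class panel
rng = np.random.default_rng(0)

# Source image rendered at 3x the horizontal pixel count (one sample per sub-pixel).
hi_res = rng.integers(0, 256, size=(panel_rows, panel_cols * 3), dtype=np.uint8)

# Conventional driving: one value per pixel (sub-pixels share the pixel average).
conventional = hi_res.reshape(panel_rows, panel_cols, 3).mean(axis=2)

# Sub-pixel driving: each of the three sub-pixels gets its own sample,
# tripling the number of independently addressable horizontal samples.
subpixel_drive = hi_res.reshape(panel_rows, panel_cols, 3)

print("addressable samples per row:",
      conventional.shape[1], "vs", subpixel_drive.shape[1] * subpixel_drive.shape[2])
```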
This paper addresses characterization of color-related properties of softcopy displays used in medical image
interpretation. Four liquid crystal displays (LCDs) were studied: three were color displays and one was
monochrome. Physical evaluation was conducted on all the displays. Luminance and chrominance response, white point,
color primaries and color gamut were evaluated. Results showed that when color displays were used to present grayscale
clinical images, their performance could be inferior to that of monochrome displays because of the addition of color filters. When color displays were used to present color clinical images, the rendition could be totally different on different color displays because of their different color gamuts. A calibration standard or guideline covering both color and grayscale calibration, which is currently absent, is necessary for color displays used in medical applications.
As clinical imaging and informatics systems continue to integrate the healthcare enterprise, the need to
prevent patient mis-identification and unauthorized access to clinical data becomes more apparent, especially under the Health Insurance Portability and Accountability Act (HIPAA) mandate. Last year, we
presented a system to track and verify patients and staff within a clinical environment. This year, we
further address the biometric verification component in order to determine which Biometric system is the
optimal solution for given applications in the complex clinical environment. We install two biometric
identification systems including fingerprint and facial recognition systems at an outpatient imaging facility,
Healthcare Consultation Center II (HCCII). We evaluated each solution and documented the advantages
and pitfalls of each biometric technology in this clinical environment.
We have developed an anonymization system for DICOM images. Patient consent is required to use DICOM images for research or education; even then, providing the images to other facilities is not safe because they contain a large amount of personal data. Our system is a server that provides an anonymization service for DICOM images to users within the facility. Its distinctive features are its input interface, a flexible anonymization policy, and automatic body-part identification. With the first feature, the anonymization service can be used from existing DICOM workstations. With the second, the policy that best fits the personal-data protection rules of each medical facility can be selected. With the third, the body parts included in an input image set can be identified even if the set lacks the body-part tag in the DICOM header. We first installed the system in a hospital in December 2005, and it is currently running at four other facilities. In this paper we describe the system and how it works.
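A minimal sketch of policy-driven DICOM tag anonymization, using the modern pydicom library for illustration. The tag list and replacement values are hypothetical examples of a facility policy; this is not the authors' system and omits its body-part identification feature.

```python
import pydicom

# Illustrative anonymization policy: tag keyword -> replacement value
# (None means the element is removed). The actual policy described in the
# text is configurable per facility; this list is only an example.
POLICY = {
    "PatientName": "ANONYMOUS",
    "PatientID": "ID0000",
    "PatientBirthDate": None,
    "PatientAddress": None,
    "InstitutionName": None,
}

def anonymize(path_in, path_out, policy=POLICY):
    ds = pydicom.dcmread(path_in)
    for keyword, replacement in policy.items():
        if keyword in ds:
            if replacement is None:
                delattr(ds, keyword)          # remove the element entirely
            else:
                setattr(ds, keyword, replacement)
    ds.remove_private_tags()                  # private tags often carry identifying data
    ds.save_as(path_out)

# anonymize("input.dcm", "output_anon.dcm")
```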
We compare the image quality characteristics of state-of-the-art mobile display systems based on different types of
liquid crystal and organic light-emitting materials with respect to luminance and color, viewing angle, resolution,
temporal response, and reflectance. The results for a reflective liquid crystal display suggest that the changes
in angular contrast and color shifts are more severe than for other LCDs, particularly for medical LCDs, where
no color or grayscale inversion is present within the entire hemisphere of viewing directions. A prototype light-emitting
device showed a wide viewing angle and large small-spot contrast. Display reflectance and resolution
were affected by the additional touch-screen coatings. The methodology developed provides a framework for the
comparison of alternative technologies for display of diagnostic images in small portable devices.
This paper presents preliminary data on the use of a color camera for Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3, using a sensor with co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, although it appears that an imaging geometry of 17.6 might provide more accurate results.
The color camera is used as an imaging colorimeter, with each camera pixel calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response were very close to those found with the CS-200; only the color coordinates of the display's white point were in error.
The Modulation Transfer Function (MTF) and the noise, in terms of the Noise Power Spectrum (NPS), of both LCDs were also evaluated.
The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency appear lower for the color LCD than for the monochrome LCD; these results contradict simulations regarding the MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display.
Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. The temporal noise appears to be significantly lower than the spatial noise.
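The spatial/temporal separation mentioned in the last paragraph can be illustrated with a short numpy sketch: subtracting two frames taken at identical exposure cancels the fixed spatial pattern, leaving twice the temporal noise variance. The synthetic data and function below are assumptions for illustration only, not the measurement pipeline used in the study.

```python
import numpy as np

def noise_split(frame_a, frame_b):
    """Separate spatial (fixed-pattern) and temporal noise from two camera
    frames of the same display pattern taken at identical exposure."""
    total_var = np.var(frame_a.astype(np.float64))
    # The frame difference cancels the fixed spatial pattern, leaving
    # twice the temporal noise variance.
    temporal_var = np.var(frame_a.astype(np.float64) - frame_b.astype(np.float64)) / 2.0
    spatial_var = max(total_var - temporal_var, 0.0)
    return np.sqrt(spatial_var), np.sqrt(temporal_var)

# Synthetic example: a fixed pattern plus independent temporal noise per frame.
rng = np.random.default_rng(0)
pattern = rng.normal(100.0, 5.0, size=(256, 256))          # spatial (fixed) component
frame1 = pattern + rng.normal(0.0, 1.0, size=pattern.shape)
frame2 = pattern + rng.normal(0.0, 1.0, size=pattern.shape)
print(noise_split(frame1, frame2))   # approximately (5.0, 1.0)
```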
We report on the characterization of two novel probes for measuring display color without contamination from
other screen areas or off-normal emissions. The probes are characterized with a scanning slit method and a
moving laser and LED arrangement. The tails of the scans indicate the spread in signal due to light from
areas outside the intended measuring spot. A dual-laser setup suggests that color purity of the reading can be
maintained up to a few tens of millimeters outside of the measurement spot, and a dual-LED setup shows the
effects of secondary light emissions in the readings. The first design, color probe A, is then used to quantify
display color, maximum color difference, luminance uniformity, graylevel tracking, and angular color shifts of
medical liquid crystal displays and mobile displays.
Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing
volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of
their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the
Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this
reason, differences in measurements obtained by automated tools from various vendors may have significant
implications for management, yet the degree of variability in these measurements is not well understood. The goal of this
study is to quantify the differences in nodule maximum diameter and volume among different automated analysis
software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of
nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we
obtained size and volumetric measurements on these nodules and compared these data using descriptive statistics as well as ANOVA and t-test analyses. Results showed significant differences in nodule maximum-diameter measurements among
the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These
data suggest that when using automated commercial software, volume measurements may be a more reliable marker of
tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be
relatively reproducible among various commercial workstations, in contrast to the variability documented when
performing human mark-ups, as seen in the LIDC (Lung Image Database Consortium) study.
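A minimal sketch of the kind of ANOVA and paired t-test comparison described above, using scipy.stats. The vendor labels and measurement values are made-up placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical maximum-diameter measurements (mm) of the same nodules
# reported by three vendors' automated tools.
vendor_a = np.array([4.1, 6.3, 8.2, 10.5, 14.9])
vendor_b = np.array([4.6, 6.8, 8.9, 11.4, 15.6])
vendor_c = np.array([3.9, 6.1, 8.0, 10.2, 14.5])

# One-way ANOVA across the three tools.
f_stat, p_value = stats.f_oneway(vendor_a, vendor_b, vendor_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.3f}")

# Paired t-test between two tools on the same nodules.
t_stat, p_pair = stats.ttest_rel(vendor_a, vendor_b)
print(f"paired t-test A vs B: t={t_stat:.2f}, p={p_pair:.3f}")
```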
A CAD method for calculating the wall thickness of carotid vessels addresses the time-consuming nature of B-mode ultrasound measurement as well as the inter- and intra-observer variability in its results. Upon selection of a region of interest and filtering of a series of ultrasound carotid images, the CAD measures the geometry of the lumen and plaque surfaces using least-squares fitting of active contours during systole and diastole. To evaluate the approach, ultrasound image sequences from 30 patients were submitted to the procedure. The images were stored in an international data grid repository spanning three sites: the Image Processing and Informatics (IPI) Laboratory at the University of Southern California, USA; InCor (the Heart Institute) in Sao Paulo, Brazil; and Hong Kong Polytechnic University, Hong Kong. The three sites are connected by high-speed international networks, including Internet2 and the Brazilian National Research and Education Network (RNP2). The Data Grid was used to store, back up, and share the ultrasound images and analysis results, providing a large-scale virtual data system. To study the variability between the automatic and manual definition of artery boundaries, the pooled mean and standard deviation of the differences between lumen-diameter measurements were computed, along with the coefficient of variation and the correlation. For the studied population, the differences between the manual and automatic measurements of lumen diameter (LD) and intima-media thickness (IMT) were 0.12 ± 0.10 and 0.09 ± 0.06, respectively.
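The agreement statistics mentioned above (pooled mean and standard deviation of the differences, coefficient of variation, and correlation) can be computed along the following lines. The measurement values and the particular coefficient-of-variation convention are illustrative assumptions, not the study's data or exact definitions.

```python
import numpy as np

def agreement_stats(manual, automatic):
    """Mean and SD of the manual-minus-automatic differences, a coefficient
    of variation, and the Pearson correlation (illustrative definitions)."""
    manual = np.asarray(manual, dtype=float)
    automatic = np.asarray(automatic, dtype=float)
    diff = manual - automatic
    mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
    cv = sd_diff / np.mean(np.concatenate([manual, automatic])) * 100.0
    r = np.corrcoef(manual, automatic)[0, 1]
    return mean_diff, sd_diff, cv, r

# Placeholder lumen-diameter measurements (mm) for a handful of patients.
manual_ld    = [6.1, 5.8, 6.4, 7.0, 6.6]
automatic_ld = [6.0, 5.7, 6.3, 6.8, 6.5]
print(agreement_stats(manual_ld, automatic_ld))
```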
Last year we presented work on an imaging informatics approach towards developing quantitative knowledge and tools
based on standardized DICOM-RT objects for Image-Guided Radiation Therapy. In this paper, we have extended this
methodology to perform knowledge-based medical imaging informatics research on specific clinical scenarios where
brain tumor patients are treated with Proton Beam Therapy (PT). PT utilizes energized charged particles, protons, to deliver dose to the target region. Protons are energized to specific velocities, which determine where they deposit their maximum energy within the body to destroy cancerous cells. Treatment planning is similar in workflow to traditional radiation therapy methods such as Intensity-Modulated Radiation Therapy (IMRT), which utilizes a priori knowledge to drive the treatment plan in an inverse manner. In March 2006, two new RT objects were drafted in DICOM-RT Supplement 102, specifically for ion therapy, which includes proton therapy. We researched the standardization of DICOM-RT-ION objects and the development of a knowledge base as well as decision-support tools that can be added as features to the ePR DICOM-RT system. We have developed a methodology for knowledge-based medical imaging informatics research on specific clinical scenarios; this methodology can be extended to proton therapy and to the development of future clinical decision-making scenarios that utilize "inverse treatment planning" during the course of the patient's treatment. In this paper, we present the initial steps toward extending this methodology to PT and lay the
foundation for development of future decision-support tools tailored to cancer patients treated with PT. By integrating
decision-support knowledge and tools designed to assist in the decision-making process, a new and improved
"knowledge-enhanced treatment planning" approach can be realized.
Last year in this conference, we presented a theoretical analysis of how ambient lighting in dark reading rooms
could be moderately increased without compromising the interpretation of images displayed on LCDs. Based on
that analysis, in this paper we present results of two psychophysical experiments which were designed to verify
those theoretical predictions. The first experiment was designed to test how an increase in ambient lighting affects
the detection of subtle objects at different luminance levels, particularly at lower luminance levels. Towards that
end, images of targets consisting of low-contrast objects were shown to seven observers, first under a dark room
illumination condition of 1 lux and then under a higher room illumination condition of 50 lux. The targets had three
base luminance values of 1, 12, and 35 cd/m² and were embedded in a uniform background. The background was set to 12 cd/m², which fixed L_adp, the visual adaptation luminance when looking at the display, at 12 cd/m². This value also matched the luminance of about 12 cd/m² reflected off the wall surrounding the LCD under the higher ambient lighting condition. The task of the observers was to detect and classify the displayed objects under the two room lighting conditions. The results indicated that the detection rate in the dark area (base luminance of 1 cd/m²) increased by 15% when the ambient illumination was raised from 1 to 50 lux. The increase was not conclusive for targets embedded in higher-luminance regions, but there was no evidence to the contrary either. The second experiment was designed to investigate the adaptation luminance of the eye when viewing typical mammograms. It was found that, for a typical display luminance calibration, this value may lie between 12 and 20 cd/m². Findings from the two experiments justify a controlled increase of ambient lighting to improve ergonomic viewing conditions in darkly lit reading rooms while potentially improving diagnostic performance.
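For a diffusely reflecting wall, the reflected luminance is approximately L = E·ρ/π. With an assumed wall reflectance of about 0.75 (an assumption for illustration, not a value reported in the study), a 50 lux room gives roughly the 12 cd/m² mentioned above, as the short sketch shows.

```python
import math

def reflected_luminance(illuminance_lux, reflectance):
    """Approximate luminance (cd/m^2) of a diffusely reflecting surface:
    L = E * rho / pi. The reflectance used below is an assumption."""
    return illuminance_lux * reflectance / math.pi

# 50 lux with an assumed reflectance of 0.75 gives about 12 cd/m^2.
print(round(reflected_luminance(50, 0.75), 1))  # ~11.9
```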
A computer-aided-diagnosis (CAD) method has previously been developed based on features extracted from phalangeal regions of interest (ROIs) in a digital hand atlas, and it can accurately assess the bone age of children aged 7 to 18. However, to assess the bone age of younger children, the carpal bones must be included. In this paper, we developed and implemented a knowledge-based method for fully automatic carpal bone segmentation and morphological feature analysis. Fuzzy classification was then used to assess bone age based on the selected features. Last year, we presented the carpal bone segmentation algorithm; this year, we present the procedures that follow segmentation, including carpal bone identification, feature analysis, and a fuzzy system for bone age assessment. The method has been applied successfully to all cases in which the carpal bones do not overlap. CAD results for about 205 cases from the digital hand atlas were evaluated against the subjects' chronological ages as well as the readings of two radiologists. The carpal ROI was found to provide reliable information for determining bone age in young children, from newborn to 7 years old.
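A minimal sketch of fuzzy classification in the spirit described above: triangular membership functions over a single carpal feature are combined by a weighted average to produce a bone age. The feature, rule breakpoints, and age values are hypothetical; the actual system uses several morphological features.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b (illustrative)."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_bone_age(feature, rules):
    """Weighted-average defuzzification over a single carpal feature."""
    weights = np.array([triangular(feature, a, b, c) for (a, b, c), _ in rules])
    ages = np.array([age for _, age in rules])
    return float((weights * ages).sum() / weights.sum())

# Hypothetical rules: (low, peak, high) of a normalized carpal feature -> age (years)
rules = [((0.00, 0.10, 0.25), 1.0),
         ((0.10, 0.25, 0.45), 3.0),
         ((0.25, 0.45, 0.70), 5.0),
         ((0.45, 0.70, 1.00), 7.0)]
print(round(fuzzy_bone_age(0.33, rules), 2))
```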
Bone age assessment (BAA) in pediatric radiology is a task based on a detailed analysis of the patient's left-hand X-ray. The current standard used in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development, which are then converted to a bone age value using knowledge from the digital hand atlas (DHA). This also allows BAA results that resemble the current clinical approach. All of the developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical workflow analysis. For this purpose a quality assurance protocol (QAP) has been developed; its implementation helped make the CAD more robust and identify images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with the QAP and the comparison of the CAD results with the chronological ages of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.
When the Indianapolis Veterans Affairs Medical Center changed Picture Archiving and Communication Systems (PACS) vendors, we chose "on demand" image migration as the more cost-effective solution. The legacy PACS stores its image data on optical disks in multi-platter jukeboxes; the legacy image data are estimated at about 5 terabytes, containing studies from roughly 1997 to 2003. Both the legacy and the new PACS support manual DICOM query/retrieve. We implemented workflow rules to determine when to fetch the relevant priors from the legacy PACS. When a patient presents for a new radiology study, the following rules initiate the manual DICOM query/retrieve: for general radiography, we retrieve the two most recent prior examinations; for the MR and CT modalities, we retrieve the clinically relevant prior examinations. We monitored the number of studies retrieved each week over approximately a 12-month period. For our facility, which performs about 70,000 radiology examinations per year, we observed an essentially constant retrieval rate of slightly less than 50 studies per week. One explanation for what may be considered an anomalous result may be that we are a tertiary care facility and a teaching hospital.
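A sketch of how such prior-fetching rules might be encoded. The function and data structures are hypothetical stand-ins for logic that would, in practice, drive DICOM query/retrieve against the legacy PACS; the modality codes and counts mirror the rules stated above.

```python
# Modality -> number of most recent priors to fetch for general radiography.
RELEVANT_PRIORS = {
    "CR": 2,   # computed radiography
    "DX": 2,   # digital radiography
}

def select_priors(new_study, legacy_studies):
    """Return the legacy studies to retrieve when a new study arrives.

    new_study      : dict with at least 'modality' and 'body_part'
    legacy_studies : list of dicts with 'modality', 'body_part', 'date',
                     ordered newest first
    """
    modality = new_study["modality"]
    if modality in RELEVANT_PRIORS:
        same = [s for s in legacy_studies if s["modality"] == modality]
        return same[:RELEVANT_PRIORS[modality]]
    if modality in ("CT", "MR"):
        # Clinically relevant priors: same modality and body part (assumed rule).
        return [s for s in legacy_studies
                if s["modality"] == modality
                and s["body_part"] == new_study["body_part"]]
    return []
```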
3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic
imaging. These tools not only enhance our ability to diagnose diseases, but can also assist in therapeutic planning. The technology used to create them can also render surface reconstructions, which may have the undesired potential of providing sufficient detail to allow recognition of facial features, and consequently patient identity, leading
to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act)
legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can
indeed be used to reliably or confidently identify that specific patient. Surface-reconstructed images of the study participants were created and used as candidates for matching with digital photographs of the participants. Data analysis was
performed to determine the ability of observers to successfully match 3D surface reconstructed images of the face with
facial photographs. The amount of time required to perform the match was recorded as well. We also plan to
investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns
over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain are yet to be tested in a prospective
study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient
privacy/security and if so, whether minimal interventions from a clinical perspective can substantially reduce this
possibility.
Multislice CT scanners have advanced remarkably in the speed at which chest CT images can be acquired for mass screening. Mass screening based on multislice CT images requires a considerable number of images to be read, and it is this time-consuming step that makes the use of helical CT for mass screening impractical at present. To overcome this problem, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images and a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification. We have also provided diagnostic assistance by building the lung cancer screening algorithm into a mobile helical CT scanner for lung cancer mass screening in regions without a hospital. In addition, we have developed an electronic medical recording system and a prototype Internet system for community health across two or more regions, using a Virtual Private Network router together with biometric fingerprint and face authentication systems to protect the medical information. Based on these diagnostic assistance methods, we have now
developed a new computer-aided workstation and database that can display suspected lesions three-dimensionally in a
short time. This paper describes basic studies that have been conducted to evaluate this new system.
Computer Aided Detection/Diagnosis (CAD) can greatly assist in the clinical decision making process, and therefore,
has drawn tremendous research efforts. However, integrating independent CAD workstation results with the clinical
diagnostic workflow still remains challenging. We have presented a CAD-PACS integration toolkit that complies with
the DICOM standard and IHE profiles. One major issue in CAD-PACS integration is the security of the images used in
CAD post-processing and the corresponding CAD result images. In this paper, we present a method for assuring the
integrity of both DICOM images used in CAD post-processing and the CAD image results that are in BMP or JPEG
format. The method is evaluated in a PACS simulator that simulates clinical PACS workflow. It can also be applied to
multiple CAD applications that are integrated with the PACS simulator. The successful development and evaluation of
this method will provide a useful approach for assuring image integrity of the CAD-PACS integration in clinical
diagnosis.
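One simple way to check that image content is unchanged, sketched below, is to compare cryptographic digests of the pixel data before and after CAD post-processing. This illustration uses pydicom and SHA-256 and is an assumption for demonstration, not necessarily the integrity mechanism presented in the paper.

```python
import hashlib
import pydicom

def pixel_digest(dicom_path):
    """SHA-256 digest of a DICOM image's pixel data; matching digests
    indicate the image content was not altered in transit or processing."""
    ds = pydicom.dcmread(dicom_path)
    return hashlib.sha256(ds.PixelData).hexdigest()

def verify(original_path, received_path):
    return pixel_digest(original_path) == pixel_digest(received_path)

# verify("ct_original.dcm", "ct_after_transfer.dcm")
```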
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method for evaluating the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images as benign or malignant, and its performance was evaluated with receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress the cases at different compression ratios, from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the resulting ROC curves. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.
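To make the AUC-versus-compression comparison concrete, the sketch below computes the ROC AUC with scikit-learn for CAD scores at several compression ratios. The labels and scores are synthetic placeholders that merely mimic a gradual degradation; they are not results from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical setup: malignancy labels for 77 nodules and CAD scores whose
# quality degrades as the compression ratio grows.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=77)

def cad_scores(labels, noise_level):
    """Stand-in for the CAD classifier output at a given compression level."""
    return labels + rng.normal(0.0, noise_level, size=labels.shape)

for ratio, noise in [("lossless", 0.6), ("10:1", 0.7), ("20:1", 0.8), ("40:1", 0.9)]:
    auc = roc_auc_score(labels, cad_scores(labels, noise))
    print(f"compression {ratio}: AUC = {auc:.3f}")
```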
The most commonly used method for bone age assessment in clinical practice is the book atlas matching method
developed by Greulich and Pyle in the 1950s. Due to changes in both population diversity and nutrition in the United
States, this atlas may no longer be a good reference. An updated data set becomes crucial to improve the bone age
assessment process. Therefore, a digital hand atlas was built with 1,100 hand images of children, along with patient information and radiologists' readings, of normal Caucasian (CAU), African American (BLK), Hispanic (HIS), and Asian (ASI) males (M) and females (F) with ages ranging from 0 to 18 years. These data were collected from Childrens'
Hospital Los Angeles. A computer-aided-diagnosis (CAD) method has been developed based on features extracted from
phalangeal regions of interest (ROIs) and carpal bone ROIs from this digital hand atlas. Using the data collected along
with the Greulich and Pyle atlas-based readings and CAD results, this paper addresses the question: "Do different ethnic groups and genders have different bone growth patterns?" To help with the data analysis, a novel web-based visualization tool was developed to demonstrate bone-growth diversity among different gender and ethnic groups using data collected from the digital atlas. The application effectively demonstrates a discrepancy in bone growth patterns among
different populations based on race and gender. It also has the capability of helping a radiologist determine the
normality of skeletal development of a particular patient by visualizing his or her chronological age, radiologist reading, and CAD-assessed bone age relative to the accuracy of the Greulich and Pyle method.