This PDF file contains the front matter associated with SPIE Proceedings Volume 6919, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
This paper describes how the integration between one of the RIS systems (Adapt) in VGR and the infobroker in the central archive is implemented. The project was presented in 2006 under the title "Building an IT Healthcare Enterprise by taking the standards to the limits and sometimes beyond that." The Adapt RIS is used by Sahlgrenska University Hospital (SU) in Gothenburg and serves eight radiology departments.
The implementation is based on HL7 version 3, and the message exchange uses Web Services/SOAP.
The base of the RIS system was developed in the early 1990s by a company that no longer exists. SU has always been able to modify the system by changing the source code, and we have been responsible for its development since the late 1990s.
We use IBM Informix Dynamic Server running on a Solaris-based cluster with additional software from Veritas/Symantec.
The communication is planned to be two-way. Our RIS system transfers order promises, various status updates during the workflow, and finally reports with various status levels. Our system will be able to receive requests and reports from the broker. The broker in turn receives these messages from other hospitals in VGR (Vastra Gotalands Regionen).
We use Axis2 to generate skeleton Java code from the WSDL and XSD files that define the Web Services. Axis2 is open-source software developed as part of the Apache project. Eclipse, the Java development environment we use, is also open source.
Apache Tomcat is the application server that we use to receive messages from the infobroker.
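As a rough illustration of the kind of message exchange described above, the following sketch posts an HL7 v3 style status-update payload to a SOAP endpoint over HTTP. The endpoint URL, SOAPAction, and XML body are illustrative placeholders, not the actual VGR infobroker interface or the generated Axis2 code.

# Minimal sketch of sending an HL7 v3 style message over SOAP with HTTP POST.
# The endpoint, SOAPAction, and payload below are illustrative placeholders.
import requests

ENDPOINT = "https://infobroker.example.org/hl7v3"   # hypothetical URL
SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- HL7 v3 interaction payload would go here -->
    <statusUpdate xmlns="urn:example:ris">
      <orderId>12345</orderId>
      <status>REPORT_FINAL</status>
    </statusUpdate>
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:ris:statusUpdate"},  # hypothetical action
    timeout=30,
)
print(response.status_code)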
A variety of software exists to interpret files or directories compliant to the Digital Imaging and Communications in
Medicine (DICOM) standard and display them as individual images or volume rendered objects. Some of them offer
further processing and analysis features. The surveys published so far are partly outdated, and they provide neither a detailed description of the software functions nor a comprehensive comparison. This paper aims at
evaluation and comparison of freely available, non-diagnostic DICOM software with respect to the following aspects:
(i) data import; (ii) data export; (iii) header viewing; (iv) 2D image viewing; (v) 3D volume viewing; (vi) support; (vii)
portability; (viii) workability; and (ix) usability. In total, 21 tools were included: 3D Slicer, AMIDE, BioImage Suite,
DicomWorks, EViewBox, ezDICOM, FPImage, ImageJ, JiveX, Julius, MedImaView, MedINRIA, MicroView,
MIPAV, MRIcron, Osiris, PMSDView, Syngo FastView, TomoVision, UniViewer, and XMedCon. Our results in table
form can ease the selection of appropriate DICOM software tools. In particular, we discuss use cases for the
inexperienced user, data conversion, and volume rendering, and suggest Syngo FastView or PMSDView, DicomWorks
or XMedCon, and ImageJ or UniViewer, respectively.
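For readers unfamiliar with the functionality surveyed here (header viewing, data import and export), the snippet below shows how a DICOM header and its pixel data can be inspected programmatically with the open-source pydicom library. It is a generic illustration, not part of any of the evaluated tools.

# Generic illustration of DICOM header viewing and pixel access with pydicom
# (not one of the surveyed tools). Replace the path with a real DICOM file.
import pydicom

ds = pydicom.dcmread("example.dcm")          # parse the DICOM file
print(ds.Modality, ds.Rows, ds.Columns)      # a few standard header attributes
print(ds.get("PatientID", "<anonymous>"))    # safe access to an optional tag

pixels = ds.pixel_array                      # decode pixel data as a NumPy array
print(pixels.shape, pixels.dtype, pixels.min(), pixels.max())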
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory
enables medical images to be shared securely between multiple imaging centers. Current applications include an
imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core
performs image analysis, often using computer-aided diagnosis (CAD) tools that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are developed in the radiology field, the generated DICOM Structured Reports (SR), which hold key radiological findings and measurements that are not part of the DICOM image, need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance of, and the method involved in, adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is an MI2 Data Grid repository from which users can send and receive DICOM-SR
objects based on the imaging-based clinical trial application. The services required to extract and categorize information
from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing
MI2 Data Grid will be shown.
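The "extract and categorize" step can be pictured as a walk over the SR content tree. The sketch below, which assumes pydicom and a generic measurement SR rather than the actual Data Grid service code, prints the concept names and values of the content items.

# Sketch: walking the content tree of a DICOM Structured Report with pydicom.
# This is a generic illustration, not the Data Grid's extraction service.
import pydicom

def walk_sr(items, depth=0):
    """Recursively print concept name and value of SR content items."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        value = ""
        if item.ValueType == "TEXT":
            value = item.get("TextValue", "")
        elif item.ValueType == "NUM" and "MeasuredValueSequence" in item:
            value = item.MeasuredValueSequence[0].NumericValue
        print("  " * depth + f"{item.ValueType}: {name} {value}")
        if "ContentSequence" in item:        # descend into nested content items
            walk_sr(item.ContentSequence, depth + 1)

sr = pydicom.dcmread("report_sr.dcm")        # hypothetical SR file
walk_sr(sr.ContentSequence)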
During the last 2 years we have been working on developing a DICOM-RT (Radiation Therapy) ePR (Electronic Patient
Record) with decision support that assists physicists and radiation oncologists during their decision-making process.
This ePR allows offline treatment dose calculations and plan evaluation, while at the same time it compares and
quantifies treatment planning algorithms using DICOM-RT objects. The ePR framework permits the addition of
visualization, processing, and analysis tools, which combined with the core functionality of reporting, importing and
exporting of medical studies, creates a powerful application that can improve efficiency when planning cancer
treatments.
Usually a Radiation Oncology department will have disparate and complex data generated by the RT modalities as well
as data scattered in RT Information/Management systems, Record & Verify systems, and Treatment Planning Systems
(TPS) which can compromise the efficiency of the clinical workflow since the data crucial for a clinical decision may be
time-consuming to retrieve, temporarily missing, or even lost. To address these shortcomings, the ACR-NEMA
Standards Committee extended its DICOM (Digital Imaging & Communications in Medicine) standard from Radiology
to RT by ratifying seven DICOM-RT objects starting in 1997 [1,2]. However, these objects are not yet broadly used by the RT community in daily clinical operations. In the past, the research focus of an RT department has primarily been developing new protocols and devices to improve the treatment process and outcomes of cancer patients, with minimal effort dedicated to the integration of imaging and information systems. Our attempt is to show a proof of concept that a DICOM-RT ePR system can be developed as a foundation for medical imaging informatics research in developing decision-support tools and a knowledge base for future data mining applications.
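As a concrete example of working with one of the ratified DICOM-RT objects, the following sketch reads an RT Dose object with pydicom and converts the stored values to physical dose. It is a generic illustration assuming a standard RTDOSE file, not the ePR system's own code.

# Sketch: reading a DICOM RT Dose object and scaling it to physical dose (Gy).
# Generic illustration with pydicom; not the ePR system implementation.
import pydicom

ds = pydicom.dcmread("rtdose.dcm")                        # hypothetical file name
assert ds.Modality == "RTDOSE"

dose_grid = ds.pixel_array * float(ds.DoseGridScaling)    # stored values -> Gy
print("Dose grid shape:", dose_grid.shape)
print("Maximum dose (Gy):", dose_grid.max())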
The asymmetric distribution of PACS equipment and service providers across countries typically leads to the need to hire third-party service professionals outside the institutions where the examinations were performed. In this paper we present a
brokerage mechanism that puts customers and remote providers together in a seamless way.
The proposed solution, asserted with a case study for the Portuguese national health system, addresses the problems that
now impair the optimal provision of those services, enabling a more agile relationship between buyers and sellers,
optimizing administrative work and complying with clinical and legal requirements under discussion in the European
Union for the free movement of patients and professional health workers.
In this document, we give a detailed description of the brokerage process and the technical operation of the broker, and we evaluate the main benefits for the participants from a technical and economic point of view.
Finally, in the discussion section, we assess the creation of a spot market for imaging studies and discuss its integration with other similar markets.
Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics.
In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database.
This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute
(NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to,
cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed
architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving
algorithms, and uses open communication standards and open source software. The system tries to bridge the gap
between a user's semantic understanding and image feature representation, by incorporating the user's knowledge.
Given a user-specified query region, the system returns the most similar regions from the database, with respect to
attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on ground-truth test data demonstrates its feasibility as a research tool to aid the study of the visual characteristics of cervical neoplasia.
Intensity overlap often occurs in medical images, making it difficult to identify different anatomical structures using
intensity alone. Research studies have shown that texture is an important component in quantifying the visual
appearance of anatomical structures, and is therefore valuable in the analysis, interpretation, and retrieval of lung
nodules.
The goal of our research study is to present a comparison between the different texture models: Gabor filters, Markov
Random Field (MRF), and global & local co-occurrence. For comparison purposes we utilized Manhattan, Euclidean,
and Chebyshev distances for one-dimensional feature vectors (global co-occurrence) while for two-dimensional feature
comparison (local co-occurrence, Gabor filters, and MRF) we utilized the Chi-square and Jeffrey divergence similarity measures. Local co-occurrence has many design variables that can considerably change the quality of its results; a thorough examination of these variables is presented.
All of the discussed texture models are presented in the context of our previous Content-Based Image Retrieval (CBIR) system, BRISC [1], which utilizes the Lung Image Database Consortium (LIDC) database. We have found that Gabor and
MRF texture descriptors produce the best retrieval results regardless of the nodule size, number of retrieved items or
similarity metric, with an average precision of 88%. Global co-occurrence performed the worst, at 44% precision, yet when co-occurrence was computed locally (local co-occurrence) the precision improved to 64%. A combination of all the features performed best, with 91% precision.
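For reference, the distance and similarity measures named above can be written in a few lines of NumPy. The implementations below use one common convention for Chi-square and Jeffrey divergence (variants differ by constant factors) and are illustrative rather than the exact formulations used in BRISC.

# Illustrative NumPy implementations of the distance/similarity measures named
# above (one common convention; BRISC's exact formulations may differ).
import numpy as np

def manhattan(a, b):
    return np.abs(a - b).sum()

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def chebyshev(a, b):
    return np.abs(a - b).max()

def chi_square(p, q, eps=1e-12):
    # For histogram-like feature vectors.
    return 0.5 * (((p - q) ** 2) / (p + q + eps)).sum()

def jeffrey_divergence(p, q, eps=1e-12):
    m = 0.5 * (p + q)
    return (p * np.log((p + eps) / (m + eps)) +
            q * np.log((q + eps) / (m + eps))).sum()

a = np.array([0.2, 0.5, 0.3])
b = np.array([0.1, 0.6, 0.3])
print(manhattan(a, b), euclidean(a, b), chebyshev(a, b))
print(chi_square(a, b), jeffrey_divergence(a, b))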
Radiology Information Systems (RIS) contain a wealth of information that can be used for research, education, and
practice management. However, the sheer amount of information available makes querying specific data difficult and
time consuming. Previous work has shown that a clinical RIS database and its RIS text reports can be extracted,
duplicated and indexed for searches while complying with HIPAA and IRB requirements. This project's intent is to
provide a software tool, the RadSearch Toolkit, to allow intelligent indexing and parsing of RIS reports for easy yet
powerful searches. In addition, the project aims to seamlessly query and retrieve associated images from the Picture
Archiving and Communication System (PACS) in situations where an integrated RIS/PACS is in place - even
subselecting individual series, such as in an MRI study. RadSearch's application of simple text parsing techniques to
index text-based radiology reports will allow the search engine to quickly return relevant results. This powerful
combination will be useful in both private practice and academic settings; administrators can easily obtain complex
practice management information such as referral patterns; researchers can conduct retrospective studies with specific,
multiple criteria; teaching institutions can quickly and effectively create thorough teaching files.
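The core indexing idea can be illustrated with a toy inverted index: each report is tokenized, and every term maps to the set of report identifiers that contain it. This is a simplified sketch, not the RadSearch Toolkit implementation.

# Toy inverted index over free-text radiology reports (simplified sketch,
# not the RadSearch Toolkit itself).
import re
from collections import defaultdict

reports = {
    "rpt001": "No acute intracranial hemorrhage. Mild chronic microvascular change.",
    "rpt002": "Small right pleural effusion. No pneumothorax.",
}

index = defaultdict(set)
for rpt_id, text in reports.items():
    for term in re.findall(r"[a-z]+", text.lower()):
        index[term].add(rpt_id)

def search(*terms):
    """Return report IDs containing all query terms."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search("pleural", "effusion"))   # {'rpt002'}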
Effective use of new technologies to support healthcare initiatives is important and current research is moving towards
implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI:
Generic Infrastructures for Medical Informatics), which is concerned with the development of a secure IT infrastructure
to support very widespread medical research across the country, is underway. In the UK, there are some 109 breast
screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million
screening examinations per year. At the same time, there is a serious and ongoing national workforce issue in screening, which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid-based system has been proposed, which has the benefit of offering very large volumes of training cases that mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography
training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the
cases are utilized for training in a grid-based system which has been developed. This paper briefly reviews the
background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations,
and future plans of such a grid-based approach.
We compare five common classifier families in their ability to categorize six lung tissue patterns, healthy tissue and five pathologic patterns, in high-resolution computed tomography (HRCT) images of patients affected by interstitial lung diseases (ILD). The evaluated classifiers are Naive Bayes, k-Nearest Neighbor (k-NN), J48 decision trees, Multi-Layer
Perceptron (MLP) and Support Vector Machines (SVM). The dataset used contains 843 regions of interest (ROI)
of healthy and five pathologic lung tissue patterns identified by two radiologists at the University Hospitals of
Geneva. Correlation of the feature space composed of 39 texture attributes is studied. A grid search for optimal
parameters is carried out for each classifier family. Two complementary metrics are used to characterize the
classification performance. These are based on McNemar's statistical tests and on global accuracy. SVMs reached the best values for each metric and achieved a mean correct prediction rate of 87.9%, with high class-specific precision, on testing sets of 423 ROIs.
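The grid search mentioned above can be reproduced in outline with scikit-learn. The feature matrix, labels, and parameter ranges below are placeholders, not the ones used in the study.

# Outline of a grid search over SVM hyperparameters with cross-validation
# (scikit-learn). Feature matrix, labels, and parameter ranges are placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 39))            # 39 texture attributes per ROI (toy data)
y = rng.integers(0, 6, size=200)          # 6 tissue classes (toy labels)

param_grid = {"svc__C": [1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1]}
grid = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                    param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))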
Scientific data files have been increasing in size during the past decades. In the medical field, for instance,
magnetic resonance imaging and computer aided tomography can yield image volumes of several gigabytes.
While secondary storage (hard disks) increases in capacity and its cost per megabyte slumps over the years,
primary memory (RAM) can still be a bottleneck in the processing of huge amounts of data. This represents
a problem for image processing algorithms, which often need to keep in memory the original image and a copy
of it to store the results. Operating systems optimize memory usage with memory paging and enhanced I/O
operations. Although image processing algorithms usually work on neighbouring areas of a pixel, they follow
pre-determined paths through the image and might not benefit from the memory paging strategies offered by
the operating system, which are general purpose and unidimensional. Having the principles of locality and pre-determined
traversal paths in mind, we developed an algorithm that uses multi-threaded pre-fetching of data
to build a disk cache in memory. Using the concept of a window that slides over the data, we predict the next
block of memory to be read according to the path followed by the algorithm and asynchronously pre-fetch such
block before it is actually requested. While other out-of-core techniques reorganize the original file in order to
optimize reading, we work directly on the original file. We demonstrate our approach in different applications,
each with its own traversal strategy and sliding window structure.
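A minimal sketch of the sliding-window prefetching idea, assuming a single file read in fixed-size blocks along a known traversal order: a background thread reads the next block into a bounded in-memory cache while the consumer processes the current one. The real implementation handles multi-dimensional windows and multiple traversal strategies.

# Minimal sketch of multi-threaded block prefetching along a known traversal
# order. The real system uses multi-dimensional sliding windows; this version
# streams fixed-size blocks of a single file.
import queue
import threading

BLOCK_SIZE = 1 << 20          # 1 MiB blocks
PREFETCH_DEPTH = 4            # how many blocks to read ahead

def prefetcher(path, out_q):
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            out_q.put(block)              # blocks when the cache is full
            if not block:                 # EOF sentinel: empty bytes
                return

def process(path):
    q = queue.Queue(maxsize=PREFETCH_DEPTH)
    threading.Thread(target=prefetcher, args=(path, q), daemon=True).start()
    total = 0
    while True:
        block = q.get()                   # next block is usually already cached
        if not block:
            break
        total += len(block)               # stand-in for real per-block processing
    return total

if __name__ == "__main__":
    print(process("volume.raw"))          # hypothetical raw image volume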
The Digital Hand Atlas in Assessment of Skeletal Development is a large-scale Computer Aided Diagnosis (CAD)
project for automating the process of grading the skeletal development of children from 0 to 18 years of age. It includes a complete collection of 1,400 normal hand X-rays of children in this age range. Bone age assessment is used as an index of skeletal development for the detection of growth pathologies that can be related to endocrine disorders, malnutrition, and other diseases. Previous work at the Image Processing and Informatics Lab (IPILab) allowed the bone age CAD algorithm to accurately assess the bone age of children from 1 to 16 (male) or 14 (female) years of age using the phalanges as well as the carpal bones. At older ages (16 (male) or 14 (female) to 19 years of age), the phalanges and carpal bones are fully developed and do not provide well-defined features for accurate bone age assessment. Therefore, integration of the radius as a region of interest (ROI) is needed and will significantly improve the ability to accurately assess the bone age of older children. Preliminary studies show that an integrated bone age CAD that utilizes the phalanges, carpal bones, and radius forms a robust method for automatic bone age assessment throughout the entire age range (1 to 19 years of age).
Before a radiographic image is sent to a picture archiving and communications system (PACS), its projection
information needs to be correctly identified at the capture modalities to facilitate image archiving and retrieval. Currently, radiographic images are annotated manually by technologists, which is labor-intensive and not cost-effective. Moreover, manual annotation errors occur frequently during image acquisition. To address this issue, an automatic
image recognition method is developed. It first extracts a set of visual features from the most indicative region in a
radiograph for image recognition, and then uses a family of classifiers, each of which is trained for a specific projection
to determine the most appropriate projection for the image. The method has been tested on a large number of clinical
images and has shown excellent robustness and efficiency.
Last year, we presented methodology to perform knowledge-based medical imaging informatics research on specific
clinical scenarios where brain tumor patients are treated with Proton Beam Therapy (PT). In this presentation, we
demonstrate the design and implementation of quantification and visualization tools to develop the knowledge base for
therapy treatment planning based on DICOM-RT-ION objects. Proton Beam Therapy (PT) is a particular treatment that
utilizes energized charged particles, protons, to deliver dose to the target region. Similar to traditional Radiation Therapy
(RT), complex clinical imaging and informatics data are generated during the treatment process that guide the planning
and the success of the treatment. Therefore, an Electronic Patient Record (ePR) System has been developed to
standardize and centralize clinical imaging and informatics data and properly distribute data throughout the treatment
duration. To further improve the treatment planning process, we developed a set of decision-support tools to improve the QA process during treatment planning. One example is a tool to assist in the planning of stereotactic PT cases, where CT and MR images need to be analyzed simultaneously during treatment plan assessment. These tools are add-on features for the DICOM-standard ePR system for brain cancer patients and improve the clinical efficiency of PT treatment planning. Additional outcome data collected for PT cases are included in the overall DICOM-RT-ION database design
as knowledge to enhance outcomes analysis for future PT adopters.
Bone age assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology
to evaluate the stage of skeletal maturation based on a left hand and wrist radiograph. The most commonly
used standard, the Greulich and Pyle (G&P) Hand Atlas, was developed 50 years ago and based exclusively on a Caucasian population. Moreover, inter- and intra-observer discrepancies when using this method create a need for an objective and automatic BAA method. A digital hand atlas (DHA) has been collected with 1,400 hand images of normal children of Asian, African American, Caucasian, and Hispanic descent. Based on the DHA, a fully automatic, objective computer-aided diagnosis (CAD) method was developed and adapted to specific populations. To bring the DHA and the CAD method to the clinical environment as a useful tool for assisting radiologists in achieving higher accuracy in BAA, a web-based system with a direct connection to a clinical site was designed as a novel clinical implementation approach for online and real-time BAA. The core of the system, a CAD server, receives the image from the clinical site, processes it with the CAD method, and finally generates a report. A web service publishes the results, and radiologists at the clinical site can review them online within minutes. This prototype can easily be extended to multiple clinical sites and will provide the foundation for broader use of the CAD system for BAA.
With the aim of reducing the radiologists' subjectivity and the high degree of inter-observer variability, Content-based
Image Retrieval (CBIR) systems have been proposed to provide visual comparisons of a given lesion to a
collection of similar lesions of known pathology. In this paper, we present the effectiveness of shape features versus
texture features for calculating lung nodules' similarity in Computed Tomography (CT) studies. In our study, we used
eighty-five cases of thoracic CT data from the Lung Image Database Consortium (LIDC). To encode the shape
information, we used the eight shape features most commonly used by existing CAD systems for pulmonary nodule detection and diagnosis. For texture, we used co-occurrence, Gabor, and Markov features implemented in our previous CBIR work. Our preliminary results show low overall precision for shape compared with texture, indicating that shape features by themselves do not capture all the information needed to compare lung nodules.
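The abstract does not list the eight shape features. As a generic illustration, the sketch below computes a few shape descriptors commonly used for nodule characterization (area, perimeter, eccentricity, solidity, circularity) from a binary nodule mask with scikit-image; these are not necessarily the features used in the study.

# Generic illustration of shape descriptors from a binary nodule mask using
# scikit-image; common choices, not necessarily the eight features in the study.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)
rr, cc = np.ogrid[:64, :64]
mask[(rr - 32) ** 2 + (cc - 32) ** 2 < 15 ** 2] = 1   # toy circular "nodule"

props = regionprops(label(mask))[0]
circularity = 4 * np.pi * props.area / (props.perimeter ** 2)
print("area:", props.area)
print("perimeter:", round(props.perimeter, 1))
print("eccentricity:", round(props.eccentricity, 3))
print("solidity:", round(props.solidity, 3))
print("circularity:", round(circularity, 3))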
Knee-related injuries involving the meniscal or articular cartilage are common and require accurate diagnosis and
surgical intervention when appropriate. With proper techniques and experience, confidence in detection of meniscal
tears and articular cartilage abnormalities can be quite high. However, for radiologists without musculoskeletal training,
diagnosis of such abnormalities can be challenging. In this paper, the potential of improving diagnosis through
integration of computer-aided detection (CAD) algorithms for automatic detection of meniscal tears and articular
cartilage injuries of the knees is studied. An integrated approach in which the results of algorithms evaluating either
meniscal tears or articular cartilage injuries provide feedback to each other is believed to improve the diagnostic
accuracy of the individual CAD algorithms due to the known association between abnormalities in these distinct
anatomic structures. The correlation between meniscal tears and articular cartilage injuries is exploited to improve the
final diagnostic results of the individual algorithms. Preliminary results from the integrated application are encouraging
and more comprehensive tests are being planned.
Fusheng Wang, Florian Thiel, Daniel Furrer, Cristobal Vergara-Niedermayr, Chen Qin, Georg Hackenberg, Pierre-Emmanuel Bourgue, David Kaltschmidt, Mo Wang
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific
collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We developed SciPort, a Web-based platform for scientific data management and integration built on a central-server-based distributed architecture, through which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. For generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can
manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to
form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high
schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for
data modeling, data access, and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among research communities, and it has been successfully used in both biomedical research and clinical trials.
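A flavor of the XML document model can be given with a small sketch: a hierarchical document that holds structured fields plus references to images and raw data, built with Python's standard ElementTree. The element names are hypothetical, not SciPort's actual schema.

# Sketch of a hierarchical XML document holding structured data plus references
# to images and raw data. Element names are hypothetical, not SciPort's schema.
import xml.etree.ElementTree as ET

doc = ET.Element("document", attrib={"type": "ExperimentReport", "id": "exp-001"})

meta = ET.SubElement(doc, "metadata")
ET.SubElement(meta, "title").text = "Pilot imaging experiment"
ET.SubElement(meta, "site").text = "Institution A"

results = ET.SubElement(doc, "results")
m = ET.SubElement(results, "measurement", attrib={"name": "tumorVolume", "unit": "mm3"})
m.text = "1532.4"

refs = ET.SubElement(doc, "references")
ET.SubElement(refs, "image", attrib={"uri": "pacs://studies/1.2.3/series/4"})
ET.SubElement(refs, "rawData", attrib={"uri": "file://archive/exp-001/data.bin"})

print(ET.tostring(doc, encoding="unicode"))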
In an enterprise healthcare environment, multiple clinical departments, such as radiology, oncology, pathology, and cardiology, usually provide imaging-enabled healthcare services. The picture archiving and communication system (PACS) is therefore required not only to support radiology-based image display, workflow, and data flow management, but also to provide more specialized image processing and management tools for the other departments performing imaging-guided diagnosis and therapy. There is also an urgent demand to integrate the multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy for developing a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and the workstations or image viewing clients, and it provides DICOM image communication and WADO services to the end users. Images can be stored in multiple distributed archiving servers but managed in a centralized mode. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed grid-based PACS has been implemented at Shanghai Huadong Hospital and has been running smoothly for two years.
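The WADO service mentioned above can be exercised with a plain HTTP request. The sketch below issues a standard WADO-URI query; the server address and UIDs are placeholders, not the hospital's actual system.

# Sketch of a WADO-URI request for a single DICOM object over HTTP.
# Server address and UIDs are placeholders.
import requests

WADO_URL = "http://pacs.example-hospital.org/wado"   # hypothetical endpoint
params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.113619.2.55.3.1",
    "seriesUID": "1.2.840.113619.2.55.3.1.1",
    "objectUID": "1.2.840.113619.2.55.3.1.1.1",
    "contentType": "application/dicom",
}

resp = requests.get(WADO_URL, params=params, timeout=60)
resp.raise_for_status()
with open("retrieved.dcm", "wb") as f:
    f.write(resp.content)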
Policies and regulations in the current health care environment have impacted the manner in which patient data -
especially protected health information (PHI) - are handled in the clinical and research settings. Specifically, it is now
more challenging to obtain de-identified PHI from the clinic for use in research while still adhering to the requirements
dictated by the new policies and regulations. To meet this challenge, we have designed and implemented a novel web-based
interface that uses a workflow model to manage the communication of data (for example, biopsy results) between
the clinic and research environments without revealing PHI to the research team or associated research identifiers to the
clinical collaborators. At the heart of the scheme is a web application that coordinates message passing between
researchers and clinical collaborators by use of a protocol that protects confidentiality. We describe the design
requirements of the messaging/communication protocol, as well as implementation details of the web application and its
associated database. We conclude that this scheme provides a useful communication mechanism that facilitates clinical
research while maintaining confidentiality of patient data.
In recent years more and more computer-aided diagnosis (CAD) systems have come into routine use in hospitals. Image-based knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with the tools necessary for performance benchmarking of liver segmentation and volume estimation algorithms used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a
ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other
organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers
have been collected, and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of the liver, both kidneys, the spleen, the aorta, and the spinal canal. Abdominal organ segmentation algorithms
using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the
liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are
also reported. We aim to increase the collection to about 300 CT sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
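The abstract does not name its performance measures; a typical protocol for segmentation benchmarking compares each algorithm's mask against the ground-truth mask with an overlap score such as the Dice coefficient, sketched below as an assumed example.

# Dice overlap between a segmentation mask and the ground truth, a typical
# (assumed) performance measure for a benchmark like this.
import numpy as np

def dice(seg, gt):
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

seg = np.zeros((128, 128), dtype=np.uint8); seg[30:90, 30:90] = 1
gt = np.zeros((128, 128), dtype=np.uint8); gt[35:95, 35:95] = 1
print(round(dice(seg, gt), 3))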
A Data Grid for medical images has been developed at the Image Processing and Informatics Laboratory, USC to
provide distribution and fault-tolerant storage of medical imaging studies across Internet2 and the public domain. Although back-up policies and grid certificates guarantee the privacy and authenticity of grid access points, there is still no method to guarantee that sensitive DICOM images have not been altered or corrupted during transmission across a public domain.
This paper takes steps toward achieving full image transfer security within the Data Grid by utilizing DICOM image
authentication and a HIPAA-compliant auditing system. The 3-D lossless digital signature embedding procedure
involves a private 64-byte signature that is embedded into each original DICOM image volume; on the receiving end the signature can be extracted and verified following the DICOM transmission. This digital signature method was also developed at the IPILab. The HIPAA-Compliant Auditing System (H-CAS) is required to monitor embedding and verification events, and it allows monitoring of other grid activity as well. The H-CAS system federates the logs of transmission and authentication events at each grid access point and stores them in a HIPAA-compliant database.
The auditing toolkit is installed at the local grid-access-point and utilizes Syslog [1], a client-server standard for log
messaging over an IP network, to send messages to the H-CAS centralized database. By integrating digital image
signatures and centralized logging capabilities, DICOM image integrity within the Medical Imaging and Informatics Data Grid can be monitored and guaranteed without any loss of image quality.
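Because H-CAS collects events via syslog, the grid-access-point side can be as simple as a syslog client call. The sketch below uses Python's standard SysLogHandler; the collector address and message fields are placeholders, not the actual H-CAS message format.

# Sketch: emitting an audit event to a central syslog collector (the H-CAS
# database ingests syslog messages). Collector address and fields are placeholders.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("hcas.example.org", 514))
logger = logging.getLogger("grid-access-point")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info(
    "event=SIGNATURE_VERIFIED study_uid=%s node=%s result=%s",
    "1.2.840.113619.2.55.3.1", "gap-03", "OK",
)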
LCDs suffer from viewing-angle dependency, meaning that their characteristics change with viewing angle.
DICOM GSDF calibration and corresponding quality checks typically take place for on-axis viewing. However, users
will use the display over a broad range of viewing angles. Several studies have shown that when calibration is performed for on-axis viewing, the display no longer accurately complies with the DICOM GSDF standard when viewed off-axis.
This paper presents a novel solution: we adapt the DICOM GSDF calibration algorithm to have inherent robustness
against changes in viewing angle. Validation was performed on a 5-megapixel medical display. Results show that it is possible to double the range of viewing angles (18° instead of 9°) for which the display is within the 10% tolerance defined in the DICOM GSDF standard. This result is useful because users typically also view their displays at off-axis angles.
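For reference, the DICOM GSDF relates a just-noticeable-difference index j (1 <= j <= 1023) to luminance L(j) in cd/m^2 through a rational polynomial in ln j; the coefficient values a through m are tabulated in DICOM PS 3.14 and are omitted here.

\log_{10} L(j) = \frac{a + c\,\ln j + e\,(\ln j)^2 + g\,(\ln j)^3 + m\,(\ln j)^4}
                      {1 + b\,\ln j + d\,(\ln j)^2 + f\,(\ln j)^3 + h\,(\ln j)^4 + k\,(\ln j)^5}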
We explore the calibration of a high-luminance-range, dual-layer, liquid crystal display (LCD) prototype. The prototype operates by splitting a high-luminance-resolution image (more than 2^8 gray levels) into two 8-bit-depth components and sending these images to the two liquid crystal panels stacked over the backlight module. By interpolating a small set of luminance data gathered using a specialized luminance probe, the look-up table
of graylevel pairs of front/back layer LCD and the corresponding luminance values can be generated. To display
images, we fit an extended DICOM model, adjustable for graylevel and luminance depth, to the interpolated luminance table. A dynamic look-up table is generated in which, for each luminance, there are several candidate graylevel pairs. We show results for one possible calibration strategy involving the pair-selection criterion. By selecting the pair that maximizes back-layer smoothness, images with arbitrary graylevel and luminance depth can then be displayed with equal perceptual distance between luminance levels, while minimizing parallax
effects. Other possible strategies that minimize glare and noise are also described. The results can be used for
high luminance range display performance characterization and for the evaluation of its clinical significance.
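The pair-selection idea can be sketched as follows: given a table mapping (front, back) graylevel pairs to luminance, each target luminance has several candidate pairs, and the chosen pair is the one whose back-layer value stays closest to the neighboring pixel's back-layer value. This is a simplified illustration of the smoothness criterion under a toy luminance model, not the prototype's calibration code.

# Simplified sketch of back-layer-smoothness pair selection for a dual-layer
# LCD. The luminance model and quantization below are toy assumptions.
import numpy as np

levels = np.arange(256)
# Toy model: displayed luminance proportional to the product of both panels'
# transmittances (a real prototype uses measured look-up tables instead).
lum = np.outer(levels / 255.0, levels / 255.0)          # lum[front, back]
lum_q = np.round(lum * 1023).astype(int)                # quantized luminance bins

# Candidate (front, back) pairs per quantized luminance value.
candidates = {}
for f in levels:
    for b in levels:
        candidates.setdefault(lum_q[f, b], []).append((f, b))

def select_pairs(target_bins):
    """Pick, per pixel, the candidate pair that keeps the back layer smooth."""
    prev_back = 128
    chosen = []
    for t in target_bins:
        pairs = candidates.get(int(t), [(0, 0)])
        f, b = min(pairs, key=lambda p: abs(p[1] - prev_back))
        chosen.append((f, b))
        prev_back = b
    return chosen

scanline = np.linspace(100, 900, 16).astype(int)         # toy target luminances
print(select_pairs(scanline)[:4])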
Under typical dark conditions found in reading rooms, a reader's pupils will contract and dilate as the visual focus
intermittently shifts between the high luminance monitor and the darker background wall, resulting in increased visual
fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however,
minimize these visual adjustments and potentially improve reader comfort and accuracy. This paper details results from
two psychophysical studies designed to determine the effect of a controlled ambient lighting increase on observer
detection of subtle objects and lesions viewed on a DICOM-calibrated medical-grade LCD. The first study examined the
effect of increased ambient lighting on detection of subtle objects embedded within a uniform background, while the
second study examined observer detection performance of subtle cancerous lesions in mammograms and chest
radiographs. In both studies, observers were presented with images under a dark room condition (1 lux) and an increased
room illuminance level (50 lux) for which the luminance level of the diffusely reflected light from the background wall
was approximately equal to that of the displayed image. The display was calibrated to an effective luminance ratio of
409 for both lighting conditions. Observer detection performance under each room illuminance condition was then
compared. Identification of subtle objects embedded within the uniform background improved from 59% to 67%, while
detection time decreased slightly with additional illuminance. An ROC analysis of the anatomical image results revealed
that observer AUC values remained constant while detection time decreased under increased illuminance. The results
provide evidence that an ambient lighting increase may be possible without compromising diagnostic efficacy.
Computer assistance in image-based diagnosis and therapy is a continuously growing field that has gained importance in several medical disciplines. Today, various free and commercial tools are available. However, only a few are routinely applied in clinical practice. In particular, tools that provide full support of the whole design process, from development and evaluation to actual deployment in a clinical environment, are missing.
In this work, we introduce a categorization of the design process into different types and fields of application.
To this end, we propose a novel framework that allows the development of software assistants that can be
integrated into the design process of new algorithms and systems. We focus on the specific features of software
prototypes that are valuable for engineers and clinicians, rather than on product development. An important
aspect in this work is the categorization of the software design process into different components. Furthermore, we
examine the interaction between these categories based on a new knowledge flow model. Finally, an encapsulation
of these tasks within an application framework is proposed. We discuss general requirements and present a layered
architecture. Several components for data- and workflow-management provide a generic functionality that can
be customized at the developer and user levels. Flexible handling is offered through the use of a visual
programming and rapid prototyping platform. Currently, the framework is used in 15 software prototypes and
as a basis of commercial products. More than 90 clinical partners all over the world work with these tools.
The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available
tools related to the information management needs of basic, clinical, and translational research. An open source
framework and agile programming methodology can enable distributed software development while an open architecture
will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data
sharing network that simulates a multi-center environment based on a federated data access model. This model includes
the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia
medical information such as clinical information, images, and bioinformatics data from multiple data sources. The
envisioned ImTK data environment will include an open architecture and data model implementation that complies with
existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical
framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology
Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.
Digital Imaging and Communications in Medicine (DICOM) has standardized Structured Reports (SR) to fully support conventional free-text reports, images, and structured information, thus enhancing the precision, clarity, and value of clinical documents. The SR standard provides the capacity to link key images, regions of interest within images, and measurements resulting from the Computer-Aided Diagnosis (CAD) process. Accordingly, SR bridges the traditional gap between CAD and PACS. Last year we presented an open and universal CAD-PACS integration toolkit that could seamlessly integrate standalone CAD workstations with a clinical PACS based on Structured Reports (SR) and IHE Post-Processing. In this presentation, we illustrate the workflow and procedures of CAD-PACS integration by showing examples from some available CAD applications using the toolkit. This proper integration will improve usage of the CAD applications for more accurate analysis and faster assessment in the clinical decision-making process.
The integration of imaging devices, diagnostic workstations, and image servers into Picture Archiving and Communication
Systems (PACS) has had an enormous effect on the efficiency of radiology workflows. The standardization
of the information exchange between the devices with the DICOM standard has been an essential
precondition for that development.
For surgical procedures, no such infrastructure exists. With the increasingly important role computerized planning
and assistance systems play in the surgical domain, an infrastructure that unifies the communication between
devices becomes necessary. In recent publications, the need for a modularized system design has been established.
A reference architecture for a Therapy Imaging and Model Management System (TIMMS) has been proposed.
It was accepted by the DICOM Working Group 6 as the reference architecture for DICOM developments for
surgery.
In this paper we propose the inclusion of implant planning systems into the PACS infrastructure. We propose
a generic information model for the patient-specific selection and positioning of implants from a repository according
to patient image data. The information models are based on clinical workflows from ENT, cardiac, and
orthopedic surgery as well as technical requirements derived from different use cases and systems.
We show an exemplary implementation of the model for application in ENT surgery: the selection and positioning
of an ossicular implant in the middle ear. An implant repository is stored in the PACS. It makes use of an
experimental implementation of the Surface Mesh Module that is currently being developed as extension to the
DICOM standard.
The situation today in most operating theaters is characterized by a large number of highly specialized but
isolated surgical-assist systems. Integration of these systems into a complete solution is the key to maximize
their usefulness and cost-effectiveness through optimization of the data flow and reuse of existing hardware. The goal of the integration is to design a distributed assist system by connecting multiple independent components
using standard protocols for communication and data exchange. Required surgical functionalities are created by
combining the appropriate components.
Such a distributed assist system induces fundamental changes in the nature of the data flow among individual
components. Today, components tend to exchange more-or-less independent and self-contained units of information
which they process once the complete data set has arrived. However, with distributed functionalities and
tighter integration necessary to complete a surgical process, increased continuous data transfer and processing
will be required. To handle this type of data transmission, the system will have to support streaming of continuous
data.
We present a general framework for the integration of data streaming into the Digital Operating Room. The
approach presented provides a two level system in which the management and supervision of the data producing
and consuming devices is independent of the actual mechanisms used to transmit the data. This approach allows
the use of infrastructure and transmission technologies specially adapted to the specific needs of the streamed
data.
Endoscopy is a medical technology used to inspect the inner surface of organs such as the colon. During endoscopic
inspection of the colon or colonoscopy, a tiny video camera generates a video signal, which is displayed on a monitor for
manual interpretation by physicians. In practice, these images are not typically captured, which may be attributed to a lack of tools for automatic capture, automatic analysis of important content, and quick and easy access to that content.
However, this lack of tools is being addressed by recent research efforts. This paper presents the description and
evaluation results of novel software that automates the capture of all images of a single colonoscopy into a single
digitized video file. The system uses metrics based on color and motion over time to determine whether the images are
derived from inside a single patient. During testing our system extracted 173 videos totaling 70 hours of endoscopic
video, out of 230 hours of raw video, with a segment-based sensitivity of 100% and specificity of 99%. No procedures
were missed. Two video files contained only a non-patient video signal. The features of our system are robust enough to
be suitable for day-to-day use in medical practice.
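The color-and-motion test can be pictured with a toy per-frame metric: inside the patient, endoscopic frames tend to be reddish and to change between frames, whereas out-of-patient frames are not. The metric and thresholds below are illustrative placeholders, not the published algorithm.

# Toy per-frame test for "inside the patient" based on color dominance and
# inter-frame motion. Thresholds and metrics are illustrative placeholders.
import numpy as np

def inside_patient(frame, prev_frame, red_thresh=1.2, motion_thresh=2.0):
    """frame, prev_frame: HxWx3 uint8 RGB arrays."""
    f = frame.astype(np.float32)
    redness = f[..., 0].mean() / (f[..., 1:].mean() + 1e-6)   # red vs green/blue
    motion = np.abs(f - prev_frame.astype(np.float32)).mean() # frame difference
    return redness > red_thresh and motion > motion_thresh

rng = np.random.default_rng(1)
prev = rng.integers(0, 255, (240, 320, 3), dtype=np.uint8)
curr = rng.integers(0, 255, (240, 320, 3), dtype=np.uint8)
print(inside_patient(curr, prev))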
Workflow analysis can be used to record the steps taken during clinical interventions with the goal of identifying
bottlenecks and streamlining the procedure efficiency. In this study, we recorded the workflow for uterine fibroid
embolization (UFE) procedures in the interventional radiology suite at Georgetown University Hospital in Washington,
DC, USA. We employed a custom client/server software architecture developed by the Innovation Center for Computer
Assisted Surgery (ICCAS) at the University of Leipzig, Germany. The software runs in a Java environment and
enables an observer to record the actions taken by the physician and surgical team during these interventions. The
recorded data are stored as an XML document, which can then be processed further. We recorded data from 30 patients and
found a mean intervention time of 01:49:46 (+/- 16:04), given as hh:mm:ss. The critical intervention step, the embolization, had a
mean time of 00:15:42 (+/- 05:49), which was only 15% of the total intervention time.
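A minimal Python sketch of the recording idea follows, assuming timestamped steps written to an XML document; the element and attribute names are illustrative and do not reproduce the ICCAS schema.

```python
# Minimal sketch of timestamped workflow recording to XML. The element and
# attribute names are hypothetical and do not reproduce the ICCAS schema.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone


class WorkflowRecorder:
    def __init__(self, procedure: str) -> None:
        self.root = ET.Element("procedure", name=procedure)

    def record(self, actor: str, action: str) -> None:
        ET.SubElement(self.root, "step", actor=actor, action=action,
                      timestamp=datetime.now(timezone.utc).isoformat())

    def save(self, path: str) -> None:
        ET.ElementTree(self.root).write(path, encoding="utf-8",
                                        xml_declaration=True)


# Usage: the observer logs each step as it happens.
rec = WorkflowRecorder("UFE")
rec.record("physician", "arterial puncture")
rec.record("physician", "embolization start")
rec.record("physician", "embolization end")
rec.save("ufe_case_01.xml")
```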
The image-guided surgery toolkit (IGSTK) is an open source C++ library that provides the basic components required
for developing image-guided surgery applications. While the initial version of the toolkit has been released, some
additional functionalities are required for certain applications. With increasing demand for real-time intraoperative image
data in image-guided surgery systems, we are adding a video grabber component to IGSTK to access intraoperative
imaging data such as video streams. Intraoperative data could be acquired from real-time imaging modalities such as
ultrasound or endoscopic cameras. The acquired image could be displayed as a single slice in a 2D window or integrated
in a 3D scene. For accurate display of the intraoperative image relative to the patient's preoperative image, proper
interaction and synchronization with IGSTK's tracker and other components are necessary. Several issues must be
considered during the design phase: 1) the functions of the video grabber component; 2) the interaction of the video grabber
component with existing and future IGSTK components; and 3) the layout of the state machine in the video grabber
component. This paper describes the video grabber component design and presents example applications using the video
grabber component.
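The following conceptual Python sketch illustrates how a state machine can guard a video grabber against invalid requests; the states and transitions are illustrative only and do not reproduce the IGSTK component or its C++ API.

```python
# Conceptual sketch of a state-machine-driven video grabber. The states and
# transitions are illustrative and do not reproduce the IGSTK component.
class VideoGrabber:
    TRANSITIONS = {
        ("Idle", "initialize"): "Initialized",
        ("Initialized", "start_grabbing"): "Grabbing",
        ("Grabbing", "grab_frame"): "Grabbing",
        ("Grabbing", "stop_grabbing"): "Initialized",
        ("Initialized", "release"): "Idle",
    }

    def __init__(self) -> None:
        self.state = "Idle"

    def request(self, event: str) -> None:
        """Process an event; invalid requests leave the state unchanged,
        mirroring how a state machine guards against misuse."""
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is None:
            print(f"ignored '{event}' in state '{self.state}'")
            return
        self.state = next_state
        print(f"'{event}' -> state '{self.state}'")


grabber = VideoGrabber()
grabber.request("start_grabbing")   # ignored: not initialized yet
grabber.request("initialize")
grabber.request("start_grabbing")
grabber.request("grab_frame")
grabber.request("stop_grabbing")
```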
This work focuses on image retrieval for brain tumors from Magnetic Resonance (MR) studies utilizing principal component analysis (PCA) and linear discriminant analysis (LDA) techniques. The research has been broken into three stages. Stage 1 consists of developing the PCA and LDA algorithms for use in content-based image retrieval (CBIR) systems. Stage 2 consists of evaluating the PCA and LDA algorithms on synthetic tumor images with added noise and shading artifacts. Stage 3 consists of tailoring the algorithms specifically for automated detection and CBIR of MR contrast-enhancing tumors matching a given query image. The algorithms have been developed and tested successfully for synthetic tumor images and actual contrast-enhanced tumors. We intend to integrate the PCA and LDA algorithms to index the tumor shapes derived from actual MR images. Two additional indices, size and location, will also be used to index the data.
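For illustration, here is a short Python sketch of the retrieval pipeline using scikit-learn PCA and LDA on placeholder feature vectors; the feature extraction, data, and parameters are assumptions, not the study's exact algorithm.

```python
# Sketch of PCA-based retrieval on tumor shape feature vectors, with LDA as
# an optional supervised projection. Illustrates the general pipeline only;
# the feature vectors and parameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))    # e.g. shape descriptors per tumor
labels = rng.integers(0, 3, size=200)    # tumor categories (if available)

pca = PCA(n_components=10).fit(features)
lda = LinearDiscriminantAnalysis(n_components=2).fit(features, labels)


def retrieve(query: np.ndarray, projector, database: np.ndarray, k: int = 5):
    """Return indices of the k database entries closest to the query in the
    projected (PCA or LDA) space."""
    db_proj = projector.transform(database)
    q_proj = projector.transform(query.reshape(1, -1))
    dists = np.linalg.norm(db_proj - q_proj, axis=1)
    return np.argsort(dists)[:k]


print(retrieve(features[0], pca, features))
print(retrieve(features[0], lda, features))
```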
PACS display workstations usually display medical image volumes in a single pattern at a time. Though some
image workstations offer three orthogonal views for orientation, users cannot view different patterns of
three-dimensional objects simultaneously. In this paper, we propose a novel framework that integrates different
rendering methods by utilizing the pipeline mechanism of the Visualization Toolkit (VTK), an open source software
system for 3D computer graphics, image processing, and visualization. On the basis of VTK, the framework
can display multidimensional medical images in two different patterns, Multi-Planar Reformation (MPR) and
Maximum/Minimum Intensity Projection (MIP), at the same time, letting users configure the viewpoint freely
(what we call Free-MPR) and switch between patterns without restriction. Furthermore, the
framework can easily be applied to medical image workstations or Web-based network applications because it is provided as a
plug-in that can be integrated conveniently. Preliminary testing showed that our MedViewCtrl
display framework can be integrated into any Windows-based display program or the Internet Explorer Web
browser to provide multi-window, multidimensional visualization of large
medical image data sets.
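As a conceptual aside, the following numpy snippet shows what the two display patterns compute (an orthogonal MPR slice versus a maximum or minimum intensity projection); the actual framework assembles these through VTK pipelines, which are not reproduced here.

```python
# Conceptual numpy illustration of the two display patterns combined by the
# framework: an orthogonal MPR slice and a maximum/minimum intensity
# projection. The volume below is a random placeholder.
import numpy as np

volume = np.random.rand(128, 256, 256)   # placeholder CT/MR volume (z, y, x)

# Multi-Planar Reformation: resample the volume on a chosen plane; the
# simplest case is an orthogonal slice at a user-selected position.
axial_mpr = volume[64, :, :]
coronal_mpr = volume[:, 128, :]

# Maximum Intensity Projection: keep the brightest voxel along the view axis.
axial_mip = volume.max(axis=0)
min_ip = volume.min(axis=0)               # MinIP, the complementary pattern

print(axial_mpr.shape, coronal_mpr.shape, axial_mip.shape, min_ip.shape)
```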
The demand for sharing medical information has kept rising. However, the transmission and displaying of high
resolution medical images are limited if the network has a low transmission speed or the terminal devices have limited
resources. In this paper, we present an approach based on JPEG2000 Interactive Protocol (JPIP) to browse high
resolution medical images in an efficient way. We designed and implemented an interactive image communication
system with client/server architecture and integrated it with Picture Archiving and Communication System (PACS). In
our interactive image communication system, the JPIP server works as the middleware between clients and PACS
servers. Both desktop clients and wireless mobile clients can browse high resolution images stored in PACS servers via
accessing the JPIP server. Clients need only send simple requests that identify the resolution, quality, and region of
interest, and they download the selected portions of the JPEG2000 code-stream instead of downloading and decoding the entire
code-stream. After receiving a request from a client, the JPIP server downloads the requested image from the PACS
server and responds to the client with the appropriate code-stream portions. We also tested the performance of the JPIP
server; it runs stably and reliably under heavy load.
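For illustration, here is a client-side sketch of a JPIP-style request; the query fields follow the JPIP (ISO/IEC 15444-9) query syntax, while the server URL and image name are placeholders.

```python
# Sketch of a client-side JPIP request for a region of interest at reduced
# resolution. The query fields (target, fsiz, roff, rsiz, layers) follow the
# JPIP query syntax; the server URL and image name are placeholders.
from urllib.parse import urlencode

JPIP_SERVER = "http://jpip-server.example.org/jpip"   # hypothetical endpoint

params = {
    "target": "CT_head_001.jp2",   # JPEG2000 code-stream behind the server
    "fsiz": "512,512",             # requested frame (resolution) size
    "roff": "128,128",             # offset of the region of interest
    "rsiz": "256,256",             # size of the region of interest
    "layers": "4",                 # quality layers to include
}

url = f"{JPIP_SERVER}?{urlencode(params)}"
print(url)
# An HTTP GET on this URL (e.g. via urllib.request.urlopen) would return only
# the portions of the code-stream needed for the requested view, rather than
# the entire image.
```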
We have been developing a method to quantitatively systematize nodules by statistically comparing image features
with clinicopathological features, postoperative recurrence, and death. However, the required research data set is large,
which makes the management and use of image information, such as diagnostic information and image features, difficult.
Systematizing nodules with a database has been proposed as a solution: by managing, searching, and comparing image
information efficiently in a database, more efficient nodule systematization can be realized. In this paper, we describe
nodule systematization based on a database, the construction of the database at its core, and a user interface that can
be operated easily through a GUI. We also report operational results and an evaluation of the prototype.
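A minimal sketch of the database idea using SQLite follows; the schema and feature fields are hypothetical illustrations, not the authors' actual design.

```python
# Minimal sketch of the database idea: nodule image features stored alongside
# clinical information so they can be searched and compared. The schema and
# example records are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE nodule (
        id INTEGER PRIMARY KEY,
        patient_id TEXT,
        diameter_mm REAL,
        mean_ct_value REAL,
        circularity REAL,
        pathology TEXT,
        recurrence INTEGER
    )""")
conn.executemany(
    "INSERT INTO nodule VALUES (NULL, ?, ?, ?, ?, ?, ?)",
    [("P001", 14.2, -320.0, 0.81, "adenocarcinoma", 0),
     ("P002", 22.7, -45.5, 0.64, "squamous", 1)])

# Example query: nodules whose image features resemble a given case.
rows = conn.execute(
    "SELECT patient_id, diameter_mm, pathology FROM nodule "
    "WHERE diameter_mm BETWEEN ? AND ? AND circularity > ?",
    (10.0, 25.0, 0.6)).fetchall()
print(rows)
```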
In this work, a concept for coupling a system for content-based image retrieval in medical applications (IRMA) with
hospital information systems is presented. We aim at improving the work flow of radiologists and evaluating the
recognition performance of the IRMA system in clinical routine. The integration is designed such that a failure of IRMA
does not affect the routine operation of the other systems. The coupling is realized by generic communication modules
with the radiology information system, and the picture archiving and communication system (PACS) over the standard
protocols Digital Imaging and Communications in Medicine (DICOM) and Health Layer 7 (HL7). An optional plug-in
for the radiological viewing station further enhances the usability. Based on this concept, the pre-fetching of relevant
images for recurrent examinations is improved. When an examination is scheduled, all previous images of the patient are
read by the IRMA system with DICOM query/retrieve. If the images were not present before in our database, features
are extracted, stored, and indexed. After the acquisition of new images from the imaging modality, the new images are
automatically retrieved by the IRMA system with DICOM query/retrieve and similar images are selected based on the
stored global signatures. These images are then loaded into the online storage of the PACS and are available for
diagnostic purposes together with the images already pre-selected by the PACS. Thus the radiologist avoids the
delays that result from manually fetching additional images from archives when they have not been selected
automatically by alphanumeric metadata, and can sort all fetched images by the computed IRMA similarity.
Furthermore, the hanging of images in the viewing software is planned to be organized automatically from IRMA
suggestions, further shortening the examination time and reducing manual interaction. Based on the generality
of our integration concept, a CBIR-based second opinion to support diagnostics and computer-based training of
radiologists will be established in the near future.
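For illustration, a Python sketch of the signature-based selection step is given below; the histogram signature is a placeholder for IRMA's actual global features, and the DICOM query/retrieve transfer itself is omitted.

```python
# Sketch of the signature-based selection step: previously stored images are
# ranked by similarity of a global signature to the newly acquired image.
# The gray-level histogram used here is a placeholder, not IRMA's feature set.
import numpy as np


def global_signature(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """A simple global signature: normalized gray-level histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255), density=True)
    return hist


def rank_by_similarity(new_image: np.ndarray, prior_images: dict) -> list:
    """Return prior image IDs sorted from most to least similar."""
    query = global_signature(new_image)
    dists = {uid: np.linalg.norm(global_signature(img) - query)
             for uid, img in prior_images.items()}
    return sorted(dists, key=dists.get)


rng = np.random.default_rng(1)
new_img = rng.integers(0, 256, size=(256, 256))
priors = {f"1.2.840.xxx.{i}": rng.integers(0, 256, size=(256, 256))
          for i in range(5)}   # placeholder identifiers and images
print(rank_by_similarity(new_img, priors))
```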
The current study describes our experience implementing a mobile HIS/PACS workstation to assist critical
cardiac patients in an Intensive Care Unit (ICU). Recently, mobile devices connected to a WiFi network were
incorporated into the hospital information system, providing the same functionality as their desktop counterparts.
However, the use of commercial devices such as PDAs and Pocket PCs presented a series of problems that are
accentuated in the ICU: 1) low battery autonomy, requiring constant recharging; 2) low robustness of the
devices; 3) insufficient display area to show medical images and vital signs; 4) data entry remains a major problem and
imposes extra time consumption on the staff; and 5) high cost when fully equipped with a WiFi connection, a bar-code
reader, and memory. To address these problems we developed a mobile workstation (MedKart) that provides
access to the HIS and PACS systems, with all the necessary resources and an ergonomic, practical design for use by physicians
and nurses inside the ICU. The system fulfills the requirements for point-of-care assistance of critical cardiac patients in
Intensive Care Units.
Mass screening based on multi-helical CT images requires a considerable number of images to
be read. This time-consuming step makes the use of helical CT for mass screening impractical at
present. To overcome this problem, we have provided diagnostic assistance methods to medical screening
specialists by developing a lung cancer screening algorithm that automatically detects suspected lung
cancers in helical CT images, a coronary artery calcification screening algorithm that automatically
detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative
evaluation of osteoporosis likelihood, all using helical CT images acquired for lung cancer mass screening.
Functions for detailed observation of suspicious shadows are provided in a computer-aided diagnosis
workstation together with these screening algorithms. We have also developed a telemedicine network based
on a Web medical image conference system with improved security of image transmission, a biometric
fingerprint authentication system, and a biometric face authentication system. Biometric face authentication
used at the telemedicine site enables file encryption and verified login, so that patients'
private information is protected. Based on these diagnostic assistance methods, we have developed a new
computer-aided workstation and a new telemedicine network that can display suspected lesions
three-dimensionally in a short time. The results of this study indicate that our filmless radiological information
system, based on the computer-aided diagnosis workstation, together with our telemedicine network
can increase diagnostic speed and accuracy and improve the security of medical information.
The transition of a development prototype to a product is a challenging task. A prototype has several shortcomings,
such as difficult client deployment, configuration issues, and high memory usage. The Eclipse Rich Client Platform (RCP) is a development platform that offers many advantages, including multi-platform support, a small memory footprint, and an extensible architecture. In this work we present the use of Eclipse RCP as the basis for the deployment of a Contextual Medical Image Viewer. The Contextual Viewer is an interface concept for medical/clinical information visualization that uses different contexts to enhance the user's capability and experience. We present the contextual viewer for X-ray angiographic images, based on Eclipse RCP, which can use information from two different
sources. We conclude that Eclipse RCP is a promising platform for end-user-quality software, improving the
Contextual Medical Image Viewer's features.
Bone age assessment is most commonly performed with the Greulich and Pyle (G&P)
book atlas, which was developed in the 1950s. The population of the United States is not as
homogeneous as the Caucasian population on which the G&P atlas was based, especially in the
Los Angeles, California area. A digital hand atlas (DHA) based on 1,390 hand images of children
of different racial backgrounds (Caucasian, African American, Hispanic, and Asian) aged 0-18
years was collected at Children's Hospital Los Angeles. Statistical analysis revealed that
significant discrepancies exist between the Hispanic population and the G&P atlas standard. To validate the use
of the DHA as a clinical standard, diagnostic radiologists performed reads on Hispanic pediatric hand
and wrist computed radiography images using either the G&P pediatric radiographic atlas or the
DHA as reference. The order in which the
atlases were used (G&P followed by DHA, or vice versa) for each image was determined before the actual
reading began. Statistical analysis of the results was then performed to determine whether a discrepancy
exists between the two readings.
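For illustration, the snippet below shows one plausible way to test for a discrepancy between the paired readings (G&P-referenced versus DHA-referenced bone age for the same images); the paper does not specify its statistical test, and the data here are synthetic placeholders.

```python
# Illustrative paired comparison of two readings of the same images. The
# choice of test (paired t-test, Wilcoxon signed-rank) and the data are
# assumptions for demonstration only, not the study's actual analysis.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(42)
chronological = rng.uniform(1, 17, size=30)
reading_gp = chronological + rng.normal(0.4, 0.6, size=30)   # placeholder
reading_dha = chronological + rng.normal(0.0, 0.6, size=30)  # placeholder

t_stat, t_p = ttest_rel(reading_gp, reading_dha)
w_stat, w_p = wilcoxon(reading_gp, reading_dha)
print(f"paired t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={w_p:.3f}")
```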