QuantMed is a platform of software components for clinical deep learning, together forming the QuantMed infrastructure. It addresses several challenges: the systematic generation and accumulation of training data; the validation and utilization of quantitative diagnostic software based on deep learning; and, thereby, support for more reliable, accurate, and efficient clinical decisions. QuantMed provides learning and expert-correction capabilities on large, heterogeneous datasets. The platform supports collaboration among multiple partner institutions to extract medical knowledge from large amounts of clinical data via a two-stage learning approach: sensitive patient data remains on premises and is first analyzed locally in so-called QuantMed nodes, with support for GPU clusters accelerating the learning process. The knowledge is then accumulated through the QuantMed hub and can be redistributed afterwards. The resulting knowledge modules – algorithmic solution components that contain trained deep learning networks as well as specifications of input data and output parameters – do not contain any personalized data and are therefore safe to share under data protection law. In this way, our modular infrastructure makes it possible to carry out translational research on deep learning efficiently and to deploy results seamlessly into prototypes or third-party software.
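The two-stage approach described above can be illustrated with a minimal federated-averaging sketch. This is an assumption-laden toy model, not QuantMed's actual API: the function names, the logistic-regression stand-in for a deep network, and the weighted-average aggregation rule are all illustrative.

```python
import numpy as np

def train_local(weights, data, labels, lr=0.1, epochs=5):
    """Local stage (illustrative): train a model on-premises in a node.

    Stands in for the deep-learning step inside a QuantMed node; only
    the resulting weights, never the patient data, leave the node.
    """
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))        # sigmoid
        grad = data.T @ (preds - labels) / len(labels)  # logistic gradient
        w -= lr * grad
    return w

def aggregate(node_weights, node_sizes):
    """Hub stage (illustrative): combine node models by weighted averaging."""
    total = sum(node_sizes)
    return sum(w * (n / total) for w, n in zip(node_weights, node_sizes))
```

The key property the sketch preserves is that `aggregate` sees only model parameters, so the redistributed result contains no personalized data.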
In the last few years, fiber tracking tools have become popular in clinical contexts, e.g., for pre- and intraoperative
neurosurgical planning. The efficient, intuitive, and reproducible selection of fiber bundles still constitutes one
of the main issues. In this paper, we present a framework for real-time selection of axonal fiber bundles
using a Wii remote, the wireless controller of Nintendo's gaming console. It enables the user to select
fiber bundles without any other input devices. To achieve a smooth interaction, we propose a novel space-partitioning
data structure for efficient 3D range queries in a data set consisting of precomputed fibers. The data
structure, which is adapted to the special geometry of fiber tracts, allows for queries that are many times faster
than previous state-of-the-art approaches. To reliably extract fibers for further processing,
e.g., for quantification purposes or comparisons with preoperatively tracked fibers, we developed an expectation-maximization
clustering algorithm that can refine the range queries. Our initial experiments have shown that
white matter fiber bundles can be reliably selected within a few seconds using the Wii remote, which was placed in a
sterile plastic bag to simulate usage under surgical conditions.
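The abstract does not spell out the fiber-adapted data structure itself, so the following is only a generic stand-in: a uniform grid over precomputed fiber points that answers spherical range queries by visiting just the cells overlapping the query sphere. The class and method names are illustrative, not from the paper.

```python
from collections import defaultdict
import math

class FiberGrid:
    """Uniform grid over 3D fiber points for spherical range queries.

    Each point of each precomputed fiber is binned into a grid cell;
    a query inspects only cells that overlap the query sphere instead
    of testing every fiber point.
    """
    def __init__(self, cell_size):
        self.cell = cell_size
        self.bins = defaultdict(list)  # cell index -> [(fiber_id, point)]

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def insert(self, fiber_id, points):
        for p in points:
            self.bins[self._key(p)].append((fiber_id, p))

    def query(self, center, radius):
        """Return ids of fibers with at least one point inside the sphere."""
        r2 = radius * radius
        lo = self._key(tuple(c - radius for c in center))
        hi = self._key(tuple(c + radius for c in center))
        hits = set()
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    for fid, p in self.bins.get((i, j, k), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2:
                            hits.add(fid)
        return hits
```

A structure actually adapted to fiber geometry would exploit the fact that fiber points are ordered along curves (e.g., indexing segments rather than isolated points), which is presumably where the reported speedup comes from.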
The clinical application of fiber tracking is becoming more widespread, so it is highly important to be able to produce
high-quality results in a very short time. Additionally, research in this field would benefit from fast implementation and
evaluation of new algorithms. In this paper we present a GPU-based fiber tracking framework using the latest features of
commodity graphics hardware such as geometry shaders. The implemented streamline algorithm performs fiber
reconstruction of a whole brain using 30,000 seed points in less than 120 ms on a high-end GeForce GTX 280 graphics
board. Seed points are sent to the GPU which emits up to a user-defined number of fiber points per seed vertex. These
are recorded to a vertex buffer that can be rendered or downloaded to main memory for further processing. If the output
limit of the geometry shader is reached before the stopping criteria are fulfilled, the last vertices generated are then used
in a subsequent pass where the geometry shader continues the tracking.
Since all the data resides in graphics memory, the intermediate steps can be visualized in real time. The fast
reconstruction not only allows for an interactive change of tracking parameters but, since the tracking code is
implemented using GPU shaders, even for a runtime change of the algorithm. Thus, rapid development and evaluation of
different algorithms and parameter sets becomes possible, which is of high value, e.g., for research on uncertainty in fiber
tracking.
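The pass structure described above can be sketched on the CPU. This is a simplified reference, not the shader code: `field` is an assumed callback returning the principal diffusion direction at a point, Euler integration stands in for whatever integration scheme the framework uses, and `max_out` mimics the geometry shader's per-pass vertex output limit, after which tracking resumes from the last emitted vertex in a subsequent pass.

```python
import numpy as np

def track_streamline(field, seed, step=0.5, max_out=64, max_total=512):
    """CPU reference for multi-pass GPU streamline tracking.

    field(p) -> unit principal diffusion direction at point p.
    max_out mimics the geometry shader's output limit per pass.
    """
    points = [np.asarray(seed, dtype=float)]
    prev_dir = None
    while len(points) < max_total:
        emitted = 0
        while emitted < max_out and len(points) < max_total:
            d = field(points[-1])
            if prev_dir is not None and np.dot(d, prev_dir) < 0:
                d = -d                       # keep orientation consistent
            if np.linalg.norm(d) < 1e-6:     # stopping criterion
                return points
            points.append(points[-1] + step * d)
            prev_dir = d
            emitted += 1
        # pass boundary: the last vertex seeds the next pass,
        # as when the geometry shader's output limit is reached
    return points
```

On the GPU, the emitted points would land in a vertex buffer (via transform feedback / stream output) to be rendered directly or read back, which is what makes intermediate steps visible in real time.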
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses
based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the
liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are
not visible in preoperative data and their existence may require changes to the resection strategy. We propose
a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a
preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated
ultrasound system. A fast communication protocol enables our application to exchange crucial data with this
navigation system during an intervention.
A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within
a complex 3D planning model that includes vascular systems, tumors, and organ surfaces. When the ultrasound
plane is located inside the liver, its occlusion by the planning model is an inevitable problem
for any applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while
perceiving context-relevant planning information. To improve orientation ability and distance perception, we
include additional depth cues by applying new illustrative visualization algorithms.
Preliminary evaluations confirm that, in the case of intraoperatively detected tumors, a risk analysis adaptation
is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with
a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion
problems.
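One core piece of such an adaptation can be illustrated as a margin computation: given a newly detected tumor (position and size reported by the navigated ultrasound) and the preoperative vessel model, find the vessel samples inside the safety margin whose supplied territory the risk analysis must reconsider. The function name, the 5 mm margin, and the centerline-point representation are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def vessels_at_risk(tumor_center, tumor_radius, vessel_points, margin=5.0):
    """Indices of vessel centerline samples within the safety margin.

    vessel_points: (N, 3) array of preoperative vessel centerline samples.
    A sample is at risk if it lies closer than tumor_radius + margin (mm)
    to the center of the newly detected tumor.
    """
    d = np.linalg.norm(vessel_points - np.asarray(tumor_center), axis=1)
    return np.flatnonzero(d <= tumor_radius + margin)
```

A full risk analysis would propagate from the flagged vessel segments to the liver territory they supply; the sketch covers only the proximity test that triggers that step.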