We describe a novel, two-stage computer assistance system for lung anomaly detection using ultrasound imaging in the intensive care setting to improve operator performance and patient stratification during coronavirus pandemics. The proposed system consists of two deep-learning-based models: a quality assessment module that automates predictions of image quality, and a diagnosis assistance module that determines the likelihood of anomaly in ultrasound images of sufficient quality. Our two-stage strategy uses a novelty detection algorithm to address the lack of control cases available for training the quality assessment classifier. The diagnosis assistance module can then be trained with data that are deemed of sufficient quality, guaranteed by the closed-loop feedback mechanism from the quality assessment module. Using more than 25,000 ultrasound images from 37 COVID-19-positive patients scanned at two hospitals, plus 12 control cases, this study demonstrates the feasibility of using the proposed machine learning approach. The quality assessment module classified images as sufficient or insufficient quality with an accuracy of 86%. For data of sufficient quality – as determined by the quality assessment module – the mean classification accuracy, sensitivity, and specificity in detecting COVID-19-positive cases were 0.95, 0.91, and 0.97, respectively, across five holdout test data sets unseen during the training of any networks within the proposed system. Overall, the integration of the two modules yields accurate, fast, and practical acquisition guidance and diagnostic assistance for patients with suspected respiratory conditions at point-of-care.
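To make the two-stage gating concrete, the sketch below (not the authors' code) shows how a quality-assessment score can gate frames before a diagnosis model is applied; `quality_score` and `anomaly_score` are hypothetical stand-ins for the trained networks, implemented as dummy callables so the example runs.

```python
# Minimal sketch of two-stage inference: frames failing the quality check are
# rejected (operator prompted to re-acquire); the rest go to the diagnosis model.
import numpy as np

def quality_score(frame: np.ndarray) -> float:
    """Hypothetical novelty-detection score; higher = closer to sufficient quality."""
    return float(frame.mean())          # placeholder only

def anomaly_score(frame: np.ndarray) -> float:
    """Hypothetical likelihood of anomaly from the diagnosis network."""
    return float(frame.std())           # placeholder only

def two_stage_inference(frames, quality_threshold=0.5, anomaly_threshold=0.5):
    accepted, rejected, positives = [], [], 0
    for frame in frames:
        if quality_score(frame) < quality_threshold:
            rejected.append(frame)      # insufficient quality: exclude from diagnosis
            continue
        accepted.append(frame)
        if anomaly_score(frame) >= anomaly_threshold:
            positives += 1
    # a simple per-exam decision: majority vote over the accepted frames
    return len(accepted), len(rejected), positives > len(accepted) / 2

frames = [np.random.rand(128, 128) for _ in range(20)]
print(two_stage_inference(frames))
```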
PURPOSE: Partial nephrectomy is the preferred method for managing small renal masses. This procedure has significant advantages over radical nephrectomy. However, partial nephrectomy is under-used due to its difficulty. We propose a navigation system for laparoscopic partial nephrectomy. In this study, we evaluate the usability and accuracy of the navigation system. METHODS: An electromagnetically tracked navigation system for partial nephrectomy was developed. This system tracks the positions of the laparoscopic scissors, ultrasound probe, tumor, and calyces and vasculature. Phantom kidneys were created using mixtures of plastisol and cellulose. To test the system, navigation display quality was assessed by measuring lag and the number of frames per second displayed. The accuracy of the system was determined through fiducial registration. Finally, a study consisting of ten participants was conducted to assess the usability of the navigation system using the System Usability Scale. RESULTS: The mean System Usability Scale score of the navigation system was 82.5. The navigation display had an average lag of 243 milliseconds and showed 5 frames per second. The accuracy was measured with fiducial registration and found to have an RMS error of 2.84 mm. CONCLUSION: The results of this study suggest that the partial nephrectomy navigation system developed is both usable and accurate. Future work will include the conversion of the laparoscopic scissor tool tracking to optical tracking. Further studies will be conducted to determine the effectiveness of this technology in tumor resection and avoidance of calyx and vasculature damage. We will additionally explore this system as a training tool.
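The RMS error quoted above is a fiducial registration error; the following sketch, which assumes a standard point-based rigid registration (Kabsch/SVD) rather than the authors' specific implementation, shows how such an error can be computed.

```python
# Sketch: rigid point-based registration of fiducials and the RMS fiducial
# registration error (FRE). Data are synthetic example values in millimetres.
import numpy as np

def rigid_register(moving: np.ndarray, fixed: np.ndarray):
    """Return rotation R and translation t that best map moving -> fixed (Kabsch)."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = fc - R @ mc
    return R, t

def fiducial_rms_error(moving, fixed):
    R, t = rigid_register(moving, fixed)
    residuals = (moving @ R.T + t) - fixed
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

phantom_fiducials = np.random.rand(6, 3) * 100                      # CT space
tracked_fiducials = phantom_fiducials + np.random.normal(0, 2.0, (6, 3))
print(f"FRE (RMS): {fiducial_rms_error(tracked_fiducials, phantom_fiducials):.2f} mm")
```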
PURPOSE: Identification of vertebral landmarks with ultrasound is a challenging task. We propose a step-wise computer-guided landmark identification method for developing 3D spine visualizations from tracked ultrasound images. METHODS: Transverse process bone patches were identified to generate an initial spine segmentation in real time from live ultrasound images. A modified k-means algorithm was adapted to provide an initial estimate of landmark locations from the ultrasound image segmentation. Because these initial estimates do not always provide a landmark on every segmented image patch, further processing that exploits the spine's symmetry may improve the result captured from the sequences. Five healthy subjects received thoracolumbar ultrasound scans. Their real-time ultrasound image segmentations were used to create 3D visualizations for initial validation of the method. RESULTS: The resulting visualizations conform to the parasagittal curvature of the ultrasound images. Our processing can correct the initial estimation to reveal the underlying structure and curvature of the spine of each subject. However, the visualizations are typically truncated and suffer from dilation or expansion near their superior- and inferior-most points. CONCLUSION: Our methods encompass a step-wise approach to bridge the gap between ultrasound scans and 3D visualization of the scoliotic spine generated from vertebral landmarks. Though a lack of ground-truth imaging prevented complete validation of the workflow, patient-specific deformation is clearly captured in the anterior-posterior curvatures. The frequency of user interaction required to complete the correction methods presents a challenge in moving toward full automation and requires further attention.
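As a rough illustration of the landmark initialization step, the sketch below clusters segmented transverse-process points with standard k-means, one centroid per expected landmark; the authors' modification to k-means is not reproduced here, so this is an assumed baseline only.

```python
# Sketch: k-means over segmented transverse-process point coordinates to get
# initial landmark estimates (one centroid per expected landmark).
import numpy as np
from sklearn.cluster import KMeans

def initial_landmarks(segmented_points: np.ndarray, n_landmarks: int) -> np.ndarray:
    """segmented_points: (N, 3) coordinates of segmented bone-patch voxels (mm)."""
    km = KMeans(n_clusters=n_landmarks, n_init=10, random_state=0)
    km.fit(segmented_points)
    return km.cluster_centers_            # (n_landmarks, 3) initial estimates

points = np.random.rand(500, 3) * [40, 300, 60]   # synthetic spine-like extent, mm
print(initial_landmarks(points, n_landmarks=24).shape)
```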
PURPOSE: Spatially tracked ultrasound-guided needle insertions may require electromagnetic sensors to be clipped onto the needle and ultrasound probe if not already embedded in the tools. It is assumed that switching the electromagnetic sensor clip does not impact the accuracy of the computed calibration. We propose an experimental process to determine whether or not devices should be calibrated on a more frequent basis. METHODS: We performed 250 calibrations: 125 on the needle and 125 on the ultrasound probe. Every five calibrations, the tracking clip was removed and reattached. Every 25 calibrations, the tracking clip was exchanged for an identical 3D-printed model. From the resulting transform matrices, coordinate transformations were computed. Reproducibility was analyzed by examining the difference between each group mean and the grand mean, the standard deviation, and the Shapiro-Wilk normality statistic. Data were plotted to visualize differences between calibrations along each direction. RESULTS: For the needle calibrations, transformations parallel to the tracking clip and perpendicular to the needle demonstrated the greatest deviation. For the ultrasound calibrations, transformations perpendicular to the sound propagation demonstrated the greatest deviation. CONCLUSION: Needle and ultrasound calibrations are reproducible when changing the tracking clip. These devices do not need to be calibrated on a more frequent basis. Caution should be taken to minimize confounding variables, such as bending of the needle or the ultrasound beam width, at the time of calibration. KEY WORDS: Calibration, tracking, reproducibility, electromagnetic, spatial, ultrasound-guided needle navigation, transformation, standard deviation.
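The reproducibility metrics named in the methods can be computed along the lines of the following sketch, which assumes the 125 calibrations are grouped in sets of five per clip attachment; the analysis shown is illustrative rather than the authors' exact procedure.

```python
# Sketch: per-attachment mean vs. grand mean, per-axis standard deviation, and
# the Shapiro-Wilk normality statistic for repeated calibration translations.
import numpy as np
from scipy import stats

def reproducibility_summary(calibrations: np.ndarray, group_size: int = 5):
    """calibrations: (N, 3) translation components of repeated calibrations (mm)."""
    grand_mean = calibrations.mean(axis=0)
    groups = calibrations.reshape(-1, group_size, 3)      # one group per clip attachment
    mean_offsets = groups.mean(axis=1) - grand_mean       # group mean minus grand mean
    std_per_axis = calibrations.std(axis=0, ddof=1)
    shapiro_w = [stats.shapiro(calibrations[:, axis]).statistic for axis in range(3)]
    return mean_offsets, std_per_axis, shapiro_w

data = np.random.normal([1.0, -2.0, 150.0], 0.3, size=(125, 3))   # synthetic example
offsets, std, w = reproducibility_summary(data)
print(std, w)
```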
PURPOSE: MR-guided injections are safer for the patient and the physician than CT-guided interventions but require a significant amount of hand-eye coordination and mental registration by the physician. We propose a low-cost, adjustable, handheld guide to assist the operator in aligning the needle in the correct orientation for the injection. METHODS: The operator adjusts the guide to the desired insertion angle as determined from an MRI image. Next, the operator aligns the guide in the image plane using the horizontal laser and level gradient. The needle is placed into the sleeve of the guide and then inserted into the patient. To evaluate the method, two operators inserted five needles into two facet joints of a lumbar spine phantom. Insertion points, target points, and trajectory angles were compared to the projected needle trajectory using an electromagnetic tracking system. RESULTS: On their first attempt, operators were able to insert the needle into the facet joint 85% of the time. On average, operators had an insertion point error of 2.92 ± 1.57 mm, a target point error of 3.39 ± 2.28 mm, and a trajectory error of 3.98 ± 2.09 degrees. CONCLUSION: A low-cost, adjustable, handheld guide was developed to assist in correctly positioning a needle during MR-guided needle interventions. When used for lumbar facet joint injections in phantoms, the guide is as accurate as other needle placement assistance mechanisms, including biplane laser guides and image overlay devices.
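The three reported error metrics can be computed from the tracked and planned trajectories roughly as follows; variable names are illustrative and not taken from the original software.

```python
# Sketch: insertion point error, target point error, and trajectory angle error
# from tracked needle entry/tip points and the planned trajectory (mm, degrees).
import numpy as np

def needle_errors(entry, tip, planned_entry, planned_target):
    insertion_error = np.linalg.norm(entry - planned_entry)     # mm
    target_error = np.linalg.norm(tip - planned_target)         # mm
    v_actual = (tip - entry) / np.linalg.norm(tip - entry)
    v_planned = (planned_target - planned_entry) / np.linalg.norm(planned_target - planned_entry)
    trajectory_error = np.degrees(np.arccos(np.clip(v_actual @ v_planned, -1.0, 1.0)))
    return insertion_error, target_error, trajectory_error

print(needle_errors(np.array([0.0, 0, 0]), np.array([2.0, 1, 60]),
                    np.array([1.0, 1, 0]), np.array([1.0, 2, 62])))
```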
PURPOSE: Ultrasound offers a safe radiation-free approach to visualize the spine and measure or assess scoliosis. However, ultrasound assessment also poses major challenges. We propose a real-time algorithm and software implementation to automatically delineate the posterior surface patches of transverse processes in tracked ultrasound; a necessary step toward the ultimate goal of spinal curvature measurement.
METHODS: Following pre-filtering of each captured ultrasound image, the shadow cast by each transverse process is examined, and contours that are likely posterior bone surfaces are kept. From these contours, a three-dimensional volume of the bone surfaces is created in real time as the operator acquires the images. The processing algorithm was implemented on the PLUS and 3D Slicer open-source software platforms.
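A highly simplified, assumed approximation of the shadow-based filtering is sketched below: for each scanline, the deepest sufficiently bright pixel is kept as a candidate bone surface point only if the region beneath it is dark (an acoustic shadow). This is not the PLUS/3D Slicer module itself.

```python
# Sketch: per-scanline bone surface candidate detection using the acoustic
# shadow below a bright reflection; thresholds and logic are illustrative.
import numpy as np

def bone_surface_contour(image: np.ndarray, intensity_thresh=0.6, shadow_thresh=0.2):
    """image: (rows, cols) grayscale in [0, 1]; returns (col, row) surface points."""
    points = []
    for col in range(image.shape[1]):
        scanline = image[:, col]
        bright = np.flatnonzero(scanline > intensity_thresh)
        if bright.size == 0:
            continue
        row = bright[-1]                                   # deepest bright reflection
        if scanline[row + 1:].size and scanline[row + 1:].mean() < shadow_thresh:
            points.append((col, row))                      # shadow below -> likely bone
    return np.array(points)

frame = np.random.rand(256, 128) * 0.3
frame[120, 30:90] = 0.9                                    # synthetic bright bone line
frame[121:, 30:90] = 0.05                                  # shadow beneath it
print(bone_surface_contour(frame)[:5])
```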
RESULTS: The algorithm was tested with images captured using the SonixTouch ultrasound scanner, Ultrasonix C5-2 curvilinear transducer, and NDI trakSTAR electromagnetic tracker. Ultrasound data were collected from patients presenting with adolescent idiopathic scoliosis. The system was able to produce posterior surface patches of the transverse processes in real time, as the images were acquired by a non-expert sonographer. The resulting transverse process surface patches were compared with manual segmentation by an expert; the average Hausdorff distance was 3.0 mm.
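The comparison metric can be computed with SciPy's directed Hausdorff distance, as in this short sketch with example point sets.

```python
# Sketch: symmetric Hausdorff distance between automatically extracted surface
# points and an expert's manual segmentation (example point clouds in mm).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(auto_points: np.ndarray, expert_points: np.ndarray) -> float:
    d_ab = directed_hausdorff(auto_points, expert_points)[0]
    d_ba = directed_hausdorff(expert_points, auto_points)[0]
    return max(d_ab, d_ba)

auto = np.random.rand(200, 3) * 50
expert = auto + np.random.normal(0, 1.0, auto.shape)
print(f"Hausdorff distance: {hausdorff_mm(auto, expert):.1f} mm")
```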
CONCLUSION: The resulting surface patches are expected to be sufficiently accurate for driving a deformable registration between the ultrasound space and a generic spine model, to allow for three-dimensional visualization of the spine and measuring its curvature.
PURPOSE: Vertebral landmark identification with ultrasound is notoriously difficult. We propose to assist the user in identifying vertebral landmarks by overlaying a visual aid in the ultrasound image space during the identification process. METHODS: The operator first identifies a few salient landmarks. From those, a generic healthy spine model is deformably registered to the ultrasound space and superimposed on the images, providing visual aid to the operator in finding additional landmarks. The registration is re-computed with each identified landmark. A spatially tracked ultrasound system and associated software were developed. To evaluate the system, six operators identified vertebral landmarks using ultrasound images alone, and using ultrasound images paired with 3D spine visualizations. Operator performance and inter-operator variability were analyzed. Software usability was assessed after the study through a questionnaire. RESULTS: Operators were significantly more successful in landmark identification using visualizations and ultrasound than with ultrasound only (82 [72 – 94] % vs 51 [37 – 67] %, respectively; p = 0.0012). Time to completion was higher using visualizations and ultrasound than with ultrasound only (842 [448 – 1136] s vs 612 [434 – 785] s, respectively; p = 0.0468). Operators felt that the 3D visualizations helped them identify landmarks and visualize the spine and vertebrae. CONCLUSION: A three-dimensional visual aid was developed to assist in vertebral landmark identification with a tracked ultrasound system by deformably registering and visualizing a healthy spine model in ultrasound space. Operators found the visual aid useful and identified significantly more vertebral landmarks with it than without it.
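The incremental model-to-ultrasound registration can be illustrated with a thin-plate-spline warp refit each time a landmark is added; the sketch below uses SciPy's RBFInterpolator as a stand-in for the authors' deformable registration, with illustrative data.

```python
# Sketch: refit a thin-plate-spline warp from the identified model landmarks to
# their ultrasound positions, then apply it to the whole generic spine model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_model(model_landmarks, us_landmarks, model_points):
    """Map the generic model into ultrasound space from the landmarks identified so far."""
    tps = RBFInterpolator(model_landmarks, us_landmarks, kernel='thin_plate_spline')
    return tps(model_points)              # model surface points in ultrasound space

model_landmarks = np.random.rand(6, 3) * 100     # landmarks on the generic model (mm)
us_landmarks = model_landmarks + np.random.normal(0, 3.0, (6, 3))   # as identified
model_points = np.random.rand(1000, 3) * 100     # full model surface points
overlay_points = warp_model(model_landmarks, us_landmarks, model_points)
print(overlay_points.shape)
```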
KEYWORDS: Video, Ultrasonography, Object recognition, RGB color model, 3D modeling, 3D image processing, Visualization, Sensors, Human-machine interfaces, 3D displays
Purpose: Medical schools are shifting from a time-based approach to a competency-based education approach. A competency-based approach requires continuous observation and evaluation of trainees. The goal of Central Line Tutor is to provide instruction and real-time feedback for trainees learning central venous catheterization, without requiring a continuous expert observer. The purpose of this study is to test the accuracy of the workflow detection method of Central Line Tutor. This study also evaluates the effectiveness of object recognition from webcam video for workflow detection. Methods: Five trials of the procedure were recorded from Central Line Tutor. Five reviewers were asked to identify the timestamp of the transition points in each recording. Reviewer timestamps were compared to those identified by Central Line Tutor, and the differences between these values were used to calculate the average transitional delay. Results: Central Line Tutor was able to identify 100% of transition points in the procedure with an average transitional delay of -1.46 ± 0.81 s. The average transitional delays of the EM-tracked and webcam-tracked steps were -0.35 ± 2.51 s and -2.46 ± 3.57 s, respectively. Conclusions: Central Line Tutor was able to detect completion of all workflow tasks with minimal delay and may be used to provide trainees with real-time feedback. The results also show that object recognition from webcam video is an effective method for detecting workflow tasks in central venous catheterization.
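The transitional delay metric is simply the signed difference between the system-detected and reviewer-marked transition times (negative values mean the system detected the transition early), as in this sketch with example timestamps.

```python
# Sketch: compute signed transitional delays and summarize them (seconds).
import statistics

def transitional_delays(system_times, reviewer_times):
    """Both inputs: lists of transition timestamps in seconds, same order."""
    return [s - r for s, r in zip(system_times, reviewer_times)]

system = [12.1, 45.0, 88.7, 120.3]      # example system-detected timestamps
reviewer = [13.4, 46.2, 91.0, 121.5]    # example reviewer-marked timestamps
delays = transitional_delays(system, reviewer)
print(f"mean {statistics.mean(delays):+.2f} s, sd {statistics.stdev(delays):.2f} s")
```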
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several orders of magnitude lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and where limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and to display the CT image overlaid on the optical image. RESULTS: The accuracy evaluation yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and a median orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, with proper marker orientation maintained throughout. The CT-derived model and the captured surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
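The accuracy comparison can be computed from paired poses as below, assuming both trackers' measurements have already been expressed in a common coordinate frame (the alignment step is omitted); the data are synthetic.

```python
# Sketch: median position and orientation errors between test and reference poses.
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_errors(test_positions, test_quats, ref_positions, ref_quats):
    pos_err = np.linalg.norm(test_positions - ref_positions, axis=1)      # mm
    rot_err = [(R.from_quat(q1) * R.from_quat(q2).inv()).magnitude()
               for q1, q2 in zip(test_quats, ref_quats)]                  # radians
    return np.median(pos_err), np.degrees(np.median(rot_err))

n = 100
ref_p = np.random.rand(n, 3) * 200                       # reference tracker positions (mm)
ref_q = R.random(n).as_quat()                            # reference orientations
test_p = ref_p + np.random.normal(0, 3, (n, 3))          # noisy test-tracker positions
test_q = (R.from_quat(ref_q) * R.from_rotvec(np.random.normal(0, 0.02, (n, 3)))).as_quat()
print(pose_errors(test_p, test_q, ref_p, ref_q))
```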
PURPOSE: Patient-specific heart and valve models have shown promise as training and planning tools for heart surgery, but physically realistic valve models remain elusive. Available proprietary, simulation-focused heart valve models are generic adult mitral valves and do not allow for patient-specific modeling as may be needed for rare diseases such as congenitally abnormal valves. We propose creating silicone valve models from a 3D-printed plastic mold as a solution that can be adapted to any individual patient and heart valve at a fraction of the cost of direct 3D-printing using soft materials. METHODS: Leaflets of a pediatric mitral valve, a tricuspid valve in a patient with hypoplastic left heart syndrome, and a complete atrioventricular canal valve were segmented from ultrasound images. Custom software was developed to automatically generate molds for each valve based on the segmentation. These molds were 3D-printed and used to make silicone valve models. The models were designed with cylindrical rims of different sizes surrounding the leaflets, to show the outline of the valve and add rigidity. Pediatric cardiac surgeons practiced suturing on the models and evaluated them for use as surgical planning and training tools. RESULTS: Five out of six surgeons reported that the valve models would be very useful as training tools for cardiac surgery. In this first iteration of valve models, leaflets were felt to be unrealistically thick or stiff compared to real pediatric leaflets. A thin tube rim was preferred for valve flexibility. CONCLUSION: The valve models were well received and considered to be valuable and accessible tools for heart valve surgery training. Further improvements will be made based on surgeons' feedback.
PURPOSE: Tracked navigation has become prevalent in neurosurgery. Problems with registration of a patient and a preoperative image arise when the patient is in a prone position. Surfaces accessible to optical tracking on the back of the head are unreliable for registration. We investigated the accuracy of surface-based registration using points accessible through tracked ultrasound. Using ultrasound allows access to bone surfaces that are not available through optical tracking. Tracked ultrasound could eliminate the need to (i) work under the table for registration and (ii) adjust the tracker between registration and surgery. In addition, tracked ultrasound could provide a non-invasive alternative to registration methods involving screw implantation. METHODS: A phantom study was performed to test the feasibility of tracked ultrasound for registration. An initial anatomical landmark registration was performed to partially align the preoperative computed tomography data and the skull phantom. Surface points accessible by tracked ultrasound were then collected and used to perform iterative closest point (ICP) registration. RESULTS: When the surface registration was compared to a ground truth landmark registration, the average TRE was 1.6 ± 0.1 mm and the average distance of points off the skull surface was 0.6 ± 0.1 mm. CONCLUSION: The use of tracked ultrasound is feasible for registration of patients in the prone position and eliminates the need to perform registration under the table. The translational component of error was minimal; therefore, the TRE is primarily due to a rotational component of error.
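A minimal ICP refinement of the landmark-based initial alignment might look like the following sketch, which is an assumed, generic implementation rather than the clinical software.

```python
# Sketch: iterative closest point refinement of ultrasound-collected skull
# surface points against a CT-derived surface point cloud (mm).
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Kabsch rigid fit mapping src -> dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ D @ U.T
    return Rm, dc - Rm @ sc

def icp(us_points, ct_surface, iterations=30):
    tree = cKDTree(ct_surface)
    moved = us_points.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)                 # closest CT surface point per US point
        Rm, t = rigid_fit(moved, ct_surface[idx])
        moved = moved @ Rm.T + t
    return moved

ct = np.random.rand(5000, 3) * 150                 # example CT surface points
us = ct[np.random.choice(len(ct), 200)] + np.random.normal(0, 1, (200, 3))
aligned = icp(us, ct)
_, idx = cKDTree(ct).query(aligned)
print(f"mean residual to surface: {np.linalg.norm(aligned - ct[idx], axis=1).mean():.2f} mm")
```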
PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bedside clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, a tablet computer, and a semi-transparent mirror allows users to navigate scanned patient volumetric images in real time using software built on the open-source 3D Slicer application platform. A series of experiments was conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. Participants found the image overlay system easy to use and found the table-mounted system significantly more usable and effective than the handheld system. CONCLUSION: The system performs comparably with scanned images of varying size when assessing latency. In our usability study, participants preferred the table-mounted system over the handheld one and felt that the system was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.
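The latency and refresh-rate measurements can be illustrated with a simple timing loop; `render_slice` is a hypothetical stand-in for the overlay's image re-slicing and display step.

```python
# Sketch: measure frames per second and time per frame for different slice sizes.
import time

def render_slice(resolution):
    time.sleep(0.001 * resolution / 64)        # placeholder workload only

def measure_refresh(resolution, frames=50):
    start = time.perf_counter()
    for _ in range(frames):
        render_slice(resolution)
    elapsed = time.perf_counter() - start
    return frames / elapsed, 1000 * elapsed / frames   # FPS, ms per frame

for res in (64, 128, 256):
    fps, ms = measure_refresh(res)
    print(f"{res}px slices: {fps:.1f} FPS, {ms:.1f} ms/frame")
```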
PURPOSE: The intraoperative measurement of tracking error is crucial to ensure the reliability of electromagnetically navigated procedures. For intraoperative use, methods need to be quick to set up, easy to interpret, and not interfere with the ongoing procedure. Our goal was to evaluate the feasibility of using redundant electromagnetic sensors to alert users to tracking error in a navigated intervention setup.
METHODS: Electromagnetic sensors were fixed to a rigid frame around a region of interest and on surgical tools. A software module was designed to detect tracking error by comparing real-time measurements of the differences between inter-sensor distances and angles to baseline measurements. Once these measurements were collected, a linear support vector machine-based classifier was used to predict tracking errors from redundant sensor readings.
RESULTS: Measuring the deviation in the reported inter-sensor distance and angle between the needle and cautery served as a valid indicator for electromagnetic tracking error. The highest classification accuracy, 86%, was achieved based on readings from the cautery when the two sensors on the cautery were close together. The specificity of this classifier was 93% and the sensitivity was 82%.
CONCLUSION: Placing redundant electromagnetic sensors in a workspace seems to be feasible for the intraoperative detection of electromagnetic tracking error in controlled environments. Further testing should be performed to optimize the measurement error threshold used for classification in the support vector machine, and improve the sensitivity of our method before application in real procedures.
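The classification step can be illustrated with a linear SVM trained on deviations of inter-sensor distance and angle from baseline; the feature construction and data below are simplified assumptions, not the study's measurements.

```python
# Sketch: linear SVM labeling sensor-pair readings as distorted (1) or not (0)
# from two features: |distance deviation| (mm) and |angle deviation| (deg).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
clean = np.abs(rng.normal(0, [0.5, 0.3], (n // 2, 2)))              # undistorted samples
distorted = np.abs(rng.normal([3.0, 2.0], [1.0, 0.8], (n // 2, 2)))  # distorted samples
X = np.vstack([clean, distorted])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(C=1.0).fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```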
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup.
METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth.
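The error analysis can be summarized per equipment configuration relative to the clean-field baseline, roughly as in this sketch; the configuration names and per-sample values are illustrative examples consistent with the reported magnitudes.

```python
# Sketch: mean positional (mm) and rotational (deg) EM tracking error per
# equipment configuration, reported as the increase over the clean-field baseline.
import numpy as np

def summarize(errors_by_config):
    base_pos, base_rot = (np.mean(v) for v in errors_by_config["clean field"])
    for name, (pos_err, rot_err) in errors_by_config.items():
        print(f"{name:22s} {np.mean(pos_err):.2f} mm (+{np.mean(pos_err) - base_pos:.2f}), "
              f"{np.mean(rot_err):.2f} deg (+{np.mean(rot_err) - base_rot:.2f})")

errors = {   # example per-sample errors only
    "clean field":          (np.random.normal(0.90, 0.1, 50), np.random.normal(0.31, 0.05, 50)),
    "surgical table":       (np.random.normal(1.10, 0.1, 50), np.random.normal(0.38, 0.05, 50)),
    "cautery + anesthesia": (np.random.normal(1.20, 0.1, 50), np.random.normal(0.40, 0.05, 50)),
}
summarize(errors)
```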
RESULTS: Our system is quick to set up and can be deployed rapidly; the process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree.
CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system.
METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, requiring no temporary markers, was designed to accurately calibrate the system. Markers attached to the viewbox and the patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit.
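The tracking chain behind the overlay can be written as a composition of homogeneous transforms: the patient reference and viewbox markers are both reported in tracker coordinates, and the image plane is known in viewbox coordinates from the self-calibration. The sketch below uses illustrative names and is not the SlicerIGT/PLUS implementation.

```python
# Sketch: compose 4x4 homogeneous transforms to express the patient reference
# in image-plane coordinates for the overlay.
import numpy as np

def invert(T):
    Rm, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = Rm.T, -Rm.T @ t
    return Ti

def patient_to_image(patient_to_tracker, viewbox_to_tracker, image_to_viewbox):
    # PatientToImage = inv(ImageToViewbox) * inv(ViewboxToTracker) * PatientToTracker
    return invert(image_to_viewbox) @ invert(viewbox_to_tracker) @ patient_to_tracker

P2T = np.eye(4); P2T[:3, 3] = [100, 50, 0]     # patient reference in tracker coords (mm)
V2T = np.eye(4); V2T[:3, 3] = [80, 50, 0]      # viewbox marker in tracker coords (mm)
I2V = np.eye(4); I2V[:3, 3] = [0, 0, 10]       # image plane from self-calibration (mm)
print(patient_to_image(P2T, V2T, I2V)[:3, 3])  # patient origin in image coordinates
```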
RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections.
CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.