Current methods of producing optical phantoms are incapable of accurately capturing the wavelength-dependent properties of tissue critical for many optical modalities.
Aim
We aim to introduce a method of producing solid, inorganic phantoms whose wavelength-dependent optical properties can be matched to those of tissue over the wavelength range of 370 to 950 nm.
Approach
The concentration-dependent optical properties of 20 pigments were characterized and used to determine pigment combinations that optimally fit the target properties over the full spectrum. Phantoms matching the optical properties of muscle and nerve, the diffuse reflectance of pale and melanistic skin, and the chromophore concentrations of a computational skin model with varying oxygen saturation (StO2) were made with this method.
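The fitting step described above can be sketched numerically. The paper's exact optimization is not given here, so the sketch below assumes absorption adds linearly in pigment concentration (Beer-Lambert) and uses nonnegative least squares on synthetic Gaussian pigment spectra; all spectra and names are illustrative, not the authors' data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical sketch: 3 pigments whose absorption is linear in
# concentration; fit nonnegative concentrations to a target spectrum.
wavelengths = np.linspace(370, 950, 59)  # nm, matching the paper's range

def band(center, width):
    """Synthetic Gaussian standing in for a measured pigment spectrum."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Columns: per-unit-concentration absorption spectra of each pigment.
E = np.column_stack([band(450, 60), band(560, 50), band(800, 90)])

# Target "tissue" spectrum: a known mixture plus measurement noise.
c_true = np.array([0.8, 0.3, 0.5])
rng = np.random.default_rng(0)
target = E @ c_true + rng.normal(0, 1e-3, wavelengths.size)

# Nonnegative least squares keeps concentrations physically meaningful.
c_fit, resid = nnls(E, target)
print(np.round(c_fit, 2))
```

With more pigments than needed, the same nonnegativity constraint naturally drives unhelpful pigments toward zero concentration.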
Results
Both optical property phantoms accurately mimicked their respective tissues' absorption and scattering properties across the entire spectrum. The diffuse reflectance phantoms closely approximated skin reflectance regardless of skin type. All three computational skin phantoms emulated chromophore concentrations close to the model, with an average percent error in StO2 of 4.31%.
Conclusions
This multipigment phantom platform represents a powerful tool for creating spectrally accurate tissue phantoms, which should increase the availability of standards for many optical techniques.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Standardized data processing approaches are required in bio-Raman spectroscopy to ensure that spectral data acquired by different research groups, using different systems, can be compared on an equal footing.
Aim
An open-source data processing software package was developed, implementing algorithms for all steps required to isolate the inelastic scattering component from signals acquired with Raman spectroscopy devices. The package includes a novel morphological baseline removal technique (BubbleFill) that adapts to complex baseline shapes better than current gold-standard techniques. The package also incorporates a versatile tool for simulating spectroscopic data with varying Raman signal-to-background ratios, baseline morphologies, and levels of stochastic noise.
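To make the baseline removal problem concrete, here is a minimal sketch of a generic morphological baseline estimate in the same family as the MorphBR reference method (this is not the authors' BubbleFill algorithm): a grey-scale opening with a window wider than any Raman band removes the narrow peaks while tracking the slowly varying background. All signals are simulated.

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

# Simulated Raman spectrum: broad baseline plus two narrow bands.
x = np.linspace(0, 1, 1000)
baseline_true = 2.0 + 1.5 * x + 0.8 * np.sin(3 * x)      # slow background
peaks = 1.0 * np.exp(-0.5 * ((x - 0.3) / 0.005) ** 2) \
      + 0.7 * np.exp(-0.5 * ((x - 0.7) / 0.008) ** 2)    # narrow Raman bands
spectrum = baseline_true + peaks

# Opening = erosion then dilation; the window (101 samples) must be
# wider than any peak so peaks are clipped but the baseline survives.
baseline_est = grey_opening(spectrum, size=101)
# Light smoothing removes the flat-topped artifacts of the opening.
baseline_est = uniform_filter1d(baseline_est, size=51)

corrected = spectrum - baseline_est                      # isolated Raman signal
```

The known weakness of such fixed-window morphological estimates, which the abstract says BubbleFill is designed to address, is that a single window size cannot adapt to baselines whose curvature varies strongly across the spectrum.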
Results
Applying the BubbleFill technique to simulated data demonstrated superior baseline removal compared with standard algorithms, including iModPoly and MorphBR. The data processing workflow of the open-source package was validated on four independent in-human datasets, demonstrating that it enables inter-system data compatibility.
Conclusions
A new open-source spectroscopic data preprocessing package was validated on simulated and real-world in-human data and is now available to researchers and clinicians for developing new clinical applications of Raman spectroscopy.
Machine learning (ML) plays a critical role in biomedical image analysis, particularly in optical imaging (OI) tasks such as segmentation, classification, and reconstruction, where the goal is to achieve higher accuracy efficiently.
Aim
This research aims to develop an end-to-end deep learning framework for diffuse optical imaging (DOI) with multiple datasets to detect breast cancer and reconstruct its optical properties in the early stages.
Approach
The proposed Periodic-net is a nondestructive deep learning (DL) algorithm for reconstructing and evaluating inhomogeneities in an inverse model with high accuracy. Boundary measurements are calculated by solving a forward problem with sources and detectors arranged uniformly around a circular domain in various combinations, including 16 × 15, 20 × 19, and 36 × 35 boundary measurement setups.
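The measurement counts above can be read as N optodes on the circle, each acting in turn as the source while the remaining N − 1 detect (so 16 × 15 = 240 measurements). A small sketch of that geometry, under this reading:

```python
import numpy as np

def boundary_pairs(n_optodes, radius=1.0):
    """N optodes spaced uniformly on a circle of given radius; every
    optode acts in turn as the source while the other N-1 detect,
    giving N*(N-1) source-detector pairs."""
    angles = 2 * np.pi * np.arange(n_optodes) / n_optodes
    positions = np.column_stack([radius * np.cos(angles),
                                 radius * np.sin(angles)])
    pairs = [(s, d) for s in range(n_optodes)
                    for d in range(n_optodes) if d != s]
    return positions, pairs

for n in (16, 20, 36):
    _, pairs = boundary_pairs(n)
    print(n, len(pairs))   # 240, 380, and 1260 pairs respectively
```

Each pair contributes one boundary measurement to the forward problem, so the three setups trade acquisition time against the amount of data available to the inverse reconstruction.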
Results
The image reconstruction results on numerical and phantom datasets demonstrate that the proposed network provides higher-quality images with finer small-scale detail, superior immunity to noise, and sharper edges with fewer image artifacts than other state-of-the-art competitors.
Conclusions
The network is highly effective at simultaneously reconstructing optical properties, i.e., the absorption and reduced scattering coefficients, optimizing imaging time without degrading inclusion localization or image quality.
Modern optical volumetric imaging modalities, such as optical coherence tomography (OCT), provide a wealth of information about the structure, function, and physiology of living tissue. Although optical imaging achieves lateral resolution on the order of the wavelength of light used, and OCT achieves axial resolution on a similar micron scale, tissue optical properties, particularly high scattering and absorption, limit light penetration to only a few millimeters. In addition, in vivo imaging modalities are susceptible to significant motion artifacts due to cardiac and respiratory function. These effects limit access to artifact-free optical measurements during peripheral neurosurgery to only a portion of the exposed nerve without further modification to the procedure.
Aim
We aim to improve in vivo OCT imaging during peripheral neurosurgery in small and large animals by increasing the amount of visualized nerve volume as well as suppressing motion of the imaged area.
Approach
We designed a nerve holder with embedded mirror prisms for peripheral nerve volumetric imaging as well as a specific beam steering strategy to acquire prism and direct view volumes in one session with minimal motion artifacts.
Results
The axially imaged volumes from mirror prisms increased the OCT signal intensity by >22 dB over a 1.25-mm imaging depth in tissue-mimicking phantoms. We then demonstrated the new imaging capabilities by visualizing peripheral nerves from direct and side views in living rats and minipigs using a polarization-sensitive OCT system. Prism views revealed nerve fascicles and vasculature in the bottom half of the imaged nerve that were not visible in the direct view.
Conclusions
We demonstrated improved OCT imaging during neurosurgery in small and large animals by combining the use of a prism nerve holder with a specifically designed beam scanning protocol. Our strategy can be applied to existing OCT imaging systems with minimal hardware modification, increasing the nerve tissue volume visualized. Enhanced imaging depth techniques may lead to a greater adoption of structural and functional optical biomarkers in preclinical and clinical medicine.
Cartilage tissue engineering is a promising strategy for developing effective curative therapies for osteoarthritis. However, tissue engineers depend predominantly on time-consuming, expensive, and destructive techniques for the quality control needed to monitor the maturation of engineered cartilage. This practice can be impractical for large-scale biomanufacturing and prevents spatial and temporal monitoring of tissue growth, which is critical for the fabrication of clinically relevant-sized cartilage constructs. Nondestructive multimodal imaging techniques combining fluorescence lifetime imaging (FLIm) and optical coherence tomography (OCT) hold great potential to address this challenge.
Aim
The feasibility of using multimodal FLIm–OCT for nondestructive, spatial, and temporal monitoring of self-assembled cartilage tissue maturation in a preclinical mouse model is investigated.
Approach
Self-assembled cartilage constructs were developed for 4 weeks in vitro followed by 4 weeks of in vivo maturation in nude mice. Sterile, nondestructive in situ multispectral FLIm and OCT imaging was carried out at multiple time points (t = 2, 4, and 8 weeks) during tissue development. FLIm and 3D volumetric OCT images were reconstructed and used for the analysis of tissue biochemical homogeneity, morphology, and structural integrity. A biochemical homogeneity index was computed to characterize nonhomogeneous tissue growth at different time points. OCT images were validated against histology.
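The abstract does not define the homogeneity index, so the following is only an illustrative stand-in: one simple choice is 1 minus the coefficient of variation of a fluorescence-lifetime map, so that values near 1 indicate uniform tissue and lower values indicate heterogeneous growth. The maps below are synthetic.

```python
import numpy as np

def homogeneity_index(lifetime_map):
    """Hypothetical index: 1 - (std / mean) of the lifetime map.
    Uniform tissue gives 1.0; heterogeneous tissue gives less."""
    m = np.asarray(lifetime_map, dtype=float)
    return 1.0 - m.std() / m.mean()

uniform = np.full((64, 64), 4.0)                     # ns, uniform construct
rng = np.random.default_rng(1)
patchy = 4.0 + rng.normal(0, 1.0, (64, 64))          # heterogeneous construct

print(homogeneity_index(uniform))                    # exactly 1.0
print(homogeneity_index(patchy))                     # below 1.0
```

Any scalar summary of spatial variability would serve the same monitoring purpose; the key point is that a single number can be tracked across time points to quantify maturation.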
Results
FLIm detected heterogeneous extracellular matrix (ECM) growth in the tissue-engineered cartilage. The outer edge of the tissue construct exhibited longer fluorescence lifetimes in the 375 to 410 nm and 450 to 485 nm spectral channels, indicating an increase in collagen content. A significant (p < 0.05) decrease in the construct homogeneity index was observed between t = 2 weeks and t = 4 weeks. Both FLIm and OCT images revealed defects (voids) at the center of the tissue construct during in vitro culture (t = 2 and 4 weeks). Cyst formation during in vivo culture was detected by OCT and confirmed with histology.
Conclusions
The ability of multimodal FLIm–OCT to nondestructively monitor the heterogeneous growth of engineered tissue constructs in situ is demonstrated. Spatial and temporal variations in the constructs' ECM composition were detected by FLIm, and OCT revealed structural defects (voids and cysts). This multimodal approach has great potential to replace costly destructive tests in the manufacturing of tissue-engineered medical products, facilitating their clinical translation.
X-ray Cherenkov–luminescence tomography (XCLT) produces fast emission data from megavoltage (MV) x-ray scanning, from which the excitation location of molecules within tissue is reconstructed. However, standard filtered backprojection (FBP) algorithms for XCLT sinogram reconstruction can suffer from insufficient data due to dose limitations, limiting reconstruction quality and introducing artifacts. We report a deep learning algorithm for XCLT with high image quality and improved quantitative accuracy.
Aim
To directly reconstruct the distribution of emission quantum yield for x-ray Cherenkov–luminescence tomography, we propose a three-component deep learning algorithm that includes a Swin transformer, a convolutional neural network (CNN), and a locality module.
Approach
A data-to-image model for x-ray Cherenkov–luminescence tomography is developed based on a Swin transformer, which extracts pixel-level prior information from the sinogram domain. A convolutional neural network structure then transforms the extracted pixel information from the sinogram domain to the image domain. Finally, a locality module between the encoder and decoder connection structures delivers features. The model's performance was validated with simulation, physical phantom, and in vivo experiments.
Results
This approach handles data limitations better than conventional FBP methods. The method was validated with numerical and physical phantom experiments; the results show that it improved mean square error (by >94.1%), peak signal-to-noise ratio (by >41.7%), and Pearson correlation (by >19%) compared with the FBP algorithm. The Swin-CNN also achieved a 32.1% improvement in PSNR over the deep learning method AUTOMAP.
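The three metrics quoted above have standard definitions, sketched below on a toy ground truth and reconstruction (the arrays and noise level are illustrative, not the paper's data; PSNR here uses the common peak-squared-over-MSE form).

```python
import numpy as np

def mse(ref, rec):
    """Mean square error between reference and reconstruction."""
    return np.mean((ref - rec) ** 2)

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB, peak taken from the reference."""
    return 10 * np.log10(ref.max() ** 2 / mse(ref, rec))

def pearson(ref, rec):
    """Pearson correlation between flattened images."""
    return np.corrcoef(ref.ravel(), rec.ravel())[0, 1]

rng = np.random.default_rng(0)
truth = rng.random((32, 32))                        # toy emission map
recon = truth + rng.normal(0, 0.01, truth.shape)    # near-perfect recon

print(round(psnr(truth, recon), 1), round(pearson(truth, recon), 3))
```

Note that the abstract reports relative improvements in these metrics over FBP, not their absolute values.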
Conclusions
This study shows that the three-component deep learning algorithm provides an effective reconstruction method for x-ray Cherenkov-luminescence tomography.
Advanced digital control of microscopes and programmable data acquisition workflows have become increasingly important for improving the throughput and reproducibility of optical imaging experiments. Combinations of imaging modalities have enabled a more comprehensive understanding of tissue biology and tumor microenvironments in histopathological studies. However, insufficient imaging throughput and complicated workflows still limit the scalability of multimodal histopathology imaging.
Aim
We present a hardware-software co-design of a whole slide scanning system for high-throughput multimodal tissue imaging, including brightfield (BF) and laser scanning microscopy.
Approach
The system can automatically detect regions of interest using deep neural networks in a low-magnification rapid BF scan of the tissue slide and then conduct high-resolution BF scanning and laser scanning imaging on targeted regions with deep learning-based run-time denoising and resolution enhancement. The acquisition workflow is built using Pycro-Manager, a Python package that bridges hardware control libraries of the Java-based open-source microscopy software Micro-Manager in a Python environment.
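The two-pass logic of the workflow can be sketched in pure Python. This is only a schematic: the real system drives hardware through Pycro-Manager and detects regions of interest with deep neural networks, which the sketch replaces with a simple intensity threshold on a synthetic low-magnification scan.

```python
import numpy as np

def detect_roi_tiles(low_mag, tile=4, thresh=0.5):
    """Return (row, col) indices of tiles in the low-mag scan whose mean
    intensity suggests tissue; these tiles would be rescanned at high
    resolution (and by laser scanning) in the second pass."""
    h, w = low_mag.shape
    tiles = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            if low_mag[r:r + tile, c:c + tile].mean() > thresh:
                tiles.append((r // tile, c // tile))
    return tiles

# Toy low-mag scan: tissue occupies only the upper-left quadrant.
scan = np.zeros((16, 16))
scan[:8, :8] = 1.0

targets = detect_roi_tiles(scan)
print(len(targets))   # 4 of the 16 tiles contain tissue
```

Restricting the slow, high-resolution laser scanning to the detected tiles is what makes the order-of-magnitude throughput gain plausible when tissue covers only a fraction of the slide.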
Results
The system can achieve optimized imaging settings for both modalities with minimized human intervention and speed up the laser scanning by an order of magnitude with run-time image processing.
Conclusions
The system integrates the acquisition pipeline and data analysis pipeline into a single workflow that improves the throughput and reproducibility of multimodal histopathological imaging.
Hyperspectral imaging (HSI) technologies offer great potential in fluorescence microscopy for multiplexed imaging, autofluorescence removal, and analysis of autofluorescent molecules. However, there are also associated trade-offs when implementing HSI in fluorescence microscopy systems, such as decreased acquisition speed, resolution, or field-of-view due to the need to acquire spectral information in addition to spatial information. The vast majority of HSI fluorescence microscopy systems provide spectral discrimination by filtering or dispersing the fluorescence emission, which may result in loss of emitted fluorescence signal due to optical filters, dispersive optics, or supporting optics, such as slits and collimators. Technologies that scan the fluorescence excitation spectrum may offer an approach to mitigate some of these trade-offs by decreasing the complexity of the emission light path.
Aim
We describe the development of an optical technique for hyperspectral imaging fluorescence excitation-scanning (HIFEX) on a microscope system.
Approach
The approach is based on the design of an array of wavelength-dependent light emitting diodes (LEDs) and a unique beam combining system that uses a multifurcated mirror. The system was modeled and optimized using optical ray trace simulations, and a prototype was built and coupled to an inverted microscope platform. The prototype system was calibrated, and initial feasibility testing was performed by imaging multilabel slide preparations.
Results
We present results from optical ray trace simulations, prototyping, calibration, and feasibility testing of the system. Results indicate that the system can discriminate between at least six fluorescent labels and autofluorescence and that the approach can provide decreased wavelength switching times, in comparison with mechanically tuned filters.
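Discriminating six labels plus autofluorescence from excitation scans is, at its core, a linear unmixing problem. The sketch below illustrates this with synthetic excitation fingerprints (three labels plus a broad autofluorescence term); the real system would use measured per-LED responses for each fluorophore.

```python
import numpy as np

wavelengths = np.arange(360, 550, 10)   # hypothetical LED center wavelengths

def gauss(center, width):
    """Synthetic excitation fingerprint for one fluorophore."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Columns: excitation spectra of three labels plus broad autofluorescence.
E = np.column_stack([gauss(380, 15), gauss(430, 15),
                     gauss(490, 15), gauss(440, 60)])

abund_true = np.array([0.2, 0.7, 0.4, 0.1])   # per-pixel abundances
pixel = E @ abund_true                        # measured excitation scan

# Least-squares unmixing recovers the per-label abundances.
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(abund, 2))   # → [0.2 0.7 0.4 0.1]
```

The number of distinguishable labels is limited by how linearly independent the excitation fingerprints are over the available LED wavelengths, which is why fast wavelength switching across many LEDs matters.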
Conclusions
We anticipate that LED-based HIFEX microscopy may provide improved performance for time-dependent and photosensitive assays.
Although several miniature microscope systems have been developed to allow researchers to image brain neuron activity in freely moving rodents, they generally require a long cable connecting to the miniature microscope. This not only limits the animal's behavior but also makes it challenging to study multiple animals simultaneously.
Aim
The aim of this work is to develop a fully wireless miniature microscope that removes the constraints of connecting cables so that animals can move completely freely, allowing neuroscience researchers to study more of the animals' behaviors, such as social behavior, in multiple animals simultaneously.
Approach
We present a wireless mini-microscope (wScope) that enables simultaneous real-time brain imaging previews from multiple freely moving animals. The wScope has a mass of 2.7 g and a maximum frame rate of 25 Hz over a 750 μm × 450 μm field of view with 1.8-μm resolution.
Results
The performance of the wScope is validated via real-time imaging of the cerebral blood flow and the activity of neurons in the primary visual cortex (V1) of different mice.
Conclusions
The wScope provides a powerful tool for brain imaging of multiple freely moving animals in larger spaces and more naturalistic environments.