High-throughput imaging techniques have catalyzed significant strides in regenerative medicine, predominantly through advances in stem cell research. Yet the analysis of these images often misses important biological information because of the persistent challenge posed by artifacts during segmentation. To address this challenge, this study introduces a new deep learning architecture: a cross-structure, artifact-free U-Net (AFU-Net) model designed to optimize in vitro virtual nuclei staining of stem cells. The framework, inspired by U-Net-based models, incorporates a cross-structure noise-removal pre-processing layer that handles the artifacts frequently found at the peripheries of bright-field images used in stem cell manufacturing processes. In an extensive analysis on a gradient-density dataset of mesenchymal stem cell images, the model consistently outperformed established models in the domain. Assessed with two key segmentation evaluation metrics, Segmentation Covering (SC) and Variation of Information (VI), the proposed model achieved a mean SC of 0.979 and a mean VI of 0.194, surpassing other standard configurations. Further gains appeared in scenarios involving overlapping tiling, where the model countered artifacts from segmented cells: within a cell media setting, it reached a higher mean SC of 0.980 and a lower mean VI of 0.187. These outcomes represent a marked improvement in the standardization and efficiency of stem cell image analysis, enabling a more nuanced understanding of cellular analytics derived from label-free images and bridging crucial gaps in both research and clinical applications of stem cell methodologies. While the primary focus has been on stem cells, the architecture holds promise for broader biological and medical imaging contexts.
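For readers unfamiliar with the two reported metrics, the following is a minimal sketch of how Segmentation Covering and Variation of Information can be computed from a pair of integer-labelled masks. This is an illustrative implementation, not the evaluation code used in the study; the function names and the toy masks are placeholders.

```python
# Minimal sketch (not the study's evaluation code) of the two reported metrics,
# Segmentation Covering (SC) and Variation of Information (VI), computed from a
# pair of integer-labelled segmentation masks of identical shape.
import numpy as np

def segmentation_covering(gt, pred):
    """Covering of the ground-truth regions by the best-overlapping predicted regions."""
    gt, pred = gt.ravel(), pred.ravel()
    n = gt.size
    total = 0.0
    for g in np.unique(gt):
        g_mask = gt == g
        g_size = g_mask.sum()
        best_iou = 0.0
        for p in np.unique(pred[g_mask]):              # only predicted regions that overlap g
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            iou = inter / (g_size + p_mask.sum() - inter)
            best_iou = max(best_iou, iou)
        total += g_size * best_iou
    return total / n                                    # 1.0 means perfect covering

def variation_of_information(gt, pred):
    """VI(gt, pred) = H(gt | pred) + H(pred | gt), in nats; 0.0 means identical segmentations."""
    gt, pred = gt.ravel(), pred.ravel()
    n = gt.size
    pairs, joint = np.unique(np.stack([gt, pred]), axis=1, return_counts=True)
    count_gt = dict(zip(*np.unique(gt, return_counts=True)))
    count_pr = dict(zip(*np.unique(pred, return_counts=True)))
    vi = 0.0
    for (g, p), c in zip(pairs.T, joint):
        pj = c / n                                      # joint probability of label pair (g, p)
        vi += pj * (np.log(count_gt[g] / n / pj) + np.log(count_pr[p] / n / pj))
    return vi

# Example with hypothetical 2x3 label masks
gt = np.array([[1, 1, 2], [1, 2, 2]])
pred = np.array([[1, 1, 2], [1, 1, 2]])
print(segmentation_covering(gt, pred), variation_of_information(gt, pred))
```

Higher SC and lower VI indicate closer agreement with the ground truth, which is how the abstract's 0.979/0.194 and 0.980/0.187 figures should be read.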
The production of high yields of viable cells, especially mesenchymal stem cells (MSCs), is a crucial yet challenging aspect of cell therapy (CT). While progress has been made, quick, non-destructive ways to check the quality of the cells being produced are still needed to enhance the cell manufacturing process. In light of this, our study aims to develop an accurate, interpretable machine learning technique that relies solely on bright-field (BF) images to differentiate MSCs grown under different serum conditions. Our investigation centers on the expansion of human bone-marrow-derived MSCs cultivated in two media types: serum-containing (SC) and low-serum-containing (LSC) media. The prevalent method of chemical staining for identifying cell components is often time-intensive, costly, and potentially harmful to cells. To address these issues, we captured BF images at 20X magnification with a PerkinElmer Operetta screening system. Using mean SHapley Additive exPlanations (SHAP) values obtained by applying a 2-D discrete Fourier transform (DFT) module to the BF images, we developed a supervised clustering approach within a tree-based machine learning model. Our experiments showed that a Random Forest model correctly classified MSCs under the varying conditions with a weighted accuracy of 80.15%, and applying the DFT module to the BF images increased this accuracy to 93.26%. By transforming the original dataset into SHAP values from Random Forest classifiers, our supervised clustering approach effectively differentiates MSCs using label-free images. This framework contributes to the understanding of MSC health, enhances CT manufacturing processes, and holds potential to improve the efficacy of cell therapies.
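To make the described pipeline concrete, the following is a minimal, hypothetical sketch of its three stages: radially averaged 2-D DFT features extracted from BF patches, a Random Forest classifier for the SC versus LSC media conditions, and SHAP values used as the embedding for supervised clustering. The synthetic data, patch size, bin count, and tree count are assumptions for illustration, not the study's actual dataset or parameters.

```python
# Hypothetical sketch of the pipeline described above: radially averaged 2-D DFT
# features from bright-field patches, a Random Forest classifier for the SC vs. LSC
# media conditions, and SHAP values used for supervised clustering.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from scipy.cluster.hierarchy import linkage, fcluster

def dft_features(image, n_bins=32):
    """Radially averaged log-magnitude of the 2-D DFT as a compact feature vector."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = np.array(spectrum.shape) // 2
    yy, xx = np.indices(spectrum.shape)
    radius = np.hypot(yy - cy, xx - cx)
    idx = np.digitize(radius.ravel(), np.linspace(0, radius.max(), n_bins + 1)) - 1
    idx = np.clip(idx, 0, n_bins - 1)
    sums = np.bincount(idx, weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return np.log1p(sums / np.maximum(counts, 1))

# Synthetic stand-ins so the sketch runs end to end; replace with real BF patches
# and their SC (0) / LSC (1) media labels.
rng = np.random.default_rng(0)
images = rng.random((200, 128, 128))
labels = rng.integers(0, 2, size=200)

X = np.array([dft_features(img) for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))

# Re-embed each held-out sample by its per-feature SHAP contributions, then cluster
# in that space -- the "supervised clustering" idea. The class-axis indexing differs
# between shap versions, hence the two-branch handling.
sv = shap.TreeExplainer(clf).shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]
clusters = fcluster(linkage(sv, method="ward"), t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters))
```

Clustering on SHAP values rather than on the raw DFT features groups samples by how the trained model explains its predictions, which is what makes the clustering "supervised" and interpretable.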