Generating realistic tissue images with annotations is a challenging task that is important in many computational histopathology applications. Synthetically generated images and annotations are valuable for training and evaluating algorithms in this domain. To address this, we propose an interactive framework for generating pairs of realistic colorectal cancer histology images and corresponding glandular masks from glandular structure layouts. The framework accurately captures vital features such as stroma, goblet cells, and glandular lumen. Users can control gland appearance by adjusting parameters such as the number, location, and size of glands. The generated images achieve good Fréchet Inception Distance (FID) scores compared with a state-of-the-art image-to-image translation model. Additionally, we demonstrate the utility of our synthetic annotations for evaluating gland segmentation algorithms. Furthermore, we present a methodology for constructing glandular masks using advanced deep generative models, such as latent diffusion models. These masks enable tissue image generation through a residual encoder-decoder network.
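The FID evaluation mentioned above compares two Gaussians fitted to Inception feature embeddings of real and generated images. The sketch below computes the closed-form Fréchet distance between two such Gaussians; it is a minimal NumPy-only illustration of the metric, not the paper's evaluation code, and the means and covariances passed in are assumed to have been estimated from actual feature sets.

```python
import numpy as np

def fid(mu1, cov1, mu2, cov2):
    """Fréchet distance between Gaussians N(mu1, cov1) and N(mu2, cov2)."""
    def psd_sqrt(m):
        # Matrix square root of a symmetric positive semi-definite matrix.
        w, v = np.linalg.eigh(m)
        return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

    s1 = psd_sqrt(cov1)
    # Tr((cov1 @ cov2)^(1/2)) computed via the equivalent symmetric form,
    # which keeps the intermediate matrix symmetric PSD.
    covmean_trace = np.trace(psd_sqrt(s1 @ cov2 @ s1))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * covmean_trace)
```

For identical distributions the distance is zero; shifting one mean by a unit vector under identity covariances yields exactly 1.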
Breast cancer is the dominant cancer among women, accounting for about one-quarter of all cancer cases in females. Digitized images of Hematoxylin and Eosin (H&E) stained slides of breast cancer specimens carry valuable diagnostic information. However, inspecting these slides manually is a non-trivial task prone to subjective interpretation. Digital pathology (DP) and artificial intelligence (AI) open an opportunity for objective interpretation of the image data. Automating segmentation of whole slide images is challenging because of the visual complexity of tissue appearance, especially without tedious and time-consuming fine annotations. Many algorithms therefore classify tissue regions into different types rather than segmenting them, since classification requires only coarse annotations that are easier to acquire. In this paper, we propose a new segmentation framework that combines the simple non-iterative clustering (SNIC) algorithm with a standard convolutional neural network (CNN) classifier to segment whole slide images into different tissue types. In addition, a graph-based post-processing step is applied to further improve segmentation performance. The results show a promising improvement over CNN-classifier-based coarse segmentation, which enables better quantification and study of the mutual relationships between tissue types.
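A graph-based post-processing step of the kind described can be illustrated with majority-vote smoothing over a region adjacency graph: each region's label is replaced by the most frequent label among itself and its neighbours, suppressing isolated misclassifications. This is a hypothetical minimal sketch, not the paper's exact procedure; the `labels` and `adjacency` inputs stand in for superpixel class predictions and their adjacency structure.

```python
from collections import Counter

def smooth_labels(labels, adjacency):
    """Majority-vote label smoothing over a region adjacency graph.

    labels:    dict mapping region id -> predicted tissue class
    adjacency: dict mapping region id -> list of neighbouring region ids
    """
    smoothed = {}
    for node, label in labels.items():
        # Count the node's own label together with its neighbours' labels.
        votes = Counter([label] + [labels[n] for n in adjacency.get(node, [])])
        smoothed[node] = votes.most_common(1)[0][0]
    return smoothed
```

An isolated "stroma" prediction surrounded by "tumor" neighbours is flipped to the majority class, while consistent regions are left unchanged.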
Real-time eye tracking has numerous applications in human-computer interaction, such as mouse cursor control in a computer system, and is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real-time eye tracking system. Our proposed system extends Spatio-Temporal Context Learning with Kalman filtering. Spatio-Temporal Context Learning offers state-of-the-art accuracy in general object tracking, but its performance suffers under object occlusion. Adding the Kalman filter allows the proposed method to model the dynamics of eye motion and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time through eye movements.
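The Kalman-filtering component can be sketched with a constant-velocity model of the eye centre: the state holds position and velocity, the filter predicts the next position each frame, and a measurement (e.g. the tracker's detected eye centre) corrects the prediction; during occlusion such as a blink, the update step is skipped and the prediction coasts. This is a generic constant-velocity Kalman filter, not the paper's exact formulation, and the noise parameters below are illustrative.

```python
import numpy as np

class EyeKalman:
    """Constant-velocity Kalman filter for a 2-D eye-centre track."""

    def __init__(self, dt=1.0, process_var=1e-3, meas_var=1e-1):
        # State vector: [x, y, vx, vy] in pixel units.
        self.x = np.zeros(4)
        self.P = np.eye(4)
        # Constant-velocity transition and position-only measurement models.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_var * np.eye(4)  # process noise (illustrative)
        self.R = meas_var * np.eye(2)     # measurement noise (illustrative)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Skip this call while the eye is occluded (e.g. a blink);
        # the track then coasts on predict() alone.
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Fed with measurements of an eye centre moving at constant velocity, the estimate converges to the true trajectory within a few frames, and brief gaps in measurements are bridged by the motion model.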