Retinal vessel segmentation algorithms are of great significance for the diagnosis of various blood-related diseases such as diabetes and high blood pressure. In addition, every person has a distinct retinal vascular tree, so it can also serve as a biometric identifier. This paper describes our work on a challenge∗ to build the best model for segmenting blood vessels in retinal images. One of the issues with deep learning in the medical domain is the lack of sufficient labeled data, and the DRIVE dataset given in the challenge is no exception. Therefore, in this work we propose a method to improve the performance of a state-of-the-art retinal image segmentation model by synthesizing new retinal images with StyleGAN and generating their corresponding segmentation maps with a segmentation network. We show that training with the additional generated images improves segmentation performance.
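The augmentation scheme described above can be sketched as a simple pipeline: synthesize images with a trained generator, pseudo-label them with a pretrained segmentation network, and append the pairs to the real training set. The sketch below is a minimal illustration of that flow only; `stylegan_generate` and `segment` are hypothetical stand-ins for the trained StyleGAN generator and segmentation network, not the authors' implementation.

```python
import random

def stylegan_generate(n, seed=0):
    # Stand-in for a trained StyleGAN generator: returns n synthetic "images"
    # (here, flat lists of pixel intensities for illustration).
    rng = random.Random(seed)
    return [[rng.random() for _ in range(16)] for _ in range(n)]

def segment(image, threshold=0.5):
    # Stand-in for the pretrained segmentation network: produces a binary
    # vessel map (pseudo-label) for a synthetic image.
    return [1 if p > threshold else 0 for p in image]

def augment_training_set(real_pairs, n_synthetic):
    """Append (synthetic image, pseudo-label) pairs to the real training data."""
    synthetic = stylegan_generate(n_synthetic)
    pseudo_pairs = [(img, segment(img)) for img in synthetic]
    return real_pairs + pseudo_pairs

real = [([0.2] * 16, [0] * 16)]  # one real image/mask pair
augmented = augment_training_set(real, n_synthetic=3)
```

The segmentation model is then retrained on `augmented` instead of `real`, which is where the reported performance gain comes from.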
One of the major problems in medical imaging is the shortage of pathology data. In most cases, acquiring labeled data is expensive and usually involves manual labeling by a skilled medical expert. As a result, most medical imaging tasks suffer from severe class imbalance with a bias towards non-pathological classes, which reduces performance. The recent growth in the use of generative adversarial networks and their ability to generate synthetic data shows great promise for reducing the class imbalance problem. In this work we introduce the GC-CycleGAN model, a general method for CycleGAN factorization that utilizes Grad-CAMs as auxiliary data in the CycleGAN model to generate synthetic images. Our novel approach exploits Grad-CAM's ability to describe class activation and uses it to improve network classification, rather than merely as a visualization tool. The spread of the COVID-19 pandemic is affecting the lives of millions worldwide. If proven effective, automated COVID-19 detection from chest X-ray images can be a supportive step in the fight against COVID-19. However, the task of COVID-19 classification suffers greatly from the class imbalance problem. Using the GC-CycleGAN method, we demonstrate the ability to balance a heavily imbalanced dataset for the task of COVID-19 vs. non-COVID-19 pneumonia X-ray classification. We show improved results over two baselines and the COVID-Net model.
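For readers unfamiliar with the auxiliary signal used here: Grad-CAM weights each convolutional feature map by the spatial average of the class-score gradient flowing into it, sums the weighted maps, and applies a ReLU. The numpy sketch below implements that standard computation on precomputed feature maps and gradients; it illustrates the Grad-CAM formula only, not the GC-CycleGAN architecture itself.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Standard Grad-CAM heatmap.

    feature_maps, gradients: arrays of shape (channels, H, W), holding the
    activations A^k of the chosen layer and dL/dA^k for the target class.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k: spatial average
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A^k
    cam = np.maximum(cam, 0)                           # ReLU keeps class-positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam
```

In GC-CycleGAN such heatmaps enter the pipeline as auxiliary inputs rather than post-hoc visualizations.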
A fundamental problem in employing deep learning algorithms in the medical field is the lack of labeled data and severe class imbalance. In this work, we present novel ways to enlarge small-scale datasets. We introduce an autoencoder framework composed of an encoder and a StyleGAN generator that embeds images into the latent space of StyleGAN. The autoencoder learns a disentangled latent representation of the data, allowing real images to be encoded into the latent space and the latent vectors to be manipulated in a meaningful manner. We suggest ways to use the encoder together with the unique architecture of the StyleGAN generator to control the synthesized images and thus create class-specific images that can be used to train and improve existing deep learning algorithms.
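One common way to manipulate an embedded latent vector "in a meaningful manner" is to estimate a semantic direction as the difference between class means in latent space and step along it. The sketch below shows only that arithmetic, under the assumption of a disentangled W-space; the function names are hypothetical and the encoder/generator are omitted.

```python
import numpy as np

def class_direction(w_class_a, w_class_b):
    """Estimate a semantic direction in latent space as the difference of
    the per-class mean latent vectors (each input: shape (n_samples, dim))."""
    return np.mean(w_class_b, axis=0) - np.mean(w_class_a, axis=0)

def shift_latent(w, direction, alpha):
    """Move an encoded latent toward the target class by a step of size alpha.
    The shifted vector would then be fed to the StyleGAN generator."""
    return w + alpha * direction

# Toy example: two well-separated "classes" of 3-D latents.
healthy = np.zeros((4, 3))
pathological = np.ones((4, 3))
d = class_direction(healthy, pathological)
shifted = shift_latent(np.zeros(3), d, alpha=0.5)
```

Decoding `shifted` with the generator would yield an image interpolated toward the target class, which is the basis for synthesizing class-specific training images.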
Medical image segmentation has a fundamental role in many computer-aided diagnosis (CAD) applications. Accurate segmentation of medical images is a key step in tracking changes over time, contouring during radiotherapy planning, and more. One of the state-of-the-art models for medical image segmentation is the U–Net, which is built on an encoder-decoder architecture, and many variations of the U–Net architecture exist. In this work, we present a new training procedure that combines the U–Net with adversarial training, which we refer to as Adversarial U–Net. We show that Adversarial U–Net outperforms the conventional U–Net in three diverse domains that differ in acquisition method as well as physical characteristics, and yields smooth and improved segmentation maps.
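Adversarial training for segmentation typically augments the per-pixel segmentation loss with a term that rewards fooling a discriminator into scoring the predicted mask as real. The numpy sketch below shows one common form of such a combined generator-side objective; the weighting `lam` and the exact loss form are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy, averaged over elements."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def adversarial_unet_loss(pred_mask, true_mask, disc_score_on_pred, lam=0.1):
    """Generator-side objective: per-pixel segmentation loss plus a weighted
    adversarial term that pushes the discriminator's score on the predicted
    mask toward 1 ("real")."""
    seg_loss = bce(pred_mask, true_mask)
    adv_loss = bce(np.asarray([disc_score_on_pred]), np.asarray([1.0]))
    return seg_loss + lam * adv_loss
```

During training the U–Net minimizes this combined loss while the discriminator is trained in alternation to distinguish predicted from ground-truth masks; the adversarial term is what encourages the smoother segmentation maps noted above.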