We propose a conditional generative adversarial network (cGAN) for generating anatomically accurate, full-sized CT images. Our approach is motivated by the concept of style transfer and mixes the style and content of two separate CT images to generate a new image. We argue that by using style and content losses in a style-transfer-based architecture together with a cGAN, we can expand clinically accurate, annotated datasets many-fold. Our framework generates full-sized images with novel anatomy at high spatial resolution for all organs and requires only limited annotated input data from a few patients. The expanded datasets our framework generates can then be used within the many deep learning architectures designed for various processing tasks in medical imaging.
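The style/content mixing described above typically builds on the standard style-transfer losses: a content loss comparing feature maps directly, and a style loss comparing their Gram matrices (second-order feature statistics). The sketch below illustrates those two losses in NumPy; the feature shapes, layer choice, and loss weighting are illustrative assumptions, not details taken from the abstract, and the actual method would extract features with a trained encoder.

```python
import numpy as np

def gram_matrix(features):
    # features: array of shape (C, N), i.e. C channel maps flattened to N = H*W.
    # The Gram matrix captures channel-wise correlations ("style" statistics).
    c, n = features.shape
    return features @ features.T / (c * n)

def style_loss(f_style, f_gen):
    # Mean-squared difference between Gram matrices of the style source
    # and the generated image's features.
    g_s, g_g = gram_matrix(f_style), gram_matrix(f_gen)
    return float(np.mean((g_s - g_g) ** 2))

def content_loss(f_content, f_gen):
    # Direct feature-map difference, which preserves spatial structure
    # (here, the anatomy of the content CT image).
    return float(np.mean((f_content - f_gen) ** 2))

# Hypothetical feature maps from two CT images (8 channels, 16x16 spatial grid).
rng = np.random.default_rng(0)
f_a = rng.standard_normal((8, 256))
f_b = rng.standard_normal((8, 256))

total = content_loss(f_a, f_b) + 1e3 * style_loss(f_a, f_b)  # weight is an assumed hyperparameter
```

In a full pipeline these losses would be summed with the cGAN adversarial loss and minimized over the generator's parameters.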