Occasionally a novel neural network architecture comes along that enables a truly unique way of solving specific deep learning problems. This has certainly been the case with Generative Adversarial Networks (GANs), originally proposed by Ian Goodfellow et al. in a 2014 paper that has been cited more than 32,000 times since its publication. Among other applications, GANs have become the preferred method for synthetic image generation. The results of using GANs for creating realistic images of people who do not exist have raised many ethical issues along the way.
In this blog post we focus on using GANs to generate synthetic images of skin lesions for medical image analysis in dermatology.
Figure 1 – How a generative adversarial network (GAN) works.
A Quick GAN Lesson
Essentially, GANs consist of two neural network agents/models (called the generator and the discriminator) that compete with one another in a zero-sum game, where one agent’s gain is the other agent’s loss. The generator produces new, plausible examples from the problem domain, whereas the discriminator classifies examples as real (from the domain) or fake (generated). In each training iteration, the discriminator is updated to get better at discriminating between real and fake samples, and the generator is updated based on how well its generated samples fooled the discriminator (Figure 1).
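To make the alternating updates concrete, here is a minimal training-loop sketch in PyTorch on a toy 1-D problem (the generator learns to mimic samples from a normal distribution). All names, architectures, and hyperparameters are illustrative assumptions, not from the original paper:

```python
# Toy GAN training loop: alternate discriminator and generator updates,
# following the zero-sum game described above. Illustrative sketch only.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a 1-D "sample" from the problem domain.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, 1) + 3.0        # "real" samples ~ N(3, 1)
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: reward fakes that the discriminator labels real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(d_loss), float(g_loss))
```

Note the `detach()` in the discriminator step: it stops gradients from flowing into the generator while the discriminator is being updated, which is what keeps the two updates adversarial rather than cooperative.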
Since then, numerous architectural variations and improvements over the original GAN idea have been proposed in the literature. Most GANs today are at least loosely based on the DCGAN (Deep Convolutional Generative Adversarial Networks) architecture, formalized by Alec Radford, Luke Metz and Soumith Chintala in their 2015 paper.
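The core of a DCGAN generator can be sketched in a few lines: a latent vector is upsampled through strided transposed convolutions with batch normalization and ReLU, ending in a tanh output. This is a simplified illustration of the DCGAN guidelines, not the paper's exact configuration; layer sizes are assumptions:

```python
# DCGAN-style generator sketch: upsample a latent vector to a 64x64 image
# via strided transposed convolutions (illustrative configuration).
import torch
import torch.nn as nn

def dcgan_generator(latent_dim=100, feat=64, channels=3):
    return nn.Sequential(
        # latent_dim x 1 x 1 -> (feat*8) x 4 x 4
        nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(feat * 8), nn.ReLU(True),
        # -> (feat*4) x 8 x 8
        nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 4), nn.ReLU(True),
        # -> (feat*2) x 16 x 16
        nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 2), nn.ReLU(True),
        # -> feat x 32 x 32
        nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat), nn.ReLU(True),
        # -> channels x 64 x 64, pixel values in [-1, 1]
        nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
        nn.Tanh(),
    )

G = dcgan_generator()
z = torch.randn(2, 100, 1, 1)   # batch of 2 latent vectors
img = G(z)
print(img.shape)                # torch.Size([2, 3, 64, 64])
```

The tanh output means training images should be normalized to [-1, 1], one of the practical recommendations from the DCGAN paper.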
GANs for Medical Images
The use of GANs to create synthetic medical images is motivated by the following aspects:
- Medical (imaging) datasets are heavily unbalanced, i.e., they contain many more images of healthy patients than of any particular pathology. The ability to create synthetic images (in different modalities) of specific pathologies could help alleviate this imbalance and provide more and better samples for a deep learning model to learn from.
- Manual annotation of medical images is a costly process (compared to similar tasks for generic everyday images, which could be handled using…
Continue reading: http://www.datasciencecentral.com/xn/detail/6448529:BlogPost:1063623