Spatial Coevolution for Generative Adversarial Network Training


Publisher
Association for Computing Machinery
Copyright
Copyright © 2021 Association for Computing Machinery.
ISSN
2688-299X
eISSN
2688-3007
DOI
10.1145/3458845

Abstract

Generative Adversarial Networks (GANs) are difficult to train because of pathologies such as mode and discriminator collapse. Similar pathologies have been studied and addressed in competitive evolutionary computation by increasing diversity. We study a system, Lipizzaner, that combines spatial coevolution with gradient-based learning to improve the robustness and scalability of GAN training. We study different features of Lipizzaner's evolutionary computation methodology. Our ablation experiments determine that communication, selection, parameter optimization, and ensemble optimization each play a critical role, both individually and in combination. Lipizzaner succumbs less frequently to critical collapses and, as a side benefit, demonstrates improved performance. In addition, we show a GAN-training feature of Lipizzaner: the ability to train simultaneously with different loss functions in the gradient descent parameter learning framework of each GAN at each cell. We use an image generation problem to show that different loss function combinations result in models with better accuracy and more diversity in comparison to other existing evolutionary GAN models. Finally, Lipizzaner with multiple loss function options promotes the best model diversity while requiring a large grid size for adequate accuracy.
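The abstract describes a spatial grid of GAN cells, each training with its own loss function and exchanging individuals with its neighborhood. The following is a minimal, hypothetical sketch of that control flow only: the grid, loss options, neighborhood communication, and selection are modeled, but the "GAN" at each cell is replaced by a single scalar parameter with a toy fitness function, so it is not Lipizzaner's actual algorithm or code. All names (`neighbors`, `fitness`, `step`, `LOSSES`) are invented for illustration.

```python
import random

GRID = 3  # 3x3 toroidal grid of cells
LOSSES = ["bce", "lsgan", "wgan"]  # multiple loss-function options per cell

def neighbors(r, c):
    """Von Neumann neighborhood (self + N/S/E/W) with toroidal wrap-around."""
    return [(r, c),
            ((r - 1) % GRID, c), ((r + 1) % GRID, c),
            (r, (c - 1) % GRID), (r, (c + 1) % GRID)]

def fitness(params):
    """Toy stand-in for evaluating a cell's model; higher is better."""
    return -abs(params - 0.5)

def step(grid):
    """One generation: each cell gathers its neighborhood (communication),
    keeps the best individual (selection), then applies a small
    gradient-like update (parameter optimization)."""
    new_grid = {}
    for r in range(GRID):
        for c in range(GRID):
            hood = [grid[n] for n in neighbors(r, c)]
            best = max(hood, key=lambda ind: fitness(ind["params"]))
            # stand-in "gradient step" toward the toy optimum at 0.5
            params = best["params"] + 0.1 * (0.5 - best["params"])
            new_grid[(r, c)] = {"params": params, "loss": best["loss"]}
    return new_grid

random.seed(0)
grid = {(r, c): {"params": random.random(), "loss": random.choice(LOSSES)}
        for r in range(GRID) for c in range(GRID)}
for _ in range(30):
    grid = step(grid)
# selection plus repeated updates drive every cell near the toy optimum
print(all(abs(cell["params"] - 0.5) < 0.1 for cell in grid.values()))
```

Under these toy assumptions, neighborhood selection lets the best individual propagate across the grid while each cell continues local optimization, which is the ablated mechanism (communication, selection, parameter optimization) the abstract reports as critical.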

Journal

ACM Transactions on Evolutionary Learning and Optimization

Published: Jul 29, 2021

Keywords: Generative adversarial networks
