Stabilizing GANs Under Limited Resources via Dynamic Machine Ordering
Generative Adversarial Networks (GANs) are a generative modeling framework notorious for training instability. Despite significant work on improving stability, training remains extremely difficult in practice. Nearly all GAN optimization methods are built on either simultaneous (Sim-GDA) or alternating (Alt-GDA) gradient descent-ascent, in which the generator and discriminator are updated either simultaneously at each iteration or in a fixed alternating pattern. In this paper, we prove that, for simple GANs whose training had previously been shown to be non-convergent under Sim-GDA and Alt-GDA, our newly introduced training method is Lyapunov-stable. We then design a novel oracle-guided GDA training strategy called Dynamic-GDA that leverages generalized analogs of the properties exhibited in the simple case. We also prove that, in contrast to Sim/Alt-GDA, GANs trained with Dynamic-GDA achieve Lyapunov-stable training at non-infinitesimal learning rates. Empirically, we show that Dynamic-GDA improves convergence orthogonally to common stabilization techniques across 8 classes of GAN models and 7 different datasets.
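To make the distinction between the two baseline orderings concrete, below is a minimal sketch (not the thesis's code) of Sim-GDA and Alt-GDA on the bilinear toy game min_x max_y x*y, a standard "simple GAN" surrogate on which both orderings are known not to converge. The `dynamic_gda` stub only shows the shape of an oracle-guided ordering; the oracle's decision rule is the thesis's contribution and is not reproduced here.

```python
# Sketch under stated assumptions: bilinear toy game f(x, y) = x * y, equilibrium at (0, 0).
import numpy as np

def grads(x, y):
    """Gradients of f(x, y) = x * y w.r.t. the min player x and the max player y."""
    return y, x

def sim_gda(x, y, lr):
    # Simultaneous GDA: both players step using gradients taken at the same point.
    gx, gy = grads(x, y)
    return x - lr * gx, y + lr * gy

def alt_gda(x, y, lr):
    # Alternating GDA: the min player steps first; the max player reacts to the new point.
    gx, _ = grads(x, y)
    x = x - lr * gx
    _, gy = grads(x, y)
    y = y + lr * gy
    return x, y

def dynamic_gda(x, y, lr, oracle):
    # Oracle-guided ordering: each iteration, an oracle chooses which player to update.
    # The oracle here is a placeholder argument, not the paper's actual criterion.
    if oracle(x, y):
        gx, _ = grads(x, y)
        x = x - lr * gx
    else:
        _, gy = grads(x, y)
        y = y + lr * gy
    return x, y

if __name__ == "__main__":
    for name, step in [("Sim-GDA", sim_gda), ("Alt-GDA", alt_gda)]:
        x, y = 1.0, 1.0
        for _ in range(500):
            x, y = step(x, y, lr=0.1)
        # Sim-GDA spirals outward; Alt-GDA stays on a bounded cycle; neither reaches (0, 0).
        print(f"{name}: distance from (0, 0) after 500 steps = {np.hypot(x, y):.3f}")
```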
- Identifier: etd-121894
- Year: 2024
- Date created: 2024-04-26
- Source: etd-121894
Permanent link to this page: https://digital.wpi.edu/show/rf55zc947