Beyond Local Nash Equilibria for Adversarial Networks

Frans A. Oliehoek, Rahul Savani, Jose Gallego, Elise van der Pol, Roderich Gross

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review



Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a 'local Nash equilibrium' (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.
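The abstract measures solution quality by exploitability: how much each player could gain by best-responding to the opponent's mixed strategy, which is zero exactly at a Nash equilibrium. As a minimal illustrative sketch (a toy zero-sum matrix game, not the paper's actual GAN setup or the Parallel Nash Memory algorithm), this can be computed as:

```python
import numpy as np

# Toy two-player zero-sum matrix game: A[i, j] is the row player's payoff when
# the row player uses pure strategy i and the column player uses pure strategy j.
# Here, rock-paper-scissors; a stand-in for the finite mixed-strategy games
# the abstract describes, not the paper's generator/discriminator game.
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])

def exploitability(A, x, y):
    """Sum of both players' best-response gains against the profile (x, y).

    Zero exploitability means (x, y) is a Nash equilibrium of the matrix game.
    """
    value = x @ A @ y
    row_gain = np.max(A @ y) - value   # row player's gain from best-responding to y
    col_gain = value - np.min(x @ A)   # column player's gain from best-responding to x
    return row_gain + col_gain

uniform = np.ones(3) / 3
print(exploitability(A, uniform, uniform))  # uniform play is the NE here -> 0.0
```

In rock-paper-scissors, uniform mixing is the unique NE, so its exploitability is zero, while any pure strategy is exploitable; the paper's empirical comparison applies this same notion to trained generator/classifier pairs.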
Original language: Undefined/Unknown
Title of host publication: Proceedings of the 27th Annual Machine Learning Conference of Belgium and the Netherlands (Benelearn)
Number of pages: 15
Publication status: Published - 1 Nov 2018
Event: 30th Benelux Conference on Artificial Intelligence (BNAIC 2018), 's-Hertogenbosch, Netherlands
Duration: 8 Nov 2018 – 9 Nov 2018
Conference number: 30


Conference: 30th Benelux Conference on Artificial Intelligence (BNAIC 2018)
Abbreviated title: BNAIC'18

Bibliographical note

Accepted author manuscript
