SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer

Yuhta Takida1, Masaaki Imaizumi3, Takashi Shibuya1, Chieh-Hsin Lai1,
Toshimitsu Uesaka1, Naoki Murata1, Yuki Mitsufuji1,2
1Sony AI, 2Sony Group Corporation, 3The University of Tokyo


A theoretically grounded yet simple modification scheme for discriminators enhances GAN performance, yielding a novel class of generative models called Slicing Adversarial Networks (SANs). Applying SAN to StyleGAN-XL achieves SOTA performance on ImageNet 256x256 (FID: 2.14, IS: 274.20).


Samples generated by StyleSAN-XL, trained on ImageNet 256x256.


Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution. We derive metrizable conditions, sufficient conditions for the discriminator to serve as the distance between the distributions, by connecting the GAN formulation with the concept of sliced optimal transport. Furthermore, by leveraging these theoretical results, we propose a novel GAN training scheme called the Slicing Adversarial Network (SAN). With only simple modifications, a broad class of existing GANs can be converted to SANs. Experiments on synthetic and image datasets support our theoretical results and the effectiveness of SAN as compared to the usual GANs. We also apply SAN to StyleGAN-XL, which leads to a state-of-the-art FID score amongst GANs for class conditional generation on ImageNet 256x256.



We introduce the notion of a metrizable discriminator to discuss sufficient conditions under which a discriminator serves as a distance between the generator and target distributions.

SAN-ify: From GANs to SANs


All you need to convert a GAN into a SAN is a simple modification to the objective and the last linear layer of the discriminator.
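As a minimal sketch of the last-layer modification (function and variable names here are ours, not from the paper's code): SAN replaces the discriminator's final linear layer with an inner product against a weight normalized onto the unit sphere, so the output becomes a projection of features along a direction.

```python
import numpy as np

def san_discriminator(features, w):
    """Sketch of a SAN-style last layer: project features onto a
    normalized direction. `features` and `w` are hypothetical names."""
    w_hat = w / np.linalg.norm(w)  # last-layer weight lives on the unit sphere
    return features @ w_hat

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))  # a batch of discriminator features
w = rng.standard_normal(8)       # raw last-layer weight

# Normalization makes the output invariant to the weight's scale.
print(np.allclose(san_discriminator(h, w), san_discriminator(h, 3.0 * w)))
```

In an actual SAN, this normalized direction is additionally trained with its own maximization objective, separate from the one used for the feature extractor.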

Metrizable Conditions


Metrizable conditions can be decomposed into (1) direction optimality, (2) separability, and (3) injectivity. No existing GAN satisfies all three conditions simultaneously. The idea behind SAN is to induce all three conditions by customizing the maximization problem.
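To make the three conditions concrete, here is an informal sketch in our own notation (not verbatim from the paper): write the discriminator in a sliced form as an inner product between a feature map and a direction on the unit sphere.

```latex
% Sliced form of a discriminator (sketch, our notation):
\[
  d(x) \;=\; \langle \omega,\, h(x) \rangle,
  \qquad \omega \in \mathbb{S}^{D-1},
\]
% where, informally,
% (1) direction optimality: \omega maximizes the discrepancy
%     \mathbb{E}_{x \sim p}\!\left[\langle \omega, h(x)\rangle\right]
%       - \mathbb{E}_{x \sim q}\!\left[\langle \omega, h(x)\rangle\right];
% (2) separability: the feature map h separates the target p from the
%     generator distribution q;
% (3) injectivity: h maps distinct distributions to distinct
%     feature distributions.
```

When all three hold, maximizing the discriminator objective yields a quantity that behaves like a distance between the two distributions, which is what "metrizable" refers to.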

SAN vs. GAN on Mixture of Gaussian

SAN vs. GAN on Vision Dataset


@inproceedings{takida2024san,
        title={{SAN}: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer},
        author={Takida, Yuhta and Imaizumi, Masaaki and Shibuya, Takashi and Lai, Chieh-Hsin and Uesaka, Toshimitsu and Murata, Naoki and Mitsufuji, Yuki},
        booktitle={The Twelfth International Conference on Learning Representations},
        year={2024}
}