
Noise Dimension of GAN: An Image Compression Perspective

Author:
Ziran Zhu, Tongda Xu, Ling Li, Yan Wang
Keyword:
Computer Science, Computer Vision and Pattern Recognition (cs.CV)
journal:
--
date:
2024-03-14 00:00:00
Abstract
A generative adversarial network (GAN) is a type of generative model that maps high-dimensional noise to samples from a target distribution. However, the dimension of noise required by a GAN is not well understood. Previous approaches view a GAN as a mapping from one continuous distribution to another continuous distribution. In this paper, we propose to view a GAN as a discrete sampler instead. From this perspective, we build a connection between the minimum noise required and the number of bits needed to losslessly compress the images. Furthermore, to understand the behaviour of a GAN when the noise dimension is limited, we propose the divergence-entropy trade-off. This trade-off characterizes the best divergence achievable when the noise is limited, and, like the rate-distortion trade-off, it can be solved numerically when the source distribution is known. Finally, we verify our theory with experiments on image generation.
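To make the limited-noise intuition concrete, here is a minimal sketch, not the paper's formulation: assuming a deterministic generator driven by d noise bits, the output distribution has at most 2^d support points, and assuming total-variation divergence with a known discrete source, the best achievable divergence is the probability mass of the atoms that cannot be covered. The function name and the Zipf-like example source are hypothetical choices for illustration.

```python
import numpy as np

def tv_entropy_tradeoff(p, noise_bits):
    """Best achievable total-variation divergence to a discrete source p
    when the generator output can take at most 2**noise_bits distinct values.
    (Illustrative sketch; the paper's divergence-entropy trade-off may use a
    different divergence and optimization.)"""
    p = np.sort(np.asarray(p, dtype=float))[::-1]  # probabilities, largest first
    k = min(2 ** noise_bits, p.size)               # available support size
    # Keep the k most probable atoms and move the dropped mass onto them;
    # the remaining mass is the unavoidable TV gap.
    return p[k:].sum()

# Example: a Zipf-like source over 1024 symbols.
p = 1.0 / np.arange(1, 1025)
p /= p.sum()
for d in range(0, 11):
    print(f"noise bits = {d:2d}, best TV divergence = {tv_entropy_tradeoff(p, d):.4f}")
```

As the noise budget grows toward the bits needed to describe the source, the achievable divergence falls to zero, mirroring the compression connection stated in the abstract.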
PDF: Noise Dimension of GAN: An Image Compression Perspective.pdf