An Optimal Transportation (OT) View of Generative Adversarial Networks (GANs)

The Generative Adversarial Network (GAN) is a powerful machine learning model that has become extremely successful in recent years. The generator and the discriminator in a GAN compete with each other until they reach a Nash equilibrium. A GAN can generate samples automatically, thereby reducing the requirement for large amounts of training data; it can also model distributions from data samples. In spite of its popularity, the GAN model lacks a theoretical foundation. In this talk, we give a geometric interpretation of optimal mass transportation theory, explain its relation to the Monge-Ampère equation, and apply the theory to the GAN model. In more detail, we will discuss the following problems:
1. Real data satisfy the manifold distribution hypothesis: their distribution is concentrated near a low-dimensional manifold embedded in the high-dimensional image space. Deep learning therefore has two major tasks: a) learning the manifold structure; b) transforming probability measures. The second task can be explained and carried out using optimal transportation theory. This makes half of the black box transparent.
2. In GANs, the generator G computes an optimal transportation map, which is the gradient of the Brenier potential; the discriminator D computes the Wasserstein distance between two probability distributions, which is expressed in terms of the Kantorovich potential. By Brenier's theorem, the optimal Brenier and Kantorovich potentials are related by a closed-form formula (see the formulas after this list). Therefore G and D should collaborate with, not compete against, each other. This reduces the complexity of the DNNs and greatly improves the computational efficiency.
3. According to the regularity theory of the Monge-Ampère equation, transportation maps are in general discontinuous when the support of the target measure is non-convex. Deep neural networks can only represent continuous maps, and this conflict induces mode collapse in GANs. To avoid it, the DNN should learn the Brenier potential instead of the transportation map directly, which gives a rigorous way to circumvent mode collapse.
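
For reference, the key formulas behind items 2 and 3 can be stated as follows. This is the standard formulation for the quadratic transportation cost; the notation is ours and is not taken verbatim from the talk:

    % Brenier theorem: for cost c(x,y) = |x - y|^2 / 2, the optimal
    % transportation map T pushing mu forward to nu is the gradient of
    % a convex Brenier potential u:
    T(x) = \nabla u(x), \qquad T_{\#}\mu = \nu .

    % When mu and nu have densities f and g, the Brenier potential
    % solves the Monge-Ampere equation:
    \det D^2 u(x) = \frac{f(x)}{g(\nabla u(x))} .

    % Closed-form relation between the Brenier potential u and the
    % Kantorovich potential \varphi, together with the Kantorovich
    % duality that the discriminator effectively evaluates:
    u(x) = \frac{|x|^2}{2} - \varphi(x), \qquad
    \frac{1}{2} W_2^2(\mu,\nu) = \int \varphi \, d\mu + \int \varphi^{c} \, d\nu .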

Based on this theoretical interpretation, we propose an Autoencoder-Optimal Transportation map (AE-OT) framework, which is partially transparent and outperforms the state of the art.
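
As a concrete illustration, the following is a minimal NumPy sketch of the central computation in an AE-OT-style pipeline: a semi-discrete optimal transport map from a continuous noise distribution to the empirical distribution of autoencoder latent codes, learned through the Brenier potential rather than the map itself, as item 3 prescribes. All names and hyperparameters (latent_codes, n_samples, lr, ...) are illustrative assumptions, not the speaker's implementation:

    # Semi-discrete OT sketch: find heights h so that the piecewise-linear
    # Brenier potential u_h(x) = max_i (<x, y_i> + h_i) pushes the noise
    # measure mu forward to the empirical latent measure nu.
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for latent codes y_i = Encoder(image_i); in a real
    # pipeline these would come from a trained autoencoder.
    n, d = 256, 8
    latent_codes = rng.normal(size=(n, d))   # targets y_i
    nu = np.full(n, 1.0 / n)                 # empirical weights nu_i

    h = np.zeros(n)
    lr, n_samples = 0.5, 20000

    for step in range(500):
        x = rng.normal(size=(n_samples, d))  # samples from mu
        # Each x lands in the cell W_i(h) where <x, y_i> + h_i is maximal;
        # the gradient map of u_h sends that x to y_i.
        cell = np.argmax(x @ latent_codes.T + h, axis=1)
        # Monte Carlo estimate of the mu-measure of each cell W_i(h):
        w = np.bincount(cell, minlength=n) / n_samples
        # Gradient of the convex energy E(h) = int u_h dmu - sum_i h_i nu_i
        # is exactly w - nu; descend until every cell has measure nu_i.
        h -= lr * (w - nu)

    # The learned map: T(x) = y_{argmax_i <x, y_i> + h_i}. Pushing fresh
    # noise through T (then decoding) yields new samples.
    x_new = rng.normal(size=(5, d))
    generated = latent_codes[np.argmax(x_new @ latent_codes.T + h, axis=1)]
    print(generated.shape)  # (5, 8)

Because the potential u_h is convex and piecewise linear, its gradient map may jump across cell boundaries; this is precisely how the construction accommodates the discontinuous transportation maps that a continuous generator network cannot represent.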

David Gu

Professor, Stony Brook University and Harvard University
April 5, 2019, 11:45 AM, EB2 1230

Dr. David Xianfeng Gu received his B.S. in Computer Science from Tsinghua University in 1994 and his Ph.D. in Computer Science from Harvard University in 2002, supervised by the Fields Medalist Prof. Shing-Tung Yau. Currently, Dr. Gu is a tenured associate professor in the Computer Science Department, also affiliated with the Applied Mathematics Department, at the State University of New York at Stony Brook. Dr. Gu is also an affiliated professor at the Center of Mathematical Sciences and Applications at Harvard University.
Dr. Gu is one of the major founders of an emerging interdisciplinary field, Computational Conformal Geometry, which combines modern geometry and topology with computer science. Dr. Gu and his collaborators laid down its theoretical foundations, systematically developed the computational algorithms, and applied conformal geometric methods in many fields of engineering and medicine, such as Computer Graphics, Computer Vision, Visualization, Geometric Modeling, Networking, Artificial Intelligence, Medical Imaging, and Computational Mechanics.
Dr. Gu has published more than 300 papers in top academic journals and conferences in pure and applied mathematics, engineering, and medicine. He has won many academic awards, including the US NSF CAREER Award in 2005 and the Morningside Applied Mathematics Gold Award in 2013.

Interdisciplinary Distinguished Seminar Series

The Department of Electrical and Computer Engineering hosts a regularly scheduled seminar series with preeminent and leading researchers from the US and around the world, to help promote North Carolina as a center of innovation and knowledge and to ensure it safeguards its place as a leader in research.