Talk title: Deep Generative Learning via Variational Gradient Flow
Speaker: Dr. Can Yang (杨灿)
Dr. Yang's research interests include machine learning and statistical genetics. He received the 2012 Hong Kong Young Scientist Award. His research has been published in the American Journal of Human Genetics, Annals of Statistics, Bioinformatics, IEEE Transactions on Pattern Analysis and Machine Intelligence, PLoS Genetics, and Proceedings of the National Academy of Sciences. As of December 2018, his work had been cited over 2,300 times on Google Scholar, with an h-index of 22 and an i10-index of 34.
Learning the generative model, i.e., the underlying data-generating distribution, from large amounts of data is one of the fundamental tasks in machine learning and statistics. Recent progress in deep generative models has provided novel techniques for unsupervised and semi-supervised learning, with broad applications ranging from image synthesis, semantic image editing, and image-to-image translation to low-level image processing. However, statistical understanding of deep generative models is still lacking, e.g., why the logD trick works well in training generative adversarial networks (GANs). In this talk, we introduce a general framework, variational gradient flow (VGrow), for learning a deep generative model to sample from the target distribution by combining the strengths of variational gradient flow on probability space, particle optimization, and deep neural networks. The proposed framework is applied to minimize the f-divergence between the evolving distribution and the target distribution. We prove that the particles driven by VGrow are guaranteed to converge to the target distribution asymptotically. Connections of our proposed VGrow method with other popular methods, such as VAEs, GANs, and flow-based methods, are established in this framework, gaining new insights into deep generative learning. We also evaluate several commonly used f-divergences, including the Kullback-Leibler, Jensen-Shannon, and Jeffreys divergences, as well as our newly discovered "logD" divergence, which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with state-of-the-art GANs. This is joint work with Yuan Gao, Yuling Jiao, Yao Wang, Yang Wang, and Shunkang Zhang.
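To give a feel for the particle-optimization idea behind the abstract, the following is a minimal toy sketch (not the authors' VGrow implementation): particles approximate the Wasserstein gradient flow that decreases KL(q || p) toward a 1-D Gaussian target p = N(2, 1). Each particle moves along the velocity field ∇log p(x) − ∇log q(x), where the score of the evolving distribution q is estimated from the particles themselves via a Gaussian kernel density estimate. All names, bandwidths, and step sizes here are illustrative assumptions.

```python
import numpy as np

# Toy particle gradient flow toward a 1-D Gaussian target N(mu_target, 1).
# This is an illustrative sketch of the general idea, not the VGrow algorithm:
# VGrow estimates the variational gradient with a deep neural network, whereas
# here we use a simple kernel density estimate of the particle distribution.

rng = np.random.default_rng(0)
mu_target, bw, step = 2.0, 0.3, 0.05   # assumed target mean, KDE bandwidth, step size

def score_target(x):
    # grad log p(x) for p = N(mu_target, 1)
    return -(x - mu_target)

def score_kde(x, particles):
    # grad log q(x), with q a Gaussian KDE built from the current particles
    d = x[:, None] - particles[None, :]          # pairwise differences (n, n)
    w = np.exp(-0.5 * (d / bw) ** 2)             # Gaussian kernel weights
    return (-(d / bw ** 2) * w).sum(1) / w.sum(1)

# Start the particles far from the target and follow the discretized flow
# x <- x + step * (grad log p(x) - grad log q(x)).
particles = rng.normal(-3.0, 0.5, size=500)
for _ in range(400):
    particles += step * (score_target(particles) - score_kde(particles, particles))

# After the flow, the particle mean should sit near the target mean of 2.
```

At stationarity the two score terms cancel, i.e., the particle distribution matches the target up to the KDE smoothing; replacing the KDE score estimate with a learned discriminator-style estimator is, roughly, where the f-divergence variational formulation and deep networks enter in the talk's framework.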