
Academic Lecture by Dr. Can Yang of the Hong Kong University of Science and Technology

Author: HR Office   Source: HR Office   Updated: 2019-04-18


Lecture Title: Deep Generative Learning via Variational Gradient Flow

Speaker: Dr. Can Yang

Time: 2:30 p.m., April 19, 2019

Venue: Room A105, Economics and Management Building, Qingshuihe Campus

 

About the Speaker:

Dr. Can Yang is an Associate Professor at the Hong Kong University of Science and Technology (HKUST). He received his bachelor's and master's degrees in electronic engineering from Zhejiang University, and his Ph.D. from the Department of Electronic and Computer Engineering at HKUST. He conducted postdoctoral research at the Yale School of Public Health, served as a research associate at the Yale School of Medicine, and was previously an Associate Professor at Hong Kong Baptist University.

Dr. Yang's research interests include machine learning and statistical genetics. He received the 2012 Hong Kong Young Scientist Award. His research has appeared in the American Journal of Human Genetics, Annals of Statistics, Bioinformatics, IEEE Transactions on Pattern Analysis and Machine Intelligence, PLoS Genetics, and Proceedings of the National Academy of Sciences. As of December 2018, his work had received more than 2,300 Google Scholar citations, with an h-index of 22 and an i10-index of 34.

 

Abstract:

Learning the generative model, i.e., the underlying data-generating distribution, from large amounts of data is one of the fundamental tasks in machine learning and statistics. Recent progress in deep generative models has provided novel techniques for unsupervised and semi-supervised learning, with broad applications ranging from image synthesis, semantic image editing, and image-to-image translation to low-level image processing. However, statistical understanding of deep generative models is still lacking, e.g., why the logD trick works well in training generative adversarial networks (GANs). In this talk, we introduce a general framework, variational gradient flow (VGrow), to learn a deep generative model that samples from the target distribution by combining the strengths of variational gradient flow on probability space, particle optimization, and deep neural networks. The proposed framework is applied to minimize the f-divergence between the evolving distribution and the target distribution. We prove that the particles driven by VGrow are guaranteed to converge to the target distribution asymptotically. Connections between our proposed VGrow method and other popular methods, such as VAEs, GANs, and flow-based methods, have been established in this framework, yielding new insights into deep generative learning. We also evaluated several commonly used f-divergences, including the Kullback-Leibler, Jensen-Shannon, and Jeffrey divergences, as well as our newly discovered "logD" divergence, which serves as the objective function of the logD-trick GAN. Experimental results on benchmark datasets demonstrate that VGrow can generate high-fidelity images in a stable and efficient manner, achieving competitive performance with state-of-the-art GANs. This is joint work with Yuan Gao, Yuling Jiao, Yao Wang, Yang Wang and Shunkang Zhang.
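For readers unfamiliar with the f-divergence objectives the abstract refers to, the toy sketch below evaluates two of the divergences mentioned (Kullback-Leibler and Jensen-Shannon) for discrete distributions via their f-generators. This is an illustrative assumption-laden example only, not the VGrow implementation, and it omits the paper's "logD" divergence, whose form is specific to that work.

```python
import math

def f_divergence(p, q, f):
    """Discrete f-divergence D_f(P||Q) = sum_x q(x) * f(p(x) / q(x))."""
    return sum(qx * f(px / qx) for px, qx in zip(p, q) if qx > 0)

def kl_gen(t):
    """Generator for Kullback-Leibler divergence: f(t) = t * log(t)."""
    return t * math.log(t) if t > 0 else 0.0

def js_gen(t):
    """Generator for Jensen-Shannon divergence:
    f(t) = (t/2) * log(2t / (1+t)) + (1/2) * log(2 / (1+t))."""
    first = 0.5 * t * math.log(2 * t / (1 + t)) if t > 0 else 0.0
    return first + 0.5 * math.log(2 / (1 + t))

p = [0.5, 0.5]  # hypothetical "model" distribution
q = [0.9, 0.1]  # hypothetical "target" distribution
print(f_divergence(p, q, kl_gen))  # KL(P||Q)
print(f_divergence(p, q, js_gen))  # JS(P||Q), bounded above by log 2
```

In VGrow-style methods the divergence is not computed from known densities as above; instead its variational gradient drives particles toward the target distribution. The example only makes concrete what objective is being minimized.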

 

All faculty and students are welcome to attend!

 

 

HR Office, School of Economics and Management

April 17, 2019