Time: October 24, 2023, 10:00–11:30
Venue: Tencent Meeting ID: 560-4756-7741
Speaker: Hailin Sang (桑海林), Associate Professor, University of Mississippi, USA
Host: Fangjun Xu (徐方军), Professor, East China Normal University
Abstract:
The generative adversarial network (GAN) is an important model developed in recent years for high-dimensional distribution learning. However, there is a pressing need for a comprehensive method to understand its error convergence rate. In this research, we study the error convergence rate of the GAN model based on a class of functions encompassing the discriminator and generator neural networks. Under our assumptions, these functions are of VC type with a bounded envelope function, which enables the application of the Talagrand inequality. By employing the Talagrand inequality and the Borel-Cantelli lemma, we establish a tight convergence rate for the error of the GAN. This method can also be applied to existing error estimates for GANs and yields improved convergence rates. In particular, the error defined with the neural network distance is a special case of the error in our definition. This talk is based on joint work with Mahmud Hasan.
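For readers unfamiliar with the neural network distance mentioned in the abstract, one standard way to formalize such a discrepancy (the talk's precise definitions may differ) is as an integral probability metric indexed by a function class $\mathcal{F}$:

$$d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim \mu}\, f(X) \;-\; \mathbb{E}_{Y \sim \nu}\, f(Y) \right|,$$

where taking $\mathcal{F}$ to be a class of discriminator neural networks recovers the neural network distance. When $\mathcal{F}$ is of VC type with a bounded envelope, the empirical process $\sup_{f \in \mathcal{F}} |\frac{1}{n}\sum_{i=1}^{n} f(X_i) - \mathbb{E} f(X)|$ admits sharp concentration bounds via the Talagrand inequality, which is the general mechanism the abstract alludes to for obtaining tight convergence rates.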
Speaker Bio:
Hailin Sang is an Associate Professor in the Department of Mathematics at the University of Mississippi, USA. He received his Ph.D. from the University of Connecticut in 2008. His research interests include random fields, nonparametric statistics, empirical processes, and machine learning. He has published nearly 30 papers in leading academic journals such as Stochastic Processes and Their Applications, Statistica Sinica, Journal of Time Series Analysis, and Journal of Nonparametric Statistics.