Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
2020-04
Journal: ENTROPY (IF: 2.1 [JCR-2023], 2.2 [5-year])
ISSN: 1099-4300
Volume: 22, Issue: 4
Status: Published
DOI: 10.3390/e22040410
Abstract

Generative Adversarial Nets (GANs) are one of the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are designed to minimize the Kullback-Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback-Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, etc. Alpha-GAN employs a power function with two order indices as the adversarial loss of the discriminator. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of generated images. Extensive experiments on the SVHN and CelebA datasets show the stability of Alpha-GAN, and the generated samples are competitive with state-of-the-art approaches.
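For reference, the Amari alpha-divergence the abstract builds on is commonly written, for discrete distributions, as D_α(p‖q) = (1 − Σᵢ pᵢ^α qᵢ^(1−α)) / (α(1−α)); as α → 1 it recovers KL(p‖q), as α → 0 it recovers KL(q‖p), and α = 1/2 gives four times the squared Hellinger distance. The sketch below illustrates this standard form numerically; the paper's exact parameterization of the generator/discriminator losses may differ, so treat this as an illustration of the divergence family only.

```python
import math

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete distributions p and q:
    D_alpha(p||q) = (1 - sum_i p_i^alpha * q_i^(1-alpha)) / (alpha * (1 - alpha)).
    Defined for alpha not in {0, 1}; the limits recover the two KL divergences."""
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return (1.0 - s) / (alpha * (1.0 - alpha))

def kl(p, q):
    """Kullback-Leibler divergence KL(p||q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example: alpha close to 1 approximates KL(p||q),
# and alpha = 1/2 equals 4 times the squared Hellinger distance.
p = [0.2, 0.3, 0.5]
q = [0.3, 0.3, 0.4]
print(alpha_divergence(p, q, 0.999), kl(p, q))
print(alpha_divergence(p, q, 0.5),
      4.0 * (1.0 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))))
```

Varying α interpolates between mass-covering (α → 1) and mode-seeking (α → 0) behavior, which is the flexibility the abstract's hyper-parameters exploit.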

Keywords: Alpha divergence; generative adversarial network; unsupervised image generation; deep neural networks
Indexed by: SCI; SCIE
WOS Research Area: Physics
WOS Category: Physics, Multidisciplinary
WOS ID: WOS:000537222600016
Publisher: MDPI
Document Type: Article
Item Type: Journal article
Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/120145
Collections: School of Information Science and Technology_Master's Students
School of Information Science and Technology_PI Research Group_Hao Wang Group
School of Information Science and Technology_Distinguished Professor Group_Ning Cai Group
Corresponding Author: likuncai
Affiliations:
1.School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
2.Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
3.University of Chinese Academy of Sciences, Beijing 100049, China
4.NEC Laboratories America, Inc. (NEC Labs), NEC Corporation, Princeton, NJ 08540, USA
First Author Affiliation: School of Information Science and Technology
Corresponding Author Affiliation: School of Information Science and Technology
First Affiliation of First Author: School of Information Science and Technology
Recommended Citation:
GB/T 7714
likuncai, Yanjie Chen, Ning Cai, et al. Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks[J]. ENTROPY, 2020, 22(4).
APA likuncai, Yanjie Chen, Ning Cai, Wei Cheng, & Hao Wang. (2020). Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks. ENTROPY, 22(4).
MLA likuncai, et al. "Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks". ENTROPY 22.4 (2020).
 

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.