ShanghaiTech University Knowledge Management System
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
Publication Date | 2020-04
Journal | ENTROPY (IF: 2.1 [JCR-2023], 2.2 [5-Year])
ISSN | 1099-4300
Volume | 22
Issue | 4
Publication Status | Published
DOI | 10.3390/e22040410
Abstract | Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are designed to minimize the Kullback-Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback-Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, and others. Alpha-GAN employs a power function with two order indexes as the adversarial loss of the discriminator. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability and the quality of generated images. Extensive experiments with Alpha-GAN are performed on the SVHN and CelebA datasets, and the evaluation results demonstrate the stability of Alpha-GAN. The generated samples are also competitive with those of state-of-the-art approaches.
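For context on the divergence family the abstract refers to, a minimal sketch is given below, assuming the commonly used Amari parameterization with index α; the paper's exact parameterization and its two-index power-function loss may differ.

\[
D_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right)
\]
\[
\lim_{\alpha \to 1} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(p \,\|\, q), \qquad
\lim_{\alpha \to 0} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(q \,\|\, p),
\]
\[
D_{1/2}(p \,\|\, q) = 4\, H^{2}(p, q), \qquad
D_{2}(p \,\|\, q) = \tfrac{1}{2}\, \chi^{2}_{\mathrm{Pearson}}(p \,\|\, q),
\]

where H denotes the Hellinger distance. Tuning α interpolates between these divergences, which is the flexibility the abstract attributes to the model's hyper-parameters.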
Keywords | Alpha divergence; generative adversarial network; unsupervised image generation; deep neural networks
Indexed By | SCI; SCIE
WOS Research Area | Physics
WOS Category | Physics, Multidisciplinary
WOS Accession Number | WOS:000537222600016
Publisher | MDPI
Original Document Type | Article
Document Type | Journal Article
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/120145
Collections | School of Information Science and Technology, Master's Students; School of Information Science and Technology, PI Research Group, Hao Wang Group; School of Information Science and Technology, Distinguished Professor Group, Ning Cai Group
Corresponding Author | likuncai
Affiliations | 1. School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China; 2. Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China; 3. University of Chinese Academy of Sciences, Beijing 100049, China; 4. NEC Laboratories America, Inc. (NEC Labs), NEC Corporation, Princeton, NJ 08540, USA
First Author Affiliation | School of Information Science and Technology
Corresponding Author Affiliation | School of Information Science and Technology
First Affiliation of First Author | School of Information Science and Technology
Recommended Citation (GB/T 7714) | likuncai, Yanjie Chen, Ning Cai, et al. Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks[J]. ENTROPY, 2020, 22(4).
APA | likuncai, Yanjie Chen, Ning Cai, Wei Cheng, & Hao Wang. (2020). Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks. ENTROPY, 22(4).
MLA | likuncai, et al. "Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks." ENTROPY 22.4 (2020).