DropNaE: Alleviating irregularity for large-scale graph representation learning
2025-03
Journal: NEURAL NETWORKS (IF: 6.0 [JCR-2023], 7.9 [5-Year])
ISSN: 0893-6080
EISSN: 1879-2782
Volume: 183
Publication Status: Published
DOI: 10.1016/j.neunet.2024.106930
Abstract

Large-scale graphs are prevalent in various real-world scenarios and can be effectively processed using Graph Neural Networks (GNNs) on GPUs to derive meaningful representations. However, the inherent irregularity of real-world graphs makes it hard to exploit the single-instruction multiple-data execution mode of GPUs, leading to inefficiencies in GNN training. In this paper, we aim to alleviate this irregularity at its origin: the irregular graph data itself. To this end, we propose DropNaE, which alleviates the irregularity in large-scale graphs by conditionally dropping nodes and edges before GNN training. Specifically, we first present a metric to quantify the neighbor heterophily of all nodes in a graph. Then, based on this metric, we propose two DropNaE variants that transform the irregular degree distribution of a large-scale graph into a uniform one. Experiments show that DropNaE is highly compatible and can be integrated into popular GNNs to improve both their training efficiency and their accuracy. DropNaE is performed offline and requires no online computing resources, benefiting current and future state-of-the-art GNNs to a significant extent. © 2024 Elsevier Ltd
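The abstract describes DropNaE only at a high level; the exact neighbor-heterophily metric and the two drop variants are defined in the paper itself. The short Python sketch below only illustrates the general preprocess-then-train idea: conditionally dropping edges offline so that the degree distribution becomes more uniform before GNN training. The function name cap_degree_drop_edges, the degree-cap criterion, and all parameters are illustrative assumptions and do not reproduce the paper's DropNaE rules.

import numpy as np

def cap_degree_drop_edges(edge_index: np.ndarray, num_nodes: int,
                          max_degree: int, seed: int = 0) -> np.ndarray:
    """Illustrative offline preprocessing (not the paper's DropNaE criterion):
    randomly drop edges incident to nodes whose in-degree exceeds `max_degree`,
    flattening the degree distribution before GNN training.

    edge_index: (2, E) array of directed edges (source row, destination row).
    Returns a filtered (2, E') edge array.
    """
    rng = np.random.default_rng(seed)
    src, dst = edge_index
    # In-degree of every destination node.
    deg = np.bincount(dst, minlength=num_nodes)
    keep = np.ones(edge_index.shape[1], dtype=bool)
    for v in np.nonzero(deg > max_degree)[0]:
        incident = np.nonzero(dst == v)[0]
        # Randomly drop the excess edges so node v keeps exactly `max_degree`.
        drop = rng.choice(incident, size=deg[v] - max_degree, replace=False)
        keep[drop] = False
    return edge_index[:, keep]

# Toy usage: a star graph where node 0 has in-degree 5, capped to 2.
edges = np.array([[1, 2, 3, 4, 5], [0, 0, 0, 0, 0]])
print(cap_degree_drop_edges(edges, num_nodes=6, max_degree=2))

Because this runs once as a preprocessing pass, it adds no online cost during training, which is the property the abstract emphasizes for DropNaE.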

Keywords: Adversarial machine learning; Federated learning; Graph algorithms; Knowledge graph; Algorithm on graph representation learning; Efficient large-scale graph representation learning; Graph neural networks; Graph representation; Irregularity; Large-scales; Neural networks trainings; Real-world graphs; Real-world scenario
URL: View full text
Indexed By: EI; SCI
Language: English
Funding Projects: National Key Research and Development Program, China [2023YFB4502305]; National Natural Science Foundation of China [62202451, 6230247, 61975124, 62276151, 62106119]; Chinese Institute for Brain Research at Beijing, CAS Project for Young Scientists in Basic Research, China [YSBR-029]
WOS Research Areas: Computer Science; Neurosciences & Neurology
WOS Categories: Computer Science, Artificial Intelligence; Neurosciences
WOS Accession Number: WOS:001411709200001
Publisher: Elsevier Ltd
EI Accession Number: 20245017518264
EI Subject Terms: Contrastive Learning
EI Classification Codes: 1101.2; 1106.2; 1201.8
Original Document Type: Journal article (JA)
Document Type: Journal article
Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/461525
Collection: School of Information Science and Technology_Master's Students
Corresponding Author: Yan, Mingyu
Affiliations:
1.SKLP, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China;
2.University of Chinese Academy of Sciences, Beijing, China;
3.ShanghaiTech University, Shanghai, China;
4.Griffith University, Brisbane, Australia;
5.University of Shanghai for Science and Technology, Shanghai, China;
6.Department of Precision Instrument, Tsinghua University, Beijing, China
Recommended Citation:
GB/T 7714
Liu, Xin, Xiong, Xunbin, Yan, Mingyu, et al. DropNaE: Alleviating irregularity for large-scale graph representation learning[J]. NEURAL NETWORKS, 2025, 183.
APA Liu, Xin, Xiong, Xunbin, Yan, Mingyu, Xue, Runzhen, Pan, Shirui, ... & Fan, Dongrui. (2025). DropNaE: Alleviating irregularity for large-scale graph representation learning. NEURAL NETWORKS, 183.
MLA Liu, Xin, et al. "DropNaE: Alleviating irregularity for large-scale graph representation learning". NEURAL NETWORKS 183 (2025).