Decomposing Temporal Equilibrium Strategy for Coordinated Distributed Multi-Agent Reinforcement Learning
Year: 2024
Proceedings Title: THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16
ISSN: 2159-5399
Volume: 38
Issue: 16
Pages: 17618-17627
Publication Status: Published
DOI: 10.1609/aaai.v38i16.29713
Abstract: The increasing demands for system complexity and robustness have prompted the integration of temporal logic into Multi-Agent Reinforcement Learning (MARL) to address tasks with non-Markovian properties. However, incorporating non-Markovian properties introduces additional computational complexities, as agents are required to integrate historical data into their decision-making process. Also, optimizing strategies within a multi-agent environment presents significant challenges due to the exponential growth of the state space with the number of agents. In this study, we introduce an innovative hierarchical MARL framework that synthesizes temporal equilibrium strategies through parity games and subsequently encodes them as individual reward machines for MARL coordination. More specifically, we reduce the strategy synthesis problem to an emptiness problem concerning parity games with optimized states and transitions. Following this synthesis step, the temporal equilibrium strategy is decomposed into individual reward machines for decentralized MARL. Theoretical proofs are provided to verify the consistency of the Nash equilibrium between the parallel composition of decomposed strategies and the original strategy. Empirical evidence confirms the efficacy of the proposed synthesis technique, showcasing its ability to reduce the state space compared to the state-of-the-art tool. Furthermore, our study highlights the superior performance of the distributed MARL paradigm over centralized approaches when deploying decomposed strategies.
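To make the abstract's central notion concrete, the following is a minimal sketch of a reward machine: a finite-state automaton over high-level events whose transitions emit scalar rewards, which is the kind of object each agent's share of the decomposed strategy is encoded as. This is not the authors' implementation; the class name, states, events, and rewards are hypothetical placeholders.

class RewardMachine:
    """Minimal reward machine: a finite-state automaton over high-level
    events whose transitions emit scalar rewards (illustrative sketch)."""

    def __init__(self, initial_state, transitions):
        # transitions maps (state, event) -> (next_state, reward)
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        # Unknown (state, event) pairs self-loop with zero reward.
        next_state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0)
        )
        self.state = next_state
        return reward

    def reset(self):
        self.state = self.initial_state


# Hypothetical machine for one agent whose decomposed strategy is
# "first reach local subgoal a, then subgoal b".
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "a"): ("u1", 0.0),  # subgoal a observed, no reward yet
        ("u1", "b"): ("u2", 1.0),  # local task complete, emit reward
    },
)
print(rm.step("a"), rm.step("b"))  # 0.0 1.0

In decentralized training, each agent would typically augment its local observation with its machine's current state, so that the non-Markovian temporal objective becomes Markovian over the product of environment and machine states.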
Proceedings Editor / Conference Organizer: Association for the Advancement of Artificial Intelligence
Keywords: Fertilizers; Game theory; Multi agent systems; Reinforcement learning; Decision-making process; Equilibria strategies; Historical data; Multi-agent environment; Multi-agent reinforcement learning; Non-Markovian property; Parity games; State-space; System robustness; Systems complexity
Conference: 38th AAAI Conference on Artificial Intelligence (AAAI) / 36th Conference on Innovative Applications of Artificial Intelligence / 14th Symposium on Educational Advances in Artificial Intelligence
Place of Publication: 2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA
Conference Venue: Vancouver, CANADA
Conference Dates: FEB 20-27, 2024
Indexed By: EI; CPCI-S
Language: English
Funding Projects: National Natural Science Foundation of China [62202067]; Natural Science Foundation of the Higher Education Institutions of Jiangsu Province [22KJB520012]; Postgraduate Research and Practice Innovation Project of Jiangsu Province [SJCX231485]
WOS Research Areas: Computer Science; Education & Educational Research
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Computer Science, Theory & Methods; Education, Scientific Disciplines
WOS Accession Number: WOS:001239323500038
Publisher: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE
EI Accession Number: 20241515880743
EI Controlled Terms: Decision making
EISSN: 2374-3468
EI Classification Codes: 723.4 Artificial Intelligence; 804 Chemical Products Generally; 821.2 Agricultural Chemicals; 912.2 Management; 922.1 Probability Theory
Original Document Type: Conference article (CA)
Document Type: Conference paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/354894
Collection: School of Information Science and Technology
School of Information Science and Technology_PI Research Group_Zhihao Jiang Group
Corresponding Author: Zhu, Chenyang
Author Affiliations:
1.Changzhou Univ, Sch Comp Sci & Artificial Intelligence, Changzhou, Jiangsu, Peoples R China
2.ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
Recommended Citation
GB/T 7714
Zhu, Chenyang, Si, Wen, Zhu, Jinyu, et al. Decomposing Temporal Equilibrium Strategy for Coordinated Distributed Multi-Agent Reinforcement Learning[C]//Association for the Advancement of Artificial Intelligence. 2275 E BAYSHORE RD, STE 160, PALO ALTO, CA 94303 USA: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE, 2024: 17618-17627.