ShanghaiTech University Knowledge Management System
ALTERNATING DIFFERENTIATION FOR OPTIMIZATION LAYERS
Year | 2023
Proceedings Title | 11TH INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS, ICLR 2023
Publication Status | Officially accepted
Abstract | The idea of embedding optimization problems into deep neural networks as optimization layers to encode constraints and inductive priors has taken hold in recent years. Most existing methods focus on implicitly differentiating Karush-Kuhn-Tucker (KKT) conditions in a way that requires expensive computations on the Jacobian matrix, which can be slow and memory-intensive. In this paper, we develop a new framework, named Alternating Differentiation (Alt-Diff), that differentiates optimization problems (here, convex optimization problems with polyhedral constraints) in a fast and recursive way. Alt-Diff decouples the differentiation procedure into a primal update and a dual update in an alternating fashion. Accordingly, Alt-Diff substantially decreases the dimension of the Jacobian matrix, especially for optimization with large-scale constraints, and thus increases the computational speed of implicit differentiation. We show that the gradients obtained by Alt-Diff are consistent with those obtained by differentiating the KKT conditions. In addition, we propose to truncate Alt-Diff to further accelerate computation. Under some standard assumptions, we show that the truncation error of the gradients is upper bounded by a term of the same order as the estimation error of the variables. Therefore, Alt-Diff can be truncated to further increase computational speed without sacrificing much accuracy. A series of comprehensive experiments validates the superiority of Alt-Diff.
Sponsors | Baidu; DeepMind; et al.; Google Research; Huawei; Meta AI
Keywords | Convex optimization; Deep neural networks; Multilayer neural networks; Network layers; Computational speed; Convex optimization problems; Embeddings; Karush-Kuhn-Tucker condition; Large-scales; Optimisations; Optimization problems; Polyhedral constraints; Standard assumptions; Truncation errors
Conference Name | 11th International Conference on Learning Representations, ICLR 2023
Conference Location | Kigali, Rwanda
Conference Dates | May 1, 2023 - May 5, 2023
Indexed By | EI
Language | English
Publisher | International Conference on Learning Representations, ICLR
EI Accession Number | 20243116791627
EI Controlled Terms | Jacobian matrices
EI Classification Codes | 461.4 Ergonomics and Human Factors Engineering; 723 Computer Software, Data Handling and Applications; 921.1 Algebra
Original Document Type | Conference article (CA)
Document Type | Conference paper
Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/407255
Collections | School of Information Science and Technology_PI Research Group_Shi Ye Group; School of Information Science and Technology_Master's Students; School of Information Science and Technology_PI Research Group_Wang Jingya Group
Corresponding Author | Shi, Ye
Author Affiliations | 1. ShanghaiTech University, China; 2. University of Technology Sydney, Australia; 3. Princeton University, United States; 4. JD Explore Academy
First Author Affiliation | ShanghaiTech University
Corresponding Author Affiliation | ShanghaiTech University
First Author's First Affiliation | ShanghaiTech University
Recommended Citation (GB/T 7714) | Sun, Haixiang, Shi, Ye, Wang, Jingya, et al. ALTERNATING DIFFERENTIATION FOR OPTIMIZATION LAYERS[C]//Baidu, DeepMind, et al., Google Research, Huawei, Meta AI: International Conference on Learning Representations, ICLR, 2023.
Files in This Item | No files are associated with this item.
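Illustrative sketch. The Alt-Diff procedure summarized in the abstract above replaces implicit differentiation of the KKT system with alternating primal and dual updates that are differentiated recursively, truncating the iteration to trade a bounded gradient error for speed. The code below is a minimal sketch of that idea, not the authors' implementation: it unrolls a standard ADMM-style iteration for a small quadratic program min_x 0.5 x^T P x + q^T x subject to Gx <= h in PyTorch and lets autograd differentiate through a truncated number of steps. The function name qp_layer and the parameters rho and K are illustrative assumptions, not names from the paper.

    import torch

    def qp_layer(P, q, G, h, rho=1.0, K=50):
        """Approximately solve  min_x 0.5 x^T P x + q^T x  s.t.  G x <= h
        with K unrolled ADMM steps; gradients w.r.t. P, q, G, h flow
        through the iterations via autograd (illustrative, not Alt-Diff's
        analytic recursion)."""
        m, n = G.shape
        x = torch.zeros(n, dtype=P.dtype)   # primal variable
        z = torch.zeros(m, dtype=P.dtype)   # auxiliary copy of G x
        u = torch.zeros(m, dtype=P.dtype)   # scaled dual variable
        M = P + rho * G.T @ G               # coefficient matrix of the x-update
        for _ in range(K):                  # truncation: fewer steps, cheaper gradients
            x = torch.linalg.solve(M, rho * G.T @ (z - u) - q)  # primal update
            z = torch.minimum(G @ x + u, h)                     # project onto {z : z <= h}
            u = u + G @ x - z                                   # dual update
        return x

    # Usage: gradients of a downstream loss w.r.t. the QP parameters.
    torch.manual_seed(0)
    n, m = 3, 2
    B = torch.randn(n, n)
    P = (B @ B.T + torch.eye(n)).requires_grad_()  # positive definite cost
    q = torch.randn(n, requires_grad=True)
    G = torch.randn(m, n, requires_grad=True)
    h = torch.ones(m, requires_grad=True)
    loss = qp_layer(P, q, G, h).sum()              # scalar downstream loss
    loss.backward()                                # backprop through the unrolled solver
    print(q.grad)

Reducing K truncates the alternating iteration, which shortens the backward pass; per the abstract, the resulting gradient error stays of the same order as the estimation error of the variables, so accuracy degrades gracefully.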