ShanghaiTech University Knowledge Management System
WLiT: Windows and Linear Transformer for Video Action Recognition
2023-02
Journal | SENSORS (IF: 3.4 [JCR-2023], 3.7 [5-Year])
ISSN | 1424-8220 |
EISSN | 1424-8220 |
Volume | 23
Issue | 3
Publication Status | Published
DOI | 10.3390/s23031616 |
Abstract | The emergence of the Transformer has led to rapid progress in video understanding, but it also brings the problem of high computational complexity. Previous methods either divide the feature maps into windows along the spatiotemporal dimensions and then calculate attention inside the windows, or perform down-sampling during attention computation to reduce the spatiotemporal resolution of the features. Although the complexity is effectively reduced, there is still room for further optimization. Thus, we present the Windows and Linear Transformer (WLiT) for efficient video action recognition, combining Spatial-Windows attention with Linear attention. We first divide the feature maps into multiple windows along the spatial dimensions and calculate attention separately inside each window, so our model further reduces the computational complexity compared with previous methods. However, the receptive field of Spatial-Windows attention is small, and global spatiotemporal information cannot be obtained. To address this problem, we then calculate Linear attention along the channel dimension so that the model can capture complete spatiotemporal information. Through this mechanism, our method achieves better recognition accuracy with less computational complexity. We conduct extensive experiments on four public datasets, namely Something-Something V2 (SSV2), Kinetics400 (K400), UCF101, and HMDB51. On the SSV2 dataset, our method reduces the computational complexity by 28% and improves the recognition accuracy by 1.6% compared to the State-Of-The-Art (SOTA) method. On K400 and the two other datasets, our method achieves SOTA-level accuracy while reducing the complexity by about 49%. © 2023 by the authors.
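The abstract above describes the two attention mechanisms that WLiT combines. Below is a minimal, hypothetical PyTorch sketch of those two ideas: self-attention restricted to non-overlapping spatial windows within each frame, followed by a softmax-free linear attention computed along the channel dimension for global spatiotemporal mixing. The module names, window size, head count, (B, T, H, W, C) tensor layout, and the specific linear-attention formulation (a k-softmax/q-softmax "efficient attention" variant) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F  # scaled_dot_product_attention requires PyTorch >= 2.0

class SpatialWindowAttention(nn.Module):
    # Self-attention restricted to non-overlapping spatial windows of each frame.
    def __init__(self, dim, window=7, heads=4):
        super().__init__()
        self.window, self.heads = window, heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, T, H, W, C); H and W assumed divisible by window
        B, T, H, W, C = x.shape
        w, h = self.window, self.heads
        # Partition every frame into w x w windows; each window holds w*w tokens.
        x = x.reshape(B, T, H // w, w, W // w, w, C)
        x = x.permute(0, 1, 2, 4, 3, 5, 6).reshape(-1, w * w, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(-1, w * w, h, C // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)      # attention inside each window only
        out = self.proj(out.transpose(1, 2).reshape(-1, w * w, C))
        # Undo the window partition back to (B, T, H, W, C).
        out = out.reshape(B, T, H // w, W // w, w, w, C)
        return out.permute(0, 1, 2, 4, 3, 5, 6).reshape(B, T, H, W, C)

class ChannelLinearAttention(nn.Module):
    # Softmax-free attention along the channel dimension: cost is linear in the
    # number of spatiotemporal tokens, so every position sees the whole clip.
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        q, k, v = self.qkv(x.reshape(B, T * H * W, C)).chunk(3, dim=-1)
        context = k.transpose(1, 2).softmax(dim=-1) @ v    # (B, C, C) global summary
        out = q.softmax(dim=-1) @ context                  # (B, N, C), N = T*H*W
        return self.proj(out).reshape(B, T, H, W, C)

if __name__ == "__main__":
    x = torch.randn(2, 4, 14, 14, 64)         # toy clip features: (B, T, H, W, C)
    x = x + SpatialWindowAttention(64)(x)     # local, per-window spatial attention
    x = x + ChannelLinearAttention(64)(x)     # global spatiotemporal mixing
    print(x.shape)                            # torch.Size([2, 4, 14, 14, 64])

Because the window attention scales with the window size rather than the full frame, and the channel-wise linear attention avoids the quadratic token-token similarity matrix, stacking the two keeps the overall cost well below full spatiotemporal self-attention while still mixing information across the whole clip, which is the trade-off the abstract highlights.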
Keywords | Artificial intelligence ; Action recognition ; Feature map ; Linear attention ; Recognition accuracy ; Self-attention ; Spatial windows ; Spatial-window attention ; Spatiotemporal information ; Transformer ; Video understanding
URL | View Full Text
Indexed By | EI ; SCI ; SCOPUS
Language | English
Funding Project | Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China [51827814] ; Youth Innovation Promotion Association CAS [2021289]
WOS Research Area | Chemistry ; Engineering ; Instruments & Instrumentation
WOS Category | Chemistry, Analytical ; Engineering, Electrical & Electronic ; Instruments & Instrumentation
WOS Accession Number | WOS:000931350000001
Publisher | MDPI
EI Accession Number | 20230713586353
EI Controlled Terms | Computational complexity
EI Classification Code | 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory ; 723.4 Artificial Intelligence
Original Document Type | Journal article (JA)
Document Type | Journal Article
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/282022
Collection | School of Information Science and Technology_Master's Students
Corresponding Author | Zhang, Fuping
Author Affiliations | 1. Chinese Acad Sci, Shanghai Adv Res Inst, Shanghai 201210, Peoples R China; 2. Shanghai Tech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China; 3. Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China; 4. Chinese Acad Sci, Inst Rock & Soil Mech, State Key Lab Geomech & Geotech Engn, Wuhan 430071, Peoples R China
First Author Affiliation | School of Information Science and Technology
Recommended Citation (GB/T 7714) | Sun, Ruoxi, Zhang, Tianzhao, Wan, Yong, et al. WLiT: Windows and Linear Transformer for Video Action Recognition[J]. SENSORS, 2023, 23(3).
APA | Sun, Ruoxi, Zhang, Tianzhao, Wan, Yong, Zhang, Fuping, & Wei, Jianming. (2023). WLiT: Windows and Linear Transformer for Video Action Recognition. SENSORS, 23(3).
MLA | Sun, Ruoxi, et al. "WLiT: Windows and Linear Transformer for Video Action Recognition". SENSORS 23.3 (2023).
Except where otherwise noted, all content in this system is protected by copyright, with all rights reserved.