WLiT: Windows and Linear Transformer for Video Action Recognition
2023-02
Journal: SENSORS (IF: 3.4 [JCR-2023], 3.7 [5-Year])
ISSN: 1424-8220
EISSN: 1424-8220
Volume: 23, Issue: 3
Publication Status: Published
DOI: 10.3390/s23031616
Abstract: The emergence of the Transformer has led to the rapid development of video understanding, but it also brings the problem of high computational complexity. Some previous methods divide the feature maps into windows along the spatiotemporal dimensions and then calculate the attention; others perform down-sampling during attention computation to reduce the spatiotemporal resolution of the features. Although these approaches effectively reduce the complexity, there is still room for further optimization. Thus, we present the Windows and Linear Transformer (WLiT) for efficient video action recognition, combining Spatial-Windows attention with Linear attention. We first divide the feature maps into multiple windows along the spatial dimensions and calculate the attention separately inside each window, so our model further reduces the computational complexity compared with previous methods. However, the receptive field of Spatial-Windows attention is small and cannot capture global spatiotemporal information. To address this problem, we then calculate Linear attention along the channel dimension so that the model can capture complete spatiotemporal information. Through this mechanism, our method achieves better recognition accuracy with less computational complexity. We conduct extensive experiments on four public datasets, namely Something-Something V2 (SSV2), Kinetics400 (K400), UCF101, and HMDB51. On SSV2, our method reduces the computational complexity by 28% and improves the recognition accuracy by 1.6% compared with the State-Of-The-Art (SOTA) method. On K400 and the two other datasets, our method achieves SOTA-level accuracy while reducing the complexity by about 49%. © 2023 by the authors.
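The abstract describes the two attention mechanisms WLiT combines: self-attention restricted to non-overlapping spatial windows, and linear attention computed along the channel dimension. Below is a minimal PyTorch sketch of what such modules could look like, written only to illustrate the idea; it is not the authors' implementation, and the module names, window size, head count, and tensor layout (B, T, H, W, C) are assumptions made for illustration.

# Minimal sketch (not the paper's code) of windowed spatial attention and
# channel-wise linear attention, assuming PyTorch and features of shape
# (B, T, H, W, C) with H and W divisible by the window size.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialWindowAttention(nn.Module):
    """Self-attention computed independently inside non-overlapping spatial
    windows of each frame, so cost grows with the window area rather than
    with the full H*W resolution."""

    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        M = self.window_size
        # Partition every frame into M x M windows: (B*T*nH*nW, M*M, C).
        x = x.view(B * T, H // M, M, W // M, M, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, M * M, C)
        out, _ = self.attn(x, x, x)            # attention only inside each window
        # Undo the window partition back to (B, T, H, W, C).
        out = out.view(B * T, H // M, W // M, M, M, C)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, T, H, W, C)
        return out


class ChannelLinearAttention(nn.Module):
    """Attention along the channel dimension: channels attend to each other
    over all T*H*W positions, giving a global spatiotemporal receptive field
    at cost linear in the number of tokens (C x C instead of N x N)."""

    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)

    def forward(self, x):                      # x: (B, T, H, W, C)
        B, T, H, W, C = x.shape
        q, k, v = self.qkv(x.view(B, -1, C)).chunk(3, dim=-1)   # each (B, N, C), N = T*H*W
        # Channel-to-channel attention map: (B, C, C); scaling choice is illustrative.
        attn = F.softmax(q.transpose(1, 2) @ k / (q.shape[1] ** 0.5), dim=-1)
        out = (v @ attn.transpose(1, 2)).view(B, T, H, W, C)
        return out

A WLiT-style block could then interleave the two, for example applying SpatialWindowAttention followed by ChannelLinearAttention (together with the usual normalization and MLP layers), so that local spatial detail and global spatiotemporal context are both captured.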
Keywords: Artificial intelligence; Action recognition; Feature map; Linear attention; Recognition accuracy; Self-attention; Spatial windows; Spatial-window attention; Spatiotemporal information; Transformer; Video understanding
Indexed In: EI; SCI; SCOPUS
Language: English
Funding Projects: Special Fund for Basic Research on Scientific Instruments of the National Natural Science Foundation of China [51827814]; Youth Innovation Promotion Association CAS [2021289]
WOS Research Areas: Chemistry; Engineering; Instruments & Instrumentation
WOS Categories: Chemistry, Analytical; Engineering, Electrical & Electronic; Instruments & Instrumentation
WOS Accession Number: WOS:000931350000001
Publisher: MDPI
EI Accession Number: 20230713586353
EI Controlled Terms: Computational complexity
EI Classification Codes: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory; 723.4 Artificial Intelligence
Original Document Type: Journal article (JA)
Document Type: Journal Article
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/282022
Collection: School of Information Science and Technology_Master's Students
Corresponding Author: Zhang, Fuping
Author Affiliations:
1.Chinese Acad Sci, Shanghai Adv Res Inst, Shanghai 201210, Peoples R China
2.Shanghai Tech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
3.Univ Chinese Acad Sci, Sch Elect Elect & Commun Engn, Beijing 100049, Peoples R China
4.Chinese Acad Sci, Inst Rock & Soil Mech, State Key Lab Geomech & Geotech Engn, Wuhan 430071, Peoples R China
First Author Affiliation: School of Information Science and Technology
Recommended Citation:
GB/T 7714
Sun, Ruoxi, Zhang, Tianzhao, Wan, Yong, et al. WLiT: Windows and Linear Transformer for Video Action Recognition[J]. SENSORS, 2023, 23(3).
APA Sun, Ruoxi, Zhang, Tianzhao, Wan, Yong, Zhang, Fuping, & Wei, Jianming. (2023). WLiT: Windows and Linear Transformer for Video Action Recognition. SENSORS, 23(3).
MLA Sun, Ruoxi, et al. "WLiT: Windows and Linear Transformer for Video Action Recognition". SENSORS 23.3 (2023).
Files in This Item:
File Name: 10.3390@s23031616.pdf
Format: Adobe PDF