ShanghaiTech University Knowledge Management System
SpikeGS: Learning 3D Gaussian Fields from Continuous Spike Stream
Year | 2025
Proceedings Title | LECTURE NOTES IN COMPUTER SCIENCE (INCLUDING SUBSERIES LECTURE NOTES IN ARTIFICIAL INTELLIGENCE AND LECTURE NOTES IN BIOINFORMATICS)
ISSN | 0302-9743
Volume | 15481 LNCS
Pages | 159-177
DOI | 10.1007/978-981-96-0972-7_10
Abstract | A spike camera is a specialized high-speed visual sensor that offers advantages such as high temporal resolution and high dynamic range compared to conventional frame cameras, giving it a significant edge in many computer vision tasks. However, novel view synthesis from spike cameras remains underdeveloped. Existing methods for learning neural radiance fields from spike streams either lack robustness under extremely noisy, low-quality lighting conditions or suffer from high computational complexity, owing to the deep fully connected networks and ray-marching rendering used in neural radiance fields, making it difficult to recover fine texture details. In contrast, recent advances in 3DGS achieve high-quality real-time rendering by optimizing a point-cloud representation into Gaussian ellipsoids. Building on this, we introduce SpikeGS, a method that learns 3D Gaussian fields solely from spike streams. We design a differentiable spike-stream rendering framework based on 3DGS that incorporates noise embedding and spiking neurons. By leveraging the multi-view consistency of 3DGS and its tile-based multi-threaded parallel rendering mechanism, we achieve high-quality real-time rendering. We also introduce a spike rendering loss function that generalizes across varying illumination conditions. Our method reconstructs view-synthesis results with fine texture details from a continuous spike stream captured by a moving spike camera, while remaining highly robust in extremely noisy low-light scenarios. Experiments on both real and synthetic datasets show that our method surpasses existing approaches in rendering quality and speed. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
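As a rough illustration of the spike-camera sensing model the abstract relies on (each pixel integrates incoming intensity and fires a binary spike whenever an accumulator crosses a threshold, so intensity can be recovered from the firing rate), here is a minimal sketch. The function names and the threshold value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def spike_stream(intensity, n_steps, theta=1.0):
    """Simulate one integrate-and-fire spike-camera pixel.

    Accumulate `intensity` each time step; emit a spike and reset the
    accumulator (by subtracting the threshold) when it crosses `theta`.
    """
    acc = 0.0
    spikes = np.zeros(n_steps, dtype=np.uint8)
    for t in range(n_steps):
        acc += intensity
        if acc >= theta:
            spikes[t] = 1
            acc -= theta
    return spikes

def reconstruct_intensity(spikes, theta=1.0):
    """Recover intensity from the mean firing rate over the window."""
    return spikes.mean() * theta

spikes = spike_stream(0.25, 400)
est = reconstruct_intensity(spikes)
print(est)  # → 0.25 (one spike every 4 steps)
```

Under noise-free, constant illumination the firing-rate estimate is exact; the noisy low-light setting the abstract targets is precisely where this naive counting breaks down and a learned rendering framework helps.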
Proceedings Editors / Conference Organizers | Asian Federation of Computer Vision; Sapien; Google; Springer; Australian Institute for Machine Learning
Keywords | Digital storage; Gaussian distribution; Gaussian noise (electronic); High speed cameras; Image coding; Image reconstruction; Rendering (computer graphics); 3D Gaussian splatting; 3D reconstruction; Gaussian field; Gaussians; High quality; High speed; Novel view synthesis; Real-time rendering; Spike camera; Splatting
Conference | 17th Asian Conference on Computer Vision, ACCV 2024
Conference Location | Hanoi, Vietnam
Conference Dates | December 8, 2024 - December 12, 2024
Indexed By | EI
Language | English
Publisher | Springer Science and Business Media Deutschland GmbH
EI Accession Number | 20245317613168
EI Controlled Terms | Deep neural networks
EISSN | 1611-3349
EI Classification Codes | 1101.2.1; 1103.1; 1106.3; 1106.3.1; 1106.5; 1202.1; 1202.2; 716.1 Information Theory and Signal Processing; 742.2 Photographic and Video Equipment
Original Document Type | Conference article (CA)
Document Type | Conference Paper
Record Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/467896
Collection | School of Information Science and Technology_PI Research Groups_Laurent Kneip Group
Corresponding Author | Wang, Yiqun
Author Affiliations | 1. College of Computer Science, Chongqing University, Chongqing, China; 2. Motovis Co., Ltd., Shanghai, China; 3. University of Chinese Academy of Sciences, Beijing, China; 4. Mobile Perception Lab, ShanghaiTech University, Shanghai, China
Recommended Citation (GB/T 7714) | Yu, Jinze, Peng, Xin, Lu, Zhengda, et al. SpikeGS: Learning 3D Gaussian Fields from Continuous Spike Stream[C]//Asian Federation of Computer Vision, Sapien, Google, Springer, and the Australian Institute for Machine Learning: Springer Science and Business Media Deutschland GmbH, 2025: 159-177.
Files in This Item | No files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.