ShanghaiTech University Knowledge Management System
Deep eyes: Joint depth inference using monocular and binocular cues
2021-09-17
Journal | NEUROCOMPUTING (IF: 5.5 [JCR-2023], 5.5 [5-Year]) |
ISSN | 0925-2312 |
EISSN | 1872-8286 |
Volume | 453 |
Pages | 812-824 |
Publication Status | Published |
DOI | 10.1016/j.neucom.2020.06.132 |
Abstract | The human visual system relies on both monocular focusness cues and binocular stereo cues to gain effective 3D perception. Correspondingly, depth from focus/defocus (DfF/DfD) and stereo matching are the two most studied passive depth sensing schemes, which are traditionally solved in separate tracks. However, the two techniques are essentially complementary: the monocular cue from DfF/DfD can robustly handle repetitive textures and occlusion that are problematic for stereo matching, whereas the binocular cue from stereo matching is insensitive to defocus blurs and can resolve a large depth range. In this paper, we emulate human perception and present unified learning-based techniques to conduct hybrid DfF/DfD and stereo matching. We first construct a comprehensive focal stack dataset synthesized by depth-guided light field rendering. Next, we propose different network architectures to suit various inputs, including a focal stack, a stereo image pair, a binocular focal stack, a focus-defocus image pair, and a defocus-stereo image triplet. We also exploit different connection methods between the separate networks for integrating them into an optimized solution to produce high-fidelity disparity maps. For experiments, we further explore different hardware setups to capture both monocular and binocular depth cues. Results show that our new learning-based hybrid techniques can significantly improve accuracy and robustness in depth estimation. (C) 2020 Elsevier B.V. All rights reserved. |
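The complementarity the abstract describes rests on two classical geometric relations: triangulation for the binocular cue and the thin-lens defocus model for the monocular cue. A minimal sketch of both follows, purely for illustration — the paper's method is learned end-to-end, and all function names and parameter values here are hypothetical:

```python
# Classical geometry behind the two passive depth cues (illustrative only;
# the paper itself replaces these hand-derived models with neural networks).

def depth_from_disparity(disparity, focal_length, baseline):
    """Binocular cue: for a rectified stereo pair, depth Z = f * B / d."""
    return focal_length * baseline / disparity

def blur_radius(depth, focus_depth, focal_length, aperture):
    """Monocular cue: thin-lens circle-of-confusion radius, which grows as
    scene depth moves away from the in-focus plane (the DfD signal)."""
    return (aperture * abs(depth - focus_depth) / depth
            * focal_length / (focus_depth - focal_length))

# Hypothetical example: f = 0.05 m, baseline = 0.1 m, disparity = 0.001 m
z = depth_from_disparity(0.001, 0.05, 0.1)  # 5.0 m
```

Note how each cue fails exactly where the other is strong: disparity is undefined where matching is ambiguous (repetitive textures, occlusion), while the blur radius is zero at the focus plane and changes slowly at large depths, limiting DfD's usable range.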
Keywords | Depth from Focus; Depth from Defocus; Stereo Matching; Deep Learning; Light Field |
Indexed By | SCIE; EI |
WOS Research Area | Computer Science |
WOS Category | Computer Science, Artificial Intelligence |
WOS Accession Number | WOS:000663418300002 |
Publisher | ELSEVIER |
Original Document Type | Article |
Document Type | Journal Article |
Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/127555 |
Collections | School of Information Science and Technology_PhD Students; School of Information Science and Technology_PI Research Groups_Jingyi Yu Group |
Corresponding Author | Yu, Jingyi |
Author Affiliations | 1.ShanghaiTech Univ, Shanghai Engn Res Ctr Intelligent Vis & Imaging, Sch Informat Sci & Technol, Shanghai, Peoples R China; 2.Chinese Acad Sci, Shanghai Inst Microsyst & Informat Technol, Shanghai, Peoples R China; 3.Univ Chinese Acad Sci, Beijing, Peoples R China; 4.DGene Inc, Santa Clara, CA USA; 5.Ecole Polytech Fed Lausanne, Lausanne, Switzerland |
First Author Affiliation | ShanghaiTech University |
Corresponding Author Affiliation | ShanghaiTech University |
First Author's First Affiliation | ShanghaiTech University |
Recommended Citation (GB/T 7714) | Chen, Zhang, Guo, Xinqing, Li, Siyuan, et al. Deep eyes: Joint depth inference using monocular and binocular cues[J]. NEUROCOMPUTING, 2021, 453: 812-824. |
APA | Chen, Zhang, Guo, Xinqing, Li, Siyuan, Yang, Yang, & Yu, Jingyi. (2021). Deep eyes: Joint depth inference using monocular and binocular cues. NEUROCOMPUTING, 453, 812-824. |
MLA | Chen, Zhang, et al. "Deep eyes: Joint depth inference using monocular and binocular cues." NEUROCOMPUTING 453 (2021): 812-824. |