Deep eyes: Joint depth inference using monocular and binocular cues
2021-09-17
Journal: NEUROCOMPUTING (IF: 5.5 [JCR-2023]; 5-Year IF: 5.5)
ISSN: 0925-2312
EISSN: 1872-8286
Volume: 453, Pages: 812-824
Status: Published
DOI: 10.1016/j.neucom.2020.06.132
Abstract

The human visual system relies on both monocular focusness cues and binocular stereo cues to gain effective 3D perception. Correspondingly, depth from focus/defocus (DfF/DfD) and stereo matching are the two most studied passive depth sensing schemes, which have traditionally been solved in separate tracks. However, the two techniques are essentially complementary: the monocular cue from DfF/DfD can robustly handle repetitive textures and occlusions that are problematic for stereo matching, whereas the binocular cue from stereo matching is insensitive to defocus blur and can resolve a large depth range. In this paper, we emulate human perception and present unified learning-based techniques to conduct hybrid DfF/DfD and stereo matching. We first construct a comprehensive focal stack dataset synthesized by depth-guided light field rendering. Next, we propose different network architectures to suit various inputs, including a focal stack, a stereo image pair, a binocular focal stack, a focus-defocus image pair, and a defocus-stereo image triplet. We also exploit different connection methods between the separate networks for integrating them into an optimized solution that produces high-fidelity disparity maps. For experiments, we further explore different hardware setups to capture both monocular and binocular depth cues. Results show that our new learning-based hybrid techniques can significantly improve accuracy and robustness in depth estimation. (C) 2020 Elsevier B.V. All rights reserved.
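The two cues the abstract contrasts follow well-known geometric models: stereo triangulation for the binocular cue and the thin-lens circle of confusion for the monocular defocus cue. A minimal sketch of both (the parameter values in the demo are illustrative assumptions, not numbers from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Binocular cue: standard triangulation, Z = f * B / d,
    with focal length in pixels and baseline in meters."""
    return focal_px * baseline_m / disparity_px

def blur_radius(depth_m, focus_m, focal_m, aperture_m):
    """Monocular cue: thin-lens circle-of-confusion radius on the
    sensor (meters). It is zero at the focal plane and grows as the
    scene point moves away from it."""
    # Image-side distances from the thin-lens equation 1/f = 1/z + 1/v
    v_focus = 1.0 / (1.0 / focal_m - 1.0 / focus_m)
    v_obj = 1.0 / (1.0 / focal_m - 1.0 / depth_m)
    return 0.5 * aperture_m * abs(v_obj - v_focus) / v_obj

# Illustrative rig: 1000 px focal length, 10 cm baseline,
# 50 mm lens with a 10 mm aperture focused at 2 m.
print(depth_from_disparity(100.0, 1000.0, 0.1))   # depth at 100 px disparity
print(blur_radius(2.0, 2.0, 0.05, 0.01))          # in focus: no blur
print(blur_radius(4.0, 2.0, 0.05, 0.01))          # behind focal plane: blurred
```

The complementarity is visible in these formulas: disparity-based depth degrades where correspondence fails (repetitive texture, occlusion) but is unaffected by blur, while the defocus cue needs no correspondence but its blur-to-depth mapping is ambiguous and weak far from the focal plane, which is what motivates the paper's hybrid networks.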

Keywords: Depth from Focus; Depth from Defocus; Stereo Matching; Deep Learning; Light Field
Indexed by: SCIE; EI
WOS Research Area: Computer Science
WOS Category: Computer Science, Artificial Intelligence
WOS Accession Number: WOS:000663418300002
Publisher: ELSEVIER
Original Document Type: Article
Item Type: Journal Article
Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/127555
Collections: School of Information Science and Technology_PhD Students
School of Information Science and Technology_PI Research Group_Jingyi Yu Group
Corresponding Author: Yu, Jingyi
Affiliations:
1.ShanghaiTech Univ, Shanghai Engn Res Ctr Intelligent Vis & Imaging, Sch Informat Sci & Technol, Shanghai, Peoples R China;
2.Chinese Acad Sci, Shanghai Inst Microsyst & Informat Technol, Shanghai, Peoples R China;
3.Univ Chinese Acad Sci, Beijing, Peoples R China;
4.DGene Inc, Santa Clara, CA USA;
5.Ecole Polytech Fed Lausanne, Lausanne, Switzerland
First Author Affiliation: ShanghaiTech University
Corresponding Author Affiliation: ShanghaiTech University
First Author's First Affiliation: ShanghaiTech University
Recommended Citation:
GB/T 7714
Chen, Zhang, Guo, Xinqing, Li, Siyuan, et al. Deep eyes: Joint depth inference using monocular and binocular cues[J]. NEUROCOMPUTING, 2021, 453: 812-824.
APA: Chen, Zhang, Guo, Xinqing, Li, Siyuan, Yang, Yang, & Yu, Jingyi. (2021). Deep eyes: Joint depth inference using monocular and binocular cues. NEUROCOMPUTING, 453, 812-824.
MLA: Chen, Zhang, et al. "Deep eyes: Joint depth inference using monocular and binocular cues". NEUROCOMPUTING 453 (2021): 812-824.
 

Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.