[1] Caesar H, Bankiti V, Lang A H, et al. nuScenes: A multimodal dataset for autonomous driving[C]. Online: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 11621-11631.
[2] Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]. Providence: IEEE Conference on Computer Vision and Pattern Recognition, 2012: 3354-3361.
[3] Sun P, Kretzschmar H, Dotiwalla X, et al. Scalability in perception for autonomous driving: Waymo open dataset[C]. Online: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 2446-2454.
[4] Chang M F, Lambert J, Sangkloy P, et al. Argoverse: 3D tracking and forecasting with rich maps[C]. Long Beach: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 8748-8757.
[5] Long Y, Morris D, Liu X, et al. Radar-camera pixel depth association for depth completion[C]. Nashville: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 12507-12516.
[6] Teed Z, Deng J. RAFT: Recurrent all-pairs field transforms for optical flow[C]. Glasgow: European Conference on Computer Vision, 2020: 402-419.
[7] Cordts M, Omran M, Ramos S, et al. The Cityscapes dataset for semantic urban scene understanding[C]. Las Vegas: IEEE Conference on Computer Vision and Pattern Recognition, 2016: 3213-3223.
[8] Cheng B, Collins M D, Zhu Y, et al. Panoptic-DeepLab: A simple, strong, and fast baseline for bottom-up panoptic segmentation[C]. Online: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 12475-12485.
[9] Hu X, Yang K, Fei L, et al. ACNet: Attention based network to exploit complementary features for RGBD semantic segmentation[C]. Taipei: IEEE International Conference on Image Processing, 2019: 1440-1444.
[10] Hu M, Wang S, Li B, et al. PENet: Towards precise and efficient image guided depth completion[C]. Xi'an: IEEE International Conference on Robotics and Automation, 2021: 13656-13662.
[11] Zhang Y, Funkhouser T. Deep depth completion of a single RGB-D image[C]. Salt Lake City: IEEE Conference on Computer Vision and Pattern Recognition, 2018: 175-185.
[12] 白宇, 梁晓玉, 安胜彪. 深度学习的2D-3D融合深度补全综述[J]. 计算机工程与应用, 2023, 59(13): 17-32.
doi: 10.3778/j.issn.1002-8331.2209-0284
Bai Yu, Liang Xiaoyu, An Shengbiao. Review of deep learning-based 2D-3D fusion depth completion[J]. Computer Engineering and Applications, 2023, 59(13): 17-32.
doi: 10.3778/j.issn.1002-8331.2209-0284
[13] 郑柏伦, 冼楚华, 张东九. 融合RGB图像特征的多尺度深度图像补全方法[J]. 计算机辅助设计与图形学学报, 2021, 33(9): 1407-1417.
Zheng Bolun, Xian Chuhua, Zhang Dongjiu. Multi-scale fusion networks with RGB image features for depth map completion[J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(9): 1407-1417.
[14] 杨宇翔, 曹旗, 高明煜, 等. 基于多阶段多尺度彩色图像引导的道路场景深度图像补全[J]. 电子与信息学报, 2022, 44(11): 3951-3959.
Yang Yuxiang, Cao Qi, Gao Mingyu, et al. Multi-stage multi-scale color guided depth image completion for road scenes[J]. Journal of Electronics & Information Technology, 2022, 44(11): 3951-3959.
[15] Lo C C, Vandewalle P. How much depth information can radar contribute to a depth estimation model?[EB/OL]. (2022-02-26)[2023-06-12]. https://arxiv.org/pdf/2202.13220.
[16] Ma F, Karaman S. Sparse-to-dense: Depth prediction from sparse depth samples and a single image[C]. Brisbane: IEEE International Conference on Robotics and Automation, 2018: 4796-4803.
[17] 郑兴华, 孙喜庆, 吕嘉欣, 等. 基于深度学习和智能规划的行为识别[J]. 电子学报, 2019, 47(8): 1661-1668.
doi: 10.3969/j.issn.0372-2112.2019.08.008
Zheng Xinghua, Sun Xiqing, Lü Jiaxin, et al. Action recognition based on deep learning and intelligent planning[J]. Acta Electronica Sinica, 2019, 47(8): 1661-1668.
doi: 10.3969/j.issn.0372-2112.2019.08.008
[18] 毛莺池, 张建华, 陈豪. 一种多视图深度融合的连续性缺失补全方法[J]. 西安电子科技大学学报, 2019, 46(2): 61-68.
Mao Yingchi, Zhang Jianhua, Chen Hao. Successive missing completion based on deep fusion from multiple views[J]. Journal of Xidian University, 2019, 46(2): 61-68.