[1] Dai Xingran. Research and design of autonomous following robot[D]. Qufu: Qufu Normal University, 2020.

[2] Calik S K. UAV path planning with multiagent ant colony system approach[C]. Zonguldak: Proceedings of the Twenty-fourth Signal Processing and Communication Application Conference, 2016.

[3] Hart P E, Nilsson N J, Raphael B. A formal basis for the heuristic determination of minimum cost paths[J]. IEEE Transactions on Systems Science and Cybernetics, 1968, 4(2):100-107.

[4] Rösmann C, Hoffmann F, Bertram T. Integrated online trajectory planning and optimization in distinctive topologies[J]. Robotics and Autonomous Systems, 2017, 88:142-153.

[5] Liu Yongjian, Zeng Guohui, Huang Bo, et al. Research on robot path planning based on improved ant colony algorithm[J]. Electronic Science and Technology, 2020, 33(1):13-18.

[6] Feyrer S, Zell A. Detection, tracking, and pursuit of humans with an autonomous mobile robot[C]. Kyongju: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1999.

[7] Hirai N, Mizoguchi H. Visual tracking of human back and shoulder for person following robot[C]. Kobe: Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2003.

[8] Hassan M S, Khan A F, Khan M W, et al. A computationally low-cost vision-based tracking algorithm for human following robot[C]. Hong Kong: Proceedings of the Second International Conference on Control, Automation and Robotics, 2016.

[9] Yu Duo, Wang Yaonan, Mao Jianxu, et al. Vision-based object tracking method of mobile robot[J]. Chinese Journal of Scientific Instrument, 2019, 40(1):227-235.

[10] Li Jianfu, Wang Sibo, Song Guoping. Path planning method based on routing preference and path length[J]. Journal of Jilin University (Science Edition), 2021, 59(1):107-114.

[11] Wu Guowei. Research on the visual navigation method of intelligent weeding robot[D]. Harbin: Harbin Institute of Technology, 2020.

[12] Tang Xiaoyu, Huang Jinbo, Feng Jiewen, et al. Image segmentation and defect detection of insulators based on U-net and YOLOv4[J]. Journal of South China Normal University (Natural Science Edition), 2020, 52(6):15-21.

[13] Wang C Y, Liao H Y M, Wu Y H, et al. CSPNet: A new backbone that can enhance learning capability of CNN[C]. Seattle: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.

[14] He K, Zhang X, Ren S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9):1904-1916. doi:10.1109/TPAMI.2015.2389824

[15] Liu S, Qi L, Qin H F, et al. Path aggregation network for instance segmentation[C]. Salt Lake City: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.

[16] Li Yang, Shen Ye, Liu Min, et al. Multi-target tracking algorithm combining motion information and appearance information[J]. Electronic Science and Technology, 2020, 33(9):21-24.

[17] Dalal N, Triggs B, Schmid C. Human detection using oriented histograms of flow and appearance[C]. Berlin: Proceedings of the European Conference on Computer Vision, 2006.

[18] Luo H, Gu Y, Liao X, et al. Bag of tricks and a strong baseline for deep person re-identification[C]. Long Beach: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.

[19] Pan X, Luo P, Shi J, et al. Two at once: Enhancing learning and generalization capacities via IBN-Net[C]. Munich: Proceedings of the European Conference on Computer Vision, 2018.