[1] Arulananth T S, Baskar M, Sree R D, et al. Smart garbage segregation for the smart city management systems[C]. Kuching: Proceedings of the Fourteenth Asia-Pacific Physics Conference, 2021:507-512.
[2] Mittal G, Yagnik K B, Garg M, et al. Spot garbage:Smartphone app to detect garbage using deep learning[C]. Heidelberg: Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016:940-945.
[3] 陈昱辰, 曾令超, 张秀妹, 等. 基于图像LBP特征与Adaboost分类器的垃圾分拣识别方法[J]. 南方农机, 2021, 52(21):136-138,144.
Chen Yuchen, Zeng Lingchao, Zhang Xiumei, et al. Waste sorting recognition method based on image LBP features and Adaboost classifier[J]. China Southern Agricultural Machinery, 2021, 52(21):136-138,144.
[4] Wang Y, Feng W. Design and implementation of garbage classification system based on deep learning[C]. Singapore: The Seventh International Conference on Control, Automation and Robotics, 2021:601-605.
[5] Wang Y, Zhang X. Autonomous garbage detection for intelligent urban management[C]. Tokyo: MATEC Web of Conferences, 2018:1189-1193.
[6] 张伟, 刘娜, 江洋, 等. 基于YOLO神经网络的垃圾检测与分类[J]. 电子科技, 2022, 35(10):45-50.
Zhang Wei, Liu Na, Jiang Yang, et al. Garbage detection and classification based on YOLO neural network[J]. Electronic Science and Technology, 2022, 35(10):45-50.
[7] Woo S, Park J, Lee J Y, et al. CBAM:Convolutional block attention module[C]. Munich: Proceedings of the European Conference on Computer Vision, 2018:395-401.
[8] Howard A, Sandler M, Chu G, et al. Searching for MobileNetV3[C]. Seoul: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019:570-576.
[9] Howard A G, Zhu M, Chen B, et al. MobileNets:Efficient convolutional neural networks for mobile vision applications[C]. Honolulu: Computer Vision and Pattern Recognition, 2017:897-905.
[10] Sandler M, Howard A, Zhu M, et al. MobileNetV2:Inverted residuals and linear bottlenecks[C]. Salt Lake City: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:370-376.
[11] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Las Vegas: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:988-993.
[12] Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks[C]. Fort Lauderdale: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011:315-320.
[13] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]. Salt Lake City: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:627-631.
[14] Nader A, Azar D. Searching for activation functions using a self-adaptive evolutionary algorithm[C]. New York: Genetic and Evolutionary Computation Conference, 2020:770-776.
[15] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30:5998-6008.
[16] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16×16 words:Transformers for image recognition at scale[C]. Online: International Conference on Learning Representations, 2020:1513-1520.
[17] Tan M, Pang R, Le Q V. EfficientDet:Scalable and efficient object detection[C]. Seattle: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020:593-600.
[18] Wang Q, Wu B, Zhu P, et al. ECA-Net:Efficient channel attention for deep convolutional neural networks[C]. Seattle: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020:362-368.
[19] Bodla N, Singh B, Chellappa R, et al. Soft-NMS:Improving object detection with one line of code[C]. Venice: Proceedings of the IEEE International Conference on Computer Vision, 2017:209-215.
[20] He J, Erfani S, Ma X, et al. Alpha-IoU:A family of power intersection over union losses for bounding box regression[J]. Advances in Neural Information Processing Systems, 2021, 34:20230-20242.