Electronic Science and Technology ›› 2024, Vol. 37 ›› Issue (1): 72-80.doi: 10.16180/j.cnki.issn1007-7820.2024.01.011
LI Zenghui1,WANG Wei2
Received: 2022-09-26
Online: 2024-01-15
Published: 2024-01-11
LI Zenghui,WANG Wei. Research Progress of Medical Image Segmentation Method Based on Deep Learning[J].Electronic Science and Technology, 2024, 37(1): 72-80.
Table 1. Comparison of medical image segmentation algorithms

| Method | Characteristics | Advantages | Disadvantages |
|---|---|---|---|
| FCN | Replaces fully connected layers with convolutions; introduces skip connections | Accepts input images of arbitrary size; fuses high-level and low-level features | Does not fully exploit global context; limited segmentation accuracy |
| DeepLab-V1 | Extracts features with atrous convolution, then refines with a fully connected CRF | Enlarges the receptive field to capture more features | Deep convolutional networks reduce spatial resolution and require large storage |
| DeepLab-V2 | Introduces multi-scale atrous spatial pyramid pooling | Same as above | Insufficient segmentation detail |
| DeepLab-V3 | Improves the earlier atrous convolution and multi-scale spatial pyramid pooling | Same as above | Poor quality when upsampling the output map |
| U-Net | Adds a symmetric decoder network on top of the convolutional encoder | Makes better use of global context; effectively fuses low-level and high-level information | Coarse segmentation of object boundaries |
| U-Net++ | Redesigns the skip connections of U-Net | Improves segmentation accuracy and reduces the parameter count | Redundant training data; high GPU memory consumption |
| U-Net3+ | Redesigns the skip connections | Same as above | Complex network structure with many parameters |
| MultiResUNet | Introduces residual learning | Mitigates vanishing and exploding gradients | Increases network training time |
| Attention U-Net | Introduces an attention mechanism | Flexibly captures relations between global and local features | Deep-layer information may be corrupted, weakening the model's learning ability |
| 3D U-Net | Performs volumetric segmentation | Can segment 3D medical images directly | Many parameters; long training time |
| Cascaded 3D U-Net | Multi-stage segmentation | Detects small targets well | Adds extra computational cost |
| TransUNet | Combines Transformer with U-Net | Enhances detail by recovering local spatial information | Requires a large amount of data |
| Swin U-Net | Uses a shifted-window Swin Transformer as the encoder for feature extraction | Good segmentation accuracy and robust generalization | Initializing the network directly with Swin Transformer pretrained weights can hurt model performance |
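A key idea behind the DeepLab family in the table above is that atrous (dilated) convolution enlarges the receptive field without adding parameters. The following is an illustrative sketch (not taken from any of the cited papers) that computes the receptive field of a stack of stride-1 convolution layers:

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    Each layer is a (kernel_size, dilation) pair. A dilated kernel has
    effective size d*(k-1)+1, so each layer grows the receptive field
    by d*(k-1).
    """
    rf = 1
    for k, d in layers:
        rf += d * (k - 1)
    return rf

# Three plain 3x3 convolutions cover a 7-pixel window:
print(receptive_field([(3, 1)] * 3))              # 7
# The same three layers with dilations 1, 2, 4 cover 15 pixels,
# at identical parameter cost:
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

This is why DeepLab can "extract more features" per Table 1: the dilated stack sees a much wider context with the same number of weights.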
Table 2. Commonly used medical image segmentation datasets

| Body part | Modality | Name | Size | Format | Region | URL |
|---|---|---|---|---|---|---|
| Brain | MRI | BraTS 2018 | 285 | NIfTI | Glioma | https://aistudio.baidu.com/aistudio/datasetdetail/64660 |
| Knee | MRI | MRNet | 60 | NIfTI | Bone and cartilage | https://stanfordmlgroup.github.io/competitions/mrnet/ |
| Eye | Retinal image | DRIVE | 40 | JPEG | Retinal vessels | https://gitee.com/zongfang/retina-unet |
| Eye | Retinal image | CHASE_DB1 | 1 200 | JPEG | Vascular lesions of pathological myopia | https://blogs.kingston.ac.uk/retinal/chasedb1/ |
| Eye | Retinal image | STARE | 20 | JPEG | Same as above | https://cecas.clemson.edu/~ahoover/stare/ |
| Abdomen | CT | CHAOS | 40 | DICOM | Liver and vessels | https://zenodo.org/record/3431873#.Y0ABL3ZBxPZ |
| Abdomen | MRI | CHAOS | 120 | DICOM | Liver and vessels | https://zenodo.org/record/3431873#.Y0ABL3ZBxPZ |
| Chest | Chest X-ray | SIIM-ACR | - | DICOM | Pneumothorax | https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation |
| Chest | Chest X-ray | SCR | 247 | JPEG | Heart and lungs | http://www.isi.uu.nl/Research/Databases/SCR/ |
| Chest | CT | SegTHOR | 60 | - | Heart and lungs | https://github.com/FEanglang/SegTHOR-Pytorch |
| Kidney | CT | KiTS19 | 300 | NIfTI | Kidney tumor | https://github.com/neheller/kits19 |
| Liver | WSI | PAIP | 50 | - | Liver cancer tumor | - |
| Lung | CT | LUNA | 888 | MetaImage | Lung nodule segmentation | https://pan.baidu.com/s/1BJIBqu3qGZeXj9Vm_xftGQ (extraction code: r6Ba) |
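Challenges built on the datasets above (e.g. BraTS, KiTS19) typically score submissions with region-overlap metrics such as the Dice coefficient. A minimal sketch on flat binary masks (illustrative only, not any specific challenge's evaluation script):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two flat binary masks (sequences of 0/1).

    Dice = 2 * |P intersect T| / (|P| + |T|); eps guards the case
    where both masks are empty.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

A Dice of 1.0 means perfect overlap; 0.0 means none. Real evaluation code operates on 2D or 3D arrays, but the formula is the same after flattening.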
[1] Bigler E D. Neuroimaging as a biomarker in symptom validity and performance validity testing[J]. Brain Imaging and Behavior, 2015, 9(3):421-444. doi: 10.1007/s11682-015-9409-1
[2] Pun T. A new method for grey-level picture thresholding using the entropy of the histogram[J]. Signal Processing, 1980, 2(3):223-237. doi: 10.1016/0165-1684(80)90020-1
[3] Khan J F, Bhuiyan S M A, Adhami R R. Image segmentation and shape analysis for road-sign detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2010, 12(1):83-96. doi: 10.1109/TITS.2010.2073466
[4] Tremeau A, Borel N. A region growing and merging algorithm to color segmentation[J]. Pattern Recognition, 1997, 30(7):1191-1203. doi: 10.1016/S0031-3203(96)00147-1
[5] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Las Vegas: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:770-778.
[6] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2022-08-14]. https://doi.org/10.48550/arXiv.1409.1556.
[7] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]. Boston: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015:1-9.
[8] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]. Boston: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015:3431-3440.
[9] Chen L C, Papandreou G, Kokkinos I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(4):834-848. doi: 10.1109/TPAMI.2017.2699184
[10] Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. (2017-06-17)[2022-08-14]. https://doi.org/10.48550/arXiv.1706.05587.
[11] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]. Munich: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015:234-241.
[12] Zhou Z, Rahman Siddiquee M M, Tajbakhsh N, et al. UNet++: A nested U-Net architecture for medical image segmentation[M]. Cham: Springer, 2018:3-11.
[13] Huang H, Lin L, Tong R, et al. UNet 3+: A full-scale connected UNet for medical image segmentation[C]. Barcelona: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020:497-506.
[14] Ibtehaz N, Rahman M S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation[J]. Neural Networks, 2020, 121:74-87.
[15] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]. Las Vegas: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:2818-2826.
[16] Zhang Z, Liu Q, Wang Y. Road extraction by deep residual U-Net[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(5):749-753. doi: 10.1109/LGRS.2018.2802944
[17] Kesavan S M, Al Naimi I, Al Attar F, et al. Res-UNet supported segmentation and evaluation of COVID-19 lesion in lung CT[C]. Puducherry: International Conference on System, Computation, Automation and Networking, 2021:633-640.
[18] Liu Z, Yuan H. An Res-UNet method for pulmonary artery segmentation of CT images[C]. Journal of Physics: Conference Series, 2021:591-596.
[19] Çiçek Ö, Abdulkadir A, Lienkamp S S, et al. 3D U-Net: Learning dense volumetric segmentation from sparse annotation[C]. Athens: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2016:155-162.
[20] Tong Q, Ning M, Si W, et al. 3D deeply-supervised U-Net based whole heart segmentation[C]. Hong Kong: International Workshop on Statistical Atlases and Computational Models of the Heart, 2017:602-609.
[21] Wang C, Macgillivray T, Macnaught G, et al. A two-stage U-Net model for 3D multi-class segmentation on full-resolution cardiac data[C]. International Workshop on Statistical Atlases and Computational Models of the Heart, 2018:566-572.
[22] Xu H, Xu Z, Gu W, et al. A two-stage fully automatic segmentation scheme using both 2D and 3D U-Net for multi-sequence cardiac MR[C]. International Workshop on Statistical Atlases and Computational Models of the Heart, 2019:981-990.
[23] Zeng G, Wang Q, Lerch T, et al. Latent 3D U-Net: Multi-level latent shape space constrained 3D U-Net for automatic segmentation of the proximal femur from radial MRI of the hip[C]. Bern: International Workshop on Machine Learning in Medical Imaging, 2018:1321-1337.
[24] Wang C, Guo Y, Chen W, et al. Fully automatic intervertebral disc segmentation using multimodal 3D U-Net[C]. IEEE the Forty-third Annual Computer Software and Applications Conference, 2019:506-512.
[25] Mehta R, Arbel T. 3D U-Net for brain tumour segmentation[J]. MICCAI Brain Lesion Workshop, 2019, 17(6):643-646.
[26] Owler J, Irving B, Ridgeway G, et al. Comparison of multi-atlas segmentation and U-Net approaches for automated 3D liver delineation in MRI[J]. Annual Conference on Medical Image Understanding and Analysis, 2019, 19(6):478-488.
[27] Zhao C, Han J, Jia Y, et al. Lung nodule detection via 3D U-Net and contextual convolutional neural network[J]. International Conference on Networking and Network Applications, 2018, 20(7):356-361.
[28] He Y, Yu X, Liu C, et al. A 3D dual path U-Net of cancer segmentation based on MRI[C]. IEEE International Conference on Image, Vision and Computing, 2018:933-938.
[29] Heinrich M P, Oktay O, Bouteldja N. OBELISK-Net: Fewer layers to solve 3D multi-organ segmentation with sparse deformable convolutions[J]. Medical Image Analysis, 2019, 54:1-9.
[30] Wang Y, Zhao L, Wang M, et al. Organ at risk segmentation in head and neck CT images using a two-stage segmentation framework based on 3D U-Net[J]. IEEE Access, 2019, 7:144591-144602.
[31] Li B, Vernooij M W, Ikram M A, et al. Reproducible white matter tract segmentation using 3D U-Net on a large-scale DTI dataset[J]. International Workshop on Machine Learning in Medical Imaging, 2018, 8(3):205-213.
[32] Feng X, Wang C, Cheng S, et al. Automatic liver and tumor segmentation of CT based on cascaded U-Net[J]. Chinese Intelligent Systems Conference, 2019, 9(4):155-164.
[33] Liu Y, Qi N, Zhu Q, et al. CR-U-Net: Cascaded U-Net with residual mapping for liver segmentation in CT images[J]. IEEE Visual Communications and Image Processing, 2019, 12(6):1-4.
[34] Li H, Li A, Wang M. A novel end-to-end brain tumor segmentation method using improved fully convolutional networks[J]. Computers in Biology and Medicine, 2019, 18(9):150-160.
[35] Vigneault D M, Xie W, Ho C Y, et al. Ω-Net (Omega-Net): Fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks[J]. Medical Image Analysis, 2018, 48:95-106. doi: 10.1016/j.media.2018.05.008
[36] Abd-Ellah M K, Khalaf A A, Awad A I, et al. TPUAR-Net: Two parallel U-Net with asymmetric residual-based deep convolutional neural network for brain tumor segmentation[J]. Journal of Image Analysis, 2019, 7(9):106-116.
[37] Soltanpour M, Greiner R, Boulanger P, et al. Ischemic stroke lesion prediction in CT perfusion scans using multiple parallel U-Nets following by a pixel-level classifier[J]. IEEE the Nineteenth International Conference on Bioinformatics and Bioengineering, 2019, 16(7):957-963.
[38] Zhang L, Xu L. An automatic liver segmentation algorithm for CT images U-Net with separated paths of feature extraction[C]. Tianjin: IEEE the Third International Conference on Image, Vision and Computing, 2018:294-298.
[39] Huang G, Liu Z, Weinberger K Q. Densely connected convolutional networks[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:4700-4708.
[40] Chen J, Lu Y, Yu Q, et al. TransUNet: Transformers make strong encoders for medical image segmentation[EB/OL]. (2021-02-08)[2022-08-14]. https://doi.org/10.48550/arXiv.2102.04306.
[41] Cao H, Wang Y, Chen J, et al. Swin-UNet: UNet-like pure transformer for medical image segmentation[EB/OL]. (2021-05-12)[2022-08-14]. https://doi.org/10.48550/arXiv.2105.05537.
[42] Fu Z, Zhang J, Luo R, et al. TF-UNet: An automatic cardiac MRI image segmentation method[J]. Mathematical Biosciences and Engineering, 2022, 19(5):5207-5222. doi: 10.3934/mbe.2022244
[43] Zhao X, Wu Y, Song G, et al. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation[J]. Medical Image Analysis, 2018, 43:98-111.
[44] Lessmann N, Van Ginneken B, De Jong P A, et al. Iterative fully convolutional neural networks for automatic vertebra segmentation and identification[J]. Medical Image Analysis, 2019, 53:142-155. doi: 10.1016/j.media.2019.02.005
[45] Tong N, Gou S, Yang S, et al. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks[J]. Medical Physics, 2018, 45(10):4558-4567. doi: 10.1002/mp.13147
[46] Roth H R, Shen C, Oda H, et al. Deep learning and its application to medical image segmentation[J]. Medical Imaging Technology, 2018, 36(2):63-71.
[47] Jin Q, Meng Z, Sun C, et al. RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans[J]. Frontiers in Bioengineering and Biotechnology, 2020, 15(9):1471-1480.
[48] Lou A, Guan S, Loew M. DC-UNet: Rethinking the U-Net architecture with dual channel efficient CNN for medical image segmentation[C]. Medical Imaging: Image Processing, 2021:59-68.
[49] Xu G, Wu X, Zhang X, et al. LeViT-UNet: Make faster encoders with transformer for medical image segmentation[EB/OL]. (2021-07-19)[2022-08-14]. https://doi.org/10.48550/arXiv.2107.08623.
[50] Guo C, Szemenyei M, Yi Y, et al. SA-UNet: Spatial attention U-Net for retinal vessel segmentation[C]. Milan: The Twenty-fifth International Conference on Pattern Recognition (ICPR), IEEE, 2021:483-490.
[51] Sinha A, Dolz J. Multi-scale self-guided attention for medical image segmentation[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 25(1):121-130.
[52] Kamnitsas K, Ledig C, Newcombe V F J, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation[J]. Medical Image Analysis, 2017, 36:61-78. doi: 10.1016/j.media.2016.10.004
[53] Thoma M. A survey of semantic segmentation[EB/OL]. (2016-02-21)[2022-08-14]. https://doi.org/10.48550/arXiv.1602.06541.
[54] Kim Y J, Jang H, Lee K, et al. PAIP 2019: Liver cancer segmentation challenge[J]. Medical Image Analysis, 2021, 67:101854. doi: 10.1016/j.media.2020.101854