Journal of Xidian University ›› 2024, Vol. 51 ›› Issue (6): 73-83. doi: 10.19665/j.issn1001-2400.20240702

• Information and Communication Engineering •

Fusion classification network for hyperspectral and LiDAR feature coupling modeling

XU Haitao1,2, LIU Yuzhe3, YAN Xinyi3, LI Jiaojiao3, XUE Changbin1

  1. National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
    2. University of Chinese Academy of Sciences, Beijing 100049, China
    3. School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
  • Received: 2024-04-21 Online: 2024-12-20 Published: 2024-07-18
  • Corresponding authors: LI Jiaojiao (1987-), female, professor, E-mail: jjli@xidian.edu.cn;
    XUE Changbin (1972-), male, researcher, E-mail: xuechangbin@nssc.ac.cn
  • About the authors: XU Haitao (1982-), male, researcher, E-mail: xuhaitao@nssc.ac.cn;
    LIU Yuzhe (1999-), male, master's student at Xidian University, E-mail: yuzheliu@stu.xidian.edu.cn;
    YAN Xinyi (2002-), female, master's student at Xidian University, E-mail: 20009101594@stu.xidian.edu.cn
  • Funding:
    Key Research Program of the Chinese Academy of Sciences (KGFZD-145-2023-15); National Natural Science Foundation of China (62371359); Young Talent Fund of University Association for Science and Technology in Shaanxi, China (20210104)



Abstract:

Hyperspectral image data are rich in spectral information, while LiDAR data provide detailed elevation information. In remote sensing, the two are usually fused to improve interpretation accuracy. However, existing methods do not fully exploit the advantages of multi-source data fusion when exchanging information between the different data sources. Therefore, this paper presents a dual-branch fusion classification algorithm that combines convolutional networks and Transformers to leverage the strengths of both data types. In the feature extraction stage, a cross-modal feature coupling module is designed, which improves the consistency of the features extracted by the backbone network through channel and spatial feature interaction, enhancing the complementarity between the data and the semantic relevance of the features. In the feature fusion stage, a bilateral attention feature fusion module is designed, which adopts a cross-attention mechanism to perform bidirectional feature fusion on the hyperspectral and LiDAR data, enhancing complementarity between the data, reducing redundant information, and ensuring that the optimized features are efficiently fused and fed into the classifier, thereby significantly improving the accuracy and robustness of the classification network. Experimental results show that, compared with existing fusion classification algorithms, the proposed algorithm achieves superior results on the Houston2013 and TRENTO datasets, improving the average classification accuracy by 1.43% and 4.81%, respectively, which indicates that the algorithm significantly enhances the network's ability to discriminate the spatial and spectral features of hyperspectral images and improves fusion classification accuracy.
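To make the bidirectional fusion step concrete, the sketch below shows one possible form of the cross-attention exchange between the hyperspectral and LiDAR branches described above. It is a minimal illustration, not the authors' implementation: the module name BilateralAttentionFusion, the token shapes, and all dimensions are assumptions chosen for the example.

```python
# Minimal PyTorch-style sketch of a bidirectional (bilateral) cross-attention fusion
# of hyperspectral (HSI) and LiDAR branch features. All names and sizes are illustrative.
import torch
import torch.nn as nn


class BilateralAttentionFusion(nn.Module):
    """Fuse HSI and LiDAR token sequences with two cross-attention passes."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # HSI tokens attend to LiDAR tokens (queries come from the HSI branch).
        self.hsi_to_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # LiDAR tokens attend to HSI tokens (queries come from the LiDAR branch).
        self.lidar_to_hsi = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_h = nn.LayerNorm(dim)
        self.norm_l = nn.LayerNorm(dim)
        # Project the concatenated, mutually enhanced features before the classifier.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, hsi_tokens: torch.Tensor, lidar_tokens: torch.Tensor) -> torch.Tensor:
        # hsi_tokens, lidar_tokens: (batch, tokens, dim) features from the two branches.
        h_enh, _ = self.hsi_to_lidar(query=hsi_tokens, key=lidar_tokens, value=lidar_tokens)
        l_enh, _ = self.lidar_to_hsi(query=lidar_tokens, key=hsi_tokens, value=hsi_tokens)
        h = self.norm_h(hsi_tokens + h_enh)    # residual keeps the original spectral cues
        l = self.norm_l(lidar_tokens + l_enh)  # residual keeps the original elevation cues
        fused = torch.cat([h.mean(dim=1), l.mean(dim=1)], dim=-1)  # pool tokens, concatenate branches
        return self.proj(fused)                # (batch, dim) feature fed to the classifier


if __name__ == "__main__":
    fusion = BilateralAttentionFusion(dim=64, num_heads=4)
    hsi = torch.randn(2, 49, 64)     # e.g. tokens from a 7x7 HSI patch
    lidar = torch.randn(2, 49, 64)   # matching LiDAR-derived tokens
    print(fusion(hsi, lidar).shape)  # torch.Size([2, 64])
```

The two attention passes run in opposite directions, so each modality is refined by the other before pooling, which is one straightforward way to realize the bidirectional, complementarity-enhancing fusion the abstract describes.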

Key words: hyperspectral imaging, LiDAR, deep learning, data fusion, classification

CLC number:

  • TP753