Hyperspectral images are rich in spectral information, while LiDAR data provide detailed elevation information. In remote sensing, the two are often fused to improve interpretation accuracy. However, when exchanging information between different data sources, existing methods do not fully exploit the advantages of multi-source data fusion. This paper therefore presents a dual-branch fusion classification algorithm that combines convolutional networks and Transformers to leverage the strengths of both data types. In the feature extraction stage, a cross-modal feature coupling module is designed: through channel feature interaction and spatial feature interaction, it improves the consistency of the features extracted by the backbone network and enhances both the complementarity between the data sources and the semantic relevance of the features. In the feature fusion stage, a bilateral attention feature fusion module is designed that uses a cross-attention mechanism to perform bidirectional feature fusion between the hyperspectral and LiDAR features, strengthening cross-modal complementarity, reducing redundant information, and ensuring that the optimized features can be fused efficiently before being passed to the classifier, thereby improving the accuracy and robustness of the classification network. Experimental results show that, compared with existing fusion classification algorithms, the proposed algorithm achieves more advanced results on the Houston2013 and Trento datasets, with average classification accuracy improvements of 1.43% and 4.81%, respectively, indicating that it significantly enhances the network's ability to distinguish the spatial and spectral features of hyperspectral images and improves fusion classification accuracy.
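For concreteness, the sketch below shows one way a bidirectional (bilateral) cross-attention fusion step of this kind can be wired up in PyTorch: each modality's tokens query the other modality, and the mutually enhanced features are concatenated and projected before classification. All module names, token shapes, and dimensions here are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch (assumed names/shapes) of bidirectional cross-attention fusion
# between hyperspectral (HSI) and LiDAR feature tokens.
import torch
import torch.nn as nn


class BilateralCrossAttentionFusion(nn.Module):
    """Fuse HSI and LiDAR token sequences with two cross-attention passes."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # HSI tokens query the LiDAR tokens, and vice versa.
        self.hsi_to_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lidar_to_hsi = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_hsi = nn.LayerNorm(dim)
        self.norm_lidar = nn.LayerNorm(dim)
        # Project the concatenated, mutually enhanced features for the classifier.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, hsi_tokens: torch.Tensor, lidar_tokens: torch.Tensor) -> torch.Tensor:
        # hsi_tokens, lidar_tokens: (batch, tokens, dim)
        hsi_enh, _ = self.hsi_to_lidar(hsi_tokens, lidar_tokens, lidar_tokens)
        lidar_enh, _ = self.lidar_to_hsi(lidar_tokens, hsi_tokens, hsi_tokens)
        hsi_enh = self.norm_hsi(hsi_tokens + hsi_enh)        # residual + norm
        lidar_enh = self.norm_lidar(lidar_tokens + lidar_enh)
        fused = torch.cat([hsi_enh, lidar_enh], dim=-1)      # (batch, tokens, 2*dim)
        return self.proj(fused)                              # (batch, tokens, dim)


# Example: fuse 49 spatial tokens (a 7x7 patch) from each branch.
hsi = torch.randn(2, 49, 64)
lidar = torch.randn(2, 49, 64)
fused = BilateralCrossAttentionFusion(dim=64, num_heads=4)(hsi, lidar)
print(fused.shape)  # torch.Size([2, 49, 64])
```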