Journal of Xidian University ›› 2023, Vol. 50 ›› Issue (2): 23-32. doi: 10.19665/j.issn1001-2400.2023.02.003

• Information and Communication Engineering •

Low-light image dehazing network with aggregated context-aware attention

WANG Keyan, CHENG Jicong, HUANG Shirui, CAI Kunlun, WANG Weiran, LI Yunsong

  1. State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710071, China
  • Received: 2022-06-27  Online: 2023-04-20  Published: 2023-05-12
  • About the authors: WANG Keyan (1980—), female, associate professor, Ph.D., E-mail: kywang@mail.xidian.edu.cn; CHENG Jicong (1999—), male, M.S. candidate at Xidian University, E-mail: jccheng@stu.xidian.edu.cn; HUANG Shirui (2000—), female, M.S. candidate at Xidian University, E-mail: srhuang@stu.xidian.edu.cn; CAI Kunlun (1997—), male, M.S. candidate at Xidian University, E-mail: klcai@stu.xidian.edu.cn; WANG Weiran (1997—), male, M.S. candidate at Xidian University, E-mail: wrwang@stu.xidian.edu.cn; LI Yunsong (1974—), male, professor, Ph.D., E-mail: ysli@mail.xidian.edu.cn
  • Supported by:
    The General Program of the Natural Science Foundation of Shaanxi Province (2021JM-125)


Abstract:

Existing low-light dehazing algorithms are affected by the low and uneven illumination of hazy scenes, and their dehazed results often suffer from loss of detail and color distortion. To address these problems, a low-light image dehazing network with aggregated context-aware attention (ACANet) is proposed. First, an intra-layer context-aware attention module is introduced to identify and re-weight significant features at the same scale along both the channel and the spatial dimensions from a global perspective, so that the network breaks through the constraints of a local receptive field and extracts image texture information more efficiently. Second, an inter-layer context-aware attention module is introduced, which maps high-level features into a signal subspace through projection operations so as to fuse multi-scale feature information across layers efficiently and further improve the reconstruction of image details. Finally, a CIEDE2000 color-shift loss is adopted to constrain the image hue in the CIELAB color space and is optimized jointly with an L2 loss, enabling the network to learn image colors accurately and to correct severe color shift. Quantitative and qualitative experiments on several datasets demonstrate that the proposed ACANet outperforms existing dehazing methods: it improves the PSNR of the dehazed images by 8.8% over the baseline network and restores richer details and more natural colors that are closer to the real haze-free images.
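The intra-layer module is described only at a high level above, so the following PyTorch-style sketch is purely illustrative and is not the authors' ACANet code: it shows a generic block that first re-weights channels using globally pooled context and then re-weights spatial positions using channel-pooled statistics, i.e., the kind of combined channel/spatial attention with a global view that the abstract refers to. The class name, reduction ratio, and kernel size are assumptions.

import torch
import torch.nn as nn

class GlobalChannelSpatialAttention(nn.Module):
    # Illustrative channel + spatial attention with global context
    # (an assumption, not the paper's exact intra-layer module).
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: global average pooling -> bottleneck MLP -> per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: channel-pooled statistics -> 7x7 conv -> per-pixel weights.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)            # re-weight channels with global context
        avg_map = x.mean(dim=1, keepdim=True)  # average over channels
        max_map = x.amax(dim=1, keepdim=True)  # maximum over channels
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x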
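The joint objective can be pictured with a similar hedged sketch: an L2 reconstruction term plus a per-pixel CIELAB color-difference term. The full CIEDE2000 formula used in the paper is considerably more involved, so the simpler CIE76 Euclidean distance in Lab space stands in for it here, and color_weight is a placeholder rather than a value reported by the authors.

import torch

def rgb_to_lab(rgb: torch.Tensor) -> torch.Tensor:
    # Convert sRGB images in [0, 1] with shape (N, 3, H, W) to CIELAB (D65 white point).
    lin = torch.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    r, g, b = lin[:, 0], lin[:, 1], lin[:, 2]
    # Linear RGB -> XYZ, normalized by the D65 reference white.
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883

    def f(t):  # piecewise cube-root used by the XYZ -> Lab conversion
        return torch.where(t > 0.008856, t.clamp(min=1e-8) ** (1.0 / 3.0),
                           7.787 * t + 16.0 / 116.0)

    fx, fy, fz = f(x), f(y), f(z)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_lab = 200.0 * (fy - fz)
    return torch.stack([L, a, b_lab], dim=1)

def joint_loss(pred: torch.Tensor, target: torch.Tensor, color_weight: float = 0.1) -> torch.Tensor:
    # L2 reconstruction loss plus a Lab color-difference term (color_weight is a guess).
    mse = torch.mean((pred - target) ** 2)
    delta_e = torch.linalg.norm(rgb_to_lab(pred) - rgb_to_lab(target), dim=1)  # per-pixel color difference (CIE76)
    return mse + color_weight * delta_e.mean()

Computing the color term in Lab rather than RGB penalizes perceptual hue differences more directly, which matches the abstract's motivation for constraining hue in the CIELAB color space.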

Key words: low-light image dehazing, attention mechanism, feature fusion, color shift loss, deep learning

CLC number:

  • TP391.41