Journal of Xidian University ›› 2025, Vol. 52 ›› Issue (1): 1-13. doi: 10.19665/j.issn1001-2400.20241012

• Information and Communication Engineering •


Decoder-side enhanced image compression network under distributed strategy

ZHANG Jing, WU Huixue, ZHANG Shaobo, LI Yunsong

  1. State Key Laboratory of Integrated Services Networks, Xidian University, Xi’an 710071, China
  • Received: 2024-06-21 Online: 2024-11-05 Published: 2024-11-05


Abstract:

With the rapid development of multimedia, large-scale image data puts great pressure on network bandwidth and storage. At present, deep-learning-based image compression methods still suffer from compression artifacts in the reconstructed image and slow training. To address these problems, we propose a decoder-side enhanced image compression network under a distributed strategy to reduce compression artifacts in the reconstructed image and improve the training speed. On the one hand, the original information aggregation subnetwork cannot effectively utilize the output information of the hyperprior decoder, which inevitably generates compression artifacts in the reconstructed image and degrades its visual quality. Therefore, we use a decoder-side enhancement module to predict the high-frequency components of the reconstructed image and reduce compression artifacts. Subsequently, to further improve the nonlinear capability of the network, a feature enhancement module is introduced to further improve the reconstructed image quality. On the other hand, distributed training is introduced to overcome the slow training of conventional single-node networks, effectively shortening the training time. However, gradient synchronization during distributed training generates a large amount of communication overhead, so we add a gradient sparsification algorithm to the distributed training: each node transmits only the important gradients, selected according to a probability, to the master node for updating, which further improves the training speed. Experimental results show that distributed training accelerates training while preserving the quality of the reconstructed image.
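The "high-frequency components" that the decoder-side enhancement module predicts can be illustrated with a classical frequency split. The sketch below is not the paper's learned module: a simple k × k box blur stands in for the low-pass operation, and the residual is the detail (edges, texture) that compression tends to lose and that the enhancement module aims to restore.

```python
import numpy as np

def high_frequency(image, k=3):
    """Split a grayscale image into low- and high-frequency parts.

    Illustrative only: the paper's enhancement module *learns* the
    high-frequency prediction; here a k x k box filter plays the role
    of the low-pass component.
    """
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    low = np.zeros(image.shape, dtype=float)
    # Accumulate the k x k neighborhood of every pixel (box blur).
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    low /= k * k
    return image - low  # the detail component lost by smoothing
```

A flat region has no detail, so its high-frequency part is zero; sharp edges produce large residuals, which is exactly where compression artifacts are most visible.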
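The gradient-sparsification idea can be sketched in a few lines. The abstract does not give the exact selection rule, so the function below is a minimal, hypothetical variant in the spirit of magnitude-based sparsification: each node sends only its largest-magnitude gradient entries to the master node and accumulates the unsent remainder locally as a residual for later steps.

```python
import numpy as np

def sparsify(grad, residual, keep_ratio=0.05):
    """Select important gradient entries to transmit to the master node.

    Hypothetical sketch: entries of (grad + residual) with the largest
    magnitudes are sent; everything else stays in the local residual,
    so no gradient information is permanently dropped.
    """
    acc = grad + residual                      # fold in leftover gradient
    k = max(1, int(keep_ratio * acc.size))     # number of entries to send
    thresh = np.partition(np.abs(acc).ravel(), -k)[-k]  # k-th largest magnitude
    mask = np.abs(acc) >= thresh               # entries important enough to send
    sent = np.where(mask, acc, 0.0)            # sparse update for the master
    new_residual = np.where(mask, 0.0, acc)    # unsent entries wait locally
    return sent, new_residual
```

Because `sent + new_residual` always equals the accumulated gradient, the scheme trades synchronization bandwidth for a short delay on small updates rather than discarding them.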

Key words: distributed training, decode-side enhancement, deep learning, image compression

CLC number: 

  • TP391.4