Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (2): 205-212. doi: 10.19665/j.issn1001-2400.2021.02.026

• Computer Science and Technology •

Reciprocal bi-directional generative adversarial network for cross-modal pedestrian re-identification

WEI Ziyu1, YANG Xi1, WANG Nannan1, YANG Dong2, GAO Xinbo3

  1. School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
    2. Xi’an Institute of Space Radio Technology, Xi’an 710100, China
    3. Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
  • Received: 2020-06-12 Revised: 2020-01-06 Online: 2021-04-20 Published: 2021-04-28
  • Contact: Xi YANG


To improve the accuracy of cross-modal pedestrian re-identification, a method based on a reciprocal bi-directional generative adversarial network is proposed. First, two generative adversarial networks are built to generate cross-modal heterogeneous images. Second, an associated loss is designed to pull the feature distributions in the latent space closer during translation between visible and infrared images, helping the networks generate fake heterogeneous images that are highly similar to the real ones. Finally, by concatenating the original and generated heterogeneous pedestrian images and feeding them into the discriminative feature extraction network, images from different modalities are unified into a common modality, thus suppressing the cross-modal gap. Representation learning and metric learning are then combined to obtain more discriminative pedestrian features. Comparative experiments on the SYSU-MM01 and RegDB datasets analyze the accuracy achieved with different loss functions. Compared with other state-of-the-art cross-modal pedestrian re-identification methods, the proposed method achieves higher accuracy and stronger robustness.
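The two central ideas in the abstract — an associated loss that pulls the visible and infrared latent feature distributions together, and the concatenation of original and generated heterogeneous images into a joint input for the feature extraction network — can be sketched as follows. This is a minimal numpy illustration only: the linear "encoders", dimensions, and random data are assumptions standing in for the paper's actual deep generative and discriminative networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoders": random linear projections into a shared latent space.
# The paper's encoders are deep networks; these are illustrative placeholders.
D_IMG, D_LAT = 64, 16
W_vis = rng.normal(size=(D_IMG, D_LAT))  # visible-branch encoder
W_ir = rng.normal(size=(D_IMG, D_LAT))   # infrared-branch encoder

def encode(x, W):
    """Project flattened image vectors into the shared latent space."""
    return x @ W

def associated_loss(z_a, z_b):
    """Mean squared distance between two batches of latent features.

    Minimizing this pulls the visible and infrared latent
    distributions closer during cross-modal image translation.
    """
    return float(np.mean((z_a - z_b) ** 2))

# Paired visible/infrared "images" of the same pedestrians (flattened).
x_vis = rng.normal(size=(8, D_IMG))
x_ir = rng.normal(size=(8, D_IMG))

z_vis = encode(x_vis, W_vis)
z_ir = encode(x_ir, W_ir)
loss = associated_loss(z_vis, z_ir)

# Concatenating each original image with its (here, real stand-in)
# heterogeneous counterpart forms the unified joint input that the
# discriminative feature extraction network consumes.
joint_input = np.concatenate([x_vis, x_ir], axis=1)  # shape (8, 128)
```

In the actual method the second operand of the concatenation is the GAN-generated fake heterogeneous image, so every sample reaches the feature network in the same unified form regardless of its source modality.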

Key words: generative adversarial networks, image translation, feature extraction, cross-modal pedestrian re-identification

CLC Number: TN911.73