Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (3): 71-77.doi: 10.19665/j.issn1001-2400.2021.03.009

• Computer Science and Technology & Artificial Intelligence •

Deep consistency-preserving hashing

SHI Juan1(),XIE De2(),JIANG Qing3()   

  1. School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
    2. School of Electronic Engineering,Xidian University,Xi’an 710071,China
    3. Dept of Information Construction and Management,Baise Executive Leadership Academy,Baise 533013,China
  • Received:2019-09-27 Online:2021-06-20 Published:2021-07-05
  • Contact: De XIE E-mail:sjuan@gxu.edu.cn;dxie@stu.xidian.edu.cn;33811237@qq.com

Abstract:

At present, most existing cross-modal hashing methods fail to explore the relevance and diversity of data from different modalities, which leads to unsatisfactory search performance. To solve this problem, a simple yet efficient deep hashing model, named deep consistency-preserving hashing for cross-modal retrieval, is proposed. It simultaneously exploits the modality-common representation and the modality-private representation through a simple end-to-end network structure, and generates compact and discriminative hash codes for multiple modalities. Compared with other deep cross-modal hashing methods, the proposed method achieves significant performance improvements while its additional complexity and computation are negligible. Comprehensive evaluations on three cross-modal benchmark datasets show that the proposed method is superior to state-of-the-art cross-modal hashing methods.
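The abstract does not specify the network details, but the general idea of cross-modal hashing with modality-common and modality-private representations can be illustrated with a minimal NumPy sketch. All layer sizes, the tanh activations, the concatenation-based fusion, and the randomly initialized weights below are assumptions standing in for a trained network, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 512-d image features, 300-d text features, 32-bit codes.
D_IMG, D_TXT, D_COMMON, D_PRIV, BITS = 512, 300, 64, 64, 32

# Randomly initialized projections stand in for learned network weights.
W_img_c = rng.standard_normal((D_IMG, D_COMMON)) * 0.01
W_img_p = rng.standard_normal((D_IMG, D_PRIV)) * 0.01
W_txt_c = rng.standard_normal((D_TXT, D_COMMON)) * 0.01
W_txt_p = rng.standard_normal((D_TXT, D_PRIV)) * 0.01
W_hash = rng.standard_normal((D_COMMON + D_PRIV, BITS)) * 0.01

def encode(x, W_c, W_p):
    """Split one modality into a common and a private representation."""
    common = np.tanh(x @ W_c)    # modality-common representation
    private = np.tanh(x @ W_p)   # modality-private representation
    return common, private

def hash_codes(x, W_c, W_p):
    """Fuse both representations and binarize into {-1, +1} hash codes."""
    common, private = encode(x, W_c, W_p)
    h = np.concatenate([common, private], axis=1) @ W_hash
    return np.sign(h)  # sign() produces the binary code

# Toy batch: 4 image feature vectors and 4 text feature vectors.
img = rng.standard_normal((4, D_IMG))
txt = rng.standard_normal((4, D_TXT))
img_codes = hash_codes(img, W_img_c, W_img_p)
txt_codes = hash_codes(txt, W_txt_c, W_txt_p)

# Cross-modal retrieval ranks candidates by Hamming distance between codes.
hamming = (BITS - img_codes @ txt_codes.T) / 2
print(img_codes.shape, txt_codes.shape)
```

In an actual end-to-end model the projections would be deep encoders trained with a consistency-preserving objective so that matched image-text pairs receive nearby codes; this sketch only shows the data flow from two modalities to comparable binary codes.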

Key words: multi-modal learning, hashing, modality-common representation, modality-private representation, retrieval

CLC Number: 

  • TP391