Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (3): 99-105. DOI: 10.19665/j.issn1001-2400.2021.03.013

• Computer Science and Technology & Artificial Intelligence •

Effective learning strategy for hard samples

CAO Yi, CAI Xiaodong

  1. School of Information and Communication Engineering, Guilin University of Electronic Technology, Guilin 541004, China
  • Received: 2019-12-15 Online: 2021-06-20 Published: 2021-07-05
  • Contact: Xiaodong CAI E-mail: x-l-cy@qq.com; caixiaodong@guet.edu.cn

Abstract:

To improve the learning efficiency for hard samples and to reduce the noise interference caused by superfluous hard samples in deep hashing algorithms, a generic strategy called Loss to Gradient is proposed for hard sample learning. First, a non-uniform gradient normalization method is proposed to improve the model's ability to learn hard samples: back-propagation gradients are weighted by the loss ratio between hard samples and all samples. Second, a weighted random sampling method is designed to improve accuracy when superfluous hard samples are present: according to their losses, training samples are weighted and under-sampled to filter noise, while a small number of hard samples are retained to avoid over-fitting. On open datasets, the average accuracy of hash feature retrieval is improved by 4.7% and 3.4%, respectively. Experimental results show that the improved method outperforms other benchmark methods in accuracy, demonstrating that the feature representations of hard samples in the dataset can be learned effectively.
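The abstract describes two components: loss-ratio gradient weighting for hard samples and loss-based weighted under-sampling. The following is a minimal PyTorch-style sketch of how these ideas might be realized, assuming per-sample losses are available; the function names, the hard-sample thresholding rule, and the keep-probability form are illustrative assumptions rather than the paper's exact formulation.

import torch

def loss_to_gradient_weights(per_sample_loss, hard_threshold):
    # Treat samples whose loss exceeds the threshold as hard (assumed rule).
    hard_mask = per_sample_loss > hard_threshold
    total_loss = per_sample_loss.sum().clamp(min=1e-12)
    # Loss ratio between hard samples and all samples, as described in the abstract.
    hard_ratio = per_sample_loss[hard_mask].sum() / total_loss
    weights = torch.ones_like(per_sample_loss)
    weights[hard_mask] = 1.0 + hard_ratio      # up-weight hard samples by the loss ratio
    return weights / weights.sum()             # normalize so the batch loss keeps its scale

def undersample_by_loss(per_sample_loss, keep_ratio=0.7):
    # Keep probability decreases with loss, so most superfluous (possibly noisy)
    # hard samples are filtered out while a small random share is retained.
    keep_probs = 1.0 / (1.0 + per_sample_loss)
    num_keep = max(1, int(keep_ratio * per_sample_loss.numel()))
    return torch.multinomial(keep_probs, num_keep, replacement=False)

# Hypothetical usage inside one training step, given a per-sample criterion:
#   per_sample_loss = criterion(model(x), y)                      # shape: (batch,)
#   keep = undersample_by_loss(per_sample_loss.detach())          # filter noisy hard samples
#   w = loss_to_gradient_weights(per_sample_loss[keep].detach(),
#                                hard_threshold=per_sample_loss[keep].mean())
#   (w * per_sample_loss[keep]).sum().backward()                  # hard samples get larger gradients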

Key words: sample analysis, gradient method, hash functions, deep neural networks

CLC Number: TP183