Electronic Science and Technology ›› 2019, Vol. 32 ›› Issue (8): 12-16.doi: 10.16180/j.cnki.issn1007-7820.2019.08.003


Adaptive Positions Fusion for Visual Tracking

WANG Runling   

  1. School of Sciences, North China University of Technology, Beijing 100144, China
  • Received: 2018-08-20 Online: 2019-08-15 Published: 2019-08-12
  • Supported by:
    National Key R&D Program of China (2017YFC0821102)


To improve the real-time performance and robustness of the hierarchical convolutional features (HCF) visual tracking method, an adaptive position-fusion tracker based on multiple correlation filters was proposed. First, features were extracted from the Pool4 layer of the VGG-19 network, and the multi-channel feature maps were pruned by an average feature energy ratio to accelerate the algorithm. Then, several correlation filters were trained with sample labels drawn from different Gaussian distributions, and their predicted positions were fused adaptively. Finally, a sparse model-update strategy was applied for a further speed-up. The proposed algorithm was evaluated on the OTB100 benchmark dataset. The results showed an average precision of 86.3%, which was 2.6 percentage points higher than that of the hierarchical convolutional features method, and the tracker remained robust under occlusion, deformation, and similar-object interference. The average speed was 45.2 frames per second, four times that of the original method, demonstrating favorable real-time performance.
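The two key accelerating and fusing steps described above can be sketched in Python with NumPy. This is an illustrative reading of the abstract, not the paper's implementation: the exact definition of the "average feature energy ratio" and the adaptive fusion weights are not given in the abstract, so the energy-ratio formula and the response-peak weighting below are assumptions.

```python
import numpy as np

def prune_channels(feat, thresh=0.1):
    """Prune feature-map channels by an assumed average-energy ratio.

    feat: H x W x C feature map (e.g. the VGG-19 Pool4 output).
    Channels whose energy, relative to the mean channel energy,
    falls below `thresh` are discarded to speed up filtering.
    (Assumed formula; the paper's exact ratio may differ.)
    """
    energy = (feat ** 2).sum(axis=(0, 1))   # per-channel energy
    ratio = energy / energy.mean()          # ratio to the average energy
    return feat[:, :, ratio >= thresh]

def fuse_positions(positions, responses):
    """Adaptively fuse the positions predicted by K correlation filters.

    positions: (K, 2) array of (x, y) predictions, one per filter.
    responses: (K,) peak correlation responses, used here as
    confidence weights (a plausible choice, not stated in the abstract).
    """
    w = np.asarray(responses, dtype=float)
    w = w / w.sum()                         # normalize to a convex combination
    return (np.asarray(positions) * w[:, None]).sum(axis=0)
```

For example, a filter with a sharp, high response peak pulls the fused position toward its own prediction, while low-confidence filters contribute little, which matches the abstract's description of fusing all predicted positions adaptively.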

Key words: visual tracking, correlation filter, convolutional feature, position fusion, model update, real-time performance

CLC Number: TP391.41