Electronic Science and Technology ›› 2022, Vol. 35 ›› Issue (2): 20-26.doi: 10.16180/j.cnki.issn1007-7820.2022.02.004


Design of FPGA-Based SqueezeNet Inference Accelerator

CHU Ping, NI Wei

  1. School of Electronic Science and Applied Physics, Hefei University of Technology, Hefei 230009, China
  • Received: 2020-10-13    Online: 2022-02-15    Published: 2022-02-24
  • Supported by:
    Anhui Colleges Collaborative Innovation Project(PA2019AGXC0127)


The lightweight deep neural network SqueezeNet suffers from a large volume of intermediate data and long computation cycles. To accelerate calculation, this study partitions the entire network into process blocks, each composed of an Expand layer and a Squeeze layer. Because every process block ends with a Squeeze layer, the amount of intermediate data flowing between the computing module and memory is reduced, which lowers read/write overhead. The core calculation module introduces an early-termination technique for convolution that exploits the characteristics of the activation function: an effective index survival unit, an effective index control value unit, and a convolution judgment unit are designed to skip the computation and cycles occupied by invalid values in the convolution calculation. Experimental results show that the accelerator's data traffic is reduced by 55.38%, and the computation and cycles occupied by invalid values are reduced by 14.68%.
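The zero-skipping idea behind the effective-index units can be illustrated with a minimal software sketch (this is purely illustrative Python, not the paper's hardware design; the function names `effective_indices` and `conv_dot` are hypothetical). After a ReLU-style activation, many feature values are zero ("invalid values"); keeping a list of effective (nonzero) indices lets the multiply-accumulate loop skip the cycles those zeros would otherwise occupy:

```python
def effective_indices(activations):
    """Hypothetical 'effective index' builder: positions of nonzero inputs."""
    return [i for i, a in enumerate(activations) if a != 0]

def conv_dot(activations, weights):
    """Dot-product core of a convolution, iterating over effective indices only."""
    acc = 0
    for i in effective_indices(activations):  # zero inputs contribute nothing,
        acc += activations[i] * weights[i]    # so their MAC cycles are skipped
    return max(acc, 0)                        # ReLU on the output

acts = [0, 3, 0, 0, 2, 0, 1, 0]  # post-ReLU feature values: 5 of 8 are zero
wts  = [1, 2, 3, 4, 5, 6, 7, 8]
print(conv_dot(acts, wts))               # 3*2 + 2*5 + 1*7 = 23
print(len(effective_indices(acts)))      # only 3 of 8 MACs executed
```

In hardware, the saving comes from never issuing the skipped multiply-accumulate operations at all, which reduces both the computation amount and the cycle count, as the reported 14.68% reduction reflects.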

Key words: lightweight deep neural network, SqueezeNet, process block, activation function, early termination of the convolution calculation, effective index, invalid value, calculation period

CLC Number: TP183