Journal of Xidian University ›› 2020, Vol. 47 ›› Issue (4): 55-63. doi: 10.19665/j.issn1001-2400.2020.04.008


High performance multiply-accumulator for the convolutional neural networks accelerator

KONG Xin1,2, CHEN Gang1, GONG Guoliang1, LU Huaxiang1,2,3,4, MAO Wenyu1

  1. Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
    2. University of Chinese Academy of Sciences, Beijing 100049, China
    3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
    4. Semiconductor Neural Network Intelligent Perception and Computing Technology Beijing Key Lab, Beijing 100083, China
  • Received: 2020-01-06  Online: 2020-08-20  Published: 2020-08-14
  • About the author: KONG Xin (1991—), male, Ph.D. candidate at the University of Chinese Academy of Sciences. E-mail: kongxin@semi.ac.cn
  • Supported by: the National Natural Science Foundation of China (U19A2080, U1936106, 61701473); the Strategic Priority Research Program of the Chinese Academy of Sciences, Category A (XDA18040400); the High Technology Project (31513070501); the Science and Technology Service Network Initiative (STS) of the Chinese Academy of Sciences (KFJ-STS-ZDTP-070); the Beijing Municipal Science and Technology Project (Z181100001518006); the Science and Technology Innovation Special Zone Project (1916312ZD00902201)

Abstract:

The multiply-accumulator (MAC) in existing convolutional neural network (CNN) accelerators generally suffers from a large area, a high power consumption and a long critical path. Aiming at these problems, this paper presents a full-custom high-performance MAC based on transmission gates for CNN accelerators. First, a new accumulation data compression structure suitable for the MAC is proposed, which reduces the hardware overhead. Second, a new parallel adder architecture is proposed; compared with the Brent-Kung adder, it reduces the number of gate delay stages and improves the calculation speed with the same hardware overhead. In addition, the advantages of the transmission gate are exploited to optimize each unit circuit of the MAC. The 16-by-8 fixed-point high-performance MAC designed with these methods has a critical path delay of 1.173 ns, a layout area of 9049.41 μm² and an average power consumption of 4.153 mW at 800 MHz under the SMIC 130 nm tt process corner. Compared with the traditional MAC, the speed is increased by about 37.42%, the area is reduced by about 47.84%, and the power consumption under the same conditions is reduced by about 56.77%.
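
To make the arithmetic concrete, here is a minimal behavioral sketch (in Python; the paper itself contains no code) of the two operations the abstract names: one 16-by-8 signed fixed-point multiply-accumulate step, and the carry recurrence that parallel-prefix adders such as Brent-Kung evaluate as a tree of gate stages. Every name and bit width below is an illustrative assumption, not the authors' transmission-gate circuit.

    # Behavioral sketch only: models the arithmetic of the MAC, not the
    # transmission-gate circuits or the accumulation compression structure.

    def to_signed(value: int, bits: int) -> int:
        """Reinterpret the low `bits` bits of `value` as two's complement."""
        value &= (1 << bits) - 1
        return value - (1 << bits) if value & (1 << (bits - 1)) else value

    def mac_step(acc: int, activation: int, weight: int, acc_bits: int = 32) -> int:
        """One multiply-accumulate step, acc += activation * weight, with a
        16-bit activation, an 8-bit weight and a wrapping accumulator
        (hardware registers wrap on overflow; acc_bits is an assumption)."""
        product = to_signed(activation, 16) * to_signed(weight, 8)
        return to_signed(acc + product, acc_bits)

    def carries(a: int, b: int, bits: int, cin: int = 0) -> list:
        """Ripple form of the carry recurrence c[i+1] = g[i] | (p[i] & c[i]).
        A parallel-prefix adder (Brent-Kung, or the adder proposed in the
        paper) computes the same values with a tree of the associative
        operator (g2, p2) o (g1, p1) = (g2 | (p2 & g1), p2 & p1), which is
        what reduces the number of gate delay stages."""
        c = [cin]
        for i in range(bits):
            g = (a >> i) & (b >> i) & 1        # generate: both bits are 1
            p = ((a >> i) ^ (b >> i)) & 1      # propagate: exactly one bit is 1
            c.append(g | (p & c[i]))
        return c

    # Example: a three-tap dot product, as in a CNN convolution window.
    acc = 0
    for a, w in [(1000, -3), (-512, 7), (20000, 5)]:
        acc = mac_step(acc, a, w)
    print(acc)  # 93416

The carry recurrence is written in ripple form for clarity; the point of the Brent-Kung family, and of the adder proposed here, is to evaluate it in O(log n) logic levels rather than n.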

Key words: multiply-accumulator, transmission gate, accumulation compression, convolutional neural network, high performance

CLC number: TN4