Article Abstract
Hao Yifan* **, Zhi Tian*, Du Zidong*. Bit-serial-based dynamic-precision neural network processor [J]. High Technology Letters (Chinese), 2022, 32(9): 881-893
Bit-serial-based dynamic-precision neural network processor
  
DOI:10.3772/j.issn.1002-0470.2022.09.001
Keywords: neural network processor; dynamic precision computing; bit-serial computing
Funding:
Author affiliations
Hao Yifan* ** (*State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190) (**University of Chinese Academy of Sciences, Beijing 100049)
Zhi Tian* (*State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190) (**University of Chinese Academy of Sciences, Beijing 100049)
Du Zidong* (*State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190) (**University of Chinese Academy of Sciences, Beijing 100049)
Abstract:
      Existing dynamic-precision neural network computing systems introduce substantial computation and memory-access overhead through periodic model retraining and dynamic precision switching. To address this, a dynamic-precision neural network processor (DPNN) based on bit-serial computing is proposed. DPNN supports neural network models of arbitrary scale and precision, allows fine-grained adjustment of model data precision without retraining, and eliminates the redundant computation and memory accesses caused by overlapping weight bits when precision is switched dynamically. Experimental results show that, compared with MinMaxNN, one of the latest advances in self-aware neural network systems (SaNNs), DPNN reduces computation by a factor of 1.34-2.52 and memory accesses by a factor of 1.16-1.93 on average. Compared with Stripes, a representative bit-serial neural network processor, DPNN improves performance by 2.57 times, reduces power consumption by 2.87 times, and reduces area by 1.95 times.
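To make the bit-overlap point concrete, the following Python sketch illustrates one way an MSB-first bit-serial dot product can raise weight precision by processing only the newly exposed low-order bit planes. It is an illustration under assumed conventions (unsigned 8-bit fixed-point weights, MSB-first bit planes), not the DPNN hardware design; the names bit_plane, bit_serial_dot, and extend_precision are hypothetical.

import numpy as np

def bit_plane(weights, bit, total_bits=8):
    # Extract bit `bit` (0 = most significant) of each unsigned fixed-point weight.
    shift = total_bits - 1 - bit
    return (weights >> shift) & 1

def bit_serial_dot(acts, weights, n_bits, total_bits=8):
    # Dot product using only the top `n_bits` weight bit planes, MSB first.
    # In hardware each step is an AND-and-accumulate followed by a shift.
    partial = 0
    for b in range(n_bits):
        partial += int(np.dot(acts, bit_plane(weights, b, total_bits))) << (total_bits - 1 - b)
    return partial

def extend_precision(acts, weights, partial, old_bits, new_bits, total_bits=8):
    # Raise weight precision from `old_bits` to `new_bits` bits by processing
    # only the newly exposed low-order bit planes; the planes shared with the
    # old precision are reused through `partial` instead of being recomputed
    # or refetched.
    for b in range(old_bits, new_bits):
        partial += int(np.dot(acts, bit_plane(weights, b, total_bits))) << (total_bits - 1 - b)
    return partial

# Usage: start a layer at 4-bit weights, later refine to 6 and then 8 bits
# without redoing the bit planes that were already processed.
acts = np.array([3, 1, 4, 1], dtype=np.int64)
weights = np.array([0b10110010, 0b01101100, 0b11100001, 0b00011101], dtype=np.int64)
p4 = bit_serial_dot(acts, weights, n_bits=4)
p6 = extend_precision(acts, weights, p4, old_bits=4, new_bits=6)
p8 = extend_precision(acts, weights, p6, old_bits=6, new_bits=8)
assert p8 == int(np.dot(acts, weights))

In this scheme a switch from 4-bit to 6-bit weights fetches and processes only two additional bit planes rather than recomputing all six, which is the kind of redundant work the abstract attributes to overlapping weight bits during dynamic precision switching.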