
Accelerating On-Chip Training with Ferroelectric-Based Hybrid Precision Synapse



Publisher: Association for Computing Machinery
Copyright: © 2022 Association for Computing Machinery
ISSN: 1550-4832
eISSN: 1550-4840
DOI: 10.1145/3473461

Abstract

In this article, we propose a hardware accelerator design using a ferroelectric field-effect transistor (FeFET)-based hybrid precision synapse (HPS) for deep neural network (DNN) on-chip training. The drain erase scheme for FeFET programming is incorporated into both the FeFET HPS design and the FeFET buffer design. By using drain erase, high-density FeFET buffers can be integrated on-chip to store the intermediate input/output activations and gradients, which reduces energy-consuming off-chip DRAM accesses. Architectural evaluation results show that the energy efficiency is improved by 1.2× ∼ 2.1× and 3.9× ∼ 6.0× compared with other HPS-based designs and emerging non-volatile memory baselines, respectively. The chip area is reduced by 19% ∼ 36% compared with designs using an SRAM on-chip buffer, even though the FeFET buffer capacity is larger. In addition, by utilizing the drain erase scheme for FeFET programming, the chip area is reduced by 11% ∼ 28.5% compared with designs using the body erase scheme.
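To make the hybrid precision synapse idea more concrete, the Python sketch below illustrates one common way a hybrid-precision weight can be organized for on-chip training: coarse weight levels held in non-volatile cells and fine gradient updates accumulated in a small volatile register, with occasional carry transfers into the non-volatile part. The class name, level counts, and update rule are illustrative assumptions for this general scheme, not the specific FeFET circuit or programming flow described in the paper.

```python
import numpy as np

# Illustrative (hypothetical) model of a hybrid precision synapse (HPS):
# most-significant bits (MSBs) of each weight live in non-volatile cells,
# while fine-grained gradient updates accumulate in a volatile
# least-significant-bit (LSB) register. Only when the LSB accumulator
# overflows is a carry transferred into the non-volatile MSB part, so the
# slow, energy-costly non-volatile programming happens infrequently.
# This is a generic sketch of the hybrid-precision idea, not the exact
# scheme used in the paper.

MSB_LEVELS = 16   # assumed number of conductance levels in the non-volatile part
LSB_LEVELS = 16   # assumed resolution of the volatile accumulator

class HybridPrecisionSynapse:
    def __init__(self, shape):
        self.msb = np.zeros(shape, dtype=np.int32)  # non-volatile part
        self.lsb = np.zeros(shape, dtype=np.int32)  # volatile accumulator

    def weight(self):
        # Effective weight combines coarse MSB and fine LSB contributions.
        return self.msb + self.lsb / LSB_LEVELS

    def update(self, grad, lr=0.1):
        # Accumulate the quantized gradient step in the volatile LSB part.
        self.lsb += np.round(-lr * grad * LSB_LEVELS).astype(np.int32)
        # Transfer carries into the non-volatile MSB part on overflow.
        carry = self.lsb // LSB_LEVELS
        self.msb = np.clip(self.msb + carry, -MSB_LEVELS, MSB_LEVELS)
        self.lsb -= carry * LSB_LEVELS

# Example: one training step on a 2x2 weight tile.
syn = HybridPrecisionSynapse((2, 2))
syn.update(np.array([[0.5, -1.0], [2.0, 0.0]]))
print(syn.weight())
```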

Journal

ACM Journal on Emerging Technologies in Computing Systems (JETC), Association for Computing Machinery

Published: Jan 12, 2022

Keywords: Deep neural network
