
Spiking Neural Networks in Spintronic Computational RAM




Publisher: Association for Computing Machinery
Copyright: © 2021 ACM
ISSN: 1544-3566
eISSN: 1544-3973
DOI: 10.1145/3475963

Abstract

Spiking Neural Networks (SNNs) are a biologically inspired computation model capable of emulating neural computation in the human brain and brain-like structures. Their main promise is very low energy consumption. Classic SNN hardware accelerators based on the von Neumann architecture, however, often fall short of addressing demanding computation and data-transfer requirements efficiently at scale. In this article, we propose a promising alternative to overcome these scalability limitations, based on a network of in-memory SNN accelerators, which can reduce energy consumption by up to 150.25× compared to a representative ASIC solution. The significant reduction in energy comes from two key aspects of the hardware design that minimize data-communication overheads: (1) each node is an in-memory SNN accelerator based on a spintronic Computational RAM (CRAM) array, and (2) a novel De Bruijn graph-based architecture establishes the connectivity among the SNN arrays.
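
The neuron model the accelerators evaluate is not spelled out in this abstract; a minimal sketch of the generic leaky integrate-and-fire (LIF) update that SNN hardware of this kind typically implements is shown below. The function name, parameters, and NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lif_step(v, spikes_in, weights, v_th=1.0, leak=0.9):
    """One leaky integrate-and-fire (LIF) timestep (illustrative sketch).

    v:         membrane potentials, shape (N,)
    spikes_in: binary input spikes, shape (M,)
    weights:   synaptic weights,    shape (N, M)
    """
    v = leak * v + weights @ spikes_in  # leak, then integrate weighted input spikes
    fired = v >= v_th                   # neurons crossing the threshold emit a spike
    v = np.where(fired, 0.0, v)         # reset the membrane potential of fired neurons
    return v, fired.astype(np.uint8)
```

On the connectivity side, a De Bruijn graph over a k-symbol alphabet with k**n nodes gives every node a constant out-degree of k and a diameter of only n, which is what makes the topology attractive for scalable interconnects. A minimal sketch of generating such an edge list follows; the helper is hypothetical and the paper's exact routing scheme may differ.

```python
def de_bruijn_edges(n, k=2):
    """Directed edge list of the De Bruijn graph B(k, n) with k**n nodes.

    Node v (an n-symbol string over a k-symbol alphabet, read as an
    integer) connects to every node obtained by shifting one symbol
    left and appending a new one: (k*v + s) mod k**n for s in 0..k-1.
    """
    num_nodes = k ** n
    return [(v, (k * v + s) % num_nodes)
            for v in range(num_nodes) for s in range(k)]
```

For example, de_bruijn_edges(3) yields the 16 directed edges of the 8-node binary De Bruijn graph, where node v forwards to (2v) mod 8 and (2v + 1) mod 8.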

Journal: ACM Transactions on Architecture and Code Optimization (TACO), Association for Computing Machinery

Published: Sep 29, 2021

Keywords: Processing in memory
