
Fully Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories



Abstract

PHILIPPE COUSSY, CYRILLE CHAVET, HUGUES NONO WOUAFO, and LAURA CONDE-CANENCIA, Université de Bretagne Sud, Lab-STICC

The brain processes information through a complex, hierarchical associative memory organization distributed across a complex neural network. The GBNN associative memory model has recently been proposed as a new class of recurrent clustered neural network that is more efficient than classical models. In this article, we propose computational simplifications and architectural optimizations of the original GBNN. This work leads to significant complexity and area reductions without affecting either memorizing or retrieval performance. The results open new perspectives in the design of neuromorphic hardware to support large-scale, general-purpose neural algorithms.

Categories and Subject Descriptors: B.7.1 [Integrated Circuits]: Types and Design Styles--Algorithms implemented in hardware; C.1.3 [Processor Architectures]: Other Architecture Styles--Neural nets; I [Computing Methodologies]

General Terms: Design, Algorithms

Additional Key Words and Phrases: Neural network, sparse network, associative memory, neural cliques

ACM Reference Format: Philippe Coussy, Cyrille Chavet, Hugues Nono Wouafo, and Laura Conde-Canencia. 2015. Fully binary neural network model and optimized hardware architectures for associative memories. ACM J. Emerg. Technol. Comput. Syst. 11, 4, Article 35 (April 2015).
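The article itself is not reproduced on this page, but the abstract's key idea (a clustered associative memory storing messages as neural cliques over binary connections, retrieved by winner-take-all) can be illustrated with a toy sketch. All parameters, names, and sizes below are our own illustrative choices, not the authors' implementation:

```python
# Toy sketch of a GBNN-style clustered associative memory with fully
# binary weights. Illustrative only; not the article's architecture.

C, L = 4, 16                          # C clusters of L binary neurons
N = C * L
W = [[False] * N for _ in range(N)]   # binary connection matrix

def idx(cluster, neuron):
    """Flat index of a neuron inside a cluster."""
    return cluster * L + neuron

def store(message):
    """Store a message (one neuron per cluster) as a neural clique."""
    units = [idx(c, m) for c, m in enumerate(message)]
    for a in units:
        for b in units:
            if a != b:
                W[a][b] = True

def retrieve(partial, iters=4):
    """Fill in missing entries (None) by winner-take-all per cluster."""
    msg = list(partial)
    for _ in range(iters):
        active = [idx(c, m) for c, m in enumerate(msg) if m is not None]
        for c in range(C):
            # Score each neuron by how many active neurons in *other*
            # clusters are connected to it, then keep the winner.
            def score(n):
                return sum(W[u][idx(c, n)] for u in active if u // L != c)
            msg[c] = max(range(L), key=score)
    return msg

store([3, 7, 1, 12])
print(retrieve([3, 7, None, None]))   # recovers [3, 7, 1, 12]
```

Because connections are purely binary and each cluster activates a single winner, storage and retrieval reduce to set/test and count/compare operations, which is what makes hardware simplifications of the kind the abstract describes plausible.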


References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Association for Computing Machinery
Copyright
Copyright © 2015 by ACM, Inc.
ISSN
1550-4832
DOI
10.1145/2629510


Journal

ACM Journal on Emerging Technologies in Computing Systems (JETC), Association for Computing Machinery

Published: Apr 27, 2015
