Fully Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories
PHILIPPE COUSSY, CYRILLE CHAVET, HUGUES NONO WOUAFO, and LAURA CONDE-CANENCIA, Université de Bretagne Sud, Lab-STICC

Abstract

The brain processes information through a hierarchical associative memory organization distributed across a complex neural network. The GBNN associative memory model has recently been proposed as a new class of recurrent clustered neural network that offers higher efficiency than the classical models. In this article, we propose computational simplifications and architectural optimizations of the original GBNN. This work leads to significant complexity and area reduction without affecting either memorization or retrieval performance. The obtained results open new perspectives in the design of neuromorphic hardware to support large-scale general-purpose neural algorithms.

Categories and Subject Descriptors: B.7.1 [Integrated Circuits]: Types and Design Styles--Algorithms implemented in hardware; C.1.3 [Processor Architectures]: Other Architecture Styles--Neural nets; I [Computing Methodologies]

General Terms: Design, Algorithms

Additional Key Words and Phrases: Neural network, sparse network, associative memory, neural cliques

ACM Reference Format:
Philippe Coussy, Cyrille Chavet, Hugues Nono Wouafo, and Laura Conde-Canencia. 2015. Fully binary neural network model and optimized hardware architectures for associative memories. ACM J. Emerg. Technol. Comput. Syst. 11, 4, Article 35 (April