Bio-Algorithms and Med-Systems, Volume 8 – Jan 1, 2012


- Publisher: de Gruyter
- Copyright: © 2012
- ISSN: 1895-9091
- eISSN: 1896-530X
- DOI: 10.2478/bams-2012-0004

… thinking. Furthermore, they may also change our view of information processing and modern computing. Most common software implementations require great computing power and are therefore unsuitable for real-time applications. Additionally, biological neurons process information in parallel, which sequential simulation on a conventional computer cannot reproduce. We therefore present an alternative way to implement models of SNNs using FPGAs. In this paper we compare the models most commonly used to implement SNNs in reconfigurable hardware and review recent work on this subject.

KEYWORDS: Spiking Neural Networks, SNN, FPGA

Introduction

One of the most important recent trends in modern science is the pursuit of bio-inspired solutions and concepts. Evolutionary algorithms, swarm behavior and neural networks are only a few of these concepts. All of them help us to understand nature and allow the creation of better, more efficient technological solutions. This trend is also observable in computational intelligence systems. Such research is motivated by the desire to form a more comprehensive understanding of information processing in biological networks and to investigate how this understanding could be used to improve traditional information processing techniques [1]. Among the latest models of biological brain circuits are spiking neural networks (SNNs). Spiking neurons differ from the classical artificial neuron models that are most common today because they encode information in the frequency and temporal pattern of spikes [2]. A single neuron fires when its membrane potential exceeds a threshold. This mechanism is illustrated in Figure 1. SNNs can also incorporate richer dynamics than other artificial neural networks and for this reason resemble the real brain even more closely [3].
Thus several research groups use supercomputers to simulate biologically plausible models of particular brain structures. The best-known European project, BlueBrain, aims to model the rat brain. Scientists from this group are now able to model and simulate a single brain building block whose behavior is similar to that observed in neurophysiological recordings [4]. Another big project concerned with building an artificial brain is the C2 cortical simulator, financed and supported by IBM and the U.S. Defense Advanced Research Projects Agency (DARPA). In this project 1 billion neurons with 10 trillion synapses were modeled and successfully simulated. Interestingly, this model was computed by a BlueGene/P supercomputer that consumes 1.4 MW of energy, while the human brain needs only about 100 W [5]. Work of this kind is called brain reverse engineering.

Software implementation of SNNs is useful only for investigating the capabilities of models. Architectures of biological systems are inherently parallel, whereas PCs are based on sequential information processing. Because of the complexity and number of calculations, software simulation is rather slow. Neural networks implemented in hardware are much better suited for real applications. Developing Application Specific Integrated Circuits (ASICs) is time-consuming, difficult and very expensive. To overcome these drawbacks, hardware architectures implemented in FPGAs have been proposed. These devices consist of arrays of logic elements that can be configured in a desired way. The FPGA architecture allows parallel processing of information, which is similar to biological systems. A big advantage is also flexibility, which is important during implementation and testing. FPGAs are also commonly available and inexpensive compared to manufacturing ASICs. These advantages have motivated scientists to implement SNNs in such devices.

Section 2 of this article presents models of artificial neurons that are suitable for, and have been implemented in, reconfigurable devices. A review of recent studies and achievements in implementing artificial SNNs in FPGAs is presented in Section 3. A short summary closes the paper.

Figure 1. Mechanism of generating spikes in response to input potentials. The upper chart shows the membrane potential of a spiking neuron; the lower part presents the input spikes.

Artificial spiking neuron models

In this section we present widely used models of spiking neurons that are suitable for FPGAs. All of these models are described by differential equations. The following symbols will be used: V for the membrane potential, dV/dt for its derivative with respect to time, and I for the input current. To compare the number of floating-point operations needed to simulate 1 ms of each model, we assume that all of them are implemented with the fixed-step first-order Euler method [6].

Leaky Integrate & Fire Model

This model is the simplest way to implement a spiking neuron. It is described by a single differential equation,

dV/dt = I + a - bV,

where a and b are parameters. To simplify calculations, the membrane capacitance is taken as 1. (This assumption does not affect the dynamics of the model, because all information about the neuron is stored in the a, b and c parameters.) When the membrane potential reaches the threshold, the neuron fires a spike and V is reset to c. The Integrate & Fire neuron fires spikes at a frequency that depends on the input current. This model can resemble only two features of biological neurons. One simulation step of this model needs only four floating-point operations. However, this simplicity limits the dynamics of the model, which are much less rich than those of a biological neuron [6]. The low computational cost and simplicity of this model strongly promote its use in FPGAs; thus the vast majority of models implemented in reconfigurable hardware are I&F.
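As an illustration, the Integrate & Fire dynamics above can be simulated with the same fixed-step Euler scheme in a few lines. This is a software sketch only; all parameter values are illustrative and are not taken from any of the cited implementations:

```python
def simulate_lif(I, a=0.0, b=0.1, c=0.0, v_thresh=1.0, dt=0.125, t_ms=100.0):
    """Forward-Euler simulation of the I&F neuron dV/dt = I + a - b*V,
    with reset V <- c when V reaches v_thresh.
    All parameter values here are illustrative."""
    v = c
    spike_times = []
    for k in range(int(t_ms / dt)):
        v += dt * (I + a - b * v)        # one Euler step of the membrane ODE
        if v >= v_thresh:                # threshold crossed: emit a spike
            spike_times.append(k * dt)   # spike time in ms
            v = c                        # reset membrane potential
    return spike_times
```

With these values the firing frequency grows with the input current, and a current whose steady-state potential stays below the threshold produces no spikes at all.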
Izhikevich Spiking Neuron Model

This simple model of a spiking neuron is described by two differential equations [7]:

dV/dt = 0.04V^2 + 5V + 140 - U + I,
dU/dt = a(bV - U),

with the auxiliary reset: if V >= 30 mV, then V <- c and U <- U + d. The variable U is the membrane recovery variable; it accounts for the activation of potassium and the inactivation of sodium ionic currents and provides negative feedback to the membrane potential. After the membrane potential reaches its apex, U and V are reset. According to Izhikevich, this model can reproduce all known firing patterns of cortical neurons. These patterns are obtained by changing the parameters a, b, c and d (typical values are a = 0.02, b = 0.2, c = -65 mV and d = 2). Only 13 floating-point operations are needed to calculate one simulation step of this model. Because of its low computational cost and good biological accuracy it is ideally suited for implementation in FPGAs. Nevertheless, this model is not often reported as implemented in reconfigurable devices.

Hodgkin–Huxley Model

The Hodgkin–Huxley model [8] was developed by two scientists on the basis of experiments on the squid giant axon; they received the Nobel Prize for this work in 1963. The model is described by a set of equations:

C dV/dt = I - sum_i I_i,

where I_i are the ionic currents of the model. The current that flows through a particular ion channel is described by

I_i = g_i (V - E_i),

where E_i is the reversal potential of the i-th ion channel. In the case of voltage-gated ion channels, the conductance of each channel is a function of time and voltage, described by the following equations:

g_i = g_max_i * m^p * h^q,
dm/dt = (m_inf(V) - m) / tau_m(V),
dh/dt = (h_inf(V) - h) / tau_h(V),

where m and h are the gating variables for activation and inactivation, respectively; their values represent the fraction of the maximum conductance available at any given time and voltage. g_max_i is the maximum conductance of the i-th channel, and tau_m and tau_h are the time constants for activation and inactivation. For leak channels the conductance is constant. The original Hodgkin–Huxley model consists of sodium and potassium voltage-gated ion channel conductances and a constant leak conductance. An additional current I comes from outside as input.
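To make the structure of these equations concrete, here is a software sketch that integrates the classic Hodgkin–Huxley system with the same forward Euler method, using the standard squid-axon parameters from the literature. This is only an illustrative simulation, not the FPGA implementations discussed below:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (membrane potential in mV).
C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximum conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rate constants for m, h, n gates."""
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate_hh(I, t_ms=50.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley equations;
    returns approximate spike times (upward zero crossings of V)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting state
    spikes, above = [], False
    for k in range(int(t_ms / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        I_Na = g_Na * m**3 * h * (V - E_Na)      # sodium current
        I_K = g_K * n**4 * (V - E_K)             # potassium current
        I_L = g_L * (V - E_L)                    # leak current
        V += dt * (I - I_Na - I_K - I_L) / C
        m += dt * (am * (1.0 - m) - bm * m)      # gating variable updates
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        if V > 0.0 and not above:                # count one spike per crossing
            spikes.append(k * dt)
        above = V > 0.0
    return spikes
```

A sufficiently large constant input current (above roughly 6–7 uA/cm^2) produces repetitive firing, while zero input leaves the membrane at rest; counting the arithmetic per step makes clear why this model is far more expensive than the two previous ones.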
Of the three models presented here, only this one is biologically meaningful. It not only reproduces the behavior of a real neuron but also models the mechanisms responsible for that behavior. Thus the currents and potentials of this model can be measured and compared with biological ones. The main disadvantage of this model is its computational cost: one simulation step needs 120 floating-point operations. The Hodgkin–Huxley model is used when the main goal is biological realism of the implemented SNN.

Review of recent works

One of the main directions of research on implementing artificial spiking neural networks in FPGAs is the pursuit of as many neurons as possible on a single chip. The most commonly used I&F model allows building SNNs of high density. The first articles known to the authors describing implementations of artificial spiking neurons in reconfigurable hardware were published after the year 2000. In those papers, neuron models were implemented mainly using the System Generator software. These were attempts to implement single neurons working in reconfigurable hardware. The models did not use logic resources very efficiently: it was possible to implement 168 Integrate & Fire neurons with 168 synapses or, increasing the number of connections between neurons, a network of 13 neurons and 1300 synapses. The device used to implement these networks was the largest chip in the Xilinx Virtex II family, the XC2V8000 [9]. The authors also report the amount of resources needed: 33 slices per synapse model and 63 slices per neuron model. With this implementation, a clock speed of 100 MHz and a 0.125 ms Euler step, 1 second of model operation could be completed in 0.08 ms, a speed-up factor of 12500 compared to real time. This solution assumes that all of the models are computed in parallel.
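The reported speed-up can be checked with a few lines of arithmetic, assuming one Euler update of every neuron per clock cycle, which is what a fully parallel implementation provides:

```python
# Speed-up of the fully parallel design: 0.125 ms Euler step,
# one update of all neurons per 100 MHz clock cycle.
sim_time_s = 1.0                 # simulated biological time
euler_step_s = 0.125e-3          # integration step
clock_hz = 100e6                 # FPGA clock frequency

steps = sim_time_s / euler_step_s        # number of Euler updates needed
wall_time_s = steps / clock_hz           # hardware time to compute them
speedup = sim_time_s / wall_time_s       # gain over real time
print(steps, wall_time_s, speedup)
```

Eight thousand steps at 10 ns each take 80 microseconds, i.e. a speed-up of 12500 over real time.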
A simplified block diagram of this architecture is shown in Figure 2.

Figure 2. Block diagram of the parallel SNN architecture.

The next stage of developing SNNs in FPGAs used semi-parallel implementation. In this kind of architecture, one or a few soft processors implemented in reconfigurable hardware iterate through a collection of models that are computed in a dedicated logic block. The state of each neuron is stored in RAM, and previous values are given to the neuron or synapse block as input parameters. This approach allows implementing many more neurons on a single chip. A simplified block diagram of this solution is presented in Figure 3. The authors were able to implement 1,964,200 synapses and 4200 neurons. The ratio of neurons to connections is 1:467, which is closer to the real brain's ratio than that reached in previous works [10]. The described architecture puts the number of neurons before simulation speed: there is still a great speed-up compared to a PC, but such large networks cannot be simulated in real time.

The main drawback of the presented solutions is limited scalability. In the first case, every change of parameters, number of neurons or connections between them requires a manual change of the design and a time-consuming synthesis, which for complex networks may take more than a few hours. In the second case the states are not tied to the computational elements, so changing the number of neurons does not require changing the hardware design. Those changes can be made by changing the software running on the soft processor, which can also be hard and time-consuming: all neurons must have their initial states, parameters and connections defined in a program written in C/C++.

Figure 3. Block diagram of the semi-parallel SNN architecture.

The next step in developing SNNs in FPGAs was therefore to make the network size easy to change and to allow tuning parameters on the fly [11]. The authors of that article developed an integrated system that allows easy change of the network configuration and tuning of parameters while the whole system is running.
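The semi-parallel scheme described above can be sketched in software: a single shared update unit iterates over neuron states held in memory, which is essentially what the soft processor does with the state RAM. The neuron model and all parameter values below are illustrative, not taken from [10]:

```python
def multiplexed_step(states, currents, dt=0.125, a=0.0, b=0.1, c=0.0,
                     v_thresh=1.0):
    """One global time step of a time-multiplexed I&F network:
    a single shared update rule is applied to each stored neuron state
    in turn. Returns the indices of neurons that fired this step."""
    fired = []
    for idx in range(len(states)):             # iterate over the state RAM
        v = states[idx]                        # read previous state
        v += dt * (currents[idx] + a - b * v)  # shared I&F update logic
        if v >= v_thresh:
            fired.append(idx)
            v = c                              # reset on spike
        states[idx] = v                        # write state back
    return fired
```

Because the update logic is shared, network size is bounded by memory rather than by arithmetic resources, at the cost of serializing the neuron updates within each time step.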
The authors of [11] used automatic generation of System Generator models, which are then translated to VHDL and finally to hardware structures. All of the needed tools were developed in MATLAB and Simulink. The System Generator toolbox contains Shared Memory blocks, which were used to send parameters to the device while it was running. The whole SNN environment was divided into three auto-generated subsystems: states, parameters and outputs. All of them could be parameterized in MATLAB and then simply added to the System Generator model. The Hodgkin–Huxley neuron model was implemented using this concept. To speed up calculations, the mathematical operations were implemented in a pipelined manner, so that more than one operation is in flight per cycle. The authors were able to implement 40 neurons with all-to-all synaptic connections. Notably, they achieved a speed-up of 8.7 compared to real time. This design used 13,840 slices (90%) and 183 multipliers (95%) of a Virtex IV family XC4VSX35 device.

Application state of the art

During the last two or three years a slightly different direction of developing SNNs can be observed: scientists are now trying to show solutions in which spiking neural networks can be used in practice. One of the significant achievements in this field was made by scientists from the University of Ulster. They implemented a spiking neural network whose task was finding edges in pictures [12]. They focused on implementing a large number of neurons rather than guaranteeing real-time simulation, using a multiplexed SNN. This architecture is quite similar to the one that uses a soft processor, but here the network controller was implemented as a distinct logic block. They also exploited the PCI-X interface through which the FPGA was connected to the PC; over this connection the scientists were able to upload neuron parameters and even change the network topology. The results of their work are very promising.
They were able to implement Integrate & Fire neurons and synapses in a single Virtex IV XC4VLX160 device. Notably, this SNN model shows a speed-up of over 14 compared to software simulation on an Intel Xeon 2.8 GHz processor.

Another approach uses the Izhikevich neuron model [13]. As written before, this model resembles a biological neuron much better than the commonly used I&F model. Scientists successfully implemented an SNN based on this model and used it to recognize characters. They worked on Virtex II Pro XC2VP50 and Virtex IV XC4VLX160 devices and implemented 624 and 6264 Izhikevich neurons, respectively.

Another recently reported application of SNNs implemented in reconfigurable devices is the modeling of real biological structures. An example of such an implementation is a model of the olfactory bulb used for odor classification [14], created by scientists from the University of Leicester. They used Integrate & Fire neurons implemented in a Virtex II Pro FPGA device. A successful model of the olfactory bulb was constructed from logic blocks representing the synapses and soma of a neural cell. Their results are very promising: the model of the olfactory bulb is able to classify odors and recognize them even against an interfering background.

Summary

This paper presented a review of achievements in the field of modeling SNNs in reconfigurable devices. In recent years this domain of reconfigurable computing has expanded; in this period the number of spiking neurons implemented on a single chip has increased over 6000 times. The vast majority of scientists prefer implementing networks with a greater number of simple neuron models rather than a few complicated ones with high computational cost. Even more notably, new practical applications have been successfully implemented. The summary presented here gives a good prognosis for the future, especially because all of the presented works were realized in Virtex II or Virtex IV FPGAs.
The most recent reconfigurable devices provide 2,000,000 logic cells, compared to 150,000 in the most powerful Virtex IV. The estimated number of neurons, calculated based on the model described in [12], that can be implemented in various devices is presented in Table 1. Modeling and creating artificial structures that mimic real biological brain parts seems very interesting and will certainly lead to a better understanding of real neural mechanisms. Through the use of reconfigurable devices such as FPGAs, we get a relatively cheap and widely available platform for implementing and simulating high-capacity neural networks. Notable advantages of this solution are also greatly reduced power consumption and size compared to conventional computer-based solutions. ASICs and dedicated neural devices, on the other hand, are also small and energy-efficient, but the time to market, cost and effort connected with their development and production are much higher than for FPGAs. Additionally, reconfigurable devices are flexible, and their architecture can be changed freely while developing the final solution. Thus FPGAs are really well suited to implementing SNNs.

Table 1. Estimated numbers of neurons that can be implemented using different families of FPGA devices. The number of neurons was calculated assuming a linear correlation between the number of slices and the number of neurons, based on the architecture presented in [12].

| Family | Device | Number of slices | Max. number of neurons |
|---|---|---|---|
| Virtex II | XC2V8000 | 46,592 | 719,435 |
| Virtex IV | XC4VLX200 | 89,088 | 1,375,624 |
| Virtex 7 | XC7V2000T | 305,400 | 4,715,735 |

Acknowledgements

This article has been supported by AGH-UST grant No. 11.11.120.612.
