688 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 14, NO. 3, MAY 2003

FPGA Implementation of a Pulse Density Neural Network With Learning Ability Using Simultaneous Perturbation

Yutaka Maeda and Toshiki Tada

Abstract—Hardware realization is very important when considering wider applications of neural networks (NNs). In particular, hardware NNs with a learning ability are intriguing. In these networks, the learning scheme is of much interest, with the backpropagation method being widely used. A gradient type of learning rule is not easy to realize in an electronic system, since calculation of the gradients for all weights in the network is very difficult. More suitable is the simultaneous perturbation method, since the learning rule requires only forward operations of the network to modify weights, unlike the backpropagation method. In addition, pulse density NN systems have some promising properties, as they are robust to noisy situations and can handle analog quantities based on digital circuits. In this paper, we describe a field-programmable gate array realization of a pulse density NN using the simultaneous perturbation method as the learning scheme. We confirm the viability of the design and the operation of the actual NN system through some examples.

Index Terms—Field-programmable gate array (FPGA), learning ability, neural networks (NNs), pulse density, simultaneous perturbation.

Manuscript received October 18, 2001; revised March 4, 2002 and January 3, 2003. This work was supported in part by the Kansai University High Technology Research Center. The authors are with the Department of Electrical Engineering, Faculty of Engineering, Kansai University, Osaka 564-8680, Japan. Digital Object Identifier 10.1109/TNN.2003.811357

I. INTRODUCTION
NEURAL NETWORKS (NNs) are widely used in a number of applications, in which the NNs are usually implemented as a software program on an ordinary digital computer. However, software implementations cannot utilize the essential property of parallelism found in biological NNs. In this respect, implementation of NNs using hardware elements such as very large-scale integration (VLSI) is beneficial. When considering the hardware implementation of an NN, realization of the learning mechanism as a hardware system is an important and difficult issue [1]. As we well know, the backpropagation method is commonly used. However, realization of the backpropagation method as an electronic system is very difficult, considering the wiring needed to deliver modifying quantities to all weights, the calculation of the derivative of the sigmoid function, and so on. Thus, it is particularly difficult to implement large-scale NNs with learning ability via the gradient method because of the complexity of the mechanism that derives the gradient. From this point of view, we must try to find a learning rule that is easy to realize.

The simultaneous perturbation method was introduced by Spall [2], [3], Alspector et al. [4], and Cauwenberghs [5]. Maeda also independently proposed a learning rule for NNs using simultaneous perturbation and reported the feasibility of the learning rule [6]–[8]. At the same time, the merit of the learning rule was demonstrated in VLSI implementations of analog NNs [9], [10]. The advantage of the simultaneous perturbation optimization method is its simplicity: the method can estimate the gradient using only values of the error function. Therefore, implementation of this learning rule is relatively easy compared to that of other learning rules, because it does not require an error backpropagation circuit.
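The update equations themselves fall in a portion of the paper not reproduced in this excerpt, so the following Python sketch illustrates only the generic form of the simultaneous perturbation rule: every weight is perturbed at once by a random sign vector, and a single common modifying quantity is estimated from two forward evaluations of the error function. All names here (`sp_update`, `error_fn`, `alpha`, `c`) are ours, not the paper's.

```python
import numpy as np

def sp_update(w, error_fn, alpha=0.05, c=0.01, rng=np.random.default_rng()):
    """One simultaneous perturbation step: two forward passes, no gradients."""
    s = rng.choice([-1.0, 1.0], size=w.shape)  # one random sign per weight
    j0 = error_fn(w)                           # error of the unperturbed network
    j1 = error_fn(w + c * s)                   # error with all weights perturbed at once
    delta = alpha * (j1 - j0) / c              # common modifying quantity, broadcast to every weight
    return w - delta * s                       # each weight applies it with its own sign

# toy usage: fit w to a target using only forward evaluations of the error
target = np.array([0.3, -0.7, 1.2])
w = np.zeros(3)
for _ in range(3000):
    w = sp_update(w, lambda v: float(np.sum((v - target) ** 2)))
print(w)  # approaches target with no backpropagation circuit anywhere
```

Because the same signed quantity is distributed to all weights and each weight only chooses count-up or count-down, the wiring cost of this rule grows far more slowly than that of backpropagation.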
Certain pulse techniques, such as pulse width or pulse stream, have also been investigated to implement artificial NNs. For example, El-Masry et al. reported an efficient implementation of artificial NNs using a current-mode pulse-width-modulation artificial NN [11]. Moreover, Murray et al. proposed a VLSI NN using analog and digital techniques [12]. In particular, pulse density NNs have fascinating properties. For example, pulse systems are invulnerable to noisy conditions. Moreover, pulse density systems can handle quantized analog values based on digital circuits [13]. Based on these features, Hikawa reported a frequency-based NN using backpropagation [14]. In [14], the ordinary backpropagation method is applied to a pulse density NN. However, it seems difficult to employ the backpropagation method for a pulse density system: the NN system described in [14] has to realize the error propagation mechanism in pulse density form, in which case the circuit design becomes complex compared with the simultaneous perturbation method.
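The noise-robustness claim can be made concrete with a small experiment of our own (not from the paper): a pulse-density code is an average over many pulses, so corrupting a few of them shifts the decoded value only slightly, whereas a single bit error in a positional binary word can be large.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                      # pulses per observation window
value = 0.75                  # analog quantity to encode

stream = (rng.random(N) < value).astype(np.uint8)  # pulse-density encoding

noisy = stream.copy()
flips = rng.choice(N, size=10, replace=False)      # corrupt 10 of 1024 pulses
noisy[flips] ^= 1

print(stream.mean())  # ~0.75
print(noisy.mean())   # still ~0.75: the error is bounded by 10/1024

# contrast: one bit error in an 8-bit positional code
word = int(value * 255)                  # 191
print(abs((word ^ 0x80) - word) / 255)   # an MSB flip changes the value by ~0.5
```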
Recently, field-programmable gate arrays (FPGAs) have been used in many commercial fields because of their reconfigurability and flexibility [15]. FPGAs also seem to be promising devices for implementing NNs, in comparison with ordinary software implementations. VHDL is a very popular hardware description language (HDL) for describing or designing digital circuits, and HDL is used in the fundamental design of this research.

Combining a pulse density system with the simultaneous perturbation method, we can easily design analog hardware NN systems with learning capability. Some of the features of a pulse density NN system using an FPGA can be summarized as follows:
1) hardware can take advantage of parallelism;
2) the simultaneous perturbation learning rule is very simple;
3) an analog NN system is realized based on digital circuits;
4) the digital design technology used is supported by electronic design automation; and
5) pulse density NNs are not affected by noisy situations.
II. SIMULTANEOUS PERTURBATION LEARNING RULE

Details of the simultaneous perturbation method as a learning rule of NNs have been described previously [6]–[9], [13] and are reiterated in this section.

[...]

Fig. 2. Weight unit.
Fig. 3. Weight modification part.

[...] unit and carries out addition or subtraction of the perturbation. At the same time, it stores the weight value. The random-number generation part generates a random number using a linear feedback shift register. If the sign of the result of the unit is positive, the output is sent to the positive side of the neuron unit; if the sign is negative, the output is sent to the negative side.
1) Weight Modification: Fig. 3 depicts the weight modification part. The first counter (eight bits) and the first flip-flop (FF) in this part (the left counter and FF in Fig. 3) store an initial value of a weight and its corresponding sign, respectively. The basic modifying quantity in (2) is common to all weights. This quantity is sent from the learning unit and connected to the first counter. The sign of the quantity is connected to the first FF. The sign in (2), which is generated by the linear feedback shift register, is also connected to the FF, which decides whether counting up or down should be performed. These operations modify the weights as represented in (2).

Another role of the weight modification part is to add a perturbation to the weight. This is done simultaneously for all weights in each weight modification part. The second counter and the second FF (the right counter and FF in Fig. 3) are used for this purpose. That is, the perturbation, which is constant, is added by the counter, and the sign of the perturbation is stored in the second FF.
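As an illustration only, the two counter/flip-flop pairs can be mimicked by a behavioral Python model. The class and method names are ours, and the saturation and sign conventions are assumptions, since the excerpt does not reproduce (2).

```python
class WeightModification:
    """Behavioral model of one weight modification part (Fig. 3).
    Names, counter width, and sign handling are our assumptions."""

    def __init__(self, initial_weight, perturbation):
        self.count = min(abs(initial_weight), 255)    # first 8-bit counter: weight magnitude
        self.sign = 1 if initial_weight >= 0 else -1  # first FF: sign of the weight
        self.c = perturbation                         # constant perturbation magnitude
        self.s = 1                                    # second FF: per-weight sign from the LFSR

    def set_perturbation_sign(self, s):
        self.s = s  # +1 or -1, generated by the linear feedback shift register

    def modify(self, delta):
        """Apply the signed common modifying quantity from the learning unit;
        the FF pair decides count-up vs. count-down for this weight."""
        value = self.sign * self.count - delta * self.s
        self.sign = 1 if value >= 0 else -1
        self.count = min(abs(value), 255)             # the 8-bit counter saturates

    def perturbed_weight(self):
        """Weight with the perturbation added, as seen during the forward pass."""
        return self.sign * self.count + self.s * self.c
```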
2) Pulse Generation: The weight values calculated in the weight modification part must be converted into a pulse series. We use a random-number generator and a comparator for this. The linear feedback shift register is used to produce random numbers. We compare a weight value with a random value generated by the linear feedback shift register. If the weight is larger than the random number, this circuit generates a single pulse; if not, no pulse is generated. We repeat this procedure, and new random numbers are generated at each time step. Therefore, a large weight results in many pulses and a small weight results in very few pulses. In other words, the weights in our system are represented by pulse density.
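A software analogue of this generator is sketched below. The paper does not state the LFSR width or tap positions in this excerpt, so an 8-bit maximal-length register (taps 8, 6, 5, 4) is assumed; the function names are ours.

```python
def lfsr8(state):
    """8-bit Fibonacci LFSR, taps 8,6,5,4 (maximal length; assumed, not from the paper)."""
    bit = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
    return ((state << 1) | bit) & 0xFF

def pulse_stream(weight, n, state=0x5A):
    """Compare the weight with a fresh pseudo-random value at each time step:
    emit 1 when weight > random value, else 0."""
    pulses = []
    for _ in range(n):
        state = lfsr8(state)
        pulses.append(1 if weight > state else 0)
    return pulses

s = pulse_stream(192, 1024)
print(sum(s) / len(s))  # ~0.75: the pulse density encodes the 8-bit weight 192
```

A large weight wins the comparison often and produces a dense stream; a small weight rarely wins and produces a sparse one, which is exactly the pulse-density representation described above.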
Fig. 4. Neuron unit.
Fig. 5. Learning unit.

B. Neuron Unit

Fig. 4 shows the neuron unit, which consists of counters and a comparator and calculates the weighted sum of its inputs. The counters sum the number of pulses given by the weight units, as shown in Fig. 4. The first counter (the upper counter in Fig. 4) counts the number of positive inputs, and the second counter (the lower counter in Fig. 4) counts the number of negative inputs. If the number of positive pulses is larger than the number of negative pulses, the neuron unit generates a single pulse.

The input–output behavior of our neuron units is characterized by a piecewise-linear function determined by the saturation of the pulse density. That is, even if the weighted sum for a neuron is extremely large, the maximum number of pulses per unit time is limited. No pulse indicates that the weighted sum of a neuron is below the lowest limit of the output; otherwise, the number of output pulses is equal to the weighted sum of the inputs. That is, the amplification factor of the linear function is assumed to be unity. Thus, instead of the sigmoid function, the system uses a linear function with a restriction applied. A similar idea for pulse density neurons is discussed in [14].
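A behavioral Python model of the two views described above (per-step pulse counting, and the equivalent clipped unity-gain activation); the names and the exact comparison timing are our assumptions.

```python
class NeuronUnit:
    """Behavioral model of the neuron unit (Fig. 4); names are ours."""

    def __init__(self):
        self.pos = 0  # upper counter: pulses arriving on the positive side
        self.neg = 0  # lower counter: pulses arriving on the negative side

    def step(self, pos_bits, neg_bits):
        """One time step: accumulate input pulses from the weight units and
        emit an output pulse when the positives outnumber the negatives."""
        self.pos += sum(pos_bits)
        self.neg += sum(neg_bits)
        return 1 if self.pos > self.neg else 0

def activation(weighted_sum, max_pulses):
    """Window-level equivalent: a unity-gain linear function clipped to
    [0, max_pulses], used in place of a sigmoid."""
    return min(max(weighted_sum, 0), max_pulses)
```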
C. Learning Unit

The learning unit achieves the so-called learning process using simultaneous perturbation and sends the basic modifying quantity, which is common to all weights, to the weight units. The block diagram is shown in Fig. 5. One of the features of this learning rule is that it requires only forward operations of the NN.

There is a counter in each error calculation part. Since the error function used here is defined by the absolute difference, as in (4), this part uses its counter to obtain the difference in the number of pulses between the output of the NN and the corresponding teaching signal, counting up for the output pulses and counting down for the teaching-signal pulses.
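As a behavioral sketch (ours, since (4) is not reproduced in this excerpt), each error calculation part reduces to an up/down counter whose final magnitude is the absolute pulse-count difference between output and teaching signal.

```python
def output_error(output_pulses, teacher_pulses):
    """Up/down counter model of one error calculation part:
    count up on an output pulse, down on a teaching-signal pulse;
    the absolute final count is |#output - #teacher|."""
    count = 0
    for o, t in zip(output_pulses, teacher_pulses):
        count += o - t
    return abs(count)

# usage: error between two pulse streams over one observation window
print(output_error([1, 0, 1, 1, 0, 1], [1, 1, 0, 0, 0, 0]))  # |4 - 2| = 2
```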