Appendix A Foreign Literature Translation — Original Text

Prediction of Al(OH)3 fluidized roasting temperature based on wavelet neural network

LI Jie(李劼)1, LIU Dai-fei(劉代飛)1, DAI Xue-ru(戴學(xué)儒)2, ZOU Zhong(鄒忠)1, DING Feng-qi(丁鳳其)1

1. School of Metallurgical Science and Engineering, Central South University, Changsha 410083, China;
2. Changsha Engineering and Research Institute of Nonferrous Metallurgy, Changsha 410011, China

Received 24 October 2006; accepted 18 December 2006

Abstract

The recycle fluidization roasting in alumina production was studied and a temperature forecast model was established based on a wavelet neural network with a momentum item and an adjustable learning rate. By analyzing the roasting process, the coal gas flux, aluminium hydroxide feeding and oxygen content were ascertained as the main parameters for the forecast model. The order and delay time of each parameter in the model were deduced by the F test method. With 400 groups of sample data (sampled with a period of 1.5 min), the wavelet neural network model was trained and a {7, 21, 1} node structure was obtained, i.e. seven nodes in the input layer, twenty-one nodes in the hidden layer and one node in the output layer. Tests of the prediction accuracy show that, within an absolute error of ±5.0 ℃, the single-step prediction accuracy reaches 90%, and the multi-step prediction results within six steps stay in a reasonable range.

Key words: wavelet neural networks; aluminum hydroxide; fluidized roasting; roasting temperature; modeling; prediction

1 Introduction
In alumina production, roasting is the last process, in which the attached water is dried, the crystal water is removed, and γ-Al2O3 is partly transformed into α-Al2O3. The energy consumption of the roasting process accounts for about 10% of the total energy used in alumina production[1], and the productivity of the roasting process directly influences the yield of alumina. As the roasting temperature is the primary factor affecting yield, quality and energy consumption, its control is very important. If a suitable forecast model can be obtained to predict the temperature accurately, optimization measures can then be taken.

At present, the following three kinds of fluidized roasting technology are widely used in the industry: American flash calcination, German recycle calcination and Danish gas suspension calcination. For all these roasting technologies, most existing roasting temperature models are static models, such as simple material and energy computation models based on the reaction mechanism[2], and relational equations between the process parameters, the yield and the energy consumption based on regression analysis[3]. Static models are derived from material and energy balances by calculating and analyzing the process variables over the whole flow and the structure of every unit of the system. However, all static models have limitations in application, because they cannot fully describe the multi-variable, non-linear and strongly coupled character of the system caused by the solid-gas roasting reactions, in which the flow field, the thermal field and the density field are interdependent and mutually constrained. Therefore, the temperature forecast model must have strong dynamic modelling, self-judging and adaptive abilities.

In this study, a roasting temperature forecast model was established based on artificial neural networks and wavelet analysis. With the characteristics of strong fault tolerance, self-learning ability and non-linear mapping ability, neural network models have advantages in solving complex problems of inference, recognition, classification and so on, but the forecast accuracy of a neural network relies on the validity of the model parameters and a reasonable choice of the network architecture. At present, artificial neural networks are widely applied in the metallurgical field[5−6]. Wavelet analysis, a time-frequency signal analysis method known as the "mathematical microscope", has multi-resolution analysis ability, especially the ability to analyze the local characteristics of a signal in both the time and the frequency domain. Wavelet analysis fixes the size of the analysis window while allowing its shape to change. By combining wavelet analysis with the neural network, the network structure becomes hierarchical and multi-resolutional, and with the time-frequency localization of wavelet analysis the prediction accuracy of the network model is improved[7−10].
2 Wavelet neural network algorithms

In the 1980s, GROSSMANN and MORLET[11−13] proposed the definition of the wavelet of any function f(x)∈L2(R) in the aix+bi affine group, as shown in Eqn.(1). In Eqns.(1) and (2), the function ψ(x), which has the oscillation (volatility) characteristic[14], is named the mother wavelet. The parameters a and b are the scaling coefficient and the shift coefficient respectively. The wavelet function is obtained from the affine transformation of the mother wavelet by scaling with a and translating by b, and the parameter 1/|a|^(1/2) is the normalization coefficient, as expressed in Eqn.(3):

Wf(a, b)=|a|^(−1/2)∫f(x)ψ((x−b)/a)dx,  a∈R+, b∈R, x∈(−∞, +∞)  (1)

∫ψ(x)dx=0,  x∈(−∞, +∞)  (2)

ψa,b(x)=|a|^(−1/2)ψ((x−b)/a)  (3)
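For concreteness, the reading of Eqns.(1)−(3) above can be sketched in a few lines of code. The snippet below is not part of the original paper: it assumes a Morlet-type mother wavelet purely as an example and evaluates one point of the transform Wf(a, b) by numerical integration over a sampled signal.

```python
import numpy as np

def mother_wavelet(x):
    """Example mother wavelet (real-valued Morlet type); any admissible psi(x) could be used."""
    return np.cos(5.0 * x) * np.exp(-x ** 2 / 2.0)

def wavelet_ab(x, a, b):
    """Scaled and translated wavelet of Eqn.(3): |a|^(-1/2) * psi((x - b) / a)."""
    return np.abs(a) ** -0.5 * mother_wavelet((x - b) / a)

def cwt_point(f, x, a, b):
    """Numerical value of the wavelet transform Wf(a, b) of Eqn.(1) for a sampled signal f(x)."""
    return np.trapz(f * wavelet_ab(x, a, b), x)

x = np.linspace(-10.0, 10.0, 2001)
f = np.sin(2.0 * np.pi * 0.5 * x)        # a sampled test signal
print(cwt_point(f, x, a=2.0, b=0.0))     # one point of the (a, b) plane
```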
For a dynamic system, the observation inputs x(t) and outputs y(t) are defined as

xt=[x(1), x(2), …, x(t)],  yt=[y(1), y(2), …, y(t)]  (4)

By taking the parameter t as the observation time point, the serial observation sample before t is [xt, yt], and the forecast output of the function y(t) after t is defined as

y(t)=g(xt−1, yt−1)+v(t)  (5)

If the value of v(t) is small, the function g(xt−1, yt−1) may be regarded as a forecast of y(t). The relation between the input (influence factors) and the output (evaluation index) can be described by a BP neural network whose hidden-layer function is of Sigmoid type, defined as Eqn.(6):

g(x)=Σwi·S((x−bi)/ai)  (i=0, …, N)  (6)

where g(x) is the fitting function; wi is the weight coefficient; S is the Sigmoid function; ai and bi are the scaling and shift parameters of the ith hidden node; N is the node number.
The wavelet neural network integrates the wavelet transformation with the neural network. By substituting the wavelet function for the Sigmoid function, the wavelet neural network has a stronger non-linear approximation ability than the BP neural network. The function expressed by the wavelet neural network is realized by combining a series of wavelets: the value of y(x) is approximated by the sum of a set of ψ(x), as expressed in Eqn.(7):

g(x)=Σwi·ψ((x−bi)/ai)  (i=0, …, N)  (7)

where g(x) is the fitting function; wi is the weight coefficient; ai is the scaling coefficient; bi is the shift coefficient; N is the node number.

The identification of the wavelet neural network is the calculation of the parameters wi, ai and bi. With the smallest mean-square deviation energy function used for the error evaluation, the optimization rule of the computation is that the error approaches its minimum. By setting ψ0=1, the smallest mean-square deviation energy function is shown in Eqn.(8), in which K is the number of samples:

E=(1/2)Σj[yj−g(xj)]²  (j=1, …, K)  (8)
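The following sketch is illustrative only, not the authors' implementation: it assumes a one-dimensional input, a Mexican-hat ψ chosen purely for demonstration, and a plain numerical-gradient descent, and it shows how the parameters wi, ai and bi of Eqn.(7) can be fitted by driving the energy function of Eqn.(8) towards its minimum.

```python
import numpy as np

def psi(x):
    """Mexican-hat wavelet, used here only as an example basis function."""
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def g(x, w, a, b):
    """Wavelet network output of Eqn.(7): sum_i w_i * psi((x - b_i) / a_i)."""
    return np.sum(w[:, None] * psi((x[None, :] - b[:, None]) / a[:, None]), axis=0)

def energy(params, x, y, n):
    """Mean-square deviation energy function of Eqn.(8)."""
    w, a, b = params[:n], params[n:2 * n], params[2 * n:]
    return 0.5 * np.sum((y - g(x, w, a, b)) ** 2)

# toy data and a training loop with a numerical gradient (for clarity, not speed)
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(2.0 * x) * np.exp(-0.1 * x ** 2)
n = 8
params = np.concatenate([rng.normal(0.0, 0.3, n),      # w_i
                         np.ones(n),                    # a_i
                         np.linspace(-3.0, 3.0, n)])    # b_i
eps, lr = 1e-6, 1e-3
for _ in range(2000):
    grad = np.array([(energy(params + eps * e, x, y, n) - energy(params - eps * e, x, y, n)) / (2 * eps)
                     for e in np.eye(params.size)])
    params -= lr * grad
print("final energy:", energy(params, x, y, n))
```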
At present, the following wavelet functions are widely used: the Haar wavelet, the Shannon wavelet, the Mexican-hat wavelet, the Morlet wavelet and so on[15]. By scaling and translating, these functions can constitute standard orthogonal bases in L2(R).

In this study, a satisfactory result was obtained by applying the wavelet function expressed by Eqns.(9) and (10), which was discussed in Ref.[16]:

ψ(x)=s(x+2)−2s(x)+s(x−2)  (9)

s(x)=1/(1+e^(−x))  (10)
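Eqns.(9) and (10) translate directly into code. In the sketch below the exponential in Eqn.(10) is assumed to be the standard logistic form 1/(1+e^(−x)), since the exponent is lost in the source; the final line checks numerically that this wavelet has (approximately) zero mean, i.e. the oscillation property mentioned above.

```python
import numpy as np

def s(x):
    """Logistic function of Eqn.(10): s(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def psi(x):
    """Second-difference-of-sigmoid wavelet of Eqn.(9)."""
    return s(x + 2.0) - 2.0 * s(x) + s(x - 2.0)

x = np.linspace(-10.0, 10.0, 4001)
print("approximate zero mean:", np.trapz(psi(x), x))   # oscillation (admissibility) check
```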
3 Roasting temperature forecasting model

3.1 Selection of model parameters

The roasting process includes feeding, dehydration, preheating decomposition, roasting and cooling, among which the roasting temperature is the crucial operating parameter. Provided the quality is good, a low temperature is advantageous for increasing the yield and decreasing the consumption; practice indicates that when the temperature is decreased by 100 ℃, about 3% of the energy can be saved[17]. Many factors influence the roasting, such as humidity, gas fuel quality, the ratio of air to gas fuel, feeding and furnace structure, and all of them are interdependent and mutually constrained.

By analyzing the roasting process, the coal gas flux, feeding and oxygen content were ascertained as the main parameters of the forecast model. The model structure is shown in Fig.1. As the actual production is a continuous process, a previous operation directly influences the present condition of the furnace; therefore, the time succession must be taken into consideration when the input parameters are ascertained. The parameters whose time-series model orders must be determined include the temperature, coal gas flux, feeding and oxygen content, and all of them except the temperature must also have their delay times determined.
Fig.1 Logic model of aluminium hydroxide roasting

The model orders of the parameters were determined by the F test method[18], a general statistical method that computes the significance of the change of the loss function when the model order of a parameter is changed. When an order increases from n1 to n2 (n1<n2), the loss function E(n) decreases from E(n1) to E(n2), as shown in the following equation:

t=[(E(n1)−E(n2))/E(n2)]·[(L−2n2)/(2(n2−n1))]  (11)

where t follows the F distribution, denoted as t~F[2(n2−n1), L−2n2], and L is the data length.

Assigning a confidence value a: if t≤ta, i.e. E(n) does not decrease obviously, the order n1 is accepted; if t>ta, i.e. E(n) decreases obviously, n1 is not accepted, the order must be increased and t recomputed until n1 is accepted.
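A minimal sketch of this order-selection rule is given below. The loss values in the example are hypothetical, and scipy.stats is used only to obtain the F-distribution threshold ta for the chosen confidence level; neither appears in the original paper.

```python
from scipy.stats import f as f_dist

def f_statistic(E1, E2, n1, n2, L):
    """t of Eqn.(11): significance of the loss-function drop when the order grows from n1 to n2."""
    return ((E1 - E2) / E2) * ((L - 2 * n2) / (2 * (n2 - n1)))

def accept_order(E1, E2, n1, n2, L, alpha=0.05):
    """Accept order n1 if the drop E(n1) -> E(n2) is NOT significant at level alpha."""
    t = f_statistic(E1, E2, n1, n2, L)
    t_a = f_dist.ppf(1.0 - alpha, 2 * (n2 - n1), L - 2 * n2)   # t ~ F[2(n2-n1), L-2n2]
    return t <= t_a

# hypothetical loss values for a candidate temperature model, L = 400 samples
print(accept_order(E1=52.0, E2=51.6, n1=3, n2=4, L=400))
```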
Four hundred groups of sample data with a sampling period of 1.5 min were used to determine the orders of the model parameters. Through computation, the orders of the temperature, coal gas flux, feeding and oxygen content are 3, 2, 1 and 1 respectively, and the delay times of the coal gas flux, feeding and oxygen content are 3, 5 and 1 respectively. The structure of the wavelet neural network model is shown in Fig.2, and its equation is defined as follows:

y(t)=WNN[y(t−1), y(t−2), y(t−3), u1(t−3), u1(t−4), u2(t−5), u3(t−1)]  (12)

where y is the temperature; u1 is the coal gas flux; u2 is the feeding; u3 is the oxygen content; t is the sample time.

Fig.2 Structure of wavelet neural network model

Then the neural network single-step prediction model can be deduced:
ym(t+1)=WNN1[y(t), y(t−1), y(t−2), u1(t−2), u1(t−3), u2(t−4), u3(t)]  (13)

and the multi-step prediction model is

ym(t+d)=WNNd[y(t+d−1), y(t+d−2), y(t+d−3), u1(t+d−3), u1(t+d−4), u2(t+d−5), u3(t+d−1)]  (14)

where ym(t+1) is the prediction result for time t+1 obtained with the sample data of time t; d is the prediction step; WNN1 is the single-step prediction model; WNNd is the d-step prediction model. For the input variables [y, u1, u2, u3] on the right of Eqn.(14), whose sample times are denoted t+d−i (i=1, 2, 3, 4, 5): if t+d−i≤t, their input values are the real sample values; whereas if t+d−i>t, the values y(t+d−i), u1(t+d−i), u2(t+d−i) and u3(t+d−i) are substituted with the predicted values ym(t+d−i).
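The recursion of Eqn.(14) can be written as a short loop. The sketch below is illustrative only: `wnn_predict` stands in for the trained single-step network WNN1 (not reproduced here), the series y, u1, u2 and u3 are assumed to be NumPy arrays indexed by sample time, and it is further assumed that the manipulated inputs u1−u3 are available up to the forecast horizon (e.g. as planned setpoints), so that only the future temperatures are replaced by the model's own earlier predictions.

```python
import numpy as np

def multi_step_forecast(wnn_predict, y, u1, u2, u3, t, d):
    """d-step-ahead forecast in the spirit of Eqn.(14): unknown future temperatures
    are replaced by the predictions already made for earlier steps."""
    y_hat = {}                                           # predicted temperatures for times > t

    def temp(k):
        """Real sample if k <= t, otherwise the already predicted value ym(k)."""
        return y[k] if k <= t else y_hat[k]

    for step in range(1, d + 1):
        k = t + step
        x = np.array([temp(k - 1), temp(k - 2), temp(k - 3),   # temperature, order 3
                      u1[k - 3], u1[k - 4],                     # coal gas flux, order 2, delay 3
                      u2[k - 5],                                # feeding, order 1, delay 5
                      u3[k - 1]])                               # oxygen content, order 1, delay 1
        y_hat[k] = wnn_predict(x)                               # Eqn.(13) applied recursively
    return y_hat[t + d]

# demo with a stand-in for the trained network (weighted average of the recent temperatures)
rng = np.random.default_rng(0)
y = 1100.0 + rng.normal(0.0, 3.0, 100)
u1, u2, u3 = rng.normal(size=(3, 100))
stub = lambda x: 0.5 * x[0] + 0.3 * x[1] + 0.2 * x[2]
print(multi_step_forecast(stub, y, u1, u2, u3, t=80, d=6))
```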
3.2 Set-up of neural network model

At the end of the 20th century, the approximate representation capability of neural networks was developed greatly[19−21]. It has been proved that a single-hidden-layer feed-forward neural network can approximate any non-linear mapping function arbitrarily well. Therefore, a single-hidden-layer neural network was adopted as the temperature forecast model in this work. As the training measure, the gradient descent rule was used, in which the weights of the neural network are modified along the negative gradient of the error function.

3.2.1 Network learning algorithm

The number of hidden nodes was determined with the pruning method[22]. At first, a network whose number of hidden nodes is much larger than the practical requirement is used; then, according to a performance criterion for the network, the nodes and weights that contribute little or nothing to the network performance are trimmed off; finally, a suitable network structure is obtained.
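A schematic version of this pruning idea is sketched below. It is illustrative only: the "contribution" measure used here (magnitude of the outgoing weight times the variance of the node's output) is an assumption for demonstration, not the criterion of Ref.[22].

```python
import numpy as np

def prune_hidden_nodes(w_out, hidden_outputs, keep_ratio=0.5):
    """Drop the hidden nodes whose contribution to the network output is smallest.

    w_out          : (n_hidden,) outgoing weights of the hidden nodes
    hidden_outputs : (n_samples, n_hidden) hidden-node activations on the training set
    """
    contribution = np.abs(w_out) * hidden_outputs.std(axis=0)   # simple contribution score
    n_keep = max(1, int(keep_ratio * w_out.size))
    keep = np.argsort(contribution)[::-1][:n_keep]              # indices of the strongest nodes
    return np.sort(keep)

# start with many more nodes than needed, then trim
rng = np.random.default_rng(1)
kept = prune_hidden_nodes(rng.normal(size=40), rng.normal(size=(400, 40)), keep_ratio=0.5)
print(len(kept), "hidden nodes kept out of 40")
```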
In view of the existing shortcomings of the BP algorithm, such as easily dropping into a local minimum and slow convergence, the following two improvements were adopted.

1) Attached momentum item

The attached momentum item, whose function is equivalent to that of a low-frequency filter, considers not only the error gradient but also the change tendency on the error surface, allowing for the changes existing in the network. Without the momentum function, the network may fall into a local minimum. When this method is used in the error back-propagation process, a change value in direct proportion to the previous weight change is added to the present weight change, which is used in the calculation shown in Eqn.(15):

Δwij(t+1)=wij(t+1)−wij(t)=−η·∂E/∂wij+α·Δwij(t)  (15)

where η is the learning rate and α is the momentum coefficient.
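Eqn.(15) is partly garbled in the source, so the sketch below follows the standard momentum form that the surrounding text describes; the symbols η (learning rate) and α (momentum coefficient) are the usual choices rather than values taken from the paper.

```python
import numpy as np

def momentum_step(w, grad, delta_prev, eta=0.01, alpha=0.9):
    """Back-propagation update with an attached momentum item.

    delta_w(t+1) = -eta * dE/dw + alpha * delta_w(t)   (cf. Eqn.(15))
    """
    delta_w = -eta * grad + alpha * delta_prev
    return w + delta_w, delta_w

w = np.zeros(3)
delta = np.zeros(3)
for grad in (np.array([1.0, -2.0, 0.5]), np.array([0.8, -1.5, 0.4])):
    w, delta = momentum_step(w, grad, delta)
print(w)
```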
2) Adaptive adjustment of learning rate

In order to improve the convergence performance of the training process, adaptive adjustment of the learning rate was applied. The adjustment criterion is defined as follows: when the new error value becomes larger than the old one by a certain factor, the learning rate is reduced; when the new error value becomes smaller than the old one, the learning rate is increased; otherwise it remains unchanged. This method keeps the network learning at a proper speed. The strategy is shown in Eqn.(16):

η(t+1)=1.05η(t)  [SSE(t+1)<SSE(t)]
η(t+1)=0.70η(t)  [SSE(t+1)>SSE(t)]      (16)
η(t+1)=1.00η(t)  [SSE(t+1)=SSE(t)]
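Eqn.(16) translates directly into a small update rule; the sketch below is a plain transcription, with the SSE values assumed to come from the training loop.

```python
def adapt_learning_rate(eta, sse_new, sse_old):
    """Adaptive learning-rate rule of Eqn.(16)."""
    if sse_new < sse_old:
        return 1.05 * eta      # error fell: speed up
    if sse_new > sse_old:
        return 0.70 * eta      # error grew: slow down
    return eta                 # unchanged error: keep the rate

eta = 0.1
for sse_new, sse_old in [(4.2, 5.0), (4.5, 4.2), (4.5, 4.5)]:
    eta = adapt_learning_rate(eta, sse_new, sse_old)
    print(round(eta, 4))
```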
3.2.2 Results of network prediction

To set up the neural network model, 450 groups of sample data were used, of which 400 groups were for training and 50 groups for prediction. When the number of training loops reached 22 375, the variable-step-length training process was finished, with a network learning error of E=0.01 and a finally determined network structure of {7, 21, 1}, i.e. seven nodes in the input layer, twenty-one nodes in the hidden layer and one node in the output layer. The trained network could accurately express the roasting process: within an absolute error of ±5.0 ℃, the single-step prediction accuracy reached 90%, and the multi-step prediction results within six steps stayed in a reasonable range, as shown in Figs.3 and 4.

Fig.3 Change tendency of multi-step prediction error

Fig.4 Result of wavelet neural network prediction
4 Conclusions

1) By analyzing the sample data, the coal gas flux, feeding and oxygen content are ascertained as the main parameters for the temperature forecast model. The model parameter orders and delay times are deduced by the F test method, and the wavelet neural network is then used to identify the roasting process. Practical application indicates that the model performs well in roasting temperature forecasting.

2) According to the analysis of the process parameters, the model has a certain forecast ability. With this forecast ability, the model provides a method for system analysis and optimization: when the influence factors are suitably altered, the change tendency of the roasting temperature can be analyzed. The forecast and the analysis based on the model have guiding significance for the production operation.

References

[1] YANG Chong-yu. Process technology of alumina [M]. Beijing: Metallurgy Industry Press, 1994. (in Chinese)
[2] ZHANG Li-qiang, LI Wen-chao. Establishment of some mathematical models for Al(OH)3 roasting [J]. Energy Saving of Non-ferrous Metallurgy, 1998, 4: 11−15. (in Chinese)
[3] WEI Huang. The relations between process parameters, yield and energy consumption in the production of Al(OH)3 [J]. Light Metals, 2003(1): 13−18. (in Chinese)
[4] TANG Mei-qiong, LU Ji-dong, JIN Gang, HUANG Lai. Software design for Al(OH)3 circulation fluidization roasting system [J]. Nonferrous Metals (Extractive Metallurgy), 2004(3): 49−52. (in Chinese)
[5] WANG Yu-tao, ZHOU Jian-chang, WANG Shi. Application of neural network model and temporal difference method to predict the silicon content of the hot metal [J]. Iron and Steel, 1999, 34(11): 7−11. (in Chinese)
[6] TU Hai, XU Jian-lun, LI Ming. Application of neural network to the forecast of heat state of a blast furnace [J]. Journal of Shanghai University (Natural Science), 1997, 3(6): 623−627. (in Chinese)
[7] LU Bai-quan, LI Tian-duo, LIU Zhao-hui. Control based on BP neural networks and wavelets [J]. Journal of System Simulation, 1997, 9(1): 40−48. (in Chinese)
[8] CHEN Tao, QU Liang-sheng. The theory and application of multiresolution wavelet network [J]. China Mechanical Engineering, 1997, 8(2): 57−59. (in Chinese)
[9] ZHANG Qing-hua, BENVENISTE A. Wavelet networks [J]. IEEE Trans on Neural Networks, 1992, 3(6): 889−898.
[10] PATI Y C, KRISHNAPRASAD P S. Analysis and synthesis of feedforward networks using discrete affine wavelet transformations [J]. IEEE Trans on Neural Networks, 1993, 4(1): 73−85.
[11] GROSSMANN A, MORLET J. Decomposition of Hardy functions into square integrable wavelets of constant shape [J]. SIAM J Math Anal, 1984, 15(4): 723−736.
[12] GOUPILLAUD P, GROSSMANN A, MORLET J. Cycle-octave and related transforms in seismic signal analysis [J]. Geoexploration, 1984, 23(1): 85−102.
[13] GROSSMANN A, MORLET J. Transforms associated to square integrable group representations (I): General results [J]. J Math Phys, 1985, 26(10): 2473−2479.
[14] ZHAO Song-nian, XIONG Xiao-yun. The wavelet transformation and the wavelet analysis [M]. Beijing: Electronics Industry Press, 1996. (in Chinese)
[15] NIU Dong-xiao, XING Mian. A study on wavelet neural network prediction model of time series [J]. Systems Engineering—Theory and Practice, 1999(5): 89−92. (in Chinese)
[16] YAO Jun-feng, JIANG Jin-hong, MEI Chi, PENG Xiao-qi, REN Hong-jiu, ZHOU An-liang. Application of wavelet neural network in forecasting slag weight and components of copper-smelting converter [J]. Nonferrous Metals, 2001, 53(2): 42−44. (in Chinese)
[17] WANG Tian-qing. Practice of lowering gaseous suspension calciner heat consumption cost [J]. Energy Saving of Non-ferrous Metallurgy, 2004, 21(4): 91−94. (in Chinese)
[18] FANG Chong-zhi, XIAO De-yun. Process identification [M]. Beijing: Tsinghua University Press, 1988. (in Chinese)
[19] CARROLL S M, DICKINSON B W. Construction of neural nets using the Radon transform [C]// Proceedings of IJCNN. New York: IEEE Press, 1989: 607−611.
[20] ITO Y. Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory [J]. Neural Networks, 1991, 4: 385−394.
[21] JAROSLAW P S, KRZYSZTOF J C. On the synthesis and complexity of feedforward networks [C]// IEEE World Congress on Computational Intelligence. IEEE Neural Networks, 1994: 2185−2190.
[22] HAYKIN S. Neural networks: A comprehensive foundation [M]. 2nd ed. Beijing: China Machine Press, 2004.
Appendix B Foreign Literature Translation — Translated Text

Prediction of Al(OH)3 fluidized roasting temperature based on wavelet neural network

LI Jie (李劼), LIU Dai-fei (劉代飛), DAI Xue-ru (戴學(xué)儒), ZOU Zhong (鄒忠), DING Feng-qi (丁鳳其)

1. School of Metallurgical Science and Engineering, Central South University, Changsha 410083, China
2. Changsha Engineering and Research Institute of Nonferrous Metallurgy, Changsha 410011, China

Abstract: The recycle fluidization roasting process in alumina production was studied, and a temperature forecast model was established based on a wavelet neural network with a momentum item and an adjustable learning rate. By analyzing the roasting process, the coal gas flux, aluminium hydroxide feeding and oxygen content were determined as the main parameters of the forecast model. The order and delay time of each parameter in the model were deduced by the F test method. After experiments with 400 groups of sample data (sampling period 1.5 min), the wavelet neural network model obtained a {7, 21, 1} node structure, with seven nodes in the input layer, twenty-one nodes in the hidden layer and one node in the output layer. Tests of the prediction accuracy show that, within an absolute error of ±5.0 ℃, the single-step prediction accuracy reaches 90%, and the multi-step prediction results within six steps stay in a reasonable range.

Keywords: wavelet neural network; aluminium hydroxide; fluidized roasting; roasting temperature; modeling; prediction

1 Introduction

In alumina production, roasting is the last process, in which the attached water is dried, the crystal water is removed, and γ-Al2O3 is partly transformed into α-Al2O3. The energy consumed in roasting accounts for about 10% of the total energy consumption of alumina production, and the roasting temperature is the main factor affecting the material and energy consumption, so its control is very important for the whole alumina production. If a suitable forecast model can be obtained to predict the temperature accurately, optimization measures can then be adopted.

At present, three kinds of fluidized roasting technology are widely applied in alumina roasting: the American flash calciner, the German recycle calciner and the Danish gas suspension calciner. For all these roasting technologies, most existing roasting temperature models are static, such as simple material and energy computation models based on the reaction mechanism, and relational equations between the process parameters, the yield and the energy consumption based on regression analysis. The static models are obtained from material and energy balances by calculating and analyzing the process variables over the whole flow and the structure of every unit of the system. However, all static models are limited in application, because they cannot fully describe the multi-variable, non-linear and strongly coupled character of the system caused by the solid-gas roasting reactions; in the system, the flow field, the thermal field and the density field are interdependent and mutually constrained. Therefore, the temperature forecast model must have strong dynamic modelling, self-judging and adaptive abilities.

In this study, the roasting temperature forecast model is established on the basis of artificial neural networks and wavelet analysis. With strong fault tolerance, self-learning ability and non-linear mapping ability, neural network models are well suited to complex problems of inference, recognition and classification, but the forecast accuracy of a neural network depends on the validity of the model parameters and a reasonable choice of the network structure. At present, artificial neural networks are widely applied in the metallurgical field [5−6]. Wavelet analysis, a time-frequency signal analysis method known as the "mathematical microscope", has multi-resolution analysis ability, and in particular the ability to analyze the local characteristics of a signal in the time and frequency domains. Wavelet analysis can fix the size of the analysis window while allowing its shape to change. By combining wavelet analysis with the neural network, the network structure becomes hierarchical and multi-resolutional, and with the time-frequency localization of wavelet analysis the prediction accuracy of the network model is improved [7−10].

2 Wavelet neural network algorithms

In the 1980s, Grossmann and Morlet [11−13] proposed the definition of the wavelet of any function f(x)∈L2(R) in the aix+bi affine group, as in Eqn.(1). In Eqns.(1) and (2), the function ψ(x) has the oscillation characteristic [14] and is therefore named the mother wavelet. The parameters a and b are the scaling coefficient and the shift coefficient respectively. The wavelet function is obtained by the affine transformation of the mother wavelet through scaling a and translation b, and the parameter 1/|a|^(1/2) is the normalization coefficient, as expressed in Eqn.(3):

Wf(a, b)=|a|^(−1/2)∫f(x)ψ((x−b)/a)dx,  a∈R+, b∈R, x∈(−∞, +∞)  (1)

∫ψ(x)dx=0,  x∈(−∞, +∞)  (2)

ψa,b(x)=|a|^(−1/2)ψ((x−b)/a)  (3)

For a dynamic system, the observed inputs x(t) and outputs y(t) are defined as

xt=[x(1), x(2), …, x(t)],  yt=[y(1), y(2), …, y(t)]  (4)

Taking the parameter t as the observation time point, the serial observation sample before t is [xt, yt], and the forecast output of y(t) after t is defined as

y(t)=g(xt−1, yt−1)+v(t)  (5)

If the value of v(t) is small, the function g(xt−1, yt−1) may be regarded as a forecast of y(t). The relation between the input (influence factors) and the output (evaluation index) can be described by a BP neural network whose hidden-layer function is of Sigmoid type, defined as Eqn.(6):

g(x)=Σwi·S((x−bi)/ai)  (i=0, …, N)  (6)

where g(x) is the fitting function, wi is the weight coefficient, S is the Sigmoid function and N is the node number.

The wavelet neural network integrates the wavelet transformation with the neural network. By substituting the wavelet function for the Sigmoid function, the wavelet neural network has a stronger non-linear approximation ability than the BP neural network. The function expressed by the wavelet neural network is realized by combining a series of wavelets: the value of y(x) is approximated by the sum of a set of ψ(x), as expressed in Eqn.(7):

g(x)=Σwi·ψ((x−bi)/ai)  (i=0, …, N)  (7)

where g(x) is the fitting function, wi is the weight coefficient, ai is the scaling coefficient, bi is the shift coefficient and N is the node number.

The identification of the wavelet neural network is the estimation of the parameters wi, ai and bi. With the smallest mean-square deviation energy function used for the error evaluation, the optimization rule of the computation is that the error approaches its minimum. Setting ψ0=1, the smallest mean-square deviation energy function is shown in Eqn.(8), in which K is the number of samples:

E=(1/2)Σj[yj−g(xj)]²  (j=1, …, K)  (8)

At present, the widely used wavelet functions include the Haar wavelet, the Shannon wavelet, the Mexican-hat wavelet, the Morlet wavelet and so on [15]. By scaling and translation, these functions can constitute standard orthogonal bases in L2(R).

In this study, a satisfactory result was obtained by applying the wavelet function expressed by Eqns.(9) and (10), which was discussed in Ref.[16]:

ψ(x)=s(x+2)−2s(x)+s(x−2)  (9)

s(x)=1/(1+e^(−x))  (10)

3 Roasting temperature forecast model

3.1 Selection of model parameters

The roasting process includes feeding, dehydration, preheating decomposition, roasting and cooling, among which the roasting temperature is the crucial operating parameter. Provided the quality is good, a low temperature is advantageous for increasing the yield and decreasing the consumption; practice shows that when the temperature is lowered by 100 ℃, about 3% of the energy can be saved [17]. Many factors influence the roasting, such as the humidity, the gas fuel quality, the ratio of air to gas fuel, the feeding and the furnace structure, and all these factors are interdependent and mutually constrained.

By analyzing the roasting process, the coal gas flux, feeding and oxygen content are determined as the main parameters of the forecast model; the model structure is shown in Fig.1. As the actual production is a continuous process, a previous operation directly influences the present condition of the furnace, so the time succession must be taken into account when the input parameters are determined. The parameters whose time-series model orders must be determined include the temperature, coal gas flux, feeding and oxygen content, and all of them except the temperature must also have their delay times determined.

Fig.1 Logic model of aluminium hydroxide roasting

The model orders of the parameters are determined by the F test method [18], a general statistical method that computes the significance of the change of the loss function when the model order of a parameter is changed. When an order increases from n1 to n2 (n1<n2), the loss function E(n) decreases from E(n1) to E(n2), as shown in the following equation:

t=[(E(n1)−E(n2))/E(n2)]·[(L−2n2)/(2(n2−n1))]  (11)

where t follows the F distribution, denoted as t~F[2(n2−n1), L−2n2].

Given a confidence value a: if t≤ta, i.e. E(n) does not decrease obviously, the order n1 is accepted; if t>ta, i.e. E(n) decreases obviously, n1 is not accepted, the order must be increased and t recomputed until n1 is accepted.

Four hundred groups of sample data with a sampling period of 1.5 min were used to determine the orders of the model parameters. Through computation, the orders of the temperature, coal gas flux, feeding and oxygen content are 3, 2, 1 and 1 respectively, and the delay times of the coal gas flux, feeding and oxygen content are 3, 5 and 1 respectively. The structure of the wavelet neural network model is shown in Fig.2, and its equation is defined as

y(t)=WNN[y(t−1), y(t−2), y(t−3), u1(t−3), u1(t−4), u2(t−5), u3(t−1)]  (12)

where y is the temperature, u1 is the coal gas flux, u2 is the feeding, u3 is the oxygen content and t is the sampling time.