
Title: The Driver Fatigue Monitoring System Based on Face Recognition Technology
Authors: Xiao-qing Luo, Rong Hu, Tian-e Fan
Source: International Conference on Intelligent Control & Information Processing

Original text:

The Driver Fatigue Monitoring System Based on Face Recognition Technology
Xiao-qing Luo, Rong Hu, Tian-e Fan

Abstract
This paper uses two different algorithms, the AdaBoost algorithm and an infrared inter-frame difference algorithm, to locate the precise position of the eyes under the different lighting conditions encountered while driving. We identify the eye state by extracting the characteristic parameters of the eyes and detect fatigue with the PERCLOS method. To test the driver's fatigue further, we use the Local Binary Pattern (LBP) algorithm to detect yawning as an auxiliary check. The experimental results show that the algorithm keeps the system accurate and meets the requirements of non-contact operation, varying illumination and real-time monitoring.

Keywords: driver fatigue; yawning detection; eye detection; PERCLOS; AdaBoost.

Introduction

Nowadays, fatigued driving is one of the main causes of traffic accidents [1]. It is reported that there were 3,906,164 traffic accidents in our country in 2010. Among them, 92% of accident deaths were caused by motor vehicle drivers speeding, while the proportion of accident deaths caused by fatigued driving rose by 1% [2]. At present there are already several methods of fatigue detection, such as monitoring the head position, EEG, EKG, eye blinking, PERCLOS and so on [3]. As a non-contact method, PERCLOS can measure the state of the eyelids and the degree of fatigue when a person is drowsy, but it is easily affected by illumination. This paper estimates the brightness of the images captured by a CCD camera to decide whether the scene is daytime or nighttime. In the daytime we use the AdaBoost algorithm to locate the face and eyes, while at night we use the infrared inter-frame difference method to detect the eyes [4]. We judge the state of the driver's eyes by extracting characteristic parameters, including the height-to-width ratio and the eyelid curvature, and then apply the PERCLOS method to measure the driver's fatigue. In addition, after locating the eyes we locate the driver's mouth according to the distribution of facial features, and use the LBP algorithm to detect yawning and judge the fatigue state. Research on driver fatigue systems is of great significance for preventing traffic accidents [5].

System framework

The framework of the driver fatigue warning system is shown in Figure 1. It consists of a CCD camera with a near/far-controllable infrared light source, an image pre-processing module, a fatigue detection module and an alarm device [6]. The CCD camera captures the driver's face image. The image pre-processing module performs histogram equalization and smoothing. The fatigue detection module calculates the degree of eye closure by extracting the characteristic parameters of the eyes, and measures the driver's fatigue with the PERCLOS method using the P80 criterion (an eye is judged closed when more than 80% of it is covered). The alarm device decides whether the driver is fatigued and issues a warning signal if so [7].

Image processing

The processing flow is shown in Figure 2. The system decides whether an image was taken in the daytime or at night according to its brightness. When the brightness is greater than a threshold (45), the image is treated as a daytime image and the daytime-mode algorithm is used; otherwise, the night-mode algorithm is used.
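The paper does not say how the brightness of a frame is measured; a minimal sketch of this day/night switch, assuming the mean gray level of the frame is used as the brightness score and that OpenCV and NumPy are available, might look like this:

```python
import cv2
import numpy as np

DAY_NIGHT_THRESHOLD = 45  # brightness threshold quoted in the paper

def select_mode(frame_bgr: np.ndarray) -> str:
    """Return 'day' or 'night' from the mean gray level of a BGR frame.

    Using the mean gray value as the 'brightness' is an assumption; the paper
    only states that brightness above 45 selects day mode.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    brightness = float(gray.mean())
    return "day" if brightness > DAY_NIGHT_THRESHOLD else "night"
```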

Image pre-processing
If detection is performed directly on the captured images, the adaptability of the algorithm and the accuracy of detection are affected by illumination, background, noise and other factors, so the images must be pre-processed first. To distinguish foreground from background efficiently, lighting compensation and histogram equalization are applied first in the pre-processing stage to enhance the contrast of the image. Meanwhile, to suppress noise, median filtering is applied; it removes isolated noise points while effectively preserving the edge information of the image. In addition, a distance-controllable infrared CCD camera can be used to capture images under low-light conditions [7].
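As an illustration of this pre-processing stage, here is a minimal sketch using OpenCV; the kernel size and the use of global histogram equalization are assumptions, since the paper does not give these parameters:

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Grayscale conversion, histogram equalization and median filtering.

    Histogram equalization enhances contrast; the 3x3 median filter removes
    isolated noise points while preserving edges, as described in the text.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)
    denoised = cv2.medianBlur(equalized, 3)  # kernel size is an assumption
    return denoised
```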

Process in day mode
Eye location and face detection
This paper uses the AdaBoost face detection method to detect the eyes and face with rectangular features. The method is composed of the integral image, the AdaBoost algorithm and a cascade detector [8], and it handles the complexity of face and eye detection well. Experiments also show that AdaBoost gives good real-time performance and efficient detection. By weighted voting, AdaBoost combines weak classifiers into a strong classifier. The steps are as follows:
a) Input the training sample data N: (x1, y1), (x2, y2), ..., (xn, yn);
b) Initialize the weights according to the numbers of negative and positive samples, denoted m and l respectively;
c) For t = 1 to T: first normalize the weights; for the classifier hj of each feature j, calculate its error rate relative to the current weights, and choose the weak classifier with the smallest error rate as classifier t. Finally, update the weight of each sample;
d) Output the final strong classifier, where 0 denotes negative data and 1 denotes positive data;
e) Finally, combine the strong classifiers into a cascade detector.
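To make the training loop above concrete, here is a small sketch of one possible implementation with decision stumps over precomputed feature values. The stump form, the Viola-Jones-style weight initialization w = 1/(2m) or 1/(2l), the update rule w <- w * beta^(1-e) with beta = err/(1-err), and the crude threshold choice are assumptions drawn from the cited cascade framework [6], not details given in the text; the final cascading step e) is omitted.

```python
import numpy as np

def train_adaboost(features: np.ndarray, labels: np.ndarray, T: int):
    """Train T weak classifiers (decision stumps), following steps a)-d).

    features: (n_samples, n_features) matrix of feature values (e.g. Haar-like
              responses computed from the integral image).
    labels:   array of 0 (negative) / 1 (positive) sample labels.
    Returns a list of (feature_index, threshold, polarity, alpha) stumps.
    """
    n, d = features.shape
    pos, neg = labels.sum(), n - labels.sum()
    # b) initialize weights from the positive/negative sample counts
    w = np.where(labels == 1, 1.0 / (2 * pos), 1.0 / (2 * neg)).astype(float)
    strong = []
    for _ in range(T):                      # c) for t = 1 .. T
        w /= w.sum()                        # normalize the weights
        best = None
        for j in range(d):                  # weighted error of each feature's stump
            thr = features[:, j].mean()     # crude threshold choice (assumption)
            for polarity in (1, -1):
                pred = (polarity * features[:, j] < polarity * thr).astype(int)
                err = np.sum(w * (pred != labels))
                if best is None or err < best[0]:
                    best = (err, j, thr, polarity, pred)
        err, j, thr, polarity, pred = best  # weak classifier with smallest error
        beta = (err + 1e-12) / (1.0 - err + 1e-12)
        w *= beta ** (1 - (pred != labels)) # down-weight correctly classified samples
        alpha = np.log(1.0 / beta)
        strong.append((j, thr, polarity, alpha))
    return strong

def predict(strong, x: np.ndarray) -> int:
    """d) weighted vote: 1 (positive) if the vote exceeds half the total alpha."""
    votes = sum(a * (p * x[j] < p * thr) for j, thr, p, a in strong)
    return int(votes >= 0.5 * sum(a for _, _, _, a in strong))
```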

Figures 3 and 4 show the detection results in day mode. Some parts of the face, such as the eyebrows, nostrils and mouth, have rectangular characteristics similar to the eyes, which leads to a high risk of false detections. In this paper we use the geometric features of the eyes to further select among the candidate rectangles and locate the eyes precisely.

Extraction of eye characteristic parameters

After extracting the eye contour points, we need to calculate the eye characteristic parameters. Based on an analysis of the eye characteristics used for state recognition, we calculate characteristic parameters such as the aspect ratio and the eyelid curvature.
a) Eye height-to-width ratio (EyeHWRate)
During eye feature extraction, as shown in Figure 5, the eye height (EyeHeight) and width (EyeWidth) are measured to calculate the height-to-width ratio of the eye, the EyeHWRate.
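A small sketch of this measurement is given below. Note that the closed/open thresholds quoted later (270 and 400) suggest the ratio is stored in scaled integer form; the x1000 scaling used here is an assumption made to keep those thresholds plausible, not something the paper states:

```python
import numpy as np

def eye_hw_rate(eye_contour: np.ndarray) -> float:
    """Height-to-width ratio of an eye from its contour points.

    eye_contour: (N, 2) array of (x, y) contour points of one eye.
    Returns EyeHeight / EyeWidth scaled by 1000 (the scaling is an assumption).
    """
    xs, ys = eye_contour[:, 0], eye_contour[:, 1]
    eye_width = xs.max() - xs.min()
    eye_height = ys.max() - ys.min()
    return 1000.0 * eye_height / max(eye_width, 1)
```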

b) Upper eyelid curvature
From images of the eyelid edges it can be seen that, while the eye is opening or closing, the curvature of the upper eyelid changes a lot, whereas the curvature of the lower eyelid remains basically unchanged. This is why we choose the upper eyelid curvature as a feature parameter. During eye processing, pixel errors occur at the left and right edges of the eye, so we indent the eyelid by five pixels on each side.
As shown in Figure 6, we move from A to C and from B to D and take the intermediate portion of the eye edge curve, because the curvature of the eyelid located there reflects the eye state more accurately; E is the midpoint of the extracted eye contour points. We find the first white pixel from the leftmost and rightmost sides, and then take the pixel value at the midpoint between the two: if it is 0, the eyelid is judged to be concave; otherwise it is convex.
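The concave/convex test described above can be sketched as follows on a binary (0/255) eyelid-edge image; reducing the test to a single row across the upper eyelid, and treating the leftmost and rightmost white pixels as the two reference points, are assumptions about details the paper leaves to Figure 6:

```python
import numpy as np

def upper_eyelid_is_concave(edge_row: np.ndarray, indent: int = 5) -> bool:
    """Judge the upper eyelid as concave (True) or convex (False).

    edge_row: 1-D binary row (0 = background, 255 = edge pixel) taken across
    the upper eyelid edge. The row is indented by `indent` pixels on each side
    to avoid the unreliable pixels at the eye corners.
    """
    row = edge_row[indent:len(edge_row) - indent]
    white = np.flatnonzero(row > 0)
    if white.size < 2:
        return False                      # not enough edge pixels to decide
    left, right = white[0], white[-1]     # first white pixel from each side
    mid = (left + right) // 2
    return row[mid] == 0                  # 0 at the midpoint -> recess (concave)
```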

Fatigue judgment
Based on the analysis above, the eye is regarded as closed when EyeHWRate < 270. When EyeHWRate is between 270 and 400, the eye state is decided by the curvature: if the eyelid is judged concave and EyeHWRate < 400, the eye is regarded as closed or squinting; otherwise, if the eyelid is convex and EyeHWRate > 400, the eye is open.
We record the number of times the eyes open and close, together with the start and end times, and from these we calculate the PERCLOS value to judge the driver's fatigue. If the system determines that the driver is in a fatigued state, a voice warning is issued.
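PERCLOS is the proportion of time the eyes are closed within a sliding window, so the bookkeeping can be sketched as below. The window length and the alarm threshold used here are illustrative assumptions; the paper only states that PERCLOS (with the P80 closure criterion) is used and that a voice warning is triggered in the fatigued state:

```python
from collections import deque

class PerclosMonitor:
    """Track the proportion of 'closed' frames over a sliding window."""

    def __init__(self, window_frames: int = 750, alarm_threshold: float = 0.4):
        # 750 frames (about 30 s at 25 fps) and a 0.4 threshold are assumptions.
        self.window = deque(maxlen=window_frames)
        self.alarm_threshold = alarm_threshold

    def update(self, eye_closed: bool) -> bool:
        """Add one frame's eye state; return True if a fatigue warning is due."""
        self.window.append(1 if eye_closed else 0)
        perclos = sum(self.window) / len(self.window)
        return perclos >= self.alarm_threshold
```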

Process in night mode
For images collected in the dark at night, the human eye shows the red-eye (bright-pupil) effect under infrared illumination, so we use the infrared inter-frame difference algorithm to locate the eyes. A program-controlled infrared light source with on-axis and off-axis groups is installed around the CCD camera. The on-axis light produces a bright pupil when the odd frames are captured, as shown in Figure 7(a). In the same way, the off-axis light produces a dark pupil when the even frames are captured, as shown in Figure 7(b). Since the variation between consecutive odd and even frames is very small except for the pupils, which vary greatly, the eyes can be located in the video sequence with the frame-difference method.
In image processing, the opening and closing operations of binary mathematical morphology are usually used to process the segmented image [9]. After the opening operation, dilation and erosion are used to remove isolated points and burrs, and finally the area occupied by the eye is calculated. In a laboratory environment the area threshold is set to 50: when the area is less than 50 the eyes are judged closed; otherwise they are open. Figure 8 shows the processing result for closed eyes in night mode.
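Here is a rough sketch of this night-mode pipeline with OpenCV, assuming a paired bright-pupil (odd) and dark-pupil (even) frame are already available in grayscale; the binarization threshold and structuring-element size are assumptions, while the area threshold of 50 comes from the text:

```python
import cv2
import numpy as np

AREA_THRESHOLD = 50  # pixel-area threshold for open vs. closed eyes (from the paper)

def eyes_open_at_night(odd_frame: np.ndarray, even_frame: np.ndarray) -> bool:
    """Locate the pupils by frame difference and decide open/closed by area.

    odd_frame / even_frame: grayscale bright-pupil and dark-pupil images.
    """
    diff = cv2.absdiff(odd_frame, even_frame)            # pupils dominate the difference
    _, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # threshold is an assumption
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    # opening (erosion then dilation) removes isolated points and burrs
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    pupil_area = int(cv2.countNonZero(cleaned))
    return pupil_area >= AREA_THRESHOLD                  # small area -> eyes closed
```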

Auxiliary warning processing
Fatigue is reflected to a large extent in the eyes, but not entirely; the mouth also shows a reaction. In this paper the LBP algorithm is used to detect yawning. The LBP operator is computed as follows: gc denotes the gray value of the center pixel of a local region, and g0 to gP-1 denote the gray values of its P neighbouring pixels sampled on a circle of radius R centered at gc. The operator is gray-scale invariant: a monotonic illumination change does not alter the ordering of the local pixel values, so the value produced by the operator does not change either.
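For reference, a small sketch of the standard circular LBP operator described above (code = sum over p of s(gp - gc) * 2^p, with s(x) = 1 for x >= 0 and 0 otherwise) is given below; neighbour coordinates are simply rounded rather than bilinearly interpolated, which is a simplification rather than the exact formulation used in the LBP literature:

```python
import numpy as np

def lbp_value(image: np.ndarray, y: int, x: int, P: int = 8, R: int = 1) -> int:
    """Circular LBP code of the pixel (y, x) in a grayscale image.

    Compares the P neighbours on a circle of radius R with the center gray
    value gc and packs the sign bits into an integer. The caller must keep
    (y, x) at least R pixels away from the image border.
    """
    gc = int(image[y, x])
    code = 0
    for p in range(P):
        angle = 2.0 * np.pi * p / P
        ny = int(round(y - R * np.sin(angle)))   # rounded instead of interpolated
        nx = int(round(x + R * np.cos(angle)))
        gp = int(image[ny, nx])
        code |= (1 << p) if gp >= gc else 0
    return code
```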

Once the eyes have been located, we need to detect the occurrence of a yawn [10]. First, the mouth region is located according to the distribution of facial organs: the horizontal distance between the eye centers is approximately equal to the vertical distance from the eyes to the mouth, and the width of the mouth is roughly equal to the horizontal distance between the eye centers. From these proportions we roughly determine the mouth region, and then apply the LBP algorithm to it to test for yawning. Based on an empirical analysis of a number of sample mouth-opening states, the mouth is considered to be yawning when its degree of opening exceeds 70, so the degree of fatigue can be judged from the duration and frequency of yawning. Figure 9 shows the yawning detection result.
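The proportions described above translate directly into a rectangle below the eyes. A sketch is given here; the exact margins (centering the box on the mouth midline and giving it a height of half the inter-eye distance) are assumptions used only to produce a workable region of interest:

```python
import numpy as np

def mouth_region(left_eye: tuple, right_eye: tuple) -> tuple:
    """Estimate a mouth bounding box (x, y, w, h) from the two eye centers.

    Uses the rules from the text: the eye-to-mouth vertical distance and the
    mouth width are both roughly the horizontal distance between eye centers.
    """
    (lx, ly), (rx, ry) = left_eye, right_eye
    d = float(np.hypot(rx - lx, ry - ly))       # distance between eye centers
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # midpoint between the eyes
    mouth_cx, mouth_cy = cx, cy + d             # mouth lies about d below the eyes
    w, h = d, 0.5 * d                           # box height is an assumption
    return (int(mouth_cx - w / 2), int(mouth_cy - h / 2), int(w), int(h))
```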

Experimental results
To verify the effect of the fatigue detection, we simulated monitoring of drivers in a fatigued state. Laboratory personnel were then tested under different lighting conditions (morning, afternoon, evening) and in different states (awake and fatigued). The accuracy of the discrimination results is over 90%; the detailed test results are shown in Table 1.

Conclusion
This paper introduces a driver fatigue warning method that obtains a robust classifier by AdaBoost training to locate the eyes and judges the eye state from the height-to-width ratio and the eyelid curvature. In night mode, the infrared frame-difference method locates the eyes accurately, and the fatigue level is then determined from the PERCLOS value. Moreover, by detecting yawning with the LBP algorithm we can further judge the degree of fatigue. The proposed system has the advantages of low computational cost, high accuracy, strong practicality, non-contact operation, high sensitivity and high reliability, and it can improve the effectiveness and accuracy of fatigue detection.

References

[1] M. Jia, et al., "Research on driver's face detection and position method based on image processing," in 2012 24th Chinese Control and Decision Conference, CCDC 2012, May 23-25, 2012, Taiyuan, China, 2012, pp. 1954-1959.
[2] SONG Zhumei, WANG Hailin, LIU Hanhui, "Drive fatigue monitoring and identification methods," Journal of Shenzhen Institute of Information Technology, 2011, pp. 38-42.
[3] Q. Ji, et al., "Real-time nonintrusive monitoring and prediction of driver fatigue," IEEE Transactions on Vehicular Technology, vol. 53, pp. 1052-1068, 2004.
[4] Q. Yangon, et al., "A novel real-time face tracking algorithm for detection of driver fatigue," in 2010 International Symposium on Intelligent Information Technology and Security Informatics, IITSI 2010, April 2-4, 2010, China, 2010, pp. 671-676.
[5] L. M. Bergasa and J. Nuevo, "Real-time system for monitoring driver vigilance," in IEEE International Symposium on Industrial Electronics 2005, ISIE 2005, June 20-23, 2005, Dubrovnik, Croatia, 2005, pp. 1303-1308.
[6] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, December 8-14, 2001, Kauai, HI, United States, 2001, pp. I511-I518.
[7] P. Campadelli, et al., "Precise eye and mouth localization," International Journal of Pattern Recognition and Artificial Intelligence, vol. 23, pp. 359-377, 2009.
[8] J. Wu, et al., "Fast asymmetric learning for cascade face detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 369-382, 2008.
[9] T. Ahonen, et al., "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 2037-2041, 2006.
[10] X. Fan, et al., "Yawning detection for monitoring driver fatigue," in 6th International Conference on Machine Learning and Cybernetics, ICMLC 2007, August 19-22, 2007, Hong Kong, China, 2007, pp. 664-668.
[11] M. Balasubramanian, S. Palanivel, V. Ramalingam, "Real time face and mouth recognition using radial basis function neural networks," Expert Systems with Applications (2008), doi:10.1016/j.eswa.2008.08.001.
[12] P. S. Rau, "Drowsy driver detection and warning system for commercial vehicle drivers: field operational test design, data analyses, and progress," Proceedings of the 19th International Conference on Enhanced Safety of Vehicles, Washington, DC, June 6-9, 2005.
[13] S. Park, M. Trivedi, "Driver activity analysis for intelligent vehicles: issues and development framework," Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, USA, June 2005, pp. 795-800.
[14] G. Yang, Y. Lin, P. Bhattacharya, "A driver fatigue recognition model using fusion of multiple features," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Hawaii, USA, vol. 2, October 2005, pp. 1777-1784.
[15] Z. Zhu, Q. Ji, P. Lan, "Real time non intrusive monitoring and prediction of driver fatigue," IEEE Transactions on Vehicular Technology, 53 (4) (2004), pp. 1052-1068.
Translation:

The Driver Fatigue Monitoring System Based on Face Recognition Technology
Xiao-qing Luo, Rong Hu, Tian-e Fan

Abstract
In this paper, two different algorithms, the AdaBoost algorithm and the infrared inter-frame difference algorithm, are used to locate the precise position of the eyes under the different lighting conditions of driving. We identify the eye state by extracting the characteristic parameters of the eyes and detect driver fatigue on the basis of the PERCLOS method. At the same time, to monitor the driver's degree of fatigue further, we use the Local Binary Pattern (LBP) algorithm to detect yawning as an auxiliary check. The experimental results show that the algorithm keeps the system accurate and meets the requirements of non-contact operation, different lighting conditions and real-time monitoring.

Keywords: driver fatigue; yawning detection; eye detection; PERCLOS; AdaBoost.

Introduction
Fatigued driving is currently one of the main causes of traffic accidents [1]. It is reported that 3,906,164 traffic accidents occurred in our country in 2010. Among them, 92% of the accident deaths were caused by motor vehicle drivers speeding, while the proportion of deaths caused by fatigued driving rose by 1% [2]. There are already some methods for monitoring fatigued driving, such as monitoring the head position, EEG, EKG, eye blinking, PERCLOS and so on [3]. As a non-contact method, PERCLOS can measure the state of the eyelids and the degree of fatigue when people are drowsy, but it is easily affected by illumination. This paper analyses the brightness of the images captured by a CCD camera to decide whether the environment is daytime or nighttime. We use the AdaBoost algorithm to locate the face and eyes in the daytime, and the infrared inter-frame difference method to detect the eyes at night [4]. We judge the state of the driver's eyes by extracting characteristic parameters, including the height-to-width ratio and the eyelid curvature, and then use the PERCLOS method to measure the driver's degree of fatigue. In addition, after locating the driver's eyes, we locate the mouth according to the facial features and then use the LBP algorithm to detect yawning and judge the driver's fatigue state. Research on driver fatigue systems is of great significance for preventing traffic accidents [5].

System framework

The framework of the driver fatigue warning system can be seen in Figure 1. It includes a CCD sensor camera with a near/far-controllable infrared light source, an image pre-processing module, a fatigue detection module and an alarm device [6]. The CCD camera captures the driver's face image. The image pre-processing module performs histogram equalization and smoothing of the image. The fatigue detection module calculates the degree of eye closure by extracting the characteristic parameters of the eyes, and uses the PERCLOS method with the P80 criterion (the eye is judged closed when more than 80% of it is covered) to detect the driver's degree of fatigue. The alarm device determines whether the driver is driving while fatigued and, if so, issues a warning signal [7].

Figure 1. System framework (infrared CCD camera and control unit, image processing module, fatigue detection module, alarm device)

Image processing
The image processing system is shown in Figure 2. The system decides whether an image was taken in the daytime or at night according to its brightness. When the brightness is greater than a certain threshold (45), the image is treated as a daytime image and the daytime-mode algorithm is used for processing; otherwise the night-mode processing algorithm is used.

Figure 2. Algorithm flow chart

Image pre-processing
If detection is performed directly on the captured images, which are affected by illumination, background, noise and other factors, the applicability of the algorithm and the accuracy of detection are also affected, so we must pre-process these images. To distinguish the foreground from the background effectively, lighting compensation and histogram equalization are performed first in the image pre-processing stage to improve the contrast of the image. Meanwhile, to eliminate the influence of noise, median filtering not only removes isolated noise points but also effectively protects the edge information of the image; in addition, a distance-controllable infrared CCD camera can be used to capture images under low-light conditions [7].

Processing in day mode

Eye location and face detection
This paper uses a face detection method called AdaBoost to detect the eyes and face with rectangular features. The method consists of the integral image, the AdaBoost algorithm and a cascade detector [8], and it can handle the complexity of the face detection problem well. At the same time, experiments show that AdaBoost offers good real-time performance and efficient detection. By weighted voting, AdaBoost can cascade weak classifiers into a strong classifier. The steps are as follows:
a) Input the sample data N: (x1, y1), (x2, y2), ..., (xn, yn);
b) Initialize the weight parameters according to the numbers of positive and negative samples;
c) For t from 1 to T: first normalize the weight parameters; for the classifier hj of each feature j, calculate the error rate under the current weights and then choose the weak classifier with the smallest error rate as classifier t; finally, update the corresponding weight of each sample;
d) Output the final strong classifier, in which 0 denotes negative data and 1 denotes positive data;
e) Finally, combine all the strong classifiers into a cascade detector.
Figures 3 and 4 show the detection results in day mode. Because some parts of the face, such as the eyebrows, nostrils and mouth, have rectangular characteristics similar to the eyes, there is a high risk of false detections. In this paper, we use the geometric features of the eyes to further select the rectangles and locate the eyes precisely.

Figure 3. Open-eye processing in day mode
Figure 4. Closed-eye processing in day mode

Extracting eye characteristic parameters
After extracting the eye contour points, we need to calculate the eye characteristic parameters. On the basis of analysing the eye characteristics used for state recognition, characteristic parameters such as the aspect ratio and the eyelid curvature are calculated.
a) Eye height-to-width ratio (EyeHWRate)

Figure 5. Eye height-to-width ratio

During eye feature extraction, as shown in Figure 5, the eye height (EyeHeight) and width (EyeWidth) are measured to calculate the height-to-width ratio of the eye, the EyeHWRate.
b) Upper eyelid curvature
From the images of the eyelid edges it can be seen that, during eye opening and closing, the curvature of the upper eyelid changes greatly while the curvature of the lower eyelid remains basically unchanged. This is why we choose the upper eyelid curvature as a feature parameter. In the course of processing the eyes, pixel errors occur on the left and right sides of the eye, so we need to indent the eyelid by five pixels.
As shown in Figure 6, going from A to C and from B to D, we take the middle part of the eye edge curve, because the eye state is reflected more accurately by the curvature of the eyelid located there; E is the center of the extracted eyelid contour points. We find the first white pixel from the leftmost and rightmost sides, and then take the pixel value corresponding to the midpoint between the two: if we get 0, the eyelid is judged to be concave; otherwise it is convex.

Figure 6. Upper eyelid curvature

Fatigue judgment
As analysed above, when the eye height-to-width ratio is below 270, the eye can be regarded as closed. When the ratio is between 270 and 400, whether the eye is closed is decided by the curvature: if the eyelid is judged concave and the ratio is below 400, the eye is regarded as closed or squinting; otherwise, if the eyelid is convex and the ratio is above 400, the eye is judged to be open.
We record the number of times the eyes open and close, as well as the start and end times of each opening and closing. From these we can calculate the PERCLOS value to judge the driver's degree of fatigue. If the system determines that the driver is in a fatigued state, a voice warning is issued.

Processing in night mode
If the images collected at night are too dark, the human eye is affected by the red-eye effect, so we use the infrared inter-frame difference algorithm to locate the eyes. A controllable infrared light source with on-axis and off-axis groups is installed around the CCD camera. The on-axis light produces a bright pupil when the odd frames are captured under program control, as shown in Figure 7(a). Likewise, the off-axis light produces a dark pupil when the even frames are captured, as shown in Figure 7(b). Considering that the variation between the odd and even frame images is minimal and only the pupils vary greatly, the eye image sequence can be determined with the frame-difference method.

Figure 7. Open-eye processing in night mode

In image processing, the opening and closing operations of binary mathematical morphology are usually used to process the segmented image [9]. After the opening operation, we use image processing methods including dilation and erosion to remove isolated points and burrs. Finally, the area occupied by the eye is calculated. In a laboratory environment, the area threshold is set to 50: when the area is less than 50 the eyes are closed; otherwise the eyes are open. Figure 8 shows the result of closed-eye processing in night mode.

Figure 8. Closed-eye processing in night mode

Auxiliary warning processing
A large part of fatigue is reflected in the eyes, but not all of it; we can also watch the reaction of the mouth. In this paper, the LBP algorithm is used to detect yawning. The LBP operator is as follows: gc denotes the gray value of the center pixel of the local region, and g0 to gP-1 denote the gray values of the neighbouring pixels on a circle of radius R centered at gc. The operator is gray-scale invariant: illumination changes alone do not change the order of the local pixel values, and the value obtained by the operator therefore remains the same.

Figure 9. Yawning detection

Once the eyes have been located, we need to detect the occurrence of yawning [10]. First, we can locate the mouth region according to the distribution of the facial organs: the horizontal distance between the eye centers is approximately equal to the vertical distance from the eyes to the mouth, and the width of the mouth is roughly equivalent to the horizontal distance between the eye centers. Therefore, according to these proportions, we roughly determine the mouth region and then apply the LBP algorithm to further test for yawning. Based on an empirical analysis of a number of sample mouth-opening states, we consider the mouth to be yawning when the degree of opening exceeds 70. Thus, we can determine the degree of fatigue from the duration and frequency of yawning. Figure 9 shows the yawning detection result.

Experimental results
To verify the effect of the fatigued-driving test, we simulated monitoring of a driver in a fatigued state. Laboratory personnel were then tested under different lighting conditions (morning, afternoon, evening) and in different states (awake and fatigued). The accuracy of the discrimination results is over 90%; the detailed test results are shown in Table 1.

Table 1. Fatigue detection results

Conclusion
This paper presents a warning method for fatigued driving. Based on training with the AdaBoost algorithm, a robust classifier is obtained to locate the human eyes, and the state of the eyes is judged by calculating the height-to-width ratio and the eyelid curvature. In night mode, the infrared frame-difference method can locate the eyes accurately, and the fatigue level is then determined from the PERCLOS value. Moreover, by detecting yawning with the LBP algorithm, we can further judge the degree of fatigue. The system proposed in this paper has the advantages of low computational cost, high accuracy, strong practicality, non-contact operation, high sensitivity and high reliability, and it can improve the effectiveness and accuracy of fatigue detection.
