Robust regression for face recognition
Imran Naseem a,*,1, Roberto Togneri b, Mohammed Bennamoun c
a College of Engineering, Karachi Institute of Economics and Technology (KIET), Karachi 75190, Pakistan
b School of EE …

$\hat{y} = X\hat{\beta}$   (2)

$\hat{y}$ being the predicted response variable. In classical statistics the error term $e$ is conventionally taken as zero-mean Gaussian noise [40]. A traditional method to optimize the regression is to minimize the least squares (LS) problem

$\arg\min_{\hat{\beta}} \sum_{j=1}^{q} r_j^2(\hat{\beta})$   (3)

where $r_j(\hat{\beta})$ is the $j$th component of the residual vector $r$.
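As a concrete illustration of Eqs. (2)-(3), the sketch below (mine, not the authors'; the data and dimensions are made up) computes the least-squares estimate with NumPy:

```python
import numpy as np

# Illustrative data, not from the paper: q observations, p predictors.
rng = np.random.default_rng(0)
q, p = 100, 5
X = rng.normal(size=(q, p))
beta_true = rng.normal(size=p)
y = X @ beta_true + 0.1 * rng.normal(size=q)   # zero-mean Gaussian error term e

# Least-squares estimate of Eq. (3): minimize the sum of squared residuals.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

r = y - X @ beta_hat            # residual vector r
print(float(np.sum(r ** 2)))    # the minimized objective of Eq. (3)
```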

However, in the presence of outliers, least squares estimation is inefficient and can be biased. Although it has been claimed that classical statistical methods are robust, they are only robust in the sense of type I error. A type I error corresponds to the rejection of the null hypothesis when it is in fact true. It is straightforward to note that the type I error rate of classical approaches in the presence of outliers tends to be lower than the nominal value; this is often referred to as the conservatism of classical statistics. With contaminated data, however, the type II error increases drastically. A type II error occurs when the null hypothesis is not rejected although it is in fact false; this drawback is often referred to as the inadmissibility of the classical approaches. Additionally, classical statistical methods are known to perform well under the homoskedastic data model. In many real scenarios, however, this assumption does not hold and heteroskedasticity is indispensable, thereby emphasizing the need for robust estimation.

Several approaches to robust estimation have been proposed, such as R-estimators and L-estimators. However, M-estimators have shown superiority due to their generality and high breakdown point [29,40]. Primarily, M-estimators are based on minimizing a function of the residuals

$\hat{\beta} = \arg\min_{\hat{\beta} \in \mathbb{R}^{p}} \left\{ F(\hat{\beta}) = \sum_{j=1}^{q} \rho\big(r_j(\hat{\beta})\big) \right\}$   (4)

where $\rho(\cdot)$ is the Huber function

$\rho(r) = \begin{cases} \tfrac{1}{2}\, r^2, & \lvert r \rvert \le \gamma \\ \gamma \lvert r \rvert - \tfrac{1}{2}\, \gamma^2, & \lvert r \rvert > \gamma \end{cases}$   (5)

$\gamma$ being a tuning constant called the Huber threshold. Many algorithms have been developed for calculating the Huber M-estimate in Eq. (4); some of the most efficient are based on Newton's method [41]. M-estimators have been found to be robust and statistically efficient compared to classical methods [42–44].
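The paper points to Newton-type solvers [41] for the Huber M-estimate. The sketch below is not the authors' implementation; it uses iteratively reweighted least squares (IRLS), a common alternative, and assumes the residuals are already on a sensible scale (gamma = 1.345 is a conventional default; a robust scale estimate such as the MAD is omitted for brevity):

```python
import numpy as np

def huber_m_estimate(X, y, gamma=1.345, n_iter=100, tol=1e-8):
    """Huber M-estimate of Eqs. (4)-(5) via iteratively reweighted
    least squares (IRLS); gamma is the Huber threshold."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # LS starting point
    for _ in range(n_iter):
        r = y - X @ beta                               # residual vector
        # Huber IRLS weights: 1 inside the threshold, gamma/|r| outside,
        # so large residuals are downweighted instead of squared.
        w = np.where(np.abs(r) <= gamma, 1.0, gamma / np.maximum(np.abs(r), 1e-12))
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * y))   # weighted LS step
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta
```

With outlier-contaminated observations, an estimate of this form typically tracks the clean data much more closely than the plain LS fit shown earlier.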

Although robust methods are, in general, superior to their classical counterparts, they have rarely been adopted in applied fields [31,40]. Several reasons for this paradox are discussed in [40]; the computational expense of robust methods has been a major hindrance [42]. With recent developments in computational power, however, this reason has become insignificant. The reluctance to use robust regression methods may also be credited to the belief of many statisticians that classical methods are robust.

3. Robust Linear Regression Classification (RLRC) for robust face recognition

Consider $N$ distinct classes, with $p_i$ training images from the $i$th class, $i = 1, 2, \ldots, N$. Each grayscale training image is of order $a \times b$ and is represented as $u_i(m) \in \mathbb{R}^{a \times b}$, $i = 1, 2, \ldots, N$, $m = 1, 2, \ldots, p_i$. Each gallery image is downsampled to order $c \times d$ and transformed to a vector through column concatenation, $u_i(m) \in \mathbb{R}^{a \times b} \rightarrow w_i(m) \in \mathbb{R}^{q \times 1}$, where $q = cd$ and $cd \ll ab$. Each image vector is normalized so that the maximum pixel value is 1. Using the concept that patterns from the same class lie on a linear subspace [1], we develop a class-specific model $X_i$ by stacking the $q$-dimensional image vectors:

$X_i = [\, w_i(1) \; w_i(2) \; \cdots \; w_i(p_i) \,] \in \mathbb{R}^{q \times p_i}, \quad i = 1, 2, \ldots, N$   (6)

Each vector $w_i(m)$, $m = 1, 2, \ldots, p_i$, spans a subspace of $\mathbb{R}^q$, also called the column space of $X_i$. Therefore, at the training level, each class $i$ is represented by a vector subspace $X_i$, which is also called the regressor or predictor for class $i$.
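A minimal sketch of this gallery construction (my own illustration; the downsampled size c x d, the resampling method, and the `gallery` container are assumptions not given in this excerpt):

```python
import numpy as np

def build_class_model(images, c=20, d=20):
    """Build the class-specific predictor X_i of Eq. (6): each column is a
    downsampled, column-concatenated, max-normalized gallery image."""
    columns = []
    for img in images:                                  # img: a x b grayscale array
        a, b = img.shape
        # Plain strided downsampling to c x d (an assumption; the paper does
        # not specify the resampling method in this excerpt).
        small = img[:: max(1, a // c), :: max(1, b // d)][:c, :d]
        w = small.flatten(order="F").astype(float)      # column concatenation, q = c*d
        w /= max(w.max(), 1e-12)                        # maximum pixel value becomes 1
        columns.append(w)
    return np.stack(columns, axis=1)                    # X_i in R^{q x p_i}

# One regressor per class i = 1, ..., N (`gallery` is a hypothetical list of lists):
# class_models = [build_class_model(gallery[i]) for i in range(N)]
```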

Let $z$ be an unlabeled test image; our problem is to classify $z$ as one of the classes $i = 1, 2, \ldots, N$. We transform and normalize the grayscale image $z$ to an image vector $y \in \mathbb{R}^{q \times 1}$ as discussed for the gallery. If $y$ belongs to the $i$th class, it should be represented as a linear combination of the training images from the same class (lying in the same subspace), i.e.

$y = X_i \beta_i + e, \quad i = 1, 2, \ldots, N$   (7)

where $\beta_i \in \mathbb{R}^{p_i \times 1}$. From the perspective of face recognition, the training of the system corresponds to the development of the explanatory variable $X_i$, which is normally done in a controlled environment; the explanatory variable can therefore safely be regarded as noise-free. The issue of robustness comes into play when a given test pattern is contaminated with noise, which may arise due to luminance, malfunctioning of the sensor, channel noise, etc. Given that $q \ge p_i$, the system of equations in Eq. (7) is well-conditioned, and $\beta_i$ is estimated using the robust Huber estimation discussed in Section 2 [30]:

$\hat{\beta}_i = \arg\min_{\hat{\beta}_i \in \mathbb{R}^{p_i}} \left\{ F(\hat{\beta}_i) = \sum_{j=1}^{q} \rho\big(r_j(\hat{\beta}_i)\big) \right\}, \quad i = 1, 2, \ldots, N$   (8)

where $r_j(\hat{\beta}_i)$ is the $j$th component of the residual

$r(\hat{\beta}_i) = y - X_i \hat{\beta}_i, \quad i = 1, 2, \ldots, N$   (9)

The estimated vector of parameters $\hat{\beta}_i$, along with the predictor $X_i$, is used to predict the response vector for each class $i$:

$\hat{y}_i = X_i \hat{\beta}_i, \quad i = 1, 2, \ldots, N$   (10)

We now calculate the distance measure between the predicted response vector $\hat{y}_i$, $i = 1, 2, \ldots, N$, and the original response vector $y$,

$d_i(y) = \lVert y - \hat{y}_i \rVert_2, \quad i = 1, 2, \ldots, N$   (11)

and rule in favor of the class with minimum distance, i.e.

$\min_i d_i(y), \quad i = 1, 2, \ldots, N$   (12)
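Putting Eqs. (8)-(12) together, a compact sketch of the RLRC decision rule (again my own illustration; it reuses the hypothetical `huber_m_estimate` from the earlier IRLS sketch rather than the Newton-based solver the authors reference):

```python
import numpy as np

def rlrc_classify(y, class_models, gamma=1.345):
    """RLRC decision rule of Eqs. (8)-(12): robustly regress y on every
    class model X_i, predict y_hat_i, and pick the class whose predicted
    response is closest to y in the l2 sense.
    Reuses huber_m_estimate() from the IRLS sketch in Section 2."""
    distances = []
    for X_i in class_models:                            # X_i in R^{q x p_i}
        beta_i = huber_m_estimate(X_i, y, gamma)        # Eq. (8), Huber M-estimate
        y_hat_i = X_i @ beta_i                          # Eq. (10), predicted response
        distances.append(np.linalg.norm(y - y_hat_i))   # Eq. (11), distance d_i(y)
    return int(np.argmin(distances))                    # Eq. (12), minimum-distance rule
```

The test vector `y` is assumed to have been downsampled, column-concatenated and max-normalized exactly as the gallery images in `build_class_model`.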

4. Case study: face recognition in the presence of severe illumination variations

The proposed RLRC algorithm is extensively evaluated on various databases incorporating several modes of luminance variation. In particular, we address three standard databases, namely the Yale Face Database B [18], the CMU-PIE database [45] and the AR database [46]. For all experiments, images are histogram equalized and transformed to the logarithm domain.
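This preprocessing can be sketched as follows (my own code; the CDF-based equalization and the +1 offset before the logarithm are assumptions, since this excerpt gives no implementation details):

```python
import numpy as np

def preprocess(img):
    """Histogram-equalize an 8-bit grayscale image and map it to the
    logarithm domain, as done for all experiments in Section 4."""
    # Histogram equalization via the cumulative distribution function (CDF).
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    equalized = cdf[img.astype(np.uint8)]
    # Logarithm transform; log1p(x) = log(1 + x) avoids log(0) (assumption).
    return np.log1p(equalized)
```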

4.1. Yale Face Database B

The Yale Face Database B consists of 10 individuals with 9 poses, incorporating 64 different illumination alterations for each …
