
Slide 1: A Geometric Perspective on Machine Learning
何曉飛 (Xiaofei He), College of Computer Science, Zhejiang University

Slide 2: Machine Learning: the problem
From information (training data), learn a function f: X → Y. X and Y are usually considered to be Euclidean spaces.

Slide 3: Manifold Learning: a geometric perspective
The data space may not be a Euclidean space, but a nonlinear manifold.

Slide 4: Manifold Learning: the challenges
The manifold M is unknown; we have only samples. How do we know whether M is a sphere, a torus, or something else?

How do we compute distances on M? We know only the sample points; the manifold itself is unknown, and with it the tools we would need: its topology, its geometry, its functional analysis.

Slide 5: Manifold Learning: the current solution
Find a Euclidean embedding, and then perform traditional learning algorithms in the Euclidean space.

Slides 6-8: Simplicity
(Figures: what counts as a "simple" structure. Simplicity is relative.)

Slide 9: Manifold-based Dimensionality Reduction
Given high-dimensional data sampled from a low-dimensional manifold, how do we compute a faithful embedding? How do we find the mapping function? How do we efficiently find the projective function?

Slide 10: A Good Mapping Function
If xi and xj are close to each other, we hope that f(xi) and f(xj) preserve the local structure (distance, similarity, ...). Build a k-nearest-neighbor graph and minimize an objective function over it; different algorithms have different concerns.

Slide 11: Locality Preserving Projections
Principle: if xi and xj are close, then their maps yi and yj are also close.

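Slide 4 asks how to compute distances on M, and slide 10 introduces the k-nearest-neighbor graph. One standard answer, which the slides themselves do not name, is the Isomap idea: approximate geodesic distance by shortest paths in that graph. A minimal pure-Python sketch; the graph construction and the point set in the usage note are invented for illustration:

```python
import heapq
import math

def knn_graph(points, k):
    """Adjacency list connecting each point to its k nearest neighbors."""
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        # Sort the other points by Euclidean distance; skip the point itself.
        nbrs = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))[1:k + 1]
        for j in nbrs:
            w = math.dist(points[i], points[j])
            adj[i].append((j, w))
            adj[j].append((i, w))  # symmetrize the graph
    return adj

def geodesic(adj, src, dst):
    """Dijkstra shortest-path length: an approximation to geodesic distance."""
    pq, seen = [(0.0, src)], {}
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen[u] = d
        if u == dst:
            return d
        for v, w in adj[u]:
            if v not in seen:
                heapq.heappush(pq, (d + w, v))
    return math.inf  # dst unreachable in this graph
```

On points sampled from a circle, the graph distance between antipodal points follows the arc and so exceeds the straight-line (chord) distance, which is exactly the manifold-versus-ambient-space gap the slide is pointing at.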
Slides 12-13: Locality Preserving Projections
Principle: if xi and xj are close, then their maps yi and yj are also close.
Mathematical formulation: minimize the integral of the squared gradient of f, ∫_M ‖∇f‖². By Stokes' theorem this equals ∫_M f · Δf, whose discrete analogue is the graph-Laplacian quadratic form (1/2) Σ_ij (f(xi) − f(xj))² Wij.

Slide 14: LPP finds a linear approximation to the nonlinear manifold while preserving the local geometric structure.

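In the linear case, the discrete LPP objective Σ_ij (yi − yj)² Wij with y = Xa reduces to the generalized eigenvalue problem X⊤LXa = λX⊤DXa; this is the standard LPP derivation, but the binary neighbor weights and the stability jitter below are illustrative choices, not from the slides:

```python
import numpy as np

def lpp(X, n_neighbors=3, n_components=1):
    """Locality Preserving Projections, dense and minimal.

    X has one sample per row. Returns the projection matrix whose
    columns solve X^T L X a = lam X^T D X a for the smallest lam.
    """
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:  # skip self
            W[i, j] = 1.0                               # binary weights
    W = np.maximum(W, W.T)                              # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W                                           # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(d)                  # jitter for stability
    # Reduce the generalized problem A a = lam B a via Cholesky whitening.
    C = np.linalg.cholesky(B)
    Ci = np.linalg.inv(C)
    lam, U = np.linalg.eigh(Ci @ A @ Ci.T)
    return (Ci.T @ U)[:, :n_components]                 # smallest lam first
```

The projected coordinates are then Y = X @ lpp(X), i.e. a linear map chosen to keep graph neighbors close, as slide 14 summarizes.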
Slide 15: Manifold of Face Images
(Figure: face images arranged on a manifold.) Expression varies along one direction (sad >>> happy); pose varies along another (right >>> left).

Slide 16: Manifold of Handwritten Digits
(Figure.) Thickness and slant are the underlying degrees of freedom.

Slide 17: Active and Semi-Supervised Learning: A Geometric Perspective
Learning target; training examples; linear regression model.

Slide 18: Generalization Error
Goal of regression: obtain a learned function that minimizes the generalization error (the expected error over unseen test input points). Maximum likelihood estimate.

Slides 19-20: Gauss-Markov Theorem
For a given x, the expected prediction error decomposes into bias and variance terms. (Figures contrast a good, low-variance estimator with a bad, high-variance one.)

Slide 21: Experimental Design Methods
Three common scalar measures of the size of the parameter covariance matrix Cov(w):
A-optimal design: minimize the trace of Cov(w).
D-optimal design: minimize the determinant of Cov(w).
E-optimal design: minimize the maximum eigenvalue of Cov(w).
Disadvantage: these methods fail to take unmeasured (unlabeled) data points into account.

Slide 22: Manifold Regularization: Semi-Supervised Setting
Measured (labeled) points carry the discriminant structure; unmeasured (unlabeled) points carry the geometrical structure.

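For least squares with noise variance σ², Cov(w) = σ²(X⊤X)⁻¹, so the three criteria of slide 21 can be evaluated directly. A small sketch with an invented toy design matrix in the test; smaller scores mean a better design:

```python
import numpy as np

def design_criteria(X, sigma2=1.0):
    """A-, D-, and E-optimality scores for a design matrix X.

    Cov(w) = sigma^2 (X^T X)^{-1} for ordinary least squares with
    homoscedastic noise of variance sigma^2.
    """
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return {
        "A": np.trace(cov),                  # A-optimal: trace
        "D": np.linalg.det(cov),             # D-optimal: determinant
        "E": np.linalg.eigvalsh(cov).max(),  # E-optimal: largest eigenvalue
    }
```

Note that all three scores depend only on the inputs X, not on the labels, which is why they can guide the choice of which points to measure; the slide's criticism is that they still ignore the geometry carried by the points left unmeasured.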
Slides 23-24: Manifold Regularization: Semi-Supervised Setting
(The same picture, annotated in stages: random labeling; active learning; active learning + semi-supervised learning.)

Slides 25-26: Unlabeled Data to Estimate Geometry
Measured (labeled) points: discriminant structure. Unmeasured (unlabeled) points: geometrical structure.

Slides 27-31: Unlabeled Data to Estimate Geometry
Measured (labeled) points: discriminant structure; unmeasured (unlabeled) points: geometrical structure. Compute the nearest-neighbor graph G over all points. (A sequence of figures builds up the graph step by step.)

Slide 32: Laplacian Regularized Least Squares (Belkin and Niyogi, 2006)
Linear objective function and its closed-form solution.

Slide 33: Active Learning
How do we find the most representative points on the manifold?

Slide 34: Active Learning
Objective: guide the selection of the subset of data points that gives the greatest amount of information. Experimental design: select which samples to label. Manifold Regularized Experimental Design shares its objective function with Laplacian Regularized Least Squares: simultaneously minimize the least-squares error on the measured samples and preserve the local geometrical structure of the data space.

Slide 35: Analysis of Bias and Variance
To make the estimator as stable as possible, the size of the covariance matrix should be as small as possible.

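In the linear case, the Laplacian Regularized Least Squares objective of slide 32 has a closed-form solution. A minimal sketch assuming the standard LapRLS formulation of Belkin and Niyogi; the regularization weights below are illustrative defaults, not values from the slides:

```python
import numpy as np

def laprls(X_lab, y, X_all, W, gamma_a=1e-3, gamma_i=1e-1):
    """Linear Laplacian Regularized Least Squares.

    X_lab: labeled inputs, y: their labels, X_all: labeled + unlabeled
    inputs, W: adjacency matrix of the neighbor graph over X_all.
    Minimizes  ||X_lab w - y||^2 + gamma_a ||w||^2
             + gamma_i w^T X_all^T L X_all w,   with L = D - W.
    """
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian from W
    d = X_lab.shape[1]
    A = X_lab.T @ X_lab + gamma_a * np.eye(d) + gamma_i * X_all.T @ L @ X_all
    return np.linalg.solve(A, X_lab.T @ y)  # closed-form minimizer
```

Setting gamma_i to zero recovers plain ridge regression, which makes the role of the unlabeled points explicit: they enter only through the Laplacian term that pulls the prediction to vary smoothly along the graph.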
D-optimality: minimize the determinant of the covariance matrix.

Slide 36: The Algorithm
Select the first data point so that the selection criterion is maximized; then, with k points already selected, choose the (k+1)-th point to maximize the criterion again, and update. (The selection formulas are given on the slide.)

Slides 37-38: Nonlinear Generalization in RKHS
Consider the feature space F induced by a nonlinear mapping φ, where K(·, ·) is a positive semi-definite kernel with ⟨φ(xi), φ(xi)⟩ = K(xi, xi). Formulate the regression model and the objective function in the RKHS; the same greedy procedure applies: select the first data point to maximize the criterion, then repeatedly choose the (k+1)-th point and update.

Slide 39: A Synthetic Example
A-optimal Design versus Laplacian Regularized Optimal Design.

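The exact selection formulas of slides 36-38 do not survive this extraction, so the sketch below substitutes the classical sequential D-optimal rule, which the slide's greedy algorithm follows in spirit: repeatedly pick the candidate x maximizing x⊤M⁻¹x under the current information matrix M, then update M. The regularizer and candidate set are illustrative assumptions:

```python
import numpy as np

def greedy_d_optimal(candidates, k, reg=1e-6):
    """Sequentially pick k rows of `candidates` to grow det(X^T X).

    At each step the chosen point maximizes x^T M^{-1} x, where M is
    the current regularized information matrix. (A rank-one update of
    M^{-1} via Sherman-Morrison would be faster; a plain inverse is
    used here for clarity.)
    """
    d = candidates.shape[1]
    M = reg * np.eye(d)
    chosen = []
    for _ in range(k):
        Minv = np.linalg.inv(M)
        scores = np.einsum("ij,jk,ik->i", candidates, Minv, candidates)
        for j in chosen:
            scores[j] = -np.inf          # never pick a point twice
        i = int(np.argmax(scores))
        chosen.append(i)
        M = M + np.outer(candidates[i], candidates[i])
    return chosen
```

The manifold-regularized variant on the slides would add the graph-Laplacian term to M, so that selected points also cover the geometry estimated from the unlabeled data.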
Slide 40: A Synthetic Example
(Figure: the points selected by A-optimal Design and by Laplacian Regularized Optimal Design.)

Slide 41: Application to Image/Video Compression

Slide 42: Video Compression

Slide 43: Topology
Can we always map a manifold to a Euclidean space without changing its topology?

Slide 44: Topology
Simplicial complex, homology group, Betti numbers, Euler characteristic, good cover, sample points, homotopy; number of components, dimension, ...

Slide 45: Topology
The Euler characteristic is a topological invariant: a number that describes one aspect of a topological space's shape or structure. (Figures show its value for several spaces: 1, −2, 0, 1, 2, 0, 0.) The Euler characteristic of Euclidean space is 1!

Slide 46: Challenges
Insufficient sample points ...

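Slide 45's Euler characteristic can be computed from any triangulation as the alternating sum of simplex counts, χ = V − E + F − ... A small sketch; the example complexes in the test (a hollow and a solid tetrahedron, a cycle of edges) are standard illustrations, not from the slides:

```python
from itertools import combinations

def euler_characteristic(top_simplices):
    """Alternating sum of face counts of a simplicial complex.

    `top_simplices` lists maximal simplices as vertex tuples; every
    face is generated, then chi = sum_k (-1)^k * (# of k-simplices).
    """
    faces = set()
    for s in top_simplices:
        for r in range(1, len(s) + 1):
            faces.update(tuple(sorted(f)) for f in combinations(s, r))
    return sum((-1) ** (len(f) - 1) for f in faces)
```

The hollow tetrahedron (a triangulated sphere) gives χ = 2, while the solid tetrahedron is contractible and gives χ = 1, matching the slide's remark that the Euler characteristic of Euclidean space is 1.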