Foreign Literature Translation: Image Stitching Using MATLAB

Chinese: about 4,800 characters; English: 2,880 words (about 16,000 characters).
Source: Patil T., Mishra S., Chaudhari P., et al. "Image Stitching Using MATLAB." International Journal of Engineering Trends & Technology, 2013, 4(3).

Appendix B

B.1

Image Stitching Using MATLAB

Tejasha Patil, Shweta Mishra, Poorva Chaudhari, Shalaka Khandale
Information Technology Department
P.V.P.P.C.O.E.
Mumbai, India

Abstract—

Images are an integral part of our daily lives. Image stitching is a technique that processes a series of smaller, overlapping images into one panoramic image. Stitched images are used in applications such as interactive panoramic viewing, architectural walk-throughs, multi-node movies, and other applications associated with modeling the 3D environment from images acquired in the real world.

Image processing is a form of signal processing in which the input is an image, such as a photograph or a video frame; the output may be another image or a set of characteristics or parameters related to the image. Most image processing techniques treat the image as a two-dimensional signal and apply standard signal processing techniques to it. Specifically, image stitching proceeds through several stages, based on feature detection, to render two or more overlapping images into a single seamless image.

In this process, the Scale Invariant Feature Transform (SIFT) algorithm [1] is used to detect and match control points, owing to its good properties. Building an automatic and effective end-to-end stitching process requires analyzing the different methods available for each stitching stage. Several commercial and online software tools can perform the stitching process, offering diverse options for different situations.

Keywords: seamless stitched image, panoramic image

1. Introduction

The goal of this project is to create a MATLAB script that stitches two images together to produce one larger image. Given a sequence of images taken from a single point in space but with varying orientations, it is possible to map the images into a common reference frame and create a perfectly aligned larger photograph with a wider field of view. This is normally referred to as panoramic image stitching.

In our experiments, we observe that a successful image stitching algorithm should not only create a smooth transition within the overlapped region but also preserve properties that agree with our visual perception. Structure preservation: the stitched image should not break existing salient structures or create new ones; edges in the overlapped region can be broken by structure misalignment, causing obvious ghosting artifacts. Intensity alignment: the human eye is sensitive to large intensity changes, and a contrast imbalance outside the overlapped region of a stitched image is amplified by our perception. Even when the structures agree and the color transition is smooth inside the overlapped region, an unnatural color change from left to right still reveals the intensity mismatch inherent in the input images. The context around objects in the input images should therefore also be taken into account during stitching.

2. Background

2.1 Existing system

Because the field of view (FOV) of a single camera is limited, multiple cameras are sometimes used to extend it. Image stitching is one of the methods that can be used to exploit and remove the redundancy created by the overlapping fields of view. However, the memory requirement and the amount of computation of a conventional implementation of image stitching are very high. In this project, that problem is addressed by performing the stitching and the subsequent compression in a strip-by-strip manner. First, the stitching parameters are determined by transmitting two reference images to an intermediate node, which carries out the stitching. The parameters are then sent back to the vision nodes and stored; they are used to decide how incoming images are stitched in the strip-based scheme. Once a strip has been stitched, it can be compressed further with a strip-based compression technique.

Most existing image stitching methods either produce a rough stitch that cannot handle common features such as blood vessels, comet cells, and histology, or they require user input. Stitching approaches that optimize the search for the best correlation point with the Levenberg-Marquardt method give good results, but that method is computationally expensive and can get stuck in local minima. The approach offered in this project selects the best correlation point as follows. Ideally, when a motorized stage is used, knowledge of the expected overlap can be used directly to find the best correlation point. In practice, however, because the stage deviates from its ideal position and the stage or camera may be misaligned, the overlap is not perfect and is certainly not accurate to a single pixel. Our algorithm overcomes this by searching a neighborhood around the expected center overlap pixel for the best correlation point. Because manually acquired images are positioned far less precisely, a larger search area is needed to find the best correlation point.

The goal of this project is to create a script that stitches two images together into one larger image. Image stitching is widely used in photo applications and has become a required toolset for many photographers. The stitched images become panoramic views that increase the visual aesthetics of a scene and are widely sought for posters, postcards, and other printed materials. This project will be carried out using point correspondences between the two images and MATLAB.
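As an illustration of the neighborhood search described above, the following is a minimal MATLAB sketch (an editor's example, not code from the paper) that looks for the best correlation point of a patch taken from the expected overlap of one image inside a search window of the other image. It assumes the Image Processing Toolbox is available for normxcorr2, and the file names and overlap guess are illustrative only.

% Minimal sketch: search a neighborhood of the expected overlap for the
% best correlation point using normalized cross-correlation (normxcorr2).
imgLeft  = im2double(rgb2gray(imread('left.jpg')));   % hypothetical RGB inputs
imgRight = im2double(rgb2gray(imread('right.jpg')));

overlapGuess = 100;                              % expected overlap width (px)
rows = round(size(imgRight,1)/2) + (-50:50);     % central patch rows
template  = imgRight(rows, 1:overlapGuess);      % patch expected to reappear
searchWin = imgLeft(:, end-3*overlapGuess+1:end);   % neighborhood around guess

c = normxcorr2(template, searchWin);             % correlation surface
[~, idx] = max(c(:));                            % best correlation point
[peakRow, peakCol] = ind2sub(size(c), idx);
rowShift = peakRow - size(template,1);           % offset inside the window
colShift = peakCol - size(template,2);
fprintf('Best correlation offset: (%d, %d)\n', rowShift, colShift);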

2.2 Proposed system

Image processing can be defined as the analysis of a picture using techniques that can, in essence, identify its shades and colors. It deals with images in bitmapped graphic format that have been scanned or captured with a digital camera. It also covers image improvement, such as refining, in a program or software, a picture that has been scanned or captured from a video source; in short, image processing is any form of information processing in which both the input and the output are images. Image processing is divided into two main branches: image enhancement and image restoration. Image enhancement improves image quality or emphasizes particular aspects of the image, producing an image that differs from the original. Image restoration recovers the original image after it has been degraded by effects of the camera system, such as geometric distortion. Image processing does not reduce the amount of data present; it redistributes it.

Spatial filtering:

An image can be filtered to remove a band of spatial frequencies, such as high frequencies or low frequencies. High frequencies occur where brightness changes rapidly; slowly changing brightness corresponds to low frequencies. The highest frequencies are normally found at sharp edges or points. Spatial filtering operations include high-pass, low-pass, and edge detection filters. A high-pass filter accentuates the high-frequency detail of an image and attenuates the low frequencies, creating a sharpening effect.
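The following is a small MATLAB sketch of low-pass and high-pass spatial filtering (an editor's illustration, not code from the paper); it assumes the Image Processing Toolbox, which provides fspecial, imfilter, and the sample image used here.

% Illustrative spatial filtering, assuming the Image Processing Toolbox.
I = im2double(imread('cameraman.tif'));     % sample grayscale image

lp = fspecial('average', 9);                % 9x9 low-pass (averaging) kernel
Ilow = imfilter(I, lp, 'replicate');        % attenuate high frequencies

Ihigh = I - Ilow;                           % high-pass = original minus low-pass
Isharp = I + Ihigh;                         % boosting high frequencies sharpens

figure;
subplot(1,3,1); imshow(I);      title('Original');
subplot(1,3,2); imshow(Ilow);   title('Low-pass (blurred)');
subplot(1,3,3); imshow(Isharp); title('High-pass boosted (sharpened)');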

Sharpening:

The main aim of image sharpening is to highlight fine detail in the image or to enhance detail that has been blurred by noise or other effects. Sharpening emphasizes edges in the image and makes them easier to see and recognize. In creating a sharpened image, no new detail is actually created. The character of the sharpening is influenced by the blurring radius used and, in addition, by the differences between each pixel and its neighbors.

Blurring:

The visual effect of a low-pass filter is image blurring: sharp brightness transitions are attenuated into small ones, so the image has less detail and appears blurry. Blurring can be done in the spatial domain by averaging pixels within a neighborhood. It aims to diminish the effects of camera noise, including unauthentic or missing pixel values. Two techniques are most commonly used for blurring: neighborhood averaging (Gaussian filters) and edge-preserving filtering (median filters). Blurring can improve an image's low-frequency detail by removing visually disruptive high-frequency patterns. By subtracting a low-pass-filtered image from the original image, a sharpened image is obtained; this latter operation is known as unsharp-mask enhancement.
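A minimal MATLAB sketch of the two blurring techniques mentioned above and of unsharp masking (an editor's illustration, assuming the Image Processing Toolbox; imgaussfilt requires R2015a or later):

% Gaussian (neighborhood-averaging) and median (edge-preserving) blurring,
% followed by unsharp masking: sharpened = original + k*(original - blurred).
I = im2double(imread('cameraman.tif'));

Igauss  = imgaussfilt(I, 2);        % Gaussian blur, sigma = 2
Imedian = medfilt2(I, [5 5]);       % 5x5 median filter

k = 1.0;                            % unsharp-mask gain
Iunsharp = I + k * (I - Igauss);    % subtract low-pass result from original

figure;
subplot(2,2,1); imshow(I);        title('Original');
subplot(2,2,2); imshow(Igauss);   title('Gaussian blur');
subplot(2,2,3); imshow(Imedian);  title('Median filter');
subplot(2,2,4); imshow(Iunsharp); title('Unsharp masking');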

Edge detection:

Edges are often used in image analysis to find region boundaries. They occur at pixels where brightness changes abruptly. An edge essentially distinguishes between two distinctly different regions; in short, an edge is the border between two different regions. The Roberts, Sobel, Prewitt, Canny, and Kirsch operators are among the edge detectors most often used. Edge detection significantly reduces the amount of data in an image and filters out less relevant information while preserving its important structural properties. There are many methods for edge detection, and most of them can be grouped into two categories: search-based and zero-crossing-based.
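A short MATLAB illustration of several of the edge detectors named above (an editor's sketch, assuming the Image Processing Toolbox's edge function):

% Compare a few classical edge detectors on the same image.
I = imread('cameraman.tif');

edgesSobel   = edge(I, 'sobel');     % gradient-based (search-based category)
edgesPrewitt = edge(I, 'prewitt');
edgesRoberts = edge(I, 'roberts');
edgesCanny   = edge(I, 'canny');     % gradient plus hysteresis thresholding

figure;
subplot(2,2,1); imshow(edgesSobel);   title('Sobel');
subplot(2,2,2); imshow(edgesPrewitt); title('Prewitt');
subplot(2,2,3); imshow(edgesRoberts); title('Roberts');
subplot(2,2,4); imshow(edgesCanny);   title('Canny');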

The three important stages of the image stitching process are as follows:

1. Image acquisition:

The images that need to be stitched are acquired with a camera mounted on a tripod. The camera angle is varied to take different overlapping sample images.

2. Image registration:

The process of image registration aims to find the translations that align two or more overlapping images, so that the projection from the viewpoint through any position in the aligned images into the 3D world is unique. Image registration consists of four main components. Feature set: the set of features, which may include intensity values, contours, textures, and so on; a feature set must be selected for each registration method. Similarity measure: a function that returns a scalar value indicating how similar two features are. Search set: the set of possible transformations for aligning the images. Search strategy: the algorithm that decides how to select the next transformation from the search set.

Techniques used:
(1) Registration using a different feature set.
(2) Registration using different similarity measures.
(3) Registration with a step search strategy.

3. Image merging:

Image merging is the process of adjusting the pixel values in two registered images so that, when the images are joined, the transition from one image to the next is invisible. It should also ensure that the new image has a quality comparable to that of the original images. Image merging is carried out by making the seam invisible in the output image; the seam is the line that is visible where the two images overlap.

Techniques used:
(1) Linear distribution of intensity differences.
(2) Linear distribution of median intensity differences.
(3) Intensity adjustment with respect to corresponding pixels in the overlapping region.
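As an illustration of the merging idea (a smooth, seam-free transition across the overlap), the following MATLAB sketch feathers two already-registered grayscale images with a linear weight ramp over the overlapping columns. It is an editor's example, not the paper's code; the image names and the overlap width are assumed.

% Feathered merge of two registered grayscale images that overlap by
% 'overlapWidth' columns: weights ramp linearly from 1 to 0 across the seam.
A = im2double(imread('left_registered.png'));    % hypothetical registered pair
B = im2double(imread('right_registered.png'));   % same height as A
overlapWidth = 80;

wA = linspace(1, 0, overlapWidth);               % weight for A across overlap
wB = 1 - wA;                                     % complementary weight for B

leftPart  = A(:, 1:end-overlapWidth);
rightPart = B(:, overlapWidth+1:end);
overlapA  = A(:, end-overlapWidth+1:end);
overlapB  = B(:, 1:overlapWidth);
blended   = overlapA .* wA + overlapB .* wB;     % implicit expansion (R2016b+)

panorama = [leftPart, blended, rightPart];
imshow(panorama);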

Image stitching workflow:

2.3 Methodology

Scale Invariant Feature Transform

The SIFT algorithm (Scale Invariant Feature Transform), proposed by Lowe, is an approach for extracting distinctive invariant features from images. It has been successfully applied to a variety of computer vision problems based on feature matching, including object recognition, pose estimation, image retrieval, and many others. However, in real-world applications there is still a need to improve the algorithm's robustness with respect to the correct matching of SIFT features. In this paper, an improved SIFT-based feature matching algorithm that provides more reliable matching of target objects is presented. The main idea is to divide the features extracted from both the test image and the model object image into several sub-collections before they are matched; the features are assigned to sub-collections according to the octave, i.e., the frequency band, from which they were generated. [1]

The major stages of computation used to generate the set of image features are as follows:

1. Scale-space extrema detection: the first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussians function to identify potential interest points that are invariant to scale and orientation.

2. Keypoint localization: at each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.

3. Orientation assignment: one or more orientations are assigned to each keypoint location based on local image gradient directions. All subsequent operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location of each feature, thereby providing invariance to these transformations.

4. Keypoint descriptor: the local image gradients are measured at the selected scale in the region around each keypoint and transformed into a representation that allows for significant levels of local shape distortion and change in illumination.

The approach is named the Scale Invariant Feature Transform (SIFT) because it transforms image data into scale-invariant coordinates relative to local features. [4]
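As a rough illustration of detecting and matching control points in MATLAB (an editor's sketch, not the paper's implementation): recent releases of the Computer Vision Toolbox provide detectSIFTFeatures, while older ones offer detectSURFFeatures as a similar scale-invariant detector. The sketch below uses SURF and assumes two hypothetical overlapping photographs.

% Detect scale-invariant keypoints in two overlapping images and match them.
% Requires the Computer Vision Toolbox (detectSURFFeatures, matchFeatures).
I1 = rgb2gray(imread('scene_left.jpg'));    % hypothetical input images
I2 = rgb2gray(imread('scene_right.jpg'));

pts1 = detectSURFFeatures(I1);              % SIFT-like blob detector
pts2 = detectSURFFeatures(I2);

[f1, vpts1] = extractFeatures(I1, pts1);    % descriptors and valid points
[f2, vpts2] = extractFeatures(I2, pts2);

idxPairs = matchFeatures(f1, f2, 'Unique', true);   % putative matches
matched1 = vpts1(idxPairs(:,1));
matched2 = vpts2(idxPairs(:,2));

figure;
showMatchedFeatures(I1, I2, matched1, matched2, 'montage');
title(sprintf('%d putative matches', size(idxPairs,1)));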

Random Sample Consensus algorithm

The Random Sample Consensus algorithm (RANSAC), proposed by Fischler and Bolles, is a general parameter estimation approach designed to cope with a large proportion of outliers in the input data. Unlike many of the common robust estimation techniques, such as M-estimators and least median of squares, which the computer vision community adopted from the statistics literature, RANSAC was developed from within the computer vision community. RANSAC is a resampling technique that generates candidate solutions by using the minimum number of observations (data points) required to estimate the underlying model parameters. As Fischler and Bolles point out, unlike conventional sampling techniques, which use as much of the data as possible to obtain an initial solution and then prune outliers, RANSAC uses the smallest set possible and then enlarges this set with consistent data points.

The basic algorithm is summarized as follows (a worked sketch follows the list):

1) Randomly select the minimum number of points required to determine the model parameters.
2) Solve for the parameters of the model.
3) Determine how many points from the set of all points fit the model within a predefined tolerance.
4) If the fraction of inliers over the total number of points in the set exceeds a predefined threshold, re-estimate the model parameters using all the identified inliers and terminate.
5) Otherwise, repeat steps 1 through 4 (at most N times).
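The following is a self-contained MATLAB sketch of this loop for the simplest possible model, a 2D line fitted to points contaminated with outliers. It is an editor's illustration of the procedure above, not the paper's code; the tolerance, inlier fraction, and N are arbitrary example choices.

% Plain RANSAC loop: fit a line y = a*x + b to points with heavy outliers.
rng(0);
x = linspace(0, 10, 100)';
y = 2*x + 1 + 0.1*randn(100,1);             % inliers around y = 2x + 1
y(1:30) = 20*rand(30,1);                    % 30% gross outliers

tol = 0.5;          % step 3: tolerance on point-to-model residual
inlierFrac = 0.7;   % step 4: stop once this fraction of inliers is reached
N = 200;            % step 5: maximum number of iterations

bestInliers = [];
for k = 1:N
    s = randperm(numel(x), 2);              % step 1: minimal sample (2 points)
    p = polyfit(x(s), y(s), 1);             % step 2: solve model parameters
    resid = abs(polyval(p, x) - y);         % step 3: residuals of all points
    inliers = find(resid < tol);
    if numel(inliers) > numel(bestInliers)
        bestInliers = inliers;
    end
    if numel(inliers) / numel(x) > inlierFrac   % step 4: model is good enough
        break;
    end
end
pFinal = polyfit(x(bestInliers), y(bestInliers), 1);   % refit on all inliers
fprintf('Estimated line: y = %.2f x + %.2f\n', pFinal(1), pFinal(2));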

An advantage of RANSAC is its ability to estimate the model parameters robustly, i.e., with high accuracy even when a significant number of outliers are present in the data set. A disadvantage of RANSAC is that there is no upper bound on the time needed to compute the parameters: when the number of iterations is limited, the solution obtained may not be optimal and may not even fit the data well. RANSAC thus offers a trade-off: computing more iterations increases the probability of producing a reasonable model. Another disadvantage is that it requires problem-specific thresholds to be set. Furthermore, RANSAC can estimate only one model for a particular data set; as with any single-model approach, when two (or more) instances of the model exist, RANSAC may fail to find either of them. When more than one model instance is present, the Hough transform is an alternative that can be very useful [3]. The figure below depicts keypoint detection.

3. Preliminary results

In this paper, we present a new image stitching method based on image warping, in which the overlapping regions may contain significant intensity misalignment and geometric error. The examples below show two images stitched into a panorama with MATLAB, with geometric alignment achieved.
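For reference, an end-to-end MATLAB sketch of such a two-image panorama, combining the feature matching and RANSAC-style estimation discussed above, might look as follows. This is an editor's reconstruction under stated assumptions (Computer Vision and Image Processing Toolboxes, hypothetical file names, photographs with sufficient overlap), not the authors' script; estimateGeometricTransform performs the robust RANSAC-based fit internally.

% Two-image panorama: detect, match, robustly estimate a projective
% transform, then warp the second image onto the first image's frame.
I1 = imread('scene_left.jpg');              % hypothetical overlapping photos
I2 = imread('scene_right.jpg');
g1 = rgb2gray(I1);  g2 = rgb2gray(I2);

p1 = detectSURFFeatures(g1);  p2 = detectSURFFeatures(g2);
[f1, v1] = extractFeatures(g1, p1);
[f2, v2] = extractFeatures(g2, p2);
pairs = matchFeatures(f1, f2, 'Unique', true);

% RANSAC-based projective (homography) estimation from the putative matches.
tform = estimateGeometricTransform(v2(pairs(:,2)), v1(pairs(:,1)), 'projective');

% Output frame roughly large enough to hold both images (a crude choice).
outView = imref2d([size(I1,1), size(I1,2)*2]);
warped1 = imwarp(I1, projective2d(eye(3)), 'OutputView', outView);
warped2 = imwarp(I2, tform, 'OutputView', outView);

mask = imwarp(true(size(g2)), tform, 'OutputView', outView);
panorama = warped1;
idx = repmat(mask, 1, 1, size(I1,3));
panorama(idx) = warped2(idx);               % overwrite with the warped image
imshow(panorama);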

4. Conclusion

In this paper, we present a new image matching algorithm. Our algorithm significantly increases the number of matches and the matching accuracy. Extensive experimental results show that our method improves on traditional detectors, even when large differences are present, and that the new detector is distinctive. An image stitcher provides a cost-effective and very flexible alternative for obtaining panoramic images that would otherwise require a panoramic camera. Stitched panoramas can also be used in applications where the camera cannot capture the full view of an object of interest: using overlapping images of the object, the image stitcher can construct its complete view.

5. Acknowledgments

This paper describes research carried out in the Information Technology Department at P.V.P.P.C.O.E. We are very grateful to Ms. Sonali for her guidance.

6. References

[1] Y. Yu, K. Huang, and T. Tan, "A Harris-like scale invariant feature detector," in Proc. Asian Conf. Comput. Vis., 2009, pp. 586-595.
[2] J. M. Morel and G. Yu, "ASIFT: A new framework for fully affine invariant image comparison," SIAM J. Imag. Sci., vol. 2, no. 2, pp. 438-469, Apr. 2009.
[3] J. Rabin, J. Delon, Y. Gousseau, and L. Moisan, "RANSAC: A robust algorithm for the recognition of multiple objects," in Proc. 3DPVT, 2010.
[4] M. Krulakova, "Matrix technique of image processing in Matlab," in Proc. 11th International Carpathian Control Conference (ICCC 2010), Eger, Hungary, 26-28 May 2010, pp. 203-206, ISBN 978-963-06-9289-2.
[5] Wei Xu and Jane Mulligan, "Performance evaluation of color correction approaches for automatic multi-view image stitching," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), San Francisco, CA, USA, June 2010, pp. 263-270.
[6] Oliver Whyte, Josef Sivic, Andrew Zisserman, and Jean Ponce, "Non-uniform deblurring for shaken images," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), San Francisco, CA, USA, June 2010, pp. 491-498.
[7] Xianyong Fang, Bin Luo, Haifeng Zhao, Jin Tang, Biao He, and Hao Wu, "A new multi-resolution image stitching with local and global alignment," IET Computer Vision, 2010.
[8] MathWorks, MATLAB Builder JA 2 user's guide [online], August 18, 2010 [cited 12.01.2011], available from <http://www.mathworks.com/help/pdf-doc/javabuilder/javabuilder.pdf>.
[9] MathWorks, Bringing Java classes and methods into MATLAB workspace [online] [cited 12.01.2011], available from <http://www.mathworks.com/help/techdoc/matlabexternal/f4863.html>.
[10] Chen Hui, Long AiQun, and Peng YuHua, "Building panoramas from photographs taken with an uncalibrated hand-held camera," Chinese Journal of Computers, 2009, (2): 328-335.
[11] J.-W. Hsieh, "Fast stitching algorithm for moving object detection and mosaic construction," in IEEE International Conference on Multimedia & Expo, Baltimore, Maryland, USA, 2003.

B.2

IMAGE STITCHING USING MATLAB

Tejasha Patil, Shweta Mishra, Poorva Chaudhari, Shalaka Khandale
Information Tech. Department
P.V.P.P.C.O.E.
Mumbai, India

Abstract—

Images are an integral part of our daily lives. Image stitching is the process performed to generate one panoramic image from a series of smaller, overlapping images. Stitched images are used in applications such as interactive panoramic viewing of images, architectural walk-throughs, multi-node movies and other applications associated with modeling the 3D environment using images acquired from the real world.

Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it. Specifically, image stitching presents different stages to render two or more overlapping images into a seamless stitched image.

In this process, the Scale Invariant Feature Transform (SIFT) algorithm [1] can be applied to perform the detection and matching of control points, due to its good properties. Creating an automatic and effective whole stitching process leads to analyzing the different methods for the stitching stages. Several commercial and online software tools are available to perform the stitching process, offering diverse options in different situations.

Key words: seamless stitched image, panoramic image.

1. Introduction

This project's goal is to create a Matlab script that will stitch two images together to create one larger image. Given a sequence of images taken from a single point in space, but with varying orientations, it is possible to map the images into a common reference frame and create a perfectly aligned larger photograph with a wider field of view. This is normally referred to as panoramic image stitching.

In our experiments, we observe that a successful image stitching algorithm should not only create a smooth transition within the overlapped region but also preserve the following properties, which are in general agreement with our visual perception. Structure preservation: the stitched image should not break existing salient structures or create new ones, as in the case where the edge of the tower is broken in the overlapped region due to structure misalignment, causing an obvious ghosting artifact. Intensity alignment: the human eye is sensitive to large intensity changes, and a contrast imbalance outside the overlapped region of a stitched image is amplified by our perception. Even when the structures agree and the color transition is smooth within the overlapped region, an unnatural color change from left to right reveals the intensity mismatch inherent in the input images; the context around objects in the input images should also be taken into account during stitching.

2. CONTEXTUALIZATION

2.1 Existing system

Due to the limited Field-Of-View (FOV) of a single camera, it is sometimes desired to extend the FOV using multiple cameras. Image stitching is one of the methods that can be used to exploit and remove the redundancy created by the overlapping FOV. However, the memory requirement and the amount of computation for a conventional implementation of image stitching are very high. In this project, this problem is resolved by performing the image stitching and compression in a strip-by-strip manner. First, the stitching parameters are determined by transmitting two reference images to an intermediate node, which carries out the stitching; the parameters are then sent back to the vision nodes and stored, and are used to decide how incoming images are stitched in the strip-based scheme. After a strip has been stitched, it can be compressed further using strip-based compression.

Most of the existing methods of image stitching either produce a 'rough' stitch that cannot deal with common features such as blood vessels, comet cells and histology, or they require some user input. Approaches for image stitching that optimize the search for the best correlation point by using the Levenberg-Marquardt method give good results, but the method is computationally expensive and can get stuck at local minima. The approach offered in this project makes the selection of the best correlation point as follows: ideally, when a motorized stage is used, knowledge of the expected overlap is used directly to find the best correlation point; in practice, because of deviations of the stage from its ideal position and stage or camera misalignment, the overlap is not perfect and certainly not accurate to a single pixel, so the algorithm searches a neighborhood around the expected center overlap pixel. Because manually acquired images are positioned much less precisely, a larger search area is needed to find the best correlation point.

This project's goal is to create a Matlab script that will stitch two images together to create one larger image. Image stitching has wide uses in photo applications and has become a required toolset for many photographers. These stitched images become panoramic views which increase the visual aesthetics of a scene, and are widely sought out for posters, postcards, and other printed materials. This project will be performed using point correspondences between the two images and utilizing Matlab.

2.2 Proposed System

Image processing can be defined as analysis of a picture using techniques that can basically identify shades and colors. It deals with images in bitmapped graphic format that have been scanned or captured with a digital camera. It also means image improvement, such as refining a picture, in a program or software, that has been scanned or entered from a video source; in short, image processing is any form of information processing where both the input and output are images. Image processing is divided into two main branches: image enhancement and image restoration. Image enhancement improves the quality of an image or emphasizes particular aspects of it, producing an image different from the original; image restoration recovers the original image after degradation by effects of the camera system, such as geometric distortion. Image processing does not reduce the amount of data present but redistributes it.

Spatial filtering:

An image can be filtered to remove a band of spatial frequencies, such as high frequencies and low frequencies. High frequencies are present where rapid brightness transitions occur; on the other hand, slowly changing brightness transitions represent low frequencies. The highest frequencies will normally be found at sharp edges or points.

Spatial filtering operations include high-pass, low-pass and edge detection filters. High-pass filters accentuate the high-frequency details of an image and attenuate the low frequencies, creating a sharpening effect.

Sharpening:

The main aim of image sharpening is to highlight fine detail in the image, or to enhance detail that has been blurred due to noise or other effects. Sharpening emphasizes edges in the image and makes them easier to see and recognize. In creating a sharpened image, no new details are actually created. The nature of the sharpening is influenced by the blurring radius used and, in addition, by the differences between each pixel and its neighbours.

Blurring:

The visual effect of a low-pass filter is image blurring. This is because the sharp brightness transitions have been attenuated to small brightness transitions, so the image has less detail and appears blurry.

Blurring can be done in the spatial domain by pixel averaging within a neighborhood. Blurring aims to diminish the effects of camera noise, unauthentic pixel values or missing pixel values. For the blurring effect, two techniques are mostly used: neighbourhood averaging (Gaussian filters) and edge preserving (median filters). The blurring effect can improve an image's low-frequency details by removing visually disruptive high-frequency patterns. By subtracting a low-pass filtered image from the original image, a sharpened image is obtained; the latter operation is known as unsharp masking enhancement.

Edge detection:

Edges are often used in image analysis for finding region boundaries. They are pixels where brightness changes abruptly. An edge essentially distinguishes between two distinctly different regions or, in short, an edge is the border between two different regions. The Roberts operator, Sobel operator, Prewitt operator, Canny operator and Kirsch operator are among the edge detectors that are often used. Edge detection of an image significantly reduces the amount of data and filters out information that may be less relevant, while preserving the important structural properties of the image. There are many methods for edge detection, most of which can be grouped into two categories: search-based and zero-crossing based.

The three important stages of image stitching are as follows:

1. Image Acquisition:

The images that need to be stitched are acquired using a camera mounted on a tripod stand. The camera angle is varied to take different overlapping sample images.

2. Image Registration:

The process of image registration aims to find the translations to align two or more overlapping images such that the projection from the view point through any position in the aligned images into the 3D world is unique.

Image registration consists of four main components. Feature set: the set of features includes the intensity values, contours, textures and so on; a feature set must be selected for each image registration method. Similarity measure: a function which returns a scalar value that provides an indication of the similarity between two features. Search set: a set of possible transformations for aligning the images. Search strategy: the algorithm that decides how to select the next transformation from the search set.

Techniques used (a registration sketch follows this list):

Registration using a different feature set.
Registration using different similarity measures.
Registration with a step search strategy.
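To make these components concrete, the following MATLAB sketch (an editor's illustration, not the paper's code) registers two overlapping images by pure translation: the feature set is raw intensity, the similarity measure is the sum of squared differences, the search set is a grid of integer shifts, and the search strategy is exhaustive search. The file names, patch location, and shift ranges are assumptions.

% Translation-only registration by exhaustive search over integer shifts.
A = im2double(rgb2gray(imread('scene_left.jpg')));   % hypothetical inputs
B = im2double(rgb2gray(imread('scene_right.jpg')));

patch = B(101:200, 1:100);            % feature set: an intensity patch from B
bestSSD = inf;  bestShift = [0 0];

for dr = -20:20                       % search set: integer row/column shifts
    for dc = 0:300
        rows = (101:200) + dr;  cols = (1:100) + dc;
        if rows(1) < 1 || rows(end) > size(A,1) || cols(end) > size(A,2)
            continue;                 % skip shifts that fall outside A
        end
        ssd = sum(sum((A(rows, cols) - patch).^2));   % similarity measure
        if ssd < bestSSD              % search strategy: keep the best so far
            bestSSD = ssd;  bestShift = [dr dc];
        end
    end
end
fprintf('Best translation (rows, cols): (%d, %d)\n', bestShift);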

Image Merging:

Image merging is the process of adjusting the values of pixels in two registered images, such that when the images are joined, the transition from one image to the next is invisible. It should also ensure that the new image has a quality comparable to that of the original images used. Image merging can be carried out by making the seam invisible in the output image. The seam is the line that is visible at the point where the two images overlap.

Techniques used:

Linear distribution of intensity differences.
Linear distribution of median intensity differences.
Intensity adjustment with respect to corresponding pixels in the overlapping region.
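A minimal MATLAB sketch of the third technique listed above (an editor's illustration under assumed file names and overlap width): the mean intensity difference between corresponding pixels in the overlapping region is used to adjust the second image before the two are joined.

% Adjust the second image by the mean intensity difference measured over
% the corresponding pixels of the overlapping region, then join the images.
A = im2double(imread('left_registered.png'));    % hypothetical registered pair
B = im2double(imread('right_registered.png'));
overlapWidth = 80;

overlapA = A(:, end-overlapWidth+1:end);         % corresponding pixels
overlapB = B(:, 1:overlapWidth);
delta = mean(overlapA(:) - overlapB(:));         % average intensity difference

Badj = min(max(B + delta, 0), 1);                % shift B toward A's intensity
merged = [A, Badj(:, overlapWidth+1:end)];       % join at the seam
imshow(merged);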

Steps for Image Stitching:

3. Methodology

Scale Invariant Feature Algorithm

The SIFT algorithm (Scale Invariant Feature Transform) proposed by Lowe is an approach for extracting distinctive invariant features from images. It has been successfully applied to a variety of computer vision problems based on feature matching, including object recognition, pose estimation, image retrieval and many others. However, in real-world applications there is still a need for improvement of the algorithm's robustness with respect to the correct matching of SIFT features. In this paper, an improved SIFT-based feature matching algorithm that provides more reliable matching for target objects is presented. The main idea is to divide the features extracted from both the test image and the model object image into several sub-collections before they are matched, according to the octaves, i.e., the frequency bands, from which the features were generated. [1]

Following are the major stages of computation used to generate the set of image features:

1. Scale-space extrema detection: The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation.

2. Keypoint localization: At each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.

3. Orientation assignment: One or more orientations are assigned to each keypoint location based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.

4. Keypoint descriptor: The local image gradients are measured at the selected scale in the region around each keypoint. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.

This approach has been named the Scale Invariant Feature Transform (SIFT), as it transforms image data into scale-invariant coordinates relative to local features. [4]

RANdom SAmple Consensus algorithm

The Random Sample Consensus algorithm (RANSAC) proposed by Fischler and Bolles is a general parameter estimation approach designed to cope with a large proportion of outliers in the input data. Unlike many of the common robust estimation techniques, such as M-estimators and least median of squares, that have been adopted by the computer vision community from the statistics literature, RANSAC was developed from within the computer vision community. RANSAC is a resampling technique that generates candidate solutions by using the minimum number of observations (data points) required to estimate the underlying model parameters. As pointed out by Fischler and Bolles, unlike conventional sampling techniques that use as much of the data as possible to obtain an initial solution and then proceed to prune outliers, RANSAC uses the smallest set possible and proceeds to enlarge this set with consistent data points.

The basic algorithm is summarized as follows:

1) Select randomly the minimum number of points required to determine the model parameters.
2) Solve for the parameters of the model.
3) Determine how many points from the set of all points fit with a predefined tolerance.
4) If the fraction of the number of inliers over the total number of points in the set exceeds a predefined threshold τ, re-estimate the model parameters using all the identified inliers and terminate.
5) Otherwise, repeat steps 1 through 4 (a maximum of N times).
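The maximum number of iterations N in step 5 is usually chosen from the desired probability of drawing at least one all-inlier sample; a common rule of thumb (an editor's note, not stated in the paper) is N = log(1 - p) / log(1 - (1 - e)^s), which the following MATLAB lines evaluate for assumed values of the success probability p, outlier ratio e, and minimal sample size s.

% Standard RANSAC iteration count: probability p of at least one clean sample,
% outlier ratio e, minimal sample size s (e.g. 4 point pairs for a homography).
p = 0.99;  e = 0.5;  s = 4;                        % assumed example values
N = ceil(log(1 - p) / log(1 - (1 - e)^s));         % iterations needed
fprintf('RANSAC needs about %d iterations\n', N);  % roughly 72 for these values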
