Scene recognition for mine rescue robot localization based on vision

CUI Yi-an(崔益安)1,2, CAI Zi-xing(蔡自興)1, WANG Lu(王璐)1
1. School of Information Science and Engineering, Central South University, Changsha 410083, China;
2. School of Info-Physics Engineering, Central South University, Changsha 410083, China

Received 18 April 2007; accepted 13 September 2007

Abstract: A new scene recognition system based on fuzzy logic and the hidden Markov model (HMM) is presented, which can be applied to mine rescue robot localization during emergencies. The system uses a monocular camera to acquire omni-directional images of the mine environment where the robot is located. By adopting the center-surround difference method, salient local image regions are extracted from the images as natural landmarks. These landmarks are organized with an HMM to represent the scene where the robot is, and a fuzzy logic strategy is used to match scenes and landmarks. In this way the localization problem, which is the scene recognition problem in this system, is converted into the evaluation problem of an HMM. Together, these techniques give the system the ability to deal with changes in scale, 2D rotation and viewpoint. Experimental results also show that the system achieves a high recognition and localization rate in both static and dynamic mine environments.

Key words: robot location; scene recognition; salient image; matching strategy; fuzzy logic; hidden Markov model

1 Introduction
Search and rescue in disaster areas is a burgeoning and challenging subject in robotics[1]. Mine rescue robots are developed to enter mines during emergencies, locate possible escape routes for those trapped inside, and determine whether it is safe for humans to enter. Localization is a fundamental problem in this field. Camera-based localization methods can mainly be classified into geometric, topological or hybrid ones[2]. Owing to its feasibility and effectiveness, scene recognition has become one of the important technologies of topological localization.

Currently, most scene recognition methods are based on global image features and have two distinct stages: offline training and online matching. During the training stage, the robot collects images of the environment where it works and processes them to extract global features that represent the scene. Some approaches analyze the image data set directly to find primary features, such as the PCA method[3]; however, PCA is not effective at distinguishing classes of features. Another type of approach uses appearance features, including color, texture and edge density, to represent the image. For example, ZHOU et al[4] used multi-dimensional histograms to describe global appearance features. This method is simple but sensitive to scale and illumination changes. In fact, global image features of all kinds suffer from changes in the environment.
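As a hedged illustration of this kind of global appearance descriptor, the following Python sketch builds a joint HSV color histogram over the whole image and compares two images by histogram intersection. The bin counts and the similarity measure are assumptions made here for illustration, not the exact configuration used in [4].

# A minimal sketch of a global appearance descriptor: a joint HSV color
# histogram over the whole image, compared by histogram intersection.
# Bin counts and the similarity measure are illustrative assumptions.
import cv2
import numpy as np

def global_hsv_histogram(image_bgr, bins=(8, 8, 4)):
    # Joint H-S-V histogram of the whole image, L1-normalized.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return hist.flatten() / (hist.sum() + 1e-9)

def histogram_intersection(h1, h2):
    # Similarity in [0, 1]; 1 means identical normalized histograms.
    return float(np.minimum(h1, h2).sum())

Because the histogram pools color over the entire frame, any change in illumination or viewing distance shifts the bins directly, which is exactly the sensitivity noted above.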
LOWE[5] presented the SIFT method, which obtains features from similarity-invariant descriptors formed by the characteristic scale and orientation at interest points. The features are invariant to image scaling, translation and rotation, and partially invariant to illumination changes. However, SIFT may generate 1 000 or more interest points, which can slow down the processor dramatically.
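For reference, the sketch below extracts SIFT keypoints and descriptors with OpenCV (cv2.SIFT_create is available in OpenCV 4.4 and later); it only illustrates the point about the number of interest points, and the default detector parameters are an assumption rather than the settings of the original experiments.

# A small sketch of SIFT interest-point extraction. A single textured image
# can easily yield on the order of a thousand keypoints, which motivates the
# concern about matching cost raised above.
import cv2

def extract_sift(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                      # default parameters
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors                 # descriptors: N x 128 array

# Example (hypothetical file name):
# kps, descs = extract_sift("mine_scene.png")
# print(len(kps))    # often 1 000+ for a cluttered scene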
During the matching stage, the nearest neighbor (NN) strategy is widely adopted for its simplicity and intelligibility[6]. However, it cannot capture the contribution of an individual feature to scene recognition. In experiments, NN is not good enough to express the similarity between two patterns. Furthermore, according to the state of the art in pattern recognition, the selected features cannot represent the scene thoroughly, which makes recognition unreliable[7].
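A minimal sketch of the NN matching strategy is given below: every descriptor of the query image is matched to its nearest descriptor in each stored scene, and the scene with the smallest mean distance is selected. The scoring rule is an assumption for illustration; it makes visible why plain NN weighs all features equally and cannot express the contribution of an individual feature.

# A minimal sketch of nearest-neighbor (NN) descriptor matching between a
# query image and stored scenes. Every descriptor gets equal weight, which
# is the limitation discussed above.
import numpy as np

def nn_score(query_desc, scene_desc):
    # Mean distance from each query descriptor to its nearest scene descriptor.
    d = np.linalg.norm(query_desc[:, None, :] - scene_desc[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def recognize(query_desc, scenes):
    # scenes: dict mapping scene name -> descriptor array; smaller score is better.
    return min(scenes, key=lambda name: nn_score(query_desc, scenes[name]))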
So in this work, a new recognition system is presented, which is more reliable and effective if it is used

Foundation item: Project(60234030) supported by the National Natural Science Foundation of China; Project(A1420060159) supported by the Basic Research Program of the 11th Five-Year Plan of China
Corresponding author: CUI Yi-an; Tel: +86-731-8877075; E-mail: csu-iag@mail.csu.edu.cn

Fig.1 Saliency detection on real mine images: (a) Original image; (b) Obtained landmark regions
Fig.2 Experiment on viewpoint changes

3 Scene recognition and localization

Different from other scene recognition systems, our system does not need offline training; in other words, our scenes are not classified in advance. While the robot wanders, scenes captured at fixed time intervals are used to build the vertices of a topological map, each vertex representing the place where the robot is located. Although the map's geometric layout is ignored by the localization system, it is useful for visualization and debugging[13] and beneficial to path planning. Localization therefore means searching the map for the best match of the current scene.
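As a rough sketch of this map structure (the class and the match_score interface are illustrative assumptions, standing in for the fuzzy-logic and HMM evaluation used by the paper), each vertex simply stores the scene model built at one sampling instant, and localization is a best-match search over the vertices.

# A rough sketch of the topological map described above: each vertex stores
# the scene model built at one fixed-time sampling instant, and localization
# searches for the vertex whose model best matches the current scene.
class TopologicalMap:
    def __init__(self):
        self.vertices = []          # scene models, in visiting order

    def add_scene(self, scene_model):
        # Called at fixed time intervals while the robot wanders.
        self.vertices.append(scene_model)

    def localize(self, current_scene, match_score):
        # Return the index of the best-matching vertex (the robot's place).
        scores = [match_score(current_scene, v) for v in self.vertices]
        return int(max(range(len(scores)), key=scores.__getitem__))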
In this paper, the hidden Markov model is used to organize the landmarks extracted from the current scene and to create the vertices of the topological map, owing to its ability to recover from partial information. Like a panoramic vision system, the robot looks around to obtain omni-directional images. From each image, salient local regions are detected and arranged into a sequence, called the landmark sequence, whose order is the same as that of the image sequence.
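As a hedged illustration of the center-surround difference idea behind this detection step (see also Fig.1), the sketch below treats a lightly blurred image as the center, a heavily blurred one as the surround, and keeps the strongest responses of their difference. The blur scales and the percentile threshold are assumptions, not the parameters of the original system.

# An illustrative center-surround difference map: the absolute difference
# between a fine (center) and a coarse (surround) Gaussian-blurred image
# highlights locally salient regions. All parameters are assumptions.
import cv2
import numpy as np

def center_surround_saliency(gray, center_sigma=1.0, surround_sigma=8.0):
    g = gray.astype(np.float32)
    center = cv2.GaussianBlur(g, (0, 0), center_sigma)
    surround = cv2.GaussianBlur(g, (0, 0), surround_sigma)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-9)      # normalized to [0, 1]

def salient_mask(gray, keep_percent=5.0):
    # Keep roughly the top keep_percent most salient pixels as landmark regions.
    s = center_surround_saliency(gray)
    return s >= np.percentile(s, 100.0 - keep_percent)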
Then a hidden Markov model is created based on the landmark sequence involving k salient local image regions, and this model is taken as the description of the place where the robot is located. In our system the EVI-D70 camera has a field of view of ±170°. Considering the overlap effect, we sample the environment every 45° to obtain 8 images. Taking the 8 images as hidden states Si (1≤i≤8), the created HMM can be illustrated by Fig.3. The parameters of the HMM, aij and bjk, are achieved
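As a hedged illustration of the HMM evaluation problem mentioned in the abstract, the sketch below scores how well an observed landmark sequence fits a place model with transition matrix A = [aij] and emission matrix B = [bjk] using the standard forward algorithm; the uniform initial state distribution is an assumption of this sketch, not a statement of the paper.

# Forward algorithm for the HMM evaluation problem: given a place model with
# N = 8 hidden states (the 45-degree views Si), return P(observations | model).
import numpy as np

def forward_probability(A, B, observations, pi=None):
    # A: (N, N) state transition matrix a_ij
    # B: (N, M) emission matrix b_jk over M landmark symbols
    # observations: sequence of landmark symbol indices in [0, M)
    n_states = A.shape[0]
    start = np.full(n_states, 1.0 / n_states) if pi is None else pi
    alpha = start * B[:, observations[0]]          # initialization
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]              # induction step
    return float(alpha.sum())                      # termination

The place whose model yields the highest forward probability for the current landmark sequence is then taken as the robot's location.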