Original Foreign-Language Material

Efficient URL Caching for World Wide Web Crawling

Andrei Z. Broder
IBM TJ Watson Research Center
19 Skyline Dr, Hawthorne, NY 10532
abroder@us.ibm.com

Marc Najork
Microsoft Research
1065 La Avenida, Mountain View, CA 94043
najork@microsoft.com

Janet L. Wiener
Hewlett Packard Labs
1501 Page Mill Road, Palo Alto, CA 94304
janet.wiener@hp.com

ABSTRACT

Crawling the web is deceptively simple: the basic algorithm is (a) fetch a page, (b) parse it to extract all linked URLs, and (c) for all the URLs not seen before, repeat (a)–(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 7% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test in step (c) must be done well over ten thousand times per second, against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.
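To make the three-step algorithm above concrete, here is a minimal single-threaded sketch in Python. The fetch_page and extract_urls callables are hypothetical placeholders for an HTTP client and an HTML parser, and the in-memory set plays the role of the membership test in step (c); at web scale, as the paper argues, that set cannot actually fit in main memory.

from collections import deque

def crawl(seed_urls, fetch_page, extract_urls, max_pages=1000):
    # The "seen" set implements the membership test of step (c).
    seen = set(seed_urls)
    frontier = deque(seed_urls)
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        page = fetch_page(url)           # step (a): fetch a page
        fetched += 1
        for link in extract_urls(page):  # step (b): extract linked URLs
            if link not in seen:         # step (c): membership test
                seen.add(link)
                frontier.append(link)
    return seen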
[…] a substantial fraction of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.

If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node among the nodes not yet explored to explore next. Two standard strategies for graph traversal are Depth First Search (DFS) and Breadth First Search (BFS); they are easy to implement and are taught in many introductory algorithms classes (see for instance [34]).
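As a minimal illustration of how the choice of frontier discipline determines the strategy, the following Python sketch (over a hypothetical adjacency-dict graph) implements both: BFS pops the frontier as a FIFO queue, DFS as a LIFO stack.

from collections import deque

def traverse(graph, start, strategy="bfs"):
    # The frontier discipline is the only difference between the two
    # strategies: BFS pops from the front (FIFO), DFS from the back (LIFO).
    frontier = deque([start])
    visited = {start}
    order = []
    while frontier:
        node = frontier.popleft() if strategy == "bfs" else frontier.pop()
        order.append(node)
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(neighbor)
    return order

# For example, traverse({"a": ["b", "c"], "b": ["d"], "c": [], "d": []}, "a")
# visits a, b, c, d under BFS and a, c, b, d under DFS.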
However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge because of the following two factors:

1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [3, 27, 28] have indicated that, historically, the web has doubled every 9-12 months.

2. Web pages are changing rapidly. If “change” means “any change”, then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].

These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over ten thousand times per second, against a set of URLs that is too large to store in main memory. In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.
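The rate arithmetic works out as follows; the average of ten outlinks per fetched page is an assumption for illustration, not a figure from the paper.

pages_per_day = 100_000_000
seconds_per_day = 24 * 60 * 60                  # 86,400
fetches_per_second = pages_per_day / seconds_per_day
print(round(fetches_per_second))                # ~1157: about 1,000 per second

links_per_page = 10                             # assumed average out-degree
tests_per_second = fetches_per_second * links_per_page
print(round(tests_per_second))                  # ~11,574: well over ten thousand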
A crucial way to speed up the membership test is to cache a (dynamic) subset of the “seen” URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques for web crawling. We examined four practical techniques: random replacement, static cache, LRU, and CLOCK, and compared them against two theoretical limits: clairvoyant caching and infinite cache, when run against a trace of a web crawl that issued over one billion HTTP requests. We found that simple caching techniques are extremely effective even at relatively small cache sizes. […]
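As an illustration of one of the four practical techniques named above, here is a textbook CLOCK sketch in Python for the “seen” test; this is a generic implementation for exposition, not the authors' own, and the class and method names are invented for this sketch.

class ClockCache:
    """A fixed-capacity cache of seen URLs using CLOCK replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []   # cached URLs, one per slot
        self.ref = []     # one reference ("recently used") bit per slot
        self.index = {}   # URL -> slot position
        self.hand = 0     # the clock hand

    def contains(self, url):
        """Return True on a hit; on a miss, insert url (evicting if full)."""
        pos = self.index.get(url)
        if pos is not None:
            self.ref[pos] = 1                   # hit: mark recently used
            return True
        if len(self.slots) < self.capacity:     # miss, cache not yet full
            self.index[url] = len(self.slots)
            self.slots.append(url)
            self.ref.append(0)
            return False
        # Miss with a full cache: sweep the hand, clearing reference bits,
        # until it lands on a slot whose bit is already clear, then evict.
        while self.ref[self.hand]:
            self.ref[self.hand] = 0
            self.hand = (self.hand + 1) % self.capacity
        del self.index[self.slots[self.hand]]
        self.slots[self.hand] = url
        self.index[url] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return False

In a crawler, contains returning False would trigger the expensive lookup against the full disk-resident (or peer-resident) URL set; a high hit rate is what makes the cache worthwhile.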