Skew-Aware Task Scheduling in Clouds

Dongsheng Li+, Yixing Chen+, Richard Hu Hai§
+National Lab for Parallel and Distributed Processing, School of Computer, National University of Defense Technology, China
§Raffles Business Institute, Singapore
dsli@nudt.edu.cn

Abstract—Data skew is an important cause of stragglers in MapReduce-like cloud systems. In this paper, we propose a Skew-Aware Task Scheduling (SATS) mechanism for iterative applications in MapReduce-like systems. The mechanism utilizes the similarity of data distributions in adjacent iterations of iterative applications to reduce the straggler problem caused by data skew. It collects data distribution information during the execution of tasks in the current iteration, and uses this information to guide data partitioning in tasks of the next iteration. We implement the mechanism in the HaLoop system and deploy it in a cluster. Experiments show that the proposed mechanism deals with data skew and improves load balancing effectively.

Keywords—Data Skew; Task Scheduling; Cloud; Load Balancing

I. INTRODUCTION

Cloud computing has become a promising technology in recent years, and MapReduce is one of the most successful realizations of large-scale data-intensive cloud computing platforms
[1]-[3]. MapReduce uses a simple data-parallel programming model with two basic operations, i.e., the Map and Reduce operations. Users can customize the Map function and the Reduce function according to application requirements. Each Map task takes one piece of the input data and generates a set of intermediate key/value pairs using the Map function; these pairs are shuffled to the Reduce tasks, which apply the Reduce function. This programming model is simple but robust, and many large-scale data processing applications can be expressed in it. MapReduce-like systems can automatically schedule multiple Map and/or Reduce tasks over distributed machines in the cloud. As the only synchronization step lies between the Map phase and the Reduce phase, tasks executing in the same phase have high parallelism, and thus the concurrency and the scalability
of the system can be greatly enhanced. Hadoop [4] and its variants (e.g., HaLoop [5] and Hadoop++ [6]) are typical MapReduce-like systems. Since there is a synchronization step between the Map phase and the Reduce phase in MapReduce-like systems, one slow task in either phase may slow down the execution of the whole job. Such a slow task in the Map or Reduce phase is called a straggler. When stragglers appear, the execution time of the whole job increases and resource utilization drops. Recent studies [7]-[8] show that data skew in the Map or Reduce phase has become one of the main causes of stragglers. In many scientific
computing and data analysis applications, skew in the input or intermediate data can cause severe load imbalance. For example, PageRank [9] for large-scale search engines is a typical application executed in MapReduce-like systems. The PageRank application performs a link analysis that assigns a weight (rank) to each vertex/webpage in the webpage link graph by iteratively aggregating the weights of its inbound neighbors. Studies [7], [8], [18] have shown that the degree distributions of webpage link graphs are highly skewed, with some vertexes having a very large number of incoming edges. Since MapReduce-like systems [4] use a hash algorithm to partition the intermediate data among Reducer nodes, the nodes responsible for computing the weights of high-degree vertexes may take more time to finish their tasks and thus become the stragglers of the system. The straggler problem caused by data skew has therefore become an important research topic in MapReduce-like systems. In this paper, we propose a Skew-Aware Task Scheduling (SATS) mechanism for MapReduce-like systems. The SATS mechanism is based on the observation that many applications in MapReduce-like systems are iterative computations [5], such as PageRank [9], machine learning applications, recursive relational queries, and social network analysis. In iterative applications, data are processed iteratively until the computation satisfies a convergence or stopping condition, and each iteration
in the computation may consist of one or more MapReduce jobs. The data in two adjacent iterations are often similar, so the data distributions of jobs in adjacent iterations are likely to be similar as well. If the data distribution could be acquired before a MapReduce job executes, the data could be partitioned properly across the nodes in the system to improve load balancing. Based on this idea, the SATS mechanism is designed to utilize the similarity of data distributions in adjacent iterations to reduce the straggler problem caused by data skew. It collects data distribution information during task execution in the current iteration, and uses this information to guide data partitioning in the next iteration. As data skew most often occurs in the Reduce phase of MapReduce jobs, the SATS mechanism focuses on the straggler problem in the Reduce phase. The main contributions of this paper are as follows. First, we design a skew-aware task scheduling mechanism, called SATS, to deal with the straggler problem caused by data skew in
iterative applications in MapReduce-like systems. Second, we implement the SATS mechanism and build a prototype based on HaLoop [5], an open-source MapReduce-like system. Finally, we perform comprehensive experiments to evaluate the SATS mechanism, and the experimental results show that SATS improves load balancing effectively.

2013 IEEE Seventh International Symposium on Service-Oriented System Engineering, 978-0-7695-4944-6/12 $26.00 © 2012 IEEE, DOI 10.1109/SOSE.2013.64

The information on
key/value pairs extracted from jobs in the current iteration can be used to predict the data distribution in the next iteration. Based on this idea, the SATS mechanism is designed to utilize the similarity of data distributions in adjacent iterations to mitigate the straggler problem caused by data skew and enhance load balancing. The SATS mechanism collects distribution information on the intermediate key/value pairs generated by the Map tasks during job execution in the current iteration, and uses this information to guide data partitioning so as to improve the load balancing of Reducer nodes in the next iteration. The components of
the SATS mechanism in MapReduce-like systems are shown in Figure 1. The Map, Reduce, and JobTracker are the common components of MapReduce-like systems.

Figure 1. The components of the SATS mechanism in MapReduce-like systems

The SATS mechanism is implemented by three modules, i.e., the collector module, the controller module, and the balancer module. In MapReduce-like systems, one TaskTracker runs on a distributed node for each Map or Reduce task. The collector module runs with the TaskTracker for each Reduce task and gathers the distribution information of the intermediate key/value pairs in the MapReduce jobs. Each collector module transfers the gathered distribution information to the balancer module. The balancer module works in the JobTracker subsystem of MapReduce-like systems; it gathers all the distribution information from the distributed collectors, computes the global distribution of intermediate key/value pairs, and then determines a data partitioning scheme for jobs in the next iteration to deal with the data skew and improve the load balancing of Reducer nodes. The balancer module adopts the HLF algorithm, described later in subsection C, to calculate the data partitioning scheme. After the balancer module determines the partitioning scheme, it notifies the controller modules, distributed in the TaskTrackers that will run Map tasks in the next iteration, of the scheme. When the Map tasks in the next iteration generate the intermediate key/value pairs,
they partition the key/value pairs according to this scheme instead of the default HashPartitioner scheme in Hadoop/HaLoop, and then shuffle them to the Reducer nodes accordingly to deal with the data skew and improve the load balancing of Reducer nodes. We implement these modules of the SATS mechanism in the HaLoop [5] system and describe their details in the following subsections.

B. Collecting the Data Distribution Information

In MapReduce-like systems, the intermediate data are generated in the form of key/value pairs, and the data with the same key are shuffled to one Reducer node. Therefore, the data distribution information consists of the generated keys and their "weights", i.e., the numbers of related key/value pairs. The collector module runs in each TaskTracker for the Reduce tasks on distributed machines, and it counts the weights
of keys as the Reduce tasks execute on their local nodes. As there are many distributed collector modules in the system, each sends its data distribution information, in the form of keys and their weights, to the balancer module on the Master node where the JobTracker runs. There are several ways to transfer the data distribution information from the distributed collector modules to the JobTracker in MapReduce-like systems. Since there are periodic heartbeat messages between the JobTracker and the TaskTrackers, the heartbeat messages can piggyback the data distribution information; alternatively, the information can be transferred from the TaskTracker to the JobTracker directly when needed. However, these implementations require rewriting or modifying the communication mechanisms in MapReduce-like systems.
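To make the problem that SATS addresses concrete, the following standalone sketch illustrates why hash partitioning produces stragglers under key skew. The key multiset, the Reducer count, and the use of CRC32 as a stand-in for Hadoop's default HashPartitioner are all illustrative assumptions, not details from the paper:

```python
from zlib import crc32

# Hypothetical skewed intermediate data: one "hub" key (like a
# high in-degree webpage in PageRank) carries most key/value pairs.
pairs = [("hub", i) for i in range(900)] + [(f"v{j}", j) for j in range(100)]

n_reducers = 4
loads = [0] * n_reducers
for key, _ in pairs:
    # Hash partitioning: all pairs with the same key go to the same
    # Reducer, so the hub's entire weight lands on a single node.
    loads[crc32(key.encode()) % n_reducers] += 1

print(loads)  # one Reducer holds at least 900 of the 1000 pairs
```

Because all pairs sharing a key must reach the same Reducer, no hash function can split the hub's weight; only a partitioning scheme that knows the per-key weights can place the remaining keys so as to compensate.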
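The collector/balancer/controller pipeline described in this section can be sketched end to end as follows. This is a simplified model under stated assumptions: the HLF algorithm is only named in this excerpt (its details are deferred to subsection C), so a greedy heaviest-key-first assignment to the least-loaded Reducer stands in for it, and all function names are hypothetical:

```python
import heapq
from collections import Counter

def collect_weights(pairs):
    """Collector: count the key/value pairs per key (the key's
    "weight") while tasks of the current iteration execute."""
    return Counter(key for key, _ in pairs)

def balance(weights, n_reducers):
    """Balancer: compute a partitioning scheme from the global key
    weights. Greedy stand-in for HLF: place the heaviest remaining
    key on the currently least-loaded Reducer."""
    heap = [(0, r) for r in range(n_reducers)]  # (load, reducer id)
    heapq.heapify(heap)
    scheme = {}
    for key, w in sorted(weights.items(), key=lambda kv: (-kv[1], kv[0])):
        load, r = heapq.heappop(heap)
        scheme[key] = r
        heapq.heappush(heap, (load + w, r))
    return scheme

def partition(key, scheme, n_reducers):
    """Controller-side partitioner for the next iteration: use the
    scheme instead of the default hash; fall back to hashing for
    keys not seen in the previous iteration."""
    return scheme.get(key, hash(key) % n_reducers)
```

With the skewed weights Counter({'hub': 900, 'v1': 50, 'v2': 50}) and two Reducers, balance places the hub alone on one Reducer and the two light keys together on the other, so the next iteration's shuffle is as even as the per-key granularity allows.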