
Malekimajd, M., Rizzi, A., Ardagna, D., Ciavotta, M., Passacantando, M., Movaghar, A. (2015). Optimal capacity allocation for executing MapReduce jobs in cloud systems. In Proceedings - 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2014 (pp. 385-392). Institute of Electrical and Electronics Engineers Inc. [10.1109/SYNASC.2014.58].

Optimal capacity allocation for executing MapReduce jobs in cloud systems

Ciavotta, M.; Passacantando, M.
2015

Abstract

Nowadays, analyzing large amounts of data is of paramount importance for many companies. Big data and business intelligence applications are facilitated by the MapReduce programming model while, at the infrastructure layer, cloud computing provides flexible and cost-effective solutions for allocating large clusters on demand. Capacity allocation in such systems is a key challenge for guaranteeing the performance of MapReduce jobs while minimizing cloud resource costs. The contribution of this paper is twofold: (i) we formulate a linear programming model able to minimize cloud resource costs and job rejection penalties for the execution of jobs of multiple classes with (soft) deadline guarantees; (ii) we provide new upper and lower bounds for MapReduce job execution time in shared Hadoop clusters. Our solutions are validated by a large set of experiments. We demonstrate that our method is able to determine the globally optimal solution for systems including up to 1000 user classes in less than 0.5 seconds. Moreover, the execution time of MapReduce jobs is within 19% of our upper bounds on average.
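To make the kind of optimization the abstract describes concrete, the following is a minimal toy sketch of a capacity-allocation linear program: for each job class, choose how many VMs to rent and how many jobs to reject so that total VM cost plus rejection penalties is minimized. All variable names, cost figures, and constraints here are illustrative assumptions, not the paper's actual formulation.

```python
# Toy capacity-allocation LP: minimize VM cost + rejection penalties.
# Illustrative sketch only; numbers and structure are assumed, not the paper's model.
import numpy as np
from scipy.optimize import linprog

vm_cost = 1.0                      # hourly cost of one VM (assumed)
jobs    = np.array([10.0, 4.0])    # submitted jobs per class (assumed)
demand  = np.array([2.0, 5.0])     # VMs needed per accepted job to meet its deadline (assumed)
penalty = np.array([50.0, 3.0])    # penalty per rejected job (assumed)
k = len(jobs)

# Decision variables: x = [vms_1..vms_k, rej_1..rej_k]
c = np.concatenate([vm_cost * np.ones(k), penalty])

# Capacity per class: vms_i >= demand_i * (jobs_i - rej_i)
#   rewritten as  -vms_i - demand_i * rej_i <= -demand_i * jobs_i
A_ub = np.hstack([-np.eye(k), -np.diag(demand)])
b_ub = -demand * jobs

# VMs are nonnegative; rejections bounded by submitted jobs
bounds = [(0, None)] * k + [(0, n) for n in jobs]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
vms, rej = res.x[:k], res.x[k:]
```

In this toy instance the solver serves all class-1 jobs (their rejection penalty far exceeds the VM cost of serving them) and rejects all class-2 jobs (serving one costs 5 VM-hours but rejecting it costs only 3), showing how the trade-off between resource cost and penalties drives the allocation.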
Type: paper
Keywords: Capacity Allocation; Cloud Computing; MapReduce; Performance bounds
Language: English
Conference: International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2014, 22-25 September 2014
Editors: Winkler, F.; Negru, V.; Ida, T.; Jebelean, T.; Petcu, D.; Watt, S.M.; Zaharie, D.
Proceedings: Proceedings - 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2014
ISBN: 9781479984480
Year: 2015
Pages: 385-392
Article number: 7034708
Rights: reserved
Files in this product:
MICAS2014_2.pdf — 356.04 kB, Adobe PDF (accessible to archive managers only)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/186653
Citations
  • Scopus 7
  • Web of Science 6