Lizhe Wang, Gregor von Laszewski, Jai Dayal, Thomas R. Furlani


Thermal Aware Workload Scheduling with Backfilling for Green Data Centers
Lizhe Wang, Gregor von Laszewski, Jai Dayal, Thomas R. Furlani
RIT · IU · UB

Outline
- Background and related work
- Models
- Research problem definition
- Scheduling algorithm
- Performance study
- Conclusion

Context
- Cyberaide: a project that aims to make advanced cyberinfrastructure easier to use
- GreenIT & Cyberaide: how do we use advanced cyberinfrastructure in an efficient way?
- FutureGrid: a newly NSF-funded project providing a testbed that integrates the ability to dynamically provision resources (Geoffrey C. Fox is PI)
- GPGPUs: application use of special-purpose hardware as part of the cyberinfrastructure

FutureGrid
The goal of FutureGrid is to support the research that will invent the future of distributed, grid, and cloud computing. FutureGrid will build a robustly managed simulation environment, or testbed, to support the development and early scientific use of new technologies at all levels of the software stack: from networking to middleware to scientific applications. The environment will mimic TeraGrid and/or general parallel and distributed systems. This testbed will enable dramatic advances in science and engineering through the collaborative evolution of science applications and related software.

Other Participant Sites
- University of Virginia (UV)
- Technical University Dresden
- GWT-TUD GmbH, Germany
- University of Tennessee, Knoxville (UTK)

FutureGrid Hardware

FutureGrid Partners
- Indiana University
- Purdue University
- San Diego Supercomputer Center at University of California San Diego
- University of Chicago / Argonne National Laboratory
- University of Florida
- University of Southern California Information Sciences Institute
- University of Tennessee, Knoxville
- University of Texas at Austin / Texas Advanced Computing Center
- University of Virginia
- Center for Information Services and GWT-TUD from Technische Universität Dresden

Green computing
The study and practice of using computing resources efficiently so that their impact on the environment is as small as possible:
- the least amount of hazardous materials is used
- computing resources are used efficiently in terms of energy, and recyclability is promoted

Cyberaide Project
- A middleware for clusters, grids, and clouds
- A collaboration between IU, RIT, KIT, …
- Project led by Dr. Gregor von Laszewski

Objective
- Towards next-generation cyberinfrastructure: middleware for data centers, grids, and clouds
- Environmental respect: reduce the temperatures of computing resources in a data center, thereby reducing cooling-system cost and improving system reliability
- Methodology: thermal-aware workload distribution

Model
- Data center: Node_i with position <x,y,z>, t_a, and temperature Node_i.Temp(t)
- Thermal map: TherMap = Temp(<x,y,z>, t)
- Workload: Job = {job_j}, where job_j = (p, t_arrive, t_start, t_req, Δtemp(t))
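The model above can be written down as simple data structures; this is a minimal sketch, and all field names below are illustrative assumptions rather than the paper's actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """Node_i in the data center model."""
    x: int
    y: int
    z: int
    ambient: float        # thermal-map value Temp(<x,y,z>, t)
    temp: float           # Node_i.Temp(t), current node temperature

@dataclass
class Job:
    """job_j = (p, t_arrive, t_start, t_req, Δtemp(t)) from the workload model."""
    p: float                          # power drawn while the job runs
    t_arrive: float                   # arrival time
    t_req: float                      # requested runtime
    t_start: Optional[float] = None   # set once the scheduler places the job
```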

Thermal model
- RC thermal model with power draw P, thermal resistance R, and thermal capacitance C:
  Node_i.Temp(t) = P·R + T_amb + (Node_i.Temp(0) − P·R − T_amb)·e^(−t/(R·C))
- Ambient temperature from the thermal map: T_amb = Temp(Node_i.<x,y,z>, t)
- Online task temperature: Node_i.Temp(t), predicted per node while a task runs
- Each node_i at <x,y,z> has a task-temperature profile
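The lumped RC model above can be evaluated directly; the sketch below assumes the reconstructed equation and illustrative parameter names (P, R, C, T0, T_amb), not the paper's implementation:

```python
import math

def node_temp(t, P, R, C, T0, T_amb):
    """Lumped RC thermal model: temperature of a node at time t, given
    power draw P, thermal resistance R, thermal capacitance C, initial
    temperature T0, and ambient temperature T_amb from the thermal map."""
    steady = P * R + T_amb                          # steady-state temperature
    return steady + (T0 - steady) * math.exp(-t / (R * C))
```

At t = 0 this returns T0, and as t grows the node temperature decays exponentially toward the steady state P·R + T_amb.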

Research problem definition
Given a data center, a workload, and the maximum permitted temperature of the data center:
- minimize response time T_response
- minimize node temperatures

Concept framework
- Inputs: the data center model and the workload model feed the workload placement scheduler (TASA-B)
- The RC thermal model calculates the online task temperature from the task-temperature profile
- A profiling tool provides the task-temperature profiles; a monitoring service and a CFD model provide the information used to calculate the thermal map
- The scheduler also controls the cooling system

Scheduling framework
- Jobs are submitted to a job queue
- TASA-B schedules jobs from the queue onto racks in the data center
- Data center information is updated periodically

Task scheduling algorithm with backfilling (TASA-B)
1. Sort all jobs in decreasing order of task-temperature profile
2. Sort all resources in increasing order of predicted temperature
3. Allocate hot jobs to cool resources
4. Predict resource temperatures based on the online task temperature
5. Backfill jobs where possible

Backfilling: time dimension
[Figure: backfilling holes on the time axis, opening at available time t_0 on nodes node_max1 and node_max2; node_k.t_bfsta is the backfilling start time of node_k, node_k.t_bfend the end time for backfilling]

Backfilling: temperature dimension
[Figure: temperature-based backfilling holes on nodes node_max1 and node_max2, bounded by Temp_bfmax; node_k.Temp_bfsta is the start temperature for backfilling of node_k, node_k.Temp_bfend the end temperature for backfilling]
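Putting the two figures together, a job can only be backfilled into a node's hole if it fits in both dimensions. The check below is an illustrative sketch using the slide's notation; the exact condition in the paper may differ:

```python
def fits_backfill_hole(job_runtime, job_delta_temp,
                       t_bfsta, t_bfend,
                       temp_bfsta, temp_bfmax):
    """Backfilling feasibility (sketch): the job must finish before the
    time hole closes AND keep the node under the temperature cap."""
    fits_time = t_bfsta + job_runtime <= t_bfend        # time dimension
    fits_temp = temp_bfsta + job_delta_temp <= temp_bfmax  # temperature dimension
    return fits_time and fits_temp
```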

Simulation
- Data center: Center for Computational Research at UB; Dell x86_64 Linux cluster of 1056 nodes, 13 Tflop/s
- Workload: trace from 20 Feb 2009 to 22 Mar 2009, 22,385 jobs

Simulation results: TASA
- Reduced average temperature: 16.1 °F
- Reduced maximum temperature: 6.1 °F
- Increased job response time: 13.9%
- Power saved: 5000 kW
- Reduced CO2 emission: 1900 kg/hour

Simulation results: TASA-B
- Reduced average temperature: 14.6 °F
- Reduced maximum temperature: 4.1 °F
- Increased job response time: 11%
- Power saved: 4000 kW
- Reduced CO2 emission: 1600 kg/hour

Our work on green data center computing
- Power-aware virtual machine scheduling (Cluster'09)
- Power-aware parallel task scheduling (submitted)
- TASA (i-SPAN'09)
- TASA-B (IPCCC'09)
- ANN-based temperature prediction and task scheduling (submitted)

Final remarks
- Green computing
- Thermal-aware data center computing
- TASA-B
- Justification with a simulation study