
1 Co-Allocation of Compute and Network Resources in the VIOLA Testbed
Christoph Barz and Markus Pilz, University of Bonn, Institute of Computer Science IV
Oliver Wäldrich and Wolfgang Ziegler, Fraunhofer Institute for Scientific Computing and Algorithms, Department of Bioinformatics
Thomas Eickermann and Lidia Kirtchakova, Research Centre Jülich, ZAM
TERENA Networking Conference 2006 (15-18 May 2006, Catania, Italy)

2 Agenda
- Motivation
- Resource Orchestration by MetaScheduling
- Network Reservations with ARGON
- Future Work

3 Motivation – Grid Projects (Examples)
Large-scale scientific applications have extremely high data volumes, high computational demand, and distributed resources. Examples: the D-Grid Initiative (http://www.c3grid.de/, http://www.gac-grid.de/) and SPICE on TeraGrid + UK NGS (http://www.realitygrid.org/Spice/).
Resource orchestration of computational resources, storage resources, instruments and sensors, and network resources is achieved via advance reservations and co-scheduling.

4 Motivation – Applications in VIOLA (Examples)
- AMG-OPT: simulation based on a hierarchical algebraic solver
- TechSim: distributed simulation of complex technological systems
- MetaTrace: simulation of pollutant transport in groundwater
- KoDaVis: collaborative visualization of huge atmospheric datasets in heterogeneous environments

5 MetaTrace Demonstration – Distribution of Chemicals in the Soil
Problem decomposition: TRACE calculates the water flow; PARTRACE computes the distribution and chemical reactions of the pollutants. The components exchange intermediate results of up to 1 GByte within 1 second (water flow once per step, pollutant data 30-100 times per step).
Cluster requirements and reservation (coupled via MetaMPICH): Jülich Cray (PARTRACE, 30 nodes), FH BRS (TRACE, 6x2x2 CPUs), caesar (TRACE, 30x2 CPUs); participating sites: FhG Sankt Augustin, FZ Jülich, caesar, Uni Bonn, FH BRS.
Network requirements and reservation: multiple point-to-point tunnels with Layer 2/3 switching/routing over the MPLS network.

6 MetaScheduling Service – Architecture
Components: UNICORE client, UNICORE gateways, primary NJS and further NJSs with target system interfaces, local schedulers with adapters and job queues (sites A and B), the MetaScheduler, and the network RMS ARGON.
Workflow:
1) The user specifies the job in the UNICORE client
2) MetaScheduling request (WS-Agreement)
3) Negotiation and reservation of cluster and network resources
4) MetaScheduler reply (WS-Agreement)
5) Job transfer to the UNICORE system
6) All job components, including network QoS, are provisioned automatically

7 MetaScheduling Service – Algorithm
First-fit algorithm: given the requested resources and time constraints, search for a common start time of all job components on all resources within the span of time in which the requested service can start. For each resource i = 1..n, determine its next availability and set nextStartup = max(nextStartup, freeSlots[i]); repeat until a common free slot is found.
MetaTrace example: Jülich Cray (PARTRACE, 30 nodes), FH BRS (TRACE, 6x2x2 CPUs), caesar (TRACE, 30x2 CPUs) and the network service; a common start time is found at nextStartup.

8 ARGON – Network Service
Interface operations: Availability, Reservation, Bind, Query, Cancel, Modify

9 ARGON – Reservation Lifetime
An advance reservation passes through four phases between the request (t_req), confirmation (t_conf), parameter binding (t_bind), activation (t_act) and the service interval [t_begin, t_end]:
- Negotiation phase: availability check(s), admission decision, reservation. The feasible solution space (resources over time) is bounded by constraints such as traffic engineering, service availability, policy rules, SLAs and user requirements.
- Intermediate phase: re-optimization is inexpensive; binding of service parameters.
- Activation phase: automatic initiation; configuration of network devices; duration dependent on service and devices.
- Usage/renegotiation phase: re-optimization is expensive; modification of parameters.
Query and Cancel can be used at any time after negotiation.
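The phase ordering above can be sketched as a small state machine. This is a minimal illustration only; the class and method names are hypothetical and do not reflect ARGON's actual interface.

```python
# Sketch of the reservation lifetime (phase names from the slide;
# the class and its methods are illustrative, not ARGON's real API).

class Reservation:
    def __init__(self, t_begin, t_end):
        self.t_begin, self.t_end = t_begin, t_end
        self.phase = "negotiation"       # availability check, admission
        self.bound_params = None

    def confirm(self):
        # Negotiation ends with an admission decision (t_conf).
        assert self.phase == "negotiation"
        self.phase = "intermediate"      # re-optimization still inexpensive

    def bind(self, params):
        # Service parameters are fixed at t_bind.
        assert self.phase == "intermediate"
        self.bound_params = params
        self.phase = "activation"        # devices get configured automatically

    def start(self):
        # At t_begin the service enters usage; re-optimization is now expensive.
        assert self.phase == "activation"
        self.phase = "usage"

r = Reservation(t_begin=100, t_end=200)
r.confirm()
r.bind({"bandwidth_mbit": 1000})
r.start()
```

Each transition only succeeds from its predecessor phase, mirroring the slide's point that Query and Cancel remain available throughout, while re-optimization cost depends on the phase.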

10 ARGON – Resource Optimisation
Rerouting of intermediate-phase flows is inexpensive; online and offline algorithms can be used for (1) flexible path selection and (2) rerouting/planning of accepted flows.
Reservation shapes in the capacity-over-time plane:
- First fit / deadline reservations: the start time can be shifted within the deadline.
- Flexible and malleable reservations: capacity and duration can be traded off (increase capacity and reduce duration, or reduce capacity and increase duration).
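The capacity/duration trade-off of a malleable reservation follows from the transfer volume being fixed. A minimal sketch under that assumption (function and parameter names are illustrative, not part of ARGON):

```python
# Sketch: enumerate feasible (capacity, duration) shapes for a malleable
# reservation of a fixed data volume. All names here are illustrative.

def malleable_options(volume_gbit, capacities_gbit_s, deadline_s):
    """Return (capacity, duration) pairs that move volume_gbit of data
    and finish within deadline_s; higher capacity shortens the duration."""
    options = []
    for cap in capacities_gbit_s:
        duration = volume_gbit / cap      # fixed volume: duration = volume / capacity
        if duration <= deadline_s:
            options.append((cap, duration))
    return options

# 80 Gbit to transfer, candidate link rates of 1, 2 and 10 Gbit/s, 60 s deadline:
opts = malleable_options(80, [1, 2, 10], 60)
# 1 Gbit/s would need 80 s and misses the deadline; 2 and 10 Gbit/s fit.
```

A scheduler can then pick whichever feasible shape best fits the free capacity in its timetable, which is exactly what makes malleable reservations easier to place than rigid ones.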

11 ARGON – Network Architecture
Overlay model: the optical domain is not visible to the IP domain; the MPLS domain cannot perform efficient traffic engineering; UNI signaling is used between the domains.
Multi-region network (diagram components): ProxyUniClient, DataBase, SNMP server, SNMP client, Controller, RSVP, Auto Discovery, ARGON Listener; service provisioning via CLI towards the MPLS switch and via UNI towards the ASON/GMPLS switch administration (Alcatel).

12 Co-Allocation in the VIOLA Testbed
MetaScheduler concept:
- Resource orchestration: negotiation of a common time frame for all resources; reservation of nodes at different clusters; reservation of network services via ARGON
- Interoperability: standardization activities (GRAAP, OGSA-RSS); technologies (UNICORE, Web Services, WS-Agreement)
ARGON concept (Allocation and Reservation in Grid-enabled Optical Networks):
- Network service provisioning: reservation, signaling and provisioning; end-to-end path computation; service modeling
- Policy-based framework: AAA support; service level agreements (SLAs); policy-aware provisioning
- Network resource management: optimization; resource modeling
- Network architecture (IETF nomenclature): multi-domain, multi-layer, multi-region

13 ARGON – Where are we now?
Timeline (1st year, 2nd year, 3rd year, today): application "gridifying", middleware resource broker, ARGON reservation service and ARGON service provisioning development, prototype deployment, and VIOLA network deployment and tests; the scope evolves from single-domain, single-region, multi-layer towards multi-vendor, multi-layer, multi-region and multi-domain.
Overall VIOLA requirements: infrastructure for applications, Grid middleware integration, the network as a Grid resource.
Overall ARGON objectives: bandwidth on demand in VIOLA, advance network reservation, an interface inspired by EGEE specs.

14 Future Work
MetaScheduler:
- More automated resource pre-selection
- MetaScheduling Service for workflows
- Porting to GT4
- LUCIFER integration
ARGON:
- Multi-region: the MPLS and ASON/GMPLS layers must be coordinated
- Resource and service model enhancements
- GÉANT2 cooperation (JRA3 inter-domain manager)
- LUCIFER integration (ARGON, UCLPv2, D-RAC)

15 Conclusion
- Demanding applications benefit from the resources of multiple clusters and sites
- Application-driven resource selection for UNICORE Grid applications
- Co-scheduling of computational, storage and network resources
- The MetaScheduling Service orchestrates resources across multiple domains
- ARGON provides network services with advance-reservation capabilities and dedicated QoS

16 The End – Thank You!
Contact: www.viola-testbed.de
{barz, pilz}@cs.uni-bonn.de
{th.eickermann, l.kirtchakova}@fz-juelich.de
{Wolfgang.Ziegler, Oliver.Waeldrich}@scai.fraunhofer.de

17 MetaScheduling Service – Algorithm (2)

set n = number of requested resources
set res[1..n] = requested resources
set prop[1..n] = requested property per resource
set freeSlots[1..n] = null
set endOfPreviewWindow = false
set nextStartup = currentTime + someMinutes
set needNext = true
while (endOfPreviewWindow = false & needNext = true) do {
    for i = 1..n do in parallel {
        freeSlots[i] = AvailableAt(res[i], prop[i], nextStartup)
    }
    set needNext = false
    for i = 1..n do {
        if (nextStartup != freeSlots[i]) then {
            if (freeSlots[i] != null) then {
                if (nextStartup < freeSlots[i]) then {
                    set nextStartup = freeSlots[i]
                    set needNext = true
                }
            } else {
                set endOfPreviewWindow = true
            }
        }
    }
}
if (needNext = false & endOfPreviewWindow = false) then
    return freeSlots[1]
else
    return "no common slot found"
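The pseudocode above can be rendered as a short runnable sketch. The function and the toy availability callback below are illustrative stand-ins for the MetaScheduler's real interfaces to the local scheduling systems.

```python
# Runnable sketch of the first-fit common-slot search from the slide.
# available_at(resource, t) stands in for the AvailableAt query to a local
# scheduler: it returns the resource's next free slot at or after t,
# or None when no slot is left in the preview window.

def find_common_slot(resources, available_at, earliest, preview_end):
    """Return the earliest start time at which every resource is free,
    or None if no common slot exists within the preview window."""
    next_startup = earliest
    while next_startup <= preview_end:
        # Query all resources (done in parallel in the original algorithm).
        free_slots = [available_at(r, next_startup) for r in resources]
        if any(s is None for s in free_slots):
            return None                  # end of some preview window reached
        candidate = max(free_slots)      # nextStartup = max over freeSlots[i]
        if candidate == next_startup:
            return next_startup          # all resources agree: common slot found
        next_startup = candidate         # retry at the latest proposed slot
    return None

# Toy availability: each resource is simply free from a fixed time onwards.
ready = {"cray-juelich": 30, "fhbrs-cluster": 10, "caesar-cluster": 45, "network": 20}
slot = find_common_slot(ready, lambda r, t: max(t, ready[r]), 0, 1000)
# The search converges on t = 45, when the last cluster becomes free.
```

With monotone availability functions each iteration only moves nextStartup forward, so the loop terminates once all resources agree or the preview window is exhausted, matching the first-fit behaviour sketched on slide 7.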

