Presentation on theme: "PRAGMA Grid test-bed: shares clusters managed by multiple sites, realizing a large-scale computational environment expected as a platform for computation-intensive applications." Presentation transcript:
2  PRAGMA Grid test-bed:
› Shares clusters managed by multiple sites.
› Realizes a large-scale computational environment.
› Expected as a platform for computation-intensive applications.
 Such applications consist of highly independent processes that can be distributed, e.g., docking simulation.
(Slide diagram: sites A, B, and C, each with a different environment (OS: Debian, glibc 2.0; OS: Red Hat, glibc 3.0; OS: Red Hat, glibc 2.0), combined into one large-scale Grid environment. Cluster registrations: http://www.rocksclusters.org/rocks-register/)

3  Virtualized cluster: a cluster composed of virtual machines (VMs).
› Builds a private computational environment that can be customized for each user.
› Is relatively easy to deploy on a single physical cluster by using cluster-building tools.
(Slide diagram: computers at one site, connected by a local network (LAN), host two sets of VMs, one with OS: Debian and glibc 2.0 and one with OS: Red Hat and glibc 3.0, each set on its own virtual local network.)

4  Rocks, developed at UCSD, is installed on the clusters at sites in the PRAGMA test-bed.
 Rocks virtual cluster:
1. Each virtual cluster is allocated a VLAN ID and a network.
2. Virtual compute nodes are installed automatically via network boot (PXE boot).
(Slide diagram: a physical frontend node and compute nodes; VLAN 2 is constructed over their physical NICs; a virtual frontend node (eth0, eth1) installs virtual compute nodes (eth0) by PXE boot. Layer 2 communication, i.e., a LAN, is required between the virtual nodes.)
Issue: with Rocks alone, it is difficult to build a virtual cluster that spans multiple clusters at Grid sites.
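For reference, a plain Rocks virtual cluster is driven from the physical frontend using the Xen roll. A minimal sketch, assuming Rocks 5.x with the Xen roll and hypothetical VM host names (exact arguments vary by Rocks version):

  $ # start the virtual frontend node on the physical frontend
  $ rocks start host vm frontend-0-0-0
  $ # mark a virtual compute node for installation, then boot it; it installs itself via PXE
  $ rocks set host boot compute-0-0-0 action=install
  $ rocks start host vm compute-0-0-0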

5 Goal: develop a system that can build a virtual cluster over multiple clusters at Grid sites for computation-intensive applications.
 Our approach:
› Focus on Rocks.
› Seamlessly integrate an N2N overlay network with Rocks.
(Slide diagram: Rocks cluster A at site A and Rocks cluster B at site B, joined by the physical network; an N2N overlay network carries a single Rocks virtual cluster spanning both sites.)

6  N2N, developed by the ntop project in Italy:
1. Creates an encrypted layer 2 overlay network using a P2P protocol.
2. Can establish a layer 2 network spanning multiple sites, using TAP virtual network interfaces (VNICs).
3. Separates overlay networks in a manner similar to VLAN IDs, using a community name (network ID).
(Slide diagram: an edge at site A (MAC 11:22:33:44:55:66) and an edge at site B (MAC 13:14:15:16:18:26) join the same N2N overlay network over LAN and WAN through their physical NICs.)
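As an illustration of how such an overlay is brought up, here is a minimal sketch using the standard n2n command-line tools; the community name, key, addresses, and port below are example values, not taken from the slides:

  $ # on a publicly reachable host: start the supernode (rendezvous point) on UDP port 1234
  $ supernode -l 1234
  $ # on each participating node: create a TAP VNIC and join the "mycluster" community
  $ edge -d n2n0 -a 10.1.1.1 -c mycluster -k secretkey -l supernode.example.org:1234

Nodes started with the same -c community name see each other at layer 2; -a assigns the IP address of the TAP VNIC.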

7  MVC Controller (MVC: Multi-site Virtual Cluster)
Step 1: The Resource Manager registers multiple Rocks clusters as resources for a virtual cluster.
(Slide diagram: the frontend and compute nodes at site A and site B each run Rocks with its own Rocks database, connected over LAN and WAN; the MVC Controller on the frontend consists of a Resource Manager, an Overlay Network Constructor, and a VM Manager backed by an MVC database.)
Command from the slide: rocks add mvc SiteA:SiteB

8  MVC Controller (MVC: Multi-site Virtual Cluster)
Step 2: The Overlay Network Constructor builds a layer 2 overlay network for each virtual cluster, using the cluster name as the N2N community (cluster ID).
(Slide diagram: the frontend and compute nodes at sites A and B gain N2N VNICs on top of their physical NICs and join the virtual cluster's N2N overlay network across LAN and WAN.)
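The slides do not show the constructor's actual invocation, but the equivalent manual step would be to start an N2N edge on every physical node that hosts part of the virtual cluster, with the cluster name as the community. A sketch with hypothetical address, key, and supernode values:

  $ # run on each physical frontend/compute node belonging to virtual cluster "vc1"
  $ edge -d n2n0 -a 10.2.0.1 -c vc1 -k vc1key -l supernode.siteA.example.org:1234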

9 (Continued from slide 8.) The Overlay Network Constructor starts an N2N edge (VNIC) on every physical frontend and compute node at both sites, so that all of them participate in the virtual cluster's overlay network.

10  MVC Controller (MVC: Multi-site Virtual Cluster)
Step 3: The VM Manager seamlessly connects the virtual frontend node and the virtual compute nodes to the N2N overlay network.
(Slide diagram: the virtual frontend node (eth0, eth1) comes up at site A and PXE-boots a virtual compute node at site A over the overlay network.)
Commands from the slide:
$ rocks start host vm overlay frontend
$ rocks start host vm overlay compute nodeA site=A

11 (Continued from slide 10.) The same command installs virtual compute nodes at site B, PXE-booting them across the overlay network:
$ rocks start host vm overlay compute nodeB site=B
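Putting slides 7 through 11 together, the end-to-end build driven by the MVC Controller's rocks commands looks roughly like the sequence below. The commands are the ones shown on the slides, but the tooling is still under development, so treat the exact argument syntax as provisional:

  $ rocks add mvc SiteA:SiteB                         # register both Rocks clusters (slide 7)
  $ rocks start host vm overlay frontend              # start the virtual frontend (slide 10)
  $ rocks start host vm overlay compute nodeA site=A  # PXE-install virtual computes at site A
  $ rocks start host vm overlay compute nodeB site=B  # PXE-install virtual computes at site B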

12  The resulting cluster can be used just like a Rocks virtual cluster at a local site, e.g., by submitting jobs from the virtual frontend. Commands from the slide:
$ qsub -np $NSLOTS app.sh
$ mpirun -np 2 app.mpi
(Slide diagram: the virtual frontend and the virtual compute nodes at sites A and B form one virtual LAN over the N2N overlay network.)
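The qsub line on the slide appears garbled (-np is an mpirun option, not a qsub option). On a stock Rocks cluster with Sun Grid Engine, a submission would look roughly like the sketch below; the parallel environment name "mpi" is an assumption, since it varies by site configuration:

  app.sh (SGE job script):
    #!/bin/bash
    #$ -pe mpi 8                  # request 8 slots from the "mpi" parallel environment
    mpirun -np $NSLOTS app.mpi    # $NSLOTS is set by SGE to the number of granted slots

  $ qsub app.sh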

13 Experiments:
1. Verify that a virtual cluster can be built over multiple Rocks clusters.
2. Evaluate computation performance for a computation-intensive application.
 Environment:
› Site A and site B each consist of a Rocks frontend node and 4 compute nodes behind a 1 Gbps switch, with a WAN emulator between the two sites.
› Each node: OS CentOS 5.4 (Rocks 5.4); CPU 2 x Intel Xeon 2.27 GHz (16 cores); memory 12 GB; network 1 Gbps.
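The slides do not say which WAN emulator was used, but its latency and bandwidth shaping can be reproduced with Linux netem/tbf. A minimal sketch, assuming the emulator forwards traffic on eth0 and using values from the experiment's parameter range:

  $ # add 60 ms of delay on eth0
  $ tc qdisc add dev eth0 root handle 1:0 netem delay 60ms
  $ # cap bandwidth at 100 Mbit/s beneath the netem qdisc
  $ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 100mbit buffer 32k limit 30000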

14 Experiment 1:
1. Verified that a virtual cluster spanning clusters A and B can be built through the N2N overlay network.
2. Verified that a virtual cluster can be built in a WAN environment:
› Varied the latency at the WAN emulator: 0 ms, 20 ms, 60 ms, 100 ms, 140 ms.
› Measured the install time for the 4 virtual compute nodes (about 1.0 GB of packages to install).
Install time by latency (from the slide's chart, in seconds): 0 ms: 692; 20 ms: 1365; 60 ms: 2628; 100 ms: 3099; 140 ms: 3521.
Result: virtual compute nodes can be installed over a WAN; a virtual cluster over multiple Rocks clusters can be built even when the clusters are in a WAN environment.

15  Experiment 2: measured the execution time of a computation-intensive application.
› DOCK 6.2 (sample program): 30 compounds docked against a protein, divided among 8 processes; there is little communication between the processes.
› Varied latency and bandwidth at the WAN emulator: 20 ms, 60 ms, 100 ms, 140 ms / 500 Mbps, 100 Mbps, 30 Mbps.
Result: the effect on performance is small even when latency is high and bandwidth is narrow.

16 Conclusion:
› We have designed and prototyped a virtual cluster solution spanning multiple clusters at Grid sites, integrating N2N with Rocks seamlessly.
› We verified that the computation performance of a distributed application scales even in a WAN environment.
Future work:
1. Manage multiple virtual clusters deployed by multiple users.
2. Shorten the install time of virtual compute nodes: improve the performance of the N2N overlay network, and set up a package cache repository per site.

17 Requirements and installation:
› Rocks with the Xen roll.
› N2N: RPM package installation; open the N2N ports (for the edge nodes and a supernode).
› Install the MVC Controller: composed of some new Python scripts, providing additional rocks commands (still under development).
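A sketch of the per-node preparation implied by this checklist; the RPM file name and the UDP port are assumptions (N2N uses whatever port the supernode is started with):

  $ # install the N2N RPM package on each frontend/compute node
  $ rpm -ivh n2n-2.0-1.x86_64.rpm              # hypothetical package file name
  $ # open the N2N UDP port, e.g. 1234, in the firewall
  $ iptables -A INPUT -p udp --dport 1234 -j ACCEPT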

18 Fin


20  Installing about 1.0 G of packages in roughly 3000 s corresponds to around 300 Kbps per node.
 The measured bandwidth limit of the N2N overlay network is about 40 Mbps.
Since 40 Mbps / 300 Kbps is roughly 130, install time should not change even when installing over 100 virtual compute nodes.

21  Verified the network overhead of our virtual cluster:
› Varied latency and bandwidth at the WAN emulator: 20 ms, 60 ms, 100 ms, 140 ms / 500 Mbps, 100 Mbps, 30 Mbps.
(Slide chart: shows the limited bandwidth of the overlay network.)

22 MVC architecture (MVC: Multi-site Virtual Cluster). The frontend nodes of site A and site B each run Rocks (VM installer, VLAN constructor, and database) plus the MVC Controller, which consists of:
› Resource Manager: aggregates resource information from the Rocks databases into the MVC DB.
› Overlay Network Constructor: builds an overlay network over the multiple sites.
› VM Manipulator: manipulates the virtual machines that make up the MVC.
The administrator requests a virtual cluster, and the system provides it.

23 Problem: the supernode drops Ethernet frames whose source MAC address is not registered at the supernode.
Supernode registration table:
› Edge A: MAC 11:22:33:44:55:66, community Osaka (frames pass through).
› Edge B: MAC 13:14:15:16:18:26, community Osaka (frames pass through).
(Slide diagram: VMs in domain U behind the edges, e.g., VM1 with eth0 MAC 14:25:36:47:58 and VM2 with eth0 MAC 41:52:63:74:85, are not registered, so their frames are dropped.)

24 Solution: MAC address synchronization. In domain 0, the N2N edge's TAP virtual network interface is attached to a bridge (xenbr.edge1), and the MAC address registered at the supernode (13:14:15:16:18:26, Edge B) is synchronized with the VM's eth0, so the VM can connect to the N2N overlay network.
(Slide diagram: domain 0 with peth0, eth0, the TAP VNIC, and the bridges xenbr.edge1 and xenbr.eth0; the supernode table lists Edge A 11:22:33:44:55:66 and Edge B 13:14:15:16:18:26 in community Osaka.)
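A sketch of what the synchronization amounts to in domain 0, using standard iproute2/bridge-utils commands; the interface and bridge names follow the slide, the MAC is Edge B's registered address, and the exact ordering is an assumption (the slides do not show the commands):

  $ # give the N2N TAP interface the MAC address registered at the supernode
  $ ip link set dev edge1 down
  $ ip link set dev edge1 address 13:14:15:16:18:26
  $ ip link set dev edge1 up
  $ # attach the TAP interface to the Xen bridge carrying the VM's traffic
  $ brctl addif xenbr.edge1 edge1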

25 Tables in the MVC database managed by the Resource Manager:
› Overlay: ID, supernode IP, supernode port, community, key, edge port.
› Networks for MVC: ID, node ID, MAC, virtual device name, overlay ID.
› Nodes for MVC: ID, node name, site ID, membership, CPUs, etc.
› vm_nodes for MVC: ID, node ID, physical node ID, memory, slice, etc.
› Site: ID, FQDN, IP or DNS name, headnode name, user, SSH public key, text.
(Foreign keys link nodes to their site and to the overlay network they belong to.)

