Presentation transcript:


 PRAGMA Grid test-bed: shares clusters managed by multiple sites and realizes a large-scale computational environment. › It is expected to serve as a platform for computationally intensive applications: highly independent processes that can be distributed, e.g., docking simulation.
(Figure: sites A, B, and C, each with a different software stack (Debian/glibc 2.0, Red Hat/glibc 3.0, Red Hat/glibc 2.0), shared as one large-scale Grid environment)

Virtual cluster: a cluster composed of virtual machines (VMs). › Builds a private computational environment that can be customized for each user. › Relatively easy to deploy on a single physical cluster by using cluster-building tools.
(Figure: physical computers at a site on a local network (LAN) host two groups of VMs, one with Debian/glibc 2.0 and one with Red Hat/glibc 3.0, each connected by its own virtual local network)

 Rocks: developed by UCSD; installed on the clusters at sites in the PRAGMA test-bed.  Rocks virtual cluster: 1. A virtual cluster is allocated a VLAN ID and a network. 2. Virtual compute nodes are installed automatically via network boot (PXE boot).
(Figure: the frontend node constructs VLAN 2 over the physical NICs of the frontend and compute nodes; the virtual frontend node (eth0, eth1) PXE-boots the virtual compute nodes (eth0), which requires layer 2 (LAN) communication)
Issue: with Rocks alone, it is difficult to build a virtual cluster over multiple clusters at Grid sites.

 Goal: develop a system that can build a virtual cluster over multiple clusters at Grid sites for computationally intensive applications.  Our approach: › Focus on Rocks. › Seamlessly integrate the N2N overlay network with Rocks.
(Figure: Rocks cluster A at site A and Rocks cluster B at site B, connected by the physical network, are joined through an N2N overlay network into one Rocks virtual cluster)

 N2N: developed by the ntop project in Italy. 1. Creates an encrypted layer 2 overlay network using a P2P protocol. 2. Can establish a layer 2 network spanning multiple sites. › Uses TAP virtual network interfaces (VNICs). 3. Divides overlay networks in a manner similar to VLAN IDs, using a community name.
(Figure: hosts at sites A and B, each with a physical NIC and an N2N VNIC with its own MAC address (11:22:33:44:55:66 and 13:14:15:16:18:26), joined across the LAN/WAN into an N2N overlay network identified by a community name, i.e. a network ID)
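For reference, an N2N overlay of this kind is typically brought up with one supernode and one edge process per host; the host name, community name, key, addresses, and port below are placeholders, not the configuration used in this work:

$ supernode -l 7654                                      # rendezvous supernode listening on UDP 7654
$ edge -d n2n0 -a 10.1.2.1 -c osaka -k secretkey \
       -l supernode.example.org:7654                     # create TAP device n2n0 and join community "osaka"

Hosts that join with the same community name and key then share one virtual layer 2 segment.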

 MVC Controller (MVC: Multi-site Virtual Cluster). The Resource Manager registers multiple Rocks clusters as resources for a virtual cluster: rocks add mvc SiteA:SiteB
(Figure: the frontend and compute nodes at sites A and B each run Rocks with its database on the LAN/WAN; the MVC Controller on the frontend consists of the Resource Manager, the Overlay Network Constructor, the VM Manager, and the MVC database)

 MVC Controller (continued). The Overlay Network Constructor builds a layer 2 overlay network for each virtual cluster, using the cluster name as the cluster ID.
(Figure: N2N VNICs are created on the frontend and compute nodes at sites A and B, next to their physical NICs, and joined over the LAN/WAN into one N2N overlay network)

 MVC Controller (continued). The VM Manager seamlessly connects the virtual frontend node and the virtual compute nodes to the N2N overlay network: rocks start host vm overlay frontend / rocks start host vm overlay compute nodeA site=A
(Figure: the virtual frontend node (eth0, eth1) and a virtual compute node (eth0) at site A are attached to the overlay through N2N VNICs; the virtual compute node is installed by PXE boot over the overlay)

 MVC Controller (continued). A virtual compute node at site B is started and attached in the same way: rocks start host vm overlay compute nodeB site=B
(Figure: the virtual compute node at site B also PXE-boots from the virtual frontend node over the N2N overlay network)

 The multi-site virtual cluster can be used in the same way as a Rocks virtual cluster at a local site: $ qsub -np $NSLOTS app.sh / $ mpirun -np 2 app.mpi
(Figure: the virtual frontend node and the virtual compute nodes at sites A and B form one virtual LAN over the N2N overlay network)
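For context, under the SGE scheduler that Rocks ships with, such a run is normally expressed as a job script; the parallel environment name, slot count, and file names below are illustrative only, not taken from the slides:

# app.sh -- submitted with: qsub app.sh
#$ -cwd
#$ -pe mpi 8
mpirun -np $NSLOTS ./app.mpi          # $NSLOTS is set by SGE from the -pe request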

 Experiment overview: 1. Verify that a virtual cluster can be built over multiple Rocks clusters. 2. Evaluate calculation performance for a computationally intensive application.
Environment: the frontend nodes of Rocks clusters A and B (sites A and B) are connected through a WAN emulator; each frontend is connected to its 4 compute nodes by a 1 Gbps switch. Each node: OS CentOS 5.4 (Rocks 5.4), CPU 2 x Intel Xeon 2.27 GHz (16 cores), 12 GB memory, 1 Gbps network.

Experiment 1: 1. Verified that a virtual cluster spanning clusters A and B can be built through the N2N overlay network. 2. Verified that a virtual cluster can be built in a WAN environment: › the latency at the WAN emulator was varied over 0 ms, 20 ms, 60 ms, 100 ms, and 140 ms; › the install time of 4 virtual compute nodes was measured (about 1.0 GB of packages to install). The virtual compute nodes could be installed over the WAN, so a virtual cluster over multiple Rocks clusters can be built even when the Rocks clusters are in a WAN environment.
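The slides do not identify the WAN emulator used. As an illustration only, the same latency settings could be injected on a Linux gateway with the standard tc/netem queueing discipline (the interface name is assumed):

$ tc qdisc add dev eth1 root netem delay 100ms      # add 100 ms of delay toward the remote site
$ tc qdisc change dev eth1 root netem delay 140ms   # switch to the next latency setting
$ tc qdisc del dev eth1 root netem                  # remove the emulated delay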

 Experiment 2: measured the execution time of a computationally intensive application. › DOCK 6.2 (sample program): 30 compounds docked against one protein, divided among 8 processes; there is little communication between the 8 processes. › The latency and bandwidth at the WAN emulator were varied over 20 ms, 60 ms, 100 ms, 140 ms and 500 Mbps, 100 Mbps, 30 Mbps. The effect on performance is small even when the latency is high and the bandwidth is narrow.

Conclusion: We have designed and are prototyping a virtual cluster solution over multiple clusters at Grid sites, integrating N2N with Rocks seamlessly, and we verified that the calculation performance of a distributed application scales even in a WAN environment.
Future work: 1. Manage multiple virtual clusters deployed by multiple users. 2. Shorten the install time of virtual compute nodes, by improving the performance of the N2N overlay network and by setting up a cache repository per site.

Deployment requirements:  Rocks with the Xen Roll.  N2N › installed as an RPM package; › some ports must be opened for N2N, for the edge nodes and a supernode.  The MVC Controller › composed of new Python scripts › that provide original rocks commands (still under development).
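A minimal sketch of the N2N part of that setup on a frontend node; the package file name and the UDP port are assumptions, since the slides do not give them:

$ rpm -ivh n2n-2.0-1.x86_64.rpm                      # install the N2N RPM (file name illustrative)
$ iptables -A INPUT -p udp --dport 7654 -j ACCEPT    # open the supernode/edge UDP port (assumed 7654)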

Fin


(Backup) Install bandwidth: 1.0 G of packages in about 3000 s is around 300 Kbps per node, while the bandwidth of the N2N overlay network is limited to about 40 Mbps. The install time therefore should not change even when over 100 virtual compute nodes are installed.
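Using only the figures on this slide, the headroom works out as roughly 40 Mbps / 300 Kbps ≈ 130 concurrent node installs before the overlay itself becomes the bottleneck, which is consistent with the claim of over 100 virtual compute nodes.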

 (Backup) The network overhead of the virtual cluster was verified by varying the latency and bandwidth at the WAN emulator: 20 ms, 60 ms, 100 ms, 140 ms / 500 Mbps, 100 Mbps, 30 Mbps.
(Figure: measured throughput, labeled "Limited bandwidth")

(Backup) MVC architecture (MVC: Multi-site Virtual Cluster). The frontend node at each site runs Rocks (VM Installer, VLAN Constructor, database) together with the MVC Controller (Resource Manager, Overlay Network Constructor, VM Manipulator, MVC DB). The administrator requests a virtual cluster; the Resource Manager aggregates resource information from the Rocks DBs, the Overlay Network Constructor builds an overlay network over multiple sites, and the VM Manipulator manipulates the virtual machines for the MVC; the virtual cluster is then provided to the administrator.

(Backup) Edge registration at the supernode:
node    MAC address         Community
Edge A  11:22:33:44:55:66   Osaka
Edge B  13:14:15:16:18:26   Osaka
The supernode drops Ethernet frames whose source MAC address is not registered at the supernode, and forwards frames from registered edges. In the figure, frames from the domain U VMs (VM1, eth0 MAC 14:25:36:47:58; VM2, eth0 MAC 41:52:63:74:85) are dropped, while frames from Edge A and Edge B pass through.

(Backup) MAC address synchronization. In domain 0, the N2N TAP virtual network interface is attached to a bridge (xenbr.edge1), separate from the bridge for the physical interface (xenbr.eth0 on peth0/eth0); the VM (domain U, the VM container) connects its eth0 to the overlay bridge. By synchronizing the VM's eth0 MAC address with the MAC address registered at the supernode (13:14:15:16:18:26, Edge B, community Osaka), the VM can connect to the N2N overlay network.
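As an illustration of the bridging described above (the bridge and TAP device names follow the slide; the exact commands are an assumption, not taken from the presentation), the N2N TAP device can be placed on a Linux bridge in domain 0 so that a guest vif attached to that bridge shares layer 2 connectivity with the overlay:

$ brctl addbr xenbr.edge1                 # bridge dedicated to the overlay
$ brctl addif xenbr.edge1 edge1           # attach the N2N TAP device (edge1) to it
$ ip link set xenbr.edge1 up              # bring the bridge up; Xen then adds the guest's vif to this bridge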

(Backup) Resource Manager database tables:
› Overlay: ID, supernode IP, supernode port, community, key, edge port
› Networks for MVC: ID, node ID, MAC, virtual device name, overlay ID
› Nodes for MVC: ID, node name, site ID, membership, CPUs, etc.
› vm_nodes for MVC: ID, node ID, physical node ID, memory, slice, etc.
› Site: ID, fqdn, IP or DNS name, headnode name, user, SSH public key, text
The tables are linked through the overlay ID, node ID, and site ID columns.
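Rocks keeps its cluster database in MySQL, so one plausible rendering of the first two tables above is the sketch below; the database name mvc, the table and column names, and the types are assumptions inferred from the slide, not the authors' actual schema (the "key" column is renamed edge_key because key is a reserved word):

$ mysql mvc <<'EOF'
CREATE TABLE overlay (
  id INT PRIMARY KEY,
  supernode_ip VARCHAR(64), supernode_port INT,
  community VARCHAR(64), edge_key VARCHAR(64), edge_port INT
);
CREATE TABLE mvc_networks (
  id INT PRIMARY KEY,
  node_id INT, mac VARCHAR(17), virtual_device_name VARCHAR(16),
  overlay_id INT,
  FOREIGN KEY (overlay_id) REFERENCES overlay(id)
);
EOF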