Virtual Private Clusters: Virtual Appliances and Networks in the Cloud
Renato Figueiredo, ACIS Lab - University of Florida, FutureGrid Team
Outline
- Virtual appliances
- Virtual networks
- Virtual clusters
- Grid appliance and FutureGrid
- Educational uses
What is an appliance?
Physical appliances – Webster: “an instrument or device designed for a particular use or function”
What is an appliance?
Hardware/software appliances:
- TV receiver + computer + hard disk + Linux + user interface
- Computer + network interfaces + FreeBSD + user interface
What is a virtual appliance?
- A virtual appliance packages the software and configuration needed for a particular purpose into a virtual machine “image”
- The virtual appliance has no hardware – just software and configuration
- The image is a (big) file; it can be instantiated on hardware
Virtual appliance example
- LAMP image: Linux + Apache + MySQL + PHP
- Copy the image and instantiate it on a virtualization layer: a web server
- Copy and instantiate again: another web server; repeat…
Clustered applications
- Replace LAMP with the middleware of your choice – e.g. MPI, Hadoop, Condor
- Copy the MPI image and instantiate it on a virtualization layer: an MPI worker
- Copy and instantiate again: another MPI worker; repeat…
What about the network?
- Multiple web servers might be completely independent from each other
- MPI nodes are not: they need to communicate and coordinate with each other
- Each worker needs an IP address and uses TCP/IP sockets
- Cluster middleware stacks assume a collection of machines, typically on a LAN (Local Area Network)
Virtualized machines and networks (VMM + VN)
[Figure: virtual machines V1, V2, and V3 on a virtual infrastructure, mapped onto a physical infrastructure of domains A, B, and C connected over a WAN]
Why virtual networks?
- Cloud-bursting: a private enterprise LAN/cluster runs additional worker VMs on a cloud provider; extending the LAN to all VMs gives seamless scheduling and data transfers
- Federated “inter-cloud” environments: multiple private LANs/clusters across various institutions are inter-connected, and virtual machines deployed on different sites form a distributed virtual private cluster
Virtual cluster appliances
- Virtual appliance + virtual network
- Copy the MPI + virtual network image and instantiate it: an MPI worker running in a virtual machine joined to the virtual network
- Copy and instantiate again: another MPI worker; repeat…
Where virtualization applies
[Figure: software on (virtual) machines at the endpoints, connected through network devices over the network fabric – both the endpoints and the fabric can be virtualized]
Example - VLAN
[Figure: two (virtual) machines connected across a virtual LAN through network devices that are under our control]
Switching (sketched below): RECV on portA; match the VLAN tag; SEND on portB
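To make the switching rule concrete, here is a minimal Python sketch of VLAN-tag-based forwarding. The Frame class and the forwarding-table layout are illustrative stand-ins, not part of any particular switch implementation.

    # Minimal sketch: RECV on portA, match the VLAN tag, SEND on portB.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        vlan_tag: int      # 802.1Q VLAN ID carried in the frame header
        dst_mac: str
        payload: bytes

    class VlanSwitch:
        def __init__(self):
            # (vlan_tag, dst_mac) -> output port; learned or configured
            self.forwarding = {}

        def recv(self, in_port, frame):
            out_port = self.forwarding.get((frame.vlan_tag, frame.dst_mac))
            if out_port is None:
                # Unknown destination: flood to all ports in the same VLAN
                return self.flood(frame.vlan_tag, in_port, frame)
            self.send(out_port, frame)

        def send(self, port, frame):
            print(f"SEND port{port}: VLAN {frame.vlan_tag} -> {frame.dst_mac}")

        def flood(self, vlan, in_port, frame):
            print(f"FLOOD VLAN {vlan} (from port{in_port})")

    sw = VlanSwitch()
    sw.forwarding[(10, "aa:bb:cc:dd:ee:ff")] = "B"
    sw.recv("A", Frame(vlan_tag=10, dst_mac="aa:bb:cc:dd:ee:ff", payload=b"hi"))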
Inter-cloud virtual networks
Challenges of a shared environment:
- Lack of control of networking resources in the Internet infrastructure – can’t program routers and switches
- Public networks – privacy is important
- Often, lack of privileged access to underlying resources: one may be “root” within a VM, but lack hypervisor privileges
Approach: Virtual Private Networks – end-to-end, tunneling over shared infrastructure
Example - VPNs
[Figure: two (virtual) machines tunneling through a Virtual Private Network over the Internet, where we have no control of the infrastructure]
Tunneling, on SEND (sketched below): encrypt the message; encapsulate it; look up the endpoint; SEND
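The SEND path can be illustrated with a short Python sketch: encrypt, encapsulate, look up the tunnel endpoint, then send over the shared network. The XOR “cipher”, the endpoint table, and the 203.0.113.7 address (a TEST-NET documentation address) are placeholders, not a real VPN implementation.

    import socket
    import struct

    ENDPOINTS = {"10.10.1.2": ("203.0.113.7", 5000)}  # virtual IP -> physical endpoint (hypothetical)
    KEY = 0x5A

    def encrypt(msg: bytes) -> bytes:
        return bytes(b ^ KEY for b in msg)  # stand-in for real encryption (e.g. AES)

    def encapsulate(virtual_dst: str, payload: bytes) -> bytes:
        # Prepend a tiny header: payload length + virtual destination IP
        hdr = struct.pack("!H4s", len(payload), socket.inet_aton(virtual_dst))
        return hdr + payload

    def vpn_send(virtual_dst: str, msg: bytes, sock: socket.socket):
        packet = encapsulate(virtual_dst, encrypt(msg))
        endpoint = ENDPOINTS[virtual_dst]        # look up the endpoint
        sock.sendto(packet, endpoint)            # SEND over the public network

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    vpn_send("10.10.1.2", b"hello over the VPN", sock)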
Virtualization: core primitives
Intercept events of interest:
- VM: trap on “privileged” instructions
- VN: intercept messages sent or received
Emulate the behavior of the event in the context of the virtualized resource:
- VM: emulate the behavior of the intercepted instruction in the context of the virtual machine issuing it
- VN: emulate the behavior of SEND/RECV in the context of the virtual network it is bound to
Layers
Layer-2 virtualization:
- The VN supports all protocols layered on the data link – not only IP but also other protocols
- Simpler integration (e.g. ARP crosses layers 2 and 3)
- Downside: broadcast traffic if the VN spans beyond a LAN
Layer-3 virtualization:
- The VN supports all protocols layered on IP: TCP, UDP, DHCP, …
- Sufficient to handle many environments/applications
- Downside: tied to IP – innovative non-IP network protocols will not work (see the sketch below)
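A minimal sketch of the difference, assuming standard EtherType values: a layer-2 VN tunnels whole Ethernet frames regardless of protocol (ARP included), while a layer-3 VN handles only IP packets. The tunnel/drop functions are placeholders.

    ETH_ARP, ETH_IP = 0x0806, 0x0800   # standard EtherType values

    def tunnel(frame): print("tunneled", len(frame), "bytes")
    def drop(frame):   print("dropped non-IP frame")

    def l2_forward(ethertype, frame):
        tunnel(frame)                  # any data-link payload is carried, ARP included

    def l3_forward(ethertype, frame):
        if ethertype == ETH_IP:
            tunnel(frame)              # TCP, UDP, DHCP, ... all ride on IP
        else:
            drop(frame)                # non-IP protocols will not work

    l2_forward(ETH_ARP, b"\x00" * 42)  # carried on an L2 VN
    l3_forward(ETH_ARP, b"\x00" * 42)  # dropped on an L3 VN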
Technologies and techniques
- Amazon VPC: virtual private network extending from the enterprise to resources at a major IaaS commercial cloud
- OpenFlow: open switching specification allowing programmable network devices through a forwarding instruction set
- OpenStack Quantum: virtual private networking within a private cloud, offered by a major open-source IaaS stack
- ViNe: inter-cloud, high-performance, user-level managed virtual network
- IP-over-P2P (IPOP) and GroupVPN: peer-to-peer, inter-cloud, self-organizing virtual network
ViNe
- Led by Mauricio Tsugawa and Jose Fortes at UF
- Focus: a virtual network architecture that allows VNs to be deployed across multiple administrative domains and offer full connectivity among hosts, independently of connectivity limitations
- Internet-like organization: ViNe routers (VRs) are used by nodes as gateways to overlays, just as Internet routers are used as gateways to route Internet messages
- VRs are dynamically reconfigurable: manipulating the operating parameters of VRs enables the management of VNs
(Slide provided by M. Tsugawa)
ViNe architecture
- Dedicated resources in each broadcast domain (LAN) for VN processing – ViNe routers (VRs)
- No VN software needed on nodes (platform independence)
- VNs can be managed by controlling/reconfiguring VRs
- VRs transparently address connectivity problems for nodes
- A VR is simply a computer running ViNe software: easy deployment; proven mechanisms can be incorporated into physical routers and firewalls; in OpenFlow-enabled networks, flows can be directed to VRs for L3 processing
- The overlay routing infrastructure (VRs) is decoupled from the management infrastructure
(Slide provided by M. Tsugawa)
Connectivity: the ViNe approach
- VRs with connectivity limitations (limited-VRs) initiate a connection (TCP or UDP) with VRs without limitations (queue-VRs)
- Messages destined to limited-VRs are sent to the corresponding queue-VRs
- A long-lived connection is possible between a limited-VR and its queue-VR
- Generally applicable (no dependency on network equipment, firewall/NAT type, etc.)
ViNe firewall traversal mechanism (sketched below): the limited-VR opens a connection to its queue-VR; other VRs send messages to the queue-VR, and the limited-VR retrieves them over the open connection. Network virtualization processing is performed only by VRs; firewall traversal is needed only for inter-VR communication.
(Slide provided by M. Tsugawa)
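A minimal sketch of the queue-VR idea, with an in-memory queue standing in for the long-lived connection and the real VR wire protocol; class and method names are illustrative.

    import queue

    class QueueVR:
        def __init__(self):
            # limited-VR id -> outbound queue, created when the limited VR connects in
            self.sessions = {}

        def register(self, limited_vr_id):
            # Called when a limited VR opens its long-lived TCP/UDP connection
            self.sessions[limited_vr_id] = queue.Queue()

        def deliver(self, limited_vr_id, message):
            # Other VRs send messages destined to the limited VR here
            self.sessions[limited_vr_id].put(message)

        def drain(self, limited_vr_id):
            # The limited VR retrieves queued messages over its open connection
            q = self.sessions[limited_vr_id]
            while not q.empty():
                yield q.get()

    qvr = QueueVR()
    qvr.register("VR-behind-NAT")
    qvr.deliver("VR-behind-NAT", b"packet for 10.129.6.71")
    for msg in qvr.drain("VR-behind-NAT"):
        print(msg)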
ViNe routing performance
- L3 processing implemented in Java
- Mechanisms to avoid IP fragmentation
- Use of data structures with low access times in the routing module
- VR routing capacity over 880 Mbps (using modern CPU cores) – Gigabit line rate (120 Mbps total encapsulation overhead)
- Sufficient in many cases, where WAN performance is less than a Gbps
- Requires CPUs launched after 2006 (e.g., 2 GHz Intel Core2 microarchitecture)
(Slide provided by M. Tsugawa)
ViNe management architecture
ViNe Central Server:
- Oversees global VN management
- Maintains ViNe-related information
- Authentication/authorization based on a Public Key Infrastructure
- Remotely issues commands (configuration actions) to reconfigure VR operation
VR operating parameters are configurable at run-time – overlay routing tables, buffer sizes, encryption on/off (sketched below); autonomic approaches are possible.
(Slide provided by M. Tsugawa)
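A sketch of run-time VR reconfiguration using the parameters named on this slide; the command transport and the PKI-based authentication are omitted, and the class and method names are illustrative, not ViNe's actual API.

    class ViNeRouter:
        def __init__(self):
            self.params = {
                "routing_table": {},   # overlay routing tables
                "buffer_size": 65536,  # bytes
                "encryption": False,   # on/off
            }

        def reconfigure(self, parameter, value):
            # Applied when the central server issues a configuration action
            if parameter not in self.params:
                raise KeyError(f"unknown VR parameter: {parameter}")
            self.params[parameter] = value
            print(f"VR reconfigured: {parameter} = {value}")

    vr = ViNeRouter()
    vr.reconfigure("encryption", True)
    vr.reconfigure("routing_table", {"10.128.0.0/10": "VR-2"})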
Example: inter-cloud BLAST
- 3 FutureGrid sites (US): UCSD (San Diego), UF (Florida), UC (Chicago)
- 3 Grid’5000 sites (France): Lille, Rennes, Sophia
- Grid’5000 is fully isolated from the Internet: one machine was white-listed to access FutureGrid, acting as the ViNe queue VR (virtual router) for the other sites
(Slide provided by M. Tsugawa)
CloudBLAST experiment
- ViNe connected a virtual cluster across 3 FutureGrid and 3 Grid’5000 sites: 750 VMs, 1500 cores
- Executed BLAST on Hadoop (CloudBLAST) with 870X speedup
(Slide provided by M. Tsugawa)
IP-over-P2P / GroupVPN
[Figure: applications on two hosts, each behind a virtual NIC (VNIC) and virtual router, connected by a wide-area overlay network carrying an isolated, private virtual address space (10.10.1.1, 10.10.1.2)]
- Unmodified applications – e.g. Connect(10.10.1.2, 80)
- Capture/tunnel; scalable, resilient, self-configuring routing and object store
Virtual network: GroupVPN
Key techniques: IP-over-P2P (IPOP) tunneling; GroupVPN Web 2.0/social-network interface
- Self-configuring: avoids the administrative overhead of typical VPNs; NAT and firewall traversal; DHCP virtual addresses
- Scalable and robust: P2P routing deals with node joins and leaves
- Networks are isolated: one or more private IP address spaces; decentralized DHCP serves addresses for each space
Under the hood: overlay architecture
- Bi-directional structured overlay (Brunet library)
- Self-configured: NAT traversal
- Self-optimized links: direct and relayed paths
- Self-healing structure: multi-hop paths through overlay routers
GroupVPN Web interface
- Users can request to join a group, or create their own VPN group – e.g. an instructor creates a GroupVPN for a class, determining who is allowed to connect to the virtual network
- The owner can authorize users to join, remove users, and authorize others to administer
- Actions typical of a certificate authority happen in the back-end without the user having to deal with security operations – e.g. signing/revoking a VPN X.509 certificate
Managing virtual IP spaces
- One P2P overlay supports multiple IPOP namespaces; IP routing happens within a namespace
- Each IPOP namespace is a unique string; a Distributed Hash Table (DHT) stores the mapping key=namespace, value=DHCP configuration (IP range, lease, …)
- An IPOP node configured with a namespace queries the namespace for its DHCP configuration, guesses an IP address at random within the range, and attempts to store key=namespace+IP, value=IPOPid (160-bit) in the DHT
- IP-to-P2P address resolution: given namespace+IP, look up the IPOPid (see the sketch below)
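A minimal sketch of this allocation and resolution protocol, with a plain dictionary standing in for the distributed hash table and an atomic create modeling the “attempt to store” step; key formats and names are illustrative.

    import ipaddress
    import random

    dht = {}  # stand-in for the P2P DHT

    def dht_create(key, value):
        # Atomic create: fails if the key already exists (IP already claimed)
        if key in dht:
            return False
        dht[key] = value
        return True

    # The namespace owner publishes the DHCP configuration
    dht["N1"] = {"subnet": "10.128.0.0/10", "lease_secs": 3600}

    def acquire_address(namespace, ipop_id):
        net = ipaddress.ip_network(dht[namespace]["subnet"])
        while True:
            ip = net[random.randrange(1, net.num_addresses - 1)]  # guess at random
            if dht_create(f"{namespace}:{ip}", ipop_id):           # try to claim it
                return str(ip)

    def resolve(namespace, ip):
        # IP -> P2P address resolution: namespace+IP -> IPOPid
        return dht.get(f"{namespace}:{ip}")

    addr = acquire_address("N1", "ipop-id-0xA1")
    print(addr, "->", resolve("N1", addr))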
IPOP namespaces
[Figure: two IPOP namespaces, N1 and N2, sharing one overlay of nodes x1…x8. N1 spans 10.128.0.0/255.192.0.0 with members A1 (10.129.6.83) and B1 (10.129.6.71); the DHT maps N1:10.129.6.71 to BrunetID x1. To route an IPOP packet, a node issues DHTLookup(N1:B1), obtains x1, and caches it in a BrunetID “ARP cache”. A second namespace is created with DHTCreate(N2, 10.128.0.0/255.192.0.0), and node A2 claims its address with DHTCreate(N2:A2, x2) – the same IP, 10.129.6.71, maps to a different BrunetID (x2) in N2.]
Optimization: adaptive shortcuts
At each node (sketched below):
- Count IPOP packets to other nodes
- When the number of packets within an interval exceeds a threshold, initiate connection setup and create a direct edge
- Limit the number of shortcuts – there is overhead involved in connection maintenance
- Drop connections that are no longer in use
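A sketch of this policy, with illustrative threshold and cap values; the connection-setup call is a placeholder for the overlay's edge-creation mechanism.

    THRESHOLD = 20       # packets per interval before creating a shortcut
    MAX_SHORTCUTS = 8    # connection maintenance has overhead, so cap them

    class ShortcutManager:
        def __init__(self):
            self.counts = {}       # destination node -> packets this interval
            self.shortcuts = set()

        def on_packet(self, dst):
            self.counts[dst] = self.counts.get(dst, 0) + 1
            if (self.counts[dst] > THRESHOLD
                    and dst not in self.shortcuts
                    and len(self.shortcuts) < MAX_SHORTCUTS):
                self.create_edge(dst)

        def create_edge(self, dst):
            print(f"initiating direct connection to {dst}")
            self.shortcuts.add(dst)

        def end_interval(self):
            # Drop shortcuts that saw no traffic, then reset the counters
            for dst in list(self.shortcuts):
                if self.counts.get(dst, 0) == 0:
                    print(f"dropping idle shortcut to {dst}")
                    self.shortcuts.discard(dst)
            self.counts.clear()

    mgr = ShortcutManager()
    for _ in range(25):
        mgr.on_packet("node-x3")   # a heavy flow triggers a shortcut
    mgr.end_interval()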
Evaluation - cloud

                                  EC2/UF   EC2/GoGrid   UF/GoGrid
    Netperf stream phys (Mbps)     89.2       35.9        30.2
    Netperf stream VN (Mbps)       75.3       19.2        25.7
    Netperf RR phys (trans/s)      13.4       11.1        10.0
    Netperf RR VN (trans/s)        13.3       10.7         9.8
Grid appliance - virtual clusters
- Same image, per-group VPNs
- Copy the Hadoop + virtual network image and instantiate it: a Hadoop worker; another Hadoop worker; repeat…
- Each virtual machine joins its GroupVPN using credentials obtained from the Web site and receives a virtual IP over DHCP (e.g. 10.10.1.1, 10.10.1.2)
Grid appliance clusters
Virtual appliances:
- Encapsulate a software environment in an image: virtual disk file(s) and virtual hardware configuration
The Grid appliance:
- Encapsulates cluster software environments; current examples: Condor, MPI, Hadoop
- Homogeneous images at each node; the virtual network connecting the nodes forms a cluster
- Deploy within or across domains
Grid appliance internals
- Host O/S: Linux
- Grid/cloud stack: MPI, Hadoop, Condor, …
- Glue logic for zero-configuration: automatic DHCP address assignment; multicast DNS (Bonjour, Avahi) resource discovery; shared data store (Distributed Hash Table); interaction with the VM/cloud
One appliance, multiple hosts
- Allow the same logical cluster environment to instantiate on a variety of platforms: local desktops and clusters; FutureGrid; Amazon EC2; Science Clouds…
- Avoid dependence on the host environment; make minimum assumptions about the VM and provisioning software
- Desktop: one image for VMware, VirtualBox, KVM
- Para-virtualized VMs (e.g. Xen) and cloud stacks – need to deal with idiosyncrasies
- Minimum assumptions about networking: a private, NATed Ethernet virtual network interface
Configuration framework
At the end of GroupVPN initialization, each node of a private virtual cluster gets a DHCP address on its virtual tap interface – a barebones cluster.
Additional configuration is required depending on the middleware: which node is the Condor negotiator? The Hadoop front-end? Which nodes are in the MPI ring?
Key frameworks used (see the sketch below):
- IP multicast discovery over the GroupVPN: the front-end queries for all IPs listening on the GroupVPN
- Distributed hash table: advertise (put key, value), discover (get key)
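A minimal sketch of the DHT advertise/discover step, with a dictionary standing in for the GroupVPN's distributed hash table and hypothetical key names.

    dht = {}  # stand-in for the GroupVPN's distributed hash table

    def advertise(key, value):
        dht.setdefault(key, []).append(value)   # put (key, value)

    def discover(key):
        return dht.get(key, [])                 # get key

    # The front-end advertises its role and virtual IP
    advertise("condor:negotiator", "10.10.1.1")

    # A worker discovers the negotiator and writes its Condor configuration
    negotiators = discover("condor:negotiator")
    if negotiators:
        print(f"CONDOR_HOST = {negotiators[0]}")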
Configuring and deploying groups
- Generate virtual floppies through the GroupVPN Web interface
- Deploy appliance image(s): FutureGrid (Nimbus/Eucalyptus), EC2; GUI or command-line tools; use APIs to copy the virtual floppy to the image
- Submit jobs; terminate VMs when done
Demonstration
Pre-instantiated VM to save us time:

    cloud-client.sh --conf alamo.conf --run --name grid-appliance-2.05.03.gz --hours 24

Connect to the VM:

    ssh root@VMip

Check the virtual network interface:

    ifconfig

Ping other VMs in the virtual cluster; submit a Condor job.
Use case: education and training
- Experimental work is important in systems research; this need also has to be addressed in education, as a complement to fundamental theory
- FutureGrid: a testbed for experimentation and collaboration
- Education and training contributions: a lower barrier to entry – pre-configured environments, zero-configuration technologies; a community/repository of hands-on executable environments: develop once, share and reuse
Educational appliances in FutureGrid
- A flexible, extensible platform for hands-on, lab-oriented education on FutureGrid
- Executable modules – virtual appliances: deployable on FutureGrid resources, on other cloud platforms, and on virtualized desktops
- Community sharing – Web 2.0 portal, appliance image repositories: an aggregation hub for executable modules and documentation
Support for classes on FutureGrid
- Classes are set up and managed using the FutureGrid portal
- Project proposal: can be a class, workshop, short course, or tutorial; needs to be approved by the FutureGrid project to become active
- Users can be added to a project: users create accounts using the portal, and project leaders can authorize them to gain access to resources
- Students can then interactively use FG resources (e.g. to start VMs)
Use of FutureGrid in classes
- Cloud computing/distributed systems classes: U. of Florida, U. of Central Florida, U. of Puerto Rico, Univ. of Piemonte Orientale (Italy), Univ. of Mostar (Croatia)
- Distributed scientific computing: Louisiana State University
- Tutorials and workshops: Big Data for Science summer school; A Cloudy View on Computing; SC’11 tutorial – Clouds for Science; Science Cloud Summer School
Thank you!
More information: http://www.futuregrid.org and http://grid-appliance.org
This document was developed with support from the National Science Foundation (NSF) under Grant No. 0910812 to Indiana University for “FutureGrid: An Experimental, High-Performance Grid Test-bed.” Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
Local appliance deployments
Two possibilities:
- Share our “bootstrap” infrastructure, but run a separate GroupVPN: simplest to set up
- Deploy your own “bootstrap” infrastructure: more work to set up, especially if it spans multiple LANs, but with potential for faster connectivity
PlanetLab bootstrap
- Shared virtual network bootstrap
- Runs 24/7 on 100s of machines on the public Internet
- Connects machines across multiple domains, behind NATs
PlanetLab bootstrap: approach
- Create a GroupVPN and GroupAppliance on the Grid appliance Web site
- Download the configuration floppy
- Point users to the interface; allow users you trust into the group
- Trusted users can download configuration floppies and boot up appliances
Private bootstrap: general approach
- A good choice for single-domain pools
- Create a GroupVPN and GroupAppliance on the Grid appliance Web site
- Deploy a small IPOP/GroupVPN bootstrap P2P pool: it can run on a physical machine or an appliance; detailed instructions at grid-appliance.org
- The remaining steps are the same as for the shared bootstrap
Connecting external resources
- GroupVPN can run directly on a physical machine, if desired, providing a VPN network interface
- Useful, for example, if you already have a local Condor pool that can “flock” to Archer
- Also allows you to install the Archer stack directly on a physical machine if you wish
FutureGrid example - Eucalyptus
Example using Eucalyptus (or ec2-run-instances on Amazon EC2):

    euca-run-instances ami-fd4aa494 -f floppy.zip --instance-type m1.large -k keypair

Here ami-fd4aa494 is the image ID on the Eucalyptus server, floppy.zip is the GroupVPN floppy image, and keypair is the SSH public key used to log in to the instance.
Where to go from here?
- Tutorials on the FutureGrid and Grid appliance Web sites for various middleware stacks: Condor, MPI, Hadoop
- A community resource for educational virtual appliances: success hinges on users effectively getting involved
- If you are happy with the system, let others know! Contribute your own content – virtual appliance images, tutorials, etc.