
1 D u k e S y s t e m s Virtualizing, Sharing, Interconnecting Part 2: servers and pipes Jeff Chase Dept. of Computer Science Duke University NSF CIO Meeting, Boston, July 12, 2012

2 A CIO’s view? [Diagram, not to scale: GENI is a small piece of research computing, which is a small piece of everything else.]

3 A broader view of GENI GENI is: – a constituency demanding attention; – a process that can help to meet the needs of other constituencies; – a bundle of technologies coming to campus. Some are already there. GENI engages key technologies that can help give campus users what they want. – virtualizing, sharing, and interconnecting – infrastructure as a distributed service

4 GENI Portal Home Page [Slide footer: Sponsored by the National Science Foundation, July 10, 2012]

5 Constructing “slices” I like to use TinkerToys as a metaphor for creating a slice in GENI. The parts are virtual infrastructure resources: compute, networking, storage, etc. Parts come in many types, shapes, sizes. Parts interconnect in various ways. We combine them to create useful built-to-order assemblies. Some parts are programmable. Where do the parts come from?

6 ExoGENI Racks Packaged infrastructure pod – servers, network, storage – off-the-shelf cloud software (‘exo’) – GENI-enabled ‘special sauce’ Cookie-cutter deployment – funded @14 campuses – linked for sharing, “plug and play” – multiple use Open Resource Control Architecture

7 ExoGENI software structure

8 Executive summary [Diagram: a layered stack: physical ("metal and glass"), virtualization layer, cloud service, orchestration service, GENI and other APIs. The orchestration service is "stuff we build" (automation); the cloud and virtualization layers are "standard stuff you need anyway".]

9 Competing rack “brands” InstaGENI – Emulab/PlanetLab – lightweight VMs (vservers) – bare-metal provisioning – HP is an engaged sponsor ExoGENI – ORCA – off-the-shelf cloud software – hypervisor VMs (KVM) or bare-metal – IBM is an engaged vendor

10 Outline clouds: virtual servers and storage linking cloud clusters interconnecting with the campus keeping it safe and secure

11 EC2 The canonical public cloud [Diagram: a virtual appliance image deployed into EC2.]

12 Infrastructure as a Service (IaaS) “Consumers of IaaS have access to virtual computers, network-accessible storage, network infrastructure components, and other fundamental computing resources…and are billed according to the amount or duration of the resources consumed.”
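To make the pay-per-use part of that definition concrete, here is a minimal sketch in Python of metering IaaS resources by amount and duration. The resource names and rates are purely illustrative, not any real provider's pricing.

```python
# Hypothetical sketch: bill IaaS resources by amount and duration,
# per the definition above. Rates and resource names are illustrative.

RATES = {                      # dollars per unit-hour (made up)
    "vm.small": 0.10,
    "block_storage_gb": 0.0002,
}

def bill(usage):
    """usage: list of (resource, quantity, hours) tuples."""
    return sum(RATES[r] * qty * hours for r, qty, hours in usage)

# Two small VMs and 100 GB of block storage, each for one day:
total = bill([("vm.small", 2, 24), ("block_storage_gb", 100, 24)])
print(round(total, 2))
```

The point of the model is that the consumer pays only for what was allocated and for how long, rather than owning the hardware.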

13 Cloud > server-based computing [Diagram: client connected to server(s).] Client/server model (1980s - ). Now called Software-as-a-Service (SaaS).

14 Host/guest model [Diagram: client, service (guest), and cloud provider(s) (host).] Service is hosted by a third party. – flexible programming model – cloud APIs for service to allocate/link resources – on-demand: pay as you grow

15 IaaS: infrastructure services [Diagram: client and service hosted on OS, VMM, and physical platform.] Deployment of private clouds is growing rapidly with open IaaS cloud software. Hosting performance and isolation are determined by the virtualization layer. Virtual machines: VMware, KVM, etc. A cloud solution for your campus?

16 PaaS: platform services [Diagram: client and service hosted on OS, optional VMM, and physical platform.] PaaS cloud services define high-level programming models, e.g., for clusters or specific application classes. Hadoop, grids, batch job services, etc. can also be viewed as PaaS. Note: they can be deployed over IaaS.
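To illustrate the kind of high-level programming model a PaaS offers, here is a toy sketch of the MapReduce style that platforms like Hadoop provide. This is plain Python, not Hadoop's actual API: the user supplies only `map_fn` and `reduce_fn`, and the platform (here, the stand-in `run_mapreduce`) handles everything else.

```python
from collections import defaultdict

# Toy MapReduce-style word count. In a real PaaS the framework also
# handles distribution, scheduling, and fault tolerance; this sketch
# shows only the programming model.

def map_fn(line):
    for word in line.split():
        yield (word.lower(), 1)

def reduce_fn(word, counts):
    return (word, sum(counts))

def run_mapreduce(lines, map_fn, reduce_fn):
    groups = defaultdict(list)
    for line in lines:                          # "map" phase
        for key, value in map_fn(line):
            groups[key].append(value)
    # "reduce" phase: one call per key
    return dict(reduce_fn(k, v) for k, v in groups.items())

counts = run_mapreduce(["to be or not to be"], map_fn, reduce_fn)
print(counts)
```

The division of labor is the point: the platform owns the execution model, the user owns only the two functions.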

17 OpenStack, the Cloud Operating System Management Layer That Adds Automation & Control [Anthony Young @ Rackspace]

18 Managing images “Let a thousand flowers bloom.” Curated image collections are needed! University IT can help. “Virtual appliance marketplace”

19 Connectivity: the missing link [Diagram: cloud providers offer virtual compute and storage infrastructure through cloud APIs (Amazon EC2, ...); transport network providers offer virtual network infrastructure through dynamic circuit APIs (NLR Sherpa, DOE OSCARS, I2 ION, OGF NSI, ...).]

20 Linking clouds with L2 circuits [Diagram: cloud sites A and B joined by a logical pipe (path) across circuit providers c1 and c2, with a cross-domain link between them.] Campus clouds can serve as on-ramps to national fabrics.

21 Circuit stitching [Diagram: stitching nodes produce/consume VLAN tags.] Extends OpenStack and Eucalyptus to configure virtual NICs and attach VLANs. Connects adjacent circuits at network exchanges (e.g., StarLight). The last-hop provider to the cloud site is your campus or RON.
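A hedged sketch of the stitching idea above: each circuit segment produces a VLAN tag for its downstream neighbor and consumes a tag from its upstream neighbor, and adjacent segments can be joined at an exchange when the tags match. The class and provider names are illustrative, not ORCA's actual data model.

```python
# Illustrative model of circuit stitching: a path is a chain of
# circuit segments; adjacent segments connect at an exchange if the
# tag the upstream segment produces is one the downstream segment
# can consume. Providers and tag values are made up.

class Segment:
    def __init__(self, provider, produces_tag, consumes_tags):
        self.provider = provider
        self.produces_tag = produces_tag      # VLAN tag offered downstream
        self.consumes_tags = consumes_tags    # VLAN tags accepted upstream

def stitch(path):
    """Return True if every adjacent pair of segments can be joined."""
    return all(a.produces_tag in b.consumes_tags
               for a, b in zip(path, path[1:]))

campus  = Segment("campus/RON", produces_tag=1001, consumes_tags=set())
transit = Segment("circuit provider", produces_tag=2001, consumes_tags={1001})
remote  = Segment("cloud site", produces_tag=None, consumes_tags={2001})
print(stitch([campus, transit, remote]))
```

In practice an exchange point may also translate tags rather than require equality, but the matching problem is the same shape.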

22 SC11 Demo: Solar Fuels Workflow

23 ExoGENI: recap ExoGENI is a network of standard OpenStack cloud sites deployed (or being deployed) at campuses. – Initial sites are centrally managed from RENCI; other providers may join and advertise portions of their resources. Layered orchestration software (ORCA) manages multi-cloud slices and integrates with GENI. – Proxies GENI APIs, checks identity/authorization. Circuit backplane for L2 network connectivity. – By agreement with circuit providers.... Configurable/flexible L3 connectivity. – “Easy button” to configure an IP network within a slice. – Host campuses may offer L3 connectivity to slices.

24 Duke Network – Future Operation [Diagram: MCNC (commodity + I-2/NLR) connects to the campus “backbone” interchange layer, which links the Duke Shared Cluster Resource, the Physics Department, the Institute for Genome Sciences & Policy, Duke CS ExoGENI research, RENCI’s Breakable Experimental Network (BEN), and I-2/ION. Future external data flow: SDN-mediated “expressway” links enable Layer-2 transport and ExoGENI resource access.] [Tracy Futhey]

25 Science DMZ: before and after [Before/after diagrams. Before: campus IP with an IP link to the public Internet, a WAN fabric, and a Science Net behind a Science DMZ, reached via circuit service (L2) and packet service (L3). After: the same elements joined by an SDSN (an MPLS IP-VPN ring with VRFs) forming a virtual scinet.]

26 Cloud-Based Credential Store Delegate credentials: who has access? [Sequence diagram: register user with the IdP, which issues user credentials; create a project with the PA, which issues project x credentials; create a slice in x with the SA, which issues slice s credentials; create a sliver in s.]

27 GENI uses Shibboleth IdPs [Diagram: the IdP registers a user and issues user credentials. Users have roles, e.g., student, faculty. Facts asserted about user T: IdP.geniUser ← T; IdP.student ← T; IdP.enrolled(CS-114) ← T. Facts about user D: IdP.geniUser ← D; IdP.faculty ← D.] An IdP asserts facts about users. User attributes may include InCommon attributes harvested through indirect delegation to Shibboleth IdPs. These attributes may have parameters with simple values (strings or numbers).

28 Please work with InCommon and release IdP attributes to GENI!

29 Trust management: generalizing PKI An entity A delegates trust to another by endorsing its public key for possession of an attribute or role. The delegation is a fact written as a logic statement and issued in a credential signed by A. Other entities reason from these facts according to their own policy rules, which are declared in logic. Policy rules may also be signed and transmitted. [Diagram: “A trusts B” is written as the statement A.trusts ← B and carried in a certificate containing a term of validity, the issuer’s name (or key), a signature, and the statement as payload.]
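The delegation reasoning described above can be sketched as a tiny inference engine over facts of the form issuer.attribute ← subject. This is a pure-Python simplification in the spirit of role-based trust logics, not the actual GENI or ORCA implementation; entity and attribute names are illustrative, and signature checking is omitted.

```python
# Toy trust-logic evaluator. A fact (issuer, attr, subject) reads
# "issuer.attr <- subject". A rule lets one entity's attribute be
# derived from another's, e.g. GENI accepting an IdP's geniUser
# attribute as proof of being a GENI user. All names are made up.

facts = {
    ("IdP", "geniUser", "T"),
    ("IdP", "student", "T"),
    ("IdP", "faculty", "D"),
}

# rule: (head_issuer, head_attr) <- (body_issuer, body_attr)
rules = [
    (("GENI", "user"), ("IdP", "geniUser")),   # GENI.user <- IdP.geniUser
]

def holds(issuer, attr, subject):
    """True if issuer.attr <- subject is a fact or derivable by a rule."""
    if (issuer, attr, subject) in facts:
        return True
    return any(holds(b_iss, b_attr, subject)
               for (h_iss, h_attr), (b_iss, b_attr) in rules
               if (h_iss, h_attr) == (issuer, attr))

print(holds("GENI", "user", "T"), holds("GENI", "user", "X"))
```

In the real system each fact would arrive as a signed credential, so the evaluator would first verify the issuer's signature before admitting the fact.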

30 Summary GENI is about incorporating new foundational infrastructure into campuses. – general-purpose → multi-use – best-of-breed → off-the-shelf GENI is automation for this infrastructure. – ease of use, power, flexibility, safety – Many others are working on these problems. – GENI is the focal point in the academic space. Rolling deployment, ramping up.... Best practices @ central IT will help!

